Help With Headers

Can I ask why in C++ you put the function prototypes / declarations in a .h / .hpp header file, with the functions in a .cpp implementation file?

As a novice I just put the prototypes, functions and classes into the .h header file, which seemed to work, and the main.cpp had no trouble calling these functions.

Why is this wrong?

Why do we need to create a .cpp with the same name as the .h / .hpp, rather than putting it all in the .h / .hpp?


When you include the same header in more than one source file, each of those files ends up with its own copy of the function definitions. The linker would complain about multiple definitions.

You can solve this by making all functions inline.

The main reason, particularly for bigger projects, is that you don't want to expose all the internal details; you want the interface in the header file to be as simple/clean as possible.
https://stackoverflow.com/questions/25274312/is-it-a-good-practice-to-define-c-functions-inside-header-files
Basically, if you could guarantee that the header file is only going to be used once per program, then it might not be a problem, but that would be very limiting in the long run.
@coder777
When you include the same header in more than one source file, each of those files ends up with its own copy of the function definitions. The linker would complain about multiple definitions.

You can solve this by making all functions inline.

The main reason, particularly for bigger projects, is that you don't want to expose all the internal details; you want the interface in the header file to be as simple/clean as possible.


@newbieg
https://stackoverflow.com/questions/25274312/is-it-a-good-practice-to-define-c-functions-inside-header-files
Basically, if you could guarantee that the header file is only going to be used once per program, then it might not be a problem, but that would be very limiting in the long run.


Thank you :)
As a novice I just put the prototypes, functions and classes into the .h header file, which seemed to work, and the main.cpp had no trouble calling these functions.


This is called a monolith -- and it's fine if the code is small enough. Let's start there.
Why is this wrong?

It is not wrong. It clearly works. It's bad design for large projects, but I have a couple dozen 'utility' programs that work exactly like this, and they are fine. No reason to break them up and make a mess of it.

As an aside, the names of the files are just convention. It's perfectly OK to have bar.cpp keep its prototypes in foo.h. The compiler and linker don't give a rat's rear. You use the same name because it's a convention programmers agree to so they don't have to kill each other.

But let's grow that small program until it's too complicated to be easy to work on in one file, and talk about THAT now. The first step is to understand how your tools (compiler, linker) work so you see what they need to have. I will lead you through a simple piece of that.

Say you put all your functions at the top, and main at the bottom, but no prototypes. What happens?
It works fine!
Now say one of your functions calls another of your functions. OK, no problem... just move the one that is called above the one that calls it, and it all still works.
See what is happening here? The compiler and linker work top-down (the linker does this to an extent, though there it's the order in which the files produced by the compiler appear)... the tool has to see a thing, or at least a placeholder for it, before it can unravel the calls.
So far so good, but when a calls b and b calls c and c calls a and... at some point you can't put them all in front of each other! Then you need the prototypes. You will see the idea of a prototype again, for classes and other objects, in the form of a "forward declaration" (look it up sometime). It exists for the same reason: the tool needs to see one thing, or at least a placeholder for it, before the other, but the two depend on each other in a way that makes a strict ordering impossible.

All that just grows as the code base grows. From here, your program gets bigger so you split it into a few files, but the dependency of things on each other forces you to expose the prototypes. You can just paste the prototype you need anywhere you need it, e.g. if both foo.cpp and bar.cpp need function funct() in funct.cpp, then you can stuff the prototype at the top of both cpp files and that works. Ahh-ha, but this is what #include does! #include literally pastes the code in the file at that spot (you can exploit this to force inline functions and do other weirdness).

So all that eventually leads you to the .h file idea, because it's the most sane way to handle the needs of the toolset in a useful, consistent, human-comprehensible way :). Half the answer is you do what is needed to get it to compile. The other half is that convention dictates that we do things a certain way so that code can be understood by other programmers. As you may have gathered from above, for example, you can #include cpp files. You should not, but it works fine as long as you dodge the 'defined twice' bullets. Even though it works in some cases, convention and good practice demand you don't do this.
Header files are from the past. Nowadays we have modules which are supposed to make things much easier.

https://en.cppreference.com/w/cpp/language/modules
https://itnext.io/c-20-modules-complete-guide-ae741ddbae3d
@jonnin - Thank you for the additional clarification. That was very helpful :)


@thmm - That's quite a bit of new information to get my head around. Thanks for highlighting this new functionality.


As a newbie would it be fair to recognise that Modules have now been implemented in C++ 20, but to ignore them while focusing on building an initial working knowledge of C++ with headers / .hpp and .cpp files, or does best practice now require that I embrace Modules from the very beginning?
Both.
modules are VERY new, and C++ is VERY old. It would be wise to know enough about older code that you can work with it if you need to do so. For the next 10 years I would still put a good bet that most of the code you will see uses the header file design. This is a mix of older code (40 or so years worth!), people that have not moved to the latest toy, cautious businesses that lag behind the bleeding edge, and so on.
At the same time, you are going to start seeing code that uses modules, and it's probably in your best interest to make your own new code use them (at least sometimes, for familiarity; and once out in industry, all the time, if the job uses up-to-date tools).

Start with the .cpp/.h approach though. You won't see any** modules in tutorials, books, assignments, classrooms, or even code off the internet for a while.
**Not counting examples of how to use them, or the rare new project.
Using modules isn't as hard or confusing as MS documentation tries to make it.

You've seen pre-C++20 code like this:
#include <iostream>

int main()
{
   std::cout << "Hello World!\n";
}

To USE C++20 modules the code is something like this:
import <iostream>;

int main()
{
   std::cout << "Hello World!\n";
}

If you use Visual Studio (2019 or 2022) the code could be:
import std.core;

int main()
{
   std::cout << "Hello World!\n";
}

This version of consuming C++ modules is some serious MS non-standard stuff, and no other compiler AFAIK supports it. It also vomits up some really lame warnings that are really confusing.
Thanks all, I guess there's no way round it, I'll need to learn both headers and modules.

(but I might put modules on the back burner for a few weeks longer while I'm learning the basics)
I have a question for anyone used to modules:
Is there a runtime speed bonus and/or impediment to code using modules over code using traditional headers?

I do see the massive benefit that kernel programmers (and large-scale projects in general) would receive in compile-time, and there's a wonderful reduction in bloat that I didn't think would be possible, but what are the effects at run-time?

-> If there is a runtime benefit as well then I think modules should be adopted as quickly as possible because the reduction in bloat is worth a little extra setup on the coding side (do it enough and it generally becomes muscle memory).

-> If there's no runtime benefit or cost, then yes you could put it off so that learning from older code is easier, but it should be learned alongside the older traditional styles.

-> If there's a runtime impediment, then it's a balancing act of cost over benefit that maybe should be put off until you're fully comfortable with the language.
There is no run-time difference between using modules or traditional headers, AFAIK the same machine code is generated.

The time differential is when compiling.

With Visual Studio 2019/2022 the initial compile using modules takes longer, because VS compiles the header modules as well as your source module(s). In any compile after that which isn't a complete rebuild, only the modified source modules are recompiled, resulting in a faster compile time.
Not much new there either. It does that with normal headers etc. ... the software for my job takes over half an hour from scratch, a few seconds to recompile.
Use the C++ library headers and there is no extra initial compile time; the C++ headers don't need to be pre-compiled.

Use the C++ library modules and the initial compile time is increased because the C++ library modules have to be compiled.

At least that is what happens with VS. I can't say what other compilers will do when compiling C++ library headers vs. C++ library modules. I haven't been able to get GCC/MinGW to work with modules.
Topic archived. No new replies allowed.