Insights into C++ code

Has anyone used C++ Insights, and do you trust the code on GitHub? It's pretty helpful that it can take a variadic template and break it down to what the compiler is actually doing. The author also has an online version of it.
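For example, here's a toy variadic template of the kind you could paste in (my own example, not one from the video):

#include <iostream>

template <typename... Args>
auto sum(Args... args)
{
    return (args + ...); // C++17 fold expression; Insights shows the expansion
}

int main()
{
    std::cout << sum(1, 2.5, 3) << '\n';
}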

It would be great to see, line by line, what the compiler is doing like this, and to be able to step through it AND show all the variable values at the same time. Is there anything else like this out there that can do both of these and is more popular? I wish Visual Studio had something like this.

https://youtu.be/j1qpRgUhXtk?t=356

https://github.com/andreasfertig/cppinsights
It would be great to see, line by line, what the compiler is doing like this, and to be able to step through it AND show all the variable values at the same time.
Well, couldn't you just take the output from this tool, compile it, and run it through a normal debugger?
Yes.
Have you guys heard of this program before, and is it safe and trustworthy to download and run? There may be larger programs with classes whose behind-the-scenes compiler workings I would also love to see, so not just for simple hello-world programs.

There is an option to upload code to the online version, but at times you may want to keep your secret sauce private, so I would like to run it locally. Is there any other trustworthy app, or a built-in way to do this with the compilers?

1 more question.
If I have a complex function that runs a lot of float calculations in a loop for checking, and I only want to add an integer from 1 to 10 to an existing float, is it better to set up an unsigned short or a float variable for the addition?

Using a short will save memory, but the compiler will have to constantly convert it to float, which will eat up speed and processing time? So is it better to use float if I want faster speed, but short if I want to save memory?

unsigned short NumToAdd; // 2 bytes, but converted to float on every addition
float NumToAdd;          // 4 bytes, no conversion needed
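In context, the two variants would look something like this (a toy loop; the names are made up):

float value = 0.0f;

unsigned short addS = 5; // converted to float on each use
float addF = 5.0f;       // already a float

for (int i = 0; i < 1000000; ++i)
{
    value += addS; // implicit conversion to float every iteration
    // vs
    value += addF; // straight float addition
}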

Normally neither speed nor memory is really relevant. Computers today have a lot of memory and they are very fast.

Don't do any premature optimization. Do it the way that is easiest for you to program.

Most processors have a dedicated floating-point unit, so there is usually a very small difference, speed-wise, between floating-point and integer operations. I would even recommend double.

So the question is: do you really see any problems regarding speed or memory?
Don't do any premature optimization. Do it the way that is easiest for you to program.

From my own experience, the compiler has an easier time optimizing when the code is already sensibly optimized.

This comes down to choosing the correct thing to code in the first place. If you use a linked list where a map would have been more suitable, the compiler won't optimize that!
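For instance (an illustrative sketch; the container choice is the programmer's, and no optimizer will change it):

#include <algorithm>
#include <list>
#include <map>
#include <string>

// Linear search: O(n) per lookup. The compiler can tighten the loop,
// but it cannot turn the list into a tree.
bool hasKey(const std::list<std::string>& l, const std::string& key)
{
    return std::find(l.begin(), l.end(), key) != l.end();
}

// Tree lookup: O(log n) per lookup, because the right structure was chosen.
bool hasKey(const std::map<std::string, int>& m, const std::string& key)
{
    return m.find(key) != m.end();
}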
Let's just say speed can become an issue as I keep adding to the code; then it would be best to just use the float, I gather.
the compiler has an easier time optimizing when the code is already sensibly optimized.
This depends on whether you and the compiler agree on that optimization.
Generally I would think: the simpler the expression is, the easier it is for the compiler to optimize. So I would say that an if() is easier to resolve than a throw.

I keep adding to the code; then it would be best to just use the float, I gather.
A rough assumption may be: whatever needs fewer lines of code.
But that's not always true. When it appears slow, you might compare both variants using a timer...

Premature optimization becomes a problem when it's getting hard to extend the code when necessary.
The default for floating-point numbers in C++ is double (e.g. 2.0 is a double), so you'd normally use double for real-number variables. Unless you have specific requirements that make this type inappropriate (e.g. limited support/memory on embedded processors), you should stick with it.

Similarly, for integer types the default is int (e.g. 3 is an int). Unless you have limited memory, there's no need to use short int (though there may be good reasons to use unsigned). Obviously, for large integer values you use (unsigned) long int.
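A minimal sketch of those defaults:

#include <iostream>

int main()
{
    auto a = 2.0;  // double: the default floating-point literal type
    auto b = 2.0f; // float only with an explicit f suffix
    auto c = 3;    // int: the default integer literal type
    auto d = 3ul;  // unsigned long with a suffix

    std::cout << sizeof a << ' ' << sizeof b << ' '
              << sizeof c << ' ' << sizeof d << '\n'; // e.g. "8 4 4 8" on x86-64
}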

For speed issues regarding floating point, what is being done with the numbers may be important. Functions such as pow(), sin(), cos() and exp() can be 'expensive' in terms of performance - so where possible avoid them, compute them outside of any loops, or even use tables of pre-computed values.
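A sketch of the hoisting idea (illustrative names; assumes the angle doesn't change inside the loop):

#include <cmath>

// Before: sin() is re-evaluated on every iteration.
double slowSum(const double* data, int n, double angle)
{
    double total = 0.0;
    for (int i = 0; i < n; ++i)
        total += data[i] * std::sin(angle); // expensive call inside the loop
    return total;
}

// After: compute it once, outside the loop.
double fastSum(const double* data, int n, double angle)
{
    const double s = std::sin(angle); // hoisted out
    double total = 0.0;
    for (int i = 0; i < n; ++i)
        total += data[i] * s;
    return total;
}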

If I have a complex function that runs a lot of float calculations in a loop for checking, and I only want to add an integer from 1 to 10 to an existing float, is it better to set up an unsigned short or a float variable for the addition?

Using a short will save memory, but the compiler will have to constantly convert it to float, which will eat up speed and processing time? So is it better to use float if I want faster speed, but short if I want to save memory?
You're talking about a difference of a few bytes. The energy you spent worrying about this and then typing up the question was worth much more than anything you could use those bytes for.

Remember:
YAGNI - Ya Ain't Gonna Need It
KISS - Keep It Simple, Stupid!
Don't worry about things that may not come to pass; don't add features that may never be needed. Don't add complexity unnecessarily.
This depends on whether you and the compiler agree on that optimization

It's possible one algorithm is better than another, but the "worse" algorithm can somehow be better optimized.

Though I don't believe I've ever run into that. Redoing a program to be more efficient has always led to a faster program when running it optimized.

I just wouldn't recommend following the literal meaning of "Do it the way that is easiest for you to program." Maybe do that if you plan on finding a better way to code it later.


Though, worrying about a few bytes here and there is not worth it.
zapshe wrote:
It's possible one algorithm is better than another, but the "worse" algorithm can somehow be better optimized.

If that makes the "worse algorithm" perform better than the "better algorithm", doesn't that mean the "worse algorithm" is actually the better one? At least for that particular situation. Or how do you decide whether an algorithm is better than another? Big O? Clarity? Genericity?
If that makes the "worse algorithm" perform better than the "better algorithm", doesn't that mean the "worse algorithm" is actually the better one, and vice versa?

For that specific situation. It's possible that those optimizations don't carry over to a different CPU - making the "better" algorithm faster on different computers.

On paper, a better algorithm will be better if it does less "stuff": fewer loops, less work per loop. I mean, do I have to explain this?

I once had an algorithm that had to loop 16 million times. Using some math magic, it only had to loop 4 million times. Needless to say, it ran faster. The original formula used was obvious, but after doing some math magic I produced a more efficient algorithm by using a formula that required less input - and therefore fewer loops.
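(The original code isn't shown here, but a textbook example of the same idea - replacing iteration with a closed form - is summing 1..n:)

#include <cstdint>

// n iterations.
std::int64_t sumLoop(std::int64_t n)
{
    std::int64_t total = 0;
    for (std::int64_t i = 1; i <= n; ++i)
        total += i;
    return total;
}

// No loop at all: Gauss's formula gives the same result in O(1).
std::int64_t sumFormula(std::int64_t n)
{
    return n * (n + 1) / 2;
}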

The whole point of clever algorithms is to try and produce output for the least amount of work (usually iterations).


Though I do agree you can take it too far. It's possible to use tricks and obscure code that should make your program faster, but that make it difficult for the compiler to know what's happening and to optimize the rest of your code properly.
With typical PC floating point, it's going to pull the value from memory, promote it to something big, do the math, and downcast it to the size you wanted. It's going to do that whether it's float or double. The only difference is the width going to and from memory, on the bus, since you can push two floats in the space of one double. This is very rarely the bottleneck and very rarely of much consequence. There won't be any measurable difference in anything apart from the space taken up, which in turn can affect overall performance (memory pages, data transfer rates, file sizes, network traffic, whatever).

Integer math is typically slightly faster than floating point. There are a few reasons, but it's simple enough here: use int where an int will do.

Why not try it yourself, though?
Set up some dumb loops... add a billion ints of a few sizes, add a billion doubles, add a billion floats. See what all the fuss is about and whether it's worth your time. People used to get all worked up futzing with this stuff, but that was when memory was measured in kilobytes and CPU speeds ran below a megahertz and so on. I've got a book bigger than a dictionary on this junk for assembly-tweaking the 386; I've endured lectures and papers and more on such things. After the dawn of modern computing, with the FPU on the chip, hundreds of MHz clock speeds, multiple cores, big fat buses, and all the luxury we have today, the only people who mention these issues are old farts remembering the 'glory' days and people dealing with ultra-specialty hardware.
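A minimal sketch of that experiment (assumes C++14 or later; volatile keeps the optimizer from deleting the loops, and the integer results will wrap, which is harmless for timing):

#include <chrono>
#include <iostream>

// Time a billion additions for a given type.
template <typename T>
void timeAdds(const char* name)
{
    volatile T sum = 0; // volatile so the loop isn't optimized away
    auto start = std::chrono::steady_clock::now();
    for (long i = 0; i < 1'000'000'000L; ++i)
        sum = sum + static_cast<T>(1);
    auto stop = std::chrono::steady_clock::now();
    std::cout << name << ": "
              << std::chrono::duration<double>(stop - start).count()
              << " s\n";
}

int main()
{
    timeAdds<short>("short");
    timeAdds<int>("int");
    timeAdds<float>("float");
    timeAdds<double>("double");
}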
and people dealing with ultra specialty hardware.

For example, GPGPU.
Has anyone used C++ Insights


I've used the on-line version on test code for things like templated code, lambdas etc. I know others who have also used the on-line version to understand test code. I haven't installed it (and I don't know anyone who has). I've found the output informative.

Would I use the on-line version for proprietary production code? Probably not - but you shouldn't want/need to. For whatever you're interested in from your code, produce compilable test code that demonstrates only what you're investigating - and use that. Shovelling hundreds or thousands of lines of code through C++ Insights is unlikely to be beneficial or productive.