Right, but apparently, even that reasoning was wrong, since the correct answer is integer not bool. So, in the context of a first-year programming course, there is no reason for "integer" to be the correct answer, right?
It must have been an error by the questioner. The conversion argument is the one that makes the most logical sense in a first-year course (which is why I chose it), but as for int, there is no logical reasoning behind it unless you want to delve into obscure low-level implementation details that are far beyond the scope of the course.
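To spell out what I mean by the conversion argument, here is a rough sketch (my own illustration, not something from the question or the textbook):

// the first-year "conversion" reasoning: in a condition, a non-bool value is
// implicitly converted to bool, while a bool is used as-is
void example(int i, bool b)
{
    if (i) { /* evaluated as (i != 0): an int-to-bool conversion */ }
    if (b) { /* already a bool; nothing to convert */ }
}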
So the correct answer, according to the basic first-year reasoning using the "conversion" I mentioned, would be bool, not int. I am going to tell my professor this, and then top it off with the assembly output argument. Hopefully he ends up removing the question entirely.
this may be the worst question of all time. If taken as a beginner question, it's asking about "fastest" far too early; at this stage, worrying about what happens when you dump some form of integer into some register, and whether there is a slowdown in the middle of it, is just way too deep in the weeds. If it's an advanced question, the answer is "it depends" and they did not provide the details.
Update: My teacher actually ended up agreeing with me. He said that he does not write the questions, but they are randomly pulled from a bank of questions online from the textbook's website, which explains these mistakes. He gave me the point.
What is the publication date of the textbook, and its name? As well as probably being seriously outdated, I doubt the writers of the book are at the cutting edge of C++ evolution.
So here is what the entire email conversation looked like. It took me a while to try to convince him that my answer was correct, but in the end he just said he "agrees with the author", though he will remove the question because we are not expected to know it. I told him that what gets evaluated in a control condition must be a boolean value, and he replied with this:
It does not have to be a Boolean. For example, “beq $t1, $t2, branchHere” compares what is in two registers and branches to the instruction with label “branchHere”. It does not convert it to a boolean.
I then replied by saying that it doesn't matter what the compiler does internally, because it will generate the same code for both. On x86, the equivalent of MIPS's BEQ instruction is JE (I believe), and if we run a test on godbolt: https://godbolt.org/z/TF6TFG
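The test was along these lines (a simplified sketch of the idea; the exact code is at the link):

// two versions of the same condition: one tests a bool, one tests an int;
// under optimization, mainstream x86-64 compilers emit essentially the same
// test-the-register-then-conditionally-jump sequence for both
void do_work();

void with_bool(bool flag)
{
    if (flag)          // bool used directly in the condition
        do_work();
}

void with_int(int flag)
{
    if (flag)          // int implicitly converted to bool by the condition
        do_work();
}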
The assembly outputs are the EXACT SAME!!! So when I brought this up to my teacher, he said the following:
When you learn about the datapath in computer architecture and compiler design you will learn more and see the differences.
This made no sense to me, since the assembly output is the same for both. Could someone explain what he means by this? And why this would make "integer" the correct answer?
What he's saying there, in rough terms, is that he's not asking which is more efficient in real world practice; he's asking which is more efficient in a hypothetical case based on an unspecified theory of computer architecture and compiler design, which you haven't been taught.
This is part of a long-standing friction between computer science and real software; computer science is about thinking and theorising with an abstract body of knowledge that sometimes simply doesn't have a real-world, practical counterpart that works in the same way (or that can do, if you're willing to deliberately cripple your compiler and hardware to simulate the simple models upon which the theories are based).
This is not to knock computer science, but to recognise that CS is not meant to be about real, practical software, although it can be a foundation upon which to build such software.
For example, in a computer science quiz, the answer to the question "which structure should be used for efficient random insert" is "linked list". In real practical software on modern commodity hardware (e.g. x64), the answer is "usually array / vector".
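To make that concrete, here is a rough sketch of the comparison (not a benchmark, just the shape of it):

#include <iterator>
#include <list>
#include <vector>

// the list's insert is O(1) once you are at the position, but *reaching* an
// arbitrary position is an O(n) walk over scattered nodes; the vector's O(n)
// shift is one contiguous move that caches and prefetchers handle very well,
// which is why the vector usually wins in practice
void insert_middle(std::vector<int>& v, std::list<int>& l, int value)
{
    v.insert(v.begin() + v.size() / 2, value);   // shift elements (contiguous)

    auto it = l.begin();
    std::advance(it, l.size() / 2);              // pointer-chasing walk
    l.insert(it, value);                         // cheap node splice
}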
You're going to experience this mismatch between computer science and software engineering a lot.
And what would that unspecified theory be? Given that they have the exact same assembly output, what would the "datapath" have to do with anything in terms of the efficiency of an "int"? Maybe he just made this point to simply affirm that the author is correct and stop arguing with me.
As far as I understand, if the assembly outputs are exactly equivalent, then they are equally efficient. Right?
@TheToaster, your instructor likely only "gave you the point" to shut you up. What little you posted of the email exchange suggests he believes the textbook is correct, period. End of discussion.
Welcome to reality. It's messy, but it is what it is.
As far as I understand, if the assembly outputs are exactly equivalent, then they are equally efficient. Right?
Sure, in reality. In real software, when getting a real modern compiler to churn out instructions for a real modern processor to use.
But the CS classroom isn't about real, modern, practical software. There are entire abstract models of compilers and assembly and processors that handle these cases very differently.
@Ganado, Right, but even with that extra piece of information, that would mean that the bool (byte) is more efficient than the integer (dword) since it is smaller, not the other way around. But that is a stretch.
Not if the hardware is optimized to deal with dword chunks. But, the fact that the question is abstracted in such a way to be talking about "bits", "bools", "ints", and even the word "condition" (which may not always be a branch/jump) makes me agree that it's an invalid question. Plus, as you explained, it appears out of place in your course (of course, I don't know exactly what you were taught so I'm just going off what you said).
As far as I understand, if the assembly outputs are exactly equivalent, then they are equally efficient. Right?
sure. But the devil is in the details.
your choices are many here, and it's driven by the combination of hardware and compiler smarts.
consider intel: a smart compiler could unravel a loop to use a rep-family command. That is basically moving a count into the C (count) register as a 1-, 2-, 4-, or 8-byte-wide thing and decrementing it until zero. The assembly wouldn't care what the original C type was: it uses the version of rep that matches the data's size, and it's the same amount of work regardless.
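for example, a trivially-shaped fill loop like this (just a sketch of the kind of pattern an optimizer can recognize):

// the kind of loop an optimizer may lower to a rep-style fill (or a memset
// call); the element's width just selects which rep variant gets used
void clear(char* buf, int n)
{
    for (int i = 0; i < n; i++)
        buf[i] = 0;
}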
then take a less simple loop, where the loop modifies its own counter in some screwy way:
for (int i = 0; i < 100; i++)
    i *= 13;
^^ can the compiler figure out a constant to put into rep?
and rep isn't suitable for everything. Then you have other loops, most of which look like a goto statement (jumps on a condition).
and then there are other platforms; some have fixed-width assembly instructions, and no matter what the size of the C type, they jack it into a 64-bit integer because that is what the CPU wants. Intel's register splitting, and the redundant commands to access the split registers, isn't done universally.
also think about the bits of a byte going into a cpu: each bit gets a wire in a connector; it's not serial, it's parallel, so the width of the connectors plays a role. Even if you could talk in raw bits, a bool type would then BE a bit; the compiler would know this and use the hardware appropriately, so bit and bool would both be correct.
and then there is the compiler aspect again: a smart compiler would always do the most efficient thing for the bool type, so bool would be correct because it would clue the compiler in to make the best code it could.
Just to scratch the real hardware surface a bit.... at a high level.