What's wrong with this code?

Hello everybody
I was just surfing the net for interview questions and I came across this expression in C++. It goes like this:

long value;
//some stuff
value &= 0xFFFF;


and then

Note: Hint to the candidate about the base platform they’re developing for. If the person still doesn’t find anything wrong with the code, they are not experienced with C++.


I am not able to find the bug in this code. It ran perfectly well on my PC. Help would be appreciated.
closed account (1yR4jE8b)
value is not initialized, so when you try to & (bitwise AND) it with 0xFFFF you will get undefined behavior.
closed account (S6k9GNh0)
Looks like I need to start learning about bitwise operations. >.>
I would object to darkestfright's answer on technical grounds - the behaviour is defined, the result is undefined.
closed account (1yR4jE8b)
Ok, well maybe I didn't word it properly but what I meant to say was....

You can't guarantee what the result will be, because the value of an uninitialized variable is undefined, so performing an AND operation on it will give an undefined result.
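Something like this is what I mean (just a rough sketch; the starting value 0x12345678 is made up purely for illustration):

#include <iostream>

int main()
{
    long uninitialized;              // no initializer: its value is indeterminate
    long initialized = 0x12345678L;  // made-up starting value, just for illustration

    // uninitialized &= 0xFFFF;      // garbage & 0xFFFF is still garbage
    initialized &= 0xFFFF;           // well-defined: keeps only the low 16 bits

    std::cout << std::hex << initialized << std::endl;  // prints 5678
    return 0;
}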
I would start by asking how big a long is on the target architecture.
closed account (1yR4jE8b)
Now that you mention it, let's assume value IS initialized (since we have //some stuff, I never noticed that before lol).

0xFFFF is a 2-byte value; on a typical PC a long is 8 bytes. There's your problem, unless you *intended* to only AND 2 bytes of value.
On a "typical" PC a long is the same size as an int, which is either 32 or 64 bits, depending upon your OS.
The sizes of long and int are implementation-dependent, not platform-dependent. Old versions (I don't know how old and I don't know if they stopped doing it) of Borland had 16-bit ints and 32-bit longs.

EDIT: There's nothing wrong with that code. The AND operation is perfectly valid and equivalent to value %= 65536. Wanting to keep only the lower 16 bits of a value is useful in many circumstances.
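Something along these lines (just a sketch; 0x12345678 is an arbitrary value, and the % equivalence only holds while the value is non-negative):

#include <iostream>

int main()
{
    long value = 0x12345678L;      // arbitrary example value

    long masked = value & 0xFFFF;  // keep only the lower 16 bits
    long modded = value % 65536;   // same result as long as value is non-negative

    std::cout << std::hex << masked << ' ' << modded << std::endl;  // 5678 5678
    return 0;
}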
closed account (1yR4jE8b)
GCC 4.4.1 says sizeof(int) is 4 and sizeof(long) is 8. I never knew that it was implementation-dependent, but it just makes sense.

I'm just thinking out loud here, but I thought the purpose of a long was to have larger memory for an integer type. How exactly would you accomplish that with int and long having the same size?

Ultimately, I agree with helios though. There really isn't anything wrong with the code: it compiles and it doesn't crash at run-time. It really depends on what you are trying to do with it whether it is "logically" incorrect.
I thought the purpose of a long was to have larger memory for an integer type. How exactly would you accomplish that with int and long having the same size?
Well, that's more or less why we have different types, but in reality, a type is only guaranteed to be at least as big as the previous type in this order: char, bool, short, int, long. It's perfectly possible to make a compliant implementation where long is as big as char. This is (I'm guessing) because not every platform has as much variety of types as x86.
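A quick way to see what your implementation chose (just a sketch; the actual numbers depend entirely on the compiler and platform):

#include <iostream>

int main()
{
    // Only the ordering char <= short <= int <= long is required;
    // the actual sizes are up to the implementation.
    std::cout << "char:  " << sizeof(char)  << '\n'
              << "short: " << sizeof(short) << '\n'
              << "int:   " << sizeof(int)   << '\n'
              << "long:  " << sizeof(long)  << std::endl;
    return 0;
}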
closed account (1yR4jE8b)
Thanks, I never did think of it that way. Perhaps I should study the ANSI standards a bit so that I know more about the details rather than just "what works on my computer".
On second thought, I suspect they're looking for Big Endian vs. Little Endian.
Bitwise AND is performed on values, not value representations, and therefore native endianness doesn't affect the result of the operation.
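To illustrate (a sketch; 0xAABBCCDD is a made-up value, and the byte dump in the comment assumes a little-endian machine):

#include <iostream>

int main()
{
    unsigned long value  = 0xAABBCCDDUL;      // made-up example value
    unsigned long masked = value & 0xFFFFUL;  // value-level operation: 0xCCDD on any machine

    // Endianness only shows up in the object representation (the bytes in memory):
    const unsigned char* bytes = reinterpret_cast<const unsigned char*>(&value);
    std::cout << std::hex;
    for (unsigned i = 0; i < sizeof value; ++i)
        std::cout << static_cast<unsigned>(bytes[i]) << ' ';  // e.g. dd cc bb aa ... on little-endian
    std::cout << "\nmasked = " << masked << std::endl;        // ccdd regardless of byte order
    return 0;
}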
closed account (z05DSL3A)
If your platform uses a 16-bit int, then the literal (0xFFFF) will be sign extended when the usual arithmetic conversion takes place for the bitwise AND operation.

The code should be
long value(0);
//some stuff
value &= 0xFFFFL; //might even be 0x0FFFFL  
Sign extension had crossed my mind (as I thought the original question was a trick question) - I thought that 0xFFFF would be extended to 0xFFFFFFFF and there would be no change to the value - but when I tried it on MinGW and MSVC it didn't happen like that.
closed account (z05DSL3A)
#include <iostream>

int main()
{
	std::cout << (sizeof(int)  * 8) << "bit int" << std::endl;
	std::cout << (sizeof(long) * 8) << "bit long" << std::endl;
	
	long value1 = 0xaaaaaaaabbbbbbbb;
	long value2 = 0xaaaaaaaabbbbbbbb;
	
	int mask = 0xFFFFFFFF;
	
	value1 &= 0xFFFFFFFF;
	value2 &= mask;
	
	std::cout << std::hex << value1 << std::endl;
	std::cout << std::hex << value2 << std::endl;
    
    return 0;
}

32bit int
64bit long
bbbbbbbb
aaaaaaaabbbbbbbb

Hmm...not entirely what I was expecting, maybe compilers have got smarter.
According to the standard, 2.13.1, paragraph 2:
The type of an integer constant is the first of the corresponding list in Table 5 in which its value can be represented.

And the table for hexadecimal literals without type suffix is: int, unsigned int, long int, unsigned long int, long long int, unsigned long long int.

If I'm not mistaken, this means that if the literal's value would come out negative when interpreted as the signed type, the compiler will try the unsigned version instead.
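One way to see which type the compiler actually picked (a sketch assuming a 32-bit int; the which() overloads are just helpers made up for the demonstration):

#include <iostream>

void which(int)           { std::cout << "int" << std::endl; }
void which(unsigned int)  { std::cout << "unsigned int" << std::endl; }
void which(long)          { std::cout << "long" << std::endl; }
void which(unsigned long) { std::cout << "unsigned long" << std::endl; }

int main()
{
    which(0xFFFF);      // fits in a 32-bit int            -> "int"
    which(0xFFFFFFFF);  // too big for a 32-bit signed int -> "unsigned int"
    return 0;
}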
So, helios, what would be the exact answer to the original problem?
The question is wrong. There's nothing wrong with that code. Although I'm dying to know what the author thought was wrong with it.