bitset not printing correct number

Hi guys,

Similar to a question I asked a little earlier, I am trying to print the maximum value of an unsigned long long, but to no avail.

The bitset only seems to print 32 bits of 1's, i.e. half the number.

Does anybody know why?

#include <iostream>
#include <bitset>
#include <climits>  // ULLONG_MAX (ULONG_LONG_MAX is a non-standard GNU spelling)

using namespace std;

int main()
{
    typedef unsigned long long ulong;
    ulong c = ULLONG_MAX;
    cout << "size of ulong == " << sizeof(c) << endl;
    bitset<64> bt(c);
    cout << bt << endl;
}




size of ulong == 8
0000000000000000000000000000000011111111111111111111111111111111


Try:
ulong c = numeric_limits<ulong>::max();

Still getting:

size of ulong == 8
 c == 18446744073709551615
0000000000000000000000000000000011111111111111111111111111111111


#include <iostream>
#include <bitset>
#include <limits>

using namespace std;

int main()
{
    typedef unsigned long long ulong;
    ulong c = numeric_limits<ulong>::max();
    cout << "size of ulong == " << sizeof(c) << endl;
    cout << " c == " << c << endl;
    cout <<  bitset<64>(c) << endl;

}
With the above code I get:

size of ulong == 8
c == 18446744073709551615
1111111111111111111111111111111111111111111111111111111111111111


However I am getting the following warning:

||=== Build: Debug in c++homework (compiler: GNU GCC Compiler) ===|
main.cpp||In function ‘int main()’:|
main.cpp|9|warning: declaration of ‘ulong’ shadows a global declaration [-Wshadow]|
/usr/include/x86_64-linux-gnu/sys/types.h|149|note: shadowed declaration is here|
||=== Build finished: 0 error(s), 1 warning(s) (0 minute(s), 0 second(s)) ===|


What compiler and OS are you using?

I'm going to guess C++98.
Before C++11, std::bitset<N>'s constructor took unsigned long, not unsigned long long, which is an issue on ILP32 machines.

unsigned long long is a C++11 addition; GCC provided it as a compiler extension before that.

(You can also write -1ull instead of using library code just to get the maximum value.)
#include <iostream>
#include <bitset>

int main()
{
    unsigned long z = 0b1111111111111111111111111111111111111111111111111111111111111111;
    //                  123456789 123456789 123456789 123456789 123456789 123456789 1234

    std::cout
    << "bitset<64>: " << std::bitset<64>(z) << '\n'
    << "       hex: " << std::hex << z << '\n'
    << "   decimal: " << std::dec << z << "\n\n"
    << "Size of z = " << sizeof(z) << " bytes\n";

    return 0;
}


bitset<64>: 1111111111111111111111111111111111111111111111111111111111111111
       hex: ffffffffffffffff
   decimal: 18446744073709551615

Size of z = 8 bytes
 
Exit code: 0 (normal program termination)

https://www.geeksforgeeks.org/c-data-types/

PS #include <bitset> if you want to run it on cpp.sh
To try to simulate it ...
#include <iostream>
#include <bitset>
#include <limits>
using namespace std;

int main()
{
    typedef unsigned long long ulong;
    ulong c = numeric_limits<ulong>::max();
    cout << "size of ulong == " << sizeof(c) << endl;
    cout << " c == " << c << endl;
    cout <<  bitset<64>((unsigned)c) << endl;
}

size of ulong == 8
 c == 18446744073709551615
0000000000000000000000000000000011111111111111111111111111111111
Windows 10, and the compiler is MinGW 32-bit.

I was using C++98, but I checked the C++11 flag in my compiler's settings and it still prints the same:

0000000000000000000000000000000011111111111111111111111111111111


Changing unsigned to signed actually seems to do the trick, but I don't know why.

    typedef unsigned long long ulong;
    ulong c = numeric_limits<ulong>::max();
    cout << "size of ulong == " << sizeof(c) << endl;
    cout << " c == " << c << endl;
    cout <<  bitset<64>((signed)c) << endl; // now it works, but probably not for the right reasons
}


size of ulong == 8
 c == 18446744073709551615
1111111111111111111111111111111111111111111111111111111111111111


As far as the original code goes, I assume it's MinGW doing something dumb and not implementing the bitset constructor correctly. (This wouldn't be the first time MinGW, especially a relatively old version, hasn't supported the standard correctly.)

Do you have the link to where you downloaded this version of MinGW from? My guess would be that if you dug into the actual library implementation, it's narrowing the unsigned long long into the old unsigned long version.

What's this print, by the way?
#include <iostream>

int main()
{
    // 199711L = C++98/03, 201103L = C++11, 201402L = C++14, 201703L = C++17
    std::cout << __cplusplus << '\n';
}
#include <iostream>
#include <bitset>
#include <limits>

using namespace std;

int main()
{
    unsigned long long c = numeric_limits<unsigned long long>::max();
    cout << "size of ulong == " << sizeof(c) << endl;
    cout << " c == " << c << endl;
    cout <<  bitset<64>(c) << endl;
    cout <<  bitset<64>((unsigned)c) << endl;
}


size of ulong == 8
 c == 18446744073709551615
1111111111111111111111111111111111111111111111111111111111111111
0000000000000000000000000000000011111111111111111111111111111111
Topic archived. No new replies allowed.