reinterpret_cast

My goal here is to display the binary representation of a float in the console. The bitwise right-shift operator >> seems to require an unsigned integer type, so I am trying to convert a float to an unsigned int.

Here is my code:
#include <iostream>
#include <string>

std::string toBinary(const int &number)
{
    unsigned int bitPattern(0x80000000);  // MSB is 1, followed by 0's.
    std::string binaryString("");
    for (unsigned int bit(0); bit < 32; ++ bit) {
        if ((bitPattern & number) > 0)
            binaryString += "1";
        else
            binaryString += "0";
        bitPattern >>= 1;
    }
    return binaryString;
}

int main()
{
    std::cout << sizeof(float) << std::endl;           // 4
    std::cout << sizeof(unsigned int) << std::endl;    // 4
    float f = 3.14159f;
    unsigned int f_ui = reinterpret_cast<unsigned int>(f);    // ERROR!
    std::cout << toBinary(f_ui) << std::endl;
    return 0;
}


The compiler gives me an error at the line with the reinterpret_cast (line 23 of my code): invalid cast from type 'float' to type 'unsigned int'
I chose unsigned int because it is represented by the same amount of bits as a float in my system. 4 bytes (32 bits).

I found another thread on this site, http://www.cplusplus.com/forum/general/60160/ , with a very similar question, and one of the replies gives a working solution. That's great, but I would like to know more.

Why can't I reinterpret_cast one type to another when they have the same size? Logically it seems like the least dangerous reinterpret_cast operation I can think of.

All the examples I have seen of reinterpret_cast deal with pointers. Can reinterpret_cast ONLY be used with pointers? That is my main question.
> Can reinterpret_cast ONLY be used with pointers? That is my main question.

The object representation of any - repeat any - object may be examined as a sequence of bytes (sequence of char, signed char or unsigned char).

To view the bits in the bytes that make up an object of type float:
a. get the number of bytes in the object representation with sizeof()
b. reinterpret_cast<>() the address of the object as a pointer to const unsigned char
c. print out the bits in each byte one by one.

#include <iostream>
#include <limits>
#include <bitset>
#include <memory>
#include <string>
#include <vector>

template < typename T > void show_bytes( const T& object, const char* tag = nullptr )
{
    using byte = unsigned char ;
    constexpr auto BITS_PER_BYTE = std::numeric_limits<byte>::digits ;
    using byte_bits = std::bitset<BITS_PER_BYTE> ;

    const auto n = sizeof(T) ;
    const byte* const pbyte = reinterpret_cast< const byte* >( std::addressof(object) ) ;

    for( std::size_t i = 0 ; i<n ; ++i ) std::cout << byte_bits( pbyte[i] ) << ' ' ;
    if(tag) std::cout << " (" << tag << ')' ;
    std::cout << "\n\n" ;
}

#define SHOW_BYTES(a) show_bytes( a, #a ) ;

int main()
{
    SHOW_BYTES( 12345 ) ; // int
    SHOW_BYTES( short(12345) ) ; // short
    SHOW_BYTES( (unsigned long long)(12345) ) ; // unsigned long long

    std::string string = "hello world!" ;
    SHOW_BYTES( string ) ; // string
    string.reserve(1000) ;
    SHOW_BYTES( string ) ; // string
    string += "!!!" ;
    SHOW_BYTES( string ) ; // string

    double dbl = 0 ;
    SHOW_BYTES( dbl ) ; // double
    dbl = 1234.56 ;
    SHOW_BYTES( dbl ) ; // double
    dbl = 123456789.123 ;
    SHOW_BYTES( dbl ) ; // double
    SHOW_BYTES( float(dbl) ) ; // float

    std::vector<float> vector { 1, 2, 3, 4 } ;
    SHOW_BYTES( vector ) ; // vector

    SHOW_BYTES( std::cin ) ; // stream
}

http://coliru.stacked-crooked.com/a/611922e7f9a967bd
I'm not too familiar with templates yet, but I understood parts of your reply: using a char pointer to examine the variable one byte at a time, rather than going through the entire chunk of data in one go.

Do I have a correct understanding that reinterpret_cast can only be used to cast pointer types, not plain (non-compound) types?
> Using a char pointer to evaluate the variable one byte at a time,
> rather than going through the entire chunk of data in one go.

Yes.

> Do I have a correct understanding that reinterpret_cast can only be used to cast pointer types,
> not plain (non-compound) types?

reinterpret_cast can also be used to cast between integer and pointer types. For instance:

#include <iostream>
#include <cstdint>

int main()
{
    const double a[10] = {} ;

    const double* p0 = a ;
    const double* p4 = p0 + 4 ;
    std::cout << p4 - p0 << " == 4 \n" ;

    // std::uintptr_t is an unsigned integer type large enough to hold the value of a pointer
    const std::uintptr_t v0 = reinterpret_cast<std::uintptr_t>(p0) ;
    const std::uintptr_t v4 = reinterpret_cast<std::uintptr_t>(p4) ;
    std::cout << v4 - v0 << " == " << 4 * sizeof(double) << '\n' ;
}
Going back to your first reply and the example code you gave me. The program seems to display the bytes in reverse order. I don't understand why.

I have modified my code somewhat based on your advice, and the same thing happens there: the bytes are presented in reverse order.

#include <iostream>
#include <string>

std::string toBinary(const unsigned char* c_ptr, const std::size_t &bytes)
{
    unsigned char bitPattern;
    std::string binaryString("");
    for (std::size_t byte(0); byte < bytes; ++ byte) {
        bitPattern = 0x80;    // 10000000 in binary
        for (unsigned int bit = 0; bit < 8; ++ bit) {
            if ((bitPattern & *(c_ptr + byte)) > 0)    // (c_ptr + byte) is the current address.
                binaryString += "1";
            else
                binaryString += "0";
            bitPattern >>= 1;
        }
    }
    return binaryString;
}

int main()
{
    unsigned short int variable = 45;  // _Any_ type that I want to display in binary.
    unsigned char *c_ptr = reinterpret_cast<unsigned char*>(&variable);
    std::cout << toBinary(c_ptr, sizeof(variable)) << std::endl;
    return 0;
}


For simplicity I have used an unsigned short int instead of a float. The value is 45, or 0000000000101101 in binary. The output from the program is
0010110100000000


I'm confused. Is memory allocated in reverse order for variables? That is, with decreasing address.
> Is memory allocated in reverse order for variables? That is, with decreasing address.

See: http://en.wikipedia.org/wiki/Endianness

#include <iostream>
#include <string>
#include <cstdint>
#include <netinet/in.h>

std::string toBinary(const unsigned char* c_ptr, const std::size_t &bytes)
{
    unsigned char bitPattern;
    std::string binaryString("");
    for (std::size_t byte(0); byte < bytes; ++ byte) {
        bitPattern = 0x80;    // 10000000 in binary
        for (unsigned int bit = 0; bit < 8; ++ bit) {
            if ((bitPattern & *(c_ptr + byte)) > 0)    // (c_ptr + byte) is the current address.
                binaryString += "1";
            else
                binaryString += "0";
            bitPattern >>= 1;
        }
    }
    return binaryString;
}

int main()
{
    std::uint16_t variable_host = 45;  // host byte order
    const unsigned char *c_ptr = reinterpret_cast<const unsigned char*>(&variable_host);
    std::cout << toBinary(c_ptr, sizeof(variable_host)) << std::endl;

    // http://www.beej.us/guide/bgnet/output/html/multipage/htonsman.html
    std::uint16_t variable_nw = htons(variable_host) ;  // network byte order
    c_ptr = reinterpret_cast<const unsigned char*>(&variable_nw);
    std::cout << toBinary(c_ptr, sizeof(variable_nw)) << std::endl;
}

http://coliru.stacked-crooked.com/a/12cbdf4928a55b08
This is short and concise

#include <iostream>
#include <iomanip>
using namespace std;

void displayBits(unsigned x);

int main()
{
    float x = 10.0;    //x is inputted as float!!
    displayBits(x);
    return 0;
}
void displayBits(unsigned x)
{
    unsigned c; //counter
    //define display mask and left shift 31 bits
    unsigned displayMask = 1 << 31;
    cout<<setw(4)<<x<<" = ";
    for(c = 1; c <= 32; c++)
    {
        if(x & displayMask)
            cout<<1;
        else
            cout<<0;
        x <<= 1;
        if(c%8 == 0)
            cout<<" ";
    }
    cout<<endl;
}
shadowCODE wrote:
This is short and concise

... and doesn't do what's required.
shadowCODE, you are converting the float to unsigned int and in the process completely changing the binary representation of the value in memory. Try your program with float x = 3.14159f; and you'll see what I mean.
The result from your program is
00000000 00000000 00000000 00000011

The (hopefully correct) result I'm currently getting with my code is
01000000 01001001 00001111 11010000


Anyway, I learned a lot from this thread. Thanks JLBorges.