Bitwise shifting

Nov 7, 2009 at 10:46pm
I thought I had understood this :l

I'm trying to shift a hex number to get the first and last bits in separate variables... For example, if I had the hex number 0xFA then I would get F in one variable and A in the other.

#include <iostream>

int main() {
    int hex = 0xAF;
    int hex1 = (hex >> 4);
    int hex2 = (hex << 4);
    
    std::cout.setf(std::ios::hex, std::ios::basefield);
    
    std::cout << "Hex number:\t\t0x" << hex
              << "\nRight-shifted 4 places:\t0x" << hex1
              << "\nLeftt-shifted 4 places:\t0x" << hex2 << std::endl;
    
    return 0;    
}

With this code I get 0xa and 0xaf0.

How can I get the first 4 bits in hex1 and the last 4 bits in hex2?

I'm pretty new to shifting...
Last edited on Nov 7, 2009 at 10:52pm
Nov 7, 2009 at 10:59pm
You need the bitwise and:
int hex1 = (hex >> 4) & 0xf; // high (first) nibble
int hex2 = hex & 0xf;        // low (last) nibble

Nov 7, 2009 at 11:01pm
I thought it would have something to do with bitwise AND, OR, or XOR. I was playing around with them.

Why is it 0xF? Because F is the largest Hex value?

It worked; thanks :)

I like this now:
#include <iostream>

int main() {
    int hex = 0xABCD;
    int hex1 = (hex >> 12) & 0xF;
    int hex2 = (hex >> 8) & 0xF;
    int hex3 = (hex >> 4) & 0xF;
    int hex4 = hex & 0xF;
    
    std::cout.setf(std::ios::hex, std::ios::basefield);
    std::cout << "Hex number:\t0x" << hex
              << "\nFirst:\t0x"    << hex1
              << "\nSecond:\t0x"   << hex2
              << "\nThird:\t0x"    << hex3
              << "\nFourth:\t0x"   << hex4
              << std::endl;
    
    return 0;    
}


I never understood bitwise shifting in the slightest before :P
Last edited on Nov 7, 2009 at 11:06pm
Nov 7, 2009 at 11:06pm
Why is it 0xF? Because F is the largest Hex value?
Yes:
Random number: | 1001 0101
0xF            | 0000 1111
Result:        | 0000 0101
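If it helps, here's a minimal sketch of the same idea in code (the value 0x95 is just picked for illustration):

#include <iostream>

int main() {
    unsigned value = 0x95;               // 1001 0101 in binary
    unsigned low  = value & 0xF;         // keep only the low nibble  -> 0x5
    unsigned high = (value >> 4) & 0xF;  // shift first, then mask    -> 0x9

    std::cout << std::hex << "low: 0x" << low
              << "  high: 0x" << high << std::endl;
    return 0;
}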
Nov 7, 2009 at 11:11pm
Oh; no wonder they use hex for low level stuff!

How would I do it for larger hex numbers?
This way I had to already know the number was 0xAF or 0xABCD; what if I can get anything from 0x0 to 0xFFFFFF? Obviously I'd use an array and some form of a loop; but then how would I know how big to make the array, and how many times to loop?
Nov 7, 2009 at 11:13pm
0x... is an integer so it will contain 2*sizeof(int) digits
Nov 7, 2009 at 11:15pm
Oh. Well, I'm gonna spend some time playing with this stuff.

Thank you for the help :)
Nov 7, 2009 at 11:20pm
self quoting:
0x... is an integer so it will contain 2*sizeof(int) digits

Just realised this works only if CHAR_BIT == 8, the right formula is sizeof(int)*16/CHAR_BIT
Anyway, who uses non 8-bit bytes any more?
Nov 7, 2009 at 11:23pm
I never understood bitwise shifting in the slightest before :P

Me neither! Still don't to be honest.

Where is this useful? All I ever hear is when you are dealing with file compression and have to work with bits.
Nov 7, 2009 at 11:28pm
Knowing how to get a specific byte from a longer value is useful in many situations
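For example, here's a rough sketch of pulling the individual channel bytes out of a packed 32-bit colour; the 0xAARRGGBB layout (and an unsigned of at least 32 bits) is just an assumption for the sake of the example:

#include <iostream>

int main() {
    unsigned colour = 0x80FF7F00;            // assumed AARRGGBB packing
    unsigned a = (colour >> 24) & 0xFF;      // 0x80
    unsigned r = (colour >> 16) & 0xFF;      // 0xff
    unsigned g = (colour >> 8)  & 0xFF;      // 0x7f
    unsigned b =  colour        & 0xFF;      // 0x00

    std::cout << std::hex << "a=0x" << a << " r=0x" << r
              << " g=0x" << g << " b=0x" << b << std::endl;
    return 0;
}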
Nov 7, 2009 at 11:29pm
What is CHAR_BIT? sizeof(char)? Or the particular bit I want to get?
Nov 7, 2009 at 11:30pm
It's the size of a char in bits (sizeof returns the size in bytes).
http://www.cplusplus.com/reference/clibrary/climits/
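A quick check you can compile yourself:

#include <climits>
#include <iostream>

int main() {
    std::cout << "bits per char: " << CHAR_BIT << "\n"
              << "bytes per int: " << sizeof(int) << "\n"
              << "bits per int:  " << sizeof(int) * CHAR_BIT << std::endl;
    return 0;
}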
Last edited on Nov 7, 2009 at 11:31pm
Nov 7, 2009 at 11:39pm
I'm trying this; but I'm having trouble figuring out how much to shift the number by...

#include <iostream>
#include <climits>

void get_hex_digits(int, int*);

int main() {
    int hexnum;
    
    std::cout.setf(std::ios::hex, std::ios::basefield);
    std::cout << "Enter hex number: ";

    std::cin.setf(std::ios::hex, std::ios::basefield);
    std::cin >> hexnum;
    
    int separatenumbers[(sizeof(hexnum) * 16) / CHAR_BIT];
    get_hex_digits(hexnum, separatenumbers);
    
    std::cout.setf(std::ios::dec, std::ios::basefield);
    std::cout << "\nIn decimal, it is " << hexnum << "\n";
    
    std::cout.setf(std::ios::hex, std::ios::basefield);              
    std::cout << "0x" << hexnum << " in seperate digits is:\n\t";
    
    for (int i = 0; i < sizeof(separatenumbers); i++) {
         std::cout.setf(std::ios::dec, std::ios::basefield);
         std::cout << i + 1 << ". 0x";
         std::cout.setf(std::ios::hex, std::ios::basefield);
         std::cout << separatenumbers[i] << "\n\t";
    }
    
    std::cout << std::endl;
    
    return 0;    
}

void get_hex_digits(int hexnumber, int* array) {
    int i = 0, j = (sizeof(hexnumber) * 16) / CHAR_BIT, k = j;
    
    for (; i < j; i++, k--) {
        array[i] = (hexnumber >> k) & 0xF;
    }
}
Enter hex number: 0xABCDEFG  

In decimal, it is 11259375
0xabcdef in separate digits is:
	1. 0xd
	2. 0xb
	3. 0x7
	4. 0xf
	5. 0xe
	6. 0xd
	7. 0xb
	8. 0x7
	9. 0x0
	10. 0x0
	11. 0xa
	12. 0xabcdef
	13. 0xef3fd440
	14. 0x7fff
	15. 0x400c50
	16. 0x0
	17. 0x0
	18. 0x0
	19. 0x323ce5a6
	20. 0x7fda
	21. 0x0
	22. 0x0
	23. 0xef3fd448
	24. 0x7fff
	25. 0x0
	26. 0x1
	27. 0x4009ee
	28. 0x0
	29. 0x400c50
	30. 0x0
	31. 0xbc108f94
	32. 0x2ae81c36
Last edited on Nov 7, 2009 at 11:43pm
Nov 8, 2009 at 12:10am
0xABCDEFG
What the hell? G? This is hexadecimal, not heptadecimal.

sizeof(array) doesn't give the size of the array in elements; it gives it in bytes. Your for loop on line 24 (the one that prints the digits, using sizeof(separatenumbers) as its limit) is overflowing the buffer.

The right formula would be sizeof(int)*CHAR_BIT/4:
With 16-bit bytes and 32-bit ints (32/4 == 8 digits):
  old: sizeof(int)*16/CHAR_BIT == 2*16/16 == 2
  new: sizeof(int)*CHAR_BIT/4  == 2*16/4  == 8
With 12-bit bytes and 36-bit ints (36/4 == 9 digits):
  old: sizeof(int)*16/CHAR_BIT == 3*16/12 == 4
  new: sizeof(int)*CHAR_BIT/4  == 3*12/4  == 9
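Written out as a tiny program (only a sketch of the arithmetic, nothing clever):

#include <climits>
#include <iostream>

int main() {
    // each hex digit covers exactly 4 bits, whatever CHAR_BIT happens to be
    const unsigned digits = sizeof(int) * CHAR_BIT / 4;
    std::cout << "an int holds " << digits << " hex digits" << std::endl;
    return 0;
}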

Actually, the program is riddled with many small mistakes. Here:
#include <iostream>
#include <climits>

void get_hex_digits(unsigned, unsigned*);

int main() {
    unsigned hexnum;
    
    std::cout.setf(std::ios::hex, std::ios::basefield);
    std::cout << "Enter hex number: ";

    std::cin.setf(std::ios::hex, std::ios::basefield);
    std::cin >> hexnum;
    
    unsigned separatenumbers[sizeof(hexnum) * CHAR_BIT / 4];
    get_hex_digits(hexnum, separatenumbers);
    
    std::cout.setf(std::ios::dec, std::ios::basefield);
    std::cout << "\nIn decimal, it is " << hexnum << "\n";
    
    std::cout.setf(std::ios::hex, std::ios::basefield);              
    std::cout << "0x" << hexnum << " in seperate digits is:\n\t";
    
    for (unsigned i = 0; i < sizeof(hexnum) * CHAR_BIT / 4; i++) {
         std::cout.setf(std::ios::dec, std::ios::basefield);
         std::cout << i + 1 << ". 0x";
         std::cout.setf(std::ios::hex, std::ios::basefield);
         std::cout << separatenumbers[i] << "\n\t";
    }
    
    std::cout << std::endl;
    
    return 0;    
}

void get_hex_digits(unsigned hexnumber, unsigned* array) {
    unsigned i = 0,
             j = sizeof(hexnumber) * CHAR_BIT / 4,
             k = j * 4 - 4;

    std::cout << j << std::endl;
    
    for (; i < j; i++, k-=4) {
        array[i] = (hexnumber >> k) & 0xF;
    }
}
Nov 8, 2009 at 2:10am
What the hell? G? This is hexadecimal, not heptadecimal.

Typo :)

sizeof(array) doesn't give the size of array in elements. It gives it in bytes.

I usually use
#define ARRAYSIZE(array) sizeof(array) / sizeof(array[0])
because then you get sizeof(array) (the number of bytes in the whole array) divided by sizeof(array[0]) (the number of bytes in element 0, which every array has, so it shouldn't segfault). That works out to sizeof(array) / sizeof(the array's element type), which should equal the number of elements in the array.

At least, I hope so.
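For what it's worth, a small sketch of that macro in action (with the extra parentheses it's usually written with):

#include <iostream>

// parenthesised so it still behaves inside larger expressions;
// note it only works on real arrays, not on pointers
#define ARRAYSIZE(array) (sizeof(array) / sizeof((array)[0]))

int main() {
    int digits[8];
    double samples[] = { 1.0, 2.5, 3.75 };

    std::cout << ARRAYSIZE(digits)  << "\n"      // prints 8
              << ARRAYSIZE(samples) << std::endl; // prints 3
    return 0;
}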

Actually, the program is riddled with many small mistakes. Here:

Thanks. I'll look through it.

Why are you using unsigned ints instead of just ints? Preference; or could the value get too large?

Edit: I just realised my password for everything is valid hex :l
That's only 10108 possible character combinations (amount of characters 10 numbers * 12 letters)!
CHANGE CHANGE CHANGE CHANGE.
Last edited on Nov 8, 2009 at 2:19am
Nov 8, 2009 at 2:23am
It's not a good idea to use signed integers when doing bit twiddling. That's bitten me in the ass too many times. Just to name one example, char(0x80)>>7==(char)0xFF, when you'd expect it to be 1.
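To make that concrete, here's a small sketch; the exact result of right-shifting a negative value is implementation-defined, but this is what most compilers with a signed, arithmetic-shifting char will give you:

#include <iostream>

int main() {
    char          s = char(0x80);   // typically -128 when char is signed
    unsigned char u = 0x80;

    // the signed shift usually drags copies of the sign bit in from the left,
    // so masking the result shows 0xff rather than the 1 you might expect
    std::cout << std::hex
              << ((s >> 7) & 0xFF) << "\n"  // commonly ff
              << (u >> 7) << std::endl;     // 1
    return 0;
}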

EDIT: 12 letters? Don't you mean 10 letters? ABCDEFabcdef. Also, your formula is wrong. n^(10+10).
Last edited on Nov 8, 2009 at 2:25am
Nov 8, 2009 at 2:24am
Ok. I'm playing around in SDL at the moment; and I thought I would need this for some reason (I forget what I thought I was going to use it for). Oh well; it was as good a time as any to learn about shifting.

No:

A B C D E F
1 2 3 4 5 6
a b c d e f
7 8 9 10 11 12

F is the sixth letter in the basic modern Latin alphabet.
-- http://en.wikipedia.org/wiki/F
Last edited on Nov 8, 2009 at 2:28am
Nov 8, 2009 at 2:28am
Wow. Now that's what I call miscounting.
Nov 8, 2009 at 2:29am
What, you, or me?

Hopefully you; unless my whole perception is broken.

Edit: I thought I'd need bitwise shifts to represent 8-bit colours (don't ask why), e.g. 0x0F is a black background with white text.
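If I ever get back to that, it would look something like this (assuming the usual high-nibble-background, low-nibble-foreground attribute layout):

#include <iostream>

int main() {
    unsigned attribute = 0x0F;                     // assumed: high nibble = background, low nibble = foreground
    unsigned background = (attribute >> 4) & 0xF;  // 0x0 -> black
    unsigned foreground = attribute & 0xF;         // 0xF -> white

    // packing goes the other way round: shift the background up and OR in the foreground
    unsigned packed = (background << 4) | foreground;   // back to 0x0F

    std::cout << std::hex << "bg: 0x" << background
              << " fg: 0x" << foreground
              << " packed: 0x" << packed << std::endl;
    return 0;
}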
Last edited on Nov 8, 2009 at 2:30am
Topic archived. No new replies allowed.