What is the best way to do this?

Let's say I want to write a function that takes an 8-bit-wide integer as input; the function does some manipulation on the bit positions and returns the value. For example, the input X is 0x35, which is binary 00110101. The output I want is 01010011 -- the higher nibble and lower nibble swap positions.

What is the best way to do this? Thanks.
C and C++ don't have an 8-bit integer type,
but you can use char,

and I think the return type should be a string.

 
std::string getBinaryRepresentation(char symbol);


Or, if you just need to write output in binary format to the console or a file, you can use the standard streams.
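For instance, here is a minimal sketch of such a function (the name getBinaryRepresentation comes from the declaration above); it just streams a std::bitset<8> into a string:

#include <bitset>
#include <iostream>
#include <sstream>
#include <string>

// Build a "00110101"-style string from the 8 bits of the given byte.
std::string getBinaryRepresentation(char symbol)
{
    std::ostringstream out;
    out << std::bitset<8>(static_cast<unsigned char>(symbol));  // bitset prints itself in binary
    return out.str();
}

int main()
{
    std::cout << getBinaryRepresentation(0x35) << '\n';  // prints 00110101
}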
I think this is what you want:
#include <iostream>

unsigned char swap(unsigned char byte)
{
    // Shift the low nibble up by 4 places and the high nibble down by 4, then OR them together.
    return (byte << 4) | (byte >> 4);
}

int main()
{
    int x = 0x35;

    std::cout << (int)swap(x) << '\n';
}


C and C++ don't have an 8-bit integer type,
but you can use char


char actually is an integer type with 8 bits (or at least it's 8 bits on systems where a byte is defined as 8 bits).

+1 to Null's solution. Although it's unnecessary to cast the return value to int before printing it.
Although it's unnecessary to cast the return value to int before printing it.

I know, but otherwise cout displays a character instead of a number.
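For what it's worth, a tiny illustration of the difference the cast makes (static_cast or unary + both work, since either promotes the value to int before it is streamed):

#include <iostream>

int main()
{
    unsigned char byte = 0x53;
    std::cout << byte << '\n';                    // streams the character 'S'
    std::cout << static_cast<int>(byte) << '\n';  // streams the number 83
    std::cout << +byte << '\n';                   // unary + promotes to int: also 83
}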
osht, you're right. I thought unsigned char wouldn't have that problem, but I guess it does.

Nevermind!
Thanks, that is very helpful. But I am actually looking for a more generic answer -- I am converting a piece of Verilog code to C/C++. Verilog is similar to C, except that it can manipulate individual bits directly.

So let's look at a more generic example. This time the input of the function is a 35-bit-long "integer", and the return value is of the same type (a 35-bit integer). The function rearranges the bit positions of the input - for example, the input has bits in order 0, 1, ..., 34, and the output is bits 15, 9, 3, 27, ... .

What is an elegant way of doing this? Thanks.
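Not a definitive answer, but one sketch of a table-driven approach: hold the value in an unsigned long long (big enough for 35 bits on the compilers discussed here) and describe the permutation as an array, where order[i] names the input bit that lands at output bit i. The names permuteBits, order, and WIDTH are made up for illustration, only the four bit positions given in the question are filled in, and the bit-numbering convention is an assumption -- take the real permutation from the Verilog source.

#include <iostream>

typedef unsigned long long u64;   // at least 64 bits, enough to hold 35

const int WIDTH = 35;

// order[i] = which input bit ends up at output bit i.
// Only the four positions mentioned in the question are filled in;
// the remaining entries default to 0 and must be replaced with the real permutation.
const int order[WIDTH] = { 15, 9, 3, 27 /* , ... remaining 31 entries ... */ };

u64 permuteBits(u64 in)
{
    u64 out = 0;
    for (int i = 0; i < WIDTH; ++i)
        out |= ((in >> order[i]) & 1ULL) << i;   // pick the input bit, drop it at position i
    return out;
}

int main()
{
    u64 x = 0x123456789ULL;   // an arbitrary 35-bit value
    std::cout << std::hex << permuteBits(x) << '\n';
}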
char actually is an integer type with 8 bits (or at least it's 8 bits on systems where a byte is defined as 8 bits).


C99 introduces the typedef for uint8_t, which is commonly an unsigned char.
C99 introduces the typedef for uint8_t, which is commonly an unsigned char.
How do I use that in Visual C++ 2008? In Code::Blocks there is no problem.
Maybe you should use bitset?
http://cplusplus.com/reference/stl/bitset/
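A rough sketch of what that could look like for the 35-bit case -- std::bitset gives you Verilog-style access to individual bits. Only the four bit positions quoted in the question are shown; the rest are omitted:

#include <bitset>
#include <iostream>

int main()
{
    std::bitset<35> in(0x35);   // reuse the 0x35 example; any 35-bit pattern works
    std::bitset<35> out;

    // Verilog-style per-bit wiring: output bit i takes the named input bit.
    out[0] = in[15];
    out[1] = in[9];
    out[2] = in[3];
    out[3] = in[27];
    // ... and so on for the remaining 31 bits

    std::cout << out << '\n';   // prints all 35 bits in binary
}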
How do I use that in Visual C++ 2008? In Code::Blocks there is no problem.

I'm not sure how portable it is; it is defined in /usr/include/stdint.h on my [RHEL 5.2] machine.

Try:
#include <stdint.h>
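On compilers that do ship that header, the original 8-bit question then reads naturally with uint8_t. A small sketch (swapNibbles is just a made-up name for the earlier swap function):

#include <stdint.h>   // C99 header; many C++ compilers ship it, VC++ 2008 does not
#include <iostream>

uint8_t swapNibbles(uint8_t byte)
{
    return static_cast<uint8_t>((byte << 4) | (byte >> 4));
}

int main()
{
    std::cout << std::hex << +swapNibbles(0x35) << '\n';   // prints 53
}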
stdint.h is not standard C++. VC++, while also being a C compiler, doesn't fully support C99. One of the things it doesn't support is that header.
Just use unsigned char, which is guaranteed to be at least 8 bits long. Sort of.
VC++ has __int8, __int16, __int32, and __int64 with their unsigned versions as data types, but those aren't portable, either.
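If you want the C99 names on both compilers, one common workaround is a hand-rolled conditional typedef -- something along these lines, mapping the names onto the VC++ built-ins only when _MSC_VER is defined (this is our own definition, not the standard header):

// Fixed-width typedefs for compilers without <stdint.h>.
#ifdef _MSC_VER
typedef unsigned __int8  uint8_t;
typedef unsigned __int16 uint16_t;
typedef unsigned __int32 uint32_t;
typedef unsigned __int64 uint64_t;
#else
#include <stdint.h>
#endif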