Adder

Hi guys,

I'm trying to make a basic adder. Why? Just for fun.

I thought this would be fairly trivial, but I've stumbled a little. Firstly, when I try to print (x << 7) it prints 384?? I was expecting it to print 128. My guess is that ostream doesn't have an overload that takes an int8_t, so it converts it to an int (4 bytes) instead. Would this be correct?

The next part leaves me a little puzzled. I shift x by 7 places (x << 7) and shift y by 7 places (y << 7). In theory, well at least in my theory, this should leave me with 10000000 and 00000000. Then I xor the two numbers and should get 10000000, or 128. So I expect 128 to be printed, yet I get some weird question-mark ASCII symbol. Any idea what's going on?

Thanks


#include <iostream>
#include <cstdint>

using namespace std;

int adder(int8_t a, int8_t b){

   // 00000011
   // 00001010
   //----------
   // 00001101
   
   // shift both x and y by 7 places
   // 10000000
   // 00000000

   int8_t x = 3;
   int8_t y = 10;
   int8_t add = 0;
   int8_t carry = 0;

   cout << (x << 7) << endl;
   add = (x <<= 7) ^ (y <<= 7);
   cout << add;
}


int main()
{
    adder(9,9);
}

Firstly, when I try to print (x << 7) it prints 384?? I was expecting it to print 128. My guess is that ostream doesn't have an overload that takes an int8_t, so it converts it to an int (4 bytes) instead. Would this be correct?
No.
7 is an int, so x is promoted to int before << is applied. The shift therefore happens at full int width, so bit 8 is preserved in the result: 3 << 7 is 384, not a truncated 8-bit value.

I shift x by 7 places (x << 7) and shift y by 7 places (y << 7). In theory, well at least in my theory, this should leave me with 10000000 and 00000000. Then I xor the two numbers and should get 10000000, or 128. So I expect 128 to be printed, yet I get some weird question-mark ASCII symbol. Any idea what's going on?
There are operator<<() overloads for std::ostream for both signed char and unsigned char, which both do the same thing: print the character that corresponds to that byte. (std::int8_t and std::uint8_t are signed char and unsigned char respectively.)
To print a signed or unsigned char as if it were a number, first cast it to int:
 
std::cout << (int)add;
In your example above, directly casting to print will sign-extend the value, causing -128 to be printed. To treat it as an unsigned value, first remove the signedness:
 
std::cout << (int)(std::uint8_t)add;
Or better yet, don't do bit twiddling on signed values. That's generally the best strategy to avoid weird behavior.
#include <iostream>
#include <cstdint>

using namespace std;

int adder(int8_t a, int8_t b) {

	// 00000011
	// 00001010
	//----------
	// 00001101

	// shift both x and y by 7 places
	// 10000000
	// 00000000

	uint8_t x = 3;
	uint8_t y = 10;
	uint8_t add = 0;
	uint8_t carry = 0;

	cout << (unsigned)(uint8_t)(x << 7) << '\n';
	add = (uint8_t)(x <<= 7) ^ (uint8_t)(y <<= 7);
	cout << (unsigned)add;

	return add;
}



[output]
128
128
[/output]

@OP For this sort of stuff it's often better to use bitsets to start with and use them throughout. Convert to int etc. only at the end of any calculation.

Here's a fairly common adder function.
#include <iostream>
#include <bitset>

const size_t N{8};

std::bitset<N> getSum(std::bitset<N> A, std::bitset<N> B)
{

    std::bitset<N> C;
    bool carry{0};
    for(size_t i = 0; i < N; i++)
    {
        C[i] = A[i]^B[i]^carry;
        carry = ( A[i] & B[i] ) ^ (B[i] & carry) ^ (carry & A[i]);
    }
    return C;
}

int main()
{
    std::bitset<N> a{9};
    std::bitset<N> b{6};
    std::bitset<N> c{15};

    std::cout
    << "     a: " <<  a << ' ' << a.to_ulong() << '\n'
    << "     b: " << b << '\n'
    << "     c: " << c << ' ' << c.to_ulong() <<'\n'
    << "   Sum: " << getSum(a,b) << '\n'

    << "    ~a: " << ~a << ' ' << ~a.to_ulong() << ' ' << (int)~a.to_ulong()  << '\n'
    << "  a<<2: " << (a<<2) << '\n'
    << "  a<<6: " << (a<<6) << '\n'
    << "   a&b: " << (a&b) << ' ' << (a&b).to_ulong() << '\n'
    << "   a^b: " << (a^b) << '\n'
    << "a.flip: " << a.flip() << '\n'
    << "     a: " << a << '\n';
}

[output]
     a: 00001001 9
     b: 00000110
     c: 00001111 15
   Sum: 00001111
    ~a: 11110110 18446744073709551606 -10
  a<<2: 00100100
  a<<6: 01000000
   a&b: 00000000 0
   a^b: 00001111
a.flip: 11110110
     a: 11110110
Program ended with exit code: 0
[/output]
Makes sense,

@Helios, cout << (x << 7) << endl;

As you said, the literal 7 is an int by default, so x is converted (promoted) to an int in this case.


std::cout << (int)(std::uint8_t)add;

Never came across this syntax before. Casting like this makes sense: std::cout << (int)add;

but what effect does putting two data types in sequence have on the variable being cast?

If int8_t is essentially a signed char, why do so many people use it rather than just char? I mean, there doesn't seem to be any benefit other than readability, and it also requires an extra include.

@againtry good idea, would be much easier.
but what effect does putting two data types in sequence have on the variable being cast?


Here, cout treats uint8_t as a character type and shows the char, not the integer value. For cout to show the integer value, as opposed to the char representation, an additional cast to int (or unsigned) is required.
You can see the definitions first hand by reading the header file stdint.h.
Go figure the original naming logic of those 8 bits, given they meant 'char' in the first place.

https://code.woboq.org/gtk/include/stdint.h.html
