Summing bits with <bitset>

Hello forum.

I've been practicing with booleans and operations on them.
As you can see, the program takes two 16-bit arrays as input and outputs their sum in a 32-bit array.

You can alter the 16 bits however you want; it shouldn't break. However, I think the function I wrote is pretty messy and ugly... I would like to implement it recursively but don't know where to start... any advice?
I wrote the function by translating the truth table of a full adder, and couldn't think of anything better... it's just so ugly and disappointing.

Thank you in advance.

IO example

first 16 bit sequence: 1010101010101010
second 16 bit sequence: 0101010101010101

Binary sum of the sequences: 00000000000000001111111111111111


#include <iostream>
#include <bitset>

std::bitset<32> binarySum(std::bitset<16> x, std::bitset<16> y);

int main() {

    std::bitset<16> twoBytes;
    std::bitset<16> twoBytes1;
    std::bitset<32> fourBytes;

    //initialize 32-bit array to 0
    for(unsigned int i=0; i<32; i++)
        fourBytes[i] = false;

    //1010 1010 1010 1010
    for(unsigned int i=0; i<16; i++) {
        if (i % 2 != 0) twoBytes[i] = true;
        else twoBytes[i] = false;
    }

    //0101 0101 0101 0101
    for(unsigned int i=0; i<16; i++) {
        if (i % 2 == 0) twoBytes1[i] = true;
        else twoBytes1[i] = false;
    }


    std::cout << "first 16 bit sequence: " << twoBytes << std::endl;
    std::cout << "second 16 bit sequence: " << twoBytes1 << std::endl;

    //function to sum the 2 16 bit sequences
    fourBytes = binarySum(twoBytes,twoBytes1);

    std::cout << "Binary sum of the sequences: " << fourBytes << std::endl;

    return 0;
}

std::bitset<32> binarySum(std::bitset<16> x, std::bitset<16> y){
    std::bitset<32> Result;
    bool HIGH_FLAG_BIT = false;
    //bool SUMofBITS = false;

    //SUM -> Ai && !Bi || !Ai && Bi --> XOR GATE
    //CARRY -> Ai && Bi

    for(unsigned int i = 0; i<16;i++){
        //only if Ai xor Bi
FULLADDER:
        if(HIGH_FLAG_BIT){
            if(!(x[i] || y[i])){
                x[i] = true;
                HIGH_FLAG_BIT = false;
            }
            else if( x[i] || y[i]){
                if(!x[i]){
                    x[i] = true;
                    HIGH_FLAG_BIT = false;
                }
                else if(!y[i]){
                    y[i] = true;
                    HIGH_FLAG_BIT = false;
                }
            }else break;
        }

        if(((!x[i]&&y[i]) || (x[i]&&!y[i]))) Result[i] = true;
        else{
            if(x[i] && y[i]){
                Result[i] = false;
                HIGH_FLAG_BIT = true;
                ++i;
                goto FULLADDER;
            }
            else Result[i] = false;
        }
    }
    return Result;
}
That's not what a full adder looks like, though.
bool full_adder(bool a, bool b, bool &carry){
    bool ret = a^b^carry;
    carry = carry && (a^b) || a && b;
    return ret;
}

bool carry = false;
for (int i = 0; i < bits; i++)
    sum[i] = full_adder(a[i], b[i], carry);
sum[bits] = carry;
Dear helios, thanks for your kind answer.

I don't understand some code in your function

ret = a^b^carry; --> what is the output here? What boolean operation is ^ ? Or is it a power?
carry = carry && (a^b) || a && b;  


Looks a lot better than mine, yet it's harder for me to understand...
In C++, ^ is XOR.

You also need to understand C++ operator precedence: ^ comes before &&, which comes before ||.
Thank you very much. This makes it a lot easier... I was so sure that I could only use AND and OR operators... Now I understand the function.

My last question would be: is it better to pass this kind of function a full bitstream, or one bit at a time?

Because I'm using bitsets and would like to keep it that way...

it depends!
Calling functions that the compiler cannot inline has a small cost, and calling over and over, one bit at a time, is a LOT of calls -- 32 for just one integer. It would be better to pass in blocks, in that case. If the function can be inlined, it makes no difference on that side. Are you asking about performance for a large number of bits? If so...

To sum the bits, in case you did not know, Intel CPUs have an instruction (POPCNT -- you would have to embed assembly or use a compiler intrinsic) that does it in the CPU circuitry, which is MUCH faster than anything we can craft. But this won't work on other CPUs. If you want some efficiency for this task on other CPUs, consider a lookup table of 256 entries with the bit count for each byte, and work byte-wise by adding up those lookups (8 additions instead of 64 bit tests for a 64-bit int!).

If you don't care about speed, it really does not matter how you pass the values in, apart from how you plan to use the function. In that case, keep the bitset. Bitset has many efficiencies already built in, and is ideal for what it does: dealing with bits :) Bitset is directly convertible to an integer, so you can also mix and match if it helps solve a particular problem (e.g. if you wanted to use the assembly instruction, you would convert to an int and use that to talk to the assembly).

I can't think of any reason to pass in less than one byte's worth, even if it's a real-time stream. Everything I have ever worked with works in bytes or larger -- if there is a device that streams bits, I do not know it. Hardware does this (any sort of bus, or an old-style serial port), but by the time we consume it, it's byte-wise.
My last question would be: is it better to pass this kind of function a full bitstream, or one bit at a time?
I'm somewhat confused by this question. First, what do you mean by "better"? And second, what do you mean by "this kind of function"?
What you're doing here is obviously educational. You're replicating the binary logic of a 16-bit adder circuit in software. From that point of view, the code that best represents the topology of the circuit is a function that receives two whole integers, processes their bits one pair at a time by passing them to a full adder function, and streams the carry bits from one call to the next.
If you're going to do something different for the sake of "performance" then you may as well just do
std::int32_t add(std::int16_t a, std::int16_t b){
    return (std::int32_t)a + (std::int32_t)b;
}
@helios
By "better" I mean faster and with less chance of bugs/errors at runtime. By "this kind of function" I mean functions that process chunks of data as bool types (bits, nibbles, bytes, words, etc.).

It is indeed educational. I would like to work on firmware and drivers; I've studied electronics and am now studying computer science. I like C and C++, so I'm discovering what I can do with them.

You could say the question is: can I actually process all 16 bits at once, without moving one index at a time through the bitset?

@jonnin
Interesting, thank you.
Still, if I need to check something in that bitset I have to treat it like an array... so I would still need something like 32 cycles to inspect it... I don't see how to process all those bits together in binary form, without converting them to other bases...


By "better" I mean faster and with less chance of bugs/errors at runtime.
Those are quite often conflicting requirements.
Code is generally more likely to be correct when any errors that might appear are completely obvious, and for that the code needs to be clear, preferably short (the less code, the easier it is to inspect), and quite often a naive implementation of the solution.
Conversely, optimized code is generally more obscure. It may take shortcuts that a casual reader will not immediately understand: for example, is a primality check for n that tests divisibility by every integer in [2; sqrt(n)) correct? If readers can't completely understand the code, they can't reason about its behavior or be sure that any particular part is correct, and so hand-optimized code is inevitably more likely to be incorrect than unoptimized code.

By "this kind of function" I mean functions that process chunks of data as bool types (bits, nibbles, bytes, words, etc.)
Practically speaking, no code that does something useful works on individual bits. With that in mind, I'm still confused why you're interested in optimizing something that performs no useful work, when the purpose of optimization is to get as much useful work from as few resources as possible.

Can I actually process all 16 bits at once, without moving one index at a time through the bitset?
Yes:
std::uint16_t to_int(const std::bitset<16> &b){
    std::uint16_t ret = 0;
    for (int i = 0; i < 16; i++)
        ret |= (std::uint16_t)b[i] << i;
    return ret;
}

std::bitset<32> to_bitset(std::uint32_t n){
    std::bitset<32> ret;
    for (int i = 0; i < 32; i++){
        ret[i] = n & 1;
        n >>= 1;
    }
    return ret;
}

std::bitset<32> binarySum(std::bitset<16> x, std::bitset<16> y){
    std::uint32_t x2, y2;
    x2 = to_int(x);
    y2 = to_int(y);
    return to_bitset(x2 + y2);
}
But this seems pointless to me. If you need to add two numbers together as fast as possible, doesn't matter how, in what circumstances would you have them loaded in bitsets rather than in simple variables? You say this is educational, but I'm unclear what you're trying to learn about, since structuring the code like this teaches you neither about electronic logic circuit design nor about software design.
Practically speaking, no code that does something useful works on individual bits.


a group of flags is the only thing I can think of. You check each flag individually as a bit, but it's stored as some sort of int or bitset or vector<bool> or whatever.
There are algorithms that benefit from looking at bits... integer power can cycle through the bits of the exponent, things like that. Dealing with weird messaging protocols.

Still, just a few edge cases... it's not common.
Flags are not really an example of what I mean. You normally do something like if ((flag & mask) == mask). You don't iterate over every bit of a flag.
The only real examples where you operate on large-ish amounts of data on a bitwise basis is bit-packed compression algorithms such as DEFLATE, where the data is literally a stream of bits, or something like shift-and-add multiplication, for example to implement a bignum library out of simple algorithms (more serious efforts use something like Karatsuba or Fourier transforms).
There are data structures like bloom filters and so forth as well, built atop bit-sets.
A bloom filter operates on a small subset of the bits.
Simply put, how many O(n) or slower algorithms can you think of that operate on the input bit-per-bit?
just the ones I said off the top of my head...

if(bit0 == 1) //these may look like bitset[0], or int & 1, 2, 4, 8, ..., or whatever way you express it
blah
if(bit1 == 1)
foo
.. //O(N) in the number of booleans it represents, e.g. extracting an int field from a binary message where the bits are bools

And the power one, which is dubious... it's no better than a dumb loop on average, so whether it counts or not..? N is the number of bits in the exponent; it could be in the 50s, but usually it's < 10 for most practical problems...

I would say turning binary text back into a value is O(n), but there is probably a smarter way to do that. And outside of school, who would have binary text inputs? A C++ compiler has to eat them, but it's unusual.
Aside from compression as you said, there's actual physical-layer stuff. Something has to take the messy signal on a wire and turn it back into bits. But overall, I can't think of much.
There are a few, but it's rare. Even if you're pulling bits off a data stream, you're almost always going to shift them into an integer as soon as possible and work with the data like that. It's just what makes the most sense to take advantage of the hardware. Why process one bit at a time when you can process 64?
Topic archived. No new replies allowed.