How to find kth least significant bit?

Jun 18, 2019 at 3:13pm
This code is not passing all test cases.
Can anybody tell me what is wrong with it?
#include<iostream>
#include<algorithm>
#define ll long long
using namespace std;
int main()
{
	//code
	ll t;
	cin>>t;
	while(t--)
	{
	    ll n,k;
	    cin>>n>>k;
	    cout<<(1&(n>>(k-1)))<<"\n"; 
	}
	return 0;
}

Last edited on Jun 18, 2019 at 3:14pm
Jun 18, 2019 at 3:41pm
what do you really want to know?

there are 2 ways to look at it..
1) you have n bits, and bits 1 & 2 (for example) are not trusted. The least significant trusted bit is then &4.
2) you have n bits, and only the highest z are good. Find the first set bit and count toward the 1's position until you reach z bits or run out (if you run out, all of them are good).
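To make the two readings concrete, here are minimal sketches of each; the helper names and the conventions (mask position for the first, a count capped at z for the second) are my own assumptions, not from the thread:

```cpp
// Reading 1: the lowest `skip` bits are untrusted, so the least
// significant *trusted* bit is tested with the mask 1 << skip
// (for skip == 2 that mask is 4, i.e. the "&4" above).
bool first_trusted_bit(unsigned long long n, int skip) {
    return (n >> skip) & 1ULL;
}

// Reading 2: only the highest z bits are good. Walk down from the top,
// start counting at the first set bit, and stop at z or when the bits
// run out; a return value below z means we ran out (all bits were good).
int count_good_bits(unsigned long long n, int z) {
    int seen = 0;
    bool started = false;
    for (unsigned long long mask = 1ULL << 63; mask != 0 && seen < z; mask >>= 1) {
        if (n & mask) started = true;
        if (started) ++seen;
    }
    return seen;
}
```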
Jun 18, 2019 at 4:02pm
You should use unsigned long long.
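Applied to the posted program, that suggestion might look like this sketch; the bit test is the original `1 & (n >> (k-1))`, just pulled into a helper on `unsigned long long` (which also sidesteps implementation-defined right shifts of negative signed values):

```cpp
// k-th least significant bit of n, 1-indexed as in the posted code.
unsigned long long kth_lsb(unsigned long long n, int k) {
    return 1ULL & (n >> (k - 1));
}
```

In the original `main`, `n` (and `t`) would then be read as `unsigned long long` and the output line becomes `cout << kth_lsb(n, k) << "\n";`.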
Jun 18, 2019 at 6:02pm
example brute force / crude / fun solution: it prints the number in binary, then again with only N significant bits kept. You can do it with less code, but it would quickly become hard to follow if you get cute with it.

#include <iostream>
using namespace std;

int main()
{
	int soi = 8 * sizeof(unsigned long long); // size of the type, in bits
	unsigned long long testnum = 31415926539;
	unsigned long long u = 9223372036854775808ull; // 2^63
	int j;
	for (j = 0; j++ < soi; u /= 2)
	{
	   cout << (int)((testnum & u) > 0);
	}
	cout << endl; // written in binary for fun.

	u = 9223372036854775808ull; // 2^63
	int sigb = 5;   // keep only this many significant bits
	int s = 0;
	bool b = false; // becomes true at the first set bit
	for (j = 0; j++ < soi; u /= 2)
	{
	  if (testnum & u)
	    b = true;
	  if (b && s < sigb)
	  {
		  s++;
		  cout << (int)((testnum & u) > 0); // the last one this line prints is what you asked.
		  // its location is &u, or you can convert that to the nth bit easily.
	  }
	  else
	  {
		cout << '0';
	  }
	}
	cout << endl;
}


which is kind of nuts. Some messing with this would get you there; just handle negatives and roundoff carefully to be sure it's always correct:

cout << (int)(log2(testnum)+0.999999) << endl; // needs <cmath>. The most significant bit of the number, counted from the 1's bit; that minus the number of bits you want is the least... etc...
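If the roundoff worry is a concern, an integer-only version of the same idea is to find the top set bit by shifting; note this sketch counts 0-based from the 1's bit, one less than the log2 formula above:

```cpp
// Position of the most significant set bit, 0-based from the 1's bit;
// -1 for zero. No floating point, so no roundoff to handle.
int msb_index(unsigned long long x) {
    int i = -1;
    while (x != 0) {
        x >>= 1;
        ++i;
    }
    return i;
}
```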
Last edited on Jun 18, 2019 at 7:08pm