Hi there,
I am looking for some help with this rather simple function that I wrote. It compiles and runs, but the result always seems to be 0. I don't understand how such a simple math operation isn't working. Any suggestions?
void setLED(int x, char state) {
    if (x < LED_SIZE && x >= 0) {
        int i = (x / 8);
        if (state == ON)
            LED_Buffer[i] |= (1 << x);
        else
            LED_Buffer[i] &= ~(1 << x);
        updateLEDs();
    }
    NOP();
}
// This is the function where I use it
void LED_Test(void) {
    for (int i = LED_SIZE - 1; i >= 0; i--) {
        setLED(i, ON);
        __delay_ms(20);
    }
    for (int i = 0; i < LED_SIZE; i++) {
        setLED(i, OFF);
        __delay_ms(20);
    }
    for (int i = 0; i < LED_SIZE; i++) {
        setLED(i, ON);
        __delay_ms(20);
    }
    for (int i = LED_SIZE - 1; i >= 0; i--) {
        setLED(i, OFF);
        __delay_ms(20);
    }
    clearBuffer();
}
Oh.. as soon as I wrote it, I got it. The shift overflows the 8-bit buffer element... I need to fix it somehow. Still open to suggestions if you think I can write it a neater way. Cheers!
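In case it helps anyone else, here is roughly the fix I have in mind: mask the shift count with x % 8 so it stays inside one byte (same LED_Buffer, ON, OFF, updateLEDs() and NOP() as above):

void setLED(int x, char state) {
    if (x < LED_SIZE && x >= 0) {
        int i = x / 8;            /* which byte of the buffer   */
        int bit = x % 8;          /* which bit within that byte */
        if (state == ON)
            LED_Buffer[i] |= (1 << bit);
        else
            LED_Buffer[i] &= ~(1 << bit);
        updateLEDs();
    }
    NOP();
}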
Some compilers don't like doing bit operations on signed types; maybe try unsigned char. This could be one of those ghosts from my past. I don't know what the standard says about it, I just remember it not working on some compilers.
Looks like embedded code, so you can GIGO it and just assume x is going to be correct, or you can force it correct (x % 8, perhaps), or you can somehow signal an error (set all the lights on, maybe, if you can't print anything to a screen).
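A minimal sketch of that last idea, assuming the LED_Buffer, LED_SIZE, and updateLEDs() from your post (the name ledFault is made up):

/* Hypothetical error trap for a target with no console: latch every
   LED on and spin, so a bad index is visible on the hardware itself. */
static void ledFault(void) {
    for (int j = 0; j < (LED_SIZE + 7) / 8; j++)
        LED_Buffer[j] = 0xFF;     /* all bits on in every byte   */
    updateLEDs();
    for (;;) { }                  /* halt here for the debugger  */
}

Then call it from setLED when x is out of range instead of silently returning.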
<< and >> can give trouble on signed values. If you shift a 1 into the sign bit, the result is undefined behavior in C (on some targets it can even trap). This is not his primary bug, but it's a problem with the code. Making x an unsigned char seems more 'correct', and a compiler with warnings enabled can fuss if he passes in an int (too big) or a signed value, both of which are trouble.
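To make that concrete, here is a small sketch of the unsigned pattern; the helper name led_mask is mine, and I'm assuming an 8-bit buffer element as above:

#include <stdint.h>

/* The 1u literal keeps the shift in unsigned arithmetic, so the sign
   bit never comes into play, and (x & 7u) keeps the shift count inside
   the 8-bit element even for a larger index. */
static inline uint8_t led_mask(unsigned char x) {
    return (uint8_t)(1u << (x & 7u));
}

Usage would be something like LED_Buffer[x / 8] |= led_mask(x);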