bit and byte interpretation in AVR C

I know this may be the wrong section for this, but my problem is specific to microcontroller programming (AVR mostly)!
I am sending bytes between two AVR ATmega8 chips over UART, where each bit in the byte stands for something and only one bit is 1 in each byte sent.
So if I want to check, say, bit 5 in the received byte, I write it as follows:
short byte = UDR;
if (byte & (1 << 5))
{
    // do stuff for bit 5
}
Then it always works fine.
BUT, if I write it like this:
short byte = UDR;
if (byte == 0b00100000)
OR:
short byte = UDR;
if (byte == 0x20)
then it won't work, and it also fails if I use switch-case instead of if-else.
I can't understand the problem. Does the compiler interpret it as a signed number, with bit 7 as the sign?
Or something else?
The compiler is avr-gcc from AVR Studio 5.
In case anyone asks, I also have LEDs on the receiver that show the received byte,
so I know the byte received is correct, but for some reason I cannot compare it in conditions! Still, could there be some noise causing bits to be misread by the UART, changing the actual byte received?
Help!
LOOK OUT EVERYONE
Something here is like PARANORMAL
I have finally cornered the problem area.
I added 8 LEDs to represent the bits of the received bytes, and here's what I found:
The LEDs represent (1<<5) as 0b00100000, which is OK, as it's what I sent.
BUT
The other LEDs (excluding those 8), assigned to glow on receiving 0b00100000, do not glow!
FTW MAN!
I'm damn sure the received byte is correct, but something's wrong with the if-else and switch-case comparison.

It doesn't work because the second formulation changes the meaning of the code, not just the spelling of the mask constant. To test a bit, you must apply the bitwise AND (&) between the value and the mask constant, not compare the value with the constant:
if (byte & 0b00100000) /* note: 0b00100000 is a gcc extension */
or:
if (byte & 0x20) /* or byte & 32, or byte & (1 << 5), etc. */

Standard C (before C23) doesn't have a syntax for binary literals; you can't type 0b00100000 and have it compile, except as a GCC extension.
It's a bit hard to say why it wouldn't work for the == 0x20 case, since we don't know the value of UDR, which is specific to your platform.
If UDR has more than one bit set, then the exact equality check will of course fail, while the single-bit test will succeed.
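To see the difference concretely (a minimal sketch; the value 0x21 is hypothetical and stands in for whatever arrived in UDR):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t byte = 0x21;                 /* bit 5 set, plus a stray bit 0 */

    printf("%d\n", byte == 0x20);        /* prints 0: equality tests every bit */
    printf("%d\n", (byte & 0x20) != 0);  /* prints 1: the mask checks only bit 5 */
    return 0;
}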
You can only use switch() with exact values for each case, but you can of course mask before inspecting:
switch (byte & 0x20)
{
case 0:
    /* bit 5 clear */
    break;
case 0x20:
    /* bit 5 set */
    break;
}


AVR uint8_t doesn't get correct value

I have a uint8_t that should contain the result of a bitwise calculation. The debugger says the variable is set correctly, but when I check the memory, the variable is always 0. The code proceeds as if the variable were 0, no matter what the debugger tells me. Here's the code:
temp = (path_table & (1 << current_bit)) >> current_bit;
// temp is always 0, debugger shows correct value
if (temp > 0) {
    DS18B20_send_bit(pin, 0x01);
} else {
    DS18B20_send_bit(pin, 0x00);
}
temp is a uint8_t, path_table is a uint64_t, and current_bit is a uint8_t. I've tried making them all uint64_t, but nothing changed. I've also tried using unsigned long long int instead. Nothing again.
The code always enters the else clause.
The chip is an ATmega4809, and it uses uint64_t in other parts of the code with no issues.
Note - if anyone knows a more efficient/compact way to extract a single bit from a variable, I would really appreciate it if you could share ^^
1 is an integer constant, of type int. The expression 1 << current_bit also has type int, but with 16-bit int the result of that expression is undefined whenever current_bit is larger than 14, because the shift then reaches the sign bit. The behavior being undefined in your case, it is plausible that your debugger presents results for the overall expression that seem inconsistent with the observed behavior. Switching to an unsigned constant, i.e. 1u, would avoid the sign-bit problem, but with a 16-bit unsigned int the shift is still only well defined for current_bit up to 15, so the mask can never reach the upper bits of a uint64_t either way.
Solve this problem by performing the computation in a type wide enough to hold the result. Here's a compact, correct, and pretty clear way to do that:
DS18B20_send_bit(pin, (path_table & (((uint64_t) 1) << current_bit)) != 0);
Or if path_table has an unsigned type then I prefer this, though it's more of a departure from your original:
DS18B20_send_bit(pin, (path_table >> current_bit) & 1);
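Putting that together as a sketch (the wrapper name send_path_bit is made up; DS18B20_send_bit, pin, path_table, and current_bit are the question's names, and the prototype is assumed):

#include <stdint.h>

void DS18B20_send_bit(uint8_t pin, uint8_t bit);   /* prototype assumed from the question */

void send_path_bit(uint8_t pin, uint64_t path_table, uint8_t current_bit)
{
    /* shift the wide operand rather than the int constant 1,
       so nothing ever overflows a 16-bit int */
    DS18B20_send_bit(pin, (path_table >> current_bit) & 1u);
}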
Realization #1 here is that the AVR is a 1980s-1990s technology core. It is not an x64 PC that chews through 64-bit numbers for breakfast, but an extremely inefficient 8-bit MCU. As such:
It likes 8-bit arithmetic.
It will struggle with 16-bit arithmetic, by doing tricks with 16-bit index registers, double accumulators, or whatever other 8-bit core tricks it prefers.
It will literally take ages to execute 32-bit arithmetic, by invoking software library routines inline.
It will probably melt through the floor if it attempts 64-bit arithmetic.
Before you do anything else, you need to get rid of all 64-bit arithmetic and radically minimize the use of 32-bit arithmetic. Period. There should not be a single uint64_t variable in your code, or you are doing it very, very wrong.
With this realization also comes the fact that 8-bit MCUs always have an int type that is 16 bits wide.
In the code 1<<current_bit, the integer constant 1 is of type int, meaning that if current_bit is 15 you shift a bit into the sign bit of this temporary int, and for anything larger the shift count exceeds the width of int. This is always a bug; strictly speaking it is undefined behavior. In practice, you might end up with a seemingly random change of sign of your numbers.
To avoid this, never use bitwise operators on signed operands. When mixing integer constants such as 1 with bitwise operators, change them to 1u to avoid bugs like the one mentioned.
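For instance, with avr-gcc's 16-bit int (a minimal illustration; the first line is exactly the bug described above):

#include <stdint.h>

uint16_t bad  = 1  << 15;   /* 1 is a signed 16-bit int here: shifting into the sign bit is undefined behavior */
uint16_t good = 1u << 15;   /* unsigned constant: well-defined, yields 0x8000 */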
If anyone knows a more efficient/compact way to extract a single bit from a variable i would really appreciate if you could share
The most efficient way in C is:

uint8_t variable;
...
if (variable & (1u << bits))

This should translate to the relevant "branch if bit set" instruction.
My general advice would be to find your toolchain's disassembler and look at the machine code the C code actually generated. You don't have to be an assembler guru to read it; peeking at the instruction set reference should be enough.
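With the GNU toolchain that is, for example, avr-objdump -d firmware.elf (the file name is illustrative), which prints the generated instructions for each function in the linked image.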

Interpreting a hex number as decimal

I have an embedded system running C code which works pretty straightforwardly; it reads characters via a serial connection, interprets the received chars as hexadecimal numbers, and depending on what was received, proceeds to do something else. However, there is one very special case where the chars received are decimal instead of hex. Since this case is very rare (kind of an error case of an error case), I don't wish to modify the actual character reception to decide whether to interpret the received value as dec or hex, but rather to add a quick algorithm to the case handling that changes the number into decimal.
What would you say is the fastest (as in most processor-efficient) way of doing this? Since the software runs on a small MCU, C library functions are not an option, as I don't wish to add any more unnecessary #includes, so a purely mathematical algorithm is what I'm searching for.
Just to be clear, I'm not asking for the quickest way to do a basic hex-to-dec conversion as in 0x45 -> dec 69; what I want is to transform e.g. 0x120 into decimal 120.
Thanks in advance!
EDIT: Sorry, I'll try to explain in more detail. The actual code is way too long, and I think pasting it here is unnecessary. So here's what happens:
First I read a received number from the serial line, let's say "25". Then I turn it into a hex number, so I have a variable with the read value, let's say X = 0x25. This already works fine, and I don't want to modify it. What I would like to do now, in this very special case, is just to change the interpretation of the variable so that instead of X == 0x25, X == 25. Hexadecimal 0x25 turns into decimal 25. There has to be some kind of mathematical formula for such a change that doesn't need any processor-specific instructions or library functions?
If I'm understanding correctly, you've already converted a stream of ASCII characters into a char/int variable, assuming them to be a stream of hex digits. In some cases, they were actually a stream of decimal digits (e.g. you received 45 and, treating this as hex, got a variable with value 69 when, in this one special case, you actually want its value to be 45).
Assuming two characters (00-ff in general, but for "was meant to be decimal" we're talking 00-99), then:
int hexVal = GetHexStringFromSerialPort();
int decVal = 10 * (hexVal >> 4) + (hexVal & 0x0f);
should do the trick. If you've got longer strings, you'll need to extend the concept further.
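For example, hexVal = 0x45 (decimal 69) gives 10*(0x45 >> 4) + (0x45 & 0x0f) = 10*4 + 5 = 45, as intended.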
Just do a simple loop like this, supposing onum, dnum, digit, and place are unsigned integers:

dnum = 0;
place = 1;
while (onum) {
    digit = onum & 0xF;     /* take the least significant hex digit */
    dnum += digit * place;  /* add it at its decimal place value */
    place *= 10;
    onum >>= 4;
}

This assumes that onum really has the form you describe (no hex digits greater than 9). The loop pulls the least significant hex digit out of your number each iteration and adds it to the decimal result at the corresponding place value. (Note that accumulating with dnum = dnum*10 + digit would reverse the digits here, since the loop walks from the least significant digit up.)
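A quick self-contained check of the loop (values are hypothetical):

#include <stdio.h>

int main(void)
{
    unsigned onum = 0x120, dnum = 0, digit, place = 1;

    while (onum) {
        digit = onum & 0xF;
        dnum += digit * place;
        place *= 10;
        onum >>= 4;
    }
    printf("%u\n", dnum);   /* prints 120 */
    return 0;
}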
Checking if your string starts with 0x characters and removing them should do the trick.

How to increment a number with different endianness?

I am doing some microcontroller programming where I have to load the firmware of a DSP chip at run time. The DSP chip requires that the register addresses be written with a different endianness, so the address 1024 becomes 0x04, 0x00. I have the address in a 2-element uint8_t array with the most significant byte in position 0 and the least significant byte in position 1. However, I need to run through a loop where I increment each register address by one every iteration. The microcontroller has a different endianness, so I can't simply cast the array to uint16_t* and increment.
How would I go about incrementing the address?
I would use a normal int counter, and then convert to the correct endianness before sending it to the DSP. You can use macros in the byteorder or endian family. This will be easier to debug and more portable.
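A minimal sketch of that approach (send_to_dsp and the loop bound are made-up placeholders):

#include <stdint.h>

void send_to_dsp(uint8_t msb, uint8_t lsb);       /* placeholder for the real transfer */

void write_registers(uint16_t start_addr, int count)
{
    for (int i = 0; i < count; ++i) {
        uint16_t addr = start_addr + i;           /* plain arithmetic, no endianness concerns */
        send_to_dsp((uint8_t)(addr >> 8),         /* most significant byte first, as the DSP expects */
                    (uint8_t)(addr & 0xFF));
    }
}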
What is it you are looking for from us?
1) Swap before sending.
2) Increment the lower byte, and add the carry to the upper byte (asm makes this easy; see the sketch after this list).
3) Endian-swap and increment: x = (upper << 8) | lower; x++;
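Option 2 in plain C, for reference (a sketch; element 0 holds the most significant byte, as in the question):

#include <stdint.h>

static void increment_addr(uint8_t addr[2])   /* addr[0] = MSB, addr[1] = LSB */
{
    /* increment the low byte; on wraparound to 0, carry into the high byte */
    if (++addr[1] == 0)
        ++addr[0];
}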

Reading & comparing 10-bit register in 32-bit architecture

I am currently programming in C a 10-bit ADC inside a 32-bit ARM9-based microcontroller. This 10-bit ADC saves the digitized analog value in a 10-bit register named "ADC_DATA_REG" that uses bits 9-0 (LSB). I have to read this register's value and compare it to a 32-bit constant named "CONST". My attempt looked like this, but it isn't working. What am I missing here? Should I use shift operations? This is my first time dealing with this, so any example will be welcome.
The code below has been edited following the comments and answers and is still not working. I also added a while statement which checks whether the ADC_INT_STATUS flag is raised before reading ADC_DATA_REG. The flag indicates an interrupt that becomes pending as soon as the ADC finishes a conversion and data is ready to read from ADC_DATA_REG. It turns out data remains 0 even after assigning the value of register ADC_DATA_REG to it, which is why my LED is always on. It also means I got an interrupt and there should be data in ADC_DATA_REG; instead it seems there isn't...
#define CONST 0x1FF

unsigned int data = 0;
while (!(ADC_INT_STATUS_REG & ADC_INT_STATUS))
    data = ADC_DATA_REG;
if ((data & 0x3FF) > CONST) {
    //code to turn off the LED
}
else {
    //code to turn on the LED
}
You don't write how ADC_DATA_REG fetches the 10-bit value, but I assume it is simply a read from some I/O address. In that case the read from the address returns 32 bits, and in your case only the lower 10 are valid (or interesting). The other 22 bits can be anything (e.g. status bits, rubbish, ...), so before you proceed with the data, you should zero those upper 22 bits.
In case the 10-bit value is signed, you should also perform a sign extension and correct your data type (I know the port I/O is unsigned, but maybe the 10-bit value the ADC returns isn't). Then your comparison should work.
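Note also that the loop as posted assigns data only while the flag is clear and never reads the register after the flag rises, which by itself can leave data at 0. In code, the masking described above, together with a wait loop that reads the register only after the flag is set (register and flag names are the question's), might look like:

unsigned int data;

while (!(ADC_INT_STATUS_REG & ADC_INT_STATUS))
    ;                              /* busy-wait until the conversion completes */

data = ADC_DATA_REG & 0x3FFu;      /* keep only the 10 valid bits, zero the upper 22 */

if (data > CONST) {
    /* turn off the LED */
} else {
    /* turn on the LED */
}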

AVR GCC - typecasting trouble

I'm using an AVR microcontroller to write to a programmable frequency divider chip via the I2C bus. At certain intervals, the following function is called to update the frequency output of the chip:
void DS1077WriteDiv(int16_t data)
{
    uint8_t upperByte = (uint8_t)((uint16_t)data >> 2);
    i2c_start(DS1077_BASE_ADDRESS);
    i2c_write(DIVIDE_REGISTER);
    i2c_write(upperByte);
    i2c_write(0x0);
    i2c_stop();
}
I'm trying to get the top 8 bits of a ten-bit value in the data variable and write them out. The second write command writes the lower 8 bits of the divide register on the chip, 0 in this case.
As a test case, I'm incrementing the data variable (which has to be signed for certain reasons) from zero, shifting it left 2 bits, and calling this function each time. I get garbage out. However, when I do this:
void DS1077WriteDiv(int16_t data)
{
    //uint8_t upperByte = (uint8_t)((uint16_t)data>>2);
    static uint8_t thing = 0;
    i2c_start(DS1077_BASE_ADDRESS);
    i2c_write(DIVIDE_REGISTER);
    i2c_write(thing++);
    i2c_write(0x0);
    i2c_stop();
}
Everything works as expected. There's obviously some kind of problem in how I'm shifting and typecasting the original "data" variable, but I've tried all kinds of permutations with the same results. It would be much appreciated if anyone could point out where I might be going wrong.
Try
uint8_t upperByte = (uint8_t) ((data & 0x3FC) >> 2);
You cannot rely on the cast to a smaller int to delete the high-order bits that you are trying to get rid of.
i2c_write(thing++);
Would mean your divider increments every call. If you increment "data" and shift it right by two then your divider increments every four calls. Your two code sections are not equivalent.
What interval are you calling this function at? What is "garbage out"? How do you know the value passed into the function is correct? How do you know the value sent out to the DS1077 is wrong?
Check all your assumptions.
The cast and shift look fine to me. At least I think they'd work in any C compiler I've ever used. From a C standard perspective you can refer to this draft (ISO/IEC 9899:TC2 6.3 Conversions):
Otherwise, if the new type is unsigned, the value is converted by
repeatedly adding or subtracting one more than the maximum value that
can be represented in the new type until the value is in the range of
the new type
Which is the only one I have access to right now. Perhaps someone else can chime in on the standard question. The compiler may not be standard compliant...
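A quick desk check of that conversion rule (values are hypothetical):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int16_t data = 0x0155 << 2;                         /* a 10-bit value, pre-shifted as in the question */
    uint8_t upperByte = (uint8_t)((uint16_t)data >> 2);

    /* (uint16_t)data >> 2 recovers 0x155; converting to uint8_t reduces the
       value modulo 256, so upperByte == 0x55 -- well-defined, not garbage */
    printf("%#x\n", upperByte);                         /* prints 0x55 */
    return 0;
}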
