I am programming the 8051 in C using the Si Labs IDE. I currently have three bytes: address_byte3, address_byte2, and address_byte1. I initialized a variable address_sum as an unsigned long int and then performed the following operation on it...
address_sum=(address_byte3<<16)+(address_byte2<<8)+(address_byte1);
This operation led me to believe that if address_byte3, address_byte2, & address_byte1 were 0x92, 0x56, & 0x78, respectively, the value loaded into address_sum would be 0xXX925678. Instead I am getting a value of 0xXX005678. My logic seems sound, but then again I am the one writing the code, so I'm biased and could be blinded by my own ignorance. Does anyone have a solution or an explanation as to why the value of address_byte3 is "lost"?
Thank you.
Variables shorter than int are promoted to int when calculations are done on them. It seems that your int type is 16-bit, so shifting by 16 bits shifts the entire value out (formally, shifting a 16-bit int by 16 or more bits is undefined behavior).
You should explicitly cast the variables to the result type (unsigned long):
address_sum = ((unsigned long)address_byte3<<16) +
((unsigned long)address_byte2<<8) +
(unsigned long)address_byte1;
The last cast is superfluous, but it doesn't hurt.
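To see the promotion at work, here is a minimal sketch. It assumes a 16-bit int as on the 8051; on a typical PC with a 32-bit int the two results happen to agree, so this is illustrative rather than portable:
#include <stdio.h>

int main(void)
{
    unsigned char address_byte3 = 0x92;

    /* Promoted to (16-bit) int before the shift, the whole value is
       shifted out; formally this is even undefined behavior. */
    unsigned long lost = (unsigned long)(address_byte3 << 16);

    /* Casting first makes the shift happen in unsigned long. */
    unsigned long kept = (unsigned long)address_byte3 << 16;

    printf("lost: 0x%lX  kept: 0x%lX\n", lost, kept);
    return 0;
}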
A shift of a 16-bit int/unsigned, as well explained by @anatolyg, will only result in a 16-bit answer.
I avoid casting as a general promotion scheme, since a cast may silently narrow the result as code evolves over time and a maintainer introduces wider operands.
Alternatives:
((type_of_target) 1) *: multiplying by 1 of the target type ensures each operation is performed in at least the width of the target.
unsigned long address_sum;
...
address_sum = (1UL*address_byte3<<16) + (1UL*address_byte2<<8) + address_byte1;
Assign to the destination and then operate:
address_sum = address_byte3;
address_sum = (address_sum << 8) + address_byte2;
address_sum = (address_sum << 8) + address_byte1;
A sneaky, though not pleasant-looking, 1-line alternative. Recall that * and + have higher precedence than shift:
address_sum = (0*address_sum + address_byte3 << 16) +
(0*address_sum + address_byte2 << 8) + address_byte1;
Consider @Eugene Sh.'s concern and use 8-bit unsigned "bytes".
My preference is a variation on chux's answer:
unsigned long address;
unsigned char a, b, c;
address = a;  address <<= 8;
address |= b; address <<= 8;
address |= c;
Despite being the most verbose, all of the answers thus far should optimize into basically the same code, though you would have to test the specific compiler to see. Can the 8051 shift more than one bit at a time per instruction anyway? I don't remember.
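As a quick sanity check, a minimal hosted-C sketch (not 8051-specific) comparing the variants above; all three values should print as 925678:
#include <stdio.h>

int main(void)
{
    unsigned char b3 = 0x92, b2 = 0x56, b1 = 0x78;

    /* Explicit casts */
    unsigned long v1 = ((unsigned long)b3 << 16) + ((unsigned long)b2 << 8) + b1;

    /* Multiplication by 1UL forces unsigned long arithmetic */
    unsigned long v2 = (1UL * b3 << 16) + (1UL * b2 << 8) + b1;

    /* Assign to the destination, then shift and OR */
    unsigned long v3 = b3;
    v3 = (v3 << 8) | b2;
    v3 = (v3 << 8) | b1;

    printf("%lX %lX %lX\n", v1, v2, v3);
    return 0;
}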
Related
int n_b ( char *addr , int i ) {
char char_in_chain = addr [ i / 8 ] ;
return char_in_chain >> i%8 & 0x1;
}
Like what is that : "i%8 & 0x1"?
Edit: Note that 0x1 is the hexadecimal notation for 1. Also note that :
0x1 = 0x01 = 0x000001 = 0x0...01
i%8 means i modulo 8, ie the rest in the Euclidean division of i by 8.
& 0x1 is a bitwise AND; it combines the corresponding bits of the two operands. (The number is already stored in binary, so no conversion is involved.)
Example: 0b1101 & 0b1001 = 0b1001
Note that any number & 0x1 is either 0 or 1.
Example: 0b11111111 & 0b00000001 is 0b1, and 0b11111110 & 0b00000001 is 0b0.
Essentially, it is testing the last bit of the number, which is the bit determining parity.
Final edit:
I got the precedence wrong; thanks to the comments for pointing it out. Here is the real order of operations.
First, we compute i%8.
The result could be 0, 1, 2, 3, 4, 5, 6, 7.
Then, we shift the char right by the result, which is at most 7. That means the (i % 8)-th bit is now the least significant bit.
Then, we check whether that bit is set (equals one) or not. If it is, return 1. Else, return 0.
This function returns the value of a specific bit in a char array as the integer 0 or 1.
addr is the pointer to the first char.
i is the index to the bit. 8 bits are commonly stored in a char.
First, the char at the correct offset is fetched:
char char_in_chain = addr [ i / 8 ] ;
i / 8 divides i by 8, ignoring the remainder. For example, any value in the range from 24 to 31 gives 3 as the result.
This result is used as the index to the char in the array.
Next and finally, the bit is obtained and returned:
return char_in_chain >> i%8 & 0x1;
Let's just look at the expression char_in_chain >> i%8 & 0x1.
It is confusing, because it does not show which operation is done in what sequence. Therefore, I duplicate it with appropriate parentheses: (char_in_chain >> (i % 8)) & 0x1. The rules (operation precedence) are given by the C standard.
First, the remainder of the division of i by 8 is calculated. This is used to right-shift the obtained char_in_chain. Now the interesting bit is in the least significant bit. Finally, this bit is "masked" with the binary AND operator and the second operand 0x1. BTW, there is no need to mark this constant as hex.
Example:
The array contains the bytes 0x5A, 0x23, and 0x42. The index of the bit to retrieve is 13.
i as given as argument is 13.
i / 8 gives 13 / 8 = 1, remainder ignored.
addr[1] returns 0x23, which is stored in char_in_chain.
i % 8 gives 5 (13 / 8 = 1, remainder 5).
0x23 is binary 0b00100011, and right-shifted by 5 gives 0b00000001.
0b00000001 ANDed with 0b00000001 gives 0b00000001.
The value returned is 1.
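To check the walkthrough end to end, here is a self-contained sketch using the function from the question unchanged:
#include <stdio.h>

int n_b ( char *addr , int i ) {
    char char_in_chain = addr [ i / 8 ] ;
    return char_in_chain >> i%8 & 0x1;
}

int main(void)
{
    char data[] = { 0x5A, 0x23, 0x42 };
    printf("%d\n", n_b(data, 13)); /* prints 1, as computed above */
    return 0;
}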
Note: if anything above is unclear, feel free to comment.
What the various operators do is explained by any C book, so I won't address that here. To instead analyse the code step by step...
The function and types used:
int as return type is an indication of the programmer being inexperienced at writing hardware-related code. We should always avoid signed types for such purposes. An experienced programmer would have used an unsigned type, like for example uint8_t. (Or in this specific case maybe even bool, depending on what the data is supposed to represent.)
n_b is a rubbish name, we should obviously never give an identifier such a nondescript name. get_bit or similar would have been a better name.
char* is, again, an indication of the programmer being inexperienced. char is particularly problematic when dealing with raw data, since we can't even know if it is signed or unsigned; that depends on which compiler is used. Had the raw data contained a value of 0x80 or larger and char been signed, we would have gotten a negative value. And then right-shifting a negative value is also problematic, since that behavior too is compiler-specific.
char* is proof of the programmer lacking the fundamental knowledge of const correctness. The function does not modify this parameter so it should have been const qualified. Good code would use const uint8_t* addr.
int i is not really incorrect; the signedness doesn't really matter here. But good programming practice would have used an unsigned type, or even size_t.
With types unsloppified and corrected, the function might look like this:
#include <stdint.h>
uint8_t get_bit (const uint8_t* addr, size_t i ) {
uint8_t char_in_chain = addr [ i / 8 ] ;
return char_in_chain >> i%8 & 0x1;
}
This is still somewhat problematic, because the average C programmer might not remember the precedence of >> vs % vs & off the top of their head. It happens to be % over >> over &, but let's write the code a bit more readably still by making the precedence explicit: (char_in_chain >> (i%8)) & 0x1.
Then I would question if the local variable really adds anything to readability. Not really, we might as well write:
uint8_t get_bit (const uint8_t* addr, size_t i ) {
return ((addr[i/8]) >> (i%8)) & 0x1;
}
As for what this code actually does: this happens to be a common design pattern for how to access a specific bit in a raw bit-field.
Any bit-field in C may be accessed as an array of bytes.
Bit number n in that bit-field, will be found at byte n/8.
Inside that byte, the bit will be located at n%8.
Bit masking in C is most readably done as data & (1u << bit). Which can be obfuscated as somewhat equivalent but less readable (data >> bit) & 1u, where the masked bit ends up in the LSB.
For example, let's assume we have 64 bits of raw data. Bits are always enumerated from 0 to 63, and bytes (just like any C array) from index 0. We want to access bit 33. Then 33/8 with integer division = 4.
So byte[4]. Bit 33 will be found at 33%8 = 1. So we can obtain the value of bit 33 from ordinary bit masking: byte[33/8] & (1u << (33%8)). Or similarly, (byte[33/8] >> (33%8)) & 1u.
An alternative, more readable version of it all:
bool is_bit_set (const uint8_t* data, size_t bit)
{
uint8_t byte = data [bit / 8u];
size_t mask = 1u << (bit % 8u);
return (byte & mask) != 0u;
}
(Strictly speaking we could as well do return byte & mask; since a boolean type is used, but it doesn't hurt to be explicit.)
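As a quick usage check, reusing the example bytes from earlier in the thread (bit 13 is bit 5 of the second byte, which is set):
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

/* is_bit_set exactly as defined above */
bool is_bit_set (const uint8_t* data, size_t bit)
{
    uint8_t byte = data [bit / 8u];
    size_t mask = 1u << (bit % 8u);
    return (byte & mask) != 0u;
}

int main(void)
{
    const uint8_t data[] = { 0x5A, 0x23, 0x42 };
    printf("%d\n", is_bit_set(data, 13)); /* prints 1 */
    return 0;
}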
The following three lines of code are optimized ways to modify bits with 1 MOV instruction instead of using a less interrupt-safe read-modify-write idiom. They are identical to each other, and set the LED_RED bit in the GPIO Port's Data Register:
*((volatile unsigned long*)(0x40025000 + (LED_RED << 2))) = LED_RED;
*(GPIO_PORTF_DATA_BITS_R + LED_RED) = LED_RED;
GPIO_PORTF_DATA_BITS_R[LED_RED] = LED_RED;
LED_RED is simply (volatile unsigned long) 0x02. In the memory map of this microcontroller, the first 2 LSBs of this register (and others) are unused, so the left shift in the first example makes sense.
The definition for GPIO_PORTF_DATA_BITS_R is:
#define GPIO_PORTF_DATA_BITS_R ((volatile unsigned long *)0x40025000)
My question is: How come I do not need to left shift twice when using pointer arithmetic or array indexing (2nd method and 3rd method, respectively)? I'm having a hard time understanding. Thank you in advance.
Remember how C pointer arithmetic works: adding an offset to a pointer operates in units of the type pointed to. Since GPIO_PORTF_DATA_BITS_R has type unsigned long *, and sizeof(unsigned long) == 4, GPIO_PORTF_DATA_BITS_R + LED_RED effectively adds 2 * 4 = 8 bytes.
Note that (1) does arithmetic on 0x40025000, which is an integer, not a pointer, so we need to add 8 explicitly to reach the same address. A left shift by 2 is the same as multiplication by 4, so LED_RED << 2 again equals 8.
(3) is exactly equivalent to (2) by definition of the [] operator.
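To make the scaling rule concrete, here is a minimal hosted sketch using an ordinary array in place of the memory-mapped register (the absolute address is hardware-specific, so sizeof is used where the MCU code hard-codes << 2):
#include <stdio.h>

int main(void)
{
    unsigned long regs[4] = { 0 };
    volatile unsigned long *base = regs;
    unsigned long LED_RED = 0x02;

    /* All three forms name the same word, regs[2]: pointer arithmetic
       on unsigned long * is scaled by sizeof(unsigned long), which is
       4 on the Cortex-M3 (hence the << 2 in the integer version). */
    *(volatile unsigned long *)((char *)regs + LED_RED * sizeof *regs) = LED_RED;
    *(base + LED_RED) = LED_RED;
    base[LED_RED] = LED_RED;

    printf("%lu\n", regs[2]); /* prints 2 */
    return 0;
}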
I encountered this piece of existing C code that I am struggling to understand.
It supposedly reads a 4-byte unsigned value passed in a buffer (in little-endian format) into a variable of type "long".
This code runs on a 64-bit, little-endian x86 machine, where sizeof(long) is 8 bytes.
My guess is that this code is also intended to run on a 32-bit x86 machine, which is why a variable of type long is used instead of int to store the value of the four bytes of input data.
I am having some doubts and have put comments in the code to express what I understand, or what I don't :-)
Please answer questions below in that context
void read_Value_From_Four_Byte_Buff( char*input)
{
/* use long so on 32 bit machine, can still accommodate 4 bytes */
long intValueOfInput;
/* Bitwise and of input buffer's byte 0 with 0xFF gives MSB or LSB ?*/
/* This code seems to assume that assignment will store in rightmost byte - is that true on a x86 machine ?*/
intValueOfInput = 0xFF & input[0];
/*left shift byte-1 eight times, bitwise "or" places in 2nd byte frm right*/
intValueOfInput |= ((0xFF & input[1]) << 8);
/* similar left shift in mult. of 8 and bitwise "or" for next two bytes */
intValueOfInput |= ((0xFF & input[2]) << 16);
intValueOfInput |= ((0xFF & input[3]) << 24);
}
My questions
1) The input buffer is expected to be in "little endian" order. But from the code, the assumption looks to be Byte 0 = MSB and Byte 3 = LSB: the code reads bytes starting from Byte 0, and subsequent bytes (1 onwards) are placed in the target variable after left shifting. Is that how it is, or am I getting it wrong?
2) I feel this is a convoluted way of doing things. Is there a simpler alternative to copy a value from a 4-byte buffer into a long variable?
3) Will the assumption that this code will run on a 64-bit machine have any bearing on how easily I can do this differently? I mean, is all this trouble meant to keep it agnostic to word size (I assume it is agnostic to word size now, though I'm not sure)?
Thanks for your enlightenment :-)
You have it backwards. When you left shift, you're putting bits into more significant positions. So ((0xFF & input[3]) << 24) puts Byte 3 into the MSB.
This is the way to do it in standard C. POSIX has the function ntohl() that converts from network byte order to a native 32-bit integer, so this is usually used in Unix/Linux applications.
This will not work exactly the same on a 64-bit machine, unless you use unsigned long instead of long. As currently written, the highest bit of input[3] will be put into the sign bit of the result (assuming a twos-complement machine), so you can get negative results. If long is 64 bits, all the results will be positive.
The code you are using does indeed treat the input buffer as little endian. Look how it takes the first byte of the buffer and just assigns it to the variable without any shifting. If the first byte increases by 1, the value of your result increases by 1, so it is the least-significant byte (LSB). Left-shifting makes a byte more significant, not less. Left-shifting by 8 is generally the same as multiplying by 256.
I don't think you can get much simpler than this unless you use an external function, or make assumptions about the machine this code is running on, or invoke undefined behavior. In most instances, it would work to just write uint32_t x = *(uint32_t *)input; but this assumes your machine is little endian and I think it might be undefined behavior according to the C standard.
No, running on a 64-bit machine is not a problem. I recommend using types like uint32_t and int32_t to make it easier to reason about whether your code will work on different architectures. You just need to include the stdint.h header from C99 to use those types.
The right-hand side of the last line of this function might exhibit undefined behavior depending on the data in the input:
((0xFF & input[3]) << 24)
The problem is that (0xFF & input[3]) will be a signed int (because of integer promotion). The int will probably be 32-bit, and you are shifting it so far to the left that the resulting value might not be representable in an int. The C standard says this is undefined behavior, and you should really try to avoid that because it gives the compiler a license to do whatever it wants and you won't be able to predict the result.
A solution is to convert it from an int to a uint32_t before shifting it, using a cast.
Finally, the variable intValueOfInput is written to but never used. Shouldn't you return it or store it somewhere?
Taking all this into account, I would rewrite the function like this:
uint32_t read_value_from_four_byte_buff(char * input)
{
uint32_t x;
x = 0xFF & input[0];
x |= (0xFF & input[1]) << 8;
x |= (0xFF & input[2]) << 16;
x |= (uint32_t)(0xFF & input[3]) << 24;
return x;
}
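A quick check, feeding the function above a hypothetical buffer holding 0x12345678 in little-endian order:
#include <stdio.h>
#include <stdint.h>

/* read_value_from_four_byte_buff exactly as defined above */
uint32_t read_value_from_four_byte_buff(char * input)
{
    uint32_t x;
    x = 0xFF & input[0];
    x |= (0xFF & input[1]) << 8;
    x |= (0xFF & input[2]) << 16;
    x |= (uint32_t)(0xFF & input[3]) << 24;
    return x;
}

int main(void)
{
    char buf[4] = { 0x78, 0x56, 0x34, 0x12 }; /* 0x12345678, little endian */
    printf("0x%08lX\n", (unsigned long)read_value_from_four_byte_buff(buf));
    return 0;
}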
From the code, Byte 0 is LSB, Byte 3 is MSB. But there are some typos. The lines should be
intValueOfInput |= ((0xFF & input[2]) << 16);
intValueOfInput |= ((0xFF & input[3]) << 24);
You can make the code shorter by dropping the 0xFF masks and declaring the argument as "unsigned char *" instead.
To make the code shorter, you can do:
long intValueOfInput = 0;
for (int i = 0, shift = 0; i < 4; i++, shift += 8)
intValueOfInput |= (unsigned long)(unsigned char)input[i] << shift;
TL;DR:
Why isn't (unsigned long)(0x400253FC) equivalent to (unsigned long)((*((volatile unsigned long *)0x400253FC)))?
How can I make a macro which works with the former work with the latter?
Background Information
Environment
I'm working with an ARM Cortex-M3 processor, the LM3S6965 by TI, with their StellarisWare (free download, export controlled) definitions. I'm using gcc version 4.6.1 (Sourcery CodeBench Lite 2011.09-69). Stellaris provides definitions for some 5,000 registers and memory addresses in "inc/lm3s6965.h", and I really don't want to redo all of those. However, they seem to be incompatible with a macro I want to write.
Bit Banding
On the ARM Cortex-M3, a portion of memory is aliased with one 32-bit word per bit of the peripheral and RAM memory space. Setting the memory at address 0x42000000 to 0x00000001 will set the first bit of the memory at address 0x40000000 to 1, but not affect the rest of the word. To change bit 2, change the word at 0x42000004 to 1. That's a neat feature, and extremely useful. According to the ARM Technical Reference Manual, the algorithm to compute the address is:
bit_word_offset = (byte_offset × 32) + (bit_number × 4)
bit_word_addr = bit_band_base + bit_word_offset
where:
bit_word_offset is the position of the target bit in the bit-band memory region.
bit_word_addr is the address of the word in the alias memory region that maps to the
targeted bit.
bit_band_base is the starting address of the alias region.
byte_offset is the number of the byte in the bit-band region that contains the targeted bit.
bit_number is the bit position, 0 to 7, of the targeted bit
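As a quick check of the formula (not from the TRM; bit_number is zero-indexed here, so this is the question's "bit 2" counted from one), computing the alias address for bit 1 of the word at 0x40000000 reproduces the 0x42000004 mentioned above:
#include <stdio.h>

int main(void)
{
    unsigned long bit_band_base = 0x42000000UL; /* peripheral alias region */
    unsigned long byte_offset   = 0x40000000UL - 0x40000000UL;
    unsigned long bit_number    = 1UL;

    unsigned long bit_word_offset = (byte_offset * 32) + (bit_number * 4);
    unsigned long bit_word_addr   = bit_band_base + bit_word_offset;

    printf("0x%08lX\n", bit_word_addr); /* prints 0x42000004 */
    return 0;
}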
Implementation of Bit Banding
The "inc/hw_types.h" file includes the following macro which implements this algorithm. To be clear, it implements it for a word-based model which accepts 4-byte-aligned words and 0-31-bit offsets, but the resulting address is equivalent:
#define HWREGBITW(x, b) \
HWREG(((unsigned long)(x) & 0xF0000000) | 0x02000000 | \
(((unsigned long)(x) & 0x000FFFFF) << 5) | ((b) << 2))
This algorithm takes the base (which is either in SRAM at 0x20000000 or in the peripheral memory space at 0x40000000) and ORs it with 0x02000000, adding the bit-band base offset. Then, it multiplies the offset from the base by 32 (equivalent to a five-position left shift) and ORs in the bit number multiplied by 4 (a two-position left shift).
The referenced HWREG simply performs the requisite cast for writing to a given location in memory:
#define HWREG(x) \
(*((volatile unsigned long *)(x)))
This works quite nicely with assignments like
HWREGBITW(0x400253FC, 0) = 1;
where 0x400253FC is a magic number for a memory-mapped peripheral and I want to set bit 0 of this peripheral to 1. The above code computes (at compile-time, of course) the bit offset and sets that word to 1.
What doesn't work
Unfortunately, the aforementioned definitions in "inc/lm3s6965.h" already perform the cast done by HWREG. I want to avoid magic numbers and instead use provided definitions like
#define GPIO_PORTF_DATA_R (*((volatile unsigned long *)0x400253FC))
An attempt to paste this into HWREGBITW causes the macro to no longer work, as the cast interferes:
HWREGBITW(GPIO_PORTF_DATA_R, 0) = 1;
The preprocessor generates the following mess (indentation added):
(*((volatile unsigned long *)
((((unsigned long)((*((volatile unsigned long *)0x400253FC)))) & 0xF0000000)
| 0x02000000 |
((((unsigned long)((*((volatile unsigned long *)0x400253FC)))) & 0x000FFFFF) << 5)
| ((0) << 2))
)) = 1;
Note the two instances of
(((unsigned long)((*((volatile unsigned long *)0x400253FC)))))
I believe that these extra casts are what is causing my process to fail. The following result of preprocessing HWREGBITW(0x400253FC, 0) = 1; does work, supporting my assertion:
(*((volatile unsigned long *)
((((unsigned long)(0x400253FC)) & 0xF0000000)
| 0x02000000 |
((((unsigned long)(0x400253FC)) & 0x000FFFFF) << 5)
| ((0) << 2))
)) = 1;
The (type) cast operator has right-to-left precedence, so the last cast should apply, and an unsigned long should be used for the bitwise arithmetic (which should then work correctly). There's nothing implicit anywhere, no float-to-pointer conversions, no precision/range changes... the left-most cast should simply nullify the casts to the right.
My question (finally...)
Why isn't (unsigned long)(0x400253FC) equivalent to (unsigned long)((*((volatile unsigned long *)0x400253FC)))?
How can I make the existing HWREGBITW macro work? Or, how can a macro be written to do the same task but not fail when given an argument with a pre-existing cast?
1- Why isn't (unsigned long)(0x400253FC) equivalent to (unsigned long)((*((volatile unsigned long *)0x400253FC)))?
The former is an integer literal whose value is 0x400253FCul, while the latter is the unsigned long value stored at the (memory or GPIO) address 0x400253FC.
2- How can I make the existing HWREGBITW macro work? Or, how can a macro be written to do the same task but not fail when given an argument with a pre-existing cast?
Use HWREGBITW(&GPIO_PORTF_DATA_R, 0) = 1; instead.
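If changing every call site is undesirable, a thin wrapper macro (a hypothetical name, not part of StellarisWare) can take the address on the caller's behalf:
/* Hypothetical convenience macro; HWREGBITW comes from inc/hw_types.h
   and GPIO_PORTF_DATA_R from inc/lm3s6965.h. */
#define HWREGBITW_REG(reg, b)  HWREGBITW(&(reg), (b))

HWREGBITW_REG(GPIO_PORTF_DATA_R, 0) = 1;  /* sets bit 0, as intended */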
I've run into a small issue here. I have an unsigned char array, and I am trying to access bytes 2-3 (0xFF and 0xFF) and get their value as a short.
Code:
unsigned char Temp[512] = {0x00,0xFF,0xFF,0x00};
short val = (short)*((unsigned char*)Temp+1);
While I would expect val to contain 0xFFFF it actually contains 0x00FF. What am I doing wrong?
There's no guarantee that you can access a short when the data is improperly aligned.
On some machines, especially RISC machines, you'd get a bus error and core dump for misaligned access. On other machines, the misaligned access would involve a trap into the kernel to fix up the error — which is only a little quicker than the core dump.
To get the result reliably, you'd be best off doing shifting and or:
val = *(Temp+1) << 8 | *(Temp+2);
or:
val = *(Temp+2) << 8 | *(Temp+1);
Note that this explicitly offers big-endian (first option) or little-endian (second) interpretation of the data.
Also note the careful use of << and |; if you use + instead of |, you have to parenthesize the shift expression or use multiplication instead of shift:
val = (*(Temp+1) << 8) + *(Temp+2);
val = *(Temp+1) * 256 + *(Temp+2);
Be logical and use either logic or arithmetic and not a mixture.
Well, you're dereferencing an unsigned char* when you should be dereferencing a short*.
I think this should work:
short val = *((short*)(Temp+1));
Your problem is that you are only accessing one byte of the array:
*((unsigned char*)Temp+1) will dereference the pointer Temp+1 giving you 0xFF
(short)*((unsigned char*)Temp+1) will cast the result of the dereference to short. Casting unsigned char 0xFF to short obviously gives you 0x00FF
So what you are trying to do is *((short*)(Temp+1))
It should however be noted that what you are doing is a horrible hack. First of all, when the two bytes differ, the result will obviously depend on the endianness of the machine.
Second there is no guarantee that the accessed data is correctly aligned to be accessed as a short.
So it might be a better idea to do something like short val = *(Temp+1)<<8 | *(Temp+2) or short val = *(Temp+2)<<8 | *(Temp+1), depending on the endianness of your architecture.
I do not recommend this approach because it is architecture-specific.
Consider the following definition of Temp:
unsigned char Temp[512] = {0x00,0xFF,0x88,0x00};
Depending on the endianness of the system, you will get different results casting Temp + 1 to a short *; on a little-endian system the result would be 0x88FF, but on a big-endian system it would be 0xFF88.
Also, I believe that this is an undefined cast because of issues with alignment.
What you could use is:
short val = (((short)Temp[1]) << 8) | Temp[2];
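A quick check of that expression (a hosted sketch; the conversion of 0xFF88 to signed short is implementation-defined, but on a typical two's-complement machine the bit pattern comes out as expected):
#include <stdio.h>

int main(void)
{
    unsigned char Temp[512] = { 0x00, 0xFF, 0x88, 0x00 };
    short val = (((short)Temp[1]) << 8) | Temp[2];
    printf("0x%04hX\n", (unsigned short)val); /* prints 0xFF88 */
    return 0;
}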