I am trying to cast a preprocessor define to an array, but I am not sure if it is possible at all.
For example, I have defined the number 0x44332211.
Code below:
#include <stdio.h>
#include <stdint.h>
#define number 0x44332211
int main()
{
uint8_t array[4] = {(uint8_t)number, (uint8_t)number << 8,(uint8_t)(number <<16 ),(uint8_t)(number <<24)};
printf("array[%x] \n\r",array[0]); // 0x44
printf("array[%x] \n\r",array[1]); // 0x33
printf("array[%x] \n\r",array[2]); // 0x22
printf("array[%x] \n\r",array[3]); // 0x11
return 0;
}
and I want to convert it to a uint8_t array[4] where array[0] = 0x44, array[1] = 0x33, array[2] = 0x22, and array[3] = 0x11.
Is it possible?
my output:
array[11]
array[0]
array[0]
array[0]
A couple of realizations are needed:
A cast to uint8_t keeps only the least significant byte of the data. This means you have to right shift the data down into the least significant byte, not left shift it away from it.
0x44332211 is an integer constant, not a "preprocessor". It is of type int and therefore signed. You shouldn't use bitwise operators on signed types. This is easily solved by changing it to 0x44332211u, with the unsigned suffix.
There is also a precedence problem in (uint8_t)number << 8: the cast binds tighter than the shift, so the value is truncated before it is shifted. You should shift first, then cast.
#include <stdio.h>
#include <stdint.h>
#define number 0x44332211u
int main()
{
uint8_t array[4] =
{
(uint8_t)(number >> 24),
(uint8_t)(number >> 16),
(uint8_t)(number >> 8),
(uint8_t) number
};
printf("array[%x] \n\r",array[0]); // 0x44
printf("array[%x] \n\r",array[1]); // 0x33
printf("array[%x] \n\r",array[2]); // 0x22
printf("array[%x] \n\r",array[3]); // 0x11
return 0;
}
This is not really a cast in any way. You have defined a constant and compute the values of the array based on that constant. Keep in mind that in this case, the preprocessor simply does a search and replace, nothing clever.
Also, your shift is in the wrong direction. You keep the last (rightmost) 8 bits when casting int to uint8_t, not the first (leftmost) ones.
Yes, you are casting an int to a uint8_t. The only problem is that, once you apply the shifts, the result no longer fits in the type you are storing it into, so that information is lost.
Your uint8_t casts just take the least significant byte. That's why you get 11 in the first case and 0 in the others: your shifts to the left leave 0 in the rightmost positions.
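To make the point concrete, here is a small sketch (my own illustration, not part of the original answer) where each byte is shifted down first and the & 0xFFu mask just documents which bits survive the cast:
#include <stdio.h>
#include <stdint.h>

#define number 0x44332211u

int main(void)
{
    /* Shift the wanted byte down into the least significant position, then mask;
       the mask is redundant with the uint8_t cast but shows the intent. */
    uint8_t array[4] = {
        (uint8_t)((number >> 24) & 0xFFu),   /* 0x44 */
        (uint8_t)((number >> 16) & 0xFFu),   /* 0x33 */
        (uint8_t)((number >> 8)  & 0xFFu),   /* 0x22 */
        (uint8_t)( number        & 0xFFu)    /* 0x11 */
    };

    for (int i = 0; i < 4; i++)
    {
        printf("array[%d] = 0x%02x\n", i, array[i]);
    }
    return 0;
}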
So I made a custom type with typedef unsigned char byte; and then declared an array of it: byte mem[255];. I used mem[0] = 0x10100000; to initialize the first value, but when I print it using printf("%d", mem[0]); I get 0. Why?
An unsigned char can typically only hold values between 0 and 255. The hex value 0x10100000 is well out of range for that type, so (essentially) only the low-order byte of that value is used, which is 0.
Presumably you wanted to use a binary constant. Not all compilers support that, but those that do would specify it as 0b10100000. For those that don't, you can use the hex value 0xA0.
You're assigning it the hexadecimal number 0x10100000, which is far larger than a single character and thus can't be stored in a byte. If you want to use a binary number, and your compiler supports this, you might try using 0b10100000 instead.
An unsigned char can only hold values up to ((1 << CHAR_BIT) - 1).
You can check what that maximum value is yourself:
#include <stdio.h>
#include <limits.h>
int main(void)
{
printf("%u\n", (1 << CHAR_BIT) - 1);
}
On most systems it is 255 or 0xff.
When you assign 0x10100000 to the unsigned char, only the lowest two hex digits are kept (in your case 0x00).
If you want all the bytes of 0x10100000 in the byte array mem you defined, a plain assignment will not work. You need to copy them instead:
#include <stdio.h>
#include <limits.h>
#include <string.h>
typedef unsigned char byte;
int main(void)
{
byte mem[100];
memcpy(mem, &(unsigned){0x10100000}, sizeof(0x10100000));
for(size_t index = 0; index < sizeof(0x10100000); index++)
{
printf("mem[%zu] = 0x%hhx\n", index, mem[index]);
}
}
Output:
mem[0] = 0x0
mem[1] = 0x0
mem[2] = 0x10
mem[3] = 0x10
https://godbolt.org/z/cGYa8MTef
Why in this order? Because the machine godbolt runs on uses little endian. https://en.wikipedia.org/wiki/Endianness
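A minimal sketch (my own illustration, not part of the answer above) to check the byte order of the machine you are running on:
#include <stdio.h>

int main(void)
{
    unsigned int x = 1u;

    /* Look at the first byte of x in memory: 1 means the least
       significant byte comes first, i.e. little endian. */
    if (*(unsigned char *)&x == 1)
        printf("little endian\n");
    else
        printf("big endian\n");
    return 0;
}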
The 0x prefix means the number is hexadecimal. If you want to use a binary number, gcc supports the 0b prefix as an extension (binary constants only became standard in C23).
mem[0] = 0b10100000
You can also create a .h file
#define b00000000 0
#define b00000001 1
#define b00000010 2
#define b00000011 3
/* .... */
#define b11111110 254
#define b11111111 255
and use those definitions in a portable way
mem[0] = b10100000;
You can't fit a 32 bit value inside an 8 bit variable (mem[0]). Do you perhaps mean to do this?
*(int *)mem = 0x10100000;
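Be aware that this cast assumes mem is suitably aligned for int and it writes through an incompatible pointer type; a sketch of a memcpy version (the same idea as the memcpy answer above) avoids both concerns:
#include <stdio.h>
#include <string.h>

typedef unsigned char byte;

int main(void)
{
    byte mem[255];
    int value = 0x10100000;

    /* Copy the 4 bytes of value into the start of mem;
       the resulting byte order follows the machine's endianness. */
    memcpy(mem, &value, sizeof value);

    for (size_t i = 0; i < sizeof value; i++)
        printf("mem[%zu] = 0x%hhx\n", i, mem[i]);
    return 0;
}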
I'd expect the following combination of two uint8_t values (0x00 and 0x01) into one uint16_t to give me a value of 0x0001 when I place them consecutively in memory. Instead I obtain 0x0100 = 256, which surprises me.
#include <stdio.h>
#include <stdint.h>
int main(void){
uint8_t u1 = 0x00, u2 = 0x01;
uint8_t ut[2] = {u1, u2};
uint16_t *mem16 = (uint16_t*) ut;
printf("mem16 = %d\n", *mem16);
return 0;
}
Could anyone explain to me what I've missed in my current understanding of C memory?
Thank you! :-)
It is called endianness.
Most systems nowadays use little endian. In this scheme the least significant byte is stored first, so 0x0100 is stored (assuming a 2-byte representation) as {0x00, 0x01}, exactly as in your case.
ut[0] ends up as the LSB of mem16, and ut[1] as the MSB.
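If you want a specific byte order no matter what machine the code runs on, combine the bytes arithmetically instead of through a pointer cast; a small sketch:
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t u1 = 0x00, u2 = 0x01;

    /* u1 is taken as the high byte and u2 as the low byte,
       independent of the machine's endianness. */
    uint16_t value = (uint16_t)((u1 << 8) | u2);

    printf("value = 0x%04x\n", value);   /* prints value = 0x0001 */
    return 0;
}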
I'm trying to create a table in C (for an embedded application) where each row is a sequence of 8 bits, for example:
11010011
01010011
10000000
then I need to have functions to set/clear/read any bit in any row. What is the most efficient way to do that?
For bit manipulation I thought to use:
uint8_t msk = 0x01;
row3 |= msk;
But to do this I think I need to define row3 (and every row) as uint8_t and express it in hexadecimal as well, which leads me to my second question:
how do I store an 8-bit binary sequence in a uint8_t? I've seen different ways to do similar tasks, like the one discussed here, but none of them worked for me. Can you help me?
Thanks.
To represent binary bit patterns in C, it is normal to use hexadecimal notation. The utility of hex is that a single hex digit exactly coincides with 4 binary digits, so with experience you can quickly convert between the 16 hex digits and the corresponding binary value in your head, and for longer integers it is simply a matter of converting each digit in turn - 4 bits at a time. Representing long integers in binary quickly becomes impractical.
So your table might be represented as:
uint8_t row[] = { 0xd3, // 11010011
0x53, // 01010011
0x80 // 10000000
} ;
Then you set/clear bits in the following manner:
row[2] |= mask ; // Set mask bits
row[0] &= mask ; // Clear mask bits
To create a mask specifying numbered bits without hard-coding the hex value you can use an expression such as:
uint8_t mask = 1<<7 | 1<<5 | 1<<0 ; // 10100001 mask bits 0, 5 & 7
Occasionally a "visual" binary representation is desirable - for character bitmaps for example, the character A is much easier to visualise when represented in binary:
00000000
00011000
00100100
01000010
01111110
01000010
01000010
00000000
It is possible to efficiently code such a table while maintaining the "visualisation" by exhaustively defining a macro for each binary value; e.g:
#define B_00000000 0x00
#define B_00000001 0x01
#define B_00000010 0x02
#define B_00000011 0x03
#define B_00000100 0x04
...
#define B_11111110 0xfe
#define B_11111111 0xff
Note that to create the above macros, it is perhaps best to write a code generator - i.e. a program that generates the code - and put the macros in a header file.
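A sketch of such a generator (my own illustration, assuming the B_ prefix used below): run it once and redirect its output into a header file.
#include <stdio.h>

int main(void)
{
    for (unsigned value = 0; value < 256; value++)
    {
        printf("#define B_");
        /* Print the 8 binary digits, most significant bit first. */
        for (int bit = 7; bit >= 0; bit--)
            putchar(((value >> bit) & 1u) ? '1' : '0');
        printf(" 0x%02x\n", value);
    }
    return 0;
}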
Given such macros you can then represent your table as:
uint8_t row[] = { B_11010011,
B_01010011,
B_10000000
} ;
or the character bitmap for A thus:
uint8_t charA[] = { B_00000000,
B_00011000,
B_00100100,
B_01000010,
B_01111110,
B_01000010,
B_01000010,
B_00000000 } ;
If the bits are received serially at run-time, the corresponding uint8_t can be built using sequential mask and shift:
uint8_t getByte()
{
uint8_t mask = 0x80 ;
uint8_t byte = 0 ;
while( mask != 0 )
{
uint8_t bit = getBit() ;
byte |= bit ? mask : 0 ;
mask >>= 1 ;
}
return byte ;
}
What the getBit() function does is for you to define; it may read a file, a string, or keyboard entry for example, but it must return zero or non-zero for the binary digits 0 and 1 respectively. If the data is received LSB first, then the mask starts from 0x01 and a << shift is used instead.
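For completeness, a sketch of that LSB-first variant (my addition), under the same assumptions about getBit():
uint8_t getByteLSBFirst( void )
{
    uint8_t mask = 0x01 ;
    uint8_t byte = 0 ;

    while( mask != 0 )
    {
        uint8_t bit = getBit() ;
        byte |= bit ? mask : 0 ;
        mask <<= 1 ;    /* walk from bit 0 up to bit 7; the shift past bit 7 makes mask 0 and ends the loop */
    }
    return byte ;
}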
C (before C23) doesn't have a syntax for entering binary literals, so you should type them as octal or hex, e.g.
uint8_t row1 = 0xd3;
uint8_t row2 = 0x53;
uint8_t row3 = 0x80;
See How do you set, clear, and toggle a single bit? for how you can manipulate specific bits.
row3 |= mask;
will set the bits that are set in mask.
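To illustrate (not from the linked answer), a minimal sketch of the usual single-bit operations, where bit is the bit number and 0 is the least significant:
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t row3 = 0x80;                      /* 10000000 */
    unsigned bit = 0;

    row3 |= (uint8_t)(1u << bit);             /* set bit 0    -> 10000001 */
    row3 ^= (uint8_t)(1u << 7);               /* toggle bit 7 -> 00000001 */
    row3 &= (uint8_t)~(1u << bit);            /* clear bit 0  -> 00000000 */
    printf("bit %u is %u\n", bit, (row3 >> bit) & 1u);   /* read bit 0 */
    return 0;
}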
If I understand correctly, you require a list of 8-bit values and you would like to manipulate the bits of any element in the list. I would approach it by making an array of uint8_t, with the size of the array defined by the number of elements you need in your table.
#include <stdio.h>
#include <stdint.h>
#define LIST_SIZE 3
//accepts the array and sets bit bit_pos of element list_idx
void set_bit(uint8_t* hex_list, int list_idx, int bit_pos){
hex_list[list_idx] = hex_list[list_idx] | 0x01<<bit_pos; //left shift by the bit position you want to set
}
void clear_bit(uint8_t* hex_list, int list_idx, int bit_pos){
hex_list[list_idx] = hex_list[list_idx] & ~(0x01<<bit_pos); //left shift and bitwise inversion to get the bit to clear
}
int main(void) {
uint8_t hex_list[LIST_SIZE] = {0x00, 0x01, 0x02};
set_bit(hex_list, 0, 1); // will make 0th element 0x00 -> 0x02 by setting bit position 1
clear_bit(hex_list, 1, 0); // will make 1st element 0x01 -> 0x00 by clearing bit position 0
set_bit(hex_list, 2, 0); // will make 2nd element 0x02 -> 0x03 by setting bit position 0
clear_bit(hex_list, 2 , 0);// will make 2nd element 0x03 ->0x02 by clearing bit at position 0
//print the result and verify
for(int i = 0; i<LIST_SIZE; ++i){
// modified array will be {0x02, 0x00, 0x02}
printf("elem%d = %x \n", i, hex_list[i]);
}
return 0;
}
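The question also asked about reading a bit; a read_bit in the same style as the functions above (my addition, it drops straight into the program) might look like:
//returns 1 if bit bit_pos of element list_idx is set, 0 otherwise
int read_bit(uint8_t* hex_list, int list_idx, int bit_pos){
return (hex_list[list_idx] >> bit_pos) & 0x01;
}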
A uint8_t represents an 8-bit unsigned integer whose value can vary between 0 (binary 00000000) and 255 (binary 11111111).
Take a look at this code:
#include <stdio.h>
#include <stdlib.h>
int byteToInt(char *bytes) {
int32_t v =
(bytes[0] ) +
(bytes[1] << 8 ) +
(bytes[2] << 16) +
(bytes[3] << 24);
return v;
}
int main() {
char b1[] = {0xec, 0x51, 0x04, 0x00};
char b2[] = {0x0c, 0x0c, 0x00, 0x00};
printf("%d\n", byteToInt(b1));
printf("%d\n", byteToInt(b2));
printf("%d\n", *(uint32_t *)b1);
printf("%d\n", *(uint32_t *)b2);
return 0;
}
{0xec, 0x51, 0x04, 0x00} should equal 283116, but when I use the byteToInt function it returns, for some reason, 282860. There are some byte arrays that cause similar trouble; I noticed that the value is always off by 256. Still, most cases work without any problems - just take a look at b2, which is calculated as 3084, which is correct. The casting method works perfectly in these cases, but I'd like to know why the described problem happens. Could someone please explain this to me?
Perhaps char is a signed type (it is implementation-defined), and (int)(char)(0xec) is -20, while (int)(unsigned char)(0xec) is 236.
Try to use unsigned char and uint32_t.
uint32_t byteToInt(unsigned char *bytes) {
uint32_t v =
((uint32_t)bytes[0]) +
((uint32_t)bytes[1] << 8) +
((uint32_t)bytes[2] << 16) +
((uint32_t)bytes[3] << 24);
return v;
}
int main() {
unsigned char b1[] = { 0xec, 0x51, 0x04, 0x00 };
unsigned char b2[] = { 0x0c, 0x0c, 0x00, 0x00 };
printf("%u\n", byteToInt(b1)); // 'u' for unsigned
printf("%u\n", byteToInt(b2));
//printf("%u\n", *(uint32_t *)b1); // undefined behavior
//printf("%u\n", *(uint32_t *)b2); // ditto
return 0;
}
Note that reinterpreting memory contents, as done in the two last (commented-out) printfs, is undefined behavior (although it often works in practice).
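If you do want to reinterpret four raw bytes as one 32-bit value, memcpy keeps it well defined; a sketch with a hypothetical bytesToU32 helper (the resulting value still depends on the host's byte order):
#include <stdint.h>
#include <string.h>

uint32_t bytesToU32(const unsigned char *bytes) {
    uint32_t v;
    memcpy(&v, bytes, sizeof v);   /* copies 4 bytes; order depends on the host machine */
    return v;
}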
BTW, shifting signed negative values is undefined according to the standard:
The result of E1 << E2 is E1 left-shifted E2 bit positions; ...
If E1 has a signed
type and nonnegative value, and E1 × 2E2 is representable in the result type, then that is
the resulting value; otherwise, the behavior is undefined.
There are several potential issues with this code. The first is that it is implementation-defined whether char is signed or unsigned. When a char value is used in arithmetic it is promoted to int, and if char is signed, a byte such as 0xec becomes a negative value and corrupts the sum.
It is safer to first cast the values to a 32 bit type before shifting them and adding them. For example:
unsigned long v =
((unsigned long)bytes[0] ) +
((unsigned long)bytes[1] << 8 ) +
((unsigned long)bytes[2] << 16) +
((unsigned long)bytes[3] << 24);
Note that int32_t is not compiler specific: it is the exact-width 32-bit type declared in <stdint.h> (since C99). Plain "int", on the other hand, is implementation-defined; older compilers may make it a 16-bit value, since the standard only requires at least 16 bits. Using "long" instead of "int" guarantees at least a 32-bit value.
Additionally, I used "unsigned long" in the example because I don't think you want to deal with negative numbers in this case. In two's complement representation, negative numbers have the highest bit set (0x80000000 for a 32-bit value).
If you do want to use negative numbers, then the type should be "long" instead, although this opens a different can of worms when adding positive-valued bytes to a negative-valued top byte. In that case it is simpler to assemble the value as unsigned first and then convert the finished 32-bit pattern to a signed value, rather than trying to handle the sign bit during the addition (the details of two's complement representation are probably outside the scope of this question).
The revised code, then would be:
#include <stdio.h>
#include <stdlib.h>
unsigned long byteToInt(unsigned char *bytes) {
unsigned long v =
((unsigned long)bytes[0] ) +
((unsigned long)bytes[1] << 8 ) +
((unsigned long)bytes[2] << 16) +
((unsigned long)bytes[3] << 24);
return v;
}
int main() {
unsigned char b1[] = {0xec, 0x51, 0x04, 0x00};
unsigned char b2[] = {0x0c, 0x0c, 0x00, 0x00};
printf("%lu\n", byteToInt(b1));
printf("%lu\n", byteToInt(b2));
//printf("%lu\n", *(unsigned long *)b1); // undefined behavior, avoid
//printf("%lu\n", *(unsigned long *)b2); // ditto
return 0;
}
Here is a part of the code:
#define GPIO_PORTF_DATA_BITS_R ((volatile unsigned long *)0x40025000)
#define LED_BLUE 0x04
#define LED_GREEN 0x08
#define LED_RED 0x02
GPIO_PORTF_DATA_BITS_R[LED_BLUE | LED_GREEN | LED_RED] = (LED_GREEN | LED_RED)
With the little understanding I have about pointers, it is equivalent to
volatile unsigned long *p = (volatile unsigned long *)0x40025400;
p[0x0E] = 0x0A;
If I am correct, what does p[0x0E] mean or do here?
In C, the indexing operator [] has the following semantics: a[b] means *(a + b), so one of the operands must be a pointer and the other an integer.
Thus, your example means *(0x40025400 + 0xe) = 0xa, i.e. it accesses a register at offset 0xe * sizeof (unsigned long) from the base address 0x40025400. The scaling happens because the pointer points to unsigned long, and pointer arithmetic is always scaled by the size of the pointed-to type.
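The same scaling can be demonstrated with an ordinary array (just an illustration, nothing to do with the actual register map):
#include <stdio.h>

int main(void)
{
    unsigned long regs[16] = {0};
    volatile unsigned long *p = regs;

    /* p[0x0E] and *(p + 0x0E) name the same object: the unsigned long
       located 0x0E * sizeof(unsigned long) bytes past where p points. */
    p[0x0E] = 0x0A;

    printf("offset in bytes: %zu, value: %lu\n",
           0x0E * sizeof(unsigned long), regs[0x0E]);
    return 0;
}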
Agree with #Lundin. The defines LED_BLUE, LED_GREEN and LED_RED are all powers of 2, and LED control typically only needs a bit switched on or off, which implies that these defines are bit masks.
I suggest you need something like the following:
void LED_Red_On(void) {
*GPIO_PORTF_DATA_BITS_R |= LED_RED;
}
void LED_Green_Off(void) {
*GPIO_PORTF_DATA_BITS_R &= ~((unnsigned long)LED_GREEN);
}
...