How to write a byte to a register at a specific memory address? - c

I want to write a byte to a register at a specific memory address (0x1228A432).
But this register has the following structure:
Bits   | Access     | Name     | Reset | Description
[31:8] | Read-only  | --       | --    | Reserved
[7:0]  | Read-write | REG[7:0] | 0xXX  | --
Please tell me, how do I write a byte to this register without "touching" the Reserved bits?
EDIT1: My target is Cortex A9.
I could successfully read/write to onboard DDR2 memory using 8-bit values (such as 0xFF).
EDIT2: I used to work with DDR2 memory in the following way :
// First stage: parse the base address and convert it to a pointer
static unsigned char *p = 0;
char *argv1 = "0x60000000";
unsigned long address = strtoul(argv1, 0, 0);
p = (unsigned char *) address;
// Second stage: parse the hex byte value to write
char *argv4 = "FF";
int value = strtol(argv4, 0, 16);
// Third stage: write the byte at the given offset
int offset = 9;
p[offset] = value;
EDIT3: I found out the following information:
All registers are 32 bits wide and do not support byte writes.
Write operations must be word-wide and bits marked as reserved must be preserved
using read-modify-write.

One way to preserve bits [31:8], assuming 32-bit wide access, is to read the value, zero-out bits [7:0], bitwise-or it with the value needed and then write it back to the register.
Something like (stealing from RedX a bit ;) ):
uint8_t your_8_bit_value = 0x42;
uint32_t volatile * const mem_map_register = (uint32_t volatile *) 0x1228a432;
*mem_map_register = (*mem_map_register & 0xFFFFFF00) | your_8_bit_value;
Yet I think there should be more info available about your hardware. I've seen several datasheets saying, e.g., that you have to write all 1s to reserved bits (meaning that the reserved bits are reserved for future use and 1 is a safe default), etc. So it is not always obvious that leaving reserved bits untouched is the right thing to do.
You should find more details about your hardware: are byte-wide writes supported, are writes to reserved bits perhaps ignored, or should they all be 0/1, etc.

Look in the instruction set reference for an 8-bit store instruction (not sure if it exists on your target). If it does, use a uint8_t for your assignment to that memory location (uint8_t volatile * const reg = (uint8_t volatile *) 0x1228a432;).
Otherwise do what Omkant said. Writing the reserved bits back with the same value that was just read should not produce any unwanted results, since they are never actually changed (not "zeroed") along the way.
His code in C (this is the verbose version for better readability):
uint8_t your_8_bit_value = 0x42;
uint32_t volatile * const mem_map_register = (uint32_t volatile *) 0x1228a432;
uint32_t reg_value = *mem_map_register;  // read the current register contents
reg_value &= 0xFFFFFF00;                 // clear bits [7:0]
reg_value |= your_8_bit_value;           // merge in the new byte
*mem_map_register = reg_value;           // write the whole word back

[register value] = ([register value] | [00 00 00 FF]) & [FF FF FF XX]
Here, XX is the byte you want to write. The OR with 0x000000FF forces bits [7:0] to all ones, and the upper 24 bits of the AND mask are all set, so the bitwise & leaves bits [31:8] untouched while bits [7:0] become XX.
I think this should work
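A minimal C sketch of this OR-then-AND sequence, assuming 32-bit word access is permitted (the helper name is mine; the address is the one from the question):

#include <stdint.h>

void write_low_byte(uint8_t xx)
{
    uint32_t volatile * const reg = (uint32_t volatile *) 0x1228A432;
    uint32_t v = *reg | 0x000000FFu;  /* force bits [7:0] to all ones */
    *reg = v & (0xFFFFFF00u | xx);    /* keep [31:8], set [7:0] to xx */
}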

Related

Converting 32 bit number to four 8bit numbers

I am trying to convert the input from a device (always integer between 1 and 600000) to four 8-bit integers.
For example,
If the input is 32700, I want 188 127 00 00.
I achieved this by using:
32700 % 256
32700 / 256
The above works till 32700. From 32800 onward, I start getting incorrect conversions.
I am totally new to this and would like some help to understand how this can be done properly.
Major edit following clarifications:
Given that someone has already mentioned the shift-and-mask approach (which is undeniably the right one), I'll give another approach, which, to be pedantic, is not portable, machine-dependent, and possibly exhibits undefined behavior. It is nevertheless a good learning exercise, IMO.
For various reasons, your computer represents integers as groups of 8-bit values (called bytes); note that, although extremely common, this is not always the case (see CHAR_BIT). For this reason, values that are represented using more than 8 bits use multiple bytes (hence they use a number of bits which is a multiple of 8). For a 32-bit value, you use 4 bytes and, in memory, those bytes always follow each other.
We call a pointer a value containing the address in memory of another value. In that context, a byte is defined as the smallest (in terms of bit count) value that can be referred to by a pointer. For example, your 32-bit value, covering 4 bytes, will have 4 "addressable" cells (one per byte) and its address is defined as the first of those addresses:
|==================|
| MEMORY | ADDRESS |
|========|=========|
| ... | x-1 | <== Pointer to byte before
|--------|---------|
| BYTE 0 | x | <== Pointer to first byte (also pointer to 32-bit value)
|--------|---------|
| BYTE 1 | x+1 | <== Pointer to second byte
|--------|---------|
| BYTE 2 | x+2 | <== Pointer to third byte
|--------|---------|
| BYTE 3 | x+3 | <== Pointer to fourth byte
|--------|---------|
| ... | x+4 | <== Pointer to byte after
|==================|
So what you want to do (split the 32-bit word into 8-bits word) has already been done by your computer, as it is imposed onto it by its processor and/or memory architecture. To reap the benefits of this almost-coincidence, we are going to find where your 32-bit value is stored and read its memory byte-by-byte (instead of 32 bits at a time).
As all serious SO answers seem to do, let me cite the Standard (ISO/IEC 9899:2018, 6.2.5-20) to define the last thing I need (emphasis mine):
Any number of derived types can be constructed from the object and function types, as follows:
An array type describes a contiguously allocated nonempty set of objects with a particular member object type, called the element type. [...] Array types are characterized by their element type and by the number of elements in the array. [...]
[...]
So, as elements in an array are defined to be contiguous, a 32-bit value in memory, on a machine with 8-bit bytes, really is nothing more, in its machine representation, than an array of 4 bytes!
Given a 32-bit signed value:
int32_t value;
its address is given by &value. Meanwhile, an array of 4 8-bit bytes may be represented by:
uint8_t arr[4];
notice that I use the unsigned variant because those bytes don't really represent a number per se so interpreting them as "signed" would not make sense. Now, a pointer-to-array-of-4-uint8_t is defined as:
uint8_t (*ptr)[4];
and if I assign the address of our 32-bit value to such a pointer, I will be able to index each byte individually, which means that I will be reading the bytes directly, avoiding any pesky shifting-and-masking operations!
uint8_t (*bytes)[4] = (void *) &value;
I need to cast the pointer ("(void *)") to silence the compiler: &value's type is "pointer-to-int32_t" while I'm assigning it to a "pointer-to-array-of-4-uint8_t". This type mismatch is caught by the compiler and pedantically warned against by the Standard; it is a first warning that what we're doing is not ideal!
Finally, we can access each byte individually by reading it directly from memory through indexing: (*bytes)[n] reads the n-th byte of value!
To put it all together, given a send_can(uint8_t) function:
for (size_t i = 0; i < sizeof(*bytes); i++)
    send_can((*bytes)[i]);
and, for testing purpose, we define:
void send_can(uint8_t b)
{
    printf("%hhu\n", b);
}
which prints, on my machine, when value is 32700:
188
127
0
0
Lastly, this shows yet another reason why this method is platform-dependent: the order in which the bytes of the 32-bit word are stored isn't always what you would expect from a theoretical discussion of binary representation, i.e.:
byte 0 contains bits 31-24
byte 1 contains bits 23-16
byte 2 contains bits 15-8
byte 3 contains bits 7-0
actually, AFAIK, the C Language permits any of the 24 possibilities for ordering those 4 bytes (this is called endianness). Meanwhile, shifting and masking will always get you the n-th "logical" byte.
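For reference, here is a self-contained version of the snippets above (the main wrapper is added here for illustration; send_can is the stub defined earlier):

#include <stdint.h>
#include <stdio.h>

void send_can(uint8_t b)
{
    printf("%hhu\n", b);
}

int main(void)
{
    int32_t value = 32700;
    /* Reinterpret the 32-bit object as an array of 4 bytes, in machine (endian-dependent) order. */
    uint8_t (*bytes)[4] = (void *) &value;

    for (size_t i = 0; i < sizeof(*bytes); i++)
        send_can((*bytes)[i]);

    return 0;
}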
It really depends on how your architecture stores an int. For example
8 or 16 bit system: short=16, int=16, long=32
32 bit system: short=16, int=32, long=32
64 bit system: short=16, int=32, long=64
This is not a hard and fast rule - you need to check your architecture first. There is also a long long but some compilers do not recognize it and the size varies according to architecture.
Some compilers have uint8_t etc defined so you can actually specify how many bits your number is instead of worrying about ints and longs.
Having said that, since you wish to convert a number into four 8-bit ints, you could have something like:
unsigned long x = 600000UL; // you need UL to indicate it is unsigned long
unsigned int b1 = (unsigned int)(x & 0xff);
unsigned int b2 = (unsigned int)(x >> 8) & 0xff;
unsigned int b3 = (unsigned int)(x >> 16) & 0xff;
unsigned int b4 = (unsigned int)(x >> 24);
Using shifts is a lot faster than multiplication, division or mod. The byte order depends on the endianness you wish to achieve: you could reverse the assignments, using b1 with the formula for b4, etc.
You could do some bit masking.
600000 is 0x927C0
600000 / (256 * 256) gets you the 9, no masking yet.
(600000 & (255 * 256)) >> 8 gets you the 0x27 == 39. This uses an 8-bit-shifted mask of 8 set bits (255 * 256 == 0xFF00) and a right shift by 8 bits, the >> 8, which would also be possible as another / 256.
600000 % 256 gets you the 0xC0 == 192 as you did it. Masking would be 600000 & 255.
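The same arithmetic written as C expressions (a sketch; the variable names are mine and the value is hard-coded to mirror the steps above):

unsigned long x = 600000UL;                                       /* 0x000927C0  */
unsigned int byte2 = (unsigned int)(x / (256UL * 256UL));         /* 9           */
unsigned int byte1 = (unsigned int)((x & (255UL * 256UL)) >> 8);  /* 0x27 == 39  */
unsigned int byte0 = (unsigned int)(x & 255UL);                   /* 0xC0 == 192 */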
I ended up doing this:
unsigned char bytes[4];
unsigned long n;
n = (unsigned long) sensore1 * 100;
bytes[0] = n & 0xFF;
bytes[1] = (n >> 8) & 0xFF;
bytes[2] = (n >> 16) & 0xFF;
bytes[3] = (n >> 24) & 0xFF;
CAN_WRITE(0x7FD,8,01,sizeof(n),bytes[0],bytes[1],bytes[2],bytes[3],07,255);
I have been in a similar kind of situation while packing and unpacking huge custom packets of data to be transmitted/received. I suggest you try the approach below:
typedef union
{
    uint32_t u4_input;
    uint8_t  u1_byte_arr[4];
} UN_COMMON_32BIT_TO_4X8BIT_CONVERTER;

UN_COMMON_32BIT_TO_4X8BIT_CONVERTER un_t_mode_reg;
un_t_mode_reg.u4_input = input; /* your 32 bit input */
// 1st byte = un_t_mode_reg.u1_byte_arr[0];
// 2nd byte = un_t_mode_reg.u1_byte_arr[1];
// 3rd byte = un_t_mode_reg.u1_byte_arr[2];
// 4th byte = un_t_mode_reg.u1_byte_arr[3];
The largest positive value you can store in a 16-bit signed int is 32767. If you force a number bigger than that, you'll get a negative number as a result, hence unexpected values returned by % and /.
Use either unsigned 16-bit int for a range up to 65535 or a 32-bit integer type.
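For example, switching to a 32-bit unsigned type makes the original divide/modulo approach behave for values past 32767 (a small sketch; the variable name is mine):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t n = 32800;  /* a value that overflows a 16-bit signed int */
    printf("%u %u\n", (unsigned)(n % 256), (unsigned)((n / 256) % 256));  /* prints "32 128" */
    return 0;
}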

Bitshifting vs array indexing, which is more appropriate for usart interfaces on 32bit MCUs

I have an embedded project with a USART HAL. This USART can only transmit or receive 8 or 16 bits at a time (depending on the usart register I chose, i.e. single/double in/out). Since it's a 32-bit MCU, I figured I might as well pass around 32-bit fields, as (from what I have been led to understand) this is a more efficient use of bits for the MPU. The same would apply for a 64-bit MPU, i.e. pass around 64-bit integers. Perhaps that is misguided advice, or advice taken out of context.
With that in mind, I have packed the 8 bits into a 32-bit field via bit-shifting. I do this for both tx and rx on the usart.
The code for the 8-bit only register is as follows (the 16-bit register just has half the amount of rounds for bit-shifting):
int zg_usartTxdataWrite(USART_data* MPI_buffer,
                        USART_frameconf* MPI_config,
                        USART_error* MPI_error)
{
    MPI_error = NULL;

    if(MPI_config != NULL){
        zg_usartFrameConfWrite(MPI_config);
    }

    HPI_usart_data.txdata = MPI_buffer->txdata;

    for (int i = 0; i < USART_TXDATA_LOOP; i++){
        if((USART_STATUS_TXC & usart->STATUS) > 0){
            usart->TXDATAX = (i == 0 ? (HPI_usart_data.txdata & USART_TXDATA_DATABITS)
                                     : (HPI_usart_data.txdata >> SINGLE_BYTE_SHIFT) & USART_TXDATA_DATABITS);
        }
        usart->IFC |= USART_STATUS_TXC;
    }

    return 0;
}
EDIT: RE-ENTERING LOGIC OF ABOVE CODE WITH ADDED DEFINES FOR CLARITY OF TERNARY OPERATOR IMPLICIT PROMOTION PROBLEM DISCUSSED IN COMMENTS SECTION
(the HPI_usart and USART_data structs are the same just different levels, I have since removed the HPI_usart layer, but for the sake of this example I will leave it in)
#define USART_TXDATA_LOOP 4
#define SINGLE_BYTE_SHIFT 8
typedef struct HPI_USART_DATA{
    ...
    uint32_t txdata;
    ...
} HPI_usart;
HPI_usart HPI_usart_data = {'\0'};
const uint8_t USART_TXDATA_DATABITS = 0xFF;
int zg_usartTxdataWrite(USART_data* MPI_buffer,
                        USART_frameconf* MPI_config,
                        USART_error* MPI_error)
{
    MPI_error = NULL;

    if(MPI_config != NULL){
        zg_usartFrameConfWrite(MPI_config);
    }

    HPI_usart_data.txdata = MPI_buffer->txdata;

    for (int i = 0; i < USART_TXDATA_LOOP; i++){
        if((USART_STATUS_TXC & usart->STATUS) > 0){
            usart->TXDATAX = (i == 0 ? (HPI_usart_data.txdata & USART_TXDATA_DATABITS)
                                     : (HPI_usart_data.txdata >> SINGLE_BYTE_SHIFT) & USART_TXDATA_DATABITS);
        }
        usart->IFC |= USART_STATUS_TXC;
    }

    return 0;
}
However, I now realize that this is potentially causing more issues than it solves, because I am essentially encoding these bits internally and they then have to be decoded almost immediately when they are passed to/from different data layers. I feel like it's a clever and sexy solution, but I'm now trying to solve a problem that I shouldn't have created in the first place: how to extract variable bit fields when there is an offset, e.g. in GPS NMEA sentences where the first 8 bits might be one relevant field and the rest are 32-bit fields. So it ends up being like this:
32-bit array member 0:
bits 24-31 | bits 16-23 | bits 8-15 | bits 0-7
| 8-bit Value | 32-bit Value A, bits 24-31 | 32-bit Value A, bits 16-23 | 32-bit Value A, bits 8-15 |
32-bit array member 1:
bits 24-31 | bits 16-23 | bits 8-15 | bits 0-7
| 32-bit Value A, bits 0-7 | 32-bit Value B, bits 24-31 | 32-bit Value B, bits 16-23 | 32-bit Value B, bits 8-15 |
32-bit array member 2:
bits 24-31 | bits 16-23 | bits 8-15 | ...
| 32-bit Value B, bits 0-7 | etc... | .... | .... |
The above example requires manual decoding, which is fine I guess, but it's different for every NMEA sentence and just feels more manual than programmatic.
My question is this: bitshifting vs array indexing, which is more appropriate?
Should I just have assigned each incoming/outgoing value to a 32-bit array member and then just index that way? I feel like that is the solution since it would not only make it easier to traverse the data on other layers, but I would be able to eliminate all this bit-shifting logic and then the only difference between an rx or tx function would be the direction the data is going.
It does mean a small rewrite of the interface and the resulting gps module layer, but that feels like less work and also a cheap lesson early on in my project.
Also any thoughts and general experience on this would be great.
Since it's a 32-bit MCU, I figured I might as well pass around 32-bit fields
That's not really the programmer's call to make. Put the 8 or 16 bit variable in a struct. Let the compiler add padding if needed. Alternatively you can use uint_fast8_t and uint_fast16_t.
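A sketch of the struct suggestion (the type and field names are mine, purely illustrative):

#include <stdint.h>

typedef struct {
    uint8_t  txdata;      /* the 8-bit value you actually care about            */
    uint16_t frame_conf;  /* a 16-bit field; the compiler aligns/pads as needed */
} usart_tx_item;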
My question is this: bitshifting vs array indexing, which is more appropriate?
Array indexing is for accessing arrays. If you have an array, use it. If not, then don't.
While it is possible to chew through larger chunks of data byte by byte, such code must be written much more carefully, to prevent running into various subtle type conversion and pointer aliasing bugs.
In general, bit shifting is preferred when accessing data up to the CPU's word size, 32 bits in this case. It is fast and also portable, so that you don't have to take endianness into account. It is the preferred method of serialization/de-serialization of integers.
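A minimal sketch of what such shift-based serialization looks like (the byte order is fixed by the code, not by the CPU; the function names are mine):

#include <stdint.h>

/* Serialize a 32-bit value into 4 bytes, least significant byte first. */
void u32_to_bytes(uint32_t v, uint8_t out[4])
{
    out[0] = (uint8_t)(v & 0xFF);
    out[1] = (uint8_t)((v >> 8) & 0xFF);
    out[2] = (uint8_t)((v >> 16) & 0xFF);
    out[3] = (uint8_t)((v >> 24) & 0xFF);
}

/* De-serialize: reassemble the 32-bit value from the same 4 bytes. */
uint32_t bytes_to_u32(const uint8_t in[4])
{
    return (uint32_t)in[0]
         | ((uint32_t)in[1] << 8)
         | ((uint32_t)in[2] << 16)
         | ((uint32_t)in[3] << 24);
}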

shifting in C and assembly

Consider the following C code that writes and reads to and from memory mapped I/O.
volatile int* switches = (volatile int*) 0x1b430010;
volatile int* leds = (volatile int*) 0xabbb8050;
int temp = ((*switches) >> 6) & 0x3;
*leds = (*leds & 0xff9f) | XXX;
From the C code above it is possible to derive that some bits are read from the 16-bit switch port and used to turn on or off some LEDs, without affecting the other LEDs on the 16-bit LED port. One missing expression is marked as XXX. Write out what expression XXX should be, so that the LEDs are correctly turned on or off.
The answer is XXX = temp << 5.
When I try to translate this to assembly and calculate in bits, I get temp == 0 after ((*switches) >> 6) & 0x3;, so why is temp << 5 valid here, given that shifting 0 does not make any difference? (Maybe I miscalculated; I can provide all my calculations in bits if necessary.)
0xff9f is 1111111110011111 in binary. The mask zeroes bits 5 and 6 and keeps the others.
temp is a 2-bit value because the code AND-masks with 0x3 (binary 11); all other bits are zeroed. You have to shift it left 5 places so that it lands in bits 5 and 6, where it can be inserted into the final value by | masking.
(The fact that you're getting 0 for temp is either a bug or expected depending on the input data, it doesn't change the answer above)
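Put together, the completed read-modify-write looks like this (a sketch; the addresses are the ones given in the exercise):

volatile int* switches = (volatile int*) 0x1b430010;
volatile int* leds = (volatile int*) 0xabbb8050;

int temp = ((*switches) >> 6) & 0x3;     /* extract switch bits 6 and 7         */
*leds = (*leds & 0xff9f) | (temp << 5);  /* clear LED bits 5 and 6, insert temp */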

Mapping a number to bit position in C

I'm developing a program running on an Atmel AT90CAN128. Connected to this controller there are 40 devices, each with a status (on/off). As I need to report the status of each of these devices to a PC through serial communication, I have 40 bits which define whether a device is on or off. In addition, the PC can turn any of these devices on or off.
So, my first attempt was to create the following struct:
typedef struct {
    unsigned char length;   //!< Data Length
    unsigned data_type;     //!< Data type
    unsigned char data[5];  //!< CAN data array 5 * 8 = 40 bits
} SERIAL_packet;
The problem with this was that the PC will send an unsigned char address telling me the device to turn on/off, so accessing the bit corresponding to that address number turned out to be rather complicated...
So I started looking for options, and I stumbled upon the C99 _Bool type. I thought: great, so now I'll just create a _Bool data[40] and I can access the address bit just by indexing my data array. Turns out that in C (or C++) the smallest addressable unit is a whole byte, so even if I declare a _Bool, its size will be 8 bits, which is a problem (it needs to be as fast as possible, so the more bits I send the slower it gets, and the PC will be expecting 40 bits only) and not very efficient for the communication. So I started looking into bit fields, and tried the following:
typedef struct {
    unsigned char length;   //!< Data Length
    unsigned data_type;     //!< Data type
    arrayData data[40];     //!< Data array 5 bytes == 40 bits
} SERIAL_packet;

typedef struct {
    unsigned char aux : 1;
} arrayData;
And I wonder, is this going to map that data[40] into a contiguous memory block with a size of 40 bits (5 bytes)?
If not, is there any obvious solution I'm missing? This doesn't seem like a very complicated thing to do (it would be much simpler if there were fewer than 32 devices, so I could use an int and just access the bits through a mask).
Assuming the addresses you get back are in the range 0 - 39 and that a char has 8 bits, you can treat your data array as an array of bits:
| data[0] | data[1] ...
-----------------------------------------------------------------
| 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10| 11| 12| 13| 14| 15|
-----------------------------------------------------------------
To set bit i:
packet.data[i/8] |= (1 << (i%8));
To clear bit i:
packet.data[i/8] &= (1 << (i%8)) ^ 0xff;
To read bit i:
int flag = (packet.data[i/8] & (1 << (i%8))) != 0;
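These snippets can be wrapped into small helpers, for example (the function names are mine, not from the original code):

#include <stdint.h>

/* Set, clear or test device bit i (0-39) inside the 5-byte status array. */
void status_set(uint8_t *data, unsigned i)       { data[i / 8] |=  (uint8_t)(1u << (i % 8)); }
void status_clear(uint8_t *data, unsigned i)     { data[i / 8] &= (uint8_t)~(1u << (i % 8)); }
int  status_get(const uint8_t *data, unsigned i) { return (data[i / 8] >> (i % 8)) & 1; }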

Explain this Function

Can someone explain to me the reason why someone would want to use bitwise comparison?
example:
#include <stdio.h>

int f(int x) {
    return x & (x-1);
}

int main(){
    printf("F(10) = %d", f(10));
}
This is what I really want to know: "Why check for common set bits"
x is any positive number.
Bitwise operations are used for three reasons:
You can use the least possible space to store information
You can compare/modify an entire register (e.g. 32, 64, or 128 bits depending on your processor) in a single CPU instruction, usually taking a single clock cycle. That means you can do a lot of work (of certain types) blindingly fast compared to regular arithmetic.
It's cool, fun and interesting. Programmers like these things, and they can often be the differentiator when there is no difference between techniques in terms of efficiency/performance.
You can use this for all kinds of very handy things. For example, in my database I can store a lot of true/false information about my customers in a tiny space (a single byte can store 8 different true/false facts) and then use '&' operations to query their status:
Is my customer Male and Single and a Smoker?
if ((customerFlags & (maleFlag | singleFlag | smokerFlag)) ==
    (maleFlag | singleFlag | smokerFlag))
Is my customer (any combination of) Male Or Single Or a Smoker?
if ((customerFlags & (maleFlag | singleFlag | smokerFlag)) != 0)
Is my customer not Male and not Single and not a Smoker?
if ((customerFlags & (maleFlag | singleFlag | smokerFlag)) == 0)
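A compilable sketch of the flag idea (the flag names and values here are illustrative, not from any real schema):

#include <stdint.h>
#include <stdio.h>

enum {
    maleFlag   = 0x01,
    singleFlag = 0x02,
    smokerFlag = 0x04,
};

int main(void)
{
    uint8_t customerFlags = maleFlag | smokerFlag;  /* 8 true/false facts in one byte */

    /* all three? */
    if ((customerFlags & (maleFlag | singleFlag | smokerFlag)) ==
        (maleFlag | singleFlag | smokerFlag))
        puts("male, single and smoker");

    /* any of the three? */
    if ((customerFlags & (maleFlag | singleFlag | smokerFlag)) != 0)
        puts("at least one of male/single/smoker");

    return 0;
}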
Aside from just "checking for common bits", you can also do:
Certain arithmetic, e.g. value & 15 is a much faster equivalent of value % 16. This only works for certain numbers, but if you can use it, it can be a great optimisation.
Data packing/unpacking. e.g. a colour is often expressed as a 32-bit integer that contains Alpha, Red, Green and Blue byte values. The Red value might be extracted with an expression like red = (value >> 16) & 255; (shift the value down 16 bit positions and then carve off the bottom byte)
Data manipulation and swizzling. Some clever tricks can be achieved with bitwise operations. For example, swapping two integer values without needing to use a third temporary variable, or converting ARGB colour values into another format (e.g RGBA or BGRA)
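For instance, the temp-free swap mentioned above is usually written with XOR (shown here only as a curiosity; a plain temporary is clearer and often faster):

unsigned a = 5, b = 9;
a ^= b;  /* a now holds a XOR b      */
b ^= a;  /* b becomes the original a */
a ^= b;  /* a becomes the original b */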
The Ur-example is "testing if a number is even or odd":
unsigned int number = ...;
bool isOdd = (0 != (number & 1));
More complex uses include bitmasks (multiple boolean values in a single integer, each one taking up one bit of space) and encryption/hashing (which frequently involve bit shifting, XOR, etc.)
The example you've given is kinda odd, but I use bitwise comparisons all the time in embedded code.
I'll often have code that looks like the following:
volatile uint32_t *flags = (volatile uint32_t *) 0x000A000;
bool flagA = *flags & 0x1;
bool flagB = *flags & 0x2;
bool flagC = *flags & 0x4;
It's not a bitwise comparison. It doesn't return a boolean.
Bitwise operators are used to read and modify individual bits of a number.
n & 0x8 // Peek at bit3
n |= 0x8 // Set bit3
n &= ~0x8 // Clear bit3
n ^= 0x8 // Toggle bit3
Bits are used in order to save space. 8 chars takes a lot more memory than 8 bits in a char.
The following example gets the range of an IP subnet, given an IP address of the subnet and the subnet mask of the subnet.
uint32_t mask = ((((255 << 8) | 255) << 8) | 255) << 8 | 255;
uint32_t ip = ((((192 << 8) | 168) << 8) | 3) << 8 | 4;
uint32_t first = ip & mask;
uint32_t last = ip | ~mask;
E.g. if you have a number of status flags, then in order to save space you may want to store each flag as a bit.
So x, if declared as a byte, could hold 8 flags.
I think you mean bitwise combination (in your case a bitwise AND operation). This is a very common operation in those cases where a byte, word or dword value is handled as a collection of bits, e.g. status information, e.g. in SCADA or control programs.
Your example tests whether x has at most 1 bit set: f returns 0 if x is a power of 2 (or zero) and non-zero otherwise.
Your particular example clears the lowest set bit of x, so the result is non-zero exactly when more than one bit in the binary representation is 1.
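A small sketch of the classic use of this expression (assuming x is positive, as stated in the question):

#include <stdbool.h>

/* A positive x is a power of two exactly when clearing its lowest set bit leaves 0. */
bool is_power_of_two(unsigned x)
{
    return x != 0 && (x & (x - 1)) == 0;
}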
