Understanding use of pointers in an embedded system - C

Here is a part of the code:
#define GPIO_PORTF_DATA_BITS_R ((volatile unsigned long *)0x40025000)
#define LED_BLUE 0x04
#define LED_GREEN 0x08
#define LED_RED 0x02
GPIO_PORTF_DATA_BITS_R[LED_BLUE | LED_GREEN | LED_RED] = (LED_GREEN | LED_RED);
With the little understanding I have of pointers, it is equivalent to
volatile unsigned long *p = (volatile unsigned long *)0x40025000;
p[0x0E] = 0x0A;
If I am correct, what does p[0x0E] mean or do here?

In C, the indexing operator [] has the following semantics: a[b] means *(a + b), so one of a and b must be a pointer and the other an integer.
Thus, your example means *(p + 0xE) = 0xA, i.e. it accesses a register which is at offset 0xE * sizeof(unsigned long) from the base address 0x40025000. The scaling happens because the pointer is to unsigned long, and pointer arithmetic is always scaled by the size of the type being pointed at.

Agree with @Lundin. The defines LED_BLUE, LED_GREEN and LED_RED all being powers of 2, and LED control typically needing only a bit on or off, imply that these defines are bit masks.
I suggest you use something like the following.
void LED_Red_On(void) {
    *GPIO_PORTF_DATA_BITS_R |= LED_RED;
}
void LED_Green_Off(void) {
    *GPIO_PORTF_DATA_BITS_R &= ~((unsigned long)LED_GREEN);
}
...

Related

Bit-wise left shifting a value into a dynamically allocated memory pointed by void*

I am trying to create a CPU map for a Libvirt API function. The idea is that, depending on the number of physical CPUs, I allocate memory of 1, 2 or 4 bytes (supporting up to 32 CPUs). For example, if a platform has 16 CPUs, I allocate 2 bytes.
After memory allocation, I need to set the CPU bit. For example: if I have 16 CPUs in the platform and I want to pin a virtual CPU to the 16th CPU, I need to set the MSB in the memory allocated. See the code snippet below.
void *cpuMap;
switch (pCpus/8) {
case 0:
case 1:
    size = sizeof(unsigned char);
    cpuMap = (unsigned char*) malloc(size);
    break;
case 2:
    size = sizeof(unsigned short int);
    cpuMap = (unsigned short int*) malloc(size);
    break;
case 3:
case 4:
default:
    size = sizeof(unsigned int);
    cpuMap = (unsigned int*) malloc(size);
    break;
}
*cpuMap = 0x1 << cpu_number; //error: invalid use of void expression
I get the below error during compilation
vcpu_scheduler.c:317:3: warning: dereferencing ‘void *’ pointer
317 | *cpuMap = 0x1 << cpu_number;
| ^~~~~~~
vcpu_scheduler.c:317:11: error: invalid use of void expression
317 | *cpuMap = 0x1 << cpu_number;
| ^
make: *** [Makefile:4: compile] Error 1
Any help is appreciated.
You're probably better off just always using unsigned char (or uint8_t) and just allocating multiple bytes (and indexing) when you need more than one:
unsigned char *cpuMap;
cpuMap = malloc((pCpus + 7U)/8U); // round pCpus up to a multiple of 8 bits
memset(cpuMap, 0, (pCpus + 7U)/8U);
cpuMap[cpu_number / 8U] |= 1U << (cpu_number % 8U);
You cannot dereference void* because it is an incomplete type that cannot be completed, by design. One must cast it to an appropriate type before assigning a value through it.
For example:
case 2:
    size = sizeof(unsigned short int);
    cpuMap = malloc(size);
    *(unsigned short int*)cpuMap = 0x1 << cpu_number;
    break;
add similar code for other cases.
Note that after setting it, cpuMap can only be accessed through the type used to initialize it (plus a few exceptions, such as character types) to avoid violating the strict aliasing rule.
It makes no sense to allocate 1 or 2 bytes. Always allocate 4 bytes. Allocating less will not save you any memory, but will complicate the code for no reason.
Also, using dynamic memory allocation for 4 bytes of memory makes little sense: on a 64-bit machine, the pointer in which you store the reference is itself twice that size. :)
To be sure how many bits the type has, use fixed-size integers from standard
uint32_t cpumap = 0;
/* ... */
cpumap |= 0x1UL << cpu_number;
Using void * breaks type safety. Better to use a single type (e.g.) uint8_t -- using uint8_t/uint16_t/uint32_t depending on number of CPUs provides little advantage speedwise and makes the code more complex.
We have to round up the number of CPUs to get the number of bytes.
No real need to use malloc. Just use a fixed size based on maximum possible CPUs (e.g. 32, 64, 128, 256). For your [maximum] use case of 32 CPUs, using four bytes all the time isn't a real issue as to the size of the vector.
When I do bit vectors, I prefer to use macros/functions to isolate the operations.
Here are some macros I've used in the past:
// btv.h -- bit vector primitives
// NOTE: by using 0x80, we are compatible with perl's vec function
typedef unsigned char btv_t;
// number of bytes for vector
#define BTVSIZE(_bit) \
(((_bit) + 7) >> 3)
// index/offset into bit vector
#define BTVOFF(_bit) \
((_bit) >> 3)
// bit vector bit mask
#define BTVMSK(_bit) \
(0x80 >> ((_bit) & 0x07))
// bit vector byte pointer
#define BTVPTR(_sym,_bit) \
(((btv_t *) _sym) + BTVOFF(_bit))
// set bit in vector
#define BTVSET(_sym,_bit) \
*BTVPTR(_sym,_bit) |= BTVMSK(_bit)
// clear bit in vector
#define BTVCLR(_sym,_bit) \
*BTVPTR(_sym,_bit) &= ~BTVMSK(_bit)
// test bit in vector
#define BTVTST(_sym,_bit) \
(*BTVPTR(_sym,_bit) & BTVMSK(_bit))
// set/clear bit in vector
#define BTVFLG(_sym,_bit,_set) \
do { \
btv_t *__btvptr__ = BTVPTR(_sym,_bit); \
btv_t __btvmsk__ = BTVMSK(_bit); \
if (_set) \
*__btvptr__ |= __btvmsk__; \
else \
*__btvptr__ &= ~__btvmsk__; \
} while (0)
Here's a wrapper for functions for use with cpus:
// cpumap.c -- CPU mask primitives
#include "btv.h"
// pick whatever maximum number of CPUs you want ...
#define CPUMAX 256
typedef btv_t cpumap_t[BTVSIZE(CPUMAX)];
void
cpumapset(cpumap_t map,unsigned int cpuno)
{
BTVSET(map,cpuno);
}
void
cpumapclr(cpumap_t map,unsigned int cpuno)
{
BTVCLR(map,cpuno);
}
btv_t
cpumaptst(cpumap_t map,unsigned int cpuno)
{
return BTVTST(map,cpuno);
}
The libvirt API provides macros that are intended for use with the pinning APIs to make this simpler:
unsigned char *cpuMap = malloc(VIR_CPU_MAPLEN(pCpus));
VIR_USE_CPU(cpuMap, cpu_number);

ARM Cortex M4 - C Programming and Memory Access Optimization

The following three lines of code are optimized ways to modify bits with 1 MOV instruction instead of using a less interrupt-safe read-modify-write idiom. They are identical to each other, and set the LED_RED bit in the GPIO Port's Data Register:
*((volatile unsigned long*)(0x40025000 + (LED_RED << 2))) = LED_RED;
*(GPIO_PORTF_DATA_BITS_R + LED_RED) = LED_RED;
GPIO_PORTF_DATA_BITS_R[LED_RED] = LED_RED;
LED_RED is simply (volatile unsigned long) 0x02. In the memory map of this microcontroller, the two least significant bits of this register's address (and of others) are unused, so the left shift by 2 in the first example makes sense.
The definition for GPIO_PORTF_DATA_BITS_R is:
#define GPIO_PORTF_DATA_BITS_R ((volatile unsigned long *)0x40025000)
My question is: How come I do not need to left shift twice when using pointer arithmetic or array indexing (2nd method and 3rd method, respectively)? I'm having a hard time understanding. Thank you in advance.
Remember how C pointer arithmetic works: adding an offset to a pointer operates in units of the type pointed to. Since GPIO_PORTF_DATA_BITS_R has type unsigned long *, and sizeof(unsigned long) == 4, GPIO_PORTF_DATA_BITS_R + LED_RED effectively adds 2 * 4 = 8 bytes.
Note that (1) does arithmetic on 0x40025000, which is an integer, not a pointer, so we need to add 8 to get the same result. Left shift by 2 is the same as multiplication by 4, so LED_RED << 2 again equals 8.
(3) is exactly equivalent to (2) by definition of the [] operator.

Is it possible to cast from a preprocessor define to an array?

I am trying to cast a preprocessor define to an array, but I am not sure if it is possible at all. For example, I have defined number as 0x44332211.
Code below:
#include <stdio.h>
#include <stdint.h>
#define number 0x44332211
int main()
{
    uint8_t array[4] = {(uint8_t)number, (uint8_t)number << 8,(uint8_t)(number <<16 ),(uint8_t)(number <<24)};
    printf("array[%x] \n\r",array[0]); // 0x44
    printf("array[%x] \n\r",array[1]); // 0x33
    printf("array[%x] \n\r",array[2]); // 0x22
    printf("array[%x] \n\r",array[3]); // 0x11
    return 0;
}
and I want to cast it to a uint8_t array[4] where array[0] = 0x44, array[1] = 0x33, array[2] = 0x22, array[3] = 0x11.
Is it possible?
my output:
array[11]
array[0]
array[0]
array[0]
A couple of realizations are needed:
Casting to uint8_t keeps only the least significant byte of the data. Meaning you have to right shift data down into the least significant byte, not left shift data away from it.
0x44332211 is an integer constant, not a "preprocessor". It is of type int and therefore signed. You shouldn't use bitwise operators on signed types. Easily solved by changing to 0x44332211u with unsigned suffix.
Typo here: (uint8_t)number << 8. You should shift then cast. Casts have higher precedence than shift.
#include <stdio.h>
#include <stdint.h>
#define number 0x44332211u
int main()
{
    uint8_t array[4] =
    {
        (uint8_t)(number >> 24),
        (uint8_t)(number >> 16),
        (uint8_t)(number >> 8),
        (uint8_t) number
    };
    printf("array[%x] \n\r",array[0]); // 0x44
    printf("array[%x] \n\r",array[1]); // 0x33
    printf("array[%x] \n\r",array[2]); // 0x22
    printf("array[%x] \n\r",array[3]); // 0x11
    return 0;
}
This is not really a cast in any way. You have defined a constant and compute the values of the array based on that constant. Keep in mind that in this case, the preprocessor simply does a search and replace, nothing clever.
Also, your shift is in the wrong direction. You keep the last (rightmost) 8 bits when casting int to uint8_t, not the first (leftmost) ones.
Yes, you are casting an int to a uint8_t. The only problem is that, when you make the shifts, the result won't fit in the type you are casting to, and information is lost.
Your uint8_t casts just take the least significant byte. That's why you get 11 in the first case and 0 in the others: your shifts to the left leave 0 in the rightmost positions.

What do these macros do?

I have inherited some heavily obfuscated and poorly written PIC code to modify. There are two macros here:
#define TopByteInt(v) (*(((unsigned char *)(&v)+1)))
#define BottomByteInt(v) (*((unsigned char *)(&v)))
Is anyone able to explain what on earth they do and what that means please?
Thanks :)
They access a 16-bit integer variable one byte at a time, allowing access to the most significant and least significant byte halves. Little-endian byte order is assumed.
Usage would be like this:
uint16_t v = 0xcafe;
const uint8_t v_high = TopByteInt(v);
const uint8_t v_low = BottomByteInt(v);
The above would result in v_high being 0xca and v_low being 0xfe. (Note the macros take the variable itself, not its address; they apply & internally.)
It's rather scary code; it would be cleaner to just do this arithmetically:
#define TopByteInt(v) (((v) >> 8) & 0xff)
#define BottomByteInt(v) ((v) & 0xff)
(*((unsigned char *)(&v)))
It takes the address of v (a 16-bit integer), casts it to unsigned char * (8 bits), and dereferences it; doing this you get only the bottom byte.
(*(((unsigned char *)(&v)+1)))
This is the same, but it adds 1 to the address of v, so it gets only the top byte.
It'll only work as expected if v is a 16-bit integer.
Ugg.
Assuming you are on a little-endian platform, that looks like it might meaningfully be rewritten as
#define TopByteInt(v) (((v) >> 8) & 0xff)
#define BottomByteInt(v) ((v) & 0xff)
It is basically taking the variable v, and extracting the least significant byte (BottomByteInt) and the next more significant byte (TopByteInt) from it. 'TopByte' is a bit of a misnomer if v isn't a 16-bit value.

Casting troubles when using bit-banding macros with a pre-cast address on Cortex-M3

TL;DR:
Why isn't (unsigned long)(0x400253FC) equivalent to (unsigned long)((*((volatile unsigned long *)0x400253FC)))?
How can I make a macro which works with the former work with the latter?
Background Information
Environment
I'm working with an ARM Cortex-M3 processor, the LM3S6965 by TI, with their StellarisWare (free download, export controlled) definitions. I'm using gcc version 4.6.1 (Sourcery CodeBench Lite 2011.09-69). Stellaris provides definitions for some 5,000 registers and memory addresses in "inc/lm3s6965.h", and I really don't want to redo all of those. However, they seem to be incompatible with a macro I want to write.
Bit Banding
On the ARM Cortex-M3, a portion of memory is aliased with one 32-bit word per bit of the peripheral and RAM memory space. Setting the memory at address 0x42000000 to 0x00000001 will set the first bit (bit 0) of the memory at address 0x40000000 to 1, but not affect the rest of the word. To change the second bit (bit 1), change the word at 0x42000004 to 1. That's a neat feature, and extremely useful. According to the ARM Technical Reference Manual, the algorithm to compute the alias address is:
bit_word_offset = (byte_offset × 32) + (bit_number × 4)
bit_word_addr = bit_band_base + bit_word_offset
where:
bit_word_offset is the position of the target bit in the bit-band memory region.
bit_word_addr is the address of the word in the alias memory region that maps to the
targeted bit.
bit_band_base is the starting address of the alias region.
byte_offset is the number of the byte in the bit-band region that contains the targeted bit.
bit_number is the bit position, 0 to 7, of the targeted bit
Implementation of Bit Banding
The "inc/hw_types.h" file includes the following macro which implements this algorithm. To be clear, it implements it for a word-based model which accepts 4-byte-aligned words and 0-31-bit offsets, but the resulting address is equivalent:
#define HWREGBITW(x, b) \
HWREG(((unsigned long)(x) & 0xF0000000) | 0x02000000 | \
(((unsigned long)(x) & 0x000FFFFF) << 5) | ((b) << 2))
This algorithm takes the base (which is either in SRAM at 0x20000000 or in the peripheral memory space at 0x40000000) and ORs it with 0x02000000, adding the bit-band base offset. Then it multiplies the byte offset from the base by 32 (equivalent to a five-position left shift) and adds four times the bit number (a two-position left shift).
The referenced HWREG simply performs the requisite cast for writing to a given location in memory:
#define HWREG(x) \
(*((volatile unsigned long *)(x)))
This works quite nicely with assignments like
HWREGBITW(0x400253FC, 0) = 1;
where 0x400253FC is a magic number for a memory-mapped peripheral and I want to set bit 0 of this peripheral to 1. The above code computes (at compile-time, of course) the bit offset and sets that word to 1.
What doesn't work
Unfortunately, the aforementioned definitions in "inc/lm3s6965.h" already perform the cast done by HWREG. I want to avoid magic numbers and instead use provided definitions like
#define GPIO_PORTF_DATA_R (*((volatile unsigned long *)0x400253FC))
An attempt to paste this into HWREGBITW causes the macro to no longer work, as the cast interferes:
HWREGBITW(GPIO_PORTF_DATA_R, 0) = 1;
The preprocessor generates the following mess (indentation added):
(*((volatile unsigned long *)
((((unsigned long)((*((volatile unsigned long *)0x400253FC)))) & 0xF0000000)
| 0x02000000 |
((((unsigned long)((*((volatile unsigned long *)0x400253FC)))) & 0x000FFFFF) << 5)
| ((0) << 2))
)) = 1;
Note the two instances of
(((unsigned long)((*((volatile unsigned long *)0x400253FC)))))
I believe that these extra casts are what is causing my process to fail. The following result of preprocessing HWREGBITW(0x400253FC, 0) = 1; does work, supporting my assertion:
(*((volatile unsigned long *)
((((unsigned long)(0x400253FC)) & 0xF0000000)
| 0x02000000 |
((((unsigned long)(0x400253FC)) & 0x000FFFFF) << 5)
| ((0) << 2))
)) = 1;
The (type) cast operator has right-to-left precedence, so the last cast should apply and an unsigned long used for the bitwise arithmetic (which should then work correctly). There's nothing implicit anywhere, no float to pointer conversions, no precision/range changes...the left-most cast should simply nullify the casts to the right.
My question (finally...)
Why isn't (unsigned long)(0x400253FC) equivalent to (unsigned long)((*((volatile unsigned long *)0x400253FC)))?
How can I make the existing HWREGBITW macro work? Or, how can a macro be written to do the same task but not fail when given an argument with a pre-existing cast?
1- Why isn't (unsigned long)(0x400253FC) equivalent to (unsigned long)((*((volatile unsigned long *)0x400253FC)))?
The former is an integer literal and its value is 0x400253FCul while the latter is the unsigned long value stored in the (memory or GPIO) address 0x400253FC
2- How can I make the existing HWREGBITW macro work? Or, how can a macro be written to do the same task but not fail when given an argument with a pre-existing cast?
Use HWREGBITW(&GPIO_PORTF_DATA_R, 0) = 1; instead.