I want to access 32-bit data pointed to by an address held in a hardware register (which is 64 bits wide, with only the 40 LSbs set). So I do:
paddr_t address = read_hw(); // paddr_t is unsigned long long
unsigned int value = *(unsigned int*) address; // error: cast to pointer from integer of different size
unsigned int value2 = (unsigned int) *((paddr_t*) address); // error: cast to pointer from integer of different size
What would be the right way to do this without compiler error (I use -Werror)?
Nominally with C99 the first option is closest to correct,
uint32_t value = *(uint32_t*)address;
However you may also choose to use the other pointer/integer helpers,
uintptr_t address = read_hw();
uint32_t value = *(uint32_t*)address;
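If the address refers to a memory-mapped hardware register, you will usually also want a volatile-qualified pointer so the compiler does not cache or reorder the access. A minimal sketch along those lines (the volatile qualifier and the read_value wrapper are my additions, not part of the original question):
#include <stdint.h>

extern unsigned long long read_hw(void);   /* returns the 64-bit register contents */

uint32_t read_value(void)
{
    uintptr_t address = (uintptr_t)read_hw();          /* integer type wide enough to hold a pointer */
    volatile uint32_t *p = (volatile uint32_t *)address;
    return *p;                                         /* 32-bit read from that address */
}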
I'm not sure I understand the question.
"I want to access a 32-bit data pointed to by an address in a hardware register (which is 64 bits, with only 40 LSb's set)."
So you have a hardware register 64 bits wide, the least significant 40 of which should be interpreted as an address in memory which contains 32 bits of data?
Can you try
uint32_t* pointer = (uint32_t *)(uintptr_t)((*(uint64_t *) register_address) & (~0ULL >> 24));
uint32_t value = *pointer;
Although this might get more complicated depending on endian-ness and whether the compiler interprets >> as a logical or arithmetic right-shift.
Although, really, I want to ask,
Does "I am using a cross-compiler, I don't have the luxury of printf" mean you can't actually run your code, or just that you have to do it some some hardware that lacks a convenient output channel?
What is your target architecture, that your pointers are 40 bits long?!
From what you have written, you have a 64-bit value of which only 40 bits are the pointer, and that pointer points to some data that is 32 bits in size.
Your code seems to be trying to mangle the 40-bit pointer into a 32-bit pointer.
What you should be doing is &'ing the relevant 40 bits within the 64-bit value so that it remains a 64-bit pointer, and then using that to access the data, which you can then similarly & to get the data. Otherwise you are (as the errors indicate) truncating the pointer.
Something like (I don't have 64 bit so I can't test this, but you get the idea):
address = address & 0x????????????????; // use the ?s to mask off the bits you
// want to ignore
value64 = *address; // value64 is 64 bits
value32 = (int)(value64 & 0x00000000ffffffff); // if the data is in the lower
// half of value64
or
value32 = (int)((value64 & 0xffffffff00000000) >> 32); // if the data is in the
                                                        // higher half of value64
where the ?'s are masking the bits as needed (depending on the endianness that you are working with).
You'll probably also need to change the (int) casts to suit (you want to instead cast it to whatever 32 bit data type the data represents - ie. the type of value32).
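A concrete version of that idea, as a sketch: it assumes the low 40 bits of the register hold the address (mask 0xFFFFFFFFFF), that the masked address is representable as a pointer on the target, and that the 32-bit data sits directly at that address:
#include <stdint.h>

uint32_t read_value(uint64_t raw_register)
{
    uint64_t address = raw_register & 0xFFFFFFFFFFULL;   /* keep only the low 40 bits */
    /* cast through uintptr_t; assumes the masked address fits in a pointer */
    const volatile uint32_t *p = (const volatile uint32_t *)(uintptr_t)address;
    return *p;
}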
check the real sizes for your pointers and paddr_t type:
printf("paddr_t size: %d, pointer size: %d\n",
sizeof(paddr_t), sizeof(unsigned int *));
what do you get?
update:
ARM is a 32-bit architecture, so you are trying to convert a 64-bit integer into a 32-bit pointer and your compiler doesn't like it!
If you are sure that the value in paddr_t fits in a 32-bit pointer you can just cast it down to a 32-bit integer first:
unsigned int *p = (unsigned int *)(unsigned int)addrs;
I'm a bit troubled by this code:
typedef struct _slink{
    struct _slink* next;
    char type;
    void* data;
};
assuming what this describes is a link in a file, where data is 4 bytes long, representing either an address or an integer (depending on the type of the link)
Now I'm looking at reformatting numbers in the file from little-endian to big-endian, so what I want to do is change the order of the bytes before writing back to the file, i.e.
for 0x01020304, I want to convert it to 0x04030201 so that when I write it back, its little-endian representation will look like the big-endian representation of 0x01020304. I do that by multiplying the i'th byte by 2^(8*(3-i)), where i is between 0 and 3. Now this is one way it was implemented, and what troubles me here is that this is shifting bytes by more than 8 bits.. (L is of type _slink*)
int data = (((unsigned char*)&L->data)[0]<<24) + (((unsigned char*)&L->data)[1]<<16) +
           (((unsigned char*)&L->data)[2]<<8) + (((unsigned char*)&L->data)[3]<<0);
Can anyone please explain why this actually works, without these bytes having been explicitly cast to integers to begin with (since they're only 1 byte each but are shifted by up to 24 bits)?
Thanks in advance.
Any integer type smaller than int is promoted to type int when used in an expression.
So the shift is actually applied to an expression of type int instead of type char.
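A small illustration of that promotion (not from the original post):
#include <stdio.h>

int main(void)
{
    unsigned char byte = 0x12;
    /* "byte" is promoted to int before the shift, so the result has the width of int */
    printf("%zu %zu\n", sizeof byte, sizeof (byte << 24));   /* typically prints: 1 4 */
    printf("%x\n", (unsigned)(byte << 24));                  /* prints: 12000000 */
    return 0;
}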
Can anyone please explain why this actually works?
The shift does not occur on an unsigned char but on a value promoted to type int [1]. @dbush.
Reasons why code still has issues.
32-bit int
Shifting a 1 into the sign bit is undefined behavior (UB). See also @Eric Postpischil.
(((unsigned char*)&L->data)[0]<<24) // UB
16-bit int
Shifting by the bit width or more is undefined even if the type were unsigned; as int it is UB like above. Perhaps OP would then only have wanted a 2-byte endian swap?
Alternative
const uint8_t *p = (const uint8_t *)&L->data;
uint32_t data = (uint32_t)p[0] << 24 | (uint32_t)p[1] << 16 | //
(uint32_t)p[2] << 8 | (uint32_t)p[3] << 0;
For the pedantic
Had int used non-2's complement, the addition of a negative value from (((unsigned char*)&L->data)[0]<<24) would have messed up the data pattern. Endian manipulations are best done using unsigned types.
from little-endian to big-endian
This code does not swap between those 2 endians. It is a big-endian to native-endian conversion. When this code is run on a little-endian machine with a 32-bit unsigned, it is effectively a big/little swap. On a big-endian machine with a 32-bit unsigned, it could have been a no-op.
[1] ... or possibly unsigned int on select platforms where UCHAR_MAX > INT_MAX.
#include <stdio.h>
#define READ8(Address) \
(*((volatile long *)(Address)))
int main()
{
int Array[2];
long out_value;
Array[0] = 55;
Array[1] = 66;
out_value = READ8(&Array[0]);
printf("%d\n", out_value);
}
I am trying to read 8-bit, 16-bit and 32-bit data and store it in the out_value variable. I am changing the size of the read by changing the data type in the macro (int / long), but every time the output printed is 55 only.
I want to print 55 and also 5566.
Casting between pointer types is not portable, and will break horribly due to unaligned access or to compiler optimisations. The (reasonably) portable way is to use memcpy:
unsigned int a[1000] = {...};
unsigned long long x;
memcpy(&x, &a[57], sizeof(unsigned long long));
Don't worry about the extra function call — both gcc and clang will recognise this pattern and optimise the call to memcpy away.
Obviously what you were after wasn't meant to be portable. Your observations revealed that int and long have the same size (most probably 4 bytes) in your environment, and are stored in little-endian byte order. You seem to think sizeof(int) is 2 when you define int Array[2]; if you want a data type with a certain size, you should use the standard types from <stdint.h>; for 16 bits this would be int16_t instead of int.
I want to print 55 and also 5566
If you read the two consecutive 16-bit words (int16_t []){55, 66} into a 32-bit long on your controller, the resulting value cannot be 5566, but rather 4325431 (55 + 66 × 0x10000).
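To see that arithmetic, a minimal example (assuming a little-endian host):
#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    int16_t array[2] = {55, 66};
    uint32_t combined;

    memcpy(&combined, array, sizeof combined);   /* reinterpret two 16-bit words as 32 bits */
    printf("%u\n", (unsigned)combined);          /* 4325431 on a little-endian host (55 + 66 * 0x10000) */
    return 0;
}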
I'm attempting to write a Gameboy emulator in C, and am currently in the process of deciding how to implement the following behavior:
Two 8-bit registers can be combined and treated as a single 16-bit register
changing the value of one of the 8-bit registers in the pairing should change the value of the combined register
For example, registers A and F, which are 8-bit registers, can be used jointly as the 16-bit register AF. However, when the contents of registers A and F change, these changes should be reflected in subsequent references to register AF.
If I implement register AF as a uint16_t*, can I store the contents of registers A and F as uint8_t*'s pointing to the first and second byte of register AF respectively? If not, any other suggestions would be appreciated :)
EDIT: Just to clarify, this is a very similar architecture to the Z80
Use a union.
union b
{
uint8_t a[2];
uint16_t b;
};
The members a and b share the bytes. When a value is written in member a and then read using member b the value is reinterpreted in this type. This could be a trap representation, which would cause undefined behavior, but types uint8_t and uint16_t don't have them.
Another issue is endianness, writing into the first element of member a will always change the first byte of member b, but depending on endianness that byte might represent most or least significant bits of b, so the resulting value will differ over architectures.
To avoid trap representations and endianness issues, it is better to use only the type uint16_t and write into it using bitwise operations. For example, to write into the most significant 8 bits:
uint16_t a = 0;
uint8_t b = 200;
a = ( uint16_t )( ( ( unsigned int )a & 0xFF ) | ( ( unsigned int )b << 8 ) ) ;
and similarly for the least significant 8 bits:
a = ( uint16_t )( ( ( unsigned int )a & 0xFF00 ) | ( unsigned int )b );
These operations should be put into a function.
The casts to ( unsigned int ) are there to avoid trouble from integer promotion. If INT_MAX equals 2^15-1, the promoted operation b << 8 could produce a value that does not fit in int, which is technically undefined behavior.
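One possible shape for such functions (a sketch; the names are made up for the example):
#include <stdint.h>

/* write the most significant 8 bits (e.g. register A of AF) */
static void set_high_byte(uint16_t *reg, uint8_t value)
{
    *reg = (uint16_t)(((unsigned int)*reg & 0x00FFu) | ((unsigned int)value << 8));
}

/* write the least significant 8 bits (e.g. register F of AF) */
static void set_low_byte(uint16_t *reg, uint8_t value)
{
    *reg = (uint16_t)(((unsigned int)*reg & 0xFF00u) | (unsigned int)value);
}

/* read the halves back */
static uint8_t get_high_byte(uint16_t reg) { return (uint8_t)(reg >> 8); }
static uint8_t get_low_byte(uint16_t reg)  { return (uint8_t)(reg & 0xFFu); }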
You could solve it like this:
volatile uint16_t afvalue = 0x55AA; // Needs long lifetime.
volatile uint16_t *afptr = &afvalue;
volatile uint8_t *aptr = (uint8_t *)afptr;
volatile uint8_t *fptr = aptr + 1;
if (*aptr == 0xAA) { // Take endianness into account.
aptr++; // Swap the pointers.
fptr--;
}
afvalue = 0x0000;
Now aptr points to the high bits and fptr points to the low bits, no matter if the emulator runs on a little endian or big endian machine.
Note: since these pointer types are different, a compiler might not be aware that they point to the same memory, thereby optimizing the code so that a write to *afptr is not seen by a later read from *aptr. Therefore the volatile.
You could of course use pointers to point anywhere you want.
#include <stdio.h>
#include <stdint.h>
int main() {
uint16_t a=1048;
uint8_t *ah=(uint8_t*)&a,*al=((uint8_t*)&a)+1;
printf("%u,%u,%u\n",a,*ah,*al);
}
You would need to take care of endianness as mentioned in the comments (the Game Boy stores 16-bit values little endian, and so does x86, but other host architectures may differ).
And of course, as most people recommend, you should use a union, which would make your code less error prone than manual pointer address calculations.
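For instance, a union-based sketch for the AF pair; it assumes a little-endian host (swap the two byte members on a big-endian one), and all names are illustrative:
#include <stdint.h>
#include <stdio.h>

/* AF register pair: A is the high byte, F the low byte.
   On a little-endian host the low byte (F) comes first in memory. */
typedef union {
    uint16_t af;
    struct {
        uint8_t f;   /* swap f and a on a big-endian host */
        uint8_t a;
    } bytes;
} reg_pair;

int main(void)
{
    reg_pair r;
    r.af = 0x1234;
    printf("A=%02X F=%02X\n", (unsigned)r.bytes.a, (unsigned)r.bytes.f);   /* A=12 F=34 */
    r.bytes.a = 0xAB;
    printf("AF=%04X\n", (unsigned)r.af);                                   /* AF=AB34 */
    return 0;
}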
I am learning c. As an experiment I am trying to implement 128-bit addition and multiplication.
I want to do this using a void pointer as follows:
void* x = malloc(16);
memset(x, 0, 16);
Is it possible to read and write the ith bit of x? I understand how to do this using bitwise operations on standard data types, but the compiler complains about invalid operands when I try to use bitwise operations on a void pointer. In general, I am interested in whether it is possible to malloc an arbitrarily sized block of memory and then manipulate each individual bit in C?
The C standard is surprisingly (although logically) inflexible when it comes to casting of pointer types, although you can cast the result of malloc to any pointer type you want.
In order to perform bit and byte analysis, a safe type to use is char* or unsigned char*:
unsigned char* x = malloc(16);
(I'd always pick the unsigned char since some platforms have a 1's complement signed char type which, crudely put, is a pain in the neck).
Then you can use pointer arithmetic on x to manipulate the memory block on a byte by byte basis. Note that the standard states that sizeof(char) and sizeof(unsigned char) is 1, so pointer arithmetic on such a type will traverse the entire memory block without omitting any bits of it.
As for examining bit by bit, the standards-defined CHAR_BIT will tell you how many bits there are in a byte. (It's normally 8 but it would be unwise to assume that.)
Finally, don't forget to call free(x) at some point.
To set a bit at position i:
*((uint8_t*)x + i/8) |= (1 << (i%8));
To clear it:
*((uint8_t*)x + i/8) &= ~(1 << (i%8));
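Building on that representation, a sketch of 128-bit addition over the 16-byte buffer; it assumes the bytes are stored least significant first:
#include <stdint.h>

/* result = a + b, where all three are 16-byte numbers, least significant byte first */
void add128(uint8_t *result, const uint8_t *a, const uint8_t *b)
{
    unsigned carry = 0;
    for (int i = 0; i < 16; i++) {
        unsigned sum = (unsigned)a[i] + (unsigned)b[i] + carry;
        result[i] = (uint8_t)(sum & 0xFF);
        carry = sum >> 8;
    }
    /* a non-zero carry here means the 128-bit result overflowed */
}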
I was wondering if you could define a type in bits.
Specifically, I want to define a 24 bits type, in order to store the cumulative number of package lost in RTP.
If not, how can I memcpy 3 bytes from an int?
If I do this, I'm not sure how it'll end:
memcpy(pkg + 29, (&clamped_pkgs_lost)+(sizeof(char)), 3*sizeof (char));
You can define a type with at least 24 bits using a bitfield, but a bitfield must be a member of a struct:
struct {
unsigned pkgs_lost: 24;
};
Whether you use such a bitfield, or just a simple type with at least 24 bits like unsigned long to store the value within your application, when you copy it to the RTP packet the simplest portable way to do it is to copy it a byte at a time. This is because the value in the RTP packet is always big-endian, and the endianness of your host is unknown.
Assuming that pkg is of type unsigned char *, you would do something like:
pkg[33] = pkgs_lost >> 16;
pkg[34] = pkgs_lost >> 8;
pkg[35] = pkgs_lost;
to place the 24-bit big endian number at byte position 33 in the outgoing packet.
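Reading the value back out of a received packet is the mirror image (a sketch, assuming pkg is an unsigned char * as above):
unsigned long pkgs_lost = ((unsigned long)pkg[33] << 16)
                        | ((unsigned long)pkg[34] << 8)
                        |  (unsigned long)pkg[35];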
In C you can define integer types only in terms of the fundamental types or bitfields thereof.
Bitfields are quirky. You can't take their address. And they won't save you any space if you need just 24 bits, but your platform only has fundamental types of 8, 16 and 32 bits. You'd still need to use either 3 8-bit integers or 1 32-bit integer (or 1 16-bit and 1 8-bit) to store those 24 bits of yours.
For something as simple as a counter, I'd just use a 32-bit integer. If I'm interested in limiting it to 24 bit values, I have two options:
zeroing out the 8 most significant bits and thus simulating a wrap around
limiting the value to 2^24-1, so it never grows beyond it nor wraps around (both options are sketched below)
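Both options might look roughly like this (a sketch; the names are made up):
#include <stdint.h>

uint32_t pkgs_lost = 0;

/* option 1: let the counter wrap around at 24 bits */
void count_lost_wrap(void)
{
    pkgs_lost = (pkgs_lost + 1) & 0xFFFFFFu;   /* zero out the top 8 bits */
}

/* option 2: saturate at 2^24 - 1 so it never wraps */
void count_lost_saturate(void)
{
    if (pkgs_lost < 0xFFFFFFu)
        pkgs_lost++;
}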
You can store a narrow integer in a larger integer. Just mask off the bits you want.
int main() {
    long data = 0x123456;            /* example value */
    long masked = data & 0xFFFFFF;   /* keep only the low 24 bits */
    (void)masked;                    /* silence unused-variable warnings */
}
Or, you can define a bitfield on a structure member. But don't try to write the struct to disk and open it on a different system because bitfield layouts are not standardized.
struct {
long data:24;
};
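A small usage sketch of such a bit-field; it uses unsigned rather than long, since bit-field types other than int/unsigned int are only implementation-defined, and the struct tag is made up:
#include <stdio.h>

struct counter {
    unsigned data : 24;   /* 24-bit member */
};

int main(void)
{
    struct counter c = { 0xFFFFFF };   /* maximum 24-bit value */
    c.data += 1;                       /* wraps around to 0 (modulo 2^24) */
    printf("%u\n", (unsigned)c.data);  /* prints 0 */
    return 0;
}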