Is there an integer type with the same size as pointer? Guaranteed on all microarchitectures?
According to this Wikipedia page, in C99 your stdint.h header might declare intptr_t and uintptr_t, but then that of course requires
C99
A compiler implementor who has chosen to implement this optional part of the standard
So in general I think this one is tough.
Simply put, no. Not guaranteed on all architectures.
My question is: why? If you want to allocate a type big enough to store a void*, the best thing to allocate is (surprisingly enough :-) a void*. Why is there a need to fit it within an int?
EDIT: Based on your comments to your duplicate question, you want to store special values of the pointer (1,2,3) to indicate extra information.
NO!! Don't do this! There is no guarantee that 1, 2 and 3 aren't perfectly valid pointers. They may be invalid on systems that require pointers to be aligned on 4-byte boundaries but, since you asked about all architectures, I'm assuming you hold portability as a high value.
Find another way to do it that's correct. For example, use the union (syntax from memory, may be wrong):
typedef struct {
    int isPointer;
    union {
        int intVal;
        void *ptrVal;
    } val;
} myType;
Then you can use the isPointer 'boolean' to decide if you should treat the union as an integer or pointer.
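For illustration, a minimal sketch of how the tagged union above might be used (it relies on the myType definition above, with the union member named val):

#include <stdio.h>

int main(void) {
    myType v;

    v.isPointer = 0;      /* mark the union as holding an integer */
    v.val.intVal = 3;

    if (v.isPointer)
        printf("pointer: %p\n", v.val.ptrVal);
    else
        printf("integer: %d\n", v.val.intVal);

    return 0;
}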
EDIT:
If execution speed is of prime importance, then the typedef solution is the way to go. Basically, you'll have to define the integer type you want for each platform you run on. You can do this with conditional compilation. I would also add a runtime check to ensure you've compiled correctly for each platform, thus (I'm defining the macro in the source here, but you would normally pass it as a compiler flag, like "cc -DPTRINT_INT"):
#include <stdio.h>

#define PTRINT_SHORT

#ifdef PTRINT_SHORT
typedef short ptrint;
#endif
#ifdef PTRINT_INT
typedef int ptrint;
#endif
#ifdef PTRINT_LONG
typedef long ptrint;
#endif
#ifdef PTRINT_LONGLONG
typedef long long ptrint;
#endif

int main(void) {
    if (sizeof(ptrint) != sizeof(void*)) {
        printf("ERROR: ptrint typedef doesn't match void* for this platform.\n");
        printf("  sizeof(void*    ) = %zu\n", sizeof(void*));
        printf("  sizeof(ptrint   ) = %zu\n", sizeof(ptrint));
        printf("  =================\n");
        printf("  sizeof(short    ) = %zu\n", sizeof(short));
        printf("  sizeof(int      ) = %zu\n", sizeof(int));
        printf("  sizeof(long     ) = %zu\n", sizeof(long));
        printf("  sizeof(long long) = %zu\n", sizeof(long long));
        return 1;
    }
    /* rest of your code here */
    return 0;
}
On my system (Ubuntu 8.04, 32-bit), I get:
ERROR: ptrint typedef doesn't match void* for this platform.
sizeof(void* ) = 4
sizeof(ptrint ) = 2
=================
sizeof(short ) = 2
sizeof(int ) = 4
sizeof(long ) = 4
sizeof(long long) = 8
In that case, I'd know I needed to compile with PTRINT_INT (or PTRINT_LONG). There may be a way of catching this at compile time with #if, but I couldn't be bothered researching it at the moment (see the sketch below for one alternative). If you strike a platform where there's no integer type sufficient for holding a pointer, you're out of luck.
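For what it's worth, sizeof can't be evaluated in a #if, but the classic negative-array-size trick gives a compile-time check; a sketch, assuming the ptrint typedef from the code above:

/* compilation fails with an "array has negative size" error if ptrint is the wrong width */
typedef char ptrint_size_check[(sizeof(ptrint) == sizeof(void *)) ? 1 : -1];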
Keep in mind that using special pointer values (1, 2, 3) to represent integers may also not work on all platforms - these may actually be valid memory addresses.
Still, if you're going to ignore my advice, there's not much I can do to stop you. It's your code after all :-). One possibility is to check all your return values from malloc and, if you get 1, 2 or 3, just malloc again (i.e., have a mymalloc() which does this automatically, as sketched below). This'll be a minor memory leak but it'll guarantee no clashes between your special pointers and real pointers.
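If you do go down that road anyway, a rough sketch of the idea (mymalloc is just a hypothetical wrapper name):

#include <stdlib.h>

/* Hypothetical wrapper: keeps allocating until the result doesn't collide with
   the special values 1, 2 and 3; the skipped blocks are the "minor memory leak"
   mentioned above. */
void *mymalloc(size_t n) {
    void *p = malloc(n);
    while (p == (void *)1 || p == (void *)2 || p == (void *)3)
        p = malloc(n);
    return p;
}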
The C99 standard defines standard int types:
7.18.1.4 Integer types capable of holding object pointers
The following type designates a signed integer type with the property that any valid pointer to void can be converted to this type, then converted back to pointer to void, and the result will compare equal to the original pointer:
intptr_t
The following type designates an unsigned integer type with the property that any valid pointer to void can be converted to this type, then converted back to pointer to void, and the result will compare equal to the original pointer:
uintptr_t
These types are optional.
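A minimal illustration of that round-trip guarantee, assuming the implementation provides uintptr_t:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    int x = 42;
    void *p = &x;
    uintptr_t n = (uintptr_t)p;   /* pointer -> integer */
    void *q = (void *)n;          /* integer -> pointer */

    printf("%s\n", (p == q) ? "round trip OK" : "round trip failed");
    return 0;
}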
C99 also defines size_t and ptrdiff_t:
The types are
ptrdiff_t
which is the signed integer type of the result of subtracting two pointers;
size_t
which is the unsigned integer type of the result of the sizeof operator; and
The architectures I've seen have the maximum size of an object equal to the whole memory, so sizeof(size_t) == sizeof(void*), but I'm not aware of anything that is both portable to C89 (which size_t is) and guaranteed to be large enough (which uintptr_t is).
This would be true on a standard 32 bit system, but there certainly are no guarantees, and you could find lots of architectures where it isn't true. For example, a common misconception is that sizeof(int) on x86_64 would be 8 (since it's a 64 bit system, I guess), which it isn't. On x86_64, sizeof(int) is still 4, but sizeof(void*) is 8.
The standard solution to this problem is to write a small program which checks the sizes of all int types (short int, int, long int) and compares them to void*. If there is a match, it emits a piece of code which defines the intptr type. You can put this in a header file and use the new type.
It's simple to include this code in the build process (using make, for example); a sketch follows.
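A rough sketch of such a generator (the file name gen_intptr.c and the emitted type name intptr are made up for the example):

/* gen_intptr.c - run at build time and redirect the output to intptr.h */
#include <stdio.h>

int main(void) {
    if (sizeof(void *) == sizeof(short))
        puts("typedef short intptr;");
    else if (sizeof(void *) == sizeof(int))
        puts("typedef int intptr;");
    else if (sizeof(void *) == sizeof(long))
        puts("typedef long intptr;");
    else if (sizeof(void *) == sizeof(long long))
        puts("typedef long long intptr;");
    else {
        fprintf(stderr, "no integer type matches void* on this platform\n");
        return 1;
    }
    return 0;
}

A make rule can then run the generator and redirect its output to intptr.h before anything that includes it is compiled.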
No, the closest you will come to a portable pointer-capable integer type would be intptr_t and ptrdiff_t.
No.
I do not believe the C standard even specifies standard int sizes. Combine that with all the architectures out there (8/16/32/64bit etc) and there is no way to guarantee anything.
The int data type would be the answer on most architectures.
But there is NO guarantee of this for ANY (micro)architecture.
The answer seems to be "no", but if all you need is a type that can act as both, you can use a union:
union int_ptr_t {
    int i;
    void *p;
};
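A short usage sketch, using the union defined above; note that you still have to remember which member you stored last, the union doesn't track it for you:

void example(void)
{
    union int_ptr_t v;

    v.i = 42;     /* v holds an int for now */
    v.p = &v;     /* v now holds a pointer; v.i is no longer meaningful */
}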
Usually sizeof(void*) depends on memory bus width (although not necessarily - the pre-RISC AS/400 had a 48-bit address bus but 64-bit pointers), and an int is usually as big as the CPU's general-purpose register (there are also exceptions - SGI C used 32-bit ints on 64-bit MIPS).
So there is no guarantee.
Related
#define NUMOFDMABUF 20
u8 *data[NUMOFDMABUF];

int main()
{
    for (u32 i = 0; i < NUMOFDMABUF; i++)
    {
        data[i] = (void *)(RX_BUFFER_BASE + (i * MAX_PKT_LEN)); // -Wint-to-pointer-cast
    }
}
This is for a bare metal application and I want to explicitly set memory addresses of RAM locations in an array of pointers. The code works, but how do I write the code to avoid this warning?
If I explicitly set the address as a value, no issues... but when I do it this way it throws the warning. I am brain farting, so thoughts appreciated :)
The text of the warning produced by gcc is more descriptive.
warning: cast to pointer from integer of different size [-Wint-to-pointer-cast]
The OP mentions in the comments how RX_BUFFER_BASE is defined:
#define RX_BUFFER_BASE 0x40000000
That macro is later textually expanded in the line
data[i] = (void *)(RX_BUFFER_BASE + (i * MAX_PKT_LEN));
The type of i is the non-standard u32; I'll assume it to be a 32-bit unsigned integer. MAX_PKT_LEN is undisclosed, but I'll assume it is defined like RX_BUFFER_BASE, as a plain integer constant.
Now, the type of an integer constant is the first type in which the value can fit¹, so 0x40000000 could be an int, an unsigned int, a long or an unsigned long depending on the implementation - in fixed-width terms a 32-bit type such as uint32_t, but not a uint64_t.
Using an integer suffix, like 0x40000000ull, would have forced it to be an unsigned long long.
We can assume² that the other variables in that expression have the same type, while pointers, in the OP's implementation, have a wider representation. Hence the warning.
The asker could have used a type specifically designed for this kind of pointer arithmetic, like uintptr_t, which is an unsigned integer type capable of holding a pointer to void:
#include <stdint.h>
const uintptr_t buffer_base = 0x40000000;
1) C17 standard (ISO/IEC 9899:2018): 6.4.4.1 Integer constants (p: 45-46) or see e.g. https://en.cppreference.com/w/c/language/integer_constant#The_type_of_the_integer_constant
2) Sorry, that's a lot of assumptions, I know, but it's not entirely my fault.
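Putting that together, a sketch of how the original loop might look with uintptr_t (the value of RX_BUFFER_BASE is taken from the question; MAX_PKT_LEN and the exact fixed-width types are assumptions):

#include <stdint.h>

#define NUMOFDMABUF    20
#define RX_BUFFER_BASE ((uintptr_t)0x40000000u)  /* base address from the question */
#define MAX_PKT_LEN    1536u                     /* assumed value, not given in the question */

uint8_t *data[NUMOFDMABUF];

int main(void)
{
    for (uint32_t i = 0; i < NUMOFDMABUF; i++)
    {
        /* the arithmetic happens in uintptr_t, which is as wide as a pointer,
           so the final cast no longer changes size and no warning is issued */
        data[i] = (uint8_t *)(RX_BUFFER_BASE + i * MAX_PKT_LEN);
    }
    return 0;
}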
how do i write the code to avoid the occurrence of this warning?
This may-or-may-not work in all environments.
It is just a proposal that may work for this OP.
I haven't done "bare metal" stuff to any serious degree.
But this compiles on my old computer without warnings...
#include <stdint.h>
#include <string.h>

int main() {
    uint8_t *d[20];
    memset( d, 0, sizeof d );                       /* start clean: all pointers all-bits-zero */
    for( uint32_t i = 0; i < sizeof d/sizeof d[0]; i++ )
        d[i] += 0x2468 + ( i * 256 );               /* NB "+=" : pointer arithmetic, no int-to-pointer cast */
    return 0;
}
I'm not going to go anywhere near dereferencing any of those pointers though.
I have a function foo(void* pBuf). I need to pass it a 64 bit address but I can't seem to get the right typecast when I'm passing by value.
Example: foo(address), where uint64_t address = 0x00000000DEADBEEF.
EDIT: Compiling using an ARM compiler.
uint64_t foo(void *pBuf){
    uint64_t retAddr = (uint64_t) pBuf;
    retAddr += 0x100000;
    return retAddr;
}
I'm on a 32-bit ARM and sizeof(void *) is 4
Clarification: Why I needed a 64-bit address on a 32-bit ARM?
Because my memory map uses 36-bit addressing.
Call it this way:
uint64_t address = 0xDEADBEEF;
foo((void*)address);
That is, you cast the address to a void-pointer to be compatible with the function signature.
Sorry to necro this question, but none of these answers seem reasonable to me. This is a fairly straightforward type conversion problem. It seems as though people were caught up on 64-bit addressing on a 32-bit system, when this could easily be for a peripheral or some other address space besides the system itself.
In the OP's case, a cast directly to uint64_t is questionable because of the additional four bytes that do not exist in void *. In the case of the M4 calling convention, p would typically be passed in a single register, likely r0. There are no additional upper bytes for uint64_t to alias, so your compiler is rightly issuing a warning for this.
Under the GCC 7.3 arm-none-eabi port, void * can be safely cast to size_t (aka unsigned int) because they both have size and alignment of 4. Once that is done, you can safely promote unsigned int to uint64_t (aka unsigned long long int) by assignment. The promotion is better defined behavior than a cast.
uint64_t foo(void *p){
    uint64_t a = (size_t) p;
    a += 0x100000;
    return a;
}
You should not use a 64-bit type for an address, as it is undefined behavior on 32-bit (or any non-64-bit) systems.
Rather, prefer using uintptr_t, which is standard C.
See this question for more details or this page for references.
Then a solution could be:
uintptr_t address = 0xDEADBEEF; /* will trigger a warning if the constant is > max possible memory size */
foo((void*)address);
Note: if uintptr_t is not available on your system, size_t is usually a good second choice.
Part 2:
Looks like, in your rephrased question, you want to convert an address into a 64-bit integer.
In which case, a direct cast from pointer to integer is likely to trigger a compiler warning, due to the potential difference in width.
Prefer a double cast:
uint64_t value = (uint64_t)(size_t) ptr;
I can think of two ways to get this right. I got a solution to my problem by calling foo the first way:
foo((void*)(uint32_t)address)
This works only because my input to foo is always a 32-bit value. The returned value can be 64-bit.
Of course, a proper fix would be to change foo itself, if I could modify it.
I could just pass foo(&address). Inside foo, retAddr = *(uint64_t *)pBuf (see the sketch below).
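A minimal sketch of that second approach, assuming foo can be changed to read the value through the pointer:

#include <stdint.h>

uint64_t foo(void *pBuf)
{
    uint64_t retAddr = *(uint64_t *)pBuf;  /* read the 64-bit value the caller points at */
    retAddr += 0x100000;
    return retAddr;
}

int main(void)
{
    uint64_t address = 0x00000000DEADBEEFULL;
    uint64_t result = foo(&address);       /* pass a pointer to the 64-bit variable */
    (void)result;
    return 0;
}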
Thanks for all the suggestions!
I'd like to make a custom pointer address printer (like printf's %p) and I'd like to know the maximum value that a pointer can have on the computer I'm using, which is an iMac running OS X 10.8.5.
Someone recommended I use an unsigned long. Is the following cast appropriate, and is unsigned long big enough?
void print_address(void *pointer)
{
    unsigned long a;

    a = (unsigned long) pointer;
    [...]
}
I searched in the limits.h header but I couldn't find any mention of it. Is it a fixed value, or is there a way to find out what the maximum is on my system?
Thanks for your help!
Quick summary: Convert the pointer to uintptr_t (defined in <stdint.h>), which will give you a number in the range 0 to UINTPTR_MAX. Read on for the gory details and some unlikely problems you might theoretically run into.
In general there is no such thing as a "maximum value" for a pointer. Pointers are not numbers, and < and > comparisons aren't even defined unless both pointers point into the same (array) object or just past the end of it.
But I think that the size of a pointer is really what you're looking for. And if you can convert a void* pointer value to an unsigned 32-bit or 64-bit integer, the maximum value of that integer is going to be 2^32 - 1 or 2^64 - 1, respectively.
The type uintptr_t, declared in <stdint.h>, is an unsigned integer type such that converting a void* value to uintptr_t and back again yields a value that compares equal to the original pointer. In short, the conversion (uintptr_t)ptr will not lose information.
<stdint.h> defines a macro UINTPTR_MAX, which is the maximum value of type uintptr_t. That's not exactly the "maximum value of a pointer", but it's probably what you're looking for.
(On many systems, including Mac OSX, pointers are represented as if they were integers that can be used as indices into a linear monolithic address space. That's a common memory model, but it's not actually required by the C standard. For example, some systems may represent a pointer as a combination of a descriptor and an offset, which makes comparisons between arbitrary pointer values difficult or even impossible.)
The <stdint.h> header and the uintptr_t type were added to the C language by the 1999 standard. For MacOS, you shouldn't have to worry about pre-C99 compilers.
Note also that the uintptr_t type is optional. If pointers are bigger than any available integer type, then the implementation won't define uintptr_t. Again, you shouldn't have to worry about that for MacOS. If you want to be fanatical about portable code, then you can use
#include <stdint.h>
#ifdef UINTPTR_MAX
/* uintptr_t exists */
#else
/* uintptr_t doesn't exist; do something else */
#endif
where "something else" is left as an exercise.
You probably are looking for the value of UINTPTR_MAX defined in <stdint.h>.
As ouah's answer says, uintptr_t sounds like the type you really want.
unsigned long is not guaranteed to be able to represent a pointer value. Use uintptr_t, which is an unsigned integer type that can hold a pointer value.
I was hoping somebody could explain why
#include <stdbool.h>
printf("size of bool %d\n", sizeof(bool));
printf("size of int %d\n", sizeof(int));
outputs to
size of bool 1
size of int 4
I've looked at http://pubs.opengroup.org/onlinepubs/009695399/basedefs/stdbool.h.html which seems to indicate that bool is essentially a macro for _Bool, and that true and false are really just macros for integer constants. If it is an integer type, why is it not the same size as int?
I'm asking because it took us far too long to debug a program for which we did not allocate enough memory.
The _Bool type in C99 (exposed as bool by stdbool.h) doesn't have a standard-defined size, but according to section 6.2.5 of the C99 Standard:
2 An object declared as type _Bool is large enough to store the values 0 and 1.
In C, the smallest addressable object (aside from bitfields) is the char, which is at least 8-bits wide, and sizeof(char) is always 1.
_Bool and bool therefore have a sizeof of at least 1, and in most implementations that I've seen, both sizeof(bool) and sizeof(_Bool) are 1.
If you take a look at GCC's stdbool.h, you'll get this:
#define bool _Bool
#if __STDC_VERSION__ < 199901L && __GNUC__ < 3
typedef int _Bool;
#endif
#define false 0
#define true 1
So if using an older version of GCC and an old version of the C standard when compiling, you will use int as a _Bool type.
Of course, as an interesting thing, check this out:
#include <stdio.h>
#include <stdbool.h>
int main() {
    printf("%zu\n", sizeof(_Bool));
    printf("%zu\n", sizeof(true));
    printf("%zu\n", sizeof(false));
}
Output:
λ > ./a.out
1
4
4
GCC 4.2.4, Clang 3.0, and GCC 4.7.0 all output the same. As trinithis points out, sizeof(true) and sizeof(false) produce larger sizes because they are taking the size of an int literal, which is at least sizeof(int).
The C99 standard introduced the _Bool type. _Bool is guaranteed to be big enough to hold the integer constants 0 and 1, but this does not necessarily mean it is an int. The actual size is compiler-dependent.
I am pretty sure it is dependent on your compiler; perhaps yours is using a byte instead of an int for bool? Either way, you shouldn't have memory allocation problems if you use sizeof() to determine how much memory to allocate. For example, if you want to allocate memory for 10 bools, don't allocate 10 * 4; use 10 * sizeof(bool) so you can't go wrong.
It is an integer type, but not an int. char, short, int, long, long long, etc., are all integer types. char is guaranteed to have a size of 1. The rest are at least as large as char (and it's hard to imagine how you'd make the I/O system work correctly if int weren't larger than char). There's also a required order from char to long long (the same order I listed them above) where each type must have at least as much range as its predecessors.
Other than that, however, you're not guaranteed much about sizes of integer types. Specifically, char is the only one of the "base" types that has a guaranteed size (though there are types like int8_t and int32_t that have guaranteed sizes).
As I know from porting code from Windows to Unix, you can never be sure about the size of a data type. It depends on the operating system and sometimes even on the compiler you use.
The specification of stdbool.h only says that true and false are mapped to integer constants (1 and 0). This doesn't mean that the data type bool is of type int. In my experience, bool is the smallest data type available (i.e. a char or byte):
bool flag = true; -> byte flag = 0x01;
In eclipse CDT and Visual Studio you can follow the macro definitions in order to see what really lies behind your datatypes.
So I would suggest you always ask your compiler about the memory space needed in order to allocate enough memory (this is also what I've seen in a lot of libraries):
malloc(4*sizeof(bool));
I hope this helps.
_Bool is native to the compiler and is defined in C99; it can be activated with, e.g., gcc -std=c99. stdbool.h #defines bool to be _Bool, and true and false to plain int literals that fit comfortably into a _Bool.
First off, this is not a dupe of:
Is it safe to cast an int to void pointer and back to int again?
The difference in the questions is this: I'm only using the void* to store the int, but I never actually use it as a void*.
So the question really comes down to this:
Is a void * guaranteed to be at least as wide as an int?
I can't use intptr_t because I'm using C89 / ANSI C.
EDIT
In stdint.h from C99 ( gcc version ) I see the following:
/* Types for `void *' pointers. */
#if __WORDSIZE == 64
# ifndef __intptr_t_defined
typedef long int intptr_t;
# define __intptr_t_defined
# endif
typedef unsigned long int uintptr_t;
#else
# ifndef __intptr_t_defined
typedef int intptr_t;
# define __intptr_t_defined
# endif
typedef unsigned int uintptr_t;
#endif
Could I possibly just jerry-rig something similar and expect it to work? It would seem that the casting should work, since intptr_t is just a typedef for an integral type...
No, this is not guaranteed to be safe.
The C99 standard has this to say (section 6.3.2.3):
An integer may be converted to any pointer type. Except as previously specified, the result is implementation-defined, might not be correctly aligned, might not point to an entity of the referenced type, and might be a trap representation.

Any pointer type may be converted to an integer type. Except as previously specified, the result is implementation-defined. If the result cannot be represented in the integer type, the behavior is undefined. The result need not be in the range of values of any integer type.
I'm pretty confident that pre-C99 won't be any different.
FreeRTOS stores timer IDs in Timer_t as void* pvTimerID. So when using this as a storage space, and NOT a pointer to something, it is necessary to cast it to something that can be used as an array index, for instance.
So to read the ID, stored as a void*:
void *pvId = pxTimer->pvTimerID;
int index = (int)(pvId - NULL);  /* note: arithmetic on void* is a GCC extension; (int)(intptr_t)pvId is the portable spelling */
There is a C FAQ: Can I temporarily stuff an integer into a pointer, or vice versa?
The cleanest answer is: no, this is not safe, avoid it and get on with it. But POSIX requires this to be possible. So it is safe on POSIX-compliant systems.
Here's a portable alternative.
static const char dummy[MAX_VALUE_NEEDED];

/* integer -> pointer (the cast also removes the const qualifier) */
void *p = (void *)(dummy + i);

/* pointer -> integer */
int i = (const char *)p - dummy;
Of course it can waste prohibitively large amounts of virtual address space if you need large values, but if you just want to pass small integers, it's a 100% portable and clean way to store integer values in void *.
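For clarity, the same trick can be wrapped in two small helpers (MAX_VALUE_NEEDED is the same placeholder bound as above):

static const char dummy[MAX_VALUE_NEEDED];

/* integer -> pointer; the cast also drops the const qualifier */
void *int_to_ptr(int i)
{
    return (void *)(dummy + i);
}

/* pointer -> integer; only valid for pointers produced by int_to_ptr */
int ptr_to_int(void *p)
{
    return (int)((const char *)p - dummy);
}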