As part of a program for a class, I have to print the output in a specific way, split up into blocks of sixteen bytes. I've been searching for quite a while for a way to cast the pointer to an int, or some other way to perform a modulus or division-remainder operation on the pointer address stored in a variable. I've hit a roadblock; does anyone here know how I could perform this seemingly simple operation? Here's the basic form of the function:
void printAddress(char *loc, char *minLoc, char *maxLoc) {
    minLoc = (loc - (loc % 16));
    maxLoc = minLoc + 16;
    printf("%p - %p - %p", minLoc, loc, maxLoc);
}
I removed all my attempts at casting it to make it clear what I'm trying to do.
The type you're looking for is uintptr_t, defined in <stdint.h>. It is an unsigned integer type big enough to hold any pointer to data. The matching format macros are in <inttypes.h>; they allow you to format the values correctly. When you include <inttypes.h>, it is not necessary to include <stdint.h> too. I chose a field width of 16 hex digits assuming you have a 64-bit processor; you can use 8 if you're working with a 32-bit processor.
#include <stdio.h>
#include <inttypes.h>

void printAddress(char *loc)
{
    uintptr_t absLoc = (uintptr_t)loc;
    uintptr_t minLoc = absLoc - (absLoc % 16);
    uintptr_t maxLoc = minLoc + 16;
    printf("0x%16" PRIXPTR " - 0x%16" PRIXPTR " - 0x%16" PRIXPTR "\n",
           minLoc, absLoc, maxLoc);
}
You could also write:
uintptr_t minLoc = absLoc & ~(uintptr_t)0x0F;
See also Solve the memory alignment in C interview question that stumped me.
Note that there might, theoretically, be a system where uintptr_t is not defined; I know of no system where it cannot actually be supported (but I don't know all systems).
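If you want to guard against that corner case, <stdint.h> defines the macro UINTPTR_MAX only when uintptr_t itself is provided, so a compile-time check is possible (a minimal sketch I'm adding, not part of the original answer):
#include <stdint.h>

/* UINTPTR_MAX is defined only when the optional uintptr_t type is provided */
#ifndef UINTPTR_MAX
#error "uintptr_t is not available on this implementation"
#endif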
I might not have fully understood the problem, but it looks to me as if you are trying to do the good old hexdump?
#include <stdio.h>

void hexdump(char *buf, int size)
{
    int i;
    for (i = 0; i < size; i++)
    {
        if (i % 16 == 0)
        {
            puts("");
            printf("%p ", (void *)&buf[i]);
        }
        printf("%02x ", (unsigned char)buf[i]);
    }
}
Related
I've been lightly studying C for a few weeks now from a book.
int main(void)
{
    float num = 3.15;
    int *ptr = (int *)&num;  //so I can use line 8 and 10
    for (int i = 0; i < 32; i++)
    {
        if (!(i % 8) && (i / 8))
            printf(" ");
        printf("%d", *ptr >> (31 - i) & 1);
    }
    return 0;
}
output : 01000000 01001001 10011001 10011010
As you see, 3.15 in single-precision float is 01000000 01001001 10011001 10011010.
So let's say ptr points to address 0x1efb40.
Here are the questions:
As I understood from the book, the first 8 bits of num are stored at 0x1efb40, the 2nd 8 bits at 0x1efb41, the next 8 bits at 0x1efb42, and the last 8 bits at 0x1efb43. Am I right?
If I'm right, is there any way I can directly access the 2nd 8 bits with the hex address value 0x1efb41, and thereby change that byte to something like 11111111?
The ordering of bytes within a datatype is known as endianness and is system specific. What you describe with the least significant byte (LSB) first is called little endian and is what you would find on x86 based processors.
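Since the byte order is system specific, you can also detect it at runtime by inspecting the first byte of a known value, using the same unsigned char access described below (a minimal sketch I'm adding, assuming unsigned int is wider than one byte):
#include <stdio.h>

int main(void)
{
    unsigned int x = 1;
    unsigned char *first = (unsigned char *)&x;  /* byte access via unsigned char is allowed */

    /* on a little-endian machine the least significant byte is stored first */
    printf("%s endian\n", (*first == 1) ? "little" : "big");
    return 0;
}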
As for accessing particular bytes of a representation, you can use a pointer to an unsigned char to point to the variable in question to view the specific bytes. For example:
float num = 3.15;
unsigned char *p = (unsigned char *)&num;
int i;
for (i = 0; i < sizeof(num); i++) {
    printf("byte %d = %02x\n", i, p[i]);
}
Note that accessing the bytes this way is only allowed through a character pointer, not an int *; using an int * would violate strict aliasing.
The code you wrote is not actually valid C. C has a rule called "strict aliasing," which states that if a region of memory contains a value of one type (e.g. float), it cannot be accessed as though it were another type (e.g. int). This rule has its origins in performance optimizations that let the compiler generate faster code. It isn't an obvious rule, but it is the rule.
You can work around this by using union. If you make a union like union { float num; int numAsInt; }, you can store a float and then read it back as an integer. The result is unspecified. Alternatively, you are always permitted to access the bytes of a value as chars (just not as anything larger). char is given special treatment (presumably to make it possible to copy a buffer of data as bytes, then cast it to your data's type and access it, which is something that happens a lot in low-level code like network stacks).
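As a small illustration of that union approach (my sketch, reusing the member names from the prose above, with numAsInt made unsigned so it prints cleanly, and assuming float and unsigned int are both 4 bytes wide):
#include <stdio.h>

int main(void)
{
    union {
        float num;
        unsigned int numAsInt;
    } u;

    u.num = 3.15f;
    /* reading the other member reinterprets the float's bytes as an integer; */
    /* the exact value printed depends on the platform's representation       */
    printf("0x%08X\n", u.numAsInt);
    return 0;
}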
Welcome to a fun corner of learning C. There's unspecified behavior and undefined behavior. Informally, unspecified behavior says "we won't say what happens, but it will be reasonable." The C spec will not say what order the bytes are in. But it will say that you will get some bytes. Undefined behavior is nastier. Undefined behavior says anything can happen, ranging from compiler errors to exceptions at runtime, to absolutely nothing at all (making you think your code is valid when it is not).
As for the values, dbush points out in his answer that the order of the bytes is defined by the platform you are on. You are seeing a "little endian" representation of an IEEE 754 floating-point number. On other platforms, it may be different.
Union punning is much safer:
#include <stdio.h>

typedef union
{
    unsigned char uc[sizeof(double)];
    float f;
    double d;
} u_t;

void print(u_t u, size_t size, int endianess)
{
    size_t start = 0;
    int increment = 1;
    if (endianess)
    {
        start = size - 1;
        increment = -1;
    }
    for (size_t index = 0; index < size; index++)
    {
        printf("%hhx ", u.uc[start]);
        start += increment;
    }
    printf("\n");
}

int main(void)
{
    u_t u;
    u.f = 3.15f;
    print(u, sizeof(float), 0);
    print(u, sizeof(float), 1);
    u.d = 3.15;
    print(u, sizeof(double), 0);
    print(u, sizeof(double), 1);
    return 0;
}
You can test it yourself: https://ideone.com/7ABZaj
I have a float in my program that changes after each iteration, namely G[0]. I want to use the last bit of the mantissa as a "random" bit that I will use to pick which path to take in my computation. However, the calculation I'm using now always prints out 0, while it should be roughly 50/50; what's my error? The function signature defines float* G.
unsigned rand_bit = *((unsigned *)&G[0])>>31;
printf("%i", rand_bit);
First of all, *((unsigned *)&G[0]) causes undefined behaviour by violating the strict aliasing rule. In Standard C it is not permitted to access memory of one type by using a different type, except for a handful of special cases.
You can fix this either by disabling strict aliasing in your compiler, or using a union or memcpy.
(Also your code is relying on unsigned being the same size as float, which is not true in general).
But supposing you did fix those issues, your code is testing the most-significant bit. In the IEEE 32-bit floating point format, that bit is the sign bit. So it will read 0 for positive numbers and 1 for negative numbers.
The last bit of the mantissa would be the least significant bit after reinterpreting the memory as integer.
Corrected code could look like:
/* requires <assert.h>, <string.h> and <stdio.h> */
unsigned u;
assert( sizeof u == sizeof *G );
memcpy(&u, G, sizeof u);
printf("%u", u & 1);
NB. I would be hesitant about assuming this bit will be "random"; if you want a random distribution of bits, there are much better options.
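For instance (my sketch, not part of the original answer), the standard library's rand() already gives a much more uniform bit than the mantissa trick:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    srand((unsigned)time(NULL));   /* seed the generator once */
    int bit = rand() & 1;          /* take the low bit of a pseudo-random number */
    printf("%d\n", bit);
    return 0;
}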
Although your actual problem got solved, I want to propose using a union instead; it is a bit cleaner, but I do not know which is faster (if you are using the GPU, I think I can safely assume that you want it fast). Endianness is also a concern; I was not able to find much information in that direction regarding GPUs, so here are some lines you might want to use.
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

int last_bit(float f)
{
    // Endianness testing should be done at compile-time, such that a simple macro
    // would suffice. Your compiler/libc might already offer one for that
#ifdef USE_TYPE_PUNNING
    // So called "type punning" is frowned upon by some
    uint32_t ui = 0x76543210;
    unsigned char *c = (unsigned char *) &ui;
#else
    union {
        uint32_t ui;
        uint8_t uc[4];
    } end = {0x76543210};
    // only to save some code branching, end.uc can be used directly, of course
    unsigned char *c = (unsigned char *) end.uc;
#endif
    int rand_bit;
    union {
        float fl;
        uint32_t ui;
    } w;
    w.fl = f;
#ifdef DEBUG
    printf("%x\n", w.ui);
#endif
    // Little endian
    if (*c == 0x10) {
        rand_bit = w.ui & 0x1;
    }
    // Big endian
    else if (*c == 0x76) {
        rand_bit = w.ui & 0x1;
    }
    // Too old/new
    else {
        fprintf(stderr, "unknown endianness\n");
        return -1;
    }
    return rand_bit;
}

int main(int argc, char **argv)
{
    float f;
    // all checks omitted!
    if (argc >= 2) {
        f = atof(argv[1]);
    } else {
        // 0x40e00000 even
        //f = 7;
        // 0x3f8ccccd odd
        f = 1.1;
    }
    printf("last bit of mantissa = %d\n", last_bit(f));
    exit(EXIT_SUCCESS);
}
I am trying to solve Ex 2-1 of K&R's C book. The exercise asks, among other things, to determine the range of char by direct computation (rather than printing the values directly from limits.h). Any idea on how this should be done nicely?
Ok, I throw my version in the ring:
unsigned char uchar_max = (unsigned char)~0;
// min is 0, of course
signed char schar_min = (signed char)(uchar_max & ~(uchar_max >> 1));
signed char schar_max = (signed char)(0 - (schar_min + 1));
It does assume two's complement for the signed type and the same size for signed and unsigned char. The former I simply postulate; the latter, I'm sure, can be deduced from the standard, as both are char and have to hold all encodings of the "execution charset" (what would that imply for RL-encoded charsets like UTF-8?).
It is straightforward to derive a ones' complement and sign/magnitude version from this. Note that the unsigned version is always the same.
One advantage is that it runs entirely with char types and without loops, etc. So it will still be performant on 8-bit architectures.
Hmm ... I really thought this would need a loop for signed. What did I miss?
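For what it's worth, here is a quick test harness (my addition, not part of the answer) that prints the three values; on a typical two's complement machine with 8-bit char it prints 255, -128 and 127:
#include <stdio.h>

int main(void)
{
    unsigned char uchar_max = (unsigned char)~0;
    signed char schar_min = (signed char)(uchar_max & ~(uchar_max >> 1));
    signed char schar_max = (signed char)(0 - (schar_min + 1));

    printf("unsigned char max: %u\n", (unsigned)uchar_max);
    printf("signed char min:   %d\n", (int)schar_min);
    printf("signed char max:   %d\n", (int)schar_max);
    return 0;
}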
Assuming that the type will wrap intelligently1, you can simply start by setting the char variable to be zero.
Then increment it until the new value is less than the previous value.
The new value is the minimum, the previous value was the maximum.
The following code should be a good start:
#include <stdio.h>

int main (void) {
    char prev = 0, c = 0;
    while (c >= prev) {
        prev = c;
        c++;
    }
    printf ("Minimum is %d\n", c);
    printf ("Maximum is %d\n", prev);
    return 0;
}
1 Technically, overflowing a variable is undefined behaviour and anything can happen, but the vast majority of implementations will work. Just keep in mind it's not guaranteed to work.
In fact, the difficulty in working this out in a portable way (some implementations had various different bit-widths for char and some even used different encoding schemes for negative numbers) is probably precisely why those useful macros were put into limits.h in the first place.
You could always try the ol' standby, printf...
Let's just strip things down for simplicity's sake.
This isn't a complete answer to your question, but it will check whether a char is 8-bit, with a little help (yes, there's a bug in the code; I'll leave it up to you to find it).
#include <stdio.h>

#DEFINE MMAX_8_BIT_SIGNED_CHAR 127

main ()
{
    char c;

    c = MAX_8_BIT_SIGNED_CHAR;
    printf("%d\n", c);
    c++;
    printf("%d\n", c);
}
Look at the output. I'm not going to give you the rest of the answer because I think you will get more out of it if you figure it out yourself, but I will say that you might want to take a look at the bit shift operator.
There are 3 relatively simple functions that can cover both the signed and unsigned types on both x86 & x86_64:
#include <limits.h>    /* for CHAR_BIT */

/* signed data type low storage limit */
long long limit_s_low (unsigned char bytes)
{ return -(1ULL << (bytes * CHAR_BIT - 1)); }

/* signed data type high storage limit */
long long limit_s_high (unsigned char bytes)
{ return (1ULL << (bytes * CHAR_BIT - 1)) - 1; }

/* unsigned data type high storage limit */
unsigned long long limit_u_high (unsigned char bytes)
{
    if (bytes < sizeof (long long))
        return (1ULL << (bytes * CHAR_BIT)) - 1;
    else
        return ~0ULL;    /* all bits set; shifting by the full width would be undefined */
}
With CHAR_BIT generally being 8.
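A quick way to exercise these (my own test harness, assuming the three functions above are in scope and that long long is 8 bytes):
#include <stdio.h>

int main(void)
{
    printf("signed char:     %lld .. %lld\n", limit_s_low(1), limit_s_high(1));
    printf("unsigned char:   0 .. %llu\n", limit_u_high(1));
    printf("4-byte signed:   %lld .. %lld\n", limit_s_low(4), limit_s_high(4));
    printf("8-byte unsigned: 0 .. %llu\n", limit_u_high(8));
    return 0;
}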
The smart way: simply calculate sizeof() of your variable, and you know it's that many times larger than whatever has sizeof() == 1, usually char. Given that, you can use math to calculate the range (see the sketch below). This doesn't work if you have odd-sized types, like 3-bit chars or something.
The try-hard way: put 0 in the type and increment it until it doesn't increment any more (it wraps around or stays the same, depending on the machine). Whatever the number before that was, that's the max. Do the same for the min.
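Here is the sketch of the "smart way" for char (my illustration, assuming two's complement and that plain int is wider than char):
#include <stdio.h>
#include <limits.h>   /* only for CHAR_BIT, not for CHAR_MIN/CHAR_MAX */

int main(void)
{
    int bits = sizeof(char) * CHAR_BIT;    /* usually 8 */
    int smax = (1 << (bits - 1)) - 1;      /* 127 for an 8-bit char */
    int smin = -smax - 1;                  /* -128, assuming two's complement */
    unsigned umax = (1u << bits) - 1;      /* 255 */

    printf("signed char:   %d .. %d\n", smin, smax);
    printf("unsigned char: 0 .. %u\n", umax);
    return 0;
}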
void *memory;
unsigned int b = 65535; //1111 1111 1111 1111 in binary
int i = 0;

memory = &b;
for (i = 0; i < 100; i++) {
    printf("%d, %d, d\n", (char*)memory + i, *((unsigned int *)((char *) memory + i)));
}
I am trying to understand one thing.
(char*)memory+i prints out addresses in the range 2686636 - 2686735.
When I store 65535 with memory = &b, this should store the number at addresses 2686636 and 2686637,
because every address is just one byte, i.e. 8 binary digits. So when I print it out with
*((unsigned int * )((char *) memory + i)), it should print 2686636, 255 and 2686637, 255.
Instead, it prints 2686636, 65535 and 2686637, some random number.
I am trying to implement memory allocation; it is a school project. This should represent memory: one address should be one byte, so the header will be 2686636-2686639 (4 bytes for the size of the block) and 2686640 (a 1-byte char for the free/used memory flag). Can someone explain it to me? Thanks.
Thanks for answers.
void *memory;
void *abc;

abc = memory;
for (i = 0; i < 100; i++) {
    *(int*)abc = 0;
    abc++;
}
*(int*)memory = 16777215;
for (i = 0; i < 100; i++) {
    printf("%p, %c, %d\n", (char*)memory + i, *((char *)memory + i), *((char *)memory + i));
}
output is
0028FF94, , -1
0028FF95, , -1
0028FF96, , -1
0028FF97, , 0
0028FF98, , 0
0028FF99, , 0
0028FF9A, , 0
0028FF9B, , 0
I think it works: 255 gives -1 once, 65535 gives -1 twice, and 16777215 gives -1 three times.
In your program it seems that the address of b is 2686636. When you write (char*)memory+i or (char*)&b+i, the pointer points to char, so when you add one to it, it advances by only one memory address, i.e. 2686637, and so on up to 2686735 (i.e. (char*)2686636 + 99).
Now when you dereference it, i.e. *((unsigned int * )((char *) memory + i)), you get the value at that memory address, but you have only given a value to b (whose address is 2686636). All the other memory addresses hold garbage values, which is what you are printing.
So first you have to store some data at the rest of the addresses (2686637 to 2686735).
Good luck, I hope this helps.
I did not mention this in my comments yesterday, but it is obvious that your for loop from 0 to 100 overruns the size of an unsigned integer.
I simply ignored some of the obvious issues in the code and tried to give hints on the actual question you asked (difficult to do more than that on a phone :-)). Unfortunately I did not have time to complete this yesterday. So, with one day's delay, here are my hints for you.
Try to avoid making assumptions about how big a certain type is (like 2 bytes or 4 bytes). Even if your assumption holds true now, it might change if you switch the compiler or switch to another platform. So use sizeof(type) consistently throughout the code. For a longer discussion on this you might want to take a look at: size of int, long a.s.o. on Stack Overflow. The standard mandates only the ranges a certain type should be able to hold (0-65535 for unsigned int), so only a minimal size for each type. This means that the size of int might be (and typically is) bigger than 2 bytes. Beyond primitive types, sizeof also helps you compute the size of structures, where due to memory alignment and packing the size of a structure might be different from what you would "expect" by simply looking at its attributes. So the sizeof operator is your friend.
Make sure you use the correct formatting in printf.
Be carefull with pointer arithmetic and casting since the result depends on the type of the pointer (and obviously on the value of the integer you add with).
I.e.
(unsigned int*)memory + 1 != (unsigned char*)memory + 1
(unsigned int*)memory + 1 == (unsigned char*)memory + 1 * sizeof(unsigned int)
Below is how I would write the code:
//check how big int is on our platform, for illustrative purposes
printf("Sizeof int: %zu bytes\n", sizeof(unsigned int));

//we initialize b with the maximum representable value for unsigned int
//include <limits.h> for UINT_MAX
unsigned int b = UINT_MAX; //0xffffffff (if sizeof(unsigned int) is 4)

//we print out the value and its hexadecimal representation
printf("B=%u 0x%X\n", b, b);

//we take the address of b and store it in a void pointer
void* memory = &b;
size_t i = 0;

//we loop over the unsigned chars starting at the address of b up to sizeof(b)
//(in our case b is unsigned int); using sizeof(b) is better since if we change the type of b
//we do not have to remember to change the sizeof in the for loop. The loop works just the same
for (i = 0; i < sizeof(b); ++i)
{
    //here we kept %d for formatting the individual bytes to represent their value as numbers
    //we cast to unsigned char since char might be signed (so from -128 to 127) on a particular
    //platform and we want to illustrate that the expected result (all bytes 1 -> printed value 255) occurs
    printf("%p, %d\n", (unsigned char *)memory + i, *((unsigned char *) memory + i));
}
I hope you will find this helpful. And good luck with your school assignment; I hope you learned something you can use now and in the future :-).
I have a simple function in C which provides a void* pointer to a data array. I know the size (in bytes) of each individual data-point within this memory block, and need to guarantee that I can modify each data-point in this block without accidentally altering neighboring data-points. In this example, I want to decrement each value by 1.
All data points are either 8-bit, 16-bit, or 32-bit.
eg:
void myFunction(void* data, size_t arraySize, size_t widthPerDataPoint)
{
    if(!data)
        return;

    size_t w = widthPerDataPoint;
    int numPoints = arraySize / widthPerDataPoint;
    int i;
    for(i=0; i<numPoints; i++)
    {
        if(w==1)        // 8 bit
            (*((int8_t*)data + i))--;
        else if(w==2)   // 16 bit
            (*((int16_t*)data + i))--;
        else if(w==4)   // 32 bit
            (*((int32_t*)data + i))--;
    }
}
Unfortunately, the int8_t, etc, datatypes only guarantee their minimum size, according to C99 specifications, and not an exact size. Is there any way to re-cast and modify the data in-place and guarantee I won't smash my array or touch neighboring data points? Also, is there an equivalent technique that would somehow work for other data widths (ie: 24-bit, 60-bit, etc)?
int8_t is guaranteed to be exactly 8 bits, and if CHAR_BIT==8, exactly 1 byte.
Quoting the N1570 draft of the latest C standard, section 7.20.1.1:
The typedef name intN_t designates a signed integer type
with width N, no padding bits, and a two’s complement
representation. Thus, int8_t denotes such a signed integer type
with a width of exactly 8 bits.
Though for your purposes it might make more sense to use uint8_t, uint16_t, et al.
If the implementation doesn't support types with the required characteristics, it won't define them; you can detect this by checking, for example:
#include <stdint.h>
#ifdef UINT8_MAX
/* uint8_t exists */
#else
/* uint8_t doesn't exist */
#endif
(If CHAR_BIT != 8, then neither int8_t nor uint8_t will be defined.)
It's the [u]int_leastN_t and [u]int_fastN_t types for which the standard only guarantees minimum sizes.
You'll have to guarantee that both the array and the offsets within it are properly aligned for the types you're using to access it. I presume you're already taking care of that.
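One way to check the alignment part (my sketch, reusing the uintptr_t idea from earlier on this page; alignof and <stdalign.h> are C11):
#include <stdio.h>
#include <stdint.h>
#include <stdalign.h>

/* returns non-zero if p is suitably aligned for a 4-byte access */
static int aligned_for_int32(const void *p)
{
    return ((uintptr_t)p % alignof(int32_t)) == 0;
}

int main(void)
{
    int32_t x = 0;
    printf("%d\n", aligned_for_int32(&x));   /* expected: 1 */
    return 0;
}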
By definition, incrementing a pointer of type T* by n will shift it by n * sizeof(T) bytes. Therefore, consistency is guaranteed to you by the compiler. No worries.
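A tiny demonstration of that guarantee (my addition, not part of the answer):
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    int32_t arr[4] = {0};
    int32_t *p = arr;

    /* p + 1 advances by exactly sizeof(int32_t) bytes */
    printf("%zu\n", (size_t)((char *)(p + 1) - (char *)p));   /* prints 4 */
    return 0;
}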
The code doesn't seem entirely unreasonable. I personally would probably do something like this:
switch(widthPerDataPoint)
{
    case 1:
    {
        int8_t *dptr = data;
        for(i = 0; i < numPoints; i++)
            dptr[i]--;
    }
    break;

    case 2:
    {
        int16_t *dptr = data;
        for(i = 0; i < numPoints; i++)
            dptr[i]--;
    }
    break;

    case 4:
    {
        int32_t *dptr = data;
        for(i = 0; i < numPoints; i++)
            dptr[i]--;
    }
    break;

    default:
        fprintf(stderr, "Someone gave the wrong width - width=%zu\n",
                widthPerDataPoint);
        break;
}
The advantage here is that you don't get a bunch of conditionals in every loop iteration. The compiler MAY sort that out anyway, but I don't always trust compilers to figure such things out, and I think it's a bit cleaner too.
How about the following approach (a sketch follows the list)?
Copy the value out to a local int of whatever size
Perform the decrement on the local variable
Use an appropriate bit mask to zero out the location in the array
(e.g., ~0xFF for 8-bit)
'Or' the local variable back into the array.
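A sketch of that read-mask-merge idea for the 8-bit case (my illustration; it packs four 8-bit data points into one uint32_t so no aliasing questions arise):
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* four packed 8-bit data points stored in one 32-bit word */
    uint32_t word = 0x04030201u;
    unsigned k = 2;                              /* decrement the 3rd data point */

    uint8_t local = (word >> (8 * k)) & 0xFFu;   /* 1. copy the value out       */
    local--;                                     /* 2. decrement locally        */
    word &= ~(0xFFu << (8 * k));                 /* 3. mask: zero that location */
    word |= (uint32_t)local << (8 * k);          /* 4. merge ("or") it back     */

    printf("0x%08X\n", word);                    /* expected: 0x04020201 */
    return 0;
}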