I have an unsigned int number (2 bytes) and I want to convert it to the unsigned char type. From my search, I found that most people recommend doing the following:
unsigned int x;
...
unsigned char ch = (unsigned char)x;
Is this the right approach? I ask because unsigned char is 1 byte and we are casting 2 bytes of data down to 1 byte.
To prevent any data loss, I want to create an array of unsigned char[] and save the individual bytes into the array. I am stuck at the following:
unsigned char ch[2];
unsigned int num = 272;
for(i=0; i<2; i++){
// how should the individual bytes from num be saved in ch[0] and ch[1] ??
}
Also, how would we convert the unsigned char[2] back to an unsigned int?
Thanks a lot.
You can use memcpy in that case:
memcpy(ch, &num, 2); /* or sizeof num, provided ch is sized to match */
Also, how would we convert the unsigned char[2] back to an unsigned int?
The same way, just reverse the arguments of memcpy.
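For example, a minimal round-trip sketch (using sizeof num rather than a hard-coded 2, so it works whatever size unsigned int actually is on your platform):
#include <string.h>
#include <stdio.h>

int main(void)
{
    unsigned int num = 272;
    unsigned char ch[sizeof num];
    unsigned int back = 0;

    memcpy(ch, &num, sizeof num);   /* int -> bytes, in the machine's own byte order */
    memcpy(&back, ch, sizeof back); /* bytes -> int, same byte order, so back == num */

    printf("%u\n", back);           /* prints 272 */
    return 0;
}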
How about:
ch[0] = num & 0xFF;
ch[1] = (num >> 8) & 0xFF;
The converse operation is left as an exercise.
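(For the record, a sketch of that exercise, assuming ch[0] holds the low byte as above:)
unsigned int back = (unsigned int)ch[0] | ((unsigned int)ch[1] << 8); /* back == num */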
How about using a union?
union {
unsigned int num;
unsigned char ch[2];
} theValue;
theValue.num = 272;
printf("The two bytes: %d and %d\n", theValue.ch[0], theValue.ch[1]);
It really depends on your goal: why do you want to convert this to an unsigned char? Depending on the answer to that there are a few different ways to do this:
Truncate: This is what was recommended above. If you are just trying to squeeze data into a function which requires an unsigned char, simply cast: unsigned char ch = (unsigned char)x (but, of course, beware of what happens if your int is too big).
Specific endian: Use this when your destination requires a specific format. Usually networking code likes everything converted to big endian arrays of chars:
unsigned char ch[sizeof x];
int n = sizeof x;
for(int y = 0; n-- > 0; y++)
    ch[y] = (x >> (n * 8)) & 0xff;
does that, placing the most significant byte in ch[0].
Machine endian. Use this when there is no endianness requirement, and the data will only be used on one machine. The order of the array will change across different architectures. People usually take care of this with unions:
union {int x; char ch[sizeof (int)];} u;
u.x = 0xf00;
//use u.ch
with memcpy:
unsigned char ch[sizeof(int)];
memcpy(ch, &x, sizeof x);
or by simply casting the address (reading an object's bytes through a char pointer like this is actually permitted by the aliasing rules; it is the reverse direction, reinterpreting a char array as an int, that is undefined behavior):
unsigned char *ch = (unsigned char *)&x;
Of course, an array of chars large enough to hold a larger value is exactly as big as that value itself.
So you can simply pretend that this larger value already is an array of chars:
unsigned int x = 12345678; // well, with a 2-byte int it would have to be a smaller value
unsigned char* pChars;
pChars = (unsigned char*) &x;
pChars[0];//one byte is here
pChars[1];//another byte here
(Once you understand what's going on, it can be done without any variables, all just casting)
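For instance, a variable-free sketch of the same idea (the byte order still depends on the machine):
printf("%u %u\n", ((unsigned char*)&x)[0], ((unsigned char*)&x)[1]);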
You just need to extract those bytes using the bitwise & operator. 0xFF is a hexadecimal mask that extracts one byte. Please have a look at various bit operations here: http://www.catonmat.net/blog/low-level-bit-hacks-you-absolutely-must-know/
An example program is as follows:
#include <stdio.h>
int main()
{
unsigned int i = 0x1122;
unsigned char c[2];
c[0] = i & 0xFF;
c[1] = (i>>8) & 0xFF;
printf("c[0] = %x \n", c[0]);
printf("c[1] = %x \n", c[1]);
printf("i = %x \n", i);
return 0;
}
Output:
$ gcc 1.c
$ ./a.out
c[0] = 22
c[1] = 11
i = 1122
$
Endorsing @abelenky's suggestion, using a union would be a more foolproof way of doing this.
union unsigned_number {
unsigned int value; // An int is 4 bytes long
unsigned char index[4]; // A char is 1 byte long
};
The characteristic of this type is that the compiler will allocate memory only for the biggest member of our data structure unsigned_number, which in this case is going to be 4 bytes - since both members (value and index) have the same size. Had you defined it as a struct instead, we would have 8 bytes allocated in memory, since the compiler does its allocation for all the members of a struct.
Additionally, and here is where your problem is solved, the members of a union data structure all share the same memory location, which means they all refer to the same data - think of that like a hard link on GNU/Linux systems.
So we would have:
union unsigned_number my_number;
// Assigning decimal value 202050300 to my_number
// which is represented as 0xC0B0AFC in hex format
my_number.value = 0xC0B0AFC; // Representation: Binary - Decimal
// Byte 3: 00001100 - 12
// Byte 2: 00001011 - 11
// Byte 1: 00001010 - 10
// Byte 0: 11111100 - 252
// Printing out my_number one byte at time
for (int i = 0; i < (sizeof(my_number.value)); i++)
{
printf("index[%d]: %u, 0x%x\n", \
i, my_number.index[i], my_number.index[i]);
}
// Printing out my_number as an unsigned integer
printf("my_number.value: %u, 0x%x", my_number.value, my_number.value);
And the output is going to be:
index[0]: 252, 0xfc
index[1]: 10, 0xa
index[2]: 11, 0xb
index[3]: 12, 0xc
my_number.value: 202050300, 0xc0b0afc
And as for your final question, we wouldn't have to convert from unsigned char back to unsigned int, since the values are already there. You just have to choose which way you want to access them.
Note 1: I am using an integer of 4 bytes in order to ease the understanding of the concept. For the problem you presented you must use:
union unsigned_number {
unsigned short int value; // A short int is 2 bytes long
unsigned char index[2]; // A char is 1 byte long
};
Note 2: I have assigned byte 0 to 252 in order to point out the unsigned characteristic of our index field. Had it been declared as a signed char, we would get index[0]: -4, 0xfc as output.
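As a quick sketch, applying the 2-byte union from Note 1 to the value 272 from the original question (the byte order shown is what a little-endian machine would give):
union unsigned_number my_short;
my_short.value = 272;                        /* 0x0110 */
printf("index[0]: %u\n", my_short.index[0]); /* 16 (0x10) on a little-endian machine */
printf("index[1]: %u\n", my_short.index[1]); /* 1  (0x01) */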
Related
I am trying to take a given unsigned char and store the 8 bit value in an unsigned char array of size 8 (1 bit per array index).
So given the unsigned char 'A', I'd like to create an unsigned char array containing 0 1 0 0 0 0 0 1 (one number per index).
What would be the most effective way to achieve this? Happy Thanksgiving btw!!
The fastest (not sure if that's what you meant by "effective") way of doing this is probably something like
void char2bits1(unsigned char c, unsigned char * bits) {
int i;
for(i=sizeof(unsigned char)*8; i; c>>=1) bits[--i] = c&1;
}
The function takes the char to convert as the first argument and fills the array bits with the corresponding bit pattern. It runs in 2.6 ns on my laptop. It assumes 8-bit chars (CHAR_BIT == 8) and does not require the input array to be zero-initialized beforehand.
I didn't expect this to be the fastest approach. My first attempt looked like this:
void char2bits2(unsigned char c, unsigned char * bits) {
for(;c;++bits,c>>=1) *bits = c&1;
}
I thought this would be faster by avoiding array lookups, by looping in the natural order (at the cost of producing the bits in the opposite order of what was requested), and by stopping as soon as c is zero (so the bits array would need to be zero-initialized before calling the function). But to my surprise, this version had a running time of 5.2 ns, double that of the version above.
Investigating the corresponding assembly revealed that the difference was loop unrolling, which was being performed in the former case but not the latter. So this is an illustration of how modern compilers and modern CPUs often have surprising performance characteristics.
Edit: If you actually want the unsigned chars in the result to be the chars '0' and '1', use this modified version:
void char2bits3(unsigned char c, unsigned char * bits) {
int i;
for(i=sizeof(unsigned char)*8; i; c>>=1) bits[--i] = '0'+(c&1);
}
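A usage sketch for the original question ('A' is 0x41, i.e. the bit pattern 01000001); the extra array slot keeps the result printable as a string:
unsigned char bits[8 + 1] = {0};   /* bits[8] stays '\0' so the array is a valid string */
char2bits3('A', bits);
printf("%s\n", bits);              /* prints 01000001 */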
You could use bit operators as recommended.
#include <stdio.h>
int main() {
unsigned char input_data = 8;
unsigned char array[8] = {0};
int idx = sizeof(array) - 1;
while (input_data > 0) {
array[idx--] = input_data & 1;
input_data /= 2; // or input_data >>= 1;
}
for (unsigned long i = 0; i < sizeof(array); i++) {
printf("%d, ", array[i]);
}
}
Take the value, right shift it and mask it to keep only the lower bit. Add the value of the lower bit to the character '0' so that you get either '0' or '1' and write it into the array:
unsigned char val = 65;
unsigned char valArr[8+1] = {0};
for (int loop=0; loop<8; loop++)
valArr[7-loop] = '0' + ((val>>loop)&1);
printf ("val = %s", valArr);
I'm new to C programming and I'm testing some code where I receive and process a UDP packet formatted as follows:
UINT16 port1
UINT16 port2
The corresponding values on this test are:
6005
5555
If I print the whole packet buffer I get something like this:
u^W³^U><9e>^D
So I thought that I would just have to break it up and process each field as a 16-bit unsigned int. So I tried something like this:
int l = 0;
unsigned int *primaryPort = *(unsigned int) &buffer[l];
AddToLog(logInfo, "PrimaryPort: %u\n", primaryPort);
l += sizeof(primaryPort);
unsigned int *secondaryPort = *(unsigned int) &buffer[l];
AddToLog(logInfo, "SecondaryPort: %u\n", secondaryPort);
l += sizeof(secondaryPort);
But I get wrong numbers with 8 digits.
I even tried another approach, like the following, but I get the wrong number as well.
int l = 0;
unsigned char primaryPort[16];
snprintf(primaryPort, sizeof(primaryPort), "%u", &buffer[l]);
AddToLog(logInfo, "PrimaryPort: %d\n", primaryPort);
l += sizeof(primaryPort);
unsigned char secondaryPort[16];
snprintf(secondaryPort, sizeof(secondaryPort), "%u", &buffer[l]);
AddToLog(logInfo, "SecondaryPort: %d\n", secondaryPort);
l += sizeof(secondaryPort);
What am I doing wrong? Also, why do I have to call free on char string variables, but not on int variables?
You are passing to AddToLog and snprintf pointers to the integers. So what you're seeing are the addresses of the integers, not the integers themselves.
You need to dereference your pointers -- for example, put an asterisk (*) in front of primaryPort in your calls to AddToLog in your first approach.
As #rileyberton suggests, most likely unsigned int is 4 bytes on your system, which is the C99 type uint32_t. For a 16-bit integer, use uint16_t. These are defined in stdint.h. These are traditionally called "short integers" or "half integers" and require the %hu qualifier in printf or similar functions, rather than just %u (which stands for unsigned int, whose size depends on the target machine.)
Also, as #igor-tandetnik suggests, you may need to switch the byte order of the integers in your packet, if for example the packet is using network order (big-endian) format and your target machine is using little-endian format.
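Putting both points together, here is a hedged sketch of how the two ports could be extracted (it assumes the sender put them on the wire in network byte order, and reuses the buffer/AddToLog names from the question):
uint16_t primaryPort, secondaryPort;                  /* from <stdint.h> */
memcpy(&primaryPort, &buffer[0], sizeof primaryPort);
memcpy(&secondaryPort, &buffer[2], sizeof secondaryPort);
primaryPort = ntohs(primaryPort);                     /* <arpa/inet.h>: big-endian wire order -> host order */
secondaryPort = ntohs(secondaryPort);
AddToLog(logInfo, "PrimaryPort: %hu\n", primaryPort);
AddToLog(logInfo, "SecondaryPort: %hu\n", secondaryPort);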
unsigned int on your system is likely 4 bytes (uint32_t). You can use unsigned int here if you mask out the values in the correct endianness, or simply use a short.
int l = 0;
unsigned short *primaryPort = *(unsigned short) &buffer[l];
AddToLog(logInfo, "PrimaryPort: %u\n", primaryPort);
l += sizeof(*primaryPort);
unsigned short *secondaryPort = *(unsigned short) &buffer[l];
AddToLog(logInfo, "SecondaryPort: %u\n", secondaryPort);
l += sizeof(*secondaryPort);
You declared primaryPort and secondaryPort to be pointers to unsigned short.
But when you assign them values from a section of buffer, you already de-referenced the pointer. You don't need pointers-to-unsigned-short. You just need an unsigned short.
Change it to:
unsigned short primaryPort = *((unsigned short*) &buffer[l]);
unsigned short secondaryPort = *((unsigned short *) &buffer[l]);
Note the removal of a * in the variable declarations.
If you're still having problems, you'll need to examine buffer byte-by-byte, looking for the value you expect. You can expect that 6005 will show up as either hex 17 75 or 75 17, depending on your platform's endianness.
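For example, a quick hex dump of the start of the packet (a sketch; adjust the count to however many bytes you expect):
for (int i = 0; i < 4; i++)
    printf("buffer[%d] = %02x\n", i, (unsigned char)buffer[i]);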
A char is 1 byte and an integer is 4 bytes. I want to copy byte-by-byte from a char[4] into an integer. I thought of different methods but I'm getting different answers.
char str[4]="abc";
unsigned int a = *(unsigned int*)str;
unsigned int b = str[0]<<24 | str[1]<<16 | str[2]<<8 | str[3];
unsigned int c;
memcpy(&c, str, 4);
printf("%u %u %u\n", a, b, c);
Output is
6513249 1633837824 6513249
Which one is correct? What is going wrong?
It's an endianness issue. When you interpret the char* as an int* the first byte of the string becomes the least significant byte of the integer (because you ran this code on x86 which is little endian), while with the manual conversion the first byte becomes the most significant.
To put this into pictures, this is the source array:
a b c \0
+------+------+------+------+
| 0x61 | 0x62 | 0x63 | 0x00 | <---- bytes in memory
+------+------+------+------+
When these bytes are interpreted as an integer in a little endian architecture the result is 0x00636261, which is decimal 6513249. On the other hand, placing each byte manually yields 0x61626300 -- decimal 1633837824.
Of course treating a char* as an int* is undefined behavior, so the difference is not important in practice because you are not really allowed to use the first conversion. There is however a way to achieve the same result, which is called type punning:
union {
char str[4];
unsigned int ui;
} u;
strcpy(u.str, "abc");
printf("%u\n", u.ui);
Neither of the first two is correct.
The first violates aliasing rules and may fail because the address of str is not properly aligned for an unsigned int. To reinterpret the bytes of a string as an unsigned int with the host system byte order, you may copy it with memcpy:
unsigned int a; memcpy(&a, &str, sizeof a);
(Presuming the size of an unsigned int and the size of str are the same.)
The second may fail with integer overflow because str[0] is promoted to an int, so str[0]<<24 has type int, but the shifted value may be larger than is representable in an int. To remedy this, use:
unsigned int b = (unsigned int) str[0] << 24 | …;
This second method interprets the bytes from str in big-endian order, regardless of the order of bytes in an unsigned int in the host system.
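Spelled out in full, following the same pattern as the code in the question:
unsigned int b = (unsigned int)str[0] << 24 | (unsigned int)str[1] << 16
               | (unsigned int)str[2] << 8  | (unsigned int)str[3];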
unsigned int a = *(unsigned int*)str;
This initialization is not correct and invokes undefined behavior. It violates C aliasing rules and potentially violates processor alignment requirements.
You said you want to copy byte-by-byte.
That means the line unsigned int a = *(unsigned int*)str; is not allowed. However, what you're doing is a fairly common way of reading an array as a different type (such as when you're reading a stream from disk).
It just needs some tweaking:
char * str ="abc";
int i;
unsigned a;
char * c = (char * )&a;
for(i = 0; i < sizeof(unsigned); i++){
c[i] = str[i];
}
printf("%d\n", a);
Bear in mind, the data you're reading may not share the same endianness as the machine you're reading from. This might help:
void
changeEndian32(void * data)
{
uint8_t * cp = (uint8_t *) data;
union
{
uint32_t word;
uint8_t bytes[4];
}temp;
temp.bytes[0] = cp[3];
temp.bytes[1] = cp[2];
temp.bytes[2] = cp[1];
temp.bytes[3] = cp[0];
*((uint32_t *)data) = temp.word;
}
Both are correct in a way:
Your first solution copies in native byte order (i.e. the byte order the CPU uses) and thus may give different results depending on the type of CPU.
Your second solution copies in big endian byte order (i.e. most significant byte at lowest address) no matter what the CPU uses. It will yield the same value on all types of CPUs.
What is correct depends on how the original data (array of char) is meant to be interpreted.
E.g. Java code (class files) always use big endian byte order (no matter what the CPU is using). So if you want to read ints from a Java class file you have to use the second way. In other cases you might want to use the CPU dependent way (I think Matlab writes ints in native byte order into files, c.f. this question).
If you're using the CVI (National Instruments) compiler you can use the function Scan to do this:
unsigned int a;
For big endian:
Scan(str,"%1i[b4uzi1o3210]>%i",&a);
For little endian:
Scan(str,"%1i[b4uzi1o0123]>%i",&a);
The o modifier specifies the byte order.
i inside the square brackets indicates where to start in the str array.
These are my two variables with which I want to do an xor operation (in C).
unsigned long int z=0xB51A06CD;
unsigned char array[] = {0xF0,0xCC,0xAA,0xF0};
desired output= 0X45D6AC3D
I know I cannot do a simple z ^ array, because it's a character array and not a single character. Will I have to do XOR of one byte at a time or is there a function for it in C?
I am trying all kinds of crazy things to get it done, but failing miserably all the time. If anyone can help me out with a small code snippet or at least a rough idea, I would be extremely grateful.
Cast the array, which is treated as a pointer to its first element in an expression like this one, to a long pointer instead of a char pointer, and dereference it.
unsigned long result = z ^ *(unsigned long *)array;
Just build an unsigned long int out of your array (note that the shifts below treat array[0] as the least significant byte; pick whichever order matches how your array is meant to be interpreted):
unsigned long int z=0xB51A06CD;
unsigned char array[] = {0xF0,0xCC,0xAA,0xF0};
unsigned long int w = 0;
w |= array[0] << 0;
w |= array[1] << 8;
w |= array[2] << 16;
w |= (unsigned long int)array[3] << 24; /* cast before shifting so the result cannot overflow a 32-bit int */
unsigned long output = z ^ w;
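If you instead want array[0] to be the most significant byte -- which is the interpretation that produces the 0X45D6AC3D shown in the question -- reverse the shift amounts. A sketch:
w = 0;
w |= (unsigned long int)array[0] << 24;
w |= (unsigned long int)array[1] << 16;
w |= (unsigned long int)array[2] << 8;
w |= (unsigned long int)array[3];
/* z ^ w == 0x45D6AC3D */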
unsigned long int z=0xB51A06CD;
unsigned char array[] = {0xF0,0xCC,0xAA,0xF0};
unsigned long tmp;
memcpy(&tmp, array, sizeof tmp);
... z ^ tmp ...
Note that this still makes a number of non-portable assumptions: that unsigned long is 4 bytes, and that the system's endianness is what you expect it to be.
As others mentioned, you have to worry about endianness and size of long. Here's how to make it safe:
1) instead of unsigned long, use uint32_t (from <stdint.h>), to be sure you get a 4-byte type.
2) use htonl() as a platform-independent way to ensure you interpret the array as a big-endian value
z ^ htonl(*(uint32_t *)array);
As part of my CS course I've been given some functions to use. One of these functions takes a pointer to unsigned chars to write some data to a file (I have to use this function, so I can't just make my own purpose built function that works differently BTW). I need to write an array of integers whose values can be up to 4095 using this function (that only takes unsigned chars).
However am I right in thinking that an unsigned char can only have a max value of 256 because it is 1 byte long? I therefore need to use 4 unsigned chars for every integer? But casting doesn't seem to work with larger values for the integer. Does anyone have any idea how best to convert an array of integers to unsigned chars?
Usually an unsigned char holds 8 bits, with a max value of 255. If you want to know this for your particular compiler, print out CHAR_BIT and UCHAR_MAX from <limits.h>. You could extract the individual bytes of a 32-bit int:
#include <stdint.h>
void
pack32(uint32_t val,uint8_t *dest)
{
dest[0] = (val & 0xff000000) >> 24;
dest[1] = (val & 0x00ff0000) >> 16;
dest[2] = (val & 0x0000ff00) >> 8;
dest[3] = (val & 0x000000ff) ;
}
uint32_t
unpack32(uint8_t *src)
{
uint32_t val;
val = src[0] << 24;
val |= src[1] << 16;
val |= src[2] << 8;
val |= src[3] ;
return val;
}
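A hedged usage sketch, round-tripping the largest value from the question (4095):
uint8_t buf[4];
pack32(4095, buf);              /* buf = { 0x00, 0x00, 0x0f, 0xff } */
uint32_t back = unpack32(buf);  /* back == 4095 */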
An unsigned char generally has a size of 1 byte, therefore you can decompose any other type into an array of unsigned chars (e.g. for a 4-byte int you can use an array of 4 unsigned chars). Your exercise is probably about generics. You should write the file as a binary file using the fwrite() function, and just write it byte after byte into the file.
The following example should write a number (of any data type) to the file. I am not sure if it works since you are forcing the cast to unsigned char * instead of void *.
int homework(unsigned char *foo, size_t size)
{
    // open the file for binary writing
    FILE *f = fopen("work.txt", "wb");
    if(f == NULL)
        return 1;
    // write the data to the file, byte after byte
    fwrite(foo, sizeof(char), size, f);
    fclose(f);
    return 0;
}
I hope the given example at least gives you a starting point.
Yes, you're right: a char/byte typically holds 8 bits, which gives 2^8 distinct values, i.e. zero to 2^8 - 1, or zero to 255. Do something like this to get at the bytes:
int x = 0;
char* p = (char*)&x;
for (int i = 0; i < sizeof(x); i++)
{
//Do something with p[i]
}
(Declaring i inside the for loop requires C99 or later, but whatever... it's more readable. :) )
Do note that this code may not be portable, since it depends on the processor's internal storage of an int.
If you have to write an array of integers, then just treat the array as a pointer to unsigned char and run through all of its bytes.
int main()
{
int data[] = { 1, 2, 3, 4 ,5 };
size_t size = sizeof(data)/sizeof(data[0]); // Number of integers.
unsigned char* out = (unsigned char*)data;
for(size_t loop =0; loop < (size * sizeof(int)); ++loop)
{
MyProfSuperWrite(out + loop); // Write 1 unsigned char
}
}
Now people have mentioned that 4095 will fit in fewer bits than a normal integer. Probably true. Thus you could save space by not writing out the top bits of each integer. Personally I think this is not worth the effort: the extra code to write the values and process the incoming data is not worth the savings you would get (maybe if the data were the size of the Library of Congress). Rule one: do as little work as possible (it's easier to maintain). Rule two: optimize if asked (but ask why first). You may save space, but it will cost processing time and maintenance effort.
The part of the assignment that says integers whose values can be up to 4095 using this function (that only takes unsigned chars) should be giving you a huge hint: 4095 unsigned is 12 bits.
You can store the 12 bits in a 16-bit short, but that is somewhat wasteful of space -- you are only using 12 of the short's 16 bits. Since you are dealing with more than 1 byte in the conversion to characters, you may need to deal with the endianness of the result. This is the easiest option (see the sketch below).
You could also use a bit field or some packed binary structure if you are concerned about space, but that is more work.
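To make that concrete, here is a sketch of one way to split a value known to fit in 12 bits into two unsigned chars and put it back together; which byte goes first is your choice, as long as the writer and the reader agree:
unsigned int value = 4095;                      /* anything in 0..4095 fits in 12 bits */
unsigned char bytes[2];

bytes[0] = (value >> 8) & 0xFF;                 /* upper 4 bits */
bytes[1] = value & 0xFF;                        /* lower 8 bits */

unsigned int restored = ((unsigned int)bytes[0] << 8) | bytes[1];  /* restored == 4095 */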
It sounds like what you really want to do is call sprintf to get a string representation of your integers. This is a standard way to convert from a numeric type to its string representation. Something like the following might get you started:
char num[5]; // Room for 4095
// Array is the array of integers, and arrayLen is its length
for (i = 0; i < arrayLen; i++)
{
sprintf (num, "%d", array[i]);
// Call your function that expects a pointer to chars
printfunc (num);
}
Without information on the function you are directed to use regarding its arguments, return value and semantics (i.e. the definition of its behaviour) it is hard to answer. One possibility is:
Given:
void theFunction(unsigned char* data, int size);
then
int array[SIZE_OF_ARRAY];
theFunction((unsigned char*)array, sizeof(array));
or
theFunction((unsigned char*)array, SIZE_OF_ARRAY * sizeof(*array));
or
theFunction((unsigned char*)array, SIZE_OF_ARRAY * sizeof(int));
All of which will pass all of the data to theFunction(), but whether that makes any sense will depend on what theFunction() does.