I understand the difference between unsigned and unsigned int. But my question is a bit different.
I am using ioremap() (Linux) to map a particular memory region, and I want to read that memory.
I did the following:
void func(void)
{
    unsigned int *p;
    int i;

    p = (unsigned int *)ioremap(ADDR, 8 * sizeof(unsigned int));
    for (i = 0; i <= 7; i++)
        pr_err("p[%d] = %u\n", i, p[i]);
}
This works perfectly. But I have seen standard code doing the same thing with (unsigned *) instead of (unsigned int *); that is, p is declared as unsigned *p.
void func(void)
{
    unsigned *p;
    int i;

    p = (unsigned *)ioremap(ADDR, 8 * sizeof(unsigned));
    for (i = 0; i <= 7; i++)
        pr_err("p[%d] = %u\n", i, p[i]);
}
I would like to know whether this is good programming practice (platform-independent code?). If yes, please state the reason.
unsigned and unsigned int have no difference at all.
Therefore, unsigned * and unsigned int * have no difference at all either.
Similarly, long is short for long int, int is short for signed int, etc. There is no difference between one and the other. The only exception to note is that whether plain char is signed or unsigned is implementation-defined, so it is not the same type as signed char.
unsigned and unsigned int are the same type, so are pointers to them. The int is implicit.
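A quick way to convince yourself (a minimal sketch; the _Static_assert line requires C11):

int main(void)
{
    unsigned u = 42;
    unsigned int *p = &u; /* no cast, no warning: identical types */
    unsigned *q = p;      /* likewise identical pointer types */
    _Static_assert(sizeof(unsigned) == sizeof(unsigned int),
                   "unsigned and unsigned int are the same type");
    (void)q;
    return 0;
}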
Related
I've tried every possible combination of *'s and &'s, and I can't figure out what is wrong with this program or what is causing this compile-time error.
I'm getting this error:
error: initialization makes pointer from integer without a cast
compilation terminated due to -Wfatal-errors.
With this code:
int main() {
    unsigned int ten = (int) 10;
    unsigned short *a = (short) 89;
    unsigned int *p = (int) 1;
    unsigned short *q = (short) 10;
    unsigned short *s = (short) 29;
    function(ten, a, p, q, s);
    return 0;
}
The function prototype is:
int function(unsigned int a, unsigned short *const b,
unsigned int *const c,
unsigned short *const d,
unsigned short *const e);
The function is empty, I'm just trying to get it to compile with an input file.
These variables are pointers, but you are trying to assign integer (short) values to them:
unsigned short *a = (short) 89;
unsigned int *p =(int) 1;
unsigned short *q = (short) 10;
unsigned short *s = (short) 29;
That is why you get the compiler errors.
The question is not 100% clear, but maybe this is what you want:
unsigned int ten = (int) 10;
unsigned short a = (short) 89;
unsigned int p = (int) 1;
unsigned short q = (short) 10;
unsigned short s = (short) 29;
function(ten, &a, &p, &q, &s);
The problem is that you are initializing pointers with integer constants for a, p, q, and s (i.e., saying assign the value 89 to type int * where 89 is of type int, not int *). You probably don't want those variables to be pointers; you can pass pointers to those variables instead:
int main() {
    unsigned int ten = (int) 10;
    unsigned short a = (short) 89;
    unsigned int p = (int) 1;
    unsigned short q = (short) 10;
    unsigned short s = (short) 29;
    function(ten, &a, &p, &q, &s);
    return 0;
}
Without more information about decode_instruction, it is hard to tell whether this is the correct form; it really does depend on what that function does.
I'm trying to convert an unsigned char array buffer into a signed int (and vice versa).
Below is a demo code:
#include <stdio.h>
#include <string.h>

int main(int argc, char* argv[])
{
    int original = 1054;
    unsigned int i = 1054;
    unsigned char c[4];
    int num;
    memcpy(c, (char*)&i, sizeof(int));
    //num = *(int*) c; // method 1 get
    memcpy((char *)&num, c, sizeof(int)); // method 2 get
    printf("%d\n", num);
    return 0;
}
1) Which method should I use to get from unsigned char[] to int?
method 1 get or method 2 get?
(or any suggestion)
2) How do I convert the int original into an unsigned char[]?
I need to send this integer via a buffer that only accepts unsigned char[]
Currently what I'm doing is converting the int to unsigned int and then to char[], for example:
int g = 1054;
unsigned char buf[4];
unsigned int n;
n = g;
memcpy(buf, (char*)&n, sizeof(int));
Although it works fine, I'm not sure whether it is the correct way, or whether it is safe.
PS. I'm trying to send data between 2 devices via USB serial communication (between Raspberry Pi & Arduino)
The approach below will work regardless of machine endianness (assuming sizeof(int) == 4):
unsigned char bytes[4];
unsigned int n = 45;
bytes[3] = (n >> 24) & 0xFF;
bytes[2] = (n >> 16) & 0xFF;
bytes[1] = (n >> 8) & 0xFF;
bytes[0] = n & 0xFF;
The code above converts the integer to a byte array in little-endian order. The reverse operation is sketched at the end of this answer.
The approach you have with memcpy may give different results on different computers, because memcpy copies whatever is at the source address to the destination, and depending on whether the machine is little-endian or big-endian, the byte at the starting source address may be the LSB or the MSB.
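A minimal sketch of the reverse operation, under the same sizeof(int) == 4 assumption, reassembling the value from the little-endian byte array:

unsigned int n = (unsigned int)bytes[0]
               | ((unsigned int)bytes[1] << 8)
               | ((unsigned int)bytes[2] << 16)
               | ((unsigned int)bytes[3] << 24);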
You could store both the int (or unsigned int) and the unsigned char array in a union. This technique is called type punning, and it has been explicitly permitted by the standard since C99 (it was common practice even earlier). Assuming that sizeof(int) == 4:
#include <stdio.h>

union device_buffer {
    int i;
    unsigned char c[4];
};

int main(void)
{
    int original = 1054;
    union device_buffer db;
    db.i = original;
    for (int i = 0; i < 4; i++) {
        printf("c[%d] = 0x%x\n", i, db.c[i]);
    }
    return 0;
}
Note that the values in the array are stored in the machine's byte order, i.e. its endianness.
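As a minimal illustration of that caveat, the same union can be used to probe the byte order at run time (a sketch reusing device_buffer from above):

union device_buffer probe;
probe.i = 1;
if (probe.c[0] == 1)
    printf("little-endian\n"); /* LSB is stored first */
else
    printf("big-endian\n");    /* MSB is stored first */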
I am converting an input raw pcm stream into mp3 using lame. The encoding function within that library returns mp3 encoded samples in an array of type unsigned char. This mp3-encoded stream now needs to be placed within an flv container which uses a function that writes encoded samples in char array. My problem is that I am passing the array from lame (of type unsigned char) into the flv library. The following piece of code (only symbolic) illustrates my problem:
/* cast from unsigned char to char. */
#include <stdio.h>
#include <stdlib.h>
void display(char *buff, int len) {
    int i = 0;
    for(i = 0; i < len; i++) {
        printf("buff[%d] = %c\n", i, buff[i]);
    }
}

int main() {
    int len = 10;
    unsigned char* buff = (unsigned char*) malloc(len * sizeof(unsigned char));
    int i = 0;
    for(i = 65; i < (len + 65); i++) {
        buff[i] = (unsigned char) i;
        printf("char = %c", (char) i);
    }
    printf("Displaying array in main.\n");
    for(i = 0; i < len; i++) {
        printf("buff[%d] = %u\n", i, 'buff[i]');
    }
    printf("Displaying array in func.\n");
    display(buff, len);
    return 0;
}
My question(s):
1. Is the implicit type conversion in the code above (as demonstrated by passing buff into the function display) safe? Is some weird behaviour likely to occur?
2. Given that I have little option but to stick to the functions as they are present, is there a "safe" way of converting an array of unsigned chars into chars?
The only problem with converting unsigned char * into char * (or vice versa) is that it's supposed to be an error. Fix it with a cast.
display((char *) buff, len);
Note: This cast is unnecessary:
printf("char = %c", (char) i);
This is fine:
printf("char = %c", i);
The %c formatter takes an int arg to begin with, since it is impossible to pass a char to printf() anyway (it will always get converted to int, or in an extremely unlikely case, unsigned int.)
You seem to worry a lot about type safety where there is no need for it. Since this is C and not C++, there is no strong typing system in place. Conversions from unsigned char to char are usually harmless, as long as the "sign bit" is never set. The key to avoiding problems is to actually understand them. The following problems/features exist in C:
The default char type has implementation-defined signedness. One should never make any assumptions of its signedness, nor use it in arithmetic of any kind, particularly not bit-wise operations. char should only be used for storing/printing ASCII letters. It should never be mixed with hex literals or there is a potential for subtle bugs.
The integer promotions in C implicitly promote all small integer types, among them char and unsigned char, to an integer type that can hold their result. This will in practice always be int.
Formally, pointer conversions between different types could be undefined behavior. But pointer conversions between unsigned char and char are in practice safe.
Character literals '\0' etc are of type int in C.
printf and similar functions default promote all character parameters to int.
You are also casting the void* result of malloc, which is completely pointless in C, and potentially harmful in older versions of the C standard that translated functions to "default int" if no function prototype was visible.
And then you have various weird logic-related bugs and bad practice, which I have fixed but won't comment in detail. Use this modified code:
#include <stdio.h>
#include <stdlib.h>

void display(const char *buff, int len) {
    for(int i = 0; i < len; i++) {
        printf("buff[%d] = %c\n", i, buff[i]);
    }
}

int main() {
    int len = 10;
    unsigned char* buff = malloc(len * sizeof(unsigned char));
    if(buff == NULL)
    {
        // error handling
    }
    char ch = 'A';
    for(int i = 0; i < len; i++)
    {
        buff[i] = (unsigned char)ch + i;
        printf("char = %c\n", buff[i]);
    }
    printf("\nDisplaying array in main.\n");
    for(int i = 0; i < len; i++) {
        printf("buff[%d] = %u\n", i, buff[i]);
    }
    printf("\nDisplaying array in func.\n");
    display((char*)buff, len);
    free(buff);
    return 0;
}
In C and C++, a conversion from any integral type to a same-or-larger integral type of the same signedness is guaranteed not to lose data. Casts between signed and unsigned types would generally create overflow and underflow hazards, but the buffer you're converting actually points to raw data whose type is really void*.
Why does code like this produce such a huge result when I give it the number 4293974227 (or higher)?
int main (int argc, char *argv[])
{
    unsigned long long int i;
    unsigned long long int z = atoi(argv[1]);
    unsigned long long int tmp1 = z;
    unsigned long long int *numbers = malloc (sizeof (unsigned long long int) * 1000);
    for (i=0; tmp1<=tmp1+1000; i++, tmp1++) {
        numbers[i] = tmp1;
        printf("\n%llu - %llu", numbers[i], tmp1);
    }
}
The result should start with the provided number, but it starts like this:
18446744073708558547 - 18446744073708558547
18446744073708558548 - 18446744073708558548
18446744073708558549 - 18446744073708558549
18446744073708558550 - 18446744073708558550
18446744073708558551 - 18446744073708558551
etc...
What's this crap??
Thanks!
atoi() returns int. If you need larger numbers, try strtol(), strtoll(), or their relatives.
atoi() returns (int), and can't deal with (long long). Try atoll(), or failing that atol() (the former is preferred).
You are printing signed integers as unsigned.
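A minimal sketch of the fix using strtoull() from <stdlib.h> (error checking via endptr/errno omitted for brevity):

#include <stdlib.h> /* strtoull */

/* instead of: unsigned long long int z = atoi(argv[1]); */
unsigned long long int z = strtoull(argv[1], NULL, 10);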
I need to convert a decimal number stored in an int to an array of bytes (i.e., stored in an unsigned char array).
Any clues?
If you know what you are doing:
int n = 12345;
char* a = (char*)&n;
Simplest possible approach - use sprintf (or snprintf, if you have it):
unsigned char a[SOMESIZE];
int n = 1234;
sprintf( (char *)a, "%d", n );
Or if you want it stored in binary:
unsigned char a[sizeof( int ) ];
int n = 1234;
memcpy( a, & n, sizeof( int ) );
This could work:

int n = 1234;
const int arrayLength = sizeof(int);
unsigned char *bytePtr = (unsigned char*)&n;
for(int i = 0; i < arrayLength; i++)
{
    printf("[%X]", bytePtr[i]);
}
Take care with the byte order, which depends on endianness.
I understand the problem as converting a number to a string representation (as Neil does).
Below is a simple way to do it without using any lib.
char a[12]; /* enough for a 32-bit int plus the terminator */
int i = 0;
int j = 0;
do { a[i++] = '0' + n % 10; n /= 10; } while (n);
a[i--] = 0;
for (; j < i; j++, i--) { int tmp = a[i]; a[i] = a[j]; a[j] = tmp; }
The question probably needs some clarification, as others obviously understood that you wanted the underlying bytes of the internal representation of the int (but if you want to do that kind of thing, you'd better use some fixed-size type defined in <stdint.h> instead of an int, or you won't know for sure the length of your byte array).
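For instance (a minimal sketch using uint32_t, so the array length is 4 on every platform):

#include <stdint.h>

uint32_t n = 1234;
unsigned char a[4];
a[0] = n & 0xFF;         /* least significant byte first */
a[1] = (n >> 8) & 0xFF;
a[2] = (n >> 16) & 0xFF;
a[3] = (n >> 24) & 0xFF; /* layout independent of host byte order */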
Warning: untested code.
This should be an endianness-agnostic conversion. It goes from low to high. There's probably a more efficient way to do it, but I can't think of it at the moment.
#include <limits.h> // CHAR_BIT, UCHAR_MAX
int num = 68465; // insert number here
unsigned char bytes[sizeof(int)];
for (int i=0; i<sizeof(int); i++)
{
bytes[i] = num & UCHAR_MAX;
num >>= CHAR_BIT;
}
I'm posting this mostly because I don't see another solution here whose results don't change depending on the endianness of your processor.