initialization makes pointer from integer without a cast - c

I've tried every possible combination of *'s and &'s, and I can't figure out what is wrong with this program or what is causing this compile-time error.
I'm getting this error:
error: initialization makes pointer from integer without a cast
compilation terminated due to -Wfatal-errors.
With this code:
int main() {
    unsigned int ten = (int) 10;
    unsigned short *a = (short) 89;
    unsigned int *p = (int) 1;
    unsigned short *q = (short) 10;
    unsigned short *s = (short) 29;

    function(ten, a, p, q, s);
    return 0;
}
The function prototype is:
int function(unsigned int a, unsigned short *const b,
             unsigned int *const c,
             unsigned short *const d,
             unsigned short *const e);
The function body is empty; I'm just trying to get it to compile with an input file.

These variables are pointers, and you are trying to assign integer (short) values to them:
unsigned short *a = (short) 89;
unsigned int *p = (int) 1;
unsigned short *q = (short) 10;
unsigned short *s = (short) 29;
That is why you get the compiler errors.
The question is not 100% clear, but maybe this is what you want:
unsigned int ten = (int) 10;
unsigned short a = (short) 89;
unsigned int p = (int) 1;
unsigned short q = (short) 10;
unsigned short s = (short) 29;
function(ten, &a, &p, &q, &s);

The problem is that you are initializing pointers with integer constants for a, p, q and s (i.e., saying assign the value 89 to type int *, where 89 is of type int, not int *). You probably don't want those variables to be pointers; you can pass in pointers to those variables instead:
int main() {
    unsigned int ten = (int) 10;
    unsigned short a = (short) 89;
    unsigned int p = (int) 1;
    unsigned short q = (short) 10;
    unsigned short s = (short) 29;

    function(ten, &a, &p, &q, &s);
    return 0;
}
Without more information about the function, it is hard to tell whether this is the correct form; it really depends on what that function does.

extracting the mantissa, exponent and sign bit from a float

This code is extracting the mantissa and exponent, but why are we &'ing with ptr at first, and what's the value of ptr at first? 0?

void getSME( int& s, int& m, int& e, float number )
{
    unsigned int* ptr = (unsigned int*)&number;
    // why did this code & ptr with number? was this necessary,
    // and what's ptr's current value? wouldn't this result in 0?
    s = *ptr >> 31;           // getting the sign bit
    e = *ptr & 0x7f800000;    // masking with exponent
    e >>= 23;                 // then extracting the exponent
    m = *ptr & 0x007fffff;    // extracting mantissa
}
Although you have tagged C, this is C++ code, not C.
unsigned int* ptr = (unsigned int*)&number; is an attempt to get the bits that encode the floating-point value in number. However, this is not a correct method in either C or C++. Better code would be unsigned int x; memcpy(&x, &number, sizeof x);. (For C++, use std::memcpy.)
In &number, & is a unary operator that produces the address of its operand, so &number is the address of number. It is a pointer to a float.
Then (unsigned int*) is a cast that converts this to a pointer to an unsigned int.
Then using *ptr uses this pointer to get an unsigned int from the address. The intent is that the bits that encode the float will be loaded from memory and interpreted as an unsigned int, which allows operating on those bits with the operators >> and &.
By using unsigned int x; memcpy(&x, &number, sizeof x); instead, the C and C++ standards ensure the bytes that represent number are copied into x. This avoids various restrictions and semantic problems in the language standards. It does require that unsigned int be the desired size, 32 bits. (The code also expects that the IEEE-754 binary32 format is used for float.)
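As a minimal, self-contained sketch of that idiom (my addition; it assumes unsigned int is 32 bits and float uses the IEEE-754 binary32 format):
#include <stdio.h>
#include <string.h>

int main(void)
{
    float number = 1.5f;
    unsigned int x;                 /* assumed to be 32 bits */
    memcpy(&x, &number, sizeof x);  /* copy the bytes that encode the float */
    printf("bits of %g: 0x%08X\n", number, x);  /* 0x3FC00000 for 1.5f */
    return 0;
}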
This is not C code, and it uses horrible pointer punning, violating the strict aliasing rule.
C code:
void getSME( int *s, int *m, int *e, float number )
{
    union
    {
        unsigned int num;
        float fnum;
    } fu = { .fnum = number };

    *s = fu.num >> 31;           // getting the sign bit
    *e = fu.num & 0x7f800000;    // masking with exponent
    *e >>= 23;                   // then extracting the exponent
    *m = fu.num & 0x007fffff;    // extracting mantissa
}
or
void getSME( int *s, int *m, int *e, float number )
{
    unsigned int unum;
    memcpy(&unum, &number, sizeof(unum));   // requires <string.h>

    *s = unum >> 31;             // getting the sign bit
    *e = unum & 0x7f800000;      // masking with exponent
    *e >>= 23;                   // then extracting the exponent
    *m = unum & 0x007fffff;      // extracting mantissa
}
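For completeness, a minimal caller that can be paired with either C version above (my addition; it assumes a 32-bit unsigned int and IEEE-754 binary32 float):
#include <stdio.h>

int main(void)
{
    int s, m, e;
    getSME(&s, &m, &e, 1.5f);   /* 1.5f is encoded as 0x3FC00000 */
    printf("sign=%d exponent=%d mantissa=0x%06X\n", s, e, (unsigned)m);
    /* expected: sign=0 exponent=127 mantissa=0x400000 */
    return 0;
}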

Program which displays all the signed short numbers

EDIT: I deleted by mistake the remarks I wrote about using short and char being kind of obsolete / inefficient in modern programming. This one is just for practicing basic stuff.
This program creates and prints the series of signed short values, starting from their equivalents in the unsigned short "space/world", beginning at value 0.
Example: on a machine where short is 16 bits:
unsigned short : 0 1 2 .... 65535
=> signed short : 0 1 2 ... 32767 -32768 -32767 ... -2 -1
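As a quick illustration of that mapping (my addition; strictly speaking, converting an out-of-range value to a signed type gives an implementation-defined result, but two's-complement wraparound is what virtually every machine does):
#include <stdio.h>

int main(void)
{
    unsigned short u = 65535;
    short s = (short)u;          /* wraps to -1 on two's-complement machines */
    printf("%u -> %d\n", u, s);  /* prints: 65535 -> -1 */
    return 0;
}
The full program: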
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

// Initialize memory pointed to by p with values 0 1 ... n
//Assumption : the value of n can be converted to
// short int (without over/under-flow)
unsigned int initArr (short int *p, unsigned int n);

int main (void)
{
    const unsigned int lastNumInSeq = USHRT_MAX;
    short *p_arr = (short *) malloc ((lastNumInSeq + 1) * sizeof (short));
    short int lastValSet = initArr (p_arr, lastNumInSeq); // returns the "max" val written

    // for (unsigned i = 0; i < numOfElem; i++)
    //     printf ("[%d]=%d \n", i, (*(p_arr + i)));
    printf ("lastValSet = %d *(p_arr + lastNumInSeq) = %d ",
            lastValSet, *(p_arr + lastNumInSeq));
    return 0;
}

unsigned int initArr (short *p, unsigned int n)
{
    unsigned int offset, index = 0;
    while (index <= n) {
        offset = index;
        *(p + offset) = ++index - 1;
    }
    return offset;
}
There are some other cleanups needed.
The function signature should change from
short initArr (short *p, unsigned int n);
to
unsigned int initArr (short *p, unsigned int n);
The variable 'lastValSet' should change its type to unsigned int.
This comment is also confusing:
//Assumption : the value of n can be converted to
// short int (without over/under-flow)
It should be something like:
// Assumption : the value of n, which is of type unsigned int, can be
// converted to short int (without over/under-flow) only up to 32767,
// which is the max value for a variable of short type.
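Putting those cleanups together, one possible corrected version (a sketch; the malloc check and the free are my additions):
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

unsigned int initArr(short *p, unsigned int n);

int main(void)
{
    const unsigned int lastNumInSeq = USHRT_MAX;
    short *p_arr = malloc((lastNumInSeq + 1) * sizeof(short));
    if (p_arr == NULL)
        return EXIT_FAILURE;

    unsigned int lastValSet = initArr(p_arr, lastNumInSeq); // last index written
    printf("lastValSet = %u *(p_arr + lastNumInSeq) = %d\n",
           lastValSet, *(p_arr + lastNumInSeq));

    free(p_arr);
    return 0;
}

unsigned int initArr(short *p, unsigned int n)
{
    unsigned int offset, index = 0;
    while (index <= n) {
        offset = index;
        *(p + offset) = ++index - 1;
    }
    return offset;
}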

casting unsigned char to char would result in different binary representations?

I think the title is pretty self-explanatory, but basically what I'm saying is that, if I have the following instruction:
a = (char) b;
knowing that a's type is char and b's is unsigned char, can that instruction result in making a and b have different binary representations?
The type char can be either signed or unsigned. Char types have no padding, so all bits are value bits.
If char is unsigned, then the value bits of a will be the same as those of b.
If char is signed, then...
if the value of b is representable by char, the common value bits of a and b will be the same.
otherwise, the conversion from unrepresentable unsigned char value to char results in an implementation-defined result.
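To make the boundary case concrete, here is a small sketch (my addition); 200 is not representable in a signed 8-bit char, so the converted value is implementation-defined, although common two's-complement platforms simply keep the bit pattern:
#include <stdio.h>

int main(void)
{
    unsigned char b = 200;   /* not representable if char is signed and 8 bits */
    char a = (char)b;        /* implementation-defined result in that case */
    /* On common two's-complement ABIs this prints 200, -56, 0xC8:
       the value changes but the bit pattern does not. */
    printf("b = %d, a = %d, bits = 0x%02X\n", b, a, (unsigned char)a);
    return 0;
}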
The answer, in general, is no, there is no difference. Here you can test it yourself; just supply the respective values for a and b:
#include <stdio.h>
#include <string.h>

const char *byte_to_binary(int x)
{
    static char b[9];
    b[0] = '\0';

    int z;
    for (z = 128; z > 0; z >>= 1) {
        strcat(b, ((x & z) == z) ? "1" : "0");
    }
    return b;
}
int main(void) {
    unsigned char b = -7;
    char a = -7;

    printf("1. %s\n", byte_to_binary(a));
    a = (char) b;
    printf("2. %s\n", byte_to_binary(a));
    return 0;
}

conversion of BCD to unsigned char

I have an unsigned char array containing the following value: "\x00\x91\x12\x34\x56\x78\x90";
That is a number being sent in hexadecimal format.
Additionally, it is in BCD format: 00 in one byte, 91 in another byte (8 bits).
On the other side I need to decode this value as 00911234567890.
I'm using the following code:
unsigned int conver_bcd(char *p, size_t length)
{
    unsigned int convert = 0;
    while (length--)
    {
        convert = convert * 100 + (*p >> 4) * 10 + (*p & 15);
        ++p;
    }
    return convert;
}
However, the result I get is 1430637214.
My understanding is that I'm sending hexadecimal values (\x00\x91\x12\x34\x56\x78\x90) and my BCD conversion is acting on the decimal values.
Can you please help me so that I can receive the output 00911234567890 as a char string?
It looks like you are simply overflowing your unsigned int, which is presumably 32 bits on your system. Change:
unsigned int convert =0;
to:
uint64_t convert = 0;
in order to guarantee a 64-bit quantity for convert.
Make sure you add:
#include <stdint.h>
Cast char to unsigned char, then print it with %02x.
#include <stdio.h>

int main(void)
{
    char array[] = "\x00\x91\x12\x34\x56\x78\x90";
    int size = sizeof(array) - 1;
    int i;

    for (i = 0; i < size; i++) {
        printf("%02x", (unsigned char)array[i]);
    }
    return 0;
}
Change the return type to unsigned long long to ensure you have a large enough integer.
Change the type of p to an unsigned type.
Print the value with leading zeros.
unsigned long long conver_bcd(const char *p, size_t length) {
    const unsigned char *up = (const unsigned char *) p;
    unsigned long long convert = 0;
    while (length--) {
        convert = convert * 100 + (*up >> 4) * 10 + (*up & 15);
        ++up;
    }
    return convert;
}
const char *p = "\x00\x91\x12\x34\x56\x78\x90";
size_t length = 7;
printf( "%0*llu\n", (int) (length*2), conver_bcd(p, length));
// 00911234567890

(unsigned *) better than (unsigned int *) for parsing memory? [duplicate]

This question already has answers here: Difference between unsigned and unsigned int in C.
I understand the difference between unsigned and unsigned int. But my question is a bit different.
I am ioremap()ing (Linux) a particular memory region, and I want to read that memory.
I did the following thing:
func()
{
    unsigned int *p;
    p = (unsigned int *) ioremap(ADDR, 8 * sizeof(unsigned int));
    for (i = 0; i <= 7; i++)
        pr_err("p[%d] = %d", i, p[i]);
}
This works perfectly. But I have seen standard code doing the same thing using (unsigned *) instead of (unsigned int *); that is, p is declared as unsigned *p.
func()
{
    unsigned *p;
    p = (unsigned *) ioremap(ADDR, 8 * sizeof(unsigned));
    for (i = 0; i <= 7; i++)
        pr_err("p[%d] = %d", i, p[i]);
}
I would like to know whether this is good programming practice (platform-independent code?). If yes, please state the reason.
unsigned and unsigned int are exactly the same type.
Therefore, unsigned * and unsigned int * are also exactly the same type.
Similarly, long is short for long int, int is short for signed int, etc. There is no difference between one and the other. The only exception to note is that whether plain char is signed or unsigned is implementation-defined, so it's not the same type as signed char.
unsigned and unsigned int are the same type, and so are pointers to them. The int is implicit.
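A minimal sketch (my addition) showing the two spellings are interchangeable; the pointer assignments below compile without a cast or warning precisely because the types are identical:
#include <stdio.h>

int main(void)
{
    unsigned int x = 42;
    unsigned *p = &x;       /* unsigned * and unsigned int * are the same type, */
    unsigned int *q = p;    /* so assignment works in both directions */

    printf("%u %u\n", *p, *q);   /* prints: 42 42 */
    return 0;
}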
