casting unsigned char to char would result in different binary representations? - c

I think the title is pretty self explanatory but basically what I'm saying is that, if I have the following instruction:
a = (char) b;
knowing that a's type is char and b's is unsigned char, can that instruction result in making a and b have different binary representations?

The type char can be either signed or unsigned. Char types have no padding, so all bits are value bits.
If char is unsigned, then the value bits of a will be the same as those of b.
If char is signed, then...
if the value of b is representable by char, the common value bits of a and b will be the same;
otherwise, the conversion from an unrepresentable unsigned char value to char yields an implementation-defined result.
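As an illustrative sketch of the two signed-char cases (this assumes an 8-bit char and two's-complement wrap-around for the implementation-defined case, which is what most implementations do):

#include <stdio.h>

int main(void)
{
    unsigned char in_range = 100;   // representable by an 8-bit signed char
    unsigned char out_range = 200;  // not representable if CHAR_MAX is 127

    char a = (char) in_range;   // value preserved: a == 100
    char b = (char) out_range;  // implementation-defined; commonly wraps to -56

    printf("%d %d\n", a, b);
    return 0;
}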

The answer, in general, is no: there is no difference. Here you can test it yourself; just supply the respective values for a and b.
#include <stdio.h>
#include <string.h>

const char *byte_to_binary(int x)
{
    static char b[9];
    b[0] = '\0';

    int z;
    for (z = 128; z > 0; z >>= 1)
        strcat(b, ((x & z) == z) ? "1" : "0");

    return b;
}
int main(void)
{
    unsigned char b = -7;
    char a = -7;
    printf("1. %s\n", byte_to_binary(a));
    a = (char) b;
    printf("2. %s\n", byte_to_binary(a));
    return 0;
}

Related

this code is extracting the mantissa and exponent, but why are we &'ing with ptr at first, and what's the value of ptr at first? 0?

void getSME( int& s, int& m, int& e, float number )
{
    unsigned int* ptr = (unsigned int*)&number;
    // why did this code & ptr with number? was this necessary, and what's
    // ptr's current value? wouldn't this result in 0?
    s = *ptr >> 31;          // getting the sign bit
    e = *ptr & 0x7f800000;   // masking with exponent
    e >>= 23;                // then extracting the exponent
    m = *ptr & 0x007fffff;   // extracting mantissa
}
Although you have tagged C, this is C++ code, not C.
unsigned int* ptr = (unsigned int*)&number; is an attempt to get the bits that encode the floating-point value in number. However, this is not a correct method in either C or C++. Better code would be unsigned int x; memcpy(&x, &number, sizeof x);. (For C++, use std::memcpy.)
In &number, & is a unary operator that produces the address of its operand, so &number is the address of number. It is a pointer to a float.
Then (unsigned int*) is a cast that converts this to a pointer to an unsigned int.
Then using *ptr uses this pointer to get an unsigned int from the address. The intent is that the bits that encode the float will be loaded from memory and interpreted as an unsigned int, which allows operating on those bits with the operators >> and &.
By using unsigned int x; memcpy(&x, &number, sizeof x); instead, the C and C++ standards ensure the bytes that represent number are copied into x. This avoids various restrictions and semantic problems in the language standards. It does require that unsigned int be the desired size, 32 bits. (The code also expects that the IEEE-754 binary32 format is used for float.)
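As a minimal sketch of that approach (assuming unsigned int and float are both 32 bits; the helper name float_bits is made up for this example):

#include <string.h>

// Returns the bits that encode a float, reinterpreted as an unsigned int.
// Assumes sizeof(unsigned int) == sizeof(float).
unsigned int float_bits(float number)
{
    unsigned int x;
    memcpy(&x, &number, sizeof x);  // copies the object representation
    return x;
}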
This is not C code.
It uses horrible pointer punning, violating the strict aliasing rule.
C code:
void getSME( int *s, int *m, int *e, float number )
{
    union
    {
        unsigned int num;
        float fnum;
    } fu = { .fnum = number };

    *s = fu.num >> 31;          // getting the sign bit
    *e = fu.num & 0x7f800000;   // masking with exponent
    *e >>= 23;                  // then extracting the exponent
    *m = fu.num & 0x007fffff;   // extracting mantissa
}
or
void getSME( int *s, int *m, int *e, float number )
{
    unsigned int unum;
    memcpy(&unum, &number, sizeof(unum));

    *s = unum >> 31;          // getting the sign bit
    *e = unum & 0x7f800000;   // masking with exponent
    *e >>= 23;                // then extracting the exponent
    *m = unum & 0x007fffff;   // extracting mantissa
}

Double from unsigned int[2]?

I have a 64-bit number written as two 32-bit unsigned ints: unsigned int[2]. Element [0] is the most-significant word, and element [1] is the least-significant. How would I convert it to double?
double d_from_u2(unsigned int*);
memcpy it from your source array to a double object in the proper order. E.g., if you want to swap the unsigned parts:
unsigned src[2] = { ... };
double dst;
assert(sizeof dst == sizeof src);
memcpy(&dst, &src[1], sizeof(unsigned));
memcpy((unsigned char *) &dst + sizeof(unsigned), &src[0], sizeof(unsigned));
Of course, you can always just reinterpret both source and destination objects as arrays of unsigned char and copy them byte by byte in any order you wish:
unsigned src[2] = { ... };
double dst;
unsigned char *src_bytes = (unsigned char *) src;
unsigned char *dst_bytes = (unsigned char *) &dst;
assert(sizeof dst == 8 && sizeof src == 8);
dst_bytes[0] = src_bytes[7];
dst_bytes[1] = src_bytes[6];
...
dst_bytes[7] = src_bytes[0];
(The second example is not intended to be equivalent to the first one.)
There are several ways to copy the bits of your two integers into an object of type double.
At the lowest level, you can convert your input pointer to a [unsigned] char *, create a [unsigned] char * to the first byte of the return value, and copy between those by whatever means you choose. This provides you every opportunity to adjust byte order as may be needed -- for example, although your array is ordered most-significant word first, the order of the bytes within those words might not be what you need.
In the event that you need the bytes to be transferred into your double most-significant byte first, and that you do not want to depend on the machine byte order, you might do this:
double d_from_u2(unsigned int *in) {
    double result;
    unsigned char *result_bytes = (unsigned char *) &result;

    for (int i = 0; i < 4; i++) {
        result_bytes[i]     = in[0] >> (24 - 8 * i);
        result_bytes[i + 4] = in[1] >> (24 - 8 * i);
    }
    return result;
}
Using arithmetic (shifts, in this case) allows you to operate on the numeric values of the input independently of details of numeric representation.
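A small usage sketch (the input words are the big-endian encoding of 1.0 and are only illustrative; it assumes 32-bit unsigned int and an 8-byte IEEE-754 double). Note that the function fixes the byte layout in memory, so the printed value is 1.0 only on a machine that stores doubles most-significant byte first:

#include <stdio.h>

double d_from_u2(unsigned int *in) {  // as defined above
    double result;
    unsigned char *result_bytes = (unsigned char *) &result;
    for (int i = 0; i < 4; i++) {
        result_bytes[i]     = in[0] >> (24 - 8 * i);
        result_bytes[i + 4] = in[1] >> (24 - 8 * i);
    }
    return result;
}

int main(void)
{
    unsigned int words[2] = { 0x3FF00000u, 0x00000000u };  // bits of 1.0, MSW first
    double d = d_from_u2(words);
    unsigned char *bytes = (unsigned char *) &d;
    for (int i = 0; i < 8; i++)
        printf("%02x ", bytes[i]);  // always: 3f f0 00 00 00 00 00 00
    printf("\n%f\n", d);            // 1.000000 on a big-endian machine
    return 0;
}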
Here is a solution that works without memcpy but using a union:
#include "stdio.h"
#include "stdint.h"
double d_from_u2(unsigned int* v) {
union {
int32_t x[2];
int64_t y;
} u = { .x = { v[1], v[0] }};
printf("%llu\n", u.y); // 1311768467463794450
return (double)u.y;
}
int main(void) {
    unsigned int x[2];
    x[0] = 0x12345678;
    x[1] = 0x9abcef12;
    printf("%f\n", d_from_u2(x)); // 1311768467463794432.000000
    return 0;
}
See demo. It initializes the uint32_t[2] array in the union and uses the int64_t member to convert it to a double. The order of the initialization depends on which machine (little or big endian) it runs on and where the values come from (here, the least-significant word is placed first). Note that this converts the 64-bit integer's value to a double, rather than reinterpreting its bits as a double, which is why the printed result loses the low digits.

copying between variables in C

I want to copy an unsigned short value to a char[2] variable. I presume the copying is straightforward since both of them have the same size (16 bits). Here's my code:
#include <stdlib.h>
#include <stdio.h>
int main()
{
    unsigned short a = 63488; // 16-bit value, which is 1111100000000000
    unsigned char* b = malloc(2);
    *b = a;
    printf("%d\n", b[0]); // I expect the lower part here, which is 0
    printf("%d\n", b[1]); // I expect the higher part here, which is 11111000
    return 0;
}
But my result shows zero values. Do I have to copy each part separately? Isn't there any other easier method to do that?
Thank you
If you just want to interpret the short as a char array, you don't even need to copy. Just cast:
#include <stdio.h>

int main()
{
    size_t i;
    unsigned short a = 63488;
    unsigned char* b = (unsigned char*)&a; // Cast the address of a to
                                           // a pointer-to-unsigned-char
    printf("Input value: %d (0x%X)\n", a, a);
    printf("Each byte:\n");
    for (i = 0; i < sizeof(a); i++)
        printf("b[%zu] = %d (0x%X)\n", i, b[i], b[i]);
    return 0;
}
Output:
$ gcc -Wall -Werror so1.c && ./a.out
Input value: 63488 (0xF800)
Each byte:
b[0] = 0 (0x0)
b[1] = 248 (0xF8)
Note that I ran this on my x86 PC, which is a little endian machine, which is why the first byte is the low byte of the input.
Also note that my code also never makes assumptions about the size of short.
Try it like this:
    memcpy(b, &a, sizeof(a));
Or:
    b[0] = a & 0xFF;
    b[1] = (a >> 8) & 0xFF;
Note that *b is of type unsigned char, so the value assigned to it must fit in that type or it will be truncated, which is what happens with *b = a in your code.
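Putting it together, a minimal corrected version of the original program (a sketch; it assumes unsigned short is 16 bits and a little-endian machine for the commented output, as in the question):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    unsigned short a = 63488; // 1111100000000000 in binary
    unsigned char *b = malloc(sizeof a);
    if (b == NULL)
        return 1;

    memcpy(b, &a, sizeof a);  // copy both bytes, not just one
    printf("%d\n", b[0]);     // low byte on a little-endian machine: 0
    printf("%d\n", b[1]);     // high byte on a little-endian machine: 248
    free(b);
    return 0;
}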

Copying Ascii Value to int

I have code snippet as Below
unsigned char p = 0;
unsigned char t[4] = {'a','b','c','d'};
unsigned int m = 0;

for (p = 0; p < 4; p++)
{
    m |= t[p];
    printf("%c", m);
    m = m << 2;
}
Can anybody help me in solving this? Consider that I have the ASCII values abcd stored in an array t[]. I want to store the same value in m; m is my unsigned int variable, which stores the major number. When I copy the array into m and print m, m should print abcd. Can anybody state their logic?
As I understand you, you want to encode the 4 characters into a single int.
Your bit shifting is not correct. You need to shift by 8 bits rather than 2, and you need to perform the shift before the bitwise OR; otherwise you shift too far.
And it makes more sense, in my view, to print the character rather than m.
#include <stdio.h>

int main(void)
{
    const unsigned char t[4] = {'a','b','c','d'};
    unsigned int m = 0;

    for (int p = 0; p < 4; p++)
    {
        m = (m << 8) | t[p];
        printf("%c", t[p]);
    }
    printf("\n%x", m);
    return 0;
}
Why not just look at the t array as an unsigned int?:
unsigned int m = *(unsigned int*)t;
(Be aware that this cast runs into the strict aliasing rule mentioned earlier, and the resulting value depends on the machine's byte order.)
Or you could use a union for nice access to the same memory block in two different ways, which I think is better than shifting bits manually.
Below is a union example. With unions, both the t char array and the unsigned int are stored in the same memory blob. You get a nice interface to each, and it lets the compiler do the bit shifting (more portable, I guess):
#include <stdio.h>

typedef union {
    unsigned char t[4];
    unsigned int m;
} blob;

int main()
{
    blob b;
    b.t[0] = 'a';
    b.t[1] = 'b';
    b.t[2] = 'c';
    b.t[3] = 'd';

    unsigned int m = b.m; /* m holds the value of blob b */
    printf("%u\n", m);    /* this is the t array looked at as if it were an unsigned int */

    unsigned int n = m;   /* copy the unsigned int to another one */
    blob c;
    c.m = n;              /* copy that to a different blob */

    int i;
    for (i = 0; i < 4; i++)
        printf("%c\n", c.t[i]); /* even after copying it as an int, you can still
                                   look at it as a char array, if you put it into
                                   the blob union -- no manual bit manipulation */
    printf("%zu\n", sizeof(c)); /* the blob has the byte size of an int */
    return 0;
}
Simply assign t[p] to m:
m = t[p];
This will implicitly promote the char to unsigned int.
unsigned char p = 0;
unsigned char t[4] = {'a','b','c','d'};
unsigned int m = 0;

for (p = 0; p < 4; p++)
{
    m = t[p];
    printf("%c", m);
}
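For completeness, a memcpy-based sketch in the same spirit (it assumes unsigned int is 4 bytes; like the union approach, the byte order within m follows the machine's endianness):

#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned char t[4] = {'a', 'b', 'c', 'd'};
    unsigned int m;

    memcpy(&m, t, sizeof m);  // copy the four bytes into m
    printf("%x\n", m);        // 64636261 on a little-endian machine
    return 0;
}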

conversion of BCD to unsigned char

I have an unsigned char array containing the following value: "\x00\x91\x12\x34\x56\x78\x90";
That is a number being sent in hexadecimal format.
Additionally, it is in BCD format: 00 in one byte, 91 in another byte (8 bits each).
On the other side I require to decode this value as 00911234567890.
I'm using the following code:
unsigned int conver_bcd(char *p, size_t length)
{
    unsigned int convert = 0;
    while (length--)
    {
        convert = convert * 100 + (*p >> 4) * 10 + (*p & 15);
        ++p;
    }
    return convert;
}
However, the result which I get is 1430637214.
What I understood was that I'm sending hexadecimal values (\x00\x91\x12\x34\x56\x78\x90) and my bcd conversion is acting upon the decimal values.
Can you please help me so that I can receive the output as 00911234567890 in char form?
Regards
Karan
It looks like you are simply overflowing your unsigned int, which is presumably 32 bits on your system. Change:
unsigned int convert =0;
to:
uint64_t convert = 0;
in order to guarantee a 64 bit quantity for convert.
Make sure you add:
#include <stdint.h>
Cast char to unsigned char, then print it with %02x.
#include <stdio.h>

int main(void)
{
    char array[] = "\x00\x91\x12\x34\x56\x78\x90";
    int size = sizeof(array) - 1;
    int i;
    for (i = 0; i < size; i++) {
        printf("%02x", (unsigned char) array[i]);
    }
    return 0;
}
Change the return type to unsigned long long to ensure you have a large enough integer.
Change p to an unsigned type.
Print the value with leading zeros.
unsigned long long conver_bcd(const char *p, size_t length) {
    const unsigned char *up = (const unsigned char *) p;
    unsigned long long convert = 0;
    while (length--) {
        convert = convert * 100 + (*up >> 4) * 10 + (*up & 15);
        ++up;
    }
    return convert;
}
const char *p = "\x00\x91\x12\x34\x56\x78\x90";
size_t length = 7;
printf( "%0*llu\n", (int) (length*2), conver_bcd(p, length));
// 00911234567890
