How to count the characters of a hexadecimal number - c

I'm fairly new to C, so this might be a dumb question.
I want to use the bitwise operator & to do bit masking.
For example:
If the inputs are 0x00F7 and 0x000F, my program should return 0x0007. However, the output of 0x00F7 & 0x000F is just 7, and 0x0077 & 0x00FF is just 77. Is there any way I can count the characters of a hexadecimal number, so that I know how many zeros to print?
printf("%X",0x077&0x00FF);

You don't need to count.
You can get printf to pad the number for you, just specify the maximum length like so:
printf("%04X", 0x77 & 0xff);
Note that when you write an integer literal (no quotes) it makes no difference whether you write 0x0077 or 0x77; the values are identical.
Only when you use printf to format the number as output can zeros be added for visibility.
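A quick check of both points (a minimal sketch, not from the original answer):
#include <stdio.h>

int main(void)
{
    printf("%d\n", 0x0077 == 0x77); /* prints 1: both literals are the same integer */
    printf("%04X\n", 0x77 & 0xFF);  /* prints 0077: zeros are added only on output */
    return 0;
}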

unsigned char uc = 0x077 & 0x00FF;
unsigned short us = 0x077 & 0x00FF;
unsigned int ui = 0x077 & 0x00FF;
unsigned long long ull = 0x077 & 0x00FF;
printf("%0*X\n", (int)sizeof(uc) * 2, uc);
printf("%0*X\n", (int)sizeof(us) * 2, us);
printf("%0*X\n", (int)sizeof(ui) * 2, ui);
printf("%0*llX\n", (int)sizeof(ull) * 2, ull);
and the output is:
77
0077
00000077
0000000000000077

Related

32-bits as hex number in C with simple function call

I would like to know if I can get away with using printf to print 32 bits of incoming binary data from a microcontroller as a hexadecimal number. I have already collected the bits into a large integer variable and I'm trying the "%x" option in printf, but all I seem to get are 8-bit values, although I can't tell whether that's a limitation of printf or whether my microcontroller is actually returning only that much.
Here's my code to receive data from the microcontroller:
printf("Receiving...\n");
unsigned int n=0,b=0;
unsigned long lnum=0;
b=iolpt(1); //call to tell micro we want to read 32 bits
for (n=0;n<32;n++){
b=iolpt(1); //read bit one at a time
printf("Bit %d of 32 = %d\n",n,b);
lnum<<1; //shift bits in our big number left by 1 position
lnum+=b; //and add new value
}
printf("\n Data returned: %x\n",lnum); //always returns 8-bits
The iolpt() function always returns the bit read from the microcontroller and the value returned is a 0 or 1.
Is my idea of using %x acceptable for a 32-bit hexadecimal number, or should I attempt something like "%lx" instead of "%x" to try to represent a long in hex, even though it's documented nowhere? Or is printf the wrong function for 32-bit hex? If it's the wrong function, is there a function that works, or am I forced to break up my long number into four 8-bit numbers first?
printf("Receiving...\n");
iolpt(1); // Tell micro we want to read 32 bits.
/* Is this correct? It looks pretty simple to be
initiating a read. It is the same as the calls
below, iolpt(1), so what makes it different?
Just because it is first?
*/
unsigned long lnum = 0;
for (unsigned n = 0; n < 32; n++)
{
unsigned b = iolpt(1); // Read bits one at a time.
printf("Bit %u of 32 = %u.\n", n, b);
lnum <<= 1; // Shift bits in our big number left by 1 position.
// Note this was changed to "lnum <<= 1" from "lnum << 1".
lnum += b; // And add new value.
}
printf("\n Data returned: %08lx\n", lnum);
/* Use:
0 to request leading zeros (instead of the default spaces).
8 to request a field width of 8.
l to specify long.
x to specify unsigned and hexadecimal.
*/
Fixed:
lnum<<1; to lnum <<= 1;.
%x in final printf to %08lx.
%d in printf in loop to %u, in two places.
Also, cleaned up:
Removed b= in initial b=iolpt(1); since it is unused.
Moved definition of b inside loop to limit its scope.
Moved definition of n into for to limit its scope.
Used proper capitalization and punctuation in comments to improve clarity and aesthetics.
Would something like that work for you?
printf("Receiving...\n");
unsigned int n=0,b=0;
unsigned long lnum=0;
b=iolpt(1); //call to tell micro we want to read 32 bits
for (n=0;n<32;n++){
b=iolpt(1); //read bit one at a time
printf("Bit %d of 32 = %d\n",n,b);
lnum<<1; //shift bits in our big number left by 1 position
lnum+=b; //and add new value
}
printf("\n Data returned: %#010lx\n",lnum); //now returns 32-bit

Convert integer char[] to hex char[] avr microcontroller

I'm looking to convert a input variable of this type:
char var[] = "9876543210000";
to a hex equivalent char
char hex[] = "08FB8FD98210"
I can perform this with, e.g., the following code:
long long int freq;
char hex[13]; // 12 hex digits plus the terminating null
freq = strtoll(var, NULL, 10);
sprintf(hex, "%llX", freq);
However, I'm doing this on an AVR microcontroller, so strtoll is not available, only strtol (avr-libgcc). I'm therefore restricted to 32-bit integers, which is not enough. Any ideas?
Best regards
Simon
This method works fine only with positive numbers, so if you have a minus sign, just save it before doing the rest. Divide the string number into two halves, let's say six digits each, to get two decimal numbers, uint32_t left_part; and uint32_t right_part;, holding the two halves of your number. You can then construct your 64-bit number as follows:
uint64_t number = (uint64_t) left_part * 1000000 + right_part;
If you have the same problem on the printing side, that is, you cannot print more than 32-bit hex numbers, you can take left_part = number >> 32; and right_part = number & 0xffffffff; and finally print both with (using the PRIx32 macros from <inttypes.h>, since plain %x expects an int, which is only 16 bits on AVR):
if (left_part) printf("%" PRIx32 "%08" PRIx32, left_part, right_part);
else printf("%" PRIx32, right_part);
The test keeps the result from being forced to 8 digits when it is less than 0x100000000.
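A sketch of the parsing half of that idea, assuming only strtol is available and each half fits in a 32-bit long; the helper name and the fixed six-digit split are illustrative, not from the original answer:
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Parse a decimal string too long for strtol by splitting it:
   the last six digits become right_part, the rest left_part. */
uint64_t parse_u64_in_halves(const char *s)
{
    size_t len = strlen(s);
    size_t split = (len > 6) ? len - 6 : 0;
    char left_str[16];

    memcpy(left_str, s, split);
    left_str[split] = '\0'; /* an empty string parses as 0 */

    uint32_t left_part  = (uint32_t)strtol(left_str, NULL, 10);
    uint32_t right_part = (uint32_t)strtol(s + split, NULL, 10);

    return (uint64_t)left_part * 1000000u + right_part;
}
For "9876543210000" this yields 9876543 * 1000000 + 210000, i.e. the original value.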
It looks like you might have to parse the input one digit at a time, and save the result into a uint64_t variable.
Initialize the uint64_t result variable to 0. In a loop, multiply the result by 10 and add the next digit converted to an int.
Now to print the number out in hex, you can use sprintf() twice. First print result >> 32 as a long unsigned int, then print (long unsigned int)result at &hex[4] (or 6 or 8 or wherever) to pick up the remaining 32 bits.
You will need to specify the format correctly to get the characters into the correct places in the array. Perhaps just pad it with 0s? Don't forget about room for the trailing null character.
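For instance, a minimal sketch of that approach (variable names follow the question; the 16-digit zero-padded layout with the low half at &hex[8] is one possible choice):
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    char var[] = "9876543210000";
    char hex[17]; /* 16 hex digits + trailing null */
    uint64_t result = 0;

    /* Parse one decimal digit at a time. */
    for (const char *p = var; *p >= '0' && *p <= '9'; p++)
        result = result * 10 + (uint64_t)(*p - '0');

    /* Two sprintf() calls: high 32 bits first, low 32 bits at &hex[8]. */
    sprintf(hex, "%08lX", (long unsigned int)(result >> 32));
    sprintf(&hex[8], "%08lX", (long unsigned int)(result & 0xFFFFFFFFu));

    printf("%s\n", hex); /* prints 000008FB8FD98210 */
    return 0;
}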
Change this:
freq = strtoll(var,NULL,10);
To this:
sscanf(var,"%lld",&freq);

Converting Long -> Hex, then assigning that Hex to a Char in C

I am using a function that calculates a 16bit CRC checksum.
The function produces a long containing the checksum (in base-10 number format). Of course, this can be printed to the console in its hex equivalent as follows:
printf("Checksum: 0x%x\n", crctablefast((unsigned char *)string, datalength));
For a given 20-byte char array being checked, it would produce the checksum 23277, printed in hex format as:
Checksum: 5AED
I need to store the checksum as chars in the 21st and 22nd places in the char array, as follows:
Char [20] = 0x5A
Char [21] = 0xED
The problem is that with functions like scanf and sscanf, the best I can do is assign the characters literally, as follows:
Char [20] = "0x5A"
Char [21] = "0xED"
...which is no good.
What can I do to take two characters at a time, and use those to assign a hex value to a char? Or is there a much easier way in general?
Thank you in advance!
Use bit masks:
ch[20] = (crc >> 8) & 0xff;
ch[21] = crc & 0xff;
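With the checksum from the question, that works out as follows (a small sketch; ch stands in for your actual array):
long crc = 0x5AED;          /* 23277, the checksum from the question */
unsigned char ch[22];

ch[20] = (crc >> 8) & 0xff; /* 0x5A, the high byte */
ch[21] = crc & 0xff;        /* 0xED, the low byte */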
It can be saved into a char or a char array like this:
Char [20]=(char) 0x5A;
Char [21]=(char) 0xED;
In this way it saves the char equivalent of the integer value of the hex number, and you can convert it back by casting and use it.
Well, nothing in the background is in hex; everything in the machine is in binary, while humans do things in decimal. So hex is a man-machine interface, a representation only.

How to split a 64-bit number into eight 8-bit values?

Is there a simple way to split one 64-bit (unsigned long long) variable into eight int8_t values?
For example:
//1001000100011001100100010001100110010001000110011001000110011111
unsigned long long bigNumber = 10455547548911899039ULL;
int8_t parts[8] = splitULongLong(bigNumber);
parts would be something along the lines of:
[0] 10011111
[1] 10010001
[2] 00011001
...
[7] 10010001
{
    uint64_t v = _64bitVariable;
    uint8_t i = 0, parts[8] = {0};
    do parts[i++] = v & 0xFF; while (v >>= 8);
}
First, you shouldn't play such games with signed values; this only complicates the issue. Then you shouldn't use unsigned long long directly, but the appropriate fixed-width type uint64_t. This may be unsigned long long, but not necessarily.
Any byte (assuming 8 bit) in such an integer you may obtain by shifting and masking the value:
#define byteOf(V, I) (((V) >> (I)*8)&UINT64_C(0xFF))
To initialize your array you would place calls to that macro inside an initializer.
BTW there is no standard "binary" format for integers as you seem to be assuming.
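For example, a sketch of such an initializer (the function wrapper is illustrative; index 0 is the least significant byte, per the macro):
#include <stdint.h>

#define byteOf(V, I) (((V) >> (I)*8) & UINT64_C(0xFF))

void demo(void)
{
    uint64_t big = UINT64_C(10455547548911899039);
    uint8_t parts[8] = {
        byteOf(big, 0), byteOf(big, 1), byteOf(big, 2), byteOf(big, 3),
        byteOf(big, 4), byteOf(big, 5), byteOf(big, 6), byteOf(big, 7),
    };
    (void)parts; /* parts[0] == 0x9F, parts[7] == 0x91 */
}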
I think it can be done like this:
Take the 64-bit number modulo 256 and save that result (the lowest byte), then subtract the result and divide by 256; repeat until you have saved all eight bytes. At the end you have eight 8-bit numbers that you can use in place of the 64-bit number, but when you need to operate on the value you have to recombine the 8-bit numbers back into the 64-bit one.
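A sketch of that arithmetic, under the assumption that repeated divide-and-remainder by 256 is what was meant (the function name is illustrative):
#include <stdint.h>

void split_by_division(uint64_t v, uint8_t parts[8])
{
    for (int i = 0; i < 8; i++) {
        parts[i] = (uint8_t)(v % 256); /* remainder: the lowest byte */
        v /= 256;                      /* quotient: the remaining bytes */
    }
}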
You should be able to use a Union to split the data up without any movement or processing.
This leaves you with the problem of the resulting table being in the wrong order, which can be easily solved with a macro (if you have lots of hard-coded values) or a simple "8-x" subscript calculation.
#define rv(ss) (8 - (ss))
union SameSpace {
    unsigned long long _64bitVariable;
    int8_t _8bit[8];
} samespace;
samespace._64bitVariable = 0x911991199119919FULL; /* the bit pattern from the question */
if (samespace._8bit[rv(1)] == (int8_t)0x9F) { /* 10011111; which index holds which byte depends on endianness */
    printf("\n correct");
}

Converting char to int [C]

I have a char byte that I want to convert to an int. Basically I am getting the value (which is 0x13) from a file opened with fopen and storing it into a char buffer called buff.
I am doing the following:
//assume buff[17] = 0x13
v->infoFrameSize = (int)buff[17] * ( 128^0 );
infoFrameSize is a type int that is stored in a structure called 'v'.
The value I get for v->infoFrameSize is 0x00000980. Should this be 0x00000013?
I tried taking out the multiplication by 128 ^ 0, and I get the correct output:
v->infoFrameSize = 0x00000013
Any info or suggested reading material on what is happening here would be great. Thanks!
^ is bitwise xor operation, not exponentiation.
Operator ^ in C does bit operation - XOR.
128 xor 0 equals 128.
In C 128 ^ 0 equates the bitwise XOR of 128 and 0, it doesn't raise 128 to the power of 0 (which is just 1).
A char is simply an integer consisting of a single byte. To "convert" it to an int (which isn't really converting; you're just storing the byte into a larger data type), you do:
char c = 5;
int i = (int)c;
tada.
There is no point in the ^0 term. Anything xor'd with zero remains unchanged (so 128^0 is 128).
The value you get is correct; when you multiply 0x13 (aka 19) by 128 (aka 0x80), you get 0x0980 (aka 2432).
Why would you expect the assignment to ignore the multiplication?
128^0 is not doing what you think it does.
printf("%d", 128 ^ 0);
prints 128.
Try pow(128,0). Then, add the following to the top of your code:
#include <math.h>
Also, note that pow always returns a double. So you'll need to cast your final answer to an int. So:
(int)(buff[17] * pow(128,0));
To convert a char to an int, you merely cast it:
char c = ...;
int x = (int) c;
K&R would have you read the one byte from the file using getc() and store it directly into an int which eliminates any issues you might be seeing. However, if you are reading from the file into an array of bytes, simply cast to int as follows:
v->infoFrameSize = (int)buff[17];
I'm not sure why you're multiplying by 128^0.
The only problem I know of when converting from char to int is that char can be signed or unsigned, depending on the platform. If it happens to be signed, a large positive value stored in the char may end up being interpreted as negative. When you print it, it will show either as a negative number or as an abnormally big number (if you print it as an unsigned integer).
The solution is simply to use signed char or unsigned char explicitly in cases like this one.
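A small illustration of that pitfall (assuming an 8-bit char; the value is arbitrary):
#include <stdio.h>

int main(void)
{
    char c = 0x93; /* bit pattern 10010011 */
    printf("%d\n", (int)c);                /* may print -109 if char is signed */
    printf("%d\n", (int)(unsigned char)c); /* always prints 147 */
    return 0;
}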
"^" is a bitwise XOR Operation, if you want to do an exponent use
pow(128,0);
Why are you multiplying by one?
You can convert from a char to an int by simply defining an int and setting it like so:
char x = 0x13;
int y;
y = (int)x;
