Hex to Decimal conversion in C

Here is my code, which converts from hex to decimal. The hex values are stored in an unsigned char array:
int liIndex;
long hexToDec;
unsigned char length[4];
for (liIndex = 0; liIndex < 4; liIndex++)
{
    length[liIndex] = (unsigned char) *content;
    printf("\n Hex value is %.2x", length[liIndex]);
    content++;
}
hexToDec = strtol(length, NULL, 16);
Each array element contains 1 byte of information, and I have read 4 bytes. When I execute it, here is the output I get:
Hex value is 00
Hex value is 00
Hex value is 00
Hex value is 01
Chunk length is 0
Can anyone please help me understand the error here? The decimal value should have come out as 1 instead of 0.
Regards,
darkie

My guess from your use of %x is that content is encoding your hexadecimal number as an array of integers, not an array of characters. That is, are you representing a 0 digit in content as '\0', or as '0'?
strtol only works in the latter case. If content is indeed an array of integers, the following code should do the trick:
hexToDec = 0;
int place = 1;
for (int i = 3; i >= 0; --i)
{
    hexToDec += place * (unsigned int) *(content + i);
    place *= 16;
}
content += 4;

strtol expects a zero-terminated string. Here length[0] == '\0', and thus strtol stops processing right there. It converts strings like "0A21", not byte arrays like the {0, 0, 0, 1} you have.
What are the contents of content and what are you trying to do, exactly? What you've built seems strange to me on a number of counts.
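To make the distinction concrete, here is a minimal, self-contained sketch contrasting the two representations (the names text and raw are made up for illustration):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* A textual hex string: this is what strtol understands. */
    const char *text = "00000001";
    long fromText = strtol(text, NULL, 16);
    printf("from text: %ld\n", fromText);        /* prints 1 */

    /* Raw big-endian bytes: strtol would see raw[0] == 0 as the terminator,
       so combine the bytes arithmetically instead. */
    unsigned char raw[4] = { 0, 0, 0, 1 };
    long fromRaw = ((long) raw[0] << 24) | ((long) raw[1] << 16)
                 | ((long) raw[2] << 8)  |  (long) raw[3];
    printf("from raw bytes: %ld\n", fromRaw);    /* prints 1 */

    return 0;
}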

Related

How to move uint32_t number to char[]?

I have to copy a uint32_t number into the middle of a char[] buffer.
The situation is like this:
char buf[100];
uint8_t position = 52; // position in the buffer to which I want to copy my uint32_t number
uint32_t seconds = 23456; // the actual number
I tried to use memcpy like this:
memcpy(&buf[position], &seconds, sizeof(seconds));
But in the buffer I'm getting some strange characters, not the number I want.
I also tried using byte-shifting
int shiftby = 32;
for (int i = 0; i < 8; i++)
{
    buf[position++] = (seconds >> (shiftby -= 4)) & 0xF;
}
Is there any other option how to solve this problem?
What your memcpy code does is put the value 23456 into buf, starting at byte 52 (bytes 52-55, since the size of seconds is 4 bytes). What you want to do (if I understand you correctly) is to put the string "23456" into buf, starting at byte 52. In this second case, each character takes one byte, and each byte holds the ASCII value of its character.
Probably the best way to do that is to use snprintf:
int snprintf(char *buffer, size_t n, const char *format, ...);
In your example:
snprintf(&buf[position], 6, "%u", seconds)
Note that the n argument counts the terminating null character as well as the digits, so it must be one more than the number of digits: five digits plus '\0' makes 6 (with 5, the output would be truncated to "2345"). As I said, you need one byte per digit/character, plus one for the terminator.
Obviously you should calculate the number of digits in seconds rather than hard-code it if it can change, and you should also check the return value of snprintf to see whether the operation was performed successfully.
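Here is a self-contained sketch of that approach, reusing the names from the question (the space-fill and the %.5s print are only there to make the demo visible):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    char buf[100];
    memset(buf, ' ', sizeof buf);       /* fill with spaces for the demo */
    uint8_t position = 52;
    uint32_t seconds = 23456;

    /* 6 = 5 digits + terminating '\0'; note that snprintf writes that '\0'
       too, clobbering one byte beyond the digits. */
    int written = snprintf(&buf[position], 6, "%u", (unsigned) seconds);
    if (written < 0 || written >= 6)
        return 1;                       /* encoding error or truncation */

    printf("%.5s\n", &buf[position]);   /* prints 23456 */
    return 0;
}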
It is unclear how you intend to represent this uint32_t, but your code fragment suggests that you are expecting hexadecimal (or perhaps BCD). In that case:
for (int shiftby = 28; shiftby >= 0; shiftby -= 4)
{
    char hexdigit = (seconds >> shiftby) & 0xF;
    buf[position++] = hexdigit < 10 ? hexdigit + '0' : hexdigit + 'A' - 10;
}
Note that the only real difference between this and your code is the conversion to hex-digit characters by conditionally adding either '0' or 'A' - 10. The use of shiftby as the loop control variable is just a simplification of your algorithm.
The issue with your code is that it inserts the integer values 0 to 15 into buf, and the characters associated with those values are all ASCII control characters, nominally non-printing. How (or whether) they render as a glyph on any particular display depends on what you are using to present them. In a Windows console, for example, printing characters 0 to 15 results in the following:
00 = <no glyph>
01 = '☺'
02 = '☻'
03 = '♥'
04 = '♦'
05 = '♣'
06 = '♠'
07 = <bell> (emits a sound, no glyph)
08 = <backspace>
09 = <tab>
10 = <linefeed>
11 = '♂'
12 = '♀'
13 = <carriage return>
14 = '♫'
15 = '☼'
The change above transforms the values 0 to 15 to ASCII '0'-'9' or 'A'-'F'.
If a hexadecimal presentation is not what you were intending then you need to clarify the question.
Note that if the encoding is BCD (Binary Coded Decimal), where each decimal digit is coded into a 4-bit nibble, the conversion can be simplified because the range of values is reduced to 0 to 9:
char bcddigit = (seconds >> shiftby) & 0xF ;
buf[position++] = bcddigit + '0' ;
but the hex conversion will work for BCD also.
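For instance, here is a self-contained sketch of the BCD case (the value 0x23456 is an assumed example: the BCD encoding of decimal 23456):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t seconds = 0x23456;   /* assumed: BCD encoding of decimal 23456 */
    char buf[9];
    int position = 0;

    for (int shiftby = 28; shiftby >= 0; shiftby -= 4)
    {
        char bcddigit = (seconds >> shiftby) & 0xF;
        buf[position++] = bcddigit + '0';
    }
    buf[position] = '\0';

    printf("%s\n", buf);          /* prints 00023456 */
    return 0;
}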

8 Byte Number as Hex in C

I am given a number, for example n = 10, and I want to calculate its length in hex with big endian and save it in an 8-byte char pointer. In this example I would like to get the following string:
"\x00\x00\x00\x00\x00\x00\x00\x50".
How do I do that automatically in C, for example with sprintf?
I am not even able to get "\x50" in a char pointer:
char tmp[1];
sprintf(tmp, "\x%x", 50); // version 1
sprintf(tmp, "\\x%x", 50); // version 2
Versions 1 and 2 don't work.
I am given a number, for example n = 10, and I want to calculate its length in hex
Repeatedly divide by 16 to find the number of hexadecimal digits. A do ... while ensures the result is 1 when n == 0.
int hex_length = 0;
do {
    hex_length++;
} while (number /= 16);
save it in an 8-byte char pointer.
C cannot force your system to use 8-byte pointers, so if your system uses 4-byte char pointers, we are out of luck. Let us assume OP's system uses 8-byte pointers. Integers may be assigned to pointers (with a cast), though this may or may not result in a valid pointer.
assert(sizeof (char*) == 8);
char *char_pointer = (char *) n;   /* int-to-pointer conversion needs a cast */
printf("%p\n", (void *) char_pointer);
In this example I would like to get the following string: "\x00\x00\x00\x00\x00\x00\x00\x50".
In C, a string includes the various characters up to and including a null character. "\x00\x00\x00\x00\x00\x00\x00\x50" is not a valid C string, yet it is a valid string literal. Code cannot construct string literals at run time; they are part of the source code. Further, the relationship between n == 10 and "\x00...\x00\x50" is unclear. Perhaps the goal is instead to store n into an 8-byte array (big endian):
char buf[8];
for (int i = 7; i >= 0; i--) {
    buf[i] = (char) n;
    n /= 256;
}
OP's code certainly will fail, as it attempts to store a string into a buffer that is too small. Further, "\x%x" is not valid code, as \x begins an invalid escape sequence.
char tmp[1];
sprintf(tmp, "\x%x", 50); // version 1
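To print a literal \x, escape the backslash in the format string. Here is a sketch combining that with the big-endian store above (n = 80 is an assumed example, chosen to match the \x50 in the question):

#include <stdio.h>

int main(void)
{
    long n = 80;   /* 0x50, assumed example value */
    char buf[8];

    for (int i = 7; i >= 0; i--) {
        buf[i] = (char) n;
        n /= 256;
    }

    /* "\\x" prints a literal backslash followed by 'x' */
    for (int i = 0; i < 8; i++)
        printf("\\x%02x", (unsigned char) buf[i]);
    printf("\n");   /* prints \x00\x00\x00\x00\x00\x00\x00\x50 */

    return 0;
}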
Just do:
int i;
...
int length = (int) floor(log(i) / log(16)) + 1; // needs <math.h> and i > 0
This will give you (in length) the number of hexadecimal digits needed to represent i (without 0x, of course).
log(i) / log(base) is the log-base of i, so log(i) / log(16) gives you the base-16 exponent.
To make clear what we're doing here: when raising 16 to the power of the found exponent, we get back i: 16^log16(i) = i.
Taking the floor of this exponent and adding 1 gives the number of digits. (Rounding up with ceil() looks equivalent but fails for exact powers of 16: log16(256) is exactly 2, yet 256 = 0x100 needs 3 digits.)
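A quick sanity check of the formula (a sketch; floating-point rounding near exact powers of 16 can still misbehave on some platforms, which is why the division loop above is the more robust choice):

#include <stdio.h>
#include <math.h>

int main(void)
{
    int values[] = { 1, 15, 16, 255, 256 };

    for (int k = 0; k < 5; k++)
    {
        int i = values[k];
        int length = (int) floor(log(i) / log(16)) + 1;
        printf("%3d needs %d hex digit(s)\n", i, length);
    }
    return 0;   /* expected: 1, 1, 2, 2, 3 */
}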

In C, how am I able to use the printf() function to 'store' a string?

I am attempting to build a 16-bit representation of a number (a floating-point representation) using unsigned integers. The fraction field here deviates from the standard width of 10 and is 8 bits, implying the exponent field is 7 bits and the sign is 1 bit.
The code I have is as follows:
bit16 float_16(bit16 sign, bit16 exp, bit16 frac) {
    // make the sign the number before the binary point, make the fraction binary,
    // then concatenate the sign, the exponent, and the fraction
    bit16 result;
    int theExponent;
    theExponent = exp + 63; // bias = 2^(7-1) - 1 = 2^6 - 1 = 63
    //printf("%d",sign);
    int c, k;
    for (c = 6; c > 0; c--)
    {
        k = theExponent >> c;
        if (k & 1)
            printf("1");
        else
            printf("0");
    }
    for (c = 7; c >= 0; c--)
    {
        k = frac >> c;
        if (k & 1)
            printf("1");
        else
            printf("0");
    }
    //return result;
}
My thinking is to 'recreate' a 16-bit sequence from these fields by concatenating them as shown, but then I am unable to use the result in a further application. Is there a way to store the final 16-bit sequence, after everything has been printed, in a variable that can then be treated as an unsigned integer? Or is there a more optimal way to do this?
While printf will not work in this case (you can't 'store' its result), you can use sprintf.
int sprintf ( char * output_str, const char * format, ... );
sprintf writes formatted data to a string.
It composes a string with the same text that would be printed if format were used with printf, but instead of being printed (or displayed on the console), the content is stored as a C string in the buffer pointed to by output_str.
The buffer should be large enough to contain the entire resulting string; see Buffer Overflow.
A terminating null character (\0) is automatically appended at the end of output_str.
From output_str to an integer variable
Since the digits this code writes are binary ('0'/'1' characters), atoi is the wrong tool here: it would read them as a decimal number. Use strtol with base 2 to get your answer in an integer variable:
long i = strtol (output_str, NULL, 2);
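Here is a self-contained sketch of the whole round trip, using the question's intended 1+7+8 field layout (the field values 0x3F and 0xAB are assumed examples, and everything is folded into main for brevity):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    unsigned int sign = 0, theExponent = 0x3F, frac = 0xAB;  /* assumed values */
    char output_str[17];   /* 16 binary digits + '\0' */
    int pos = 0;

    /* Write the bits as characters instead of printing them. */
    pos += sprintf(output_str + pos, "%u", sign & 1);
    for (int c = 6; c >= 0; c--)
        pos += sprintf(output_str + pos, "%u", (theExponent >> c) & 1);
    for (int c = 7; c >= 0; c--)
        pos += sprintf(output_str + pos, "%u", (frac >> c) & 1);

    /* Convert the binary-digit string back to an integer, base 2. */
    long bits = strtol(output_str, NULL, 2);
    printf("%s -> 0x%04lX\n", output_str, bits);   /* 0011111110101011 -> 0x3FAB */

    return 0;
}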

c and bit shifting in a char

I am new to C and having a hard time understanding why the code below prints out ffffffff when binary 11111111 should equal hex ff.
int i;
char num[8] = "11111111";
unsigned char result = 0;
for (i = 0; i < 8; ++i)
{
    result |= (num[i] == '1') << (7 - i);
}
printf("%X", bytedata);
You print bytedata which may be uninitialized.
Replace
printf("%X", bytedata);
with
printf("%X", result);
Your code then runs fine.
Although it is legal in C, for good practice you should change
char num[8] = "11111111";
to
char num[9] = "11111111";
because in C a null character ('\0') is always appended to a string literal, and an 8-element array has no room for it. (The 8-element version would also not compile as C++ with g++.)
EDIT
To answer your question
If I use char the result is FFFFFFFF but if I use unsigned char the result is FF.
Answer:
Case 1:
In C, char is 1 byte (8 bits on most implementations). If it is unsigned, all 8 bits hold the value, so it can hold at most 11111111 in binary, FF in hex (255 in decimal). When you print it with printf("%X", result);, the value is promoted to int; since 255 is non-negative, it still prints as FF in hex.
Case 2: But when you use (signed) char, the MSB is used as the sign bit, so only 7 bits are left for the magnitude and the range is -128 to 127 in decimal. When you store FF (255 in decimal) in it, the value does not fit; the conversion is implementation-defined and typically yields -1. When that -1 is promoted to int for printf, sign extension produces FFFFFFFF.
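A minimal sketch showing both cases side by side (this assumes a typical system where plain char is signed and int is 32 bits):

#include <stdio.h>

int main(void)
{
    char num[9] = "11111111";   /* 8 digits + '\0' */
    unsigned char uresult = 0;
    char sresult = 0;

    for (int i = 0; i < 8; ++i)
    {
        uresult |= (num[i] == '1') << (7 - i);
        sresult |= (num[i] == '1') << (7 - i);
    }

    printf("%X\n", uresult);   /* FF: 255 promotes to a non-negative int */
    printf("%X\n", sresult);   /* FFFFFFFF: -1 is sign-extended during promotion */
    return 0;
}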

ASCII and printf

I have a little (big, dumb?) question about ints and chars in C. I remember from my studies that "chars are little integers and vice versa," and that's okay with me. If I need to use small numbers, the best way is to use a char type.
But in a code like this:
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char *argv[]) {
    int i = atoi(argv[1]);
    printf("%d -> %c\n", i, i);
    return 0;
}
I can pass any number I want as the argument. With 0-127 I obtain the expected results (the standard ASCII table), but even with bigger or negative numbers it seems to work...
Here is some example:
-181 -> K
-182 -> J
300 -> ,
301 -> -
Why? It seems to me that it's cycling around the ASCII table, but I don't understand how.
When you pass an int corresponding to the %c conversion specifier, the int is converted to an unsigned char and then written.
The values you pass are converted to different values when they are outside the range of an unsigned char (0 to UCHAR_MAX). The system you are working on probably has UCHAR_MAX == 255.
When converting an int to an unsigned char: if the value is larger than UCHAR_MAX, (UCHAR_MAX+1) is subtracted from the value as many times as needed to bring it into the range 0 to UCHAR_MAX. Likewise, if the value is less than zero, (UCHAR_MAX+1) is added to the value as many times as needed to bring it into the range 0 to UCHAR_MAX.
Therefore:
(unsigned char)-181 == (-181 + (255+1)) == 75 == 'K'
(unsigned char)-182 == (-182 + (255+1)) == 74 == 'J'
(unsigned char)300 == (300 - (255+1)) == 44 == ','
(unsigned char)301 == (301 - (255+1)) == 45 == '-'
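The same wraparound can be reproduced explicitly with a cast, which is well-defined for unsigned types (a small sketch):

#include <stdio.h>

int main(void)
{
    int values[] = { -181, -182, 300, 301 };

    for (int i = 0; i < 4; i++)
    {
        unsigned char c = (unsigned char) values[i];  /* modular reduction to 0..255 */
        printf("%4d -> %3u -> %c\n", values[i], c, c);
    }
    return 0;   /* prints K, J, ',' and '-' as in the question */
}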
The %c format parameter interprets the corresponding value as a character, not as an integer. However, when you lie to printf and pass an int while telling it to expect a char, its internal manipulation of the value (to get a char back, as a char is normally passed as an int anyway with varargs) happens to yield the values you see.
My guess is that %c takes the first byte of the value provided and formats it as a character. On a little-endian system such as a PC running Windows, that byte is the least-significant byte of any value passed in, so consecutive numbers will always be shown as different characters.
You told it the number is a char, so it's going to try every way it can to treat it as one, despite being far too big.
Looking at what you got, since J and K are in that order, I'd say it's using the integer % 128 to make sure it fits in the legal range.
Edit: Please disregard this "answer".
Because you are on a little-endian machine :)
Seriously, this is undefined behavior. Try changing the code to printf("%d -> %c, %c\n",i,i,'4'); and see what happens then...
When we use %c in a printf statement, it can access only the lowest byte of the integer. Hence anything 256 or greater is treated as n % 256.
For example, an input of 321 yields the output A, since 321 % 256 == 65, the ASCII code for 'A'.
What atoi does is convert the string to a numerical value, so that "1234" becomes 1234 and not just the sequence of the ordinal values of its characters.
Example:
char *x = "1234"; // x[0] = 49, x[1] = 50, x[2] = 51, x[3] = 52 (see the ASCII table)
int y = atoi(x); // y = 1234
int z = (int)x[0]; // z = 49 which is not what one would want
