How to get the hexadecimal value of a char in C

So I'm new to C and I'm confused. I don't know how to get the hexadecimal value of a char.
How would I get the char's hexadecimal value below?
char a = 'a';

With char a = 'a';, a has a value. Depending on how we want to see that value:
To print that value as a (ASCII) character: printf("%c\n", a);
To print its value in decimal: printf("%d\n", a);
To print its value in hexadecimal: printf("%x\n", a);
In all 3 cases, the value in a is promoted to an int when passed as an argument to a variadic (...) function. It is that int value that printf prints.
As char may be signed or unsigned, additional considerations are needed when a < 0, or on rare implementations where char is as wide as an int.
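Putting that together, a minimal sketch (output values assume ASCII; the cast in the last line is the hedge for a possibly negative char):
#include <stdio.h>

int main(void)
{
    char a = 'a';

    printf("%c\n", a);                  // as a character: a
    printf("%d\n", a);                  // decimal value: 97
    printf("%x\n", a);                  // hexadecimal value: 61
    printf("%02x\n", (unsigned char)a); // same, padded; cast first in case char is signed and negative
    return 0;
}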

Related

How does printing 577 with %c output "A"?

#include <stdio.h>

int main()
{
    int i = 577;
    printf("%c", i);
    return 0;
}
After compiling, it gives the output "A". Can anyone explain how I'm getting this?
%c will only accept values up to 255 inclusive; above that, it starts from 0 again:
577 % 256 = 65; // (the character code for 'A')
This has to do with how the value is converted.
The %c format specifier expects an int argument and then converts it to type unsigned char. The character for the resulting unsigned char is then written.
Section 7.21.6.1p8 of the C standard regarding format specifiers for printf states the following regarding c:
If no l length modifier is present, the int argument is converted to an
unsigned char, and the resulting character is written.
When converting a value to a smaller unsigned type, what effectively happens is that the higher-order bytes are truncated and the lower-order bytes hold the resulting value.
Section 6.3.1.3p2 regarding integer conversions states:
Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or
subtracting one more than the maximum value that can be represented in the new type
until the value is in the range of the new type.
Which, when two's complement representation is used, is the same as truncating the high-order bytes.
For the int value 577, whose value in hexadecimal is 0x241, the low order byte is 0x41 or decimal 65. In ASCII this code is the character A which is what is printed.
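A quick way to see that truncation in action (a small sketch assuming ASCII and an 8-bit char):
#include <stdio.h>

int main(void)
{
    int i = 577;
    printf("%#x\n", i);               // 0x241
    printf("%d\n", (unsigned char)i); // 65, the low-order byte
    printf("%c\n", i);                // A
    return 0;
}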
How does printing 577 with %c output "A"?
With printf(), "%c" matches an int argument*1. The int value is converted to an unsigned char value of 65, and the corresponding character*2, 'A', is then printed.
It makes no difference whether char is signed or unsigned, or whether it is encoded with two's complement or not. There is no undefined behavior (UB). It makes no difference how the argument is passed: on the stack, in a register, or otherwise. The endianness of int is irrelevant. The argument value is converted to an unsigned char and the corresponding character is printed.
*1 All int values are allowed: [INT_MIN...INT_MAX].
When a char value is passed as a ... (variadic) argument, it is first converted to an int and then passed.
char ch = 'A';
printf("%c", ch); // ch is converted to an `int` and passed to printf().
*2 65 is 'A' in ASCII, the ubiquitous character encoding; other encodings are rarely used.
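To illustrate footnote *1, any int value is well defined with %c; only the converted unsigned char matters (assuming ASCII):
#include <stdio.h>

int main(void)
{
    printf("%c\n", 65);       // A
    printf("%c\n", 65 + 256); // also A: 321 converts to unsigned char 65
    printf("%c\n", 65 - 256); // also A: -191 converts to unsigned char 65
    return 0;
}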
Just output the value of the variable i in hexadecimal representation:
#include <stdio.h>

int main( void )
{
    int i = 577;
    printf( "i = %#x\n", i );
}
The program output will be
i = 0x241
So the least significant byte contains the hexadecimal value 0x41 that represents the ASCII code of the letter 'A'.
577 in hex is 0x241. The ASCII representation of 'A' is 0x41. You're passing an int to printf but then telling printf to treat it as a char (because of %c). A char is one byte wide, so printf takes only the least significant byte of the value you gave it, which is 0x41.
To print an integer, you need to use %d or %i.

Assigning a negative value to a char in C prints "?" on macOS

#include <stdio.h>

int main(){
    signed char c;   // range of signed char is -128 to 127
    char d;          // this is basically a signed char, if I am not wrong
    unsigned char e; // range of unsigned char is 0 to 255
    c = -12;
    d = -12;
    e = 255;
    printf("%c\n", c);
    printf("%c\n", d);
    printf("%c\n", e);
    return 0;
}
All of these chars get printed as '?'.
I'm trying to understand why. Is it a garbage value, or is the same character actually assigned for different values? That seems highly unlikely.
When you pass a char, signed char, or unsigned char to printf, it is promoted to an int value (in normal C implementations, where char is narrower than int). For a %c conversion specification, printf converts that int to unsigned char and writes the resulting character.
When the character is not a printable character of the character set in use, displaying a “?” is a common behavior.
Whether char is signed or unsigned is implementation-defined, although it remains a distinct type, separate from both signed char and unsigned char.
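To see which character code printf actually receives, print the numeric value alongside the character (a small sketch; the '?' glyphs themselves depend on your terminal):
#include <stdio.h>

int main(void)
{
    signed char c = -12;
    unsigned char e = 255;

    printf("'%c' is character code %d\n", c, (unsigned char)c); // code 244
    printf("'%c' is character code %d\n", e, e);                // code 255
    return 0;
}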

C program to display negative numbers in decimal, octal and hexadecimal

**Allocation and storage in C programming**
I have run into some doubt while trying to print negative numbers in different number systems.
While printing negative numbers, I am getting different output values, but I am not understanding them clearly. Any help would be appreciated.
#include <stdio.h>

int main( )
{
    char a = -5;
    unsigned char b = -5;
    int c = -5;
    unsigned int d = -5;

    // try to print using the "%d" format specifier to display decimal values
    printf("%d %d", a, b);
    printf("%d %d", c, d);

    // try to print using the "%o" format specifier to display octal values
    printf("%o %o", a, b);
    printf("%o %o", c, d);

    // try to print using the "%x" format specifier to display hexadecimal values
    printf("%x %x", a, b);
    printf("%x %x", c, d);
    return 0;
}
Output:
displaying decimal value
a = -5 b = 251
c = -5 d = -5
displaying octal value
a = 37777777773 b = 373
c = 37777777773 d = 37777777773
displaying Hexa-decimal value
a = fffffffb b = fb
c = fffffffb d = fffffffb
Now, to come to the point: I don't understand why unsigned char takes only 8 bits (1 byte) while the others are printed as 32 bits (4 bytes).
When a char value is passed to printf, it is promoted to int. printf is printing this int value.
When the char value is −5 and it is promoted to int, the int value is −5. For a 32-bit two's complement int, −5 is represented with the bits fffffffb (in hexadecimal). So, when you ask printf to format that with %x, you see "fffffffb". (Technically, %x is for an unsigned int, and passing an int does not match, but most C implementations will accept it.) The eight-bit char was promoted to a 32-bit int, and that is why you see 32-bit results.
You can tell printf that the value it receives originated as a character value by using %hho or %hhx instead of %o or %x, and it may adjust accordingly. (Again, the o and x specifiers are for unsigned int values, and passing an int does not match, but it may work.) A pedantically correct solution is to use (unsigned int) (unsigned char) x when passing a signed char x for a %o or %x conversion.
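A sketch of both suggestions (the values assume an 8-bit char, a 32-bit int, and a signed char holding −5):
#include <stdio.h>

int main(void)
{
    signed char a = -5;

    printf("%x\n", a);                              // fffffffb: a was promoted to int
    printf("%hhx\n", a);                            // fb: hh tells printf it originated as a char
    printf("%x\n", (unsigned int)(unsigned char)a); // fb: the pedantically correct cast
    return 0;
}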

Unexpected side effect when using the sprintf function

How come, when I use the sprintf function, the value of variable A somehow changes?
#include <stdio.h>

int main(void) {
    short int A = 8000; /* 0001 1111 0100 0000 in binary */
    char byte_1[2], total[4];
    sprintf(byte_1, "%i", A);
    printf("%s\n", byte_1); // displays 8000
    printf("%i\n", A);      // displays 12336
}
byte_1 is too short to receive the decimal representation of A: it only has space for one digit plus the null terminator. sprintf does not know this, so it writes beyond the end of the byte_1 array, causing undefined behavior.
Make byte_1 larger; 12 bytes is a good start.
sprintf is inherently unsafe. Use snprintf, which protects against buffer overruns:
snprintf(byte_1, sizeof byte_1, "%i", A);
Here is a potential explanation for this unexpected output: imagine byte_1 is located in memory just before A. sprintf converts the value of A to five characters '8', '0', '0', '0' and '\0', which overflow the end of byte_1 and overwrite the value of variable A itself. When you later print the value of A with printf, A no longer holds 8000, but rather 12336 (that is 0x3030, the two '0' bytes that landed on top of it). Just one of an infinite range of possible effects of undefined behavior.
Try this corrected version:
#include <stdio.h>

int main(void) {
    short int A = 8000;
    char byte_1[12], total[4];
    snprintf(byte_1, sizeof byte_1, "%i", A);
    printf("%s\n", byte_1);
    printf("%i\n", A);
    return 0;
}
The text representation of the value stored in A is "8000": four characters plus the string terminator, so byte_1 needs to be at least 5 characters wide. If you want byte_1 to be able to store the representation of any unsigned int, you should make it more like 12 characters wide:
char byte_1[12];
Two characters is not enough to store the string "8000", so when sprintf writes to byte_1, those extra characters most likely overwrite A.
Also note that the correct conversion specifier for an unsigned int is %u, not %i. This will matter when trying to format very large unsigned values where the most significant bit is set. %i will attempt to format that as a negative signed value.
Edit
As chrqlie pointed out, the OP had declared A as short int - for some reason, another answer had changed that to unsigned int and that stuck in my head. Strictly speaking, the correct conversion specifier for a short int is %hd if you want signed decimal output.
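A minimal sketch with the OP's original short int declaration:
#include <stdio.h>

int main(void)
{
    short int A = 8000;
    char byte_1[12];

    snprintf(byte_1, sizeof byte_1, "%hd", A); // %hd matches a short int
    printf("%s\n", byte_1);                    // 8000
    return 0;
}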
For the record, here's a list of some common conversion specifiers and their associated types:
Specifier   Argument type   Output
---------   -------------   ------
i,d         int             Signed decimal integer
u           unsigned int    Unsigned decimal integer
x,X         unsigned int    Unsigned hexadecimal integer
o           unsigned int    Unsigned octal integer
f           double          Signed decimal float (a float argument is promoted to double)
s           char *          Text string
c           int             Single character (converted to unsigned char)
p           void *          Pointer value, implementation-defined
For short and long types, there are some length modifiers:
Specifier   Argument type   Output
---------   -------------   ------
hd          short           Signed decimal integer
hhd         signed char     Signed decimal integer
ld          long            Signed decimal integer
lld         long long       Signed decimal integer
Those same modifiers can be applied to u, x, X, o, etc.
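For example, pairing length modifiers with the unsigned bases (a short sketch):
#include <stdio.h>

int main(void)
{
    unsigned char uc = 251;
    unsigned long ul = 8000;

    printf("%hhu %hhx %hho\n", uc, uc, uc); // 251 fb 373
    printf("%lu %lx %lo\n", ul, ul, ul);    // 8000 1f40 17500
    return 0;
}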
byte_1 is too small for the four digits of A's value, "8000". It only has room for a single digit plus the null ('\0') terminator. If you make byte_1 an array of 5 bytes, one for each digit plus the null byte, it will be able to fit "8000".
#include <stdio.h>

int main(void) {
    unsigned int A = 8000;
    char byte_1[5], total[4];
    sprintf(byte_1, "%u", A);
    printf("%s\n", byte_1);
    printf("%u\n", A);
    return 0;
}
Basically, messing around with memory and trying to put values into variables that are too small for them is undefined behavior. It may compile and run, but it is dangerous, and no program should access memory like this.
sprintf(byte_1, "%i", A);
The format specifier needs to agree with the variable's type.
I suggest the following change:
sprintf(byte_1, "%c", A);
printf("%s\n", byte_1);
EDIT: An additional change, after performing the change above, is to also change A so it is of the same type as byte_1's elements. This will force you to change the value in your example to match the range of the char type. Note that relying on a function to protect you from overflowing is a poor solution; instead, it is your responsibility as the designer of this code to choose the proper tools for the job. When working with char variables, you need char-sized containers, and the same goes for integers, floats, strings, etc. If you have 1 kilogram of sugar, you want a 1 kg container to hold it; you wouldn't use a 250 g cup because, as you've seen, it overflows. Happy coding in C!
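For what it's worth, a corrected sketch of this char-based suggestion (the value 'P' is a hypothetical choice that fits in a char):
#include <stdio.h>

int main(void)
{
    char A = 'P';   // a value that actually fits in a char
    char byte_1[2]; // one character plus the terminator

    sprintf(byte_1, "%c", A); // writes 'P' and '\0'
    printf("%s\n", byte_1);   // prints P
    return 0;
}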

Why does printing an unsigned char output such a large number?

I am printing a variable of type unsigned char (8 bits long), but printf() outputs a huge number outside the range of the variable I am printing.
unsigned char c;
printf("%u", c);
The output is:
21664
I don't know what is going on.
It has undefined behaviour because c is not initialized.
To print the decimal value of a character, you can do it like this:
#include <stdio.h>

int main()
{
    unsigned char a = 'A';
    printf("%d", a);
    return 0;
}
This will print the ASCII value of 'A', which is 65.
Modify it to printf( "%u", (unsigned int)c ). Note, though, that an unsigned char argument is already promoted to int when passed to a variadic function like printf, so the underlying problem remains that c was never given a value.
Because the %u format expects an unsigned int while c is an unsigned char, and c is uninitialized besides, the program ends up printing an indeterminate value.
Using the wrong format specifier leads to undefined behavior.
%u is for unsigned int, not for unsigned char.
The output is: 21664. I don't know what is going on?
To print the value of that char you first have to give it a value; otherwise you get undefined behavior.
If you need to print its decimal value, just use %d (%u is also fine here), because a char is promoted to int when passed to printf:
printf("The decimal value of CHAR is\t%d", c);
This can help you also to understand it:
#include <stdio.h>

int main(void){
    unsigned char c = 'A';
    printf("The decimal value of CHAR is\t%d\n", c);
    printf("The decimal value of CHAR is\t%u\n", c);
    return 0;
}
Output:
The decimal value of CHAR is 65
The decimal value of CHAR is 65
Apart from the missing initialization, there isn't any error in your code. As far as the length of the printed result is concerned, it is printing the value of an unsigned int instead of an unsigned char, because of the wrong format specifier. The value should be smaller than 65535, and it is.
By the way, to print an unsigned char, use the hh length modifier, i.e. the format specifier %hhu.
This tells printf that the argument originated as an unsigned char and will do the job for you.
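For example:
#include <stdio.h>

int main(void)
{
    unsigned char c = 200;
    printf("%hhu\n", c); // prints 200
    return 0;
}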
Just initialize your unsigned char, e.g.
unsigned char ch = 234;
and your printf will work correctly.
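That is, as a complete program:
#include <stdio.h>

int main(void)
{
    unsigned char ch = 234;
    printf("%u\n", ch); // prints 234: ch is promoted to int, and the value fits either way
    return 0;
}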
