Assigning a negative value to char in C prints "?" on macOS

#include <stdio.h>

int main() {
    signed char c;   // range of signed char is -128 to 127
    char d;          // this is basically a signed char, if I am not wrong
    unsigned char e; // range of unsigned char is 0 to 255

    c = -12;
    d = -12;
    e = 255;

    printf("%c\n", c);
    printf("%c\n", d);
    printf("%c\n", e);
    return 0;
}
All of these chars get printed as '?'.
I'm trying to understand why. Is it a garbage value, or is the same character actually being assigned for three different values? That seems highly unlikely.

When you pass a char, signed char, or unsigned char to printf, it is promoted to an int value (in normal C implementations, where char is narrower than int). For a %c conversion specification, printf converts that int to unsigned char and writes the resulting character.
When the character is not a printable character of the character set in use, displaying a “?” is a common behavior.
Whether char is signed or unsigned is implementation-defined, although it remains a distinct type, separate from both signed char and unsigned char.

Related

How to get hexadecimal val of char in c

So I'm new to C and I'm so confused. I don't know how to get the hexadecimal value of a char.
How would I get the char's hexadecimal value below?
char a = 'a';
With char a = 'a';, a has a value. Depending on how we want to see that value:
To print that value as a (ASCII) character: printf("%c\n", a);
To print its value in decimal: printf("%d\n", a);
To print its value in hexadecimal: printf("%x\n", a);
In all 3 cases, the value in a is promoted to an int as an argument to a variadic (...) function. It is that int value the code is printing.
As char may be signed or unsigned, additional considerations are needed when a < 0, or in rare implementations where char is as wide as an int.

How many 'char' types are there in C?

I have been reading "The C Programming Language" book by K&R, and I've come across this statement:
"plain chars are signed or unsigned"
So my question is, what is a plain char and how is it any different from signed char and unsigned char?
In the code below, how is 'myPlainChar' - 'A' different from 'mySignChar' - 'A' and 'myUnsignChar' - 'A'?
Can someone please explain the statement "Printable chars are always positive"?
Note: Please write examples and explain. Thank you.
{
    char myChar = 'A';
    signed char mySignChar = 'A';
    unsigned char myUnsignChar = 'A';
}
There are signed char and unsigned char. Whether plain char is signed or unsigned by default depends on the compiler and its settings. Usually it is signed.
There is only one char type, just like there is only one int type.
But like with int you can add a modifier to tell the compiler if it's an unsigned or a signed char (or int):
signed char x1; // x1 can hold values from -128 to +127 (typically)
unsigned char x2; // x2 can hold values from 0 to +255 (typically)
signed int y1; // y1 can hold values from -2147483648 to +2147483647 (typically)
unsigned int y2; // y2 can hold values from 0 to +4294967295 (typically)
The big difference between plain unmodified char and int is that int without a modifier will always be signed, but it's implementation defined (i.e. it's up to the compiler) if char without a modifier is signed or unsigned:
char x3; // Could be signed, could be unsigned
int y3; // Will always be signed
Plain char is the type spelled char without signed or unsigned prefix.
Plain char, signed char and unsigned char are three distinct integral types (yes, character values are (small) integers), even though plain char is represented identically to one of the other two. Which one is implementation-defined. This is distinct from, say, int: plain int is always the same as signed int.
There's a subtle point here: if plain char is for example signed, then it is a signed type, and we say "plain char is signed on this system", but it's still not the same type as signed char.
The difference between these two lines
signed char mySignChar = 'A';
unsigned char myUnsignChar = 'A';
is exactly the same as the difference between these two lines:
signed int mySignInt = 42;
unsigned int myUnsignInt = 42;
The statement "Printable char's are always positive" means exactly what it says. On some systems some plain char values are negative. On all systems some signed char values are negative. On all systems there is a character of each kind that is exactly zero. But none of those are printable. Unfortunately the statement is not necessarily correct (it is correct about all characters in the basic execution character set, but not about the extended execution character set).
How many char types are there in C?
There is one char type. There are 3 small character types: char, signed char, unsigned char. They are collectively called character types in C.
char has the same range/size/ranking/encoding as signed char or unsigned char, yet is a distinct type.
what is a plain char and how is it any different from signed char and unsigned char?
They are 3 different types in C. A plain char will match the same range/size/ranking/encoding as either signed char or unsigned char. In all cases the size is 1.
How is myPlainChar - 'A' different from mySignChar - 'A' and myUnsignChar - 'A'?
myPlainChar - 'A' will match one of the other two.
Typically mySignChar has a value in the range [-128...127] and myUnsignChar in the range of [0...255]. So a subtraction of 'A' (typically a value of 65) will result a different range of potential answers.
Can someone please explain me the statement "Printable char's are always positive".
Portable C source code characters (the basic execution character set) are positive, so printing a source code file only prints characters of non-negative values.
When printing data with printf("%c", some_character_type) or putchar(some_character_type), the value, either positive or negative, is converted to an unsigned char before printing. Thus it is a character associated with a non-negative value that is printed.
C has isprint(int c) which "tests for any printing character including space". That function is only valid for values in the unsigned char range and the negative EOF. isprint(EOF) reports 0. So only non-negative values pass the isprint(int c) test.
C really has no way to print negative values as characters without undergoing a conversion to unsigned char.
I think it means char without 'unsigned' in front of it, i.e.:
unsigned char a;
as opposed to
char a; // signed on most compilers, but implementation-defined
So basically an integer variable is signed unless you use the keyword 'unsigned'; the exception is plain char, whose signedness is left up to the implementation (most common compilers make it signed).
That should answer the second question as well.
The third question: printable ASCII characters all have non-negative codes; for example, 65 represents 'A', while a negative number like -60 does not represent a character in that set.

C - display char as hex

I want to print the ASCII code of a char in hex; for example, for
char a = 0xA5;
I want to print A5 on the console. Here is what I have tried:
char a = 0xA5;
printf("%02X", a);
but I get FFFFFFA5. How could I solve this?
Cast the value to unsigned char, then cast again to unsigned int to be printed via %X.
char a = 0xA5;
printf("%02X", (unsigned int)(unsigned char)a);
Note that conversion to a signed integer type that cannot represent the original value is implementation-defined, but conversion to an unsigned integer type is well defined, according to N1256 6.3.1.3.
The "problem" is called sign extension: parameters are passed as int by default, so a char is converted to int, and in the process sign extension adds the extra Fs. Make the char unsigned like this:
unsigned char a = 0xA5;
printf("%02X", a);
and the compiler will understand how to treat your data.

Why printing unsigned char outputs such a great number?

I am printing a variable of type unsigned char (8 bits long).
But printf() outputs a huge number outside the range of the variable I am printing.
unsigned char c;
printf("%u",c);
The output is:
21664
I don't know what is going on?
It has undefined behaviour because c is not initialized.
To print the decimal value of a character you can do it like this:
#include <stdio.h>

int main()
{
    unsigned char a = 'A';
    printf("%d", a);
    return 0;
}
Which will print the ASCII value of A, that is 65.
Modify it to printf("%u", (unsigned int)c). Note, though, that c is already promoted to int under the default argument promotions, so printf does not read a stray extra byte; the garbage value comes from c being uninitialized.
The format %u expects an unsigned int, but c is an unsigned char. In fact c is promoted to int when passed to printf; the unpredictable value comes from c never being initialized.
Using wrong format specifier leads to undefined behavior.
%u is for unsigned int not for unsigned char
The output is: 21664. I don't know what is going on?
To print the value of that char you have to give a value to it first; otherwise you get undefined behavior.
If you need to print its decimal value, just use %d (%u is also OK here), because a char is promoted to int anyway:
printf("The decimal value of CHAR is\t%d",c);
This can help you also to understand it:
#include <stdio.h>

int main(void) {
    unsigned char c = 'A';
    printf("The decimal value of CHAR is\t%d\n", c);
    printf("The decimal value of CHAR is\t%u\n", c);
    return 0;
}
Output:
The decimal value of CHAR is 65
The decimal value of CHAR is 65
There isn't any other error in your code beyond the uninitialized variable: the unsigned char is promoted to int when passed, and %u then prints whatever indeterminate value it happens to hold.
By the way, to print an unsigned char you can use the hh length modifier, i.e. %hhu.
This tells printf that the argument was originally an unsigned char and will do the job for you.
Just initialize your unsigned char, as in
unsigned char ch = 234;
and your printf will work correctly.

Is it safe to cast a character type to an integer type

#include <stdio.h>

int main() {
    char ch = 'a';
    int x;
    x = ch;
    printf("x=%c", x);
    return 0;
}
Is this code safe to use (considering the endianness of the machine)?
Yes, it is safe to cast a character (like char) type to an integer type (like int).
In this answer and others, endian-ness is not a factor.
There are 4 conversions going on here and no casting:
a is a character of the C character encoding. The character constant 'a' has type int, evaluated at compile time.
'a'
The int is converted to a char.
char ch = 'a';
The char ch is converted to an int x. In theory there could be a loss of data going from char to int **, but given the overwhelming implementations, there is none. Typical examples: If char is signed in the range -128 to 127, this maps well into int. If char is unsigned in the range 0 to 255, this also maps well into int.
int x;
x = ch;
printf("%c", x) uses the int x value passed to it, converts it to unsigned char and then prints that character. (C11dr §7.21.6.1 8 #haccks) Note there is no conversion of x due to the usual conversions of variadic parameters, as x is already an int.
printf("x=%c", x);
** char and int could be the same size and char is unsigned with a positive range more than int. This is the one potential problem with casting char to int although typically there is not loss of data. This could be further complicated should char have range like 0 to 2³²-1 and int with a range of -(2³¹-1) to +(2³¹-1). I know of no such machine.
Yes, casting integer types to bigger integer types is always safe.
Standard library's *getc (fgetc, getchar, ...) functions do just that--they read unsigned chars internally and cast them to int because int provides additional room for encoding EOF (end of file, usually EOF==-1).
Yes it is, because int is bigger than char, but using char instead of int would not be safe for the same reason.
What you are doing is first:
int x = ch; => assigning the ASCII value of the char to an int.
And finally:
printf("x=%c", x); => printing that value as a char, which prints the actual character corresponding to that value. So yes, it's safe to do that; it's totally predictable behaviour.
But safe does not mean useful: as an int is bigger than a char, we usually do the inverse to save some memory.
It is safe here because the char is converted to int anyway when calling printf.
See C++ variadic arguments.
