Printing type-cast types in C

#include <stdio.h>

int main(void) {
    int nr = 5;
    char castChar = (char)nr;
    char realChar = '5';
    printf("The value is: %d\n", castChar);
}
If the above code is compiled, the output will be:
The value is: 5
But if the code below is compiled, the console will output the value 53 instead. Why doesn't it print the same as when castChar is printed?
#include <stdio.h>

int main(void) {
    int nr = 5;
    char castChar = (char)nr;
    char realChar = '5';
    printf("The value is: %d\n", realChar);
}

Because the value of castChar is the integer value 5, while the value of realChar is the integer encoding of the character value '5' (ASCII 53). These are not the same values.
Casting has nothing to do with it. Or, more accurately, casting nr to char doesn't give you the character value '5', it just assigns the integer value 5 to a narrower type.
If you expect the output 5, then you need to print realChar with the %c specifier, not the %d specifier.
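A minimal sketch of that fix, keeping the same variables and assuming an ASCII character set:

#include <stdio.h>

int main(void) {
    int nr = 5;
    char castChar = (char)nr;  /* holds the integer value 5 */
    char realChar = '5';       /* holds the code of the character '5' (53 in ASCII) */

    printf("castChar with %%d: %d\n", castChar);  /* prints 5 */
    printf("realChar with %%d: %d\n", realChar);  /* prints 53 */
    printf("realChar with %%c: %c\n", realChar);  /* prints the character 5 */
    return 0;
}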

(char)5 and '5' are not the same thing.
The literal '5' is an integer value that represents the character 5. Its value depends on the platform. Assuming ASCII representation for characters, this would be 53.
The expression (char)5 (not actually a literal, but a cast) is the integer value 5 converted to type char. This means it retains the value 5 after the cast.

Related

Is it possible to store a character in an array which is defined as an integer?

How could the array defined as int store the string values? Just look at the code, arri[] is defined as an integer but storing string values? And also the array defined as a character is storing integer values. How is this possible?
#include <stdio.h>

int main(void) {
    int arri[] = {'1', '2', 'a'};
    int *ptri = arri;
    char arrc[] = {11, 21, 31};
    char *ptrc = arrc;
    printf("%d", *arri);
    printf("%d", *ptri);
    printf("%d", *arrc);
    printf("%d", *ptrc);
    return 0;
}
How could the array defined as int store the string values?
There are no strings in the code snippets you provided.
In this declaration
int arri[] = {'1' , '2', 'a'};
the initializers are integer character constants, which have the type int; they are used to initialize the elements of the array. These character constants are stored internally as their codes: in the ASCII character set, for example, they have the values 49, 50, and 97, respectively.
Here is a demonstration program:
#include <stdio.h>

int main(void)
{
    int arri[] = {'1' , '2', 'a'};
    const size_t N = sizeof( arri ) / sizeof( *arri );

    for ( size_t i = 0; i < N; i++ )
    {
        printf( "'%c' = %d ", arri[i], arri[i] );
    }
    putchar( '\n' );

    return 0;
}
The program output is
'1' = 49 '2' = 50 'a' = 97
When the conversion specifier %c is used, printf outputs them as (graphical) characters.
Note that when the conversion specifier %d is used to output an object of type char, the integer promotions are applied first: the char value is promoted to an expression of type int.
In this declaration
char arrc[] = {11, 21 , 31 };
the integer constants have values that fit into the range of values that can be stored in an object of type char.
In both cases there is no truncation or overflow.
The first thing to make clear is that you don't actually store a character like 'a' anywhere inside the computer. You actually store a number; for 'a' that number is decimal 97. The computer itself has no knowledge of this being an 'a' and only sees it as a number. It is only when you send that number to a device expecting characters (e.g. a terminal, a printer, etc.) that some device driver turns the number into a displayed 'a'.
See https://en.wikipedia.org/wiki/ASCII for a description of the mapping between characters and numbers.
The C standard allows you to use characters just as if they were numbers. The compiler automatically converts the character to the corresponding number. Therefore
int x = 'a';
is exactly the same as
int x = 97;
and your line
int arri[] = {'1' , '2', 'a'};
is the same as
int arri[] = {49 , 50, 97};
As already mentioned, the type char just stores numbers, like the type int does. The difference is only the range of numbers that can be stored. Typically a char is 1 byte of memory and an int is 4 bytes (but this is system dependent).
So this code
char arrc[] = {11, 21 , 31 };
simply stores those 3 decimal numbers. Typically using 1 byte for each number.
The interesting part is this line:
printf("%d" , *arrc );
Here *arrc is the number 11, stored in 1 byte (typically). So how can it be printed using %d, which expects an int?
The answer is "default argument promotions". For variadic functions (like printf) this means that integer types "smaller" than int are converted to int before the function call. Note that char is considered an integer type, so this also applies to char.
So in your case the number 11 stored in a char (1 byte) will automatically be converted to the number 11 stored in an int (4 bytes). Consequently the printf function will receive an int and will be able to print it as such.
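A small sketch of that promotion at work (the sizes in the comments are typical, not guaranteed by the standard):

#include <stdio.h>

int main(void) {
    char small = 11;                   /* stored in 1 byte */

    /* As an argument to a variadic function, 'small' undergoes the
       default argument promotions and is passed as an int. */
    printf("%d\n", small);             /* prints 11 */

    printf("%zu\n", sizeof small);     /* size of the char object, typically 1 */
    printf("%zu\n", sizeof (+small));  /* size after integer promotion, typically 4 */
    return 0;
}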
In the statement
int foo = 3.14159;
the double value is automatically converted to int (to 3) [implicit conversion]. There is nothing that prohibits the conversion, so the assignment ("of a double to an int") is OK.
Same thing with your example
char foo = 65; // the int value is implicitly converted to type char
char bar[] = { 66, 67, 0 }; // 3 conversions ok
char baz = 20210914; // possibly erroneous conversion from int to char
// in this case the compiler will, probably, warn you
// 20210914 is beyond the range of char, so the result is implementation-defined
Note that
int a = 'b';
the 'b' above is a value of int type, so there really is no conversion.
char b = 'c'; // implicit conversion from int to char ok
int c = b; // implicit conversion from char to int ok

Issue with turning a character into an integer in C

I am having issues with converting character variables into integer variables. This is my code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main()
{
    char string[] = "A2";
    char letter = string[0];
    char number = string[1];
    char numbers[] = "12345678";
    char letters[] = "ABCDEFGH";
    int row;
    int column;

    for (int i = 0; i < 8; i++) {
        if (number == numbers[i]) {
            row = number;
        }
    }
}
When I try to convert the variable row into the integer value of the variable number, instead of 2 I get 50. The goal so far is to convert the variable row into the accurate value of the character variable number, which in this case is 2. I'm a little confused as to why the variable row is 50 and not 2. Can any one explain to me why it is not converting accurately?
'2' != 2. The '2' character, in ASCII, is 50 in decimal (0x32 in hex). See http://www.asciitable.com/
If you're sure they're really numbers you can just use (numbers[i] - '0') to get the value you're looking for.
2 in your case is a character, and that character's value is 50 because that is the decimal version of the byte value that represents the character '2' in ASCII. Remember, C is very low level and characters are essentially the same thing as any other value: a sequence of bytes. Just as letters are represented as bytes, so are the characters we use for digits in our base-10 system. It might seem that '2' should be represented by the value 2, but it isn't.
If you use the atoi function, it will look at the string and compute the decimal value represented by the characters in your string.
However, if you're only converting one character to the decimal value it represents, you can take a shortcut: subtract the value of '0' from the digit character. Though the digit characters are not represented by the base-10 values they have for us humans, they are ordered sequentially in the character set. And since in C characters are simply byte values, the difference between a digit character '0' through '9' and '0' is that digit's numeric value.
char c = '2';
int i = c - '0';
If you understand why that would work, you get what I'm saying.
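Applied back to the original example, a minimal sketch of that shortcut might look like this (the column part additionally assumes that 'A' through 'H' are consecutive, which is true in ASCII but not required by the standard):

#include <stdio.h>

int main(void) {
    char string[] = "A2";
    char letter = string[0];
    char number = string[1];

    /* '2' - '0' is 2 because the digit characters are consecutive. */
    int row = number - '0';
    /* 'A' - 'A' is 0, 'B' - 'A' is 1, ... assuming consecutive letters (ASCII). */
    int column = letter - 'A';

    printf("row = %d, column = %d\n", row, column);  /* row = 2, column = 0 */
    return 0;
}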

Overflow char to int conversion

#include <stdio.h>

void func(int a);

int main() {
    char ch = 256;
    func(ch);
    return 0;
}

void func(int a) {
    printf("%d\n", a);
}
The output of the above program is 0.
Can anyone explain how this happens?
256 is too large for the char type on your machine.
So it wraps around to 0 on your implementation, as the C standard allows (from N1570, emphasis mine):
6.3.1.3 Signed and unsigned integers
1 When a value with integer type is converted to another integer type other than _Bool, if the value can be represented by the new type, it is unchanged.
2 Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.
3 Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised.
The 256 literal is an int, and char is an integer type that is either signed or unsigned, so one of the above emphasized behaviors will occur.
Because the char type can hold only 256 distinct values, your output is 0.
http://www.asciitable.com/ has the complete character set details; a char is limited to 256 values:
unsigned char 1 byte -> 0 to 255
signed char 1 byte -> -128 to 127
Trying to store 256 (binary 1 00000000) in 8 bits drops the ninth bit, so it overflows and only the zero bits are left.
#include <stdio.h>

void func(int a);

int main() {
    char ch = 256;
    printf("%c\n", ch); // nothing prints: that's what happens when we print NUL
    func(ch);
    return 0;
}

void func(int a) {
    printf("%d\n", a);
}
The maximum value of char (when it is signed) is 127. The value 256 is 129 more than 127, and when we add 1 to 127 (the maximum value of char) it wraps around to the minimum value, -128. So adding 129 to 127 results in 0.
For char:
127 + 1 = -128
127 + 129 = 0
So for the below program the output is 1.
#include <stdio.h>

void func(int a);

int main() {
    char ch = 257;
    func(ch);
    return 0;
}

void func(int a) {
    printf("%d\n", a);
}
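A minimal sketch comparing the two initializers (for a signed or plain char the out-of-range result is implementation-defined; 0 and 1 are what typical two's-complement implementations produce, and most compilers warn about the conversions):

#include <stdio.h>

int main(void) {
    char a = 256;           /* one past the 8-bit range: low byte is 0 */
    char b = 257;           /* low byte is 1 */
    unsigned char c = 256;  /* unsigned conversion is well defined: 256 mod 256 = 0 */

    printf("%d %d %d\n", a, b, c);  /* typically prints: 0 1 0 */
    return 0;
}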
If you compile this with gcc it actually tells you why:
testo.c: In function ‘main’:
testo.c:6:15: warning: overflow in implicit constant conversion [-Woverflow]
char ch = 256;
^

What is the difference between 9-'0' and '9'-'0' in C?

In my following code:
int main() {
    int c;
    char c1 = '0';
    int x = 9 - c1;
    int y = '9' - c1;
}
Now in this program I'm getting the value of x as some arbitrary value, but the value of y is 9, which is the value that I expect. Why this difference?
Here is a good explanation. Just compile it and run:
#include <stdio.h>

int main() {
    int c;
    char c1 = '0';
    int x = 9 - c1;
    int y = '9' - c1;

    printf("--Code and Explanation--\n");
    printf("int c;\n");
    printf("char c1='0';\n");
    printf("int x=9-c1;\n");
    printf("int y='9'-c1;\n");
    printf("c1 as char '0' has decimal value: %d\n", c1);
    printf("decimal 9 - decimal %d or c1 = %d or x\n", c1, x);
    printf("char '9' has decimal value %d - decimal %d or c1 = %d\n", '9', c1, y);
    printf("You're welcome :)\n");
    return 0;
}
First: chars are integers.
Second: chars might have a printable representation or an output-controlling function (for ASCII, e.g. TAB, CR, LF, FF, BELL, ...), depending on the character set in use.
For ASCII
char c = 'A';
is the same as
char c = 65;
is the same as
char c = 0x41;
Another character set widely in use, for example, is EBCDIC. It uses a different mapping of a character's integer value to its printable/controlling representation.
Internally, always the same integer value is used/stored.
The printable (often, but not always, ASCII) representation of, for example, 65 or 0x41, which is A, is only used when
either printing out using the printf() family along with the conversion specifiers %s or %c, or puts(),
or scanning in using the scanf() family along with the conversion specifiers %s or %c, or fgets(),
or when coding literals like 'A' or "ABC".
In all other operations only the char's integer value is used.
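A small sketch of that distinction, assuming ASCII:

#include <stdio.h>

int main(void) {
    char c = 'A';          /* internally just the integer 65 in ASCII */

    printf("%c\n", c);     /* printable representation: A */
    printf("%d\n", c);     /* the stored integer value: 65 */
    printf("%d\n", c + 1); /* arithmetic uses the integer value: 66 */
    return 0;
}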
When you do calculations with chars, you have to keep in mind that to you it looks like a '0' or a '9', but the compiler interprets it as its ASCII value, which is 48 for '0' and 57 for '9'.
So when you do:
int x=9-c1;
the result is 9 - 48 = -39. And for
int y='9'-c1;
the result is 57 - 48 = 9.
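A minimal sketch checking those numbers (the -39 assumes ASCII; the 9 is guaranteed, because the standard requires the digits '0' through '9' to be consecutive):

#include <stdio.h>

int main(void) {
    char c1 = '0';     /* 48 in ASCII */
    int x = 9 - c1;    /* 9 - 48 = -39 (depends on the character set) */
    int y = '9' - c1;  /* 57 - 48 = 9 (same in every character set) */

    printf("x = %d, y = %d\n", x, y);
    return 0;
}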
According to the C Standard (5.2.1 Character sets)
...In both the source and execution basic character sets, the value of
each character after 0 in the above list of decimal digits shall be
one greater than the value of the previous.
Thus the expression '9' - '0' has the same value as 9 - 0 and is equal to 9, whether you are using, for example, the ASCII character set or EBCDIC.
The expression 9 - '0' is implementation-defined and depends on the character set you are using. But in any case the value of the internal representation of the character '0' is greater than 9 (9 is the value of the tab character '\t').
For example in the ASCII the value of the code of character '0' is equal to 48.
In the EBCDIC the value of '0' is equal to 240.
So you will get that 9 - '0' is some negative number.
For example it is equal to -39 if the character representations are based on the ASCII table or -231 if the character representations are based on the EBCDIC table.
You can see this yourself by running this simple program:
#include <stdio.h>

int main( void )
{
    printf( "%d\n", 9 - '0' );
}
You could also write the printf statement in the following way ;)
printf( "%d\n", '\t' - '0' );
because, as I mentioned, 9 is the value of the internal representation of the escape character '\t' (tab).

How to print the hexadecimal in a specific manner?

In the following code I stored the MAC address in a char array.
But even though I am storing it in a char variable, when printing it prints as follows:
ffffffbb
ffffffcc
ffffffdd
ffffffee
ffffffff
This is the code:
#include <stdio.h>

int main()
{
    char *mac = "aa:bb:cc:dd:ee:ff";
    char a[6];
    int i;

    sscanf(mac, "%x:%x:%x:%x:%x:%x", &a[0], &a[1], &a[2], &a[3], &a[4], &a[5]);
    for (i = 0; i < 6; i++)
        printf("%x\n", a[i]);
}
I need the output to be in the following way:
aa
bb
cc
dd
ee
ff
The current printf statement is
printf("%x\n",a[i]);
How can I get the desired output and why is the printf statement printing ffffffaa even though I stored the aa in a char array?
You're using %x, which expects the argument to be unsigned int *, but you're just passing char *. This is dangerous, since sscanf() will do an int-sized write, possibly writing outside the space allocated to your variable.
Change the conversion specifier for the sscanf() to %hhx, which means unsigned char. Then change the print to match. Also, of course, make the a array unsigned char.
Also check to make sure sscanf() succeeded:
unsigned char a[6];

if (sscanf(mac, "%hhx:%hhx:%hhx:%hhx:%hhx:%hhx",
           a, a + 1, a + 2, a + 3, a + 4, a + 5) == 6)
{
    printf("daddy MAC is %02hhx:%02hhx:%02hhx:%02hhx:%02hhx:%02hhx",
           a[0], a[1], a[2], a[3], a[4], a[5]);
}
Make sure to treat your a array as unsigned chars, i.e.
unsigned char a[6];
In
printf("%x\n",a[i]);
the expression a[i] yields a char. However, the standard does not specify whether char is signed or unsigned. In your case, the compiler apparently treats it as a signed type.
Since the most significant bit is set in every byte of your MAC address (each byte is larger than or equal to 0x80), a[i] is treated as a negative value, so printf generates the hexadecimal representation of that negative value after it has been sign-extended to int.
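If the array has to stay plain char, another common workaround is to convert each element to unsigned char before printing; a minimal sketch (the ffffffbb line assumes char is signed and 8 bits wide):

#include <stdio.h>

int main(void) {
    char byte = (char)0xbb;  /* negative where char is signed */

    printf("%x\n", byte);                   /* may print ffffffbb: sign-extended to int */
    printf("%02x\n", (unsigned char)byte);  /* prints bb: value is brought into 0..255 first */
    return 0;
}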

Resources