Errors compiling C code with sizeof from command line - c

I wrote the following code in Nano from the Linux command line, and I get errors when compiling. I'd like to know what I need to change to make it compile properly. I am trying to print the number of bits in each of the listed data types on a single line:
#include <stdio.h>

int main(void) {
    char A;
    unsigned char B;
    int a;
    unsigned int b;
    long c;
    unsigned long d;
    float e;
    double f;
    long double g;

    printf(
        "%c %c %i %u %li %lu %f %lf %Lf\n",
        sizeof(char), sizeof(unsigned char),
        sizeof(int), sizeof(unsigned int),
        sizeof(long), sizeof(unsigned long),
        sizeof(float), sizeof(double), sizeof(long double)
    );
    return 0;
}

The sizeof operator yields an unsigned integer of type size_t, so you should use the matching printf format specifier, "%zu" (available since C99):
printf(
"%zu %zu %zu %zu %zu %zu %zu %zu %zu\n",
sizeof(char), sizeof(unsigned char),
sizeof(int), sizeof(unsigned int),
sizeof(long), sizeof(unsigned long),
sizeof(float), sizeof(double), sizeof(long double)
);
However, this prints the number of bytes in each type. If you want the number of bits, include <limits.h> and multiply each result by CHAR_BIT:
#include <limits.h>
/* ... */
printf(
"%zu %zu %zu %zu %zu %zu %zu %zu %zu\n",
sizeof(char) * CHAR_BIT, sizeof(unsigned char) * CHAR_BIT,
sizeof(int) * CHAR_BIT, sizeof(unsigned int) * CHAR_BIT,
sizeof(long) * CHAR_BIT, sizeof(unsigned long) * CHAR_BIT,
sizeof(float) * CHAR_BIT, sizeof(double) * CHAR_BIT, sizeof(long double) * CHAR_BIT
);
IMO, it would look much clearer if you label what you're printing and print each value on its own line, like this:
printf("Number of bits in char = %zu\n", sizeof(char) * CHAR_BIT);
printf("Number of bits in unsigned char = %zu\n", sizeof(unsigned char) * CHAR_BIT);
printf("Number of bits in int = %zu\n", sizeof(int) * CHAR_BIT);
printf("Number of bits in unsigned int = %zu\n", sizeof(unsigned int) * CHAR_BIT);
printf("Number of bits in long = %zu\n", sizeof(long) * CHAR_BIT);
printf("Number of bits in unsigned long = %zu\n", sizeof(unsigned long) * CHAR_BIT);
printf("Number of bits in float = %zu\n", sizeof(float) * CHAR_BIT);
printf("Number of bits in double = %zu\n", sizeof(double) * CHAR_BIT);
printf("Number of bits in long double = %zu\n", sizeof(long double) * CHAR_BIT);
And that can be shortened with a macro (macros aren't ideal, but they are useful for repetitive code like this):
#define PRINT_BITS_IN_TYPE(type) \
printf("Number of bits in " #type " = %zu\n", sizeof(type) * CHAR_BIT)
PRINT_BITS_IN_TYPE(char);
PRINT_BITS_IN_TYPE(unsigned char);
PRINT_BITS_IN_TYPE(int);
PRINT_BITS_IN_TYPE(unsigned int);
PRINT_BITS_IN_TYPE(long);
PRINT_BITS_IN_TYPE(unsigned long);
PRINT_BITS_IN_TYPE(float);
PRINT_BITS_IN_TYPE(double);
PRINT_BITS_IN_TYPE(long double);
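Putting the macro version together into a complete program (just a sketch; the bit counts it prints depend on your platform):

#include <stdio.h>
#include <limits.h>

#define PRINT_BITS_IN_TYPE(type) \
    printf("Number of bits in " #type " = %zu\n", sizeof(type) * CHAR_BIT)

int main(void)
{
    PRINT_BITS_IN_TYPE(char);
    PRINT_BITS_IN_TYPE(unsigned char);
    PRINT_BITS_IN_TYPE(int);
    PRINT_BITS_IN_TYPE(unsigned int);
    PRINT_BITS_IN_TYPE(long);
    PRINT_BITS_IN_TYPE(unsigned long);
    PRINT_BITS_IN_TYPE(float);
    PRINT_BITS_IN_TYPE(double);
    PRINT_BITS_IN_TYPE(long double);
    return 0;
}

On a typical 64-bit Linux system this reports 8 bits for the char types, 32 for int and float, and 64 for long and double, but only the values printed on your own machine are authoritative.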

This happens because each sizeof expression has type size_t, while the conversion specifiers in your format string (%c, %i, %u, %f, and so on) expect different argument types (int, unsigned int, double, ...). When the type of an argument does not match what its specifier expects, the C standard does not define the behavior.
You can do the following:
1) Compile with a C99 or later standard (for example, -std=c11).
2) Use %zu as the conversion specifier for the size_t values that sizeof yields.

Related

How to cast void* to long long in C?

I am using a 32-bit system. I want to cast a void * to long long (64-bit). I tried the code below, but I am not getting the expected values.
printf("size of long long is %d and unsigned int is %d\n", sizeof(long long), sizeof(unsigned int));
void* ptr = 12340000;
long long test = (long long)ptr;
printf("the value of the ptr is %d\n", ptr);
printf("the value of the test is %d\n", test);
The output for the above code is:
size of long long is 8 and unsigned int is 4
the value of the ptr is 12340000
the value of the test is 1610658408
Let's start by making the format strings match the types you pass.
#include <stdio.h>

int main(void)
{
    printf("size of long long is %zu and unsigned int is %zu\n",
           sizeof(long long), sizeof(unsigned int));

    void *ptr = (void *) 12340000;
    long long test = (long long) ptr;

    printf("the value of the ptr is %p\n", ptr);
    printf("the value of the test is %lld\n", test);
}
If you print test in hexadecimal (hint: use %llx) you will see it is probably the same value as ptr in a slightly different representation. You shouldn't cast a void * to long long without checking that it actually gives the correct results on your platform.
But be very careful with format strings. For sizeof you should use %zu, for void * it is %p and for long long it is %lld.
The cast to long long is just: (long long).
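For example, continuing from the corrected code above (the cast to unsigned long long is there because %llx expects an unsigned value):

/* Print the pointer and the integer it was converted to, both in hex. */
printf("ptr  = %p\n", ptr);
printf("test = 0x%llx\n", (unsigned long long) test);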

Strange typecast behavior

I have this code:
#include <stdio.h>

int func(unsigned int *a) {
    printf("(func) Value: %d\n", *a);
}

int main() {
    unsigned char a = 255;
    printf("Value: %d\n", a);
    printf("Bytes: %d %d %d %d\n\n", *&a, *(&a + 1), *(&a + 2), *(&a + 3));
    func((unsigned int *) &a);
    return 0;
}
The output of this program is:
Value: 255
Bytes: 255 0 22 234
(func) Value: -367656705
Why do I get a negative value in func, even though the type is unsigned int?
Why do I get a negative value in func, even though the type is unsigned int?

int func(unsigned int *a) {
    printf("(func) Value: %d\n", *a);
    //                    ^^
}

Because %d does not match the type of *a; the matching conversion specifier for an unsigned int is %u.
Because sizeof(unsigned char) and sizeof(unsigned int) are different (and the latter is platform dependent): unsigned char is always 1 byte, while unsigned int is typically 4 bytes. When you do that pointer arithmetic and read through the casted pointer, you are looking at other bytes in the stack frame beyond a. C gives you plenty of rope to hang yourself with.
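For completeness, here is a sketch that avoids the mismatched cast altogether; changing func to take the value instead of a pointer is my adjustment, not part of the original question:

#include <stdio.h>

/* The unsigned char argument is converted to unsigned int on the call,
   so there is no out-of-bounds read and %u matches the argument type. */
void func(unsigned int a) {
    printf("(func) Value: %u\n", a);
}

int main(void) {
    unsigned char a = 255;
    printf("Value: %u\n", (unsigned int) a);
    func(a);
    return 0;
}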

Print correct format from double value to hexadecimal

#include <stdio.h>
typedef unsigned char *pointer;

void show_bytes(pointer start, size_t len)
{
    size_t i;
    for (i = 0; i < len; i++)
        printf("%p\t0x%04x\n", start + i, start[i]);
    printf("\n");
}

int main()
{
    double a = 4.75;
    printf("Double demo by %s on %s %s\n", "Toan Tran", __DATE__, __TIME__);
    printf("Double a = %.2f (0x%08x)\n", a, a);
    show_bytes((pointer) &a, sizeof(double));
}
Output:
Double demo by Toan Tran on Nov 8 2018 11:07:07
Double a = 4.75 (0x00000100)
0x7ffeee7a0b38 0x0000
0x7ffeee7a0b39 0x0000
0x7ffeee7a0b3a 0x0000
0x7ffeee7a0b3b 0x0000
0x7ffeee7a0b3c 0x0000
0x7ffeee7a0b3d 0x0000
0x7ffeee7a0b3e 0x0013
0x7ffeee7a0b3f 0x0040
For this line:
printf("Double a = %.2f (0x%08x)\n", a, a);
I want it to print out the same bytes that show_bytes prints for start[i].
The hexadecimal it prints is not the right value for the double.
I want it to print 0x40130000000000...
Please help.
The %x format specifier is expecting an unsigned int argument, but you're passing in a double. Using the wrong format specifier invokes undefined behavior.
To print the representation of a double, you need to print each individual byte as hex using a character pointer. This is exactly what you're doing in show_bytes, and is the proper way to do this.
Also, when printing a pointer with the %p format specifier, you should cast the pointer to void *, which is what %p expects. This is one of the rare cases where a cast to void * is needed.
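In show_bytes, for example, that would look like the following; only the cast to void * is new, the rest is the line from the question:

printf("%p\t0x%04x\n", (void *) (start + i), start[i]);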
You might be tempted to do something like this:
printf("%llx", *((unsigned long long *)&a));
However, this violates the strict aliasing rule. Instead, use memcpy to copy the bytes into an object of the other type:
_Static_assert(sizeof(unsigned long long) == sizeof(double),
               "unsigned long long and double must be the same size");  /* C11 */
unsigned long long b;
memcpy(&b, &a, sizeof(a));  /* memcpy needs <string.h> */
printf("%llx", b);
You can also do this with a union:
union dval {
    double d;
    unsigned long long u;
};

union dval v;
v.d = a;
printf("%llx", v.u);
To allow printing a hex dump of any object, pass its address and length.
#include <limits.h>  /* for CHAR_BIT */

void show_bytes2(void *start, size_t size) {
    int nibble_width_per_byte = (CHAR_BIT + 3) / 4; // Often 2
    unsigned char *mem = start;

    // Highest bytes first
    for (size_t i = size; i > 0; ) {
        printf("%0*x", nibble_width_per_byte, mem[--i]);
    }
    printf("\n");

    // Lowest bytes first
    while (size--) {
        printf("%0*x", nibble_width_per_byte, *mem++);
    }
    printf("\n");
}
Use "%a" to print the significand of the double in hexadecimal.
int main() {
    double a = 4.75;
    printf("Double a = %a %e %f %g\n", a, a, a, a);
    show_bytes2(&a, sizeof a);
}
Output
Double a = 0x1.3p+2 4.750000e+00 4.750000 4.75
4013000000000000 // I want it to return 0x40130000000000...
0000000000001340

Unexpected result of size_t modulo long unsigned int

In the following code, BITS_IN_INT is a long unsigned int with the value 32. Why does the modulo operation return 0 instead of the expected 20?
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

#define BITS_IN_INT sizeof(int) * CHAR_BIT

int main()
{
    size_t i = 20;
    printf("%zu\n", i);
    printf("%lu\n", BITS_IN_INT);
    printf("%lu\n", i % BITS_IN_INT);
    return 0;
}
After macro expansion (and with CHAR_BIT being 8 on your platform), the last printf looks like:
printf("%lu\n", i % sizeof(int) * 8);
So, the expression (in printf) is evaluated as if:
(i % sizeof(int)) * 8
If sizeof(int) is 4 (which seems to be the case on your platform), then i % sizeof(int) yields 20 % 4 == 0, and 0 * 8 is 0.
Avoid macros if you can. If you do use them, always parenthesize the macro body:
#define BITS_IN_INT (sizeof(int) * CHAR_BIT)
NB: %zu is the correct format specifier for printing size_t values.
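With the parenthesized macro, a reduced version of the program (a sketch assuming a 4-byte int and CHAR_BIT == 8, so BITS_IN_INT is 32) prints the expected 20:

#include <stdio.h>
#include <limits.h>

#define BITS_IN_INT (sizeof(int) * CHAR_BIT)

int main(void)
{
    size_t i = 20;
    printf("%zu\n", i % BITS_IN_INT);  /* 20 % 32 == 20 */
    return 0;
}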

How to multiply with 12 decimal places in C?

I'm trying to get 12 decimal places of precision in C. I don't know if there's an easier solution, but at least this code works. Now I'm trying to save the result in a long double, but strtold() is not working:
char *multiply12Decimals(float n1, float n2)
{
    long n1Digits;
    sscanf(doubleToVoidPointerInNewMemoryLocation(n1 * 1000000), "%ld", &n1Digits);
    printf("n1Digits: %ld\n", n1Digits);

    long n2Digits;
    sscanf(doubleToVoidPointerInNewMemoryLocation(n2 * 1000000), "%ld", &n2Digits);
    printf("n2Digits: %ld\n", n2Digits);

    long long mult = (long long) n1Digits * n2Digits;
    printf("mult: %lld\n", mult);

    char *charNum = malloc(30 * sizeof(char));
    sprintf(charNum, "0.%012lld\n", mult);
    printf("result: %s\n", charNum);
    return charNum;
}
printf("%.12lf",num); solves the problem.
Multiply two double and print it like this. No need to use long.
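A minimal sketch of that approach (the input values here are made up for illustration):

#include <stdio.h>

int main(void)
{
    double n1 = 1.25;               /* hypothetical inputs */
    double n2 = 0.000123456789;
    printf("%.12lf\n", n1 * n2);    /* product with 12 digits after the decimal point */
    return 0;
}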
