I tried to code a program, but I'm having problems with one small segment.
for (uint8_t i = 1; i < MAX_BIT_VALUE; i *= 2) {
    printf("Current value of i: %#u\n", i);
}
When I run this segment, I get this output:
Current value of i: 0
Current value of i: 0
Current value of i: 0
Current value of i: 0
in an infinite loop. I don't understand why. uint8_t is an 8-bit unsigned integer. I merely multiplied i, which has the value 1, by 2. How could it possibly become 0?
If I change the data type of i to int, however, it works just fine:
Current value of i: 1
Current value of i: 2
Current value of i: 4
Current value of i: 8
Current value of i: 16
...
I tried to find a possible answer online, but I don't know how to phrase the problem to get an answer. Could you guys help me please?
Apart from what the other answer recommends about the printing, you have the following problem:
multiply i == 128 by 2
result: 256
storing 256 (0x100) in an unsigned 8-bit variable results in 0,
because 8 bits are too narrow
multiply 0 by 2
result: 0
0 < 255 -> endless loop
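To see the wraparound in isolation, here is a minimal sketch (assuming the usual 8-bit uint8_t from <stdint.h>):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t i = 128;
    i *= 2;                       /* 256 does not fit in 8 bits; unsigned arithmetic wraps modulo 256 */
    printf("%u\n", (unsigned)i);  /* prints 0 */
    return 0;
}

Note that unsigned wraparound is well-defined behaviour (reduction modulo 2^N); it is just not what you wanted here.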
Your code causes undefined behavior.
The # flag is not supposed to be used with the u conversion specifier.
Quoting C11, chapter §7.21.6.1:
# The result is converted to an ‘‘alternative form’’. For o conversion, it increases
the precision, if and only if necessary, to force the first digit of the result to be a
zero (if the value and precision are both 0, a single 0 is printed). For x (or X)
conversion, a nonzero result has 0x (or 0X) prefixed to it. For a, A, e, E, f, F, g,
and G conversions, the result of converting a floating-point number always
contains a decimal-point character, even if no digits follow it. (Normally, a
decimal-point character appears in the result of these conversions only if a digit
follows it.) For g and G conversions, trailing zeros are not removed from the
result. For other conversions, the behavior is undefined.
For fixed-width integers, use the format specifier macros defined in <inttypes.h>, like PRIu8 for an 8-bit unsigned integer type.
After you fix this, the next problem is overflow of the 8-bit variable. As described in the other answer by Yunnosch, once i becomes 128 and you multiply it by 2, storing the result back in the 8-bit variable yields 0. Then,
0 multiplied by anything remains 0
which makes the i < MAX_BIT_VALUE part of the for loop condition forever TRUE,
and you get the infinite loop.
Change the printf format string from:
printf("Current value of i: %#u\n", i);
to
printf("Current value of i: %"PRIu8"\n", i);
and it should be fine. The PRIu8 macro is defined in <inttypes.h>.
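Putting both fixes together, a minimal sketch (assuming MAX_BIT_VALUE is 255, since the question never shows its definition; a wider loop variable also sidesteps the wraparound):

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

#define MAX_BIT_VALUE 255  /* assumed value; not shown in the question */

int main(void) {
    /* uint16_t is wide enough that i *= 2 cannot wrap before exceeding 255 */
    for (uint16_t i = 1; i < MAX_BIT_VALUE; i *= 2) {
        printf("Current value of i: %" PRIu16 "\n", i);
    }
    return 0;
}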
I'm making a function that takes a value using scanf_s and converts that into a binary value. The function works perfectly... until I put in a really high value.
I'm also doing this on VS 2019 in x64 in C
And in case it matters, I'm using
main(int argc, char* argv[])
for the main function.
Since I'm not sure what on earth is happening, here's the whole code I guess.
BinaryGet()
{
    // Declaring lots of stuff
    int x, y, z, d, b, c;
    int counter = 0;
    int doubler = 1;
    int getb;
    int binarray[2000] = { 0 };
    // I only have to change things to 1 now, ain't I smart?
    int binappend[2000] = { 0 };

    // Get number
    printf("Gimme a number\n");
    scanf_s("%d", &getb);

    // Because why not
    printf("\n");

    // Get the amount of binary places to be used (how many times getb divides by 2)
    x = getb;
    while (x > 1)
    {
        d = x;
        counter += 1;
        // Tried x /= 2, gave me infinity loop ;(
        x = d / 2;
    }

    // Fill the array with binary values (i.e. 1, 2, 4, 8, 16, 32, etc)
    for (b = 1; b <= counter; b++)
    {
        binarray[b] = doubler * 2;
        doubler *= 2;
    }

    // Compare the value of getb to binary values, subtract and repeat until getb = 0
    c = getb;
    for (y = counter; c >= 1; y--)
    {
        // Printing c at each subtraction
        printf("\n%d\n", c);

        // If the value of c (a temp variable) compares right to the binary value, subtract that binary value
        // and put a 1 in that spot in binappend, the 1 and 0 list
        if (c >= binarray[y])
        {
            c -= binarray[y];
            binappend[y] += 1;
        }

        // Prevents buffer under? runs
        if (y <= 0)
        {
            break;
        }
    }

    // Print the result
    for (z = 0; z <= counter; z++)
    {
        printf("%d", binappend[z]);
    }
}
The problem is that when I put in the value 999999999999999999 (18 digits) it just prints 0 once and ends the function. The values of the digits don't matter though; 18 ones will give the same result.
However, when I put in 17 digits, it gives me this:
99999999999999999
// This is the input value after each subtraction
1569325055
495583231
495583231
227147775
92930047
25821183
25821183
9043967
655359
655359
655359
655359
131071
131071
131071
65535
32767
16383
8191
4095
2047
1023
511
255
127
63
31
15
7
3
1
// This is the binary
1111111111111111100100011011101
The binary value it gives me is 31 digits. I thought that it was weird that at 32, a convenient number, it gimps out, so I put in the value of the 32nd binary place minus 1 (2,147,483,647) and it worked. But adding 1 to that gives me 0.
Changing the type of array (unsigned int and long) didn't change this. Neither did changing the value in the brackets of the arrays. I tried searching to see if it's a limit of scanf_s, but found nothing.
I know for sure (I think) it's not the arrays, but probably something dumb I'm doing with the function. Can anyone help please? I'll give you a long-distance high five.
The problem is indeed related to the power-of-two size of the number you've noticed, but it's in this call:
scanf_s("%d", &getb);
The %d argument means it is reading into a signed integer, which on your platform is probably 32 bits, and since it's signed it means it can go up to 2³¹-1 in the positive direction.
The conversion specifiers used by scanf() and related functions can accept larger data types though. For example %ld will accept a long int, and %lld will accept a long long int. Check the data type sizes for your platform, because a long int and an int might actually be the same size (32 bits), e.g. on Windows.
So if you use %lld instead, you should be able to read larger numbers, up to the range of a long long int, but make sure you change the target (getb) to match! Also if you're not interested in negative numbers, let the type system help you out and use an unsigned type: %llu for an unsigned long long.
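For example, a minimal sketch of that change (using standard scanf instead of the Microsoft-specific scanf_s, and checking the return value, as discussed below):

#include <stdio.h>

int main(void) {
    unsigned long long getb;
    printf("Gimme a number\n");
    /* scanf returns the number of fields it converted; only use getb if it worked */
    if (scanf("%llu", &getb) != 1) {
        fprintf(stderr, "Could not read a number\n");
        return 1;
    }
    printf("Read: %llu\n", getb);
    return 0;
}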
Some details:
If scanf or its friends fail, the value in getb is indeterminate, i.e. uninitialised, and reading from it is undefined behaviour (UB). UB is an extremely common source of bugs in C, and you want to avoid it. Make sure your code only reads from getb if scanf tells you it worked.
In fact, in general it is not possible to avoid UB with scanf unless you're in complete control of the input (e.g. you wrote it out previously with some other, bug-free, software). While you can check the return value of scanf and related functions (it will return the number of fields it converts), its behaviour is undefined if, say, a field is too large to fit into the data type you have for it.
There's a lot more detail on scanf etc. here.
To avoid problems with not knowing what size an int is, or whether a long int differs on this platform or that, there is also the header stdint.h, which defines integer types of a specific width, e.g. int64_t. These have matching macros for use with scanf(), like SCNd64. They are available from C99 onwards, but note that Windows' support for C99 in its compilers is incomplete and may not include this.
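For example, a sketch of reading and echoing a 64-bit fixed-width integer that way (assuming the platform provides these headers):

#include <stdio.h>
#include <inttypes.h>

int main(void) {
    int64_t value;
    /* SCNd64 expands to the correct conversion specifier for int64_t */
    if (scanf("%" SCNd64, &value) == 1) {
        printf("%" PRId64 "\n", value);
    }
    return 0;
}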
Don't be so hard on yourself, you're not dumb, C is a hard language to master and doesn't follow modern idioms that have developed since it was first designed.
I came across this program to convert decimal numbers into their binary equivalent in C. I do not understand how the printf statement works in this program.
int main()
{
    int N;
    scanf("%d", &N); // Enter decimal equivalent here

    for (int j = floor(log2(N)); j >= 0; j--) {
        printf("%d", (N >> j) & 1);
    }
}
Let's take an example to get through this problem. Suppose you enter N = 65. Its binary representation is 1000001. When the given code runs, j starts at floor(log2(65)), which is 6. So the loop runs 7 times, which means 7 digits are printed (matching the fact that 65's binary representation has 7 digits).
Inside the loop, the number is shifted right by j bits each time. When 1000001 is shifted right by 6 bits, it becomes 0000001. Shifted by 5, it is 0000010, and so on, down to a shift by 0 bits, which is the original number. When each of these shifted numbers is ANDed with 1, only the least significant bit (the rightmost bit) remains, and that bit is either a 0 or a 1.
As you may have noticed, each right shift divides the number by 2. So when 1000001 is shifted right by 1 to make 0100000, that is the binary representation of 32, which indeed is 65/2 in C. After all, this is how someone manually calculates the binary representation of a number: each division by 2 gives you a digit of the representation (starting from the end), and that digit is either a 0 or a 1. The & helps in getting the 0 or 1.
In the end, 65 becomes 1000001.
What it is doing is:
Finding the largest number j such that 2^j <= N
Starting at the jth bit (counting from the right) and moving to the right ...
chopping off all of the bits to the right of the current chosen bit
chopping off all of the bits to the left of current chosen bit
printing the value of the single remaining bit
The code actually has undefined behavior because it does not include <stdio.h> or <math.h>. Without a proper declaration of floor() and log2(), the compiler infers the prototypes from the calling context and gets int log2(int) and int floor(int), which is incompatible with the actual definitions in the C library.
With the proper includes, log2(N) gives the logarithm of N in base 2. Its integral part is the exponent of the largest power of 2 <= N, hence one less than the number of binary digits in N.
Note however that this method does not work for negative values of N, and only works by coincidence for 0: log2(0) is -∞ (and NaN for negative arguments), and its conversion to int, itself undefined, happens to give 0 here, hence a single binary digit.
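With that in mind, a corrected sketch with the proper includes and a guard for non-positive input:

#include <stdio.h>
#include <math.h>

int main(void)
{
    int N;
    if (scanf("%d", &N) != 1 || N <= 0)
        return 1;                    /* the log2 trick only works for N > 0 */

    for (int j = (int)floor(log2(N)); j >= 0; j--)
        printf("%d", (N >> j) & 1);  /* isolate and print bit j */
    putchar('\n');
    return 0;
}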
I have searched many articles on Google for the following question but still haven't found a good answer.
From my first look, I was able to work out that in int x:3, the 3 is the width: if we assign x a value greater than 3, some negative value gets printed, and if we assign a value no greater than 3, the assigned value is printed correctly.
Can anyone explain how the output of this code comes about?
#include <stdio.h>

struct Point
{
    int x:3, y:4;
};

int main()
{
    struct Point p1 = {6, 3};

    printf("x = %d, y = %d", p1.x, p1.y);

    return 0;
}
The output comes as:
x = -2, y = 3
When I included the stdio.h header file, the warning vanished, but the output for x changed: without the header the value of x came out as -4, and with stdio.h it becomes -2. Why is this happening?
Thanks in advance!!
You are trying to store the value 6 into a signed integer of 3 bits, where there are 2 value bits and 1 bit reserved for the sign. Therefore the value 6, which cannot fit in the 2 value bits, is converted to a signed value in an implementation-defined manner; apparently by using two's complement.
"warning: overflow in implicit constant conversion" tells you just that. The compiler tries show constant 6 into the signed type bit field. I cannot fit - you get an overflow, and then there's an implicit type conversion to signed type.
int a = 31;
int b = 1;

while (a)
{
    b = (2 * b) + 1;
    a--;
}

printf("count:%d \n", b);
It prints the right number when a is smaller than 31. Starting from 31, it just prints -1, and I don't understand why. How can I fix it?
The integer overflows and becomes negative.
To fix this, you can change the int variable b to long (or long long on platforms such as Windows, where long is also 32 bits).
long b = 1;
In the two's complement internal representation of type int, if its size is 4 bytes, the sign bit has weight 2^31.
Thus when you compute (2 * b) + 1 while b is equal to INT_MAX, the sign bit becomes set and the number turns into -1.
That is, while the sign bit is not set you get positive numbers 1, 3, 7, 15 and so on. As soon as the sign bit is set you get the negative number -1, whose internal representation has all bits, including the sign bit, set.
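A minimal demonstration of that step in isolation (signed overflow is undefined behaviour, so -1 is merely what typical two's complement platforms produce):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    int b = INT_MAX;    /* 0x7fffffff */
    b = (2 * b) + 1;    /* overflows: on typical platforms wraps to 0xffffffff, i.e. -1 */
    printf("%d\n", b);  /* typically prints -1 */
    return 0;
}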
The while loop terminates when a becomes 0, since the condition
while (a) ...
is equivalent to
while (a != 0) ...
It will happen after the loop is executed 31 times with the following expression in it:
b = (2 * b) + 1;
while the initial value of b is 1. It generates the series 1, 3, 7, 15, ..., 2^(k+1)-1, where k is the iteration number (starting from 0 for the initial value). So for k = 31 the value would be 2^32-1. 2^32 overflows the 4-byte integer storage type, which results in undefined behavior. In practice, compilers usually handle the overflow by just throwing away the overflowed leftmost bits, so the truncated 2^32 becomes 0, and 0 - 1 = -1 is your result. But again, no standard guarantees that you will get this result, so you should never rely on it.
To fix it, you can use a bigger storage type, like long, and use %ld for printf.
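For example, a sketch of that fix using long long, which is at least 64 bits even on platforms where long is only 32:

#include <stdio.h>

int main(void)
{
    int a = 31;
    long long b = 1;            /* wide enough to hold 2^32 - 1 */

    while (a)
    {
        b = (2 * b) + 1;
        a--;
    }

    printf("count:%lld\n", b);  /* prints 4294967295 */
    return 0;
}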
@EugeneSh.'s answer is fine, but the printed series doesn't start at 1, it starts at 3. Also, to print all values of b, the printf needs to be inside the while loop; and if the intention were to start at 1, the printf would need to be the first line inside the loop.
The output of the following C program is: 0.000000
Is there a logic behind the output or is the answer compiler dependent or I am just getting a garbage value?
#include <stdio.h>

int main()
{
    int x = 10;
    printf("%f", x);
    return 0;
}
PS: I know that trying to print an integer value using %f is stupid. I am just asking this from a theoretical point of view.
From the latest C11 draft — §7.16 Variable arguments <stdarg.h>:
§7.16.1.1/2
...if type is not compatible with the type of the actual next argument
(as promoted according to the default argument promotions), the behavior
is undefined, except for the following cases:
— one type is a signed integer type, the other type is the corresponding
unsigned integer type, and the value is representable in both types;
— one type is pointer to void and the other is a pointer to a character type.
The most important thing to remember is that, as chris points out, the behavior is undefined. If this were in a real program, the only sensible thing to do would be to fix the code.
On the other hand, looking at the behavior of code whose behavior is not defined by the language standard can be instructive (as long as you're careful not to generalize the behavior too much).
printf's "%f" format expects an argument of type double, and prints it in decimal form with no exponent. Very small values will be printed as 0.000000.
When you do this:
int x=10;
printf("%f", x);
we can explain the visible behavior given a few assumptions about the platform you're on:
int is 4 bytes
double is 8 bytes
int and double arguments are passed to printf using the same mechanism, probably on the stack
So the call will (plausibly) push the int value 10 onto the stack as a 4-byte quantity, and printf will grab 8 bytes of data off the stack and treat it as the representation of a double. 4 bytes will be the representation of 10 (in hex, 0x0000000a); the other 4 bytes will be garbage, quite likely zero. The garbage could be either the high-order or low-order 4 bytes of the 8-byte quantity. (Or anything else; remember that the behavior is undefined.)
Here's a demo program I just threw together. Rather than abusing printf, it copies the representation of an int object into a double object using memcpy().
#include <stdio.h>
#include <string.h>

void print_hex(char *name, void *addr, size_t size) {
    unsigned char *buf = addr;
    printf("%s = ", name);
    for (size_t i = 0; i < size; i++) {
        printf("%02x", buf[i]);
    }
    putchar('\n');
}

int main(void) {
    int i = 10;
    double x = 0.0;
    print_hex("i (set to 10)", &i, sizeof i);
    print_hex("x (set to 0.0)", &x, sizeof x);
    memcpy(&x, &i, sizeof (int));
    print_hex("x (copied from i)", &x, sizeof x);
    printf("x (%%f format) = %f\n", x);
    printf("x (%%g format) = %g\n", x);
    return 0;
}
The output on my x86 system is:
i (set to 10) = 0a000000
x (set to 0.0) = 0000000000000000
x (copied from i) = 0a00000000000000
x (%f format) = 0.000000
x (%g format) = 4.94066e-323
As you can see, the value of the double is very small (you can consult a reference on the IEEE floating-point format for the details), close enough to zero that "%f" prints it as 0.000000.
Let me emphasize once again that the behavior is undefined, which means specifically that the language standard "imposes no requirements" on the program's behavior. Variations in byte order, in floating-point representation, and in argument-passing conventions can dramatically change the results. Even compiler optimization can affect it; compilers are permitted to assume that a program's behavior is well defined, and to perform transformations based on that assumption.
So please feel free to ignore everything I've written here (other than the first and last paragraphs).
Because an integer 10 in binary looks like this:
00000000 00000000 00000000 00001010
All printf does is take the in-memory representation and try to present it as an IEEE 754 floating-point number.
There are three parts to a floating-point number (from MSB to LSB):
The sign: 1 bit
The exponent: 8 bits (11 bits in the double that %f actually expects)
The mantissa: 23 bits (52 bits in a double)
Since an integer 10 is just 1010 in the low mantissa bits, it's a very tiny number that is much less than the default precision of printf's floating-point format.
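A sketch of that reinterpretation done safely with memcpy rather than a mismatched printf (assuming 64-bit IEEE 754 doubles):

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    uint64_t bits = 10;   /* the int 10, zero-extended to 8 bytes */
    double d;
    memcpy(&d, &bits, sizeof d);
    printf("%g\n", d);    /* prints about 4.94066e-323: a subnormal, far below %f's precision */
    return 0;
}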
The result is not defined.
I am just asking this from a theoretical point of view.
To complete chris's excellent answer:
What happens in your printf is undefined, but it could be quite similar to the code below (it depends on the actual implementation of the varargs, IIRC).
Disclaimer: The following is more "as-if-it-worked-that-way" explanation of what could happen in an undefined behaviour case on one platform than a true/valid description that always happens on all platforms.
Define "undefined" ?
Imagine the following code:
int main()
{
int i = 10 ;
void * pi = &i ;
double * pf = (double *) pi ; /* oranges are apples ! */
double f = *pf ;
/* what is the value inside f ? */
return 0;
}
Here, as your pointer to double (i.e. pf) points to an address hosting an integer value (i.e. i), what you'll get is undefined, and most probably garbage.
I want to see what's inside that memory !
If you really want to see what's possibly behind that garbage (when debugging on some platforms), try the following code where we will use an union to simulate a piece of memory where we will write either double or int data:
typedef union
{
    char c[8] ;  /* char is expected to be 1-byte wide */
    double f ;   /* double is expected to be 8-bytes wide */
    int i ;      /* int is expected to be 4-byte wide */
} MyUnion ;
The f and i field are used to set the value, and the c field is used to look at (or modify) the memory, byte by byte.
void printMyUnion(MyUnion * p)
{
    printf("[%i %i %i %i %i %i %i %i]\n"
           , p->c[0], p->c[1], p->c[2], p->c[3], p->c[4], p->c[5], p->c[6], p->c[7]) ;
}
The function above will print the memory layout, byte by byte.
The function below will print the memory layouts of different types of values:
int main()
{
    MyUnion myUnion ;  /* the union object whose bytes we inspect */

    /* this will zero all the fields in the union */
    memset(myUnion.c, 0, 8 * sizeof(char)) ;
    printMyUnion(&myUnion) ; /* this should print only zeroes */
                             /* eg. [0 0 0 0 0 0 0 0] */

    memset(myUnion.c, 0, 8 * sizeof(char)) ;
    myUnion.i = 10 ;
    printMyUnion(&myUnion) ; /* the representation of the int 10 in the union */
                             /* eg. [10 0 0 0 0 0 0 0] */

    memset(myUnion.c, 0, 8 * sizeof(char)) ;
    myUnion.f = 10 ;
    printMyUnion(&myUnion) ; /* the representation of the double 10 in the union */
                             /* eg. [0 0 0 0 0 0 36 64] */

    memset(myUnion.c, 0, 8 * sizeof(char)) ;
    myUnion.f = 3.1415 ;
    printMyUnion(&myUnion) ; /* the representation of the double 3.1415 in the union */
                             /* eg. [111 18 -125 -64 -54 33 9 64] */

    return 0 ;
}
Note: This code was tested on Visual C++ 2010.
It doesn't mean it will work that way (or at all) on your platform, but usually, you should get results similar to what happens above.
In the end, the garbage is just whatever data happens to be in the memory you're looking at, seen as some type.
As most types have different memory representations of their data, looking at the data as any type other than the original one is bound to give garbage (or not-so-garbage) results.
Your printf could well behave like that, and thus try to interpret a raw piece of memory as a double when it was initially set as an int.
P.S.: Note that as the int and the double have different sizes in bytes, the garbage gets even more complicated, but it is mostly what I described above.
But I want to print an int as a double!
Seriously?
Helios proposed a solution.
int main()
{
    int x = 10;
    printf("%f", (double)(x));
    return 0;
}
Let's look at the pseudo code to see what's being fed to the printf:
/* printf("...", [[10 0 0 0]]) ; */
printf("%i", x) ;

/* printf("...", [[10 0 0 0 ?? ?? ?? ??]]) ; */
printf("%f", x) ;

/* printf("...", [[0 0 0 0 0 0 36 64]]) ; */
printf("%f", (double)(x)) ;
The cast offers a different memory layout, effectively changing the integer "10" data into a double "10.0" data.
Thus, when using "%i", it will expect something like [[?? ?? ?? ??]], and for the first printf, receive [[10 0 0 0]] and interpret it correctly as an integer.
When using "%f", it will expect something like [[?? ?? ?? ?? ?? ?? ?? ??]], but receive on the second printf something like [[10 0 0 0]], missing 4 bytes. So the last 4 bytes will be random data (probably the bytes "after" the [[10 0 0 0]], that is, something like [[10 0 0 0 ?? ?? ?? ??]]).
In the last printf, the cast changed the type, and thus the memory representation, into [[0 0 0 0 0 0 36 64]], and the printf will interpret it correctly as a double.
Essentially it's garbage. Small integers, reinterpreted this way, look like denormalized (subnormal) floating-point numbers, which almost never come up in normal computation.
You could cast the int variable like this:
int i = 3;
printf("%f",(float)(i));