PROBLEM SOLVED: It still doesn't work on Code::Blocks, so it has something to do with Code::Blocks.
I have a problem with the following C code. It is supposed to show the max/min value of a "short int". I'm pretty sure it is right, but it doesn't seem to work on my machine.
As output I just get a zero instead of the desired +32767 and -32768.
Could someone verify for me that it is indeed not a problem in the code but a problem with my software?
PS: I tried running the code on someone else's machine and it worked fine there.
#include <stdio.h>

int main()
{
    short int si = 0;
    short int si_pred = 0;
    while (si >= 0) {
        si_pred = si;
        si++;
    }
    printf("%d lowest possible value for a short int.\n", si);
    printf("%d highest possible value for a short int.\n", si_pred);
    return 0;
}
Your while loop is terminating early because you set si to 0, and your loop will only run when si > 0. Try this, since you know that the largest int should be at least 1:
short int si = 1;
Now your loop won't short-circuit, since 1 > 0.
Or, even better, just set the loop condition to check for si >= 0.
while (si >= 0) {
    si_pred = si;
    si++;
}
The problem isn't Code::Blocks. The problem is that the behavior on signed integer overflow is undefined. You're not going to get consistent results from compiler to compiler, because there's no requirement on the compiler to handle signed integer overflow in any particular way. There's no guarantee that adding 1 to SHRT_MAX will result in SHRT_MIN.
The right way to do this is to include <limits.h> in your source code and examine the values of SHRT_MIN and SHRT_MAX.
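For example, here is a minimal sketch of the original program rewritten to use those constants (no overflow involved):

#include <limits.h>
#include <stdio.h>

int main()
{
    /* SHRT_MIN and SHRT_MAX come from <limits.h>; they promote to int,
       so %d is the right conversion specifier. */
    printf("%d lowest possible value for a short int.\n", SHRT_MIN);
    printf("%d highest possible value for a short int.\n", SHRT_MAX);
    return 0;
}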
Related
While reading http://blog.llvm.org/2011/05/what-every-c-programmer-should-know.html about undefined behavior in C, I came up with a question about this example.
for (i = 0; i <= N; ++i) { ... }
In this loop, the compiler can assume that the loop will iterate
exactly N+1 times if "i" is undefined on overflow, which allows a
broad range of loop optimizations to kick in. On the other hand, if
the variable is defined to wrap around on overflow, then the compiler
must assume that the loop is possibly infinite (which happens if N is
INT_MAX) - which then disables these important loop optimizations.
This particularly affects 64-bit platforms since so much code uses
"int" as induction variables.
This example is meant to show that the C compiler can take advantage of the undefined behavior to assume that the loop will execute exactly N+1 times. But I don't understand why this assumption is valid.
I can understand that if the variable is defined to wrap around on overflow and N is INT_MAX, then the for loop will be infinite, because i will go from 0 to INT_MAX, overflow to INT_MIN, climb back up to INT_MAX, restart from INT_MIN, and so on. So the compiler could not make this assumption about the iteration count and could not optimize on this point.
But what about when i is undefined on overflow? In this case, i counts normally from 0 to INT_MAX; then i will be assigned INT_MAX+1, which would overflow to some undefined value, perhaps between 0 and INT_MAX. If so, the condition i <= INT_MAX would still hold, so shouldn't the for loop continue and also be infinite?
… then i will be assigned INT_MAX+1, which would overflow to some undefined value, perhaps between 0 and INT_MAX.
No, that is not correct. That is written as if the rule were:
If ++i overflows, then i will be given some int value, although it is not specified which one.
However, the rule is:
If ++i overflows, the entire behavior of the program is undefined by the C standard.
That is, if ++i overflows, the C standard allows any of these things to happen:
i stays at INT_MAX.
i changes to INT_MIN.
i changes to zero.
i changes to 37.
The processor generates a trap, and the operating system terminates your process.
Some other variable changes value.
Program control jumps out of the loop, as if it had ended normally.
Anything.
Now consider this assumption used in optimization by the compiler:
… the compiler can assume that the loop will iterate exactly N+1 times…
If ++i can only set i to some int value, then the loop will not terminate, as you conclude. On the other hand, if the compiler generates code that assumes the loop will iterate exactly N+1 times, then something else will happen in the case when ++i overflows. Exactly what happens depends on the contents of the loop and what the compiler does with them. But it does not matter what: Generating this code is allowed by the C standard because whatever happens when ++i overflows is allowed by the C standard.
Let's consider an actual case:
#include <limits.h>
#include <stdio.h>

unsigned long long test_int(unsigned long long L, int N) {
    for (int i = 0; i <= N; ++i) {
        L++;
    }
    return L;
}

unsigned long long test_unsigned(unsigned long long L, unsigned N) {
    for (unsigned i = 0; i <= N; ++i) {
        L++;
    }
    return L;
}

int main() {
    fprintf(stderr, "int: %llu\n", test_int(0, INT_MAX));
    fprintf(stderr, "unsigned: %llu\n", test_unsigned(0, UINT_MAX));
    return 0;
}
The point of the blog article is the possible behavior of the compiler for the above code:
for test_int(), the compiler can determine that for argument values from INT_MIN to -1 the function should return L unchanged, for values between 0 and INT_MAX-1 the return value should be L + N + 1, and for INT_MAX the behavior is undefined, so returning L + N + 1 is OK too; hence the code can be simplified as
unsigned long long test_int(unsigned long long L, int N) {
    if (N >= 0)
        L += N + 1;
    return L;
}
for test_unsigned(), the same analysis yields: for argument values below UINT_MAX, the return value is L + N + 1 and for UINT_MAX there is an infinite loop:
unsigned long long test_unsigned(unsigned long long L, unsigned N) {
    if (N != UINT_MAX)
        return L + N + 1;
    for (;;);
}
As can be seen at https://godbolt.org/z/abafdE8P4, both gcc and clang perform this optimisation for test_int, taking advantage of undefined behavior on overflow, but generate iterative code for test_unsigned.
Signed integer overflow invokes undefined behaviour. A programmer cannot assume that a portable program will behave in any particular way.
On the other hand, a program compiled for a particular platform, using a particular version of the compiler and the same versions of the libraries, will behave in a deterministic way. But you do not know that the behaviour will remain the same if any of those change (i.e. the compiler, the compiler version, etc.).
So your assumptions can be valid for a particular build and execution environment, but are invalid in general.
I am currently working on a CS50 course, and I am trying to make a function that gives me the number of digits in a number that I enter. For example, the number 10323 has 5 digits. I wrote code for this, but it seems it doesn't work for cases above 10 digits. Can you tell me what is wrong with this code?
P.S.: CS50 uses a modified C language for beginners. The language may look a little different, but I think it's the math that is the problem here, so there should not be much difficulty in reading my code.
int digit(int x) // function gives digit of a number
{
    if (x == 0)
    {
        return 0;
    }
    else
    {
        int dig = 0;
        int n = 1;
        int y;
        do
        {
            y = x / n;
            dig++;
            n = expo(10, dig);
        }
        while (y < 0 || y >= 10);
        return dig;
    }
}
You didn't supply a definition for the function expo(), so it's not possible to say why the digit() function isn't working.
However, you're working with int variables. The specification of the size of the int type is implementation-dependent. Different compilers can have different sized ints. And even a given compiler can have different sizes depending on compilation options.
If the particular compiler your CS50 class is using has 16-bit ints (not likely these days but theoretically possible), those values will go from 0 (0x0000) up to 32767 (0x7FFF), and then wrap around to -32768 (0x8000) and up to -1 (0xFFFF). So in that case, your digit function would only handle part of the range up to 5 decimal digits.
If your compiler uses 32-bit ints, then your ints would go from 0 (0x00000000) up to 2147483647 (0x7FFFFFFF), then wrap around to -2147483648 (0x80000000) and up to -1 (0xFFFFFFFF), thus limited to part of the 10-digit range.
I'm going to go out on a limb and guess that you have 32-bit ints.
You can get an extra bit by using the type unsigned int everywhere that you are saying int. But basically you're going to be limited by the compiler and the implementation.
If you want to get the number of decimal digits in much larger values, you would be well advised to use a string input rather than a numeric input. Then you would just look at the length of the string. For extra credit, you might also strip off leading 0's, maybe drop a leading plus sign, maybe drop commas in the string. And it would be nice to recognize invalid strings with unexpected non-numeric characters. But basically all of this depends on learning those string functions.
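Here is a minimal sketch of that string-based approach; the helper name count_digits is my own choice, and it only handles a leading sign and leading zeros, not commas:

#include <ctype.h>
#include <stdio.h>

/* Count the decimal digits in a numeric string.
   Skips an optional leading sign and leading zeros;
   returns -1 if an unexpected character appears. */
int count_digits(const char *s)
{
    if (*s == '+' || *s == '-')       /* drop a leading sign */
        s++;
    while (*s == '0' && s[1] != '\0') /* strip leading zeros */
        s++;
    int n = 0;
    for (; *s != '\0'; s++) {
        if (!isdigit((unsigned char)*s))
            return -1;                /* invalid character */
        n++;
    }
    return n;
}

int main(void)
{
    printf("%d\n", count_digits("10323"));           /* prints 5 */
    printf("%d\n", count_digits("-007"));            /* prints 1 */
    printf("%d\n", count_digits("123456789012345")); /* prints 15 */
    return 0;
}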
"while(input>0)
{
input=input/10;
variable++;
}
printf("%i\n",variable);"
link an input to this.
Please consider this C Code
void main() {
    int i, s = 17;
    for (i = 8; i < 2000000; i++) {
        if (ifprime(i))
            s += i;
    }
    printf("%d", s);
}
It doesn't produce a result with this number of iterations, but it does with fewer iterations, e.g. i < 200000.
Why is that?
(Note that I am not asking for a solution. Many thanks.)
This is going to overflow with 32 bit integers. That's undefined behavior and while that typically won't cause your program to run endlessly, that's a possibility, because there's no guarantee whatsoever on what happens when your program exhibits undefined behavior. Try long long instead, which is at least 64 bit (for the signed version, that's at least 63 bits and one sign bit).
long long s = 17;
And print it this way:
printf("%lld", s);
This unoptimized version of a prime number search will take quite a while for all numbers up to 2000000, so perhaps you just think it's running endlessly when it isn't. I recommend debugging by putting a print like if (i % 1000 == 0) printf("%d %lld\n", i, s); into the loop, then you can see how far along it is and if it is still working. For me, it's working (with that long long fix, of course).
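Putting it together, here is a runnable sketch with the long long fix applied; the original ifprime was not shown in the question, so the simple trial-division version below is an assumption:

#include <stdbool.h>
#include <stdio.h>

/* Simple trial-division primality test (assumed; the original
   ifprime was not shown in the question). */
static bool ifprime(int n)
{
    if (n < 2)
        return false;
    for (int d = 2; (long long)d * d <= n; d++)
        if (n % d == 0)
            return false;
    return true;
}

int main(void)
{
    long long s = 17; /* wide enough for the sum of all primes below 2000000 */
    for (int i = 8; i < 2000000; i++) {
        if (ifprime(i))
            s += i;
        if (i % 100000 == 0)
            printf("%d %lld\n", i, s); /* progress output while it runs */
    }
    printf("%lld\n", s);
    return 0;
}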
Because you know that s will always be positive, you can use
unsigned long long s = 17;
and then print it with
printf("%llu",s);
You can also use that type for i.
This question already has answers here:
Sum of positive values in an array gives negative result in a c program
(4 answers)
Closed 4 years ago.
I have written the following C code to find the sum of the first 49 numbers of a given array, but the sum is coming out negative.
#include <stdio.h>

int main()
{
    int i;
    long int sum = 0;
    long int a[50] = {846930887, 1681692778, 1714636916, 1957747794, 424238336,
                      719885387, 1649760493, 596516650, 1189641422, 1025202363,
                      1350490028, 783368691, 1102520060, 2044897764, 1967513927,
                      1365180541, 1540383427, 304089173, 1303455737, 35005212,
                      521595369, 294702568, 1726956430, 336465783, 861021531,
                      278722863, 233665124, 2145174068, 468703136, 1101513930,
                      1801979803, 1315634023, 635723059, 1369133070, 1125898168,
                      1059961394, 2089018457, 628175012, 1656478043, 1131176230,
                      1653377374, 859484422, 1914544920, 608413785, 756898538,
                      1734575199, 1973594325, 149798316, 2038664371, 1129566414};
    for (i = 0; i < 49; i++)
    {
        sum = sum + a[i];
        printf("sum is : %ld\n", sum);
    }
    printf("\nthe total sum is %ld", sum);
}
I don't know why this is happening. Please help.
Using long long instead of long, the program works:
Output: 56074206897
Reason
Range of long: at least -2^31+1 to +2^31-1
Range of long long: at least -2^63+1 to +2^63-1
As you can see, 2^31-1 = 2,147,483,647 < 56,074,206,897, but 2^63-1 = 9,223,372,036,854,775,807 > 56,074,206,897.
This leads to overflow. According to the C standard, the result of signed integer overflow is undefined behavior. What that means is that if this condition ever happens at runtime, the compiler is allowed to make your code do anything. Your program could crash, or produce the wrong answer, or have unpredictable effects on other parts of your code, or it might silently do what you intended.
In your case it is overflowing the maximum value of long int on your system. Because long int is signed, when the most significant bit gets set, it becomes a negative number.
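If you want to check the actual limits on your implementation rather than relying on the guaranteed minimums, <limits.h> has them:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* Print the implementation-defined ranges of long and long long. */
    printf("long:      %ld .. %ld\n", LONG_MIN, LONG_MAX);
    printf("long long: %lld .. %lld\n", LLONG_MIN, LLONG_MAX);
    return 0;
}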
I didn't actually add them up, but just looking at them, I'd say it's a pretty safe guess that you are running into an integer overflow error.
A 32-bit long int has a maximum value of about 2 billion (2^31 - 1). If you add past that, it'll wrap back around to -2^31.
You'll need to use a data type that can hold more than that if you want to sum up those numbers. A long long int should work. If you're sure the sum will always be positive, even better to use an unsigned long long int.
As long int here has a maximum value of 2,147,483,647 and the value of sum exceeds that range, it comes out as a negative value. You can use the following code...
#include <stdio.h>

int main()
{
    int i;
    long long int sum = 0; // Taking long long int instead of long int
    int a[50] = {846930887, 1681692778, 1714636916, 1957747794, 424238336,
                 719885387, 1649760493, 596516650, 1189641422, 1025202363,
                 1350490028, 783368691, 1102520060, 2044897764, 1967513927,
                 1365180541, 1540383427, 304089173, 1303455737, 35005212,
                 521595369, 294702568, 1726956430, 336465783, 861021531,
                 278722863, 233665124, 2145174068, 468703136, 1101513930,
                 1801979803, 1315634023, 635723059, 1369133070, 1125898168,
                 1059961394, 2089018457, 628175012, 1656478043, 1131176230,
                 1653377374, 859484422, 1914544920, 608413785, 756898538,
                 1734575199, 1973594325, 149798316, 2038664371, 1129566414};
    for (i = 0; i < 49; i++)
    {
        sum = sum + a[i];
        printf("sum is : %lld\n", sum);
    }
    printf("\nTotal sum is %lld", sum);
}
As Vlad from Moscow said, this is an overflow issue, which leads to undefined behavior. On your system, the long int sum does not have the capacity to hold the total value. You can use long long int sum = 0; (available since C99). If that still doesn't work, search for a "BigInteger" implementation.
This is perhaps the strangest thing I have ever come across: a number which is simultaneously positive and negative! (And I can prove it to you, because I have the link to my code with outputs/inputs here at ideone.) Basically, my output is a negative number, but even stranger: when I check whether it is less than zero, the check is false (?!). Even more strange: when you multiply it by a number other than one, it switches back to being printed as a positive number.
This error does not happen when I compile in Xcode, but it does when the code is compiled online (or with some other compilers), such as the one in my link.
It's not important to understand exactly what the program does; I'm wondering why this value is both negative and positive at the same time.
#include <stdio.h>
#include <cmath>

int main()
{
    unsigned long answer, T, n, N, a, d, b, L1, amount;
    scanf("%ld", &T); // number of test cases to loop through
    while (T--) {
        scanf("%ld", &N);
        amount = 0;
        answer = 0;
        n = N - N % 2;
        for (a = 1; a <= n / 2; a++) {
            d = N - a;
            L1 = a * d;
            for (b = 1; b * b < L1; b++) {
                // "amount" is always a positive number
                amount = 2 * (((L1 - 1) / b) - b + 1) - 1;
                if (d == a) answer += amount;
                else answer += 2 * amount; // we are only adding positive values to this
            }
        }
        if (answer < 0) printf("This answer is less than zero %ld\n", answer);
        if (answer > 0) printf("This answer is greater than zero %ld\n", answer);
        printf("%ld\n", answer);
        printf("%ld\n", 2 * answer / 2);
    }
}
Input:
1
2500
Output:
Success time: 0.02 memory: 3300 signal:0
This answer is greater than zero -1842629629
-1842629629
304854019
As you can see, my negative answer is greater than zero. Not to mention, it's different from the answer I got when compiling in Xcode, even in positive form.
I am amazed by this. It's printing out a negative number that is "positive".
Proof is here at ideone.com
answer is unsigned long, but %ld in the format string is for signed long. Change %ld to %lu; then you won't be converting an unsigned into a signed value at print time, and it will print correctly.
if (answer<0) printf("This answer is less than zero %lu\n",answer);
if (answer>0) printf("This answer is greater than zero %lu\n",answer);
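A minimal illustration of the mismatch (the value here is just ULONG_MAX so the sign bit is set; strictly speaking the mismatched %ld is itself undefined, but typical implementations simply reinterpret the bits):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    unsigned long x = ULONG_MAX; /* all bits set */
    printf("%ld\n", x);          /* wrong specifier: typically prints -1 */
    printf("%lu\n", x);          /* correct: prints the real unsigned value */
    return 0;
}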
The problem lies in the signedness of your data types. You're using an unsigned long, which technically can't be negative. Negative numbers can still be assigned to it, and mathematical operations that would go negative (such as 0 - 1) can still be performed, but the processor will always treat the resulting values as positive.
I know that some compilers will treat unsigned numbers differently depending on compile- and link-time flags, and that is probably what's causing the difference between the two compilers. Or possibly the website you're using may be interpreting the code, for safety reasons.
Either way, change the declarations to just 'long' and problem solved.
I cannot add a comment, but I ran it on Ubuntu, and it shows the same as yours.
FYI, when I input 1 and 25, it's OK:
myqiqiang@ubuntu:~$ ./test
1
2500
This answer is greater than zero -1842629629
-1842629629
304854019
myqiqiang@ubuntu:~$ ./test
1 25
This answer is greater than zero 12722
12722
12722