strtol doesn't set errno on overflow conversion - C

My call to strtol fails to set errno on an overflowing conversion.
#include <stdio.h>
#include <string.h>
#include <malloc.h>
#include <getopt.h>
#include <errno.h>
#include <stdlib.h>
int main(int argc, char **argv) {
    errno = 0;
    int e = strtol("1000000000000000", NULL, 10);
    printf("%d %d\n", errno, e);
    return 0;
}
returns
0 -1530494976
What am I doing wrong?
Compiler: gcc (Ubuntu 4.9.2-10ubuntu13) 4.9.2
Options: gcc -Wall -std=gnu99 -O2

There is nothing wrong with the implementation of strtol(), but there is with your code.
The return type of this function is long (see the trailing l), and apparently the value 1000000000000000 can be represented by the long integer type. However, the return value is assigned to e, whose type is int, which cannot represent this value. What happens then is implementation-defined.
So change int e to long e and "%d %d\n" to "%d %ld\n". If you want to keep it as an int, then you have to check yourself whether the value is outside the range of values representable by int:
#include <stdio.h>
#include <errno.h>
#include <stdlib.h>
#include <limits.h> // for INT_{MIN,MAX}
int
main(void)
{
    errno = 0;
    long f = strtol("1000000000000000", NULL, 10);
    if (errno == ERANGE) {
        puts("value not representable by long (or int)");
    } else if (f < INT_MIN || f > INT_MAX) {
        puts("value not representable by int");
    } else {
        int e = f;
        printf("%d\n", e);
    }
}
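For completeness, the first suggested fix, keeping the value as a long, would look like the sketch below (assuming long is wide enough for 1000000000000000, as it is on typical 64-bit platforms):
#include <stdio.h>
#include <errno.h>
#include <stdlib.h>
int
main(void)
{
    errno = 0;
    long e = strtol("1000000000000000", NULL, 10); /* keep the full long */
    printf("%d %ld\n", errno, e); /* %ld matches the long argument */
}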

It seems that both the Microsoft [1] and Apple [2] implementations have the setting of errno commented out.
[1] http://research.microsoft.com/en-us/um/redmond/projects/invisible/src/crt/strtol.c.htm
[2] http://www.opensource.apple.com/source/xnu/xnu-1456.1.26/bsd/libkern/strtol.c

Related

Setting errno from user defined function

As far as I am aware, most if not all standard C functions set the global errno on failure to indicate what happened, so its value can be used in logging, debugging, or testing. Is it advisable for a user-defined function to adopt this same behavior, or should we mimic it with a global variable local_errno that accepts the same values with the same meanings?
As an example, I'm writing a calculator, and I want an Addition function to report overflow:
#include <errno.h>
#include <limits.h> /* for INT_MAX and INT_MIN */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define INF INT_MIN /* sentinel value returned by Add on overflow */

int
Add( int, int );

int
main( void )
{
    int a = 0, b = 0, res0 = 0;
    scanf( "%d %d", &a, &b );
    res0 = Add( a, b );
    if( res0 == INF )
    {
        fprintf( stderr, "Error %s in Add\n", strerror( errno ) );
        return( errno );
    }
    else
    {
        printf( "%d\n", res0 );
    }
    return( EXIT_SUCCESS );
}

int
Add( int l, int r )
{
    if( l == INT_MAX || r == INT_MAX )
    {
        errno = EOVERFLOW;
        return( INF );
    }
    return( l + r );
}
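Note that the check in Add above only catches operands that are exactly INT_MAX; it would not detect an actual overflow such as Add(INT_MAX - 1, 2). A common portable test checks the operands before adding. A sketch, reusing the question's INF sentinel and errno convention:
#include <errno.h>
#include <limits.h>

int
Add( int l, int r )
{
    /* Overflow iff l + r would fall outside [INT_MIN, INT_MAX]. */
    if( ( r > 0 && l > INT_MAX - r ) || ( r < 0 && l < INT_MIN - r ) )
    {
        errno = EOVERFLOW;
        return( INF );
    }
    return( l + r );
}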

I am using this code to print from a text file but the program gives me "-1.#IND00"

I have a problem. I am using this code to print from a text file, but the program gives me a different number, such as 11732408.000000, each time. However, I don't get this problem when ex is an integer.
#include <stdio.h>
#include <string.h>
int main() {
    char example[] = "123.12/456 ";
    double ex = atof(strtok(example, "/"));
    printf("%lf", ex);
    return 0;
}
I was able to solve my problem. Thank you for your help.
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
int main ()
{
    char example[20] = "123.12/456 ";
    double ex = atof(strtok(example, "/"));
    printf("%lf", ex);
    return 0;
}
You forgot to include <stdlib.h> which contains the declaration of atof().
Your compiler is lenient and accepts your code in spite of the missing declaration, and it incorrectly infers the prototype to be int atof(char *), which causes undefined behavior when the return value is stored into ex.
Hence the bogus output.
Note also that the l in the format %lf is necessary for scanf() but ignored by printf(), as float arguments are implicitly converted to double when passed to variadic functions.
Here is a corrected version:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main() {
    char example[] = "123.12/456 ";
    char *p = strtok(example, "/");
    if (p != NULL) {
        double ex = atof(p);
        printf("%f\n", ex);
    }
    return 0;
}
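As a side note, atof() gives you no way to detect a failed or out-of-range conversion; strtod() does, via its end pointer and errno. A minimal sketch of the same program using strtod() instead (an alternative, not part of the corrected version above):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
int main() {
    char example[] = "123.12/456 ";
    char *p = strtok(example, "/");
    if (p != NULL) {
        char *end;
        errno = 0;
        double ex = strtod(p, &end);
        if (end == p)
            puts("no digits converted");
        else if (errno == ERANGE)
            puts("value out of range for double");
        else
            printf("%f\n", ex);
    }
    return 0;
}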

C time.h wrap around

I would like to assign the maximum possible time to a time_t variable, then convert it to a string and print the result.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <limits.h>
int main()
{
    time_t maxTime;
    maxTime = LONG_LONG_MAX;
    char *strOfMaxTime;
    *strOfMaxTime = ctime(maxTime);
    printf("%s", strOfMaxTime);
    return 0;
}
OP's code is using char *ctime(const time_t *timer) incorrectly.
time_t maxTime;
char *strOfMaxTime;
// *strOfMaxTime = ctime(maxTime);
strOfMaxTime = ctime(&maxTime);
Yet simply assigning maxTime = LONG_LONG_MAX; is not necessarily the correct way to determine the maximum time a system can handle.
Below is a trial-and-error method, likely with various implementation limitations. localtime() returns NULL when time_t is out of range.
#include <stdio.h>
#include <time.h>
time_t max_time() {
    time_t t0, t1;
    time_t delta = 1;
    time(&t0); // now
    while (t0 != -1) {
        t1 = t0 + delta;
        if (localtime(&t1) == NULL) { // If conversion fails, quit doubling.
            break;
        }
        delta *= 2; // 2x for the next increment.
        t0 = t1;
    }
    while (delta) {
        t1 = t0 + delta;
        if (localtime(&t1) != NULL) { // If it succeeds, update t0.
            t0 = t1;
        }
        delta /= 2; // Try smaller and smaller deltas.
    }
    printf("%s %lld\n", ctime(&t0), (long long) t0);
    return t0;
}

int main(void) {
    max_time();
    return 0;
}
Output (note that 17:59:59 depends on the timezone, and the year 2,147,483,647 is the maximum 32-bit signed integer; YMMV):
Tue Dec 31 17:59:59 2147483647
67767976233532799
From the C standard, 7.27.1p4:
The range and precision of times representable in clock_t and time_t are implementation-defined.
First, you need to fix the issue in your program. The statement below must be producing an error when compiling:
*strOfMaxTime = ctime(maxTime);
Change this to:
strOfMaxTime = ctime(&maxTime);
You can use perror() to get the error message for the given input, LONG_LONG_MAX, like this:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <limits.h>
#include <errno.h>
int main()
{
    time_t maxTime;
    maxTime = LONG_LONG_MAX;
    char *strOfMaxTime;
    strOfMaxTime = ctime(&maxTime);
    if (errno != 0)
        perror("Error");
    else
        printf("%d,%s", errno, strOfMaxTime);
    return 0;
}
On my setup I am getting this output:
Error: Value too large to be stored in data type
Indeed, LONG_LONG_MAX is invalid input.
As the standard says the range of time_t is implementation-defined; if I give UINT_MAX instead, I get this output:
0,Sun Feb 7 11:58:15 2106
This is wrong:
*strOfMaxTime = ctime(maxTime);
This tries to assign the return value of ctime (a pointer to char) to *strOfMaxTime, which is a single char.
Instead call:
strOfMaxTime = ctime(&maxTime);
And then check the value of strOfMaxTime, as it may be NULL if ctime fails to convert maxTime:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <limits.h>
int main()
{
    time_t maxTime;
    maxTime = INT_MAX;
    char *strOfMaxTime = ctime(&maxTime);
    if (strOfMaxTime != NULL) /* NULL if maxTime is out of range */
        printf("%s", strOfMaxTime);
    return 0;
}
With a 32-bit time_t the maximum year is 2038, and this is known as the Year 2038 problem:
https://en.wikipedia.org/wiki/Year_2038_problem
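Whether a given system is affected depends on the width of time_t, which is implementation-defined. A quick sketch to report it on your platform:
#include <stdio.h>
#include <limits.h>
#include <time.h>
int main()
{
    /* A 32-bit time_t overflows in 2038; a 64-bit one does not. */
    printf("time_t is %zu bits here\n", sizeof(time_t) * CHAR_BIT);
    return 0;
}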
Numerous errors have been pointed out in other postings (assigning the output of ctime() to *strOfMaxTime, LONG_LONG_MAX, etc.). On my 64-bit Ubuntu 16.04 Linux system, time_t is defined as a long int, and a long int is 8 bytes, as is a long long int. However, assigning LLONG_MAX to maxTime still causes ctime() to fail. So I modified your code to narrow down the upper limit of values ctime() will accept.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <errno.h>
#include <limits.h>
int main()
{
    time_t maxTime;
    maxTime = LONG_MAX;
    char *strOfMaxTime;
    strOfMaxTime = ctime(&maxTime);
    while (strOfMaxTime == NULL)
    {
        perror("ctime error");
        printf("%ld div by 2\n", maxTime);
        maxTime /= 2;
        strOfMaxTime = ctime(&maxTime);
    }
    printf("%s\n", strOfMaxTime);
    return 0;
}
Running it yields the following output:
ctime error: Invalid argument
9223372036854775807 div by 2
ctime error: Invalid argument
4611686018427387903 div by 2
ctime error: Invalid argument
2305843009213693951 div by 2
ctime error: Invalid argument
1152921504606846975 div by 2
ctime error: Invalid argument
576460752303423487 div by 2
ctime error: Invalid argument
288230376151711743 div by 2
ctime error: Invalid argument
144115188075855871 div by 2
ctime error: Invalid argument
72057594037927935 div by 2
Sat Jun 12 22:26:07 1141709097

How to use DBL_MANT_DIG to check strtod

Consider the following code:
#include <stdio.h>
#include <limits.h>
#include <stdlib.h>
#include <errno.h>
#include <float.h>
int main (void) {
    double val;
    /* base b = 2; 2^DBL_MANT_DIG */
    /* decimal digits log10(2^DBL_MANT_DIG) */
    /*const char *str = "9007199254740992";*/
    const char *str = "9007199254740993";

    errno = 0;
    val = strtod(str, NULL);
    printf("%d\n", DBL_MANT_DIG);
    if (errno == ERANGE) {
        printf("error\n");
    } else {
        printf("%f\n", val);
    }
    return 0;
}
This returns:
53
9007199254740992.000000
Since str holds a number with more significant digits than my machine can handle, how does one use DBL_MANT_DIG, or the log10(2^DBL_MANT_DIG) version of it, to check that the resulting val is correct?
You don't use those to check that the conversion is exact.
Here's one way to do it.
Another way is to find out how many decimal digits there are after the decimal point in the original string, do sprintf() using that count as the precision, and compare its output with the original string.
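A sketch of that second approach, assuming a plain decimal input with no exponent part that fits the buffer (the helper name is mine, not from the answer):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Returns 1 if val, printed back at the input's precision, matches str. */
static int converts_exactly(const char *str) {
    double val = strtod(str, NULL);
    const char *dot = strchr(str, '.');
    int prec = dot ? (int) strlen(dot + 1) : 0; /* fractional digits in str */
    char buf[64];
    snprintf(buf, sizeof buf, "%.*f", prec, val);
    return strcmp(buf, str) == 0;
}

int main(void) {
    printf("%d\n", converts_exactly("9007199254740992")); /* 1: exact */
    printf("%d\n", converts_exactly("9007199254740993")); /* 0: rounded */
    return 0;
}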

GCC division truncates (rounding problem)

Using GCC on the Ubuntu Linux 10.04, I have unwanted rounding after a division.
I tried:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
void FormatReading(int temp)
{
    double reading = temp / 100;
    printf("%f\n", reading); /* displays 226.000000, was expecting 226.60 */
}

int main(void)
{
    FormatReading(22660);
    return 0;
}
It was suggested to me to try:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
void FormatReading(int temp)
{
    long reading = temp;
    reading = reading / 100;
    printf("%3.2ld\n", reading); /* displays 226 */
}

int main(void)
{
    FormatReading(22660);
    return 0;
}
I also tried:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
void FormatReading(int temp)
{
    long reading = temp;
    double reading2 = reading / 100;
    printf("%3.2f\n", reading2); /* displays 226.00 */
}

int main(void)
{
    FormatReading(22660);
    return 0;
}
I also tried the round function from math.h (linking with -lm) in various ways, but did not find what I was looking for.
Any help greatly appreciated.
Best regards,
Bert
double reading = temp / 100.0;
                           ^^
temp / 100 is an integer division; the fact that you assign the result to a double doesn't change this.
You are using integer division, which always yields integral results rather than fractions, and the result is then assigned to a double. Divide by 100.0 instead of 100 to get the behavior you want.
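Putting that fix into the original function, a minimal sketch:
#include <stdio.h>

void FormatReading(int temp)
{
    double reading = temp / 100.0; /* 100.0 forces floating-point division */
    printf("%.2f\n", reading);     /* displays 226.60 */
}

int main(void)
{
    FormatReading(22660);
    return 0;
}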
