I would like to assign the maximum possible time to a time_t variable, then convert it to a string and print the result.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <limits.h>

int main()
{
    time_t maxTime;
    maxTime = LONG_LONG_MAX;
    char *strOfMaxTime;
    *strOfMaxTime = ctime(maxTime);
    printf("%s", strOfMaxTime);
    return 0;
}
OP's code is using char *ctime(const time_t *timer) incorrectly.
time_t maxTime;
char *strOfMaxTime;
// *strOfMaxTime = ctime(maxTime);
strOfMaxTime = ctime(&maxTime);
Yet simply assigning maxTime = LONG_LONG_MAX; is not necessarily the correct way to determine the maximum time a system can handle.
Below is a trial-and-error method, likely with various implementation limitations. localtime() returns NULL when the time_t value is out of range.
#include <stdio.h>
#include <time.h>
time_t max_time(void) {
    time_t t0, t1;
    time_t delta = 1;

    time(&t0);                         // now
    while (t0 != -1) {
        t1 = t0 + delta;
        if (localtime(&t1) == NULL) {  // If conversion fails, quit doubling.
            break;
        }
        delta *= 2;                    // 2x for the next increment.
        t0 = t1;
    }
    while (delta) {
        t1 = t0 + delta;
        if (localtime(&t1) != NULL) {  // If it succeeds, update t0.
            t0 = t1;
        }
        delta /= 2;                    // Try smaller and smaller deltas.
    }
    printf("%s %lld\n", ctime(&t0), (long long) t0);
    return t0;
}

int main(void) {
    max_time();
    return 0;
}
Output (note that 17:59:59 depends on the time zone, and the year 2,147,483,647 is the maximum 32-bit signed integer; YMMV):
Tue Dec 31 17:59:59 2147483647
67767976233532799
From the C standard, §7.27.1p4:
The range and precision of times representable in clock_t and time_t are implementation-defined.
First, you need to fix the issue in your program. The statement below should produce an error (or at least a warning) when compiling:
*strOfMaxTime = ctime(maxTime);
Change this to:
strOfMaxTime = ctime(&maxTime);
You can use perror() to get the error message for the given input (LONG_LONG_MAX), like this:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <limits.h>
#include <errno.h>
int main()
{
    time_t maxTime;
    maxTime = LONG_LONG_MAX;
    char *strOfMaxTime;

    errno = 0; /* clear errno so any nonzero value came from ctime() */
    strOfMaxTime = ctime(&maxTime);
    if (errno != 0)
        perror("Error");
    else
        printf("%d,%s", errno, strOfMaxTime);
    return 0;
}
On my setup I am getting this output:
Error: Value too large to be stored in data type
Indeed, LONG_LONG_MAX is invalid input.
As the standard says, the range of time_t is implementation-defined, so if I instead assign UINT_MAX I get this output:
0,Sun Feb 7 11:58:15 2106
This is wrong:
*strOfMaxTime = ctime(maxTime);
This tries to assign the return value of ctime (a pointer to char) to *strOfMaxTime, which is a single char.
Instead call:
strOfMaxTime = ctime(&maxTime);
And then check the return value, as it may be NULL if ctime fails to convert maxTime:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <limits.h>

int main()
{
    time_t maxTime;
    maxTime = INT_MAX;
    char *strOfMaxTime = ctime(&maxTime);

    if (strOfMaxTime == NULL)
        return 1; /* conversion failed */
    printf("%s", strOfMaxTime);
    return 0;
}
With a 32-bit time_t, the maximum representable date falls in 2038; this is known as the Year 2038 problem:
https://en.wikipedia.org/wiki/Year_2038_problem
Numerous errors have been pointed out in other postings (assigning the output of ctime() to *strOfMaxTime, using LONG_LONG_MAX, etc.). On my 64-bit Ubuntu 16.04 Linux system, time_t is defined as a long int, and a long int is 8 bytes, as is a long long int. However, assigning LLONG_MAX to maxTime still causes ctime() to fail. So I modified your code to narrow down the upper limit of the values ctime() will accept.
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <errno.h>
#include <limits.h>
int main()
{
    time_t maxTime;
    maxTime = LONG_MAX;
    char *strOfMaxTime;

    strOfMaxTime = ctime(&maxTime);
    while (strOfMaxTime == NULL)
    {
        perror("ctime error");
        printf("%ld div by 2\n", maxTime);
        maxTime /= 2;
        strOfMaxTime = ctime(&maxTime);
    }
    printf("%s\n", strOfMaxTime);
    return 0;
}
Running it yields the following output:
ctime error: Invalid argument
9223372036854775807 div by 2
ctime error: Invalid argument
4611686018427387903 div by 2
ctime error: Invalid argument
2305843009213693951 div by 2
ctime error: Invalid argument
1152921504606846975 div by 2
ctime error: Invalid argument
576460752303423487 div by 2
ctime error: Invalid argument
288230376151711743 div by 2
ctime error: Invalid argument
144115188075855871 div by 2
ctime error: Invalid argument
72057594037927935 div by 2
Sat Jun 12 22:26:07 1141709097
The purpose of this code is to pass a virtual address in decimal and output the page number and offset.
After I compile my code using the gcc compiler on Linux I get this error:
indirection requires pointer operand ('int' invalid)
virtualAddress = *atoi(argv[1]);
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>
#include <math.h>
#include <curses.h>

int main(int argc, char *argv[])
{
    unsigned long int virtualAddress, pageNumber, offset;

    if (argc < 2) {
        printf(" NO ARGUMENT IS PASSED");
        return 0;
    }
    virtualAddress = *atoi(argv[1]);
    // PRINT THE VIRTUAL ADDRESS
    printf("The Address %lu contains:", virtualAddress);
    // CALCULATE THE PAGE NUMBER
    pageNumber = virtualAddress / 4096;
    // PRINT THE PAGE NUMBER
    printf("\n Page Number = %lu", pageNumber);
    // FIND THE OFFSET
    offset = virtualAddress % 4096;
    // PRINT THE OFFSET
    printf("\n Offset = %lu", offset);
    getch();
    return 0;
}
This error occurs when you apply the indirection operator * to something that is not a pointer. atoi returns an int (not an int *), so there is nothing to dereference, and the compiler rejects *atoi(argv[1]).
Since you need the value as an unsigned long int, use strtoul instead:
char *p;
virtualAddress = strtoul(argv[1], &p, 10);
My strtol function fails to set errno during overflown conversion.
#include <stdio.h>
#include <string.h>
#include <malloc.h>
#include <getopt.h>
#include <errno.h>
#include <stdlib.h>
int main(int argc, char **argv) {
    errno = 0;
    int e = strtol("1000000000000000", NULL, 10);
    printf("%d %d\n", errno, e);
    return 0;
}
returns
0 -1530494976
What do I do wrong?
Compiler
gcc (Ubuntu 4.9.2-10ubuntu13) 4.9.2
Options
gcc -Wall -std=gnu99 -O2
There is nothing wrong with the implementation of strtol() but there is with your code.
The return type of this function is long (see the trailing l) and apparently the value 1000000000000000 can be represented by the long integer type. However the return value is assigned to e whose type is int which is unable to represent this value. What then happens is implementation-defined.
So change int e to long e and "%d %d\n" to "%d %ld\n". If you want to keep it as int, then you have to check if the value is outside of its range of representable values by yourself:
#include <stdio.h>
#include <errno.h>
#include <stdlib.h>
#include <limits.h> // for INT_{MIN,MAX}
int
main(void)
{
    errno = 0;
    long f = strtol("1000000000000000", NULL, 10);

    if (errno == ERANGE) {
        puts("value not representable by long (or int)");
    } else if (f < INT_MIN || f > INT_MAX) {
        puts("value not representable by int");
    } else {
        int e = f;
        printf("%d\n", e);
    }
}
It seems like both Microsoft [1] and Apple [2] implementations have the setting of errno commented out.
[1] http://research.microsoft.com/en-us/um/redmond/projects/invisible/src/crt/strtol.c.htm
[2] http://www.opensource.apple.com/source/xnu/xnu-1456.1.26/bsd/libkern/strtol.c
I have a string that contains microseconds since the epoch. How could I convert it to a time structure?
#include <time.h>
#include <stdio.h>
#include <stdlib.h>
int main()
{
    struct tm tm;
    char buffer[80];
    char *str = "1435687921000000";

    if (strptime(str, "%s", &tm) == NULL)
        exit(EXIT_FAILURE);
    if (strftime(buffer, 80, "%Y-%m-%d", &tm) == 0)
        exit(EXIT_FAILURE);
    printf("%s\n", buffer);
    return 0;
}
Portable solution (assuming 32+ bit int). The following does not assume anything about time_t.
Use mktime() which does not need to have fields limited to their primary range.
#include <time.h>
#include <stdio.h>
#include <stdlib.h>
int main(void) {
    char buffer[80];
    char *str = "1435687921000000";

    // Set the epoch: assume Jan 1, 0:00:00 UTC.
    struct tm tm = { 0 };
    tm.tm_year = 1970 - 1900;
    tm.tm_mday = 1;
    // Adjust the seconds field.
    tm.tm_sec = atoll(str) / 1000000;
    tm.tm_isdst = -1;
    if (mktime(&tm) == -1)
        exit(EXIT_FAILURE);
    if (strftime(buffer, 80, "%Y-%m-%d", &tm) == 0)
        exit(EXIT_FAILURE);
    printf("%s\n", buffer);
    return 0;
}
Edit: You could simply truncate the string, since struct tm does not store sub-second accuracy.
#include <time.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main()
{
    struct tm now;
    time_t secs;
    char buffer[80];
    char str[] = "1435687921000000";
    int len = strlen(str);

    if (len < 7)
        return 1;
    str[len - 6] = 0; // drop the last 6 digits: divide by 1000000
    secs = (time_t)atol(str);
    now = *localtime(&secs);
    strftime(buffer, 80, "%Y-%m-%d", &now);
    printf("%s\n", buffer);
    printf("%s\n", asctime(&now));
    return 0;
}
Program output:
2015-06-30
Tue Jun 30 19:12:01 2015
You can convert the microseconds to seconds and use localtime(), like this:
#include <time.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>
int main(void)
{
    struct tm *tm;
    char buffer[80];
    char *str = "1435687921000000";
    time_t ms = strtol(str, NULL, 10);

    /* convert to seconds */
    ms = (time_t)(ms / 1E6);
    tm = localtime(&ms);
    if (strftime(buffer, 80, "%Y-%m-%d", tm) == 0)
        return EXIT_FAILURE;
    printf("%s\n", buffer);
    return EXIT_SUCCESS;
}
Note that in the printed date, the microseconds are not present, so you can ignore that part.
Convert the string to a time_t, then use gmtime(3) or localtime(3).
#include <time.h>
#include <stdio.h>
#include <stdlib.h>
int main(void) {
    struct tm *tm;
    char buffer[80];
    char *str = "1435687921000000";
    time_t t;

    /* or strtoull */
    t = (time_t)(atoll(str) / 1000000);
    tm = gmtime(&t);
    strftime(buffer, 80, "%Y-%m-%d", tm);
    printf("%s\n", buffer);
    return 0;
}
Consider the following code:
#include <stdio.h>
#include <limits.h>
#include <stdlib.h>
#include <errno.h>
#include <float.h>
int main(void) {
    double val;
    /* base b = 2; 2^DBL_MANT_DIG */
    /* decimal digits log10(2^DBL_MANT_DIG) */
    /* const char *str = "9007199254740992"; */
    const char *str = "9007199254740993";

    errno = 0;
    val = strtod(str, NULL);
    printf("%d\n", DBL_MANT_DIG);
    if (errno == ERANGE) {
        printf("error\n");
    } else {
        printf("%f\n", val);
    }
    return 0;
}
This returns:
53
9007199254740992.000000
Since str holds a number with more significant digits than my machine can represent exactly, how does one use DBL_MANT_DIG, or its log10(2^DBL_MANT_DIG) decimal equivalent, to check that the result in val is correct?
You don't use those to check that the conversion is exact.
Here's one way of how to do it.
Another way is to find out how many decimal digits after the decimal point are there in the resultant double, do sprintf() using that as the precision and compare its output with the original string.
Using GCC on Ubuntu Linux 10.04, I get unwanted rounding after a division.
I tried:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
void FormatReading(int temp)
{
    double reading = temp / 100;
    printf("%f\n", reading); /* displays 226.000000, was expecting 226.60 */
}

int main(void)
{
    FormatReading(22660);
    return 0;
}
It was suggested to me to try:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
void FormatReading(int temp)
{
    long reading = temp;
    reading = reading / 100;
    printf("%3.2ld\n", reading); /* displays 226 */
}

int main(void)
{
    FormatReading(22660);
    return 0;
}
I also tried:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
void FormatReading(int temp)
{
    long reading = temp;
    double reading2 = reading / 100;
    printf("%3.2f\n", reading2); /* displays 226.00 */
}

int main(void)
{
    FormatReading(22660);
    return 0;
}
I also tried the round function (using #include <math.h> and the -lm linker flag) in various ways, but did not find what I was looking for.
Any help greatly appreciated.
Best regards,
Bert
double reading = temp / 100.0;
                           ^^
temp / 100 is an integer division; assigning the result to a double doesn't change that.
You are using integer division which always gives integral results rather than fractions, and then the result is being assigned to a double. Divide by 100.0 instead of 100 to get the behavior you want.