uint64_t to an array - C language

I am trying to convert an array of uint64_t to an array of char (the values in decimal, separated by commas).
I used memcpy, but every time I get random values. The itoa() function only handles values up to uint32_t. I tried splitting each uint64_t into two uint32_t halves, but I never get the right result.
uint64_t numbers[10] = { 201234567890123456,
12345678901234567890,
98765432109876543,
65432109887,
12345234512345,
217631276371261627,
12354123512453124,
2163521442531,
2341232142132321,
1233432112 };
char array[1000] = "";
Expected result:
array = "201234567890123456,12345678901234567890,98765432109876543,65432109887,12345234512345,217631276371261627,12354123512453124,2163521442531,2341232142132321,1233432112"
I tried int64ToChar from this topic, but the result is wrong:
void uint64ToChar(char a[], int64_t n) {
    memcpy(a, &n, 10);
}

uint64_t number = 12345678900987654;
char output[30] = "";
uint64ToChar(output, number);
Result:
�g]T�+
Thanks for any help.

Use snprintf() to convert the 64-bit numbers:
#include <inttypes.h>
#include <stdio.h>
#include <stdint.h>
int main() {
    uint64_t numbers[10] = { 201234567890123456,
                             12345678901234567890ull,
                             98765432109876543,
                             65432109887,
                             12345234512345,
                             217631276371261627,
                             12354123512453124,
                             2163521442531,
                             2341232142132321,
                             1233432112 };
    char array[1000];
    size_t i, n = sizeof(numbers) / sizeof(numbers[0]), pos = 0;

    for (i = 0; i < n && pos < sizeof(array); i++) {
        pos += snprintf(array + pos, sizeof(array) - pos, "%"PRIu64"%s", numbers[i],
                        i < n - 1 ? "," : "");
    }
    printf("%s\n", array);
    return 0;
}

If all the data is available at compile-time, there's no obvious reason why you should use slow run-time conversion functions like s*printf. Simply do it all at pre-processor stage:
#define INIT_LIST \
    201234567890123456, \
    12345678901234567890, \
    98765432109876543, \
    65432109887, \
    12345234512345, \
    217631276371261627, \
    12354123512453124, \
    2163521442531, \
    2341232142132321, \
    1233432112

#define STR_(...) #__VA_ARGS__
#define STR(x) STR_(x)

#include <stdint.h>
#include <stdio.h>

int main (void)
{
    uint64_t numbers[10] = { INIT_LIST };
    char array[] = STR(INIT_LIST);
    puts(array);
}
More advanced alternatives with "X macros" are possible, if you want to micro-manage comma and space placement between numbers etc.
Please note that 12345678901234567890 is too large to be a valid signed integer constant on my 64 bit system, the max is 2^63 - 1 = 9.22*10^18 but this number is 12.34*10^18. I have to change it to 12345678901234567890ull to get this program to compile, since the maximum number is then 18.44*10^18.

You can accomplish this by using sprintf_s in a for loop.
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#define BUFSIZE 22 /* string buffer size to support a single 64 bit unsigned integer */
#define ARRSIZE 10 /* size of the unsigned integer array */
#define STRSIZE (BUFSIZE * ARRSIZE) /* max size of the string of the unsigned integer array */
int main()
{
    int i;
    char num_str[STRSIZE] = "";
    uint64_t num_arr[ARRSIZE] = {
        201234567890123456,
        12345678901234567890ull,
        98765432109876543,
        65432109887,
        12345234512345,
        2176312763712637162,
        12354123512453124,
        2163521442531,
        2341232142132321,
        1233432112
    };

    for (i = 0; i < ARRSIZE; i++)
    {
        /* convert an unsigned integer to a string and concatenate it to the previous string each loop */
        sprintf_s(num_str + strlen(num_str), BUFSIZE, "%llu,", num_arr[i]);
    }
    num_str[strlen(num_str) - 1] = 0; /* truncate the trailing comma */
    printf("%s", num_str);
    return 0;
}
This results in:
num_str = "201234567890123456,12345678901234567890,98765432109876543,65432109887,12345234512345,2176312763712637162,12354123512453124,2163521442531,2341232142132321,1233432112"

The memcpy function copies byte by byte from one location to another. What you're attempting to do is take 10 bytes of a 64 bit number (which only contains 8 bytes) and reinterpret each of them as an ASCII character. Not only will this not give the results you expect, but you also invoke undefined behavior by reading past the memory bounds of an object.
Guessing at what functions do is a bad way to learn C. You need to read the documentation for these functions (run "man function_name" on Linux for the function in question or search learn.microsoft.com for MSVC) to understand what they do.
If you use the sprintf function, you can convert a 64 bit integer to a string. The %llu conversion specifier accepts an unsigned long long int, which is at least as big as uint64_t.
sprintf(a, "%llu", (unsigned long long)n);

Related

How to allocate enough memory to convert an unsigned long long int into string

With a code where I have a struct:
struct fibo_entry {              /* Definition of each table entry */
    int n;
    unsigned long long int lli;  /* 64-bit integer */
    char *str;
};
I have to solve a Fibonacci sequence where I have the following:
fibo_table = (struct fibo_entry *)malloc(sizeof(struct fibo_entry));
//fibo_table->str = (char *)malloc(1 + 8 * sizeof(char)); // !!??
for (i = 0; i <= n; i++) {
    fibo_table[i].n = i;
    if (i == 0) {
        fibo_table[i].lli = 0;
        //sprintf(fibo_table[i].str, "%llu", fibo_table[i].lli);
        //fibo_table[i].str = atoi(fibo_table[i].lli);
    } else if (i == 1) {
        fibo_table[i].lli = 1;
    } else {
        fibo_table[i].lli = fibo_table[i-1].lli + fibo_table[i-2].lli;
        //log10(fibo_table[i].lli);
    }
}
The Fibonacci calculation itself works; my problem comes when I have to work out how much memory to allocate for the string form of each long long int.
I know the numbers use 64 bits each, and I tried using malloc on the assumption that sprintf would convert one into the other, but I can't find a solution: every time I run the program, it just fails.
If you are writing a (positive) number, n, in decimal notation (as the %llu format specifier will), then the number of digits will be (the integral part of) log10(n) + 1.
So, in order to (pre-)determine the maximum buffer size required, you can use the log10 function on the compiler-defined constant, ULLONG_MAX (this is the maximum value that an unsigned long long int can have). Note that, when allocating the character buffer, you should add 1 to the number of digits, to allow for the nul-terminator in your string.
The following short program may be helpful:
#include <stdio.h>
#include <stdint.h>
#include <math.h>
#include <limits.h>
int main()
{
    size_t maxdigits = (size_t)(log10((double)(ULLONG_MAX)) + 1);
    printf("Max. digits in unsigned long long = %zu!\n", maxdigits);
    printf("Max. value for the type is: %llu\n", ULLONG_MAX);
    return 0;
}
On most modern systems, unsigned long long int will be 64 bits, with a maximum value of 18446744073709551615 (20 digits); however, it is better to use the platform/compiler-specific ULLONG_MAX, rather than relying on any particular value being correct.
Further, rather than calculating this maxdigits value multiple times, you need only calculate a 'global' constant once, then re-use that as and when required.
size_t CountDigit(long long int num)
{
    size_t count = 0;

    if (num < 0)
    {
        count++;          /* room for the '-' sign */
    }
    if (num == 0)
    {
        count++;          /* "0" still takes one digit */
    }
    while (num)
    {
        count++;
        num /= 10;
    }
    count++;              /* that's for the '\0' */
    return (count);
}
then you can use count for malloc and after that you can use sprintf or do it yourself, to insert the right chars in it.
How to allocate enough memory to convert an unsigned long long int into string
With an n-bit unsigned integer, a buffer of log10(2^n - 1) + 1 + 1 is needed. +1 for "ceiling" and +1 for the null character.
To find this value at compile time, one could use:
#define LOG10_2_N 302
#define LOG10_2_D 1000
#define ULL_STR_SIZE1 (sizeof (unsigned long long)*CHAR_BIT * LOG10_2_N / LOG10_2_D + 2)
To find this at pre-processor time is a little trickier as we need to find the bit width via macros. This approach also takes space advantage if rare padding bits are used.
// Numbers of bits in a Mersenne Number
// https://mathworld.wolfram.com/MersenneNumber.html
// https://stackoverflow.com/a/4589384/2410359
#define IMAX_BITS(m) ((m)/((m)%255+1) / 255%255*8 + 7-86/((m)%255+12))
#define ULLONG_BIT_WIDTH IMAX_BITS(ULLONG_MAX)
// 28/93 is a tight fraction for log10(2)
#define LOG10_2_N 28
#define LOG10_2_D 93
#define ULL_STR_SIZE2 (ULLONG_BIT_WIDTH * LOG10_2_N / LOG10_2_D + 1 + 1)
ULL_STR_SIZE2 is suitable for #if ULL_STR_SIZE2 ... processing.
fibo_table[i].str = malloc(ULL_STR_SIZE2);
if (fibo_table[i].str) {
    sprintf(fibo_table[i].str, "%llu", fibo_table[i].lli);
}
Or, at run-time, to "right size" each allocation, call snprintf() first and find the size needed for each integer via the return value of *printf().
int n = snprintf(NULL, 0, "%llu", fibo_table[i].lli);
fibo_table[i].str = malloc(n + 1);
if (fibo_table[i].str) {
    sprintf(fibo_table[i].str, "%llu", fibo_table[i].lli);
}
Or slightly more efficient, form a worst case size temporary buffer, write to it and then duplicate the string.
char tmp[ULL_STR_SIZE2];
snprintf(tmp, sizeof tmp, "%llu", fibo_table[i].lli);
fibo_table[i].str = strdup(tmp);
Alternative: change pointer char *str to an array.
struct fibo_entry {
    int n;
    unsigned long long int lli;
    //char *str;
    char str[ULL_STR_SIZE2];
};
and use
snprintf(fibo_table[i].str, sizeof fibo_table[i].str, "%llu", fibo_table[i].lli);
Best to not assume long long is 64-bit. It is at least 64-bit.
You can calculate how many digits you need using something like number_digits = (int)log10((double)num) + 1; and then reserve enough space with fibo_table[i].str = malloc(number_digits + 1); (the extra byte is for the terminating '\0'). Remember you need to do this on every iteration of the for loop. After those two steps you can use sprintf as you were: sprintf(fibo_table[i].str, "%llu", fibo_table[i].lli);. The code would look something like this:
int number_digits;
for (i = 0; i <= n; i++) {
    if (i == 0) {
        ....
    } else if (i == 1) {
        ....
    } else {
        ....
    }
    /* note: lli == 0 would need special-casing, since log10(0) is undefined */
    number_digits = (int)log10((double)fibo_table[i].lli) + 1;
    fibo_table[i].str = malloc(number_digits + 1);  /* +1 for the '\0' */
    sprintf(fibo_table[i].str, "%llu", fibo_table[i].lli);
}

Problem with sprintf and uint64_t (avr-libc 2.0.0)

I am playing with the function sprintf in avr-libc 2.0.0 and uint64_t, and it seems it doesn't work properly.
The code
uint64_t x = 12ull;
char buffer[30];
int len = sprintf(buffer, "%llu", x);
int buffer_len = strlen(buffer);
returns len == 2 (OK) and buffer_len == 0 (wrong!).
The same code works perfectly for uint16_t and uint32_t (and also works for the signed version).
What's the problem? Is it a bug in sprintf of avr-libc? (I test the same code in gcc, not in avr-gcc, and it works ok).
Thanks.
The avr-libc does not implement the ll printf length modifier.
Worse, the ll length modifier aborts the output, since this implementation cannot handle long long arguments.
Here is a small wrapper which I have written in under 10 minutes:
#include <stdio.h>
#include <stdint.h>
char *uint64_to_str(uint64_t n, char dest[static 21]) {
    dest += 20;
    *dest-- = 0;
    do {                            /* do-while so n == 0 still yields "0" */
        *dest-- = (n % 10) + '0';
        n /= 10;
    } while (n);
    return dest + 1;
}
#define LOG10_FROM_2_TO_64_PLUS_1 21
#define UINT64_TO_STR(n) uint64_to_str(n, (char[21]){0})
int main(void) {
    printf("Hello World\n");
    printf("%s", UINT64_TO_STR(123456789ull));
    return 0;
}
will output:
Hello World
123456789

How do I fix my `itoa` implementation so it doesn't print reversed output?

I want to convert an integer into a string of numeric characters in C.
I've tried using itoa, but it's non-standard and not provided by my C library.
I tried to implement my own itoa, but it's not working properly:
#include <stdlib.h>
#include <stdio.h>
char *itoa(int val, char *buf, int base)
{
    size_t ctr = 0;

    for( ; val; val /= base )
    {
        buf[ctr++] = '0' + (val % base);
    }
    buf[ctr] = 0;
    return buf;
}

int main(void)
{
    unsigned char c = 201;
    char *buf = malloc(sizeof(c)*8+1);

    itoa(c, buf, 2);
    puts(buf);
    free(buf);
}
It gives reversed output.
For example, if c is 'Z' and base is 2, the output is this: 0101101
The output I want it to be is this: 1011010
How do I fix this issue?
Similar questions
I've already seen this question: Is there a printf converter to print in binary format?
I do not want a printf format specifier to print an integer as binary, I want to convert the binary to a string.
I've already seen this question: Print an int in binary representation using C
Although the answer does convert an integer into a string of binary digits, that's the only thing it can do.
Restrictions
I want itoa to be able to work with other bases, such as 10, 8, etc. and print correctly (i.e. 12345 translates to "12345" and not to "11000000111001").
I do not want to use printf or sprintf to do this.
I do not care about the length of the string as long as the result is correct.
I do not want to convert the integer into ASCII characters other than numeric ones, with the exception of bases greater than 10, in which case the characters may be alphanumeric.
The answer must fit this prototype exactly:
char *itoa(int val, char *buf, int base);
There may be a function called nitoa that has this prototype and returns the number of characters required to hold the result of itoa:
size_t nitoa(int val, int base);
How do I fix my itoa implementation so it doesn't print reversed output?
Rather than reverse the string, form it right-to-left, as suggested in user3386109's comment.
I recommend that the helper function also receive a size.
#include <limits.h>
#include <stdio.h>
#include <string.h>
char* itostr(char *dest, size_t size, int a, int base) {
    // Max text needs occur with itostr(dest, size, INT_MIN, 2)
    char buffer[sizeof a * CHAR_BIT + 1 + 1];
    static const char digits[36] = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";

    if (base < 2 || base > 36) {
        fprintf(stderr, "Invalid base");
        return NULL;
    }

    // Start filling from the end
    char* p = &buffer[sizeof buffer - 1];
    *p = '\0';

    // Work with negative `int`
    int an = a < 0 ? a : -a;

    do {
        *(--p) = digits[-(an % base)];
        an /= base;
    } while (an);

    if (a < 0) {
        *(--p) = '-';
    }

    size_t size_used = &buffer[sizeof(buffer)] - p;
    if (size_used > size) {
        fprintf(stderr, "Scant buffer %zu > %zu", size_used, size);
        return NULL;
    }
    return memcpy(dest, p, size_used);
}
Then to provide memory, use a compound literal.
// compound literal C99 or later
#define INT_STR_SIZE (sizeof(int)*CHAR_BIT + 2)
#define MY_ITOA(x, base) itostr((char [INT_STR_SIZE]){""}, INT_STR_SIZE, (x), (base))
Now you can call it multiple times.
int main(void) {
    printf("%s %s %s %s\n", MY_ITOA(INT_MIN,10), MY_ITOA(-1,10), MY_ITOA(0,10), MY_ITOA(INT_MAX,10));
    printf("%s %s\n", MY_ITOA(INT_MIN,2), MY_ITOA(INT_MIN,36));
    return (0);
}
Output
-2147483648 -1 0 2147483647
-10000000000000000000000000000000 -ZIK0ZK
Note: sizeof(c)*8+1 is one too small for INT_MIN, base 2.
This solution works for me:
#include <errno.h>
#include <stdlib.h>
#include <string.h>
#define itoa lltoa
#define utoa ulltoa
#define ltoa lltoa
#define ultoa ulltoa
#define nitoa nlltoa
#define nutoa nulltoa
#define nltoa nlltoa
#define nultoa nulltoa
#define BASE_BIN 2
#define BASE_OCT 8
#define BASE_DEC 10
#define BASE_HEX 16
#define BASE_02Z 36
__extension__
char *ulltoa(unsigned long long val, char *buf, int base)
{
    int remainder;
    char c, *tmp = buf;

    if(base < BASE_BIN)
    {
        errno = EINVAL;
        return NULL;
    }
    do {
        remainder = val % base;
        if(remainder >= BASE_DEC) c = 'a' - BASE_DEC;
        else c = '0';
        *tmp++ = remainder + c;
        val /= base;
    } while(val);
    *tmp = 0;
    return strrev(buf); /* note: strrev is non-standard (common on Windows) */
}
__extension__
size_t nulltoa(unsigned long long val, int base)
{
    size_t size = 0;

    if(base < BASE_BIN)
    {
        errno = EINVAL;
        return 0;
    }
    if(!val) size++;
    for( ; val; val /= base, size++ );
    return size;
}

__extension__
char *lltoa(long long val, char *buf, int base)
{
    if(val < 0 && base > BASE_BIN)
    {
        val = -val;
        *buf++ = '-';
    }
    return ulltoa(val, buf, base);
}

__extension__
size_t nlltoa(long long val, int base)
{
    size_t size = 0;

    if(val < 0 && base > BASE_BIN)
    {
        val = -val;
        size++;
    }
    return size + nulltoa(val, base);
}

Remove thousand separator C-programming

I'm trying to make a simple function which can convert a number with a thousand separator into an integer without the separator. All my numbers are within the range 0 to 999.999, so my initial thought was to handle it as a double and then multiply it by 1000 and call it a day, but is there a more general way of doing this?
#include <stdio.h>
int main(void) {
    double a;
    a = 369.122;
    int b;
    b = a * 1000;
    printf("b is %d", b);
}
Here is my current solution:
#include <stdio.h>
int main(void) {
    char *XD = "113.321";
    int s1, s2;
    sscanf(XD, "%i.%i", &s1, &s2);
    printf("%i", s1 * 1000 + s2);
}
Using a double for this is not appropriate due to floating-point imprecision: you might find that when the value is multiplied by 1000 and truncated to an int, you end up with a number that is 1 less than you really want.
Also note that the largest value for an int can be as small as 32767. On such a platform, you would overflow b.
If I were you, I'd use a long throughout and introduce the 1000s separator when you want to display the value. For a positive number x, the first 1000 is attained using x / 1000, the final 1000 using x % 1000.
You can simply parse the input yourself and ignore the separators.
Parsing integers is easy:
#include <stdio.h>
int main()
{
    int c;
    unsigned n = 0, acc = 0;

    while(EOF != (c = getchar())){
        if(c >= '0' && c <= '9')
            acc = 10*n + c-'0';
        else if(c == '.') //ignore your separator
            continue;
        else
            break; //not a digit and not a separator -- end parsing
        if(acc < n)
            fprintf(stderr, "overflow\n");
        n = acc;
    }
    printf("got %u\n", n);
}
If you want high performance, avoid getchar and parse a buffered string (or at least use getchar_unlocked).
Alternatively, you can lex the string, copy legal characters to a buffer, and then run strtoul or similar on that buffer.
You should only need about 22 characters for the buffer at most (assuming base 10); otherwise 64-bit integers would start overflowing if you parsed them from a buffer holding more digits.
A rugged, generic solution is to use a string, then simply skip everything that is not a digit. That way you don't have to worry about locale etc. (Several countries use , for decimals and . for the thousands separator, while English-speaking countries do the opposite. And spaces are also sometimes used as thousands separators.)
#include <stdint.h>
#include <inttypes.h>
#include <ctype.h>
#include <stdio.h>
uint32_t digits_only (const char* str_number)
{
    uint32_t result = 0;

    for(size_t i=0; str_number[i] != '\0'; i++)
    {
        if(isdigit((unsigned char)str_number[i]))
        {
            result *= 10;
            result += (uint32_t)str_number[i] - '0';
        }
    }
    return result;
}

int main (void)
{
    printf("%" PRIu32 "\n", digits_only("123,456"));
    printf("%" PRIu32 "\n", digits_only("123.456"));
    printf("%" PRIu32 "\n", digits_only("blabla 123 blabla 456 blabla"));
}

I can store only a finite number of lines in a new text file

I have many different pseudo-random number generators written in C that generate an arbitrary number of pairs of random numbers (the count is given on the command line) and store them in a (new) text file, one pair of numbers per line. I want to store 400.000.000 pairs in a text file, but when I count the number of lines the file has, it has only 82.595.525 lines. This is the code:
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include "../Calculos/myfunctions.c"
void outputDevRandomOpenFile (FILE * from_file, FILE * to_file, unsigned long long how_many_pairs){
    unsigned long long i = 0LL;
    int seed;
    unsigned long long max_period = 2147483648LL;

    for (i = 0LL; i < how_many_pairs; i += 1LL){
        fread (&seed, sizeof(int), 1, from_file);
        fprintf (to_file, "%.10lf ", fabs (((double) seed) / ((double) max_period)));
        fread (&seed, sizeof(int), 1, from_file);
        fprintf (to_file, "%.10lf\n", fabs (((double) seed) / ((double) max_period)));
    }
}

int main (int argc, char *argv[]){
    char * endptr;
    unsigned long long how_many_pairs = (unsigned long long) strtoull (argv[1], &endptr, 10);
    FILE * urandom = fopen ("/dev/urandom", "r");
    FILE * to_file = fopen ("generated_numbers_devrandom.txt", "w");

    outputDevRandomOpenFile (urandom, to_file, how_many_pairs);
    fclose (urandom);
    return 0;
}
At first I suspected an issue in the code (i.e. that I had chosen the wrong variable types somewhere), so I tested it by adding if (i > 165191050) printf ("%llu\n", i); inside the for loop (remember that each line stores a pair of numbers, so in the condition I just multiplied 82595525*2) to check whether the program was writing only 165191050 numbers instead of 800.000.000. When I ran the test, after i = 165191050 it just kept printing values of i to the shell, so the loop really did run to the end; but when I counted the lines of the generated text file, there were 82595525 lines again. So I'm betting the problem is not in the code (or at least not in the variable types I used).
I'm also getting the same results with this algorithm (this is just another different pseudo-random number generator):
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#define MT_LEN 624
int mt_index;
unsigned long mt_buffer[MT_LEN];
void mt_init() {
    int i;
    for (i = 0; i < MT_LEN; i++)
        mt_buffer[i] = rand();
    mt_index = 0;
}
#define MT_IA 397
#define MT_IB (MT_LEN - MT_IA)
#define UPPER_MASK 0x80000000
#define LOWER_MASK 0x7FFFFFFF
#define MATRIX_A 0x9908B0DF
#define TWIST(b,i,j) ((b)[i] & UPPER_MASK) | ((b)[j] & LOWER_MASK)
#define MAGIC(s) (((s)&1)*MATRIX_A)
unsigned long mt_random() {
    unsigned long * b = mt_buffer;
    int idx = mt_index;
    unsigned long s;
    int i;

    if (idx == MT_LEN*sizeof(unsigned long))
    {
        idx = 0;
        i = 0;
        for (; i < MT_IB; i++) {
            s = TWIST(b, i, i+1);
            b[i] = b[i + MT_IA] ^ (s >> 1) ^ MAGIC(s);
        }
        for (; i < MT_LEN-1; i++) {
            s = TWIST(b, i, i+1);
            b[i] = b[i - MT_IB] ^ (s >> 1) ^ MAGIC(s);
        }
        s = TWIST(b, MT_LEN-1, 0);
        b[MT_LEN-1] = b[MT_IA-1] ^ (s >> 1) ^ MAGIC(s);
    }
    mt_index = idx + sizeof(unsigned long);
    return *(unsigned long *)((unsigned char *)b + idx);
    /* Here there is a commented out block in MB's original program */
}
int main (int argc, char *argv[]){
    char * endptr;
    const unsigned long long how_many_pairs = (unsigned long long) strtoll (argv[1], &endptr, 10);
    unsigned long long i = 0;
    FILE * file = fopen ("generated_numbers_mt.txt", "w");

    mt_init ();
    for (i = 0LL; i < how_many_pairs; i++){
        fprintf (file, "%.10lf ", ((double) mt_random () / (double) 4294967295));
        fprintf (file, "%.10lf\n", ((double) mt_random () / (double) 4294967295));
    }
    fclose (file);
    return 0;
}
Again, it loops 800.000.000 times, but it only stores 165191050 numbers.
$ ./devrandom 400000000
$ nl generated_numbers_devrandom.txt | tail # Here I'm just asking the shell to number the lines of the text file and to print out the 10 last ones.
82595516 0.8182168589 0.0370640513
82595517 0.1133005517 0.8237414290
82595518 0.9035788113 0.6030153367
82595519 0.9192735264 0.0945496135
82595520 0.0542484536 0.7224835437
82595521 0.1827865853 0.9254508596
82595522 0.0249044443 0.1234162976
82595523 0.0371284033 0.8898798078
82595524 0.5977596357 0.9672102989
82595525 0.5523654688 0.29032228
What is going on here?
Thanks in advance.
Each line is 26 characters long, and 82595525 lines x 26 = 2147483650 bytes.
If you look closely at the file created, I'm quite sure the last line is truncated and the file size is precisely 2147483647 bytes, i.e. 2^31 - 1.
The reason you can't write a larger file is either a file system limitation or, more likely, the fact that you compiled a (non large-file-aware) 32-bit binary, in which a file can't be larger than 2147483647 bytes, since that is the largest signed 32-bit integer.
If that is the case and if your OS is 64 bit, the simplest fix is to set the proper compiler flags to build a 64 bit binary which won't have this limitation.
Otherwise, have a look at abasterfield's workaround.
Compile with CFLAGS -D_FILE_OFFSET_BITS=64 or put
#define _FILE_OFFSET_BITS 64
in your code before you include any libc headers
