Code to read WAV file header producing strange results? (C)

I need to read the header variables from a WAV file and display them. I am using the following code, but my output contains numbers that are far too large. I've searched for solutions for hours; help would be much appreciated! Thanks. I got the WAV sound file format from https://ccrma.stanford.edu/courses/422/projects/WaveFormat/
Output:
Wav file header information:
Filesize 3884 bytes
RIFF header RIFF
WAVE header WAVE
Subchunk1ID fmt
Chunk Size (based on bits used) 604962816
Subchunk1Size 268435456
Sampling Rate 288030720
Bits Per Sample 2048
AudioFormat 256
Number of channels 2048
Byte Rate 288030720
Subchunk2ID
Subchunk2Size 1684108385
Here is the source:
#include <stdio.h>
#include <stdlib.h>

typedef struct WAV_HEADER
{
    char RIFF[4];
    int ChunkSize;
    char WAVE[4];
    char fmt[4];
    int Subchunk1Size;
    short int AudioFormat;
    short int NumOfChan;
    int SamplesPerSec;
    int bytesPerSec;
    short int blockAlign;
    short int bitsPerSample;
    int Subchunk2Size;
    char Subchunk2ID[4];
} wav_hdr;

int getFileSize(FILE *inFile);

int main(int argc, char *argv[])
{
    //check startup conditions
    if (argc >= 2); //we have enough arguments -- continue
    else { printf("\nUSAGE: program requires a filename as an argument -- please try again\n"); exit(0); }

    wav_hdr wavHeader;
    FILE *wavFile;
    int headerSize = sizeof(wav_hdr), filelength = 0;

    wavFile = fopen(argv[1], "r");
    if (wavFile == NULL)
    {
        printf("Unable to open wave file\n");
        exit(EXIT_FAILURE);
    }

    fread(&wavHeader, headerSize, 1, wavFile);
    filelength = getFileSize(wavFile);
    fclose(wavFile);

    printf("\nWav file header information:\n");
    printf("Filesize\t\t\t%d bytes\n", filelength);
    printf("RIFF header\t\t\t%c%c%c%c\n", wavHeader.RIFF[0], wavHeader.RIFF[1], wavHeader.RIFF[2], wavHeader.RIFF[3]);
    printf("WAVE header\t\t\t%c%c%c%c\n", wavHeader.WAVE[0], wavHeader.WAVE[1], wavHeader.WAVE[2], wavHeader.WAVE[3]);
    printf("Subchunk1ID\t\t\t%c%c%c%c\n", wavHeader.fmt[0], wavHeader.fmt[1], wavHeader.fmt[2], wavHeader.fmt[3]);
    printf("Chunk Size (based on bits used)\t%d\n", wavHeader.ChunkSize);
    printf("Subchunk1Size\t\t\t%d\n", wavHeader.Subchunk1Size);
    printf("Sampling Rate\t\t\t%d\n", wavHeader.SamplesPerSec); //Sampling frequency of the wav file
    printf("Bits Per Sample\t\t\t%d\n", wavHeader.bitsPerSample); //Number of bits used per sample
    printf("AudioFormat\t\t\t%d\n", wavHeader.AudioFormat);
    printf("Number of channels\t\t%d\n", wavHeader.bitsPerSample); //Number of channels (mono=1/stereo=2)
    printf("Byte Rate\t\t\t%d\n", wavHeader.bytesPerSec); //Number of bytes per second
    printf("Subchunk2ID\t\t\t%c%c%c%c\n", wavHeader.Subchunk2ID[0], wavHeader.Subchunk2ID[1], wavHeader.Subchunk2ID[2], wavHeader.Subchunk2ID[3]);
    printf("Subchunk2Size\t\t\t%d\n", wavHeader.Subchunk2Size);
    printf("\n");
    return 0;
}

int getFileSize(FILE *inFile)
{
    int fileSize = 0;
    fseek(inFile, 0, SEEK_END);
    fileSize = ftell(inFile);
    fseek(inFile, 0, SEEK_SET);
    return fileSize;
}

So, your code basically works -- if you compile it with the same compiler and O/S that the author of the file format spec was using (32-bit Windows). You're hoping that your compiler has laid out your struct exactly as you need to match the file bytes. For example, I can compile and run it on win32 and read a WAV file perfectly -- right up to the variable part of the header, whose variability you failed to code for.
Having written a great deal of code to manipulate a variety of file formats, I would advise you to give up on trying to read into structs and instead write a few simple utility functions, such as "read the next 4 bytes and turn them into an int".
Notice things like the "extra format bytes": parts of the file format depend on values in earlier parts of the file. That's why you generally need to treat it as a dynamic reading process rather than one big read that grabs the headers. It's not hard to keep the result highly portable C that works across operating systems, without relying on O/S-specific things like stat() or adding library dependencies for things like htonl() -- should portability (even to a different compiler, or just different compiler options on the same O/S) be desirable.
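A minimal sketch of the kind of helper suggested above (the name read_le32 is mine, not from any standard API; WAV data is little-endian, and this reads it the same way regardless of host byte order or struct layout):
#include <stdio.h>
#include <stdint.h>

/* Read the next 4 bytes from the stream and assemble them as a
   little-endian unsigned 32-bit value. Returns 0 on success,
   -1 on a short read. */
static int read_le32(FILE *f, uint32_t *out)
{
    unsigned char b[4];
    if (fread(b, 1, 4, f) != 4)
        return -1;
    *out = (uint32_t)b[0]
         | ((uint32_t)b[1] << 8)
         | ((uint32_t)b[2] << 16)
         | ((uint32_t)b[3] << 24);
    return 0;
}
Built this way, the parsing no longer depends on the compiler's struct padding or the host's byte order.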

It seems like you noticed the endian issue, but the way to handle it is with htonl, ntohl, htons, and ntohs. This is part of your number problem.
Read here:
http://www.beej.us/guide/bgnet/output/html/multipage/htonsman.html
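As a minimal illustration of what these functions do (they convert between network byte order, which is big-endian, and host order -- note that WAV data is little-endian, so on a little-endian host the fields need no conversion at all):
#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h> /* POSIX; on Windows these live in winsock2.h */

int main(void)
{
    uint16_t host = 0x1234;
    uint16_t net = htons(host); /* host order -> big-endian */
    printf("host 0x%04x -> network 0x%04x -> back 0x%04x\n",
           host, net, ntohs(net));
    return 0;
}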
Note there are a lot of other posts here on WAV files. Have you considered reading them?
Also, there are standard ways to get file information such as size: the Windows API on Windows, or stat on Linux/Unix.

Related

Same program is 10 times slower on Windows

I wrote a simple C program that copies 10 million bytes from one file and writes them in reverse order to another file. This is done one byte at a time; I know it's not efficient, but it's just to make some tests. I don't understand why it takes 2.5 seconds on Linux but more than 20 seconds on Windows. I run the same program, changing only the paths.
I use Windows 10 and Arch Linux; the files are on an NTFS partition.
Code on Windows:
#include <stdio.h>
#include <time.h>

void get_nth_byte(FILE *fp, int nth_index, unsigned char *output) {
    fseek(fp, nth_index, SEEK_SET);
    fread(output, sizeof(unsigned char), 1, fp);
}

int main() {
    clock_t begin = clock();

    FILE *input = fopen("C:\\Users\\piero\\Desktop\\input.txt", "rb");
    FILE *output = fopen("C:\\Users\\piero\\Desktop\\output.txt", "wb");
    unsigned char byte;
    for (int i = 10000000; i > 0; i--) {
        get_nth_byte(input, i, &byte);
        fwrite(&byte, sizeof(unsigned char), 1, output);
    }

    clock_t end = clock();
    double result = (double)(end - begin) / CLOCKS_PER_SEC;
    printf("%f", result);
    return 0;
}
Code on Linux:
#include <stdio.h>
#include <time.h>

void get_nth_byte(FILE *fp, int nth_index, unsigned char *output) {
    fseek(fp, nth_index, SEEK_SET);
    fread(output, sizeof(unsigned char), 1, fp);
}

int main() {
    clock_t begin = clock();

    FILE *input = fopen("/run/media/piero/Windows/Users/piero/Desktop/input.txt", "rb");
    FILE *output = fopen("/run/media/piero/Windows/Users/piero/Desktop/output.txt", "wb");
    unsigned char byte;
    for (int i = 10000000; i > 0; i--) {
        get_nth_byte(input, i, &byte);
        fwrite(&byte, sizeof(unsigned char), 1, output);
    }

    clock_t end = clock();
    double result = (double)(end - begin) / CLOCKS_PER_SEC;
    printf("%f", result);
    return 0;
}
Output on Linux: 2.224549
Output on Windows: 25.349647
UPDATE
I solved the problem by using Cygwin rather than MinGW; now it takes about 4.3 seconds.
This is a great demonstration of how it's not the code we write that runs, it's the executable the compiler makes from that code.
It is possible that your Windows C compiler is not as advanced as your Linux C compiler and is not optimizing your code as well as it could, or it's possible that the libraries the Windows compiler links for fread() and fwrite() are slower than the equivalent libraries on the Linux system.
If I had to hazard my best guess: the Linux side probably noticed that it would be more efficient to read more than one byte at a time, and could do that without affecting the semantics of your program, while the Windows side either didn't infer the same or wasn't able to optimize in the same way due to some underlying proprietary filesystem behavior that only Microsoft engineers understand.
I can't say for sure without a peek at the disassembled binaries.
One of the strengths of Unix/Linux is that files are designed to be treated as streams of bytes, with it being maximally easy and efficient to seek to the n'th byte using fseek or lseek.
Non-Unix operating systems, such as Windows, tend to have to work much harder to implement those seek operations. In the worst case, they may actually need to read through the file, counting characters as they go.
Your code opens both files in binary mode, and this should reduce the need for the fseek implementation to perform any expensive emulations. In text mode, a 10x performance penalty for heavy fseek use wouldn't surprise me. I'm much more surprised you're seeing it in binary mode.
[Disclaimer: strictly speaking, in text mode fseek is not defined as seeking to an arbitrary byte offset at all, but rather, only to a position defined by the number returned by a previous call to ftell. If an implementation takes advantage of that freedom, it can reduce the performance penalty for text-mode fseek operations, also, but it then means that code like yours, that constructs positions to seek to on the assumption that they're pure byte offsets, may not work at all.]
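For what it's worth, here is a sketch that sidesteps the per-byte fseek entirely by reading the whole file into memory and reversing it there (assuming the 10 MB file fits in RAM; the paths are placeholders):
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    FILE *in = fopen("input.txt", "rb");
    FILE *out = fopen("output.txt", "wb");
    if (!in || !out) { perror("fopen"); return 1; }

    /* Find the file size, then read the whole file in one call. */
    fseek(in, 0, SEEK_END);
    long size = ftell(in);
    rewind(in);

    unsigned char *buf = malloc(size);
    if (!buf || fread(buf, 1, size, in) != (size_t)size) return 1;

    /* Reverse in memory, then write it out in one call. */
    for (long i = 0; i < size / 2; i++) {
        unsigned char tmp = buf[i];
        buf[i] = buf[size - 1 - i];
        buf[size - 1 - i] = tmp;
    }
    fwrite(buf, 1, size, out);

    free(buf);
    fclose(in);
    fclose(out);
    return 0;
}
This should run in a fraction of a second on either O/S, because it replaces 10 million seek/read pairs with one read and one write.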

Get the number of bytes in a file [duplicate]

How can I figure out the size of a file, in bytes?
#include <stdio.h>

unsigned int fsize(char *file) {
    //what goes here?
}
On Unix-like systems, you can use POSIX system calls: stat on a path, or fstat on an already-open file descriptor (POSIX man page, Linux man page).
(Get a file descriptor from open(2), or fileno(FILE*) on a stdio stream).
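For the fstat route, a minimal sketch (the helper name is mine; it assumes the stream is already open):
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>

/* Size of an already-open stdio stream, via its file descriptor. */
off_t fsize_stream(FILE *fp)
{
    struct stat st;
    if (fstat(fileno(fp), &st) == 0)
        return st.st_size;
    return -1;
}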
Based on NilObject's code:
#include <sys/stat.h>
#include <sys/types.h>

off_t fsize(const char *filename) {
    struct stat st;
    if (stat(filename, &st) == 0)
        return st.st_size;
    return -1;
}
Changes:
Made the filename argument a const char *.
Corrected the struct stat definition, which was missing the variable name.
Returns -1 on error instead of 0, which would be ambiguous for an empty file. off_t is a signed type so this is possible.
If you want fsize() to print a message on error, you can use this:
#include <sys/stat.h>
#include <sys/types.h>
#include <string.h>
#include <stdio.h>
#include <errno.h>

off_t fsize(const char *filename) {
    struct stat st;
    if (stat(filename, &st) == 0)
        return st.st_size;
    fprintf(stderr, "Cannot determine size of %s: %s\n",
            filename, strerror(errno));
    return -1;
}
On 32-bit systems you should compile this with the option -D_FILE_OFFSET_BITS=64, otherwise off_t will only hold values up to 2 GB. See the "Using LFS" section of Large File Support in Linux for details.
Don't use int. Files over 2 gigabytes in size are common as dirt these days
Don't use unsigned int. Files over 4 gigabytes in size are common as some slightly-less-common dirt
IIRC off_t is a signed 64-bit integer when large file support is enabled (it comes from POSIX, not the C standard library), which is what everyone should be using. We can redefine it to be 128 bits in a few years when we start having 16 exabyte files hanging around.
If you're on windows, you should use GetFileSizeEx - it actually uses a signed 64 bit integer, so they'll start hitting problems with 8 exabyte files. Foolish Microsoft! :-)
Matt's solution should work, except that it's C++ instead of C, and the initial tell shouldn't be necessary.
unsigned long fsize(char *file)
{
    FILE *f = fopen(file, "r");
    fseek(f, 0, SEEK_END);
    unsigned long len = (unsigned long)ftell(f);
    fclose(f);
    return len;
}
Fixed your brace for you, too. ;)
Update: This isn't really the best solution. It's limited to 4GB files on Windows and it's likely slower than just using a platform-specific call like GetFileSizeEx or stat64.
Don't do this (why?):
Quoting the C99 standard doc that I found online: "Setting the file position indicator to end-of-file, as with fseek(file, 0, SEEK_END), has undefined behavior for a binary stream (because of possible trailing null characters) or for any stream with state-dependent encoding that does not assuredly end in the initial shift state."
Change the return type to int so that errors can be signalled, and then use fseek() and ftell() to determine the file size.
int fsize(char *file) {
    int size;
    FILE *fh;

    fh = fopen(file, "rb"); //binary mode
    if (fh != NULL) {
        if (fseek(fh, 0, SEEK_END)) {
            fclose(fh);
            return -1;
        }

        size = ftell(fh);
        fclose(fh);
        return size;
    }

    return -1; //error
}
POSIX
The POSIX standard has its own method to get file size.
Include the sys/stat.h header to use the function.
Synopsis
Get file statistics using stat(3).
Obtain the st_size property.
Examples
Note: this limits the size to 4 GB. Unless you're on a filesystem like FAT32 (where files can't exceed 4 GB anyway), use the 64-bit version!
#include <stdio.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    struct stat info;
    stat(argv[1], &info);
    // 'st' is an abbreviation of 'stat'
    printf("%s: size=%ld\n", argv[1], (long)info.st_size);
}
#define _LARGEFILE64_SOURCE /* for stat64 on glibc */
#include <stdio.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    struct stat64 info;
    stat64(argv[1], &info);
    // 'st' is an abbreviation of 'stat'
    printf("%s: size=%lld\n", argv[1], (long long)info.st_size);
}
ANSI C (standard)
ANSI C doesn't directly provide a way to determine the length of a file.
We'll have to use our heads. For now, we'll use the seek approach!
Synopsis
Seek the file to the end using fseek(3).
Get the current position using ftell(3).
Example
#include <stdio.h>

int main(int argc, char **argv)
{
    FILE *fp = fopen(argv[1], "rb");
    long f_size;

    fseek(fp, 0, SEEK_END);
    f_size = ftell(fp);
    rewind(fp); // go back to the start again

    printf("%s: size=%ld\n", argv[1], f_size);
}
If the file is stdin or a pipe, neither the POSIX nor the ANSI C approach will work: they will just return 0 for a pipe or stdin.
Opinion:
You should use the POSIX approach instead, because it has 64-bit support.
And if you're building a Windows app, use the GetFileSizeEx API as CRT file I/O is messy, especially for determining file length, due to peculiarities in file representations on different systems ;)
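A minimal sketch of that Windows route (the helper name is mine; error handling kept to the essentials):
#include <windows.h>

/* Returns the file size in bytes, or -1 on failure. */
long long fsize_win(const char *path)
{
    HANDLE h = CreateFileA(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                           OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return -1;
    LARGE_INTEGER size;
    BOOL ok = GetFileSizeEx(h, &size);
    CloseHandle(h);
    return ok ? (long long)size.QuadPart : -1;
}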
If you're fine with using the POSIX stat call:
#include <sys/stat.h>

off_t fsize(char *file) {
    struct stat filestat;
    if (stat(file, &filestat) == 0) {
        return filestat.st_size;
    }
    return 0;
}
I used this set of code to find the file length.
//open a file stream
FILE *i_file;
i_file = fopen(source, "r");

//get the descriptor number from the stream, for fstat
int f_d = fileno(i_file);

struct stat buffer;
fstat(f_d, &buffer);

//store the file size
long file_length = buffer.st_size;
fclose(i_file);
I found a method using fseek and ftell, and a thread with this same question whose answers say it can't be done another way in just C.
You could use a portability library like NSPR (the library that powers Firefox).
In plain ISO C, there is only one way to determine the size of a file which is guaranteed to work: To read the entire file from the start, until you encounter end-of-file.
However, this is highly inefficient. If you want a more efficient solution, then you will have to either
rely on platform-specific behavior, or
resort to platform-specific functions, such as stat on Linux or GetFileSize on Microsoft Windows.
In contrast to what other answers have suggested, the following code is not guaranteed to work:
fseek( fp, 0, SEEK_END );
long size = ftell( fp );
Even if we assume that the data type long is large enough to represent the file size (which is questionable on some platforms, most notably Microsoft Windows), the posted code has the following problems:
The posted code is not guaranteed to work on text streams, because according to §7.21.9.4 ¶2 of the ISO C11 standard, the value of the file position indicator returned by ftell contains unspecified information. Only for binary streams is this value guaranteed to be the number of characters from the beginning of the file. There is no such guarantee for text streams.
The posted code is also not guaranteed to work on binary streams, because according to §7.21.9.2 ¶3 of the ISO C11 standard, binary streams are not required to meaningfully support SEEK_END.
That being said, on most common platforms, the posted code will work, if we assume that the data type long is large enough to represent the size of the file.
However, on Microsoft Windows, the characters \r\n (carriage return followed by line feed) will be translated to \n for text streams (but not for binary streams), so that the file size you get will count \r\n as two bytes, although you are only reading a single character (\n) in text mode. Therefore, the results you get will not be consistent.
On POSIX-based platforms (e.g. Linux), this is not an issue, because on those platforms, there is no difference between text mode and binary mode.
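For completeness, a sketch of the one method plain ISO C does guarantee (the helper name is mine; slow, since it is O(file size), but portable):
#include <stdio.h>

/* Count the bytes in a stream by reading until end-of-file. */
unsigned long long fsize_by_reading(FILE *fp)
{
    unsigned char buf[4096];
    unsigned long long total = 0;
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, fp)) > 0)
        total += n;
    return total;
}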
C++ (MFC/Win32), extracted from the Windows file details. I'm not sure whether this performs better than seeking, but since it comes from the metadata rather than the file contents, I think it is faster because it doesn't need to read the entire file.
ULONGLONG GetFileSizeAtt(const wchar_t *wFile)
{
    WIN32_FILE_ATTRIBUTE_DATA fileInfo;
    ULONGLONG FileSize = 0ULL;
    //https://learn.microsoft.com/nl-nl/windows/win32/api/fileapi/nf-fileapi-getfileattributesexa?redirectedfrom=MSDN
    //https://learn.microsoft.com/nl-nl/windows/win32/api/fileapi/ns-fileapi-win32_file_attribute_data?redirectedfrom=MSDN
    if (GetFileAttributesEx(wFile, GetFileExInfoStandard, &fileInfo))
    {
        ULARGE_INTEGER ul;
        ul.HighPart = fileInfo.nFileSizeHigh;
        ul.LowPart = fileInfo.nFileSizeLow;
        FileSize = ul.QuadPart;
    }
    return FileSize;
}
Try this --
fseek(fp, 0, SEEK_END);
unsigned long int file_size = ftell(fp);
rewind(fp);
What this does is: first, seek to the end of the file; then, report where the file pointer is. Lastly (this is optional), it rewinds back to the beginning of the file. Note that fp should be a binary stream.
file_size contains the number of bytes the file contains. Note that since (according to limits.h) the unsigned long type is only guaranteed to hold values up to 4294967295 (4 gigabytes), you'll need a different variable type if you're likely to deal with files larger than that.
I have a function that works well with only stdio.h. I like it a lot; it works very well and is pretty concise:
#include <stdio.h>

size_t fsize(FILE *File) {
    size_t FSZ;
    fseek(File, 0, SEEK_END);
    FSZ = ftell(File);
    rewind(File);
    return FSZ;
}
Here's a simple and clean function that returns the file size.
long get_file_size(char *path)
{
    FILE *fp;
    long size = -1;

    /* Open file for reading */
    fp = fopen(path, "r");
    if (fp == NULL)
        return -1;
    fseek(fp, 0, SEEK_END);
    size = ftell(fp);
    fclose(fp);
    return size;
}
You can open the file and go to offset 0 relative to the bottom of the file with
#define SEEKBOTTOM 2 /* same value as SEEK_END */

fseek(handle, 0, SEEKBOTTOM);
after which the value returned by ftell(handle) is the size of the file.
I haven't coded in C for a long time, but I think it should work.

Why is my memory dumping so slow?

The idea behind this program is simply to access the RAM and dump the data from it to a txt file.
Later I'll convert the txt file to JPEG, and hopefully it will be readable.
However, when I try to read from the RAM using NEW[], it takes waaaaaay too long to actually copy all the values into the file.
Isn't it supposed to be really fast? I mean, I save pictures every day and it doesn't even take a second.
Is there some other method I can use to dump memory to a file?
#include <stdio.h>
#include <stdlib.h>
#include <hw/pci.h>
#include <hw/inout.h>
#include <sys/mman.h>

main()
{
    FILE *fp;
    fp = fopen("test.txt", "w+d");
    int NumberOfPciCards = 3;

    struct pci_dev_info info[NumberOfPciCards];
    void *PciDeviceHandler1, *PciDeviceHandler2, *PciDeviceHandler3;
    uint32_t *Buffer;
    int *BusNumb; //int Buffer;
    uint32_t counter = 0;
    int i;
    int r;
    int y;
    volatile uint32_t *NEW, *NEW2;
    uintptr_t iobase;
    volatile uint32_t *regbase;

    NEW = (uint32_t *)malloc(sizeof(uint32_t));
    NEW2 = (uint32_t *)malloc(sizeof(uint32_t));
    Buffer = (uint32_t *)malloc(sizeof(uint32_t));
    BusNumb = (int *)malloc(sizeof(int));

    printf("\n 1");
    for (r = 0; r < NumberOfPciCards; r++)
    {
        memset(&info[r], 0, sizeof(info[r]));
    }

    printf("\n 2");
    //Here the attach takes place.
    for (r = 0; r < NumberOfPciCards; r++)
    {
        (pci_attach(r) < 0) ? FuncPrint(1, r) : FuncPrint(0, r);
    }

    printf("\n 3");
    info[0].VendorId = 0x8086; //Wont be using this one
    info[0].DeviceId = 0x3582; //Or this one
    info[1].VendorId = 0x10B5; //Will only be using this one PLX 9054 chip
    info[1].DeviceId = 0x9054; //Also PLX 9054
    info[2].VendorId = 0x8086; //Not used
    info[2].DeviceId = 0x24cb; //Not used

    printf("\n 4");
    //I attached the device, gave it a handler, and set some settings.
    if ((PciDeviceHandler1 = pci_attach_device(0, PCI_SHARE | PCI_INIT_ALL, 0, &info[1])) == 0)
    {
        perror("pci_attach_device fail");
        exit(EXIT_FAILURE);
    }

    for (i = 0; i < 6; i++)
    //This just prints out some details of the card.
    {
        if (info[1].BaseAddressSize[i] > 0)
            printf("Aperture %d: "
                   "Base 0x%llx Length %d bytes Type %s\n", i,
                   PCI_IS_MEM(info[1].CpuBaseAddress[i]) ? PCI_MEM_ADDR(info[1].CpuBaseAddress[i]) : PCI_IO_ADDR(info[1].CpuBaseAddress[i]),
                   info[1].BaseAddressSize[i], PCI_IS_MEM(info[1].CpuBaseAddress[i]) ? "MEM" : "IO");
    }
    printf("\nEnd of Device random info dump---\n");
    printf("\nNEWs Address : %d\n", *(int *)NEW);

    //Not sure if this is a legitimate way of memory allocation, but I can't seem to read the ram any other way.
    NEW = mmap_device_memory(NULL, info[1].BaseAddressSize[3], PROT_READ | PROT_WRITE | PROT_NOCACHE, 0, info[1].CpuBaseAddress[3]);

    //Here is where things are starting to get messy and REALLY long, just running through all the ram and dumping it.
    //Is there some other way I can dump the data in the ram into a file?
    while (counter != info[1].BaseAddressSize[3])
    {
        fprintf(fp, "%x", NEW[counter]);
        counter++;
    }
    fclose(fp);
    printf("0x%x", *Buffer);
}
A few issues that I can see:
You are writing blocks of 4 bytes - that's quite inefficient. The stream buffering in the C library may help with that to a degree, but using larger blocks would still be more efficient.
Even worse, you are writing out the memory dump in hexadecimal notation, rather than the bytes themselves. That conversion is very CPU-intensive, not to mention that the size of the output is essentially doubled. You would be better off writing raw binary data using e.g. fwrite().
Depending on the specifics of your system (is this on QNX?), reading from I/O-mapped memory may be slower than reading directly from physical memory, especially if your PCI device has to act as a relay. What exactly is it that you are doing?
In any case I would suggest using a profiler to actually find out where your program is spending most of its time. Even a rudimentary system monitor would allow you to determine if your program is CPU-bound or I/O-bound.
As it is, "waaaaaay too long" is hardly a valid measurement. How much data is being copied? How long does it take? Where is the output file located?
P.S.: I also have some concerns w.r.t. what you are trying to do, but that is slightly off-topic for this question...
For fastest speed: write the data in binary form and use the open() / write() / close() API-s. Since your data is already available in a contiguous block of (virtual) memory it is a waste to copy it to a temporary buffer (used by the fwrite(), fprintf(), etc. API-s).
The code using write() will be similar to:
int fd = open("filename.bin", O_RDWR | O_CREAT, S_IRWXU);
write(fd, (void *)NEW, 4 * info[1].BaseAddressSize[3]);
close(fd);
You will need to add error handling and make sure that the buffer size is specified correctly.
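A slightly fuller sketch with that error handling added (the helper name is mine; whether the length needs the factor of 4 depends on whether BaseAddressSize is in bytes or in 32-bit words, which I can't tell from the question):
#include <stdio.h>
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>

/* Dump 'len' bytes starting at 'base' to a file, retrying partial writes. */
static int dump_region(const char *path, const void *base, size_t len)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return -1; }

    const unsigned char *p = base;
    size_t left = len;
    while (left > 0) {
        ssize_t n = write(fd, p, left);
        if (n < 0) { perror("write"); close(fd); return -1; }
        p += n;
        left -= (size_t)n;
    }
    return close(fd);
}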
To reiterate, you get the speed-up from:
avoiding the conversion from binary to ASCII (as pointed out by others above)
avoiding many calls to libc
reducing the number of system-calls (from inside libc)
eliminating the overhead of copying data to a temporary buffer inside the fwrite()/fprintf() and related functions (buffering would be useful if your data arrived in small chunks, including the case of converting to ASCII in 4 byte units)
I intentionally ignore commenting on other parts of your code as it is apparently not intended to be production quality yet and your question is focused on how to speed up writing data to a file.

fwrite() alternative for large files on 32-bit system

I'm trying to generate large files (4-8 GB) with C code.
Now I use fopen() with "wb" mode to open the file in binary, and fwrite() in a for loop to write bytes to the file; I'm writing one byte in every loop iteration. There is no problem until the file is larger than or equal to 4294967296 bytes (4096 MB). It looks like some memory limit in a 32-bit OS, because while it writes to that opened file, it is still in RAM. Am I right? The symptom is that the created file is smaller than I want, by exactly 4096 MB: e.g. when I want a 6000 MB file, it creates a 6000 MB - 4096 MB = 1904 MB file.
Could you suggest another way to do this task?
Regards :)
Part of code:
unsigned long long int number_of_data = (unsigned int)atoi(argv[1])*1024*1024; //MB
char x[1] = {atoi(argv[2])};

fp = fopen(strcat(argv[3], ".bin"), "wb");
for (i = 0; i < number_of_data; i++) {
    fwrite(x, sizeof(x[0]), sizeof(x[0]), fp);
}
fclose(fp);
fwrite is not the problem here. The problem is the value you are calculating for number_of_data.
You need to be careful of any unintentional 32-bit casting when dealing with 64-bit integers. When I define them, I normally do it in a number of discrete steps, being careful at each step:
unsigned long long int number_of_data = atoi(argv[1]); // Should be good for up to 2,147,483,647 MB (2TB)
number_of_data *= 1024*1024; // Convert to MB
The assignment operator (*=) will be acting on the l-value (the unsigned long long int), so you can trust it to be acting on a 64-bit value.
This may look unoptimised, but a decent compiler will remove any unnecessary steps.
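To see the unintended 32-bit truncation concretely, a small sketch (the values are illustrative):
#include <stdio.h>

int main(void)
{
    /* All operands are (unsigned) int, so the multiplication happens
       in 32 bits and wraps before the 64-bit assignment. */
    unsigned long long wrong = (unsigned int)6000 * 1024 * 1024;

    /* Widening first keeps the arithmetic in 64 bits. */
    unsigned long long right = 6000ULL * 1024 * 1024;

    printf("wrong: %llu\n", wrong); /* 1996488704 (wrapped) */
    printf("right: %llu\n", right); /* 6291456000 */
    return 0;
}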
You should not have any problem creating large files on Windows, but I have noticed that if you use a 32-bit version of seek on the file, it then seems to decide it is a 32-bit file and thus cannot be larger than 4GB. I have had success using _open, _lseeki64 and _write when working with >4GB files on Windows. For instance:
static void
create_file_simple(const TCHAR *filename, __int64 size)
{
    int omode = _O_WRONLY | _O_CREAT | _O_TRUNC;
    int fd = _topen(filename, omode, _S_IREAD | _S_IWRITE);
    _lseeki64(fd, size, SEEK_SET);
    _write(fd, "ABCD", 4);
    _close(fd);
}
The above will create a file over 4GB without issue. However, it can be slow, because when you call _write() there, the file system has to actually allocate the disk blocks for you. You may find it faster to create a sparse file if you have to fill it up randomly. If you will fill the file sequentially from the beginning then the above code will be fine. Note that if you really want to use the buffered I/O provided by fwrite, you can obtain a FILE* from a C library file descriptor using fdopen().
(In case anyone is wondering, the TCHAR, _topen and underscore prefixes are all MSVC++ quirks).
UPDATE
The original question uses sequential output of N bytes of value V. So a simple program that should actually produce the desired file is:
#include <stdlib.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <fcntl.h>
#include <io.h>
#include <tchar.h>

int
_tmain(int argc, TCHAR *argv[])
{
    __int64 n = 0, r = 0, size = 0x100000000LL; /* 4GB */
    char v = 'A';
    int fd = _topen(argv[1], _O_WRONLY | _O_CREAT | _O_TRUNC, _S_IREAD | _S_IWRITE);
    while (r != -1 && n < size) {
        r = _write(fd, &v, sizeof(v));
        if (r >= 0) n += r;
    }
    _close(fd);
    return 0;
}
However, this will be really slow as we are only writing one byte at a time. That is something that can be improved by using a larger buffer or using buffered I/O by calling fdopen on the descriptor (fd) and switching to fwrite.
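A sketch of that improvement, writing 64 KB chunks through a buffered stream obtained with _fdopen (MSVC's spelling of fdopen; the chunk size is an arbitrary choice of mine):
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <io.h>
#include <tchar.h>

int
_tmain(int argc, TCHAR *argv[])
{
    __int64 size = 0x100000000LL; /* 4GB */
    char chunk[1 << 16]; /* 64KB per call instead of 1 byte */
    memset(chunk, 'A', sizeof(chunk));

    int fd = _topen(argv[1], _O_WRONLY | _O_CREAT | _O_TRUNC, _S_IREAD | _S_IWRITE);
    FILE *fp = _fdopen(fd, "wb"); /* buffered stream over the descriptor */
    for (__int64 n = 0; n < size; n += sizeof(chunk))
        fwrite(chunk, 1, sizeof(chunk), fp);
    fclose(fp); /* flushes, and closes fd too */
    return 0;
}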
You have no problem with fwrite(). The problem seems to be your
unsigned long long int number_of_data = (unsigned int)atoi(argv[1])*1024*1024; //MB
which indeed should rather be something like
uint64_t number_of_data = atoll(argv[1])*1024ULL*1024ULL;
unsigned long long would still be OK, but unsigned int * int * int will give you an unsigned int, no matter how large your target variable is.

2GB limit on file size when using fwrite in C?

I have a short C program that writes into a file until there is no more space on disk:
#include <stdio.h>

int main(void) {
    char c[] = "abcdefghij";
    size_t rez;
    FILE *f = fopen("filldisk.dat", "wb");
    while (1) {
        rez = fwrite(c, 1, sizeof(c), f);
        if (!rez) break;
    }
    fclose(f);
    return 0;
}
When I run the program (in Linux), it stops when the file reaches 2GB.
Is there an internal limitation, due to the FILE structure, or something?
Thanks.
On a 32-bit system (i.e. the OS is 32-bit), by default, fopen and co. are limited to 32-bit sizes/offsets/etc. You need to enable large file support, or use the *64-bit variants:
http://www.gnu.org/software/libc/manual/html_node/Opening-Streams.html#index-fopen64-931
Your filesystem also needs to support this, but apart from FAT and other primitive filesystems, all of them support creating files > 2 GB.
"it stops when the file reaches 2GB. Is there an internal limitation, due to the FILE structure, or something?"
This is due to libc (the standard C library), which on an x86 (IA-32) Linux system defaults to the 32-bit functions provided by glibc (GNU's C Library). So by default the file stream size is based on 32 bits: 2^(32-1) bytes, i.e. 2 GB.
For using Large File Support, see the web page.
#define _FILE_OFFSET_BITS 64
/* or more commonly add -D_FILE_OFFSET_BITS=64 to CFLAGS */
#include <stdio.h>

int main(void) {
    char c[] = "abcdefghij";
    size_t rez;
    FILE *f = fopen("filldisk.dat", "wb");
    while (1) {
        rez = fwrite(c, 1, sizeof(c), f);
        if (rez < sizeof(c)) { break; }
    }
    fclose(f);
    return 0;
}
Note: most systems expect fopen (and off_t) to be based on a 2^31 file size limit. Replacing them with off64_t and fopen64 makes this explicit and, depending on usage, might be the best way to go; but it is not recommended in general, as they are non-standard.
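For what the explicit route looks like, a minimal glibc-only sketch (hedged: fopen64, fseeko64 and ftello64 are non-standard glibc names, enabled by _LARGEFILE64_SOURCE):
#define _LARGEFILE64_SOURCE
#include <stdio.h>

int main(void)
{
    FILE *f = fopen64("filldisk.dat", "wb");
    if (!f) return 1;
    /* fseeko64/ftello64 take and return off64_t, so offsets beyond
       2 GB are representable even in a 32-bit build. */
    fseeko64(f, 0x180000000LL, SEEK_SET); /* a 6 GB offset */
    fputc('x', f);
    printf("position: %lld\n", (long long)ftello64(f));
    fclose(f);
    return 0;
}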
