I use my C program to stream binary data to ImageMagick:
inbuf = popen(string, "wb");
setbuf(inbuf, NULL); /// !!! ///
fwrite(buffer, frame, 1, inbuf);
pclose(inbuf);
And ImageMagick doesn't always receive all of the data on Windows (MinGW). Without the setbuf call (which disables buffering) it receives even less data, and the problem appears on Linux (gcc) as well.
When I dump the same buffer to a file, everything works fine: all of the data is written to disk, and I don't even have to disable buffering:
outbuf = fopen("temp\\tune.gray", "wb");
fwrite(buffer, frame, 1, outbuf);
fclose(outbuf);
I discovered that the problem occurs when I send an odd number of bytes :) When I send an even number, everything works fine. I tried writing the data not in one bulk call but splitting the buffer into smaller portions, and even tried sending the data byte by byte - it doesn't help. Any ideas?
fwrite returns the number of complete items successfully written (with an item size of 1 that is a byte count). You can write a loop that checks how much was actually written and issues the next write from the position where the previous one stopped, until the entire buffer has been written.
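For example, a minimal retry loop along those lines (assuming buffer points to frame bytes and inbuf is the pipe from the question; passing an item size of 1 makes the return value a byte count) might look like:
size_t written = 0;
const char *p = (const char *)buffer;
while (written < frame) {
    size_t n = fwrite(p + written, 1, frame - written, inbuf);
    if (n == 0)
        break;              /* write error: check ferror(inbuf) */
    written += n;
}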
{
    FILE* f1 = fopen("C:\\num1.bin", "wb+"); // it will create a new file
    int A[] = { 1, 3, 6, 28 };               // int arr
    fwrite(A, sizeof(A), 1, f1);             // should insert the A array into the file
}
I do see the file, but even after the fwrite the file remains empty (0 bytes). Does anyone know why?
You need to close the file with fclose.
Otherwise the buffered data will not (necessarily) be written out to the file on disk.
A couple of things:
As @Grantly correctly noted above, you are missing a call to fclose or fflush after writing to the file. Without it, any cached/pending writes will not necessarily reach the open file.
You do not check the return value of fopen. If fopen fails for any reason, it returns a NULL pointer instead of a valid file pointer. Since you're writing directly to the root of drive C:\ on a Windows platform, that is something you definitely want to check for (not that you shouldn't in other cases too, but under a regular user account that location is often write-protected).
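Put together, a corrected version of the snippet might look like this (the path and array come from the question; the perror calls are just one possible way to report errors):
#include <stdio.h>

int main(void) {
    int A[] = { 1, 3, 6, 28 };
    FILE *f1 = fopen("C:\\num1.bin", "wb+");
    if (f1 == NULL) {               /* fopen can fail, e.g. if C:\ is write-protected */
        perror("fopen");
        return 1;
    }
    if (fwrite(A, sizeof(A), 1, f1) != 1) {
        perror("fwrite");
    }
    fclose(f1);                     /* flushes the buffered data to the file */
    return 0;
}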
The result of fwrite is not required to appear in the file immediately after the call returns. That is because file operations usually work in a buffered manner, i.e. writes are cached and flushed later to improve performance.
The content of the file will be updated after you call fclose:
fclose()
(...) Any unwritten buffered data are flushed to the OS. Any unread buffered data are discarded.
You may also explicitly flush the internal buffer without closing the file using fflush:
fflush()
For output streams (and for update streams on which the last operation was output), writes any unwritten data from the stream's buffer to the associated output device.
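For instance, a minimal sketch (reusing f1 and A from the question) that makes the data visible while keeping the file open:
fwrite(A, sizeof(A), 1, f1);
fflush(f1);   /* hand the buffered data to the OS; the file stays open for more writes */
/* ... further writes ... */
fclose(f1);   /* final flush and close */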
I have a C program that writes 3 lines to stdout every 10 ms. If I redirect the output to a file (using >), there are long delays (60 ms) in the running of the program. The delays are periodic (say every 5 seconds).
If I just let it write to the console or redirect to /dev/null, there is no problem.
I suspected that this was a stdout buffering problem, but using fflush(stdout) didn't solve it.
How can I solve the issue?
If I redirect the output to a file (using >) there will be long delays (60ms) in the running of the program.
That's because when stdout is a terminal device, it is usually (although not required) line-buffered, that is, the output buffer is flushed when a newline character is written, whereas in the case of regular files, output is fully buffered, meaning the buffers are flushed either when they are full or you close the file (or you explicitly call fflush(), of course).
fflush(stdout) may not be enough for you because that only flushes the standard I/O library buffers, but the kernel also buffers and delays writes to disk. You can call fsync() on the file descriptor to flush the modified buffer cache pages to disk after calling fflush(), as in fsync(STDOUT_FILENO).
Be careful not to call fsync() without calling fflush() first.
UPDATE: You can also try sync(), which, unlike fsync(), does not block waiting for the underlying writes to return. Or, as suggested in another answer, fdatasync() may be a good choice because it avoids the overhead of updating file times.
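A minimal sketch of the fflush()/fsync() combination (POSIX; it assumes stdout has been redirected to a regular file):
#include <stdio.h>
#include <unistd.h>

static void flush_stdout_to_disk(void) {
    fflush(stdout);                      /* flush the stdio buffer to the kernel */
    if (fsync(STDOUT_FILENO) == -1) {
        /* fsync can fail (e.g. EINVAL on a pipe); decide how to handle that */
    }
}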
You need to use fsync. The following:
fsync(fileno(stdout))
should help. Note that the Linux kernel will still buffer and limit I/O according to its internal scheduler limits. Running as root and setting a very low nice value might make a difference if you're not getting the frequency you want.
If it's still too slow, try using fdatasync instead. Every fflush and fsync causes the filesystem to update inode metadata (file size, access time, etc.) as well as the actual data itself. If you know in advance how many blocks of data you'll be writing, you can try the following trick:
#define _XOPEN_SOURCE 500
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
int main(void){
    FILE *fp = fopen("test.txt", "w");
    if (fp == NULL) {
        perror("fopen");
        return 1;
    }
    const char *line = "Test\n";
    char fill[500] = {0};                /* 100 * strlen(line) zero bytes */
    fwrite(fill, 1, sizeof(fill), fp);   /* pre-allocate the file in one chunk */
    fflush(fp);
    fsync(fileno(fp));
    rewind(fp);
    for (int i = 0; i < 100; i++){
        fwrite(line, strlen(line), 1, fp);
        fflush(fp);                      /* push the stdio buffer to the kernel */
        fdatasync(fileno(fp));           /* flush the data only, not the inode metadata */
    }
    fclose(fp);
    return 0;
}
The first fwrite call writes 500 zero bytes (100 * strlen(line)) to the file in one chunk and fsyncs, so the data is written to disk and the inode information is updated. Now we can write up to 500 bytes into the file without touching the filesystem metadata. rewind(3) moves the file position back to the beginning of the file, so we can write over the data without changing the file size recorded in the inode.
Timing that program gives the following:
$ time ./fdatasync
./fdatasync 0.00s user 0.01s system 1% cpu 0.913 total
So it ran fdatasync and sync'ed to disk 100 times in 0.913 seconds, which averages out to ~9ms per write & fdatasync call.
It could just be that every 5 seconds you are filling up your disk buffer, and there is a latency spike while it is flushed to the actual disk. Check with iostat.
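For example (assuming the sysstat tools are installed), watching the per-device statistics once per second makes such a periodic flush spike easy to spot:
$ iostat -x 1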
I'm opening a file using CreateFile() with the flags FILE_FLAG_NO_BUFFERING and FILE_FLAG_WRITE_THROUGH for several reasons, and I've noticed a strange behavior:
Since using those flags requires memory aligned to the sector size, let's say the sector size is 512.
Now, if I allocate 512 bytes with _aligned_malloc() and read from the file, everything works fine if the file size is exactly a multiple of the sector size, say 512*4, or 2048. I read pieces of 512 bytes, and the last piece makes ReadFile() return the EOF code, that is, return FALSE with GetLastError() set to ERROR_HANDLE_EOF.
The problem arises when the file size is not aligned to the sector size, that is, when the file's size is, say, 2048+13, or 2061 bytes.
I can successfully read the first four 512-byte chunks from the file, and a 5th call to ReadFile() lets me read the last 13 surplus bytes, but here is the strange thing: in that case ReadFile() doesn't return the EOF code! Even though I told ReadFile() to read 512 bytes and it read only 13 (so it reached the end of the file), it doesn't report that; it just returns 13 bytes read, with no further information.
So, when I have read the last 13 bytes and my loop is set to read until EOF, it calls ReadFile() again for a 6th time, causing an error: ERROR_INVALID_PARAMETER. I guess this is correct, because I'm trying to read after having passed the end of the file!
My question is: is this normal behavior, or am I doing something wrong? When using unbuffered I/O, should I expect not to get the EOF code when I read the last non-sector-aligned chunk of the file? Or is there another way to do this?
How can I tell that I've just passed the EOF?
I guess I could solve this by modifying the loop: instead of reading until EOF, I could read until EOF or until the number of bytes actually returned is less than the number requested. Is this a correct assumption?
NOTE: this does not happen when using files with normal flags, it only happens when I use FILE_FLAG_NO_BUFFERING and FILE_FLAG_WRITE_THROUGH.
NOTE 2: I'm using I/O Completion Ports for reading files, but I guess this happens also without using them, by just using blocking I/O.
EOF is surprisingly hard. Even C's feof function is often misunderstood.
Basically, you get ERROR_HANDLE_EOF in the first case to distinguish the "512 bytes read, more to read" and "512 bytes read, nothing left" cases.
In the second case, this is not needed. "512 bytes requested, 13 bytes read, no error" already means that you're at EOF. Any other reason for a partial read would have been an error.
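A read loop built on that rule might look roughly like the sketch below (synchronous Win32 I/O for brevity; hFile, buf and SECTOR_SIZE stand for the handle, the aligned buffer and the sector size from the question):
#include <windows.h>

for (;;) {
    DWORD bytesRead = 0;
    if (!ReadFile(hFile, buf, SECTOR_SIZE, &bytesRead, NULL)) {
        if (GetLastError() == ERROR_HANDLE_EOF)
            break;                       /* clean EOF (file size was sector-aligned) */
        /* otherwise: a real I/O error, handle it */
        break;
    }
    if (bytesRead == 0)
        break;                           /* nothing left to read */
    /* process bytesRead bytes from buf here */
    if (bytesRead < SECTOR_SIZE)
        break;                           /* short read: last partial sector, we're at EOF */
}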
I want to run wget from C and I am using popen to do so.
FILE *stdoutPtr = popen(command,"r");
fseek(stdoutPtr, 0L, SEEK_END);
long pos = ftell(stdoutPtr);
Here I want to get the size of the output so that I can allocate a buffer, but the pos variable is always -1.
pos is supposed to tell me the current position of the read pointer.
Please help....
The FILE returned by popen is not a regular file, but a thing called a pipe. (That's what the p stands for.) Data flows through the pipe from the stdout of the command you invoked to your program. Because it's a communications channel and not a file on disk, a pipe does not have a definite size, and you cannot seek to different locations in the data stream. Therefore, fseek and ftell will both fail when applied to this FILE, and that's what a -1 return value means. If you inspect errno immediately after the call to ftell you will discover that it has the value ESPIPE, which means "You can't do that to a pipe."
If you're trying to read all of the output from the command into a single char* buffer, the only way to do it is to repeatedly call one of the read functions until it indicates end-of-file, and enlarge the buffer as necessary using realloc. If the output is potentially large, it would be better to change your program to process the data in chunks, if there's any way to do that.
You can't use pipes that way. For one thing, the information would be obsolete the instant you got it, since more data could be written to the pipe by then. You have to use a different allocation strategy.
The most common strategy is to allocate a fixed-size buffer and just keep reading until you reach end of file. You can process the data as you read it, if you like.
If you need to process the data all in one chunk, you can allocate a large buffer and start reading into that. If it does get full, then use realloc to enlarge the buffer and keep going until you have it all.
A common pattern is to keep a buffer pointer, a buffer count, and an allocation size. Initially, set the allocation size to, say, 64K. Set the count to zero. Allocate a 64K buffer. Read up to size-count bytes into the buffer. If you hit EOF, stop. If the buffer is nearly full, bump up the allocation size by 50% and realloc the buffer.
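A sketch of that strategy applied to a popen stream (the function name slurp_pipe and the 64K starting size are arbitrary choices, not an established API):
#include <stdio.h>
#include <stdlib.h>

/* Read everything the command writes to stdout into one malloc'd buffer. */
char *slurp_pipe(const char *command, size_t *out_len) {
    FILE *p = popen(command, "r");
    if (p == NULL)
        return NULL;

    size_t cap = 64 * 1024, len = 0;         /* start with a 64K buffer */
    char *buf = malloc(cap);
    if (buf == NULL) { pclose(p); return NULL; }

    for (;;) {
        if (len == cap) {                    /* buffer full: grow it by 50% */
            cap += cap / 2;
            char *tmp = realloc(buf, cap);
            if (tmp == NULL) { free(buf); pclose(p); return NULL; }
            buf = tmp;
        }
        size_t n = fread(buf + len, 1, cap - len, p);
        if (n == 0)
            break;                           /* EOF (or a read error: check ferror(p)) */
        len += n;
    }

    pclose(p);
    *out_len = len;
    return buf;
}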
My application (C program) opens two file handles to the same file (one in write and one in read mode). Two separate threads in the app read from and write to the file. This works fine.
Since my app runs on an embedded device with a limited RAM disk size, I would like the write file handle to wrap to the beginning of the file on reaching the maximum size, and the read file handle to follow it like a circular buffer. I understand from answers to this question that this should work. However, as soon as I fseek the write handle to the beginning of the file, fread returns an error. Does the EOF indicator get reset by an fseek to the beginning of the file? If so, which function should be used to set the write position back to 0 without causing EOF to be reset?
EDIT/UPDATE:
I tried couple of things:
Based on @neodelphi's suggestion I used pipes, and this works. However, my use case requires writing to a file: I receive multiple channels of live video surveillance streams that need to be stored to the hard disk and also read back, decoded, and displayed on a monitor.
Thanks to @Clement's suggestion about ftell, I fixed a couple of bugs in my code and the wrap now works for the reader. However, the data read appears to be stale, since writes are still buffered and the reader picks up old content from the hard disk. I can't avoid buffering for performance reasons (I get 32 Mbps of live data that needs to be written to the hard disk). I have tried things like flushing writes only in the interval between the writer wrapping and the reader wrapping, and truncating the file (ftruncate) after the reader wraps, but this doesn't solve the stale-data problem.
I am trying to use two files in ping-pong fashion to see if this solves the issue, but I would like to know if there is a better solution.
You should have something like this:
// Write
if(ftell(WriteHandle)>BUFFER_MAX) rewind (WriteHandle);
fwrite(WriteHandle,/* ... */);
// Read (assuming binary)
readSize = fread (buffer,1,READ_CHUNK_SIZE,ReadHandle);
if(readSize!=READ_CHUNK_SIZE){
rewind (ReadHandle);
if(fread (buffer+readSize,1,READ_CHUNK_SIZE-readSize,ReadHandle)!=READ_CHUNK_SIZE-readSize)
;// ERROR !
}
Not tested, but it gives the idea. The write side should also handle the case where BUFFER_MAX is not a multiple of WRITE_CHUNK_SIZE.
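For that wrap case, the write can be split at the boundary, for example (untested sketch; data is assumed to point to the chunk being written, as in the snippet above):
long pos = ftell(WriteHandle);
size_t space = (size_t)(BUFFER_MAX - pos);               /* room left before the wrap point */
if (space >= WRITE_CHUNK_SIZE) {
    fwrite(data, 1, WRITE_CHUNK_SIZE, WriteHandle);
} else {
    fwrite(data, 1, space, WriteHandle);                  /* fill up to BUFFER_MAX ... */
    rewind(WriteHandle);
    fwrite(data + space, 1, WRITE_CHUNK_SIZE - space, WriteHandle);   /* ... then wrap */
}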
Also, you may read only if you are sure that the data has already been written. But I guess you already do that.
You could mmap the file into your virtual memory and then just create a normal circular buffer with the pointer returned.
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int fd = open(path, O_RDWR);
ftruncate(fd, max_size);        /* the file must already span max_size bytes for MAP_SHARED writes */
volatile void * mem = mmap(NULL, max_size, PROT_WRITE, MAP_SHARED, fd, 0);
volatile char * c_mem = (volatile char *)mem;
c_mem[index % max_size] = 'a';  /* this now writes to offset (index % max_size) in the file */
You can probably also be stricter with the permissions, depending on your exact case.