I have a program that writes results to a file, and I would like to read from that file in real time. It is a normal text file and the external program always writes whole lines. I only need it to run on a Linux system.
int last_read = 0;
int res;
FILE *file;
char buf[MAXLINE];                 // buffer for one line
char *end = buf + MAXLINE - 1;
int c;
int fileSize;
char *dst;

while (external_programme_is_running()) {
    file = fopen(logfile, "r");    // without opening and closing it's not working
    fseek(file, 0, SEEK_END);
    fileSize = ftell(file);
    if (fileSize > last_read) {
        fseek(file, last_read, SEEK_SET);
        while (!feof(file)) {
            dst = buf;
            while ((c = fgetc(file)) != EOF && c != '\n' && dst < end)
                *dst++ = c;
            *dst = '\0';
            res = ((c == EOF && dst == buf) ? EOF : dst - buf);
            if (res != -1) {
                last_read = ftell(file);
                parse_result(buf);
            }
        }
    }
    fclose(file);
}
Is this a correct approach? Or would it be better to check the modification time and only then open the file? Is it possible that reading would crash if the file were modified at the very same time?
To avoid the need to close, re-open, and re-seek for every loop iteration, call clearerr on the stream after reading EOF.
You shouldn't have any problems if you read at the same time the other program writes. The worst that would happen is that you wouldn't get to see what was written until the next time you open the file.
Also, your approach of comparing the last seek position to the end of the file is a fine way to look for additions to the file, since the external program is simply writing additional lines. I would recommend adding a sleep(1) at the end of your loop, though, so you don't use a ton of CPU.
There's no problem in reading a file while another process is writing to it. The standard tail -f utility is used often for that very purpose.
The only caveat is that the writing process must not exclusively lock the file.
Regarding your read code (i.e. using fgetc()), since you said that the writing process writes a line at a time, you might want to look at fgets() instead.
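Combining the suggestions above (clearerr to avoid reopening the file, sleep(1) to avoid busy-waiting, and fgets for line-based reading), a minimal sketch might look like this. logfile, MAXLINE, parse_result, and external_programme_is_running are placeholders from the question, not a real API:
FILE *file = fopen(logfile, "r");
char line[MAXLINE];

if (file != NULL) {
    while (external_programme_is_running()) {
        while (fgets(line, sizeof line, file) != NULL)
            parse_result(line);   /* the writer emits whole lines, so each call returns one line */
        clearerr(file);           /* drop the EOF flag so later fgets calls see newly appended data */
        sleep(1);                 /* sleep() from <unistd.h>; poll roughly once a second instead of spinning */
    }
    fclose(file);
}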
I'm trying to move content from one file to another.
My code:
char *path = extractFileName(args[1]);
if (path == 0)
    return -1;
FILE *input = fopen(path, "r");
rewind(input);
fseek(input, 0L, SEEK_END);
long sz = ftell(input);
printf("sz: %ld\n", sz);
rewind(input);
size_t a;
FILE *result = fopen("result.mp3", "w");
size_t counter = 0;
char buffer[128];
while ((a = fread(&buffer[0], 1, 128, input)) != 0) {
    fwrite(&buffer[0], 1, a, result);
    counter += a;
}
printf("%d\n", counter);
printf("ferror input: %d\n", ferror(input));
printf("feof input: %d\n", feof(input));
After execution it prints
sz: 6675688
25662
ferror input: 0
feof input: 16
As far as I know, this means that C knows the input file is about 6.7 MB, but it reports EOF when I try to read more than 25662 bytes. What am I doing wrong?
Since your output filename is result.mp3, it's a safe bet you're dealing with non-textual data. That means you should be opening your files in binary mode: "rb" and "wb" respectively. If you're running this code on Windows, not doing that would explain the behavior you're seeing: on that platform, reading a particular byte (0x1A) in text mode makes the stream signal end of file even when it's not actually the end. Using binary mode fixes that. On other OSes it's a no-op, but it still tells the reader what kind of data you expect to work with, so it's a good idea even where it's not strictly needed.
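For instance, the relevant part of the question's loop with both streams opened in binary mode (and, while at it, the fopen results checked and the size_t printed with %zu) could look like this. This is a sketch of the suggested fix, not a drop-in replacement:
FILE *input = fopen(path, "rb");        /* binary mode: no text-mode translation, no 0x1A end-of-file */
FILE *result = fopen("result.mp3", "wb");
if (input == NULL || result == NULL)
    return -1;

char buffer[128];
size_t a;
size_t counter = 0;
while ((a = fread(buffer, 1, sizeof(buffer), input)) != 0) {
    fwrite(buffer, 1, a, result);
    counter += a;
}
printf("%zu\n", counter);               /* %zu is the correct conversion for size_t */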
I am currently working on a project in which I have to read from a binary file and send it through sockets and I am having a hard time trying to send the whole file.
Here is what I wrote so far:
FILE *f = fopen(line,"rt");
//size = lseek(f, 0, SEEK_END)+1;
fseek(f, 0L, SEEK_END);
int size = ftell(f);
unsigned char buffer[MSGSIZE];
FILE *file = fopen(line,"rb");
while(fgets(buffer,MSGSIZE,file)){
sprintf(r.payload,"%s",buffer);
r.len = strlen(r.payload)+1;
res = send_message(&r);
if (res < 0) {
perror("[RECEIVER] Send ACK error. Exiting.\n");
return -1;
}
}
I think it has something to do with the size of the buffer that I read into, but I don't know what the correct formula for it is.
One more thing: is the sprintf done correctly?
If you are reading binary files, a NUL character may appear anywhere in the file.
Thus, using string functions like sprintf and strlen is a bad idea.
If you really need to use a second buffer (buffer), you could use memcpy.
You could also directly read into r.payload (if r.payload is already allocated with sufficient size).
You are looking for fread for a binary file.
The return value of fread tells you how many bytes were read into your buffer.
You may also want to call fseek again afterwards, to seek back to the start of the file after measuring its size.
See here: How can I get a file's size in C?
Maybe your code could look like this:
#include <stdint.h>
#include <stdio.h>

#define MSGSIZE 512

struct r_t {
    uint8_t payload[MSGSIZE];
    int len;
};

int send_message(struct r_t *t);

int main() {
    struct r_t r;
    FILE *f = fopen("test.bin", "rb");
    fseek(f, 0L, SEEK_END);
    size_t size = ftell(f);
    fseek(f, 0L, SEEK_SET);
    do {
        r.len = fread(r.payload, 1, sizeof(r.payload), f);
        if (r.len > 0) {
            int res = send_message(&r);
            if (res < 0) {
                perror("[RECEIVER] Send ACK error. Exiting.\n");
                fclose(f);
                return -1;
            }
        }
    } while (r.len > 0);
    fclose(f);
    return 0;
}
No, the sprintf is not done correctly. It is prone to buffer overflow, a very serious security problem.
I would consider sending the file as e.g. 1024-byte chunks instead of as line-by-line, so I would replace the fgets call with an fread call.
Why are you opening the file twice? Apparently to get its size, but you could open it only once and jump back to the beginning of the file. And, you're not using the size you read for anything.
Is it a binary file or a text file? fgets() assumes you are reading a text file -- it stops on a line break -- but you say it's a binary file and open it with "rb" (actually, the first time you opened it with "rt", I assume that was a typo).
IMO you should never ever use sprintf. The number of characters written to the buffer depends on the parameters that are passed in, and in this case if there is no '\0' in buffer then you cannot predict how many bytes will be copied to r.payload, and there is a very good chance you will overflow that buffer.
I think sprintf() would be the first thing to fix. Use memcpy() and you can tell it exactly how many bytes to copy.
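For example, a sketch of that change (assuming r.payload is a byte array of at least MSGSIZE bytes, file is the stream opened with "rb", and memcpy from <string.h> is available):
unsigned char buffer[MSGSIZE];
size_t n;

while ((n = fread(buffer, 1, sizeof(buffer), file)) > 0) {
    memcpy(r.payload, buffer, n);   /* copy exactly n bytes; no NUL terminator needed */
    r.len = n;                      /* tell send_message how many bytes are valid */
    res = send_message(&r);
    if (res < 0) {
        perror("[RECEIVER] Send ACK error. Exiting.\n");
        return -1;
    }
}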
I have written a program which takes a file as input and, whenever it finds a line longer than 80 characters, inserts \ and \n into the file so that no line is wider than 80 characters.
The problem is that I used fseek to write \ and \n wherever the length exceeds 80, so it overwrites two characters of the line instead. Is there a way to insert text without overwriting the existing text?
Here is my code:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    FILE *fp1;
    int prev = 0, now = 0;
    int ch;                          /* int, not char, so the EOF comparison works */
    int flag = 0;
    long cur = 0;

    fp1 = fopen(argv[1], "r+");
    if (fp1 == NULL) {
        printf("Unable to open the file to read. Program will exit.");
        exit(0);
    }
    else {
        while ((ch = fgetc(fp1)) != EOF) {
            if (ch != ' ' && ch != '\n') {
                now = now + 1;
            }
            else {
                if (now >= 80) {
                    fseek(fp1, cur, SEEK_SET);
                    fputc('\\', fp1);
                    fputc('\n', fp1);
                    now = 0;
                    continue;
                }
                if (ch == '\n') {
                    flag = 0;
                    now = 0;
                    continue;
                }
                else {
                    prev = now;
                    cur = ftell(fp1);
                }
                now = now + 1;
            }
        }
    }
    fclose(fp1);
    return 0;
}
To run it, you need to do the following:
user#ubuntu$ cc xyz.c
user#ubuntu$ ./a.out file_to_check.txt
While there are a couple of techniques to do it in-place, you're working with a text file and want to perform insertions. Operating systems typically don't support text file insertions as a file system primitive and there's no reason they should do that.
The best way to do that kind of thing is to open your file for reading, open a new file for writing, copy the part of the file before the insertion point, insert the data, copy the rest, and then move the new file over the old one.
This is a common technique and it has a purpose. If anything goes wrong (e.g. with your system), you still have the original file and can repeat the transaction later. If you start two instances of the process and use a specific pattern, the second instance is able to detect that the transaction has already been started. With exclusive file access, it can even detect whether the transaction was interrupted or is still running.
That way is much less error prone than any of the techniques performed directly on the original file and is used by all of those traditional tools like sed even if you ask them to work in-place (sed -i). Another bonus is that you can always rename the original file to one with a backup suffix before overwriting it (sed offers such an option as well).
The same technique is often used for configuration files, even when your program writes an entirely new version and doesn't use the original file for that. Not long ago, many internet magazines claimed that ext4 accidentally truncates configuration files to zero length. That happened precisely because some applications kept the configuration files open and truncated while the system was forcibly shut down. Those applications often tampered with the original configuration files before they had the replacement data ready, and then kept them open without syncing them, which made the window for data corruption much larger.
TL;DR version:
When you value your data, don't destroy it before you have the replacement data ready.
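A minimal sketch of that pattern (insert_via_copy and the temporary file name are hypothetical, and most error handling is omitted): copy the part before the insertion point, write the new data, copy the rest, then rename the new file over the original.
#include <stdio.h>

/* Sketch: insert `text` at byte offset `pos` of `path` by rewriting to a
   temporary file and renaming it over the original. */
int insert_via_copy(const char *path, long pos, const char *text) {
    FILE *in = fopen(path, "rb");
    FILE *out = fopen("file.tmp", "wb");
    if (in == NULL || out == NULL)
        return -1;

    int c;
    long copied = 0;
    while (copied < pos && (c = fgetc(in)) != EOF) {   /* part before the insertion point */
        fputc(c, out);
        copied++;
    }
    fputs(text, out);                                  /* the inserted data */
    while ((c = fgetc(in)) != EOF)                     /* the rest of the original file */
        fputc(c, out);

    fclose(in);
    fclose(out);
    return rename("file.tmp", path);                   /* replace the original (atomic on POSIX) */
}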
No, there's no way to insert characters into an existing file. You will need to use a second file to do that.
This is the function I use for this kind of thing:
int finsert(FILE *file, const char *buffer) {
    long int insert_pos = ftell(file);
    if (insert_pos < 0) return insert_pos;

    // Grow from the bottom
    int seek_ret = fseek(file, 0, SEEK_END);
    if (seek_ret) return seek_ret;
    long int total_left_to_move = ftell(file);
    if (total_left_to_move < 0) return total_left_to_move;

    char move_buffer[1024];
    long int amount_to_grow = strlen(buffer);
    if (amount_to_grow >= (long int) sizeof(move_buffer)) return -1;

    total_left_to_move -= insert_pos;

    for (;;) {
        long int amount_to_move = sizeof(move_buffer);
        if (total_left_to_move < amount_to_move) amount_to_move = total_left_to_move;

        long int read_pos = insert_pos + total_left_to_move - amount_to_move;

        seek_ret = fseek(file, read_pos, SEEK_SET);
        if (seek_ret) return seek_ret;
        fread(move_buffer, amount_to_move, 1, file);
        if (ferror(file)) return ferror(file);

        seek_ret = fseek(file, read_pos + amount_to_grow, SEEK_SET);
        if (seek_ret) return seek_ret;
        fwrite(move_buffer, amount_to_move, 1, file);
        if (ferror(file)) return ferror(file);

        total_left_to_move -= amount_to_move;
        if (!total_left_to_move) break;
    }

    seek_ret = fseek(file, insert_pos, SEEK_SET);
    if (seek_ret) return seek_ret;
    fwrite(buffer, amount_to_grow, 1, file);
    if (ferror(file)) return ferror(file);
    return 0;
}
Use it like this:
FILE * file= fopen("test.data", "r+");
ASSERT(file);
const char *to_insert = "INSERT";
fseek(file, 3, SEEK_SET);
finsert(file, to_insert);
ASSERT(ferror(file) == 0);
fclose(file);
This (as others here have mentioned) can theoretically corrupt the file if there is an error, but here is some code to actually do it... Doing it in-place like this is usually fine, but you should back up the file first if you are worried about it.
No, there is no way. You have to create a new file, or shift the rest of the file's contents two characters towards the end to make room.
You can read the file in chunks (in your case, 80 characters), append the two characters (the "\" and a newline), and write the result into another file.
Another implementation uses tmpfile():
#include <stdio.h>
#include <stdlib.h>
#include <assert.h>

FILE *tmp_buf;

int finsert(FILE *f, const char *msg) {
    fseek(tmp_buf, 0, SEEK_SET);
    fpos_t f_pos;
    assert(fgetpos(f, &f_pos) == 0);

    char buf[50];
    while (fgets(buf, 50, f))
        fputs(buf, tmp_buf);
    long tmp_buf_pos = ftell(tmp_buf);

    fsetpos(f, &f_pos);
    fputs(msg, f);

    fseek(tmp_buf, 0, SEEK_SET);
    while (--tmp_buf_pos >= 0)
        fputc(fgetc(tmp_buf), f);
    return ferror(f);
}

int main()
{
    FILE *f = fopen("result.txt", "wb+");
    assert(f != NULL);
    fputs("some text", f);

    tmp_buf = tmpfile();
    assert(tmp_buf != NULL);

    assert(finsert(f, "another text") == 0);

    fclose(f);
    perror("");
}
tested in Cygwin64
I've been stuck on this for a few days and it's getting really frustrating.
I'm using popen() to call a command-line process, get its output, and store it in a C string. I was using fgets(), but that seems to stop at a newline, so I'm using fread(). The only problem is that the returned C string is sometimes messed up.
Here's my code:
const char *cmd = "date";//This the shell command
char buf[BUFSIZ];//Output of the command
FILE *ptr;
int c;
if ((ptr = popen(cmd, "r")) != NULL)
while(fread(buf, sizeof(buf),1, ptr))
while ((c = getchar()) != EOF)
printf("output = %s", buf);
(void) pclose(ptr);
The final C string sometimes has weird characters in it that shouldn't be there, or sometimes no string is even available. Can anybody please help? ):
Edit: Here is what I was doing when using fgets(). The shell command can be anything that outputs text, though, not just "date".
if ((ptr = popen(cmd, "r")) != NULL)while (fgets(buf, BUFSIZ, ptr) != NULL)printf("output = %s", buf);(void) pclose(ptr);
fread doesn't insert a NUL terminator after what it reads. You need to check the return value to know how much it read, and only print that much. If you read with fread, you typically want to write the data with fwrite, something on this order:
size_t bytes;
while ((bytes = fread(buf, 1, sizeof(buf), ptr)) > 0)  /* element size 1, so the return value is a byte count */
    fwrite(buf, 1, bytes, stdout);
Well, fgets is the right way to do it.
FILE *ptr;
if (NULL == (ptr = popen(cmd, "r"))) {
    /* ... */
}
while (fgets(buf, sizeof(buf), ptr) != NULL) {
    /* There is stuff in 'buf' */
}
I think the reason fgets wasn't working for you is that you were doing something wrong.
Now, here's why I think you are running into trouble with your current code:
You are not checking how much fread actually returned
You are reading with getchar and discarding stuff
You don't have a NUL terminator in the buffer
Get this right and it will all be better: fread might legally read less than you told it to.
The output from date doesn't include the '\0' (NUL) character you need to properly terminate the string. Keep track of the number of characters read and put in the NUL yourself.
Though really, you should be using fgets, getline or similar text-oriented functions to read from a program such as date. getline is especially easy (and safe since it does some memory management for you):
FILE *fp = popen("date", "r");
char *ln = NULL;
size_t len = 0;
while (getline(&ln, &len, fp) != -1)
    fputs(ln, stdout);
free(ln);
pclose(fp);
Below is the correct way to use fread for process output with popen:
const char *cmd = "date";
char buf[BUFSIZ];
FILE *ptr;
if ((ptr = popen(cmd, "r")) != NULL) {
/* Read one byte at a time, up to BUFSIZ - 1 bytes, the last byte will be used for null termination. */
size_t byte_count = fread(buf, 1, BUFSIZ - 1, ptr);
/* Apply null termination so that the read bytes can be treated as a string. */
buf[byte_count] = 0;
printf("%s\n", buf);
}
(void) pclose(ptr);
As you can see, the primary problem is to deal with null termination correctly. The two size parameters of fread also matter: passing 1 as the element size means the return value is a byte count. Note that in the case of popen, fread only returns 0 if the process has exited without producing any output; it will not return 0 merely because the process takes a long time to print anything.
If the output is larger than BUFSIZ, you can wrap fread with a while loop.
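A sketch of that loop variant, assuming the command's output is text without embedded NUL bytes, reusing the same per-chunk null termination:
if ((ptr = popen(cmd, "r")) != NULL) {
    size_t byte_count;
    /* Keep reading chunks until the process closes its output. */
    while ((byte_count = fread(buf, 1, BUFSIZ - 1, ptr)) > 0) {
        buf[byte_count] = 0;        /* terminate this chunk so it can be printed as a string */
        printf("%s", buf);
    }
    (void) pclose(ptr);
}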
I have server-side C-based CGI code:
cgiFormFileSize("UPDATEFILE", &size); //UPDATEFILE = file being uploaded
cgiFormFileName("UPDATEFILE", file_name, 1024);
cgiFormFileContentType("UPDATEFILE", mime_type, 1024);
buffer = malloc(sizeof(char) * size);
if (cgiFormFileOpen("UPDATEFILE", &file) != cgiFormSuccess) {
exit(1);
}
output = fopen("/tmp/cgi.tar.gz", "w+");
inc = size/(1024*100);
fptr = fopen("progress_bar.txt", "w+");
while (cgiFormFileRead(file, b, sizeof(b), &got_count) == cgiFormSuccess)
{
fwrite(b,sizeof(char),got_count,output);
i++;
if(i == inc && j<=100)
{
fprintf(fptr,"%d", j);
fflush(fptr);
i = 0;
j++; // j is the progress bar increment value
}
}
fclose(fptr);
cgiFormFileClose(file);
retval = system("mkdir /tmp/update-tmp;\
cd /tmp/update-tmp;\
tar -xzf ../cgi.tar.gz;\
bash -c /tmp/update-tmp/update.sh");
However, this doesn't work the way I expect. Instead of printing 1, 2, ... 100 to progress_bar.txt (referred to by fptr) one by one, it prints them all in one go; it seems to buffer everything and then write it to the file.
fflush() also didn't work.
Any clue/suggestion would be really appreciated.
First, open the file before the loop and close it after the loop ends; otherwise that's too much I/O.
The problem is the "w+" mode: it truncates your file. Use "a+" instead (see the fopen documentation).
It is writing it one-by-one, it's just that it does it so fast that you're vanishingly unlikely to ever see the file with a value other than 99 in it.
This is easily demonstrated if you put a sleep(1) within the loop, so that it's slow enough for you to catch it.
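For illustration, a self-contained sketch (standalone, not the CGI code) that writes progress values the same way; with the sleep(1) in place you can watch progress_bar.txt change from another terminal, which shows the values really are written one at a time:
#include <stdio.h>
#include <unistd.h>   /* sleep() */

int main(void) {
    FILE *fptr = fopen("progress_bar.txt", "w+");
    if (fptr == NULL)
        return 1;
    for (int j = 0; j <= 100; j++) {
        fprintf(fptr, "%d", j);
        fflush(fptr);   /* push the value to the file immediately */
        sleep(1);       /* slow enough to observe the intermediate values */
    }
    fclose(fptr);
    return 0;
}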