fopen gives no error, but file stays empty - C

I'm having trouble writing to a file, and the problem is so weird that I don't even know what exactly I should ask.
First off: I use C, I am required to conform to the C89/C90 standard and to compile with gcc. My OS is Windows 7 Home Premium, 64-bit.
What I want to do is open three file streams like this:
FILE *A=fopen("A.txt", "r");
FILE *B=fopen("B.txt", "w");
FILE *D=fopen("C.txt", "w");
The first one only for reading, the other two for writing. During the program, I write with
fprintf(B, "%c", letter);
fprintf(D, "%c", letter);
integers, interpreted as ASCII characters, into the files. I have done this twenty times before, and it always worked. Now the "D" file stays empty. If I change the order of the streams to:
FILE *A=fopen("A.txt", "r");
FILE *D=fopen("C.txt", "w");
FILE *B=fopen("B.txt", "w");
my file "B" stays empty! So it is always the last stream opened that does not work. But why?! I can't see any difference from my other programs, which work, and this program also works, except for this third file stream. I'm compiling with -Wall and -pedantic; the compiler is not complaining, the program is not crashing, everything works but the third stream!
Does anyone have any idea, or better yet, experience with a problem like this? I tried for about an hour without getting any clue.
Edit: Thanks for all the comments! I've been programming in C for about two months now, so there are many things I'm not really sure of.
#Mhd.Tahawi: Yes, now I did, but no difference.
#Kninnug: The files get opened successfully.
#Bit Fiddling Code Monkey: Yes, as far as I know, there is a limit somewhere. But I worked with 6 filestreams at the same time a week ago and everything was fine.
#Pandrei: Yes, the Output in the other file was fine.
#Martin James: I'm not sure what you mean. If you mean fclose or something similar: Since so many of you asked me for this, I tried it, but there was no difference.
#hmjd: 'letter' is an integer that gets its value from fgetc(A);
#squeamish ossifrage: I'm printing character by character into the file and get '1' for each, which seems to be OK.
Edit 2: Now that's nice, I'm facing some kind of Heisenbug. I went to check the return values, as so many of you told me to.
So I initialized a new int retval = 0; and wrote:
letter = fgetc(A);
while (letter != EOF)
{
    retval = fprintf(B, "%c", letter);
    printf("%d", retval); /* Want to see the return values on stdout */
    if (retval < 0) /* If writing fails, the program stops */
    {
        printf("Error while writing to file");
        return (-1);
    }
    letter = fgetc(A);
}
And this one works! The new file gets filled, everything is fine. Except for the fact that I have absolutely no idea why the old one:
letter = fgetc(A);
while (letter != EOF)
{
    fprintf(B, "%c", letter);
    letter = fgetc(A);
}
which worked ten thousand times before, didn't do anything at all.
Does anyone have any further ideas? Anyway, thanks so far for your suggestions!

Unless you call fclose(), your standard library will buffer the output data and you won't see it on disk until you either write enough, flush it via fflush(), or close the file via fclose() (or disable buffering via setbuf()/setvbuf()).
Obviously, disabling buffering or flushing after each I/O operation defeats the whole purpose of buffered I/O and makes performance suboptimal.
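A minimal sketch of that fix (the file name and helper name are illustrative): write the character, then check both fflush() and fclose(), since fclose() flushes whatever is still sitting in the buffer:

```c
#include <stdio.h>

/* Write a single character to `path`, checking every step.
 * Returns 0 on success, -1 on failure. Name is illustrative. */
int write_one_char(const char *path, int letter)
{
    FILE *out = fopen(path, "w");
    if (out == NULL)
        return -1;
    if (fprintf(out, "%c", letter) < 0) { /* may only fill the stdio buffer */
        fclose(out);
        return -1;
    }
    if (fflush(out) != 0) {               /* push buffered data to the OS */
        fclose(out);
        return -1;
    }
    return (fclose(out) == 0) ? 0 : -1;   /* fclose flushes too; check it */
}
```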

I think I found the source of the problem, so I thought it might be helpful to share that it seems to be compiler-related.
As written above, I use gcc as part of the MinGW package.
I asked a friend of mine, who uses Pelles C, to compile the code himself, and voilà! For him, it worked fine!
So it seems that gcc may, under certain conditions, have problems with file streams. But I can't really say which conditions.
Anyway, if anyone else has a problem like this, try it with another compiler!


Implementing lseek in xv6

First off I need to say it's completely possible I'm missing something.
My assignment is essentially to implement fprintf. While appending to the file isn't required, I like to go above and beyond.
My issue is, I can't find a definition for lseek in xv6, meaning I have to implement it on my own, but I genuinely don't know how to go about it.
I tried reading 512 bytes at a time in an infinite loop in an attempt to move the cursor to the end, as a way to hardcode it, but if the file isn't opened with O_RDWR, or I try this with stdout, it fails.
I've also tried writing an empty string in an infinite loop. I knew it wouldn't work, but tried anyway.
I can read xv6 fairly well (the user-level programs), but I can't understand the source code of lseek for the life of me.
It doesn't have to be a genuine lseek. I just need to be able to get to the end of an fd and continue writing, but this cannot rely on the file mode.
Any help is greatly appreciated.
I found the solution.
The reason O_APPEND doesn't work is that the definition of open() in sysfile.c doesn't do anything with append.
In sys_open, a value of 0 is hardcoded for f->off (the offset), and this is what I need to change.
My planned solution is to figure out the file size (in bytes) and set the offset to that number.
I'll probably use stat().
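On a regular POSIX system that plan looks roughly like the sketch below; inside the xv6 kernel you would read the size from the inode instead of calling stat() on a path, so treat this as an illustration of the idea, not xv6 code:

```c
#include <sys/stat.h>

/* Return the size in bytes of the file at `path`, or -1 on error.
 * This is the value the plan would store in f->off so that subsequent
 * writes land at the end of the file instead of at offset 0. */
long append_offset(const char *path)
{
    struct stat st;
    if (stat(path, &st) != 0)
        return -1;
    return (long)st.st_size;
}
```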

C: fprintf does not work

I have a long C code. At the beginning I open two files and write something on them:
ffitness_data = fopen("fitness_data.txt", "w");
if (ffitness_data == NULL) {
    printf("Impossible to open the fitness data file\n");
    exit(1);
} else {
    fprintf(ffitness_data, "#This file contains all the data that are function of fitness.\n");
    fprintf(ffitness_data, "#Columns: f,<p>(f),<l>(f).\n\n");
}
fmeme_data = fopen("meme_data.txt", "w");
if (fmeme_data == NULL) {
    printf("Impossible to open the meme data file\n");
    exit(1);
} else {
    fprintf(fmeme_data, "#This file contains all the data relative to memes.\n");
    fprintf(fmeme_data, "#Columns: fitness, popularity, lifetime.\n\n");
}
Everything is fine at this step: the files are open and two lines are written to each.
Then I have a long simulation of a stochastic process, whose code is not relevant to the question: the files and their pointers are never used there. At the end of the process I have:
for (i = 0; i < data; i++) {
    fprintf(fmeme_data, "%f\t%d\t%f\n", meme[i].fitness, meme[i].popularity, meme[i].lifetime);
}
for (i = 0; i < 40; i++) {
    fprintf(ffitness_data, "%f\t%f\t%f\n", (1.0/40)*(i+0.5), popularity_histo[i], lifetime_histo[i]);
}
Then I call fflush() and fclose() on both files.
If I run the code on my laptop, both files are filled. If the code runs on a remote server, the file fitness_data.txt contains only the first print, i.e. the lines starting with #, but doesn't contain the data. I want you to note that:
The other file never gives me problems.
I'm used to this server. Something similar never happened.
Given all this information, the question is:
Why does a certain command, used always in the same way and in the same code, always work on one machine, while on a different server it sometimes works and sometimes doesn't?
Admins: I don't think this question is a duplicate. All similar questions were solved by adjusting the code (here) or adding fflush() (here) and similar things. This is not a problem in the code (in my modest opinion), because it works on my laptop. I bet it works on most machines.
We can't say for certain what's going on here, because we don't have your full program nor do we have access to the server where the problem happens. But, we can give you some debugging advice.
When a C program behaves differently on one computer than another, the very first thing you should suspect is memory corruption. The best available tool for finding memory corruption is valgrind. Fix the first invalid operation it reports and repeat until it reports no more invalid operations. There are excellent odds that the problem will have then gone away.
Turn up the warning levels as high as they can go and fix all of the complaints, even the ones that look silly.
You say you are calling fflush and fclose, but are you checking whether they failed? Check thoroughly, like this:
if (ferror(ffitness_data) || fflush(ffitness_data) || fclose(ffitness_data)) {
    perror("write error on fitness_data.txt");
    exit(1);
}
Does the problem go away if you change the optimization level you are compiling with? If so, you may have a bug that causes "undefined behavior". Unfortunately there are a lot of possible ways to do that and I can't easily explain how to look for them.
Use a tool like C-Reduce to cut your program down to a smaller program that still doesn't work correctly but is short enough to post here in its entirety.
Read and follow the instructions in the article "How to Debug Small Programs".

What are some best practices for file I/O in C?

I'm writing a fairly basic program for personal use but I really want to make sure I use good practices, especially if I decide to make it more robust later on or something.
For all intents and purposes, the program accepts some input files as arguments, opens them using fopen(), reads from the files, does stuff with that information, and then saves the output as a few different files in a subfolder. E.g., if the program is in ~/program, then the output files are saved in ~/program/csv/.
I just output directly to the files, for example output = fopen("./csv/output.csv", "w");, print to them with fprintf(output, "%f,%f", data1, data2); in a loop, and then close with fclose(output);, and I just feel like that is bad practice.
Should I be saving it in a temp directory while it's being written to, and then moving it when it's finished? Should I be using more advanced file I/O libraries? Am I just completely overthinking this?
Best practices in my eyes:
Check every call to fopen, printf, puts, fprintf, fclose, etc. for errors
use getchar if you must, fread if you can
use putchar if you must, fwrite if you can
avoid arbitrary limits on input line length (might require malloc/realloc)
know when you need to open output files in binary mode
use Standard C, forget conio.h :-)
newlines belong at the end of a line, not at the beginning of some text, i.e. it is printf("hello, world\n"); and not "\nHello, world", as those misled by the Mighty William H. often write to cope with the silliness of their command shell. Outputting newlines first breaks line-buffered I/O.
if you need more than 7-bit ASCII, choose Unicode (the most common encoding is UTF-8, which is ASCII-compatible). It's the last encoding you'll ever need to learn. Stay away from code pages and ISO-8859-*.
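The first few points above can be sketched together in one small, fully checked copy routine (the function name and buffer size are illustrative):

```c
#include <stdio.h>

/* Copy `src` to `dst` with fread/fwrite, checking every call.
 * Returns 0 on success, -1 on any error. */
int copy_file(const char *src, const char *dst)
{
    FILE *in, *out;
    char buf[4096];
    size_t n;
    int status = 0;

    in = fopen(src, "rb");      /* binary mode: see the point above */
    if (in == NULL)
        return -1;
    out = fopen(dst, "wb");
    if (out == NULL) {
        fclose(in);
        return -1;
    }
    while ((n = fread(buf, 1, sizeof buf, in)) > 0) {
        if (fwrite(buf, 1, n, out) != n) {  /* short write means error */
            status = -1;
            break;
        }
    }
    if (ferror(in))             /* fread may have stopped on an error */
        status = -1;
    if (fclose(out) != 0)       /* fclose can fail too: check it */
        status = -1;
    fclose(in);
    return status;
}
```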
Am I just completely overthinking this?
You are. If the task is simple, don't make a complicated solution on purpose just because it feels "more professional". While you're a beginner, focus on code readability; it will make your life and others' lives easier.
It's fine. I/O is fully buffered by default with stdio file functions, so you won't be writing to the file with every single call to fprintf. In fact, in many cases, nothing will be written until you call fclose.
It's good practice to check the return of fopen, to close your files when finished, etc. Let the OS and the compiler do their job in making the rest efficient, for simple programs like this.
If no other program is checking for the presence of ~/program/csv/output.csv for further processing, then what you're doing is just fine.
Otherwise you can consider writing to a FILE * obtained by a call to tmpfile() in stdio.h or some similar library call, and when finished copy the file to the final destination. You could also put down a lock file output.csv.lck and remove it when you're done, but that depends on your being able to modify the other program's behaviour.
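If another program does poll for ~/program/csv/output.csv, one portable pattern is to write to a temporary name in the same directory and then rename() it into place; on POSIX filesystems rename() replaces the target atomically, so a reader sees either the old file or the complete new one. The names below are illustrative:

```c
#include <stdio.h>

/* Write `text` to `path` by first writing a temporary file `tmp`
 * (same directory!), then renaming it over `path`. Readers see either
 * the old file or the complete new one, never a half-written file. */
int write_atomically(const char *path, const char *tmp, const char *text)
{
    FILE *out = fopen(tmp, "w");
    if (out == NULL)
        return -1;
    if (fputs(text, out) == EOF) {
        fclose(out);
        remove(tmp);             /* don't leave a broken temp file */
        return -1;
    }
    if (fclose(out) != 0) {      /* fclose flushes; it can still fail */
        remove(tmp);
        return -1;
    }
    return rename(tmp, path);    /* atomic replacement on POSIX */
}
```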
You can make your own cat, cp, mv programs for practice.

Reading .png file in C and sending it over a socket

I am currently working on a simple server implemented in C.
Processing jpg files works fine, but pngs give me a segmentation fault. I never get past this chunk of code. Why might this be?
fseek(file, 0, SEEK_END);
lSize = ftell(file);
rewind(file);
Thanks.
It's far more likely that you were accessing those arrays in a problematic fashion. Check the logic in your buffering code. Make sure you have your buffer sizes #define'd in a central location, rather than hardcoding sizes and offsets. You made it quit crashing, but if you missed an underlying logic error, you may run into mysterious problems down the road when you change something else. It is probably worth your time to deliberately break the program again and figure out WHY it's broken. As others have suggested, a debugger would be an excellent idea at this point. Or post a more complete example of your code.
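For reference, here is a defensive version of that size-then-read sequence with every step checked; a segfault around this code usually means the buffer receiving the data is too small or the size went unchecked (names are illustrative):

```c
#include <stdio.h>
#include <stdlib.h>

/* Read the whole file at `path` into a malloc'd buffer.
 * On success sets *out_size and returns the buffer (caller frees);
 * returns NULL on any error. */
unsigned char *read_whole_file(const char *path, long *out_size)
{
    FILE *file;
    long size;
    unsigned char *buf;

    file = fopen(path, "rb");   /* binary mode matters for .png data */
    if (file == NULL)
        return NULL;
    if (fseek(file, 0, SEEK_END) != 0) { fclose(file); return NULL; }
    size = ftell(file);
    if (size < 0) { fclose(file); return NULL; }
    rewind(file);
    buf = malloc((size_t)size); /* buffer sized from the actual file */
    if (buf == NULL) { fclose(file); return NULL; }
    if (fread(buf, 1, (size_t)size, file) != (size_t)size) {
        free(buf);
        fclose(file);
        return NULL;
    }
    fclose(file);
    *out_size = size;
    return buf;
}
```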

file opening pointer

In C, how many pointers can we use to open the same file at the same time without getting an error? Is there a limit? Also, does the sequence matter? For example, with
f1 = fopen("abc.txt", "r");
f2 = fopen("abc.txt", "w");
does f2 have to be closed first, or can f1 be closed first too?
Yes, most standard libraries impose some limit on how many files a particular process can have open at a time. As long as you're halfway reasonable about things, however, and only open files as you need them, and close them when you're done, it's rarely an issue.
You're guaranteed that you can open at least FOPEN_MAX files simultaneously. In some cases you can open more than that, but (absent limits imposed elsewhere, such as the OS being short of resources) you can open at least that many.
Edit: As to why you can often open many more files than FOPEN_MAX indicates: it's pretty simple. To guarantee the ability to open N files, you pretty much need to pre-allocate all the space you're going to use for those files (e.g., a buffer for each). Since most programs never open more than a few files at a time anyway, implementations try to keep that number fairly low to avoid wasting too much memory on space most programs never use.
Then, to accommodate programs that need to open more files, they can/will use realloc (or something similar) to try to allocate more space as needed. Since realloc can fail, though, the attempt to open more files can also fail.
This will give you the answer for your system. I got 16 on mine, FWIW.
#include <stdio.h>

int main(void)
{
    printf("%d\n", FOPEN_MAX);
    return 0;
}
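As for the second part of the question: the close order does not matter. Each fopen() returns an independent stream with its own position, and each one is closed on its own. A quick sketch (it opens the same file twice for reading; mixing "r" and "w" on one file is also legal, but note that "w" truncates the file immediately):

```c
#include <stdio.h>

/* Open one file on two independent read streams and close them in
 * either order; returns the first byte if both streams agree, -1 on error. */
int same_file_twice(const char *path)
{
    FILE *f1 = fopen(path, "r");
    FILE *f2 = fopen(path, "r");
    int c1, c2;

    if (f1 == NULL || f2 == NULL) {
        if (f1 != NULL) fclose(f1);
        if (f2 != NULL) fclose(f2);
        return -1;
    }
    c1 = fgetc(f1);
    c2 = fgetc(f2);   /* same byte: each stream has its own position */
    fclose(f2);       /* closing f2 first is fine... */
    fclose(f1);       /* ...and closing f1 first would be fine as well */
    return (c1 == c2) ? c1 : -1;
}
```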
