how to forbid multiple fopen() of the same file - C

I have a C program which opens the same file more than once, like this:
FILE *fp1 = fopen("/path/to/file","r");
// Without/before closing fp1
FILE *fp2 = fopen("/path/to/file","r");
I want to make the second fopen() fail while the program is running.
Assume I am running the C executable from GNU Bash (/bin/bash) or the Bourne shell (/bin/sh).
Is there any setting/configuration I can apply in my shell environment so that the same program is not allowed to open the same file more than once simultaneously, so that the second fopen() fails (i.e. returns NULL)?

You could use open() instead of fopen() and pass in the O_EXCL flag. Note that O_EXCL only takes effect together with O_CREAT: the open() then fails if the file already exists.

Related

how to accept pathname from stdin for open() system call?

I need to accept the pathname from stdin when I run my C script on Linux.
I have tried doing:
int file = open(STDIN_FILENO, O_RDONLY)
file is always assigned -1 (the file is not opened).
I expect running
./myScript < test.txt
to pass "test.txt" to open
open("test.txt", O_RDONLY); // expected after running the previous command
I expect running
./myScript < test.txt
to pass "test.txt" to open
That's an incorrect expectation. When you use the shell's redirection operator <, the shell opens the file test.txt and assigns the file descriptor to your program's standard input, i.e., file descriptor 0 (STDIN_FILENO). So there's no need to open the file again - it has been done already.
If you want your program to receive the filename as an argument, then don't use < and pass it as an argument instead:
./myScript test.txt
Now you will receive the filename in argv[1] of your program and can use it in the call to the open system call.
NB: C isn't a scripting language but a compiled one, so you'd better get the terminology right (i.e. "C program", not "C script").

Creating a file using fopen()

I am just writing a basic file-handling program.
The code is this:
#include <stdio.h>
int main()
{
    FILE *p;
    p=fopen("D:\\TENLINES.TXT","r");
    if(p==0)
    {
        printf("Error",);
    }
    fclose(p);
}
This always prints Error; I cannot create the file. I tried reinstalling the compiler and using different locations and names for the file, but no success.
I am using Windows 7 and the compiler is Dev-C++ version 5.
Change the mode argument in fopen(const char *filename, const char *mode) from:
p=fopen("D:\\TENLINES.TXT","r");//this will not _create_ a file
if(p==0) // ^
To this:
p=fopen("D:\\TENLINES.TXT","w");//this will create a file for writing.
if(p==NULL) // ^ //If the file already exists, it will write over
//existing data.
If you want to add content to an existing file, you can use "a+" for the open mode.
See fopen() (for more open modes, and additional information about the fopen family of functions)
As the documentation says, fopen returns NULL when an error occurs, so you should check whether p equals NULL.
Also, printf("Error",); is a syntax error: omit the comma after the string. Finally, calling fclose(p) when p is NULL is undefined behaviour, so only close the file if it was actually opened.
Yes, you should open the file in write mode, which creates the file; read mode only reads existing content.
Alternatively, you can use "r+" for both reading and writing, but note that "r+" also fails if the file does not exist.
You should be able to open the file, but you need to create it first. Make a text document named res.txt; the script should then be able to write your result into it (note this example is PHP, not C):
<?php
$result = $variable1 . $variable2 . "=" . $res;
echo $result;
$myfile = fopen("res.txt", "a+") or die("nope");
fwrite($myfile, $result);
fclose($myfile);
?>
fopen()
Syntax:
FILE *fp;
fp = fopen("data.txt", "r");
if (fp != NULL) {
    // file operations
}
It is necessary to write FILE in uppercase. The function fopen() here opens the file "data.txt" in read mode.
fopen() performs the following important tasks:
It searches the disk for the file to open.
If the file exists, it loads the file from the disk into memory. If the file is very large, it is loaded part by part.
If the file does not exist, the function returns NULL. NULL is a macro defined in the header file "stdio.h"; it indicates that the file could not be opened. Possible reasons for failure of fopen() include:
a. The file is protected or hidden.
b. The file is in use by another program.
It sets up a character pointer that points to the first character of the file. Whenever a file is opened, the character pointer points to the first character of the file.

Redirecting of stdout in bash vs writing to file in c with fprintf (speed)

I am wondering which option is quicker.
What interests me most is the mechanism of the redirection. I suspect the file is opened at the start of ./program > file and closed at the end, so every time the program outputs something, it is simply written to that file. Is that so? Then I would guess both options are comparable in speed.
Or is it a more complicated process, where the operating system has to perform more operations?
There is not much difference between those options (except that hard-coding the file name reduces the flexibility of your program).
To compare both approaches, let's look at what stands behind the magical entity FILE*:
In both cases we end up with a FILE* object wrapping a file descriptor fd - a gateway to the OS kernel and the in-kernel infrastructure that provides access to files or user terminals. Nothing special happens unless libc has a special initializer for stdout or the kernel treats fd = 1 specially.
How does bash redirection work compared with fopen()?
When bash redirects file:
fork() // new process is created
fd = open("file", ...) // open new file
close(1) // get rid of fd=1 pointing to /dev/pts device
dup2(fd, 1) // make fd=1 point to opened file
close(fd) // get rid of redundant fd
execve("a") // now "a" will have file as its stdout
// in a
stdout = fdopen(1, ...)
When you open the file on your own:
fork()                       // new process is created
execve("a")                  // "a" starts with stdout still pointing at the terminal
stdout = fdopen(1, ...)
my_file = fopen("file", ...) // which internally does:
fd = open("file", ...)
my_file = fdopen(fd, ...)
So as you can see, the main bash difference is twiddling with file descriptors.
Yes, you are right. The speed will be identical. The only difference in the two cases is which program opens and closes the file. When you redirect it using shell, it is the shell that opens the file and makes the handle available as stdout to the program. When the program opens the file, well, the program opens the file. After that, the handle is a file handle in both the cases, so there should be absolutely no difference in speed.
As a side remark, the program which writes to stdout can be used in more general ways. You can for example say
./program | ssh remotehost bash -c "cat > file"
which will cause the output of the program to be written to file on remotehost. Of course in this case there is no comparison like one you are making in the question.
stdout is a FILE handle, fprintf writes to a file handle, so the speed will be very similar in both cases. In fact printf("Some string") is equivalent to fprintf(stdout, "Some string"). I will say no more :)

File is not written on disk until program ends

I'm writing a file from a C program on a Unix system. I open it, write a few lines, and close it. Then I call a shell script, say code B, where this file is to be used, and afterwards return to the main program. However, when code B tries to read the file, the file is empty.
I checked the file on the file system; its size is shown as 0 and no data is present. However, after killing the running C process, the file does have the data.
Here is the piece of code -
void writefile(){
    FILE *fp;
    fp = fopen("ABC.txt","w");
    fputs("Some lines...\n",fp);
    fclose(fp);
    system("code_B ABC.txt");
}
Please advise how I can read the file in the shell script without stopping the C process.
If some time passes between the fputs and the fclose, add
fflush(fp);
there. This will cause the buffered contents to be written out, so other processes can see them.
You could call fsync(fileno(fp)) before the fclose(), to ask the kernel to push the data to the physical disk (fsync() takes a file descriptor, so it has to be called while the file is still open).
Take a look at this question:
Does Linux guarantee the contents of a file is flushed to disc after close()?
The kernel ensures that data which is written to a file can be read back afterwards from a different process, even if it is not physically written to the disc yet. So, in usual scenarios, there is no need to call fsync() - still, even with fsync(), the filesystem could decide to further delay physical writes.
One common problem is that the C library has not flushed its buffers yet, in which case you would need to call fflush() - however, you are calling fclose() before launching your sub process, and fclose() internally calls fflush().
Actually, since system() is using a shell to launch the command passed as parameter, you can use the following simple SSCCE to verify that it works:
#include <stdio.h>
#include <stdlib.h> /* for system() */

void writefile() {
    FILE *fp;
    fp = fopen("ABC.txt", "w");
    fputs("Some lines...\n", fp);
    fclose(fp);
    system("cat ABC.txt");
}

int main() {
    writefile();
    return 0;
}
Here, system() simply calls the cat command to print the file contents. The output is:
$ ./writefile
Some lines...

reading file in C with fopen()

I am writing a program in C that needs to read lines from a file; I am currently using fopen() for that purpose.
This works fine with my program:
./myProgram /path/to/file
However, I am having trouble reading inputs like this:
./myProgram - <<END
This
is
some
nameless
file
END
So I am guessing - is the nameless file that has the contents between the two ENDs, but my program gives a file-not-found error in that case, which means that fopen() returned a null pointer.
I am wondering what is going on here?
The data does arrive on stdin, but fopen() still requires a const char* pathname. What your program actually receives in argv is the literal string "-"; treating "-" as "read from stdin" is a convention that each program implements itself, not something fopen() knows about, so fopen("-", "r") fails unless a file literally named - exists.
If you want to read from stdin, you can use fgets or any other reading function that takes a FILE *stream argument, and pass it the stdin file stream.
