I am trying to write two programs that will talk to each other using a FIFO (named pipe).
I used the example here (section 5.2), but I changed the mknod call there to mkfifo and tried to change gets to fgets.
This is the code (of one program which writes into the fifo):
#include <stdio.h>
#include <unistd.h>
#include <string.h>
#include <errno.h>
#include <sys/types.h> /*mkfifo, open */
#include <sys/wait.h>
#include <sys/stat.h> /* mkfifo, open */
#include <fcntl.h> /*open */
#define FIFO_PATH "/home/hana/Desktop"
#define BUFFER_SIZE 300
int main()
{
    char buffer[BUFFER_SIZE];
    int fd;
    int wStatus;

    mkfifo(FIFO_PATH, 666);
    printf("waiting for readers\n");
    fd = open(FIFO_PATH, O_RDWR);
    while (fgets(buffer, BUFFER_SIZE, fd), !feof(stdin))
    {
        if ((wStatus = write(fd, buffer, strlen(buffer))) == -1)
            perror("write");
        else
            printf("speak: wrote %d bytes\n", wStatus);
    }
    return 0;
}
I get a compilation error: passing argument 3 of fgets makes pointer from integer.
So fgets is expecting a FILE * and not a file descriptor.
What should I do? Change something so that fgets works? Use another function?
I am compiling with gcc (ansi, pedantic).
Thanks
The answer from whjm explains the cause of your error diagnostic, but I think you probably meant
fgets(buffer, BUFFER_SIZE, stdin)
// ^^^^^
It doesn't make sense that you would read from a pipe and then immediately write the same thing back to the pipe. Also, if you never read from stdin, feof(stdin) will never be true.
Also, with fgets, just check for a NULL result, and do the check for EOF outside the loop:
while (fgets(...) != NULL)
{
...
}
if (!feof(stdin))
{
// error handling
}
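Putting that together, a minimal sketch of the corrected loop (reusing the fd, buffer, and wStatus variables from your program) might look like this:
/* Sketch: read lines from stdin, write them to the FIFO descriptor fd. */
while (fgets(buffer, BUFFER_SIZE, stdin) != NULL)
{
    if ((wStatus = write(fd, buffer, strlen(buffer))) == -1)
        perror("write");
    else
        printf("speak: wrote %d bytes\n", wStatus);
}
if (!feof(stdin))
{
    perror("fgets");   /* fgets failed for a reason other than end-of-file */
}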
mkfifo() just creates a special node in the filesystem, and you are free to open it in any way. There are actually two alternatives: POSIX "non-buffered" I/O, open()/read()/write(), or standard buffered I/O, fopen()/fread()/fwrite(). The first family operates on file descriptors, while the second uses so-called file streams, FILE. You cannot mix these APIs freely; just choose one and stick to it.
The standard I/O library offers some useful extra capabilities compared to low-level non-buffered I/O, like the fgets() you're trying to use. In this situation it would be reasonable to use standard streams and replace open() with:
FILE* stream = fopen(FIFO_PATH, "r+");
Thus the program will use FILE* instead of plain file descriptors. The write() call also needs to be changed to fwrite(), immediately followed by fflush() to guarantee that the written data is passed to the FIFO.
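For illustration, a minimal sketch of the stream-based write loop, under the same assumptions as the question (FIFO_PATH, BUFFER_SIZE, and buffer defined as above), might look like this:
/* Sketch only: standard-I/O version of the writer loop. */
FILE *stream = fopen(FIFO_PATH, "r+");   /* "r+" as suggested above */
if (stream == NULL) {
    perror("fopen");
    return 1;
}
while (fgets(buffer, BUFFER_SIZE, stdin) != NULL) {
    size_t len = strlen(buffer);
    if (fwrite(buffer, 1, len, stream) < len)
        fprintf(stderr, "speak: short write to FIFO\n");
    fflush(stream);   /* push the buffered data into the FIFO right away */
}
fclose(stream);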
P.S. If necessary, it is possible to "wrap" a low-level descriptor returned by open() (or something else) with a standard FILE*; see fdopen(). But that is more of a workaround for using the standard I/O API with special file objects that cannot be opened with fopen().
Related
Hey, how would I be able to output the first 10 words of a text file without using functions from <stdio.h>?
#include <fcntl.h>
#include <unistd.h>
#include <stdlib.h>
int main()
{
    int fd_to_read = open("sample.txt", O_RDONLY);
    if (fd_to_read == -1) {
        exit(1);
    }
    // ...
    close(fd_to_read);
}
I have no idea how to display the first 10 words without the use of <stdio.h>
On POSIX systems the I/O primitives are open, close, read, and write.
These primitives operate on file descriptors rather than a FILE object as provided by the C Standard.
Like the C I/O interface, the POSIX interface expects a buffer and a buffer size for the read and write primitives.
Assuming that words are separated by the whitespace character ' ', your job would be to read continuously from the source descriptor and count the occurrences of the space character (by iterating over the buffer char by char) until the count hits the desired threshold.
Until then, write everything to the output descriptor; see the sketch after the list below.
In <unistd.h> you'll find these three important symbols:
STDIN_FILENO
STDOUT_FILENO
STDERR_FILENO
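A minimal sketch along those lines, assuming words are separated by single spaces or newlines as described above (the file name and buffer size are just illustrative), could look like this:
#include <fcntl.h>
#include <unistd.h>
#include <stdlib.h>

int main(void)
{
    char buf[256];
    ssize_t n;
    int words = 0;
    int fd = open("sample.txt", O_RDONLY);

    if (fd == -1)
        exit(1);

    /* Read chunks, count separators, and echo the scanned part to stdout. */
    while (words < 10 && (n = read(fd, buf, sizeof buf)) > 0) {
        ssize_t i;
        for (i = 0; i < n && words < 10; i++) {
            if (buf[i] == ' ' || buf[i] == '\n')
                words++;   /* each space or newline ends a word */
        }
        write(STDOUT_FILENO, buf, i);   /* write only what was scanned */
    }

    close(fd);
    return 0;
}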
This problem is more difficult than it looks: you cannot use <stdio.h> so you must use system calls to read from the file and write to stdout:
Read a byte:
char ch;
if (read(fd_to_read, &ch, 1) != 1) {   /* fd_to_read is the descriptor from your open() call */
    /* end of file reached */
    break;
}
Write the byte to stdout:
write(STDOUT_FILENO, &ch, 1);   /* descriptor 1 is stdout; 0 would be stdin */
Testing for word boundaries is more tricky: you must skip all white space, then you have a new word until you read more white space.
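A sketch of that word-boundary logic, under the same assumptions as above (byte-at-a-time reads from fd_to_read, whitespace meaning space, tab, or newline), might look like:
/* Sketch only: count words by tracking transitions from whitespace to non-whitespace. */
int in_word = 0;
int words = 0;
char ch;

while (read(fd_to_read, &ch, 1) == 1) {
    int is_space = (ch == ' ' || ch == '\t' || ch == '\n');
    if (!is_space && !in_word)
        words++;                       /* a new word starts here */
    in_word = !is_space;
    if (is_space && words >= 10)
        break;                         /* the 10th word just ended */
    if (words > 0)                     /* skip leading whitespace before the first word */
        write(STDOUT_FILENO, &ch, 1);
}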
pipe(7) says:
If a process attempts to read from an empty pipe, then read(2) will block until data is available. If a process attempts to write to a full pipe (see below), then write(2) blocks until sufficient data has been read from the pipe to allow the write to complete. Nonblocking I/O is possible by using the fcntl(2) F_SETFL operation to enable the O_NONBLOCK open file status flag.
Below I have two simple C programs compiled on linux with gcc:
reader.c:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#define STACKBUF_SIZE 128
#define FIFO_PATH "/home/bogdan/.imagedata"
signed int main(int argc, char **argv) {
    int fifo_fd = open(FIFO_PATH, O_RDONLY); // blocking... - notice no O_NONBLOCK flag
    if (fifo_fd != -1) {
        fprintf(stdout, "open() call succeeded\n");
    }
    while (1) {
        char buf[STACKBUF_SIZE] = {0};
        ssize_t bread = read(fifo_fd, buf, STACKBUF_SIZE);
        fprintf(stdout, "%d - %s\n", bread, buf);
        sleep(1);
    }
    close(fifo_fd);
    return EXIT_SUCCESS;
}
writer.c:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>

#define STACKBUF_SIZE 128
#define FIFO_PATH "/home/bogdan/.imagedata"
#define DATA "data"

int main(void) {
    int fifo_fd = open(FIFO_PATH, O_WRONLY); // blocks until a reader opens the read end, however we always first start the reader so...
    if (fifo_fd != -1) {
        ssize_t bwritten = write(fifo_fd, DATA, 5);
        fprintf(stdout, "writer wrote %ld bytes\n", bwritten);
    }
    close(fifo_fd);
    return EXIT_SUCCESS;
}
The files are compiled into two separate binaries with gcc writer.c -Og -g -o ./writer, same for the reader.
From the shell I first execute the reader binary, and as expected, the initial open() call blocks until I also execute the writer. I then execute the writer, whose open() call immediately succeeds and it writes 5 bytes to the FIFO (which are correctly displayed by the reader), after which it closes the fd, leaving the FIFO empty (?).
However, the following read() calls in the while loop of the reader don't block at all, and instead just return 0.
Unless I am missing something (I probably am), this clashes with the semantics outlined by the pipe(7) manpage, as the FIFO fd was opened without the O_NONBLOCK flag in both the reader and the writer.
The section of the manual that you quoted only applies to pipes with open writers. Two paragraphs down, it says this:
If all file descriptors referring to the write end of a pipe have been closed, then an attempt to read(2) from the pipe will see end-of-file (read(2) will return 0).
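So once all writers have closed their end, every subsequent read() returns 0 immediately rather than blocking. If the reader should instead wait for the next writer, one common pattern is to reopen the FIFO when end-of-file is reported (another is for the reader to open it O_RDWR so it always holds a write end itself). A minimal sketch of the reopen approach, reusing the reader's loop, might be:
/* Sketch only: when read() reports end-of-file (all writers closed),
 * close and reopen the FIFO so the next read() blocks again. */
while (1) {
    char buf[STACKBUF_SIZE] = {0};
    ssize_t bread = read(fifo_fd, buf, STACKBUF_SIZE - 1);
    if (bread == 0) {                        /* the writer closed its end */
        close(fifo_fd);
        fifo_fd = open(FIFO_PATH, O_RDONLY); /* blocks until a new writer appears */
        continue;
    }
    fprintf(stdout, "%zd - %s\n", bread, buf);
}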
I am trying to understand the standard I/O. I met a problem of calling fdopen().
What's the behavior if I call fdopen() on the same file descriptor as follows? Why do I get an output of '\377' (-1)?
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
int main()
{
    int fd1, fd2;
    char c;
    FILE *fp1, *fp2;

    fd1 = open("foo.txt", O_RDONLY, 0);
    fp1 = fdopen(fd1, "r");
    fp2 = fdopen(fd1, "r");
    if (fp2 == NULL)
        printf("NULL\n");
    if (errno)
        printf("ERROR\n");
    c = fgetc(fp1);
    c = fgetc(fp2);
    printf("c = %c\n", c);
    exit(0);
}
Let's say your stdio buffer size is 4K. The first fgetc reads 4K into the buffer and returns the first byte. The fd is now advanced 4K into the file. The second fgetc reads from there. Your file is smaller than the buffer size, so you're at EOF. You print the EOF with %c and get a funny character.
Multiple fdopen() calls on a single fd get a "don't try it; it will hurt" vote from me, with an exception for creating stdin, stdout, and stderr from a single tty descriptor if you're writing getty.
Multiple problems:
char is not the right type for storing the return value of fgetc. Use int.
You're accessing the same open file description via two different handles without performing the steps necessary to switch between them legally. This invokes undefined behavior.
Checking errno and inferring from it that there was an error is not valid. If you already know there was an error, errno tells you which one. It does not tell you whether or not an error occurred, and in case one did not, any nonzero value may have been written to errno.
We don't know your file contents so we can't know what you expect to be read.
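A sketch of one safe variant, assuming you simply want two independent read positions on the same file: open the file twice instead of sharing one descriptor, and store fgetc's result in an int so EOF can be distinguished from data.
/* Sketch only: two independent streams, each with its own open file description. */
FILE *fp1 = fopen("foo.txt", "r");
FILE *fp2 = fopen("foo.txt", "r");
int c1, c2;

if (fp1 == NULL || fp2 == NULL) {
    perror("fopen");
    return 1;
}
c1 = fgetc(fp1);   /* both streams read the first byte of the file */
c2 = fgetc(fp2);
if (c1 != EOF && c2 != EOF)
    printf("c1 = %c, c2 = %c\n", c1, c2);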
Is it all right for multiple processes to write to the same file at the same time? Using the following code, it seems to work, but I have my doubts.
The use case in this instance is an executable that gets called every time an email is received and logs its output to a central file.
if (freopen(console_logfile, "a+", stdout) == NULL || freopen(error_logfile, "a+", stderr) == NULL) {
perror("freopen");
}
printf("Hello World!");
This is running on CentOS and compiled as C.
Using the C standard I/O facility introduces a new layer of complexity; the file is modified solely via the write(2) family of system calls (or memory mappings, but that's not used in this case) -- the C standard I/O wrappers may postpone writing to the file for a while and may not submit complete requests in one system call.
The write(2) call itself should behave well:
[...] If the file was
open(2)ed with O_APPEND, the file offset is first set to the
end of the file before writing. The adjustment of the file
offset and the write operation are performed as an atomic
step.
POSIX requires that a read(2) which can be proved to occur
after a write() has returned returns the new data. Note that
not all file systems are POSIX conforming.
Thus your underlying write(2) calls will behave properly.
For the higher-level C standard IO streams, you'll also need to take care of the buffering. The setvbuf(3) function can be used to request unbuffered output, line-buffered output, or block-buffered output. The default behavior changes from stream to stream -- if standard output and standard error are writing to the terminal, then they are line-buffered and unbuffered by default. Otherwise, block-buffering is the default.
You might wish to manually select line buffering if your data is naturally line-oriented, to prevent interleaved data. If your data is not line-oriented, you might wish to use unbuffered output, or leave it block-buffered but manually flush the data whenever you've accumulated a single "unit" of output.
If you are writing more than BUFSIZ bytes at a time, your writes might become interleaved. The setvbuf(3) function can help prevent the interleaving.
It might be premature to talk about performance, but line-buffering is going to be slower than block buffering. If you're logging near the speed of the disk, you might wish to take another approach entirely to ensure your writes aren't interleaved.
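For example, a minimal sketch of forcing line buffering on the reopened streams (reusing the log-file names from the question) might be:
/* Sketch only: reopen the standard streams onto log files and request
 * line buffering so each log line reaches the file in a single write(2). */
if (freopen(console_logfile, "a+", stdout) == NULL ||
    freopen(error_logfile, "a+", stderr) == NULL) {
    perror("freopen");
    exit(1);
}
setvbuf(stdout, NULL, _IOLBF, 0);   /* line-buffered */
setvbuf(stderr, NULL, _IONBF, 0);   /* unbuffered */

printf("Hello World!\n");           /* the trailing newline flushes the line */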
Edit: this answer was incorrect; it does work. The original reasoning follows.
So the race condition would be:
process 1 opens the file for append, then
later, process 2 opens it for append, then
later still, process 1 writes and closes, then
finally, process 2 writes and closes.
I'd be impressed if that 'worked', because it isn't clear to me what working should mean. I assume 'working' means all of the bytes written by the two processes end up in the log file? I'd expect that they both write starting at the same byte offset, so one will overwrite the other's bytes. It will all be okay up to and including step 3 and will only show up as a problem at step 4. Seems like an easy test to write: open, getchar ... write, close.
Is it critical that they can have the file open simultaneously? A more obvious solution, if the write is quick, is to open the file exclusively.
For a quick check on your system, try:
/* write the first command line argument to a file called foo
* stackoverflow topic 9880935
*/
#include <stdio.h>
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
int main (int argc, const char * argv[]) {
    if (argc < 2) {
        fprintf(stderr, "Error: need some text to write to the file Foo\n");
        exit(1);
    }
    FILE* fp = freopen("foo", "a+", stdout);
    if (fp == NULL) {
        perror("Error failed to open file");
        exit(1);
    }
    fprintf(stderr, "Press a key to continue\n");
    (void) getchar(); /* Yes, I really mean to ignore the character */
    if (printf("%s\n", argv[1]) < 0) {
        perror("Error failed to write to file");
        exit(1);
    }
    fclose(fp);
    return 0;
}
Given the following call:
freopen("file.txt","w",stdout);
which redirects stdout into a file, how do I make stdout redirect back to the console?
I will note that, yes, there are other questions similar to this, but they are about Linux/POSIX. I'm using Windows.
You can't assign to stdout, which nullifies one set of solutions that rely on it.
dup() and dup2() are not native to Windows, nullifying the other set. As said, POSIX functions don't apply (unless you count fdopen()).
You should be able to use _dup to do this.
Something like this should work (or you may prefer the example listed in the _dup documentation):
#include <io.h>
#include <stdio.h>
...
{
    int stdout_dupfd;
    FILE *temp_out;

    /* duplicate stdout */
    stdout_dupfd = _dup(1);
    temp_out = fopen("file.txt", "w");

    /* replace stdout with our output fd */
    _dup2(_fileno(temp_out), 1);

    /* output something... */
    printf("Woot!\n");

    /* flush output so it goes to our file */
    fflush(stdout);
    fclose(temp_out);

    /* Now restore stdout */
    _dup2(stdout_dupfd, 1);
    _close(stdout_dupfd);
}
An alternate solution is:
freopen("CON","w",stdout);
Per Wikipedia, "CON" is a special keyword which refers to the console.
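A small sketch of the round trip using that keyword (Windows-specific; the file name is just illustrative):
/* Sketch only: redirect stdout to a file, then back to the console via "CON". */
#include <stdio.h>

int main(void)
{
    freopen("file.txt", "w", stdout);
    printf("this goes to file.txt\n");
    fflush(stdout);

    freopen("CON", "w", stdout);   /* "CON" names the console device on Windows */
    printf("this goes to the console again\n");
    return 0;
}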
After posting the answer I noticed that this is a Windows-specific question. The below might still be useful to other people in the context of the question. Windows also provides _fdopen, so maybe simply changing 0 to a proper HANDLE would adapt this Linux solution to Windows.
stdout = fdopen(0, "w")
#include <stdio.h>
#include <stdlib.h>
int main()
{
    freopen("file.txt","w",stdout);
    printf("dupa1");
    fclose(stdout);
    stdout = fdopen(0, "w");
    printf("dupa2");
    return 0;
}
Take note that the file descriptors for stdin, stdout, and stderr (0, 1, 2) are not necessarily the same as the 'special variables' printf() and the like use, although in most cases they output to the same devices at program start (not if you start changing things in the middle of your program, or if tty redirects are in place). stdin, stdout, and stderr are FILE * pointers. Both concepts need to be 'redirected' separately from each other, with their own methods: dup2 is for duplicating file descriptors, not FILE pointers. For FILE * pointers such as stdin, stdout, and stderr, use freopen(), but that will literally only affect printf and derivatives.
This works for me:
#include <stdio.h>
int main()
{
    FILE* original_stdout = stdout;

    stdout = fopen("new_stdout.txt", "w");
    printf("ciao\n");
    fclose(stdout);

    stdout = original_stdout;
    printf("a tutti\n");
    return 0;
}