Redirecting stdin to a temporary file? - c

I want stdin to be redirected to a string of text supplied in my program. I want to write the string of text to a temporary file and then point stdin at that file. I'm a little unsure about this code because it seems to blend low-level calls like write() and dup() with higher-level calls like fclose(). Is this the correct approach?
char* buffer = "This is some text";
int nBytes = strlen(buffer);
FILE* file = tmpfile();
int fd = fileno(file);
write(fd,buffer,nBytes);
rewind(file);
dup2(fd,0);
fclose(file);
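For comparison, here is a minimal sketch (not from the original post) that stays entirely at the descriptor level by using mkstemp() instead of tmpfile(); the template path is just an example, and most error handling is omitted:
/* Minimal sketch, assuming a POSIX system. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char tmpl[] = "/tmp/stdin_demo_XXXXXX";  /* hypothetical template path */
    int fd = mkstemp(tmpl);                  /* create and open the temp file */
    if (fd == -1)
        return 1;
    const char *text = "This is some text\n";
    write(fd, text, strlen(text));           /* check the return value in real code */
    lseek(fd, 0, SEEK_SET);                  /* rewind the descriptor itself */
    dup2(fd, STDIN_FILENO);                  /* fd 0 (stdin) now reads from the file */
    close(fd);
    unlink(tmpl);                            /* remove the name; fd 0 keeps the file open */

    char line[64];
    if (fgets(line, sizeof line, stdin))     /* reads back "This is some text" */
        printf("stdin gave: %s", line);
    return 0;
}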
EDIT
As per a suggestion in the comments, I tried approaching this problem with pipes. If I want to use pipes, would this be the correct approach? I still want feedback on the first approach as well:
int fd[2];
pipe(fd); // For sake of simplicity assume returns 0 (no error).
char* buffer = "This is some text";
int nBytes = strlen(buffer);
write(fd[1],buffer,nBytes);
close(fd[1]);
dup2(fd[0],0);
close(fd[0]);
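One caveat the snippet doesn't show: a pipe has finite capacity (commonly 64 KiB on Linux), so writing everything before anything reads it only suits small payloads. A sketch of the same idea with the usual checks and a read-back to confirm the redirection (again, not from the original post):
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) == -1)
        return 1;
    const char *buffer = "This is some text";
    size_t nBytes = strlen(buffer);
    if (write(fd[1], buffer, nBytes) != (ssize_t)nBytes)  /* fits well under pipe capacity */
        return 1;
    close(fd[1]);               /* readers of fd 0 will see EOF after the text */
    dup2(fd[0], STDIN_FILENO);
    close(fd[0]);

    char line[64];
    if (fgets(line, sizeof line, stdin))
        printf("stdin gave: %s\n", line);
    return 0;
}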

Related

Write to the same file with different processes in order of occurrence

I am working on a UNIX-based operating system (Lubuntu 14.10). I have several processes that need to print a message to the same file and to standard output.
When I print my messages to the screen, they come out the way I want, in order of occurrence, e.g.:
Process1_message1
Process2_message1
Process3_message1
Process1_message2
Process2_message2
Process3_message2
...
However, when I check the output file it is like below:
Process1_message1
Process1_message2
Process2_message1
Process2_message2
Process3_message1
Process3_message2
...
I use fprintf(FILE *ptr, char *str) to write the message to the file.
Note: I opened the file with following format in the main process:
fptr=fopen("output.txt", "a");
where fptr is a global FILE *.
Any help will be appreciated. Thank you!
fprintf() isn't going to work. It's prone to being translated into multiple calls to write() to actually write out the data, which produces exactly the interleaving you posted: you call fprintf() once, and under the covers it may make several write() calls to get the data into the file.
You need to use open( filename, O_WRONLY | O_CREAT | O_APPEND, 0600 ) and write the data with something like this, so that each message is emitted with a single write() call, which is guaranteed to be atomic:
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

ssize_t myprintf( int fd, const char *fmt, ... )
{
    char buffer[ 1024 ];
    ssize_t bytesWritten;
    va_list argp, argp2;
    va_start( argp, fmt );
    va_copy( argp2, argp );      /* vsnprintf() consumes argp; keep a copy for the retry */
    int bytes = vsnprintf( buffer, sizeof( buffer ), fmt, argp );
    if ( bytes >= 0 && ( size_t ) bytes < sizeof( buffer ) )
    {
        bytesWritten = write( fd, buffer, bytes );
    }
    else if ( bytes >= 0 )
    {
        /* buffer was too small, get a bigger one
           (a robust version would also check the malloc() and write() results) */
        char *bufptr = malloc( bytes + 1 );
        bytes = vsnprintf( bufptr, bytes + 1, fmt, argp2 );
        bytesWritten = write( fd, bufptr, bytes );
        free( bufptr );
    }
    else
    {
        bytesWritten = -1;       /* formatting error */
    }
    va_end( argp2 );
    va_end( argp );
    return( bytesWritten );
}
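A minimal usage sketch (not part of the original answer; the file name and message are placeholders): open the log with O_APPEND and let each process call myprintf(), so every message lands in the file via a single write():
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical example: open the shared log and emit one message. */
    int fd = open("output.txt", O_WRONLY | O_CREAT | O_APPEND, 0600);
    if (fd == -1)
        return 1;
    myprintf(fd, "Process%d_message%d\n", 1, 1);   /* one write() per message */
    close(fd);
    return 0;
}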
Most likely, your problem is that the file output is fully buffered, so the output from each process doesn't appear until the standard I/O buffer for the stream (in that process) is full.
You can probably work around it sufficiently by setting line buffering:
FILE *fptr = fopen("output.txt", "a");
if (fptr != 0)
{
    setvbuf(fptr, 0, _IOLBF, BUFSIZ);
    …code using fptr — including your fork() calls…
    fclose(fptr);
}
Every time a process writes a line to the buffer, it will be flushed. You might run into problems if your output lines are longer than BUFSIZ; then you might want to increase the size passed to setvbuf() to the largest line length you need written atomically.
If that still isn't good enough, or if you need to be able to write groups of lines at one time, you'll have to go to a solution using file descriptors as in Andrew Henle's answer. You might want to look at the O_SYNC and O_DSYNC options to open().
Buffer flushing in stdio is different when you are writing to a terminal (isatty(fileno(fptr)) returns true; see isatty(3)) than when you are writing to a file. For a file, stdio only issues a write(2) system call when the buffer fills up, which makes each process's messages appear together (each process accumulates its output in a single buffer that is flushed on exit). On ttys, output is flushed when the buffer fills up or when a '\n' character is written to it (a compromise between buffered and unbuffered output).
You can force buffer flushing with fflush(fptr); after fprintf(fptr, ...); or even do fflush(NULL); (which flushes all output buffers in one call).
But be careful: the write(2) calls are what control atomicity (not the fprintf() calls), so if a single fprintf() has to emit several buffers' worth of output, be prepared for it to come out interleaved.
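As a concrete example of the fflush() pattern above, a two-line sketch using the question's fptr (the message text is just a placeholder):
fprintf(fptr, "Process%d_message%d\n", 1, 1);
fflush(fptr);   /* push this process's line to the file right away */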

reading from a file descriptor in C

(Correct me if I'm wrong on my terms.) I need to read from a file descriptor, but read() takes an int for the number of bytes to read, OR I can use O_NONBLOCK, but I still have to set up a buffer of an unknown size, which makes this difficult. Here's what I have so far.
This is my function that handles all the polling and mkfifo(); N is already defined in main:
struct pollfd pfd[N];
int i;
for(i = 0; i < N; i++)
{
    char fileName[32];
    snprintf (fileName, sizeof(fileName), "%d_%di", pid, i);
    mkfifo(fileName, 0666);
    pfd[i].fd = open(fileName, O_RDONLY | O_NDELAY);
    pfd[i].events = POLLIN;
    pfd[i].revents = 0;
    snprintf (fileName, sizeof(fileName), "%d_%do", pid, i);
    mkfifo(fileName, 0666);
    i++;
    pfd[i].fd = open(fileName, O_WRONLY | O_NDELAY);
    pfd[i].events = POLLOUT;
    pfd[i].revents = 0;
    i--;
}
while(1)
{
    int len, n;
    n = poll(pfd, N, 2000);
    if( n < 0 )
    {
        printf("ERROR on poll");
        continue;
    }
    if(n == 0)
    {
        printf("waiting....\n");
        continue;
    }
    for(i = 0; i < N; i++)
    {
        char buff[1024]; <---i dont want to do this
        if (pfd[i].revents & POLLIN)
        {
            printf("Processing input....\n");
            read(pfd[i].fd, buff, O_NONBLOCK);
            readBattlefield(buff);
            print_battleField_stats();
            pfd[i].fd = 0;
        }
    }
}
I also read somewhere that once read() has read all the incoming data, it empties the pipe, meaning I can reuse the same pipe for more incoming data. But it doesn't seem to empty the pipe, because I can't use the same pipe again. I asked my professor, but all he said was to use something like scanf; how do I use scanf if fscanf takes a FILE stream and pollfd.fd is an int? Essentially my question is: how do I read the incoming data through the file descriptor using scanf or something similar? Using scanf would make handling the data easier for me.
EDIT:
In another terminal I run cat file > (named_file)
and my main program reads the input data. Here's what the input data looks like:
3 3
1 2 0
0 2 0
3 0 0
The first two numbers are the grid information and the player number, and after that comes the grid. This is a simplified version; I'll be dealing with hundreds of players and grids in the thousands.
char buff[1024]; <---i dont want to do this
What would you like to do then? This is how it works. This is not how it works:
read(pfd[i].fd, buff, O_NONBLOCK);
This will compile because O_NONBLOCK is an integer #define, but it is absolutely and unequivocally incorrect. The third argument to read() is the number of bytes to read. Not a flag. Period. It may be zero, but what you've done here is pass an arbitrary number -- whatever the value of O_NONBLOCK is, which could easily be more than 1024, the size of your buffer. It does not make the read non-blocking. recv() is similar to read() and does take such flags as a fourth argument, but it only works on sockets, not on an ordinary file descriptor. If you want to make a file descriptor non-blocking, you must do it with open() or fcntl().
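If you actually want the descriptor to be non-blocking, the usual pattern is fcntl(); a sketch, where fd stands for whichever descriptor you opened:
#include <fcntl.h>

int flags = fcntl(fd, F_GETFL, 0);            /* fetch the current file status flags */
if (flags != -1)
    fcntl(fd, F_SETFL, flags | O_NONBLOCK);   /* add O_NONBLOCK to them */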
how to read the incoming data through the file descriptor using scan or of other sort?
You can create a FILE* stream from an open descriptor with fdopen().
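A sketch of that, reusing the pfd array from the question (note that stdio on a descriptor opened with O_NDELAY needs care, since reads can fail with EAGAIN):
FILE *in = fdopen(pfd[i].fd, "r");            /* wrap the descriptor once */
if (in != NULL) {
    int rows, player;
    if (fscanf(in, "%d %d", &rows, &player) == 2) {
        /* ... read the grid with further fscanf() calls ... */
    }
}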
i also read somewhere that once read() reads all the data coming, it empties the pipe, meaning i can use the same again for another incoming data. but it doesnt empty the pipe because i cant use the same pipe again.
Once you reach EOF (because the writer closed the connection), read() will return 0, and continue to return 0 immediately until someone opens the pipe again.
If you set the descriptor non-blocking, read() will always return immediately; if someone is connected but there is nothing to read, it will return -1 with errno set to EAGAIN. See man 2 read.
man fifo is definitely something you should read; if there's anything you aren't sure about, ask a specific question based on that.
And don't forget: Fix that read() call. It's wrong. W R O N G. Your prof/TA/whoever will not miss that.

C language. Read from stdout

I am having some trouble with a library function.
I have to write some C code that uses a library function which prints its internal steps on the screen.
I am not interested in its return value, only in the printed steps.
So I think I have to read from standard output and copy the printed strings into a buffer.
I have already tried fscanf and dup2, but I can't manage to read from standard output. Could anyone please help me?
An expanded version of the previous answer, without using files, capturing stdout in a pipe instead:
#define _GNU_SOURCE       /* needed for pipe2(); plain pipe(pipefd) would also work */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
    int stdout_bk;        /* fd backup for stdout */
    printf("this is before redirection\n");
    stdout_bk = dup(fileno(stdout));

    int pipefd[2];
    pipe2(pipefd, 0);     /* or O_NONBLOCK */

    /* What used to be stdout will now go to the pipe. */
    dup2(pipefd[1], fileno(stdout));

    printf("this is printed much later!\n");
    fflush(stdout);
    write(pipefd[1], "good-bye", 9);   /* 9 bytes, so the terminating NUL goes too */
    close(pipefd[1]);

    dup2(stdout_bk, fileno(stdout));   /* restore stdout */
    printf("this is now\n");

    char buf[101];
    read(pipefd[0], buf, 100);
    printf("got this from the pipe >>>%s<<<\n", buf);
    return 0;
}
Generates the following output:
this is before redirection
this is now
got this from the pipe >>>this is printed much later!
good-bye<<<
You should be able to open a pipe, dup the write end onto stdout, and then read from the read end of the pipe, something like the following (plus error checking):
int fds[2];
pipe(fds);
dup2(fds[1], STDOUT_FILENO);   /* stdout is a FILE*; dup2() needs the descriptor (fd 1) */
read(fds[0], buf, buf_sz);
FILE *fp;
int stdout_bk;                      /* fd backup for stdout */
stdout_bk = dup(fileno(stdout));
fp = fopen("temp.txt", "w");        /* send output to a file, read it back from there afterwards */
dup2(fileno(fp), fileno(stdout));
/* ... */
fflush(stdout);
fclose(fp);
dup2(stdout_bk, fileno(stdout));    /* restore stdout */
I'm assuming you meant standard input. Another possible function is gets; use man gets to understand how it works (pretty simple), though note that gets() is unsafe and has been removed from the C standard. Please show your code and explain where you failed for a better answer.

Read/writing on a pipe, accomplishing file copying in C

I am trying to read from a file, write it to a pipe, and in a child process read from the pipe and write it to a new file. The program is passed two parameters: the name of the input file, and the name of the file to be copied to. This is a homework project, but I have spent hours online and have found only ways of making it more confusing. We were given two assignments, this and matrix multiplication with threads. I got the matrix multiplication with no problems, but this one, which should be fairly easy, I am having so much trouble with. I get the first word of the file that I am copying, but then a whole bunch of garble.
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
int main(int argc, char *argv[]) {
    if(argc < 3) {
        printf("Not enough arguments: FileCopy input.txt copy.txt\n");
        exit(0);
    }
    char buffer[200];
    pid_t pid;
    int fds[2];
    pipe(fds);
    pid = fork();
    if (pid == 0) { /* The child process */
        //wait(NULL);
        write(1, "hi i am in child\n", 17);
        int copy = open(argv[2], O_WRONLY | O_CREAT, S_IWUSR | S_IRUSR | S_IXUSR | S_IRGRP);
        FILE* stream;
        close(fds[1]);
        stream = fdopen(fds[0], "r");
        while (fgets(buffer, sizeof(buffer), stream) != NULL) {
            //printf("%s\n", buffer);
            write(copy, buffer, 200);
            //printf("kjlkjljljlkj\n");
            //puts(buffer);
        }
        close(copy);
        close(fds[0]);
        exit(0);
    }
    else {
        write(1, "hi i am in parent\n", 18);
        FILE* input = fopen(argv[1], "r");
        FILE* stream;
        close(fds[0]);
        stream = fdopen(fds[1], "w");
        /*while (fscanf(input, "%s", buffer) != EOF) {
            //printf("%s\n", buffer);
            fprintf(stream, "%s\n", buffer);
            fflush(stream);
            //printf("howdy doody\n");
        }*/
        fgets(buffer, sizeof(buffer), input);
        printf("%s", buffer);
        fprintf(stream, "%s", buffer);
        fflush(stream);
        close(fds[1]);
        fclose(input);
        wait(NULL);
        exit(0);
    }
    return 0;
}
Am I doing the reads and writes wrong?
Am I doing the reads and writes wrong?
Yes.
In the child, you are mixing string-oriented buffered I/O (fgets()) with block-oriented binary I/O. (That is, write().) Either approach will work, but it would be normal practice to pick one or the other.
If you mix them, you have to consider more aspects of the problem. For example, in the child, you are reading just one line from the pipe but then you write the entire buffer to the file. This is the source of the garbage characters you are probably seeing in the file.
In the parent, you are sending only a single line with no loop. And after that, you close the underlying file descriptor before you fclose() the buffered I/O system. This means when fclose tries to flush the buffer, the now-closed descriptor will not work to write any remaining data.
You can either use write()/read()/close(), which are the Posix-specified kernel-level operations, or you can use fdopen/puts/gets/fclose which are the ISO C - specified standard I/O library operations. Now, there is one way of mixing them that will work. If you use stdio in the parent, you could still use read/write in the child, but then you would be making kernel calls for each line, which would not usually be an ideal practice.
You should generally read from and write to pipes only with the read()/write() calls.
You should close the ends of the pipe you don't use: the child keeps only the read end, the parent only the write end.
Then write from the parent into the pipe using the write() system call, and read in the child using the read() system call.
Look here for a good explanation.
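A sketch of what that looks like inside the question's program (it reuses pid, fds[] and argv[] from the question; error checking trimmed):
/* Descriptor-level copy through the pipe, not a drop-in replacement. */
char buf[4096];
ssize_t n;

if (pid == 0) {                          /* child: pipe -> output file */
    close(fds[1]);                       /* child only reads from the pipe */
    int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
    while ((n = read(fds[0], buf, sizeof buf)) > 0)
        write(out, buf, n);              /* write exactly what was read */
    close(fds[0]);
    close(out);
} else {                                 /* parent: input file -> pipe */
    close(fds[0]);                       /* parent only writes to the pipe */
    int in = open(argv[1], O_RDONLY);
    while ((n = read(in, buf, sizeof buf)) > 0)
        write(fds[1], buf, n);
    close(fds[1]);                       /* child's read() then returns 0 (EOF) */
    close(in);
}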

I/O issues writing on file

I'm having a hard time figuring out why this piece of code doesn't work as it should. I am learning the basics of I/O operations, and I have to come up with a C program that writes whatever is given on standard input to a 'log.txt' file, halting as soon as the word 'stop' is entered.
So my code is:
#include "main.h"
#define SIZE 1024
int main(int argc, char *argv[])
{
    int fd;
    int readBytes;
    int writBytes;
    char *buffer;
    if ((fd = open("log.txt", O_WRONLY|O_APPEND)) < 0)
    {
        perror("open");
    }
    buffer = (char *) calloc (SIZE, sizeof(char));
    while ((readBytes = read(0, buffer, SIZE) < SIZE)&&(strncmp(buffer, "stop", 4) != 0));
    if ((writBytes = write(fd, buffer, SIZE)) < 0)
    {
        perror("write");
    }
    if ((close(fd)) < 0)
    {
        perror("close");
    }
}
If I enter:
this is just a text
stop
The output is
stop
is just a text
If I enter more than a sentence:
this is just a text
this is more text
and text again
stop
This is what is logged:
stop
ext again
xt
t
And on top of that if I try to edit the log.txt file from vim or just a text editor I can see '\00's. I guess \00 stands for all the bytes left empty from the 1024 available, right? How can I prevent that from happening?
It looks like you're expecting
(readBytes = read(0, buffer, SIZE) < SIZE)
to somehow accumulate things in buffer. It doesn't. Every subsequent read will put whatever it read at the start of the buffer, overwriting what the previous read has read.
You need to put your write in the while block - one write for every read, and only write as much as you read, otherwise you'll write garbage (zeros from the calloc and/or leftovers from the previous read) in your log file.
Also note that while your technique will probably work most of the time for a line-buffered input stream, it will not do what you expect if you redirect from a file or a pipe. You should be using formatted input functions (like getline if your implementation has it, scanf, or fgets).
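A sketch of the read()/write() fix described above, keeping the question's variable names: one write() per read(), writing only the readBytes that were actually read, and stopping on the "stop" line:
while ((readBytes = read(0, buffer, SIZE)) > 0)
{
    if (strncmp(buffer, "stop", 4) == 0)
        break;
    if ((writBytes = write(fd, buffer, readBytes)) < 0)   /* only the bytes actually read */
    {
        perror("write");
        break;
    }
}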

Resources