I'm writing a short program that polls the buffer of a named pipe. To test it out, I log in as 'nobody' and echo into the pipe. While that command is hanging, I log in as a different user and run the program that reads the buffer. When it runs, the program returns nothing and the other user is logged out of the system. Here's the read function:
void ReadOut( char * buf )
{
    ZERO_MEM( buffer, BUF_SIZE );   // buffer, BUF_SIZE and ZERO_MEM are defined elsewhere
    int pipe = open( buf, O_RDONLY | O_NONBLOCK );
    if( pipe < 0 )
    {
        printf( "Error %d has occurred.\n", pipe );
        return;
    }
    while( read( pipe, buffer, 2 ) > 0 ) printf( "%s \n", buffer );
    close( pipe );
    return;
}
The function works as expected when I take out O_NONBLOCK.
When you mark a file descriptor as non-blocking, all the operations that would normally block (for example read(2) and write(2)) instead return -1 and set errno to EAGAIN when no data is available.
So in your case read() immediately returns -1, signaling "I'm not ready right now, try again later".
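If you want to keep O_NONBLOCK, treat EAGAIN as "no data yet, try again" rather than as a fatal error. Here is a minimal sketch of what that loop could look like (the function name, buffer size, and polling interval are mine, not from the question):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

void ReadOutNonBlocking( const char * path )
{
    char buffer[ 64 ];
    int fd = open( path, O_RDONLY | O_NONBLOCK );
    if( fd < 0 )
    {
        perror( "open" );
        return;
    }
    for( ;; )
    {
        ssize_t n = read( fd, buffer, sizeof( buffer ) - 1 );
        if( n > 0 )
        {
            buffer[ n ] = '\0';       // terminate before printing as a string
            printf( "%s\n", buffer );
        }
        else if( n == 0 )
        {
            break;                    // EOF: no writer currently has the FIFO open
        }
        else if( errno == EAGAIN || errno == EWOULDBLOCK )
        {
            usleep( 100000 );         // no data yet: wait briefly and poll again
        }
        else
        {
            perror( "read" );         // a real error
            break;
        }
    }
    close( fd );
}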
I can't figure out how to get rid of a delay. It seems like a buffering delay, but I haven't had any luck with setvbuf or fflush...
I have a C/C++ program that executes a Python script which immediately starts printing to stdout (quite a bit); however, there seems to be a huge delay when I try to read that output in my program. I have included a basic version of what I am doing below. In the output I see TEST0 immediately, and then after quite some time I get a huge dump of prints. I tried setvbuf but that didn't seem to make a difference. I think either I am doing something wrong or I'm just not understanding what's happening.
Update: I am running on Linux.
Update 2: Fixed a code typo (multiple forks).
Update 3: Adding stdout flushes in the Python script fixed the problem; no more delays! Thanks @DavidGrayson
int pipeFd[2];
pid_t pid;
char buff[PATH_MAX];
std::string path = "/usr/bin/python3";
std::string script = ""; // use path to python script here
std::string args = "";   // use args for python script here

if ( pipe( pipeFd ) == -1 ) // create the pipe before forking
{
    printf( "[ERROR] can't create pipe\n" );
}

pid = fork();
if ( pid == -1 )
{
    printf( "[ERROR] can't fork\n" );
}
else if ( pid == 0 )
{
    // child: route stdout into the write end of the pipe
    close( pipeFd[0] );
    dup2( pipeFd[1], STDOUT_FILENO );
    close( pipeFd[1] );
    execl( path.c_str(), "python3", script.c_str(), args.c_str(), (char*)NULL );
    printf( "[ERROR] script execl failed\n" );
    exit( 1 );
}
else
{
    // parent: close the unused write end so a closed pipe reads as EOF
    close( pipeFd[1] );
    //setvbuf(stdout, NULL, _IONBF, 0);
    //setvbuf(stdin, NULL, _IONBF, 0);
    printf( "TEST0\n" );
    fflush( stdout );
    // it takes a really long time to see this next print
    read( pipeFd[0], buff, 1 );
    printf( "TEST1:%c\n", buff[0] );
    fflush( stdout );
}
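For anyone hitting the same delay: it comes from Python block-buffering its stdout when it isn't connected to a terminal. Besides flushing inside the script (the fix from Update 3), Python's -u flag disables its output buffering entirely. A minimal sketch of that variant of the execl() call above:

// child process: run the script with unbuffered stdout/stderr
execl( path.c_str(), "python3", "-u", script.c_str(), args.c_str(), (char*)NULL );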
My code is working fine. The only problem I'm getting is that after the program writes the text into the file (i.e. text1.txt), the file shows some weird symbols like \00 when I actually open it.
int fd;
fd = open("text1.txt", O_RDWR);
char text[] = "This is my file.";
write(fd,text,sizeof(text));
You need to ensure that open succeeded instead of blindly writing to the file-descriptor.
Always check the return value of a syscall (and most C standard library functions) and check errno if the return value indicated an error.
Your string literal will include a hidden \0 (NUL) character after the dot.
Writing text directly to the file will therefore include the trailing \0 which is what you're seeing.
These issues can be rectified by:
Always checking the return value of a syscall, and in this case: printing a helpful error message and performing any necessary cleanup (the goto closeFile; statement).
Because C doesn't have native try/catch or RAII, it's difficult to write terse error-handling and cleanup code, but using goto for common cleanup code is generally acceptable in C, hence the goto closeFile statement.
Using strlen to get the actual length of the string.
Though in a pinch it's okay to use sizeof(text) - 1, provided you're in a scope where the C compiler knows the length of text: sizeof() won't work across a function boundary because of array-to-pointer decay (see the short sketch after the code below).
Like so:
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

void writeToFile( void ) {
    const char text[] = "This is my file.";
    // Use `O_WRONLY` instead of `O_RDWR` if you're only writing to the file.
    // Use `O_CREAT` to create the file if it doesn't already exist; note that
    // `O_CREAT` requires the third `mode` argument (here rw-r--r--).
    int fd = open( "text1.txt", O_CREAT | O_WRONLY, 0644 );
    if( fd == -1 ) {
        printf( "Error opening file: errno: %d - %s\n", errno, strerror( errno ) );
        return;
    }
    size_t textLength = strlen( text ); // length *without* the trailing NUL
    ssize_t written = write( fd, text, textLength );
    if( written == -1 ) {
        printf( "Error writing text: errno: %d - %s\n", errno, strerror( errno ) );
        goto closeFile;
    }
    else if( (size_t)written < textLength ) {
        printf( "Warning: Only %zd of %zu bytes were written.\n", written, textLength );
        goto closeFile;
    }
    else {
        // Carry on as normal.
    }
closeFile:
    if( close( fd ) == -1 ) {
        printf( "Error closing file: errno: %d - %s\n", errno, strerror( errno ) );
    }
}
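As an aside on the sizeof point above, here is a quick standalone sketch (my illustration, not from the original answer) showing why sizeof stops working once the array has decayed to a pointer:

#include <stdio.h>

static void takesPointer( const char text[] ) {
    // `text` is really a `const char *` here, so this prints
    // sizeof(const char *), e.g. 8 on a typical 64-bit system.
    printf( "inside function: %zu\n", sizeof( text ) );
}

int main( void ) {
    const char text[] = "This is my file.";
    printf( "at definition:  %zu\n", sizeof( text ) ); // 17: 16 chars + NUL
    takesPointer( text );
    return 0;
}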
I am writing a program that forks multiple processes. They all share a file called "character". What I want to do is to have every process read the only character in the file and then replace it with its own character so that the other processes can do the same. The file is the only way the processes can communicate with each other. How can I erase the only character in the file and put a new one in its place? I was advised to use freopen() (which closes the file and reopens it, erasing its previous contents), but I am not sure it is the best way to achieve this.
You should not have to reopen the file; that gains you nothing. If you're worried about each process buffering input or output and you want to use the FILE *-based stdio functions, disable buffering.
But if I'm reading your question correctly (you want each process to replace the one character in the file when a specific value is held there, and that value differs for each process), this will do what you want, using POSIX open(), pread(), and pwrite(). (You're already using POSIX fork(), so using low-level POSIX I/O makes things a lot simpler; note that pread() and pwrite() eliminate the need for seeking.)
I'll say this is what I think you're trying to do:
// header files and complete error checking are omitted for clarity
int fd = open( filename, O_RDWR );

// fork() here?

// loop until we read the char we want from the file
for ( ;; )
{
    char data;
    ssize_t result = pread( fd, &data, sizeof( data ), 0 );

    // pread failed or didn't return a full character
    if ( result != sizeof( data ) )
    {
        break;
    }

    // if data read matches this process's value, replace the value
    // (replace 'a' with 'b', 'c', 'z' or '*' - whatever value you
    // want the current process to wait for)
    if ( data == 'a' )
    {
        data = 'b';
        result = pwrite( fd, &data, sizeof( data ), 0 );
        break;
    }
}

close( fd );
For any decent number of processes, that's going to put a lot of stress on your filesystem.
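One way to take some of that pressure off (my suggestion, not from the original answer) is to back off briefly between polls instead of spinning, e.g. with nanosleep():

#include <time.h>

// inside the for ( ;; ) loop, when the character doesn't match yet:
struct timespec delay = { 0, 10000000L }; // 10 ms
nanosleep( &delay, NULL );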
If you really want to start with fopen() and use that family of calls, this might work depending on your implementation:
FILE *fp = fopen( filename, "rb+" );

// disable buffering
setbuf( fp, NULL );

// fork() here???

// loop until the desired char value is read from the file
for ( ;; )
{
    char data;

    // with fread(), we need to fseek()
    fseek( fp, 0, SEEK_SET );
    int result = fread( &data, 1, 1, fp );
    if ( result != 1 )
    {
        break;
    }

    if ( data == 'a' )
    {
        data = 'b';
        fseek( fp, 0, SEEK_SET );
        fwrite( &data, 1, 1, fp );
        break;
    }
}

fclose( fp );
Again, that assumes I'm reading your question properly. Note that the POSIX rules John Bollinger mentioned in his comments regarding multiple handles don't apply here, because the streams are explicitly not buffered.
I've written a Zsh module. In it I have a builtin, i.e. a function mapped to a Zsh command. This function duplicates its stdin file descriptor:
/* Duplicate standard input */
oconf->stream = fdopen( dup( fileno( stdin ) ), "r" );
Then I create a thread that reads the descriptor (oconf->stream):
/* Run the thread */
if ( pthread_create( &workers[ oconf->id ], NULL, process_input, oconf ) ) {
    fprintf( stderr, "Error creating thread\n" );
    free_oconf( oconf );
    return 1;
}
In that thread I read oconf->stream, and it works. When I close the stream (in the same thread):
if ( -1 != fcntl( fileno( oconf->stream ), F_GETFD ) ) {
    if ( 0 != fclose( oconf->stream ) ) {
I sometimes get "Bad descriptor" error. That's the reason for the fcntl() check before fclose(), without it much more errors were generated. Turns out that adding sleep():
if ( -1 != fcntl( fileno( oconf->stream ), F_GETFD ) ) {
    sleep( 1 );
    if ( 0 != fclose( oconf->stream ) ) {
"restores" the large amount of errors. So there's a race condition, something is closing the file descriptor between fcntl() and fclose(). My question:
Is it possible that when the writer process ends, the pipe created between the writer's stdout and the reader's stdin gets invalidated on the other end automatically?
The code I pasted is from the "stdin reader". I read data from stdin (duplicated to oconf->stream) and finish by closing it with fclose(). Zsh creates a writer and the stdout <-> stdin connection when I do cat /some/data.txt | my_reader_builtin.
I want to write my own version of the head Unix command, but my program is not working.
I am trying to print the first 10 lines of a text file, but instead the program prints all the lines. I specify the file name and number of lines to print via command-line arguments. I am only required to use Unix system calls such as read(), open() and close().
Here is the code:
#include "stdlib.h"
#include "stdio.h"
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>
#define BUFFERSZ 256
#define LINES 10
void fileError( char*, char* );
int main( int ac, char* args[] )
{
    char buffer[BUFFERSZ];
    int linesToRead = LINES;
    int in_fd, rd_chars;

    // check for invalid argument count
    if ( ac < 2 || ac > 3 )
    {
        printf( "usage: head FILE [n]\n" );
        exit( 1 );
    }

    // check for n
    if ( ac == 3 )
        linesToRead = atoi( args[2] );

    // attempt to open the file
    if ( ( in_fd = open( args[1], O_RDONLY ) ) == -1 )
        fileError( "Cannot open ", args[1] );

    int lineCount = 0;
    // count no. of lines inside file
    while ( read( in_fd, buffer, 1 ) == 1 )
    {
        if ( *buffer == '\n' )
        {
            lineCount++;
        }
    }
    lineCount = lineCount + 1;
    printf( "Linecount: %i\n", lineCount );

    int Starting = 0, xline = 0;
    // xline = totallines - requiredlines
    xline = lineCount - linesToRead;
    printf( "xline: %i \n\n", xline );
    if ( xline < 0 )
        xline = 0;

    // count for no. of line to print
    int printStop = lineCount - xline;
    printf( "printstop: %i \n\n", printStop );

    if ( ( in_fd = open( args[1], O_RDONLY ) ) == -1 )
        fileError( "Cannot open ", args[1] );

    // read and print till required number
    while ( Starting != printStop )
    {
        read( in_fd, buffer, BUFFERSZ );
        Starting++; // increment starting
    }
    //read( in_fd, buffer, BUFFERSZ );
    printf( "%s \n", buffer );

    if ( close( in_fd ) == -1 )
        fileError( "Error closing files", "" );

    return 0;
}

void fileError( char* s1, char* s2 )
{
    fprintf( stderr, "Error: %s ", s1 );
    perror( s2 );
    exit( 1 );
}
What am I doing wrong?
It's very odd that you open the file and scan it to count the total number of lines before going on to echo the first lines. There is absolutely no need to know in advance how many lines there are altogether before you start echoing lines, and it does nothing useful for you. If you're going to do it anyway, however, then you ought to close() the file before you re-open it. For your simple program, this is a matter of good form, not of correct function; the misbehavior you observe is unrelated to it.
There are several problems in the key portion of your program:
// read and print till required number
while ( Starting != printStop )
{
    read( in_fd, buffer, BUFFERSZ );
    Starting++; // increment starting
}
//read( in_fd, buffer, BUFFERSZ );
printf( "%s \n", buffer );
You do not check the return value of your read() call in this section. You must check it, because it tells you not only whether there was an error / end-of-file, but also how many bytes were actually read. You are not guaranteed to fill the buffer on any call, and only in this way can you know which elements of the buffer afterward contain valid data. (Pre-counting lines does nothing for you in this regard.)
You are performing raw read()s, and apparently assuming that each one will read exactly one line. That assumption is invalid. read() does not give any special treatment to line terminators, so you are likely to have reads that span multiple lines, and reads that read only partial lines (and maybe both in the same read). You therefore cannot count lines by counting read() calls. Instead, you must scan the valid characters in the read buffer and count the newlines among them.
You do not actually print anything inside your read loop. Instead, you wait until you've done all your reading, then print everything in the buffer after the last read. That's not going to serve your purpose when you don't get all the lines you need in the first read, because each subsequent successful read will clobber the data from the preceding one.
You pass the buffer to printf() as if it were a null-terminated string, but you do nothing to ensure that it is, in fact, terminated. read() does not do that for you.
I have trouble believing your claim that your program always prints all the lines of the designated file, but I can believe that it prints all the lines of the specific file you're testing it on. It might do that if the file is short enough that the whole thing fits into your buffer. Your program then might read the whole thing into the buffer on the first read() call (though it is not guaranteed to do so), and then read nothing on each subsequent call, returning 0 at end-of-file and leaving the buffer unchanged. When you finally print the buffer, it still contains the whole contents of the file.
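Putting those fixes together, here is one way the echo loop could be restructured (a sketch of the approach described above, not the only correct way to write it; write() error handling is kept minimal for brevity): read a chunk, scan the valid bytes for newlines, and emit bytes until the requested number of lines has been printed.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define BUFFERSZ 256

int main( int ac, char* args[] )
{
    char buffer[BUFFERSZ];
    int linesToPrint = 10;

    if ( ac < 2 || ac > 3 )
    {
        fprintf( stderr, "usage: head FILE [n]\n" );
        exit( 1 );
    }
    if ( ac == 3 )
        linesToPrint = atoi( args[2] );

    int in_fd = open( args[1], O_RDONLY );
    if ( in_fd == -1 )
    {
        perror( args[1] );
        exit( 1 );
    }

    int linesPrinted = 0;
    ssize_t rd_chars;
    // read a chunk at a time; rd_chars says how many bytes are valid
    while ( linesPrinted < linesToPrint
            && ( rd_chars = read( in_fd, buffer, BUFFERSZ ) ) > 0 )
    {
        // scan the valid bytes, counting newlines as we go
        ssize_t i;
        for ( i = 0; i < rd_chars && linesPrinted < linesToPrint; i++ )
        {
            if ( buffer[i] == '\n' )
                linesPrinted++;
        }
        // emit everything up to and including the last newline we are
        // allowed to print (no NUL terminator needed with write())
        write( STDOUT_FILENO, buffer, i );
    }

    close( in_fd );
    return 0;
}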