I've got two C files, server.c and client.c. The server has to create a FIFO and constantly read from it, waiting for input. The client gets its PID and writes the PID into the FIFO.
This is my server file which I launch first:
int main(){
    int fd;
    int fd1;
    int bytes_read;
    char *buffer = malloc(5);
    int nbytes = sizeof(buffer);
    if((fd = mkfifo("serverfifo",0666)) == -1) printf("create fifo error");
    else printf("create fifo ok");
    if ((fd1 = open("serverfifo",O_RDWR)) == -1) printf("open fifo error");
    else{
        printf("open fifo ok");
        while(1){
            bytes_read = read(fd,buffer,nbytes);
            printf("%d",bytes_read);
        }
    }
    return(0);
}
And my client file :
int main(){
    int fd;
    int pid = 0;
    char *fifo;
    int bytes;
    if ((pid = getpid()) == 0) printf("pid error");
    char pid_s[sizeof(pid)];
    sprintf(pid_s,"%d",pid);
    if ((fd = open ("serverfifo",O_RDWR)) == -1) printf("open fifo error");
    else {
        printf("open fifo ok");
        bytes = write(fd,pid_s, sizeof(pid_s));
        printf("bytes = %d",bytes);
    }
    close(fd);
    return(0);
}
The two main problems I'm getting are: when I write the PID to the FIFO, write returns the number of bytes written, so that looks OK, but when I check the properties of the FIFO file it says 0 bytes. The second problem is that the read doesn't work: a printf placed before it is shown, one placed after it is not, and read never returns anything; it just freezes.
I realise there are a lot of similar posts on the site but I couldn't find anything that helped.
I'm using Ubuntu and GCC compiler with CodeBlocks.
There are many things wrong here:
char pid_s[sizeof(pid)];
sprintf(pid_s,"%d",pid);
sizeof(pid) is the size of the pid value, not of its string representation: it is sizeof(int), which is typically 4. You then sprintf the decimal digits of the PID into that buffer; a 5-digit PID plus its terminating '\0' needs 6 bytes, so if this appears to work, it works only by luck. The correct way, if you choose to do this at all, is to allocate a suitably large buffer and use snprintf so you cannot overflow. Default Linux PIDs fit in 5 digits, so something like this will do:
char pid_s[8];
snprintf(pid_s, sizeof(pid_s), "%d", pid);
Of course, you can skip this step altogether and send the raw bytes of the pid instead:
write(fd, (void*)&pid, sizeof(pid))
Now in the server you make similar mistakes:
char * buffer = malloc(5);
int nbytes = sizeof(buffer);
sizeof(buffer) is again the size of a pointer (4 or 8), not the 5 bytes you allocated. The correct way to do this, if you want to allocate on the heap (using malloc), is:
char* buffer = malloc(8);
int nbytes = 8;
alternatively you can allocate on the stack:
char buffer[8];
int nbytes = sizeof(buffer);
sizeof is special-cased for arrays: if you pass in an array, it returns the size of the whole array (8 × sizeof(char) = 8 in this case).
When you read, the sizes are similarly confused: nbytes is the size of a pointer, not of the 5-byte buffer, so what you try to read does not match what was written. If you send the raw bytes of the pid, read them back like this:
int pid;
read(fd, (void*)&pid, sizeof(pid));
Also, if you were to actually read and write strings, you'd do something like this:
// client
char pid_s[8];
snprintf(pid_s, sizeof(pid_s), "%d", pid);
write(fd, pid_s, sizeof(pid_s));
// server
char pid_s[8];
read(fd, pid_s, sizeof(pid_s));
Note also that read may return fewer bytes than were written, and you need to call it again in a loop to keep reading...
Well, there are a lot of mistakes in this code...
First of all, sizeof does not work like that.
Why do you serialize the pid at all?
This is wrong:
char pid_s[sizeof(pid)];
A PID such as 123456 is an int whose decimal representation does not fit into this array of size 4: only 3 characters plus the terminating '\0' can be stored...
And because you serialize the pid, the reader does not know how many bytes to expect, unless you assume the worst case and always write 10 digits + 1 byte for the '\0'...
I am writing a hex dump program in C. I know there are tons of hex dump programs out there, but I wanted to write one for the experience. I have written the program in CodeBlocks, on Windows, but I can't seem to get it to work.
I am reading in a test program which is roughly 137,000 bytes, but the program stops at 417 bytes. Now, when I compile the code on Linux (as it's only a console application and is using standard C libraries), it works perfectly, and gives back the correct amount of bytes in the file. Does anyone have any idea why read() would not work on Windows, but works fine in Linux?
Below is an example of how I am reading in the file.
int main(int argc, char **argv)
{
if (argc != 2) { return 1; }
int fd = open(argv[1], O_RDONLY);
if (fd == -1) { return 1; }
unsigned char buffer[8];
unsigned int bytes = 0;
unsigned int total_bytes = 0;
while ((bytes = read(fd, buffer, sizeof(unsigned char) * 8)) > 0) {
...
total_bytes += bytes;
}
printf("Total Bytes: %d\n", total_bytes);
return 0;
}
I have found the answer in the post below after all. They were having the issue with stdin, but the cause is the same: the substitute character (0x1A) is CTRL+Z on Windows, and a file opened in text mode treats it as end-of-file, so my read() stopped as soon as it hit that byte in the input file. Opening the file in binary mode avoids the translation.
C reading (from stdin) stops at 0x1a character
I try to fill a named pipe (created by mkfifo /tmp/pipe) by writing to it 3 bytes at a time until the write() function blocks.
On my system, a pipe seems to be limited to 16 pages of 4096 bytes. Thus the pipe can contain 65536 bytes.
I do that with the following C code:
int main ()
{
pid_t child;
child = fork ();
if (child == 0)
{
ssize_t ret;
ssize_t total = 0;
unsigned char *datat = malloc (65536);
assert (datat != NULL);
int fd = open ("/tmp/pipe", O_WRONLY);
assert (fd != -1);
while (1)
{
printf ("Trying writing\n");
ret = write (fd, datat, 3);
assert (ret != -1);
total += ret;
printf ("write : %ld.\n", total);
}
}
else
{
int fd = open ("/tmp/pipe", O_RDONLY);
assert (fd != -1);
while (1); //prevent closing the pipe.
}
return 0;
}
In this way I succeeded in filling the pipe with 65520 bytes. I don't understand why 65520 and not 65536 (or 65535 if we consider that 65536 is not a multiple of 3).
Then I tried to write 65520 bytes and, after, write 3 bytes:
int
main (int argc, char *argv[])
{
pid_t child;
child = fork ();
if (child == 0)
{
ssize_t ret;
ssize_t total = 0;
unsigned char *datat = malloc (65536);
assert (datat != NULL);
int fd = open ("/tmp/pipe", O_WRONLY);
assert (fd != -1);
while(1)
{
printf ("Trying writing\n");
ret = write (fd, datat, 65520);
assert (ret != -1);
total += ret;
printf ("Trying writing\n");
ret = write (fd, datat, 3);
assert (ret != -1);
total += ret;
printf ("write : %ld.\n", total);
}
}
else
{
int fd = open ("/tmp/pipe", O_RDONLY);
assert (fd != -1);
while (1); //prevent closing the pipe.
}
return 0;
}
I expected the second write to block; however, it did not, and I ended up writing 65523 bytes.
The question is: why can't I write more than 65520 bytes in the first case, whereas I can in the second?
EDIT:
More information :
My operating system is Linux (Arch Linux, kernel 4.16.5-1-ARCH).
man 7 pipe gives information about the size of the pipe (65536 bytes here), which is confirmed by fcntl:
int
main (int argc, char *argv[])
{
int fd = open ("/tmp/pipe", O_WRONLY);
printf ("MAX : %d\n", fcntl (fd, F_GETPIPE_SZ));
return 0;
}
It's because of the way 4KB pages are filled with written data in the pipe implementation in the Linux kernel. More specifically, the kernel appends written data to a page only if the data fits entirely in the page, otherwise puts the data into another page with enough free bytes.
If you write 3 bytes at a time, the pipe pages won't be filled at their full capacity, because the page size (4096) is not a multiple of 3: the nearest multiple is 4095, so each page will end up with 1 "wasted" byte. Multiplying 4095 by 16, which is the total number of pages, you get 65520.
In your second use case, when you write 65520 bytes all at once, you are filling 15 pages entirely (61440 bytes), plus you are putting the remaining 4080 bytes in the last page, which will have 16 bytes still available for subsequent writes: that's why your second write() call with 3 bytes succeeds without blocking.
For full details on the Linux pipe implementation, see https://elixir.bootlin.com/linux/latest/source/fs/pipe.c.
Here is my code snippet:
int fd;
bufsize = 30;
char buf[bufsize];
char cmd[100] = "file.txt";
int newfd = 1;
if (fd = open(cmd,O_RDONLY) >=0){
    puts("wanna read");
    while (read(fd,&bin_buf,bufsize)==1){
        puts("reading");
        write(newfd,&bin_buf,bufsize);
    }
    close(fd);
}
So here the program prints "wanna read" but never prints "reading". I have also tried opening with the nonblock flag, but it didn't help. Can anybody help me? I must use only the open() and read() system calls. Thanks.
Edit: I have made some clarifications in the code. Actually the newfd that I'm writing to is a socket descriptor, but I don't think that is important for this problem because it sticks on the read which is before the write.
The first problem is your if statement. You forgot to use enough parentheses, so if the open() works, the read tries to read from file descriptor 1, aka standard output. If that's your terminal (it probably is) on a Unix box, then that works — surprising though that may be; the program is waiting for you to type something.
Fix: use parentheses!
if ((fd = open(cmd, O_RDONLY)) >= 0)
The assignment is done before, not after, the comparison.
I observe in passing that you don't show how you set cmd, but if you see the 'wanna read' message, it must be OK. You don't show how newfd is initialized; maybe that's 1 too.
You also have the issue with 'what the read() call returns'. You probably need:
int fd;
char buf[bufsize];
int newfd = 1;
if ((fd = open(cmd, O_RDONLY)) >= 0)
{
puts("wanna read");
int nbytes; // ssize_t if you prefer
while ((nbytes = read(fd, buf, sizeof(buf))) > 0)
{
puts("reading");
write(newfd, buf, nbytes);
}
close(fd);
}
You can demonstrate my primary observation by typing something ('Surprise', or 'Terminal file descriptors are often readable and writable' or something) with your original if but my loop body and then writing that somewhere.
Your read() call attempts to read bufsize bytes and returns the number of bytes actually read. Unless bufsize == 1, it is quite unlikely that read() will return exactly 1, so the loop body is almost always skipped and nothing gets written.
Also note that if (fd = open(cmd, O_RDONLY) >= 0) is incorrect: because >= binds tighter than =, fd is set to the result of the comparison, i.e. 1 (the descriptor of standard output) whenever the file exists, so the subsequent read does not read from the file you opened at all.
Note also that reading with the read system call is tricky in some environments: it can return -1 with errno set to EINTR when interrupted by a signal, in which case the call should simply be restarted.
Here is an improved version:
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int catenate_file(const char *cmd, int newfd, size_t bufsize) {
int fd;
char buf[bufsize];
if ((fd = open(cmd, O_RDONLY)) >= 0) {
puts("wanna read");
ssize_t nc;
while ((nc = read(fd, buf, bufsize)) != 0) {
if (nc < 0) {
if (errno == EINTR)
continue;
else
break;
}
printf("read %zd bytes\n", nc);
write(newfd, buf, nc);
}
close(fd);
return 0;
}
return -1;
}
read returns the number of bytes read from the file, which can be bufsize or less; it is less when the remainder of the file is shorter than bufsize.
In your case bufsize is most probably bigger than 1 and the file is bigger than 1 byte, so the condition of the while loop evaluates to false and execution skips ahead to the point where the file is closed.
You should instead check whether there are any more bytes to be read:
while( read(fd,&bin_buf,bufsize) > 0 ) {
For class I was given this: "Develop a C program that copies an input file to an output file and counts the number of read/write operations." I know how to copy the input file to the output file, but I am not sure how to keep track of how many read/write operations were performed. The program is supposed to repeat the copy with different buffer sizes and output a listing of the number of read/write operations performed with each buffer size. How could one go about counting the r/w operations? Thank you in advance.
Here is my current code (updated):
#include <stdio.h>
#include "apue.h"
#include <fcntl.h>
#define BUFFSIZE 1
int main(void)
{
    int n;
    char buf[BUFFSIZE];
    int input_file;
    int output_file;
    int readCount = 0;
    int writeCount = 0;
    input_file = open("test.txt", O_RDONLY);
    if(input_file < 0)
    {
        printf("could not open file.\n");
    }
    output_file = creat("output.txt", FILE_MODE);
    if(output_file < 0)
    {
        printf("error with output file.\n");
    }
    while((n = read(input_file, buf, BUFFSIZE)) > 0)
    {
        readCount++;
        if(write(output_file, buf, n) == n){
            writeCount++;
        }else{
            printf("Error writing");
        }
    }
    if(n < 0)
    {
        printf("reading error");
    }
    printf("read/write count: %d\n", writeCount + readCount);
    printf("read = %d\n", readCount);
    printf("write = %d\n", writeCount);
}
And for the text file: test one two
The result is:
read/write count: 26
read = 13
write = 13
Process returned 0 (0x0) execution time : 0.003 s
Press ENTER to continue.
I was thinking that the write would be 12...but I am not sure...
You will need to increment a variable every time you call a function that reads or writes. You can do that by writing a wrapper around the standard I/O function.
For example, replace fread with something like this:
size_t fread_count(void *p, size_t size, size_t num, FILE *f){
iocount++;
return fread(p, size, num, f);
}
iocount would have to be in scope (e.g. a global variable).
If you need to count reads and writes separately, use two variables: one that you increment for reads and one that you increment for writes.
Edit: since you are using write() and read(), you can easily make equivalent wrapper functions like the one above, calling write and read instead of fwrite and fread.
To help with trying different buffer sizes:
1) put the open/read/write/close calls, the char buffer[], the read/write counters, the final printf of the counters, etc. into a separate function;
2) in main(), add a table containing the buffer sizes to be tried;
3) call the new function from main(), passing a parameter that indicates the buffer size to use.
For my OS class I have the assignment of implementing Unix's cat command with system calls (no scanf or printf). Here's what I got so far:
(Edited thanks to responses)
#include <sys/types.h>
#include <unistd.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>
main(void)
{
    int fd1;
    int fd2;
    char *buffer1;
    buffer1 = (char *) calloc(100, sizeof(char));
    char *buffer2;
    buffer2 = (char *) calloc(100, sizeof(char));
    fd1 = open("input.in", O_RDONLY);
    fd2 = open("input2.in", O_RDONLY);
    while(eof1){ //<-lseek condition to add here
        read (fd1, buffer1, /*how much to read here?*/ );
        write(1, buffer1, sizeof(buffer1)-1);
    }
    while (eof2){
        read (fd2, buffer2, /*how much to read here?*/);
        write(1, buffer2, sizeof(buffer2)-1);
    }
}
The examples I have seen only show read with a known number of bytes. I don't know how many bytes each of the files will contain, so how do I specify read's last parameter?
Before you can read into a buffer, you have to allocate one. Either on the stack (easiest) or with mmap.
perror is a complicated library function, not a system call.
exit is not a system call on Linux. But _exit is.
Don't write more bytes than you have read before.
Or, in general: Read the documentation on all these system calls.
Edit: Here is my code, using only system calls. The error handling is somewhat limited, since I didn't want to re-implement perror.
#include <fcntl.h>
#include <unistd.h>
static int
cat_fd(int fd) {
char buf[4096];
ssize_t nread;
while ((nread = read(fd, buf, sizeof buf)) > 0) {
ssize_t ntotalwritten = 0;
while (ntotalwritten < nread) {
ssize_t nwritten = write(STDOUT_FILENO, buf + ntotalwritten, nread - ntotalwritten);
if (nwritten < 1)
return -1;
ntotalwritten += nwritten;
}
}
return nread == 0 ? 0 : -1;
}
static int
cat(const char *fname) {
int fd, success;
if ((fd = open(fname, O_RDONLY)) == -1)
return -1;
success = cat_fd(fd);
if (close(fd) != 0)
return -1;
return success;
}
int
main(int argc, char **argv) {
int i;
if (argc == 1) {
if (cat_fd(STDIN_FILENO) != 0)
goto error;
} else {
for (i = 1; i < argc; i++) {
if (cat(argv[i]) != 0)
goto error;
}
}
return 0;
error:
write(STDOUT_FILENO, "error\n", 6);
return 1;
}
You need to read as many bytes as will fit in the buffer. Right now, you don't have a buffer yet, all you got is a pointer to a buffer. That isn't initialized to anything. Chicken-and-egg, you therefore don't know how many bytes to read either.
Create a buffer.
There is usually no need to read the entire file in one gulp. Choosing a buffer size that is the same as, or a small multiple of, the host operating system's page size is a good way to go; 1 or 2 times the page size is usually plenty.
Using buffers that are too big can actually cause your program to run worse because they put pressure on the virtual memory system and can cause paging.
You could use open, fstat, mmap, madvise and write to make a very efficient cat command.
If Linux specific you could use open, fstat, fadvise and splice to make an even more efficient cat command.
The advise calls are to specify the SEQUENTIAL flags which will tell the kernel to do aggressive read-ahead on the file.
If you like to be polite to the rest of the system and minimize buffer cache use, you can do your copy in chunks of 32 megabytes or so and use the advise DONTNEED flags on the parts already read.
Note:
The above will only work if the source is a file. If the fstat fails to provide a size then you must fall back to using an allocated buffer and read, write. You can use splice too.
Use the stat function to find the size of your files before you read them. Alternatively, read fixed-size chunks until read() returns 0, which indicates end-of-file.