Reading all buffers except for the last one (in C)

I want to read all buffers from a pipe except for the last one. This is my current code:
while (read(server_to_client, serverString2, sizeof(serverString2))) {
    printf("Client : PID %d", getpid());
    printf("-Target>>%s<<", clientString2);
    printf(serverString2);
}
The problem with that is it reads everything from the buffer. How can I avoid reading the last buffer?

You can't. The question does not even make sense.
The question supposes that a "buffer" is a meaningful unit of measure for your data, but it is not. In particular, the third argument to read(2) is a maximum number of bytes to read, but the call may actually transfer fewer bytes for a large number of reasons, with reaching the end of the data being only one. Other reasons are in fact a lot more likely to manifest when the file descriptor being read is connected to a pipe, as you say yours is, than when it is connected to a file. Note that this means you must always capture read()'s return value if you intend to examine the data it reads, for otherwise you cannot know how much of the buffer contains valid data.
More generally, you cannot tell from an open file descriptor for a pipe how much data is available to be read from it. You need to include that information in your protocol (for example, HTTP's Content-Length header), or somehow communicate it out-of-band. That still doesn't tell you how much data is available to be read right now, but it can help you determine when to stop trying to read more.
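For instance, here is a minimal sketch of length-prefixed framing on the writing side; the 4-byte network-order header and the function name are illustrative assumptions, not something your program already has:
#include <arpa/inet.h>   /* htonl */
#include <stdint.h>
#include <unistd.h>

/* Sender side of a hypothetical length-prefixed framing: a 4-byte
 * network-order length header, then the payload.  A robust version
 * would also loop over short writes; this is only a sketch. */
static int send_framed(int fd, const void *data, uint32_t len)
{
    uint32_t len_net = htonl(len);

    if (write(fd, &len_net, sizeof len_net) != (ssize_t) sizeof len_net)
        return -1;
    if (write(fd, data, len) != (ssize_t) len)
        return -1;
    return 0;
}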
Edited to add:
If you ask because you want to avoid dealing with partially-filled buffers, then you are flat out of luck. At minimum you need to be prepared for a partially-filled buffer when the data are prematurely truncated. Unless the total size of the data to be transferred is certain to be a multiple of the chosen buffer size, you will also have to be prepared to deal with a partial buffer at the end of your data. You can, however, avoid dealing with partial buffers in the middle of your data by repeatedly read()ing until you fill the buffer, perhaps via a wrapper function such as this:
ssize_t read_fully(int fd, void *buf, size_t count) {
    char *byte_buf = buf;
    ssize_t bytes_remaining = count;

    while (1) {
        ssize_t nread = read(fd, byte_buf, bytes_remaining);

        /* Stop on error, end of data, or once the buffer is full.
         * Note that the subtraction in the test already updates bytes_remaining. */
        if ((nread <= 0) || ((bytes_remaining -= nread) <= 0)) {
            break;
        }
        byte_buf += nread;
    }
    return count - bytes_remaining;
}
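For example, here is a sketch (with hypothetical names) of how such a wrapper lets you consume a length-prefixed message like the one suggested above:
#include <arpa/inet.h>   /* ntohl */
#include <stdint.h>
#include <sys/types.h>   /* ssize_t */

/* Read one length-prefixed message: a 4-byte network-order length,
 * then exactly that many payload bytes.  Returns the payload length,
 * or -1 on EOF, error, or an oversized message. */
ssize_t recv_framed(int fd, void *payload, size_t max)
{
    uint32_t len_net;
    if (read_fully(fd, &len_net, sizeof len_net) != (ssize_t) sizeof len_net)
        return -1;                                  /* truncated header */

    uint32_t len = ntohl(len_net);
    if (len > max)
        return -1;                                  /* too big for the caller's buffer */

    if (read_fully(fd, payload, len) != (ssize_t) len)
        return -1;                                  /* truncated payload */
    return (ssize_t) len;
}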
Alternatively, you can approach the problem altogether differently. Instead of trying to avoid reading certain data, you may be able to read it but avoid processing it. Whether that could be sensible depends on the nature of your program.

Do you really need to avoid reading the last buffer, or just avoid doing anything with it? Perhaps a different form of loop, or a check for end-of-file after reading each buffer? (Treat eof() below as pseudocode: POSIX has no eof() for raw file descriptors, though some platforms provide it as _eof().)
while (read(server_to_client, serverString2, sizeof(serverString2)))
{
    if (!eof(server_to_client))   /* pseudocode: no portable eof() for descriptors */
    {
        printf("Client : PID %d", getpid());
        printf("-Target>>%s<<", clientString2);
        printf(serverString2);
    }
    else
    {
        // do special stuff for the last buffer here
    }
}

Related

why does fread from stdin not stop [closed]

I am trying to read input from stdin with fread(). However, I have a problem: the loop will not terminate and instead keeps reading.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "argument err");
        return -1;
    }
    FILE *in = fopen(argv[1], "w");
    if (in == NULL) {
        fprintf(stderr, "failed to open file");
        return -1;
    }
    char buffer[20];
    size_t ret;
    while ((ret = fread(buffer, 1, 20, stdin)) > 0) {
        if (fwrite(buffer, 1, ret, in) != ret) {
            if (ferror(in) != 0) {
                perror("write err:");
            }
        }
    }
    return 0;
}
How can I make this loop terminate when EOF is reached? I have tried using ctrl+D but that just seems like a strange way to stop taking input.
I guess what I want is to use fread() to read multiple arbitrary amounts of data in chunks of 20 bytes and then somehow stop.
How can I make this loop terminate when EOF is reached?
When do you think EOF is reached? Really. When you are providing input interactively, how is the system or the program supposed to know that you've entered all the data you want the program to consume?
I have tried using ctrl+D but that just seems like a strange way to stop taking input.
It is exactly the way to signal a soft EOF to a POSIX terminal. Since you want the loop to stop when EOF is encountered, it seems absolutely natural to me to use ctrl+D for the purpose when providing data interactively. That's not the only way you could signal the end of the input, but it has a lot going for it.
I guess what I want is to use fread() to read multiple arbitrary amounts of data in chunks of 20 bytes and then somehow stop.
Again: how is the program supposed to know when it has consumed all the "multiple arbitrary amounts" of data that you decide to provide on a given run? An EOF signal is an eminently reasonable choice for multiple reasons, and the way to deliver that from a POSIX terminal interface is ctrl+D.
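To make that concrete, here is a sketch built on your own loop: keep the > 0 test, and after the loop ask stdio why it stopped (this reuses the buffer, ret, and in variables from your program).
while ((ret = fread(buffer, 1, 20, stdin)) > 0) {
    if (fwrite(buffer, 1, ret, in) != ret) {
        perror("write err");
        break;
    }
}
if (ferror(stdin))
    perror("read err");                  /* a genuine I/O error on stdin */
else if (feof(stdin))
    fputs("end of input\n", stderr);     /* ctrl+D, or the end of a redirected file */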
As pointed out before, you are reading from a stream with no natural end, which means stdin will not, by itself, deliver an EOF (or a return value <= 0).
If you want your loop to terminate, you will have to add a termination condition of your own, such as a particular character, word, or other sentinel value, and then use a break or a return when you see it. You could also check whether your terminal emulator supports injecting an EOF into stdin, which is pretty common (but platform dependent).
ADD: On my system (typical Linux), CTRL+D inserts an EOF into stdin. It seems you found this out yourself; if you want your program to know where to stop, this is what you will need to use.
You can also send a signal to your program, usually with a shortcut such as CTRL+C or CTRL+\ (note that CTRL+D is EOF, not a signal). There are all sorts of signals that can be sent by your system and/or your terminal emulator; you just have to install the corresponding signal handler in your program.
How can I make this loop terminate when EOF is reached? I have tried using ctrl+D but that just seems like a strange way to stop taking input.
fread and fwrite are record-oriented: both take the size of a record and the number of records to transfer. If the available data does not fill a complete record, that record is not counted (the routines return the number of complete records transferred; a trailing partial record is not included in that count).
All the calls in the stdio.h package are buffered: the stream's internal buffer holds data that has been read from the system but not yet consumed by the user, which makes me wonder why you are adding another buffer on top of data that is already buffered.
End of file is reported when fread() asks the system for more data and gets nothing back (this normally takes two calls: the first returns the remaining data, the second returns zero bytes from the system). So you have to distinguish two cases:
fread() returns a short count (possibly 0) when what it could read was not enough to complete all the requested records.
fread() returns 0 and feof() becomes true when nothing at all is left (the true end of file has been reached); note that fread() itself never returns the EOF constant.
As I said above, fread() and fwrite() transfer full records (useful when your data is a fixed-length struct, but usually not when there can be extra data at the end).
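To illustrate the record-oriented behaviour, here is a minimal sketch; the record type is purely hypothetical:
/* Hypothetical fixed-size record type, purely for illustration. */
struct record { int id; char name[16]; };

struct record recs[8];
size_t n = fread(recs, sizeof recs[0], 8, stdin);
/* n counts only COMPLETE records; a trailing partial record is not
 * included in the return value. */
for (size_t i = 0; i < n; i++) {
    /* process recs[i] */
}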
The way to terminate the loop should be something like this:
while ((ret = fread(buffer, 1, 20, stdin)) > 0) {   /* > 0: ret is a size_t and is never negative */
    if (fwrite(buffer, 1, ret, in) != ret) {
        if (ferror(in) != 0) {
            perror("write err:");
        }
    }
}
/* With an item size of 1 there are no partial records to worry about.
 * If you instead read 20-byte records, e.g. fread(buffer, 20, 1, stdin),
 * up to 19 trailing bytes would not be counted as a full record and
 * would have to be picked up by other means (fgetc(), for example). */
So, if you read half a record at the end of the file, only the next fread() will detect the end of file (by reading nothing) and let you finish. (Beware that any trailing data that does not fill a full record still needs to be read by other means.)
The cheapest and easiest way to solve this problem (copying one stream to another) is described in K&R (first edition), and nobody has yet produced better code to beat it:
int c;
/* here `in` is the source stream and `out` the destination */
while ((c = fgetc(in)) != EOF)
    fputc(c, out);
While it seems to read the characters one by one, it actually makes a call to read(2) that fills a whole buffer and returns just one character; subsequent characters are taken from that buffer, saving calls to read(). The same happens with fputc(): it fills its buffer until it is full, then flushes it with a single call to write(2).
Many people have tried to beat the code above, without any measurable gain in efficiency. So my hint is: keep it simple; the world is complicated enough without you adding complexity yourself.

Failure to write subsequent compressed data to an output file in C

I am reading data from an input file and compressing it with the bzip library function BZ2_bzCompress in C. I can compress the data successfully, but I cannot write all of the compressed data to an output file; only the first compressed line is written. Am I missing something here?
#include <stdio.h>
#include <string.h>
#include <bzlib.h>

int main()
{
    bz_stream bz;
    FILE* f_d;
    FILE* f_s;
    BZFILE* b;
    int bzerror = -10;
    unsigned int nbytes_in;
    unsigned int nbytes_out;
    char buf[3000] = {0};
    int result = 0;
    char buf_read[500];
    char file_name[] = "/path/file_name";
    long int save_pos;

    f_d = fopen("myfile.bz2", "wb+");
    f_s = fopen(file_name, "r");
    if ((!f_d) && (!f_s)) {
        printf("Cannot open files");
        return(-1);
    }

    bz.opaque = NULL;
    bz.bzalloc = NULL;
    bz.bzfree = NULL;
    result = BZ2_bzCompressInit(&bz, 1, 2, 30);

    while (fgets(buf_read, sizeof(buf_read), f_s) != NULL)
    {
        bz.next_in = buf_read;
        bz.avail_in = sizeof(buf_read);
        bz.next_out = buf;
        bz.avail_out = sizeof(buf);
        printf("%s\n", buf_read);

        save_pos = ftell(f_d);
        fseek(f_d, save_pos, SEEK_SET);

        while ((result == BZ_RUN_OK) || (result == 0) || (result == BZ_FINISH_OK))
        {
            result = BZ2_bzCompress(&bz, (bz.avail_in) ? BZ_RUN : BZ_FINISH);
            printf("2 result:%d,in:%d,outhi:%d, outlo:%d \n", result, bz.total_in_lo32, bz.total_out_hi32, bz.total_out_lo32);
            fwrite(buf, 1, bz.total_out_lo32, f_d);
        }
        if (result == BZ_STREAM_END)
        {
            result = BZ2_bzCompressEnd(&bz);
        }
        printf("3 result:%d, out:%d\n", result, bz.total_out_lo32);
        result = BZ2_bzCompressInit(&bz, 1, 2, 30);
        memset(buf, 0, sizeof(buf));
    }

    fclose(f_d);
    fclose(f_s);
    return(0);
}
TL;DR: there are multiple problems, but the main one that explains the problem you asked about is likely that you compress each line of the file independently, instead of the whole file as a unit.
According to the docs of BZ2_bzCompressInit, the bz_stream argument should be allocated and initialized before the call. Yours is (automatically) allocated, but not (fully) initialized. It would be clearer and easier to change to
bz_stream bz = { 0 };
and then skip the assignments to bz.opaque, bz.bzalloc, and bz.bzfree.
You store but never really check the return value of your BZ2_bzCompressInit call. It does eventually get tested in the condition of the inner while loop, but that test only recognizes success and normal-completion values; it does not detect error conditions.
Your handling of the input buffer is significantly flawed.
In the first place, you set the number of available input bytes incorrectly:
bz.avail_in = sizeof(buf_read);
Since you're using fgets() to read data into the buffer, under no circumstances is the full size of the buffer occupied by input data, because fgets() ensures that a string terminator is written into the array. In fact, it can be far less, because fgets() stops after a newline, so it may provide as little as a single input byte on a successful read.
If you want to stick with fgets() then you need to use strlen() to determine the number of bytes available from each read, but I would suggest that you instead switch to fread(), which will more reliably fill the buffer, indicate with its return value how many bytes were read, and correctly handle inputs containing null bytes.
In the second place, you use BZ2_bzCompress() to compress each buffer of input as if it were a complete file. When you come to the end of the buffer, you finish a compression run and reinitialize the bz_stream. This will definitely interfere with decompressing, and it may explain why your program (seems to) compress only the first line of its input. You should be reading the whole content of the file (in suitably-sized chunks) and feeding all of it to BZ2_bzCompress(... BZ_RUN) before you finish up. There should be one sequence of calls to BZ2_bzCompress(... BZ_FINISH) and finally one call to BZ2_bzCompressEnd() for the whole file, not per line.
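Here is a sketch of that structure; the function name, stream arguments, and buffer sizes are placeholders, and most error reporting is trimmed for brevity:
#include <stdio.h>
#include <bzlib.h>

/* Compress all of src into dst as ONE bzip2 stream (sketch only). */
static int compress_stream(FILE *src, FILE *dst)
{
    char inbuf[4096], outbuf[4096];
    bz_stream bz = { 0 };
    size_t n;
    int rc;

    if (BZ2_bzCompressInit(&bz, 1, 0, 30) != BZ_OK)
        return -1;

    while ((n = fread(inbuf, 1, sizeof inbuf, src)) > 0) {
        bz.next_in = inbuf;
        bz.avail_in = (unsigned) n;
        do {                               /* drain output as it is produced */
            bz.next_out = outbuf;
            bz.avail_out = sizeof outbuf;
            rc = BZ2_bzCompress(&bz, BZ_RUN);
            if (rc != BZ_RUN_OK)
                goto fail;
            fwrite(outbuf, 1, sizeof outbuf - bz.avail_out, dst);
        } while (bz.avail_in > 0);
    }

    do {                                   /* one FINISH sequence for the whole file */
        bz.next_out = outbuf;
        bz.avail_out = sizeof outbuf;
        rc = BZ2_bzCompress(&bz, BZ_FINISH);
        if (rc != BZ_FINISH_OK && rc != BZ_STREAM_END)
            goto fail;
        fwrite(outbuf, 1, sizeof outbuf - bz.avail_out, dst);
    } while (rc != BZ_STREAM_END);

    return BZ2_bzCompressEnd(&bz) == BZ_OK ? 0 : -1;

fail:
    BZ2_bzCompressEnd(&bz);
    return -1;
}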
You do not perform error detection or handling for any of your calls to standard library or bzip functions. You do handle the expected success-case return values for some of them, but you need to be prepared for errors, too.
There are some additional oddities:
you have unused variables nbytes_in, nbytes_out, bzerror, and b.
you open the input file as a text file, though whether that makes any difference is platform-dependent.
the ftell() / fseek() pair has no overall effect other than setting save_pos, which is not otherwise used.
although it is not harmful, it also is not useful to memset() the output buffer to all-zeroes at the end of each line (or initially).
Given that you're compressing the input, it's odd (but again not harmful) that you provide six times as much output buffer as you do input buffer.

Errors when implementing fwrite to get data from a socket

I see plenty of examples, but none addresses what I want to accomplish. I need to read the bytes from a socket and write them into a file. In this Code Project blog I see that, in the client code, a while loop iterates over a read call:
while((n = read(sockfd, recvBuff, sizeof(recvBuff)-1)) > 0)
So I modified the code to do fputs(recvBuff, f1), where f1 is a FILE pointer for a PDF file. A PDF is what I'm fetching from the server, so I need to reassemble it; however, fputs operates on a string and corrupts the file. I need a byte "writer", so fwrite seemed like the right choice, but I can't get fwrite to work. I ended up modifying my code to resemble some of the examples to test it out, but to no avail.
If the first parameter of fwrite is the 'data', how would I pass it? I've tried the read() call as in the while loop above, but that seems to return an integer rather than a byte stream. Any ideas?
I'm not new to programming, but I am new to C, and would appreciate a little push in the right direction. Thanks.
You want something more like this. fwrite doesn't return a stream; it returns the number of items (i.e. the 3rd parameter) successfully written. In this case the "item" is a single char, and you are attempting to write "bytesRead" of them. Good form dictates that you check that the result fwrite returns matches the count you asked it to write, but this rarely fails on a disk file, so many people skip it in non-critical situations. You may want to add that yourself.
FILE *f1;              /* assumed already opened for writing */
int sockfd;            /* assumed to be a connected socket */
char recvBuff[4096];
size_t bytesWritten;
ssize_t bytesRead;

while ((bytesRead = read(sockfd, recvBuff, sizeof(recvBuff))) > 0)
    bytesWritten = fwrite(recvBuff, 1, bytesRead, f1);
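If you do want the check mentioned above, a sketch might look like this (still assuming sockfd and f1 have been opened elsewhere):
while ((bytesRead = read(sockfd, recvBuff, sizeof(recvBuff))) > 0) {
    bytesWritten = fwrite(recvBuff, 1, (size_t) bytesRead, f1);
    if (bytesWritten != (size_t) bytesRead) {   /* short write: disk full, I/O error, ... */
        perror("fwrite");
        break;
    }
}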

Socket Read/Write error

I would install valgrind to tell me what the problem is, but unfortunately I can't install any new programs on this computer... Could anyone tell me if there's an obvious problem with this "echo" program? I'm doing this for a friend, so I'm not sure what the layout of the client is on the other side, but I know that both reads and writes are valid socket descriptors, and I've tested that n = write(writes,"I got your message \n",20); and n = write(reads,"I got your message \n",20); both work, so I can confirm that it's not a case of an invalid fd. Thanks!
int
main( int argc, char** argv ) {
    int reads = atoi(argv[1]);
    int writes = atoi(argv[3]);
    int n;
    char buffer[MAX_LINE];
    memset(buffer, 0, sizeof(buffer));
    int i = 0;
    while (1) {
        read(reads, buffer, sizeof(buffer));
        n = write(writes, buffer, sizeof(buffer));
        if (n < 0) perror("ERROR reading from socket");
    }
}
There are a few problems, the most pressing of which is that you're likely pushing garbage data down the write socket by using sizeof(buffer) when writing. Let's say you read data from the reads socket and it's less than MAX_LINE bytes. When you go to write that data, you'll be writing whatever you read plus whatever is left at the end of the buffer (even though you memset at the very beginning, continual reuse of the same buffer without reacting to the different read sizes will leave stale data there).
Try getting the return value from read and using it in your write. If the read return indicates an error, clean up and either exit or try again, depending on how you want your program to behave.
int n, size;
while (1) {
    size = read(reads, buffer, sizeof(buffer));
    if (size > 0) {
        n = write(writes, buffer, size);
        if (n != size) {
            // write error, do something
        }
    } else {
        // read returned 0 (EOF) or -1 (error), do something
    }
}
This, of course, assumes your writes and reads are valid file descriptors.
These two lines look very suspicious:
int reads = atoi(argv[1]) ;
int writes = atoi(argv[3]) ;
Do you really get file/socket descriptor numbers on the command line? From where?
Check the return value of your read(2) and write(2), and then the value of errno(3) - they probably tell you that your file descriptors are invalid (EBADF).
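For example, a small sketch reusing the variables from the question (it needs <errno.h>):
/* reads and buffer as declared in the question; needs <errno.h> */
ssize_t n = read(reads, buffer, sizeof(buffer));
if (n < 0) {
    if (errno == EBADF)
        fprintf(stderr, "read: %d is not a valid file descriptor\n", reads);
    else
        perror("read");
}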
One point not made thus far: Although you know that the file descriptors are valid, you should include some sanity checking of the command line.
if (argc < 4) {   /* argv[1] and argv[3] are both used, so at least 4 arguments are required */
    printf("usage: foo: input output\n");
    exit(0);
}
Even with this sanity checking, passing parameters like this on a command line can be dangerous.
The memset() is not needed, provided you change the following (which you should do nevertheless).
read() returns a count telling you how much it actually read. Pass that count to write() so that you write only what you actually have, which removes the need for zeroing the buffer.
MAX_LINE should be at least 512, if not more.
There are probably some more issues, but I think these are the most important ones.

Seg fault with open command when trying to open very large file

I'm taking a networking class at school and am using C/GDB for the first time. Our assignment is to make a web server that communicates with a client browser. I am well underway and can open files and send them to the client. Everything goes great until I open a very large file, and then I get a seg fault. I'm not a pro at C/GDB, so I'm sorry if that is what is causing me to ask silly questions and not be able to see the solution myself, but when I looked at the dumped core I see that my seg fault comes here:
if (-1 == (openfd = open(path, O_RDONLY)))
Specifically we are tasked with opening the file and the sending it to the client browser. My Algorithm goes:
Open/Error catch
Read the file into a buffer/Error catch
Send the file
We were also tasked with making sure that the server doesn't crash when SENDING very large files. But my problem seems to be with opening them. I can send all my smaller files just fine. The file in question is 29.5MB.
The whole algorithm is:
ssize_t send_file(int conn, char *path, int len, int blksize, char *mime) {
    int openfd;        // File descriptor for file we open at path
    int temp;          // Counter for the size of the file that we send
    char buffer[len];  // Buffer to read the file we are opening that is len big

    // Open the file
    if (-1 == (openfd = open(path, O_RDONLY))) {
        send_head(conn, "", 400, strlen(ERROR_400));
        (void) send(conn, ERROR_400, strlen(ERROR_400), 0);
        logwrite(stdout, CANT_OPEN);
        return -1;
    }

    // Read from file
    if (-1 == read(openfd, buffer, len)) {
        send_head(conn, "", 400, strlen(ERROR_400));
        (void) send(conn, ERROR_400, strlen(ERROR_400), 0);
        logwrite(stdout, CANT_OPEN);
        return -1;
    }

    (void) close(openfd);

    // Send the buffer now
    logwrite(stdout, SUC_REQ);
    send_head(conn, mime, 200, len);
    send(conn, &buffer[0], len, 0);
    return len;
}
I don't know if it is just because I am a Unix/C novice. Sorry if it is. =( But your help is much appreciated.
It's possible I'm just misunderstanding what you meant in your question, but I feel I should point out that in general, it's a bad idea to try to read the entire file at once, in case you deal with something that's just too big for your memory to handle.
It's smarter to allocate a buffer of a specific size, say 8192 bytes (well, that's what I tend to use a lot, anyway), and read and send that much at a time, as many times as necessary, until your read() operation returns 0 (with no errno set) for end of stream.
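A rough sketch of that chunked approach, written as it might appear inside a function like your send_file() (the 8192-byte chunk size is just a common choice, not a requirement):
char chunk[8192];
ssize_t n;
while ((n = read(openfd, chunk, sizeof chunk)) > 0) {
    ssize_t off = 0;
    while (off < n) {                        /* send() may accept fewer bytes than asked */
        ssize_t sent = send(conn, chunk + off, (size_t)(n - off), 0);
        if (sent < 0)
            return -1;                       /* client went away, etc. */
        off += sent;
    }
}
if (n < 0)
    return -1;                               /* read error */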
I suspect you have a stackoverflow (I should get bonus points for using that term on this site).
The problem is you are allocating the buffer for the entire file on the stack all at once. For larger files, this buffer is larger than the stack, and the next time you try to call a function (and thus put some parameters for it on the stack) the program crashes.
The crash appears at the open line because allocating the buffer on the stack doesn't actually write any memory; it just moves the stack pointer. When your call to open tries to write its parameters to the stack, the top of the stack has already overflowed, and this causes a crash.
The solution, as Platinum Azure or dreamlax suggest, is to read the file in little bits at a time, or to allocate your buffer on the heap with malloc (or new in C++).
Rather than using a variable length array, perhaps try allocating the memory using malloc.
char *buffer = malloc (len);
...
free (buffer);
I just did some simple tests on my system, and when I use variable length arrays of a big size (like the size you're having trouble with), I also get a SEGFAULT.
You're allocating the buffer on the stack, and it's way too big.
When you allocate storage on the stack, all the compiler does is decrease the stack pointer enough to make that much room (this keeps stack variable allocation to constant time). It does not try to touch any of this stacked memory. Then, when you call open(), it tries to put the parameters on the stack and discovers it has overflowed the stack and dies.
You need to either operate on the file in chunks, memory-map it (mmap()), or malloc() storage.
Also, path should be declared const char*.
