Printing the contents of a file to stdout with low-level I/O in C

How would I go about printing contents of a file that I've appended to using only low-level I/O functions?
The closest I can get is printing only the text that I'm appending.
Example:
file1.txt = dog
file2.txt = cat
I want file2.txt, which is now "catdog" to be printed out. How would I do that?
As said before, I can only get "dog" to print. I am successfully appending to the file. I know it's probably a really simple solution, but I've been scratching my head for hours.
My code
while (1) {
    if ((bufchar = read(fdin1, buf, sizeof(buf))) > 0) {
        bp = buf;                        // Pointer to next byte to write.
        while (bufchar > 0) {
            if ((wrchar = write(fdin2, bp, bufchar)) < 0)
                perror("Write failed");
            bufchar -= wrchar;           // Update.
            bp += wrchar;
        }
    }
    else if (bufchar == 0) {             // EOF reached.
        break;
    }
    else
        perror("Read failed");
}

Just a heads up: if you are appending to file2.txt, it will then be "catdog", not the other way around. If you can only get "dog" to write out, open the file manually to verify that you are actually appending and not simply overwriting it.
Here is some reading on the specifics of low-level file I/O; the top two links cover opening and closing files and primitive I/O operations. Without seeing any of your code it is hard to help, though it is possible you are not properly closing the file, so your appended line is not saved...
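To actually print the appended file, reopen file2.txt (or, if you kept a read/write descriptor, lseek it back to offset 0) and copy it to standard output with the same read/write pattern used for the append. A minimal sketch, assuming a fresh open of file2.txt and nothing beyond the low-level calls already in use:

/* Sketch: dump file2.txt to stdout using only read()/write(). */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    char buf[4096];
    ssize_t n;
    int fd = open("file2.txt", O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    while ((n = read(fd, buf, sizeof(buf))) > 0) {
        ssize_t off = 0;
        while (off < n) {                        /* handle short writes */
            ssize_t w = write(STDOUT_FILENO, buf + off, n - off);
            if (w < 0) {
                perror("write");
                close(fd);
                return 1;
            }
            off += w;
        }
    }
    if (n < 0)
        perror("read");
    close(fd);
    return 0;
}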

Related

C Read from File will always read 0 bytes

I have a problem where I want to read from a file using the read function, but I can't.
My code does the following. I have a parent process and a child process. The child does an exec command, and I have redirected stdout to a file f. The parent waits for the child and after that reads the file's contents into a buffer. Then I send this output to a client through a socket, using send, but that's not the problem. The problem is that even though the txt file has content in it from the exec command, read won't read anything, and in all my testing it reads 0 bytes.
//parent
if (wait(&status2) == -1) { // Wait for child
    perror("wait");
}
check_child_exit(status2);
n = read(f, buffer, MAX_BUFF-1);
if (n < 0) {
    error("ERROR reading from File");
}
printf("\n%d\n", n);
n = send(newsockfd, buffer, MAX_BUFF-1, 0);
if (n < 0) {
    perror("ERROR writing to socket");
    break;
}
//child
dup2(f, 1);
dup2(f, 2);
.
.
.
execvp(words[0], words); // Execute date
perror("execvp");
exit(EXIT_FAILURE);
So as you can see, these are the two processes. I read in another article that the problem might be in how the file is opened, but I'm not sure which options to use. I even tried opening it with both the open function and the fopen function, just to try new things.
Here are the open and fopen calls:
f = open("temp133", O_RDWR | O_CREAT | O_TRUNC, 0755);
if (f == -1) {
    error_exit("Error Exit");
}
FILE *fd = fopen("tmp", "w+");
Thanks in advance.
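No answer appears in this thread, but one likely cause worth checking (an assumption based on the code shown, not something stated in the question): the child inherits the descriptor f from the parent, so both processes share a single file offset. After the child's exec output has been written, the parent's offset is already at the end of the file, and read returns 0. Rewinding before the read would address that:

if (lseek(f, 0, SEEK_SET) == (off_t)-1) {   /* move the shared offset back to the start */
    perror("lseek");
}
n = read(f, buffer, MAX_BUFF-1);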

Differences between writing/reading binary/text in c

I'm working on a client/server program where the client sends/receives files. The files may be text files or binary files. However, I am not sure what changes I need to make, if any, to accommodate either file type. Basically I am looking to read/write a file on the server side without caring what type of file it is, and I would like to be able to do so without checking what type of file it is. Would code like this work? Why or why not?
Server snippet:
//CREATING/WRITING TO A FILE
//we are ready to begin reading data from the client, and storing it
int fd = open(pathname, O_CREAT | O_WRONLY | O_EXCL, S_IRUSR | S_IWUSR);
while (nbytes < bytes)
{
    //only read the necessary # of bytes: the remaining bytes vs max buffer size
    int min_buffer = (bytes - nbytes) < BUFFER_SIZE ? (bytes - nbytes) : BUFFER_SIZE;
    length = recv(client->client_socket, contents, min_buffer, 0);
    if (fd < 0) //the fd is bad, but we need to continue reading bytes anyway
    {
        nbytes += length;
        continue;
    }
    if (length <= 0)
        break; //string empty or error occurred...this error means the client closed?
    if (write(fd, contents, min_buffer) != min_buffer)
    {
        //printf("There was an error writing to the file.\n");
    }
    else
    {
        nbytes += length;
    }
}
//READING A STORED FILE AND SENDING THE DATA TO CLIENT
int fd = open(pathname, O_RDWR, S_IRUSR | S_IWUSR);
if (fd >= 0)
{
    while (bytes > 0)
    {
        bytes = read(fd, buffer, BUFFER_SIZE);
        if (bytes > 0) //we have read some bytes
        {
            //send the client the data
            write(client->client_socket, buffer, bytes);
        }
        else if (bytes < 0)
        {
            //some error occurred
            write(client->client_socket, "ERROR: Could not read\n", 22);
            return;
        }
    }
}
So if the client sends a binary file vs. a text file, would this code cause issues? (We can assume the client knows what type of file to expect.)
Note: Another confusing detail is that there are tutorials for writing/reading binary files in C that didn't seem to show any real differences from regular files, which is what led me here.
Just do everything with "binary" files. Linux has no difference between "text" and "binary" files at the OS level; there are just files containing bytes. I.e., expect that a file can contain every possible byte value, and don't write different code for different kinds of content.
There is a difference on Windows: text mode on Windows means that a line break (\n) in the program gets converted to/from \r\n when writing to / reading from a file. A text file written that way and then read in binary mode will contain these two bytes instead of the original \n, and vice versa. (Additionally, MS isn't very clear in the documentation that this is the only difference, which can easily confuse beginners.)
If you use standard C fopen and fclose instead of the Linux-specific open etc., you can specify whether to open a file in binary or text mode (on Linux too). This is because code using fopen should work on Windows and Linux without any OS-specific changes; but what you choose in fopen doesn't matter when running on Linux (which can be verified by reading the source code of fopen etc.).
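As a small illustration of that last point, here is a sketch of opening the same file in both modes with standard C (the file name is made up): on Linux the two calls behave identically, while on Windows only the "rb" variant leaves \r\n byte pairs untranslated.

#include <stdio.h>

int main(void) {
    /* Text mode: on Windows, \r\n in the file is converted to \n when read. */
    FILE *ft = fopen("data.bin", "r");

    /* Binary mode: bytes are passed through unchanged on every platform. */
    FILE *fb = fopen("data.bin", "rb");

    if (ft) fclose(ft);
    if (fb) fclose(fb);
    return 0;
}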
And about the sockets:
Linux: No difference (again)
Windows: No difference either. There are just bytes, and no strange line-break conversions.
I tore my hair out for a day over a binary/text file issue. I was outputting binary data into "files" (apparently text files... after years and years of C I'd always thought a file was a file) and kept getting spurious characters inserted into the output. I went so far as to download a new compiler, but had the same problem. The issue? When I output hex 0A using any of the fprintf family of statements, hex 0D was being inserted. Yes, line feed characters (0A) were being replaced by carriage return/line feed (0D 0A). It's a legacy "end of line" issue based on how different systems have developed. The tough part of finding the problem was realizing 0A was being interpreted as more than just a binary value, and was actually being recognized as a line feed.

Low-level C I/O: When read from one file and write to another I'm hitting an infinite loop

I am working on an assignment that only allows use of low-level I/O (read(), write(), lseek()) as well as perror().
I have been able to open the necessary in and out files with the correct permissions, but when I write the output I get the in file's contents copied to out in an infinite loop. See the snippet below...
void *buf = malloc(1024);
while ((n = read(in, buf, 1024)) > 0) {
    if (lseek(in, n, SEEK_CUR) == -1) {
        perror("in file not seekable");
        exit(-1);
    }
    while ((m = write(out, buf, n)) > 0) {
        if (lseek(out, m, SEEK_CUR) == -1) {
            perror("out file not seekable");
            exit(-1);
        }
    }
    if (m == -1) { perror("error writing out"); exit(-1); }
}
if (n == -1) { perror("error reading in"); exit(-1); }
I have removed some error trapping from my code; you can assume the variables are initialized and the include statements are there.
The problem is the inner loop:
while((m = write(out, buf, n)) > 0){
should really be
if((m = write(out, buf, n)) > 0){
You only want buf to be written once, not infinitely many times. What you also need to handle is short writes, that is, when write returns with m < n && m > 0.
Also, the lseek() calls are wrong, but they don't lead to the loop. read() and write() already advance the current file offset. You do not need to manually advance it, unless you want to skip bytes in the input or output file (note that in the output file case, on UNIX, skipping bytes may lead to so-called "holes" in files, regions which are zero but don't occupy disk space).
Why are you seeking on the input file after your read? Since you will, at most, read 1024 bytes (meaning n will be somewhere between 0 and 1024), you will be continuously seeking to somewhere beyond where you've left the input file pointer so that you'll lose data in the transfer (including probably beyond the end of the file when you get near the end).
This may be one cause why you have an infinite loop but the far more insidious one is the use of while for the write. Since this will return values greater than zero on success, you will continuously write the first chunk to the file over and over. At least until you run out of disk space or other resources.
You also don't need the seek on the write either. The read and write calls do what they have to do and advance the file pointer correctly for the next read or write - it's not something you have to do manually.
You can probably simplify the whole thing to:
while ((n = read(in, buf, 1024)) > 0) {
    if ((m = write(out, buf, n)) != n) {
        perror("error writing out");
        exit(-1);
    }
}
which has the advantages of:
getting rid of the seek calls;
removing the 'infinite' loop;
checking that you've written all the bytes requested.
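If short writes do need to be handled explicitly (the case mentioned above where write returns m with 0 < m < n), a sketch of the loop might look like this, assuming buf is declared as a char array so the pointer arithmetic is well defined:

while ((n = read(in, buf, 1024)) > 0) {
    ssize_t done = 0;
    while (done < n) {                      /* keep writing until the whole chunk is out */
        m = write(out, buf + done, n - done);
        if (m < 0) {
            perror("error writing out");
            exit(-1);
        }
        done += m;
    }
}
if (n == -1) { perror("error reading in"); exit(-1); }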

c read() causing bad file descriptor error

Context for this is that the program is basically reading through a file stream, 4K chunks at a time, looking for a certain pattern. It starts by reading in 4K, and if it doesn't find the pattern there, it starts a loop that reads in the next 4K chunk (rinse and repeat until EOF or the pattern is found).
On many files the code is working properly, but some files are getting errors.
The code below is obviously highly redacted, which I know might be annoying, but it includes ALL lines that reference the file descriptor or the file itself. I know you don't want to take my word for it, since I'm the one with the problem...
Having done a LITTLE homework before crying for help, I've found:
The file descriptor always happens to be 6 (it's also 6 for the files that work), and that number isn't changed through the life of the execution. I don't know if that's useful info or not.
By inserting print statements after every operation that accesses the file descriptor, I've also found that successful files go through the following cycle "open-read-close-close" (i.e. the pattern was found in the first 4K)
Unsuccessful files go "open-read-read ERROR (Bad File Descriptor)-close." So no premature close, and it's getting in the first read successfully, but the second read causes the Bad File Descriptor error.
int function(char *file)
{
    int len, fd, go = 0;
    char buf[4096];

    if ((fd = open(file, O_RDONLY)) <= 0)
    {
        my_error("Error opening file %s: %s", file, strerror(errno));
        return NULL;
    }
    //first read
    if ((len = read(fd, buf, 4096)) <= 0)
    {
        my_error("Error reading from file %s: %s", file, strerror(errno));
        close(fd); return NULL;
    }
    //pattern-searching
    if (/*conditions*/)
    {
        /* we found it, no need to keep looking */
        close(fd);
    }
    else
    {
        //reading loop
        while (!go)
        {
            if (/*conditions*/)
            {
                my_error("cannot locate pattern in file %s", file);
                close(fd); return NULL;
            }
            //next read
            if ((len = read(fd, buf, 4096)) <= 0) /**** FAILS HERE *****/
            {
                my_error("Error reading from file, possible bad message %s: %s",
                         file, strerror(errno));
                close(fd); return NULL;
            }
            if (/*conditions*/)
            {
                close(fd);
                break;
            }
            //pattern searching
            if (/*conditions*/)
            {
                /* found the pattern */
                go++; //break us out of the while loop
                //stuff
                close(fd);
            }
            else
            {
                //stuff, and we will loop again for the next chunk
            }
        } /*end while loop*/
    } /*end else statement*/
    close(fd);
}
Try not to worry about the pattern-reading logic - all operations are done on the char buffer, not on the file, so it ought to have no impact on this problem.
At EOF, read returns 0 (which falls into the if ... <= 0 branch) but does not set errno, so errno may still hold a stale error code.
Try testing for 0 and negative (error, -1) return values separately.
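A sketch of that check, reusing the variable names and error helper from the question's code:

len = read(fd, buf, 4096);
if (len == 0)
{
    /* genuine EOF: this read did not set errno, so don't report it as an error */
    close(fd);
    return NULL;
}
else if (len < 0)
{
    /* real error: errno was set by this read */
    my_error("Error reading from file %s: %s", file, strerror(errno));
    close(fd);
    return NULL;
}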
Regarding "strace": I've used it a little at home, and in previous jobs. Unfortunately, it's not installed in my current work environment. It is a useful tool, when it's available. Here, I took the "let's read the fine manual" (man read) approach with the questioner :-)

Closing a pipe causes the wrong line to be read from a file (that's independent of the pipe)

I'm writing a program that has a number of child processes. The parent has two pipes for writing to the child; each pipe is dup2ed onto a separate file descriptor in the child. The parent reads through a script file with a number of commands to work out what it needs to do. This seems to work fine, except I've noticed that if I close a pipe from parent to child and the next line in the script file is completely blank, it gets read in weirdly. If I print it out it comes out as ���<, which my program (understandably) doesn't know what to do with.
There is seemingly no link at all between the file pointer that I'm reading in from and the pipe that I'm closing.
My code for reading in lines is below. It normally works perfectly; just in this case it doesn't, and I can't work out why.
char *get_script_line(FILE *script)
{
    char *line;
    char charRead;
    int placeInStr = 0;
    int currentSizeOfStr = 80;
    int maxStrLength = 64;

    /* initialize the line */
    line = (char *)malloc(sizeof(char) * currentSizeOfStr);
    while ((charRead = fgetc(script)) != EOF && charRead != '\n') {
        /* read each char from input and put it in the array */
        line[placeInStr] = charRead;
        placeInStr++;
        if (placeInStr == currentSizeOfStr) {
            /* array will run out of room, need to allocate */
            /* some more space */
            currentSizeOfStr = placeInStr + maxStrLength;
            line = realloc(line, currentSizeOfStr);
        }
    }
    if (charRead == EOF && placeInStr == 0)
    {
        /* EOF, return null */
        return NULL;
    }
    return line;
}
And when I close a pipe I'm doing this:
fflush(pipe);
fclose(pipe);
Does anyone have any idea of what I might be doing wrong? Or have some sort of an idea of how to debug this problem?
edit: To be clear my input script might (basically) look something like this:
Start a new child and set up pipes etc
Send EOF to one of the pipes to the child
(BLANK LINE)
Do something else
It works fine otherwise, and I'm fairly sure closing the pipes the way I am will send EOF as it's supposed to.
I suggest running your program under valgrind to check for memory errors.
Actually... I think I just fixed it. I wasn't freeing the line returned by get_script_line when I was done with it, but once I put that in, it all started working correctly. I have no idea why that would have an effect though. I'd be interested if anyone could tell me why not freeing memory could have that effect?
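One observation about the code above (an editorial note, not something confirmed in the thread): get_script_line never null-terminates the buffer it returns, and charRead should be declared as an int for the comparison with EOF to be reliable. An unterminated string would explain garbage such as ���< appearing when the line is printed, and adding or removing a free() call can hide or expose the problem simply by changing the heap layout. A sketch of the function's tail with explicit termination:

    /* ... same reading loop as above, but with charRead declared as int ... */
    if (charRead == EOF && placeInStr == 0) {
        free(line);                 /* nothing was read, so don't leak the buffer */
        return NULL;
    }
    line[placeInStr] = '\0';        /* terminate the string before returning it */
    return line;
}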
