I am trying to open a tar.gz file and read its contents into a buffer. I then want to create another tar.gz file and write the buffer to it. Would the new file be the same as the previous one? The code is as follows:
int main()
{
FILE *fp,*fp1;
int len,len1;
int length=0;
char *buf=malloc(1024);
char *buf1=malloc(1024);
fp=fopen("/home/sharwari/Downloads/criu-1.4/3049.tar.gz","rb");
while((len=fread(buf,1024,1,fp))>0)
{
printf("%s",buf);
}
fclose(fp);
fp1=fopen("/home/sharwari/imp5.tgz","wb");
if(fp1==NULL)
printf("\n\terror in creating file...");
len1=fwrite(buf,1,strlen(buf),fp1);
printf("\n\t No. of bytes written: %d",len1);
fclose(fp1);
}
You have the right idea, but there are a number of issues with your code, including at least the following (a corrected sketch follows the list):
The while loop will result in discarding all but the last chunk read, because you keep reading 1024 bytes into buf and overwriting its previous contents.
You cannot use strlen on binary data.
You need more error checking on fread to determine whether you successfully read all the way to the end of the file or whether an error occurred. Read the fread man page (it will point you to feof and ferror).
It's good practice to free any malloced memory.
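Putting those points together, here is a hedged sketch of a corrected version (paths taken from the question; error handling kept short):
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    FILE *in  = fopen("/home/sharwari/Downloads/criu-1.4/3049.tar.gz", "rb");
    FILE *out = fopen("/home/sharwari/imp5.tgz", "wb");
    if (in == NULL || out == NULL) {
        perror("fopen");
        return 1;
    }

    char *buf = malloc(1024);
    if (buf == NULL) {
        perror("malloc");
        return 1;
    }

    size_t n;
    /* size 1, count 1024: fread returns the number of bytes actually read,
       so a short final chunk (or a file smaller than 1024 bytes) still gets copied */
    while ((n = fread(buf, 1, 1024, in)) > 0) {
        if (fwrite(buf, 1, n, out) != n) {
            perror("fwrite");
            return 1;
        }
    }
    if (ferror(in)) {   /* distinguish a read error from a normal end of file */
        perror("fread");
        return 1;
    }

    free(buf);
    fclose(in);
    fclose(out);
    return 0;
}
Copied this way, byte for byte, the new tar.gz should be identical to the original.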
You are calling fwrite(buf,1,strlen(buf),fp1); with the wrong arguments. The prototype is
size_t fwrite(const void *ptr, size_t size, size_t nmemb, FILE *stream);
and strlen(buf) is not the number of bytes you actually read, since the data is binary and may contain embedded zero bytes.
You are also reading in blocks of 1024 bytes (fread(buf,1024,1,fp)), which returns 0 complete items if the file is smaller than 1024 bytes, so a short file would never be copied.
With the code below, you copy byte by byte from the source file to the destination file:
#include <stdio.h>
#include <stdlib.h>

int main()
{
    FILE *fp, *fp1;
    int len, len1 = 0;
    char buf[1];

    fp = fopen("/home/sharwari/Downloads/criu-1.4/3049.tar.gz", "rb");
    fp1 = fopen("/home/sharwari/imp5.tgz", "wb");
    if (fp == NULL || fp1 == NULL) {
        printf("\n\terror opening the files...");
        return -1;
    }

    /* copy one byte at a time until fread reports end of file or an error */
    while ((len = fread(buf, 1, 1, fp)) > 0) {
        len1 += fwrite(buf, 1, 1, fp1);
    }

    printf("\n\t No. of bytes written: %d", len1);
    fclose(fp1);
    fclose(fp);
    return 0;
}
Isn't it a bit of overkill to fread into a buffer? By definition fopen, fread, etc. are already buffered and deal with the actual I/O in an optimal manner. The code should be more like
int i;
while ((i = fgetc(in)) != EOF)   /* parentheses matter: assign first, then compare */
    fputc(i, out);
I have to write a program which reads words from a file passed on the command line and then overwrites it with the read words uppercased.
This is my code
void toUpperCase(char* string) {
int i=0;
while(string[i])
{
string[i]=toupper(string[i]);
i++;
} }
int main(int argc, char** argv) {
if(argc==1)
{
puts("Error: INSERT PATH");
exit(0);
}
char* file=argv[1];
FILE* fd=fopen(file,"r+");
if(fd<0)
{
perror("Error opening file: ");
exit(0);
}
char buffer[30][30];
int i=0;
while(!feof(fd))
{
fscanf(fd,"%s",buffer[i]);
i++;
}
int j=0;
for(j=0; j<i; j++)
{
toUpperCase(buffer[j]);
fwrite(buffer[j],strlen(buffer[j]),1,fd);
}
fclose(fd);
return 0; }
but this program appends the words contained in buffer[][] instead of overwriting the file.
If the file content was something like pippo pluto foo then, after the execution, it is pippo pluto fooPIPPOPLUTOFOO instead of PIPPO PLUTO FOO.
Where am I wrong? Thank you
You have to reset the file position indicator using fseek, as fscanf will advance it. Something like
fseek(fd, length_of_read_string, SEEK_CUR);
This allows you to read the file in chunks, but it will be tricky to get right. Or of course reset it to the file start because you read everything in 1 go:
fseek(fd, 0L, SEEK_SET);
I strongly recommend writing the modified data into a new file, and then, after the program has run, deleting the initial file and renaming the new one. That will also take care of another issue with your program: you are reading the entire file into memory before handling it.
If you want to do in-place translation that doesn't change lengths, you can open the source file in two streams and then do read-chunk, write-chunk in lockstep. That has the advantage of being super-easy to convert to a non-in-place version that will work with nonseekable files too (stdin/stdout, pipes, and sockets).
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <ctype.h> //toupper
static inline void upcaseStr(char* str){
for(;*str;str++) { *str=toupper(*str); }
}
int upcaseStream(FILE* in, FILE* out){
char buf[BUFSIZ]; //BUFSIZ is an implementation-defined constant for an optimal buffer size
while(fgets(buf, BUFSIZ, in)){
upcaseStr(buf);
if(fputs(buf, out) == EOF){ return 1; }
}
if(!feof(in)){ return 1; }
return 0;
}
int main(int argc, char **argv)
{
//default in and out
FILE* in = stdin;
FILE* out = stdout;
if(argc == 2) {
in = fopen(argv[1], "r"); //for reading
out = fopen(argv[1], "r+"); //for writing (and reading) starting at the beginning
if(!(in && out)){
fprintf(stderr, "Error opening file %s for reading and writing: %s\n", argv[1], strerror(errno));
return 1;
}
}
return upcaseStream(in, out);
}
If you do use the in-place version, then in the unlikely event that the if(fputs(buf, out) == EOF){ return 1; } line should return, you're screwed unless you have a backup copy of the file. :)
Note:
You shouldn't name your FILE pointers fd because C people will tend to think you mean "file descriptor". FILE is a struct around a file descriptor. A file descriptor is just an int that you can use for FILE access with the raw system calls. FILE streams are an abstraction layer on top of file descriptors--they aren't file descriptors.
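The safer new-file-then-rename approach recommended earlier might look something like the following sketch; the temporary file name is just an example, and it reuses upcaseStream() from above:
#include <stdio.h>

int upcaseFileViaRename(const char *path)
{
    FILE *in = fopen(path, "r");
    if (in == NULL) { return 1; }
    FILE *out = fopen("upcased.tmp", "w");
    if (out == NULL) { fclose(in); return 1; }

    int rc = upcaseStream(in, out);   /* defined above */

    fclose(in);
    if (fclose(out) == EOF) { rc = 1; }

    /* Only replace the original once the new file was written successfully.
       remove() before rename() keeps this portable to systems where rename()
       refuses to overwrite an existing file. */
    if (rc == 0) {
        remove(path);
        if (rename("upcased.tmp", path) != 0) { rc = 1; }
    }
    return rc;
}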
As you read from the file, its internal position indicator gets moved. Once you start writing, you start writing from that position on, which happens to be at the end of the file. So you effectively append the data to the file.
Rewind the handle to reset the position indicator before writing into the file:
rewind(fp);
On a side note, you are reading the file incorrectly:
while(!feof(fd))
{
fscanf(fd,"%s",buffer[i]);
i++;
}
When you reach the end of the file, fscanf will return an error and not read anything, yet you still increment variable i, as if the read was successful. And then you check feof() for end-of-file, but i was already incremented.
Check feof() and the return value of fscanf() immediately after calling fscanf():
while(1)
{
    int read = fscanf(fd,"%s",buffer[i]);
    if( read != 1 )
    {
        if( feof(fd) )
            break;
        // otherwise handle the invalid read
    }
    i++;
}
Think about what happens if the string is longer than 29 characters and/or the file contains more than 30 strings. char buffer[30][30];
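One hedged way to guard against both problems, keeping the question's buffer[30][30] (a sketch, not a full fix):
/* %29s leaves room for the terminating '\0' in each 30-char row,
   and the count check stops before the 30-row array overflows */
while (i < 30 && fscanf(fd, "%29s", buffer[i]) == 1)
    i++;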
Welcome to StackOverflow!
Reopen the stream with fopen and the "w" mode:
fd=fopen(file, "w");
This opens the file and, if it already has any contents, truncates them away (the file is cleared to zero length).
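Applied to the question's code, that could look like this sketch; freopen() is just one way to do the close-and-reopen in a single call, and the buffer-size limits noted in the comments still apply:
/* after the words have been read into buffer[][] ... */
fd = freopen(file, "w", fd);      /* reopens the same path truncated to zero length */
if (fd == NULL) {
    perror("Error reopening file: ");
    exit(1);
}
for (j = 0; j < i; j++) {
    toUpperCase(buffer[j]);
    fprintf(fd, "%s ", buffer[j]);
}
fclose(fd);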
I want to read the data of the file into a string.
Is there a function that reads the whole file into a character array?
I open the file like this:
FILE *fp;
for(i = 0; i < filesToRead; i++)
{
fp = fopen(name, "r");
// Read into a char array.
}
EDIT: So how do I read it "line by line", for example with getchar()?
Here are three ways to read an entire file into a contiguous buffer:
Figure out the file length, then fread() the whole file. You can figure out the length with fseek() and ftell(), or you can use fstat() on POSIX systems. This will not work on sockets or pipes; it only works on regular files.
Read the file into a buffer which you dynamically expand as you read data using fread(). Typical implementations start with a "reasonable" buffer size and double it each time space is exhausted. This works on any kind of file.
On POSIX, use fstat() to get the file size and then mmap() to put the entire file in your address space (a sketch of this approach follows). This only works on regular files.
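A hedged sketch of that third, POSIX-only approach; the file name is just an example, and an empty file would need extra handling since mmap() rejects a zero length:
#include <stdio.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("test.txt", O_RDONLY);
    if (fd == -1) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) == -1) { perror("fstat"); return 1; }

    /* map the whole regular file read-only; note the mapping is not nul-terminated */
    char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    fwrite(data, 1, st.st_size, stdout);   /* use the contents */

    munmap(data, st.st_size);
    close(fd);
    return 0;
}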
You can do the following:
FILE *fp;
for(i = 0; i < filesToRead; i++)
{
    fp = fopen(name, "r");
    /* fgets() returns NULL at end of file, so test its return value
       rather than comparing the FILE pointer to EOF */
    while (fgets(filestring[i], BUFFER_SIZE, fp) != NULL)
    {
        /* process the line in filestring[i] here */
    }
    fclose(fp);
}
Of course you would have to make this more robust, checking whether fopen succeeded, whether your buffer can hold all the data, and so on...
You might use something like the following, where you read each line, carefully check the result and pass it to a data structure of your choosing. I have not shown how to properly allocate memory, but you can malloc up front and realloc when necessary.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>   // strerror
#include <errno.h>

#define FILE_BUFFER_SIZE 1024

int file_read_line(FILE *fp, char *buffer)
{
    // Read the line to buffer
    if (fgets(buffer, FILE_BUFFER_SIZE, fp) == NULL) {
        // Check for End of File first; fgets also returns NULL on a read error
        if (feof(fp))
            return 0;
        return -errno;
    }
    return 1;
}
void file_read(FILE *fp)
{
int read;
char buffer[FILE_BUFFER_SIZE];
while (1) {
// Clear buffer for next line
buffer[0] = '\0';
// Read the next line with the appropriate read function
read = file_read_line(fp, buffer);
// file_read_line() returns a negative number only when an error occurred
if (read < 0) {
fprintf(stderr, "failed to read line: %s (%d)\n",
strerror(errno), errno);
exit(EXIT_FAILURE);
}
// Pass the read line `buffer` to whatever you want
// End of File reached
if (read == 0)
break;
}
return;
}
I want to read a text file and transfer its contents to another text file in C. Here is my code:
char buffer[100];
FILE* rfile=fopen ("myfile.txt","r+");
if(rfile==NULL)
{
printf("couldn't open File...\n");
}
fseek(rfile, 0, SEEK_END);
size_t file_size = ftell(rfile);
printf("%d\n",file_size);
fseek(rfile,0,SEEK_SET);
fread(buffer,file_size,1,rfile);
FILE* pFile = fopen ( "newfile.txt" , "w+" );
fwrite (buffer , 1 ,sizeof(buffer) , pFile );
fclose(rfile);
fclose (pFile);
return 0;
}
The problem that I am facing is the appearance of unnecessary data in the receiving file.
I tried the fwrite call with both sizeof(buffer) and file_size. In the first case it displays a greater number of useless characters, while in the second case the number of useless characters is only 3. I would really appreciate it if someone pointed out my mistake and told me how to get rid of these useless characters.
You are writing the whole content of buffer (100 chars) into the receiving file. You need to write exactly the amount of data you read:
fwrite(buffer, 1, file_size, pFile)
Adding more checks for your code:
#include <stdio.h>
#include <stdlib.h>
#define BUFFER_SIZE 100
int main(void) {
char buffer[BUFFER_SIZE];
size_t file_size;
size_t ret;
FILE* rfile = fopen("input.txt","r+");
if(rfile==NULL)
{
printf("couldn't open File \n");
return 0;
}
fseek(rfile, 0, SEEK_END);
file_size = ftell(rfile);
fseek(rfile,0,SEEK_SET);
printf("File size: %d\n",file_size);
if(!file_size) {
printf("Warring! Empty input file!\n");
} else if( file_size >= BUFFER_SIZE ){
printf("Warring! File size greater than %d. File will be truncated!\n", BUFFER_SIZE);
file_size = BUFFER_SIZE;
}
ret = fread(buffer, sizeof(char), file_size, rfile);
if(file_size != ret) {
printf("I/O error\n");
} else {
FILE* pFile = fopen ( "newfile.txt" , "w+" );
if(!pFile) {
printf("Can not create the destination file\n");
} else {
ret = fwrite (buffer , 1 ,file_size , pFile );
if(ret != file_size) {
printf("Writing error!");
}
fclose (pFile);
}
}
fclose(rfile);
return 0;
}
You need to check the return values from all calls to fseek(), fread() and fwrite(), even fclose().
In your example, you have fread() read 1 block which is 100 bytes long. It's often a better idea to reverse the parameters, like this: ret = fread(buffer,1,file_size,rfile). The ret value will then show how many bytes it could read, instead of just saying it could not read a full block.
Here is an implementation of an (almost) general purpose file copy function:
void fcopy(FILE *f_src, FILE *f_dst)
{
char buffer[BUFSIZ];
size_t n;
while ((n = fread(buffer, sizeof(char), sizeof(buffer), f_src)) > 0)
{
if (fwrite(buffer, sizeof(char), n, f_dst) != n)
err_syserr("write failed\n");
}
}
Given an open file stream f_src to read and another open file stream f_dst to write, it copies (the remainder of) the file associated with f_src to the file associated with f_dst. It does so moderately economically, using the buffer size BUFSIZ from <stdio.h>. Often, you will find that bigger buffers (such as 4 KiB or 4096 bytes, even 64 KiB or 65536 bytes) will give better performance. Going larger than 64 KiB seldom yields much benefit, but YMMV.
The code above calls an error reporting function (err_syserr()) which is assumed not to return. That's why I designated it 'almost general purpose'. The function could be upgraded to return an int value, 0 on success and EOF on a failure:
enum { BUFFER_SIZE = 4096 };
int fcopy(FILE *f_src, FILE *f_dst)
{
char buffer[BUFFER_SIZE];
size_t n;
while ((n = fread(buffer, sizeof(char), sizeof(buffer), f_src)) > 0)
{
if (fwrite(buffer, sizeof(char), n, f_dst) != n)
return EOF; // Optionally report write failure
}
if (ferror(f_src) || ferror(f_dst))
return EOF; // Optionally report I/O error detected
return 0;
}
Note that this design doesn't open or close files; it works with open file streams. You can write a wrapper that opens the files and calls the copy function (or includes the copy code into the function). Also note that to change the buffer size, I simply changed the buffer definition; I didn't change the main copy code. Also note that any 'function call overhead' in calling this little function is completely swamped by the overhead of the I/O operations themselves.
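For example, a wrapper around the second version might look like this sketch (the function name and the choice to report errors via EOF are mine, not part of the answer above):
#include <stdio.h>

int copy_file(const char *src_name, const char *dst_name)
{
    FILE *src = fopen(src_name, "rb");
    if (src == NULL)
        return EOF;
    FILE *dst = fopen(dst_name, "wb");
    if (dst == NULL)
    {
        fclose(src);
        return EOF;
    }
    int rc = fcopy(src, dst);       /* the fcopy() defined above */
    if (fclose(dst) == EOF)         /* a failed close can mean buffered data was lost */
        rc = EOF;
    fclose(src);
    return rc;
}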
Note that ftell returns a long, not a size_t; that shouldn't matter here. For a text stream, ftell is not necessarily a byte offset, though; the standard only requires its result to be an acceptable argument to fseek. You might get a better result from fgetpos, but it has the same portability issue from the lack of specification by the standard. (Confession: I didn't check the standard itself; got all this from the manpages.)
The more robust way to get a file-size is with fstat.
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>

struct stat stat_buf;
if (fstat(fileno(rfile), &stat_buf) == -1)   /* fstat() takes a file descriptor; fileno() gets it from the stream */
    perror("fstat"), exit(EXIT_FAILURE);
file_size = stat_buf.st_size;
I think the parameters you passed to fwrite are not in the right sequence.
To me it should be like this:
fwrite(buffer,SIZE,1,pFile)
as the syntax of fwrite is
size_t fwrite(const void *ptr, size_t size, size_t nmemb, FILE *stream);
The function fwrite() writes nmemb elements of data, each size bytes long, to the stream pointed to by stream, obtaining them from the location given by ptr.
So change the sequence and try again.
I want to create an exact copy of a file (.bmp) in C:
#include<stdio.h>
int main()
{
FILE *str,*cptr;
if((str=fopen("org.bmp","rb"))==NULL)
{
fprintf(stderr,"Cannot read file\n");
//return 1;
}
if((cptr=fopen("copy.bmp","wb"))==NULL)
{
fprintf(stderr,"Cannot open output file\n");
//return 1;
}
fseek(str, 0, SEEK_END);
long size=ftell(str);
printf("Size of FILE : %.2f MB \n",(float)size/1024/1024);
char b[2];
for(int i=0;i<size;i++)
{
fread(b,1,1,str);
fwrite(b,1,1,cptr);
}
fseek(cptr, 0, SEEK_END);
long csize=ftell(str);
printf("Size of created FILE : %.2f MB \n",(float)csize/1024/1024);
fclose(str);
fclose(cptr);
return 0;
}
Although it creates a file of the same size but windows throws an error while trying to view the newly created copy of bitmap.
Why is this happening?
You have moved the file pointer for the input file to the end of the file before you start reading it. You need to restore it to the beginning.
Change:
fseek(str, 0, SEEK_END);
long size=ftell(str);
to:
fseek(str, 0, SEEK_END);
long size=ftell(str);
fseek(str, 0, SEEK_SET);
Note that your code is devoid of error checking - if you had at least checked the result of fread then your mistake would have been immediately apparent. Take-home message: don't cut corners when it comes to error-checking - it will pay dividends later.
You need to seek back to the start of the original file because you are continually reading at the EOF and therefore not making a copy of the file contents, just whatever happens to be in your b[] array.
You are not checking the return codes of fread() and fwrite(). If you had been doing that you might have solved this problem from the return codes.
If you check the size of the original file and the copy in bytes, it should tell you the issue.
This code reads and writes the file in 1 KB blocks using the low-level read()/write() system calls.
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

#define KB 1024

int main()
{
    int ifd, ofd;
    ssize_t rcnt;
    char buf[KB];

    ifd = open("orig.jpg", O_RDONLY);
    if (ifd == -1)                        /* open() returns -1 on error, not 0 */
    {
        fprintf(stderr, "Cannot read file\n");
        return 1;
    }
    ofd = open("copy.jpg", O_WRONLY|O_CREAT, 0644);   /* O_CREAT requires a mode */
    if (ofd == -1)
    {
        fprintf(stderr, "Cannot open output file\n");
        return 1;
    }
    while ((rcnt = read(ifd, buf, KB)) > 0)
        write(ofd, buf, rcnt);
    close(ifd);
    close(ofd);
}
This is a nice function to copy files! Copying char by char is better than reading the whole file at once, because if the file is too long a single buffer may not be able to hold it all.
double copy(char *input, char *output) {
    FILE *f_in = fopen(input, "rb");   /* binary mode, so a .bmp is copied unchanged */
    FILE *f_out = fopen(output, "ab");
    if (!f_in || !f_out) {
        if (f_in) fclose(f_in);        /* fclose(NULL) is undefined behaviour */
        if (f_out) fclose(f_out);
        return -1;
    }
    int c;
    while ((c = fgetc(f_in)) != EOF)
        fputc(c, f_out);
    fclose(f_in);
    fseek(f_out, 0, SEEK_END);
    long size = ftell(f_out);
    fclose(f_out);
    return (double)size / 1024 / 1024; // MB
}
This function returns the size of the output file in MB. If it wasn't successful, it returns -1.
Use this function like this:
double output;
if ((output = copy("What ever you want to copy", "Where ever it should be printed")) != -1)
printf("Size of file: %lf MB.\n", output);
Hope this will help :)
I copied your first code and also used the first solution; you just need to add this line to your program: fseek(str, 0, SEEK_SET); and you are done, your copy of the bitmap will be produced.
I have a text file named test.txt
I want to write a C program that can read this file and print the content to the console (assume the file contains only ASCII text).
I don't know how to get the size of my string variable. Like this:
char str[999];
FILE * file;
file = fopen( "test.txt" , "r");
if (file) {
while (fscanf(file, "%s", str)!=EOF)
printf("%s",str);
fclose(file);
}
The size 999 doesn't work because the string returned by fscanf can be larger than that. How can I solve this?
The simplest way is to read a character, and print it right after reading:
int c;
FILE *file;
file = fopen("test.txt", "r");
if (file) {
while ((c = getc(file)) != EOF)
putchar(c);
fclose(file);
}
c is int above, since EOF is a negative number, and a plain char may be unsigned.
If you want to read the file in chunks, but without dynamic memory allocation, you can do:
#define CHUNK 1024 /* read 1024 bytes at a time */
char buf[CHUNK];
FILE *file;
size_t nread;
file = fopen("test.txt", "r");
if (file) {
while ((nread = fread(buf, 1, sizeof buf, file)) > 0)
fwrite(buf, 1, nread, stdout);
if (ferror(file)) {
/* deal with error */
}
fclose(file);
}
The second method above is essentially how you will read a file with a dynamically allocated array:
char *buf = malloc(chunk);
if (buf == NULL) {
/* deal with malloc() failure */
}
/* otherwise do this. Note 'chunk' instead of 'sizeof buf' */
while ((nread = fread(buf, 1, chunk, file)) > 0) {
/* as above */
}
Your method of fscanf() with %s as format loses information about whitespace in the file, so it is not exactly copying a file to stdout.
There are plenty of good answers here about reading it in chunks, I'm just gonna show you a little trick that reads all the content at once to a buffer and prints it.
I'm not saying it's better. It's not, and as Ricardo mentioned it can sometimes be bad, but I find it's a nice solution for the simple cases.
I sprinkled it with comments because there's a lot going on.
#include <stdio.h>
#include <stdlib.h>
char* ReadFile(char *filename)
{
char *buffer = NULL;
int string_size, read_size;
FILE *handler = fopen(filename, "r");
if (handler)
{
// Seek the last byte of the file
fseek(handler, 0, SEEK_END);
// Offset from the first to the last byte, or in other words, filesize
string_size = ftell(handler);
// go back to the start of the file
rewind(handler);
// Allocate a string that can hold it all
buffer = (char*) malloc(sizeof(char) * (string_size + 1) );
// Read it all in one operation
read_size = fread(buffer, sizeof(char), string_size, handler);
// fread doesn't set it so put a \0 in the last position
// and buffer is now officially a string
buffer[string_size] = '\0';
if (string_size != read_size)
{
// Something went wrong, throw away the memory and set
// the buffer to NULL
free(buffer);
buffer = NULL;
}
// Always remember to close the file.
fclose(handler);
}
return buffer;
}
int main()
{
char *string = ReadFile("yourfile.txt");
if (string)
{
puts(string);
free(string);
}
return 0;
}
Let me know if it's useful or you could learn something from it :)
Instead, just directly print the characters onto the console, because the text file may be very large and you may require a lot of memory.
#include <stdio.h>
#include <stdlib.h>

int main() {
    FILE *f;
    int c;                    /* int, not char, so EOF can be detected reliably */
    f = fopen("test.txt", "rt");
    if (f == NULL) {
        return 1;
    }
    while ((c = fgetc(f)) != EOF) {
        printf("%c", c);
    }
    fclose(f);
    return 0;
}
Use "read()" instead o fscanf:
ssize_t read(int fildes, void *buf, size_t nbyte);
DESCRIPTION
The read() function shall attempt to read nbyte bytes from the file associated with the open file descriptor, fildes, into the buffer pointed to by buf.
Here is an example:
http://cmagical.blogspot.com/2010/01/c-programming-on-unix-implementing-cat.html
Working part from that example:
f=open(argv[1],O_RDONLY);
while ((n=read(f,l,80)) > 0)
write(1,l,n);
An alternate approach is to use getc/putc to read/write 1 char at a time. A lot less efficient. A good example: http://www.eskimo.com/~scs/cclass/notes/sx13.html
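A minimal getc/putc sketch of that idea (the file name is assumed; stdio still buffers the underlying I/O, so it is simpler than it is slow):
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("test.txt", "r");
    if (f == NULL) {
        perror("fopen");
        return 1;
    }
    int ch;
    while ((ch = getc(f)) != EOF)   /* int, not char, so EOF is distinguishable */
        putc(ch, stdout);
    fclose(f);
    return 0;
}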
You can use fgets and limit the size of the read string.
char *fgets(char *str, int num, FILE *stream);
You can change the while in your code to:
while (fgets(str, 100, file) != NULL) printf("%s", str);
Two approaches leap to mind.
First, don't use scanf. Use fgets() which takes a parameter to specify the buffer size, and which leaves any newline characters intact. A simple loop over the file that prints the buffer content should naturally copy the file intact.
Second, use fread() or the common C idiom with fgetc(). These would process the file in fixed-size chunks or a single character at a time.
If you must process the file over white-space delimited strings, then use either fgets or fread to read the file, and something like strtok to split the buffer at whitespace. Don't forget to handle the transition from one buffer to the next, since your target strings are likely to span the buffer boundary.
If there is an external requirement to use scanf to do the reading, then limit the length of the string it might read with a maximum field width in the format specifier. In your case with a 999 byte buffer, say scanf("%998s", str), which will write at most 998 characters to the buffer, leaving room for the nul terminator. If single strings longer than your buffer are allowed, then you would have to process them in two pieces. If not, you have an opportunity to tell the user about an error politely without creating a buffer overflow security hole.
Regardless, always validate the return values and think about how to handle bad, malicious, or just malformed input.
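A hedged sketch of the fgets-plus-strtok idea mentioned above (it deliberately ignores the buffer-boundary problem, so a token split across two fgets calls would come out as two tokens):
#include <stdio.h>
#include <string.h>

int main(void)
{
    char buf[999];
    FILE *file = fopen("test.txt", "r");
    if (file == NULL)
        return 1;

    /* read a line (or partial line) at a time, then split it at whitespace */
    while (fgets(buf, sizeof buf, file) != NULL) {
        for (char *tok = strtok(buf, " \t\n"); tok != NULL; tok = strtok(NULL, " \t\n"))
            printf("%s\n", tok);
    }
    fclose(file);
    return 0;
}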
You can use getline() to read your text file without worrying about large lines:
getline() reads an entire line from stream, storing the address of the buffer containing the text into *lineptr. The buffer is null-terminated and includes the newline character, if one was found.
If *lineptr is set to NULL before the call, then getline() will allocate a buffer for storing the line. This buffer should be freed by the user program even if getline() failed.
bool read_file(const char *filename)
{
FILE *file = fopen(filename, "r");
if (!file)
return false;
char *line = NULL;
size_t linesize = 0;
while (getline(&line, &linesize, file) != -1) {
printf("%s", line);
}
free(line);
fclose(file);
return true;
}
You can use it like this:
int main(void)
{
if (!read_file("test.txt")) {
printf("Error reading file\n");
exit(EXIT_FAILURE);
}
}
I use this version
char* read(const char* filename){
    FILE* f = fopen(filename, "rb");
    if (f == NULL){
        exit(1);
    }
    fseek(f, 0L, SEEK_END);
    long size = ftell(f) + 1;     /* one extra byte for the terminating '\0' */
    fseek(f, 0L, SEEK_SET);       /* rewind instead of closing and reopening */

    char* content = malloc(size);
    if (content == NULL){
        fclose(f);
        exit(1);
    }
    memset(content, '\0', size);  /* zero-fill so the buffer ends up nul-terminated */
    fread(content, 1, size-1, f);
    fclose(f);
    return content;
}
You could read the entire file with dynamic memory allocation, but it isn't a good idea because if the file is too big you may run into memory problems.
So it is better to read short parts of the file and print them:
#include <stdio.h>

#define BLOCK 1000

int main() {
    FILE *f = fopen("teste.txt", "r");
    size_t size;
    char buffer[BLOCK];
    // ...
    /* fread(ptr, size, nmemb, stream): read up to BLOCK single bytes per call */
    while ((size = fread(buffer, sizeof(char), BLOCK, f)) > 0)
        fwrite(buffer, sizeof(char), size, stdout);
    fclose(f);
    // ...
    return 0;
}