Using popen("ls -la") produces strange result - c

I wrote some C code to get the result of an "ls -la" command using popen and write the result into a char buffer. The code looks like this:
unsigned int ls(char *destination, const char *username, const char *relative_path)
{
    printf("LS IMP\n");
    //if(!username || !relative_path) return -1;
    FILE *ls_pipe = NULL;
    unsigned long ls_pipe_size = -1;
    const char ls_command[] = "ls -la ";
    char ls_path[255] = "/home/";
    char ls_full_command[255];
    char buffer[255];
    bzero(buffer, 255);
    char *entries = NULL;
    bzero(ls_full_command, 255);
    strcat(ls_path, username);
    strcat(ls_path, relative_path);
    strcat(ls_full_command, ls_command);
    strcat(ls_full_command, ls_path);
    printf("AFTER CATS\n");
    ls_pipe = popen(ls_full_command, "r");
    if(ls_pipe == NULL) return -1;
    printf("Pipe ok!");
    fseek(ls_pipe, 0, SEEK_END);
    ls_pipe_size = ftell(ls_pipe);
    rewind(ls_pipe);
    printf("Filesize: %lu\n", ls_pipe_size);
    int i;
    for(i = 0; i < 100; i++)
    {
        fread(buffer, 1, 255, ls_pipe);
        printf("%s", buffer);
    }
    //entries = (char*) malloc(sizeof(char) * ls_pipe_size);
    //if(entries == NULL) return -1;
    printf("Entries ok!\n");
    //if(ls_pipe_size != fread(destination, sizeof(char), ls_pipe_size, ls_pipe)) return -1;
    fclose(ls_pipe);
    return strlen(destination);
}
The problem is that the reported size of the pipe is huge (?), and in the output, after the proper result, the last few entries start repeating non-stop, seemingly forever.
Is there any way of reading from the pipe without knowing the exact number of lines of output in advance, short of something like another popen with wc -l?
Thanks
P.S. There are some leftover modifications in the code from when I was trying to test what was going wrong; the malloc didn't work because of the insane size reported for the pipe.

You can't seek on a pipe — period. Any value you get back from ftell() is immaterial or erroneous. You can't rewind a pipe because you can't seek on a pipe. You can only read data once from a pipe.
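You can see this directly: on a POSIX system, fseek() on a popen() stream fails and sets errno to ESPIPE (a minimal sketch to demonstrate, using any convenient command):
#define _POSIX_C_SOURCE 200809L   /* for popen()/pclose() */
#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *p = popen("ls", "r");
    if (p == NULL)
        return 1;
    if (fseek(p, 0, SEEK_END) != 0)                          /* fails on a pipe */
        fprintf(stderr, "fseek: %s\n", strerror(errno));     /* "Illegal seek" (ESPIPE) */
    pclose(p);
    return 0;
}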
So, you need to redesign the code to read an indefinite amount of data.
Here's some reasonably working code — but I needed to adapt it to Mac OS X and my machine, so instead of /home/ it uses /Users/, and the call to ls() uses my user name. The code properly handles buffers full of data that do not end with a null (listing about 570 lines of output for my bin directory). I've left the interface to ls unchanged although it almost doesn't use destination and returning the length of destination is otherwise unrelated to what it is doing. It also uses pclose() to close the pipe. Using pclose() avoids leaving zombies around and returns the exit status of the executed program where fclose() will not.
#include <assert.h>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

static unsigned int ls(char *destination, const char *username, const char *relative_path)
{
    printf("LS IMP\n");
    assert(destination != 0 && username != 0 && relative_path != 0);
    const char ls_command[] = "ls -la ";
    char ls_path[255] = "/Users/";
    char ls_full_command[255];
    snprintf(ls_full_command, sizeof(ls_full_command), "%s %s%s/%s",
             ls_command, ls_path, username, relative_path);
    FILE *ls_pipe = popen(ls_full_command, "r");
    if (ls_pipe == NULL)
        return -1;
    printf("Pipe ok!\n");
    char buffer[255];
    int nbytes;
    while ((nbytes = fread(buffer, 1, 255, ls_pipe)) > 0)
        printf("%.*s", nbytes, buffer);
    putchar('\n');
    printf("Entries ok!\n");
    pclose(ls_pipe);
    return strlen(destination);
}

int main(void)
{
    unsigned int length = ls("/", "jleffler", "bin");
    printf("ls() returned %u\n", length);
    return(0);
}
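If you do want the command's output accumulated in memory rather than printed, one approach (a sketch, not the answer's code; read_pipe and the 4096-byte starting capacity are choices made here) is to grow a buffer with realloc() as the pipe is read:
#define _POSIX_C_SOURCE 200809L   /* for popen()/pclose() */
#include <stdio.h>
#include <stdlib.h>

/* Read a command's entire output into a malloc'd, NUL-terminated
   buffer that the caller must free(). Grows by doubling. */
static char *read_pipe(const char *command)
{
    FILE *pipe = popen(command, "r");
    if (pipe == NULL)
        return NULL;
    size_t capacity = 4096;
    size_t length = 0;
    char *data = malloc(capacity);
    if (data == NULL)
    {
        pclose(pipe);
        return NULL;
    }
    size_t nbytes;
    while ((nbytes = fread(data + length, 1, capacity - length - 1, pipe)) > 0)
    {
        length += nbytes;
        if (length + 1 >= capacity)           /* keep room for the terminating NUL */
        {
            char *bigger = realloc(data, capacity * 2);
            if (bigger == NULL)
            {
                free(data);
                pclose(pipe);
                return NULL;
            }
            data = bigger;
            capacity *= 2;
        }
    }
    data[length] = '\0';
    pclose(pipe);   /* pclose(), not fclose(): reaps the child and returns its status */
    return data;
}
The caller frees the result; checking pclose()'s return value would additionally tell you whether the command itself succeeded.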

Related

Copy Function in C not creating matching Checksums

I have written a simple copy program that copies a file and generates an MD5. It runs and generates the MD5 correctly.
However, when verifying the file generated by the copy function, its checksum does not match the source MD5. I can't see any reason for this in my code; can anyone help?
#include <stdio.h>
#include <openssl/md5.h>
#include <assert.h>

#define BUFFER_SIZE 512

int secure_copy(char *filepath, char *destpath);

int main(int argc, char * argv[]) {
    secure_copy(argv[1], argv[2]);
    return 0;
}

int secure_copy(char *filepath, char *destpath) {
    FILE *src, *dest;
    src = fopen(filepath, "r");
    assert(src != NULL);
    dest = fopen(destpath, "w");
    assert(dest != 0);
    MD5_CTX c;
    char buf[BUFFER_SIZE];
    ssize_t bytes, out_writer;
    unsigned char out[MD5_DIGEST_LENGTH];
    MD5_Init(&c);
    while((bytes = fread(buf, 1, BUFFER_SIZE, src)) != 0) {
        MD5_Update(&c, buf, bytes);
        out_writer = fwrite(buf, 1, BUFFER_SIZE, dest);
        assert(out_writer != 0);
    }
    MD5_Final(out, &c);
    printf("MD5: ");
    for (int i = 0; i < MD5_DIGEST_LENGTH; i++)
    {
        printf("%02x", out[i]);
    }
    printf("\n");
    fclose(src);
    fclose(dest);
    return 0;
}
Output
$ ./md5speed doc.txt /home/doc.txt
MD5: 4c55e4b9185eece3cc000c4023f8f6fe
When verifying the copied file with md5sum, I get a completely different hash:
md5sum doc.txt
29cb4da30c3e28fdb81463b5f0a76894 doc.txt
The file still opens, though, and the content is not corrupted.
Regarding:
while((bytes = fread(buf, 1, BUFFER_SIZE, src)) != 0)
and
out_writer = fwrite(buf, 1, BUFFER_SIZE, dest);
On the last read, the amount read can be less than BUFFER_SIZE, so you should always use the bytes variable as the number of bytes to write.
Also, errors can occur when calling fread() and/or fwrite(). Such errors are indicated by return values smaller than the requested count (the third parameter to those functions); ferror() distinguishes a real error from end-of-file. To be robust, the code must check those values and handle any errors that occur, including EOF.
As stated in the comments, the fix is to change the fwrite call to use bytes instead of BUFFER_SIZE, combined with opening both files in binary mode ("rb" and "wb").
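Putting those fixes together, the copy loop might look like this (a sketch of the suggested changes only; copy_file is a name introduced here, and the MD5 calls are omitted for brevity):
#include <stdio.h>

/* Open in binary mode, write exactly 'bytes' bytes,
   and check both streams for errors. */
static int copy_file(const char *srcpath, const char *dstpath)
{
    FILE *src = fopen(srcpath, "rb");              /* "rb", not "r" */
    if (src == NULL)
        return -1;
    FILE *dst = fopen(dstpath, "wb");              /* "wb", not "w" */
    if (dst == NULL) { fclose(src); return -1; }
    char buf[512];
    size_t bytes;
    int rc = 0;
    while ((bytes = fread(buf, 1, sizeof(buf), src)) > 0)
        if (fwrite(buf, 1, bytes, dst) != bytes)   /* write 'bytes', not BUFFER_SIZE */
        {
            rc = -1;
            break;
        }
    if (ferror(src))                               /* a read error, not just EOF */
        rc = -1;
    fclose(src);
    if (fclose(dst) != 0)                          /* deferred write errors surface here */
        rc = -1;
    return rc;
}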

popen and output of system command

I have to figure out the available space in /mnt/ in my application, so I wrote the following code. However, execute_cmd sometimes returns junk appended to the actual output, for example 4.5K followed by garbage. Where am I going wrong? Could someone review the code and explain why execute_cmd returns a junk byte at the end, and how I can improve it?
char *execute_cmd(char *cmd)
{
    FILE *fp;
    char path[100];
    int ii = 0;
    //char ii = 0;
    char *buffer = malloc(1024);
    char len = 0;

    /* Open the command for reading. */
    fp = popen(cmd, "r");
    if (fp == NULL) {
        printf("Failed to run command\n");
        exit(1);
    }
    printf("Running command is: %s\n", cmd);
    memset(buffer, 0, sizeof(buffer));
    do {
        len = fread(path, 100, 1, fp); /* Is it okay to use fread? I do not know how many
                                          bytes to read, as this function is a generic
                                          function which can be used for executing any
                                          command. */
        strcat(buffer, path);
        printf("Number of bytes is: %d\n", len);
    } while (len != 0);
    len = strlen(buffer);
    printf("Buffer contents are: %s %d\n", buffer, len);
    /* close */
    pclose(fp);
}
void main()
{
    char *buffer = "df -h | grep \"/mnt\" | awk '{ print $4}'"; /* FIXME */
    char len;
    char units;
    float number;
    char dummy = 0;
    char *avail_space;

    avail_space = execute_cmd(buffer);
    len = strlen(avail_space);
    units = avail_space[len - 1];
    printf("Available space is: %s %d %c end here\n", avail_space, len, units);
    number = strtof(avail_space, NULL);
    printf("Number is: %f\n", number);
}
sizeof(buffer) is sizeof(char*), which is probably 8 (or maybe 4). So your memset only clears a little bit of buffer. But with your use of fread, it's not just buffer that needs to be cleared; it's the temporary path.
Uninitialized local variables like path are not zero-initialised. You could use memset(path, 0, sizeof(path)); to clear it -- here the sizeof works because path really is an array -- but simpler is to initialise it in the declaration: char path[100] = "";.
Since fread does not NUL-terminate what it reads, there might be arbitrary garbage following it, making the strcat Undefined Behaviour. In fact, the strcat is totally unnecessary and a waste of cycles. You know how much data you read (it's in len) so you know exactly where to read the next chunk and you can do so directly without a temporary buffer and without a copy.
For future reference, if you are planning on calling malloc and then using memset to clear the allocated region, you should instead use calloc. That's what it's there for.
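A corrected version along those lines might look like this (a sketch keeping the original fixed 1024-byte capacity; a production version would grow the buffer instead of silently truncating):
#include <stdio.h>
#include <stdlib.h>

/* Sketch: read the pipe output directly into the result buffer at the
   current offset -- no temporary 'path' array, no strcat -- and
   NUL-terminate explicitly. popen()/pclose() are POSIX. */
char *execute_cmd(const char *cmd)
{
    FILE *fp = popen(cmd, "r");
    if (fp == NULL) {
        printf("Failed to run command\n");
        exit(1);
    }
    char *buffer = calloc(1, 1024);      /* calloc: already zeroed */
    if (buffer == NULL) { pclose(fp); exit(1); }
    size_t total = 0, n;
    while ((n = fread(buffer + total, 1, 1024 - total - 1, fp)) > 0)
        total += n;
    buffer[total] = '\0';                /* fread does not NUL-terminate */
    pclose(fp);
    return buffer;                       /* the original forgot to return it */
}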

C Systems Program - Read/Write Issues During Copy

I am coding up a C program that extracts the members of a standard UNIX ar archive and creates the files it stores.
Here is an example of what an ar looks like if I open it in vim:
!<arch>
yo 1382105439 501 20 100644 10 `
test1 lol
yo2 1382105444 501 20 100644 10 `
test2 lol
...where "test1 lol" and "test2 lol" are the contents of each file, "yo" and "yo2" are two different file names, and the rest is metadata stored in a format corresponding to the standard ar.h (read more on it here: http://www.lehman.cuny.edu/cgi-bin/man-cgi?ar.h+3)
Anyway, I am still in the process of writing out the function but here is what I have so far:
static void extract_files (int argc, char *argv[])
{
    int fd;
    int new_file_fd;
    int num_read = 0;
    int new_file_size;
    struct ar_hdr current_header;
    char name_buffer[16];
    char date_buffer[12];
    char uid_buffer[6];
    char gid_buffer[6];
    char mode_buffer[8];
    char size_buffer[10];
    char fmag_buffer[2];

    // grab the fd #
    fd = open(argv[2], O_RDWR | O_CREAT, 0666);
    // go to the first header
    lseek(fd, SARMAG, SEEK_CUR);
    // store the number of bytes read in a struct current_header
    // until it equals the size of the entire header,
    // or in other words, until the entire header is read
    while ((num_read = read(fd, (char*) &current_header,
                            sizeof(struct ar_hdr))) == sizeof(struct ar_hdr))
    {
        // scans each string in the header and stores
        // it in the corresponding buffer
        sscanf(current_header.ar_name, "%s", name_buffer);
        sscanf(current_header.ar_date, "%s", date_buffer);
        sscanf(current_header.ar_uid, "%s", uid_buffer);
        sscanf(current_header.ar_gid, "%s", gid_buffer);
        int mode;
        sscanf(current_header.ar_mode, "%o", &mode);
        sscanf(current_header.ar_size, "%s", size_buffer);
        int size = atoi(size_buffer);
        sscanf(current_header.ar_fmag, "%s", fmag_buffer);
        // Create a new file
        new_file_fd = creat(name_buffer, mode);
        // Grab new file size
        new_file_size = atoi(size_buffer);
        int io_size; // buffer size
        char buff[size];
        int read_cntr = 0;
        // from copy.c
        while ((io_size = read (fd, buff, new_file_size)) > 0)
        {
            read_cntr++;
            if (read_cntr > new_file_size)
                break;
            write (new_file_fd, buff, new_file_size);
        }
        close(new_file_fd);
        printf("%s\n", name_buffer);
        printf("%s\n", date_buffer);
        printf("%s\n", uid_buffer);
        printf("%s\n", gid_buffer);
        printf("%s\n", mode_buffer);
        printf("%s\n", size_buffer);
        printf("%s\n", fmag_buffer);
        /* Seek to next header. */
        lseek(fd, atoi(current_header.ar_size) + (atoi(current_header.ar_size)%2), SEEK_CUR);
    }
}
The issue I am having lies in the second while loop in the above code:
// from copy.c
while ((io_size = read (fd, buff, new_file_size)) > 0)
{
    read_cntr++;
    if (read_cntr > new_file_size)
        break;
    write (new_file_fd, buff, new_file_size);
}
For some reason, the files written by this while loop don't stop at the length specified by write. The third argument to the standard read()/write() should be the number of bytes to read or write, yet my code results in the entire archive being read in and written into the first file.
If I open up the resulting "yo" file, I find the entire archive file has been written to it
test1 lol
yo2 1382105444 501 20 100644 10 `
test2 lol
instead of terminating after reading 10 bytes and giving the expected outcome "test1 lol".
I can also confirm that the "new_file_size" value is indeed 10. So my question is: what am I reading wrong about this while loop?
Note: Expected input would be a command line argument that looks something like:
./extractor.c -x name_of_archive_file
The only relevant information I think I need to deal with in this function is the name of the archive file which I get the fd for at the beginning of extract_files.
Added:
Misc -- the output from when this is run:
yo
1382105439
501
20
X
10
`
As you can see, it never sees the yo2 file or prints out its header because it gets written to "yo" before that can happen...because of this stray while loop :(
You read a value, size_buffer, and assign it to both size and new_file_size; you also create a buff[size] of that same size:
int size = atoi(size_buffer);
sscanf(current_header.ar_fmag, "%s", fmag_buffer);
//...
new_file_size = atoi(size_buffer);
//...
char buff[size];
read() returns an ssize_t count of bytes in the range [0..new_file_size], which you store in io_size. Realize that read(2) may return fewer than new_file_size bytes, which is why you need the while loop. So you need to write everything you have read, until you reach your write limit. I have made some comments to guide you:
// from copy.c
while ((io_size = read (fd, buff, new_file_size)) > 0)
{
    read_cntr++;
    //perhaps you mean read_cntr += io_size;
    //you probably mean to write io_size bytes here, regardless
    //write(new_file_fd, buff, io_size);
    if (read_cntr > new_file_size) //probably you want >= here
        break;
    //you may have broken out before you write...
    write (new_file_fd, buff, new_file_size);
}
A more typical idiom for this copy is to pick a read/write buffer size, say 4*1024 (4K) or 16*1024 (16K), and read that block size until you have less than a full block remaining; for example:
//decide how big to make buffer for read()
#define BUFSIZE (16*1024) //16K
//you need min()
#define min(x,y) ( ((x)<(y)) ? (x) : (y) )

ssize_t fdreader(int fd, int ofd, ssize_t new_file_size)
{
    ssize_t remaining = new_file_size;
    ssize_t readtotal = 0;
    ssize_t readcount;
    unsigned char buffer[BUFSIZE];
    for( ; (readcount = read(fd, buffer, min(sizeof(buffer), remaining))) > 0; )
    {
        readtotal += readcount;
        if( readcount > remaining ) //only keep remaining
            readcount = remaining;
        write( ofd, buffer, readcount );
        remaining -= readcount;
        if( remaining <= 0 ) break; //done
    }
    return readtotal;
}
Try this,
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>   /* for read(), write() */

void usage(char *progname)
{
    printf("need 2 files\n");
    printf("%s <infile> <outfile>\n", progname);
}

//decide how big to make buffer for read()
#define BUFSIZE (16*1024) //16K
//you need min()
#define min(x,y) ( ((x)<(y)) ? (x) : (y) )

ssize_t fdreader(int fd, int ofd, ssize_t new_file_size)
{
    ssize_t remaining = new_file_size;
    ssize_t readtotal = 0;
    ssize_t readcount;
    unsigned char buffer[BUFSIZE];
    for( ; (readcount = read(fd, buffer, min(sizeof(buffer), remaining))) > 0; )
    {
        readtotal += readcount;
        if( readcount > remaining ) //only keep remaining
            readcount = remaining;
        write( ofd, buffer, readcount );
        remaining -= readcount;
        if( remaining <= 0 ) break; //done
    }
    return readtotal;
}

int main(int argc, char **argv)
{
    FILE *infh;
    FILE *outfh;
    if( argc < 3 )
    {
        usage(argv[0]);
        return 0;
    }
    printf("%s %s\n", argv[1], argv[2]); fflush(stdout);
    if( !(infh = fopen(argv[1], "r")) )
    {
        printf("cannot open %s\n", argv[1]); fflush(stdout);
        return(2);
    }
    if( !(outfh = fopen(argv[2], "w+")) )
    {
        printf("cannot open %s\n", argv[2]); fflush(stdout);
        return(3);
    }
    fdreader(fileno(infh), fileno(outfh), 512);
    return 0;
}
Your while() loop should probably have braces ({ ... }) after it, otherwise you're just incrementing read_cntr without doing anything else.

C, Segmentation fault parsing large csv file

I wrote a simple program that would open a csv file, read it, make a new csv file, and only write some of the columns (I don't want all of the columns and am hoping removing some will make the file more manageable). The file is 1.15GB, but fopen() doesn't have a problem with it. The segmentation fault happens in my while loop shortly after the first progress printf().
I tested on just the first few lines of the csv and the logic below does what I want. The strange section for when index == 0 is due to the last column being in the form (xxx, yyy)\n (the , in a comma separated value file is just ridiculous).
Here is the code, the while loop is the problem:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main(int argc, char** argv) {
long size;
FILE* inF = fopen("allCrimes.csv", "rb");
if (!inF) {
puts("fopen() error");
return 0;
}
fseek(inF, 0, SEEK_END);
size = ftell(inF);
rewind(inF);
printf("In file size = %ld bytes.\n", size);
char* buf = malloc((size+1)*sizeof(char));
if (fread(buf, 1, size, inF) != size) {
puts("fread() error");
return 0;
}
fclose(inF);
buf[size] = '\0';
FILE *outF = fopen("lessColumns.csv", "w");
if (!outF) {
puts("fopen() error");
return 0;
}
int index = 0;
char* currComma = strchr(buf, ',');
fwrite(buf, 1, (int)(currComma-buf), outF);
int progress = 0;
while (currComma != NULL) {
index++;
index = (index%14 == 0) ? 0 : index;
progress++;
if (progress%1000 == 0) printf("%d\n", progress/1000);
int start = (int)(currComma-buf);
currComma = strchr(currComma+1, ',');
if (!currComma) break;
if ((index >= 3 && index <= 10) || index == 13) continue;
int end = (int)(currComma-buf);
int endMinusStart = end-start;
char* newEntry = malloc((endMinusStart+1)*sizeof(char));
strncpy(newEntry, buf+start, endMinusStart);
newEntry[end+1] = '\0';
if (index == 0) {
char* findNewLine = strchr(newEntry, '\n');
int newLinePos = (int)(findNewLine-newEntry);
char* modifiedNewEntry = malloc((strlen(newEntry)-newLinePos+1)*sizeof(char));
strcpy(modifiedNewEntry, newEntry+newLinePos);
fwrite(modifiedNewEntry, 1, strlen(modifiedNewEntry), outF);
}
else fwrite(newEntry, 1, end-start, outF);
}
fclose(outF);
return 0;
}
Edit: It turned out the problem was that the csv file had commas in places I was not expecting, which caused the logic to fail. I ended up writing a new parser that removes lines with the incorrect number of commas. It removed 243,875 lines (about 4% of the file). I'll post that code instead, as it at least reflects some of the comments about free():
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char** argv) {
    long size;
    FILE* inF = fopen("allCrimes.csv", "rb");
    if (!inF) {
        puts("fopen() error");
        return 0;
    }
    fseek(inF, 0, SEEK_END);
    size = ftell(inF);
    rewind(inF);
    printf("In file size = %ld bytes.\n", size);
    char* buf = malloc((size+1)*sizeof(char));
    if (fread(buf, 1, size, inF) != size) {
        puts("fread() error");
        return 0;
    }
    fclose(inF);
    buf[size] = '\0';
    FILE *outF = fopen("uniformCommaCount.csv", "w");
    if (!outF) {
        puts("fopen() error");
        return 0;
    }
    int numOmitted = 0;
    int start = 0;
    while (1) {
        char* currNewLine = strchr(buf+start, '\n');
        if (!currNewLine) {
            puts("Done");
            break;
        }
        int end = (int)(currNewLine-buf);
        char* entry = malloc((end-start+2)*sizeof(char));
        strncpy(entry, buf+start, end-start+1);
        entry[end-start+1] = '\0';
        int commaCount = 0;
        char* commaPointer = entry;
        for (; *commaPointer; commaPointer++) if (*commaPointer == ',') commaCount++;
        if (commaCount == 14) fwrite(entry, 1, end-start+1, outF);
        else numOmitted++;
        free(entry);
        start = end+1;
    }
    fclose(outF);
    printf("Omitted %d lines\n", numOmitted);
    return 0;
}
You're malloc'ing but never freeing. Possibly you run out of memory, one of your mallocs returns NULL, and the subsequent call to str(n)cpy segfaults.
Adding free(newEntry); and free(modifiedNewEntry); immediately after the respective fwrite calls should solve your memory shortage.
Also note that inside your loop you compute offsets into the buffer buf, which contains the whole file. These offsets are held in variables of type int, whose maximum value on your system may be too small for the numbers you are handling. Note too that adding large ints can wrap to a negative value, which is another possible cause of the segfault (a negative offset into buf takes you to some address outside the buffer, possibly not even readable).
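For example (a one-line illustration using the question's variable names; ptrdiff_t is the standard type for pointer differences):
#include <stddef.h>   /* for ptrdiff_t */

/* Pointer subtraction yields ptrdiff_t; truncating it to int risks
   overflow and negative offsets on buffers larger than INT_MAX bytes. */
ptrdiff_t start = currComma - buf;   /* instead of: int start = (int)(currComma - buf); */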
The malloc(3) function can (and sometimes does) fail.
At least code something like
char* buf = malloc(size+1);
if (!buf) {
    fprintf(stderr, "failed to malloc %ld bytes - %s\n",
            size+1, strerror(errno));
    exit(EXIT_FAILURE);
}
And I strongly suggest clearing the successful result of a malloc with memset(buf, 0, size+1) (or otherwise using calloc ....), not only because the following fread could fail (which you are testing) but to ease debugging and reproducibility.
And likewise for every other call to malloc or calloc: you should always test them against failure.
Notice that by definition sizeof(char) is always 1. Hence I removed it.
As others pointed out, you have a memory leak because you don't call free appropriately. A tool like valgrind could help.
You need to learn how to use the debugger (e.g. gdb). Don't forget to compile with all warnings and debugging information (e.g. gcc -Wall -g). And improve your code till you get no warnings.
Knowing how to use a debugger is an essential required skill when programming (particularly in C or C++). That debugging skill (and ability to use the debugger) will be useful in every C or C++ program you contribute to.
BTW, you could read your file line by line with getline(3) (which can also fail and you should test that).
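For example (a minimal sketch; getline() is POSIX.1-2008, hence the feature-test macro, and its buffer must be freed exactly once):
#define _POSIX_C_SOURCE 200809L   /* for getline() */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    FILE *inF = fopen("allCrimes.csv", "rb");
    if (!inF) { perror("fopen"); return EXIT_FAILURE; }
    char *line = NULL;
    size_t cap = 0;
    ssize_t len;
    while ((len = getline(&line, &cap, inF)) != -1) {
        /* process one line of 'len' bytes here */
    }
    free(line);              /* getline's buffer is reused, freed once at the end */
    fclose(inF);
    return EXIT_SUCCESS;
}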

Trying to make program that counts number of bytes in a specified file (in C)

I am currently attempting to write a program that will tell its user how many times a specified 8-bit byte appears in a specified file.
I have some groundwork laid out, but when it comes to making sure that the file data makes it into an array or buffer (or whatever format I should put the file data into to check for the bytes), I feel I'm probably very far off from using the correct methods.
After that, I need to check whatever the file data gets put into for the specified byte, but I am also unsure how to do this.
I think I may be over-complicating this quite a bit, so explaining anything that needs to be changed or that can just be scrapped completely is greatly appreciated.
Hopefully didn't leave out any important details.
Everything seems to run (the code compiles), but when I try to printf the final statement at the bottom, it never prints.
I have a feeling I just did not set up the final for loop correctly at all.
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
//#define BUFFER_SIZE (4096)

main(int argc, char *argv[]){ //argc = arg count, argv = array of arguments
    char buffer[4096];
    int readBuffer;
    int b;
    int byteCount = 0;
    b = atoi(argv[2]);
    FILE *f = fopen(argv[1], "rb");
    unsigned long count = 0;
    int ch;
    if(argc != 3){ /* required number of args = 3 */
        fprintf(stderr, "Too few/many arguments given.\n");
        fprintf(stderr, "Proper usage: ./bcount path byte\n");
        exit(0);
    }
    else{ /* open and read file */
        if(f == 0){
            fprintf(stderr, "File could not be opened.\n");
            exit(0);
        }
    }
    if((b <= -1) || (b >= 256)){ /* checks that the byte provided is between 0 & 255 */
        fprintf(stderr, "Byte provided must be between 0 and 255.\n");
        exit(0);
    }
    else{
        printf("Byte provided fits in range.\n");
    }
    int i = 0;
    int k;
    int newFile[i];
    fseek(f, 0, SEEK_END);
    int lengthOfFile = ftell(f);
    for(k = 0; k < sizeof(buffer); k++){
        while(fgets(buffer, lengthOfFile, f) != NULL){
            newFile[i] = buffer[k];
            i++;
        }
    }
    if(newFile[i] = buffer[k]){
        printf("same size\n");
    }
    for(i = 0; i < sizeof(newFile); i++){
        if(b == newFile[i]){
            byteCount++;
        }
        printf("Final for loop is working???\n");
    }
}
OP is mixing fgets() with binary reads of a file.
fgets() reads a file up to the buffer size provided or until reaching a \n byte. It is intended for text processing. The typical way to determine how much data was read via fgets() is to look for a final \n, which may or may not be there. The data read could have embedded NUL bytes in it, so it becomes problematic to know when to stop scanning the buffer: on a NUL byte or on a \n.
Fortunately this can all be dispensed with, including the file seek and buffers.
// "rb" should be used when looking at a file in binary. C11 7.21.5.3 3
FILE *f = fopen(argv[1], "rb");
b = atoi(argv[2]);
unsigned long byteCount = 0;
int ch;
while ((ch = fgetc(f)) != EOF) {
if (ch == b) {
byteCount++;
}
}
The OP's error checking is good. But the for(k = 0; k < sizeof(buffer); k++){ loop and its contents had various issues. OP had if(newFile[i] = buffer[k]){ which should have been if(newFile[i] == buffer[k]){
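Assembled into a complete program, the fgetc() approach might look like this (a sketch that combines the loop above with the OP's argument checks; the usage string is the OP's):
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    if (argc != 3) {
        fprintf(stderr, "Proper usage: ./bcount path byte\n");
        return EXIT_FAILURE;
    }
    int b = atoi(argv[2]);
    if (b < 0 || b > 255) {
        fprintf(stderr, "Byte provided must be between 0 and 255.\n");
        return EXIT_FAILURE;
    }
    FILE *f = fopen(argv[1], "rb");      /* binary mode for byte counting */
    if (f == NULL) {
        fprintf(stderr, "File could not be opened.\n");
        return EXIT_FAILURE;
    }
    unsigned long byteCount = 0;
    int ch;
    while ((ch = fgetc(f)) != EOF)
        if (ch == b)
            byteCount++;
    fclose(f);
    printf("%lu\n", byteCount);
    return EXIT_SUCCESS;
}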
Not really an ANSWER --
Chux corrected the code, this is just more than fits in a comment.
#include <sys/stat.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    struct stat st;
    int rc = 0;
    if(argv[1])
    {
        rc = stat(argv[1], &st);
        if(rc == 0)
            printf("bytes in file %s: %ld\n", argv[1], st.st_size);
        else
        {
            perror("Cannot stat file");
            exit(EXIT_FAILURE);
        }
        return EXIT_SUCCESS;
    }
    return EXIT_FAILURE;
}
The stat() call is handy for getting file size and for determining file existence at the same time.
Applications use stat instead of reading the whole file, which is great for gigantic files.
