I have a problem when copying files.
Code:
bool done;
FILE* fin;
FILE* fout;
const int bs = 1024*64; // 64 KiB
char buffer[bs];
int er, ew, br, bw;
long long int size = 0;
long long int sizew = 0;

er = fopen_s(&fin, s.c_str(), "rb");
ew = fopen_s(&fout, s2.c_str(), "wb");
if(er == 0 && ew == 0){
    while((br = fread(buffer, 1, bs, fin)) > 0){
        size += br;
        sizew += fwrite(buffer, 1, bs, fout);
    }
    done = true;
}else{
    done = false;
}
if(fin != NULL) fclose(fin);
if(fout != NULL) fclose(fout);
Somehow fwrite writes the whole buffer, ignoring the count value (br).
Some examples:
Copying 595 file of 635 DONE. 524288/524288 B
Copying 596 file of 635 DONE. 524288/524288 B
Copying 597 file of 635 DONE. 65536/145 B
Copying 598 file of 635 DONE. 65536/16384 B
Copying 599 file of 635 DONE. 65536/145 B
Copying 600 file of 635 DONE. 65536/67 B
Copying 601 file of 635 DONE. 65536/32768 B
Copying 602 file of 635 DONE. 65536/67 B
Does anyone know where the problem is?
ignoring count value (br)
Actually, you wrote bs.
A good example of the dangers of poor variable naming!
You should do
sizew += fwrite(buffer,1,br,fout);
You were passing bs, which is the maximum number of bytes fread is allowed to read. br is the number of bytes fread actually did read.
Related
I'm working on a simple FTP server (just for fun), and the last step is to store a file with the "STOR" command.
rec_file method:
int rec_file(char *filename, int sockfd){
    int bytes_read;
    char buffer[4096];
    FILE *f = fopen(filename, "wb");

    while (bytes_read=read(sockfd,buffer,sizeof(buffer))){
        fwrite(buffer, 1, sizeof(buffer), f);
    }
    fclose(f);
    return 0;
}
Test File a.txt:
TEST123
456
789
TEST!!
sent file a.txt:
TEST123
456
789
TEST!! AÛD Ìk P ™sB ÃB p Àa ðÿÿÿÿÿÿ P # €Ìk Ìk # dB % 0 % Øa P 0 | w n [ Ìk 8Y
ý % 8Y
ý # Àk W.B G
ý 8Y
ý QH /home/felix/ftp ¨A PQ
ý Ãk R
ý `Q
ý =F ) -rA ÿÿÿÿÿÿÿÿ Çk ¨ * 0k 1„A Çk -rA ßJI Çk ¨ 0k 1„A Çk ¨
So what am I doing wrong?
Your code may write more bytes to the file than you have received from the socket.
The return value of read, which gets stored in bytes_read, tells you how many bytes were actually read. Even if plenty of data is available, read may not fill the whole buffer; it can return a smaller number. You should also treat negative return values as errors.
The code below has very simple error handling. In a real application you might have to retry the read call on certain errno values instead of stopping.
while ((bytes_read = read(sockfd, buffer, sizeof(buffer))) > 0){
    fwrite(buffer, 1, bytes_read, f);
}
if(bytes_read < 0) {
    perror("read failed");
    /* maybe other error handling */
}
I'm currently trying to read data from a text file line by line, using strtok with a space as the delimiter, and save the values into different arrays. I'm using the FatFs library to read the file from an SD card. At the moment I'm only trying to read the first two elements from each line.
My text file looks like this:
223 895 200 200 87 700 700 700
222 895 200 200 87 700 700 700
221 895 200 200 87 700 700 700
222 895 200 200 87 700 700 700
My current code is something like this:
void sd_card_read()
{
    char buffer[30];
    char buffer2[10];
    char buffer3[10];
    int i = 0;
    int k = 0;
    int l = 0;
    int16 temp_array[500];
    int16 hum_array[500];
    char *p;
    FIL fileO;
    uint8 resultF;

    resultF = f_open(&fileO, "dados.txt", FA_READ);
    if(resultF == FR_OK)
    {
        UART_UartPutString("Reading...");
        UART_UartPutString("\n\r");
        while(f_gets(buffer, sizeof(buffer), &fileO))
        {
            p = strtok(buffer, " ");
            temp_array[i] = atoi(p);
            UART_UartPutString(p);
            UART_UartPutString("\r\n");

            p = strtok(NULL, " ");
            hum_array[i] = atoi(p);
            UART_UartPutString(p);
            UART_UartPutString("\r\n");

            i++;
        }
        UART_UartPutString("Done reading");
        resultF = f_close(&fileO);
    }

    UART_UartPutString("Printing");
    UART_UartPutString("\r\n");
    for (k = 0; k < 10; k++)
    {
        itoa(temp_array[k], buffer2, 10);
        UART_UartPutString(buffer2);
        UART_UartPutString("\r\n");
    }
    for (l = 0; l < 10; l++)
    {
        itoa(hum_array[l], buffer3, 10);
        UART_UartPutString(buffer3);
        UART_UartPutString("\r\n");
    }
}
The output at the moment is this:
223
0
222
0
etc..
895
0
895
0
etc..
After reading one line it puts the value 0 into the next position of both arrays, which is not what is wanted. It's probably something basic, but I can't see what is wrong.
Any help is appreciated!
If we take the first line of the file
223 895 200 200 87 700 700 700
That line is, including spaces and the newline (assuming a single '\n'), 31 characters long. And since strings in C need to be terminated by '\0', the line requires at least 32 characters of storage (if f_gets works like the standard fgets function and stores the newline).
The buffer you read into only holds 30 characters, which means only 29 characters of the line are read before the terminator is added. So you only read
223 895 200 200 87 700 700 70
The next time you call f_gets, the function reads the remaining
0
You need to increase the size of the buffer so it can hold an entire line. With the current data it needs to be at least 32 characters. But be careful: a single extra character in any line will give you the same problem again.
I'm attempting to recreate the wc command in C and am having trouble getting the proper word count for any file containing machine code (core files or compiled C). The logged word count always comes up around 90% short of the number returned by wc.
For reference here is the project info
Compile statement
gcc -ggdb wordCount.c -o wordCount -std=c99
wordCount.c
/*
 * Author(s) - Colin McGrath
 * Description - Lab 3 - WC LINUX
 * Date - January 28, 2015
 */
#include <stdio.h>
#include <string.h>
#include <dirent.h>
#include <sys/stat.h>
#include <ctype.h>

struct counterStruct {
    int newlines;
    int words;
    int bt;
};
typedef struct counterStruct ct;

ct totals = {0};
struct stat st;

void wc(ct counter, char *arg)
{
    printf("%6d %6d %6d %s\n", counter.newlines, counter.words, counter.bt, arg);
}

void process(char *arg)
{
    lstat(arg, &st);
    if (S_ISDIR(st.st_mode))
    {
        char message[4056] = "wc: ";
        strcat(message, arg);
        strcat(message, ": Is a directory\n");
        printf("%s", message);
        ct counter = {0};
        wc(counter, arg);
    }
    else if (S_ISREG(st.st_mode))
    {
        FILE *file;
        file = fopen(arg, "r");
        ct currentCount = {0};
        if (file != NULL)
        {
            char holder[65536];
            while (fgets(holder, 65536, file) != NULL)
            {
                totals.newlines++;
                currentCount.newlines++;

                int c = 0;
                for (int i = 0; i < strlen(holder); i++)
                {
                    if (isspace(holder[i]))
                    {
                        if (c != 0)
                        {
                            totals.words++;
                            currentCount.words++;
                            c = 0;
                        }
                    }
                    else
                        c = 1;
                }
            }
        }
        currentCount.bt = st.st_size;
        totals.bt = totals.bt + st.st_size;
        wc(currentCount, arg);
    }
}

int main(int argc, char *argv[])
{
    if (argc > 1)
    {
        for (int i = 1; i < argc; i++)
        {
            //printf("%s\n", argv[i]);
            process(argv[i]);
        }
    }
    wc(totals, "total");
    return 0;
}
Sample wc output:
135 742 360448 /home/cpmcgrat/53/labs/lab-2/core.22321
231 1189 192512 /home/cpmcgrat/53/labs/lab-2/core.26554
5372 40960 365441 /home/cpmcgrat/53/labs/lab-2/file
24 224 12494 /home/cpmcgrat/53/labs/lab-2/frequency
45 116 869 /home/cpmcgrat/53/labs/lab-2/frequency.c
5372 40960 365441 /home/cpmcgrat/53/labs/lab-2/lineIn
12 50 1013 /home/cpmcgrat/53/labs/lab-2/lineIn2
0 0 0 /home/cpmcgrat/53/labs/lab-2/lineOut
39 247 11225 /home/cpmcgrat/53/labs/lab-2/parseURL
138 318 2151 /home/cpmcgrat/53/labs/lab-2/parseURL.c
41 230 10942 /home/cpmcgrat/53/labs/lab-2/roman
66 162 1164 /home/cpmcgrat/53/labs/lab-2/roman.c
13 13 83 /home/cpmcgrat/53/labs/lab-2/romanIn
13 39 169 /home/cpmcgrat/53/labs/lab-2/romanOut
7 6 287 /home/cpmcgrat/53/labs/lab-2/URLs
11508 85256 1324239 total
Sample rebuild output (./wordCount):
139 76 360448 /home/cpmcgrat/53/labs/lab-2/core.22321
233 493 192512 /home/cpmcgrat/53/labs/lab-2/core.26554
5372 40960 365441 /home/cpmcgrat/53/labs/lab-2/file
25 3 12494 /home/cpmcgrat/53/labs/lab-2/frequency
45 116 869 /home/cpmcgrat/53/labs/lab-2/frequency.c
5372 40960 365441 /home/cpmcgrat/53/labs/lab-2/lineIn
12 50 1013 /home/cpmcgrat/53/labs/lab-2/lineIn2
0 0 0 /home/cpmcgrat/53/labs/lab-2/lineOut
40 6 11225 /home/cpmcgrat/53/labs/lab-2/parseURL
138 318 2151 /home/cpmcgrat/53/labs/lab-2/parseURL.c
42 3 10942 /home/cpmcgrat/53/labs/lab-2/roman
66 162 1164 /home/cpmcgrat/53/labs/lab-2/roman.c
13 13 83 /home/cpmcgrat/53/labs/lab-2/romanIn
13 39 169 /home/cpmcgrat/53/labs/lab-2/romanOut
7 6 287 /home/cpmcgrat/53/labs/lab-2/URLs
11517 83205 1324239 total
Notice the difference in the word count (second int) from the first two files (core files) as well as the roman file and parseURL files (machine code, no extension).
C strings do not store their length. They are terminated by a single NUL (0) byte.
Consequently, strlen needs to scan the entire string, character by character, until it reaches the NUL. That makes this:
for (int i=0; i<strlen(holder); i++)
desperately inefficient: for every character in holder, it needs to count all the characters in holder in order to test whether i is still in range. That turns a simple linear Θ(N) algorithm into a Θ(N²) cycle-burner.
But in this case, it also produces the wrong result, since binary files typically include lots of NUL characters. Since strlen will actually tell you where the first NUL is, rather than how long the "line" is, you'll end up skipping a lot of bytes in the file. (On the bright side, that makes the scan quadratically faster, but computing the wrong result more rapidly is not really a win.)
You cannot use fgets to read binary files because the fgets interface doesn't tell you how much it read. You can use the Posix 2008 getline interface instead, or you can do binary input with fread, which is more efficient but will force you to count newlines yourself. (Not the worst thing in the world; you seem to be getting that count wrong, too.)
Or, of course, you could read the file one character at a time with fgetc. For a school exercise, that's not a bad solution; the resulting code is easy to write and understand, and typical implementations of fgetc are more efficient than the FUD would indicate.
I'm using the zdelta library (http://cis.poly.edu/zdelta/) to compress a bunch of binary files and have been running into an issue where decompression almost always segfaults, even with the command-line interface. Just wondering if anyone has run into this before?
I did some error isolation: the compression output from my code is the same as what I get from the CLI (the command is ./zdc reference.bin fileToCompress.bin > compressedFile.bin.del), so I assume compression works fine. The confusing part: if I use A.bin as the reference and compress it against itself, everything works perfectly. As soon as I try a different file it segfaults (compressing B.bin with A.bin as the reference, for example). Same with the decompression CLI.
Code for compression; bufferIn is the uncompressed data and bufferOut is an output buffer that is large enough (ten times the input buffer, so even if compression grows the file, things should still work):
int rv = zd_compress(reference, refSize,
bufferIn, inputSize,
bufferOut, &outputSize);
Documentation for compress:
/* computes zdelta difference between target data and reference data
 *
 * INPUT:
 *   ref     pointer to reference data set
 *   rsize   size of reference data set
 *   tar     pointer to targeted data set
 *   tsize   size of targeted data set
 *   delta   pointer to delta buffer
 *           the delta buffer IS allocated by the user
 *   *dsize  size of delta buffer
 *
 * OUTPUT parameters:
 *   delta   pointer to zdelta difference
 *   *dsize  size of zdelta difference
 *
 * zd_compress returns ZD_OK on success,
 * ZD_MEM_ERROR if there was not enough memory,
 * ZD_BUF_ERROR if there was not enough room in the output buffer.
 */
ZEXTERN int ZEXPORT zd_compress OF ((const Bytef *ref, uLong rsize,
                                     const Bytef *tar, uLong tsize,
                                     Bytef *delta, uLongf *dsize));
==============================
Code for decompression; bufferIn is the compressed data and bufferOut is an output buffer 1000 times larger than the input (bad practice, yes, but I'd like to figure out the segfault first):
int rv = zd_uncompress(reference, refSize,
bufferOut, &outputSize,
bufferIn, inputSize);
Documentation for uncompress:
/* rebuilds target data from reference data and zdelta difference
 *
 * INPUT:
 *   ref     pointer to reference data set
 *   rsize   size of reference data set
 *   tar     pointer to target buffer
 *           this buffer IS allocated by the user
 *   tsize   size of target buffer
 *   delta   pointer to zdelta difference
 *   dsize   size of zdelta difference
 *
 * OUTPUT parameters:
 *   tar     pointer to recomputed target data
 *   *tsize  size of recomputed target data
 *
 * zd_uncompress returns ZD_OK on success,
 * ZD_MEM_ERROR if there was not enough memory,
 * ZD_BUF_ERROR if there was not enough room in the output buffer.
 */
ZEXTERN int ZEXPORT zd_uncompress OF ((const Bytef *ref, uLong rsize,
                                       Bytef *tar, uLongf *tsize,
                                       const Bytef *delta, uLong dsize));
The size variables are all properly initialized. Whenever I run decompression it segfaults deep inside the zdelta library, at a memcpy in zdelta/inffast.c, apparently with a bad destination (except in the self-reference case mentioned above). Has anyone had this issue before? Thanks!
I figured out that this problem was caused by negation of an unsigned variable, in file inffast.c at line 138:
ptr = rwptr[best_ptr] + (sign == ZD_PLUS ? d : -d);
d is declared with type uInt, so the negation in the false branch wraps around to a huge unsigned value, which was the cause of the bad destination address passed to memcpy().
Simply changing this into:
if(ZD_PLUS == sign)
{
    ptr = rwptr[best_ptr] + d;
}
else
{
    ptr = rwptr[best_ptr] - d;
}
resolves the issue.
The same applies to line 257 in infcodes.c:
c->bp = rwptr[best_ptr] + (c->sign == ZD_PLUS ? c->dist : -c->dist);
I'm doing a project on filesystems for a university operating systems course. My C program should simulate a simple filesystem in a human-readable file, so the file is line-based: one line is a "sector". I've learned that lines must all be the same length to be overwritten in place, so I pad them with ASCII zeroes to the end of the line and leave a certain number of lines of ASCII zeroes that can be filled later.
Now I'm writing a test program to see whether it works the way I want, but it doesn't. The critical part of my code:
file = fopen("irasproba_tesztfajl.txt", "r+"); // previously loaded with 10 copies of the line printed below, in reverse order

/* this finds the 3rd line */
int count = 0; // how many newlines have we passed?
int c;         // int, not char, so EOF can be represented
while(count != 2) {
    if((c = fgetc(file)) == '\n') count++;
}

fflush(file);
fprintf(file, "- . , M N B V C X Y Í Ű Á É L K J H G F D S A Ú Ő P O I U Z T R E W Q Ó Ü Ö 9 8 7 6 5 4 3 2 1 0\n");
fflush(file);
fclose(file);
Now it does nothing; the file stays the same. What could be the problem?
Thank you.
From here,
When a file is opened with a "+" option, you may both read and write on it. However, you may not perform an output operation immediately after an input operation; you must perform an intervening "rewind" or "fseek". Similarly, you may not perform an input operation immediately after an output operation; you must perform an intervening "rewind" or "fseek".
So you've attempted that with fflush, but in order to write to the desired location you need an fseek back to it. This is how I implemented it (could probably be better):
/* this finds the 3rd line */
int count = 0; // how many newlines have we passed?
int c;         // int, not char, so the EOF check below works
long position_in_file;

while(count != 2) {
    if((c = fgetc(file)) == EOF) break; // don't loop forever on short files
    if(c == '\n') count++;
}

// Store the position
position_in_file = ftell(file);
// Reposition to it, which also satisfies the fseek requirement above
fseek(file, position_in_file, SEEK_SET); // Or fseek(file, ftell(file), SEEK_SET);

fprintf(file, "- . , M N B V C X Y Í Ű Á É L K J H G F D S A Ú Ő P O I U Z T R E W Q Ó Ü Ö 9 8 7 6 5 4 3 2 1 0\n");
fclose(file);
Also, as has been commented, you should check if your file has been opened successfully, i.e. before reading/writing to file, check:
file = fopen("irasproba_tesztfajl.txt", "r+");
if(file == NULL)
{
    printf("Unable to open file!");
    exit(1);
}