C - Writing a buffer as a binary file (wav)

I am reading a wav file as binary, putting it in a buffer and I want to write the exact same wav file again.
Here is my code so far:
file = fopen("tone1.wav", "rb");
file3 = fopen("outout.wav","wb");
fseek(file, 0, SEEK_END);
fileLen=ftell(file);
fseek(file, 0, SEEK_SET);
buffer=(char *)malloc(fileLen+1);
buffer3=(char *)malloc(fileLen+1);
fread(buffer, fileLen, 1, file);
for (int i=0;i<fileLen+1;++i){
    buffer3[i]=buffer[i];
    fwrite(buffer3,sizeof(buffer3),1,file3);
}
fclose(file);
fclose(file3);
free(buffer);
free(buffer3);
The problem is that the outout.wav file comes out empty and unplayable.
I am not sure what I'm doing wrong. If I replace fwrite(buffer3,sizeof(buffer3),1,file3); with fwrite(buffer3,sizeof(buffer3),1048,file3); (let's say 1048) I get something playable, but not the entire wav, and with a loop in it.
Can anyone tell me what the problem is? Maybe the for loop length is wrong and I shouldn't use fileLen as its limit? What should I replace the 1 with?
Thanks in advance.

Please observe the following:
The fact that the file is "raw" and has a ".wav" extension, as you put it, neither means it is a wav file nor makes a wav file out of it. To become a wav file it needs a valid WAV header, and reading and writing it as audio requires a proper audio file API. What you're reading and copying is headerless data of unknown format and unknown endianness.
If you want to use standard C library functions to copy the contents of one file to another, you do it at the byte level, without interpreting the content, which is what you are doing.
In that case, there are a few issues in your code:
Redundant padding of the buffers and casting of the return value of malloc in C:
buffer = malloc(fileLen); should work.
Ambiguous logic: having read the source file in one pass, why do you both copy buffers and write to the destination file byte by byte inside the loop?
Even so, you are still passing incorrect arguments to the fread and fwrite functions; please check the man pages. fread(buffer, 1, fileLen, file); should fix the read (1 equals sizeof(char)).
Why do you need the redundant buffer buffer3 if you don't interpret the content of the file?
Even so, you are still passing incorrect arguments to your functions inside the loop. This should fix it:
for (int i=0; i<fileLen; i++){
    buffer3[i]=buffer[i];
    fwrite(&(buffer3[i]),1,1,file3);
}
Generally one doesn't know the size of the file in advance, before opening the source file. So one allocates a buffer of reasonable size, then reads and writes in chunks determined by the size of the buffer. In that case your read routine should also handle the case where the final read returns fewer bytes than the buffer size.
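For illustration, a minimal chunked-copy sketch along those lines, reusing the question's file names (the 4096-byte buffer size is an arbitrary choice of mine, and error handling is kept short):
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
    FILE *in = fopen("tone1.wav", "rb");
    FILE *out = fopen("outout.wav", "wb");
    if (!in || !out)
        return EXIT_FAILURE;
    char chunk[4096];              /* fixed-size buffer: no need to know the file size */
    size_t n;
    while ((n = fread(chunk, 1, sizeof chunk, in)) > 0)
        fwrite(chunk, 1, n, out);  /* write exactly as many bytes as were read */
    fclose(in);
    fclose(out);
    return 0;
}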

Do not write inside the loop.
Or, if you do write inside the loop, write a different part of the buffer each time (not always starting at buffer3[0]).
fread(buffer, fileLen, 1, file);
for (int i = 0; i < fileLen; ++i) {
    // transform and copy
    buffer3[i] = transform(buffer[i]);
    //fwrite(buffer3 + i, 1, 1, file3); // write 1 character only
}
fwrite(buffer3, fileLen, 1, file3); // write all the new buffer

I solved it by putting fwrite(buffer3,sizeof(char),78000,file3); outside of the for loop, with 78000 being the size of the file. But my question is, how can I find out its size in code?

Related

Convert a file into binary buffer in C?

I'm just starting in C/C++. I'm able to write a file from a binary buffer:
FILE *myFile= fopen("/mnt/music.mp3", "ab+"); // Doesn't exist
fwrite(binaryBuffer, sizeOfBuffer, 1, myFile);
All I want is to get a new "binaryBuffer" back from "myFile".
How can I do that?
Thanks!
Use the fread function, which works just like fwrite:
char buffer[BUFFER_SIZE]; // declare a buffer
fread(buffer, length, 1, file); // read length bytes into the buffer
If you don't know how many bytes to read, you can seek to the end of the file to find the length.
(If you read from the same file you just wrote to, you will want to rewind first.)
http://www.cplusplus.com/reference/cstdio/fread/
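A minimal sketch of that idea, assuming the file was just written by the code above (the helper name readWholeFile and the outLength parameter are mine, not from the answer):
#include <stdio.h>
#include <stdlib.h>
/* Read the whole file back into a freshly allocated buffer.
   Returns NULL on failure; *outLength receives the byte count. */
char *readWholeFile(const char *path, long *outLength)
{
    FILE *myFile = fopen(path, "rb");
    if (!myFile)
        return NULL;
    fseek(myFile, 0, SEEK_END);      /* jump to the end ...               */
    long length = ftell(myFile);     /* ... to learn how many bytes exist */
    rewind(myFile);                  /* back to the start before reading  */
    char *binaryBuffer = malloc(length);
    if (binaryBuffer)
        *outLength = (long)fread(binaryBuffer, 1, length, myFile);
    fclose(myFile);
    return binaryBuffer;
}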

What is the best way to completely change a file's content in C?

I am looking for the fastest way to completely change the content of a file. It will be clear after the example:
a.txt:
I am a very very long (maybe not too long) file. I am pretty sure I could be longer.
After running the program, and according to the user's input it should become for instance:
user input:
Hi!
Then I tried to use fwrite.
The problem is that the rest of the file was still there, so I got something like:
a.txt:
Hi!m a very very long (maybe not too long) file. I am pretty sure I could be longer
After some research this is what I've done:
FILE *a;
char buffer[500];
a = fopen("a.txt", "r");
fread(buffer, sizeof(char), 500, a);
printf("%s\n", buffer);
a = freopen("a.txt", "w", a);
scanf("%s", buffer);
// rewind(a);
// fwrite(buffer, sizeof(char), strlen(buffer), a);
fwrite(buffer, sizeof(char), 10, a);
fclose(a);
Although it works, I want to know if there's a better way to do it.
You can just use the POSIX ftruncate() function after fwrite() to truncate the file.
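A rough sketch of that approach (POSIX only; the "r+" mode, the bounded scanf width and the fflush() call are my assumptions, not part of the original answer):
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>   /* fileno, ftruncate (POSIX) */
int main(void)
{
    FILE *a = fopen("a.txt", "r+");   /* open for update so we can overwrite in place */
    char buffer[500];
    if (!a || scanf("%499s", buffer) != 1)
        return 1;
    fwrite(buffer, 1, strlen(buffer), a);          /* overwrite the beginning of the file */
    fflush(a);                                     /* push the bytes out before truncating */
    ftruncate(fileno(a), (off_t)strlen(buffer));   /* chop off the old tail */
    fclose(a);
    return 0;
}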
The "w" flag should create a file, truncating any existing content if it exists. That's what you need.
I haven't used freopen() before, but I suspect it's related to the problem. I would try simply closing the file with fclose() and then opening it again with fopen().
I don't know of a better way to do this using portable C.
There are a few minor problems with your implementation.
You don't check for any errors that might have occurred, which is important in the real world. In particular, freopen might return NULL on error, and if you assign that to your original pointer you lose the ability to fclose the file.
You should also remember that normal C strings end with a zero byte, but fread reads raw bytes, so you should reserve space for that zero byte and set it yourself. scanf does write the zero byte, so you can use strlen to determine how many bytes to tell fwrite to write instead of hardcoding 10.
Finally, scanf is fine for mocking things up, but as written, if the user provides more than 499 bytes you'll have a buffer overflow, which can lead to very bad things.
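Putting those points together, a hedged rework of the question's snippet might look like this (the "%499s" width, the error messages and the size_t bookkeeping are my additions):
#include <stdio.h>
#include <string.h>
int main(void)
{
    char buffer[500];
    FILE *a = fopen("a.txt", "r");
    if (!a) { perror("fopen"); return 1; }
    size_t n = fread(buffer, 1, sizeof buffer - 1, a);   /* leave room for the '\0' */
    buffer[n] = '\0';                                    /* terminate before printing */
    printf("%s\n", buffer);
    a = freopen("a.txt", "w", a);                        /* reopen truncated for writing */
    if (!a) { perror("freopen"); return 1; }
    if (scanf("%499s", buffer) == 1)                     /* bounded: at most 499 chars + '\0' */
        fwrite(buffer, 1, strlen(buffer), a);
    fclose(a);
    return 0;
}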

Effective methods for reading and writing large files in C

I'm writing an application that deals with very large user-generated input files. The program will copy about 95 percent of the file, effectively duplicating it and switching a few words and values in the copy, and then appending the copy (in chunks) to the original file, such that each block (consisting of between 10 and 50 lines) in the original is followed by the copied and modified block, and then the next original block, and so on. The user-generated input conforms to a certain format, and it is highly unlikely that any line in the original file is longer than 100 characters.
Which would be the better approach?
To use one file pointer and use variables that hold the current position of how much has been read and where to write to, seeking the file pointer back and forth to read and write; or
To use multiple file pointers, one for reading and one for writing.
I am mostly concerned with the efficiency of the program, as the input files will reach up to 25,000 lines, each about 50 characters long.
If you have memory constraints, or you want a generic approach, read bytes into a buffer from one file pointer, make changes, and write out the buffer to a second file pointer when the buffer is full. If you reach EOF on the first pointer, make your changes and just flush whatever is in the buffer to the output pointer. If you intend to replace the original file, copy the output file to the input file and remove the output file. This "atomic" approach lets you check that the copy operation took place correctly before deleting anything.
For example, to deal with generically copying over any number of bytes, say, 1 MiB at a time:
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>   /* uint64_t */

#define COPY_BUFFER_MAXSIZE 1048576

/* ... */

unsigned char *buffer = NULL;
buffer = malloc(COPY_BUFFER_MAXSIZE);
if (!buffer)
    exit(-1);

FILE *inFp = fopen(inFilename, "r");
fseek(inFp, 0, SEEK_END);
uint64_t fileSize = ftell(inFp);
rewind(inFp);

FILE *outFp = stdout; /* change this if you don't want to write to standard output */
uint64_t outFileSizeCounter = fileSize;

/* we fread() bytes from inFp in COPY_BUFFER_MAXSIZE increments, until there is nothing left to fread() */
do {
    if (outFileSizeCounter > COPY_BUFFER_MAXSIZE) {
        fread(buffer, 1, (size_t) COPY_BUFFER_MAXSIZE, inFp);
        /* -- make changes to buffer contents at this stage
           -- if you resize the buffer, then copy the buffer and
              change the following statement to fwrite() the number of
              bytes in the copy of the buffer */
        fwrite(buffer, 1, (size_t) COPY_BUFFER_MAXSIZE, outFp);
        outFileSizeCounter -= COPY_BUFFER_MAXSIZE;
    }
    else {
        fread(buffer, 1, (size_t) outFileSizeCounter, inFp);
        /* -- make changes to buffer contents at this stage
           -- again, make a copy of buffer if it needs resizing,
              and adjust the fwrite() statement to change the number
              of bytes that need writing */
        fwrite(buffer, 1, (size_t) outFileSizeCounter, outFp);
        outFileSizeCounter = 0ULL;
    }
} while (outFileSizeCounter > 0);

free(buffer);
An efficient way to deal with a resized buffer is to keep a second pointer, say, unsigned char *copyBuffer, which is realloc()-ed to twice the size, if necessary, to deal with accumulated edits. That way, you keep expensive realloc() calls to a minimum.
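A minimal sketch of that growth pattern (the helper name growCopyBuffer and the copyCapacity parameter are hypothetical, not from the answer):
#include <stdlib.h>
/* Ensure copyBuffer can hold at least `needed` bytes, doubling its
   capacity so that repeated edits trigger only a handful of realloc() calls. */
unsigned char *growCopyBuffer(unsigned char *copyBuffer, size_t *copyCapacity, size_t needed)
{
    if (needed <= *copyCapacity)
        return copyBuffer;                 /* already big enough */
    size_t newCapacity = *copyCapacity ? *copyCapacity : 1;
    while (newCapacity < needed)
        newCapacity *= 2;                  /* grow geometrically */
    unsigned char *p = realloc(copyBuffer, newCapacity);
    if (!p)
        return NULL;                       /* caller still owns the old copyBuffer */
    *copyCapacity = newCapacity;
    return p;
}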
Not sure why this got downvoted, but it's a pretty solid approach for doing things with a generic amount of data. Hope this helps someone who comes across this question, in any case.
25,000 lines * 100 characters = 2.5 MB; that's not really a huge file. The fastest approach will probably be to read the whole file into memory, write your results to a new file, and replace the original with that.
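As a rough sketch of that whole-file approach (the file names, the in-memory edit placeholder and the rename() step are my assumptions):
#include <stdio.h>
#include <stdlib.h>
int main(void)
{
    FILE *in = fopen("input.txt", "rb");
    if (!in) return 1;
    fseek(in, 0, SEEK_END);
    long size = ftell(in);
    rewind(in);
    char *data = malloc(size);
    if (!data || fread(data, 1, size, in) != (size_t)size) return 1;
    fclose(in);
    /* ... edit `data` in memory here ... */
    FILE *out = fopen("input.txt.tmp", "wb");
    if (!out) return 1;
    fwrite(data, 1, size, out);
    fclose(out);
    rename("input.txt.tmp", "input.txt");   /* replace the original (atomic on POSIX) */
    free(data);
    return 0;
}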

replace bytes in file c

I got some code and I want to improve it to find and replace bytes in a file.
I want to find all origbytes in the file and then replace them with newbytes, then save the file. I know how to open, write and save, but how can I find the bytes in a char array?
FILE* file;
file = fopen("/Users/Awesome/Desktop/test", "r+b");
int size = sizeof(file)+1;
char bytes [size];
fgets(bytes, size, file);
for (int i=0; i<size; i++){
    char origbytes [] = {0x00, 0x00};
    char newbytes [] = {0x11, 0x11};
    if (strcmp(bytes[i], origbytes)) //Here the problem
    {
        fseek(file, i, SEEK_SET);
        fwrite(newbytes, sizeof(newbytes), 1, file);
    }
}
fclose(file);
strcmp() is for string comparison, not character comparison. Two characters can be compared directly:
if ( bytes[i] == origbytes[something] )
Also, you should not apply sizeof() to a FILE pointer to determine the file size. You should seek to the end of the file using fseek and then query ftell; for binary files, use something like fstat instead (see the sketch below).
One more thing to note is that fgets returns well before EOF if it sees a newline. Hence in your code you may not read the entire file contents even after making the changes we suggested. You need to use fread instead.
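For illustration, here are both size-finding approaches mentioned above (the helper names are mine; fstat and fileno are POSIX):
#include <stdio.h>
#include <sys/stat.h>
/* portable-ish: seek to the end and ask where we are */
long sizeViaFtell(FILE *fp)
{
    fseek(fp, 0, SEEK_END);
    long size = ftell(fp);
    fseek(fp, 0, SEEK_SET);   /* go back so the caller can fread() from the start */
    return size;
}
/* POSIX: ask the OS directly; reliable for binary files */
long sizeViaFstat(FILE *fp)
{
    struct stat st;
    if (fstat(fileno(fp), &st) != 0)
        return -1;
    return (long)st.st_size;
}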
Strings are null-terminated in the C standard library. Your search data is effectively a zero-length string. You want memcmp instead:
memcmp (&bytes [i], origbytes, 2)
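A hedged sketch of the whole find-and-replace pass using memcmp, with the file read via fread rather than fgets (the whole-file buffer and the in-place patching are simplifications of mine):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
int main(void)
{
    FILE *file = fopen("/Users/Awesome/Desktop/test", "r+b");
    if (!file) return 1;
    fseek(file, 0, SEEK_END);
    long size = ftell(file);
    rewind(file);
    char *bytes = malloc(size);
    if (!bytes || fread(bytes, 1, size, file) != (size_t)size) return 1;
    const char origbytes[] = {0x00, 0x00};
    const char newbytes[]  = {0x11, 0x11};
    for (long i = 0; i + 2 <= size; i++) {
        if (memcmp(&bytes[i], origbytes, 2) == 0) {   /* raw byte comparison, no '\0' rules */
            fseek(file, i, SEEK_SET);
            fwrite(newbytes, 1, 2, file);             /* patch the match in place */
        }
    }
    free(bytes);
    fclose(file);
    return 0;
}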
Firstly, sizeof(file) + 1 just gives you the size of a pointer plus 1; it is not the size of the file. Use this: How do you determine the size of a file in C?
Then, since you compare bytes (more or less the same as char), you simply compare using ==.
You can use the fseek and then ftell functions to get the file size, not sizeof.

Unexpected output copying file in C

In another question, the accepted answer shows a method for reading the contents of a file into memory.
I have been trying to use this method to read in the content of a text file and then copy it to a new file. When I write the contents of the buffer to the new file, however, there is always some extra garbage at the end of the file. Here is an example of my code:
inputFile = fopen("D:\\input.txt", "r");
outputFile = fopen("D:\\output.txt", "w");
if(inputFile)
{
    //Get size of inputFile
    fseek(inputFile, 0, SEEK_END);
    inputFileLength = ftell(inputFile);
    fseek(inputFile, 0, SEEK_SET);
    //Allocate memory for inputBuffer
    inputBuffer = malloc(inputFileLength);
    if(inputBuffer)
    {
        fread (inputBuffer, 1, inputFileLength, inputFile);
    }
    fclose(inputFile);
    if(inputBuffer)
    {
        fprintf(outputFile, "%s", inputBuffer);
    }
    //Cleanup
    free(inputBuffer);
    fclose(outputFile);
}
The output file always contains an exact copy of the input file, but then has the text "MPUTERNAM2" appended to the end. Can anyone shed some light as to why this might be happening?
You may be happier with
int numBytesRead = 0;
if(inputBuffer)
{
    numBytesRead = fread (inputBuffer, 1, inputFileLength, inputFile);
}
fclose(inputFile);
if(inputBuffer)
{
    fwrite( inputBuffer, 1, numBytesRead, outputFile );
}
It doesn't need a null-terminated string (and therefore will work properly on binary data containing zeroes)
Because you are writing the buffer as if it were a string. Strings end with a null byte; the file you read does not.
You could null-terminate your string, but a better solution is to use fwrite() instead of fprintf(). This would also let you copy files that contain null characters.
Unless you know the input file will always be small, you might consider reading/writing in a loop so that you can copy files larger than memory.
You haven't allocated enough space for the terminating null character in your buffer (and you also forgot to actually set it), so your fprintf is effectively overreading into some other memory. Your buffer is exactly the same size as the file, and is filled with its content; however, fprintf reads the parameter looking for the terminating null, which isn't there, until a couple of characters later where, coincidentally, there is one.
EDIT
You're actually mixing two types of I/O: fread (which is paired with fwrite) and fprintf (which is paired with fscanf). You should probably be doing fwrite with the number of bytes to write; or conversely, use fscanf, which would null-terminate your string (although this wouldn't allow nulls in your string).
Allocating memory to fit the file is actually quite a bad way of doing it, especially the way it's done here. If the malloc() fails, no data is written to the output file (and it fails silently). Moreover, you can't copy files greater than a few gigabytes on a 32-bit platform due to the address space limitations.
It's actually far better to use a smaller memory chunk (allocated or on the stack) and read/write the file in chunks. The reads and writes will be buffered anyway and, as long as you make the chunks relatively large, the overhead of function calls to the C runtime libraries is minimal.
You should always copy files in binary mode as well, it's faster since there's no chance of translation.
Something like:
FILE *fin = fopen ("infile","rb"); // make sure you check these for NULL return
FILE *fout = fopen ("outfile","wb");
char buff[1000000]; // or malloc/check-null if you don't have much stack space.
size_t count;
while ((count = fread (buff, 1, sizeof(buff), fin)) > 0) {
    // check ferror(fin) and errno here
    fwrite (buff, 1, count, fout); // and check return value
}
fclose (fout);
fclose (fin);
This is from memory but gives the general idea of how to do it. And you should always have copious error checking.
fprintf expects inputBuffer to be null-terminated, which it isn't. So it's reading past the end of inputBuffer and printing whatever's there (into your new file) until it finds a null character.
In this case you could malloc an extra byte and put a null as the last character in inputBuffer.
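Applied to the asker's snippet, that would look roughly like this (variable names are taken from the question; the size_t bookkeeping is mine):
inputBuffer = malloc(inputFileLength + 1);            /* one extra byte for the '\0' */
if(inputBuffer)
{
    size_t n = fread(inputBuffer, 1, inputFileLength, inputFile);
    inputBuffer[n] = '\0';                            /* terminate so fprintf stops here */
}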
In addition to what other's have said: You should also open your files in binary-mode - otherwise, you might get unexpected results on Windows (or other non-POSIX systems).
You can use
fwrite (inputBuffer , 1 , inputFileLength , outputFile );
instead of fprintf, to avoid the zero-terminated string problem. It also "matches better" with fread :)
Try using fgets instead; it will add the null for you at the end of the string. Also, as was said above, you need one more byte of space for the null terminator.
i.e.
The string "Davy" is represented as the array that contains D,a,v,y,\0 (without the commas). Basically your array needs to be at least the string length + 1 to hold the null terminator. Also, fread will not automatically add the terminator, which is why, even if your file is way shorter than the maximum length, you get garbage.
Note that an alternative, lazy method is just to use calloc, which zero-fills the buffer. But then you should fread at most inputFileLength - 1 characters so the last byte stays zero.
