Can't write PCM data to wav file - c

I'm trying to write PCM data (stored in a separate file) into the WAV file I'm creating. I have already written the header and confirmed that works, but for some reason when I try to write the raw data after it, nothing ends up in the file. I have successfully read the PCM file into a buffer and obtained the file size. The code compiles without errors and fwrite() doesn't appear to fail either, yet the resulting file is still empty. Any help is much appreciated, thanks!
register FILE *handle;
register FILE *lever;
char filename[] = "test.wav";
handle = fopen(filename, "w");
lever = fopen("test.pcm","rb");
fseek(lever, 0, SEEK_END);
long int lSize = ftell(lever);
printf("%i \n",lSize);
rewind(lever);
char *buffer = (char*) malloc(sizeof(char)*lSize);
if (NULL == buffer) {printf("Error creating buffer \n");}
if (lSize != fread(buffer, 1, lSize, lever)) {
    printf("Reading error \n");
}
fwrite(buffer, sizeof(buffer), 1, handle);
free(buffer);
fclose(lever);
fclose(handle);

Change
fwrite(buffer, sizeof(buffer), 1, handle);
into:
fwrite(buffer, lSize, 1, handle);
sizeof(buffer) is the size of the pointer (typically 4 or 8 bytes), not the number of bytes you read into it, so almost none of the data was being written.

If you are running this under Windows, it could fail because the output file is opened in text mode.
Use handle = fopen(filename, "wb");. Are you sure you have no errors while opening the files? Also, your way of getting the file size isn't optimal; it's better to use the stat family of functions. And if malloc fails, your code only prints a message and carries on, so the following fread on a NULL buffer will give you a segmentation fault.
fwrite returns the number of items written; a short count indicates an error. On failure errno is set (at least on POSIX systems), so you can use perror or strerror.
EDIT: wildplasser's answer is the correct solution; I didn't notice this mistake.
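Putting the suggestions together, a minimal sketch of the copy might look like the following. It keeps the file names from the question, opens both files in binary mode, passes the real size to fwrite, and bails out if malloc fails (the header-writing step from the question is omitted here):
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    FILE *lever = fopen("test.pcm", "rb");
    FILE *handle = fopen("test.wav", "wb");   /* "wb": binary mode, also on Windows */
    if (!lever || !handle) {
        perror("fopen");
        return 1;
    }

    fseek(lever, 0, SEEK_END);
    long lSize = ftell(lever);
    rewind(lever);

    char *buffer = malloc(lSize);
    if (!buffer) {
        fprintf(stderr, "Error creating buffer\n");
        return 1;                             /* don't fall through to fread with NULL */
    }

    if (fread(buffer, 1, lSize, lever) != (size_t)lSize) {
        fprintf(stderr, "Reading error\n");
    }

    /* write lSize bytes, not sizeof(buffer), which is just the pointer size */
    if (fwrite(buffer, 1, lSize, handle) != (size_t)lSize) {
        perror("fwrite");
    }

    free(buffer);
    fclose(lever);
    fclose(handle);
    return 0;
}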

Related

I get bus errors when using fread(), and bad data when using buffered stream buffer

FILE * fh = fopen(fname, "rb");
setvbuf(fh, bf, _IOFBF, KB64);
int n = fread(bf, 1, KB64, fh);
/* DISPLAY first 50 characters*/
fclose(fh);
fh = fopen(fname, "rb");
setvbuf(fh, bf, _IONBF, KB64);
n = fread(bf, 1, KB64, fh);
/* DISPLAY first 50 characters*/
fclose(fh);
The above code gives me completely different data in the two reads: the _IONBF read gives the correct data, but the app takes much more time than with the _IOFBF code. I'm using Apex C++ on Solaris. Simply using setbuf() gives me a bus error when the program is called multiple times, and not specifying a buffer also gives bus errors when the program is called multiple times. I have no clue why I would get a bus error to begin with, and then bad data when the buffer is "buffered".
I found the solution to the above problem: the same array was being used both as the stdio buffer handed to setvbuf() and as the fread() destination. With a separate array for setvbuf(), the data is read fast and accurately :). Correct code follows.
FILE * fh = fopen(fname, "rb");
char bf[KB64], fhistream[KB64];
setvbuf(fh, fhistream, _IOFBF, KB64);
int n = fread(bf, 1, KB64, fh);

fread() returns incorrect data

I am trying to read a 128KB binary file in chunks of 256 bytes. The first 20-40 bytes of each 256-byte chunk always seem to be correct, but after that the data gets corrupted. I tried reading the file, writing it into another binary file, and comparing them: more than half of the data is corrupted. Here is my code
uint8_t buffer[256];
read_bin_file = fopen("vtest.bin", "r");
if (read_bin_file == NULL)
{
    printf("Unable to open file\n");
    return false;
}
test_file = fopen("test_file.bin", "w");
if (test_file == NULL)
{
    printf("Unable to open file\n");
    return false;
}
fflush(stdout);
for (i = 0; i <= 0x1FF; i++)
{
    file_Read_pointer = i * 256;
    fseek(read_bin_file, file_Read_pointer, SEEK_SET);
    fread(buffer, 256, 1, read_bin_file);
    fseek(test_file, file_Read_pointer, SEEK_SET);
    fwrite(buffer, 256, 1, test_file);
}
What is it that I am missing?
Also, when I increase the chunk size from 256 to 1024 bytes (i < 0x7F), the error decreases significantly; the files are almost 90% matching.
If it is binary data you're reading and writing, then you should open the files in binary mode, with read_bin_file = fopen("vtest.bin", "rb");. Note the "b" in the mode. This prevents special handling of newline characters.
Your fseeks are also unnecessary; the fread and fwrite calls advance the file position for you.
From here: "The file position indicator for the stream is advanced by the number of characters read."

Proper way to get file size in C

I am working on an assignment in socket programming in which I have to send a file between a SPARC and a Linux machine. Before sending the file as a char stream I have to get the file size and tell the client. Here are some of the ways I tried to get the size, but I am not sure which one is the proper one.
For testing purpose, I created a file with content " test" (space + (string)test)
Method 1 - Using fseeko() and ftello()
This is a method I found on https://www.securecoding.cert.org/confluence/display/c/FIO19-C.+Do+not+use+fseek()+and+ftell()+to+compute+the+size+of+a+regular+file
While fseek() has the problem that "setting the file position indicator to end-of-file, as with fseek(file, 0, SEEK_END), has undefined behavior for a binary stream", fseeko() is said to address this, but it only works on POSIX systems (which is fine, because the environment I am using is SPARC and Linux).
fd = open(file_path, O_RDONLY);
fp = fopen(file_path, "rb");
/* Ensure that the file is a regular file */
if ((fstat(fd, &st) != 0) || (!S_ISREG(st.st_mode))) {
/* Handle error */
}
if (fseeko(fp, 0 , SEEK_END) != 0) {
/* Handle error */
}
file_size = ftello(fp);
fseeko(fp, 0, SEEK_SET);
printf("file size %zu\n", file_size);
This method works fine and gets the size correctly. However, it is limited to regular files only. I tried to google the term "regular file" but I still don't quite understand it thoroughly, and I do not know whether this function is reliable for my project.
Method 2 - Using strlen()
Since the max size of a file in my project is 4MB, I can just calloc a 4MB buffer. After that, the file is read into the buffer, and I use strlen to get the file size (or, more correctly, the length of the content). Since strlen() is portable, can I use this method instead? The code snippet looks like this
fp = fopen(file_path, "rb");
fread(file_buffer, 1024*1024*4, 1, fp);
printf("strlen %zu\n", strlen(file_buffer));
This method works too and returns
strlen 8
However, I couldn't find any similar approach on the Internet, so I am thinking maybe I have missed something, or there are limitations of this approach which I haven't realized.
A regular file is one that is nothing special such as a device, socket, or pipe, but a "normal" file.
From your task description it seems that before sending you must retrieve the size of a regular file.
So your way is right:
FILE* fp = fopen(...);
if (fp) {
    fseek(fp, 0 , SEEK_END);
    long fileSize = ftell(fp);
    fseek(fp, 0 , SEEK_SET); // needed for the next read from the beginning of the file
    ...
    fclose(fp);
}
but you can also do it without opening the file:
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
struct stat buffer;
int status;
status = stat("path to file", &buffer);
if (status == 0) {
    // the size of the file is in buffer.st_size
}
The OP can do it the easy way, since the "max. size of a file in my project is 4MB".
Rather than using strlen(), use the return value from fread(). strlen() stops on the first null character, so it may report too small a value. @Sami Kuhmonen: also, we do not know whether the data read contains a null character at all, so it may not be a string. Append a null character (and allocate +1) if the code needs to use the data as a string, but in that case I'd expect the file to need to be opened in text mode.
Note that many OSes do not even commit allocated memory until it is written.
Why is malloc not "using up" the memory on my computer?
fp = fopen(file_path, "rb");
if (fp) {
#define MAX_FILE_SIZE 4194304
char *buf = malloc(MAX_FILE_SIZE);
if (buf) {
size_t numread = fread(buf, sizeof *buf, MAX_FILE_SIZE, fp);
// shrink if desired
char *tmp = realloc(buf, numread);
if (tmp) {
buf = tmp;
// Use buf with numread char
}
free(buf);
}
fclose(fp);
}
Note: Reading the entire file into memory may not be the best idea to begin with.
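If the data really does need to be used as a string, a small variation on the snippet above (an illustrative sketch, not part of the original answer; it reuses fp and MAX_FILE_SIZE from that snippet and needs <string.h> for strlen) allocates one extra byte and appends the terminator, after which strlen() reports at most numread:
char *buf = malloc(MAX_FILE_SIZE + 1);        /* +1 for the terminating '\0' */
if (buf) {
    size_t numread = fread(buf, 1, MAX_FILE_SIZE, fp);
    buf[numread] = '\0';                      /* now buf is a valid string */
    printf("strlen %zu, bytes read %zu\n", strlen(buf), numread);
    free(buf);
}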

C sockets receive file

I'm developing a very simple FTP client. I have created a data connection socket, but I can't transfer a file successfully:
FILE *f = fopen("got.png", "w");
int total = 0;
while (1){
memset(temp, 0, BUFFSIZE);
int got = recv(data, temp, sizeof(temp), 0);
fwrite(temp, 1, BUFFSIZE, f);
total += got;
if (total == 1568){
break;
}
}
fclose(f);
BUFFSIZE = 1568
I know that my file is 1568 bytes in size, so I'm downloading it just as a test. Everything is fine when I download .xml or .html files, but nothing good happens when I try to download png or avi files: the original file size is 1568 bytes, but got.png ends up 1573 bytes. I can't figure out what might cause that.
EDIT:
I have modified my code, so now it looks like (it can accept any file size):
FILE *f = fopen("got.png", "w");
while (1){
char* temp = (char*)malloc(BUFFSIZE);
int got = recv(data, temp, BUFFSIZE, 0);
fwrite(temp, 1, got, f);
if (got == 0){
break;
}
}
fclose(f);
The received file is still 2 bytes too long.
You are opening the file in text mode, so bare LF characters are going to get translated to CR/LF pairs when written to the file. You need to open the file in binary mode instead:
FILE *f = fopen("got.png", "wb");
You are always writing a whole buffer even if you have received only a partial one. This is the same problem as in roughly half of all TCP questions.
The memset is not necessary. And I hope that temp is an array so that sizeof(temp) does not evaluate to the native pointer size. Better use BUFFSIZE there as well.
Seeing your edit, after fixing the first problem there is another one: Open the file in binary mode.
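Combining the fixes, a minimal sketch of the receive loop might look like this (BUFFSIZE and the connected data socket are taken from the question; error handling kept short):
FILE *f = fopen("got.png", "wb");   /* "b": binary mode, no CR/LF translation */
char temp[BUFFSIZE];
while (1) {
    int got = recv(data, temp, sizeof(temp), 0);
    if (got <= 0)                   /* 0 = peer closed the connection, -1 = error */
        break;
    fwrite(temp, 1, got, f);        /* write only the bytes actually received */
}
fclose(f);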

Trying to read the contents of a device pipe into a buffer causes errors

My C/C++ program takes a file from the command line as an argument. Reading data from "regular files" is no problem and a fairly standard programming task, but when the file comes from a "device pipe" such as /dev/fd/63, my program crashes.
To reproduce:
from your friendly neighborhood bash shell supply a 'device pipe' as a file to your application. Your app should try to read the file contents into a buffer.
./yourapp <(echo 'Hello World!'); # /dev/fd/xx containing output from echo command.
Note that the above command does not redirect the application's standard input, and redirecting stdin is not the intended approach here.
I don't think a lot of people know this will crash their application. The application "seed" from the Gnome project uses glib to do its I/O and can read from these files just fine, and the command cat also handles this situation gracefully. Why is it that when I try to read the contents of that pipe like a regular file, my application crashes?
EDIT: relevant code section
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    FILE *file;
    char *buffer;
    unsigned long fileLen;

    //Open file
    file = fopen(argv[1], "rb");
    if (!file) return -1;

    //Get file length
    fseek(file, 0, SEEK_END);
    fileLen = ftell(file);
    fseek(file, 0, SEEK_SET);

    //Allocate memory
    buffer = (char *)malloc(fileLen + 1);
    if (!buffer) {
        fprintf(stderr, "Memory error!");
        fclose(file);
        return -2;
    }

    //Read file contents into buffer
    fread(buffer, fileLen, 1, file);
    fclose(file);
    fprintf(stdout, "Size: %lu", fileLen);
    free(buffer);
}
You can't call fseek() on a pipe; the fseek(file, 0, SEEK_END) / ftell() trick for getting the size simply fails there. The code shows that I was indeed trying to do such a foolish thing.
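For anyone hitting the same thing, a sketch of one way to read such a pipe is to skip the size query entirely and grow the buffer as data arrives (a minimal example in the spirit of the code above; the 4096-byte starting size is arbitrary):
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    if (argc < 2) return -1;
    FILE *file = fopen(argv[1], "rb");
    if (!file) return -1;

    size_t cap = 4096, len = 0;
    char *buffer = malloc(cap);
    if (!buffer) { fclose(file); return -2; }

    size_t n;
    /* Read in chunks until EOF; works for pipes, where seeking is impossible */
    while ((n = fread(buffer + len, 1, cap - len, file)) > 0) {
        len += n;
        if (len == cap) {                     /* buffer full: double its size */
            char *tmp = realloc(buffer, cap * 2);
            if (!tmp) { free(buffer); fclose(file); return -2; }
            buffer = tmp;
            cap *= 2;
        }
    }
    fclose(file);
    fprintf(stdout, "Size: %zu\n", len);
    free(buffer);
    return 0;
}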
