fread() under MinGW not working properly - C

I have written a C application under Linux with GTK. A friend wanted to test it under Windows, so we compiled it using MinGW64.
The GUI and everything else looks and works as it should. However, the following fread() call does not work:
read = fread(workbuff, sizeof(char), rec_data_length, bin_file);
if (read != rec_data_length) {
    /* Here is some error handling */
}
rec_data_length is 608. I verified that the file is not corrupted and that all 608 bytes are available, yet fread() returns 87.
Can someone explain this to me? Why does it work under Linux but not under Windows?

The problem with reading from this file was that I opened a binary file with
fopen("foo", "r");
This worked fine under Linux, but on Windows I had to change it to
fopen("foo", "rb");
In text mode the Windows C runtime translates line endings and treats the byte 0x1A (Ctrl-Z) as end of file, so a binary read can stop short. Opening the file in binary mode works on both systems and now behaves as expected.
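For completeness, a minimal sketch of the portable pattern (the file name and the 608-byte record length are taken from the question; everything else is illustrative):

#include <stdio.h>

int main(void)
{
    unsigned char workbuff[608];
    size_t rec_data_length = sizeof(workbuff);

    /* "rb" is required on Windows; on POSIX systems the 'b' is ignored. */
    FILE *bin_file = fopen("foo", "rb");
    if (bin_file == NULL) {
        perror("fopen");
        return 1;
    }

    size_t read = fread(workbuff, 1, rec_data_length, bin_file);
    if (read != rec_data_length) {
        /* Distinguish a short read at end-of-file from a real I/O error. */
        if (feof(bin_file))
            fprintf(stderr, "unexpected end of file after %zu bytes\n", read);
        else
            perror("fread");
    }

    fclose(bin_file);
    return 0;
}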

Related

QEMU-sparc can't open a file

I am pretty new to QEMU. I am working on QEMU emulation for a SPARC microcontroller in my virtual machine, which runs Ubuntu 20.04.3. I would like to read data from a .raw file, but fopen() doesn't work.
I am using Eclipse as my IDE.
Code example:
FILE *fake;
if ((fake = fopen("/home/rtems/Desktop/file.raw", "r")) == NULL) {
    printf("File could not be opened!\n");
}
The file path is correct; there is no doubt about that.
I always get the same error, even if I try to open the file for writing.
Do I need to change something for file access, or do I need to convert my raw binary file to another format to access it?
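A minimal first check, reusing the same hard-coded path: report why fopen() failed via errno, which usually distinguishes a missing path from a permission or environment problem (this assumes the target C library provides errno/strerror):

#include <stdio.h>
#include <string.h>
#include <errno.h>

int main(void)
{
    FILE *fake = fopen("/home/rtems/Desktop/file.raw", "r");
    if (fake == NULL) {
        /* strerror(errno) explains the failure, e.g. "No such file
           or directory" or "Permission denied". */
        fprintf(stderr, "fopen failed: %s\n", strerror(errno));
        return 1;
    }
    fclose(fake);
    return 0;
}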

C: Creating a file in a directory not owned by the current user, Windows

I recently created a program (originally for Linux) that needs to create files so it can save data; it also creates the directory that contains those files if needed. This works fine on Linux and Mac. However, on some Windows setups it does not: I tested on a fresh 64-bit Windows 10 virtual machine and everything works fine (i.e. fopen, fwrite, fread work), but when tested on a computer that has had Windows installed for a few years, it creates no files. The directory is created without a problem, though.
I believe the problem is the same as the one explained here: https://superuser.com/questions/846143/remove-read-only-attribute-from-folder-after-windows-reinstall
That is, the NTFS filesystem (and therefore every directory in it) is not owned by the user attempting to run my program, because the Windows installation that created the filesystem is not the same installation that is now trying to run it. Another symptom of the issue: right-clicking a directory, going to Properties and unchecking Read Only appears to work, but on re-opening the dialog it has made itself Read Only again. It's worth noting that the Read Only checkbox isn't checked, but filled with a square instead.
My program also has a Java component that uses BufferedWriter. That component is unaffected by this issue and can create files without a problem, so it must be possible to work around it; I just don't know how to do that in C. And that's my question: how do I create files (using C) on a Windows installation affected by this problem?
Code I have tried that didn't work on the bugged Windows installs:
DWORD written = 0;
HANDLE test = CreateFile(file_path, GENERIC_WRITE,
                         0, NULL, CREATE_NEW, FILE_ATTRIBUTE_NORMAL, NULL);
/* WriteFile requires a non-NULL lpNumberOfBytesWritten when lpOverlapped is NULL */
WriteFile(test, "file created", sizeof("file created"), &written, NULL);
CloseHandle(test);
and not using the Windows API:
/* Open the file for appending; s is the string to be written (defined elsewhere) */
FILE *f = fopen(file_path, "ab+");
if (f == NULL) {
    perror("fopen");
    return -1;
}
int len_s = strlen(s);
/* Write the length of s to the file */
if (1 != fwrite(&len_s, sizeof(char), 1, f)) return -2;
fclose(f);
Neither of these creates the file, let alone writes the content, on the bugged Windows installs. It's worth noting that the fopen block is what works on Linux, Mac, and the Windows 10 virtual machine I tested on, and the CreateFile block also works on the virtual machine.
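Whatever the underlying cause, the first thing to capture on the affected machine is the actual error code from each API. A minimal diagnostic sketch (the path is a placeholder), assuming a MinGW or MSVC toolchain:

#include <windows.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>

int main(void)
{
    const char *file_path = "C:\\some\\dir\\test.dat";   /* placeholder */

    HANDLE h = CreateFileA(file_path, GENERIC_WRITE, 0, NULL,
                           CREATE_NEW, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE)
        /* e.g. 5 = ERROR_ACCESS_DENIED, 3 = ERROR_PATH_NOT_FOUND */
        fprintf(stderr, "CreateFile failed, GetLastError() = %lu\n",
                (unsigned long)GetLastError());
    else
        CloseHandle(h);

    FILE *f = fopen(file_path, "ab+");
    if (f == NULL)
        fprintf(stderr, "fopen failed: %s\n", strerror(errno));
    else
        fclose(f);

    return 0;
}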

fprintf() is not working in Ubuntu

I'm trying to learn file I/O concepts in the C programming language. I'm using GNU/Linux (Ubuntu 16.04 LTS) and my IDE is Eclipse 3.8. When I try to write to a file with fprintf(), it doesn't create any file, or if the file is created, it doesn't write to it. I tried to fix the problem by using fflush() or setbuf(file_pointer, NULL) as suggested here, but still no change. I guess I'm writing the path of the file the wrong way.
Here is the code:
#include <stdio.h>
#include <stdlib.h>   /* for EXIT_SUCCESS */

int main(void) {
    FILE *file_pointer;
    file_pointer = fopen("~/.textsfiless/test.txt", "w+");
    setbuf(file_pointer, NULL);
    fprintf(file_pointer, "Testing...\n");
    fclose(file_pointer);
    return EXIT_SUCCESS;
}
Can someone explain what's wrong here?
On Linux, the ~ in ~/.textsfiless/test.txt is not expanded by the C library's fopen. When you use ~ on the command line, it is expanded by your shell (not by the program the shell starts via execve(2)) into your home directory; read about tilde expansion and globbing in glob(7). You are very unlikely to have a directory literally named ~.
You should read Advanced Linux Programming
So you should check whether fopen failed (it very likely did). If you want a file in the home directory, you'd better use getenv(3) with "HOME" (or perhaps getpwuid(3) & getuid(2)). See environ(7).
Perhaps better code might be:
char *homedir = getenv("HOME");
if (!homedir) { perror("getenv HOME"); exit(EXIT_FAILURE); }

char pathbuf[512];   // or perhaps PATH_MAX instead of 512
snprintf(pathbuf, sizeof(pathbuf),
         "%s/.textsfiless/test.txt", homedir);

FILE *file_pointer = fopen(pathbuf, "w+");
if (!file_pointer) { perror(pathbuf); exit(EXIT_FAILURE); }
and so on.
Notice that you should check most C standard library (and POSIX) functions for failure. The perror(3) function is useful for reporting errors to the user on stderr.
(Pedantically, we should even test that snprintf(3) returns a length below sizeof(pathbuf), or use asprintf(3) instead and test it for failure; I leave that test as an exercise to the reader.)
More generally, read the documentation of every external function that you are using.
Beware of undefined behavior (your code probably has some, e.g. fprintf to a NULL stream). Compile your code with all warnings and debug info (so gcc -Wall -g) and use the gdb debugger. Read What every C programmer should know about undefined behavior.
BTW, look into strace(1) and try it on your original (faulty) program. You'll learn a lot about the system calls it makes.
Most likely your call to fopen() fails. You don't have any check in your program to ensure fopen even worked. It may not have, and this could be due to a variety of things, like spelling the path wrong, wrong file or process permissions, etc.
To see what really happened, you should check fopen's return value:
#include <stdio.h>
#include <stdlib.h>   /* for EXIT_SUCCESS / EXIT_FAILURE */

int main(void) {
    FILE *file_pointer;
    file_pointer = fopen("~/.textsfiless/test.txt", "w+");
    if (file_pointer == NULL) {
        printf("Opening the file failed.\n");
        return EXIT_FAILURE;
    }
    setbuf(file_pointer, NULL);
    fprintf(file_pointer, "Testing...\n");
    fclose(file_pointer);
    return EXIT_SUCCESS;
}
Edit: Given your comment, getting the path wrong is almost certainly what happened. If you're executing your program from the current directory, your files live in a folder called ".textsfiless" inside that directory, and your file is called "test.txt", then you'd call fopen with a relative path, like this:
file_pointer = fopen("./.textsfiless/test.txt", "w+");

Opening a file in Mac OS X

I am trying to open a text file with C++ on Mac OS X, but I always get a Bus error.
I do not care where the file is put; I just need to read it. Am I writing its path wrong, or does the Bus error have another cause?
FILE *dic;
dic = fopen("DICT","rb");
dic = fopen("./DICT","rb");
dic = fopen("~/DICT","rb");
dic = fopen("~//DICT","rb");
With a little bit of clarification I see the problem in your C code (not C++!) is that fopen() returns NULL. You can check what the problem really is by reporting the detailed error:
if ((dic = fopen("DICT", "rb")) == NULL) {
    perror("ERROR");   /* perror() prints the reason for the failure to stderr */
    exit(1);
}
If fopen() fails to find the file on the user's desktop and you wish your code to work on multiple platforms, then you might define a function to get the user's desktop directory for use with fopen(). Something like:
char *user_desktop(char *buf, size_t len)
{
    const char *const DESKTOP_DIR =
#ifdef PC
        "C:\\Documents and Settings\\Pooya\\Desktop\\"
#elif defined(OSX)
        "/Users/Pooya/Desktop/"
#elif defined(LINUX)
        "/home/users/pooya/Desktop/"
// fail to compile if no OS specified ...
#endif
        ;   /* terminates the initializer */
    return strncpy(buf, DESKTOP_DIR, len);
}
You probably want to look into a more robust way of getting the desktop path for each operating system; most operating systems have an API for this, so do your research. There are also more robust ways of splitting behaviour between platforms; you can look into that or open a separate question about it. I just wanted to convey the idea of having a function that returns the appropriate desktop path no matter which platform you compile your code on.
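As a rough illustration of that idea only (a sketch that builds the path from environment variables rather than the proper per-OS APIs, and assumes a conventional "Desktop" folder name):

#include <stdio.h>
#include <stdlib.h>

/* Sketch: build "<home>/Desktop/" at run time instead of hard-coding it. */
static int user_desktop(char *buf, size_t len)
{
#ifdef _WIN32
    const char *home = getenv("USERPROFILE");
    const char *sep = "\\";
#else
    const char *home = getenv("HOME");
    const char *sep = "/";
#endif
    if (home == NULL)
        return -1;
    /* snprintf reports the length it needed; anything >= len was truncated. */
    return snprintf(buf, len, "%s%sDesktop%s", home, sep, sep) < (int)len ? 0 : -1;
}

int main(void)
{
    char path[512];
    if (user_desktop(path, sizeof path) == 0)
        printf("desktop: %s\n", path);
    return 0;
}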
This code is correct! Pay attention to the directory where the executable is located. The directory of execution is surely not the one you are expecting (the directory of the .c files, I suppose?).
I believe you are executing the app from the IDE. This is common with Xcode: it places the executables in a location other than where the project files are, and that location is what counts when you execute the program, whether you run it from the IDE or not!
Simply move the file you want to read to the location of the application and it will work properly.
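One way to confirm where the program is actually running from is to print the current working directory; a minimal sketch using the POSIX getcwd():

#include <stdio.h>
#include <unistd.h>   /* getcwd() */

int main(void)
{
    char cwd[1024];
    /* Relative paths like "DICT" are resolved against this directory. */
    if (getcwd(cwd, sizeof cwd) != NULL)
        printf("current working directory: %s\n", cwd);
    else
        perror("getcwd");
    return 0;
}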

Writing to a file in Unicode

I am having some problems writing Unicode to a file from my C program. I am trying to write a Japanese string to the file, but when I check the file it is empty. If I try a non-Unicode string, it works just fine. What am I doing wrong?
setlocale(LC_CTYPE, "");
FILE* f;
f = _wfopen(COMMON_FILE_PATH,L"w");
fwprintf(f,L"日本語");
fclose(f);
Oh, about my system: I am running Windows, and my IDE is Visual Studio 2008.
You might need to add the encoding to the mode. Possibly this:
f = _wfopen(COMMON_FILE_PATH,L"w, ccs=UTF-16LE");
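For reference, a minimal self-contained sketch of that suggestion, assuming the MSVC CRT and a placeholder file name in place of COMMON_FILE_PATH:

#include <stdio.h>
#include <wchar.h>

int main(void)
{
    /* "ccs=UTF-16LE" tells the CRT to encode wide-character output as
       UTF-16LE (and to write a BOM), so the Japanese text is preserved. */
    FILE *f = _wfopen(L"output.txt", L"w, ccs=UTF-16LE");
    if (f == NULL) {
        perror("_wfopen");
        return 1;
    }
    fwprintf(f, L"日本語\n");
    fclose(f);
    return 0;
}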
Doing the same with fopen() works for me here. I'm using Mac OS X, so I don't have _wfopen(); assuming _wfopen() isn't returning bad stuff to you, your code should work.
Edit: I tested on cygwin, too - it also seems to work fine.
I cannot find a reference to _wfopen on either of my boxes; however, I don't see why opening the file with fopen should cause a problem, since all you need is a file pointer.
What matters is whether or not C recognizes the internal Unicode values and pushes those binary values to the file properly.
Try just using fopen as Carl suggested; it should work properly.
Edit: if it still doesn't work, you may try defining the characters as their integer values and pushing them with fwprintf(). I know that's cumbersome and not a good fix in the long run, but it should work as well.
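A rough sketch of that cumbersome workaround: spell out the code points as wchar_t values (0x65E5, 0x672C, 0x8A9E are 日本語) and write them with fwprintf(). Reusing the ccs mode from the earlier answer here is an assumption, not part of the original suggestion:

#include <stdio.h>
#include <wchar.h>

int main(void)
{
    /* The code points for 日本語, written out explicitly. */
    const wchar_t jp[] = { 0x65E5, 0x672C, 0x8A9E, L'\n', 0 };

    FILE *f = _wfopen(L"output.txt", L"w, ccs=UTF-16LE");
    if (f == NULL)
        return 1;
    fwprintf(f, L"%ls", jp);
    fclose(f);
    return 0;
}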
