C - fopen_s cannot write to a file made by CreateFile

I have a main() function and a subfunction that is called from it. In the subfunction I use CreateFile to create a file, then CloseHandle to close the handle to that file. When I call fopen_s after that (still within the subfunction), it works in both read and write modes. But if I call fopen_s in the main function afterwards, I can only open with read access; otherwise I get error code 13 - permission denied. The parameters of my CreateFile call are as follows:
hAppend = CreateFile(centralDataFilepath,                // open central data file
                     FILE_APPEND_DATA,                   // open for appending
                     FILE_SHARE_READ | FILE_SHARE_WRITE, // share with other readers and writers
                     NULL,                               // no security attributes
                     OPEN_ALWAYS,                        // open existing or create
                     FILE_ATTRIBUTE_NORMAL,              // normal file
                     NULL);                              // no attr. template
And I use fopen_s as follows:
FILE *f2 = NULL;
errno_t errorCode3 = fopen_s(&f2, centralDataFilepath, "a+");
if (errorCode3 == 0 && f2 != NULL)
    fclose(f2);   /* only close if the open actually succeeded */
I don't actually know whether CreateFile has anything to do with this; it seems as though the file's permissions change after I exit the subfunction. I need to be able to write to this file. Does anyone know why I am getting this permission-denied error, and how to fix it?

As described here:
Files that are opened by fopen_s and _wfopen_s are not sharable.
The function fails because it cannot lock the file for writing. You need to use _fsopen instead, which lets you specify the sharing mode. Try this:
f2 = _fsopen(centralDataFilepath, "a+", _SH_DENYNO);
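For completeness, a minimal sketch of the fix with error checking (assuming centralDataFilepath and the data being appended come from the surrounding program):
#include <stdio.h>
#include <share.h>   /* _SH_DENYNO */

FILE *f2 = _fsopen(centralDataFilepath, "a+", _SH_DENYNO);  /* allow other readers and writers */
if (f2 == NULL) {
    perror("_fsopen");             /* e.g. "Permission denied" */
} else {
    fputs("appended line\n", f2);  /* placeholder write; use your own data here */
    fclose(f2);
}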

Related

How to initialize stdout/stderr in a subsystem=windows program WITHOUT calling AllocConsole()?

So, when trying to use the stdin/stdout/stderr streams in a Windows GUI app, one typically has to call AllocConsole (or AttachConsole) in order to initialize those streams for use. There are lots of posts on here about what you need to do AFTER calling AllocConsole (e.g. use freopen_s on the respective streams, etc.).
I have a program where I want to redirect stdout and stderr to an anonymous pipe. I have a working example where I call:
AllocConsole();
FILE* fout;
FILE* ferr;
freopen_s(&fout, "CONOUT$", "r+", stdout);
freopen_s(&ferr, "CONOUT$", "r+", stderr);
HANDLE hreadout;
HANDLE hwriteout;
HANDLE hreaderr;
HANDLE hwriteerr;
SECURITY_ATTRIBUTES sao = { sizeof(sao),NULL,TRUE };
SECURITY_ATTRIBUTES sae = { sizeof(sae),NULL,TRUE };
CreatePipe(&hreadout, &hwriteout, &sao, 0);
CreatePipe(&hreaderr, &hwriteerr, &sae, 0);
SetStdHandle(STD_OUTPUT_HANDLE, hwriteout);
SetStdHandle(STD_ERROR_HANDLE, hwriteerr);
This snippet successfully sets stdout and stderr to the write ends of the anonymous pipes and I can capture the data.
However, calling AllocConsole will spawn a conhost.exe - this is the actual black console window that pops up on the screen. I have no use for it and, most importantly, I would like to avoid the creation of a child conhost.exe process under my program.
So the question is: how can I fool Windows into thinking it has a console attached, or manually set up the initial stdout and stderr streams, so that I can then redirect them as I have done already? I have looked at the AllocConsole call in a debugger, as well as GetStdHandle and SetStdHandle, to try to get a sense of what is going on, but my RE skills are lacking.
Without AllocConsole, the freopen_s calls fail with error 6, Invalid Handle. GetStdHandle also returns a NULL handle. Calling SetStdHandle succeeds (based on its return code and GetLastError), but this doesn't appear to actually get things set up where I need them, as I don't receive any output in my pipe.
Any ideas?
Use the SetStdHandle function to assign your pipe HANDLE values to STD_OUTPUT_HANDLE and STD_ERROR_HANDLE.
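As the question notes, SetStdHandle on its own did not make the CRT streams deliver output to the pipe, so the CRT side has to be wired up as well. One approach that is sometimes used (a sketch only, reusing the hwriteout/hwriteerr pipe handles from the question; the NUL trick and the descriptor variable names are assumptions, not part of this answer) is to give stdout/stderr valid descriptors and then splice CRT descriptors opened from the pipe handles over them:
#include <windows.h>
#include <io.h>        /* _open_osfhandle, _dup2 */
#include <fcntl.h>     /* _O_WRONLY, _O_TEXT */
#include <stdint.h>    /* intptr_t */
#include <stdio.h>

/* Assumes hwriteout / hwriteerr are the pipe write ends from CreatePipe above. */

/* Give stdout/stderr valid CRT descriptors first; a GUI process starts without any. */
FILE *dummy;
freopen_s(&dummy, "NUL", "w", stdout);
freopen_s(&dummy, "NUL", "w", stderr);

/* Let the Win32 layer (GetStdHandle, child processes) see the pipes too. */
SetStdHandle(STD_OUTPUT_HANDLE, hwriteout);
SetStdHandle(STD_ERROR_HANDLE, hwriteerr);

/* Wrap the pipe handles in CRT descriptors and splice them over stdout/stderr. */
int fdout = _open_osfhandle((intptr_t)hwriteout, _O_WRONLY | _O_TEXT);
int fderr = _open_osfhandle((intptr_t)hwriteerr, _O_WRONLY | _O_TEXT);
_dup2(fdout, _fileno(stdout));
_dup2(fderr, _fileno(stderr));
setvbuf(stdout, NULL, _IONBF, 0);   /* unbuffered, so output reaches the pipe immediately */
setvbuf(stderr, NULL, _IONBF, 0);

printf("this should arrive on the read end of the pipe\n");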

Kernel module check if file exists

I'm making some extensions to the kernel module nandsim, and I'm having trouble finding the correct way to test if a file exists before opening it. I've read this question, which covers how the basic open/read/write operations go, but I'm having trouble figuring out if and how the normal open(2) flags apply here.
I'm well aware that file reading and writing in kernel modules is bad practice; this code already exists in the kernel and is already reading and writing files. I am simply trying to make a few adjustments to what is already in place. At present, when the module is loaded and instructed to use a cache file (specified as a string path when invoking modprobe), it uses filp_open() to open the file or create it if it does not exist:
/* in nandsim.c */
...
module_param(cache_file, charp, 0400);
...
MODULE_PARM_DESC(cache_file, "File to use to cache nand pages instead of memory");
...
struct file *cfile;
cfile = filp_open(cache_file, O_CREAT | O_RDWR | O_LARGEFILE, 0600);
You might ask, "what do you really want to do here?" I want to include a header for the cache file, such that it can be reused if the system needs to be reset. By including information about the nand page geometry and page count at the beginning of this file, I can more readily simulate a number of error conditions that otherwise would be impossible within the nandsim framework. If I can bring down the nandsim module during file operations, or modify the backing file to model a real-world fault mode, I can recreate the net effect of these error conditions.
This would allow me to bring the simulated device back online using nandsim, and assess how well a fault-tolerant file system is doing its job.
My thought process was to modify it as follows, such that it would fail trying to force creation of a file which already exists:
struct file *cfile;

cfile = filp_open(cache_file, O_CREAT | O_EXCL | O_RDWR | O_LARGEFILE, 0600);
if (IS_ERR(cfile)) {
    printk(KERN_INFO "File didn't exist: %ld\n", PTR_ERR(cfile));
    /* Do header setup for first-time run of NAND simulation */
} else {
    /* Read header and validate against system parameters. Recover operations */
}
What I'm seeing is an error, but it is not the one I would have expected. It is reporting errno 14, EFAULT (bad address) instead of errno 17 EEXIST (File exists). I don't want to run with this because I would like this to be as idiomatic and correct as possible.
Is there some other way I should be doing this?
Do I need to somehow specify that the file path is in user address space? If so, why is that not the case in the code as it was?
EDIT: I was able to get a reliable error by trying to open with only O_RDWR and O_LARGEFILE, which resulted in ENOENT. It is still not clear why my original approach was incorrect, nor what the best way to accomplish my goal is. That said, if someone more experienced could comment on this, I can add it as a solution.
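For reference, here is a sketch of the two-step check described in the edit above: open without O_CREAT first, and only create the cache file (and write the fresh header) when that fails with -ENOENT. This illustrates the observed workaround, not necessarily the most idiomatic approach:
struct file *cfile;

/* Try to open an existing cache file first (no O_CREAT). */
cfile = filp_open(cache_file, O_RDWR | O_LARGEFILE, 0600);
if (IS_ERR(cfile)) {
    if (PTR_ERR(cfile) != -ENOENT)
        return PTR_ERR(cfile);       /* unexpected failure */

    /* No cache file yet: create it and write the first-time header. */
    cfile = filp_open(cache_file, O_CREAT | O_RDWR | O_LARGEFILE, 0600);
    if (IS_ERR(cfile))
        return PTR_ERR(cfile);
    /* ... write header describing page geometry and page count ... */
} else {
    /* Cache file exists: read and validate the header, then recover. */
}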
Indeed, filp_open expects a file path that is in kernel address space, as shown by its use of getname_kernel. You can mimic it for your use case with something like this:
struct filename *name = getname(cache_file);
struct file *cfile = ERR_CAST(name);

if (!IS_ERR(name)) {
    cfile = file_open_name(name, O_CREAT | O_EXCL | O_RDWR | O_LARGEFILE, 0600);
    putname(name);                  /* release the name even if the open failed */
    if (IS_ERR(cfile))
        return PTR_ERR(cfile);
}
Note that getname expects a user-space address; it is the user-space counterpart of getname_kernel, which works on kernel-space addresses.

DeleteFile() or unlink() calls succeed but don't remove the file

I am facing this strange problem.
To delete a file, the unlink() API is called in my code. This call removes the file and succeeds on non-Windows platforms. On Windows it succeeds (returns 0) but doesn't remove the file.
To experiment, I added a loop to call the same API repeatedly. In the second iteration I got a permission-denied error (error code 13), even though read/write attributes are set on the file and the program has full permission to access it.
I then called the DeleteFile() API instead of unlink(). To my surprise I saw the same result: the call succeeded (returned 1) but the file was not physically removed.
I checked with the Unlocker utility; no other program is accessing the file except the one trying to remove it.
Does anyone have an idea what else could be wrong?
Edit1:
Just to ensure the file was not open at the time of removing it, I saved the handle when the file was created and tried to close it before removing the file, but I got the error "'UNOPENED' (Errcode: 9 - Bad file descriptor)". Thus I concluded the file was not open at the time of removal.
Edit2
As requested, here is a simplified version of the code used to create and remove the file.
// Code to create the file
int create_file(const char* path)
{
    HANDLE osfh;                        /* OS handle of opened file      */
    DWORD fileaccess;                   /* OS file access (requested)    */
    DWORD fileshare;                    /* OS file sharing mode          */
    DWORD filecreate;                   /* OS method of opening/creating */
    DWORD fileattrib;                   /* OS file attribute flags       */
    SECURITY_ATTRIBUTES SecurityAttributes;

    /* oflag and fileattrib are supplied by the caller in the real code;
       their setup is omitted in this simplified version */
    SecurityAttributes.nLength = sizeof(SecurityAttributes);
    SecurityAttributes.lpSecurityDescriptor = NULL;
    SecurityAttributes.bInheritHandle = !(oflag & _O_NOINHERIT);

    fileaccess = GENERIC_WRITE;
    fileshare  = FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE;
    filecreate = CREATE_NEW;

    if ((osfh = CreateFile(path, fileaccess, fileshare, &SecurityAttributes,
                           filecreate, fileattrib, NULL)) == INVALID_HANDLE_VALUE)
    {
        // error handling
    }
}

// Code to delete the file
int remove_file(const char* name)
{
    int err;
    if ((err = unlink(name)) == -1)
    {
        // Error handling
    }
}
Edit3
As pointed out by Joachim Pileborg and icabod, DeleteFile() does not remove a file if it is still open. As suggested by Remy Lebeau, I used Process Explorer and found that one handle to the file was indeed open; when I closed that handle from Process Explorer, the file was deleted like a charm :)
I had also mentioned in Edit1 that I got an error when I tried to close the file. That happened because the file descriptor I get from create_file() is not the actual handle returned by the CreateFile() API but a logically mapped handle, due to underlying code complexities that support other non-Windows platforms. Anyway, I now understand the root cause of the problem, but I was expecting that if a file with an open handle is passed to the DeleteFile() API, it would fail on the first attempt rather than succeed and wait for the open handles to close.
Assuming that you call your create_file function and later call your remove_file function, you still have an open handle to the file. The WinAPI function CreateFile, if it succeeds, returns an open handle to the file, and in the code you provided you never close that handle.
From the documentation on DeleteFile:
The DeleteFile function marks a file for deletion on close. Therefore, the file deletion does not occur until the last handle to the file is closed. Subsequent calls to CreateFile to open the file fail with ERROR_ACCESS_DENIED.
My guess is that you still have a handle open, and when you close that handle the file will be deleted.
However, your sample code is incomplete, so it is difficult to tell.
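To illustrate, here is a minimal sketch of that behavior (the path is hypothetical and error handling is omitted): DeleteFile succeeds while a handle opened with FILE_SHARE_DELETE is still open, but the file only disappears once that handle is closed.
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Hypothetical path, for illustration only. */
    const char *path = "C:\\temp\\example.tmp";

    /* Open the file the same way the question does: with FILE_SHARE_DELETE. */
    HANDLE h = CreateFileA(path, GENERIC_WRITE,
                           FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                           NULL, CREATE_NEW, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return 1;

    /* Succeeds because of FILE_SHARE_DELETE, but only marks the file for
       deletion; it stays on disk while the handle above remains open. */
    printf("DeleteFile returned %d\n", DeleteFileA(path));

    /* The deletion actually happens when the last handle is closed. */
    CloseHandle(h);

    printf("exists after CloseHandle: %d\n",
           GetFileAttributesA(path) != INVALID_FILE_ATTRIBUTES);  /* expect 0 */
    return 0;
}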

Check if a file is being written using Win32 API or C/C++. I do not have write access myself

Inside a Windows C/C++ program, I need to process a text file, so I just need to open the file for reading. However, I do not want to do that while the file is still being written to by another process. I also know that this other process will eventually close the file and never write to it again.
Looking at similar questions on Stack Overflow, the typical answer is "try to open the file for writing - if that fails, then try again later".
Now in this case, my process does not have write access to the file at all, so checking whether the file can be opened for writing is not an option. It will always fail, irrespective of whether any other process has write access or not.
As Hans Passant and Igor Tandetnik said, you just need to pass the appropriate sharing flag to CreateFile. As the MSDN documentation for CreateFile says:
FILE_SHARE_WRITE 0x00000002
Enables subsequent open operations on a file or device to request write access.
Otherwise, other processes cannot open the file or device if they request write access.
If this flag is not specified, but the file or device has been opened for write access
or has a file mapping with write access, the function fails.
You'll want to use code like the following:
HANDLE handle = CreateFile(name, GENERIC_READ, FILE_SHARE_READ, NULL,
OPEN_EXISTING, 0, NULL);
if (handle == INVALID_HANDLE_VALUE) {
DWORD errcode = GetLastError();
if (errcode == ERROR_SHARING_VIOLATION) {
printf("%s: sharing violation\n", name);
} else {
printf("%s: CreateFile failed, error code = %lu\n", name, errcode);
}
} else {
printf("%s: CreateFile succeeded\n", name);
}
This code is unable to tell whether the ERROR_SHARING_VIOLATION occurred because the other process has the file open for writing or because another process opened the file without specifying FILE_SHARE_READ. In the latter case, any attempt to read from the file will fail with a sharing violation anyway. The FILE_SHARE_READ flag is passed to avoid spurious sharing violations when the file has already been opened for reading by another process. You could also add FILE_SHARE_DELETE, but I assume you'd consider that the same as write access.
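If it helps, here is a sketch of how this check might be used in practice, following the "try again later" idea mentioned in the question. The function name, the one-second polling interval, and the give-up policy are arbitrary choices, not part of the answer above:
#include <windows.h>

/* Sketch: wait until no other process holds the file open for writing,
   then return a read handle. Polls once a second. */
HANDLE WaitForReadableFile(const char *name)
{
    for (;;) {
        HANDLE h = CreateFileA(name, GENERIC_READ, FILE_SHARE_READ, NULL,
                               OPEN_EXISTING, 0, NULL);
        if (h != INVALID_HANDLE_VALUE)
            return h;                         /* writer is done; read via this handle */

        DWORD err = GetLastError();
        if (err != ERROR_SHARING_VIOLATION)
            return INVALID_HANDLE_VALUE;      /* some other failure; give up */

        Sleep(1000);                          /* still being written; try again later */
    }
}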

fputs crashing in C on Mac with Xcode

I have a command line app and have the code
chdir("/var");
FILE *scriptFile = fopen("wiki.txt", "w");
fputs("tell application \"Firefox\"\n activate\n",scriptFile);
fclose(scriptFile);
and when I run it in Xcode I get an EXC_BAD_ACCESS when it reaches the first fputs() call.
Probably the call to fopen() failed because you don't have write permissions in /var. In this case fopen() returns NULL and passing NULL to fputs() will cause an access violation.
Are you checking to make sure the file is properly being opened?
Normally you will need superuser privileges to write into /var, so this is likely your problem.
I already answered this in a comment, and a couple of people have told you what you've done wrong in their answers, but I decided to add a little sample code with error checking:
chdir("/var");
FILE *scriptFile = fopen("wiki.txt", "w");
if( !scriptFile ) {
fprintf(stderr, "Error opening file: %s\n", strerror(errno));
exit(-1);
} else {
fputs("tell application \"Firefox\"\n activate\n",scriptFile);
fclose(scriptFile);
}
Now you will see an error if your file is not opened, and it will describe why (in your case, access denied). You can make this work for testing by either 1) replacing your filename with something world-writable, like "/tmp/wiki.txt"; or 2) running your utility with privileges: sudo ./your_command_name.
