I am working on a shared memory assignment on Mac OS X.
#define SHARED_OBJECT_PATH "/my_shared_memory"
fd = shm_open(SHARED_OBJECT_PATH, O_CREAT | O_EXCL | O_RDWR, S_IRWXU | S_IRWXG);
if (fd < 0) {
    perror("In shm_open()");
    exit(1);
}
The above is one of the small snippets in the program.
When I compile and run the program a second time, I get this error:
In shm_open(): File exists
I am assuming this is because I need to manually delete it using rm [path_to]/my_shared_memory. I know that on Linux the default location is /dev/shm; however, this path does not exist on Mac OS X.
Where is the location of my_shared_memory so I can delete it?
The simplest solution to your problem is to not use O_EXCL if you don't want that behaviour.
Generally, shared memory objects do have a name, but it's not really a file name, so you usually can't just delete them from the file system. Some systems expose them under /dev/shm, but this depends on your OS:
My best guess would be that you should read what man shm_open says on your machine.
Under Mac OS, which is derived from BSD, there are no visible file entries in the file system for shared memory objects. Cf. https://stackoverflow.com/a/73752984/14393739 for more details.
As a consequence, it is not possible to do cleanups with an rm command or an unlink() call; shm_unlink() is the way to remove an object by name. The O_EXCL flag should be used with care: at program startup, try shm_open() without O_EXCL and O_CREAT first, and only if that call fails, retry with both flags.
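For illustration, here is a minimal sketch of that startup sequence and of the eventual cleanup. The object name comes from the question; the 4096-byte size and the user-only permissions are assumptions made for the example:

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

#define SHARED_OBJECT_PATH "/my_shared_memory"

int main(void)
{
    /* First try to attach to an already existing object. */
    int fd = shm_open(SHARED_OBJECT_PATH, O_RDWR, 0);
    if (fd < 0) {
        /* It does not exist yet: create it, atomically. */
        fd = shm_open(SHARED_OBJECT_PATH, O_CREAT | O_EXCL | O_RDWR,
                      S_IRUSR | S_IWUSR);
        if (fd < 0) {
            perror("In shm_open()");
            exit(1);
        }
        /* A freshly created object has size 0; size it exactly once. */
        if (ftruncate(fd, 4096) < 0) {
            perror("In ftruncate()");
            exit(1);
        }
    }
    /* ... mmap() and use the memory ... */
    close(fd);
    /* When the object is no longer needed, remove it by name;
     * this is the programmatic replacement for "rm". */
    shm_unlink(SHARED_OBJECT_PATH);
    return 0;
}

Depending on your libc you may need to link with -lrt on Linux; on Mac OS X no extra library is needed.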
Related
I'm making some extensions to the kernel module nandsim, and I'm having trouble finding the correct way to test if a file exists before opening it. I've read this question, which covers how the basic open/read/write operations go, but I'm having trouble figuring out if and how the normal open(2) flags apply here.
I'm well aware that file reading and writing in kernel modules is bad practice; this code already exists in the kernel and is already reading and writing files. I am simply trying to make a few adjustments to what is already in place. At present, when the module is loaded and instructed to use a cache file (specified as a string path when invoking modprobe), it uses filp_open() to open the file or create it if it does not exist:
/* in nandsim.c */
...
module_param(cache_file, charp, 0400);
...
MODULE_PARM_DESC(cache_file, "File to use to cache nand pages instead of memory");
...
struct file *cfile;
cfile = filp_open(cache_file, O_CREAT | O_RDWR | O_LARGEFILE, 0600);
You might ask, "what do you really want to do here?" I want to include a header for the cache file, such that it can be reused if the system needs to be reset. By including information about the nand page geometry and page count at the beginning of this file, I can more readily simulate a number of error conditions that otherwise would be impossible within the nandsim framework. If I can bring down the nandsim module during file operations, or modify the backing file to model a real-world fault mode, I can recreate the net effect of these error conditions.
This would allow me to bring the simulated device back online using nandsim, and assess how well a fault-tolerant file system is doing its job.
My thought process was to modify it as follows, such that it would fail trying to force creation of a file which already exists:
struct file *cfile;
cfile = filp_open(cache_file, O_CREAT | O_EXCL | O_RDWR | O_LARGEFILE, 0600);
if (IS_ERR(cfile)) {
    printk(KERN_INFO "File didn't exist: %ld\n", PTR_ERR(cfile));
    /* Do header setup for first-time run of NAND simulation */
}
else {
    /* Read header and validate against system parameters. Recover operations */
}
What I'm seeing is an error, but it is not the one I would have expected: it reports errno 14, EFAULT (bad address), instead of errno 17, EEXIST (file exists). I don't want to run with this because I would like it to be as idiomatic and correct as possible.
Is there some other way I should be doing this?
Do I need to somehow specify that the file path is in user address space? If so, why is that not the case in the code as it was?
EDIT: I was able to get a reliable error by trying to open with only O_RDWR and O_LARGEFILE, which resulted in ENOENT. It is still not clear why my original approach was incorrect, nor what the best way to accomplish my goal is. That said, if someone more experienced could comment on this, I can add it as a solution.
Indeed, filp_open() expects a file path that is in kernel address space; the proof is its use of getname_kernel(). You can mimic it for your use case with something like this:
struct filename *name = getname(cache_file);
struct file *cfile = ERR_CAST(name);
if (!IS_ERR(name)) {
    cfile = file_open_name(name, O_CREAT | O_EXCL | O_RDWR | O_LARGEFILE, 0600);
    putname(name);  /* release the name whether or not the open succeeded */
    if (IS_ERR(cfile))
        return PTR_ERR(cfile);
}
Note that getname() expects a user-space address; it is the user-space counterpart of getname_kernel().
I am writing a program that uses POSIX shared memory and have an error that I am unsure how to fix. I looked for similar questions but could not find any relevant to this specific problem.
Two files are involved - server.c, which contains the code run by the program, and shm.c, which contains functions that provide abstraction for handling the shared memory. This is an assignment, so I cannot deviate very far from the current structure.
Below is the relevant code from each file:
server.c
int shmFd;
shmFd = createSHM(SHNAME);
shm.c
int createSHM(char * shname)
{
    int fileDesc;
    fileDesc = shm_open(shname, O_CREAT | O_RDWR, 0);
    if (fileDesc == -1)
    {
        perror("Error: Could not create shared memory space");
    }
    return fileDesc;
}
shm.h
#define SHNAME "/shmserver"
When I attempt to run the built program in the terminal, the following error appears:
Error: Could not create shared memory space: Permission denied
Any help would be much appreciated.
The line
fileDesc = shm_open(shname, O_CREAT | O_RDWR, 0);
gives no one any access rights to the shared memory object. Once you create a shared memory object with no access rights, only the root user will be able to open it.
Instead, use (for example)
fileDesc = shm_open(shname, O_CREAT | O_RDWR, S_IRUSR | S_IWUSR | S_IXUSR);
(You could allow other users to access the shared memory, obviously. But you need, at a minimum, to allow yourself to access it; otherwise, you won't be able to open it once you've created it.)
Perhaps it is worth noting that your error message is misleading, so you might be confusing yourself (and others): the call to shm_open does not fail while it is creating the shared memory object. What fails is opening an already created shared memory object to which the user has no access.
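For reference, a minimal sketch of the corrected createSHM() (S_IRUSR | S_IWUSR is just one reasonable choice of permissions, not the only valid one):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>

int createSHM(char * shname)
{
    /* Give the creating user read/write access; otherwise later opens
     * of the same object fail with "Permission denied". */
    int fileDesc = shm_open(shname, O_CREAT | O_RDWR, S_IRUSR | S_IWUSR);
    if (fileDesc == -1)
    {
        perror("Error: Could not create shared memory space");
    }
    return fileDesc;
}

Remember that an object already created with mode 0 by an earlier run will still refuse to open; remove it once (shm_unlink(SHNAME) from a small helper program, or rm /dev/shm/shmserver on a typical Linux system) before running the corrected code.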
I have seen the following code pattern in a legacy project:
Step 1: Check whether a shared-memory object with the name "/abc" has already been created:
int fd = shm_open("/abc", O_RDWR, 0777);
if (fd != -1)
{
    close(fd);
    return -1;
}
Step 2: Remove an object previously created by shm_open():
shm_unlink("/abc");
Step 3: Create a shared memory object:
fd = shm_open("/abc", (O_CREAT | O_RDWR), S_IWUSR);
Is Step 2 redundant?
The code only reaches Step 2 when no shared-memory object named "/abc" exists; in other words, the code returns early if the object does exist. So why should we explicitly call shm_unlink to remove a non-existent object?
Can we shorten the three steps to just one?
I think we can proceed as follows, where we use the flag O_EXCL to check whether there exists an old memory object and create it if it doesn't exist at all. The shm_open() man page says:
O_EXCL
If O_CREAT was also specified, and a shared memory object
with the given name already exists, return an error. The
check for the existence of the object, and its creation if
it does not exist, are performed atomically.
So it should be okay to replace all the code above with a single line:
int fd = shm_open("/abc", O_RDWR | O_EXCL, 0777);
Is that correct?
Is Step 2 redundant?
It is; it serves no purpose.
Besides, this check-then-act pattern is prone to a TOCTOU (time-of-check to time-of-use) race.
Can we shorten the three steps above to a single step?
Yes, that's the right way to go about it. But you'll also need the O_CREAT flag (which is missing from your code).
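For concreteness, a minimal sketch of the single-call replacement (the mode S_IWUSR mirrors the old Step 3; the helper name create_abc is made up for the example):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>

/* One call replaces check / unlink / create: the existence check and the
 * creation are atomic, and an already existing "/abc" makes it fail with
 * EEXIST, matching the old "return -1 if it already exists" behaviour. */
int create_abc(void)
{
    int fd = shm_open("/abc", O_CREAT | O_EXCL | O_RDWR, S_IWUSR);
    if (fd == -1) {
        if (errno == EEXIST)
            return -1;          /* object was already there */
        perror("shm_open");     /* some other failure */
        return -1;
    }
    return fd;
}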
I am supposed to write a program that creates new files using the open() call, which everything I read says should create the file if it doesn't already exist.
My code is like this:
char startroom[] = "laruee.rooms/startroom.txt";
//...
int file_descriptor;
//...
file_descriptor = open(startroom, O_WRONLY | O_CREAT );
if (file_descriptor < 0)
{
    fprintf(stderr, "Could not open %s\n", startroom);
    perror("in main");
    exit(1);
}
However, despite everything I've googled about this command indicating that the file should get created if it doesn't already exist, the file is not getting created. (And also from everything I googled, it appears that I am the only programmer in the universe having this problem.)
What gives?
Your question could be operating-system (and even file-system) specific. I guess you are running on Linux on some usual file-system like Ext4 or BTRFS.
Read open(2) & path_resolution(7). There are several reasons why an open could fail (and we cannot guess which is relevant for your computer).
It might happen that your code is not running in the conditions you want it to (working directory, user id, ...).
Then, improve your code as:
char startroom[] = "laruee.rooms/startroom.txt";
//...
int file_descriptor = open(startroom, O_WRONLY | O_CREAT, 0644); /* O_CREAT needs a mode argument; 0644 is an example */
if (file_descriptor < 0) {
    fprintf(stderr, "Could not open %s : %s\n",
            startroom, strerror(errno));
    char wdbuf[256];
    if (getcwd(wdbuf, sizeof(wdbuf)))
        fprintf(stderr, "in %s\n", wdbuf);
    exit(EXIT_FAILURE);
}
When using perror or strerror on errno, you don't want any function that might change errno to be called between the failing syscall and the error report. On Linux with glibc, the fprintf(3) function also understands %m.
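For instance, on glibc the whole report can be a single call (glibc-specific; %m is not portable, and 0644 is again just an example creation mode):

/* glibc's printf family expands %m to strerror(errno) while formatting,
 * so nothing can clobber errno between the failing open() and the message. */
int fd = open(startroom, O_WRONLY | O_CREAT, 0644);
if (fd < 0)
    fprintf(stderr, "Could not open %s: %m\n", startroom);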
BTW, you could also strace(1) your program.
Perhaps look also in your /var/log/syslog or /var/log/messages. Be sure your disk is not full (use df -h and df -i). If you have disk quotas, be sure to not overflow them. Be sure that your program is running in the directory you want it to and that current directory contains a laruee.rooms/ sub-directory; you might get it with getcwd(2) like I did above.
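If the missing laruee.rooms/ sub-directory turns out to be the culprit, here is a minimal sketch of creating it on demand before the open (the helper name ensure_rooms_dir and the 0755 mode are just choices for the example):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/types.h>

/* Create the directory for the room files if it does not exist yet;
 * an already existing directory (EEXIST) is not an error. */
static void ensure_rooms_dir(const char *dir)
{
    if (mkdir(dir, 0755) < 0 && errno != EEXIST) {
        fprintf(stderr, "Could not create %s : %s\n", dir, strerror(errno));
        exit(EXIT_FAILURE);
    }
}

Call ensure_rooms_dir("laruee.rooms") before opening startroom.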
Particularly for server-like programs, you might want to use syslog(3).
You should read Advanced Linux Programming
BTW, your open is not a command (that would be xdg-open) but a system call, or at least a POSIX function.
System: Ubuntu 12.04
Compiler: gcc (version: 4.6.3)
My idea is to write a client-server application to exchange data via the serial port.
But my problem is that, when I execute the code snippet below, open returns the same file descriptor value
if I start two independent processes:
The first process opens "/dev/ttyS0".
The second process opens "/dev/ttyS1".
....
serialPortDescriptor = open(portName,
                            O_RDWR | O_NOCTTY | O_NDELAY | O_EXCL);
if (serialPortDescriptor == INVALID_SERIALPORT_DESCRIPTOR) {
    return SERIALPORT_UNKNOWN_ERROR;
}
.....
Is it normal that open returns the identical file descriptor value for different devices/pathnames ("/dev/ttyS1" and "/dev/ttyS0" respectively) in two different processes/programs?
It's totally normal. A file descriptor is just an index into the kernel's per-process open file table. Each process has its own table, so the same numeric value in two different processes refers to two completely unrelated open files; since descriptors 0, 1 and 2 are taken by stdin, stdout and stderr, the first file each process opens typically gets descriptor 3.
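As a small illustration (a standalone demo, not your serial-port code), the program below prints the descriptor it gets; run two copies with different paths and both will normally report fd 3:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    /* Open whatever path is given (default: /dev/null) and show which
     * descriptor this process was handed. */
    const char *path = (argc > 1) ? argv[1] : "/dev/null";
    int fd = open(path, O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    printf("pid %d: %s -> fd %d\n", (int)getpid(), path, fd);
    close(fd);
    return 0;
}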