Locking Linux Serial Port - c

I have an issue that I'm trying to solve regarding the serial port in Linux. I'm able to open, read from, and close the port just fine. However, I want to ensure that I am the only person reading/writing from the port at any given time.
I thought that this was already done for me after I make the open() function call. However, I am able to call open() multiple times on the same port in my program. I can also have two threads which are both reading from the same port simultaneously.
I tried fixing this issue with flock() and I still had the same problem. Is it because both system calls are coming from the same PID, even though different file descriptors are involved with each set of opens and reads? For the record, both open() calls do return a valid file descriptor.
As a result, I'm wondering if there's any way I can remedy my problem. From my program's perspective, it's not a big deal if two calls to open() are successful on the same port, since the programmer should be aware of the hilarity they are causing. However, I just want to be sure that when I open a port, I am the only process with access to it.
Thanks for the help.

In Linux, you can use the TIOCEXCL TTY ioctl to stop further open()s of the device from succeeding (they'll return -1 with errno == EBUSY, "Device or resource busy"; a process with the CAP_SYS_ADMIN capability can still open it). This only works for terminals and serial devices, but unlike flock() it does not rely on advisory locking.
For example:
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/ioctl.h>
#include <termios.h>
#include <fcntl.h>
#include <errno.h>

int open_device(const char *const device)
{
    int descriptor, result;

    if (!device || !*device) {
        errno = EINVAL;
        return -1;
    }

    /* Retry open() if it is interrupted by a signal. */
    do {
        descriptor = open(device, O_RDWR | O_NOCTTY);
    } while (descriptor == -1 && errno == EINTR);
    if (descriptor == -1)
        return -1;

    /* Mark the terminal exclusive: further open()s will fail with EBUSY. */
    if (ioctl(descriptor, TIOCEXCL)) {
        const int saved_errno = errno;
        do {
            result = close(descriptor);
        } while (result == -1 && errno == EINTR);
        errno = saved_errno;   /* report the ioctl() failure, not the close() */
        return -1;
    }

    return descriptor;
}
Hope this helps.

I was able to fix the issue with the flock() function. Using struct flock with fcntl() wasn't working for me for some reason. With flock() I only had to add two lines of code to solve my issue.
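For anyone who finds this later, a minimal sketch of what those two lines might look like; the surrounding error handling is my own assumption, not the poster's actual code:

#include <sys/file.h>   /* flock() */

/* After open() succeeds on the serial port: take an exclusive, non-blocking
 * advisory lock. If another cooperating process already holds it, flock()
 * fails immediately with errno == EWOULDBLOCK instead of blocking. */
if (flock(fd, LOCK_EX | LOCK_NB) == -1) {
    close(fd);
    return -1;
}

Note that flock() locks are advisory: they only keep out other processes that also call flock() on the device, which is why combining this with TIOCEXCL (above) gives stronger protection.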

Related

Video4Linux ioctl error (#25) when attempting to read device information from /dev/video0

I am currently attempting to retrieve device information for a built in web-cam using the following code:
#include <fcntl.h>
#include <unistd.h>
#include <linux/media.h>
#include <sys/ioctl.h>
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <string.h>

int main(int argc, char **argv) {
    int fd = open("/dev/video0", O_RDONLY, 0);
    if (fd > 0) {
        struct media_device_info *device_data = (struct media_device_info *) malloc(sizeof(struct media_device_info) * 1);
        if (ioctl(fd, MEDIA_IOC_DEVICE_INFO, device_data) == 0)
            printf("Media Version: %u\nDriver: %s\nVersion: %d\nSerial: %s\n",
                   (unsigned int) device_data->media_version, device_data->driver,
                   (int) device_data->driver_version, device_data->serial);
        else {
            fprintf(stderr, "Couldn't get device info: %d: %s\n", errno, strerror(errno));
        }
        close(fd);
        free(device_data);
    }
    return 0;
}
When the code executes, the else block is entered, giving the following:
Couldn't get device info: 25: Inappropriate ioctl for device
From this it would seem that the device is being opened in the wrong manner such that ioctl cannot use the file descriptor. I must be missing something; could anyone here help me with regards to opening the /dev/video0 device?
Thanks!
p.s. If this has been answered before elsewhere please let me know. If this question is invalid in any way then please accept my apologies.
It seems that the /dev/video* devices may be bound to separate /dev/media* devices, and you need to issue your MEDIA_IOC_DEVICE_INFO ioctl against the corresponding /dev/media* device for your /dev/video* device.
As to how to locate that corresponding device id, the best I have come up with is to search for media* files within the /sys/class/video4linux/video{N}/device directory.
For example, for a given device /dev/video0 on my system (kernel 4.15.0-34-generic), searching for media* files under /sys/class/video4linux/video0/device turned up media10, which I was then able to use to recover the serial number (open /dev/media10, issue the ioctl command).
I don't know whether this method of finding the corresponding media devices is consistent across distros/versions/kernels/etc.
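For illustration, a rough C sketch of that lookup; the glob pattern, the hard-coded video0, and the minimal error handling are my assumptions, and the sysfs layout may vary across kernels:

#include <fcntl.h>
#include <glob.h>
#include <libgen.h>
#include <linux/media.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void) {
    /* Look for a media* entry under the video device's sysfs directory. */
    glob_t g;
    if (glob("/sys/class/video4linux/video0/device/media*", 0, NULL, &g) != 0 || g.gl_pathc == 0) {
        fprintf(stderr, "No corresponding media device found\n");
        return 1;
    }

    /* e.g. ".../device/media10" -> "/dev/media10" */
    char dev_path[64];
    snprintf(dev_path, sizeof(dev_path), "/dev/%s", basename(g.gl_pathv[0]));
    globfree(&g);

    int fd = open(dev_path, O_RDONLY);
    if (fd < 0) {
        perror(dev_path);
        return 1;
    }

    struct media_device_info info;
    memset(&info, 0, sizeof(info));
    if (ioctl(fd, MEDIA_IOC_DEVICE_INFO, &info) == 0)
        printf("Driver: %s\nSerial: %s\n", info.driver, info.serial);
    else
        perror("MEDIA_IOC_DEVICE_INFO");

    close(fd);
    return 0;
}

The idea is simply to turn the media* entry found under sysfs into the matching /dev/media* node and issue the ioctl against that node instead of /dev/video0.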

why non-blocking write to disk doesn't return EAGAIN or EWOULDBLOCK?

I modified a program from APUE: the program first opens a file, then marks the fd as non-blocking, then keeps writing to the fd until write returns -1.
I think that since disk I/O is slow, when the OS write buffers are nearly full the write system call should return -1 with errno set to EAGAIN or EWOULDBLOCK.
But I ran the program for several minutes, and repeated it several times, and the write system call didn't return -1 even once! Why?
Here's the code:
#include "apue.h"
#include <errno.h>
#include <fcntl.h>
char buf[4096];
int
main(void)
{
int nwrite;
int fd = open("a.txt", O_RDWR);
if(fd<0){
printf("fd<0\n");
return 0;
}
int i;
for(i = 0; i<sizeof(buf); i++)
buf[i] = i*2;
set_fl(fd, O_NONBLOCK); /* set nonblocking */
while (1) {
nwrite = write(fd, buf, sizeof(buf));
if (nwrite < 0) {
printf("write returned:%d, errno=%d\n", nwrite, errno);
return 0;
}
}
clr_fl(STDOUT_FILENO, O_NONBLOCK); /* clear nonblocking */
exit(0);
}
The O_NONBLOCK flag is primarily meaningful for file descriptors representing streams (e.g., pipes, sockets, and character devices), where it prevents read and write operations from blocking when there is no data waiting to read, or buffers are too full to write anything more at the moment. It has no effect on file descriptors opened to regular files; disk I/O delays are essentially ignored by the system.
If you want to do asynchronous I/O to files, you may want to take a look at the POSIX AIO interface. Be warned that it's rather hairy and infrequently used, though.
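For the curious, a minimal POSIX AIO sketch; the file name and buffer size are placeholders, and older glibc needs -lrt at link time:

#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    static char buf[4096];
    memset(buf, 'x', sizeof(buf));

    int fd = open("a.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    struct aiocb cb;
    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;
    cb.aio_buf    = buf;
    cb.aio_nbytes = sizeof(buf);
    cb.aio_offset = 0;

    /* aio_write() queues the request and returns immediately. */
    if (aio_write(&cb) == -1) {
        perror("aio_write");
        return 1;
    }

    /* The program is free to do other work; here we just poll for completion. */
    while (aio_error(&cb) == EINPROGRESS)
        usleep(1000);

    printf("aio_return: %zd\n", aio_return(&cb));
    close(fd);
    return 0;
}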

Why does bind (socket binding) method generate a file in the current directory?

#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>

int main()
{
    int sock;
    struct sockaddr sock_name = {AF_UNIX, "Fred"};
    socklen_t len = sizeof(struct sockaddr) + 5;

    if ((sock = socket(AF_UNIX, SOCK_STREAM, 0)) == -1)
    {
        printf("error creating socket");
        return -1;
    }
    if (bind(sock, &sock_name, len) != 0)
    {
        printf("socket bind error");
        return -1;
    }
    close(sock);
    return 0;
}
After the first run, this program keeps reporting binding error. I tried to change the name of the sockaddr. It works again. But after changing it back to "Fred" (in this case), the error continues. Is something being stored in memory I didn't clear? Why does this happen and how could I fix it?
I guess I have found the problem. After the first run, I find a file named "Fred" in the current directory. I removed the file and my program worked again. Why does bind method generate a file in the current directory?
When used with Unix domain sockets, bind(2) will create a special file at the specified path. This file identifies the socket in much the same way a host and port identify a TCP or UDP socket. Just like you can't call bind twice to associate two different sockets with a given host and port*, you can't associate more than one Unix domain socket with a given path.
But why doesn't the file disappear when you call close(2)? After all, closing a TCP socket makes the host and port it was bound to available for other sockets.**
That's a good question, and the short answer is, it just doesn't.
So it's customary (at least in example code) to call unlink(2) prior to binding; a minimal sketch follows after the footnotes below. The Unix domain socket section of Beej's IPC guide has a nice example of this.
*With versions of the Linux kernel >= 3.9, this isn't exactly true.
**After TIME_WAIT or immediately if you use the SO_REUSEADDR socket option.
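A minimal sketch of that unlink-before-bind pattern using struct sockaddr_un (the path "Fred" is just taken from the question; error handling is abbreviated):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void)
{
    int sock = socket(AF_UNIX, SOCK_STREAM, 0);
    if (sock == -1) {
        perror("socket");
        return 1;
    }

    struct sockaddr_un addr;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, "Fred", sizeof(addr.sun_path) - 1);

    unlink(addr.sun_path);   /* remove any stale socket file from a previous run */

    if (bind(sock, (struct sockaddr *)&addr, sizeof(addr)) == -1) {
        perror("socket bind error");
        close(sock);
        return 1;
    }

    close(sock);
    unlink(addr.sun_path);   /* clean up the socket file this run created */
    return 0;
}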
EDIT
You said this is your teacher's code, but I suggest that you replace your printf calls with perror:
if (bind(sock, &sock_name, len) != 0)
{
    perror("socket bind error");
    return -1;
}
...which will print out a human-readable representation of the real problem encountered by bind(2):
$ ./your-example-executable
$ ./your-example-executable
socket bind error: Address already in use
Programming doesn't have to be so inscrutable!
When you successfully bind a Unix domain socket, the socket file it creates stays on the filesystem until it is removed (even if your program terminates).
It appears that the question code is not cleaning up the socket in the event of an error (such as the failure of bind()).
Two processes cannot generally bind to the same socket path.
Each time the code is executed, it is a new process, attempting to bind to the same path.
The code needs a better scheme to handle errors.
This is how I would do it:
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>

#define MY_FALSE (0)
#define MY_TRUE  (-1)

#define SOCKET_FILE "Fred"

int main()
{
    int rCode = 0;
    int sock = (-1);
    struct sockaddr sock_name = {AF_UNIX, SOCKET_FILE};
    socklen_t len = sizeof(struct sockaddr) + 5;
    int bound = MY_FALSE;

    if ((sock = socket(AF_UNIX, SOCK_STREAM, 0)) == -1)
    {
        printf("error creating socket");
        rCode = (-1);
        goto CLEANUP;
    }

    if (bind(sock, &sock_name, len) != 0)
    {
        printf("socket bind error");
        rCode = (-1);
        goto CLEANUP;
    }
    bound = MY_TRUE;

    /* This single 'cleanup' area can be used to free allocated memory,
     * close sockets & files, etc. */
CLEANUP:
    if ((-1) != sock)
        close(sock);

    if (bound)
        unlink(SOCKET_FILE);

    return rCode;
}

Read file in non-blocking mode on Linux

When I open the file /dev/urandom in non-blocking mode, reading from it still seems to block. Why is the read call still blocking?
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <errno.h>

int main(int argc, char *argv[])
{
    int fd = open("/dev/urandom", O_NONBLOCK);
    if (fd == -1) {
        printf("Unable to open file\n");
        return 1;
    }

    int flags = fcntl(fd, F_GETFL);
    if (flags & O_NONBLOCK) {
        printf("non block is set\n");
    }

    int ret;
    char* buf = (char*)malloc(10000000);
    ret = read(fd, buf, 10000000);
    if (ret == -1) {
        printf("Error reading: %s\n", strerror(errno));
    } else {
        printf("bytes read: %d\n", ret);
    }
    return 0;
}
The output looks like this:
gcc nonblock.c -o nonblock
./nonblock
non block is set
bytes read: 10000000
Opening any (device) file in nonblocking mode does not mean you never need to wait for it.
O_NONBLOCK just says return EAGAIN if there is no data available.
Obviously, the urandom driver always considers itself to have data available, but it isn't necessarily fast to deliver it.
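To make that concrete, a small sketch (my own, not from the question) that times the same 10 MB read; O_NONBLOCK doesn't make the read fail, it simply is not a promise that the read is instantaneous (older glibc may need -lrt for clock_gettime):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/urandom", O_RDONLY | O_NONBLOCK);
    if (fd == -1) {
        perror("open");
        return 1;
    }

    size_t len = 10000000;          /* same 10 MB as the question */
    char *buf = malloc(len);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    ssize_t n = read(fd, buf, len); /* "non-blocking", but still takes time */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ms = (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) / 1e6;
    printf("read %zd bytes in %.1f ms\n", n, ms);

    free(buf);
    close(fd);
    return 0;
}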
/dev/urandom is non-blocking by design:
When read, the /dev/random device will only return random bytes
within the estimated number of bits of noise in the entropy pool.
/dev/random should be suitable for uses that need very high quality
randomness such as one-time pad or key generation. When the entropy
pool is empty, reads from /dev/random will block until additional
environmental noise is gathered.
A read from the /dev/urandom device will not block waiting for more
entropy. As a result, if there is not sufficient entropy in the
entropy pool, the returned values are theoretically vulnerable to a
cryptographic attack on the algorithms used by the driver.
If you replace it with /dev/random, your program should produce a different result.
In Linux, it is not possible to open regular files in non-blocking mode. You have to use the AIO interface to read from /dev/urandom in non-blocking mode.

mmap, msync and linux process termination

I want to use mmap to implement persistence of certain portions of program state in a C program running under Linux by associating a fixed-size struct with a well known file name using mmap() with the MAP_SHARED flag set. For performance reasons, I would prefer not to call msync() at all, and no other programs will be accessing this file. When my program terminates and is restarted, it will map the same file again and do some processing on it to recover the state that it was in before the termination. My question is this: if I never call msync() on the file descriptor, will the kernel guarantee that all updates to the memory will get written to disk and be subsequently recoverable even if my process is terminated with SIGKILL? Also, will there be general system overhead from the kernel periodically writing the pages to disk even if my program never calls msync()?
EDIT: I've settled the problem of whether the data is written, but I'm still not sure about whether this will cause some unexpected system loading over trying to handle this problem with open()/write()/fsync() and taking the risk that some data might be lost if the process gets hit by KILL/SEGV/ABRT/etc. Added a 'linux-kernel' tag in hopes that some knowledgeable person might chime in.
I found a comment from Linus Torvalds that answers this question
http://www.realworldtech.com/forum/?threadid=113923&curpostid=114068
The mapped pages are part of the filesystem cache, which means that even if the user process that made a change to that page dies, the page is still managed by the kernel and as all concurrent accesses to that file will go through the kernel, other processes will get served from that cache.
In some old Linux kernels it was different; that's the reason why some kernel documents still tell you to force msync.
EDIT: Thanks to RobH for correcting the link.
EDIT:
A new flag, MAP_SYNC, was introduced in Linux 4.15; it can guarantee this coherence (it is only supported for files on DAX, i.e. persistent-memory, filesystems). A sketch of its use follows the references below.
Shared file mappings with this flag provide the guarantee that
while some memory is writably mapped in the address space of
the process, it will be visible in the same file at the same
offset even after the system crashes or is rebooted.
references:
http://man7.org/linux/man-pages/man2/mmap.2.html search MAP_SYNC in the page
https://lwn.net/Articles/731706/
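A minimal sketch of how that flag is used, assuming a kernel >= 4.15, a glibc new enough to define the flags (otherwise include <linux/mman.h>), and a file on a DAX-capable filesystem; on anything else, MAP_SHARED_VALIDATE | MAP_SYNC makes mmap() fail with EOPNOTSUPP:

#define _GNU_SOURCE          /* for MAP_SHARED_VALIDATE and MAP_SYNC */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/mnt/pmem/state.bin", O_RDWR);   /* hypothetical DAX-backed file */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    size_t len = 4096;
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
    if (p == MAP_FAILED) {
        perror("mmap(MAP_SYNC)");
        return 1;
    }

    /* Per the man page quote above, stores through p are visible in the file
     * at the same offset even after a crash, without an explicit msync(). */
    ((char *)p)[0] = 42;

    munmap(p, len);
    close(fd);
    return 0;
}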
I decided to be less lazy and answer the question of whether the data is written to disk definitively by writing some code. The answer is that it will be written.
Here is a program that kills itself abruptly after writing some data to an mmap'd file:
#include <stdint.h>
#include <string.h>
#include <stdlib.h>
#include <stdio.h>
#include <errno.h>
#include <signal.h>   /* for kill() */
#include <unistd.h>
#include <sys/types.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>

typedef struct {
    char data[100];
    uint16_t count;
} state_data;

const char *test_data = "test";

int main(int argc, const char *argv[]) {
    int fd = open("test.mm", O_RDWR|O_CREAT|O_TRUNC, (mode_t)0700);
    if (fd < 0) {
        perror("Unable to open file 'test.mm'");
        exit(1);
    }
    size_t data_length = sizeof(state_data);
    if (ftruncate(fd, data_length) < 0) {
        perror("Unable to truncate file 'test.mm'");
        exit(1);
    }

    state_data *data = (state_data *)mmap(NULL, data_length, PROT_READ|PROT_WRITE, MAP_SHARED|MAP_POPULATE, fd, 0);
    if (MAP_FAILED == data) {
        perror("Unable to mmap file 'test.mm'");
        close(fd);
        exit(1);
    }

    memset(data, 0, data_length);
    for (data->count = 0; data->count < 5; ++data->count) {
        data->data[data->count] = test_data[data->count];
    }

    kill(getpid(), 9);   /* die abruptly, without munmap() or msync() */
}
Here is a program that validates the resulting file after the previous program is dead:
#include <stdint.h>
#include <string.h>
#include <stdlib.h>
#include <stdio.h>
#include <errno.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <assert.h>

typedef struct {
    char data[100];
    uint16_t count;
} state_data;

const char *test_data = "test";

int main(int argc, const char *argv[]) {
    int fd = open("test.mm", O_RDONLY);
    if (fd < 0) {
        perror("Unable to open file 'test.mm'");
        exit(1);
    }
    size_t data_length = sizeof(state_data);
    state_data *data = (state_data *)mmap(NULL, data_length, PROT_READ, MAP_SHARED|MAP_POPULATE, fd, 0);
    if (MAP_FAILED == data) {
        perror("Unable to mmap file 'test.mm'");
        close(fd);
        exit(1);
    }

    assert(5 == data->count);
    unsigned index;
    for (index = 0; index < 4; ++index) {
        assert(test_data[index] == data->data[index]);
    }
    printf("Validated\n");
}
I found something adding to my confusion:
munmap does not affect the object that was mapped; that is, the call to munmap does not cause the contents of the mapped region to be written to the disk file. The updating of the disk file for a MAP_SHARED region happens automatically by the kernel's virtual memory algorithm as we store into the memory-mapped region.
This is excerpted from Advanced Programming in the UNIX® Environment.
From the Linux manpage:
MAP_SHARED Share this mapping with all other processes that map this object. Storing to the region is equivalent to writing to the file. The file may not actually be updated until msync(2) or munmap(2) are called.
The two seem contradictory. Is APUE wrong?
I did not find a very precise answer to your question, so I decided to add one more:
Firstly, about losing data: whether you use write() or mmap()/memcpy(), the data ends up in the page cache and is synced to the underlying storage in the background by the OS according to its page writeback settings. For example, Linux has vm.dirty_writeback_centisecs, which controls how often the flusher threads wake up to write dirty pages back to disk. Even if your process dies after the write call has succeeded, the data is not lost, because it is already in kernel pages that will eventually be written to storage. The only case where you lose data is if the OS itself crashes (kernel panic, power loss, etc.). The way to be absolutely sure your data has reached storage is to call fsync(), or msync() for mmapped regions, as the case may be.
About the system load concern: yes, calling msync()/fsync() for each request will slow your throughput drastically, so do that only if you have to. Remember that you are really only protecting against losing data on OS crashes, which I would assume are rare and probably something most could live with. One common optimization is to issue the sync at regular intervals, say every second, to get a good balance.
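For example, a minimal sketch of that periodic-sync idea for a mmapped region; the helper name and the one-second interval are placeholders:

#include <stddef.h>
#include <sys/mman.h>
#include <time.h>

/* Hypothetical helper: call this from the main loop. It forces the mapped
 * region out to storage at most once per second, instead of after every update. */
static void maybe_sync(void *region, size_t length)
{
    static time_t last_sync = 0;
    time_t now = time(NULL);

    if (now - last_sync >= 1) {
        msync(region, length, MS_SYNC);   /* blocks until the writeback completes */
        last_sync = now;
    }
}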
Either the Linux manpage information is incorrect or Linux is horribly non-conformant. msync is not supposed to have anything to do with whether the changes are committed to the logical state of the file, or whether other processes using mmap or read to access the file see the changes; it's purely an analogue of fsync and should be treated as a no-op except for the purposes of ensuring data integrity in the event of power failure or other hardware-level failure.
According to the manpage,
The file may not actually be
updated until msync(2) or munmap() is called.
So you will need to make sure you call munmap() prior to exiting at the very least.
