How to set interrupt with serial on linux? - c

I want to set up an interrupt for a serial port on Linux, so I am doing it with a signal. The signal handler works, but I don't know how to find out the number of characters available. Specifically, I am not sure what to pass as the third parameter of read() when the handler is called by the system. So I need a way to know the amount of serial data available.
Thank you all.
PS: My English is not good, so the above may not be clearly expressed.
void serialHandler(int sig)
{
    read(fd, buffer, /* I don't know */);
}

Specifically I am not sure the third parameter in read() function when the handler is called by system
read() is described fully here, and includes the following example:
#include <sys/types.h>
#include <unistd.h>
...
char buf[20];
size_t nbytes;
ssize_t bytes_read;
int fd;
...
nbytes = sizeof(buf);
bytes_read = read(fd, buf, nbytes);
It is common to use a loop construct around code similar to that shown above, testing the return value of read() for an exit criterion. In the (non-looped) example above, bytes_read contains the number of bytes successfully read. The value returned can be less than the count requested in the nbytes parameter, for example when fewer bytes are currently available. If end-of-file (EOF) is encountered, read() returns 0; if an error occurs, it returns -1 and sets errno to indicate the error.
Note: As mentioned in the comments, when read() is used with a serial port it will most likely never see an EOF condition.
Also, to expound on the comment about using timeouts with read(): a timeout for the read itself can be implemented using the select() function.
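A minimal sketch of that approach, assuming fd is an already-opened and configured serial port descriptor (the helper name and the 5-second timeout are illustrative):
#include <sys/select.h>
#include <unistd.h>

/* Wait up to 5 seconds for data, then read whatever has arrived. */
ssize_t read_with_timeout(int fd, char *buf, size_t len)
{
    fd_set rfds;
    struct timeval tv = { .tv_sec = 5, .tv_usec = 0 };

    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);

    int rc = select(fd + 1, &rfds, NULL, NULL, &tv);
    if (rc < 0)
        return -1;              /* select() error */
    if (rc == 0)
        return 0;               /* timeout, no data arrived */
    return read(fd, buf, len);  /* data available; read() will not block */
}
This can be wrapped in a loop per the earlier note about looping until an exit criterion is met.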
There is more information here to help with creating algorithms to read from the port.

Related

clear a Pipe in C

I'm sending data from one process to another through a pipe and I want to clear the pipe after reading.
Is there a function in C that can do this?
Yes. It's just the read function, a POSIX call declared in <unistd.h> (not part of stdio). You have to invoke it as many times as needed in order to be sure the pipe is empty.
As the documentation suggests, the read function attempts to read count bytes from an I/O channel (a pipe in your case) for which you have passed the file descriptor as the first argument, and places the content into a buffer with enough room to accommodate it.
Let's recall that the read function may return a value indicating that fewer bytes were read than requested. This is perfectly fine if there are fewer bytes to read than you expected.
Also remember that reading from a pipe blocks if there is nothing to read and the writer has not yet closed its descriptor, which means that you will not get EOF until the counterpart closes its end. You would therefore get stuck while attempting to read from the pipe. If you want to avoid that possibility, I suggest the solution below, based on the poll function, to verify whether there is data to read from a file descriptor:
#include <poll.h>
#include <unistd.h>

int main(void)
{
    struct pollfd pfd;
    char buf[4096];

    /* your operations: pipe_fd is the read end of your pipe */
    pfd.fd = pipe_fd;
    pfd.events = POLLIN;

    /* drain the pipe: keep reading while poll() reports data available */
    while (poll(&pfd, 1, 0) == 1 && (pfd.revents & POLLIN))
        read(pfd.fd, buf, sizeof buf);

    return 0;
}

Linux select() not blocking

I'm trying to understand the difference between select() and poll() better. For this I tried to implement a simple program that opens a file as write-only, adds its file descriptor to the read set and then executes select(), in the hope that the call will block until read permission is granted.
As this didn't work (and as far as I understood, this is intended behaviour), I tried to block access to the file using flock before the select() execution. Still, the program did not block.
My sample code is as follows:
#include <stdio.h>
#include <poll.h>
#include <fcntl.h>
#include <sys/file.h>
#include <errno.h>
#include <sys/select.h>

int main(int argc, char **argv)
{
    printf("[+] Select minimal example\n");
    int max_number_fds = FOPEN_MAX;
    int select_return;
    int cnt_pollfds;
    struct pollfd pfds_array[max_number_fds];
    struct pollfd *pfds = pfds_array;
    fd_set fds;

    int fd_file = open("./poll_text.txt", O_WRONLY);

    struct timeval tv;
    tv.tv_sec = 10;
    tv.tv_usec = 0;

    printf("\t[+] Textfile fd: %d\n", fd_file);

    // create and set fds set
    FD_ZERO(&fds);
    FD_SET(fd_file, &fds);

    printf("[+] Locking file descriptor!\n");
    if (flock(fd_file, LOCK_EX) == -1)
    {
        int error_nr = errno;
        printf("\t[+] Errno: %d\n", error_nr);
    }

    printf("[+] Executing select()\n");
    select_return = select(fd_file + 1, &fds, NULL, NULL, &tv);
    if (select_return == -1) {
        int error_nr = errno;
        printf("[+] Select Errno: %d\n", error_nr);
    }
    printf("[+] Select return: %d\n", select_return);
}
Can anybody see my error in this code? Also: I first tried to execute this code with two FDs added to the read list. When trying to lock them I had to use flock(fd_file, LOCK_SH), as I cannot exclusively lock two FDs with LOCK_EX. Is there a difference in how to lock two FDs of the same file (compared to only one fd)?
I'm also not sure why select will not block when a file that is added to the read set is opened write-only. The program can never (without a permission change) read data from the fd, so in my understanding select should block the execution, right?
As a clarification: the "problem" I want to solve is that I have to check whether I'm able to replace existing select() calls with poll() (existing in the sense that I will not re-write the select() call code, but will have access to the arguments of select). To check this, I wanted to implement a test that forces select to block its execution, so I can later check whether poll will act the same way (when given similar instructions, i.e. the same FDs to check).
So my "workflow" would be: write tests for different select behaviours (i.e. block and not block), write similar tests for poll (also block, not block) and check if/how poll can be forced to do exactly what select is doing.
Thank you for any hints!
When select tells you that a file descriptor is ready for reading, this doesn't necessarily mean that you can read data. It only means that a read call will not block. A read call will also not block when it returns an EOF or error condition.
In your case I expect that read will immediately return -1 and set errno to EBADF (fd is not a valid file descriptor or is not open for reading) or maybe EINVAL (fd is attached to an object which is unsuitable for reading...)
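A minimal sketch of what that looks like in practice (the file name is illustrative): on a regular file opened write-only, select() reports the descriptor as readable immediately, and the subsequent read() fails at once with EBADF:
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/select.h>

int main(void)
{
    int fd = open("./poll_text.txt", O_WRONLY);   /* write-only descriptor */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    fd_set rfds;
    char buf[16];
    FD_ZERO(&rfds);
    FD_SET(fd, &rfds);

    /* Regular files always count as "ready", so this returns immediately. */
    int rc = select(fd + 1, &rfds, NULL, NULL, NULL);
    printf("select returned %d\n", rc);

    /* "Ready" only means read() will not block; here it fails right away. */
    ssize_t n = read(fd, buf, sizeof buf);
    printf("read returned %zd (%s)\n", n, n < 0 ? strerror(errno) : "ok");
    return 0;
}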
Edit: Additional information as requested in a comment:
A file can be in a blocking state if a physical operation is needed that will take some time, e.g. if the read buffer is empty and (new) data has to be read from the disk, if the file is connected to a terminal and the user has not yet entered any (more) data or if the file is a socket or a pipe and a read would have to wait for (new) data to arrive...
The same applies for write: If the send buffer is full, a write will block. If the remaining space in the send buffer is smaller than your amount of data, it may write only the part that currently fits into the buffer.
If you set a file to non-blocking mode, a read or write will not block but tell you that it would block.
If you want to have a blocking situation for testing purposes, you need control over the process or hardware that provides or consumes the data. I suggest using a read from a terminal (stdin) where you don't enter any data, or from a pipe where the writing process does not write any data. You can also fill the write buffer on a pipe when the reading process does not read from it.
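As an illustration of that suggestion, here is a minimal sketch (my own, with a 5-second timeout only so the demo terminates): select() on the read end of a pipe blocks because nothing is ever written to the write end.
#include <stdio.h>
#include <unistd.h>
#include <sys/select.h>

int main(void)
{
    int pipefd[2];
    if (pipe(pipefd) == -1) {
        perror("pipe");
        return 1;
    }

    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(pipefd[0], &rfds);                 /* watch the read end */

    struct timeval tv = { .tv_sec = 5, .tv_usec = 0 };

    /* Nothing is ever written to pipefd[1], so this blocks until the timeout. */
    int rc = select(pipefd[0] + 1, &rfds, NULL, NULL, &tv);
    printf("select returned %d (0 means it blocked until the timeout)\n", rc);
    return 0;
}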

Is there a way to set minimum characters per read using fcntl()?

With the Linux command stty we can set a minimum of N characters for a completed read, using the min option.
From stty man
min N
with -icanon, set N characters minimum for a completed read
time N
with -icanon, set read timeout of N tenths of a second
Is there a way to set these options [min and time] using fcntl() or any C APIs? I checked the fcntl() and open() man pages, but couldn't find a matching flag.
With the Linux command stty we can set a minimum of N characters for a completed read, using the min option.
Is there a way to set these options [min and time] using fcntl() or any C APIs?
The stty command is merely a command that accesses the termios interface (of a serial terminal).
Programmatically you can use tcgetattr() and tcsetattr().
See Setting Terminal Modes Properly
and Serial Programming Guide for POSIX Operating Systems
Sample C code that sets the deciseconds and minimum-count for a raw read of an open serial terminal:
int set_time_and_min(int fd, int time, int min)
{
    struct termios settings;
    int result;

    result = tcgetattr(fd, &settings);
    if (result < 0) {
        perror("error in tcgetattr");
        return -1;
    }

    settings.c_cc[VTIME] = time;
    settings.c_cc[VMIN] = min;

    result = tcsetattr(fd, TCSANOW, &settings);
    if (result < 0) {
        perror("error in tcsetattr");
        return -2;
    }
    return 0;
}
I checked the fcntl() and open() man pages, but couldn't find a matching flag.
The man page to reference is termios(3).
Of course the VMIN and VTIME values are only effective when using blocking noncanonical I/O. See Linux Blocking vs. non Blocking Serial Read
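For completeness, a minimal sketch (my own, not taken from the linked guides) of putting a serial device into blocking raw mode and then applying the VTIME/VMIN settings via the set_time_and_min() function above; the device path and the chosen values are illustrative:
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

int set_time_and_min(int fd, int time, int min);   /* shown above */

int open_raw_blocking(const char *device)
{
    /* Open without O_NONBLOCK so reads block according to VMIN/VTIME. */
    int fd = open(device, O_RDWR | O_NOCTTY);
    if (fd < 0)
        return -1;

    struct termios settings;
    if (tcgetattr(fd, &settings) < 0) {
        close(fd);
        return -1;
    }

    cfmakeraw(&settings);                     /* noncanonical ("raw") mode */
    if (tcsetattr(fd, TCSANOW, &settings) < 0 ||
        set_time_and_min(fd, 10, 1) < 0) {    /* e.g. 1 s timeout, 1-byte minimum */
        close(fd);
        return -1;
    }
    return fd;
}
A call would look something like open_raw_blocking("/dev/ttyUSB0"), with the path adjusted to your device.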
Assuming you mean the POSIX ssize_t read(int fildes, void *buf, size_t nbyte), no. There is no standard way to set a minimum number of bytes to be read(). (I can't rule out some implementations providing the capability, but I'm not aware of any, nor for the reasons to follow do I see a point in providing such a capability to read() in general.)
And for a very good reason: what happens if the bytes run out before the number requested is met? The other end of the pipe gets closed, the socket you're reading from gets shut down, or you hit the end of the file you're reading from before reaching the requested number of bytes.
What should read() do then? Block forever waiting for bytes that either might never or can never arrive? In that case, the only sensible action is for read() to return the number of bytes that have been read.
So in general you must handle partial read() results anyway, making the "read a minimum number of bytes" setting pointless.
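If you really do want "at least N bytes", the usual approach is a small wrapper loop around read() rather than a flag; here is a minimal sketch (the function name is mine, not a standard API):
#include <unistd.h>

/* Keep calling read() until 'want' bytes have arrived, EOF, or an error.
   Returns the number of bytes actually stored in buf, or -1 on error. */
ssize_t read_at_least(int fd, void *buf, size_t bufsize, size_t want)
{
    size_t total = 0;
    while (total < want && total < bufsize) {
        ssize_t n = read(fd, (char *)buf + total, bufsize - total);
        if (n < 0)
            return -1;      /* error (check errno; EINTR may warrant a retry) */
        if (n == 0)
            break;          /* EOF, or no data on a raw serial read */
        total += (size_t)n;
    }
    return (ssize_t)total;
}
Even so, the caller still has to handle a short result, which is exactly the point made above.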

Counting bytes received by posix read()

I get confused with one line of code:
temp_uart_count = read(VCOM, temp_uart_data, 4096);
I found more about the read function at http://linux.die.net/man/3/read, but if everything is okay it returns 0, so how can we get the number of bytes received from that?
temp_uart_count is used to count how many bytes we received from the virtual COM port and stored in temp_uart_data, which is 4096 bytes wide.
Am I really getting how many bytes I received with this line of code?
... but if everything is okay it returns 0, so how can we get the number of bytes received from that?
A return code of zero simply means that read() was unable to provide any data.
Am I really getting how many bytes I received with this line of code?
Yes, a non-negative return code (i.e. >= 0) from read() is an accurate count of bytes that were returned in the buffer. Zero is a valid count.
If you're expecting more data, then simply repeat the read() syscall. (However, you may have set up the termios arguments poorly, e.g. VMIN=0 and VTIME=0.)
And - zero indicates end of file
If you get 0, it means that the end of file (or an equivalent condition) has been reached and there is nothing else to read.
The above (one from a comment, and the other in an answer) are incorrect.
Reading from a tty device (e.g. a serial port) is not like reading from a file on a block device, but is temporal. Data for reading is only available as it is received over the comm link.
A non-blocking read() will return with -1 and errno set to EAGAIN when there is no data available.
A blocking non-canonical read() will return zero when there is no data available. Correlate your termios configuration with this to confirm that a return of zero is valid (and does not indicate "end of file").
In either case, the read() can be repeated to get more data when/if it arrives.
Also, when using non-canonical (aka raw) mode (or non-blocking reads), do not expect or rely on read() to perform message or packet management for you. You will need to add a layer to your program that reads bytes, concatenates those bytes into a complete message datagram/packet, and validates it before that message can be processed.
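A minimal sketch of such a layer, assuming newline-terminated messages (the framing convention and the function name are mine, purely illustrative):
#include <unistd.h>

/* Accumulate bytes from fd until a '\n' arrives, then hand back one message.
   Returns the message length, 0 if the buffer fills without a terminator,
   or -1 on a read error. */
ssize_t read_message(int fd, char *msg, size_t msgsize)
{
    size_t used = 0;
    while (used + 1 < msgsize) {
        char c;
        ssize_t n = read(fd, &c, 1);
        if (n < 0)
            return -1;           /* read error (or EAGAIN when non-blocking) */
        if (n == 0)
            continue;            /* nothing yet (e.g. VTIME expired): retry */
        if (c == '\n') {         /* terminator: message is complete */
            msg[used] = '\0';
            return (ssize_t)used;
        }
        msg[used++] = c;
    }
    return 0;                    /* buffer full without a terminator */
}
A real implementation would read in larger chunks and validate the assembled message (checksum, length field, etc.) before processing it.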
ssize_t read(int fd, void *buf, size_t count); returns the number of bytes it read and stores the data into the buffer you passed as a parameter. When an error occurs it returns -1 (with errno set to indicate the error); if it is interrupted by a signal after having read some data, it returns the number of bytes already read.
From the Linux man page:
On files that support seeking, the read operation commences at the current file offset, and the file offset is incremented by the number of bytes read. If the current file offset is at or past the end of file, no bytes are read, and read() returns zero.
Yes, temp_uart_count will contain the actual number of bytes read, and obviously that number will be smaller or equal to the number of elements of temp_uart_data. If you get 0, it means that the end of file (or an equivalent condition) has been reached and there is nothing else to read.
If it returns -1 this indicate that an error has occurred and you'll need to check the errno variable to understand what happened.
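Putting that together for the line in the question, a sketch (VCOM, temp_uart_data and temp_uart_count are the names from the question):
#include <stdio.h>
#include <unistd.h>

/* Returns the number of bytes placed in temp_uart_data, 0 for "nothing
   available / end of file", or -1 on error (errno describes the failure). */
ssize_t read_vcom(int VCOM, char temp_uart_data[4096])
{
    ssize_t temp_uart_count = read(VCOM, temp_uart_data, 4096);

    if (temp_uart_count > 0) {
        /* temp_uart_count bytes are now in temp_uart_data */
    } else if (temp_uart_count == 0) {
        /* EOF on a file, or no data available on a raw serial port */
    } else {
        perror("read");          /* temp_uart_count == -1 */
    }
    return temp_uart_count;
}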

how is select() alerted to an fd becoming "ready"?

I don't know why I'm having a hard time finding this, but I'm looking at some Linux code where we're using select(), waiting on a file descriptor to report that it's ready. From the man page of select:
select() and pselect() allow a program to monitor multiple file descriptors,
waiting until one or more of the file descriptors become "ready" for some
class of I/O operation
So, that's great... I call select on some descriptor, give it some time out value and start to wait for the indication to go. How does the file descriptor (or owner of the descriptor) report that it's "ready" such that the select() statement returns?
It reports that it's ready by returning.
select waits for events that are typically outside your program's control. In essence, by calling select, your program says "I have nothing to do until ..., please suspend my process".
The condition you specify is a set of events, any of which will wake you up.
For example, if you are downloading something, your loop would have to wait on new data to arrive, a timeout to occur if the transfer is stuck, or the user to interrupt, which is precisely what select does.
When you have multiple downloads, data arriving on any of the connections triggers activity in your program (you need to write the data to disk), so you'd give a list of all download connections to select in the list of file descriptors to watch for "read".
When you upload data to somewhere at the same time, you again use select to see whether the connection currently accepts data. If the other side is on dialup, it will acknowledge data only slowly, so your local send buffer is always full, and any attempt to write more data would block until buffer space is available, or fail. By passing to select, as a "write" descriptor, the file descriptor we are sending to, we get notified as soon as buffer space is available for sending.
The general idea is that your program becomes event-driven, i.e. it reacts to external events from a common message loop rather than performing sequential operations. You tell the kernel "this is the set of events for which I want to do something", and the kernel gives you a set of events that have occurred. It is fairly common for two events to occur simultaneously; for example, a TCP acknowledge was included in a data packet, which can make the same fd both readable (data is available) and writeable (acknowledged data has been removed from the send buffer), so you should be prepared to handle all of the events before calling select again.
One of the finer points is that select basically gives you a promise that one invocation of read or write will not block, without making any guarantee about the call itself. For example, if one byte of buffer space is available, you can attempt to write 10 bytes, and the kernel will come back and say "I have written 1 byte", so you should be prepared to handle this case as well. A typical approach is to have a buffer "data to be written to this fd", and as long as it is non-empty, the fd is added to the write set, and the "writeable" event is handled by attempting to write all the data currently in the buffer. If the buffer is empty afterwards, fine, if not, just wait on "writeable" again.
The "exceptional" set is seldom used -- it is used for protocols that have out-of-band data where it is possible for the data transfer to block, while other data needs to go through. If your program cannot currently accept data from a "readable" file descriptor (for example, you are downloading, and the disk is full), you do not want to include the descriptor in the "readable" set, because you cannot handle the event and select would immediately return if invoked again. If the receiver includes the fd in the "exceptional" set, and the sender asks its IP stack to send a packet with "urgent" data, the receiver is then woken up, and can decide to discard the unhandled data and resynchronize with the sender. The telnet protocol uses this, for example, for Ctrl-C handling. Unless you are designing a protocol that requires such a feature, you can easily leave this out with no harm.
Obligatory code example:
#include <sys/types.h>
#include <sys/select.h>
#include <unistd.h>
#include <stdbool.h>

static inline int max(int lhs, int rhs) {
    if(lhs > rhs)
        return lhs;
    else
        return rhs;
}

void copy(int from, int to) {
    char buffer[10];
    int readp = 0;
    int writep = 0;
    bool eof = false;

    for(;;) {
        fd_set readfds, writefds;
        FD_ZERO(&readfds);
        FD_ZERO(&writefds);

        /* compute how much contiguous buffer space is free for reading
           (ravail) and how much buffered data can be written out (wavail) */
        int ravail, wavail;
        if(readp < writep) {
            ravail = writep - readp - 1;
            wavail = sizeof buffer - writep;
        }
        else {
            ravail = sizeof buffer - readp;
            wavail = readp - writep;
        }

        if(!eof && ravail)
            FD_SET(from, &readfds);
        if(wavail)
            FD_SET(to, &writefds);
        else if(eof)
            break;

        int rc = select(max(from, to) + 1, &readfds, &writefds, NULL, NULL);
        if(rc == -1)
            break;

        if(FD_ISSET(from, &readfds))
        {
            ssize_t nread = read(from, &buffer[readp], ravail);
            if(nread < 1)
                eof = true;      /* end-of-file or read error: stop reading */
            else
                readp = readp + nread;
        }
        if(FD_ISSET(to, &writefds))
        {
            ssize_t nwritten = write(to, &buffer[writep], wavail);
            if(nwritten < 1)
                break;
            writep = writep + nwritten;
        }

        /* wrap the circular buffer indices */
        if(readp == sizeof buffer && writep != 0)
            readp = 0;
        if(writep == sizeof buffer)
            writep = 0;
    }
}
We attempt to read if we have buffer space available and there was no end-of-file or error on the read side, and we attempt to write if we have data in the buffer; if end-of-file is reached and the buffer is empty, then we are done.
This code is clearly suboptimal (it's example code), but you should be able to see that it is acceptable for the kernel to do less than we asked for, both on reads and writes, in which case we just go back and say "whenever you're ready", and that we never read or write without asking whether it will block.
From the same man page:
On exit, the sets are modified in place to indicate which file descriptors actually changed status.
So use FD_ISSET() on the sets passed to select to determine which FDs have become ready.
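For example, a minimal sketch (my own; fd1 and fd2 stand for whatever descriptors you are monitoring):
#include <sys/select.h>
#include <unistd.h>

/* Wait until either descriptor becomes readable, then check which one. */
void wait_readable(int fd1, int fd2)
{
    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(fd1, &rfds);
    FD_SET(fd2, &rfds);

    int nfds = (fd1 > fd2 ? fd1 : fd2) + 1;
    if (select(nfds, &rfds, NULL, NULL, NULL) > 0) {
        if (FD_ISSET(fd1, &rfds)) {
            /* fd1 is ready: a read() on it will not block */
        }
        if (FD_ISSET(fd2, &rfds)) {
            /* fd2 is ready */
        }
    }
}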
