I have a problem with a webcam. It could be a hardware issue, but I'm convinced it isn't.
With every app I can see the stream, but then it suddenly freezes.
I say that because of the following output from the app in use when the problem occurs:
v4l: timeout (got SIGALRM), hardware/driver problems?
I checked out the code, and this is the interesting part:
/* How many seconds to wait before deciding it's a driver problem. */
#define SYNC_TIMEOUT 3

int alarms;

void sigalarm(int signal)
{
    alarms++;
}

...

void wait_for_frame_v4l1(input_t *vidin, int frameid)
{
    alarms = 0;
    alarm(SYNC_TIMEOUT);    /* arm the watchdog */
    if (ioctl(vidin->fd, VIDIOCSYNC, vidin->buf + frameid) < 0)
        fprintf(stderr, "input: Can't wait for frame %d: %s\n",
                frameid, strerror(errno));
    if (alarms)             /* did SIGALRM fire while we were blocked? */
        fprintf(stderr, "v4l: timeout (got SIGALRM), hardware/driver problems?");
    alarm(0);               /* disarm */
}
From this I conclude that SYNC_TIMEOUT could be the problem. The value is 3 seconds, which seems like it should be quite enough.
My request is for help changing the code so that it doesn't block indefinitely waiting for frames:
If no frame arrives within 100 ms, then time out and give the GUI a chance to update itself.
Not all devices can free-wheel, so the app should support such devices without blocking the GUI.
How can I do sub-second waiting?
v4l2 devices work very well with this:
/* How many milliseconds to wait before deciding it's a driver problem. */
#define SYNC_TIMEOUT_MSECS 100

int wait_for_frame_v4l2(input_t *vidin)
{
    struct timeval timeout;
    fd_set rdset;
    int n;

    FD_ZERO(&rdset);
    FD_SET(vidin->fd, &rdset);
    timeout.tv_sec = 0;
    timeout.tv_usec = SYNC_TIMEOUT_MSECS * 1000;

    n = select(vidin->fd + 1, &rdset, NULL, NULL, &timeout);
    if (n == -1) {
        fprintf(stderr, "input: Can't wait for frame: %s\n", strerror(errno));
    } else if (n == 0) {
        sigalarm(0);    /* count the timeout just as if SIGALRM had fired */
        return 1;
    }
    return 0;
}
but I have a v4l1 device.
What (USB) webcam and kernel version are you using?
Update your driver/kernel.
If it's a USB cam, try connecting without a USB hub.
The VIDIOCSYNC ioctl on vidin->fd suspends execution until vidin->buf has been filled. You can wait for a filled buffer to become available via select or poll.
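For example, the v4l1 path can be restructured the same way as the v4l2 function above, reusing SYNC_TIMEOUT_MSECS. A minimal sketch, assuming the v4l1 driver marks the descriptor readable once a frame has been captured (that is driver-dependent, so verify it with your hardware):

int wait_for_frame_v4l1(input_t *vidin, int frameid)
{
    struct timeval timeout;
    fd_set rdset;
    int n;

    FD_ZERO(&rdset);
    FD_SET(vidin->fd, &rdset);
    timeout.tv_sec = 0;
    timeout.tv_usec = SYNC_TIMEOUT_MSECS * 1000;

    n = select(vidin->fd + 1, &rdset, NULL, NULL, &timeout);
    if (n == -1) {
        fprintf(stderr, "input: Can't wait for frame %d: %s\n",
                frameid, strerror(errno));
        return 0;
    }
    if (n == 0)
        return 1;    /* timed out: give the GUI a chance to update */

    /* Data is ready, so VIDIOCSYNC should now complete promptly. */
    if (ioctl(vidin->fd, VIDIOCSYNC, vidin->buf + frameid) < 0)
        fprintf(stderr, "input: Can't wait for frame %d: %s\n",
                frameid, strerror(errno));
    return 0;
}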
Related
I need to wait for n bytes of data (count is known) on a serial port or socket on Linux.
Currently I use a loop with poll, measure the time and decrement the timeout:
static int int_read_poll(int fd, uint8_t *buffer, size_t count, int timeout)
{
    struct pollfd pfd;
    int rc;

    pfd.fd = fd;
    pfd.events = POLLIN;

    rc = poll(&pfd, 1, timeout);
    if (rc < 0) {
        perror("poll");
        return 0;
    }
    if (rc > 0) {
        if (pfd.revents & POLLIN) {
            rc = read(fd, buffer, count);
            return rc;
        }
    }
    return 0;
}
static int int_read_waitfor(int fd, uint8_t *buffer, size_t count, int timeout)
{
    int rc;
    struct timespec start, end;
    int delta_ms;
    int recv = 0;

    do {
        clock_gettime(CLOCK_MONOTONIC_RAW, &start);
        rc = int_read_poll(fd, buffer + recv, count - recv, timeout);
        clock_gettime(CLOCK_MONOTONIC_RAW, &end);

        /* include tv_sec so the delta cannot go negative across a second boundary */
        delta_ms = (end.tv_sec - start.tv_sec) * 1000
                 + (end.tv_nsec - start.tv_nsec) / 1000000;

        if (rc <= 0)
            return 0;
        recv += rc;

        timeout -= delta_ms;
        if (timeout <= 0)
            return 0;
    } while (recv != count);

    return recv;
}
On a serial port, poll returns for every single byte, which causes many iterations.
Is there a more elegant way to solve that problem?
I am aware that, depending on the baud rate, the timeout might not decrement in that code portion. Counting nanoseconds might be a better approach.
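For reference, a helper that takes the full timespec difference in nanoseconds (a small sketch; the caller would accumulate nanoseconds instead of milliseconds to avoid the rounding loss):

static int64_t elapsed_ns(const struct timespec *start, const struct timespec *end)
{
    /* Full-width difference: seconds and nanoseconds together. */
    return (int64_t)(end->tv_sec - start->tv_sec) * 1000000000LL
         + (end->tv_nsec - start->tv_nsec);
}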
A simple solution might be to use alarm (if your timeout is in seconds) or setitimer with the ITIMER_REAL timer. Then just have the read call return with an error when the signal happens (with errno == EINTR).
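A minimal sketch of that idea, assuming a blocking descriptor (read_with_timeout is an illustrative name; the key detail is installing the handler via sigaction without SA_RESTART, so read really returns EINTR instead of being transparently restarted):

static void on_alarm(int sig)
{
    (void)sig;    /* nothing to do: the point is to interrupt read() */
}

static ssize_t read_with_timeout(int fd, void *buf, size_t count, int timeout_ms)
{
    struct sigaction sa;
    struct itimerval tv;
    ssize_t rc;

    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_alarm;            /* sa_flags has no SA_RESTART */
    sigaction(SIGALRM, &sa, NULL);

    memset(&tv, 0, sizeof tv);
    tv.it_value.tv_sec = timeout_ms / 1000;
    tv.it_value.tv_usec = (timeout_ms % 1000) * 1000;
    setitimer(ITIMER_REAL, &tv, NULL);   /* one-shot timer */

    rc = read(fd, buf, count);
    if (rc < 0 && errno == EINTR)
        rc = 0;                          /* treat the timeout as "no data" */

    memset(&tv, 0, sizeof tv);
    setitimer(ITIMER_REAL, &tv, NULL);   /* disarm */
    return rc;
}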
Thanks to all for your valuable hints!
After some testing, I finally decided not to use signals as they may interfere with the application once I port my functions into a library or publish them as source.
I eventually found a neat solution which uses poll and termios (only four syscalls):
static int int_read_waitfor(int fd, uint8_t *buffer, size_t count, int timeout)
{
    struct termios tm;
    struct pollfd pfd;
    int rc;

    tcgetattr(fd, &tm);
    tm.c_cc[VTIME] = 1;     // inter-character timeout 100ms, valid after first char recvd
    tm.c_cc[VMIN] = count;  // block until count chars are read (VMIN is a cc_t, so this caps at 255)
    tcsetattr(fd, TCSANOW, &tm);

    pfd.fd = fd;
    pfd.events = POLLIN;

    rc = poll(&pfd, 1, timeout);
    if (rc > 0) {
        rc = read(fd, buffer, count);
        if (rc == -1) {
            perror("read");
            return 0;
        }
        return rc;
    }
    return 0;
}
Unlike network sockets, which are usually packet-based, serial ports (n.b.: in non-canonical mode) are character-based. It is expected that a loop with poll iterates for every arriving character, particularly at low baud rates.
In my application I send a command over a serial line to a device and wait for an answer.
If no answer is received, a timeout occurs and we may retry.
The termios option VMIN is handy, as I can specify how many characters I'd like to receive. Normally, read would block until n chars have arrived.
If there is no answer, the read would block forever.
The termios option VTIME, in conjunction with VMIN > 0, specifies the inter-character timeout in deciseconds (1 = 100 ms). This is handy, but the timeout starts only after reception of the first character; otherwise an inter-character timeout would make no sense.
So if I used only the termios options, read would block if the slave serial device is dead.
To circumvent that problem, I use poll in front of read.
Once the first character has arrived (poll returns with rc = 1), I start reading. VTIME is active as well and enforces the inter-character time of 100 ms (the lowest possible setting).
As a bonus, the timeout handling is optimized:
Let's assume a timeout of 400 ms.
If the slave device is dead, poll returns after 400 ms.
If the slave works and replies within 50 ms (first character), poll returns and the read starts. If the slave sends too few bytes, VTIME kicks in and stops reception after 50 ms + 100 ms. We don't have to wait the whole 400 ms for the last (missing) byte to arrive.
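As a usage sketch (the command bytes, descriptor, and reply length are made up for illustration):

uint8_t cmd[2] = { 0x51, 0x01 };    /* hypothetical query command */
uint8_t reply[5];

write(fd, cmd, sizeof cmd);         /* send the command */
if (int_read_waitfor(fd, reply, sizeof reply, 400) == 0) {
    /* timeout or error: maybe retry the command */
}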
I have developed a C program that receives CAN traffic from the vcan0 interface.
I would like to add a timeout on reception: when a timeout expires (10 seconds, for example) with no data received during that time, I print "no data received during 10 seconds" and reboot my PC (the reboot happens only if a specific condition is satisfied).
I have tested with the select function; I get the timeout, but when the specific condition is not satisfied I can't receive CAN traffic anymore.
Should I add something to reactivate reception when I hit the timeout and the specific condition is not satisfied? If yes, how?
My program is like this:
...
while (1)
{
    FD_ZERO(&set);               /* clear the set */
    FD_SET(filedesc, &set);      /* add our file descriptor to the set */
    timeout.tv_sec = 10;         /* 10 seconds (tv_usec must stay below 1000000) */
    timeout.tv_usec = 0;
    rv = select(fd_max + 1, &set, NULL, NULL, &timeout);
    if (rv == -1)
        perror("select");        /* an error occurred */
    else if (rv == 0)
    {
        printf("no data received within 10 seconds\n"); /* a timeout occurred */
        if (specific condition is true)
        {
            reboot(RB_AUTOBOOT);
        }
    }
    else
    {
        /* loop over all sockets that are ready */
        for (i = 0; i <= fd_max; i++)
        {
            if (FD_ISSET(i, &set))
                read(i, buff, len);
            /* some other instructions */
        }
    }
}
...
I am trying to write and read data to a serial port file, /dev/ttyUSB0. If the operation is unsuccessful for 5 seconds, I will move ahead. To implement this I chose the select() system call, but in the exact case where I am using it, it does not seem to work as expected. The code follows below.
Simply put, I need to check the status of 8 devices. So I must first write() a query command, then wait up to timeout seconds for a response from the device.
This procedure should be done for all 8 devices connected over UART.
I am not sure whether I must re-initialize the fd_set before each use of select.
Result: the first time, select waits for the 5-second timeout, but after that it immediately reports "timeout" without waiting.
struct timeval timeout;
timeout.tv_sec = 5;
timeout.tv_usec = 10;

for (i = 0; i <= noOfDevicesDiscovered; i++)
{
    if (i < 9 && i > 0)
    {
        uart_init();
        printf("ID: %d\n", i);
        address = ((i-1)<<1) | 0x01;
        command = 0xA0;

        fd_set set;
        FD_ZERO(&set);
        FD_SET(uart_fd, &set);

        write(uart_fd, &address, 1);
        write(uart_fd, &command, 1);

        rv = select(uart_fd + 1, &set, NULL, NULL, &timeout);
        if (rv == -1)
            perror("select\n");
        else if (rv == 0)
        {
            printf("timeout\n");
            new.level = 0;
            new.address = i;
            fwrite(&new, sizeof(struct Light_Info), 1, fptr);
            continue;
        }
        else
        {
            read(uart_fd, &level, 1);
            new.level = level;
        }
        new.address = i;
        fwrite(&new, sizeof(struct Light_Info), 1, fptr);
        close(uart_fd);
        FD_ZERO(&set);
    }
}
How can we solve this?
You need to reinitialise "timeout" after each call of select. From the select man page: "On Linux, select() modifies timeout to reflect the amount of time not slept." So in your case, after the first select call, your timeout values are all 0, and subsequent calls to select time out immediately.
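In other words, re-arm the timeout inside the loop; a sketch of only the relevant lines of your code:

for (i = 0; i <= noOfDevicesDiscovered; i++)
{
    ...
    /* Re-arm the timeout before every call: on Linux, select()
       decrements it, so after the first call it may be zero. */
    timeout.tv_sec = 5;
    timeout.tv_usec = 0;
    rv = select(uart_fd + 1, &set, NULL, NULL, &timeout);
    ...
}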
I just wondered how instant messengers and online games can accept and deliver messages so fast. (Network programming with sockets.)
I read that this is done with non-blocking sockets.
I tried blocking sockets with pthreads (each client gets its own thread) and non-blocking sockets with kqueue. Then I profiled both servers with a program which made 99 connections (each connection in its own thread) and then wrote some garbage to them (with a sleep of 1 second). When all threads were set up, I measured in the main thread how long it took to get a connection from the server (wall clock time) while the "99 users" were writing to it.
threads (avg): 0.000350 // only small difference to kqueue
kqueue (avg): 0.000300 // and this is not even stable (client side)
The problem is that while testing with kqueue I repeatedly got a SIGPIPE error (client-side). (With a little timeout, usleep(50), this error was fixed.) I think this is really bad, because a server should be capable of handling thousands of connections. (Or is it my fault on the client side?) The crazy thing about this is that the infamous pthread approach did just fine (with and without the timeout).
So my question is: how can you build a stable socket server in C which can handle thousands of clients "asynchronously"? I only see the threads approach as a good option, but this is considered bad practice.
Greetings
EDIT:
My test code:
double get_wall_time()
{
    struct timeval time;
    if (gettimeofday(&time, NULL)) {
        // Handle error
        return 0;
    }
    return (double)time.tv_sec + (double)time.tv_usec * .000001;
}
#define NTHREADS 100

static struct addrinfo hint, *res;   /* filled in by getaddrinfo() in main */

volatile unsigned n_threads = 0;
volatile unsigned n_writes = 0;

pthread_mutex_t main_ready;
pthread_mutex_t stop_mtx;
volatile bool running = true;

void stop(void)
{
    pthread_mutex_lock(&stop_mtx);
    running = false;
    pthread_mutex_unlock(&stop_mtx);
}

bool shouldRun(void)
{
    bool copy;
    pthread_mutex_lock(&stop_mtx);
    copy = running;
    pthread_mutex_unlock(&stop_mtx);
    return copy;
}
#define TARGET_HOST "localhost"
#define TARGET_PORT "1336"
void *thread(void *args)
{
    char tmp = 0x01;

    if (__sync_add_and_fetch(&n_threads, 1) == NTHREADS) {
        pthread_mutex_unlock(&main_ready);
        fprintf(stderr, "All %u Threads are ready...\n", (unsigned)n_threads);
    }

    int fd = socket(res->ai_family, SOCK_STREAM, res->ai_protocol);
    if (connect(fd, res->ai_addr, res->ai_addrlen) != 0) {
        socket_close(fd);
        fd = -1;
    }
    if (fd <= 0) {
        fprintf(stderr, "socket_create failed\n");
    }
    if (write(fd, &tmp, 1) <= 0) {
        fprintf(stderr, "pre-write failed\n");
    }

    do {
        /* Write some garbage */
        if (write(fd, &tmp, 1) <= 0) {
            fprintf(stderr, "in-write failed\n");
            break;
        }
        __sync_add_and_fetch(&n_writes, 1);
        /* Wait some time */
        usleep(500);
    } while (shouldRun());

    socket_close(fd);
    return NULL;
}
int main(int argc, const char * argv[])
{
    pthread_t threads[NTHREADS];

    pthread_mutex_init(&main_ready, NULL);
    pthread_mutex_lock(&main_ready);
    pthread_mutex_init(&stop_mtx, NULL);

    bzero((char *)&hint, sizeof(hint));
    hint.ai_socktype = SOCK_STREAM;
    hint.ai_family = AF_INET;
    if (getaddrinfo(TARGET_HOST, TARGET_PORT, &hint, &res) != 0) {
        return -1;
    }

    for (int i = 0; i < NTHREADS; ++i) {
        pthread_create(&threads[i], NULL, thread, NULL);
    }

    /* wait for all threads to be set up */
    pthread_mutex_lock(&main_ready);
    fprintf(stderr, "Main thread is ready...\n");

    {
        double start, end;
        int fd;

        start = get_wall_time();
        fd = socket(res->ai_family, SOCK_STREAM, res->ai_protocol);
        if (connect(fd, res->ai_addr, res->ai_addrlen) != 0) {
            socket_close(fd);
            fd = -1;
        }
        end = get_wall_time();

        if (fd > 0) {
            fprintf(stderr, "Took %f ms\n", (end - start) * 1000);
            socket_close(fd);
        }
    }

    /* Stop all running threads */
    stop();

    /* Waiting for termination */
    for (int i = 0; i < NTHREADS; ++i) {
        pthread_join(threads[i], NULL);
    }

    fprintf(stderr, "Performed %u successful writes\n", (unsigned)n_writes);

    /* Lol.. */
    freeaddrinfo(res);
    return 0;
}
SIGPIPE comes when I try to connect to the kqueue server (after 10 connections are made, the server is "stuck"?). And when too many users are writing stuff, the server cannot open a new connection. (kqueue server code from http://eradman.com/posts/kqueue-tcp.html)
SIGPIPE means you're trying to write to a socket (or pipe) whose other end has already been closed (so no one will be able to read it). If you don't care about that, you can ignore SIGPIPE signals (call signal(SIGPIPE, SIG_IGN)) and the signals won't be a problem. Of course, the write (or send) calls on the sockets will still fail (with EPIPE), so you need to make your code robust enough to deal with that.
The reason SIGPIPE normally kills the process is that it's too easy to write programs that ignore errors on write/send calls and otherwise run amok, using up 100% of CPU time. As long as you always carefully check for errors and deal with them, you can safely ignore SIGPIPE.
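A minimal sketch of that pattern (safe_write is just an illustrative wrapper):

static ssize_t safe_write(int fd, const void *buf, size_t len)
{
    ssize_t rc = write(fd, buf, len);
    if (rc < 0 && errno == EPIPE) {
        /* the peer already closed the connection: drop this client */
        fprintf(stderr, "fd %d: peer closed connection\n", fd);
    }
    return rc;
}

/* somewhere in main(), once at startup: */
signal(SIGPIPE, SIG_IGN);   /* broken pipes now report EPIPE instead of killing us */

On sockets specifically, passing MSG_NOSIGNAL to send() achieves the same effect per call, without changing the process-wide signal disposition.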
Or is it my fault?
It was your fault. TCP works. Most probably you didn't read all the data that was sent.
And when too many users are writing stuff, the server cannot open a new connection
Servers don't open connections. Clients open connections. Servers accept connections. If your server stops doing that, there is something wrong with your accept loop. It should only do two things: accept a connection and start a thread.
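A minimal sketch of such a loop (listen_fd and handle_client are placeholders for your listening socket and per-client thread function):

for (;;) {
    int client = accept(listen_fd, NULL, NULL);
    if (client < 0) {
        perror("accept");
        continue;
    }
    pthread_t tid;
    pthread_create(&tid, NULL, handle_client, (void *)(intptr_t)client);
    pthread_detach(tid);    /* thread cleans up after itself; no join needed */
}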
How would I create a delayed execution of code or timeout events using epoll? Both libevent and libev have this functionality, but I can't figure out how to do it using epoll.
Currently the main loop looks like this:
epoll_ctl(epfd, EPOLL_CTL_ADD, client_sock_fd, &epev);

while (1) {
    int nfds = epoll_wait(epfd, &epev, 1, 10);
    if (nfds < 0)
        exit(EXIT_FAILURE);
    if (nfds > 0) {
        // An event has been received
    }
    // Do this every 10ms
}
I am well aware that this functionality could be achieved by simply tracking how much time has passed, but using epoll seems like a cleaner solution.
You can create a timerfd and add its file descriptor to the epoll set that epoll_wait watches.
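A minimal sketch of that approach, reusing epfd and epev from your loop (the 10 ms period matches your comment; error handling elided):

#include <sys/epoll.h>
#include <sys/timerfd.h>

int tfd = timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK);
struct itimerspec its = { 0 };
its.it_value.tv_nsec    = 10 * 1000 * 1000;   /* first expiry after 10 ms */
its.it_interval.tv_nsec = 10 * 1000 * 1000;   /* then every 10 ms */
timerfd_settime(tfd, 0, &its, NULL);

struct epoll_event tev = { 0 };
tev.events = EPOLLIN;
tev.data.fd = tfd;
epoll_ctl(epfd, EPOLL_CTL_ADD, tfd, &tev);

while (1) {
    int nfds = epoll_wait(epfd, &epev, 1, -1);   /* block; no busy polling */
    if (nfds > 0 && epev.data.fd == tfd) {
        uint64_t expirations;
        read(tfd, &expirations, sizeof expirations);  /* must drain the counter */
        /* do the every-10ms work here */
    } else if (nfds > 0) {
        /* an event on the socket has been received */
    }
}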
Stupid question: why not just keep track of the time explicitly? I do this in a multi-TCP client (for sending heartbeats) and the loop essentially does:
uint64_t last = get_time_in_usec();
uint64_t event_interval = 10 * 1000;

while (1) {
    int nfds = epoll_wait(epfd, &epev, 1, 0); /* note that I set timeout = 0 */
    if (nfds <= 0) { /* do some cleanup logic, handle EAGAIN */ }
    if (nfds > 0) { /* an event has been received */ }
    if (get_time_in_usec() >= last + event_interval) { ... }
}
get_time_in_usec can be implemented using gettimeofday or rdtsc on Linux. YMMV.
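For instance, a gettimeofday-based sketch (clock_gettime with CLOCK_MONOTONIC would be more robust, as it is immune to wall-clock adjustments):

static uint64_t get_time_in_usec(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);    /* microseconds since the Unix epoch */
    return (uint64_t)tv.tv_sec * 1000000u + (uint64_t)tv.tv_usec;
}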