How to check packet availability in libpcap - C

I use libpcap to capture packets, and I need to push each packet into a FIFO queue as soon as it is available. The FIFO queue is shared by two threads: one thread calls pcap_next() and puts the packet into the queue, and the other thread fetches packets from it. So I have to protect the queue with a mutex, like below:
const u_char *pkt;
struct pcap_pkthdr hdr;

for (;;) {
    pkt = pcap_next(p, &hdr);
    lock(&mutex);
    some_process(pkt);
    insert(pkt, list);
    unlock(&mutex);
}
pcap_next() reads from a packet buffer: if there is no packet in the buffer, pcap_next() blocks; if one or more packets are available, each call returns one packet.
So it can fetch only one packet per lock/unlock pair. If packets arrive infrequently, that is fine. But if packets arrive frequently, so that many packets are pending in the buffer, doing a lock/unlock pair for every single packet is wasteful.
What I would like is this: after processing and inserting a packet, immediately check whether more packets are available in the packet buffer. If there are, continue processing and inserting; otherwise, unlock the mutex and go back to the top of the loop.
Is there a workaround for this?

Try something such as:
/*
 * XXX - this can fail on some platforms and with some devices,
 * and there may be issues with select() on this.  See the
 * pcap_get_selectable_fd() man page for details.
 */
pcap_fd = pcap_get_selectable_fd(p);
pcap_setnonblock(p, 1, errbuf); /* XXX - check for failure */
for (;;) {
    fd_set fdset;
    struct timeval timeout;

    /*
     * Wait for a batch of packets to be available.
     */
    FD_ZERO(&fdset);
    FD_SET(pcap_fd, &fdset);
    timeout.tv_sec = 1;
    timeout.tv_usec = 0;
    if (select(pcap_fd + 1, &fdset, NULL, NULL, &timeout) == -1) {
        /* report an error */
    } else {
        /* on a timeout, pcap_dispatch() simply finds no packets,
           since the handle is in non-blocking mode */
        lock(&mutex);
        pcap_dispatch(p, -1, callback, pointer_to_stuff);
        unlock(&mutex);
    }
}
That way, you lock the mutex, process an entire batch of packets, and then unlock the mutex. Many OS capture mechanisms deliver multiple packets in a batch, so there will be one lock/unlock pair per batch in this case.
callback would do the some_process(pkt); and insert(pkt, list); stuff, as sketched below.
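For illustration, a minimal callback sketch; some_process(), insert(), and list_t are the question's hypothetical helpers, not libpcap APIs:

/* pcap_handler signature: user data, packet header, packet bytes */
static void callback(u_char *user, const struct pcap_pkthdr *hdr,
                     const u_char *bytes)
{
    list_t *list = (list_t *)user;

    /* The mutex is already held by the caller around pcap_dispatch().
       Note: 'bytes' is only valid during the callback, so a real
       implementation must copy hdr->caplen bytes before queueing. */
    some_process(bytes);
    insert(bytes, list);
}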
It is possible that, once you're done with a batch, the next batch will be immediately available, so this doesn't achieve the absolute minimum of lock/unlock pairs. However, achieving the absolute minimum might lock out the other thread for a significant period of time, so that it can never make any progress; locking and unlocking around each batch is probably best.

Just use pcap_dispatch(), or select() combined with pcap_next() on a handle in non-blocking mode.

Related

Using select() to detect a block on a UIO device file

I'm working on an embedded processor running Yocto. I have a modified uio_pdrv_genirq.c UIO driver.
I am writing a library to control the DMA. There is one function which writes to the device file and initiates the DMA. A second function is intended to wait for the DMA to complete by calling select(). Whilst DMA is in progress the device file blocks. On completion the DMA controller issues an interrupt which releases the block on the device file.
I have the system working as expected using read() but I want to switch to select() so that I can include a time out. However, when I use select(), it doesn't seem to be recognising the block and always returns immediately (before the DMA has completed). I have included a simple version of the code:
int gannet_dma_interrupt_wait(dma_device_t *dma_device,
                              dma_direction dma_transfer_direction)
{
    fd_set rfds;
    struct timeval timeout;
    int select_res;

    /* Initialize the file descriptor set and add the device file */
    FD_ZERO(&rfds);
    FD_SET(dma_device->fd, &rfds);

    /* Set the timeout period. */
    timeout.tv_sec = 5;
    timeout.tv_usec = 0;

    /* The device file will block until the DMA transfer has completed. */
    select_res = select(FD_SETSIZE, &rfds, NULL, NULL, &timeout);

    /* Reset the channel */
    gannet_dma_reset(dma_device, dma_transfer_direction);

    if (select_res == -1) {
        /* Select has encountered an error */
        perror("ERROR <Interrupt Select Failed>\n");
        exit(0);
    }
    else if (select_res == 1) {
        /* The device file descriptor block released */
        return 0;
    }
    else {
        /* The device file descriptor block exceeded timeout */
        return EINTR;
    }
}
Is there anything obviously wrong with my code? Or can anyone suggest an alternative to select?
It turns out that the UIO driver contains two counters. One records the number of events (event_count); the other records how many events the calling function is aware of (listener->event_count).
When you do a read() on a UIO device, it returns the number of events and sets listener->event_count equal to event_count, i.e. the listener is now up to date with all the events that have occurred.
When you use poll() or select() on a UIO device, it checks whether these two numbers differ and returns if they do (if they are the same, it waits until they differ and then returns). It does NOT update listener->event_count.
Clearly, if you do not do a read() between calls to select(), then listener->event_count will not match event_count and the second select() will return immediately. It is therefore necessary to call read() between calls to select().
With hindsight it seems clear that select() should work in this way but it wasn't obvious to me at the time.
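To make that concrete, here is a minimal sketch of a UIO wait loop under those rules (assuming a valid UIO descriptor uio_fd; the 4-byte read() is how a UIO interrupt is acknowledged):

#include <stdint.h>
#include <unistd.h>
#include <sys/select.h>

/* Wait for a UIO interrupt with a timeout, then read() to
 * acknowledge it so the next select() blocks again.
 * Returns 0 on interrupt, 1 on timeout, -1 on error. */
static int uio_wait(int uio_fd, int timeout_sec)
{
    fd_set rfds;
    struct timeval tv;
    uint32_t event_count;
    int res;

    FD_ZERO(&rfds);
    FD_SET(uio_fd, &rfds);
    tv.tv_sec = timeout_sec;
    tv.tv_usec = 0;

    res = select(uio_fd + 1, &rfds, NULL, NULL, &tv);
    if (res < 0)
        return -1; /* select() failed */
    if (res == 0)
        return 1;  /* timed out */

    /* Consume the event counter; without this read(),
     * listener->event_count stays behind event_count and the
     * next select() returns immediately. */
    if (read(uio_fd, &event_count, sizeof(event_count)) != sizeof(event_count))
        return -1;
    return 0;
}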
This answer assumes that it is possible to use select() as intended on the device file in question (I have used select() with socket descriptors only). As an alternative to select(), you may want to check out the poll() family of functions. What follows will hopefully at least offer hints as to what can be done to resolve your problem with calling select().
The first parameter to select() has to be the maximum descriptor number plus 1. Since you have only one descriptor, you can pass it directly to select() as the first parameter, adding 1. Also consider that the file descriptor in dma_device could be invalid. Returning EINTR on a timeout may actually be what you intend to do, but in case it is not, and to test for an invalid descriptor, here is a different version for you to consider. Note also that select() can be interrupted by a signal, in which case it returns -1 and errno is set to EINTR. This can be handled internally by your function, as in:
/* restart select() if it's interrupted by a signal; the fd_set and
 * the timeout must be re-initialized on each iteration, since
 * select() may modify both */
do {
    FD_ZERO(&rfds);
    FD_SET(dma_device->fd, &rfds);
    timeout.tv_sec = 5;
    timeout.tv_usec = 0;
    select_res = select(dma_device->fd + 1, &rfds, NULL, NULL, &timeout);
} while (select_res < 0 && errno == EINTR);

if (select_res > 0) {
    /* the file descriptor is readable */
}
else {
    if (select_res == 0) {
        /* select() timed out */
    }
    else {
        /* an error other than a signal occurred */
        if (errno == EBADF) {
            /* the file descriptor is invalid */
        }
    }
}

Use select() or multithreading for 80 or more clients?

I am working on a project in which I need to read from 80 or more clients, continuously write their output to a file, and then read that new data back for another task. My question is: should I use select() or multithreading?
I also tried multithreading using the read()/fgets() and write()/fputs() calls, but since those are blocking calls and only one operation can be performed at a time, it was not feasible. Any ideas are much appreciated.
Update 1: I have tried to implement this using a condition variable. I was able to get it working, but it writes and reads only one item at a time: when another client tries to write, it cannot do so until I quit the first thread. I do not understand this; it should work. What mistake am I making?
Update 2: Thanks all. I managed to get this model working using a mutex and condition variable.
The updated code is below:
/* header file */
char *mailbox;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t writer = PTHREAD_COND_INITIALIZER;

int main(int argc, char *argv[])
{
    pthread_t t1, t2;
    pthread_attr_t attr;
    int fd, sock, *newfd;
    struct sockaddr_in cliaddr;
    socklen_t clilen;
    void *read_file(void *);
    void *update_file(void *);

    /* making a server socket */
    if ((fd = make_server(atoi(argv[1]))) == -1)
        oops("Unable to make server", 1)

    /* detaching threads */
    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);

    /* opening thread for reading */
    pthread_create(&t2, &attr, read_file, NULL);

    while (1)
    {
        clilen = sizeof(cliaddr);
        /* accepting request */
        sock = accept(fd, (struct sockaddr *)&cliaddr, &clilen);
        /* check for failure of the request and for INT */
        if (sock == -1 && errno != EINTR)
            oops("accept", 2)
        else if (sock == -1 && errno == EINTR)
            oops("Pressed INT", 3)
        newfd = (int *)malloc(sizeof(int));
        *newfd = sock;
        /* creating a thread per request; the thread frees newfd */
        pthread_create(&t1, &attr, update_file, (void *)newfd);
    }
    return 0;
}

void *read_file(void *m)
{
    pthread_mutex_lock(&lock);
    while (1)
    {
        printf("Waiting for lock.\n");
        pthread_cond_wait(&writer, &lock);
        printf("I am reading here.\n");
        printf("%s", mailbox);
        mailbox = NULL;
        pthread_cond_signal(&writer);
    }
}

void *update_file(void *m)
{
    int sock = *(int *)m;
    int fs;
    int nread;
    char buffer[BUFSIZ];

    free(m);
    if ((fs = open("database.txt", O_RDWR)) == -1)
        oops("Unable to open file", 4)

    while (1)
    {
        pthread_mutex_lock(&lock);
        write(1, "Waiting to get writer lock.\n", 28);
        if (mailbox != NULL)
            pthread_cond_wait(&writer, &lock);
        lseek(fs, 0, SEEK_END);
        printf("Reading from socket.\n");
        nread = read(sock, buffer, BUFSIZ);
        printf("Writing in file.\n");
        write(fs, buffer, nread);
        mailbox = buffer;
        pthread_cond_signal(&writer);
        pthread_mutex_unlock(&lock);
    }
    close(fs);
}
I think for the networking portion of things, either thread-per-client or multiplexed single-threaded would work fine.
As for the disk I/O, you are right that disk I/O operations are blocking operations, and if your data throughput is high enough (and/or your hard drive is slow enough), they can slow down your network operations if the disk I/O is done synchronously.
If that is an actual problem for you (and you should measure first to verify that it really is a problem; there is no point complicating things if you don't need to), the first thing I would try is making your file's output buffer larger by calling setbuffer(). With a large enough buffer, the C runtime library may be able to hide any latency caused by disk access.
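For instance, a sketch using the standard setvbuf(), the portable equivalent of BSD's setbuffer():

#include <stdio.h>

#define OUTBUF_SIZE (1024 * 1024) /* 1 MiB */

static char outbuf[OUTBUF_SIZE];

FILE *open_buffered(const char *path)
{
    FILE *fp = fopen(path, "w");
    if (fp != NULL) {
        /* fully buffered: short fputs()/fwrite() calls accumulate
         * in RAM instead of hitting the disk each time */
        setvbuf(fp, outbuf, _IOFBF, sizeof(outbuf));
    }
    return fp;
}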
If larger buffers aren't sufficient, the next thing I'd try is creating one or more threads dedicated to reading and/or writing data. That is, when your network thread wants to save data to disk, rather than calling fputs()/write() directly, it allocates a buffer containing the data it wants written and passes that buffer to the IO-write thread via a (mutex-protected or lockless) FIFO queue. The I/O thread then pops that buffer off the queue, writes the data to the disk, and frees the buffer. The I/O thread can afford to be occasionally slow in writing because no other threads are blocked waiting for the writes to complete; a sketch of this follows below.
Threaded reading from disk is a little more complex, but basically the IO-read thread would fill up one or more buffers of in-memory data for the network thread to drain; whenever the network thread drained some of the data out of a buffer, it would signal the IO-read thread to refill it up to the top again. That way, ideally, there is always plenty of input data already present in RAM whenever the network thread needs to send some to a client.
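A minimal sketch of the write-queue half of that scheme, with hypothetical names throughout (nothing here comes from the question's code):

#include <pthread.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* One queued write request; 'next' links the FIFO. */
struct wreq {
    struct wreq *next;
    size_t       len;
    char         data[]; /* flexible array member */
};

static struct wreq *q_head, *q_tail;
static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  q_cond = PTHREAD_COND_INITIALIZER;

/* Called by the network thread: copy the data and enqueue it. */
void queue_write(const void *buf, size_t len)
{
    struct wreq *r = malloc(sizeof(*r) + len);

    r->next = NULL;
    r->len = len;
    memcpy(r->data, buf, len);

    pthread_mutex_lock(&q_lock);
    if (q_tail)
        q_tail->next = r;
    else
        q_head = r;
    q_tail = r;
    pthread_cond_signal(&q_cond);
    pthread_mutex_unlock(&q_lock);
}

/* The IO-write thread: pop requests and write them to the fd passed
 * as the thread argument, without holding the lock during the I/O. */
void *io_writer(void *arg)
{
    int fd = *(int *)arg;

    for (;;) {
        struct wreq *r;

        pthread_mutex_lock(&q_lock);
        while (q_head == NULL)
            pthread_cond_wait(&q_cond, &q_lock);
        r = q_head;
        q_head = r->next;
        if (q_head == NULL)
            q_tail = NULL;
        pthread_mutex_unlock(&q_lock);

        write(fd, r->data, r->len); /* slow disk I/O, lock not held */
        free(r);
    }
    return NULL;
}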
Note that the multithreaded method above is a bit tricky to get right, since it involves inter-thread synchronization and communication; so don't do it unless there isn't any simpler alternative that will suffice.
Either select/poll or multithreading is fine, as long as your program solves the problem.
I'd guess your program will be I/O-bound as the number of clients grows, since you read from and write to the disk frequently. So having multiple threads do the I/O will not speed things up; polling may be the better choice then.
You can set a socket that you get from accept() to be non-blocking. Then it is easy to use select() to find out when there is data, read the number of bytes that are available, and process them.
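For example, a sketch of the usual portable way to do that with fcntl():

#include <fcntl.h>

/* put an accepted socket into non-blocking mode */
int set_nonblocking(int sock)
{
    int flags = fcntl(sock, F_GETFL, 0);

    if (flags == -1)
        return -1;
    return fcntl(sock, F_SETFL, flags | O_NONBLOCK);
}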
With (only) 80 clients, I see no reason to expect any significant difference from using threads unless you get very different amounts of data from different clients.

select for multiple non-blocking connections

I have a single-threaded program. It sends a message to four destinations every five seconds. I don't want connect() to block, so I am writing my program like this:
int j, rc, non_blocking = 1, sockets[4], max_fd = 0;
struct sockaddr server = get_server_addr();
fd_set fdset;
struct timeval conn_timeout = { 2, 0 }; /* 2 seconds */

for (j = 0; j < 4; ++j)
{
    sockets[j] = socket(AF_INET, SOCK_STREAM, 0);
    ioctl(sockets[j], FIONBIO, (char *)&non_blocking);
    connect(sockets[j], &server, sizeof(server));
}

/* prepare fd_set */
FD_ZERO(&fdset);
for (j = 0; j < 4; ++j)
{
    if (sockets[j] != -1)
    {
        FD_SET(sockets[j], &fdset);
        if (sockets[j] > max_fd)
        {
            max_fd = sockets[j];
        }
    }
}

rc = select(max_fd + 1, NULL, &fdset, NULL, &conn_timeout);
if (rc > 0)
{
    for (j = 0; j < 4; ++j)
    {
        if (sockets[j] != -1 && FD_ISSET(sockets[j], &fdset))
        {
            /* send() */
        }
    }
}

/* close all valid sockets */
/* close all valid sockets */
However, it seems select() returns as soon as ONE file descriptor is ready instead of blocking for conn_timeout (2 seconds). So in this case, how can I achieve these targets?
1. The program continues if all sockets are ready.
2. The program can block there for 2 seconds if any one of the sockets is not ready.
Yeah, select was designed on the assumption that you would want to service each socket as soon as it became ready.
If I understand what you're trying to do, the simplest way to accomplish it is to remove each socket from the fd_set as it becomes ready. If there are any sockets left in the set, use gettimeofday() to adjust the timeout downward and call select() again. When the set is empty, all four sockets are usable and you can proceed; see the sketch below.
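A minimal sketch of that approach, assuming the question's four non-blocking sockets and the non-POSIX but widely available timersub()/timercmp() macros:

#include <sys/select.h>
#include <sys/time.h>

/* Wait until all 4 sockets are writable or 2 seconds have elapsed
 * overall.  Returns the number of sockets still not ready. */
int wait_all_writable(int sockets[4])
{
    int j, pending = 4, ready[4] = { 0, 0, 0, 0 };
    struct timeval deadline, now, timeout;

    gettimeofday(&deadline, NULL);
    deadline.tv_sec += 2; /* overall 2-second budget */

    while (pending > 0) {
        fd_set wfds;
        int max_fd = -1;

        /* rebuild the set from the sockets not yet ready */
        FD_ZERO(&wfds);
        for (j = 0; j < 4; ++j) {
            if (!ready[j]) {
                FD_SET(sockets[j], &wfds);
                if (sockets[j] > max_fd)
                    max_fd = sockets[j];
            }
        }

        /* adjust the timeout downward toward the deadline */
        gettimeofday(&now, NULL);
        if (!timercmp(&now, &deadline, <))
            break; /* budget used up */
        timersub(&deadline, &now, &timeout);

        if (select(max_fd + 1, NULL, &wfds, NULL, &timeout) <= 0)
            break; /* timeout or error */

        /* remove the sockets that just became ready */
        for (j = 0; j < 4; ++j) {
            if (!ready[j] && FD_ISSET(sockets[j], &wfds)) {
                ready[j] = 1;
                --pending;
            }
        }
    }
    return pending;
}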
There are three basic approaches:
If you want to stay strictly portable, you need to iterate:
- calculate the end time from the current time and a timeout of your choice
- cycle:
  -- create an fd_set with those fds not yet ready
  -- calculate the maximum time to wait
  -- select()
  -- remember those fds that are now ready
  -- break if the end time is reached or all fds are ready
- end cycle
- now you know the ready fds and the elapsed time
If you want to stay portable but can use threads:
- start n threads
- select() on one fd per thread
- join all threads
If you do not need to be portable: most OSes have a facility for this situation, e.g. Windows/.NET has WaitAll (together with async send and an event).
I don't see the connection between your stated targets and your stated problem. You are correct in saying that select() blocks until at least one socket is ready, but according to target #2 above that is exactly what you want. There's nothing in your stated targets about blocking until all four sockets are ready at the same time.
You should also note that sockets are almost always ready for writing, unless the send buffer is full, which means the receiver's receive buffer is full, which means the receiver is slower than the sender. So using select() alone as the underlying write timer isn't a good idea.

epoll and timeouts

I'm using epoll to manage about 20 to 30 sockets. I figured out that epoll_wait() can be used to wait for data to arrive on one of the sockets, but I can't see how to implement timeouts at the socket level. The timeout on epoll_wait() itself is not very useful in my case: for example, I may need to close any socket on which no activity is recorded for more than 500 ms, or send some data on a socket every 200 ms no matter what. How can these socket-level timeouts be implemented using epoll? Any suggestions and ideas would be appreciated!
Thanks,
Shivam Kalra
Try pairing each socket with a timerfd object (created with timerfd_create()). For each socket in your application, create a timer that is initially set to expire after 500 ms, and add the timer to the epoll object, just as with a socket, via epoll_ctl() and EPOLL_CTL_ADD. Then, whenever data arrives on a socket, reset that socket's associated timer back to a 500 ms timeout.
If a timer expires (because a socket has been inactive for 500ms) then the timer will become "read ready" in the epoll object and cause any thread waiting on epoll_wait to wake up. That thread may then handle the timeout for the timer's associated socket.
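A minimal sketch of the timer half, assuming epfd is an existing epoll instance (error handling omitted):

#include <stdint.h>
#include <string.h>
#include <sys/epoll.h>
#include <sys/timerfd.h>

/* Create a timerfd that expires once after 500 ms and register it
 * with the epoll instance 'epfd'. */
int add_idle_timer(int epfd)
{
    int tfd = timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK);
    struct itimerspec its;
    struct epoll_event ev;

    memset(&its, 0, sizeof(its));
    its.it_value.tv_nsec = 500 * 1000000L; /* 500 ms, one-shot */
    timerfd_settime(tfd, 0, &its, NULL);

    ev.events = EPOLLIN; /* becomes "read ready" on expiry */
    ev.data.fd = tfd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, tfd, &ev);
    return tfd;
}

/* Call whenever data arrives on the associated socket: push the
 * expiry another 500 ms into the future. */
void reset_idle_timer(int tfd)
{
    struct itimerspec its;

    memset(&its, 0, sizeof(its));
    its.it_value.tv_nsec = 500 * 1000000L;
    timerfd_settime(tfd, 0, &its, NULL);
}

When the timer fires, the epoll_wait() caller sees EPOLLIN on the timerfd, can read() its 8-byte expiration count, and can then close the associated idle socket.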
Sounds like you're trying to write an event loop (if so, have a look at libev, by the way). epoll will not help you there; you have to keep track of socket inactivity yourself (with clock_gettime() or gettimeofday(), for instance), then wake up several times a second and check everything you need.
Some pseudo code
while (1) {
    n = epoll_wait(..., 5);
    if (n > 0) {
        /* process activity */
    } else {
        /* process inactivity */
    }
}
This will wake you up 200 times a second if all sockets are inactive.
The inactivity check requires a list of the sockets to be examined, along with timestamps of their last activity:
struct sockstamp_s {
    /* socket descriptor */
    int sockfd;
    /* last active */
    struct timeval tv;
};

/* check which sockets have been inactive */
for (struct sockstamp_s *i = socklist; ...; i = next(i)) {
    if (diff(i->tv, now()) > 500) {
        /* socket i->sockfd was inactive for more than 500 ms */
        ...
    }
}
where diff() gives you the difference of two struct timevals (here in milliseconds) and now() gives you the current timestamp.

c - WSAWaitForMultipleObjects blocking any thread but last

I have a problem with a multi-threaded SMTP/POP3 server. The server starts a pool of threads to handle incoming connections. The main thread creates the sockets and the threads, passing the sockets as parameters in a proper structure. The loop function for the threads is the following:
SOCKET SMTP_ListenSocket = (SOCKET) data->SMTPconn;
SOCKET POP3_ListenSocket = (SOCKET) data->POP3conn;
static struct sockaddr_in ClntAddr;
unsigned int clntLen = sizeof(ClntAddr);

hEvents[0] = CreateEvent(NULL, FALSE, FALSE, NULL);
hEvents[1] = CreateEvent(NULL, FALSE, FALSE, NULL);
hEvents[2] = exitEvent; /* handle for a manual-reset event */

WSAEventSelect(SMTP_ListenSocket, hEvents[0], FD_ACCEPT);
WSAEventSelect(POP3_ListenSocket, hEvents[1], FD_ACCEPT);

while (1) {
    DWORD res = WaitForMultipleObjects(3, hEvents, FALSE, INFINITE);
    switch (res) {
    case WAIT_OBJECT_0: {
        ClientSocket = my_accept(SMTP_ListenSocket,
                                 (struct sockaddr *)&ClntAddr, &clntLen);
        /* ... */
        my_shutdown(ClientSocket, 2);
        my_closesocket(ClientSocket);
        ClientSocket = INVALID_SOCKET;
        break;
    }
    case WAIT_OBJECT_0 + 1: {
        ClientSocket = my_accept(POP3_ListenSocket,
                                 (struct sockaddr *)&ClntAddr, &clntLen);
        /* ... */
        my_shutdown(ClientSocket, 2);
        my_closesocket(ClientSocket);
        ClientSocket = INVALID_SOCKET;
        break;
    }
    case WAIT_OBJECT_0 + 2: {
        exitHandler(0);
        break;
    }
    } /* end switch */
} /* end while */
When the pool contains only one thread there is no problem; when the pool consists of more threads, only one thread accepts the incoming connections.
Do you have the pooled threads all calling this same code? If so, then don't use WaitForMultipleObjects() (or WSAWaitForMultipleEvents()) like this. This kind of model only works reliably if one thread is polling the connections. If you have multiple threads polling at the same time, then you have race conditions.
Instead, you should use AcceptEx() with overlapped I/O or completion ports. The thread that creates the sockets can call AcceptEx() on each socket to queue a new operation on each one; then the pooled threads can use GetQueuedCompletionStatus() or GetOverlappedResult() to dequeue a pending connection without worrying about trampling on other threads. Once a connection is accepted, the receiving thread can process it as needed and then call AcceptEx() to queue a new operation for that socket.
Each thread here is setting a new WSAEventSelect prior to entering the wait, which overwrites any existing event selects. This means that, once a thread (call it thread A) accepts a connection, there is no event associated with the listening socket.
To solve this, call WSAEventSelect again within your switch, immediately after the accept(). This restores the event binding before going into any potentially lengthy processing, as sketched below.
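A sketch of that re-arm, as a fragment of the question's switch (only the SMTP case is shown; the names are the question's):

case WAIT_OBJECT_0: {
    ClientSocket = my_accept(SMTP_ListenSocket,
                             (struct sockaddr *)&ClntAddr, &clntLen);

    /* restore the FD_ACCEPT association right away, before any
     * lengthy processing, so the other threads' waits stay armed */
    WSAEventSelect(SMTP_ListenSocket, hEvents[0], FD_ACCEPT);

    /* ... handle the client, then shutdown/close as before ... */
    break;
}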
Note that it is possible for two threads to be woken for the same event if the timing works out just right. You can hack around that by going back to your wait loop if the accept fails, but this is a bit unsatisfying.
So, instead of rolling your own version, use I/O completion ports here. I/O completion ports have a number of additional features, avoid potential race conditions in which two threads might pick up the same event, and also take steps to reduce context switches when your code is not CPU-bound.
