Best solution for dynamic account connection in C?

I'm not very familiar with C design patterns and I'm searching for the best solution to the following problem: I want to write a little chat client based on libpurple.
While the program is running I want to be able to connect and disconnect several instant-message accounts. The connect and disconnect commands should be entered on the command line, but blocking on input with gets() is no solution, because the program has to keep running the whole time to receive new messages from the already connected accounts.

You probably want to use poll (or select) for handling the events. After establishing the connections you have their file descriptors, and in addition you have standard input, which also has a file descriptor from the OS (namely 0). You can pass all of those descriptors to poll, which notifies you when there is incoming data on any of them. Example code:
/* fd1, fd2 are sockets */
while (1) {
    struct pollfd fds[3];
    int ret;

    fds[0].fd = fd1;
    fds[1].fd = fd2;
    fds[2].fd = STDIN_FILENO;
    fds[0].events = POLLIN;
    fds[1].events = POLLIN;
    fds[2].events = POLLIN;

    ret = poll(fds, 3, -1); /* poll() blocks, but you can set a timeout here */
    if (ret < 0) {
        perror("poll");
    }
    else if (ret == 0) {
        printf("timeout\n");
    }
    else {
        if (fds[0].revents & POLLIN) {
            /* incoming data from fd1 */
        }
        if (fds[0].revents & (POLLERR | POLLNVAL)) {
            /* error on fd1 */
        }
        if (fds[1].revents & POLLIN) {
            /* incoming data from fd2 */
        }
        if (fds[1].revents & (POLLERR | POLLNVAL)) {
            /* error on fd2 */
        }
        if (fds[2].revents & POLLIN) {
            /* incoming data from stdin */
            char buf[1024];
            ssize_t bytes_read = read(STDIN_FILENO, buf, sizeof buf);
            /* handle input, which is stored in buf */
        }
    }
}
You didn't mention the OS. This works on POSIX systems (OS X, Linux; on Windows you would need a POSIX layer such as Cygwin). If you need to use the Win32 API directly it'll look a bit different, but the principle is the same.
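Since accounts are connected and disconnected at runtime, you wouldn't hard-code fd1 and fd2; a common pattern is to rebuild the pollfd array from the current list of connections on every iteration. A rough sketch, where account_fds, account_count, handle_account_input and handle_command are placeholders for whatever bookkeeping your client keeps:
#include <poll.h>
#include <unistd.h>

/* hypothetical bookkeeping kept by the client (at most 64 accounts here) */
extern int account_fds[64];              /* fds of currently connected accounts */
extern int account_count;                /* how many of them are in use */
void handle_account_input(int fd);       /* read/dispatch one account's data */
void handle_command(const char *line);   /* parse "connect"/"disconnect" commands */

void event_loop(void)
{
    char line[256];

    for (;;) {
        struct pollfd fds[64 + 1];
        int i;

        /* rebuild the set every iteration so connects/disconnects take effect */
        for (i = 0; i < account_count; i++) {
            fds[i].fd = account_fds[i];
            fds[i].events = POLLIN;
        }
        fds[account_count].fd = STDIN_FILENO;   /* command input */
        fds[account_count].events = POLLIN;

        if (poll(fds, account_count + 1, -1) < 0)
            continue;   /* EINTR etc.; a real client would inspect errno */

        for (i = 0; i < account_count; i++)
            if (fds[i].revents & (POLLIN | POLLERR | POLLHUP))
                handle_account_input(fds[i].fd);

        if (fds[account_count].revents & POLLIN) {
            ssize_t n = read(STDIN_FILENO, line, sizeof line - 1);
            if (n > 0) {
                line[n] = '\0';
                handle_command(line);   /* may add/remove entries in account_fds */
            }
        }
    }
}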

Check out select(2). I'm not really sure how libpurple works, but if it allows notification via a file descriptor (like a file or socket), then select is your solution.
You could also try creating a separate thread with pthread_create(3). That way it can block on gets (or whatever) while the rest of your program does its thing.
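A minimal sketch of that thread-based approach; submit_command is a placeholder for however you hand the parsed command to the networking code (e.g. a mutex-protected queue), and fgets is used instead of gets, which is unsafe:
#include <pthread.h>
#include <stdio.h>
#include <string.h>

/* hypothetical hand-off to the networking code; protect any shared state yourself */
void submit_command(const char *line);

static void *stdin_thread(void *arg)
{
    char line[256];
    (void)arg;
    /* fgets() blocks here without stalling the network thread(s) */
    while (fgets(line, sizeof line, stdin) != NULL) {
        line[strcspn(line, "\n")] = '\0';   /* strip the trailing newline */
        submit_command(line);               /* e.g. push onto a queue the main loop drains */
    }
    return NULL;
}

int start_stdin_thread(void)
{
    pthread_t tid;
    return pthread_create(&tid, NULL, stdin_thread, NULL);
}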

Related

Read chardevice with libevent

I wrote a chardevice that passes some messages received from the network to a user-space application. The user-space application has to both read the chardevice and send/receive messages via TCP sockets to other user-space applications. Both the file read and the socket receive should be blocking.
Since libevent is able to handle multiple events at the same time, I thought registering one event for the file created by the chardevice and one event for a socket would just work, but I was wrong.
A chardevice creates a "character special file", and libevent does not seem to be able to block on it. If I implement a blocking mechanism inside the chardevice, i.e. a mutex or semaphore, then the socket event blocks too, and the application cannot receive messages.
The user-space application has to accept outside connections at any time.
Do you know how to make this work? Maybe with another library; I just want blocking behaviour for both the socket and the file reader.
Thank you in advance.
Update: thanks to @Ahmed Masud for the help. This is what I've done.
Kernel module chardevice:
Implement a poll function that waits until new data is available
struct file_operations fops = {
    ...
    .read = kdev_read,
    .poll = kdev_poll,
};
I have a global variable to handle whether the user-space side has to stop, and a wait queue:
static int working = 1;
static wait_queue_head_t access_wait;
This is the read function. I return -EFAULT if copy_to_user fails, a value > 0 if everything went well, and 0 if the module has to stop. used_buf is atomic since it tracks the fill level of a buffer that is read by the user application and written by the kernel module.
ssize_t
kdev_read(struct file* filep, char* buffer, size_t len, loff_t* offset)
{
    int error_count;
    size_t llen;

    if (signal_pending(current) || !working) { /* user sent SIGINT or the module is unloading */
        return 0;
    }
    atomic_dec(&used_buf);

    llen = sizeof(struct user_msg) + msg_buf[first_buf]->size;
    error_count = copy_to_user(buffer, (char*)msg_buf[first_buf], llen);
    if (error_count != 0) {
        atomic_inc(&used_buf);
        paxerr("send fewer characters to the user");
        return -EFAULT;
    } else {
        first_buf = (first_buf + 1) % BUFFER_SIZE;
    }
    return llen;
}
When there is data to read, I simply increment used_buf and call wake_up_interruptible(&access_wait).
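The producer path isn't shown above; here is a minimal sketch of it, assuming each msg_buf slot already points to allocated storage and that a hypothetical write index last_buf mirrors first_buf (no overflow check, for brevity):
/* Kernel-side producer sketch. msg_buf, used_buf, BUFFER_SIZE and access_wait
 * are the same objects used by kdev_read()/kdev_poll(); last_buf is a
 * hypothetical write index. */
static void enqueue_msg(const struct user_msg *msg)
{
    size_t llen = sizeof(struct user_msg) + msg->size;

    memcpy(msg_buf[last_buf], msg, llen);
    last_buf = (last_buf + 1) % BUFFER_SIZE;

    atomic_inc(&used_buf);               /* one more message is available ... */
    wake_up_interruptible(&access_wait); /* ... so wake any sleeping poller/reader */
}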
This is the poll function; I just wait until used_buf is > 0:
unsigned int
kdev_poll(struct file* file, poll_table* wait)
{
    poll_wait(file, &access_wait, wait);
    if (atomic_read(&used_buf) > 0)
        return POLLIN | POLLRDNORM;
    return 0;
}
Now, the problem here is that if I unload the module while the user-space application is waiting, the latter stays blocked and cannot be stopped any more. That's why I wake up the application when the module is unloaded:
void
kdevchar_exit(void)
{
    working = 0;
    atomic_inc(&used_buf);               /* bump the counter so the application is unblocked */
    wake_up_interruptible(&access_wait); /* wake up the application; this time read returns 0 since working == 0 */
    ... /* unregister everything */
}
User space application
Libevent by default uses polling, so simply create an event_base and a reader event.
base = event_base_new();
filep = open(fname, O_RDWR | O_NONBLOCK, 0);
evread = event_new(base, filep, EV_READ | EV_PERSIST,
on_read_file, base);
where on_read_file simply reads the file, no poll call is made (libevent handles that):
static void
on_read_file(evutil_socket_t fd, short event, void* arg)
{
    struct event_base* base = arg;
    int len = read(...);
    if (len < 0)
        return;
    if (len == 0) {
        printf("Stopped by kernel module\n");
        event_base_loopbreak(base);
        return;
    }
    ... /* handle message */
}
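The snippet above creates the event but doesn't show registering it or running the loop; for completeness, a minimal sketch of the remaining wiring, using the base, evread and filep from above:
/* Register the event and run the dispatch loop until event_base_loopbreak() */
if (event_add(evread, NULL) < 0) {   /* NULL timeout: wait indefinitely */
    perror("event_add");
    return 1;
}
event_base_dispatch(base);

event_free(evread);
event_base_free(base);
close(filep);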

Designing a proxy with non-blocking pipe forwarding to another server

I have written a proxy which also duplicates traffic. I am trying to duplicate network traffic to a replica server, which should receive all the inputs and process all the requests; however, only the responses from the main server are visible to the client. The high-level workflow is as follows:
Thread 1. Take input from the client and forward it to a pipe, in a non-blocking way, and to the server
Thread 2. Read from the server and send to the client
Thread 3. Read from the pipe and forward to the replica server
Thread 4. Read from the replica server and drop the response
The code is available in this gist: https://gist.github.com/nipunarora/679d49e81086b5a75195ec35ced646de
The test seems to work for smaller data and transactions, but I seem to get the following error when working with iperf and larger data sets:
Buffer overflow? : Resource temporarily unavailable
The specific part of the code where the problem stems from:
void forward_data_asynch(int source_sock, int destination_sock) {
    char buffer[BUF_SIZE];
    int n;

    //put in error condition for -1, currently the socket is shutdown
    while ((n = recv(source_sock, buffer, BUF_SIZE, 0)) > 0) // read data from input socket
    {
        send(destination_sock, buffer, n, 0); // send data to output socket
        if (write(pfds[1], buffer, n) < 0)    // send data to pipe
        {
            //fprintf(stats_file,"buffer_overflow \n");
            //printf("format string" ,a0,a1);
            //int_timeofday();
            perror("Buffer overflow? ");
        }
        //DEBUG_PRINT("Data sent to pipe %s \n", buffer);
    }
    shutdown(destination_sock, SHUT_RDWR); // stop other processes from using socket
    close(destination_sock);
    shutdown(source_sock, SHUT_RDWR); // stop other processes from using socket
    close(source_sock);
}
The reading process is as follows:
void forward_data_pipe(int destination_sock) {
    char buffer[BUF_SIZE];
    int n;

    sleep(10);
    //put in error condition for -1, currently the socket is shutdown
    while ((n = read(pfds[0], buffer, BUF_SIZE)) > 0) // read data from pipe
    {
        //sleep(1);
        //DEBUG_PRINT("Data received in pipe %s \n", buffer);
        send(destination_sock, buffer, n, 0); // send data to output socket
    }
    shutdown(destination_sock, SHUT_RDWR); // stop other processes from using socket
    close(destination_sock);
}
Please note, the pipe has been defined as follows:
/** Make file descriptor non blocking */
int setNonblocking(int fd)
{
    int flags;

    /* If they have O_NONBLOCK, use the POSIX way to do it */
#if defined(O_NONBLOCK)
    /* Fixme: O_NONBLOCK is defined but broken on SunOS 4.1.x and AIX 3.2.5. */
    if (-1 == (flags = fcntl(fd, F_GETFL, 0)))
        flags = 0;
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
#else
    /* Otherwise, use the old way of doing it */
    flags = 1;
    return ioctl(fd, FIONBIO, &flags);
#endif
}
Could anyone help in fixing what could be the reason of the error?
The problem in your case is that data is written too fast to the pipe, which you have put into non-blocking mode: when the pipe's buffer is full, write fails with EAGAIN ("Resource temporarily unavailable"). You have several options:
Accept the fact that data may be lost. If you do not want to delay the processing on the main server, this is your only option.
Don't set the pipe to non-blocking mode. The default, blocking, mode seems like a better fit for your application if you don't want data to be lost. However, this also means the main path may be slowed down.
Use poll(), select(), kqueue(), epoll(), /dev/poll or similar to wait until the descriptor has enough buffer space available (see the sketch below). However, when using this you should consider why you set the descriptor to non-blocking mode in the first place if you nevertheless want to block on it; this also slows the system down.
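For that third option, here is a minimal sketch of what the pipe write in forward_data_asynch could become, waiting for the pipe to drain instead of dropping data; write_pipe_all is a hypothetical helper, and it blocks the duplicating thread while the pipe is full:
#include <errno.h>
#include <poll.h>
#include <unistd.h>

/* Write all n bytes to the (non-blocking) pipe, waiting for space as needed.
 * Returns 0 on success, -1 on a real error. */
static int write_pipe_all(int pipe_fd, const char *buf, size_t n)
{
    while (n > 0) {
        ssize_t written = write(pipe_fd, buf, n);
        if (written >= 0) {
            buf += written;
            n -= (size_t)written;
        } else if (errno == EAGAIN || errno == EWOULDBLOCK) {
            /* pipe is full: wait until there is room again */
            struct pollfd pfd = { .fd = pipe_fd, .events = POLLOUT };
            if (poll(&pfd, 1, -1) < 0 && errno != EINTR)
                return -1;
        } else if (errno != EINTR) {
            return -1;   /* real write error */
        }
    }
    return 0;
}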

Making read and write sets with FD_SET for sending and receiving data in C

I have a client and server, and the client runs a select loop to multiplex between a TCP and a UDP connection. I'm trying to add my TCP connection file descriptor to both the read and the write set and then initiate one message exchange using write set and one using read set. My message communication with the write set works fine but with the read set I'm unable to do so.
Client Code:
char buf[256] = {};
char buf_to_send[256] = {};
int nfds, sd, r;
fd_set rd, wr;
int connect_init = 1;

/* I do the Connect Command here */

FD_ZERO(&rd);
FD_ZERO(&wr);
FD_SET(sd, &rd);
FD_SET(sd, &wr);
nfds = sd;

for (;;) {
    r = select(nfds + 1, &rd, &wr, NULL, NULL);

    if (connect_init == 0) {
        if (FD_ISSET(sd, &rd)) { // this is not working; if I change rd to wr, it works!
            r = recv(sd, buf, sizeof(buf), 0);
            printf("received buf = %s", buf);
            sprintf(buf, "%s", "client_reply\n");
            send(sd, buf, strlen(buf), 0);
        }
    }

    /* Everything below this works correctly */
    if (connect_init) {
        if (FD_ISSET(sd, &wr)) {
            sprintf(buf_to_send, "%s", "Client connect request");
            write(sd, buf_to_send, strlen(buf_to_send));
            recv(sd, buf, sizeof(buf), 0);
            printf("Server said = %s", buf);
            sprintf(buf_to_send, "Hello!\n"); // client Hellos back
            send(sd, buf_to_send, strlen(buf_to_send), 0);
        }
        connect_init = 0;
    }
} // for loop ends
You need to initialize the sets inside the loop, every time before calling select, because select modifies them. Beej's Guide to Network Programming has a comprehensive example of one way to use select.
So in your code, select apparently returns first with writing allowed but reading not, which clears the read bit in rd to 0; from then on nothing sets it back to 1, because select never touches a bit that is already 0.
If the select API bothers you, look at poll, which avoids this (note that there's probably no practical efficiency difference; it basically boils down to personal preference). In "real" code with many descriptors (such as a network server with many clients), where performance matters, you should use some other mechanism, probably a higher-level event library, which in turn uses the OS-specific API such as Linux's epoll facility. But for checking just a few descriptors, select is the tried-and-true, relatively portable choice. A sketch of the fixed loop follows.
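Concretely, the fixed client loop rebuilds both sets on every pass, roughly like this:
for (;;) {
    FD_ZERO(&rd);
    FD_ZERO(&wr);
    FD_SET(sd, &rd);
    FD_SET(sd, &wr);   /* select() overwrites these, so redo them every pass */

    r = select(nfds + 1, &rd, &wr, NULL, NULL);
    if (r < 0) {
        perror("select");
        break;
    }
    /* ... FD_ISSET checks as before ... */
}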

close() is not closing socket properly

I have a multi-threaded server (thread pool) that is handling a large number of requests (up to 500/sec for one node), using 20 threads. There's a listener thread that accepts incoming connections and queues them for the handler threads to process. Once the response is ready, the threads then write out to the client and close the socket. All seemed to be fine until recently, a test client program started hanging randomly after reading the response. After a lot of digging, it seems that the close() from the server is not actually disconnecting the socket. I've added some debugging prints to the code with the file descriptor number and I get this type of output.
Processing request for 21
Writing to 21
Closing 21
The return value of close() is 0; otherwise another debug statement would be printed. After this output, with a client that hangs, lsof shows an established connection:
SERVER 8160 root 21u IPv4 32754237 TCP localhost:9980->localhost:47530 (ESTABLISHED)
CLIENT 17747 root 12u IPv4 32754228 TCP localhost:47530->localhost:9980 (ESTABLISHED)
It's as if the server never sends the shutdown sequence to the client, and this state hangs until the client is killed, leaving the server side in a CLOSE_WAIT state:
SERVER 8160 root 21u IPv4 32754237 TCP localhost:9980->localhost:47530 (CLOSE_WAIT)
Also, if the client has a timeout specified, it will time out instead of hanging. I can also manually run
call close(21)
in the server from gdb, and the client will then disconnect. This happens maybe once in 50,000 requests, but might not happen for extended periods.
Linux version: 2.6.21.7-2.fc8xen
Centos version: 5.4 (Final)
The socket actions are as follows:
SERVER:
int client_socket;
struct sockaddr_in client_addr;
socklen_t client_len = sizeof(client_addr);

while (true) {
    client_socket = accept(incoming_socket, (struct sockaddr *)&client_addr, &client_len);
    if (client_socket == -1)
        continue;
    /* insert into queue here for threads to process */
}
Then the thread picks up the socket and builds the response.
/* get client_socket from queue */
/* processing request here */
/* now set back to blocking for the write; it was previously non-blocking for the read */
int flags = fcntl(client_socket, F_GETFL);
if (flags < 0)
    abort();
if (fcntl(client_socket, F_SETFL, flags & ~O_NONBLOCK) < 0)
    abort();
server_write(client_socket, response_buf, response_length);
server_close(client_socket);
server_write and server_close.
void server_write(int fd, char const *buf, ssize_t len) {
    printf("Writing to %d\n", fd);
    while (len > 0) {
        ssize_t n = write(fd, buf, len);
        if (n <= 0)
            return; // I don't really care what error happened, we'll just drop the connection
        len -= n;
        buf += n;
    }
}

void server_close(int fd) {
    for (uint32_t i = 0; i < 10; i++) {
        int n = close(fd);
        if (!n) { // closed successfully
            return;
        }
        usleep(100);
    }
    printf("Close failed for %d\n", fd);
}
CLIENT:
The client side is using libcurl v7.27.0:
CURL *curl = curl_easy_init();
CURLcode res;
curl_easy_setopt(curl, CURLOPT_URL, url);
curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_callback);
curl_easy_setopt(curl, CURLOPT_WRITEDATA, write_tag);
res = curl_easy_perform(curl);
Nothing fancy, just a basic curl connection. The client hangs in transfer.c (in libcurl) because the socket is not perceived as being closed; it is waiting for more data from the server.
Things I've tried so far:
Shutdown before close:
shutdown(fd, SHUT_WR);
char buf[64];
while (read(fd, buf, 64) > 0)
    ;
/* then close */
Setting SO_LINGER to close forcibly in 1 second:
struct linger l;
l.l_onoff = 1;
l.l_linger = 1;
if (setsockopt(client_socket, SOL_SOCKET, SO_LINGER, &l, sizeof(l)) == -1)
    abort();
These have made no difference. Any ideas would be greatly appreciated.
EDIT -- This ended up being a thread-safety issue inside a queue library causing the socket to be handled inappropriately by multiple threads.
Here is some code I've used on many Unix-like systems (e.g. SunOS 4, SGI IRIX, HPUX 10.20, CentOS 5, Cygwin) to close a socket:
int getSO_ERROR(int fd) {
    int err = 1;
    socklen_t len = sizeof err;
    if (-1 == getsockopt(fd, SOL_SOCKET, SO_ERROR, (char *)&err, &len))
        FatalError("getSO_ERROR");
    if (err)
        errno = err;   // set errno to the socket SO_ERROR
    return err;
}

void closeSocket(int fd) {   // *not* the Windows closesocket()
    if (fd >= 0) {
        getSO_ERROR(fd);                              // first clear any errors, which can cause close to fail
        if (shutdown(fd, SHUT_RDWR) < 0)              // secondly, terminate the 'reliable' delivery
            if (errno != ENOTCONN && errno != EINVAL) // SGI causes EINVAL
                Perror("shutdown");
        if (close(fd) < 0)                            // finally call close()
            Perror("close");
    }
}
But the above does not guarantee that any buffered writes are sent.
Graceful close: It took me about 10 years to figure out how to close a socket. But for another 10 years I just lazily called usleep(20000) for a slight delay to 'ensure' that the write buffer was flushed before the close. This obviously is not very clever, because:
The delay was too long most of the time.
The delay was too short some of the time--maybe!
A signal such as SIGCHLD could occur to end usleep() (but I usually called usleep() twice to handle this case--a hack).
There was no indication whether this works. But this is perhaps not important if a) hard resets are perfectly ok, and/or b) you have control over both sides of the link.
But doing a proper flush is surprisingly hard. Using SO_LINGER is apparently not the way to go; see for example:
http://msdn.microsoft.com/en-us/library/ms740481%28v=vs.85%29.aspx
https://www.google.ca/#q=the-ultimate-so_linger-page
And SIOCOUTQ appears to be Linux-specific.
Note shutdown(fd, SHUT_WR) doesn't stop writing, contrary to its name, and maybe contrary to man 2 shutdown.
This code flushSocketBeforeClose() waits until a read of zero bytes, or until the timer expires. The function haveInput() is a simple wrapper for select(2), and is set to block for up to 1/100th of a second.
bool haveInput(int fd, double timeout) {
    int status;
    fd_set fds;
    struct timeval tv;
    FD_ZERO(&fds);
    FD_SET(fd, &fds);
    tv.tv_sec  = (long)timeout;                            // cast needed for C++
    tv.tv_usec = (long)((timeout - tv.tv_sec) * 1000000);  // 'suseconds_t'

    while (1) {
        if (!(status = select(fd + 1, &fds, 0, 0, &tv)))
            return FALSE;
        else if (status > 0 && FD_ISSET(fd, &fds))
            return TRUE;
        else if (status > 0)
            FatalError("I am confused");
        else if (errno != EINTR)
            FatalError("select");   // tbd EBADF: man page "an error has occurred"
    }
}
bool flushSocketBeforeClose(int fd, double timeout) {
    const double start = getWallTimeEpoch();
    char discard[99];
    ASSERT(SHUT_WR == 1);
    if (shutdown(fd, 1) != -1)
        while (getWallTimeEpoch() < start + timeout)
            while (haveInput(fd, 0.01))                 // can block for 0.01 secs
                if (!read(fd, discard, sizeof discard))
                    return TRUE;                        // success!
    return FALSE;
}
Example of use:
if (!flushSocketBeforeClose(fd, 2.0))   // can block for 2s
    printf("Warning: Cannot gracefully close socket\n");
closeSocket(fd);
In the above, my getWallTimeEpoch() is similar to time(), and Perror() is a wrapper for perror().
Edit: Some comments:
My first admission is a bit embarrassing. The OP and Nemo challenged the need to clear the internal so_error before close, but I cannot now find any reference for this. The system in question was HPUX 10.20. After a failed connect(), just calling close() did not release the file descriptor, because the system wished to deliver an outstanding error to me. But I, like most people, never bothered to check the return value of close. So I eventually ran out of file descriptors (ulimit -n), which finally got my attention.
(very minor point) One commentator objected to the hard-coded numerical arguments to shutdown(), rather than e.g. SHUT_WR for 1. The simplest answer is that Windows uses different #defines/enums e.g. SD_SEND. And many other writers (e.g. Beej) use constants, as do many legacy systems.
Also, I always, always, set FD_CLOEXEC on all my sockets, since in my applications I never want them passed to a child and, more importantly, I don't want a hung child to impact me.
Sample code to set CLOEXEC:
static void setFD_CLOEXEC(int fd) {
    int status = fcntl(fd, F_GETFD, 0);
    if (status >= 0)
        status = fcntl(fd, F_SETFD, status | FD_CLOEXEC);
    if (status < 0)
        Perror("Error getting/setting socket FD_CLOEXEC flags");
}
Great answer from Joseph Quinsey. I have comments on the haveInput function. I wonder how likely it is that select returns an fd you did not include in your set; that would be a major OS bug IMHO. It's the kind of thing I would check if I wrote unit tests for the select function, not in an ordinary app.
if (!(status = select(fd + 1, &fds, 0, 0, &tv)))
    return FALSE;
else if (status > 0 && FD_ISSET(fd, &fds))
    return TRUE;
else if (status > 0)
    FatalError("I am confused");   // <--- fd unknown to function
My other comment pertains to the handling of EINTR. In theory, you could get stuck in an infinite loop if select kept returning EINTR, since that error restarts the loop. Given the very short timeout (0.01 s), it appears highly unlikely to happen. However, I think the appropriate way of dealing with it would be to return errors to the caller (flushSocketBeforeClose). The caller can keep calling haveInput as long as its own timeout hasn't expired, and declare failure on other errors.
ADDITION #1
flushSocketBeforeClose will not exit quickly when read returns an error; it keeps looping until the timeout expires. You can't rely on the select inside haveInput to anticipate all errors: read has errors of its own (e.g. EIO).
while (haveInput(fd, 0.01))
    if (!read(fd, discard, sizeof discard))   // <-- a return of -1 does not end the loop
        return TRUE;
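One way to act on both points, sketched under the same helpers and conventions as the original answer (getWallTimeEpoch, FALSE/TRUE; the *Ex names are hypothetical): haveInput reports errors instead of aborting, and the caller stops on read failures.
/* -1: error, 0: no input before timeout, 1: input available */
int haveInputEx(int fd, double timeout) {
    for (;;) {
        fd_set fds;
        struct timeval tv;
        int status;

        FD_ZERO(&fds);
        FD_SET(fd, &fds);
        tv.tv_sec  = (long)timeout;
        tv.tv_usec = (long)((timeout - tv.tv_sec) * 1000000);

        status = select(fd + 1, &fds, 0, 0, &tv);
        if (status == 0)
            return 0;       // timed out, no data
        if (status > 0)
            return 1;       // fd is the only descriptor in the set
        if (errno != EINTR)
            return -1;      // hand the error to the caller
        /* EINTR: retry; the caller's overall timeout still bounds the wait */
    }
}

bool flushSocketBeforeCloseEx(int fd, double timeout) {
    const double start = getWallTimeEpoch();
    char discard[99];

    if (shutdown(fd, SHUT_WR) == -1)
        return FALSE;
    while (getWallTimeEpoch() < start + timeout) {
        int rc = haveInputEx(fd, 0.01);
        if (rc < 0)
            return FALSE;   // select error: stop immediately
        if (rc > 0) {
            ssize_t n = read(fd, discard, sizeof discard);
            if (n == 0)
                return TRUE;   // peer closed its side: graceful close achieved
            if (n < 0 && errno != EINTR)
                return FALSE;  // read error also ends the wait
        }
    }
    return FALSE;   // timed out
}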
This sounds to me like a bug in your Linux distribution.
The GNU C library documentation says:
When you have finished using a socket, you can simply close its file descriptor with close
Nothing about clearing any error flags or waiting for the data to be flushed or any such thing.
Your code is fine; your O/S has a bug.
Include:
#include <unistd.h>
This should help solve the close() problem.

how to read and write data on serial port using threads

I am creating a serial-port application in which I am creating two threads: a writer thread that will write data to the serial port, and a reader thread that will read data from the serial port. I know how to open, configure, read and write data on a serial port, but how do I do it using threads?
I am using Linux (Ubuntu) and trying to open the ttyS0 port, programming in C.
The way I have done this in the past is to set up the port for asynchronous I/O using a VMIN of 0 and a VTIME of, say, 5 deciseconds. The purpose of this was to allow the thread to notice when it was time for the application to shut down, as it could try to read, time out, check for a quit flag, and then try to read some more.
Here is an example read function:
ssize_t myread(char *buf, size_t len) {
    size_t total = 0;
    while (len > 0) {
        ssize_t bytes = read(fd, buf, len);
        if (bytes == -1) {
            if (errno != EAGAIN && errno != EINTR) {
                // A real error, not something that trying again will fix
                if (total > 0) {
                    return total;
                }
                else {
                    return -1;
                }
            }
        }
        else if (bytes == 0) {
            // EOF (with VMIN = 0, a VTIME timeout with nothing read also lands here)
            return total;
        }
        else {
            total += bytes;
            buf += bytes;
            len -= bytes;
        }
    }
    return total;
}
The write function would look as you would expect.
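For completeness, a sketch of what that write counterpart might look like, mirroring myread's conventions (same file-scope fd, same treatment of EAGAIN/EINTR):
ssize_t mywrite(const char *buf, size_t len) {
    size_t total = 0;
    while (len > 0) {
        ssize_t bytes = write(fd, buf, len);
        if (bytes == -1) {
            if (errno != EAGAIN && errno != EINTR) {
                // A real error; report what was written so far, or -1 if nothing was
                return total > 0 ? (ssize_t)total : -1;
            }
        }
        else {
            total += bytes;
            buf += bytes;
            len -= bytes;
        }
    }
    return total;
}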
In your setup function, make sure to set:
struct termios tios;
...
tios.c_lflag &= ~ICANON;
tios.c_cc[VMIN] = 0;
tios.c_cc[VTIME] = 5; // You may want to tweak this; 5 = 1/2 second, 10 = 1 second, ...
...
Using a serial port from two threads is simple if one thread only reads and the other thread only writes.
You should use one file descriptor for the serial port.
Open and initialize it in one thread using the normal open, tcsetattr, etc. functions, then hand the file descriptor over to the other thread(s).
Now the reader thread can use the read() function and the writer can use the write() function without any extra synchronization. You can also use select() in both threads.
Closing the file descriptor needs attention: do it in only one thread to avoid problems. A minimal sketch of this split follows.
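A minimal sketch of that split, assuming the port has already been opened and configured; start_serial_threads and the buffer handling are placeholders:
#include <pthread.h>
#include <unistd.h>

static int serial_fd;   /* the already opened/configured port, e.g. /dev/ttyS0 */

static void *reader_thread(void *arg)
{
    char buf[256];
    ssize_t n;
    (void)arg;
    while ((n = read(serial_fd, buf, sizeof buf)) >= 0) {
        if (n > 0) {
            /* handle the n received bytes */
        }
        /* with the VMIN = 0 / VTIME setup from the other answer, n == 0 just means a timeout */
    }
    return NULL;
}

static void *writer_thread(void *arg)
{
    const char msg[] = "hello\r\n";
    (void)arg;
    write(serial_fd, msg, sizeof msg - 1);   /* a real writer would loop on partial writes */
    return NULL;
}

int start_serial_threads(int fd)
{
    pthread_t rt, wt;
    serial_fd = fd;   /* one descriptor, shared: one thread reads, the other writes */
    if (pthread_create(&rt, NULL, reader_thread, NULL) != 0)
        return -1;
    if (pthread_create(&wt, NULL, writer_thread, NULL) != 0)
        return -1;
    return 0;
}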
