Accept multiple subsequent connections to a socket in C

I have a listener that passes arbitrary data (HTTP requests) to a network socket, which is then delivered over TCP. This works fine for the first request, but the listener does not accept subsequent requests.
My question is:
If I have sock = accept(listener, (struct sockaddr *)&sin, &sinlen); then, based on the socket function reference, the listener socket remains open and I should be able to re-call accept() any number of times for subsequent requests. Is this correct? If so, can someone more familiar with socket programming than I am please explain how this code might look?

Yes, you can call accept() many times on the listening socket. To service multiple clients, you need to avoid blocking I/O; that is, you can't just read from one socket and block until its data comes in. There are two approaches: you can service each client in its own thread (or its own process, by using fork() on UNIX systems), or you can use select(). The select() function is a way of checking whether data is available on any of a group of file descriptors. It is available on both UNIX and Windows.

Here is a simple example from Beej's Guide to Network Programming.
// From Beej's guide; assumes its surrounding declarations: their_addr
// (struct sockaddr_storage), sin_size (socklen_t), s (char[INET6_ADDRSTRLEN]),
// and its get_in_addr() helper.
while (1) {  // main accept() loop
    sin_size = sizeof their_addr;
    new_fd = accept(sockfd, (struct sockaddr *)&their_addr, &sin_size);
    if (new_fd == -1) {
        perror("accept");
        continue;
    }
    inet_ntop(their_addr.ss_family,
              get_in_addr((struct sockaddr *)&their_addr),
              s, sizeof s);
    printf("server: got connection from %s\n", s);
    if (!fork()) {  // this is the child process
        close(sockfd);  // child doesn't need the listener
        if (send(new_fd, "Hello, world!", 13, 0) == -1)
            perror("send");
        close(new_fd);
        exit(0);
    }
    close(new_fd);  // parent doesn't need this
}
After the fork(), the child process handles the communication while the parent goes back to accept()ing further connections.
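For comparison, here is the same loop using a thread per client instead of fork(). This is a minimal sketch, not Beej's code: handle_client is a hypothetical handler, and sockfd is assumed to be bound and listening as above.

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

/* Hypothetical per-client handler; runs in its own thread. */
static void *handle_client(void *arg)
{
    int fd = (int)(intptr_t)arg;
    if (send(fd, "Hello, world!", 13, 0) == -1)
        perror("send");
    close(fd);
    return NULL;
}

/* ... inside the server's main loop, with sockfd bound and listening ... */
while (1) {
    int new_fd = accept(sockfd, NULL, NULL);
    if (new_fd == -1) {
        perror("accept");
        continue;
    }
    pthread_t tid;
    /* A detached thread services the client while this loop keeps accepting. */
    if (pthread_create(&tid, NULL, handle_client, (void *)(intptr_t)new_fd) != 0)
        close(new_fd);
    else
        pthread_detach(tid);
}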

Yes, you have the right general idea.
While my C socket programming is a bit rusty, calling accept() on a listening socket sets up a communications channel back to the client side of that connection. Calling accept() again for future connection attempts sets up additional, independent socket channels.
This means that one should take care not to overwrite a single shared structure with a specific connection's data, but it doesn't sound like that's the kind of error you would be prone to make.

Socket in C: Proper way to close socket

I often see sample code for a server program using sockets.
(For simplicity, I don't check the return values of functions such as socket(), bind(), etc. here.)
int sockfd = 0, newsockfd = 0;
struct sockaddr_in address;
sockfd = socket(AF_INET, SOCK_STREAM, 0);
bind(sockfd, (struct sockaddr*)&address, sizeof(address));
listen(sockfd, 1);
newsockfd = accept(sockfd, (struct sockaddr*)NULL, NULL);
/* ... do some communication ... */
close(newsockfd);
Most of the sample code I found on the web has close(newsockfd) but doesn't have close(sockfd).
My question is whether it is really correct NOT to close sockfd.
If it's correct, I want to know why.
My understanding is that sockfd is a file descriptor like any other, and there seems to be no reason to quit the program without closing it.
More specifically, I'm wondering whether not closing sockfd can cause a bind error (e.g. "this socket is already in use...") the next time the program runs.
I'd really appreciate any help.
Thank you for your time.
Resources allocated by the application, like memory or open file descriptors (which include sockets), are automatically freed by a modern OS when the program exits. Thus, if the server socket should be available throughout the whole program (in order to accept connections), it is fine not to close it explicitly but to let the OS do so when the application exits.
You should always close sockfd when you stop listening.
Two reasons why some developers do not worry much about closing sockfd:
If your program quits without closing it, the OS will close sockfd for you
Most of the time, servers keep listening all the time (e.g., for months)
However, if your program opens a listening socket on some event, closes it after a while, and loops, then you must close sockfd to prevent an EADDRINUSE (Address already in use) error at the next iteration, and a file descriptor leak.
Besides, the EADDRINUSE (Address already in use) error may occur in other circumstances that I will not detail here.
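If restarting quickly is the problem (for example because old connections linger in TIME_WAIT), a common mitigation is to set SO_REUSEADDR on the listening socket before bind(). A minimal sketch, using the sockfd and address names from the question:

int yes = 1;
/* allow rebinding the address while old connections sit in TIME_WAIT */
if (setsockopt(sockfd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof yes) == -1)
    perror("setsockopt");
bind(sockfd, (struct sockaddr *)&address, sizeof(address));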
sockfd acts as a server socket: it is used only to accept more and more incoming connections. You keep sockfd open, bound, and listening for as long as you have to accept and handle new connections on newsockfd, which holds the current connection on which you are reading/writing from/to some peer program. When done with newsockfd you close it and, if required, accept a new one with accept() on sockfd. And so on.
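Put together, the usual lifecycle looks roughly like this. This is a sketch in the question's notation; the serving flag is a hypothetical shutdown condition, and error handling is omitted as in the question:

int sockfd = socket(AF_INET, SOCK_STREAM, 0);
bind(sockfd, (struct sockaddr *)&address, sizeof(address));
listen(sockfd, 10);
while (serving) {
    int newsockfd = accept(sockfd, (struct sockaddr *)NULL, NULL);
    /* ... do some communication ... */
    close(newsockfd);   /* done with this client */
}
close(sockfd);          /* stopped listening, so close the server socket too */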

C sockets - keeping a pool of connections for reuse

I am writing a two-daemon application: a client and a server. It is a very basic version of a distributed shell. Clients connect to the server, and the server issues a command that is propagated to every client.
I don't know how to create the socket logic on the server side. I did some testing, and for now I am accepting connections in a loop, forking a child for every incoming connection to process it:
while (1) {
    clisockfd = accept(sockfd, (struct sockaddr *) &cliaddr, &clilen);
    if (clisockfd < 0) {
        log_err("err on opening client socket");
        exit(EXIT_FAILURE);
    }
    /* create a new child to process the connection */
    if ((pid = fork()) < 0) {
        log_err("err on forking, something is really broken!");
        exit(EXIT_FAILURE);
    }
    if (!pid) {
        /* here we are in the forked process, so we don't need the sockfd */
        close(sockfd);
        /* function that handles the connection */
        handle_connection(clisockfd);
        exit(EXIT_FAILURE);
    } else {
        close(clisockfd);
    }
}
However, what I have now has some disadvantages: I can accept a connection and do something with it, but then control returns to the main process (the forked process has to return, and then execution in the main process resumes). I would like to keep every socket fd somewhere (a list?) and be able to choose one of them (or all of them) and send that fd a command that I want to issue on my client(s). I assume that I can't do it in the traditional accept -> fork -> return-to-main-process manner.
So it should probably look like this:
client connects -> server sets up a new socket fd and saves it somewhere -> drops to a shell where I can choose one of the sockets and send it a command -> somewhere in the whole process it should also wait for the next incoming client connection - but where?
Could someone give me an idea of what mechanisms I should use to create the logic that I need? Maybe it would be better to issue the connection from server to client, not from client to server.
Regards,
Krzysztof
I assume that I can't do it in the traditional accept -> fork -> return-to-main-process manner.
You could, but it would be hard to design and maintain.
The best solution is to use select() (POSIX), epoll() (Linux), kqueue() (BSD), or I/O Completion Ports (Windows), depending on your platform.
There are good examples and explanations of select() in Beej's network programming guide.
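As a rough sketch of the select() approach applied to this question: the server keeps every accepted descriptor in a pool and still notices new connections. Assumptions here: sockfd is already bound and listening as in your code, clients is a simple fixed-size pool, and disconnect handling and the command shell are elided.

/* needs <sys/select.h>, <sys/socket.h>, <stdio.h> */
fd_set readfds;
int clients[FD_SETSIZE];    /* pool of accepted client fds */
int nclients = 0;

for (;;) {
    FD_ZERO(&readfds);
    FD_SET(sockfd, &readfds);
    int maxfd = sockfd;
    for (int i = 0; i < nclients; i++) {
        FD_SET(clients[i], &readfds);
        if (clients[i] > maxfd)
            maxfd = clients[i];
    }
    if (select(maxfd + 1, &readfds, NULL, NULL, NULL) < 0) {
        perror("select");
        continue;
    }
    if (FD_ISSET(sockfd, &readfds) && nclients < FD_SETSIZE)
        clients[nclients++] = accept(sockfd, NULL, NULL);   /* keep the fd */
    for (int i = 0; i < nclients; i++) {
        if (FD_ISSET(clients[i], &readfds)) {
            /* data from clients[i] is ready here; a command can also be
               written to any stored fd, e.g. write(clients[i], cmd, len) */
        }
    }
}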

closing client socket and keeping server socket active

I am establishing a server-client connection using TCP sockets. Whenever I close the client socket, my server also shuts down. But I want only my client to be closed, and my server to wait for the next accept().
Server side:
{
    bind(lfd, (struct sockaddr *)&serv_addr, sizeof(serv_addr));
    listen(lfd, 10);
    while (1)
    {
        cfd = accept(lfd, (struct sockaddr *)NULL, NULL);
        /* server must wait here after the client closes the connection */
        /* ... application code ... */
        close(lfd);
    }
}
Client side:
inet_pton(AF_INET, argv[1], &serv_addr.sin_addr);
connect(fd, (struct sockaddr *)&serv_addr, sizeof(serv_addr));
// ... application code
if (c == 1)
    close(fd);
When you accept() on the server side, you generate a new socket for that client only.
When you have finished dealing with the client you must close() that socket (that's close(cfd) in your terminology). You may also shutdown() the socket; that will influence how the socket is closed at the TCP level. But whether or not you do a shutdown(), you must close() it, or else you will leak fds.
You must not close() your listening fd (lfd in your program) until you intend to accept no more connections.
TL;DR: change close(lfd) to close(cfd)
The TCP listening socket described by the descriptor lfd is used for waiting for incoming TCP connections on a specific port. After the call to accept(), a new socket descriptor is created, the cfd in your example.
All the data exchange between server and client is performed over cfd. If the client closes the socket first, a subsequent send or recv on the server side will return -1 with the appropriate errno value.
If you want the server to close the connection, you should use shutdown(cfd, SHUT_RDWR) and close(cfd) afterwards, NOT close(lfd). This leaves the lfd socket open, allowing the server to wait at accept() for the next incoming connection. The lfd should be closed only at the termination of the server.
shutdown() provides more flexibility, for example to send or receive remaining data prior to the permanent termination of the communication.
The accept() call returns a new socket descriptor (cfd in your code) that is connected to the client side, so when the client closes its end of the connection, that affects cfd and not lfd. You can use lfd (the listening socket) to accept connections for as long as the server needs. Also consider invoking shutdown() before close(fd) in the client code.
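Applying that fix to the server loop from the question, a corrected sketch looks like this:

bind(lfd, (struct sockaddr *)&serv_addr, sizeof(serv_addr));
listen(lfd, 10);
while (1)
{
    cfd = accept(lfd, (struct sockaddr *)NULL, NULL);
    /* ... application code: talk to this client over cfd ... */
    shutdown(cfd, SHUT_RDWR);   /* optional: orderly TCP shutdown */
    close(cfd);                 /* close the per-client socket only */
}
/* close(lfd) only when the server stops accepting connections */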

Why doesn't client's close() of socket cause server's select() to return

[I asked something similar before. This is a more focused version.]
What can cause a server's select() call on a TCP socket to consistently time out rather than "see" the client's close() of the socket? On the client's side, the socket is a regular socket()-created blocking socket that successfully connects to the server and successfully transmits a round-trip transaction. On the server's side, the socket is created via an accept() call, is blocking, is passed to a child server process via fork(), is closed by the top-level server, and is successfully used by the child server process in the initial transaction. When the client subsequently closes the socket, the select() call of the child server process consistently times out (after 1 minute) rather than indicating a read-ready condition on the socket. The select() call looks for read-ready conditions only: the write-ready and exception arguments are NULL.
Here's the simplified but logically equivalent select()-using code in the child server process:
int one_svc_run(
    const int sock,
    const unsigned timeout)
{
    struct timeval timeo;
    fd_set fds;

    timeo.tv_sec = timeout;
    timeo.tv_usec = 0;
    FD_ZERO(&fds);
    FD_SET(sock, &fds);
    for (;;) {
        fd_set readFds = fds;
        int status = select(sock+1, &readFds, 0, 0, &timeo);
        if (status < 0)
            return errno;
        if (status == 0)
            return ETIMEDOUT;
        /* This code is not reached when the client closes the socket */
        /* The time-out structure, "timeo", is appropriately reset here */
        ...
    }
    ...
}
Here's the logical equivalent of the sequence of events on the client-side (error-handling not shown):
struct sockaddr_in *raddr = ...;
int sock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
(void)bindresvport(sock, (struct sockaddr_in *)0);
connect(sock, (struct sockaddr *)raddr, sizeof(*raddr));
/* Send a message to the server and receive a reply */
(void)close(sock);
fork(), exec(), and system() are never called. The code is considerably more complex than this, but this is the sequence of relevant calls.
Could Nagle's algorithm cause the FIN packet to not be sent upon close()?
The most likely explanation is that you're not actually closing the client end of the connection when you think you are, probably because some other file descriptor that references the client socket is not being closed.
If your client program ever does a fork (or a related call that forks, such as system or popen), the forked child might well have a copy of the file descriptor, which would cause the behavior you're seeing.
One way to test for and work around the problem is to have the client do an explicit shutdown(2) prior to closing the socket:
shutdown(sock, SHUT_RDWR);
close(sock);
If this makes the problem go away, then that is the problem: you have another copy of the client socket file descriptor hanging around somewhere.
If the problem is due to children getting the socket, the best fix is probably to set the close-on-exec flag on the socket immediately after creating it:
fcntl(sock, F_SETFD, fcntl(sock, F_GETFD) | FD_CLOEXEC);
or, on some systems, use the SOCK_CLOEXEC flag in the socket creation call.
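On Linux, for instance, the flag can be passed directly in the socket() call:

int sock = socket(AF_INET, SOCK_STREAM | SOCK_CLOEXEC, IPPROTO_TCP);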
Mystery solved.
@nos was correct in the first comment: it's a firewall problem. A shutdown() by the client isn't needed; the client does close the socket; the server does use the right timeout; and there's no bug in the code.
The problem was caused by the firewall rules on our Linux Virtual Server (LVS). A client connects to the LVS and the connection is passed to the least-loaded of several backend servers. All packets from the client pass through the LVS; all packets from the backend server go directly to the client. The firewall rules on the LVS caused the FIN packet from the client to be discarded. Thus, the backend server never saw the close() by the client.
The solution was to remove the "-m state --state NEW" options from the iptables(8) rule on the LVS system. This allows the FIN packets from the client to be forwarded to the backend server. This article has more information.
Thanks to all of you who suggested using wireshark(1).
The select() call on Linux modifies the value of the timeout argument. From the man page:
On Linux, select() modifies timeout to reflect the amount of time not slept.
So your timeo runs down to zero, and once it is zero, select() returns immediately (mostly with a return value of zero).
The following change may help:
for (;;) {
    struct timeval timo = timeo;
    fd_set readFds = fds;
    int status = select(sock+1, &readFds, 0, 0, &timo);

client/server architecture with multiple clients

I need to implement a server/client code in C.
Server needs to be able to accept exactly four connections at a time.
I can't get this working. What I've done so far:
1. create a socket
2. set it to non-blocking: fcntl(sock,F_SETFL, O_NONBLOCK);
3. bind it
4. listen: listen(sock, 4);
The part which I am not quite sure about is how to accept the client's connection. My code looks something like this:
while (1) {
    if ((sockfd = accept(sock, (struct sockaddr *) &client_addr, &client_size)) < 0) {
        perror("Error\n");
    }
    read(sockfd, &number, sizeof(number));
    write(sockfd, &number, sizeof(number));
}
When I execute the client and server code, the client seems to write something to the socket, which the server never receives, and the entire execution blocks.
What is the proper way to accept connections from multiple clients?
One basic workflow for this kind of server, if you don't want to use multithreading, is like this:
Create an fd_set of file descriptors to watch for reading
Open a socket
Bind the socket to a port to listen on
Start listening on the socket
Add the socket's file descriptor to the fd_set
While not done:
    Use select to wait until a socket is ready to read from
    Loop through the fds in your fd_set that have data available:
        If the current fd is your listening socket, accept a new connection
        Else, it's a client fd: read from it, and perhaps write back to it
This page shows a flowchart of the above process. (Scroll down for a very nicely annotated example.)
This page is chock full of examples for select.
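Here is a minimal sketch of that workflow for this question, echoing a number back to each client. Assumptions: sock is left in blocking mode (select() already tells you when accept() or read() won't block), and it is bound and listening as in your steps above.

/* needs <sys/select.h>, <sys/socket.h>, <unistd.h>, <stdio.h> */
fd_set master, readfds;
FD_ZERO(&master);
FD_SET(sock, &master);
int fdmax = sock;

while (1) {
    readfds = master;
    if (select(fdmax + 1, &readfds, NULL, NULL, NULL) == -1) {
        perror("select");
        break;
    }
    for (int fd = 0; fd <= fdmax; fd++) {
        if (!FD_ISSET(fd, &readfds))
            continue;
        if (fd == sock) {                        /* new connection */
            int newfd = accept(sock, NULL, NULL);
            if (newfd != -1) {
                FD_SET(newfd, &master);
                if (newfd > fdmax)
                    fdmax = newfd;
            }
        } else {                                 /* data from a client */
            int number;
            if (read(fd, &number, sizeof(number)) <= 0) {
                close(fd);                       /* closed or error */
                FD_CLR(fd, &master);
            } else {
                write(fd, &number, sizeof(number));
            }
        }
    }
}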
You should look at the man page for select(). It will tell you when and which sockets are ready to write/read.
