I am establishing a server-client connection using TCP sockets. Whenever I close the client socket my server is also closed. But I want only my client to be closed and my server must wait for next accept().
Server side:
{
    bind(lfd, (struct sockaddr *)&serv_addr, sizeof(serv_addr));
    listen(lfd, 10);
    while (1)
    {
        cfd = accept(lfd, (struct sockaddr *)NULL, NULL);
        // server must wait here after client closes the connection
        // application code
        close(lfd);
    }
}
client side:
    inet_pton(AF_INET, argv[1], &serv_addr.sin_addr);
    connect(fd, (struct sockaddr *)&serv_addr, sizeof(serv_addr));
    // ... application code
    if (c == 1)
        close(fd);
When you accept() on the server side, you get a new socket for that client only.
When you have finished dealing with that client you must close() that socket (close(cfd) in your terminology). You may also shutdown() the socket; that influences how the connection is torn down at the TCP level. But whether or not you call shutdown(), you must still close() the socket, or you will leak file descriptors.
You must not close() your listen fd (lfd in your program) until you intend not to accept any more connections.
TLDR: change close(lfd) to close(cfd)
The TCP listening socket described by the descriptor lfd is used to wait for incoming TCP connections on a specific port. After the call to accept(), a new socket descriptor is created: the cfd in your example.
All data exchange between server and client is performed over cfd. If the client closes the socket first, a subsequent send or recv on the server side will return -1 with the appropriate errno value.
If you want the server to close the connection, you should call shutdown(cfd, SHUT_RDWR) and then close(cfd), NOT close(lfd). This leaves the lfd socket open, allowing the server to wait at accept() for the next incoming connection. lfd should be closed only at the termination of the server.
shutdown() provides more flexibility, for example to send or receive remaining data prior to the permanent termination of the communication.
The accept call returns a new socket descriptor (cfd in your code) that is connected to the client side, so when the client closes its end of the connection it is cfd that is affected, not lfd. You can keep using lfd (the listening socket) to accept connections for as long as the server needs. Also consider invoking shutdown before close(fd) in the client code.
I have a problem connecting to a destination IP using the connect() API. connect() returns -1 with errno reporting "operation in progress". Am I checking the return code too early, before the connection is established? Please see the following code snippet:
struct sockaddr_in servAddr;
servAddr.sin_family = AF_INET;
servAddr.sin_port = htons(9190);
const char * remoteIp = "10.10.20.86";
rc = inet_pton(AF_INET,remoteIp, &servAddr.sin_addr);
if (rc == -1 || errno == EAFNOSUPPORT)
{
return 0;
}
rc = connect(fd, (struct sockaddr *)&servAddr, sizeof(servAddr));
if ( rc < 0) // this is where it fails. rc is -1.
{
log("connect failure with [%s]",strerror(errno));
print_sock_connect_error();
}
I have 2 questions here:
The destination 10.10.20.86:9190 is waiting for a connection, and once the connection is received it sends the ACK back to the source. I can see the TCP session being established (SYN, SYN/ACK, and ACK) in the pcap, but I still couldn't figure out why connect() returns -1 with an error. So am I checking rc before the connection establishment is complete? sysctl net.ipv4.tcp_syn_retries is set to 6.
Is there anything wrong with the code above?
Am I checking the rc before the connection establishment is complete?
Yes, you are. The TCP ping-pong during the connection's setup isn't all that has to be done before the result is known.
Is there anything wrong with the code above?
Well, yes: either the way it handles the EINPROGRESS case, or the fact that it uses a non-blocking socket to connect at all.
From connect()'s Linux documentation:
EINPROGRESS
The socket is nonblocking and the connection cannot be
completed immediately. It is possible to select(2) or poll(2)
for completion by selecting the socket for writing. After
select(2) indicates writability, use getsockopt(2) to read the
SO_ERROR option at level SOL_SOCKET to determine whether
connect() completed successfully (SO_ERROR is zero) or
unsuccessfully (SO_ERROR is one of the usual error codes
listed here, explaining the reason for the failure).
10.10.20.86:9190 is waiting for a connection, and once the connection is received it sends the ACK back to the source. I see the TCP session established (SYN, SYN/ACK, and ACK) in the pcap but still couldn't figure out why it returns -1 with an error. So am I checking rc before the connection establishment is complete?
Of course you are. You're checking it the moment connect() returns. As you have put the socket into non-blocking mode, there is no chance the three-way handshake on the wire will have completed by then.
sysctl net.ipv4.tcp_syn_retries is set to 6.
Irrelevant.
Is there anything wrong with the code above?
Only that it doesn't make sense.
If you want the connection completed or failed before connect() returns, don't use non-blocking mode.
If you want to use non-blocking mode, you have to use select() to tell you when the connect attempt has completed. Select for the socket becoming writeable. (That doesn't necessarily mean it has become writeable: it means the connect attempt has completed, with a result you can discover via getsockopt()/SO_ERROR.)
[I asked something similar before. This is a more focused version.]
What can cause a server's select() call on a TCP socket to consistently time out rather than "see" the client's close() of the socket? On the client's side, the socket is a regular socket()-created blocking socket that successfully connects to the server and successfully completes a round-trip transaction. On the server's side, the socket is created via an accept() call, is blocking, is passed to a child server process via fork(), is closed by the top-level server, and is successfully used by the child server process in the initial transaction. When the client subsequently closes the socket, the select() call of the child server process consistently times out (after 1 minute) rather than indicating a read-ready condition on the socket. The select() call looks for read-ready conditions only: the write-ready and exception arguments are NULL.
Here's the simplified but logically equivalent select()-using code in the child server process:
int one_svc_run(
const int sock,
const unsigned timeout)
{
struct timeval timeo;
fd_set fds;
timeo.tv_sec = timeout;
timeo.tv_usec = 0;
FD_ZERO(&fds);
FD_SET(sock, &fds);
for (;;) {
fd_set readFds = fds;
int status = select(sock+1, &readFds, 0, 0, &timeo);
if (status < 0)
return errno;
if (status == 0)
return ETIMEDOUT;
/* This code not reached when client closes socket */
/* The time-out structure, "timeo", is appropriately reset here */
...
}
...
}
Here's the logical equivalent of the sequence of events on the client-side (error-handling not shown):
struct sockaddr_in *raddr = ...;
int sock = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
(void)bindresvport(sock, (struct sockaddr_in *)0);
connect(sock, (struct sockaddr *)raddr, sizeof(*raddr));
/* Send a message to the server and receive a reply */
(void)close(sock);
fork(), exec(), and system() are never called. The code is considerably more complex than this, but this is the sequence of relevant calls.
Could Nagle's algorithm cause the FIN packet to not be sent upon close()?
The most likely explanation is that you're not actually closing the client end of the connection when you think you are, probably because you have some other file descriptor referencing the client socket somewhere that is not being closed.
If your client program ever does a fork (or related calls that fork, such as system or popen), the forked child might well have a copy of the file descriptor which would cause the behavior you're seeing.
One way to test for / work around the problem is to have the client do an explicit shutdown(2) prior to closing the socket:
shutdown(sock, SHUT_RDWR);
close(sock);
If this causes the problem to go away then that is the problem -- you have another copy of the client socket file descriptor somewhere hanging around.
If the problem is due to children getting the socket, the best fix is probably to set the close-on-exec flag on the socket immediately after creating it:
fcntl(sock, F_SETFD, fcntl(sock, F_GETFD) | FD_CLOEXEC);
or, on some systems, pass the SOCK_CLOEXEC flag to the socket creation call.
Mystery solved.
@nos was correct in the first comment: it's a firewall problem. A shutdown() by the client isn't needed; the client does close the socket; the server does use the right timeout; and there's no bug in the code.
The problem was caused by the firewall rules on our Linux Virtual Server (LVS). A client connects to the LVS and the connection is passed to the least-loaded of several backend servers. All packets from the client pass through the LVS; all packets from the backend server go directly to the client. The firewall rules on the LVS caused the FIN packet from the client to be discarded. Thus, the backend server never saw the close() by the client.
The solution was to remove the "-m state --state NEW" options from the iptables(8) rule on the LVS system. This allows the FIN packets from the client to be forwarded to the backend server. This article has more information.
Thanks to all of you who suggested using wireshark(1).
select() on Linux modifies the value of its timeout argument. From the man page:
On Linux, select() modifies timeout to reflect the amount of time not
slept
So your timeo runs down to zero, and when it is zero, select() returns immediately (mostly with a return value of zero).
The following change may help:
for (;;) {
struct timeval timo = timeo;
fd_set readFds = fds;
int status = select(sock+1, &readFds, 0, 0, &timo);
I need to implement a server/client code in C.
The server needs to be able to accept exactly four connections at a time.
I can't get this working. What I've done so far:
1. create a socket
2. set it to non-blocking: fcntl(sock,F_SETFL, O_NONBLOCK);
3. bind it
4. listen: listen(sock, 4);
The part which I am not quite sure about is how to accept the client's connection. My code looks something like this:
while (1) {
if ((sockfd = accept(sock, (struct sockaddr *) &client_addr, &client_size)) < 0) {
perror("Error\n");
}
read(sockfd, &number, sizeof(number));
write(sockfd, &number, sizeof(number));
}
When I execute the client and server code, the client seems to write something to the socket which the server never receives, and the entire execution blocks.
What is the proper way to accept connections from multiple clients?
One basic workflow for this kind of server, if you don't want to use multithreading, is like this:
Create an fd_set of file descriptors to watch for reading
Open a socket
Bind the socket to a port to listen on
Start listening on the socket
Add the socket's file descriptor to the fd_set
While not done
Use select to wait until a socket is ready to read from
loop through the fds in your fd_set that have data available
If the current fd is your listening socket, accept a new connection
Else, it's a client fd. Read from it, and perhaps write back to it.
This page shows a flowchart of the above process. (Scroll down for a very nicely annotated example.)
This page is chock full of examples for select.
You should look at the man page for select(2). It will tell you when, and which, sockets are ready to read or write.
I have a listener that will pass arbitrary data, HTTP requests, to a network socket which is then delivered over TCP. This works fine for the first request but the listener does not accept subsequent new requests.
My question is:
If I have sock=accept(listener,(struct addr *)&sin, &sinlen); then, based on the socket function reference, the listener socket remains open and I should be able to re-call accept() any number of times for subsequent requests. Is this correct? If so, can someone more familiar than I with socket programming please explain how this code might look?
Yes, you can accept() many times on the listening socket. To service multiple clients, you need to avoid blocking I/O -- i.e., you can't just read from the socket and block until data comes in. There are two approaches: you can service each client in its own thread (or its own process, by using fork() on UNIX systems), or you can use select(). The select() function is a way of checking whether data is available on any of a group of file descriptors. It's available on both UNIX and Windows.
Here is a simple example from Beej's Guide to Network Programming.
while(1) { // main accept() loop
sin_size = sizeof their_addr;
new_fd = accept(sockfd, (struct sockaddr *)&their_addr, &sin_size);
if (new_fd == -1) {
perror("accept");
continue;
}
inet_ntop(their_addr.ss_family,
get_in_addr((struct sockaddr *)&their_addr),
s, sizeof s);
printf("server: got connection from %s\n", s);
if (!fork()) { // this is the child process
close(sockfd); // child doesn't need the listener
if (send(new_fd, "Hello, world!", 13, 0) == -1)
perror("send");
close(new_fd);
exit(0);
}
close(new_fd); // parent doesn't need this
}
The child process — after the fork() — handles the communication asynchronously from accept()ing further connections in the parent.
Yes, you have the right general idea.
Though my C socket programming is a bit rusty: calling accept on a server socket sets up a communications channel back to the client side of the socket. Calling accept again for future connection attempts will set up multiple socket channels.
This means that one should take care to not overwrite a single shared structure with a specific connection's data, but it doesn't sound like that's the kind of error you would be prone to make.
I am using Berkeley sockets (both Internet domain and Unix domain) and I was wondering whether the server can use the same socket for reading the request and writing the response to the client, or whether the client should create another socket to wait for the reply, with the server connecting to it after processing the received message.
By the way, I am talking about connection oriented sockets (stream sockets, TCP, ...).
This is the simplified server code (I omit error checking on system calls here just for simplicity):
int main() {
int server_socket, connected_socket;
struct sockaddr_in server_addr;
char buf[1024];
char aux[256];
int bytes_read;
server_socket = socket(AF_INET, SOCK_STREAM, 0);
server_addr.sin_family = AF_INET;
server_addr.sin_addr.s_addr = INADDR_ANY;
server_addr.sin_port = htons(1234);
bind(server_socket, (struct sockaddr *)&server_addr, sizeof(server_addr));
listen(server_socket, 5);
connected_socket = accept(server_socket, 0, 0);
do {
bzero(buf, sizeof(buf));
bytes_read = read(connected_socket, buf, sizeof(buf));
} while (bytes_read > 0);
/* Here I want to use connected_socket to write the reply, can I? */
close(connected_socket);
close(server_socket);
return (EXIT_SUCCESS);
}
And this is the simplified client code (I omit error checking on system calls here just for simplicity):
int main() {
int client_socket;
struct sockaddr_in server_addr;
struct hostent *hp;
client_socket = socket(AF_INET, SOCK_STREAM, 0);
hp = gethostbyname("myhost");
server_addr.sin_family = AF_INET;
memcpy(&server_addr.sin_addr, hp->h_addr_list[0], hp->h_length);
server_addr.sin_port = htons(1234);
connect(client_socket, (struct sockaddr *)&server_addr, sizeof(server_addr));
write(client_socket, MSG, sizeof(MSG));
/* Here I want to wait for a response from the server using client_socket, can I? */
close(client_socket);
return (EXIT_SUCCESS);
}
Can I use connected_socket in the server and client_socket in the client to pass a response message back? Or should I use the client address I get in the server when in "accept" to connect to a socket in the client?
I have tried using read/write in the client/server where the comments are shown, but that way both programs remain blocked; it seems to be a deadlock.
Thanks in advance!
Regards.
You should use the same socket!
Your application protocol defines unambiguously when the client and the server should wait for data or send messages to each other; assuming a protocol with only one request from the client and one response from the server, the following should hold:
The client establishes a connection with the server;
the client sends its request (with send());
the client knows, by virtue of the protocol, that the server will reply; therefore it waits for data on the same socket (recv());
after validating the response, the client can close the socket.
The server accepts a connection from the client;
the server knows that the first step is up to the client, hence it waits for data (recv());
the server validates the request;
the server now knows, from the protocol, that the client is waiting for data; hence it sends its response with send();
the server knows, from the protocol, that there are no further steps; hence it can close the socket.
You can use the same socket, BUT your program is set up to have the server read EVERYTHING the client sends before attempting to reply. So the loop in the server won't complete until the client closes the write side of its socket, giving the server an EOF (a 0-byte read), and thus the server will never send back its response.
There are a couple of ways you can deal with this.
You can break the loop in the server after it's seen the whole request, rather than reading until EOF. This requires that the data sent by the client be self-delimiting somehow, so the server can know when it's read it all.
You can use a second connection for the reply. Probably not the best.
You can use asymmetric shutdown of the socket. Have the client do shutdown(client_socket, SHUT_WR) to half-close the socket. The server will then see the EOF (and the loop will finish), but the other direction on the socket will still be open for the reply.
Yes, it's possible. Check out this page for an example of a simple server (and a simple client). Note that the server typically passes the accept()ed file descriptor into a new process so that it can continue listening for more incoming connections.
Not only should you use the same socket (as Federico says), you actually have to in order to get your return packets through firewalls.
Firewalls know about TCP connections, and automatically allow the return data to pass through if a machine inside the firewall initiated the connection. If instead you tried to create a new TCP socket from the outside the firewall would block it unless that traffic was specifically permitted.
Yes, SOCK_STREAM sockets are two-way. You should be able to read and write to/from the same socket on each side of the connection. The man page for socket(2) has more detail on this.
Yes, you can. TCP sockets are bi-directional. Just use the same read() and write() functions. Also make sure to check for error conditions on all calls to connect(), read(), write(), ... as you can't control what happens on the network.
Sorry, I did not say so before, but I did try it, like this.
This code in the server where the comment is:
write(connected_socket, "Ok, I got it", sizeof("Ok, I got it"));
and this code in the client where the comment is:
read(client_socket, buf, sizeof(buf));
Both programs remain blocked, and when I kill the client, the server shows the messages it received (I have a printf just after the server calls read).
I tried it with send and recv (both with 0 flags) instead of read and write, and it did not change anything.
In your current setup the server tries to read until the client closes the socket, while the client doesn't close the socket until the server has answered. Therefore you have a kind of "deadlock": the server does not stop reading, in the hope that more data might arrive, and the client has no way to tell the server it is done.
You need some way for the server to recognize that the request is complete, while the connection is still open. If you for example terminate your request by a newline, the server can check if it received a full line and then stop reading and send the answer.
Have you tried it? It looks like it should work when you actually put the code in where indicated