I'm writing a very simple server application just for the purpose of testing some code.
After creating a socket and bind()ing it to my localhost and some port, I'd like to use select() to know when an incoming connection arrives at the bound socket. After that the application should print the message up to a certain length and then exit().
My question is basically if I need to use listen() and accept() when I'm expecting only one connection (please remember this is just for testing). I believe these functions are not needed in this case and are only needed for accepting multiple incoming requests. Am I wrong?
With the above idea in mind I wrote the following code:
int main()
{
int fd = TCPcreate(atoh("127.0.0.1"), 15000); /*my localhost address*/
char *str = malloc(100);
int a;
fd_set rfds;
FD_ZERO(&rfds);
FD_SET(fd,&rfds);
a = select(fd+1,&rfds,(fd_set*)NULL,(fd_set*)NULL,(struct timeval*)NULL);
// printf("select returns %d\nfd = %d\n", a, fd);
// printf("fd is set? %s\n", FD_ISSET(fd,&rfds) ? "yes" : "no");
a = TCPrecv(fd, str, 100); /*receive at most 100B */
// printf("%d\n", a);
printf("%s\n", str);
close(fd);
exit(0);
}
TCPcreate()
int TCPcreate(unsigned long IP, unsigned short port)
{
int fd;
struct sockaddr_in address;
fd = socket(AF_INET, SOCK_STREAM, 0);
if(fd==-1)
{
return -1;
}
memset(&address, 0, sizeof(address));
address.sin_family = AF_INET;
address.sin_addr.s_addr = htonl(IP);
address.sin_port = htons(port);
/* struct sockaddr_in is the same size as struct sockaddr */
if(bind(fd, (struct sockaddr*)&address, sizeof(address))==-1)
{
return -2;
}
return fd;
}
atoh() simply returns its argument in host byte order.
What happens when I run the program is that select() doesn't block waiting for a connection. Instead, it immediately returns 1. If I uncomment the printf()s what I get is
select returns 1
fd = 3
fd is set? yes
-1
(blank line)
What am I missing here?...
If you look at the POSIX specification of select(), the file descriptors returned are ready for reading, writing, or have an error condition on them. This does not list 'a socket on which listen() would succeed' as one of the detectable conditions. So, you will need to use listen() and accept(); only after you've accepted the connection can you use select() on the descriptors.
As Gonçalo Ribeiro notes, the specification for select() also says:
If the socket is currently listening, then it shall be marked as readable if an incoming connection request has been received, and a call to the accept() function shall complete without blocking.
That means you must have done a listen() on the bound socket, but you can wait on multiple sockets for incoming connections.
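Putting that together, a minimal sketch of the corrected flow might look like this (it reuses TCPcreate() and atoh() from the question, with includes and most error handling omitted):

int listen_fd = TCPcreate(atoh("127.0.0.1"), 15000);
char buf[100];
fd_set rfds;

/* mark the bound socket as listening first */
if (listen(listen_fd, 1) == -1) {
    perror("listen");
    exit(1);
}

FD_ZERO(&rfds);
FD_SET(listen_fd, &rfds);

/* blocks until a connection request arrives on the listening socket */
if (select(listen_fd + 1, &rfds, NULL, NULL, NULL) == -1) {
    perror("select");
    exit(1);
}

/* accept() now completes without blocking */
int conn_fd = accept(listen_fd, NULL, NULL);

/* the data arrives on the accepted descriptor, not the listening one */
ssize_t n = recv(conn_fd, buf, sizeof(buf) - 1, 0);
if (n > 0) {
    buf[n] = '\0';
    printf("%s\n", buf);
}
close(conn_fd);
close(listen_fd);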
If you want a blocking call, use listen().
The problem with the select() in your code is that it is only called once: it checks once whether anyone is there and then falls through. Keep the select() in a loop so it can check for incoming connections repeatedly.
Related
I have a simple server program that looks like the code below:
// create, bind, listen accept etc..
while(1)
{
UpdateData();
int ret = send(sock, data, dataLength , 0);
// Check if client sent "Abort" and if so, break.
}
Is it possible to check if any data has arrived from the client without blocking, so that the server can continuously dump data to the client?
Yes of course, there are a lot of solutions, AFAIK:
Thread
Non blocking mode
select()
poll()
epoll()
IOCP()
kqueue()
Not all available for all OS.
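For instance, a sketch of the select() option applied to the loop from the question, using a zero timeout so the check returns immediately (sock, data, dataLength and the "Abort" check come from the question; the rest is illustrative):

while (1)
{
    UpdateData();
    int ret = send(sock, data, dataLength, 0);
    if (ret == -1)
        break;                       /* send failed; stop the loop */

    fd_set rfds;
    struct timeval tv = {0, 0};      /* zero timeout: poll, never block */
    FD_ZERO(&rfds);
    FD_SET(sock, &rfds);

    if (select(sock + 1, &rfds, NULL, NULL, &tv) > 0 && FD_ISSET(sock, &rfds))
    {
        char buf[64];
        ssize_t n = recv(sock, buf, sizeof(buf) - 1, 0);
        if (n <= 0)
            break;                   /* client closed the connection or error */
        buf[n] = '\0';
        if (strcmp(buf, "Abort") == 0)
            break;                   /* client asked us to stop */
    }
}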
If you're going to use a non-blocking socket, you should OR the SOCK_NONBLOCK value into the socket-type parameter when opening the socket. For example, this statement opens a raw socket on which you can read and write TCP packets in non-blocking mode:
recv_socket = socket(AF_INET, SOCK_RAW | SOCK_NONBLOCK
, IPPROTO_TCP);
Then you can call the read function as below:
typedef struct
{
struct iphdr ip;
struct tcphdr tcp;
char datagram[DATAGRAM_SIZE];
} TCP_PACKET;
int ret_read = 0;
TCP_PACKET recv_packet;
ret_read = read(recv_socket, reinterpret_cast<void*>(&recv_packet)
                , sizeof(recv_packet));
if(ret_read >= 0)
{
    // read succeeded
}
Note: never pass a hard-coded size to read():
ret_read = read(recv_socket, reinterpret_cast<void*>(&recv_packet)
                , 65536); // may overrun recv_packet
It causes a segmentation fault while reading packets; just use sizeof.
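Note that SOCK_NONBLOCK is Linux-specific. Where it isn't available, a sketch of the portable alternative is to flip the O_NONBLOCK flag with fcntl() after the socket has been opened:

#include <fcntl.h>

/* switch an already-open socket to non-blocking mode */
int flags = fcntl(recv_socket, F_GETFL, 0);
if (flags == -1 || fcntl(recv_socket, F_SETFL, flags | O_NONBLOCK) == -1)
{
    perror("fcntl");
}
/* from here on, read() returns -1 with errno == EAGAIN/EWOULDBLOCK
   when no packet is available, instead of blocking */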
I'm working on a C application that uses POSIX TCP/IP functions for communicating with a server. I'm currently doing some testing to see how the application responds when the connection unexpectedly closes.
The main workhorse function is shown below:
uint32_t netWriteMsg(uint8_t * pmsg, size_t msg_size)
{
if(write(m_sockfd, pmsg, msg_size) < msg_size)
return ERR_NET_NOT_ALL_BYTES_SENT;
return ERR_NONE;
}
This function works as expected when I have a good connection with the server. However, calling this function after killing the connection crashes my application.
Ideally, I would want the write function to return an error indicating that the write failed. This would then allow me to handle the error and transition my program to the appropriate state. However, this is not what happens.
I'm curious as to why this function call would crash the application. I'm somewhat thinking that it may be a problem where the function call doesn't block, and then the pointer it's referencing becomes 'bad', resulting in a segmentation fault.
Here is how I configured my socket:
uint32_t netConnect()
{
/* locals */
struct sockaddr_in serv_addr;
fd_set fdset_sock; // only 1 file descriptor (socket fd) will be placed in this set
fd_set fdset_empty;
struct timeval time = {NET_TIMEOUT_CONNECT, 0};
int sock_error;
socklen_t optlen;
int error = ERR_NONE;
/* obtain socket file descriptor and set it to non-blocking */
m_sockfd = socket(AF_INET, SOCK_STREAM, 0);
memset(&serv_addr, 0, sizeof(serv_addr));
serv_addr.sin_family = AF_INET;
serv_addr.sin_port = htons(PORT_NO);
inet_pton(AF_INET, IP_ADDR, &(serv_addr.sin_addr.s_addr));
/* attempt to connect */
error = connect(m_sockfd, &serv_addr, sizeof(serv_addr));
if(error) return ERR_NET_CONNECT_FAILED_IMMEDIATELY;
select(m_sockfd, &fdset_empty, &fdset_sock, &fdset_empty, &time); // blocks until socket is good or timeout occured
error = getsockopt(m_sockfd, SOL_SOCKET, SO_ERROR, &sock_error, &optlen);
if(error) return ERR_NET_COULD_NOT_GET_SOCKET_OPTION;
if(sock_error)
return ERR_NET_CONNECT_ATTEMPT_TIMEOUT;
m_is_connected = 1;
return ERR_NONE;
}
Any help would be appreciated
Further to the missing error-checking #RemyLebeau mentioned, you are also not error-checking the write() itself:
if(write(m_sockfd, pmsg, msg_size) < msg_size)
return ERR_NET_NOT_ALL_BYTES_SENT;
Here you are ignoring the possibility that it returned -1, in which case you should call perror() (or construct an error message string with strerror() and print it), close the socket, and tell the caller, so it doesn't keep writing.
You also need to set SIGPIPE to SIG_IGN, so that EPIPE write errors don't cause SIGPIPE signals.
And all this ERR_NET_COULD_NOT_GET_SOCKET_OPTION stuff is poor practice. You should return the actual errno value, or at least print it, not just in the getsockopt() case but in all error cases.
And you are doing the connect() in blocking mode. The following select() is therefore completely pointless.
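Putting those points together, a sketch of what the write path could look like (it keeps m_sockfd, m_is_connected and the ERR_* style from the question; ERR_NET_WRITE_FAILED is an illustrative code, not one of yours):

#include <errno.h>
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* call once at startup, e.g. at the top of main(), so a write on a dead
   connection returns -1 with errno == EPIPE instead of raising SIGPIPE */
static void net_init(void)
{
    signal(SIGPIPE, SIG_IGN);
}

uint32_t netWriteMsg(uint8_t * pmsg, size_t msg_size)
{
    ssize_t n = write(m_sockfd, pmsg, msg_size);
    if (n < 0)
    {
        fprintf(stderr, "write failed: %s\n", strerror(errno)); /* report the real errno */
        close(m_sockfd);
        m_is_connected = 0;              /* tell the caller the connection is gone */
        return ERR_NET_WRITE_FAILED;     /* illustrative error code */
    }
    if ((size_t)n < msg_size)
        return ERR_NET_NOT_ALL_BYTES_SENT;
    return ERR_NONE;
}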
I'm writing a TCP server in C and find something unusual happens once the listening fd get "Too many open files" error. The accept call doesn't block anymore and returns -1 all the time.
I also tried closing the listening fd and re-opening, re-binding it, but didn't seem to work.
My questions are why accept keeps returning -1 in this situation, what am I supposed to do to stop it and make the server be able to accept new connections after any old clients closed? (the socket is of course able to accept correctly again when some connections closed)
====== UPDATE: clarification ======
The problem occurs just because the number of active clients is more than the limit of open fds, so I don't close any of the accepted fds in the sample code, just to make it reproduce more quickly.
I added a timestamp to the output each time accept returns and slowed the connect frequency down to once every 2 seconds; I then found that the "Too many open files" error in fact occurs immediately after the latest successful accept. So I think that is because, once the maximum number of fds is reached, each call to accept returns immediately with a return value of -1. (What I had expected was that accept would still block, and only return -1 at the next incoming connect. This behavior of accept is my own theory, not from the man page; if it's wrong, please let me know.)
So to my second question, to make it stop, I think it's a solution that stop to call accept before any connection is closed.
Also update the sample codes. Thanks for your help.
====== Sample codes ======
Here is how I test it. First set ulimit -n to a low value (like 16) and run the server program compiled from the following C source; then use the Python script to create several connections
/* TCP server; bind :5555 */
#include <stdio.h>
#include <unistd.h>
#include <time.h>
#include <stdlib.h>
#include <string.h>
#include <netdb.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#define BUFSIZE 1024
#define PORT 5555
void error(char const* msg)
{
perror(msg);
exit(1);
}
int listen_port(int port)
{
int parentfd; /* parent socket */
struct sockaddr_in serveraddr; /* server's addr */
int optval; /* flag value for setsockopt */
parentfd = socket(AF_INET, SOCK_STREAM, 0);
if (parentfd < 0) {
error("ERROR opening socket");
}
optval = 1;
setsockopt(parentfd, SOL_SOCKET, SO_REUSEADDR,
(const void *)&optval , sizeof(int));
bzero((char *) &serveraddr, sizeof(serveraddr));
serveraddr.sin_family = AF_INET;
serveraddr.sin_addr.s_addr = htonl(INADDR_ANY);
serveraddr.sin_port = htons((unsigned short)port);
if (bind(parentfd, (struct sockaddr *) &serveraddr, sizeof(serveraddr)) < 0) {
error("ERROR on binding");
}
if (listen(parentfd, 5) < 0) {
error("ERROR on listen");
}
printf("Listen :%d\n", port);
return parentfd;
}
int main(int argc, char **argv)
{
int parentfd; /* parent socket */
int childfd; /* child socket */
int clientlen; /* byte size of client's address */
struct sockaddr_in clientaddr; /* client addr */
int accept_count; /* times of accept called */
accept_count = 0;
parentfd = listen_port(PORT);
clientlen = sizeof(clientaddr);
while (1) {
childfd = accept(parentfd, (struct sockaddr *) &clientaddr, (socklen_t*) &clientlen);
printf("accept returns ; count=%d ; time=%u ; fd=%d\n", accept_count++, (unsigned) time(NULL), childfd);
if (childfd < 0) {
perror("error on accept");
/* the following 2 lines try to close the listening fd and re-open it */
// close(parentfd);
// parentfd = listen_port(PORT);
// the following line let the program exit at the first error
error("--- error on accept");
}
}
}
The Python program to create connections
import time
import socket
def connect(host, port):
s = socket.socket()
s.connect((host, port))
return s
if __name__ == '__main__':
socks = []
try:
try:
for i in xrange(100):
socks.append(connect('127.0.0.1', 5555))
print ('connect count: ' + str(i))
time.sleep(2)
except IOError as e:
print ('error: ' + str(e))
print ('stop')
while True:
time.sleep(10)
except KeyboardInterrupt:
for s in socks:
s.close()
why accept keeps returning -1 in this situation
Because you've run out of file descriptors, just like the error message says.
what am I supposed to do to stop it and make the server be able to accept new connections after any old clients closed?
Close the clients. The problem is not accept() returning -1, it is that you aren't closing accepted sockets once you're finished with them.
Closing the listening socket isn't a solution. It's just another problem.
EDIT By 'finished with them' I mean one of several things:
They have finished with you, which is shown by recv() returning zero.
You have finished with them, e.g. after sending a final response.
When you've had an error sending or receiving to/from them other than EAGAIN/EWOULDBLOCK.
When you've had some other internal fatal error that prevents you dealing further with that client, for example receiving an unparseable request, or some other fatal application error that invalidates the connection or the session, or the entire client for that matter.
In all these cases you should close the accepted socket.
EJP's answer is correct, but it does not tell you how to deal with the situation. What you have to do is actually do something with the sockets that accept() returns. Simply calling close() on them means you won't receive anything, of course, but it would deal with the resource depletion problem. For a correct implementation, start receiving on the accepted sockets and keep receiving until you get 0 bytes. Receiving 0 bytes is the indication that the peer is done using its side of the socket; that is your trigger to close() the socket as well and so deal with the resource problem.
You don't have to stop listening. That would stop your server from being able to process new requests and that is not the problem here.
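A sketch of that handling, grafted onto the sample server's accept loop (blocking, one client at a time, just to show where the close() belongs):

while (1) {
    childfd = accept(parentfd, (struct sockaddr *) &clientaddr, (socklen_t*) &clientlen);
    if (childfd < 0) {
        perror("error on accept");
        continue;                        /* fd limit hit; try again later */
    }

    char buf[BUFSIZE];
    ssize_t n;
    while ((n = recv(childfd, buf, sizeof(buf), 0)) > 0) {
        /* process (or ignore) the n bytes received here */
    }
    /* n == 0 means the peer closed its side; n < 0 means an error.
       Either way we are finished with this client, so release the fd. */
    close(childfd);
}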
The solution I implemented here was to check the value of the new (accepted) fd: if that value was equal to or higher than the allowed server capacity, a "busy" message is sent and the new connection is closed.
This solution is quite effective and allows you to inform your clients about the server's status.
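A sketch of that approach, using the sample server's variables (MAX_CLIENTS here is an illustrative stand-in for the real capacity limit):

#define MAX_CLIENTS 64    /* illustrative capacity limit */

childfd = accept(parentfd, (struct sockaddr *) &clientaddr, (socklen_t*) &clientlen);
if (childfd >= MAX_CLIENTS) {
    /* a high fd number tells us the process is near its descriptor limit */
    const char busy[] = "server busy, try again later\n";
    send(childfd, busy, sizeof(busy) - 1, 0);
    close(childfd);
} else if (childfd >= 0) {
    /* hand the connection off for normal processing */
}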
I've encountered a case where using write() server-side on a remotely closed client doesn't return 0.
According to man 2 write :
On success, the number of bytes written is returned (zero indicates
nothing was written). On error, -1 is returned, and errno is set
appropriately.
From my understanding: when using read/write on a remotely closed socket, the first attempt is supposed to fail (thus return 0), and the next try should trigger a broken pipe. But it doesn't. write() acts as if it succeeded in sending the data on the first attempt, and then I get a broken pipe on the next try.
My question is why?
I know how to handle a broken pipe properly, that's not the issue. I'm just trying to understand why write doesn't return 0 in this case.
Below is the server code I wrote. Client-side, I tried a basic C client (with close() and shutdown() for closing the socket) and netcat. All three gave me the same result.
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <string.h>
#include <unistd.h>
#include <stdlib.h>
#define MY_STR "hello world!"
int start_server(int port)
{
int fd;
struct sockaddr_in sin;
fd = socket(AF_INET, SOCK_STREAM, 0);
if (fd == -1)
{
perror(NULL);
return (-1);
}
memset(&sin, 0, sizeof(struct sockaddr_in));
sin.sin_addr.s_addr = htonl(INADDR_ANY);
sin.sin_family = AF_INET;
sin.sin_port = htons(port);
if (bind(fd, (struct sockaddr *)&sin, sizeof(struct sockaddr)) == -1
|| listen(fd, 0) == -1)
{
perror(NULL);
close(fd);
return (-1);
}
return (fd);
}
int accept_client(int fd)
{
int client_fd;
struct sockaddr_in client_sin;
socklen_t client_addrlen;
client_addrlen = sizeof(struct sockaddr_in);
client_fd = accept(fd, (struct sockaddr *)&client_sin, &client_addrlen);
if (client_fd == -1)
return (-1);
return (client_fd);
}
int main(int argc, char **argv)
{
int fd, fd_client;
int port;
int ret;
port = 1234;
if (argc == 2)
port = atoi(argv[1]);
fd = start_server(port);
if (fd == -1)
return (EXIT_FAILURE);
printf("Server listening on port %d\n", port);
fd_client = accept_client(fd);
if (fd_client == -1)
{
close(fd);
printf("Failed to accept a client\n");
return (EXIT_FAILURE);
}
printf("Client connected!\n");
while (1)
{
getchar();
ret = write(fd_client, MY_STR, strlen(MY_STR));
printf("%d\n", ret);
if (ret < 1)
break ;
}
printf("the end.\n");
return (0);
}
The only way to make write return zero on a socket is to ask it to write zero bytes. If there's an error on the socket you will always get -1.
If you want to get a "connection closed" indicator, you need to use read which will return 0 for a remotely closed connection.
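For example, a sketch of probing for the remote close without consuming data, using MSG_PEEK together with the Linux-specific MSG_DONTWAIT flag so the check never blocks (fd_client is the accepted socket from the question):

#include <errno.h>

char probe[1];
ssize_t n = recv(fd_client, probe, sizeof(probe), MSG_PEEK | MSG_DONTWAIT);
if (n == 0)
{
    printf("peer closed the connection\n");   /* remote close detected */
}
else if (n < 0 && errno != EAGAIN && errno != EWOULDBLOCK)
{
    perror("recv");                           /* a real error */
}
/* n > 0, or -1 with EAGAIN/EWOULDBLOCK: connection still open */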
This is just how the sockets interface was written. When you have a connected socket or pipe, you are supposed to close the transmitting end first, and then the receiving end will get EOF and can shut down. Closing the receiving end first is "unexpected" and so it returns an error instead of returning 0.
This is important for pipes, because it allows complicated commands to finish much more quickly than they would otherwise. For example,
bunzip2 < big_file.bz2 | head -n 10
Suppose big_file.bz2 is huge. Only the first part will be read, because bunzip2 will get killed once it tries sending more data to head. This makes the whole command finish much quicker, and with less CPU usage.
Sockets inherited the same behavior, with the added complication that you have to close the transmitting and receiving parts of the socket separately.
The point to be observed is that, in TCP, when one side of the connection closes its socket, it is actually ceasing to transmit on that socket; it sends a packet to inform its remote peer that it will not transmit anymore through that connection. It doesn't mean, however, that it stopped receiving too. (To continue receiving is a local decision of the closing side; if it stops receiving, it can lose packets transmitted by the remote peer.)
So, when you write() to a socket that is remotely closed, but not locally closed, you can't know if the other end is still waiting to read more packets, and so the TCP stack will buffer your data and try to send it. As stated in the send() manual page,
No indication of failure to deliver is implicit in a send(). Locally detected errors are indicated by a return value of -1.
(When you write() to a socket, you are actually send()ing to it.)
When you write() a second time, though, and the remote peer has definitely closed the socket (not only shutdown() writing), the local TCP stack has probably already received a reset packet from the peer informing it about the error on the last transmitted packet. Only then can write() return an error, telling its user that this pipe is broken (EPIPE error code).
If the remote peer has only shutdown() writing, but still has the socket open, its TCP stack will successfully receive the packet and will acknowledge the received data back to the sender.
If you read the whole man page, you would also find, among the error return values:
"EPIPE fd is connected to a pipe or socket whose reading end is closed."
So the call to write() will not return 0 but rather -1, and errno will be set to EPIPE.
I am facing one of the strangest programming problems in my life.
I've built a few servers in the past and the clients would connect normally, without any problems.
Now I'm creating one which is basically a web server. However, I'm facing a VERY strange situation (at least to me).
Suppose you connect to localhost:8080, accept() accepts your connection, and the code then processes your request in a separate thread (the idea is to have multiple forks and threads across each child; that's implemented in another file for now, but I'm facing this issue on that setup as well, so better keep it simple first). Your request gets processed, the socket is closed, AND you see the output in your browser, yet accept() accepts a connection again - even though no one connects, of course, because only one connection was created.
errno = 0 (Success) after recv (that's where the program blows up)
recv returns 0 though - so no bytes read (of course, because the connection was not supposed to exist)
int main(int argc, char * argv[]){
int sock;
int fd_list[2];
int fork_id;
/* Socket */
sock=create_socket(PORT);
int i, active_n=0;
pthread_t tvec;
char address[BUFFSIZE];
thread_buffer t_buffer;
int msgsock;
conf = read_config("./www.config");
if(conf == NULL)
{
conf = (config*)malloc(sizeof(config));
if(conf == NULL)
{
perror("\nError allocating configuration:");
exit(-1);
}
// Set defaults
sprintf(conf->httpdocs, DOCUMENT_ROOT);
sprintf(conf->cgibin, CGI_ROOT);
}
while(cicle) {
printf("\tWaiting for connections\n");
// Waits for a client
msgsock = wait_connection(sock, address);
printf("\nSocket: %d\n", msgsock);
t_buffer.msg = &address;
t_buffer.sock = msgsock;
t_buffer.conf = conf;
/* Send socket to thread */
if (pthread_create(&tvec, NULL, thread_func, (void*)&t_buffer) != 0)
{
perror("Error creating thread: ");
exit(-1);
}
}
free(conf);
return 0;
}
Here are two important functions used:
int create_socket(int port) {
struct sockaddr_in server, remote;
char buffer[BUFF];
int sock;
sock = socket(AF_INET, SOCK_STREAM, 0);
if (sock < 0) {
perror("opening stream socket");
exit(1);
}
server.sin_family = AF_INET;
server.sin_port = htons(port);
server.sin_addr.s_addr = htonl(INADDR_ANY);
if (bind(sock, (struct sockaddr *) &server, sizeof(struct sockaddr_in))) {
perror("binding stream socket");
exit(1);
}
gethostname(buffer, BUFF);
printf("\n\tServidor a espera de ligações.\n");
printf("\tUse o endereço %s:%d\n\n", buffer,port);
if (listen(sock, MAXPENDING) < 0) {
perror("Impossível criar o socket. O servidor vai sair.\n");
exit(1);
}
return(sock);
}
int wait_connection(int serversock, char *remote_address){
int clientlen;
int clientsock;
struct sockaddr_in echoclient;
clientlen = sizeof(echoclient);
/* Wait for client connection */
if ((clientsock = accept(serversock, (struct sockaddr *) &echoclient, &clientlen)) < 0)
{
perror("Impossivel estabelecer ligacao ao cliente. O servidor vai sair.\n");
exit(-1);
}
printf("\n11111111111111Received request - %d\n", clientsock);
sprintf(remote_address, "%s", inet_ntoa(echoclient.sin_addr));
return clientsock;
}
So basically you'd see:
11111111111111Received request - D
D is different both times so the fd is different definitely.
Twice! One after the other has been processed, and then it blows up after recv in the thread function. Sometimes it takes a bit for the second one to be processed and show up, but it does after a few seconds. Now, this doesn't always happen. Sometimes it does, sometimes it doesn't.
It's so weird...
I've ruled out the possibility of a browser addon causing it to reconnect or something, because Apache's ab tool causes the same issue after a few requests.
I'd like to note that even if I don't run a thread for the client and simply close the socket, it happens as well! I've considered the possibility of the headers not being fully read, making the browser send another request. But the browser receives the data back properly, otherwise it wouldn't show the result; and if it shows the result fine, the connection must have been closed properly - otherwise a connection reset should appear.
Any tips? I appreciate your help.
EDIT:
If I take out the start thread part of the code, sometimes the connection is accepted 4, 5, 6 times...
EDIT 2: Note that I know that the program blows up after recv failing, I exit on purpose.
This is certainly a bug waiting to happen:
pthread_create(&tvec, NULL, thread_func, (void*)&t_buffer);
You're passing t_buffer, a local variable, to your thread. The next time you accept a client, which can happen before another client has finished, you'll pass the same variable to that thread too, leading to very nondeterministic behavior (e.g. two threads reading from the same connection, a double close() on a descriptor, and other oddities).
Instead of passing the same local variable to every thread, dynamically allocate a new t_buffer for each new client.
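A sketch of that change, reusing the names from the question and assuming thread_buffer is altered to hold the address in an embedded array rather than a pointer into main()'s stack; the thread is then responsible for free()ing its argument:

/* one freshly allocated argument block per client */
thread_buffer *tb = malloc(sizeof(*tb));
if (tb == NULL) {
    perror("malloc");
    close(msgsock);
} else {
    snprintf(tb->msg, sizeof(tb->msg), "%s", address);  /* per-client copy of the address */
    tb->sock = msgsock;
    tb->conf = conf;
    if (pthread_create(&tvec, NULL, thread_func, tb) != 0) {
        perror("Error creating thread: ");
        close(msgsock);
        free(tb);
    }
    /* on success, thread_func() calls free(tb) when it is done */
}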
Suppose ... after being processed and the socket being closed AND you see the output on your browser, accept() accepts a connection again - but no one connects of course because only one connection was created.
So if no-one connects, there is nothing to accept(), so this never happens.
So whatever you're seeing, that isn't it.