I'm writing a simple web server; below is the code, which is just for demo purposes (not robust):
// server
...
while (1) {
    clientlen = sizeof(struct sockaddr_storage);
    connfd = accept(listenfd, (SA *)&clientaddr, &clientlen);
    getnameinfo((SA *)&clientaddr, clientlen, client_hostname, MAXLINE,
                client_port, MAXLINE, 0);
    printf("Accepted connection from (%s, %s)\n", client_hostname, client_port);
    ...
}
...  // handle the request
and below is the code on the client side:
// client
...
clientfd = socket(p->ai_family, p->ai_socktype, p->ai_protocol);
connect(clientfd, p->ai_addr, p->ai_addrlen);
printf("Successfully connected");
...
This simple web server is not a concurrent server, because it services only one client at a time. Thus, a single slow client can deny service to every other client.
But when I open two terminals as clients to connect to the server, start with the first one, and have it do nothing except block the server, I find that the second client also gets "Successfully connected", which confuses me.
My understanding is that since the first request is blocking the server after accept(), the second request should get stuck in accept(), and therefore the second request's connect() should not return until the first request finishes. So why does the second request's connect() still return even though the server can only handle one request at a time?
As this is my first question, my apologies if I couldn't ask it in a proper way.
I am implementing the server-client communication using lwip.
My server must be connected to multiple clients all the time.
For that, I made my server listen on all its ports; when a client wants to connect, it sends a connect request and the server should accept it.
But the problem I am having is that the server keeps waiting in the lwip_accept() function for each client, which blocks the other clients' connections as well.
For example:
if there are 3 clients who want to connect to the server, and the server is listening on 3 ports, then the lwip_accept() calls are made like below:
client1Conn = lwip_accept(sock1, (struct sockaddr*)&client, (socklen_t*)&lenClient);
client2Conn = lwip_accept(sock2, (struct sockaddr*)&client, (socklen_t*)&lenClient);
client3Conn = lwip_accept(sock3, (struct sockaddr*)&client, (socklen_t*)&lenClient);
Now, for some reason, if client 1 is not present, the server will not move past the first lwip_accept() call until client 1 becomes available, and until then the other two clients, though available, cannot connect to the server.
What I want is for the server to check for a client connection and, if that client is not available, skip it and move on to the next one.
What I tried is using lwip_fcntl like below:
int flags = lwip_fcntl(sock1, F_GETFL, 0);
lwip_fcntl(sock1, F_SETFL, flags | O_NONBLOCK);
client1Conn = lwip_accept(sock1, (struct sockaddr*)&client, (socklen_t*)&lenClient);
But it had no effect. I also tried lwip_ioctl like below:
int nonblocking = 1;
lwip_ioctl(sock1, FIONBIO, &nonblocking);
client1Conn = lwip_accept(sock1, (struct sockaddr*)&client, (socklen_t*)&lenClient);
This makes the accept fail all the time.
I tried the solution given in How do I change a TCP socket to be non-blocking? but it didn't work either.
Is there any possible way to implement the required functionality?
I am writing a client-side FTP program, and so far, after a successful connection the server will run in extended passive mode. Using the port number returned from the EPSV command, I can create client-side sockets like this:
void create_data_channel() {
    if ((data_sock = socket(AF_INET, SOCK_STREAM, 0)) == -1) {
        perror("Cannot create client socket for data connection :(");
        exit(1);
    }

    data_server_addr.sin_family = AF_INET;
    data_server_addr.sin_port = htons(port);
    data_server_addr.sin_addr = *((struct in_addr *)ftp_server->h_addr);
    bzero(&(data_server_addr.sin_zero), 8);

    // Connect to the ftp server at given port for data connection
    if (connect(data_sock, (struct sockaddr *)&data_server_addr,
                sizeof(struct sockaddr)) == -1) {
        perror("Cannot connect to the ftp server for data connection :(");
        exit(1);
    }
}
Now, whenever I want to send a command involving the data channel (e.g. LIST), I can first open a new socket using the method above, and get/send whatever data I need from/to the ftp server. Then, I close the data connection using close(data_sock).
This works well for the first LIST command. However, if I try to run two or more LIST commands, the program fails with my error message "Cannot connect to the ftp server for data connection :(". Why is this so? What am I missing here?
Typically an FTP server does not accept multiple connections to the same dynamic port. Therefore a PASV or EPSV command needs to be issued before each data transfer, so that the server creates a new listening socket and returns its port number to the client.
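Concretely: the EPSV reply looks like `229 Entering Extended Passive Mode (|||6446|)` (format per RFC 2428), and the client must re-issue EPSV and parse a fresh port out of the reply before every data connection. A hedged sketch of the parsing step (`parse_epsv_port` is a hypothetical helper name, not from the original code):

```c
/* Sketch: pull the data port out of an EPSV reply such as
 * "229 Entering Extended Passive Mode (|||6446|)".  Per RFC 2428 the
 * port sits between the third '|' and the closing '|'.  Returns -1 on
 * a malformed reply. */
#include <stdlib.h>
#include <string.h>

int parse_epsv_port(const char *reply) {
    const char *open = strchr(reply, '(');
    if (!open || strncmp(open, "(|||", 4) != 0)
        return -1;                        /* not an EPSV-style reply */
    char *end;
    long port = strtol(open + 4, &end, 10);
    if (*end != '|' || port <= 0 || port > 65535)
        return -1;                        /* no digits, or out of range */
    return (int)port;
}
```

The per-transfer flow then becomes: send EPSV, parse the port with something like the helper above, call create_data_channel() with that port, transfer, close; repeat from EPSV for the next command.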
I have an epoll event loop in my TCP server to handle client connections and read data from clients.
while (1) {
    int n, i;
    n = epoll_wait(efd, events, 64, -1);  // Blocks until new events arrive
    for (i = 0; i < n; i++) {
        if ((events[i].events & EPOLLERR) || (events[i].events & EPOLLHUP) ||
            (!(events[i].events & EPOLLIN))) {
            /* An error has occurred on this fd, or the socket is not
               ready for reading */
            dzlog_error("epoll error: %s", strerror(errno));
            close(events[i].data.fd);
            continue;
        } else if (sock == events[i].data.fd) {  // Event on the server socket: accept client connections
            while (1) {
                if ((cli = accept(sock, (struct sockaddr *)&their_addr, &addr_size)) == -1) {
                    if ((errno == EAGAIN) || (errno == EWOULDBLOCK)) {  // We have processed all incoming connections
                        break;
                    } else {
                        dzlog_error("accept: %s", strerror(errno));
                        break;
                    }
                }
                dzlog_info("Client connected: Identifier - %d", cli);
                // Make the client socket non-blocking (preserving its existing flags)
                s = fcntl(cli, F_SETFL, fcntl(cli, F_GETFL, 0) | O_NONBLOCK);
                if (s == -1) {
                    dzlog_error("Client no block: %s", strerror(errno));
                    close(cli);
                    break;
                }
                event.data.fd = cli;
                event.events = EPOLLIN | EPOLLET;
                s = epoll_ctl(efd, EPOLL_CTL_ADD, cli, &event);  // Add the client socket to the epoll set
                if (s == -1) {
                    dzlog_error("epoll_ctl: %s", strerror(errno));
                    close(cli);
                    break;
                }
            }
            continue;
        } else {
            readClientData(events[i].data.fd);
        }
    }
}
When there is data to be read from the client socket, the readClientData function is called. Let's assume that inside that function we have a call to a database that fetches some data from a table. If, for some reason, the call to the database hangs or takes longer than expected, other clients waiting to connect or to send data will also be blocked.
For example consider the following scenario:
Client 1 connects to server
Client 2 connects to server
Client 1 sends data to server (this will cause the readClientData function to be called to process the data)
readClientData function calls the database and waits for response. (waits for 10 seconds or might hang indefinitely)
Client 2 sends data. This data can't be processed because the server is still waiting for readClientData to complete for Client 1
A new Client 3 tries to connect but has to wait for its connection to be accepted, because the server is still processing data from Client 1
Is there a way to solve this problem?
Thanks
You can dedicate a separate process to slow operations such as database reads (and to listening on the socket as well), so that you can use your event loop to check send/recv completion for your DB process too.
Keep the event loop in the main process:
Read from the client, write to the DB-handling process in no-wait mode, and come back to the event loop to check for the DB process's reply or a new request from a client.
Unnecessary Sequentialisation?
Assuming that your mention of readClientData() making a database query is germane: databases are quite clever; any half-decent engine these days can happily serve more than one client at a time. If there's no particular reason why a client shouldn't talk directly to the database, having them do so would likely make for a faster system.
More Than One Database Shim Required
If that's not an option, using a process/thread as a shim between your main event loop and the database (as described by Pras) is one way of keeping the event loop's response time low.
However, you may as well have more than one of these, so that a client making a short request isn't held up by another client that has just made a long request. With a single shim, the short request would be held up waiting for the long request to complete before even being started.
This starts getting complex; you need multiple processes/threads, you have to work out a way to divide up the incoming requests amongst the shims, etc. That's a lot of code to start writing.
ZeroMQ
Fortunately there is an answer; if you use ZeroMQ for the communications between the main event loop and the database shims, you can start exploiting its patterns (= you're not writing a ton of code yourself). PUSH/PULL comes to mind; that can be used to auto-magically farm out client requests amongst the shims.
If you get the high-water marks right, new client requests could "overtake" a long-running request, because the long-running shim simply wouldn't be given the new client request. Provided, of course, that the database engine itself can actually serve all the shims simultaneously.
I've got a piece of code:
while (1) {
    if (recvfrom(*val, buffer, 1024, MSG_PEEK, NULL, NULL) == -1) {
        perror("recv");
        exit(1);
    } else printf("recv msgpeek\n");
    if (*(int*)buffer > 5) {
        if (recvfrom(*val, buffer, 1024, 0, NULL, NULL) == -1) {
            perror("recv");
            exit(1);
        } else printf("recv\n");
        if (*(int*)buffer == 6) {
            printf("%d\n", *(int*)(buffer + sizeof(int) + 30));
            printf("%s\n", (char*)buffer + sizeof(int));
        }
    }
}
This is part of a client program. I'm sending messages from the server to this client, and I've noticed that when the client receives these messages, they arrive joined together. I'm using SOCK_STREAM sockets. Does anyone know how to avoid the messages being merged?
If I understood you correctly, you are reading from a TCP socket and expecting to get exactly the same number of bytes as were "sent" from the other side. That assumption is wrong. A TCP socket is a bi-directional stream, i.e. it does not preserve the boundaries of the application messages you send through it. A "write" on one side of the connection can result in multiple "reads" on the other side, and the other way around: multiple "writes" can be received together. That last case is what you are seeing. It is your responsibility to keep track of message boundaries.
Related question - Receiving data in TCP.
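The standard way to keep track of those boundaries is to frame the messages yourself, e.g. a length prefix followed by a loop that reads exactly that many bytes (a single read() may return less than was asked for). A hedged sketch (`read_full` and `recv_msg` are illustrative helper names):

```c
/* Sketch: length-prefixed framing over a TCP stream.  read_full()
 * loops until exactly `len` bytes have arrived, since one read() may
 * return only part of what the peer wrote. */
#include <stdint.h>
#include <unistd.h>
#include <arpa/inet.h>

int read_full(int fd, void *buf, size_t len) {
    char *p = buf;
    while (len > 0) {
        ssize_t n = read(fd, p, len);
        if (n <= 0) return -1;            /* error or peer closed */
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

/* Receive one message: 4-byte big-endian length, then the payload.
 * Returns the payload length, or -1 on error / oversized message. */
int recv_msg(int fd, char *out, size_t cap) {
    uint32_t netlen;
    if (read_full(fd, &netlen, sizeof(netlen)) < 0) return -1;
    uint32_t len = ntohl(netlen);
    if (len > cap) return -1;
    return read_full(fd, out, len) < 0 ? -1 : (int)len;
}
```

The sender writes the same 4-byte length before each payload; then even if two messages arrive in one TCP segment, recv_msg() still returns them one at a time.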
If I understood correctly, your problem is that you send, for example, 2 messages but receive one containing the contents of both. This is due to Nagle's algorithm, which TCP uses to improve efficiency. If you want to disable this algorithm, use the TCP_NODELAY option.
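Disabling Nagle is a single setsockopt() call, sketched below. Note, though, that even with TCP_NODELAY the stream still has no message boundaries, so application-level framing is still needed to be safe:

```c
/* Sketch: disable Nagle's algorithm on a TCP socket.  Small writes are
 * then sent immediately instead of being coalesced while an ACK is
 * outstanding.  Returns 0 on success, -1 on error (like setsockopt). */
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

int disable_nagle(int sock) {
    int one = 1;
    return setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
}
```

The sending side calls this once on its connected socket, right after connect() or accept().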
I have a C application that sends data to a UDP server every few seconds. If the client loses its network connection for a few minutes and then gets its connection back, it will send all of the accumulated data to the server, which may result in a hundred or more requests arriving at the server at the same time from that client.
Is there any way to prevent these messages from being sent from the client if an error occurs during transmission using UDP? Would a connect call from the UDP client help to determine if the client can connect to the server? Or would this only be possible using TCP?
int socketDescriptor;
struct sockaddr_in serverAddress;

if ((socketDescriptor = socket(AF_INET, SOCK_DGRAM, 0)) < 0)
{
    printf("Could not create socket. \n");
    return;
}

memset(&serverAddress, 0, sizeof(serverAddress));  // zero the address before filling it in
serverAddress.sin_family = AF_INET;
serverAddress.sin_addr.s_addr = inet_addr(server_ip);
serverAddress.sin_port = htons(server_port);

if (sendto(socketDescriptor, data, strlen(data), 0,
           (struct sockaddr *)&serverAddress, sizeof(serverAddress)) < 0)
{
    printf("Could not send data to the server. \n");
    return;
}

close(socketDescriptor);
It sounds like the behavior you're getting comes from datagrams being buffered in the socket send buffer (SO_SNDBUF), and you would prefer that those datagrams be dropped if they can't immediately be sent?
If that's the case, you might have luck setting the size of the send buffer to zero.
A word of warning: this area of behavior treads very close to "implementation specific" territory.
As explained here, to retrieve errors on a UDP send you should call connect() first and then use send(); yet on Linux it seems to behave the same with or without connect().