How is the FD_WRITE network event generated when using event-driven sockets?

I am working on a network-event-based socket application.
When a client has sent some data and there is something to be read on the socket, the FD_READ network event is generated.
Now, according to my understanding, when the server wants to write to the socket, a corresponding event, FD_WRITE, must be generated. But how is this event generated?
When there is something available to be read, FD_READ is generated automatically, but what about FD_WRITE when the server wants to write something?
Can anyone help me clear up this confusion?
Following is the code snippet:
WSAEVENT hEvent = WSACreateEvent();
WSANETWORKEVENTS events;
WSAEventSelect(newSocketIdentifier, hEvent, FD_READ | FD_WRITE);

while(1)
{ //while(1) starts
    waitRet = WSAWaitForMultipleEvents(1, &hEvent, FALSE, WSA_INFINITE, FALSE);
    //WSAResetEvent(hEvent);
    if(WSAEnumNetworkEvents(newSocketIdentifier, hEvent, &events) == SOCKET_ERROR)
    {
        //Failure
    }
    else
    { //else event occurred starts
        if(events.lNetworkEvents & FD_READ)
        {
            //recvfrom()
        }
        if(events.lNetworkEvents & FD_WRITE)
        {
            //sendto()
        }
    }
}

FD_WRITE means you can write to the socket right now. If the send buffers fill up (you're sending data faster than it can be sent on the network), eventually you won't be able to write anymore until you wait a bit.
Once you make a write that fails due to the buffers being full, this message will be sent to you to let you know you can retry that send.
It's also sent when you first open up the socket to let you know it's there and you can start writing.
http://msdn.microsoft.com/en-us/library/windows/desktop/ms741576(v=vs.85).aspx
The FD_WRITE network event is handled slightly differently. An
FD_WRITE network event is recorded when a socket is first connected
with a call to the connect, ConnectEx, WSAConnect, WSAConnectByList,
or WSAConnectByName function or when a socket is accepted with accept,
AcceptEx, or WSAAccept function and then after a send fails with
WSAEWOULDBLOCK and buffer space becomes available. Therefore, an
application can assume that sends are possible starting from the first
FD_WRITE network event setting and lasting until a send returns
WSAEWOULDBLOCK. After such a failure the application will find out
that sends are again possible when an FD_WRITE network event is
recorded and the associated event object is set.
So, ideally you're probably keeping a flag as to whether it's OK to write, right now. It starts off as true, but eventually, you get a WSAEWOULDBLOCK when calling sendto, and you set it to false. Once you receive FD_WRITE, you set the flag back to true and resume sending packets.
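A minimal sketch of that flag-based approach, reusing the socket from the snippet above (canWrite, onNetworkEvents and trySend are illustrative names, not part of the Winsock API):

#include <winsock2.h>

/* Sketch only: canWrite tracks whether the socket is currently writable.
   FD_WRITE is recorded once at connect/accept time, so the flag starts TRUE. */
static BOOL canWrite = TRUE;

void onNetworkEvents(WSANETWORKEVENTS *events)
{
    if (events->lNetworkEvents & FD_WRITE)
        canWrite = TRUE;              /* buffer space is available again */
}

int trySend(SOCKET s, const char *buf, int len,
            const struct sockaddr *peer, int peerLen)
{
    if (!canWrite)
        return 0;                     /* wait for the next FD_WRITE event */

    int rc = sendto(s, buf, len, 0, peer, peerLen);
    if (rc == SOCKET_ERROR && WSAGetLastError() == WSAEWOULDBLOCK)
        canWrite = FALSE;             /* send buffer full: resume on FD_WRITE */
    return rc;
}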

Related

Solutions to blocking operations inside an epoll event loop

I have an epoll event loop in my TCP server to handle client connections and read data from clients.
while(1) {
    int n, i;
    n = epoll_wait(efd, events, 64, -1); // This is blocking. It waits till new events arrive
    for(i = 0; i < n; i++) {
        if((events[i].events & EPOLLERR) || (events[i].events & EPOLLHUP) || (!(events[i].events & EPOLLIN))) {
            /* An error has occurred on this fd, or the socket is not
               ready for reading */
            dzlog_error("epoll error: %s", strerror(errno));
            close(events[i].data.fd);
            continue;
        } else if(sock == events[i].data.fd) { // Event on the server socket. Accept client connection
            while(1) {
                if((cli = accept(sock, (struct sockaddr *)&their_addr, &addr_size)) == -1) {
                    if((errno == EAGAIN) || (errno == EWOULDBLOCK)) { // We have processed all incoming connections
                        break;
                    } else {
                        dzlog_error("accept: %s", strerror(errno));
                        break;
                    }
                }
                dzlog_info("Client connected: Identifier - %d", cli);
                s = fcntl(cli, F_SETFL, O_NONBLOCK); // Make client socket non-blocking
                if(s == -1) {
                    dzlog_error("Client no block: %s", strerror(errno));
                    close(cli);
                    break;
                }
                event.data.fd = cli;
                event.events = EPOLLIN | EPOLLET;
                s = epoll_ctl(efd, EPOLL_CTL_ADD, cli, &event); // Add the client socket to the list of file descriptors to poll
                if(s == -1) {
                    dzlog_error("epoll_ctl: %s", strerror(errno));
                    close(cli);
                    break;
                }
            }
            continue;
        } else {
            readClientData(events[i].data.fd);
        }
    }
}
When there is data to be read from the client socket, the readClientData function is called. Let's assume that inside that function we have a call to a database that fetches some data from a table. If for some reason the call to the database hangs or takes longer than expected, other clients waiting to connect or send data will also be blocked.
For example consider the following scenario:
Client 1 connects to server
Client 2 connects to server
Client 1 sends data to server (this will cause the readClientData function to be called to process the data)
readClientData function calls the database and waits for response. (waits for 10 seconds or might hang indefinitely)
Client 2 sends data. This data can't be processed as the server is still waiting for the readClientData to complete for Client 1
A new Client 3 tries to connect but has to wait for its connection to be accepted because server is still processing data from Client 1
Is there a way to solve this problem?
Thanks
You can dedicate a separate process to blocking operations such as database reads (it can listen on a socket as well), so that your event loop can also watch for send/recv completions from the DB process.
Keep the event loop in the main process: read from the client, write the request to the DB-handling process in non-blocking mode, then come back to the event loop to check for a DB-process reply or a new client request.
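A minimal sketch of that arrangement, assuming a forked worker connected through a socketpair() whose parent end is added to the same epoll set (run_query() and the echo reply are placeholders):

#include <sys/types.h>
#include <sys/socket.h>
#include <sys/epoll.h>
#include <unistd.h>
#include <fcntl.h>

/* Sketch only: a socketpair connects the event loop to a forked DB worker.
   The parent end is added to the epoll set, so replies arrive as ordinary
   EPOLLIN events instead of blocking readClientData(). */
int spawn_db_worker(int efd)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == -1)
        return -1;

    if (fork() == 0) {                 /* child: blocking DB work lives here */
        close(sv[0]);
        char req[512];
        ssize_t n;
        while ((n = read(sv[1], req, sizeof req)) > 0) {
            /* run_query() is hypothetical: do the slow database call here,
               then write the result back; the parent sees it as EPOLLIN */
            write(sv[1], req, n);      /* placeholder: echo the request back */
        }
        _exit(0);
    }

    close(sv[1]);
    fcntl(sv[0], F_SETFL, O_NONBLOCK); /* parent side stays non-blocking */

    struct epoll_event ev = { .events = EPOLLIN, .data.fd = sv[0] };
    epoll_ctl(efd, EPOLL_CTL_ADD, sv[0], &ev);
    return sv[0];                      /* write client requests to this fd */
}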
Unnecessary Sequentialisation?
Assuming that your mention of readClientData() making a database query is germane: databases are quite clever; any half-decent engine these days can quite happily serve more than one client at a time. If there's no particular reason why a client shouldn't talk directly to the database, having them do so would likely make for a faster system.
More Than One Database Shim Required
If that's not an option, using a process/thread as a shim between your main event loop and the database (as described by Pras) is one way of keeping the event loop response time low.
However, you may as well have more than one of these, so that a client making a short request isn't held up by another client that's just made a long request. With a single shim, it'd be getting held up waiting for the long request to complete before even starting the new, short request.
This starts getting complex; you need multiple processes/threads, you have to work out a way to divide up the incoming requests amongst the shims, etc. That's a lot of code to start writing.
ZeroMQ
Fortunately there is an answer; if you use ZeroMQ for the communications between the main event loop and the database shims, you can start exploiting its patterns (= you're not writing a ton of code yourself). PUSH/PULL comes to mind; that can be used to auto-magically farm out client requests amongst the shims.
If you get the high water marks right, new client requests can "overtake" a long-running request, because the long-running shim simply won't be given the new client request. Provided, of course, that the database engine itself can actually serve all the shims simultaneously.
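A minimal sketch of that PUSH/PULL arrangement with libzmq, assuming an inproc endpoint shared through one context (the endpoint name and handle_query() are illustrative):

#include <zmq.h>

/* Sketch only: the main event loop PUSHes client requests and each database
   shim PULLs them; ZeroMQ fair-queues the work across however many shims
   connect. */

void *setup_push(void *ctx)                 /* called from the event loop */
{
    void *push = zmq_socket(ctx, ZMQ_PUSH);
    int hwm = 10;                           /* low ZMQ_SNDHWM: a shim that is
                                               backed up stops receiving work */
    zmq_setsockopt(push, ZMQ_SNDHWM, &hwm, sizeof hwm);
    zmq_bind(push, "inproc://db-requests");
    return push;                            /* use zmq_send(push, req, len, 0) */
}

void shim_loop(void *ctx)                   /* one per shim thread, same ctx */
{
    void *pull = zmq_socket(ctx, ZMQ_PULL);
    zmq_connect(pull, "inproc://db-requests");
    char req[256];
    for (;;) {
        int n = zmq_recv(pull, req, sizeof req, 0);  /* blocks until work */
        if (n < 0)
            break;
        /* handle_query(req, n): do the slow database call here, then push
           the result back to the event loop on a separate PUSH/PULL pair */
    }
}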

What is the order of the completion packets for WSASend() and WSARecv() in this case?

Say that I have two programs, a client and a server, running on the same computer (so the speed is extremely fast). Say also that the client socket's receive buffer is empty, and that the server will not send any data to the client unless the client tells it to do so.
Now in the client, I call WSASend() and then after it I call WSARecv():
WSASend(...); // tell the server to send me some data
WSARecv(...);
So in the code above, WSASend() is telling the server to send some data to the client (for example: the string "hello").
Now after some time, two completion packets will be placed in the completion port:
The first completion packet is for WSASend() (telling me that the data has been placed in the client socket's send buffer).
The second completion packet is for WSARecv() (telling me that the data has been placed in the buffer that I passed to WSARecv() when I called it).
Now my question is: is it possible that the completion packet for WSARecv() be placed in the completion port before the completion packet for WSASend() (so when I call GetQueuedCompletionStatus() I will get the completion packet for WSARecv() first)?
You must never assume any order for the completion packets you get; you need some independent way of knowing which operation completed.
Define a structure that inherits from OVERLAPPED and place all data related to the operation in it, including a tag describing the type of operation. When you extract the pointer to the OVERLAPPED from the IOCP, you cast it to this structure and then know whether it was for recv, send, connect, or disconnect. For example:
class IO_IRP : public OVERLAPPED
{
    //...
    DWORD m_opCode; // `recv`, `send`, `dsct`, `cnct`

    IO_IRP(DWORD opCode, ...) : m_opCode(opCode) {}

    VOID IOCompletionRoutine(DWORD dwErrorCode, DWORD dwNumberOfBytesTransfered)
    {
        // switch (m_opCode)
        m_pObj->IOCompletionRoutine(m_packet, m_opCode, dwErrorCode, dwNumberOfBytesTransfered, Pointer);
        delete this;
    }

    static VOID CALLBACK _IOCompletionRoutine(DWORD dwErrorCode, DWORD dwNumberOfBytesTransfered, LPOVERLAPPED lpOverlapped)
    {
        static_cast<IO_IRP*>(lpOverlapped)->IOCompletionRoutine(dwErrorCode, dwNumberOfBytesTransfered);
    }
};

// recv
if (IO_IRP* Irp = new IO_IRP('recv', ..))
{
    WSARecv(..., Irp);
    ...
}

Is there a way to detect that TCP socket has been closed by the remote peer, without reading from it?

First, a little background to explain the motivation: I'm working on a very simple select()-based TCP "mirror proxy", that allows two firewalled clients to talk to each other indirectly. Both clients connect to this server, and as soon as both clients are connected, any TCP bytes sent to the server by client A is forwarded to client B, and vice-versa.
This more or less works, with one slight gotcha: if client A connects to the server and starts sending data before client B has connected, the server doesn't have anywhere to put the data. I don't want to buffer it up in RAM, since that could end up using a lot of RAM; and I don't want to just drop the data either, as client B might need it. So I go for the third option, which is to not select()-for-read-ready on client A's socket until client B has also connected. That way client A just blocks until everything is ready to go.
That more or less works too, but the side effect of not selecting-for-read-ready on client A's socket is that if client A decides to close his TCP connection to the server, the server doesn't get notified about that fact -- at least, not until client B comes along and the server finally selects-for-read-ready on client A's socket, reads any pending data, and then gets the socket-closed notification (i.e. recv() returning 0).
I'd prefer it if the server had some way of knowing (in a timely manner) when client A closed his TCP connection. Is there a way to know this? Polling would be acceptable in this case (e.g. I could have select() wake up once a minute and call IsSocketStillConnected(sock) on all sockets, if such a function existed).
If you want to check whether the socket has been closed, rather than actually reading the data, you can pass the MSG_PEEK flag to recv() to see whether data has arrived or whether you get 0 or an error.
/* handle readable on A */
if (B_is_not_connected) {
    char c;
    ssize_t x = recv(A_sock, &c, 1, MSG_PEEK);
    if (x > 0) {
        /* ...have data, leave it in socket buffer until B connects */
    } else if (x == 0) {
        /* ...handle FIN from A */
    } else {
        /* ...handle errors */
    }
}
Even if A closes after sending some data, your proxy probably wants to forward that data to B first before forwarding the FIN to B, so there is no point in knowing that A has sent FIN on the connection sooner than after having read all the data it has sent.
A TCP connection isn't considered closed until after both sides send FIN. However, if A has forcibly shutdown its endpoint, you will not know that until after you attempt to send data on it, and receive an EPIPE (assuming you have suppressed SIGPIPE).
After reading about your mirror proxy application a bit more: since this is a firewall-traversal application, it seems that you actually need a small control protocol to allow you to verify that these peers are actually allowed to talk to each other. If you have a control protocol, then you have many solutions available to you, but the one I would advocate is to have one of the connections describe itself as the server and the other connection describe itself as the client. Then you can reset the client's connection if there is no server present to take it. You can let servers wait for a client connection up to some timeout. A server should not initiate any data, and if it does without a connected client, you can reset the server connection. This eliminates the issue of buffering data for a dead connection.
It appears the answer to my question is "no, not unless you are willing and able to modify your TCP stack to get access to the necessary private socket-state information".
Since I'm not able to do that, my solution was to redesign the proxy server to always read data from all clients, and throw away any data that arrives from a client whose partner hasn't connected yet. This is non-optimal, since it means that the TCP streams going through the proxy no longer have the stream-like property of reliable in-order delivery that TCP-using programs expect, but it will suffice for my purpose.
For me the solution was to poll the socket status.
On Windows 10, the following code seemed to work (but equivalent implementations seem to exist for other systems):
WSAPOLLFD polledSocket;
polledSocket.fd = socketItf;
polledSocket.events = POLLRDNORM | POLLWRNORM;

if (WSAPoll(&polledSocket, 1, 0) > 0)
{
    if (polledSocket.revents & (POLLERR | POLLHUP))
    {
        // socket closed
        return FALSE;
    }
}
I don't see the problem the way you see it. Let's say A connects to the server, sends some data, and closes; it does not need any message back. The server won't read A's data until B connects; once B does, the server reads socket A and sends the data to B. The first read will return the data A had sent, and the second will return either 0 or -1; in either case the socket is closed, so the server closes B. If A sends a big chunk of data, A's send() will block until the server starts reading and consumes the buffer.
I would use a function with a select() which returns 0, 1, 2, 11, 22 or -1, where:
0 = No data in either socket (timeout)
1 = A has data to read
2 = B has data to read
11 = A socket has an error (disconnected)
22 = B socket has an error (disconnected)
-1 = One or both sockets are not valid
int WhichSocket(int sd1, int sd2, int seconds, int microsecs) {
    fd_set sfds, efds;
    struct timeval timeout = {0, 0};
    int bigger;
    int ret;

    FD_ZERO(&sfds);
    FD_ZERO(&efds);
    FD_SET(sd1, &sfds);
    FD_SET(sd2, &sfds);
    FD_SET(sd1, &efds);
    FD_SET(sd2, &efds);
    timeout.tv_sec = seconds;
    timeout.tv_usec = microsecs;
    if (sd1 > sd2) bigger = sd1;
    else bigger = sd2;
    // bigger+1 is necessary to be Berkeley compatible; Microsoft ignores this param.
    ret = select(bigger+1, &sfds, NULL, &efds, &timeout);
    if (ret > 0) {
        if (FD_ISSET(sd1, &sfds)) return(1);  // sd1 has data
        if (FD_ISSET(sd2, &sfds)) return(2);  // sd2 has data
        if (FD_ISSET(sd1, &efds)) return(11); // sd1 has an error
        if (FD_ISSET(sd2, &efds)) return(22); // sd2 has an error
    }
    else if (ret < 0) return -1; // one of the sockets is not valid
    return(0); // timeout
}
Since Linux 2.6.17, you can poll/epoll for POLLRDHUP/EPOLLRDHUP. See epoll_ctl(2):
EPOLLRDHUP (since Linux 2.6.17)
Stream socket peer closed connection, or shut down writing half of connection. (This flag is especially useful for writing simple code to detect peer shutdown when using Edge Triggered monitoring.)
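A minimal sketch of using this for the proxy case, assuming efd is the epoll instance and a_sock is client A's socket from the surrounding server code:

#include <sys/epoll.h>

/* Sketch only: watch client A's socket for peer shutdown without asking
   for its data yet; any data A sends simply stays queued in the kernel. */
int watch_for_peer_close(int efd, int a_sock)
{
    struct epoll_event ev;
    ev.events  = EPOLLRDHUP;           /* no EPOLLIN: we don't read A yet */
    ev.data.fd = a_sock;
    return epoll_ctl(efd, EPOLL_CTL_ADD, a_sock, &ev);
}

/* In the event loop, EPOLLRDHUP | EPOLLHUP in events[i].events means the
   peer closed the connection or shut down its writing half. */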
If your proxy must be a general-purpose proxy for any protocol, then you should also handle clients which send data and immediately call close after the send (one-way data transfer only).
So if client A sends data and closes the connection before the connection to B is opened, don't worry: just forward the data to B normally (once the connection to B is opened).
There is no need to implement special handling for this scenario.
Your proxy will detect the closed connection when:
read returns zero after the connection to B is opened and all pending data from A has been read, or
your program tries to send data (from B) to A.
You could check whether the socket is still connected by trying to write to the file descriptor of each socket. Then, if the return value of the write is -1 and errno == EPIPE, you know that the socket has been closed. For example:
#include <unistd.h>
#include <errno.h>

int isSockStillConnected(int *fileDescriptors, int numFDs) {
    int i;
    ssize_t n;
    for (i = 0; i < numFDs; i++) {
        n = write(fileDescriptors[i], "heartbeat", 9);
        if (n < 0) return -1;  // write failed; errno == EPIPE means the peer closed
    }
    // made it here, must be okay
    return 0;
}

recv() links messages

I've got a piece of code:
while(1) {
    if(recvfrom(*val, buffer, 1024, MSG_PEEK, NULL, NULL) == -1) {
        perror("recv");
        exit(1);
    } else printf("recv msgpeek\n");
    if(*(int*)buffer > 5) {
        if(recvfrom(*val, buffer, 1024, 0, NULL, NULL) == -1) {
            perror("recv");
            exit(1);
        } else printf("recv\n");
        if(*(int*)buffer == 6) {
            printf("%d\n", *(int*)(buffer+sizeof(int)+30));
            printf("%s\n", (char*)buffer+sizeof(int));
        }
    }
}
This is part of a client program. I'm sending messages from the server to this client, and I've noticed that when the client receives these messages, they arrive joined together. I'm using SOCK_STREAM sockets. Does anyone know how to keep the messages from being merged?
If I understood you correctly, you are reading from a TCP socket and expecting to get exactly the same number of bytes as you "sent" from the other side. That assumption is wrong. A TCP socket is a bi-directional stream, and it does not preserve the boundaries of the application messages you send through it. A "write" on one side of the connection can result in multiple "reads" on the other side, and the other way around: multiple "writes" can be received together. That last case is what you are seeing. It is your responsibility to keep track of message boundaries.
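One common way to keep track of message boundaries is to length-prefix every message. A minimal sketch, assuming the sender writes an htonl()-encoded 32-bit length before each payload (recv_all() and recv_message() are illustrative helpers, not from the question's code):

#include <stdint.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <arpa/inet.h>

/* Sketch only: loop until exactly `len` bytes have arrived. */
static int recv_all(int sock, void *buf, size_t len)
{
    char *p = buf;
    while (len > 0) {
        ssize_t n = recv(sock, p, len, 0);
        if (n <= 0)
            return -1;                  /* error or connection closed */
        p   += n;
        len -= n;
    }
    return 0;
}

/* Read one framed message: 4-byte network-order length, then the payload. */
int recv_message(int sock, char *msg, size_t maxlen)
{
    uint32_t netlen;
    if (recv_all(sock, &netlen, sizeof netlen) == -1)
        return -1;
    uint32_t len = ntohl(netlen);       /* sender wrote htonl(length) first */
    if (len > maxlen)
        return -1;                      /* message too large for the buffer */
    if (recv_all(sock, msg, len) == -1)
        return -1;
    return (int)len;
}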
Related question - Receiving data in TCP.
If I understood well, your problem is that you send, for example, 2 messages but receive one containing the contents of both. This is due to Nagle's algorithm, which TCP uses to improve efficiency. If you want to disable this algorithm, use the TCP_NODELAY option.
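For reference, a minimal sketch of setting that option. Note that TCP_NODELAY only stops the sender from coalescing small writes; the receiver must still treat the stream as byte-oriented and frame its own messages:

#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

/* Sketch only: disable Nagle's algorithm on an existing TCP socket. */
int disable_nagle(int sock)
{
    int one = 1;
    return setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one);
}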

Multi-clients on a server

For an application in C, I need to respond to more than one client.
I set up the connection with code like:
bind(...);
listen(...);
while(1) {
    accept(...); // accept a client
    recv(...);   // receive something
    send(...);   // send something to client
    bzero(buf);  // clear buffer
}
This works great when I have only one client. Other clients can also connect to the server, but although they send commands, the server does not respond to clients that connected after the first one. How can I solve this problem?
Write a server using asynchronous, nonblocking connections.
Instead of a single set of data about a client, you need to create a struct. Each instance of the struct holds the data for each client.
The code looks vaguely like:
socket(...)
fcntl(...) to mark O_NONBLOCK
bind(...)
listen(...)
create poll entry for server socket.

while(1) {
    poll(...)
    if( fds[server_slot].revents & POLLIN ) {
        accept(...)
        fcntl(...) mark O_NONBLOCK
        create poll and data array entries.
    }
    if( fds[i].revents & POLLIN ) {
        recv(...) into data[i]
        if connection i closed then clean up.
    }
    if( fds[i].revents & POLLOUT ) {
        send(...) pending info for data[i]
    }
}
If any of your calls return the error EAGAIN instead of success then don't panic. You just try again later. Be prepared for EAGAIN even if poll claims the socket is ready: it's good practice and more robust.
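For instance, a minimal sketch of the recv() handling in the POLLIN branch above, under the assumption that the socket is non-blocking (drain_socket is an illustrative helper; here the received bytes are simply discarded where real code would append them to the client's data entry):

#include <sys/types.h>
#include <sys/socket.h>
#include <errno.h>

/* Sketch only: drain a non-blocking socket when poll() reports POLLIN.
   Returns 1 while the connection is alive, 0 once it should be closed. */
int drain_socket(int fd, char *buf, size_t buflen)
{
    for (;;) {
        ssize_t n = recv(fd, buf, buflen, 0);
        if (n > 0)
            continue;                     /* got n bytes; real code stores them */
        if (n == 0)
            return 0;                     /* peer closed the connection */
        if (errno == EAGAIN || errno == EWOULDBLOCK)
            return 1;                     /* drained for now; poll() again */
        return 0;                         /* real error: close this client */
    }
}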
I need to respond to more than one client.
Use Threading.
Basically you want your main thread to only do the accept part, and then handle the rest to another thread of execution (which can be either a thread or a process).
Whenever your main thread returns from "accept", give the socket descriptor to another thread, and call accept again (this can be done with fork, with pthread_create, or by maintaining a thread pool and using synchronization, for instance condition variables, to indicate that a new client has been accepted).
While the main thread handles new incoming clients, the other threads deal with the recv/send.
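A minimal sketch of that arrangement using pthread_create with one detached thread per client (the echo body is a placeholder for the question's recv/send/bzero logic):

#include <pthread.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <unistd.h>
#include <stdlib.h>

/* Sketch only: each accepted client gets its own detached worker thread. */
static void *client_thread(void *arg)
{
    int cli = *(int *)arg;
    free(arg);
    char buf[1024];
    ssize_t n;
    while ((n = recv(cli, buf, sizeof buf, 0)) > 0) {
        send(cli, buf, n, 0);          /* placeholder: echo back to the client */
    }
    close(cli);
    return NULL;
}

void accept_loop(int srv)
{
    for (;;) {
        int cli = accept(srv, NULL, NULL);
        if (cli == -1)
            continue;
        int *arg = malloc(sizeof *arg);
        if (!arg) { close(cli); continue; }
        *arg = cli;
        pthread_t tid;
        if (pthread_create(&tid, NULL, client_thread, arg) == 0) {
            pthread_detach(tid);       /* worker cleans itself up when done */
        } else {
            free(arg);
            close(cli);
        }
    }
}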
