I am developing a proxy server using WinSock 2.0 on Windows. If I were developing it with the blocking model, select() would be the way to wait for data from either the client or the remote server. Is there an equivalent way to do this using I/O Completion Ports?
I used to have two contexts for the two directions of data when using I/O Completion Ports, but a pending WSARecv never received any data from the remote server, and I couldn't find the problem.
Thanks in advance.
EDIT: Here's the WorkerThread code from the I/O Completion Ports version currently under development. But I am asking about how to implement a select() equivalent.
I/O Completion Ports provide an indication of when an I/O operation completes; they do not indicate when it is possible to initiate an operation. In many situations this doesn't actually matter. Most of the time the overlapped I/O model will work perfectly well if you assume it is always possible to initiate an operation. The underlying operating system will, in most cases, simply do the right thing and queue the data for you until it is possible to complete the operation.
However, there are some situations when this is less than ideal. For example you can always send to a socket using overlapped I/O. You can do this even when the remote peer is not reading and the TCP stack has started to use flow control and has filled the TCP window... This simply uses resources on your local machine in a completely uncontrolled manner (not entirely uncontrolled, but controlled by the peer, which is not ideal). I write about this here and in many situations you DO need to actively manage this kind of thing by tracking how many outstanding I/O write requests you have and using that as an indication of 'readiness to send'.
Likewise if you want a 'readiness to recv' indication you could issue a 'zero byte' read on the socket. This is a read which is issued with a zero length buffer. The read returns when there is data to read but no data is returned. This would give you the indication that there is data to be read on the connection but is, IMHO, pointless unless you are suffering from the very unlikely situation of hitting the I/O page lock limit, as you may as well read the data when it becomes available rather than forcing multiple kernel to user mode transitions.
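For illustration, here is a minimal sketch of posting such a zero-byte overlapped read. It assumes the socket is already associated with your completion port and that the OVERLAPPED structure stays alive until the completion is dequeued:

```c
/* Sketch: issuing a zero-byte overlapped read as a "readiness to recv"
   indication. Assumes 's' is already associated with a completion port
   and 'ov' lives until the completion is dequeued. */
#include <winsock2.h>
#include <string.h>

int post_zero_byte_read(SOCKET s, WSAOVERLAPPED *ov)
{
    WSABUF buf;
    DWORD flags = 0;

    buf.len = 0;          /* zero-length buffer: no data is copied */
    buf.buf = NULL;

    memset(ov, 0, sizeof(*ov));

    /* Completes (via the IOCP) when data arrives, but returns no data;
       the real WSARecv with a non-empty buffer is issued afterwards. */
    if (WSARecv(s, &buf, 1, NULL, &flags, ov, NULL) == SOCKET_ERROR) {
        int err = WSAGetLastError();
        if (err != WSA_IO_PENDING)
            return err;   /* genuine failure */
    }
    return 0;             /* completed immediately or pending */
}
```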
In summary, you don't really need an answer to your question. You need to look at how the API works and write your code to work with it rather than trying to force the API to work in a way that other APIs that you are familiar with work.
Related
Should I use non-blocking or blocking TCP sockets when using an I/O multiplexing API like poll(2) or epoll(2)?
Some people suggest using non-blocking sockets here, but the I/O multiplexing APIs already tell you when there is data to read, so what is wrong with a blocking socket here?
If your TCP server is single-threaded and uses blocking I/O, then it's likely that any client that connects to it will be able to deny service to all of the other clients simply by sending only a partial message, or alternatively by refusing to read any data from its TCP socket after the server sends data. In the former case, the server may block for a long time (perhaps forever) waiting for the entire message to be received from the client; during that time, the server will not be able to respond to other clients. In the latter case, the server will block for a long time (perhaps forever) waiting for the client to read some TCP data so that the server-side socket's send buffer can be drained enough to fit some more outgoing data to that client.
One way to avoid that problem is to set all of the server's sockets to non-blocking I/O mode; that way the server knows it can never get "stuck" inside a recv() or a send() call, and thus can remain responsive to all clients regardless of whether any particular client is behaving nicely, or not. In the non-blocking design, the only place the server ever blocks is inside select() or poll() or similar, because those calls are designed to return whenever any client needs service, rather than blocking on only a single client. (the tradeoff is that with non-blocking I/O your server's buffering/queueing logic will need to be a bit more elaborate, since you can no longer assume that any particular fixed number of bytes will be sent or received during any given send or receive operation)
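As a small illustration (assuming a POSIX-style socket API, since the question mentions poll(2)/epoll(2)), switching a socket into non-blocking mode is a one-off setup step:

```c
/* Sketch: putting a socket into non-blocking mode (POSIX). After this,
   recv()/send() return -1 with errno set to EWOULDBLOCK/EAGAIN instead
   of blocking, so the only blocking point left is select()/poll(). */
#include <fcntl.h>

static int set_nonblocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);
    if (flags == -1)
        return -1;
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}
```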
The other way to avoid the problem is to make a multi-threaded server; that has the advantage that each client gets its own thread, and therefore a badly-behaved client will block only its own thread and not the threads servicing other clients. The disadvantage is that now your server is multi-threaded, with all of the additional pitfalls that multithreading introduces.
(and, for completeness, the third approach is simply to ignore the possibility of badly-behaved/poorly-connected clients, and use a single-threaded/blocking model. That works fine for toy examples where clients are expected to be non-hostile, and where the network they are connecting over is reliable, but doesn't work so well in real life)
Non-blocking IO is used when you prefer an error response (EWOULDBLOCK / EAGAIN) over your thread waiting (blocking) until an IO operation becomes possible.
This leads to the question of how the IO multiplexing is achieved.
If you're using a thread-per-connection model (or a process-per-connection), using blocking IO might be more comfortable.
However, if the same thread is serving multiple IO objects, blocking IO would be hazardous and could bring the whole application to a halt.
It is better to use non-blocking IO when a single thread serves multiple IO objects.
Note that the issue might not be noticeable at first when polling (using select/poll or epoll/kqueue), since the IO operations are only performed by a code path that already "knows" the IO operation will not block (it was polled and reported as ready).
This masks the issue that somewhere else in the code an IO operation might be called directly without polling first, resulting in a blocking IO call that grinds the application to a halt.
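As a sketch of why non-blocking mode protects against that failure: a stray, un-polled read cannot hang the thread; it just reports that it would have blocked (the helper name here is hypothetical):

```c
/* Sketch: a read helper for a non-blocking socket. If the code path
   forgot to poll first, the call still cannot hang; it just reports
   "would block" and the caller tries again after the next poll. */
#include <errno.h>
#include <sys/socket.h>

/* Returns bytes read, 0 on orderly shutdown, -1 on error,
   or -2 if the operation would have blocked. */
static ssize_t try_recv(int fd, void *buf, size_t len)
{
    ssize_t n = recv(fd, buf, len, 0);
    if (n == -1 && (errno == EWOULDBLOCK || errno == EAGAIN))
        return -2;
    return n;
}
```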
Would the following single threaded UDP client application see a performance benefit from using epoll over simply calling recvfrom/sendto on non-blocking sockets?
Let me explain the client.
I am writing a single-threaded, UDP-based client (custom protocol) that both sends and receives data using non-blocking I/O, and my colleague suggested I use epoll for this.
The client sends and receives multiple packets of information that are all associated with a unique session id and multiple sessions can be run simultaneously.
If I use epoll, there will be a limited number of maybe 10-20 file descriptors which epoll_wait could wait on. Each file descriptor would be associated with one session. So that's a maximum of 10-20 sessions, and this number will be enforced.
Each session has its own state machine. From a single thread I need to run each state machine reasonably frequently and poll the associated socket as well.
In my case, I'd have to use epoll_wait with a timeout of zero or some very small value so that I can give CPU time to run the state machines for each session.
If there is data for a session then it needs to be directed to the associated state machine.
However, I can't really see much benefit of this design with such a small number of file descriptors.
The way I see it is I have two design options:
1. In my main loop using epoll I can poll the descriptors using epoll_wait with either a small timeout or no timeout.
How it handles data at this point is where I'm getting a bit stuck... either I read it right away and then throw it into a queue for each state machine to pick up when it's run, or I set a flag on the state machine to tell it that data is waiting and when the state machine runs it'll pick it up with a call to recvfrom. Or, I read the data and handle it right away and run the state machine for it.
Or...
2. Just run each state machine from the main loop and call recvfrom. If I get some data, handle it; if I don't, then do whatever else the state machine requires. Is there a huge overhead in calling recvfrom when there is no data?
Going the epoll route means coding in some extra complexity. If there is a strong likelihood of it being faster in my case then I will start doing it. However, if the second way, which is really simple, works just as well, then I would not use epoll.
Any thoughts?
No, and in fact performance will be much worse using epoll if adding and removing file descriptors from the set to poll is anything but an extremely rare event. With poll, a single syscall performs the entire operation. With epoll, you need multiple syscalls to modify the set and then wait on it.
Unless you're writing a server that's intended to scale to tens or hundreds of thousands of long-term persistent connections, epoll is not only premature optimization, but actually a pessimization. It's also completely nonstandard and non-portable.
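For what it's worth, here is a rough sketch of the kind of loop this answer implies for a handful of UDP sessions, using poll() with a zero timeout. session_t, handle_packet() and run_state_machine() are placeholder names, not anything from the asker's code:

```c
/* Sketch: one pass of the main loop for a small, fixed set of UDP
   sessions (nsessions is assumed to be <= MAX_SESSIONS). poll() with a
   zero timeout checks readiness in a single syscall, then every
   session's state machine gets its turn regardless. */
#include <poll.h>
#include <sys/socket.h>

#define MAX_SESSIONS 20

typedef struct { int fd; /* ... per-session state ... */ } session_t;

void handle_packet(session_t *s, const char *data, ssize_t len);  /* assumed */
void run_state_machine(session_t *s);                             /* assumed */

void main_loop_iteration(session_t *sessions, int nsessions)
{
    struct pollfd pfds[MAX_SESSIONS];
    char buf[2048];
    int i;

    for (i = 0; i < nsessions; i++) {
        pfds[i].fd = sessions[i].fd;
        pfds[i].events = POLLIN;
        pfds[i].revents = 0;
    }

    /* Zero timeout: just check readiness, never block the loop. */
    if (poll(pfds, nsessions, 0) > 0) {
        for (i = 0; i < nsessions; i++) {
            if (pfds[i].revents & POLLIN) {
                ssize_t n = recvfrom(pfds[i].fd, buf, sizeof(buf), 0,
                                     NULL, NULL);
                if (n > 0)
                    handle_packet(&sessions[i], buf, n);
            }
        }
    }

    /* Give every state machine its slice of CPU time. */
    for (i = 0; i < nsessions; i++)
        run_state_machine(&sessions[i]);
}
```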
This means, for example, a module can start compressing the response from a backend server and stream it to the client before the module has received the entire response from the backend.
Nice!
I know it's some kind of asynchronous IO, but a description that brief isn't enough for me.
Does anyone know?
Without looking at the source code of an actual implementation, I'm speculating here:
It's most likely some kind of stream (abstract buffered IO) that is passed from one module to the other ("chaining"). One module (maybe a servlet container) writes to a stream that is read by another module (the compression module in your example), which then writes its output to another stream. The contents of that stream may then be processed further or transmitted to the client.
The backend may need to wait on IO before it can fully produce the page. Modules can begin compressing the start of the message before the backend is entirely done writing it.
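As a rough sketch of that chaining idea (not nginx's actual implementation), using zlib in streaming mode; read_from_backend() and write_to_client() are hypothetical callbacks:

```c
/* Sketch: streaming ("chained") compression with zlib. Each chunk read
   from the backend is compressed and forwarded immediately, so output
   reaches the client before the whole backend response has arrived.
   read_from_backend()/write_to_client() are hypothetical callbacks. */
#include <sys/types.h>
#include <zlib.h>

ssize_t read_from_backend(char *buf, size_t len);       /* assumed */
void    write_to_client(const char *buf, size_t len);   /* assumed */

int stream_compress(void)
{
    z_stream zs = {0};
    char in[4096], out[4096];
    ssize_t n;
    int flush, ret;

    if (deflateInit(&zs, Z_DEFAULT_COMPRESSION) != Z_OK)
        return -1;

    do {
        n = read_from_backend(in, sizeof(in));
        flush = (n <= 0) ? Z_FINISH : Z_NO_FLUSH;
        zs.next_in  = (Bytef *)in;
        zs.avail_in = (n > 0) ? (uInt)n : 0;

        /* Drain everything deflate produces for this chunk and push it
           to the client right away. */
        do {
            zs.next_out  = (Bytef *)out;
            zs.avail_out = sizeof(out);
            ret = deflate(&zs, flush);
            write_to_client(out, sizeof(out) - zs.avail_out);
        } while (zs.avail_out == 0);
    } while (flush != Z_FINISH);

    deflateEnd(&zs);
    return (ret == Z_STREAM_END) ? 0 : -1;
}
```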
To understand why this is useful, you need to understand how nginx is structured. nginx is a server that relies on non-blocking input and output. Normally, a server will use blocking input and output: it will listen on a connection, and when a connection is found, it will process the page. In order to increase throughput, multiple threads are spawned, called 'workers'.
Contrast this with nginx: it continually asks the kernel, "Are any of my IO requests ready?" This allows it to handle the same number of pages with 1) less overhead from all the different processes, and 2) lower memory usage. It has some downsides, however. For extremely low-volume applications, nginx may use more CPU than a blocking server. Second, it's much less portable. Windows uses an entirely different model for non-blocking IO.
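The "ask the kernel what is ready" loop might look roughly like this on Linux (nginx's real code is far more involved; handle_ready() is a placeholder):

```c
/* Sketch: the core of an event-driven server loop on Linux. One thread
   repeatedly asks the kernel which descriptors are ready and dispatches
   them; nothing blocks except epoll_wait() itself. 'epfd' was created
   with epoll_create1() and descriptors added with epoll_ctl(). */
#include <stdint.h>
#include <sys/epoll.h>

#define MAX_EVENTS 64

void handle_ready(int fd, uint32_t events);   /* assumed */

void event_loop(int epfd)
{
    struct epoll_event events[MAX_EVENTS];

    for (;;) {
        int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
        for (int i = 0; i < n; i++)
            handle_ready(events[i].data.fd, events[i].events);
    }
}
```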
Getting back to your original question, compressing the beginning of a page is useful because the compressed output can already be on its way to the client while the backend is still accessing a database, reading from a disk, or what-have-you.
I'm writing a TCP server/client application on Windows, to become familiar with the Winsock API. I come from a UNIX background and would like to know which of these could be the best approach to implement the application.
First, the specification:
Must scale well on multiprocessor and single-processor systems.
No hard-set limit on the number of connections.
The application can both listen for connections, acting as a server, and act as a client.
Multi-threaded.
First approach:
Non-blocking select-like socket for listening, in the 'server' thread.
For each client that connects, we spawn a separate thread.
Second approach:
Blocking socket for listening, in the 'server' thread.
For each client that connects, we spawn a separate thread.
Third approach:
Non-blocking select-like socket for listening, in the 'server' thread.
No separate thread for each incoming connection; the protocol would need state information kept across sessions, I suppose.
I wonder which is the most efficient and scalable approach, and especially whether it can work with a UDP socket too.
Note: I'm writing the application in plain old C. No .NET or C++ involved; C++ exceptions are disabled too.
As Gary says, I/O Completion Ports are the most efficient way to manage multiple network connections in a non-blocking/async manner on Windows platforms.
With IOCP you get notified when your networking operations complete, and you can process these completions with a small number of threads. You get to decide how many threads you allocate to process the completions, and the kernel decides when to use the threads that you're providing. It uses them in LIFO order to reduce context switching, so that you are only ever using the minimal number of threads required at any point and you're reusing the same threads rather than cycling through all of the threads that you have available for use.
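A minimal sketch of such a worker thread might look like this (the per-operation context type is a placeholder, not part of any particular framework):

```c
/* Sketch: a worker thread draining an I/O completion port. Each
   dequeued completion carries the OVERLAPPED pointer that was passed
   to WSARecv/WSASend, which is typically embedded in a per-operation
   context structure (PER_IO_CONTEXT here is a placeholder). */
#include <winsock2.h>
#include <windows.h>

typedef struct {
    WSAOVERLAPPED ov;     /* must be the first member for the cast below */
    /* ... buffers, operation type, etc. ... */
} PER_IO_CONTEXT;

void process_completion(ULONG_PTR key, PER_IO_CONTEXT *ctx, DWORD bytes); /* assumed */

DWORD WINAPI worker_thread(LPVOID param)
{
    HANDLE iocp = (HANDLE)param;

    for (;;) {
        DWORD bytes = 0;
        ULONG_PTR key = 0;
        LPOVERLAPPED pov = NULL;

        BOOL ok = GetQueuedCompletionStatus(iocp, &bytes, &key, &pov, INFINITE);
        if (pov == NULL)
            break;                    /* port closed or fatal error */
        /* !ok with a non-NULL pov means the I/O itself failed. */
        (void)ok;
        process_completion(key, (PER_IO_CONTEXT *)pov, bytes);
    }
    return 0;
}
```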
The asynchronous nature of IOCP programming can be a little confusing to start with, but once you get the hang of it it's fairly straightforward.
I have some free IOCP server code which demonstrates the basics and provides some example servers that are pretty easy to build on. You can find the code here: http://www.serverframework.com/products---the-free-framework.html. That page also links to some articles that I wrote to explain the code.
Relating this to the details of your question: you should be looking at a variation on your third approach. Use AcceptEx() to accept new connections; this can be used in an asynchronous manner, so you don't need a separate thread for connection acceptance and can use the threads that are also processing your overlapped/async read and write operations.
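A rough sketch of posting a single asynchronous accept with AcceptEx() follows; the extension-function lookup via WSAIoctl, error recovery, and re-posting of accepts are all omitted:

```c
/* Sketch: posting one asynchronous accept with AcceptEx(). The accept
   socket is created up front, and the completion arrives through the
   IOCP that the listening socket is associated with. The address
   buffer must leave sizeof(sockaddr)+16 bytes per address. */
#include <winsock2.h>
#include <mswsock.h>   /* AcceptEx; link with Mswsock.lib and Ws2_32.lib */
#include <string.h>

#define ADDR_BUF_LEN ((sizeof(struct sockaddr_in) + 16) * 2)

typedef struct {
    OVERLAPPED ov;
    SOCKET     accept_socket;
    char       addr_buf[ADDR_BUF_LEN];
} ACCEPT_CONTEXT;

int post_accept(SOCKET listen_socket, ACCEPT_CONTEXT *ctx)
{
    DWORD bytes = 0;

    ctx->accept_socket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    if (ctx->accept_socket == INVALID_SOCKET)
        return WSAGetLastError();

    memset(&ctx->ov, 0, sizeof(ctx->ov));

    /* dwReceiveDataLength = 0: complete as soon as the connection is
       established, without waiting for the client's first data. */
    if (!AcceptEx(listen_socket, ctx->accept_socket, ctx->addr_buf, 0,
                  sizeof(struct sockaddr_in) + 16,
                  sizeof(struct sockaddr_in) + 16,
                  &bytes, &ctx->ov)) {
        int err = WSAGetLastError();
        if (err != ERROR_IO_PENDING)
            return err;
    }
    return 0;
}
```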
I've written an asynchronous client which does not use blocking sockets, so if you're interested in that approach, then take a look at my client: http://codesprout.blogspot.com/2011/04/asynchronous-http-client.html
It's an HTTP client, but I've shown very little HTTP protocol processing in there; it's all just .NET sockets. A server would work in a similar way: you can take advantage of the *Async methods such as AcceptAsync.
Under Windows, the best performance is achieved by using I/O completion ports.
This is because the list and queuing mechanisms are implemented in the kernel, far from the heavy user-mode overhead (which drags your code down if you dare to do the hard work yourself).
Unfortunately, Windows I/O completion ports need many threads to scale, and this quickly kills performance (as compared to Linux epoll, which can scale independently of the number of worker threads you decide to involve in the task).
Recently, I discovered http://gwan.com/, a web server which started on Windows and was then ported to Linux, and its authors describe the problem in detail on their forum.
I would appreciate it if anyone can help me find a better solution...
In my application, there is a TCP client (C) and a TCP server (S) on a Linux machine.
In the production environment, under high load, this server sometimes stops receiving requests from the client, creating a bottleneck for the client because the client side uses a blocking socket. To recreate the problem locally, I put the system under load and attach GDB to the server; this way the problem is reproduced.
Can anyone suggest some other mechanism to block the socket without disturbing the process?
What exactly would you like to hear? If the server is busy, that is, other processes are being serviced because they too get a share of the timeslices from the scheduler, there is not much you can do except raise your program's priority/timeslice length, or lower theirs.
Note that TCP implementations generally use a socket buffer so that some transfers can continue to happen while a process is currently busy dealing with data, or while waiting for the next timeslice.
Do you have some code to show?
Can anyone suggest some other mechanism to block the socket without disturbing the process?
Handle the connection in a separate thread so that you do not block the whole process.
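A minimal sketch, assuming POSIX threads, where connect_and_talk() is a placeholder for the blocking connect/send/recv work:

```c
/* Sketch: pushing the blocking socket work into its own thread so the
   main process stays responsive. 'connect_and_talk' is a placeholder
   for whatever blocking client logic the application performs. */
#include <pthread.h>

void connect_and_talk(void);   /* assumed: the blocking client logic */

static void *connection_thread(void *arg)
{
    (void)arg;
    connect_and_talk();        /* may block; only this thread waits */
    return NULL;
}

int start_connection_thread(void)
{
    pthread_t tid;
    if (pthread_create(&tid, NULL, connection_thread, NULL) != 0)
        return -1;
    return pthread_detach(tid);   /* fire and forget */
}
```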
I didn't really get the point, but I guess you have implemented a blocking TCP server.
If this is true, then there are a few ways to address it:
Use multithreading or an event-driven architecture to improve I/O efficiency.
Separate the I/O code from the rest of the logic.
Multiprocessing may be required to avoid system limits, such as the limit on the number of open files.