HTTP Server Programming - C

I'm attempting to write my own HTTP 1.1 server, just for fun and to learn more about HTTP, sockets, and threading.
I think I've gotten a good start delivering only static pages (using C, which I would prefer to stay with for the time being). I have a test page I wrote a while ago, and I deliver its ~50 files in 124ms according to Chrome, without using threads or keep-alive sockets.
I've found it very difficult to get threading/keep-alive working at all. There are few to no resources on the web (that I can find in my hours of Googling) that explain keep-alive connections in detail. If anyone could recommend a good book on HTTP server programming, I would greatly appreciate it.
I've done some threading and socket programming before by making a simple chat program, so I have at least some experience with it.
The issue I'm having is that when I attempt to incorporate threads, the client browser sets up multiple connections. Somewhere along the line the server gets confused, the client just sits there waiting for responses, and the server stops doing anything. I send the Connection: Keep-Alive header, but that doesn't change anything, and when I incorporate keep-alive and create a loop for reading requests in the thread function, it stalls until the connection is closed.
I would appreciate it if someone could give me some pseudo code on how to get keep-alive/threading working so the client stops creating multiple connections at a time.
A brief description of what's going on:
main function
    load static pages into a large array of fileinfo structs that hold the file data and length
    create the socket
    set it to listen on port 80
    set it to listen for 10 connections at a time (I know this is low...)
    start an endless loop
        block while waiting for someone to connect
        check if it's a localhost connection
            shut down the server
        otherwise
            start a thread (with pthread), passing it the socket variable
    loop

Thread Function
    setsockopt for a 3 sec timeout on send/recv, and enable keep-alive
    start an endless loop
        read in the request
        if the request timed out, break the loop
        Validate Request function call
        Create Response function call
        send the response
        if the request contained a Connection: close header, break the loop
    loop
    close the socket
    return
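For concreteness, here is a rough sketch of what I mean by that thread function with keep-alive (simplified: a canned response stands in for the Validate Request / Create Response calls, and the buffer handling assumes one recv returns one whole request):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/time.h>
    #include <unistd.h>

    /* Per-connection thread; the accept loop passes a heap-allocated client fd. */
    void *connection_thread(void *arg)
    {
        int client_fd = *(int *)arg;
        free(arg);

        /* 3-second send/recv timeout so an idle keep-alive connection ends eventually. */
        struct timeval tv = { .tv_sec = 3, .tv_usec = 0 };
        setsockopt(client_fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv);
        setsockopt(client_fd, SOL_SOCKET, SO_SNDTIMEO, &tv, sizeof tv);

        char buf[8192];
        for (;;) {
            /* Read one request. With keep-alive, stop at the end of the header
               block ("\r\n\r\n") plus any Content-Length body -- waiting for EOF
               stalls, because the browser keeps the socket open. */
            ssize_t n = recv(client_fd, buf, sizeof buf - 1, 0);
            if (n <= 0)                      /* timeout, error, or peer closed */
                break;
            buf[n] = '\0';

            /* Validate Request / Create Response calls would go here. */
            const char *body = "hello\n";
            char resp[256];
            int len = snprintf(resp, sizeof resp,
                               "HTTP/1.1 200 OK\r\n"
                               "Content-Type: text/plain\r\n"
                               "Content-Length: %zu\r\n"
                               "Connection: keep-alive\r\n\r\n%s",
                               strlen(body), body);
            send(client_fd, resp, len, 0);

            /* Header fields are case-insensitive; strstr is only for brevity. */
            if (strstr(buf, "Connection: close"))
                break;
        }

        close(client_fd);
        return NULL;
    }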

I would recommend looking at GNU libmicrohttpd. It focuses squarely on providing a framework upon which to build HTTP 1.1 servers. It is small and supports keep-alive with and without threading. (Personally I use it without threading. It has several threading models too.)
Even if you decide to write your web server from scratch, I would suggest looking at libmicrohttpd to gain insight into not only how the protocol works, but how the library models the "work flow" of a web server in a very clean way. I think it is a mistake to imagine that keep-alive implies threading, and that assumption is an impediment to understanding keep-alive.
(Regarding Apache's credentials as a web server: it is pretty huge, and a lot of it is not related to the protocol but rather to things like its plugin system and so on.)
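If it helps to see the shape of it, a minimal libmicrohttpd server is roughly the sketch below. Treat the details as version-dependent: older releases declare the access handler as returning int rather than enum MHD_Result, and call the internal-thread flag MHD_USE_SELECT_INTERNALLY.

    #include <microhttpd.h>
    #include <stdio.h>
    #include <string.h>

    /* Serve the same static page for every request; MHD handles keep-alive itself. */
    static enum MHD_Result handle_request(void *cls, struct MHD_Connection *conn,
                                          const char *url, const char *method,
                                          const char *version,
                                          const char *upload_data,
                                          size_t *upload_data_size, void **req_cls)
    {
        static const char page[] = "<html><body>hello</body></html>";
        struct MHD_Response *resp =
            MHD_create_response_from_buffer(strlen(page), (void *)page,
                                            MHD_RESPMEM_PERSISTENT);
        enum MHD_Result ret = MHD_queue_response(conn, MHD_HTTP_OK, resp);
        MHD_destroy_response(resp);
        return ret;
    }

    int main(void)
    {
        /* One internal thread runs the event loop; no threads in our own code. */
        struct MHD_Daemon *d =
            MHD_start_daemon(MHD_USE_INTERNAL_POLLING_THREAD, 8080,
                             NULL, NULL, &handle_request, NULL,
                             MHD_OPTION_END);
        if (d == NULL)
            return 1;
        getchar();               /* run until Enter is pressed */
        MHD_stop_daemon(d);
        return 0;
    }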

I'd recommend grabbing the source for Apache and seeing how they handle it. There's not much point in pseudo code when you can see how the real thing works.

Perhaps you could look at Apache's code for some clues. It is written in C.
Hopefully someone will come along and give a more detailed answer :)

Related

How to wait for Windows TCP Network at startup?

I know this should be obvious, but I have found far too many DIFFERENT answers and the ones I've tried all fail (sometimes or all the time), so...
We are working on a service and some applications that run at startup on a Windows 10 computer that performs an automatic login. The service and applications require Windows sockets for TCP, UDP and multicast. Most of the time, our programs fail because they get errors about the network not being ready and such. Currently, we work around this by just adding a dumb, fixed-length delay before attempting to start, but we would prefer to start as soon as the network is ready to be used.
Our most recent attempt was to wait on the LanmanWorkstation (Workstation) service, but that generally reports it is running/ready before the sockets functions will succeed. I have also seen suggestions to use LanmanServer (Server) or Netman (Network Connections) or maybe even Tcpip (TCP/IP Protocol Driver), but I cannot find anything definitive. One would think this is a common requirement, so why would Microsoft make the info so difficult to find?
Ahem. Does anyone know a definitive method for a service or application to wait until Winsock functions will succeed before using them? Short of a spin wait on a failing Winsock function, of course!

Websockets on Stellaris board running lwIP 1.3.2

What I'm doing
I'm implementing a WebSocket server on a Stellaris board, as the title says. At the moment I'm able to establish a connection to the client and send a few frames.
The way I'm implementing the websocket
The way I'm developing it is something like master-slave communication: whenever the client sends a string, the server decodes it and then answers. At the moment I'm simply responding to the character 'e', which drives what is designed to be just a counter. The thing is that I implemented the WebSocket on the client side to send 'e' whenever it receives a message and then display the message on the page.
The problem
The problem is that it does about 15 transactions, then I can see the communication being re-transmitted to and from the Stellaris board, and then the connection closes. After the connection closes I noticed that I can't access any other page on the board; it simply doesn't respond anymore.
My assumptions of what may be causing it
This led me to believe that the transactions are happening too fast and that there may be an implementation bug, an lwIP bug, or a hardware bug (I'm using the enet_io example as a base).
My assumptions on how to fix it
After seeing this, I imagine that what I need is to throttle the string being sent to the microcontroller so that it is sent once a second, or maybe even less often, because at the moment it was doing something like 1000 transactions per second, sometimes more.
The question
So ... after my trials I still have a few questions that need to be answered. Do websockets need this kind of relationship? Where client asks and server serves? Or can I simply stream data from the server to the client as long as the connection is open? Is my supposition that slowing down my rates will work?
Do websockets need this kind of relationship [request-response]? Where client asks and server serves? Or can I simply stream data from the server to the client as long as the connection is open?
The WebSocket protocol doesn't require a request-response model (except for the connection-establishing handshake).
The server can stream data to the client without worrying about any response or request from the client.
However, it's common practice to get a response or a ping from a client once in a while, just to know they're alive.
This allows the client to renew a connection if a message or ping fails to reach the server - otherwise the client might not notice an abnormally dropped connection (it will just assume no updates are being sent because there's no new data).
It also allows the server to know a connection is still alive even when no information is being exchanged.
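To make the framing concrete: per RFC 6455, server-to-client frames are not masked, so for short payloads the server only has to prepend two bytes. A sketch (the send path is whatever your lwIP/enet_io code already uses, so it is left out):

    #include <stdint.h>
    #include <string.h>

    /* Build an unmasked server-to-client WebSocket frame (RFC 6455) for a short
       payload (< 126 bytes). Opcode 0x1 = text, 0x9 = ping, 0xA = pong.
       Returns the total frame length written into out. */
    static size_t ws_build_frame(uint8_t opcode, const char *payload, size_t len,
                                 uint8_t *out)
    {
        out[0] = 0x80 | (opcode & 0x0F);  /* FIN set: single-frame message */
        out[1] = (uint8_t)len;            /* no mask bit: server frames are unmasked */
        memcpy(&out[2], payload, len);
        return len + 2;
    }

For example, ws_build_frame(0x1, "42", 2, buf) produces a text frame you can push to the client whenever you like, and ws_build_frame(0x9, "", 0, buf) produces a ping.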
Is my supposition that slowing down my rates will work?
I guess this question becomes less relevant due to the first question's answer... however, I should probably note that the web socket client (often a browser) will have limited resources and a different memory management scheme.
Browsers are easy to overwhelm with too much data because they often keep references to all the exchanges since the page was loaded (or refreshed).
This is especially true when logging events to a browser's console.

What is the nginx equivalent of worker and prefork MPMs in Apache?

I'm seriously considering the switch from Apache to nginx, and I'd like to understand nginx better - I'm no Apache guru either, so I think I'll learn more about Apache in the answers to these questions. I think it will be apparent from my questions that I really have a lot to learn in this area and have probably misunderstood much. But that's why I'm asking:
1) So does nginx have no equivalent of Apache's prefork MPM? If so, how is nginx different from the worker MPM? And if it's like the worker MPM, why aren't there the same concerns about thread safety that make people avoid Apache's mod_php with a worker MPM?
2) If a process is an OS process, and a process can have multiple threads (similar to Java, where the java executable is the single process and it can start multiple threads), how do 'requests' fit into this model? I understand that a client request doesn't result in a new OS process with nginx, but does it result in a new thread, or can a thread handle multiple simultaneous requests? Or, if not, multiple sequential connections, where once a thread has finished with one request it can handle another?
3) What is the relationship between 'requests' and 'connections'? If a client makes 10 requests, are these 10 connections, or is it 1 connection? How long does a connection last? I realize that if a client makes 10 requests over a period of a month, those could be part of the same session (if the session cookie persists), but surely that wouldn't be the same connection. So where's the line drawn for what constitutes a connection?
4) What are the different ways of using PHP from nginx? Unless I'm mistaken, Apache has 3 (mod_php, mod_fastcgi, and mod_fcgid). For nginx I've heard of PHP-FPM and FastCGI. Are there other options, or are these the only 2 ways, and if so how do they differ from each other? I keep reading that PHP-FPM is another way of doing FastCGI, so I'm not exactly sure what the difference is.
5) If there are 10 clients connected to the server accessing PHP pages, how many processes will I see when running the 'top' command if using nginx, and what will they be named? (I imagine the answer depends on the response to the previous question.) If this was with Apache prefork MPM and mod_php, then if I understand correctly I'd see 10 httpd processes when running 'top'.
6) How many ports on my server will now be occupied? Before it was just port 80 by Apache. Now I imagine there'll be port 80 for nginx, plus some other port for nginx to communicate with the thing that's actually processing the PHP. And what exactly is that thing that runs the PHP: is it the 'PHP' executable, or 'FastCGI', or something else?
7) So if nginx is configured to use multiple 'backend' PHP processors (is this possible?), how many APC instances will there be? And how would requests from nginx be handed to them (e.g. would it use the session cookie to send the same user back to the same PHP processor)?
So many questions, I know, but hopefully some out there who actually understand all this can help me understand as well. I really want to! Thanks.
This article about Apache vs. nginx should answer pretty much everything: http://arstechnica.com/business/news/2011/11/a-faster-web-server-ripping-out-apache-for-nginx.ars
As for the other questions:
3) A request is just that: a request for some resource on the server. GET /index.html is one request. POST /formhandler.php is another request. A connection is literally the TCP socket that links the client browser to the server. A connection is what the requests travel through. One connection can handle multiple requests, or it can handle only one request; it depends on whether HTTP keep-alives are allowed/requested, and what mood the client and server are in that day. Best case, 1 connection handles 10 requests, requiring only one TCP handshake sequence. Worst case, each request goes over a separate connection, requiring 10 TCP handshakes.
6) There'll be one or two listening ports open on the server (port 80 for regular traffic, maybe 443 for SSL). Any number of requests can be multiplexed onto a single port. There'll never be fewer than one port held open by the web server, but there should never really be more than one or two either.

Problem supporting keep-alive sockets on a home-grown http server

I am currently experimenting with building an HTTP server. The server is multi-threaded: one listening thread using select(...) and four worker threads managed by a thread pool. I'm currently managing around 14k-16k requests per second with a document length of 70 bytes and a response time of 6-10ms, on a Core i3 330M. But this is without keep-alive, and any socket I serve I immediately close when the work is done.
EDIT: The worker threads process 'jobs' that are dispatched when activity on a socket is detected, i.e. service requests. After a 'job' is completed, if there are no more 'jobs' we sleep until more 'jobs' get dispatched; if some are already available, we start processing one of them.
My problems started when I began to try to implement keep-alive support. With keep-alive activated I only manage 1.5k-2.2k requests per second with 100 open sockets. This number grows to around 12k with 1000 open sockets. In both cases the response time is somewhere around 60-90ms. I find this quite odd, since my current assumption is that requests per second should go up, not down, and response time should hopefully go down, but definitely not up.
I've tried several different strategies for fixing the low performance:
1. Call select(...)/pselect(...) with a timeout value so that we can rebuild our FD_SET structure and listen to any additional sockets that arrived after we blocked, and service any detected socket activity.
(aside from the low performance, there's also the problem of sockets being closed while we're blocking, resulting in select(...)/pselect(...) reporting bad file descriptor.)
2. Have one listening thread that only accepts new connections, and one keep-alive thread that is notified via a pipe of any new sockets that arrived after we blocked and of any new socket activity, and that rebuilds the FD_SET.
(same additional problem here as in '1.').
3. select(...)/pselect(...) with a timeout, when new work is to be done, detach the linked-list entry for the socket that has activity, and add it back when the request has been serviced. Rebuilding the FD_SET will hopefully be faster. This way we also avoid trying to listen to any bad file descriptors.
4. Combined (2.) and (3.).
-. Probably a few more, but they escape me atm.
The keep-alive sockets are stored in a simple linked list, whose add/remove methods are surrounded by a pthread_mutex lock; the function responsible for rebuilding the FD_SET also takes this lock.
I suspect that it's the constant locking/unlocking of the mutex that is the main culprit here. I've tried to profile the problem, but neither gprof nor google-perftools has been very cooperative, either introducing extreme instability or plain refusing to gather any data at all (this could be me not knowing how to use the tools properly, though). But removing the locks risks putting the linked list in a non-sane state and will probably crash the program or put it into an infinite loop.
I've also suspected the select(...)/pselect(...) timeout when I've used it, but I'm pretty confident that this was not the problem since the low performance is maintained even without it.
I'm at a loss for how I should handle keep-alive sockets, and I'm therefore wondering if you people out there have any suggestions on how to fix the low performance, or suggestions for any alternative methods I could use to go about supporting keep-alive sockets.
If you need any more information to be able to answer my question properly, don't hesitate to ask for it and I shall try my best to provide you with the necessary information and update the question with this new information.
Try to get rid of select completely. You can find some kind of event notification on every popular platform: kqueue/kevent on FreeBSD, epoll on Linux, etc. This way you do not need to rebuild the FD_SET and can add/remove watched fds at any time.
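A minimal epoll-based loop, as a sketch (Linux-specific, error handling omitted): each fd is registered once with epoll_ctl(), so there is no FD_SET to rebuild and no shared list to lock on every iteration.

    #include <sys/epoll.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Single-threaded event loop; listen_fd is an already-listening socket. */
    void event_loop(int listen_fd)
    {
        int ep = epoll_create1(0);

        struct epoll_event ev = { .events = EPOLLIN, .data.fd = listen_fd };
        epoll_ctl(ep, EPOLL_CTL_ADD, listen_fd, &ev);

        struct epoll_event events[64];
        for (;;) {
            int n = epoll_wait(ep, events, 64, -1);
            for (int i = 0; i < n; i++) {
                int fd = events[i].data.fd;
                if (fd == listen_fd) {
                    /* New connection: register it and keep it open (keep-alive). */
                    int client = accept(listen_fd, NULL, NULL);
                    struct epoll_event cev = { .events = EPOLLIN, .data.fd = client };
                    epoll_ctl(ep, EPOLL_CTL_ADD, client, &cev);
                } else {
                    char buf[4096];
                    ssize_t got = recv(fd, buf, sizeof buf, 0);
                    if (got <= 0) {
                        close(fd);   /* closing the fd also removes it from epoll */
                    } else {
                        /* Parse the request and send the response here, or hand
                           the fd off to a worker thread / job queue. */
                    }
                }
            }
        }
    }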
The time increase will be more visible when the client uses your socket for more than one request. If you are merely opening and closing, yet still telling the client to keep the connection alive, then you have the same scenario as you did without keep-alive, except that now you also have the overhead of the sockets sticking around.
If, however, you use the socket from the same client for multiple requests, then you avoid the TCP connection setup overhead and gain performance that way.
Make sure your client is using keep-alive properly. You likely also want a better way to get notified of socket state and data; perhaps a poll device or queuing the requests.
http://www.techrepublic.com/article/using-the-select-and-poll-methods/1044098
This page has a patch for Linux to handle a poll device. With some understanding of how it works, you can use the same technique in your application rather than relying on a device that may not be installed.
There are many alternatives:
Use processes instead of threads, and pass file descriptors via Unix sockets (see the sketch after this list).
Maintain per-thread lists of sockets. You could even accept() directly on the worker threads.
etc...
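For the first alternative, the standard mechanism is SCM_RIGHTS ancillary data over a Unix-domain socket (e.g. a socketpair between the listener and each worker process). A sketch of the sending side; the helper name is mine and error handling is trimmed:

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Send an open file descriptor to another process over a Unix-domain socket.
       The receiving side mirrors this with recvmsg() and reads CMSG_DATA. */
    static int send_fd(int unix_sock, int fd_to_send)
    {
        char dummy = 'x';                         /* must send at least one byte */
        struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };

        union {                                   /* aligned buffer for the cmsg */
            char buf[CMSG_SPACE(sizeof(int))];
            struct cmsghdr align;
        } u;

        struct msghdr msg = { 0 };
        msg.msg_iov = &iov;
        msg.msg_iovlen = 1;
        msg.msg_control = u.buf;
        msg.msg_controllen = sizeof(u.buf);

        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;             /* "pass these descriptors" */
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd_to_send, sizeof(int));

        return sendmsg(unix_sock, &msg, 0) < 0 ? -1 : 0;
    }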
Are your test clients reusing the sockets? Are they correctly handling keep alive?
I could see the case where you make the minimum change possible in your benchmarking code by just passing the keep-alive header, but then don't change the client code, so the socket is still closed at the client end once the payload is received.
This would incur all the costs of keep-alive with none of the benefits.
What you are trying to do has been done before. Consider reading about the Leader-Follower network server pattern, http://www.kircher-schwanninger.de/michael/publications/lf.pdf

Real time embeddable http server library required

Having looked at several available http server libraries I have not yet found what I am looking for and am sure I can't be the first to have this set of requirements.
I need a library that presents a 'pipelined' API. Pipelining describes the HTTP feature where multiple HTTP requests can be sent across a TCP link without waiting for a response. I want a similar feature in the library API, where my application can receive all of those requests without having to send a response first (I will respond, but I want the ability to process multiple requests at a time to reduce the impact of internal latency).
So the web server library will need to support the following flow
1) HTTP Client transmits http request 1
2) HTTP Client transmits http request 2 ...
3) Web Server Library receives request 1 and passes it to My Web Server App
4) My Web Server App receives request 1 and dispatches it to My System
5) Web Server receives request 2 and passes it to My Web Server App
6) My Web Server App receives request 2 and dispatches it to My System
7) My Web Server App receives response to request 1 from My System and passes it to Web Server
8) Web Server transmits HTTP response 1 to HTTP Client
9) My Web Server App receives response to request 2 from My System and passes it to Web Server
10) Web Server transmits HTTP response 2 to HTTP Client
Hopefully this illustrates my requirement. There are two key points to recognise: responses to the Web Server Library are asynchronous, and there may be several HTTP requests passed to My Web Server App with responses outstanding.
Additional requirements are
Embeddable into an existing 'C' application
Small footprint; I don't need all the functionality available in Apache etc.
Efficient; will need to support thousands of requests a second
Allows asynchronous responses to requests; there is a small latency on responses, and given the required request throughput a synchronous architecture is not going to work for me.
Support persistent TCP connections
Support use with Server-Push Comet connections
Open Source / GPL
support for HTTPS
Portable across Linux and Windows; preferably more.
I will be very grateful for any recommendation
Best Regards
You could try libmicrohttpd.
Use the Onion, Luke. It is a lightweight and easy-to-use HTTP server library in C.
For future reference, take a look at libasyncd, which meets your requirements. I'm one of the contributors.
Embeddable into an existing 'C' application
It's written in C.
Small footprint; I don't need all the functionality available in Apache etc.
Very compact.
Efficient; will need to support thousands of requests a second
It's a libevent-based framework and can handle more than that.
Allows asynchronous responses to requests;
It's asynchronous. Also support pipelining.
Support persistent TCP connections
Sure, keep-alive.
Support use with Server-Push Comet connections
It's up to how you code your logic.
Open Source / GPL
under BSD license
support for HTTPS
Yes, it supports HTTPS with OpenSSL.
Portable across linux, windows; preferably more.
Portable, but not to Windows at this moment; porting it to Windows should be possible, though.
What you want is something that supports HTTP pipelining. You should make yourself familiar with how pipelining works if you are not already.
Yes, go for libmicrohttpd. It has support for SSL etc. and works on both Unix and Windows.
However, Christopher is spot on in his comment. If you have a startup time for each response, you are not going to gain much by pipelining. However, if you only have a significant response time for the first request, you may win something.
On the other hand, if each response has a startup time, you may gain a lot by not using pipelining and instead creating a new request for each object. Then each request can have its own thread, absorbing the startup costs in parallel. All responses will then be sent "at once" in the optimum case. libmicrohttpd supports this mode of operation in its MHD_USE_THREAD_PER_CONNECTION thread model.
Following up on previous comments and updates...
You don't say how many concurrent connections you'll have but just "a TCP link".
If it's a single connection, then you'll be using HTTP pipelining as previously mentioned; so you would only need a handful of threads — rather than thousands — to process the requests at the head of the pipeline.
So you wouldn't need to have a thread for every request; just a small pool of workers for each connection.
Have you done any testing or implementation so far to show whether you actually do have problems with response latency for pipelined connections?
If your embedded device is powerful enough to cope with thousands of requests per second, including doing TLS setup, encryption and decryption, I would worry about premature optimisation at this level.
Howard,
Have you taken a look at lighttpd? It meets all of your requirements except that it isn't explicitly an embedded web server. But it is open source, and compiling it into your application shouldn't be too hard. You can then write a custom plugin to handle your requests.
Can't believe no one has mentioned nginx. I've read large portions of the source-code and it is extremely modular. You could probably get the parts you need working pretty quickly.
uIP or lwIP could work for you. I personally use uIP. It's good for a small number of clients and concurrent connections (or as you call it, "pipelining"). However, it's not as scalable or as fast at serving up content as lwIP, from what I've read. I went with the simplicity and small size of uIP instead of the power of lwIP, as my app usually has only one user.
I've found uIP pretty limited as the concurrent connections increase. However, I'm sure that's a limitation of the MAC receive buffers available to me and not uIP itself. I think lwIP uses significantly more memory in some way to get around this. I just don't have enough Ethernet RAM to support a ton of request packets coming in. That said, I can do background AJAX polling with about a 15ms latency on a 56 MHz processor.
http://www.sics.se/~adam/software.html
I've actually modified uIP in several ways; adding a DHCP server and supporting multipart POST for file uploads are the big things. Let me know if you have any questions.
