High performance application webserver in C/C++

Is there any high performance (ideally evented and open source) web server in C or C++?
I'd like to be able to use it such that it calls a method/function in my application with a filled-out HTTP request class/struct, and I can return a filled-out HTTP response class/struct to it.
If it isn't open source, I'd need built-in support for long-polling connections, keep-alive, and so on; otherwise, I think I can add these things myself.
If you don't know of any such servers available, would you recommend writing my own web server to fit the task? It cannot be file-based, and must be written in high-performance C/C++.
Edit: I'm thinking something like the Ruby Mongrel for C, if that helps.

I had the very same requirements for my job, so I evaluated a number of solutions: mongoose, libmicrohttpd, libevent. I was also thinking about writing nginx modules. Here is a summary of my findings:
nginx
nginx project page
I love this server and use it a lot. Its performance and resource usage are much better than Apache's; I still use Apache as well, but plan to migrate to nginx.
Very good tunable performance. Rich functionality. Portability.
The module API is not documented and seems very verbose. See this nginx hello world module as an example.
Nginx does not use threads but uses multiple processes. This makes writing modules harder: you need to learn the nginx API for shared memory, etc.
mongoose
mongoose project page
All of the server's code is in a single mongoose.c file (about 130 KB), with no dependencies. This is good.
One thread per connection, so if you need concurrency you have to configure lots of threads, i.e. high RAM usage. Not too good.
Performance is good, although not exceptional.
The API is simple, but you have to compose all response HTTP headers yourself, i.e. learn the HTTP protocol in detail.
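To illustrate what composing the headers yourself involves, here is a plain-C sketch (not tied to mongoose or any particular server) of the kind of raw HTTP/1.1 response you end up assembling by hand:

#include <stdio.h>
#include <string.h>

/* Build a complete HTTP/1.1 response for a plain-text body into out. */
static int build_response(char *out, size_t out_size, const char *body)
{
    return snprintf(out, out_size,
                    "HTTP/1.1 200 OK\r\n"
                    "Content-Type: text/plain\r\n"
                    "Content-Length: %zu\r\n"
                    "Connection: keep-alive\r\n"
                    "\r\n"
                    "%s",
                    strlen(body), body);
}

int main(void)
{
    char response[512];
    build_response(response, sizeof(response), "hello\n");
    fputs(response, stdout);   /* in a real server this would go to the socket */
    return 0;
}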
libmicrohttpd
libmicrohttpd project page
Official GNU project.
Verbose API, which seems awkward to me, although much simpler than writing nginx modules.
Good performance in keep-alive mode (link to my benchmarks below), not so good without keep-alive.
libevent
libevent project page
Libevent library has built-in web server called evhttp.
It is event-based, using libevent's event loop.
Easy API. Constructs HTTP headers automatically.
Officially single-threaded. This is a major disadvantage. I've found a hack that makes several instances of evhttp run simultaneously, accepting connections from the same listening socket (see the sketch below); I'm not sure it is entirely safe and robust.
The performance of single-threaded evhttp is surprisingly poor. The multi-threaded hack works better, but is still not good.
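A rough sketch of that kind of hack, assuming libevent 2's evhttp API: bind the listening socket yourself, then hand the same fd to one evhttp instance per thread via evhttp_accept_socket (link with -levent -lpthread). Treat it as an illustration, not as the exact code I used:

#include <event2/event.h>
#include <event2/http.h>
#include <event2/buffer.h>
#include <event2/util.h>
#include <pthread.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <string.h>

#define NUM_THREADS 4

static void handle_request(struct evhttp_request *req, void *arg)
{
    struct evbuffer *buf = evbuffer_new();
    evbuffer_add_printf(buf, "hello\n");
    evhttp_send_reply(req, HTTP_OK, "OK", buf);   /* evhttp composes the headers */
    evbuffer_free(buf);
}

static void *worker(void *arg)
{
    int listen_fd = *(int *)arg;
    struct event_base *base = event_base_new();
    struct evhttp *http = evhttp_new(base);

    evhttp_set_gencb(http, handle_request, NULL);
    evhttp_accept_socket(http, listen_fd);   /* all instances share one fd */
    event_base_dispatch(base);
    return NULL;
}

int main(void)
{
    struct sockaddr_in addr;
    int fd = socket(AF_INET, SOCK_STREAM, 0), one = 1, i;
    pthread_t threads[NUM_THREADS];

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(fd, 128);
    evutil_make_socket_nonblocking(fd);

    for (i = 0; i < NUM_THREADS; i++)
        pthread_create(&threads[i], NULL, worker, &fd);
    for (i = 0; i < NUM_THREADS; i++)
        pthread_join(threads[i], NULL);
    return 0;
}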
G-WAN
G-WAN project is not open source, but I'd like to say a few words about it.
Very good performance, low memory usage, 150 KB executable.
Very convenient 'servlet' deployment: just copy a .c file into the csp directory and the running server compiles it automatically. Code modifications are also compiled on the fly.
Simple API, although constrained in some ways. Rich functionality (JSON, key-value store, etc.).
Unstable. I had segfaults on static files and hangs on some sample scripts. (Experienced on a clean install; I never mixed files from different versions.)
Only 32-bit binary (not anymore).
So as you can see, none of the existing alternatives fully satisfied me, so I developed my own server, which is ...
NXWEB
NXWEB project page
Feature highlights:
Very good performance; see benchmarks on project page
Can serve tens of thousands concurrent requests
Small memory footprint
Multi-threaded model designed to scale
Exceptionally light code base
Simple API
Decent HTTP protocol handling
Keep-alive connections
SSL support (via GNUTLS)
HTTP proxy (with keep-alive connection pooling)
Non-blocking sendfile support (with configurable small file memory cache; gzip pre-encoded file serving)
Modular design for developers
Can be run as daemon; relaunches itself on error
Open source
Limitations:
Depends on libev library (not anymore)
Only tested on Linux

I would suggest writing a FastCGI executable that can be used with many high-performance web servers (even closed-source ones).
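For example, a minimal FastCGI responder in C using the common fcgi_stdio wrapper might look roughly like this (the library/package name and build flags are assumptions; adjust them for your distribution):

/* Build: gcc app.c -lfcgi -o app.fcgi */
#include <fcgi_stdio.h>   /* wraps stdio so printf goes back to the web server */

int main(void)
{
    while (FCGI_Accept() >= 0) {      /* block until the next request arrives */
        printf("Content-Type: text/plain\r\n\r\n");
        printf("Hello from a C FastCGI application\n");
    }
    return 0;
}

You would then spawn it with something like spawn-fcgi and point the web server's FastCGI directive (e.g. nginx's fastcgi_pass) at its socket.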

mongoose: one file. Simple and easy to use. Not async I/O, but perfect for embedded and particular purposes.
G-WAN: excellent. No crashes. Ultra well-planned configuration. Very smart and easy for C/C++ development; in other words, a very clean, sensible API compared to nginx. Provides a thread per core, or whatever you specify. A great choice. Largest disadvantage (maybe I'm lacking in this area): you cannot step through the code.
libevent: a single thread is not a disadvantage on a single-core machine; after all, its point is async I/O. It does have multiple threads for other cores.
nginx: no personal experience. Gaining serious ground on the "a-patchy" (Apache) server. (Terribly confusing API.)
Boost.Asio: a C++ library for asynchronous I/O (asio). Awesome. Needs a friendlier higher-level API for simpletons like myself, and others who come from PHP, Java, JavaScript, Node.js and other web languages.
Python Bottle: an awesome one-file lib (framework/system) that makes it easy to build Python web apps. Has/is a built-in HTTP server, like libevent and Node.js.
Node.js: a JavaScript async I/O server. An excellent selection. Unfortunately, you have to program in JavaScript, which does become tedious. While there is something to be said for getting the job done, there is also something to be said for enjoying yourself during the process. Hopefully no one comes up with node.php.

I'm going to suggest the same thing as Axel Gneiting, but I've provided an answer with my reasons for taking this approach:
1) HTTP is not trivial as a protocol. Writing your own server or amending an off-the-shelf solution is a very complex task, a lot more complex than using the available APIs to implement a separate processing engine.
2) Using (an unmodified) mainstream webserver should provide you with more functionality than you require (so you've got growing room).
3) Using (an unmodified) mainstream webserver will usually mean that it has been far more extensively tested and documented than a homebrew system.
4) ... and it's more likely to be secure and stable.
5) Using FastCGI, you can develop your back-end processing in all sorts of languages, including C++ and C. There are standard toolkits available to facilitate this.
6) Alternatively, many web servers provide support for running interpreter engines in-process (e.g. mod_php, mod_perl). I'd advise against running your own native code as a module, though.
It cannot be file-based.
Eh? What does that mean?

I'm an avid nginx user; nginx is written in C, and it seems like it could work for you. If you want the very best speed out of nginx, I would write an nginx module. Here are third-party modules you can examine to get an idea of what that requires.
As for the long polling requirement, you might want to have a look at the http push modules.

Related

How to use libraries in Codename One?

I am wondering about the limitations of using libraries in Codename One. Specifically, I would like to use a specific HTTP client library that uses NIO, but I am not sure if that will even work in Codename One. There is an HTTP/1 client and an HTTP/2 client here:
https://github.com/deanhiller/webpieces
Can the NIO stuff actually be compiled for iOS, or does it have to be a synchronous-socket HTTP client implementation?
thanks,
Dean
It won't work and you can't. This article is from 2016, but it's still mostly accurate. The gist of it is that most of these APIs aren't essential, and if we added all of them, performance/size would balloon to huge numbers.
E.g. a Codename One application can weigh less than 3 MB for iOS production builds with 32- and 64-bit support. Our closest competitors clock in at 50 MB for the same functionality with only 64-bit support. This isn't just a matter of size; it's a matter of quality (QA), maintenance, etc.
This also reduces portability, as we have to test this on all ports, including iOS, UWP, Web, etc.
Having said that, we're open to adding things and have added some features to the core since the publication of that article. But either way, you can't just use an arbitrary jar; you need to use a cn1lib.

Running Node.js on STM32

I am currently working with a C program on an STM32 microcontroller that contains a web server, accessible to the user via a web GUI (HTML and JavaScript files). The web GUI part has become more complex, and it needs higher-level operations.
The questions: is it possible to embed a Node.js program with some Node modules? Would it work with the C web server, or would the Node program have to provide the web server and communicate with the C program?
Or is there another solution that would be better in this case?
This question may seem dumb, but I didn't find documentation about it.
After some research I found a few options:
Node.js for Embedded Systems
The book can then guide you to JerryScript, which "is a lightweight JavaScript engine for resource-constrained devices such as microcontrollers".
There you can find that it's also used with the STM32-Discovery board.
Node.js on the client side
This article points to Browserify, which allows you to run Node code on the client side.
Just make it simpler
You could use the HTTPD implementation shipped with lwIP. There is a script called makefsdata that converts HTML, JS, CSS, etc. files into C arrays. This implementation also supports the POST method.

Custom protocol support

I am not finding documentation for custom protocol support.
From what I understand, Gatling has a core engine that does scheduling, thread management, etc., and protocol support is designed as an Actor?
I am trying to develop a custom protocol (it's basically a shell script that will talk to an external service). The latest reference documentation does not seem to cover how to do this. Any pointers would be greatly appreciated.
If you need to stress test something that is implemented as a shell script, then Gatling probably isn't the best fit. Gatling is designed for stress testing networking protocols. So unless you can express what your shell script is doing in terms of networking protocols Gatling supports, you might want to use something else.
Secondly, if you did implement it, I would check with the core developers of Gatling whether it's something they would consider including (use a GitHub issue to ask). Since the applications of this might not be widespread, they may choose not to include it in their project. If that's the case, you would have to either run your own fork with the implementation or add some sort of plugin architecture to Gatling for third-party extensibility.
So my suggestions are:
Decompose your shell script into the specific network-protocol parts you're interested in stress testing, and implement those in Gatling.
Use a different tool that's designed for running multiple shell scripts at once for stress testing, something like GNU Parallel if you're on a Linux box.
Implement it yourself. There's no documentation on how to do this; however, a good starting example would be the JMS protocol implementation, to give you an idea of all that's involved.

I'm searching for a CGI lib in C to build a RESTful web service

I want to build a RESTful (CoAP) web service that can execute C code to handle events.
Therefore I'm searching for a lib that provides me with a REST API in C and CGI, similar to restcgi (which is sadly in C++) or CGI-Simple (which is in Perl).
The server is running on an embedded device, so it has very limited resources, and the services will be accessed only by machines.
Thank you very much.
You may be interested in the Raphters framework and its architecture. It's pretty small, so you can examine the code; the framework itself can be used as a FastCGI backend for a web server, e.g. nginx.
I have recently come across one quite interesting CoAP library which uses libevent. You will also want to check out the Klone embeddable HTTP server by the same guys at KoanLogic. I previously looked at libcoap, but I didn't find it very usable at the time. You may also wish to try using libuv, libev or libevent. But I guess it's probably going to be much easier to adopt some of the code from the WT repository and get your CoAP/HTTP server done.

How can I best deploy a web application written in C?

Say I have a fancy new algorithm written in C,
int addone(int a) {
    return a + 1;
}
And I want to deploy as a web application, for example at
http://example.com/addone?a=5
which responds with,
Content-Type: text/plain
6
What is the best way to host something like this? I have an existing setup using Python mod_wsgi on Apache2, and for testing I've just built a binary from the C code and call it as a subprocess using Python's os.popen2.
I want this to be very fast and not waste overhead (i.e. I don't need all the other Python stuff). I can dedicate the whole server to it, re-compile anything needed, etc.
I'm thinking about looking into Apache C modules. Is that useful? Or I may build SWIG wrappers to call directly from Python, but again that seems wasteful if I'm not using Python at all. Any tips?
The easiest way would be to write this program as a CGI app (http://en.wikipedia.org/wiki/.cgi). It would run with any web server that supports the Common Gateway Interface.
The output format needs to follow the CGI rules.
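A minimal CGI sketch for the addone example might look like this (the file name and the crude query parsing are illustrative; build it with gcc and drop the binary into the server's cgi-bin directory):

#include <stdio.h>
#include <stdlib.h>

int addone(int a) { return a + 1; }

int main(void)
{
    const char *query = getenv("QUERY_STRING");   /* e.g. "a=5" */
    int a = 0;

    if (query != NULL)
        sscanf(query, "a=%d", &a);                /* crude parsing, no validation */

    /* CGI rules: headers, a blank line, then the body */
    printf("Content-Type: text/plain\r\n\r\n");
    printf("%d\n", addone(a));
    return 0;
}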
If you want to take full advantage of the web server capabilities then you can write an Apache module in C. That needs a bit more preparation but gives you full control.
Maybe this tiny dynamic web server in C, meant to be used with the C language, can help you. It should be easy to use and self-contained.
Probably the fastest solution you can adopt, according to the benchmarks shown on its homepage!
This article from yesterday has a good discussion of why not to use C as a web framework. I think an easy solution for you would be to use ctypes; it's certainly faster than starting a subprocess. You should make sure that your function is thread-safe and that you validate the input argument.
from ctypes import CDLL

libcompute = CDLL("libcompute.so")  # shared library built from the C code; must be on the loader path
result = libcompute.addone(5)       # calls the C function; returns 6
I'm not convinced that your existing general approach isn't the best one. I'm not saying that Apache/Python is necessarily the right choice, but there is something compelling about separating the concerns in your architecture so it is composed of highly focused elements that specialize in their functions within the overall system.
Having your C-based algorithm server decoupled from the HTTP server may give you access to things like HTTP scalability and caching facilities that might otherwise have to be engineered (or reinvented) within your algorithm component if things were too tightly coupled.
I don't think performance concerns in and of themselves are always the best or only reasons when designing an architecture. For example, a YAWS deployment with a C-based driver could be a very performant option.
I have just set up a web service using libmicrohttpd and have had amazing results. On a quad core I've been handling 20,400 requests a second with the CPU at only 58%. This is probably going to be deployed on a server with 8 cores, so I'm expecting much better results. A very simple C service will be even faster!
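For reference, a minimal libmicrohttpd service looks roughly like this (a sketch based on the library's canonical hello-world example; older releases use int instead of enum MHD_Result for the handler's return type and MHD_USE_SELECT_INTERNALLY instead of MHD_USE_INTERNAL_POLLING_THREAD; link with -lmicrohttpd):

#include <microhttpd.h>
#include <stdio.h>
#include <string.h>

static enum MHD_Result handle_request(void *cls, struct MHD_Connection *conn,
                                      const char *url, const char *method,
                                      const char *version, const char *upload_data,
                                      size_t *upload_data_size, void **con_cls)
{
    static const char page[] = "Hello from libmicrohttpd\n";
    struct MHD_Response *response =
        MHD_create_response_from_buffer(strlen(page), (void *)page,
                                        MHD_RESPMEM_PERSISTENT);
    enum MHD_Result ret = MHD_queue_response(conn, MHD_HTTP_OK, response);
    MHD_destroy_response(response);
    return ret;
}

int main(void)
{
    struct MHD_Daemon *daemon = MHD_start_daemon(MHD_USE_INTERNAL_POLLING_THREAD,
                                                 8080, NULL, NULL,
                                                 &handle_request, NULL,
                                                 MHD_OPTION_END);
    if (daemon == NULL)
        return 1;
    getchar();              /* serve requests until a key is pressed */
    MHD_stop_daemon(daemon);
    return 0;
}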
I have tried G-WAN; it is very good, but it's closed source and doesn't play well with virtual environments. I will give #Gil kudos for being good at supporting it here, though. We just had a few issues and found libmicrohttpd works better for our needs.
If you go this route, you may need to update your OpenSSL if you're using CentOS, e.g. from Axivo:
rpm -ivh --nosignature http://rpm.axivo.com/redhat/axivo-release-6-1.noarch.rpm
yum --disablerepo=* --enablerepo=axivo update openssl-devel
You can try Duda I/O; it only requires a Linux host: http://duda.io
