So I've created a simple chatserver / chatclient in C. The chatclient reads from stdin and outputs to stdout. The goal is to adapt this to a frontend web UI. I was thinking of using React, and it seems like the most commonly used options are the socket.io library or plain WebSockets.
So my big question is: can I replace my chatclient I've built in C with a React chatclient that uses socket.io or Websockets to connect to my chatserver in C?
Are the two sockets compatible?
The answer is both "Yes" and "No", but the answer given by justcivah probably should have been "No", since it lacks vital information (edit: the referenced answer was deleted)...
The reason is that WebSocket is a protocol, so your server needs to understand that specific protocol.
The WebSocket protocol usually runs as an additional layer over TCP/IP sockets (which is probably the only layer your C chatserver has), and it adds framing ("header") data before each "message" (payload). WebSockets also requires an HTTP-based handshake (though other possible handshakes might be added in the future).
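As a concrete illustration, the opening handshake defined in RFC 6455 looks like this on the wire (the key/accept values are the sample pair from the RFC; the server derives `Sec-WebSocket-Accept` by appending the GUID `258EAFA5-E914-47DA-95CA-C5AB0DC85B11` to the client's key, SHA-1 hashing, and Base64-encoding the result):

```http
GET /chat HTTP/1.1
Host: example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

Only after this exchange do framed WebSocket messages start flowing, so your C server would need to implement both the handshake and the frame format - which is why a library is usually the practical choice.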
There are a number of different WebSocket libraries in C that could help you implement the WebSocket protocol in your C chatserver - just search for "C WebSocket framework" or something like that.
We have a requirement to write to a single file from multiple instances of a Camel interface running simultaneously.
The file is on a Windows shared file system that has been mounted on the JBoss server using SMB.
We are using the Camel file component to write the file from each instance as a local file.
Below is the endpoint URI in camel context
file:/fuse/server/location/proc?fileName=abc.csv&fileExist=Append
The generated file has no issues when the writes come from a single instance, but with multiple instances junk characters are added to the file at random lines.
We are using JBoss Fuse 6.0.0, and the interfaces have been written using Camel 2.10.
How can this be fixed? Is this an issue with the SMB mount, or does the interface need to handle it?
I've had a look at the source code of the relevant Camel component (https://github.com/apache/camel/tree/master/camel-core/src/main/java/org/apache/camel/component/file) and there isn't any built-in support for concurrent access to a single file from multiple JVMs. Concurrent access from a single JVM is handled, though.
I think you have two basic options to address your requirement:
Write some code to support shared access to a single file. The camel file component looks like it was built with extension in mind, or you could just create a standalone component to do this.
As @Namphibian suggests, use some queuing system to serialise your writes (though I don't think seda will work, as it doesn't span JVMs).
My solution would be to use ActiveMQ. Each instance of your application would send messages to a single, shared queue. A separate process would then consume the messages from MQ and write them to disk.
With a single process consuming all MQ messages there would be no concurrent writes to the filesystem.
A more robust solution would be to run ActiveMQ in a cluster (possibly with a node in each of your application instances). Look at "JMSXGroupID" to prevent concurrent consumption of messages.
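As a rough sketch of that approach (the queue name and `direct:` endpoint are hypothetical, and the broker connection would be configured on an `activemq` component bean in your camel-context), every instance produces to the shared queue, while a single designated consumer process owns the file:

```xml
<!-- In every instance: send lines to the shared queue instead of the file -->
<route>
  <from uri="direct:csvLine"/>
  <to uri="activemq:queue:csv.abc"/>
</route>

<!-- In ONE consumer process only: drain the queue and append to the file -->
<route>
  <from uri="activemq:queue:csv.abc"/>
  <to uri="file:/fuse/server/location/proc?fileName=abc.csv&amp;fileExist=Append"/>
</route>
```

With a single consumer there is exactly one writer, so the SMB mount never sees interleaved writes.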
I'm searching for an efficient solution to make a C Freebsd platform communicate with a Node.js server.
Here are my requirements:
the client must be able to send binary streams and text messages (typically JSON messages)
those messages can be multiplexed: for instance, a text message can be sent while a binary stream is being sent.
I was heading towards the libwebsockets library, but it seems to be a half-dead project - no multiplexing extension support, for instance.
I am considering implementing a TCP/IP solution myself.
I want to perform an HTTP POST and/or PUT (using libcurl) with the request body compressed using gzip. I haven't been able to find any native support for this in libcurl, and am wondering if I just haven't found the correct documentation or if there really is no support for it. (i.e. will I have to implement my own wrapper to gzip the request body?)
HTTP has no "automatic" or negotiated compression for requests, you need to do them explicitly by yourself before sending the data.
Also, I disagree with Aditya, who provided another answer (given my bias that's not strange), but I would say that libcurl is one of the best possible options for doing HTTP requests from C (or C++, or other languages)...
I would recommend not using libcurl if you are interested in automatic proxy detection and the system certificate store (think HTTPS/SSL support), along with things like gzipping of requests.
You could use zlib to get out of this particular situation, but what about the other scenarios? It is better to use platform APIs for network requests, even if the overall program is platform-independent C++.
Having looked at several available http server libraries I have not yet found what I am looking for and am sure I can't be the first to have this set of requirements.
I need a library that presents a 'pipelined' API. Pipelining describes the HTTP feature where multiple HTTP requests can be sent across a TCP link without waiting for a response. I want a similar feature in the library API, where my application can receive all of those requests without having to send a response first (I will respond, but I want the ability to process multiple requests at a time to reduce the impact of internal latency).
So the web server library will need to support the following flow
1) HTTP Client transmits http request 1
2) HTTP Client transmits http request 2 ...
3) Web Server Library receives request 1 and passes it to My Web Server App
4) My Web Server App receives request 1 and dispatches it to My System
5) Web Server receives request 2 and passes it to My Web Server App
6) My Web Server App receives request 2 and dispatches it to My System
7) My Web Server App receives response to request 1 from My System and passes it to Web Server
8) Web Server transmits HTTP response 1 to HTTP Client
9) My Web Server App receives response to request 2 from My System and passes it to Web Server
10) Web Server transmits HTTP response 2 to HTTP Client
Hopefully this illustrates my requirement. There are two key points to recognise: responses to the Web Server Library are asynchronous, and there may be several HTTP requests passed to My Web Server App with responses outstanding.
Additional requirements are
Embeddable into an existing 'C' application
Small footprint; I don't need all the functionality available in Apache etc.
Efficient; will need to support thousands of requests a second
Allows asynchronous responses to requests; there is a small latency on responses, and given the required request throughput a synchronous architecture is not going to work for me.
Support persistent TCP connections
Support use with Server-Push Comet connections
Open Source / GPL
support for HTTPS
Portable across linux, windows; preferably more.
I will be very grateful for any recommendation
Best Regards
You could try libmicrohttpd.
Use the Onion, Luke. This is lightweight and easy to use HTTP server library in C.
For future reference, take a look at libasyncd, which meets your requirements. (Disclosure: I'm one of the contributors.)
Embeddable into an existing 'C' application
It's written in C.
Small footprint; I don't need all the functionality available in Apache etc.
Very compact.
Efficient; will need to support thousands of requests a second
It's a libevent-based framework. It can handle more than that.
Allows asynchronous responses to requests;
It's asynchronous, and it also supports pipelining.
Support persistent TCP connections
Sure, keep-alive.
Support use with Server-Push Comet connections
It's up to how you code your logic.
Open Source / GPL
under BSD license
support for HTTPS
Yes, it supports HTTPS via OpenSSL.
Portable across linux, windows; preferably more.
Portable, though not to Windows at this moment; porting it to Windows would be feasible.
What you want is something that supports HTTP pipelining. You should make yourself familiar with that page if you are not already.
Yes, go for libmicrohttpd. It has support for SSL etc. and works on both Unix and Windows.
However, Christopher is spot on in his comment. If you have a startup time for each response, you are not going to gain much by pipelining. However, if only the first request incurs a significant response time, you may win something.
On the other hand, if each response has a startup time, you may gain a lot by not using pipelining and instead creating a new request for each object. Then each request can have its own thread, absorbing the startup costs in parallel. All responses will then be sent "at once" in the optimal case. libmicrohttpd supports this mode of operation in its MHD_USE_THREAD_PER_CONNECTION thread model.
Following up on previous comments and updates...
You don't say how many concurrent connections you'll have but just "a TCP link".
If it's a single connection, then you'll be using HTTP pipelining as previously mentioned; so you would only need a handful of threads — rather than thousands — to process the requests at the head of the pipeline.
So you wouldn't need to have a thread for every request; just a small pool of workers for each connection.
Have you done any testing or implementation so far to show whether you actually do have problems with response latency for pipelined connections?
If your embedded device is powerful enough to cope with thousands of requests per second, including doing TLS setup, encryption and decryption, I would worry about premature optimisation at this level.
Howard,
Have you taken a look at lighttpd? It meets all of your requirements except that it isn't explicitly an embeddable web server. But it is open source, and compiling it into your application shouldn't be too hard. You can then write a custom plugin to handle your requests.
Can't believe no one has mentioned nginx. I've read large portions of the source-code and it is extremely modular. You could probably get the parts you need working pretty quickly.
uIP or lwip could work for you. I personally use uIP. It's good for a small number of clients and concurrent connections (or as you call it, "pipelining"). However, it's not as scalable or as fast at serving up content as lwip from what I've read. I went with simplicity and small size of uIP instead of the power of lwip, as my app usually only has 1 user.
I've found uIP pretty limited as the number of concurrent connections increases. However, I'm sure that's a limitation of my available MAC receive buffers and not of uIP itself. I think lwip uses significantly more memory in some way to get around this. I just don't have enough Ethernet RAM to support a ton of request packets coming in. That said, I can do background AJAX polling with about a 15 ms latency on a 56 MHz processor.
http://www.sics.se/~adam/software.html
I've actually modified uIP in several ways. (Adding a DHCP server and supporting multipart POST for file uploads are the big things.) Let me know if you have any questions.