I am currently building a Meebo-style chat server. The architecture is roughly as follows.
Bitlbee (on top of libpurple) is on host B, an ordinary server in a data center.
The user communicates with Bitlbee via a web server (just like Meebo) on host A. The backend of this web server maintains the chat session; it simply translates user commands into the proper Bitlbee commands and forwards them to host B.
The most important constraint here is that host A will be deployed on embedded Linux.
I have 2 questions.
To keep the chat session persistent I am thinking of using node.js, since it makes building a real-time application with persistent connections much easier. But I doubt whether it is supported on such a platform.
If I use C instead of node.js (with no off-the-shelf web server), I can talk to the IRC server on host B with libirc. But how do I implement all the web server features in C (sessions, URL/cookie/POST data parsing, etc.)?
Also, if you think my approach is wrong or there is a better one, please tell me how I can improve this architecture.
Note: this is NOT a high-volume chat server.
If building V8/Node.js is prohibitive on the embedded platform, the next best thing would be to take Node's event loop and platform layer (libuv) and its HTTP parser (http-parser), both written in C, and use those as a starting point. These are the same libraries Node.js itself is built on, so they are battle-tested and will give you the performance characteristics you seek.
Ryan Dahl, author of Node.js, demonstrates exactly how to use libuv and http-parser to build an asynchronous web server in C.
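For a sense of the shape of such a server, here is a minimal sketch against the current libuv 1.x API (which has drifted since that demonstration). The port and the canned response are placeholders, and a real server would feed the incoming bytes through http-parser instead of ignoring them:

    /* Build with: gcc server.c -luv */
    #include <stdlib.h>
    #include <uv.h>

    static const char response[] =
        "HTTP/1.1 200 OK\r\nContent-Length: 3\r\n\r\nok\n";

    static void on_close(uv_handle_t *h) { free(h); }

    static void on_write(uv_write_t *req, int status) {
        (void)status;
        uv_close((uv_handle_t *)req->handle, on_close);
        free(req);
    }

    static void on_alloc(uv_handle_t *h, size_t size, uv_buf_t *buf) {
        (void)h;
        buf->base = malloc(size);
        buf->len = size;
    }

    static void on_read(uv_stream_t *client, ssize_t nread, const uv_buf_t *buf) {
        if (nread > 0) {
            /* A real server would run buf->base through http-parser here. */
            uv_write_t *req = malloc(sizeof *req);
            uv_buf_t out = uv_buf_init((char *)response, sizeof response - 1);
            uv_write(req, client, &out, 1, on_write);
        } else if (nread < 0) {
            uv_close((uv_handle_t *)client, on_close);
        }
        free(buf->base);
    }

    static void on_connection(uv_stream_t *server, int status) {
        if (status < 0) return;
        uv_tcp_t *client = malloc(sizeof *client);
        uv_tcp_init(server->loop, client);
        if (uv_accept(server, (uv_stream_t *)client) == 0)
            uv_read_start((uv_stream_t *)client, on_alloc, on_read);
        else
            uv_close((uv_handle_t *)client, on_close);
    }

    int main(void) {
        uv_tcp_t server;
        struct sockaddr_in addr;

        uv_tcp_init(uv_default_loop(), &server);
        uv_ip4_addr("0.0.0.0", 8080, &addr);
        uv_tcp_bind(&server, (const struct sockaddr *)&addr, 0);
        uv_listen((uv_stream_t *)&server, 128, on_connection);
        return uv_run(uv_default_loop(), UV_RUN_DEFAULT);
    }

From there, http-parser's callbacks hand you the method, URL, and headers, which is enough to implement sessions and routing by hand.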
Put a ZNC server between Bitlbee and the web-based IRC client. Bitlbee will think that the user has never logged out and ZNC can maintain a backlog of messages until the user connects again with the web client.
I would try to go with node.js if that is your choice. Also, what embedded system is it? Knowing that would help. Another plus for node.js is that it has session handling built in; if you want to do it in C, see whether you can get SQLite running on the embedded device to store the session information.
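The SQLite C API is small enough for this. A hedged sketch, with the database path and table layout invented purely for illustration:

    #include <sqlite3.h>

    /* Store or refresh one session row; returns 0 on success. */
    static int save_session(sqlite3 *db, const char *sid, const char *user) {
        sqlite3_stmt *stmt;
        const char *sql =
            "INSERT OR REPLACE INTO sessions(id, username, last_seen) "
            "VALUES (?, ?, strftime('%s','now'));";

        if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) != SQLITE_OK)
            return -1;
        sqlite3_bind_text(stmt, 1, sid, -1, SQLITE_STATIC);
        sqlite3_bind_text(stmt, 2, user, -1, SQLITE_STATIC);
        int rc = (sqlite3_step(stmt) == SQLITE_DONE) ? 0 : -1;
        sqlite3_finalize(stmt);
        return rc;
    }

    int main(void) {
        sqlite3 *db;
        if (sqlite3_open("/var/lib/chat/sessions.db", &db) != SQLITE_OK)
            return 1;
        sqlite3_exec(db,
            "CREATE TABLE IF NOT EXISTS sessions("
            "id TEXT PRIMARY KEY, username TEXT, last_seen INTEGER);",
            NULL, NULL, NULL);
        save_session(db, "abc123", "alice");
        sqlite3_close(db);
        return 0;
    }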
But if possible, stick with whatever means the least work on the embedded device; it feels bad to reinvent a lot of infrastructure or to fight compile issues for your target.
Related
I need to provide a RESTful API for a Linux daemon process that maintains and manipulates an in-memory table (a simple C structure of arrays). This daemon acts as a configuration entity and relays the table contents to another process at boot-up or on a configuration request.
In this context I would be happy to get the following information:
Would it be better to embed a web server in the daemon, or to run an independent web server that talks to it? Please note this server would not be required to handle heavy load.
Please suggest some good web servers with good REST support.
If an independent web server, what is the best mechanism for web-server-to-daemon communication?
Please note this would be deployed on a small embedded board running Debian.
A possible solution is to develop a CGI program that is executed by any web server (Apache, lighttpd, ...). This program connects to the main daemon through an IPC mechanism such as sockets, FIFOs, or message queues, and after interacting with the daemon it returns the desired output to the REST client.
The CGI program can be written in any language, but if you want to write it in C, check this project: it's a CGI program written in C that takes commands for an IP camera. The connection with a main daemon is not implemented, since it was outside the scope of the project. I like it because it has an embedded XML parser and does not require any external library.
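The shape of such a CGI is small. Here is a hedged C sketch, assuming the daemon listens on a Unix socket at a made-up path and answers one request per connection:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    int main(void) {
        const char *query = getenv("QUERY_STRING");   /* e.g. "cmd=dump" */
        if (!query) query = "";

        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, "/tmp/configd.sock", sizeof addr.sun_path - 1);

        /* CGI output: HTTP headers first, then the body. */
        printf("Content-Type: text/plain\r\n\r\n");

        if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
            printf("daemon unavailable\n");
            return 0;
        }
        write(fd, query, strlen(query));      /* forward the request */
        shutdown(fd, SHUT_WR);

        char buf[4096];                       /* relay the daemon's reply */
        ssize_t n;
        while ((n = read(fd, buf, sizeof buf)) > 0)
            fwrite(buf, 1, (size_t)n, stdout);
        close(fd);
        return 0;
    }

Because CGI spawns a process per request this only makes sense at low request rates, which matches the stated load.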
I am querying the Solr search engine from my webapp. Right now I am calling the Solr server from Python using
"http://localhost:8983/solr/select?q="
But I heard that using a Unix socket instead helps reduce TCP/IP overhead. How do I go about it? Thanks in advance.
None of the popular application containers support connections over local Unix sockets, and usually for good reason: connection setup is a very, very small part of the request/response cycle, so unless you are actually seeing issues from TCP/IP setup and teardown, there is very little reason to do this.
Client libraries do not support local Unix sockets either; they talk TCP/IP to the server exclusively (and assume a URL as the endpoint).
You can use EmbeddedSolr to integrate Solr directly into a Java project and use it without any request overhead, but the request overhead of a search application is small compared with the time taken to perform the actual search and to handle the request in the application container.
But I can't seem to find much about how the NX protocol actually works. I have heard it does something with sending X11 commands. But does this mean that the listening clients need to have an X server to run the actual commands and display them?
Basically, I am trying to figure out if it is possible to write an NX client for a web browser, because it sounds interesting to me. Thoughts?
Yes. NX is essentially compressed X-Window protocol.
It's not a spec, but here is a general introduction to how it works: http://www.nomachine.com/documents/NX-XProtocolCompression.php
The client doesn't need to be an X-server, but it will probably need to be able to handle at least some subset of the X protocol.
If you are going to create a web-based NX client, make sure you look at noVNC, which is a web-based VNC/RFB client. Better yet, fork noVNC and add NX support. That way you don't have to waste time on input, events, positioning, networking, etc.
Disclaimer: I am the creator of noVNC. Implementing other remote desktop protocols (NX, RDP, Spice) is on my long-term to-do list (part of the reason for the name). If you're serious, contact me via GitHub and I can give you some direction/thoughts and put you in touch with somebody else who has also expressed interest.
I've never managed to move from unit-testing to integration-testing in any graceful or automated way when it comes to network code.
So my question is: given a simple single-threaded client/server network application, how would you go about integrating both client and server into your favorite testing suite (I currently use check)?
I am of course willing to change unit-test suites to accomplish my goal.
Edit: While I appreciate the answers, I was more looking for some magical way of integrating integration testing into my unit-test framework (if that's possible at all), for example whether fork() or something similar could be applied without too many side effects.
Another approach is to mock up both ends, with a dummy server and a dummy client that just send the messages you want to test and verify the responses are as expected. These mock servers can be really, really dumb: they only need to read/write sockets and dump pre-set data back. You can spiff them up a bit by templating the responses on data from the requests, if that is easy to parse.
The win here is that you know exactly what the mocked item is going to do (including faking timeouts, sending garbage, whatever you want).
It would probably be very easy to use a Perl or Python socket library to build your mock servers and clients; if you use Perl, you should be able to use the very capable Test:: modules from CPAN to do the actual did-this-work checks and reporting.
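If you would rather keep everything in C, a dummy server of this kind also fits in a page. A sketch, where the port and the canned reply are arbitrary and error handling is omitted:

    #include <unistd.h>
    #include <sys/socket.h>
    #include <arpa/inet.h>

    int main(void) {
        const char canned[] = "220 fake-server ready\r\n";
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port = htons(9999),
            .sin_addr.s_addr = htonl(INADDR_LOOPBACK),
        };
        bind(srv, (struct sockaddr *)&addr, sizeof addr);
        listen(srv, 1);

        int cli = accept(srv, NULL, NULL);      /* serve one test client */
        char buf[1024];
        read(cli, buf, sizeof buf);             /* request content is ignored */
        write(cli, canned, sizeof canned - 1);  /* always the same answer */
        close(cli);
        close(srv);
        return 0;
    }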
netcat is a great tool for testing network servers and clients.
man netcat says that netcat is a "TCP/IP swiss army knife". Having experience with both netcat and a Victorinox Swiss army knife, I can assure you that netcat is much better than Victorinox - I'd rather compare it to a Leatherman.
We structure our applications so that the core code is in a library and the executable is generated from a main.c (really main.cxx in our case) that is just a very thin wrapper that starts the server or client. This lets us set up test suites that can instantiate a complete server and client in-process and do tests where they talk to one another using their normal network protocol. It works quite well.
If you can't structure things this way, you could start your usual server executable using fork/CreateProcess and then have the client code inside the test talk to the external server.
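A hedged sketch of that second approach; the binary name, the flag, and the sleep-based readiness wait are all placeholder assumptions (a real fixture would poll until the port accepts connections):

    #include <signal.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static pid_t server_pid;

    static void start_server(void) {
        server_pid = fork();
        if (server_pid == 0) {                 /* child: become the server */
            execl("./myserver", "myserver", "--port", "9999", (char *)NULL);
            _exit(127);                        /* exec failed */
        }
        sleep(1);                              /* crude "wait until up" */
    }

    static void stop_server(void) {
        kill(server_pid, SIGTERM);
        waitpid(server_pid, NULL, 0);
    }

    int main(void) {
        start_server();
        /* ... run the normal client code against localhost:9999 and
         * assert on the responses, e.g. from check test cases ... */
        stop_server();
        return 0;
    }

In check, start_server/stop_server map naturally onto a test-case fixture (tcase_add_unchecked_fixture), so every test in the suite sees a live server.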
I want to get some ideas on how I might control a video camera through an IP address. I have an API to control pan and tilt from a local machine. The code is going to be in C/C++ on Windows. I am still deciding whether I want multiple cameras controlled from one application or one application per camera. Would SOA be a useful architecture for structuring my messaging?
I think you could be well served by something like REST for a task like this. Executing a command against a REST server is really intuitive and simple, which sounds just like what you need. I'd probably make an application that runs inside a web server, since that handles most of the infrastructure for you, including authentication if needed. I'm sure both Apache and IIS could do this for you quite easily. Even though your API is coded in C, you could also consider using some higher-level scripting language as a client to the API (inside the web server).
SOA sounds a little overkill for a task like this.
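For example, the resource layout could be as simple as this (a hypothetical scheme, purely to illustrate the REST style):

    GET /cameras                    list the cameras the service knows about
    GET /cameras/1/position         read camera 1's current pan/tilt
    PUT /cameras/1/position         move camera 1, e.g. body "pan=15&tilt=-10"

Each URL maps onto one call into your existing pan/tilt API, and the web server takes care of everything else.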
I did something similar for a project at my university. We had the cameras connected to a LAN, and communicating with them via message passing was very easy; it is the same as communicating with any PC. We used a single application to communicate with all of them. You can use SOA or any architecture you consider convenient; that depends on your application.
In our case it was just an ad hoc architecture; it was not a complex thing.
Hessian is nice. It is basically REST, but with a binary protocol that is more efficient than XML, and it also lets you make calls from other languages quite easily. So you could develop the client GUI application in C# and the server in C. There are free libraries for a few different languages available.
http://hessian.caucho.com/