How to integration-test a network application in C

I've never managed to move from unit-testing to integration-testing in any graceful or automated way when it comes to network code.
So my question is: given a simple single-threaded client/server network application, how would you go about integrating both client and server into your current favorite testing suite? (I currently use check.)
I am of course willing to change unit-test suite to accomplish my goal.
Edit: While I appreciate the answers, I was looking more for some magical way of integrating integration testing into my unit-test framework (if that's possible at all), for example whether fork() or something similar could be applied without too many side effects.

Another approach is to mock up both ends with a dummy server and a dummy client that just send the messages you want to test and verify that the responses are as expected. These mock servers can be really, really dumb: they only need to read/write sockets and dump pre-set data back. You can spiff them up a bit by templating the responses from data in the requests if it's easy to parse.
The win here is that you know exactly what the mocked item is going to do (including fake timeouts, send garbage, whatever you want).
It would probably be very easy to use a Perl or Python socket library to build your mock servers and clients; if you use Perl, you should be able to use the very capable Test:: modules from CPAN to help with the actual "did this work" checks and reporting.
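Since the rest of the question is about C, a dummy server along those lines can also be written directly with POSIX sockets. The sketch below is only an illustration: the port and the canned reply are placeholders, and all error checking is omitted.

    /* Dummy server: accept one connection, ignore the request, send a canned reply. */
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int run_mock_server(void)
    {
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        addr.sin_port = htons(5555);               /* placeholder test port */

        bind(listener, (struct sockaddr *)&addr, sizeof addr);
        listen(listener, 1);

        int conn = accept(listener, NULL, NULL);
        char request[1024];
        read(conn, request, sizeof request);       /* content is ignored on purpose */

        const char *canned = "200 OK\r\n";         /* pre-set data to verify against */
        write(conn, canned, strlen(canned));

        close(conn);
        close(listener);
        return 0;
    }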

netcat is a great tool for testing network servers and clients.
The netcat man page describes it as a "TCP/IP swiss army knife". Having experience with both netcat and a Victorinox Swiss army knife, I can assure you that netcat is much better than the Victorinox; I'd rather compare it to a Leatherman.

We structure our applications so that the core code is in a library and the executable is generated from a main.c (really main.cxx in our case) that is just a very thin wrapper that starts the server or client. This lets us set up test suites that can instantiate a complete server and client in-process and run tests where they talk to one another using their normal network protocol. It works quite well.
If you can't structure things this way, you could start your usual server executable using fork/CreateProcess and then have the client code inside the test talk to the external server.
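To tie this back to the original question about fork() inside a unit-test framework, a Check test case along these lines is sketched below. start_server() and client_request() are hypothetical stand-ins for your real server loop and client code, the port and expected reply are placeholders, and the usual Check suite/runner boilerplate is left out.

    #include <check.h>
    #include <signal.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    void start_server(int port);                              /* hypothetical: runs until killed */
    int  client_request(int port, char *reply, size_t len);   /* hypothetical client call */

    START_TEST(test_client_server_roundtrip)
    {
        pid_t server = fork();
        if (server == 0) {
            start_server(5555);                  /* child: run the real server */
            _exit(0);
        }

        usleep(100 * 1000);                      /* crude: give the listener time to come up */

        char reply[128] = "";
        ck_assert_int_eq(client_request(5555, reply, sizeof reply), 0);
        ck_assert_str_eq(reply, "PONG");         /* whatever your protocol returns */

        kill(server, SIGTERM);                   /* tear the server down again */
        waitpid(server, NULL, 0);
    }
    END_TEST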

Sending smtp email from microcontroller

This may not be in the right location, so tell me and I'll move it.
I am a recent EE grad, and I was hired to build a system that runs on a SoC with a simple 32-bit processor. The system basically monitors several external devices, performs some DSP on the data, and is then supposed to send the results using a WiFi device (in my case an ESP8266 using UDP) to an email server for logging/notification.
I have been trying to find a library that I can use, but my uC can only be programmed in C and I have it set up for UDP, while everything I find is in C++, uses some other protocol, or is something else completely.
I am great at DSP and decent at SoCs and uCs, but when it comes to this email server communication I am at a loss.
I have successfully configured everything for the sensors, the datapath, the DSP, and connected the system to my WiFi via UDP, but I have yet to figure out how to send data to any servers.
Could someone help me understand how I should go about this?
I have looked into some simple SMTP commands such as HELO, MAIL, RCPT, DATA, etc. but I cannot understand how I actually should implement them in my code.
When I send out the WiFi data via UDP, what type of data do I send and how do I format it? Do I need to send any other kind of flags? What response should I expect? I also know the data has to be transformed into base64, which is confusing me further.
I am also not super familiar with UDP to begin with; I have been using libraries that are part of the SoC's default library to connect to my WiFi.
I know these may seem like obvious or stupid questions, but this is where my knowledge runs out, and everything I find online either doesn't make sense or doesn't attempt to explain anything and just gives a pre-made solution.
I have found RFC 2821, but it doesn't make things any clearer.
I know that's a lot but any help at all would be a lifesaver!
Since you are asking this question, I'm assuming that you are not booting and running an OS suitable for microcontrollers, such as an embedded variant of Linux. If you were, you would simply be able to take advantage of any built-in applications or other existing code.
But you don't mention having written an Ethernet stack, so are you using some other library or operating environment which might have some of the functionality needed for an implementation of SMTP?
If you don't, and really do need to write your own SMTP client to run directly on the processor you are using, then you should be able to find plenty of example source code for this. A quick Google search for "how to write an SMTP client" showed a few articles with some example code. One article seems to be an exact hit, but you would need to look at it further.
However, I would highly suggest just sitting down with a telnet client, connecting to an SMTP server you are allowed to use, and trying the commands you need to send a message. If you only need to send text, you don't need to get involved in MIME encoding or anything like that.
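To make that concrete, the sketch below shows the same exchange driven from C over an already-connected, blocking TCP socket (SMTP is carried over TCP). The host names, addresses, and message are placeholders, and a real client must read and check the numeric reply code after every command instead of blindly calling read().

    #include <string.h>
    #include <unistd.h>

    static void smtp_line(int fd, const char *line)
    {
        write(fd, line, strlen(line));
        write(fd, "\r\n", 2);                       /* SMTP lines end in CRLF */
    }

    static void smtp_reply(int fd)
    {
        char buf[512];
        read(fd, buf, sizeof buf);                  /* e.g. "250 OK"; should be checked */
    }

    void send_simple_mail(int fd)
    {
        smtp_reply(fd);                             /* 220 greeting */
        smtp_line(fd, "HELO mydevice.example");        smtp_reply(fd);
        smtp_line(fd, "MAIL FROM:<soc@example.com>");  smtp_reply(fd);
        smtp_line(fd, "RCPT TO:<log@example.com>");    smtp_reply(fd);
        smtp_line(fd, "DATA");                         smtp_reply(fd);   /* 354 */
        smtp_line(fd, "Subject: sensor report");
        smtp_line(fd, "");                          /* blank line ends the headers */
        smtp_line(fd, "temperature=23.4");
        smtp_line(fd, ".");                         /* a lone dot ends the message */
        smtp_reply(fd);
        smtp_line(fd, "QUIT");
    }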

Chat *Server* on Embedded platform

I am currently building a chat server (Meebo-style).
The architecture is something like this.
Bitlbee over libpurple runs on host B, a trivial server in a data center.
The user communicates with Bitlbee via a web server (just like Meebo) on host A. The backend of this web server maintains the chat session; it just translates user commands into the proper Bitlbee commands and sends them on to host B.
The most important part here is that host A will be deployed on embedded Linux.
I have 2 questions.
To keep the chat session persistent, I am thinking of using node.js, as it is much easier to create a real-time application with a persistent connection that way. But I doubt whether it is supported on such a platform.
If I use C instead of node.js (I am not using any web server), I can talk to the IRC server from host A using libirc. But how do I implement all the web server features in C (like sessions, URL/cookie/POST data parsing, etc.)?
Also, if you think my approach is wrong or there is a better approach, please tell me how I can improve this architecture.
Note: This is NOT a high volume chat server.
If building V8/Node.js is prohibitive on the embedded platform, the next best thing would be to take the event loop and platform layer (libuv) and the HTTP parser (http-parser) of Node, both written in C, and use those as a starting point. These are the same libraries used to build Node.js, so they are battle-tested and will give you the performance characteristics you seek.
Ryan Dahl, author of Node.js, demonstrates exactly how to use libuv and http-parser to build an asynchronous web server in C.
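For a flavour of what that looks like, here is a minimal listener sketch (assuming libuv 1.x); the http-parser hookup would live in the read callback, which is only hinted at in a comment.

    #include <stdio.h>
    #include <stdlib.h>
    #include <uv.h>

    static void on_close(uv_handle_t *handle) { free(handle); }

    static void on_connection(uv_stream_t *server, int status)
    {
        if (status < 0) {
            fprintf(stderr, "listen error: %s\n", uv_strerror(status));
            return;
        }
        uv_tcp_t *client = malloc(sizeof *client);
        uv_tcp_init(server->loop, client);
        if (uv_accept(server, (uv_stream_t *)client) == 0) {
            /* here you would call uv_read_start() and feed the bytes to http-parser */
        }
        uv_close((uv_handle_t *)client, on_close);  /* closed immediately in this sketch */
    }

    int main(void)
    {
        uv_loop_t *loop = uv_default_loop();
        uv_tcp_t server;
        struct sockaddr_in addr;

        uv_tcp_init(loop, &server);
        uv_ip4_addr("0.0.0.0", 8000, &addr);
        uv_tcp_bind(&server, (const struct sockaddr *)&addr, 0);
        uv_listen((uv_stream_t *)&server, 128, on_connection);

        return uv_run(loop, UV_RUN_DEFAULT);
    }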
Put a ZNC server between Bitlbee and the web-based IRC client. Bitlbee will think that the user has never logged out and ZNC can maintain a backlog of messages until the user connects again with the web client.
I would try to go with node.js if that is your choice. Also, what embedded system is it? Knowing that would help. Another plus for node.js is that it has session handling built in; but if you want to do it in C, see if you can get a SQLite wrapper running on the embedded device to store the session information (a sketch follows below).
But if possible, stick to something that requires less work on embedded devices; it feels bad to reinvent a lot of things or have to fiddle with compile issues for your device.
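If you do go the C-plus-SQLite route for sessions, the idea is roughly the sketch below. The table layout and function name are invented for illustration, and real code should use prepared statements rather than snprintf to build SQL.

    #include <stdio.h>
    #include <sqlite3.h>

    int save_session(const char *session_id, const char *user)
    {
        sqlite3 *db;
        char sql[256];

        if (sqlite3_open("sessions.db", &db) != SQLITE_OK)
            return -1;

        sqlite3_exec(db,
            "CREATE TABLE IF NOT EXISTS sessions (id TEXT PRIMARY KEY, user TEXT)",
            NULL, NULL, NULL);

        snprintf(sql, sizeof sql,
                 "INSERT OR REPLACE INTO sessions VALUES ('%s', '%s')",
                 session_id, user);                  /* use sqlite3_prepare_v2 in real code */
        sqlite3_exec(db, sql, NULL, NULL, NULL);

        sqlite3_close(db);
        return 0;
    }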

REST client library in C

Is there a good C library that I can use in my client application for talking to REST servers?
libcurl comes to mind, as REST is based around basic HTTP requests.
Of course this is just a starting point; you'd need to write a little logic on top of it. I'm not sure if what you're looking for is a source-generating solution where you can point it at a service descriptor and have stubs produced automatically, or whether you're just looking for connectivity.
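Assuming you are just after connectivity, a minimal libcurl sketch of a GET against a REST endpoint might look like this; the URL is a placeholder, and a real client would also install a write callback to capture the response body and check the HTTP status code.

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        if (curl) {
            curl_easy_setopt(curl, CURLOPT_URL, "https://api.example.com/items/42");
            curl_easy_setopt(curl, CURLOPT_HTTPGET, 1L);
            CURLcode res = curl_easy_perform(curl);   /* response body goes to stdout by default */
            if (res != CURLE_OK)
                fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));
            curl_easy_cleanup(curl);
        }
        curl_global_cleanup();
        return 0;
    }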

Sending calls to libraries remotely across linux

I am developing some experimental setup in C.
I am exploring a scenario as follows and I need help to understand it.
I have a system A which has a lot of Applications using cryptographic algorithms.
But these crypto calls (OpenSSL calls) should be sent to another system, B, which takes care of the cryptography.
Therefore, I have to send any calls to the cryptographic (OpenSSL) engines via a socket to a remote system (B) which has OpenSSL support.
My plan is to have a small socket program on System A which forwards these calls to System B.
What I'm still unclear about at the moment is how to handle the received commands at System B.
Do I actually take these commands and translate them into the corresponding OpenSSL calls locally on that system? That means I have to program whatever is done on System A, right?
Or is there a way to tunnel/send these raw lines of code directly to the OpenSSL libs, just receive the result, and then resend it to System A?
How do you think I should go about the problem?
PS: Oh, by the way, the cryptographic calls (like EngineUpdate, VerifyFinal, or Digest) on System A can be in either Java or C. I have already written a Java/C program to send these commands to System B via sockets...
The problem is only on System B and how I have to handle the commands there.
You could use sockets on B, but that means you need to define a protocol for that. Or you use RPC (remote procedure calls).
Examples for socket programming can be found here.
RPC is explained here.
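If you go the plain-sockets route, the handler on System B essentially reads a request, performs the real OpenSSL call locally, and writes the result back. Below is a rough sketch under invented assumptions: a made-up wire format (one opcode byte, a 4-byte big-endian length, then the payload), a single "digest" operation mapped to SHA-256 via the EVP API, and OpenSSL 1.1.0+ for EVP_MD_CTX_new/EVP_MD_CTX_free.

    #include <stdint.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <openssl/evp.h>

    static int handle_request(int client_fd)
    {
        uint8_t  op;
        uint32_t nlen;

        if (read(client_fd, &op, 1) != 1 || read(client_fd, &nlen, 4) != 4)
            return -1;

        uint32_t len = ntohl(nlen);
        unsigned char *buf = malloc(len);
        /* a real server must loop here: read() may return less than len */
        if (!buf || read(client_fd, buf, len) != (ssize_t)len) {
            free(buf);
            return -1;
        }

        if (op == 0x01) {                            /* made-up "digest" opcode */
            unsigned char md[EVP_MAX_MD_SIZE];
            unsigned int  mdlen = 0;
            EVP_MD_CTX *ctx = EVP_MD_CTX_new();
            EVP_DigestInit_ex(ctx, EVP_sha256(), NULL);
            EVP_DigestUpdate(ctx, buf, len);
            EVP_DigestFinal_ex(ctx, md, &mdlen);
            EVP_MD_CTX_free(ctx);
            write(client_fd, md, mdlen);             /* send the result back to System A */
        }

        free(buf);
        return 0;
    }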
The easiest (not to say "the easy", but still) way I can imagine would be to:
Write wrapper (proxy) versions of the libraries you want to make remote.
Write a server program that listens to calls, performs them using the real local libraries, and sends the result back.
Preload the proxy library before running any application where you want to do this.
Of course, there are many many problems with this approach:
It's not exactly trivial to define a serializing protocol for generic C function calls.
It's not exactly trivial to write the server, either.
Applications will slow down a lot, since each proxy call needs to be synchronous.
What about security of the data on the network?
UPDATE:
As requested in a comment, I'll try to expand a bit. By "wrapper" I mean a new library, that has the same API as another one, but does not in fact contain the same code. Instead, the wrapper library will contain code to serialize the arguments, call the server, wait for a response, de-serialize the result(s), and present them to the calling program as if nothing happened.
Since this involves a lot of tedious, repetitive and error-prone code, it's probably best to abstract it by making it code-driven. The best would be to use the original library's header file to define the serialization needed, but that (of course) requires quite heavy C parsing. Failing that, you might start bottom-up and make a custom language to describe the calls, and then use that to generate the serialization, de-serialization, and proxy code.
On Linux systems, you can control the dynamic linker so that it loads your proxy library instead of the "real" library. You could of course also replace (on disk) the real library with the proxy, but that will break all applications that use it if the server is not working, which seems very risky.
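On the System A side, a proxied call could look roughly like the sketch below. remote_digest() and connect_to_system_b() are hypothetical names (a real proxy library would export exactly the same name and signature as the OpenSSL function it replaces), and the framing is the same made-up opcode/length format as in the dispatcher sketch above.

    #include <stdint.h>
    #include <unistd.h>
    #include <arpa/inet.h>

    int connect_to_system_b(void);                   /* hypothetical: returns a connected TCP socket */

    int remote_digest(const unsigned char *data, size_t len, unsigned char out[32])
    {
        int fd = connect_to_system_b();
        if (fd < 0)
            return -1;

        uint8_t  op   = 0x01;                        /* "digest" opcode */
        uint32_t nlen = htonl((uint32_t)len);        /* length in network byte order */

        write(fd, &op, 1);
        write(fd, &nlen, 4);
        write(fd, data, len);

        ssize_t got = read(fd, out, 32);             /* System B replies with the 32-byte digest */
        close(fd);
        return got == 32 ? 0 : -1;
    }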
So you basically have two choices, each outlined by unwind and ammoQ respectively:
(1) Write a server and do the socket/protocol work etc., yourself. You can minimize some of the pain by using solutions like Google's protocol buffers.
(2) Use an existing middleware solution like (a) message queues or (b) an RPC mechanism like CORBA and its many alternatives.
Either is probably more work than you anticipated. So really you have to answer this yourself. How serious is your project? How varied is your hardware? How likely is the hardware and software configuration to change in the future?
If this is more than a learning or pet project you are going to be bored with in a month or two then an existing middleware solution is probably the way to go. The downside is there is a somewhat intimidating learning curve.
You can go the RPC route with CORBA, ICE, or whatever the Java solutions are these days (RMI? EJB?), and a bunch of others. This is an elegant solution since your calls to the remote encryption machine appear to your SystemA as simple function calls and the middleware handles the data issues and sockets. But you aren't going to learn them in a weekend.
Personally I would look to see if a message queue solution like AMQP would work for you first. There is less of a learning curve than RPC.

What are some ways to control a device through an IP address?

I want to get some ideas on how I might control a video camera through an IP address. I have an API to control pan and tilt from a local machine. The code is going to be in C/C++ on Windows. I am still deciding whether I want multiple cameras controlled from one application or one camera per application. Would SOA be a useful architecture to structure my messaging?
I think you could be well served by something like REST for a task like this. Executing a command against a REST server is really intuitive and simple, which sounds just like what you need. I'd probably make some kind of application that runs inside a web server, since this would handle most of the infrastructure, including authentication if needed. I'm sure both Apache and IIS could do this for you quite easily. Even though your API is coded in C, you could also consider using some higher-level scripting language as a client to the API (inside the web server).
SOA sounds a little overkill for a task like this.
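To make the REST suggestion concrete, a camera command could map onto a plain HTTP request. The sketch below uses libcurl from C; the gateway URL, resource path, and parameter names are all invented for illustration.

    #include <stdio.h>
    #include <curl/curl.h>

    int pan_camera(int camera_id, int degrees)
    {
        char url[128];
        snprintf(url, sizeof url,
                 "http://camera-gateway.local/cameras/%d/pan?degrees=%d",
                 camera_id, degrees);

        CURL *curl = curl_easy_init();
        if (!curl)
            return -1;
        curl_easy_setopt(curl, CURLOPT_URL, url);
        curl_easy_setopt(curl, CURLOPT_POST, 1L);
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "");   /* command carried in the URL, empty body */
        CURLcode res = curl_easy_perform(curl);
        curl_easy_cleanup(curl);
        return res == CURLE_OK ? 0 : -1;
    }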
I did something similar for a project at my university. We had the cameras connected to a LAN, and with message passing it was very easy to communicate with them; it is the same as communicating with any PC. We used the same application to communicate with all of them. You can use SOA or any architecture you consider convenient; that depends on your application.
In our case it was just an ad hoc architecture; it was not a complex thing.
Hessian is nice. It is basically REST, but has a binary protocol which is more efficient than XML, and it also lets you make calls from other languages quite easily. So, you could develop the client GUI application in C# and the server in C. There are free libraries for a few different languages available.
http://hessian.caucho.com/
