Is there a good C library that I can use in my client application for talking to REST servers?
libcurl comes to mind, as REST is based around basic HTTP requests.
Of course this is just a starting point; you'd need to write a little logic on top of it. I'm not sure if what you're looking for is a source-generating solution where you can point it at a service descriptor and have stubs produced automatically, or whether you're just looking for connectivity.
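If plain connectivity is what you need, a minimal sketch of a GET request with libcurl could look like this (the URL and the Accept header are placeholders, and error handling is trimmed):

    #include <stdio.h>
    #include <curl/curl.h>

    /* libcurl calls this once per chunk of response body; here we just print it. */
    static size_t on_body(char *data, size_t size, size_t nmemb, void *userdata)
    {
        (void)userdata;
        fwrite(data, size, nmemb, stdout);
        return size * nmemb;
    }

    int main(void)
    {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        if (curl) {
            /* The URL and Accept header below are placeholders. */
            struct curl_slist *hdrs = curl_slist_append(NULL, "Accept: application/json");
            curl_easy_setopt(curl, CURLOPT_URL, "https://api.example.com/items/42");
            curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
            curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, on_body);

            CURLcode res = curl_easy_perform(curl);
            if (res != CURLE_OK)
                fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));

            curl_slist_free_all(hdrs);
            curl_easy_cleanup(curl);
        }
        curl_global_cleanup();
        return 0;
    }

The write side of a REST API is mostly a matter of adding CURLOPT_POSTFIELDS (or CURLOPT_UPLOAD for PUT) on top of the same skeleton.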
In the RabbitMQ tutorials there's a demonstration of how to do remote procedure calls in every language except the C family (C, C++). I'm using rabbitmq-c and I'm close to replicating what the Python tutorial does; after all, correlation_id and reply_to are available fields in amqp_basic_properties_t.
That being said, I can see the following two functions in the amqp.h header:
amqp_simple_rpc
amqp_simple_rpc_decoded
It's my understanding that these are used internally for the library's communication with the broker (e.g. that's how a call to create a queue goes through), but I was wondering whether I can use them directly to support my own remote procedure calls, i.e. have a function that "lives" in one client and make it callable from another client.
If these functions can't be used like this, is there a standard alternative, or a description of how to do routed RPCs with librabbitmq-c? Is my approach of replicating the pika tutorial "sane"?
You're right in your suspicion that amqp_simple_rpc etc. are for low-level client-broker communication. They are indeed unsuitable for (broker-mediated) client-to-client communication.
My opinion is that your approach following the pika tutorial is sensible. I'm afraid I do not know of any standard RPC helper library for librabbitmq-c.
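For what it's worth, a rough outline of the pika-style RPC client on top of librabbitmq-c could look like the sketch below. It assumes the connection, login and channel-open boilerplate has already been done on channel 1, that the server consumes from a queue named "rpc_queue", and it omits error checking, so treat it as a starting point rather than tested code.

    #include <string.h>
    #include <amqp.h>

    /* conn: an already-opened connection with channel 1 open.
       Returns the reply body in a freshly allocated amqp_bytes_t. */
    static amqp_bytes_t rpc_call(amqp_connection_state_t conn, const char *request)
    {
        /* 1. Declare an exclusive, auto-delete reply queue with a server-generated name. */
        amqp_queue_declare_ok_t *q =
            amqp_queue_declare(conn, 1, amqp_empty_bytes, 0, 0, 1, 1, amqp_empty_table);
        amqp_bytes_t reply_to = amqp_bytes_malloc_dup(q->queue);

        /* 2. Publish the request with reply_to and correlation_id set. */
        amqp_basic_properties_t props;
        memset(&props, 0, sizeof(props));
        props._flags = AMQP_BASIC_REPLY_TO_FLAG | AMQP_BASIC_CORRELATION_ID_FLAG;
        props.reply_to = reply_to;
        props.correlation_id = amqp_cstring_bytes("1");   /* use something unique in real code */
        amqp_basic_publish(conn, 1, amqp_empty_bytes, amqp_cstring_bytes("rpc_queue"),
                           0, 0, &props, amqp_cstring_bytes(request));

        /* 3. Consume from the reply queue until a message with our correlation_id arrives. */
        amqp_basic_consume(conn, 1, reply_to, amqp_empty_bytes, 0, 1, 0, amqp_empty_table);
        for (;;) {
            amqp_envelope_t envelope;
            amqp_maybe_release_buffers(conn);
            amqp_consume_message(conn, &envelope, NULL, 0);
            if (envelope.message.properties.correlation_id.len == 1 &&
                memcmp(envelope.message.properties.correlation_id.bytes, "1", 1) == 0) {
                amqp_bytes_t body = amqp_bytes_malloc_dup(envelope.message.body);
                amqp_destroy_envelope(&envelope);
                amqp_bytes_free(reply_to);
                return body;
            }
            amqp_destroy_envelope(&envelope);
        }
    }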
I have an application that began its life as a C#-based Windows GUI that used marshalling to talk to a C DLL.
I now need to separate the Windows client and DLL so that the client is installed on a remote PC and communicates with the C DLL over the internet.
A further complication is that I want to have multiple Windows clients connecting to the C DLL.
This whole world is new to me, so excuse me if the following are naive questions.
My questions:
0) What is the best method for having the client communicate with the DLL over the internet? TCP/IP Sockets?
1) I need to make modifications to my DLL to have it service multiple clients. But I need some piece of middleware that collects the queries from the different clients, feeds them to the DLL, and then sends the results back to the appropriate client. Is there any code (such as node.js) that would facilitate this?
Regarding: What is the best method for having the client communicate with the DLL over the internet?
Your suggestion of using TCP/IP could certainly (and likely will) be part of the solution, but there will be other components as well. The direction you choose will depend in part on whether you are using standard (COM) marshaling or custom marshaling. At the very least, your problem description suggests a scenario requiring interprocess communication.
There are many ways to implement this. This diagram maps out a general approach that, based on your description, might apply:
[Diagram: Components of Interprocess Communications]
Read more here
Regarding: make modifications to my DLL to have it service multiple clients...
The DLL is simply a file like any other. Several processes can read it, and subsequently own content from it, as long as the processes doing the reading adhere to common file-access rules. I do not think you will have to modify your DLL, at least not for that reason. Just make sure the processes accessing the DLL comply with safe file-access protocols (safe file access).
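To give point 1 a little shape, here is the smallest kind of network front end I can imagine putting in front of the existing DLL: a Winsock program on the server machine that accepts a connection, hands the request to the DLL, and writes the result back. Everything specific in it is invented for illustration (the engine.dll name, the exported RunQuery function, the port, and the one-request-per-connection protocol), and a real version would need threads or overlapped I/O to serve several clients at once.

    #include <winsock2.h>
    #include <ws2tcpip.h>
    #include <windows.h>
    #include <stdio.h>
    #include <string.h>
    #pragma comment(lib, "ws2_32.lib")   /* MSVC: link against Winsock */

    /* Hypothetical signature of a function exported by the existing DLL. */
    typedef int (__cdecl *query_fn)(const char *request, char *reply, int reply_len);

    int main(void)
    {
        WSADATA wsa;
        WSAStartup(MAKEWORD(2, 2), &wsa);

        /* Load the existing DLL and look up one exported entry point
           ("engine.dll" and "RunQuery" are placeholder names). */
        HMODULE dll = LoadLibraryA("engine.dll");
        query_fn run_query = dll ? (query_fn)GetProcAddress(dll, "RunQuery") : NULL;
        if (!run_query)
            return 1;

        SOCKET listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(5555);              /* arbitrary port */
        bind(listener, (struct sockaddr *)&addr, sizeof(addr));
        listen(listener, SOMAXCONN);

        for (;;) {                                /* one client at a time, for brevity */
            SOCKET client = accept(listener, NULL, NULL);
            char request[1024], reply[1024];
            int n = recv(client, request, sizeof(request) - 1, 0);
            if (n > 0) {
                request[n] = '\0';
                run_query(request, reply, sizeof(reply));  /* call into the DLL */
                send(client, reply, (int)strlen(reply), 0);
            }
            closesocket(client);
        }
    }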
Do you know of any HTTP client library in C (with SSL support) that also allows direct communication with the remote server?
I have a client-server application where the client uses HTTP to start a session in the server and then tells the server to switch the connection from HTTP to a different protocol. All communication is encapsulated in SSL. It is written in Perl and works well, but I'm looking into implementing the client in C.
I know libcurl gives you access to the underlying socket, but it's not enough because of the SSL requirement.
Notice that libcurl doesn't do the SSL part by itself; it uses OpenSSL. So, if you can get the socket handle from libcurl after the first HTTP interactions, AND the session key it uses (some spelunking required), you can carry on directly with OpenSSL from that point.
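For the first half of that (getting the socket handle back out of libcurl), something along these lines should work with a reasonably recent libcurl; the URL is a placeholder, and the OpenSSL session takeover that would follow is the spelunking part and is not shown.

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *curl = curl_easy_init();
        curl_socket_t sockfd = CURL_SOCKET_BAD;

        curl_easy_setopt(curl, CURLOPT_URL, "https://server.example/start-session");
        if (curl_easy_perform(curl) == CURLE_OK) {
            /* Ask libcurl which socket carried the last transfer
               (CURLINFO_ACTIVESOCKET requires libcurl >= 7.45.0). */
            curl_easy_getinfo(curl, CURLINFO_ACTIVESOCKET, &sockfd);
        }

        /* sockfd is the raw TCP connection; switching protocols on top of the
           existing SSL session means driving OpenSSL yourself from here on. */

        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return 0;
    }

If it turns out you only need the raw connection and are happy to let libcurl keep handling the TLS layer, CURLOPT_CONNECT_ONLY together with curl_easy_send/curl_easy_recv may also be worth a look.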
I think you may be looking for this; otherwise you may have to write it yourself, like this.
Sounds like you want WebSockets. I don't know if there's a C library available, though; I would assume there is if you dig.
I am developing some experimental setup in C.
I am exploring a scenario as follows and I need help to understand it.
I have a system A which has a lot of applications using cryptographic algorithms.
But these crypto calls (OpenSSL calls) should be sent to another system, B, which takes care of the cryptography.
Therefore, I have to send any calls to cryptographic (OpenSSL) engines via a socket to a remote system (B) which has OpenSSL support.
My plan is to have a small socket program on System A which forwards these calls to System B.
What I'm still unclear about at this moment is how to handle the received commands at System B.
Do I actually get these commands and translate them into corresponding calls to OpenSSL locally on my system? That means I would have to reprogram whatever is done on System A, right?
Or is there a way to tunnel/send these raw lines of code to the OpenSSL libs directly and just receive the result, then resend it to System A?
How do you think I should go about the problem?
PS: Oh, by the way, the calls to cryptography (like EngineUpdate, VerifyFinal, etc., or Digest) on System A can be in either Java or C. I already wrote a Java/C program to send these commands to System B via sockets...
The problem is only on System B and how I have to handle it.
You could use sockets on B, but that means you need to define a protocol for them. Or you could use RPC (remote procedure calls).
Examples for socket programming can be found here.
RPC is explained here.
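"Define a protocol" can be as small as a fixed header in front of every message, say an operation code plus a payload length, so that System B knows how many bytes to read before acting on a command. A minimal sketch of the sending side, with invented opcodes and helper name:

    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>   /* htonl */
    #include <unistd.h>      /* write */

    /* Hypothetical operation codes for the crypto commands being forwarded. */
    enum { OP_DIGEST_UPDATE = 1, OP_DIGEST_FINAL = 2, OP_VERIFY_FINAL = 3 };

    /* Every message on the wire: 4-byte opcode, 4-byte payload length, payload. */
    static int send_msg(int fd, uint32_t op, const void *payload, uint32_t len)
    {
        uint32_t hdr[2] = { htonl(op), htonl(len) };
        if (write(fd, hdr, sizeof(hdr)) != (ssize_t)sizeof(hdr))
            return -1;
        if (len && write(fd, payload, len) != (ssize_t)len)
            return -1;
        return 0;
    }

System B would read the header first, then exactly len payload bytes, dispatch on op to the corresponding OpenSSL call, and answer with a message in the same format.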
The easiest (not to say "the easy", but still) way I can imagine would be to:
Write wrapper (proxy) versions of the libraries you want to make remote.
Write a server program that listens to calls, performs them using the real local libraries, and sends the result back.
Preload the proxy library before running any application where you want to do this.
Of course, there are many many problems with this approach:
It's not exactly trivial to define a serializing protocol for generic C function calls.
It's not exactly trivial to write the server, either.
Applications will slow down a lot, since the proxied calls need to be synchronous.
What about security of the data on the network?
UPDATE:
As requested in a comment, I'll try to expand a bit. By "wrapper" I mean a new library, that has the same API as another one, but does not in fact contain the same code. Instead, the wrapper library will contain code to serialize the arguments, call the server, wait for a response, de-serialize the result(s), and present them to the calling program as if nothing happened.
Since this involves a lot of tedious, repetitive and error-prone code, it's probably best to generate that code rather than write it by hand. Ideally you would use the original library's header file to define the serialization needed, but that (of course) requires quite heavy C parsing. Failing that, you might start bottom-up and define a custom language to describe the calls, and then use that to generate the serialization, de-serialization, and proxy code.
On Linux systems, you can control the dynamic linker so that it loads your proxy library instead of the "real" library. You could of course also replace (on disk) the real library with the proxy, but that will break all applications that use it if the server is not working, which seems very risky.
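To make the proxy idea a little more concrete, here is what a wrapper for one hypothetical function might look like. The function name, port, address and wire format are all made up, and a real OpenSSL wrapper would have to cover many entry points with much richer serialization; this only shows the shape of the thing. Built as a shared library and loaded with LD_PRELOAD, it shadows the real implementation:

    /* proxy_digest.c - build with: gcc -shared -fPIC -o libproxy.so proxy_digest.c
       A toy "proxy" version of one hypothetical library function, crypto_digest().
       Instead of computing locally, it ships the input to the server and returns
       the server's answer. Names, port and wire format are all illustrative. */
    #include <stdint.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    #define SERVER_ADDR "192.0.2.10"   /* placeholder address of System B */
    #define SERVER_PORT 9000

    int crypto_digest(const unsigned char *msg, uint32_t len, unsigned char out[32])
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sa;
        memset(&sa, 0, sizeof(sa));
        sa.sin_family = AF_INET;
        sa.sin_port = htons(SERVER_PORT);
        inet_pton(AF_INET, SERVER_ADDR, &sa.sin_addr);
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0)
            return -1;

        /* Serialize: 4-byte length, then the message to hash. */
        uint32_t nlen = htonl(len);
        write(fd, &nlen, sizeof(nlen));
        write(fd, msg, len);

        /* The server runs the real digest and sends back the 32-byte result. */
        ssize_t got = read(fd, out, 32);
        close(fd);
        return got == 32 ? 0 : -1;
    }

The matching server on the other machine reads the length and payload, calls the real library, and writes the 32-byte result back.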
So you basically have two choices, each outlined by unwind and ammoQ respectively:
(1) Write a server and do the socket/protocol work etc., yourself. You can minimize some of the pain by using solutions like Google's protocol buffers.
(2) Use an existing middleware solution, such as (a) message queues or (b) an RPC mechanism like CORBA and its many alternatives.
Either is probably more work than you anticipated. So really you have to answer this yourself. How serious is your project? How varied is your hardware? How likely is the hardware and software configuration to change in the future?
If this is more than a learning or pet project you are going to be bored with in a month or two then an existing middleware solution is probably the way to go. The downside is there is a somewhat intimidating learning curve.
You can go the RPC route with CORBA, ICE, or whatever the Java solutions are these days (RMI? EJB?), and a bunch of others. This is an elegant solution since your calls to the remote encryption machine appear to your SystemA as simple function calls and the middleware handles the data issues and sockets. But you aren't going to learn them in a weekend.
Personally I would look to see if a message queue solution like AMQP would work for you first. There is less of a learning curve than RPC.
I've never managed to move from unit-testing to integration-testing in any graceful or automated way when it comes to network code.
So my question is: given a simple single-threaded client/server network application, how would you go about integrating both client and server into your current favorite testing suite (I currently use check)?
I am of course willing to change unit-test suite to accomplish my goal.
Edit: While I appreciate the answers, I was more looking for some magical way of integrating integration-testing into my unit-test framework (if it's possible at all). Like if fork() or something could be applied without getting too many side effects.
Another approach is to mock up both ends with a dummy server and a dummy client that just send the messages you want to test and verify that the responses are as expected. These mock servers can be really, really dumb: they only need to read/write sockets and dump pre-set data back. You can spiff them up a bit by templating the responses on data in the requests, if it's easy to parse.
The win here is that you know exactly what the mocked item is going to do (including fake timeouts, send garbage, whatever you want).
It would probably be very easy to use a Perl or Python socket library to build your mock servers and clients; if you use Perl, you should be able to use the very capable Test:: classes from CPAN to help with the actual "did this work" checks and reporting.
netcat is a great tool for testing network servers and clients.
man netcat says that netcat is a TCP/IP swiss army knife. Having experience with both netcat and the Victorinox Swiss army knife, I can assure you that netcat is much better than Victorinox; I'd rather compare it to a Leatherman.
We structure our applications so that the core code is in a library and the executable is generated from a main.c (really main.cxx in our case) that is just a very thin wrapper that starts the server or client. This lets us set up test suites that can instantiate a complete server and client in proc and do tests where they talk to one another using their normal network protocol. It works quite well.
If you can't structure things this way, you could start your usual server executable using fork/CreateProcess and then have the client code inside the test talk to the external server.
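To connect this with the fork() idea from the edit: check already forks each test into its own process, so launching the real server from a fixture and talking to it from the test body works reasonably well. The sketch below assumes a ./server binary that listens on port 5555 and answers "ping" with "pong"; adjust to your own protocol.

    #include <check.h>
    #include <signal.h>
    #include <string.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <sys/wait.h>

    static pid_t server_pid;

    static void start_server(void)
    {
        server_pid = fork();
        if (server_pid == 0) {
            execl("./server", "./server", "--port", "5555", (char *)NULL);
            _exit(127);                /* exec failed */
        }
        sleep(1);                      /* crude: wait for the server to start listening */
    }

    static void stop_server(void)
    {
        kill(server_pid, SIGTERM);
        waitpid(server_pid, NULL, 0);
    }

    START_TEST(test_echo)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sa = {0};
        sa.sin_family = AF_INET;
        sa.sin_port = htons(5555);
        inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);
        ck_assert_int_eq(connect(fd, (struct sockaddr *)&sa, sizeof(sa)), 0);

        char buf[64];
        write(fd, "ping\n", 5);
        ck_assert_int_eq(read(fd, buf, sizeof(buf)), 5);
        ck_assert_int_eq(memcmp(buf, "pong\n", 5), 0);
        close(fd);
    }
    END_TEST

    int main(void)
    {
        Suite *s = suite_create("integration");
        TCase *tc = tcase_create("client-server");
        tcase_add_unchecked_fixture(tc, start_server, stop_server);
        tcase_add_test(tc, test_echo);
        suite_add_tcase(s, tc);

        SRunner *sr = srunner_create(s);
        srunner_run_all(sr, CK_NORMAL);
        int failed = srunner_ntests_failed(sr);
        srunner_free(sr);
        return failed ? EXIT_FAILURE : EXIT_SUCCESS;
    }

The unchecked fixture runs once per test case in the runner's own process, so the server outlives the individual forked tests and is torn down at the end.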