Can anyone show (through code) or explain to me how I might use libevent and curl together in a C program? I'm trying to write a high-performance non-blocking data monitor which needs to upload data to a CouchDB instance. I'm familiar with both libevent and curl, but merging curl_multi with libevent has me stumped for some reason. I do not understand the program flow of the official libcurl example - can anyone point me to, or supply, a simpler example?
The key is really the curl_multi_socket_action() function, which should be called as soon as your event library says there's something to deal with on a socket. Event-based libcurl is more complex than "plain" libcurl, so a very easy example is not that straightforward to write.
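That said, here is a stripped-down sketch of just the skeleton, assuming libevent 2.x, a single transfer, and no error handling or cleanup (the CouchDB-style URL is only a placeholder). The essential shape: libcurl tells you which sockets and timeouts to watch, libevent tells you when they fire, and you answer every event with curl_multi_socket_action():

#include <curl/curl.h>
#include <event2/event.h>

static struct event_base *base;
static CURLM *multi;
static struct event *timer_ev;

// reap any transfers that libcurl reports as finished
static void check_done(void)
{
    CURLMsg *msg;
    int pending;
    while ((msg = curl_multi_info_read(multi, &pending))) {
        if (msg->msg == CURLMSG_DONE) {
            curl_multi_remove_handle(multi, msg->easy_handle);
            curl_easy_cleanup(msg->easy_handle);
        }
    }
}

// libevent says a socket is ready: tell libcurl about it
static void event_cb(evutil_socket_t fd, short kind, void *arg)
{
    int running;
    int action = ((kind & EV_READ) ? CURL_CSELECT_IN : 0) |
                 ((kind & EV_WRITE) ? CURL_CSELECT_OUT : 0);
    curl_multi_socket_action(multi, fd, action, &running);
    check_done();
}

// libcurl's timeout expired: tell libcurl about that, too
static void timer_cb(evutil_socket_t fd, short kind, void *arg)
{
    int running;
    curl_multi_socket_action(multi, CURL_SOCKET_TIMEOUT, 0, &running);
    check_done();
}

// libcurl tells us which socket to watch, and for what
static int sock_cb(CURL *e, curl_socket_t s, int what, void *userp, void *sockp)
{
    struct event *ev = sockp;
    if (ev)
        event_free(ev);
    if (what == CURL_POLL_REMOVE) {
        curl_multi_assign(multi, s, NULL);
    } else {
        short kind = ((what & CURL_POLL_IN) ? EV_READ : 0) |
                     ((what & CURL_POLL_OUT) ? EV_WRITE : 0) | EV_PERSIST;
        ev = event_new(base, s, kind, event_cb, NULL);
        curl_multi_assign(multi, s, ev);
        event_add(ev, NULL);
    }
    return 0;
}

// libcurl tells us how long it is willing to wait
static int multi_timer_cb(CURLM *m, long timeout_ms, void *userp)
{
    if (timeout_ms < 0) {
        evtimer_del(timer_ev);
    } else {
        struct timeval tv = { timeout_ms / 1000, (timeout_ms % 1000) * 1000 };
        evtimer_add(timer_ev, &tv);
    }
    return 0;
}

int main(void)
{
    curl_global_init(CURL_GLOBAL_ALL);
    base = event_base_new();
    timer_ev = evtimer_new(base, timer_cb, NULL);
    multi = curl_multi_init();
    curl_multi_setopt(multi, CURLMOPT_SOCKETFUNCTION, sock_cb);
    curl_multi_setopt(multi, CURLMOPT_TIMERFUNCTION, multi_timer_cb);

    CURL *easy = curl_easy_init();
    curl_easy_setopt(easy, CURLOPT_URL, "http://localhost:5984/");
    curl_multi_add_handle(multi, easy); // kicks off the timer callback

    event_base_dispatch(base);
    return 0;
}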
One exotic way to use curl with libevent is in a coroutine.
In the RabbitMQ tutorials there's a demonstration of how to do remote procedure calls in every language but the C family (C, C++). I'm using rabbitmq-c and I'm close to replicating what the Python tutorial does; after all, correlation_id and reply_to are available fields in amqp_basic_properties.
That being said, I can see the following two methods in the amqp.h header:
amqp_simple_rpc
amqp_simple_rpc_decoded
It's my understanding that these are used internally for the library's communication with the broker (e.g. that's how a call to create a queue goes through), but I was wondering whether I can use them directly to support my own remote procedure calls, i.e. have a function that "lives" in one client and make it callable by another client.
If these methods can't be used like this, is there a standard alternative or a description of how to do routed RPCs with librabbitmq-c? Is my approach of replicating the pika tutorial "sane"?
You're right in your suspicion that amqp_simple_rpc etc. are for low-level client-broker communication. They are indeed unsuitable for (broker-mediated) client-to-client communication.
My opinion is that your approach following the pika tutorial is sensible. I'm afraid I do not know of any standard RPC helper library for librabbitmq-c.
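For anyone following the same path, here is a rough sketch of the request side of that pattern, assuming the connection and channel are already open and a reply queue has been declared elsewhere (the rpc_queue routing key is just the placeholder name from the pika tutorial):

#include <amqp.h>
#include <amqp_framing.h>

// conn/channel are assumed already opened; reply_queue was declared
// earlier (e.g. an exclusive, server-named queue)
void send_rpc_request(amqp_connection_state_t conn, amqp_channel_t channel,
                      const char *reply_queue, const char *corr_id,
                      const char *body)
{
    amqp_basic_properties_t props;
    props._flags = AMQP_BASIC_REPLY_TO_FLAG | AMQP_BASIC_CORRELATION_ID_FLAG;
    props.reply_to = amqp_cstring_bytes(reply_queue);
    props.correlation_id = amqp_cstring_bytes(corr_id);

    amqp_basic_publish(conn, channel,
                       amqp_empty_bytes,                // default exchange
                       amqp_cstring_bytes("rpc_queue"), // routing key, as in the tutorial
                       0, 0, &props, amqp_cstring_bytes(body));

    // the client then consumes from reply_queue and matches replies
    // against corr_id, exactly as pika's tutorial does
}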
I'm doing some experiments with lwIP on a small, embedded device. There are some examples that come with lwIP but they do not help me. What I want to implement is a server (using lwIP) that accepts a connection, reads several commands, sends several answers to the connected client, and closes only when the connection is interrupted or a special close-command is sent.
So somehow similar to a telnet-server.
Is there an example for lwIP available that demonstrates this behaviour?
Thanks!
I know this is an old question - but I found it when looking for something similar!
If you look in the lwip contrib directory (http://download.savannah.gnu.org/releases/lwip/) there are some example applications - including a tcp (and udp) echo server.
You don't say what device you are using or whether or not you are using an RTOS, so it is hard to provide example code. However, if you are not using an RTOS, I would highly recommend you start! My experience of using the lwIP raw API (without an RTOS) is that it is difficult to read data from the outside world (e.g. using interrupts) without things falling over.
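In case it helps, here is a rough sketch of the shape such a command server takes on the raw API without an RTOS. The command handling here is just a placeholder ("quit" closes, anything else is echoed), and for brevity it only looks at the first pbuf of a chain:

#include "lwip/tcp.h"
#include <string.h>

static err_t cmd_recv(void *arg, struct tcp_pcb *tpcb, struct pbuf *p, err_t err)
{
    if (p == NULL) {              // remote side closed the connection
        tcp_close(tpcb);
        return ERR_OK;
    }
    tcp_recved(tpcb, p->tot_len); // re-open the receive window

    // placeholder command handling: close on "quit", echo otherwise
    if (p->len >= 4 && memcmp(p->payload, "quit", 4) == 0) {
        pbuf_free(p);
        tcp_close(tpcb);
        return ERR_OK;
    }
    tcp_write(tpcb, p->payload, p->len, TCP_WRITE_FLAG_COPY);
    pbuf_free(p);
    return ERR_OK;                // stay connected, wait for more commands
}

static err_t cmd_accept(void *arg, struct tcp_pcb *newpcb, err_t err)
{
    tcp_recv(newpcb, cmd_recv);   // handle each command as it arrives
    return ERR_OK;
}

void cmd_server_init(void)
{
    struct tcp_pcb *pcb = tcp_new();
    tcp_bind(pcb, IP_ADDR_ANY, 23);
    pcb = tcp_listen(pcb);
    tcp_accept(pcb, cmd_accept);
}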
HTH,
Alex
Is there a good C library that I can use in my client application for talking to REST servers?
libcurl comes to mind, as REST is based around basic HTTP requests.
Of course this is just a starting point; you'd need to write a little logic on top of it. I'm not sure if what you're looking for is a source-generating solution where you can point it at a service descriptor and have stubs produced automatically, or whether you're just looking for connectivity.
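As a starting point, a minimal GET with the easy interface looks something like this (the URL is just a placeholder; a real client would also set CURLOPT_WRITEFUNCTION to capture the response body instead of letting it go to stdout):

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    CURL *curl = curl_easy_init();
    if (curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/api/items/1");
        // the response body goes to stdout unless a write callback is set
        CURLcode res = curl_easy_perform(curl);
        if (res != CURLE_OK)
            fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));
        curl_easy_cleanup(curl);
    }
    return 0;
}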
I'd like to know whether something like this has been done before:
I've recently started work on a networking library in C. The library maintains a set of sockets, each of which is associated with two FIFO byte streams, input and output.
A developer using the library is expected to register some callbacks, consisting of a recognizer function and a handler function. If new data arrives on a socket (i.e. the input stream), every recognizer is called. If one of the recognizers finds a matching portion of data, its associated handler is called, consuming the data and possibly queuing new data on the socket's output stream, scheduled to be transmitted later on.
Here's an example to make clear how the library is used:
// create client socket
client = nc_create(NC_CLIENT);
// register some callback functions that you'll have to supply yourself
nc_register_callback(client, &is_login, &on_login);
nc_register_callback(client, &is_password, &on_password);
// connect to server
nc_dial(client, "www.google.com", "23");
// start main loop (we might as well have more than one connection here)
nc_talk();
To me, this is the most obvious way to write a general-purpose networking library in C. I did some research using Google, but I wasn't able to find anything similar written in C. But it's hard to believe that I'm the first one to implement this approach.
Are there other data-driven general purpose C networking libraries like this out there?
Would you use them?
Here are a few libraries that provide similar APIs, at various levels (e.g. libevent provides a general callback-driven API for sockets/file descriptors):
libesmtp (example)
libevent
libcurl
The Sun/ONC RPC APIs have a similar style, in that the library does the heavy lifting for you, dispatching requests to the proper callback handlers.
The Java netty and mina libraries work in a similar manner, although in a more object-oriented way.
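To give a feel for the resemblance, here is a rough sketch of your example redone against libevent's bufferevent API. It is only loosely analogous: the "login:" matching is a placeholder recognizer, and a real client would buffer and parse input properly rather than read fixed chunks:

#include <sys/socket.h>
#include <string.h>
#include <event2/event.h>
#include <event2/bufferevent.h>

// called whenever new input arrives, much like a recognizer/handler pair
static void read_cb(struct bufferevent *bev, void *ctx)
{
    char buf[256];
    size_t n = bufferevent_read(bev, buf, sizeof(buf) - 1);
    buf[n] = '\0';
    if (strstr(buf, "login:"))                 // "recognizer"
        bufferevent_write(bev, "user\r\n", 6); // "handler" queues output
}

static void event_cb(struct bufferevent *bev, short what, void *ctx)
{
    if (what & (BEV_EVENT_EOF | BEV_EVENT_ERROR))
        bufferevent_free(bev);
}

int main(void)
{
    struct event_base *base = event_base_new();
    struct bufferevent *bev =
        bufferevent_socket_new(base, -1, BEV_OPT_CLOSE_ON_FREE);
    bufferevent_setcb(bev, read_cb, NULL, event_cb, NULL);
    bufferevent_enable(bev, EV_READ | EV_WRITE);
    bufferevent_socket_connect_hostname(bev, NULL, AF_INET,
                                        "www.google.com", 23);
    event_base_dispatch(base); // the equivalent of your nc_talk()
    return 0;
}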
I am developing some experimental setup in C.
I am exploring a scenario as follows and I need help to understand it.
I have a system A which has a lot of applications using cryptographic algorithms.
But these crypto calls (OpenSSL calls) should be sent to another system B which takes care of the cryptography.
Therefore, I have to send any calls to the cryptographic (OpenSSL) engines via socket to a remote system (B) which has OpenSSL support.
My plan is to have a small socket program on System A which forwards these calls to System B.
What I'm still unclear about at this moment is how I handle the received commands at System B.
Do I actually get these commands and translate them into corresponding calls to OpenSSL locally on my system? That means I have to reprogram whatever is done on System A, right?
Or is there a way to tunnel/send these raw calls to the OpenSSL libs directly, and just receive the result and resend it to System A?
How do you think I should go about the problem?
PS: Oh, by the way, the calls to cryptography (like EngineUpdate, VerifyFinal, or Digest) on System A can be in either Java or C. I already wrote a Java/C program to send these commands to System B via sockets...
The problem is only on System B and how I have to handle the received calls.
You could use sockets on B, but that means you need to define a protocol for that. Or you could use RPC (remote procedure calls).
Examples for socket programming can be found here.
RPC is explained here.
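To make the socket option concrete, System B might frame each request as a command tag plus a length-prefixed payload and translate it into local OpenSSL calls. Everything below is a made-up protocol for illustration: the tag values and the read_all/write_all helpers are hypothetical, not part of any library:

#include <stddef.h>
#include <stdint.h>
#include <arpa/inet.h>
#include <openssl/evp.h>

// hypothetical helpers that loop until all bytes are read/written
int read_all(int fd, void *buf, size_t len);
int write_all(int fd, const void *buf, size_t len);

void handle_command(int client_fd)
{
    uint8_t tag;
    uint32_t len;
    read_all(client_fd, &tag, 1);   // which operation System A wants
    read_all(client_fd, &len, 4);   // payload length, network byte order
    len = ntohl(len);

    if (tag == 0x01) {              // 0x01 = "sha256 digest" in this made-up protocol
        unsigned char buf[4096], md[EVP_MAX_MD_SIZE];
        unsigned int md_len;
        EVP_MD_CTX *ctx = EVP_MD_CTX_new();
        EVP_DigestInit_ex(ctx, EVP_sha256(), NULL);
        while (len > 0) {           // stream the payload into the digest
            size_t n = len < sizeof(buf) ? len : sizeof(buf);
            read_all(client_fd, buf, n);
            EVP_DigestUpdate(ctx, buf, n);
            len -= n;
        }
        EVP_DigestFinal_ex(ctx, md, &md_len);
        EVP_MD_CTX_free(ctx);
        write_all(client_fd, md, md_len); // send the result back to System A
    }
}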
The easiest (which is not to say "easy") way I can imagine would be to:
1. Write wrapper (proxy) versions of the libraries you want to make remote.
2. Write a server program that listens for calls, performs them using the real local libraries, and sends the result back.
3. Preload the proxy library before running any application where you want to do this.
Of course, there are many many problems with this approach:
It's not exactly trivial to define a serializing protocol for generic C function calls.
It's not exactly trivial to write the server, either.
Applications will slow down a lot, since each proxied call needs to be synchronous.
What about security of the data on the network?
UPDATE:
As requested in a comment, I'll try to expand a bit. By "wrapper" I mean a new library that has the same API as another one, but does not in fact contain the same code. Instead, the wrapper library contains code to serialize the arguments, call the server, wait for a response, de-serialize the result(s), and present them to the calling program as if nothing had happened.
Since this involves a lot of tedious, repetitive and error-prone code, it's probably best to generate it rather than write it by hand. The ideal would be to use the original library's header files to derive the serialization needed, but that (of course) requires quite heavy C parsing. Failing that, you might start bottom-up and make a custom language to describe the calls, and then use that to generate the serialization, de-serialization, and proxy code.
On Linux systems, you can control the dynamic linker so that it loads your proxy library instead of the "real" library. You could of course also replace (on disk) the real library with the proxy, but that will break all applications that use it if the server is not working, which seems very risky.
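As a concrete illustration of the wrapper idea, a proxy version of a digest call might do nothing but serialize its arguments and wait for the answer. The wire format and the send_all/recv_all helpers below are hypothetical, mirroring the made-up tag/length framing from the earlier sketch:

#include <stddef.h>
#include <stdint.h>
#include <arpa/inet.h>

// hypothetical helpers that loop until all bytes are sent/received
int send_all(int fd, const void *buf, size_t len);
int recv_all(int fd, void *buf, size_t len);

// a connected socket to System B, kept global for brevity
extern int remote_fd;

// same shape as the local call it stands in for; computes nothing itself
int remote_sha256(const unsigned char *data, uint32_t len,
                  unsigned char out[32])
{
    uint8_t tag = 0x01;            // 0x01 = "sha256 digest", as before
    uint32_t nlen = htonl(len);
    send_all(remote_fd, &tag, 1);  // serialize: tag, length, payload
    send_all(remote_fd, &nlen, 4);
    send_all(remote_fd, data, len);
    return recv_all(remote_fd, out, 32); // block until the result arrives
}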
So you basically have two choices, each outlined by unwind and ammoQ respectively:
(1) Write a server and do the socket/protocol work etc. yourself. You can minimize some of the pain by using solutions like Google's protocol buffers.
(2) Use an existing middleware solution like (a) message queues or (b) an RPC mechanism such as CORBA and its many alternatives.
Either is probably more work than you anticipated. So really you have to answer this yourself. How serious is your project? How varied is your hardware? How likely is the hardware and software configuration to change in the future?
If this is more than a learning or pet project you are going to be bored with in a month or two then an existing middleware solution is probably the way to go. The downside is there is a somewhat intimidating learning curve.
You can go the RPC route with CORBA, ICE, or whatever the Java solutions are these days (RMI? EJB?), and a bunch of others. This is an elegant solution, since your calls to the remote encryption machine appear to System A as simple function calls, and the middleware handles the data issues and sockets. But you aren't going to learn them in a weekend.
Personally I would look to see if a message queue solution like AMQP would work for you first. There is less of a learning curve than RPC.