I am writing a custom PHY agent in UnetStack. I know we can use Groovy, Java, Julia or C. Can I use Python to write my agent? If yes, what should I take care of, and is there a specific skeleton for it?
fjåge only supports Groovy and Java agents out of the box; for Python, Julia, C, etc., it provides only the Gateway API. An alpha version of Julia agent support is already available, but even without that, one can call Julia from Groovy agents. The blog article "Harnessing the power of Julia in UnetStack — Part II" covers how a custom PHY can be written using a Groovy agent, with all the signal processing in Julia.
You could do pretty much the same with Python, calling your Python code from a Java/Groovy agent. I have not tried doing this, but the basic idea is the same as what we do with Julia in the blog, and shouldn't be too hard to get to work. You can check out Java2Python and/or this StackOverflow post as a starting point.
I am trying to use SQS on AWS (on a Linux box) using generic C, not using any SDK (not that there is one for C). I cannot find an example I can relate to. Sorry, I don't relate to these newfangled languages. I am proficient in COBOL, Fortran, Pascal and C, not Python, C++, C# or Java. There are "steps" on the Amazon site, but honestly they expect proficiency with AWS and an object-oriented language. I just want to create my own HTTPS GET command for accessing SQS/SNS. Can anyone provide a C snippet that creates a complete URL with the version 4 signature? Or point me in the correct direction?
Have a look at https://docs.aws.amazon.com/general/latest/gr/sigv4-signed-request-examples.html
If you're proficient with any programming language, you should be able to understand all of that code. It's just string operations and some hashing, for which you'll have to use another library. There are also lots of comments to help you with the details.
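If it helps to see the hashing side in C, here is a rough, untested sketch of just the SigV4 signing-key derivation described in those docs, using OpenSSL's HMAC (link with -lcrypto). The secret key, date, region and service are placeholders, and the canonical-request and string-to-sign steps are left out:

#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <openssl/sha.h>

/* One round of HMAC-SHA256: out = HMAC(key, msg) */
static void hmac256(const unsigned char *key, int keylen,
                    const char *msg, unsigned char out[SHA256_DIGEST_LENGTH]) {
    unsigned int outlen = 0;
    HMAC(EVP_sha256(), key, keylen,
         (const unsigned char *)msg, strlen(msg), out, &outlen);
}

int main(void) {
    /* Placeholders (the secret is the example key from the AWS docs). */
    const char *secret = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY";
    const char *date = "20240101", *region = "us-east-1", *service = "sqs";

    unsigned char k_date[SHA256_DIGEST_LENGTH], k_region[SHA256_DIGEST_LENGTH];
    unsigned char k_service[SHA256_DIGEST_LENGTH], k_signing[SHA256_DIGEST_LENGTH];

    /* kSigning = HMAC(HMAC(HMAC(HMAC("AWS4"+secret, date), region), service), "aws4_request") */
    char seed[128];
    snprintf(seed, sizeof seed, "AWS4%s", secret);
    hmac256((const unsigned char *)seed, (int)strlen(seed), date, k_date);
    hmac256(k_date, SHA256_DIGEST_LENGTH, region, k_region);
    hmac256(k_region, SHA256_DIGEST_LENGTH, service, k_service);
    hmac256(k_service, SHA256_DIGEST_LENGTH, "aws4_request", k_signing);

    /* The final signature is hex(HMAC(kSigning, stringToSign)); kSigning is printed here. */
    for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
        printf("%02x", k_signing[i]);
    printf("\n");
    return 0;
}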
You can use libcurl for the call:
Use the CURLOPT_AWS_SIGV4 option for the signature: https://curl.se/libcurl/c/CURLOPT_AWS_SIGV4.html
You can take a look at CURLOPT_WRITEFUNCTION if you want to store the result into a variable: https://curl.se/libcurl/c/CURLOPT_WRITEFUNCTION.html
And for debugging purpose CURLOPT_VERBOSE can be useful too: https://curl.se/libcurl/c/CURLOPT_VERBOSE.html
Note that you need libcurl 7.75.0 or newer for this.
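For reference, a minimal, untested sketch of that approach could look like this (the queue URL, region and credentials are placeholders; build with gcc sqs_get.c -lcurl against libcurl 7.75.0 or newer):

#include <stdio.h>
#include <string.h>
#include <curl/curl.h>

/* Collect the HTTP response body into a caller-supplied buffer (CURLOPT_WRITEFUNCTION). */
static size_t write_cb(char *data, size_t size, size_t nmemb, void *userp) {
    size_t n = size * nmemb;
    strncat((char *)userp, data, n);    /* fine for small demo responses */
    return n;
}

int main(void) {
    char response[65536] = "";
    CURL *curl = curl_easy_init();
    if (!curl) return 1;

    /* Example SQS ReceiveMessage call via the query API (placeholder URL). */
    curl_easy_setopt(curl, CURLOPT_URL,
        "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"
        "?Action=ReceiveMessage&Version=2012-11-05");

    /* Let libcurl compute the SigV4 signature: "provider1:provider2[:region[:service]]". */
    curl_easy_setopt(curl, CURLOPT_AWS_SIGV4, "aws:amz:us-east-1:sqs");
    curl_easy_setopt(curl, CURLOPT_USERNAME, "YOUR_ACCESS_KEY_ID");
    curl_easy_setopt(curl, CURLOPT_PASSWORD, "YOUR_SECRET_ACCESS_KEY");

    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, response);
    curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L);    /* handy while debugging */

    CURLcode rc = curl_easy_perform(curl);
    if (rc != CURLE_OK)
        fprintf(stderr, "curl failed: %s\n", curl_easy_strerror(rc));
    else
        printf("%s\n", response);

    curl_easy_cleanup(curl);
    return rc == CURLE_OK ? 0 : 1;
}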
I am not finding documentation for custom protocol support.
From what I understand, Gatling has a core engine that does scheduling, thread management, etc., and protocol support is implemented as an Actor?
I am trying to develop a custom protocol (that's basically a shell script that will talk to an external service). The latest reference documentation does not seem to have any reference to how to do this. Any pointers will be greatly appreciated.
If you need to stress test something that is implemented in a shell script, then Gatling probably isn't the best fit. Gatling is designed for stress testing networking protocols, so unless you can duplicate what your shell script is doing in Gatling, expressed in networking protocols, you might want to use something else.
Secondly, if you did implement it, I would check with the core developers of Gatling whether it's something they would consider including (use a GitHub issue to ask). Since the applications of this might not be widespread, they may choose not to include it in their project. If that's the case, you would have to either run your own fork with the implementation or add some sort of plugin architecture to Gatling for third-party extensibility.
So my suggestions are:
Decompose your shell script into the specific network-protocol parts you're interested in stress testing, and implement those in Gatling.
Use a different tool that's designed to run multiple shell scripts at once for stress testing, something like GNU Parallel if you're on a Linux box.
Implement it yourself. There's no documentation on how to do this, but a good starting example is the JMS protocol implementation, which will give you an idea of everything involved.
I'm looking for a message handler for Julia, because I want to integrate it into a bigger project with other services. The other services are using RabbitMQ, but I have not been able to find any RabbitMQ or ActiveMQ drivers for Julia.
Is anyone aware of a message handler driver for Julia or should I just start implementing it on my own?
[UPDATE]
I just noticed that Julia is able to call C and Fortran code, so I thought perhaps I could use the RabbitMQ driver for C.
What do you think about this idea?
Thank you!
I'm not aware of one but have only done a cursory search. There are many Julia libraries which simply wrap an existing and well-understood C API. While getting the package build and install correct this way can be slightly tricky, it saves re-implementing complex protocols. There doesn't seem to be much dogma in the community about trying to make 'pure Julia' packages where there's no clear benefit.
A client of ours is asking us to implement a module in C in the Apache web server for performance reasons. This module should handle RESTful URIs, access a database, and return results in JSON format. Many people here have recommended Python mod_wsgi instead, but for simplicity-of-programming reasons. Can anyone tell me if there is a significant difference in performance between the mod_wsgi Python solution and an Apache + C module? Any anecdotes? Pointers to some study posted online?
This module should handle RESTful URIs, access a database, and return results in JSON format.
That sounds like the bulk of the work is I/O bound so you will not get much of a performance boost by using C.
Here is the strategy I would recommend.
Implement in Python
After getting it done, profile the code to see if there are any CPU bottlenecks.
Implement just the bottleneck portions in C (a rough sketch of that step follows below).
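To make step 3 concrete, here is a rough sketch of a tiny CPython extension module written in C; the module and function names (hotspot, dot) are invented for the example. Build it with something like gcc -shared -fPIC $(python3-config --includes) hotspot.c -o hotspot.so and call it from Python as hotspot.dot([1.0, 2.0], [3.0, 4.0]).

#define PY_SSIZE_T_CLEAN
#include <Python.h>

/* Dot product of two equal-length lists of numbers, done in C. */
static PyObject *dot(PyObject *self, PyObject *args) {
    PyObject *xs, *ys;
    if (!PyArg_ParseTuple(args, "O!O!", &PyList_Type, &xs, &PyList_Type, &ys))
        return NULL;
    Py_ssize_t n = PyList_GET_SIZE(xs);
    if (PyList_GET_SIZE(ys) != n) {
        PyErr_SetString(PyExc_ValueError, "lists must have equal length");
        return NULL;
    }
    double acc = 0.0;
    for (Py_ssize_t i = 0; i < n; i++) {
        double a = PyFloat_AsDouble(PyList_GET_ITEM(xs, i));
        if (a == -1.0 && PyErr_Occurred()) return NULL;
        double b = PyFloat_AsDouble(PyList_GET_ITEM(ys, i));
        if (b == -1.0 && PyErr_Occurred()) return NULL;
        acc += a * b;
    }
    return PyFloat_FromDouble(acc);
}

static PyMethodDef methods[] = {
    {"dot", dot, METH_VARARGS, "Dot product of two lists of numbers."},
    {NULL, NULL, 0, NULL}
};

static struct PyModuleDef moduledef = {
    PyModuleDef_HEAD_INIT, "hotspot", NULL, -1, methods
};

PyMODINIT_FUNC PyInit_hotspot(void) {
    return PyModule_Create(&moduledef);
}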
G-WAN's ANSI C scripts have shown that C can make a world of difference in terms of speed; see:
gwan.com
So using C might not be a bad idea after all...
If you want the best of both worlds: maintainable code and speed, use Cython (http://cython.org). Cython compiles Python code (with optional type information) to C or C++, which in turn is compiled to system code.
Say I have a fancy new algorithm written in C,
int addone(int a) {
    return a + 1;
}
And I want to deploy as a web application, for example at
http://example.com/addone?a=5
which responds with,
Content-Type: text/plain
6
What is the best way to host something like this? I have an existing setup using Python mod_wsgi on Apache2, and for testing I've just built a binary from the C and call it as a subprocess using Python's os.popen2.
I want this to be very fast and not waste overhead (i.e. I don't need this other Python stuff at all). I can dedicate the whole server to it, re-compile anything needed, etc.
I'm thinking about looking into Apache C modules. Is that useful? Or I may build SWIG wrappers to call directly from Python, but again that seems wasteful if I'm not using Python at all. Any tips?
The easiest way should be to write this program as a CGI app (http://en.wikipedia.org/wiki/.cgi). It would run with any webserver that supports the Common Gateway Interface.
The output format needs to follow the CGI rules.
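For example, a minimal CGI version of an addone-style program might look roughly like this (untested sketch; it assumes the query string arrives as a=5 and skips error handling):

#include <stdio.h>
#include <stdlib.h>

int addone(int a) {
    return a + 1;
}

int main(void) {
    int a = 0;
    const char *qs = getenv("QUERY_STRING");   /* e.g. "a=5" */
    if (qs != NULL)
        sscanf(qs, "a=%d", &a);

    /* CGI rules: headers first, then a blank line, then the body. */
    printf("Content-Type: text/plain\r\n\r\n");
    printf("%d\n", addone(a));
    return 0;
}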
If you want to take full advantage of the web server capabilities then you can write an Apache module in C. That needs a bit more preparation but gives you full control.
Maybe this tiny dynamic web server, written in C and meant to be used with C, can help you. It should be easy to use and self-contained.
It is probably the fastest solution you can adopt, according to the benchmarks shown on its homepage!
This article from yesterday has a good discussion of why not to use C as a web framework. I think an easy solution for you would be to use ctypes; it's certainly faster than starting a subprocess. You should make sure that your method is thread-safe and that you check your input argument.
from ctypes import CDLL, c_int

libcompute = CDLL("./libcompute.so")   # shared library built from the C code
libcompute.addone.argtypes = [c_int]
libcompute.addone.restype = c_int
result = libcompute.addone(5)          # a = 5 (parsed from the request) -> 6
I'm not convinced that your existing general approach isn't the best one. I'm not saying that Apache/Python is necessarily the correct choice, but there is something compelling about separating the concerns in your architecture, composing it of highly focused elements that specialize in their functions within the overall system.
Having your C-based algorithm server decoupled from the HTTP server may give you access to things like HTTP scalability and caching facilities that might otherwise have to be engineered into (or reinvented within) your algorithm component if things are too tightly coupled.
I don't think performance concerns in and of themselves are always the best or only reasons when designing an architecture. For example, a YAWS deployment with a C-based driver could be a very performant option.
I have just set up a web service using libmicrohttpd and have had amazing results. On a quad core I've been handling 20,400 requests a second and the CPU is running at only 58%. This is probably going to be deployed on a server with 8 cores, so I'm expecting much better results. A very simple C service will be even faster!
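For anyone curious what that looks like, here is a rough, untested sketch of an addone-style service on libmicrohttpd (build with gcc addone_mhd.c -lmicrohttpd; on versions before 0.9.71 the handler returns int rather than enum MHD_Result, and the older daemon flag is MHD_USE_SELECT_INTERNALLY):

#include <microhttpd.h>
#include <stdio.h>
#include <stdlib.h>

static int addone(int a) { return a + 1; }

static enum MHD_Result handle(void *cls, struct MHD_Connection *conn,
                              const char *url, const char *method,
                              const char *version, const char *upload_data,
                              size_t *upload_size, void **req_cls) {
    (void)cls; (void)url; (void)method; (void)version;
    (void)upload_data; (void)upload_size; (void)req_cls;

    /* /addone?a=5 -> look up the "a" query argument */
    const char *a = MHD_lookup_connection_value(conn, MHD_GET_ARGUMENT_KIND, "a");
    char body[32];
    int len = snprintf(body, sizeof body, "%d\n", addone(a ? atoi(a) : 0));

    struct MHD_Response *resp =
        MHD_create_response_from_buffer((size_t)len, body, MHD_RESPMEM_MUST_COPY);
    MHD_add_response_header(resp, "Content-Type", "text/plain");
    enum MHD_Result ret = MHD_queue_response(conn, MHD_HTTP_OK, resp);
    MHD_destroy_response(resp);
    return ret;
}

int main(void) {
    struct MHD_Daemon *d = MHD_start_daemon(MHD_USE_INTERNAL_POLLING_THREAD, 8080,
                                            NULL, NULL, &handle, NULL,
                                            MHD_OPTION_END);
    if (!d) return 1;
    getchar();   /* serve requests until a key is pressed */
    MHD_stop_daemon(d);
    return 0;
}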
I have tried G-WAN; it is very good, but it's closed and doesn't play well with virtual environments. I will give @Gil kudos for being good at supporting it here, though. We just had a few issues and found libmicrohttpd works better for our needs.
If you go this route, you may need to update your openssl if you're using CentOS, e.g. from Axivo:
rpm -ivh --nosignature http://rpm.axivo.com/redhat/axivo-release-6-1.noarch.rpm
yum --disablerepo=* --enablerepo=axivo update openssl-devel
You can try Duda I/O; it only requires a Linux host: http://duda.io