OkHttp requests are delayed - ARM

I'm using OkHttp3 with Retrofit to send a simple POST request to a service inside the local network. I use the same implementation in different projects; the only difference in this scenario is that it runs on a Raspberry Pi (armv6l) platform.
The symptom is simple to describe: whether I invoke a request synchronously or asynchronously makes no difference, and the requests are executed with a delay of roughly 30 to 60 seconds.
I don't know how to investigate this further; Wireshark shows me exactly the same delay.
If I invoke the request via cURL, it works as expected.
Thank you for any assistance in solving this issue.

Related

dbus_connection_send_with_reply timeout

When calling dbus_connection_send_with_reply through the D-Bus C API in Linux, I pass in a timeout of 1000ms, but the timeout never occurs when the receiving application doesn't reply.
If the receiving application does send a reply then this is received correctly.
Could this be due to the way that I'm servicing libdbus?
I am calling dbus_connection_read_write and dbus_connection_dispatch periodically for servicing.
Thanks
It is highly recommended that you use a D-Bus library other than libdbus, as libdbus is fiddly to use correctly, as you are finding. If possible, use GDBus or QtDBus instead, as they are much higher-level bindings which are easier to use. If you need a lower-level binding, sd-bus is more modern than libdbus.
If you use GDBus, you can use GMainLoop to implement a main loop to handle timeouts, and set the timeout period with g_dbus_proxy_set_default_timeout() or in the arguments to individual g_dbus_proxy_call() calls. If you use sd-bus, you can use sd-event.
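
To make the timeout concrete, here is a minimal sketch using GDBus from a GMainLoop. The bus name, object path, interface, and method name are placeholders, and the 1000 ms value simply mirrors the timeout mentioned in the question; you may want the system bus rather than the session bus.

#include <gio/gio.h>

static void on_reply(GObject *source, GAsyncResult *res, gpointer user_data)
{
    GError *error = NULL;
    GVariant *ret = g_dbus_proxy_call_finish(G_DBUS_PROXY(source), res, &error);

    if (error != NULL) {
        /* A peer that never replies shows up here as a timeout error. */
        g_printerr("Call failed: %s\n", error->message);
        g_error_free(error);
    } else {
        g_variant_unref(ret);
    }
    g_main_loop_quit((GMainLoop *) user_data);
}

int main(void)
{
    GError *error = NULL;
    GMainLoop *loop = g_main_loop_new(NULL, FALSE);

    /* Bus name, object path and interface below are placeholders. */
    GDBusProxy *proxy = g_dbus_proxy_new_for_bus_sync(
        G_BUS_TYPE_SESSION, G_DBUS_PROXY_FLAGS_NONE, NULL,
        "org.example.Service", "/org/example/Service", "org.example.Interface",
        NULL, &error);
    if (proxy == NULL) {
        g_printerr("Proxy creation failed: %s\n", error->message);
        g_error_free(error);
        return 1;
    }

    /* Either set a default timeout for every call made through this proxy... */
    g_dbus_proxy_set_default_timeout(proxy, 1000);

    /* ...or pass the timeout (in milliseconds) per call. */
    g_dbus_proxy_call(proxy, "SomeMethod", NULL, G_DBUS_CALL_FLAGS_NONE,
                      1000, NULL, on_reply, loop);

    /* The GMainLoop services the connection and fires the timeout. */
    g_main_loop_run(loop);

    g_object_unref(proxy);
    g_main_loop_unref(loop);
    return 0;
}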

Stable HTTP server w/ Arduino WiFi Shield?

I'm building my first Arduino project. You can see the basics of what I've done here: http://lostechies.com/derickbailey/2013/04/10/a-first-look-at-my-arduino-bbq-thermometer/ - it's a network-enabled BBQ thermometer, to tell me when my meat is done cooking on the grill.
I've got this set up with a basic HTTP server to produce a JSON document when an HTTP request is made. All of the HTTP handling code that I'm using comes from the samples that are built into the Arduino IDE software.
The Ethernet Shield version of this code seems to work great. It seems to run for as long as I let the thing stay plugged in / turned on. But when I switch over to my Arduino WiFi shield and upload the WiFi version of my code - which is also based on the samples in the WiFi library demos - it stops responding to requests after about 10 minutes or so.
I'm using an Arduino Uno R3 with the latest Arduino WiFi shield. I've got Arduino IDE v1.0 on my Mac. Everything compiles fine, and seems to run fine for a while.
The HTTP server code very quickly starts having problems. If I put up a simple web page with jQuery.ajax calls that hit the HTTP server every 3 seconds, approximately 1 in 3 requests will fail, almost immediately. Once it's been running for 10+ minutes, the HTTP server code on the Arduino just stops responding entirely.
It's as if I have a resource leak on HTTP clients, and they aren't being cleaned up... but this is a total guess.
For the gist of what I'm doing, see the code found here: https://github.com/arduino/wifishield/tree/master/libraries/WiFi/examples/WifiWebServer
I've literally just copy & pasted this code, turned it on, and then it starts erroring out. I don't even have to modify the code any more than setting the right SSID and password and setting a CORS: * header in the HTTP response. Once I upload it to my Arduino, it starts failing requests frequently, and a few minutes later it stops responding entirely.
Has anyone seen this problem before, with the WiFi shield? Does anyone have better HTTP request handling code for the Arduino WiFi shield?
I can provide more information, my actual code, or whatever else is needed, as well.

DBus synchronous call timeout

I have a DBus server which exposes a method that takes a very long time to complete (about 3 minutes).
The client performs a synchronous call to this method.
The problem is, after exactly 25 seconds the client throws an error because it 'did not receive a reply'.
Unfortunately, I cannot change the client, so I cannot make the call asynchronous, as it should be.
I tried to use this line in my server configuration:
<limit name="reply_timeout">240000</limit>
but the situation does not change.
Any ideas?
That limit parameter configures the bus daemon, which is only one of the processes involved. The others are the client and the server, and the particular D-Bus library used on each end may have a default timeout for synchronous messages. And 25 seconds is indeed the _DBUS_DEFAULT_TIMEOUT_VALUE in libdbus, the C reference implementation.
Changing the timeout in the client, for example in dbus_connection_send_with_reply_and_block, is easier than changing the API to be asynchronous.
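
If the client could be rebuilt, a minimal libdbus sketch along these lines shows where the per-call timeout goes. The destination, object path, interface, and method names are placeholders, and 240000 ms matches the roughly three-minute method from the question.

#include <dbus/dbus.h>
#include <stdio.h>

int main(void)
{
    DBusError err;
    dbus_error_init(&err);

    DBusConnection *conn = dbus_bus_get(DBUS_BUS_SESSION, &err);
    if (conn == NULL) {
        fprintf(stderr, "Connection failed: %s\n", err.message);
        dbus_error_free(&err);
        return 1;
    }

    /* Destination, path, interface and method are placeholders. */
    DBusMessage *msg = dbus_message_new_method_call(
        "org.example.Service", "/org/example/Service",
        "org.example.Interface", "SlowMethod");

    /* The third argument is the per-call timeout in milliseconds;
       240000 ms comfortably covers a method that takes about three minutes. */
    DBusMessage *reply = dbus_connection_send_with_reply_and_block(
        conn, msg, 240000, &err);
    dbus_message_unref(msg);

    if (reply == NULL) {
        fprintf(stderr, "Call failed: %s\n", err.message);
        dbus_error_free(&err);
        return 1;
    }

    dbus_message_unref(reply);
    return 0;
}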

HTTP Server Programming

I'm attempting to write my own HTTP 1.1 server, just for fun and to learn more about HTTP, sockets, and threading.
I think I've gotten off to a good start, delivering only static pages (using C, which I'd prefer to stay with for the time being). I have a test page I wrote a while ago, and the server delivers its ~50 files in 124 ms according to Chrome, without using threads or keep-alive sockets.
I've found it very difficult to get threading/keep-alive working at all. There are few to no resources on the web (that I could find in hours of Googling) that explain keep-alive connections in detail. If anyone could recommend a good book on HTTP server programming, I would greatly appreciate it.
I've done some threading and socket programming before by making a simple chat program, so I have at least some experience with it.
The issue I'm having is that when I attempt to incorporate threads, the client browser sets up multiple connections. Somewhere along the line the server gets confused, the client just sits there waiting for responses, and the server stops doing anything. I send the Connection: Keep-Alive header, but that doesn't change anything, and when I incorporate keep-alive and create a loop for reading requests in the threaded function, it stalls until the connection is closed.
I would appreciate it if someone could give me some pseudocode on how to get keep-alive/threading working so that the client stops creating multiple connections at a time.
A brief description of what's going on:

main function
    load the static pages into a large array of fileinfo structs that hold the file data and length
    create the socket
    set it to listen on port 80
    set it to listen for 10 connections at a time (I know this is low...)
    start an endless loop
        block while waiting for someone to connect
        if it's a localhost connection
            shut down the server
        otherwise
            start a thread (with pthread), passing it the socket variable
    loop

Thread function
    setsockopt for a 3-second timeout on send/recv, and enable keep-alive
    start an endless loop
        read in the request
        if the request timed out, break the loop
        call the Validate Request function
        call the Create Response function
        send the response
        if the request contained a Connection: close header, break the loop
    loop
    close the socket
    return
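
To make the thread-function part of that outline concrete, here is a minimal C sketch under the same assumptions: SO_RCVTIMEO for the 3-second timeout, one pthread per accepted connection, and a fixed placeholder response instead of real parsing. It keeps reading requests on the same socket until the read times out or the client sends Connection: close, and it listens on port 8080 rather than 80 so it can run unprivileged.

#define _GNU_SOURCE            /* for strcasestr */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <netinet/in.h>
#include <unistd.h>

static void *handle_client(void *arg)
{
    int fd = *(int *) arg;
    free(arg);

    /* 3-second timeout on recv so an idle keep-alive connection gets dropped. */
    struct timeval tv = { .tv_sec = 3, .tv_usec = 0 };
    setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

    char buf[8192];
    for (;;) {
        ssize_t n = recv(fd, buf, sizeof(buf) - 1, 0);
        if (n <= 0)                      /* timed out, error, or client closed */
            break;
        buf[n] = '\0';
        /* Simplification: assumes one recv returns one complete request;
           real code would buffer until the blank line ending the headers. */

        /* Placeholder response; a real server would parse the request line,
           look up the file, and set Content-Length from the file size. */
        const char *body = "hello\n";
        char resp[256];
        int len = snprintf(resp, sizeof(resp),
                           "HTTP/1.1 200 OK\r\n"
                           "Content-Length: %zu\r\n"
                           "Connection: keep-alive\r\n\r\n%s",
                           strlen(body), body);
        send(fd, resp, len, 0);

        if (strcasestr(buf, "Connection: close") != NULL)
            break;                       /* client asked us to close */
    }
    close(fd);
    return NULL;
}

int main(void)
{
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    int one = 1;
    setsockopt(srv, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);         /* 80 requires root; 8080 for testing */
    bind(srv, (struct sockaddr *) &addr, sizeof(addr));
    listen(srv, 10);

    for (;;) {
        int *fd = malloc(sizeof *fd);
        *fd = accept(srv, NULL, NULL);
        if (*fd < 0) { free(fd); continue; }
        pthread_t tid;
        pthread_create(&tid, NULL, handle_client, fd);
        pthread_detach(tid);             /* threads clean up after themselves */
    }
}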
I would recommend looking at GNU libmicrohttpd. It focuses squarely on providing a framework upon which to build HTTP 1.1 servers. It is small and supports keep-alive with and without threading. (Personally I use it without threading. It has several threading models too.)
Even if you decide to write your web server from scratch, I would suggest looking at libmicrohttpd to gain insight in not only how the protocol works, but how the library models "the work flow" of a web server in a very clean way. I think it is a mistake to imagine that keep-alive implies threading and I think it is an impediment to understanding keep-alive.
(Regarding Apache's merits as a web server, it is pretty huge, and there is a lot in there not related to protocols, but rather to things like its plugin system and so on.)
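
For reference, a minimal libmicrohttpd sketch looks roughly like this. The port and response body are arbitrary, and depending on the library version the handler returns int (older releases) or enum MHD_Result (0.9.71 and later); MHD_USE_SELECT_INTERNALLY may likewise be spelled MHD_USE_INTERNAL_POLLING_THREAD in newer releases.

#include <microhttpd.h>
#include <stdio.h>
#include <string.h>

/* Access handler: called by the library for every request;
   keep-alive is handled by libmicrohttpd itself. */
static int answer(void *cls, struct MHD_Connection *connection,
                  const char *url, const char *method, const char *version,
                  const char *upload_data, size_t *upload_data_size,
                  void **con_cls)
{
    static char body[] = "hello\n";
    struct MHD_Response *response = MHD_create_response_from_buffer(
        strlen(body), body, MHD_RESPMEM_PERSISTENT);
    int ret = MHD_queue_response(connection, MHD_HTTP_OK, response);
    MHD_destroy_response(response);
    return ret;
}

int main(void)
{
    struct MHD_Daemon *daemon = MHD_start_daemon(
        MHD_USE_SELECT_INTERNALLY, 8080, NULL, NULL,
        &answer, NULL, MHD_OPTION_END);
    if (daemon == NULL)
        return 1;
    getchar();               /* serve (with keep-alive) until Enter is pressed */
    MHD_stop_daemon(daemon);
    return 0;
}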
I'd recommend grabbing the source for Apache and seeing how they handle it. There's not much point in pseudocode when you can see how the real thing works.
Perhaps you could look at Apache's code for some clues. It is written in C.
Hopefully someone will come along and give a more detailed answer :)

Real time embeddable http server library required

Having looked at several available HTTP server libraries, I have not yet found what I am looking for, and I am sure I can't be the first to have this set of requirements.
I need a library which presents a 'pipelined' API. Pipelining describes the HTTP feature where multiple HTTP requests can be sent across a TCP link without waiting for a response. I want a similar feature in the library API, where my application can receive all of those requests without having to send a response first (I will respond, but I want the ability to process multiple requests at a time to reduce the impact of internal latency).
So the web server library will need to support the following flow
1) HTTP Client transmits http request 1
2) HTTP Client transmits http request 2 ...
3) Web Server Library receives request 1 and passes it to My Web Server App
4) My Web Server App receives request 1 and dispatches it to My System
5) Web Server receives request 2 and passes it to My Web Server App
6) My Web Server App receives request 2 and dispatches it to My System
7) My Web Server App receives response to request 1 from My System and passes it to Web Server
8) Web Server transmits HTTP response 1 to HTTP Client
9) My Web Server App receives response to request 2 from My System and passes it to Web Server
10) Web Server transmits HTTP response 2 to HTTP Client
Hopefully this illustrates my requirement. There are two key points to recognise: responses to the Web Server Library are asynchronous, and there may be several HTTP requests passed to My Web Server App with responses outstanding.
Additional requirements are
Embeddable into an existing 'C' application
Small footprint; I don't need all the functionality available in Apache etc.
Efficient; will need to support thousands of requests a second
Allows asynchronous responses to requests; there is a small latency on responses, and given the required request throughput a synchronous architecture is not going to work for me.
Support persistent TCP connections
Support use with Server-Push Comet connections
Open Source / GPL
support for HTTPS
Portable across Linux and Windows; preferably more.
I will be very grateful for any recommendations.
Best regards
You could try libmicrohttpd.
Use the Onion, Luke. This is a lightweight and easy-to-use HTTP server library in C.
For future reference, take a look at libasyncd, which meets your requirements. I'm one of the contributors.
Embeddable into an existing 'C' application
It's written in C.
Small footprint; I don't need all the functionality available in Apache etc.
Very compact.
Efficient; will need to support thousands of requests a second
It's a libevent-based framework and can handle more than that.
Allows asynchronous responses to requests;
It's asynchronous, and it also supports pipelining.
Support persistent TCP connections
Sure, keep-alive.
Support use with Server-Push Comet connections
It's up to how you code your logic.
Open Source / GPL
It's under the BSD license.
support for HTTPS
Yes, it supports HTTPS with OpenSSL.
Portable across linux, windows; preferably more.
It's portable, though Windows isn't supported at this moment; porting it to Windows would be possible.
What you want is something that supports HTTP pipelining. You should make yourself familiar with that topic if you are not already.
Yes, go for libmicrohttpd. It has support for SSL etc. and works on both Unix and Windows.
However, Christopher is spot on in his comment. If every response has a startup cost, you are not going to gain much by pipelining. However, if only the first request has a significant response time, you may gain something.
On the other hand, if each response has a startup time, you may gain a lot by not using pipelining, but by creating a new request for each object instead. Then each request can have its own thread, absorbing the startup costs in parallel. All responses will then be sent "at once" in the optimal case. libmicrohttpd supports this mode of operation with its MHD_USE_THREAD_PER_CONNECTION threading model, as sketched below.
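
As a sketch of that mode, the only real change is the flag passed to MHD_start_daemon. The 2-second sleep below is a stand-in for the per-response startup cost; with MHD_USE_THREAD_PER_CONNECTION, concurrent requests pay it in parallel. The port, body, and handler-return-type caveats are the same as in the earlier libmicrohttpd sketch.

#include <microhttpd.h>
#include <string.h>
#include <unistd.h>

static int slow_answer(void *cls, struct MHD_Connection *connection,
                       const char *url, const char *method, const char *version,
                       const char *upload_data, size_t *upload_data_size,
                       void **con_cls)
{
    sleep(2);                             /* stand-in for backend latency */
    static char body[] = "done\n";
    struct MHD_Response *response = MHD_create_response_from_buffer(
        strlen(body), body, MHD_RESPMEM_PERSISTENT);
    int ret = MHD_queue_response(connection, MHD_HTTP_OK, response);
    MHD_destroy_response(response);
    return ret;
}

int main(void)
{
    /* One thread per connection: slow responses are prepared in parallel. */
    struct MHD_Daemon *daemon = MHD_start_daemon(
        MHD_USE_THREAD_PER_CONNECTION, 8080, NULL, NULL,
        &slow_answer, NULL, MHD_OPTION_END);
    if (daemon == NULL)
        return 1;
    pause();                              /* run until the process is killed */
    MHD_stop_daemon(daemon);
    return 0;
}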
Following up on previous comments and updates...
You don't say how many concurrent connections you'll have but just "a TCP link".
If it's a single connection, then you'll be using HTTP pipelining as previously mentioned; so you would only need a handful of threads — rather than thousands — to process the requests at the head of the pipeline.
So you wouldn't need to have a thread for every request; just a small pool of workers for each connection.
Have you done any testing or implementation so far to show whether you actually do have problems with response latency for pipelined connections?
If your embedded device is powerful enough to cope with thousands of requests per second, including doing TLS setup, encryption and decryption, I would worry about premature optimisation at this level.
Howard,
Have you taken a look at lighttpd? It meets all of your requirements except that it isn't explicitly an embedded web server. But it is open source, and compiling it into your application shouldn't be too hard. You can then write a custom plugin to handle your requests.
Can't believe no one has mentioned nginx. I've read large portions of the source code and it is extremely modular. You could probably get the parts you need working pretty quickly.
uIP or lwip could work for you. I personally use uIP. It's good for a small number of clients and concurrent connections (or as you call it, "pipelining"). However, it's not as scalable or as fast at serving up content as lwip from what I've read. I went with simplicity and small size of uIP instead of the power of lwip, as my app usually only has 1 user.
I've found uIP pretty limited as the number of concurrent connections increases. However, I'm sure that's a limitation of the MAC receive buffers available and not of uIP itself. I think lwip uses significantly more memory in some way to get around this. I just don't have enough Ethernet RAM to support a ton of request packets coming in. That said, I can do background AJAX polling with about 15 ms latency on a 56 MHz processor.
http://www.sics.se/~adam/software.html
I've actually modified uIP in several ways. (Adding a DHCP server and supporting multipart POST for file uploads are the big things.) Let me know if you have any questions.