We have a datalogger running a relatively slow processor (7.3 MHz) that has instructions to send email using SMTP. The datalogger operating system uses axTLS to support TLS connections. When connecting to a service, such as GMail, that requires the use of TLS, it can take the datalogger some time (12 seconds or more) to perform the calculations required to complete the handshake. While this is taking place, GMail times out our client connection. Is there some kind of SSL heartbeat or keepalive message that can be sent while the datalogger finishes the required calculations?
Just imagine that every client took this long to complete the SSL handshake. It would tie up a lot of resources at the server and would look more like a denial-of-service attempt, similar to slowloris.
Thus the client is expected to be a well-behaved citizen on the internet and to complete the handshake quickly. A processor whose speed was state of the art 30 years ago is simply not sufficient for connecting to services that are designed for current clients.
So if you want to use such services from a very low-end device, you should instead transmit the data to your own relay server, which is willing to wait that long; the relay can then deliver the data to the public servers at the expected speed.
I found the RFC (RFC 6520) that defines the Heartbeat extension for TLS and DTLS. It contains the following text:
A HeartbeatRequest message can arrive almost at any time during the lifetime of a connection. Whenever a HeartbeatRequest message is received, it SHOULD be answered with a corresponding HeartbeatResponse message.

However, a HeartbeatRequest message SHOULD NOT be sent during handshakes. If a handshake is initiated while a HeartbeatRequest is still in flight, the sending peer MUST stop the DTLS retransmission timer for it. The receiving peer SHOULD discard the message silently, if it arrives during the handshake.
This leads me to believe that, although it MAY be possible to extend the handshake timeout on the server side, heartbeat messages are unlikely to help, since they must not be sent during the handshake.
Related
When a client connects to my server side and then switches to a VPN or something, the server side still says the socket is alive and still tries to read from it. I tried using another thread to constantly check all my sockets with read and close any that return -1, but it still doesn't do anything.
It depends very much on what type of protocol you use, but the generalized answer is: yes and no. You have to learn the network protocol stack to know what you can do in your situation, the details of which you did not disclose.
The usual way to solve this problem is to establish some policy or two-way communication. E.g. if there has been no data or "I'm alive" message from client X for duration Y, we close the connection. Or, send a regular "ping" message to client C and expect a response before period Y expires.
If we're talking TCP, and the client's connection is properly closed, a FIN is sent to the server, so the server will know the connection is closed: read/recv will return 0 bytes, indicating EOF.
But you're asking about the times when the client becomes unable to communicate with the server. Detecting an absence of messages is necessarily done using a timeout.
You can have the server "ping" the client (send a message to which the client must respond) periodically.
You can have the client send a message periodically (a "heartbeat") when idle.
Either way, no message (of any kind) for X seconds indicates a broken connection.
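As a minimal sketch of that timeout bookkeeping, assuming POSIX sockets (TIMEOUT_SECS and both function names are placeholders of mine):

    #include <time.h>
    #include <unistd.h>

    #define TIMEOUT_SECS 30          /* the "X seconds" above; tune to taste */

    static time_t last_heard;        /* refreshed on every message from the peer */

    /* Call after every successful recv(), whether data, ping or heartbeat. */
    void note_activity(void)
    {
        last_heard = time(NULL);
    }

    /* Call periodically; treats prolonged silence as a broken connection. */
    void check_liveness(int fd)
    {
        if (time(NULL) - last_heard > TIMEOUT_SECS)
            close(fd);
    }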
If you enable the SO_KEEPALIVE socket option on each new TCP connection, the OS will automatically ping the remote side periodically to see if it still responds, and close the connection if it doesn't. The default timeout is several hours, but many OSes allow you to configure a lower timeout on a per-socket basis. Unfortunately, each one differs in how this is done. Linux, for example, uses the TCP_KEEPIDLE socket option. NetBSD (and probably other BSDs) uses TCP_KEEPALIVE. And so on.
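On Linux, for example, the whole thing can be set up per socket like this (a sketch; the timer values are arbitrary and the TCP_KEEP* options are Linux-specific):

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>

    int enable_keepalive(int fd)
    {
        int on = 1;
        int idle = 30;        /* seconds of idleness before the first probe */
        int interval = 5;     /* seconds between unanswered probes */
        int count = 3;        /* unanswered probes before the connection drops */

        if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof on) < 0)
            return -1;
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof idle);
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof interval);
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof count);
        return 0;
    }

On NetBSD you would swap TCP_KEEPIDLE for TCP_KEEPALIVE.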
What I'm doing
I'm implementing a websocket server on a Stellaris board, as the title says. At the moment I'm able to establish a connection to the client and send a few frames.
The way I'm implementing the websocket
The way I'm developing it is something like master-slave communication. Whenever the client sends a string, the server decodes it and then answers. At the moment I'm simply responding to the character 'e', which is designed to be just a counter. The thing is that I implemented the websocket on the client side so that it sends 'e' whenever it receives a message and then displays the message on the page.
The problem
The problem is that it does about 15 transactions, then I can see the communication being re-transmitted from and to the Stellaris board, and then the connection closes. After the connection closes I noticed that I can't access any other page on the board. It simply doesn't respond anymore.
My assumptions of what may be causing it
This led me to believe that the transactions are happening too fast and there may be an implementation bug, an lwIP bug or a hardware bug (I'm using the enet_io example as a base).
My assumptions on how to fix it
After seeing this, I imagine that what I need is to throttle the string being sent to the microcontroller so that it is sent once a second, or maybe even less often, because at the moment it was doing something like 1000 transactions per second, and sometimes more.
The question
So ... after my trials I still have a few questions that need to be answered. Do websockets need this kind of relationship? Where client asks and server serves? Or can I simply stream data from the server to the client as long as the connection is open? Is my supposition that slowing down my rates will work?
Do websockets need this kind of relationship [request-response]? Where client asks and server serves? Or can I simply stream data from the server to the client as long as the connection is open?
The Websocket protocol doesn't require a request-response model (except for the connection establishing handshake).
The server can stream data to the client without worrying about any response or request from the client.
However, it's common practice to get a response or a ping from a client once in a while, just to know they're alive.
This allows the client to renew a connection if a message or ping fails to reach the server - otherwise the client might not notice an abnormally dropped connection (it will just assume no updates are being sent because there's no new data).
It also allows the server to know a connection is still alive even when no information is being exchanged.
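If the server is to send those pings itself, a ping is just a tiny control frame. A sketch per RFC 6455 (ws_build_ping is a name of mine; server-to-client frames are sent unmasked):

    #include <stdint.h>
    #include <string.h>

    /* Build an unmasked WebSocket ping frame into out[]; returns the frame
       length, or 0 if the payload is too big (control frames carry at most
       125 payload bytes). */
    size_t ws_build_ping(uint8_t *out, const uint8_t *payload, size_t len)
    {
        if (len > 125)
            return 0;
        out[0] = 0x89;              /* FIN = 1, opcode 0x9 = ping */
        out[1] = (uint8_t)len;      /* mask bit clear: server frames are unmasked */
        memcpy(&out[2], payload, len);
        return len + 2;
    }

The client (a browser does this automatically) must answer with a pong frame carrying the same payload.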
Is my supposition that slowing down my rates will work?
I guess this question becomes less relevant due to the first question's answer... however, I should probably note that the web socket client (often a browser) will have limited resources and a different memory management scheme.
Browsers are easy to overwhelm with too much data because they often keep references to all the exchanges since the page was loaded (or refreshed).
This is especially true when logging events to a browser's console.
I am currently working on an embedded C project using MQTT 3.1.1 and the mosquitto broker 1.4.3. The issue I have is that when the client board is publishing and subscribed to the same topic, after a random number of messages the client is blocked and the connection times out.
I am trying to send a string message, 25 bytes, over a 3G network. Using QoS 2 on both pub & sub, I have tried different keepalive settings on the client (15s <-> 120s) and different delays between messages (2000ms <-> 300000ms). On the broker I have tried different settings as well, but nothing seems to work. Is it possible to send messages using QoS 2 over a 3G network, or am I expecting too much?
We want to guarantee the transfer of some data that is critical, so if this is not possible with MQTT, is there a better alternative?
A keepalive of 120ms sounds bogus.
Keepalive is there for the broker to detect that a client may have gone missing, without having to wait for the TCP connection to time out. You would typically use a keepalive in the range of seconds, if not minutes.
With a keepalive of 120ms, you have to send a PING packet at least every 100ms or so (or do any other MQTT exchange in that time frame), so it might explain why you are introducing so much latency in your scenario – and probably killing your 3G data plan too ;-)
I suggest you start using a keep-alive of 30s to see if that improves things.
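With libmosquitto (the client library matching the mosquitto broker named in the question), the keepalive is simply the last argument to mosquitto_connect(). A sketch, with the broker host and topic as placeholders:

    #include <mosquitto.h>
    #include <string.h>

    static void on_publish(struct mosquitto *mosq, void *userdata, int mid)
    {
        mosquitto_disconnect(mosq);   /* QoS 2 flow finished for this message */
    }

    int publish_once(const char *payload)
    {
        mosquitto_lib_init();
        struct mosquitto *mosq = mosquitto_new("board-client", true, NULL);
        mosquitto_publish_callback_set(mosq, on_publish);

        /* 30 s keepalive: the library sends PINGREQs from its network loop */
        if (mosquitto_connect(mosq, "broker.example.com", 1883, 30) != MOSQ_ERR_SUCCESS)
            return -1;

        mosquitto_publish(mosq, NULL, "sensors/data",
                          (int)strlen(payload), payload, 2, false);  /* QoS 2 */
        mosquitto_loop_forever(mosq, -1, 1);  /* returns after the disconnect */

        mosquitto_destroy(mosq);
        mosquitto_lib_cleanup();
        return 0;
    }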
I am working on a client/server model based on Berkeley sockets and have almost finished, but I'm stuck on a way to know that all of the data has been received while minimising the processing executed on the client side.
The client I am working with has very little memory and battery and is to be deployed in remote conditions. This means that wherever possible I am trying to avoid processing (and therefore battery loss) on the client side. The following conditions on the client are outside of my control:
The client sends its data 1056 bytes at a time until it has run out of data to send (I have no idea where the number 1056 came from, but if you think you know, I would be very interested)
The client is very unpredictable in when it will send the data (it is attached to a wild animal and sends data determined by connection strength and battery life)
The client has an unknown amount of data to send at any given time
The data is transmitted through a GSM-enabled phone tag (not sure that this is relevant, but I'm assuming that extra information can only help)
(I am emulating the data I am expecting to receive from the client through localhost. If it seems to work, I will ask the company where I am interning to invest in a static IP address to allow "real" TCP transfers; if it doesn't, I won't. I don't think this is relevant but, again, I would rather provide too much information than too little.)
At the moment I am using a while loop and incrementing the number of bytes received in order to recv() each of the 1056-byte sections. My problem is that the server needs to receive an unknown number of these. To me, the most obvious solutions are to send the number of sections in an initial header from the client, or to mark the last section in some way. However, both of these approaches would require processing on the client side. I was wondering: is there a way to check from the server side whether the client has closed its socket? Or would something like closing the connection from the server after a pre-determined period without information from the client be feasible? If neither is possible, I would love to hear any other suggestions.
TLDR: What condition can I use here to minimise client-side processing?
while (!(/* client has run out of data to send */)) {
    receive1056Section();
}
Also, I know that it is bad practice to make a Stack Overflow account and immediately ask a question; I didn't know what else to do, I'm sorry. Please don't hesitate to be mean if I've missed something very obvious.
Here is a suggestion for how to do the interaction:
The client:
Client connects to server via tcp.
Client sends chunks of data until all data has been sent. Flush the send buffer after each chunk.
When it is done the client issues a shutdown on the socket, sleeps for a couple of seconds and then closes the connection.
The client then sleeps until the next transmission. If the transmission was unsuccessful, the sleep time should be shorter, to prevent unsent data from overflowing the available memory.
If the client is unable to connect for an extended period of time, you would have to discard data that doesn't fit in the memory.
I am assuming that sleep reduces power consumption.
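A sketch of that client flow, assuming POSIX sockets (the server address, port and get_next_chunk() are placeholders):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    #define CHUNK 1056

    /* Hypothetical data source: fills buf, returns bytes written, 0 when done. */
    ssize_t get_next_chunk(char *buf, size_t cap);

    int transmit_all(const char *server_ip, int port)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(port);
        inet_pton(AF_INET, server_ip, &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
            close(fd);
            return -1;               /* keep the data buffered; retry sooner */
        }

        char chunk[CHUNK];
        ssize_t len;
        while ((len = get_next_chunk(chunk, sizeof chunk)) > 0)
            send(fd, chunk, (size_t)len, 0);

        shutdown(fd, SHUT_WR);       /* tell the server there is no more data */
        sleep(2);                    /* give the FIN a moment before closing */
        close(fd);
        return 0;
    }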
The server:
The server program can be single-threaded unless you need massive scalability. It listens for incoming connections on the agreed port.
Whenever a client connects, a new socket is created.
Use select() to see which sockets have data (don't forget to include the listening socket!), and non-blocking reads to read from the sockets.
When you get the appropriate indication (no more data to read and the other side has shut down its side of the connection, i.e. recv() returns 0), you can close that socket.
This should work fine up to a couple of thousand simultaneous connections.
Example that handles many of the difficulties of implementing a server
Assuming the implementation is in C.
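A rough sketch of that select() loop (this is mine, not the linked example; the port and the handling of received bytes are placeholders):

    #include <errno.h>
    #include <fcntl.h>
    #include <netinet/in.h>
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define PORT 5000

    int main(void)
    {
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(PORT);
        bind(listener, (struct sockaddr *)&addr, sizeof addr);
        listen(listener, 16);

        fd_set all;
        FD_ZERO(&all);
        FD_SET(listener, &all);
        int maxfd = listener;

        for (;;) {
            fd_set ready = all;              /* select() modifies its sets */
            if (select(maxfd + 1, &ready, NULL, NULL, NULL) < 0)
                continue;

            for (int fd = 0; fd <= maxfd; fd++) {
                if (!FD_ISSET(fd, &ready))
                    continue;
                if (fd == listener) {        /* new client connecting */
                    int client = accept(listener, NULL, NULL);
                    fcntl(client, F_SETFL, O_NONBLOCK);
                    FD_SET(client, &all);
                    if (client > maxfd)
                        maxfd = client;
                } else {
                    char buf[1056];          /* one section at a time, at most */
                    ssize_t n = recv(fd, buf, sizeof buf, 0);
                    if (n == 0) {            /* peer shut down: transfer done */
                        close(fd);
                        FD_CLR(fd, &all);
                    } else if (n > 0) {
                        /* store/process the n bytes received here */
                    } else if (errno != EAGAIN && errno != EWOULDBLOCK) {
                        close(fd);           /* real error: drop the socket */
                        FD_CLR(fd, &all);
                    }
                }
            }
        }
    }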
If an intermediate link through which a TCP connection was sending data goes down, will the sockets at both ends immediately become unusable for sending and receiving data?
If the link comes back up after 5-6 seconds, can the same sockets be used to send and receive packets?
The TCP/IP protocol suite was designed to work over unreliable links. TCP simply keeps retransmitting unacknowledged segments, with increasing backoff, until its own much longer timers expire, so the sockets stay usable. If the connection comes back after a few seconds, the applications only notice a drop in throughput.