Minimising client processing - C socket programming

I am working on a client/server model based on Berkeley sockets and have almost finished, but I'm stuck on finding a way to know that all of the data has been received whilst minimising the processing being executed on the client side.
The client I am working with has very little memory and battery and is to be deployed in remote conditions. This means that wherever possible I am trying to avoid processing (and therefore battery loss) on the client side. The following conditions on the client are outside of my control:
The client sends its data 1056 bytes at a time until it has run out of data to send (I have no idea where the number 1056 came from, but if you think that you know I would be very interested)
The client is very unpredictable in when it will send the data (it is attached to a wild animal and sends data determined by connection strength and battery life)
The client has an unknown amount of data to send at any given time
The data is transmitted through a GRSM enabled phone tag (not sure that this is relevant, but I'm assuming that extra information could only help)
(I am emulating the data I expect to receive from the client through localhost. If it seems to work, I will ask the company where I am interning to invest in a static IP address to allow "real" TCP transfers; if it doesn't, I won't. I don't think this is relevant but, again, I would rather provide too much information than too little)
At the moment I am using a while loop and incrementing the number of bytes received in order to recv() each of the 1056-byte sections. My problem is that the server needs to receive an unknown number of these. To me, the most obvious solutions are to send the number of sections to be received in an initial header from the client, or to mark the last section being sent in some way. However, both of these approaches would require processing on the client side. I was wondering if there is a way to check from the server side whether the client has closed its socket? Or would something like closing the connection from the server after a pre-determined period of time without information from the client be feasible? If neither of these is possible, then I would love to hear any other suggestions.
TLDR: What condition can I use here to minimise client-side processing?
while (!(/* client has run out of data to send */)) {
    receive1056Section();
}
Also, I know that it is bad practice to make a Stack Overflow account and immediately ask a question, but I didn't know what else to do, I'm sorry. Please don't hesitate to be mean if I've missed something very obvious.

Here is a suggestion for how to do the interaction:
The client:
Client connects to server via tcp.
Client sends chunks of data until all data has been sent. Flush the send buffer after each chunk.
When it is done the client issues a shutdown on the socket, sleeps for a couple of seconds and then closes the connection.
The client then sleeps until the next transmission. If the transmission was unsuccessful, the sleep time should be shorter to prevent unsent data from overflowing the available memory.
If the client is unable to connect for an extended period of time, you would have to discard data that doesn't fit in the memory.
I am assuming that sleep reduces power consumption.
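A minimal sketch of the client steps above, assuming a get_next_chunk() helper that pulls data from the tag's storage; that helper, the function name and the abbreviated error handling are illustrative assumptions, not part of the original code:

#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

#define CHUNK_SIZE 1056

/* Assumed helper: fills the buffer from the tag's storage and
 * returns 0 when there is nothing left to send. */
extern ssize_t get_next_chunk(char *buf, size_t maxlen);

int send_all_data(const char *server_ip, unsigned short server_port)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0)
        return -1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(server_port);
    inet_pton(AF_INET, server_ip, &addr.sin_addr);

    if (connect(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        close(sock);
        return -1;              /* try again after a shorter sleep */
    }

    char chunk[CHUNK_SIZE];
    ssize_t len;
    while ((len = get_next_chunk(chunk, sizeof(chunk))) > 0) {
        if (send(sock, chunk, (size_t)len, 0) < 0)
            break;
    }

    shutdown(sock, SHUT_WR);    /* tell the server "no more data" */
    sleep(2);                   /* give the stack a moment to drain */
    close(sock);
    return 0;
}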
The server:
The server program can be single-threaded unless you need massive scalability. It is listening for incoming connections on the agreed port.
Whenever a client connects, a new socket is created.
Use select() to see which sockets have data (don't forget to include the listening socket!), and non-blocking reads to read from the sockets.
When you get the appropriate error (no more data to read and the other side has shut down its side of the connection), then you can close that socket.
This should work fine up to a couple of thousand simultaneous connections.
Example that handles many of the difficulties of implementing a server
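For concreteness, here is a minimal single-threaded sketch of that select() loop; the port number, buffer handling and error checking are simplified assumptions. Note that recv() returning 0 is the condition the original question was looking for: it signals that the client has shut down its side of the connection.

#include <unistd.h>
#include <netinet/in.h>
#include <sys/select.h>
#include <sys/socket.h>

#define PORT 5000    /* illustrative port number */

int main(void)
{
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(PORT);
    bind(listener, (struct sockaddr *)&addr, sizeof(addr));
    listen(listener, 16);

    fd_set master;
    FD_ZERO(&master);
    FD_SET(listener, &master);
    int fdmax = listener;

    for (;;) {
        fd_set readfds = master;        /* select() modifies its argument */
        if (select(fdmax + 1, &readfds, NULL, NULL, NULL) < 0)
            break;

        for (int fd = 0; fd <= fdmax; fd++) {
            if (!FD_ISSET(fd, &readfds))
                continue;

            if (fd == listener) {       /* new client connecting */
                int client = accept(listener, NULL, NULL);
                if (client >= 0) {
                    FD_SET(client, &master);
                    if (client > fdmax)
                        fdmax = client;
                }
            } else {                    /* data (or shutdown) from a client */
                char buf[1056];
                ssize_t n = recv(fd, buf, sizeof(buf), 0);
                if (n <= 0) {           /* 0: client shut down, <0: error */
                    close(fd);
                    FD_CLR(fd, &master);
                } else {
                    /* store_section(buf, n); -- hand the bytes to your storage */
                }
            }
        }
    }
    return 0;
}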

Related

Websockets on stellaris board running lwIP 1.3.2

What I'm doing
I'm implementing a websocket server on a stellaris board as the title says. At the moment I'm able to establish connection to the client and send a few frames.
The way I'm implementing the websocket
The way I'm developing it is something like a master slave communication. Whenever the client sends a string, the server decodes it and then answers. At the moment I'm simply responding to a character 'e', which is designed to be just a counter. The thing is that I implemented the websocket on the client side to send 'e' whenever it receives a message and then displays the message on the page.
The problem
The problem is that it does about 15 transactions, then I can see the communication being re-transmitted from and to the stellaris board, and then the communication closes. After the connection closes I noticed that I can't access any other page on the board. It simply doesn't respond anymore.
My assumptions of what may be causing it
This led me to believe that the transactions are happening too fast and there may be an implementation bug, lwIP bug or hardware bug (I'm using the enet_io example as a base).
My assumptions on how to fix it
After seeing this, I imagine that what I need is to control the string being sent to the microcontroller so that it sends once a second, or maybe even less often, because at the moment it is doing something like 1000 transactions per second, and sometimes more.
The question
So ... after my trials I still have a few questions that need to be answered. Do websockets need this kind of relationship? Where client asks and server serves? Or can I simply stream data from the server to the client as long as the connection is open? Is my supposition that slowing down my rates will work?
Do websockets need this kind of relationship [request-response]? Where client asks and server serves? Or can I simply stream data from the server to the client as long as the connection is open?
The Websocket protocol doesn't require a request-response model (except for the connection establishing handshake).
The server can stream data to the client without worrying about any response or request from the client.
However, it's common practice to get a response or a ping from a client once in a while, just to know they're alive.
This allows the client to renew a connection if a message or ping fails to reach the server - otherwise the client might not notice an abnormally dropped connection (it will just assume no updates are being sent because there's no new data).
It also allows the server to know a connection is still alive even when no information is being exchanged.
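For illustration, a server-initiated ping in RFC 6455 is just a two-byte unmasked frame (servers send unmasked frames). The sketch below assumes sock is an already-upgraded WebSocket connection and is not based on the lwIP example code:

#include <stdint.h>
#include <sys/socket.h>

static int ws_send_ping(int sock)
{
    /* 0x89 = FIN bit set + opcode 0x9 (ping); 0x00 = no payload, no mask bit */
    const uint8_t frame[2] = { 0x89, 0x00 };
    return send(sock, frame, sizeof(frame), 0) == (ssize_t)sizeof(frame) ? 0 : -1;
}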
Is my supposition that slowing down my rates will work?
I guess this question becomes less relevant due to the first question's answer... however, I should probably note that the web socket client (often a browser) will have limited resources and a different memory management scheme.
Browsers are easy to overwhelm with too much data because they often keep references to all the exchanges since the page was loaded (or refreshed).
This is especially true when logging events to a browser's console.

How to handle when a Client or Server is Down in a UDP Application

I am developing a Windows application for Client/Server communication using UDP, but since UDP is connectionless, whenever a Client goes down the Server does not know that the Client is off and keeps sending the data. The same is the case when the Server is down.
How can I handle this condition so that whenever either the Client or the Server is down, the other party knows about it and can handle it?
Waiting for reply.
What you are asking is beyond the scope of UDP. You'd need to implement your own protocol, over UDP, to achieve this.
One simple idea could be to periodically send keepalive messages (TCP on the other hand has this feature).
You can have a simple implementation as follows:
Have a background thread keep sending those messages and waiting for replies.
Upon receiving replies, you can populate some sort of data structure or a file with a list of alive devices.
Your other main thread (or threads) can have the following changes:
Before sending any data, check if the client you're going to send to is present in that file/data structure.
If not, skip this client.
Repeat the above for all remaining clients in the populated file/data structure.
One problem I can see in the above implementation is analogous to the RAW (read-after-write) hazard, from the main thread's perspective.
Use the following analogy for the RAW hazard:
i1 = Your background thread which sends the keepalive messages.
i2 = Your main thread (or threads) which send/receive data and do your other tasks.
The RAW hazard here would be when i2 tries to read the data structure/file which is populated by i1 before i1 has updated it.
This means (worst case), i2 will not get the updated list and it can miss out a few clients this way.
If this loss would be critical, I suggest that you have some sort of mechanism whereby i1 signals i2 when it completes any ongoing writing.
If this loss is not critical, then you can skip the above mechanism to make your program faster.
Explanation for Keepalive Messages:
You just need to send a very lightweight message (it usually carries no data, just the header information). Make sure this message is unique; you do not want another message being interpreted as a keepalive message.
You can send this message using a sendto() call to a broadcast address. After you finish sending, wait for replies for a certain timeout using recv().
Log every reply in a data structure/file. After the timeout expires, have the thread go to sleep for some time. When that time expires, repeat the above process.
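A rough sketch of one keepalive round along those lines; the port, timeout, "KEEPALIVE" payload and the mark_alive() bookkeeping are all placeholders for illustration:

#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/time.h>

#define KA_PORT 6000    /* assumed keepalive port */

static void keepalive_round(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    int yes = 1;
    setsockopt(sock, SOL_SOCKET, SO_BROADCAST, &yes, sizeof(yes));

    struct timeval tv = { .tv_sec = 2, .tv_usec = 0 };    /* reply timeout */
    setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

    struct sockaddr_in bcast = { 0 };
    bcast.sin_family = AF_INET;
    bcast.sin_port = htons(KA_PORT);
    bcast.sin_addr.s_addr = htonl(INADDR_BROADCAST);

    const char msg[] = "KEEPALIVE";    /* must be unambiguous */
    sendto(sock, msg, sizeof(msg), 0, (struct sockaddr *)&bcast, sizeof(bcast));

    char reply[64];
    struct sockaddr_in from;
    socklen_t fromlen = sizeof(from);
    /* collect replies until the receive timeout expires */
    while (recvfrom(sock, reply, sizeof(reply), 0,
                    (struct sockaddr *)&from, &fromlen) > 0) {
        /* mark_alive(&from); -- record this client as reachable */
        fromlen = sizeof(from);
    }
    close(sock);
    /* the caller then sleeps for a while and repeats the round */
}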
To help you get started writing good, robust networking code, please go through Beej's Guide to Network Programming. It is absolutely wonderful. It explains many concepts.

recovering transmission from lost TCP connection

I am working on a client/server application, written in C for Linux, where I am replicating data to multiple slave replicas using TCP, and I would like to know how to deal with an unexpected temporary shutdown of one of the replicas (it may be a crash of the Unix process or a hardware power-off).
When I issue the write() syscall to the kernel, the successful return means the data was copied to the socket, but doesn't mean that the receiving end got the data. If the destination is powered off and then powered up, the data must be resent (after establishing a new TCP connection) to the replica from the point where it lost the data.
Let's say I am working with large amounts of data and I don't keep the data that I have already sent (i.e. for which the write() syscall returned success). I keep only the pending data to be sent.
When the replica recovers from the unexpected shutdown and connects again, how do I get, from the kernel, the data that has been written to the socket but wasn't 'ack'-nowledged on the destination host yet?
Or in other words, how do I recover from a loss of a TCP connection and reestablish transmission between client and server from the point where it stopped?
You need to add another level of abstraction on top of TCP. After every piece of data is sent (TCP guarantees that it will get there intact and in order), have the process at the other end send its own kind of ACK in your own higher-level protocol (whatever that is -- be it "ACK\0", "GOT\n" or anything else). On the other side (the originator), read for this reply. If it comes through without error, everything's fine. If you get an error, check the type. If you get ECONNRESET, that means the remote end is dead. From this, you can respond accordingly: wait until you can reconnect, and repeat the data send all over again.
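A sketch of that higher-level ACK exchange, using the example "ACK\0" token from the answer; the function name and the 4-byte reply length are assumptions:

#include <errno.h>
#include <string.h>
#include <sys/socket.h>

/* Returns 0 when the block was delivered and acknowledged,
 * -1 when the connection must be re-established and the block resent. */
static int send_block_with_ack(int sock, const void *buf, size_t len)
{
    if (send(sock, buf, len, 0) != (ssize_t)len)
        return -1;

    char ack[4];
    ssize_t n = recv(sock, ack, sizeof(ack), MSG_WAITALL);
    if (n <= 0) {
        if (n < 0 && errno == ECONNRESET) {
            /* remote end is dead: reconnect later and resend this block */
        }
        return -1;
    }
    return memcmp(ack, "ACK\0", 4) == 0 ? 0 : -1;
}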
There is no way to do what you want through the standard API.
A solution could be to have your client periodically send back a running total of bytes received and verified as written to disk, and to keep a buffer of sent-but-not-acknowledged data on the server. Then when the client reconnects, it sends its last good count, and the server knows where to start retransmitting.
TCP will take care of the sequence numbers it needs internally; you can't make much use of those at the application level.
You need some sequence control at the application level.
In your case here, you could assign a number to each block of data you send. The destination needs to keep persistent track of the last block number it has received. On startup after an unexpected shutdown, the destination needs to communicate back the last block number it processed, and you start sending from there.
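As an illustrative sketch of that reconnect handshake (the 32-bit, network-byte-order block number and the function name are assumptions), the sender could ask the replica where to resume right after connecting:

#include <stdint.h>
#include <arpa/inet.h>
#include <sys/socket.h>

/* The replica is expected to send, immediately after the connection is
 * established, the last block number it has durably processed. Returns the
 * next block number to send, or UINT32_MAX on error. */
static uint32_t query_resume_point(int sock)
{
    uint32_t last_block_net;
    if (recv(sock, &last_block_net, sizeof(last_block_net), MSG_WAITALL)
            != (ssize_t)sizeof(last_block_net))
        return UINT32_MAX;
    return ntohl(last_block_net) + 1;    /* resume at the block after it */
}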
how do I get from the kernel the data that has been written to the socket, but wasn't 'ack'-nowledged on the destination host yet?
Even if you could, this would not be enough. The destination host might very well have ACKed the data, but for whatever reason the ACK could be lost or never sent, while the destination application received and processed that data just fine. So if you used the TCP sequence number in this case, you'd end up with duplicated data.
Another case is that TCP sent back an ACK for the data, and the destination application crashed or shut down just after it read that data but right before it wrote it to disk, so you end up with lost data.

Check how much data has been delivered to the destination using TCP/IP socket

I am developing a server application that needs to send a lot of data to the client. However, the client can get disconnected at any time, and send()/write() on the socket will return an error in this case. I would like to check how much data was actually delivered before the client got disconnected, to be able to continue sending data from the place where it left off when the client reconnects.
Is it possible to check it using sockets API?
No, the sockets API does not give you this information. In fact, it is not possible in general to know this. Depending on the particular way in which the connection failed, the TCP stack on one side generally can't know how much data successfully made it to the other side. The only thing it can know is how much data was acknowledged, which is not the same thing. And considering that other things than TCP/IP might have failed (the local OS, the remote OS, the remote process, the remote application logic), the amount of data that has been acknowledged at the TCP level probably doesn't mean much anyway.
You need to use an end-to-end application protocol to have the remote end acknowledge the data it has received and successfully processed (and committed, if applicable).

Send same info to multiple threads/sockets?

I am writing a server application that simply connects a local serial port to multiple network-connected clients. I am using Linux and C for the server application because the equipment for the program is a router with limited memory.
I have everything set up for multiple clients to connect and send data to the serial port, using a fork() process for each connection.
My problem lies in getting data incoming on the serial port out to the multiple (varying number of) client connections: I need a way for each active socket to get all of the incoming data, and to get it only once. Any help?
Sounds like you need a data queue (buffer) for each connected client. Each time data comes in on the port, you post it to the back of each client's queue. The clients then read the data from the front of their respective queues. Since all the clients will probably read at different rates/times, this will ensure all of them get a copy of the data only once, and you won't get hung up waiting for any one client while more data comes in. Of course, you'll need to allocate a certain amount of memory for each connected client's queue (I'm not sure how many clients you're expecting, and you did say your available memory is limited), and you need to consider what to do if a queue gets full before the client reads all of it.
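A very small per-client ring buffer along those lines might look like the following; the queue size and the drop-on-overflow policy are illustrative choices, not requirements:

#include <stddef.h>

#define QUEUE_SIZE 4096    /* adjust to the router's memory budget */

struct client_queue {
    unsigned char buf[QUEUE_SIZE];
    size_t head;    /* next byte to read */
    size_t tail;    /* next byte to write */
    size_t count;   /* bytes currently queued */
};

/* Called once per connected client whenever serial data arrives. */
static size_t queue_push(struct client_queue *q, const unsigned char *data, size_t len)
{
    size_t pushed = 0;
    while (pushed < len && q->count < QUEUE_SIZE) {
        q->buf[q->tail] = data[pushed++];
        q->tail = (q->tail + 1) % QUEUE_SIZE;
        q->count++;
    }
    return pushed;    /* may be < len if the queue is full (data dropped) */
}

/* Called when the client's socket is writable. */
static size_t queue_pop(struct client_queue *q, unsigned char *out, size_t maxlen)
{
    size_t popped = 0;
    while (popped < maxlen && q->count > 0) {
        out[popped++] = q->buf[q->head];
        q->head = (q->head + 1) % QUEUE_SIZE;
        q->count--;
    }
    return popped;
}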
Presumably you keep a list or some other reference of/to connected clients, why not just loop over that for each bit of information and send it to all of them?
A thread per socket design might not be the best way to solve this. An event driven asynchronous approach should be a much better fit. However, if you must do it with threads, and given that serial ports are slow anyway, building a pipe between the thread listening to the serial port and all the threads talking to the network clients is the most practical. You could do fancy things with rwlocks to move the data, but you'll still need a way for the network threads to wait on both the socket and the data from the serial port, so you need to use file descriptors for both and something like poll.
But seriously, this would likely be much easier and would perform better without the threads. Think of it as a main loop which waits on poll which is watching the network and the serial port, determines which event occurred, and distributes data accordingly. It should be easier all around once you get the idea.
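A rough sketch of that single-threaded poll() loop, assuming serial_fd and listen_fd are already opened and configured; client bookkeeping and error paths are trimmed for brevity:

#include <poll.h>
#include <unistd.h>
#include <sys/socket.h>

#define MAX_FDS 32    /* illustrative cap on simultaneous clients */

void serve(int serial_fd, int listen_fd)
{
    struct pollfd fds[MAX_FDS];
    int nfds = 2;
    fds[0] = (struct pollfd){ .fd = serial_fd, .events = POLLIN };
    fds[1] = (struct pollfd){ .fd = listen_fd, .events = POLLIN };

    for (;;) {
        if (poll(fds, nfds, -1) < 0)
            break;

        if (fds[0].revents & POLLIN) {                /* data from the serial port */
            char buf[256];
            ssize_t n = read(serial_fd, buf, sizeof(buf));
            for (int i = 2; i < nfds && n > 0; i++)   /* fan out to every client */
                send(fds[i].fd, buf, (size_t)n, 0);
        }

        if ((fds[1].revents & POLLIN) && nfds < MAX_FDS) {    /* new client */
            int client = accept(listen_fd, NULL, NULL);
            if (client >= 0)
                fds[nfds++] = (struct pollfd){ .fd = client, .events = POLLIN };
        }

        for (int i = 2; i < nfds; i++) {              /* client sent data or closed */
            if (fds[i].revents & (POLLIN | POLLHUP)) {
                char buf[256];
                ssize_t n = recv(fds[i].fd, buf, sizeof(buf), 0);
                if (n <= 0) {                         /* drop disconnected client */
                    close(fds[i].fd);
                    fds[i--] = fds[--nfds];
                } else {
                    write(serial_fd, buf, (size_t)n); /* forward to the serial port */
                }
            }
        }
    }
}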
