What I'm doing
I'm implementing a WebSocket server on a Stellaris board, as the title says. At the moment I'm able to establish a connection with the client and send a few frames.
The way I'm implementing the websocket
The way I'm developing it is something like master-slave communication. Whenever the client sends a string, the server decodes it and then answers. At the moment I'm simply responding to the character 'e', to which the server replies with a counter value. The client-side WebSocket code sends 'e' whenever it receives a message, and then displays that message on the page.
The problem
The problem is that it completes about 15 transactions; then I can see traffic being retransmitted to and from the Stellaris board, and then the connection closes. After the connection closes, I noticed that I can't access any other page on the board. It simply doesn't respond anymore.
My assumptions of what may be causing it
This led me to believe that the transactions are happening too fast and that there may be a bug in my implementation, in lwIP, or in the hardware (I'm using the enet_io example as a base).
My assumptions on how to fix it
After seeing this, I imagine that what I need is to throttle the string being sent to the microcontroller so that the client sends it once a second, or maybe even less often; at the moment it was doing something like 1000 transactions per second, and sometimes more.
The question
So, after my trials, I still have a few questions that need answering. Do WebSockets need this kind of relationship, where the client asks and the server serves? Or can I simply stream data from the server to the client as long as the connection is open? Is my supposition correct that slowing down my rates will work?
Do WebSockets need this kind of relationship [request-response], where the client asks and the server serves? Or can I simply stream data from the server to the client as long as the connection is open?
The WebSocket protocol doesn't require a request-response model (except for the connection-establishing handshake).
The server can stream data to the client without worrying about any response or request from the client.
However, it's common practice to get a response or a ping from a client once in a while, just to know they're alive.
This allows the client to renew a connection if a message or ping fails to reach the server - otherwise the client might not notice an abnormally dropped connection (it will just assume no updates are being sent because there's no new data).
It also allows the server to know a connection is still alive even when no information is being exchanged.
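For illustration, a server-initiated ping is just a two-byte control header plus a small payload. Below is a minimal sketch in C of building such a frame per RFC 6455; ws_build_ping and the send_raw routine in the usage note are hypothetical names, not part of lwIP or the enet_io example.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Build an unmasked WebSocket ping frame (RFC 6455, opcode 0x9).
   Server-to-client frames are not masked, and a control frame's
   payload must be 125 bytes or fewer. Returns the frame length,
   or 0 if the payload is too large. */
static size_t ws_build_ping(uint8_t *out, const uint8_t *payload, size_t len)
{
    if (len > 125)
        return 0;
    out[0] = 0x80 | 0x09;   /* FIN bit set, opcode 0x9 = ping */
    out[1] = (uint8_t)len;  /* no mask bit on server-to-client frames */
    memcpy(&out[2], payload, len);
    return 2 + len;
}

/* Usage sketch (send_raw is a placeholder for your TCP send routine,
   e.g. tcp_write()/tcp_output() when using lwIP's raw API):

   uint8_t frame[127];
   size_t n = ws_build_ping(frame, (const uint8_t *)"hb", 2);
   send_raw(conn, frame, n);

   A conforming client answers with a pong frame (opcode 0xA). */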
Is my supposition correct that slowing down my rates will work?
I guess this question becomes less relevant given the answer to the first question... however, I should probably note that the WebSocket client (often a browser) will have limited resources and a different memory-management scheme.
Browsers are easy to overwhelm with too much data because they often keep references to all the exchanges since the page was loaded (or refreshed).
This is especially true when logging events to a browser's console.
Related
I am working on a client/server model based on Berkeley sockets and have almost finished, but I'm stuck on how to know that all of the data has been received while minimising the processing executed on the client side.
The client I am working with has very little memory and battery and is to be deployed in remote conditions. This means that wherever possible I am trying to avoid processing (and therefore battery loss) on the client side. The following conditions on the client are outside of my control:
The client sends its data 1056 bytes at a time until it has run out of data to send (I have no idea where the number 1056 came from, but if you think you know, I would be very interested)
The client is very unpredictable in when it will send the data (it is attached to a wild animal and sends data determined by connection strength and battery life)
The client has an unknown amount of data to send at any given time
The data is transmitted through a GSM-enabled phone tag (not sure that this is relevant, but I'm assuming extra information can only help)
(I am emulating the data I expect to receive from the client through localhost. If it seems to work, I will ask the company where I am interning to invest in a static IP address to allow "real" TCP transfers; if it doesn't, I won't. I don't think this is relevant but, again, I would rather provide too much information than too little.)
At the moment I am using a while loop and incrementing the number of bytes received in order to recv() each of the 1056-byte sections. My problem is that the server needs to receive an unknown number of these. To me, the most obvious solutions are to send the number of sections in an initial header from the client, or to mark the last section in some way. However, both of these approaches would require processing on the client side. I was wondering whether there is a way to check from the server side that the client has closed its socket. Or would something like the server closing the connection after a predetermined period without data from the client be feasible? If neither of these is possible, I would love to hear any other suggestions.
TLDR: What condition can I use here to minimise client-side processing?
while (!(/* client has run out of data to send */)) {
    receive1056Section();
}
Also, I know that it is bad practice to make a Stack Overflow account and immediately ask a question; I didn't know what else to do, I'm sorry. Please don't hesitate to be mean if I've missed something very obvious.
Here is a suggestion for how to do the interaction:
The client:
Client connects to server via tcp.
Client sends chunks of data until all data has been sent. Flush the send buffer after each chunk.
When it is done, the client issues a shutdown on the socket, sleeps for a couple of seconds, and then closes the connection.
The client then sleeps until the next transmission. If the transmission was unsuccessful, the sleep time should be shorter to prevent unsent data from overflowing the available memory.
If the client is unable to connect for an extended period of time, you would have to discard data that doesn't fit in the memory.
I am assuming that sleep reduces power consumption.
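A minimal sketch of that client flow, assuming POSIX sockets, an already-connected socket, and the 1056-byte chunk size from the question (error handling and retry logic trimmed for brevity):

#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

#define CHUNK 1056

/* Send everything, signal end-of-data with shutdown(), then close. */
static void transmit(int sock, const char *data, size_t total)
{
    size_t sent = 0;
    while (sent < total) {
        size_t n = total - sent < CHUNK ? total - sent : CHUNK;
        ssize_t rc = send(sock, data + sent, n, 0);
        if (rc < 0)
            break;               /* connection lost: sleep less, retry later */
        sent += (size_t)rc;
    }
    shutdown(sock, SHUT_WR);     /* tells the server no more data is coming */
    sleep(2);                    /* give the stack time to drain */
    close(sock);
}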
The server:
The server program can be single-threaded unless you need massive scalability. It listens for incoming connections on the agreed port.
Whenever a client connects, a new socket is created.
Use select() to see which sockets have data (don't forget to include the listening socket!), and use non-blocking reads to read from the sockets.
When you get the appropriate indication (no more data to read, i.e. recv() returns 0, because the other side has shut down its side of the connection), you can close that socket.
This should work fine up to a couple of thousand simultaneous connections.
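A condensed sketch of that single-threaded loop (IPv4 only, the port number is arbitrary, and error handling is mostly omitted for brevity):

#include <netinet/in.h>
#include <string.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int lsock = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);             /* arbitrary agreed port */
    bind(lsock, (struct sockaddr *)&addr, sizeof addr);
    listen(lsock, 16);

    fd_set all;
    FD_ZERO(&all);
    FD_SET(lsock, &all);                     /* include the listening socket! */
    int maxfd = lsock;

    for (;;) {
        fd_set ready = all;                  /* select() modifies its sets */
        select(maxfd + 1, &ready, NULL, NULL, NULL);
        for (int fd = 0; fd <= maxfd; fd++) {
            if (!FD_ISSET(fd, &ready))
                continue;
            if (fd == lsock) {               /* new client connecting */
                int c = accept(lsock, NULL, NULL);
                FD_SET(c, &all);
                if (c > maxfd)
                    maxfd = c;
            } else {
                char buf[1056];
                ssize_t n = recv(fd, buf, sizeof buf, 0);
                if (n <= 0) {                /* 0 = peer shut down, <0 = error */
                    close(fd);
                    FD_CLR(fd, &all);
                } else {
                    /* process n bytes of this client's data here */
                }
            }
        }
    }
}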
Example that handles many of the difficulties of implementing a server
I realise that I'll get at least one answer along the lines of "(re)write the code so it doesn't hang" but let's assume we don't live in that shiny happy utopia just yet...
In our embedded system we have a big SDK including a web-server (Boa) which is the primary method of user interaction.
It's possible, during certain phases of the moon, that something can cause the web server to hang or become otherwise stuck in such a way that the process appears running normally (not crashed/dead/using 100% CPU) but does not serve any web pages.
So, the question is, how do we test/detect this situation?
To test whether the server is hung, create a TCP socket and connect to port 80 at IP address 127.0.0.1 (the loopback address). Then send the following text over the socket:
GET / HTTP/1.1\r\n\r\n
Most servers will interpret that as a request for index.html. Alternatively, you could implement an undocumented URL for testing (which allows for a shorter, predetermined response), e.g.
GET /test/fdoaoqfaf12491r2h1rfda HTTP/1.1\r\n\r\n
You then need to read the response from the server. This involves using select with a reasonable timeout to determine whether any data came back from the server, and if so, use recv to read the data. The response from the server will consist of a header followed by content. The header consists of lines of text, with a blank line at the end of the header. Lines end with \r\n, so the end of the header is \r\n\r\n.
Getting the content involves calling select and recv until recv returns 0. This assumes that the server will send the response and then close the socket. Some sophisticated servers will leave a socket open to allow multiple requests over the same socket. A simple embedded server should not be doing that. (If your server is trying to use the same socket for multiple requests, then you need to figure out how to turn that feature off.)
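A sketch of that probe in C, assuming POSIX sockets. Note that HTTP/1.1 strictly requires a Host header, but for a liveness check even a 400 response proves the server is answering:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

/* Returns 1 if the local web server answers within timeout_s seconds,
   0 if it appears hung or refuses the connection. */
static int server_alive(int timeout_s)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_port = htons(80);
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
    if (connect(sock, (struct sockaddr *)&addr, sizeof addr) < 0) {
        close(sock);
        return 0;
    }

    const char req[] = "GET / HTTP/1.1\r\n\r\n";
    send(sock, req, sizeof req - 1, 0);

    fd_set rd;
    FD_ZERO(&rd);
    FD_SET(sock, &rd);
    struct timeval tv = { timeout_s, 0 };
    int alive = 0;
    if (select(sock + 1, &rd, NULL, NULL, &tv) > 0) {
        char buf[512];
        alive = recv(sock, buf, sizeof buf, 0) > 0;  /* any bytes = alive */
    }
    close(sock);
    return alive;
}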
That's all very well and good, but you really need to rewrite your code so it doesn't hang.
The most likely cause of the problem is that the server has a bunch of dangling sockets, i.e. connections from clients that were never properly cleaned up. Dangling sockets will eventually prevent the server from accepting more connections, either because the server has a limit on the number of open connections, or because the process that's running the server uses up all of its file descriptors.
The first thing to check is the TCP timeout value. One project that I worked on had a default timeout of 5 hours, which meant that dangling sockets stayed open for 5 hours. A reasonable timeout is 1 minute.
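On Linux, that timeout can be tightened per socket with TCP keepalive options; a sketch follows, with illustrative values that give roughly one minute of detection time (the TCP_KEEP* option names are Linux-specific):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Enable keepalive probes on fd so a dead peer is noticed in about
   a minute (30 s idle + 3 probes x 10 s) instead of hours. */
static void set_short_timeout(int fd)
{
    int on = 1, idle = 30, interval = 10, count = 3;
    setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof on);
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof idle);
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof interval);
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof count);
}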
Then you need to create a client that deliberately misbehaves. Clients can misbehave by
leaving a socket open without reading the server's response
abruptly closing the socket while reading the response
gracefully closing the socket while reading the response
The first situation should be handled by the TCP timeout. The other two need to be properly handled by the server code. Graceful and abrupt socket closure is controlled via the SO_LINGER socket option (set with setsockopt) and the shutdown function. After the client misbehaves, check the number of open file descriptors in the server process to verify that the server has handled the situation correctly.
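For the test client, the abrupt close can be produced with SO_LINGER set to a zero timeout (close() then sends a RST), while the graceful close goes through shutdown(). A sketch of both:

#include <sys/socket.h>
#include <unistd.h>

/* Abrupt close: a RST is sent and unsent data is discarded. */
static void close_abruptly(int sock)
{
    struct linger lin = { 1, 0 };   /* l_onoff = 1, l_linger = 0 */
    setsockopt(sock, SOL_SOCKET, SO_LINGER, &lin, sizeof lin);
    close(sock);
}

/* Graceful close: normal FIN handshake, pending data is flushed. */
static void close_gracefully(int sock)
{
    shutdown(sock, SHUT_RDWR);
    close(sock);
}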
I am developing a server application that needs to send a lot of data to the client. However, the client can get disconnected at any time, and send()/write() on the socket will return an error in that case. I would like to check how much data was actually delivered before the client got disconnected, so that I can continue sending data from where it left off when the client reconnects.
Is it possible to check it using sockets API?
No, the sockets API does not give you this information. In fact, it is not possible in general to know this. Depending on the particular way in which the connection failed, the TCP stack on one side generally can't know how much data successfully made it to the other side. The only thing it can know is how much data was acknowledged, which is not the same thing. And considering that other things than TCP/IP might have failed (the local OS, the remote OS, the remote process, the remote application logic), the amount of data that has been acknowledged at the TCP level probably doesn't mean much anyway.
You need to use an end-to-end application protocol to have the remote end acknowledge the data it has received and successfully processed (and committed, if applicable).
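As one possible shape for such a protocol (the 4-byte big-endian offset framing here is only an example, not a standard): the receiver reports the total number of bytes it has processed and committed, and after a reconnect the sender resumes from that offset.

#include <arpa/inet.h>
#include <stdint.h>
#include <sys/socket.h>

/* Receiver side: after processing and committing data up to `offset`
   bytes from the start of the stream, report that offset back. */
static int send_ack(int sock, uint32_t offset)
{
    uint32_t net = htonl(offset);              /* 4-byte big-endian ack */
    return send(sock, &net, sizeof net, 0) == sizeof net ? 0 : -1;
}

/* Sender side: on reconnect, read the last committed offset and resume
   transmission from there instead of from the beginning. */
static int read_resume_offset(int sock, uint32_t *offset)
{
    uint32_t net;
    if (recv(sock, &net, sizeof net, MSG_WAITALL) != sizeof net)
        return -1;
    *offset = ntohl(net);
    return 0;
}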
I'm a real noob in C. I'm trying to develop my own lock server in C (just for practice), and I have a question. Let's imagine that we have a server written in C and a remote host connected to this server via a socket. When the connection is initiated, my server allocates some memory and keeps a pointer to it. Is it possible to free that memory when the remote host disconnects? How can I catch the disconnect event?
Thank you
In a real-world I/O scenario, you cannot truly detect the disconnection directly. Instead you must either:
Receive a packet that indicates the other side intends to disconnect.
Attempt to transmit a packet, which will fail to be delivered due to a change in connectivity during the "silent" period between communications.
This means that systems which "must" ensure connectivity typically send and receive periodic "dummy" messages to detect the loss of the connection sooner than it would be detected by "regular" traffic alone.
Depending on your application the overhead of the keep-alive messages may not be worth the effort.
The "connection" you have on your side of the network is really just a bunch of data structures which allow you to transmit and receive. The lower "IP" layer of "TCP/IP" is connectionless, that means you will not know if your simulated "connection" is available until you attempt to use it (or receive a package telling you explicitly that the other end will not process any more data).
The read(2) system call will return zero when the other end of the socket closes the connection.
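Tying that back to the question: free the per-connection memory at the point where recv()/read() reports end-of-stream. A sketch, where struct conn is a stand-in for whatever your server allocates per client:

#include <stdlib.h>
#include <sys/socket.h>
#include <unistd.h>

struct conn {                     /* hypothetical per-client state */
    int fd;
    char buf[512];
};

/* Returns 0 while the connection is alive, -1 once it is gone and
   the per-connection memory has been freed. */
static int service(struct conn *c)
{
    ssize_t n = recv(c->fd, c->buf, sizeof c->buf, 0);
    if (n > 0)
        return 0;                 /* got data; handle c->buf here */
    /* n == 0: peer closed the connection; n < 0: error. */
    close(c->fd);
    free(c);                      /* the "pointer" from the question */
    return -1;
}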
I need to detect the presence/absence of internet connection. More precisely, let us suppose that the application is broken up into 2 parts - A and B.
A is responsible for checking whether or not the system is connected to the internet. If it finds that there is no connection, it starts up part B. And as soon as it detects that there is a network connection, it kills B and continues its own work.
What would be the best way to do the A part of the application? Continual pings sound hideous. There has to be a better way of doing this (preferably in C).
With sufficient privilege you can test the various network interfaces and examine their state. This would tell you if any of the interfaces is connected to a network and operating. However, this won't tell you if the connection is actually usable, i.e., connected to the internet (or your local net, if that's all you need). I don't know of any way to do that short of actually using it.
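On Linux and the BSDs, getifaddrs() exposes those interface states; a sketch that reports whether any non-loopback interface is up and running (which, as noted, still says nothing about reachability beyond the local link):

#include <ifaddrs.h>
#include <net/if.h>
#include <stddef.h>

/* Returns 1 if some non-loopback interface is up and running. */
static int link_present(void)
{
    struct ifaddrs *ifap, *ifa;
    int found = 0;
    if (getifaddrs(&ifap) != 0)
        return 0;
    for (ifa = ifap; ifa != NULL; ifa = ifa->ifa_next) {
        unsigned int f = ifa->ifa_flags;
        if ((f & IFF_UP) && (f & IFF_RUNNING) && !(f & IFF_LOOPBACK)) {
            found = 1;
            break;
        }
    }
    freeifaddrs(ifap);
    return found;
}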
Using ICMP (ping) can be useful at a low level, but presumably what you need is a connection to an actual endpoint via TCP/IP to do real work. I would say that you should change the design of your application so that B is responsible for indicating when it is unable to continue due to the absence of resources it relies on, network or otherwise. A and B should communicate so that A is aware of the situation and is able either to kill B or to respond to B terminating itself, and can thus continue its own work.
A lot of companies have measures in place to prevent outgoing ICMP requests or TCP connections to ports other than 80/443, for example, or even to prevent you from reaching the internet directly by (transparently) proxying your traffic.
By "an internet connection" I understand any way to contact the outside world, be it UDP, TCP, or ICMP. Depending on what your application needs to contact the internet for, I would suggest checking over the same protocol, as that is the only thing that matters to your app.
If your application uses HTTP to communicate with an external source, try to connect to a few sites that you expect not to be blacklisted and that have reliable uptime, like google.com, microsoft.com, apple.com, and so on...
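A sketch of that check, assuming POSIX sockets: it counts as connected only if a TCP handshake to port 80 of a well-known host succeeds (this uses a blocking connect; production code would add a timeout):

#include <netdb.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Returns 1 if a TCP connection to host:80 can be established. */
static int can_reach(const char *host)
{
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(host, "80", &hints, &res) != 0)
        return 0;                  /* name resolution failed */

    int ok = 0;
    int sock = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (sock >= 0) {
        ok = connect(sock, res->ai_addr, res->ai_addrlen) == 0;
        close(sock);
    }
    freeaddrinfo(res);
    return ok;
}

/* e.g. can_reach("google.com") || can_reach("microsoft.com") */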
Edit:
I am unsure what the specifics are, so let me give you an example with a hypothetical situation.
Application A collects data on the system it is running on and forwards it to a Web Service listening on yourserverhost.yourcompany.com:80
Application B would basically take over the job of the Web Service when it is down and log everything so no data is lost.
When all is well, App A will be sending the data to your web service
Once this connection drops, you immediately launch App B (the obvious remark here would be: why not keep App B running as a failsafe?)
App A connects to App B and forwards what it had been buffering
App A continues to try to reestablish the connection to your Web Service and once it is back up will request App B to stop
If the problem you are facing is nothing like this, please provide a more concrete description of what App A and App B are supposed to be doing. I will be more than happy to help.
In your code, you have to check whether the internet connection exists by using a socket to open a connection to a website.
First run: ask the user to input the network parameters, like proxy settings. Save this info.
Subsequent runs: use these settings to check for an internet connection. You may simply do a DNS lookup, as sketched below.
If the results are negative, ask the user to check the settings.
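The DNS check can be a single getaddrinfo() call; a minimal sketch (the hostname is only an example):

#include <netdb.h>
#include <string.h>
#include <sys/socket.h>

/* Returns 1 if the configured resolver can resolve the host name. */
static int dns_works(const char *host)
{
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof hints);
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo(host, NULL, &hints, &res) != 0)
        return 0;
    freeaddrinfo(res);
    return 1;                      /* e.g. dns_works("example.com") */
}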
Check whether the cable is connected; if so, ping any internet host, such as google.com:
ping google.com