How would one go about measuring differences in clock time on two different devices?

I'm currently in an early phase of developing a mobile app that depends heavily on timestamps.
A master device is connected to several client devices over Wi-Fi and issues various commands to them. When a client device receives a command, it needs to record the (relative) timestamp at which the command is executed.
While all this is simple enough, I haven't come up with a solution for how to deal with clock differences. For example, the master device might have its clock at 12:01:01, while client A is on 12:01:02 and client B on 12:01:03. Mostly, I can expect these devices to be set to similar times, since they sync over NTP. However, the nature of my application requires millisecond precision, so I would like to safeguard against discrepancies.
A short delay between issuing a command and executing it is fine; an incorrect timestamp of when the command was executed is not.
So far, I'm thinking of something along the lines of having the master device ping each client device to determine transaction time, and then request the client to send its "local" time. Based on this, I can calculate the time difference between master and client. Once the time difference is known, the client can adapt its timestamps accordingly.
I am not very familiar with networking, though, and I suspect that pinging a device is not a very reliable way of establishing transaction time, since a lot of factors apply and latency may change.
I assume that there are many real-world settings where such timing issues are important, and thus there should be solutions already. Does anyone know of any? Is it enough to simply divide response time by two?
Thanks!

Head over to RFC 5905 (NTPv4) and learn from the folks who have really put their noodle to this problem and how they figured it out.
Or you simply make sure NTP is working properly on your servers so that you don't have this problem in the first place.
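The offset calculation NTP uses (and the answer to "is it enough to divide response time by two?") can be sketched in a few lines. This is a minimal version of the four-timestamp exchange from RFC 5905, assuming symmetric network delay:

```c
#include <stdint.h>

/* Four timestamps of one request/response exchange (all in ms):
 *   t1 - client sends the request   (client clock)
 *   t2 - server receives it         (server clock)
 *   t3 - server sends the reply     (server clock)
 *   t4 - client receives the reply  (client clock)
 */

/* Estimated offset of the server clock relative to the client clock.
 * Derivation: t2 = t1 + delay_up + offset and t4 = t3 + delay_down - offset;
 * assuming delay_up == delay_down, the delay terms cancel out. */
int64_t clock_offset_ms(int64_t t1, int64_t t2, int64_t t3, int64_t t4)
{
    return ((t2 - t1) + (t3 - t4)) / 2;
}

/* Round-trip delay of the exchange, excluding server processing time. */
int64_t round_trip_ms(int64_t t1, int64_t t2, int64_t t3, int64_t t4)
{
    return (t4 - t1) - (t3 - t2);
}
```

So "divide by two" is only correct when the path is symmetric; when it isn't, the offset error can be as large as half the round-trip delay. That is why NTP collects many samples and prefers the ones with the smallest round-trip delay.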

Related

Upload in a restricted country is too slow

I'm a C programmer and may write programs on Linux to connect machines over the internet. After I noticed that speedtest.net can't upload, or has a very poor upload speed, I decided to test a simple TCP socket connection to see whether it's really slow, and found that yes, it really is. I've rented a VPS outside of my country. I don't know what the government is doing in the infrastructure, how packets are routed, or how they're restricted. Reproducing what I saw on speedtest.net with a simple socket connection shows me that I don't stand a chance: when the traffic is shaped like this, there's no way through. It also proves that this isn't a restriction on HTTPS or any other application-layer protocol, since even a plain TCP socket connection can't achieve a reasonable speed. The speed is below 10 kilobytes per second! Damn!
In contrast, after getting disappointed, I tried some barrier breakers like the CyberGhost extension for Chrome. I was surprised to see that it can overcome the barrier, increasing the upload speed to about 200 kilobytes per second! How?! They can't be using any method closer to the hardware than sockets.
Now I've come here to consult with you and see what ideas you may have, so that I can write a program, or change the one I've written, based on them.
Thank you

Hosting multiple clients with freemodbus

I am working on a project involving a microcontroller communicating to a PC via Modbus over TCP. My platform is an STM32F4 chip, programming in C with no RTOS. I looked around and found LwIP and Freemodbus and have had pretty good success getting them both to work. Unfortunately, I'm now running into some issues which I'm not sure how to handle.
I've noticed that if I establish connection, then lose connection (by unplugging the Ethernet cable) I will not be able to reconnect (once I've plugged back in, of course). Freemodbus only allows one client and still has the first client registered. Any new clients trying to connect are ignored. It won't drop the first client until after a specific timeout period which, as far as I can tell, is a TCP/IP standard.
My thoughts are...
1. I need a Modbus module that will handle multiple clients. The new client request after communication loss will be accepted, and the first client will eventually be dropped due to the timeout. How do I modify Freemodbus to handle this? Are there examples out there? I've looked into doing it myself and it appears to be a decently sized project.
2. Are there any good Modbus packages out there that handle multiple clients, are not too expensive, and are easy to use? I've seen several threads about various options, but I'm not sure any of them meet exactly what I need, and I've had a hard time finding any on my own. Most don't support TCP, and the ones that do support only one client. Is it generally a bad idea to support multiple clients?
3. Is something wrong with how I connect to the microcontroller from my PC? Why is the PC changing ports every time it tries to reconnect? If it kept the same port it used before, this wouldn't be a problem.
4. Should I drop the client from Freemodbus as soon as I stop communicating? This seems to go against standards, but might work.
I'm leaning towards 1, especially since I'm going to need to support multiple connections eventually anyway. Any help would be appreciated.
Thanks.
If you have a limit on the number of modbus clients then dropping old connections when a new one arrives is actually suggested in the modbus implementation guide (https://www.modbus.org/docs/Modbus_Messaging_Implementation_Guide_V1_0b.pdf)
Nevertheless a mechanism must be implemented in case of exceeding the number of authorized connection. In such a case we recommend to close the oldest unused connection.
It has its own problems but everything is a compromise.
Regarding supporting multiple clients: if you think about a Modbus server on a serial (RS-485) line, it could only ever have one master at a time. Replace the serial cable with TCP and you can see why it's not uncommon to support only one client (and of course it's easier to program). It is annoying, though.
Depending on what you are doing, you won't need the whole Modbus protocol, and implementing the parts you do need is pretty easy. Of course, if you have to support absolutely everything, it's a different prospect. I haven't used Freemodbus, or any other library appropriate to your setup, so I can't help with suggestions there.
Regarding the PC using a different TCP source port each time: that is how TCP is supposed to work, and no fault on your side. If it did reuse the same source port, it wouldn't help you, because e.g. the sequence numbers would be wrong.
Regarding dropping clients: you are allowed to drop clients, though it's better not to. Some clients will send a Modbus command, notice the connection has failed, and reconnect, but not reissue the command. That may be their problem, but it's still nicer not to trigger it where possible. Of course, things like battery life might change the calculation.
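The "close the oldest unused connection" recommendation from the implementation guide is easy to sketch. This is not Freemodbus code, just a hypothetical connection table showing the bookkeeping, assuming you track the last-activity time per client socket:

```c
#include <stddef.h>
#include <time.h>

#define MAX_CLIENTS 4

struct client {
    int    fd;         /* socket fd, or -1 if the slot is free */
    time_t last_used;  /* last time we saw traffic from this client */
};

/* Return the index of the least recently used occupied slot,
 * or -1 if every slot is free. */
int oldest_client(const struct client *c, size_t n)
{
    int oldest = -1;
    for (size_t i = 0; i < n; i++) {
        if (c[i].fd < 0)
            continue;
        if (oldest < 0 || c[i].last_used < c[(size_t)oldest].last_used)
            oldest = (int)i;
    }
    return oldest;
}

/* Return the index of a free slot, or -1 if the table is full. */
int free_slot(const struct client *c, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (c[i].fd < 0)
            return (int)i;
    return -1;
}
```

In the accept path: if free_slot() returns -1, close() the fd in the slot returned by oldest_client() and hand that slot to the new connection. This also covers the unplugged-cable case above, since the stale half-dead client is exactly the oldest unused one.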

Do Erlang timers scale?

In my websocket server developed with Erlang, I would like to use a timer (start_timer/3), for each connection, to terminate the connection if the timeout elapses without receiving a "ping" from the client.
Do Erlang timers scale well, assuming I will have a large number of client connections?
What is a large number of connections? Erlang's VM uses a timer wheel internally to handle timers, so it scales pretty well up to some thousands of connections. Beyond that you might run into trouble.
Usually the trick is to group pids together on timers. This is also what kernels tend to do. If, for instance, you have a timer that has to wake in 200ms, you schedule yourself ahead of time, not on the next 200ms tick but on the one after it. This means you will wait at least 200ms and perhaps up to 400ms, with 300ms being typical. By approximating timers like this, you are able to run many more, since a single timer can wake up large numbers of processes in one go. But depending on the timer frequency and the number of timers, a standard send_after/3 may be enough.
In any case, I would start by assuming it can scale and then handle the problem if it can't by doing approximate timing like envisioned above.
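The grouping trick above is language-agnostic; here is a sketch of the tick arithmetic (in C, purely for illustration). The point is that a process wanting a 200ms timeout doesn't get its own timer; it attaches itself to a shared tick, and to guarantee at least 200ms it skips the immediately next tick:

```c
#include <stdint.h>

/* Given the current time and a shared tick period (both in ms), return
 * the absolute time of the tick a waiter should attach to. Skipping the
 * immediately next tick guarantees the actual wait is at least `period`
 * ms and at most 2*period ms, with ~1.5*period being typical. */
int64_t wakeup_tick_ms(int64_t now_ms, int64_t period_ms)
{
    return (now_ms / period_ms + 2) * period_ms;
}
```

One shared 200ms timer can then wake every process attached to the current tick in one go, instead of the VM tracking one timer per connection.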
A usual pattern for this kind of server is to take advantage of the light weight Erlang processes, and create a server per connection.
You can build your server using, for example, a gen_server behaviour, which provides you with both
the different states needed to manage the connection (waiting for a connection, login, ...) via the State variable, and
an individual timeout for each connection at each state, managed by the VM and the OTP behaviour.
The nice thing is that each server has to take care of a single client, so it is really easier to write.
The init phase should launch one server waiting for a connection.
Then, on connection, that server should launch a new one ready for the next client (ideally through a supervisor launching simple_one_for_one children) and go to the login step, or whatever you want to do.
You will find very interesting information on the site LearnYouSomeErlang, particularly in the chapter http://learnyousomeerlang.com/supervisors and the following ones.

How to get WIFI parameters (bandwidth, delay) on Ubuntu using C

I am a student and I am writing a simple application in the C99 standard. The program should work on Ubuntu.
I have one problem: I don't know how I can get WiFi parameters like bandwidth or delay, and I have no idea how to approach this. Is it possible using standard functions or some Linux API? (I'm a Windows user.)
In general, you don't know the bandwidth or delay of a WiFi device.
Bandwidth and delay are properties of a link, and as far as I know, WiFi drivers do not hold such information.
The closest link-related information available is the SINR.
To measure bandwidth or delay, you will have to write your own code.
Maybe you should tell us more about your concrete problem. For now, I assume that you are interested in the throughput and latency of a specific wireless link, i.e. a link between two 802.11 stations. This could be a link between an access point and a client or between two ad-hoc stations.
The short answer is that there is no such API. In fact, it is not trivial even to estimate these two link parameters. They depend on the signal quality, on the data rate used by the sending station, on the interference, on the channel utilization, on the load of the computer systems at both ends, and probably a lot of other factors.
Depending on the wireless driver you are using it may be possible to obtain information about the currently used data rate and some packet loss statistics for the station you are communicating with. Have a look at net/mac80211/sta_info.h in your Linux kernel source tree. If you are using MadWifi, you may find useful information in the files below /proc/net/madwifi/ath0/ and in the output of wlanconfig ath0 list sta.
However, all you can do is to make a prediction. If the link quality changes suddenly, your prediction may be entirely wrong.

Maintaining connections with many real-time devices

I'm writing a program on Linux to control about 1000 patient monitors at the same time over UDP sockets. I've successfully written a library to parse and send messages to collect the data from a single patient monitor device. There are various scheduling constraints on the devices, listed below:
Each device must constantly receive an alive-request from the computer client within a maximum time period of 300 milliseconds (this may differ between devices), otherwise the connection is lost.
The computer client must send a poll-request to a device within some time period in order to fetch data. I'm polling for about 5 seconds of averaged data from the patient monitor, and am therefore required to send a poll-request every 5 * 3 = 15 seconds. If I fail to send the request within that 15-second time frame, I lose the connection to the device.
Now, I'm trying to extend my current program so that it is capable of handling 1000+ devices at the same time. Right now, my program can efficiently handle and parse the response from just one device. When handling multiple devices, it is necessary to synchronize the responses from the different devices, serialize them, and stream them over a TCP socket so that remote computers can also analyze the data. That is not a problem, though, because it is the well-known multiple-producer, single-consumer problem. My main concern is: what approach should I use to maintain an alive connection with 1000+ devices?
After reading around the Internet and browsing similar questions on this website, I'm mainly considering two options:
Use one thread per device. In order to control 1000+ device, I would end up in making 1000+ threads which does not look feasible to me.
Use a multiplexing approach, selecting the FDs that require attention and dealing with them one at a time. I'm not sure how I would go about it, or whether a multiplexing approach would be able to maintain an alive connection with all the devices given the above two constraints.
I need some suggestions and advice on how to deal with this situation, where you need to control 1000+ real-time devices over UDP sockets. Each device requires an alive-signal every 300 milliseconds (this differs between devices), and a poll request within about 3 times the time interval negotiated during the association phase. For example, patient monitors in the ICU may require real-time (1-second averaged) data, whereas patient monitors in general wards may require 10-second averaged data; the poll periods for the two devices would then be 3*1 = 3 seconds and 3*10 = 30 seconds respectively.
Thanks
Shivam Kalra
For the most part, either approach is at least functionally capable of handling what you describe, but by the sounds of things performance will be a crucial issue. From the figures you have provided, it seems the application could be CPU-bound.
A multithreaded approach has the advantage of using all of the available CPU cores on the machine, but multithreaded programs are notorious for being difficult to make reliable and robust.
You could also use Apache's old tried-and-true forked-worker model: create, say, a separate process to handle a maximum of 100 devices each. You would then need to write code to manage the mapping of connections to processes.
You could also use multiple hosts and some mechanism to distribute devices among them. This would have the advantage of making it easier to handle recovery situations. It sounds like your application could well be mission critical, and it may need to be architected so that if any one piece of hardware breaks then other hardware will take over automatically.
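To make the multiplexing option concrete: with a single epoll/poll loop, the keep-alive work reduces to computing how long the loop may sleep before the earliest alive-request is due. Here is a sketch of that computation over a hypothetical per-device table (the field names are illustrative, not from any library), assuming each device stores its own period:

```c
#include <stddef.h>
#include <stdint.h>

struct device {
    int64_t last_alive_ms;   /* when we last sent the alive-request  */
    int64_t alive_period_ms; /* e.g. 300; may differ between devices */
};

/* Milliseconds until the earliest alive-request is due - usable as the
 * timeout argument of epoll_wait()/poll(). Returns 0 if something is
 * already overdue, INT64_MAX if the table is empty. */
int64_t ms_until_next_alive(const struct device *d, size_t n, int64_t now_ms)
{
    int64_t best = INT64_MAX;
    for (size_t i = 0; i < n; i++) {
        int64_t due  = d[i].last_alive_ms + d[i].alive_period_ms;
        int64_t wait = due - now_ms;
        if (wait < 0)
            wait = 0;
        if (wait < best)
            best = wait;
    }
    return best;
}
```

The event loop then becomes: wait on the sockets with this timeout, handle any readable FDs, send alive-requests (and, analogously, poll-requests) to every device whose deadline has passed, and repeat. A linear scan over 1000 devices per wakeup is cheap; if it ever isn't, replace it with a min-heap keyed on due time.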
