XBee communication in large networks (C)

Here is my situation:
I have a network of 96 XBee S2B and S2C modules. My application runs on an ARM module and has an XBee S2C module. All modules (in total 97 of them) are in the same network and are able to communicate with each other.
The software starts and knows the 64-bit addresses of all modules. It does a network discovery (local AT -> ND) and waits for the responses. With each response, the 16-bit address of the responding module is updated. If a module has not responded to the network discovery, the discovery is sent again every 30 seconds (in most tests, all nodes are discovered after 60 seconds).
Then, with all 64-bit and 16-bit addresses stored, the application sends a message to every node using unicasting. It does not wait between sending the messages. I tried this with 36, 42, 78 and now 96 nodes. With 36 nodes the messages are received by every node within 3 seconds (as expected); with 42 and 78 nodes it takes 4 and 7 seconds respectively. With 96 nodes, however, it takes at least 90 seconds.
There is no outside interference that I can detect and all nodes are within reach (if not, the network discovery would have failed).
I also tried using 64-bit addressing and ignoring the 16-bit addresses; that method takes even longer.
I am using the libxbee3 library made by attie (https://github.com/attie/libxbee3).
My question is: how do I speed up the communication time for the 96 nodes (keeping in mind that the goal is to handle even bigger networks), and why is there such a big difference between 78 and 96 nodes? Why does the network suddenly become so slow?
If there is any more information needed about my situation, I will be happy to provide it. As I manage the code I can perform tests if you need more information.

First off, get an 802.15.4 sniffer and start looking at the traffic to see what's going on. Without that, you're stuck guessing at what might be happening. I haven't worked with 802.15.4 in years, but outside of Ember Desktop (only available from Silicon Labs in expensive development kits) I was pleased with the Ubiqua Protocol Analyzer. You might also want to explore where Wireshark's 802.15.4 sniffing capabilities stand.
Second, try implementing code to wait for a Transmit Status message before sending your next message. Better yet, write code to keep track of multiple outstanding messages and test it out with various settings -- how does the network behave with 1 message waiting on a Transmit Status, versus 5 outstanding messages?
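As a rough illustration, here is a minimal sketch of bounding the number of in-flight messages with a counting semaphore. xbee_send_unicast() and on_tx_status() are hypothetical stand-ins for however your application sends a frame and handles incoming Transmit Status frames (with libxbee3, I believe xbee_conTx() can itself block and report the delivery status):

    /* Sketch: bound the number of in-flight unicasts with a counting
     * semaphore. xbee_send_unicast() and on_tx_status() are hypothetical
     * stand-ins for your app's send path and Transmit Status handler. */
    #include <semaphore.h>

    #define WINDOW 5                /* max messages awaiting a Transmit Status */
    static sem_t tx_window;

    /* Hypothetical: queue one unicast to the given node. */
    void xbee_send_unicast(int node);

    /* Call this from wherever your code receives Transmit Status frames. */
    void on_tx_status(int frame_id, int status) {
        (void)frame_id; (void)status;
        sem_post(&tx_window);       /* free one slot in the window */
    }

    void send_to_all(int node_count) {
        sem_init(&tx_window, 0, WINDOW);
        for (int node = 0; node < node_count; node++) {
            sem_wait(&tx_window);   /* block while WINDOW messages are pending */
            xbee_send_unicast(node);
        }
    }

Varying WINDOW (1, 5, 10, ...) while watching the sniffer should show where the network starts to saturate.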
My guess is that you're running into challenges with the XBee modules managing a routing table for that many nodes. Digi provides a document for working with large XBee networks, which explains how to use Source Routing on a large network. Your central node may need to maintain a routing table and specify routes in outbound messages to improve network throughput.

The thing is, there is a lot of collision and a large amount of overhead on your network in the scenario where 96 nodes are involved.
My suggestion is to cluster your nodes with multiple routers as your network grows.

The issue is likely that you are using the standard ZigBee routing, AODV, in which routes are essentially recalculated with every transmission. Once the number of nodes gets large, this calculation takes far longer. You should consider changing to Source Routing, which uses a different frame type and makes use of routes stored at the nodes. On a large, stable network this should make message transmission much faster.
https://www.digi.com/wiki/developer/index.php/Large_ZigBee_Networks_and_Source_Routing
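As an illustration of what the switch involves, here is a sketch of building the payload of a Create Source Route API frame (type 0x21). The layout follows the Digi documentation as I remember it (frame ID must be 0; the hop list holds the 16-bit addresses of intermediate nodes, closest to the destination first, as learned from Route Record frames), so verify it against the manual for your firmware:

    #include <stddef.h>
    #include <stdint.h>

    /* Build the payload of a Create Source Route frame (0x21). The caller
     * wraps the result in the usual 0x7E / length / checksum envelope. */
    size_t build_create_source_route(uint8_t *buf, uint64_t dest64,
                                     uint16_t dest16,
                                     const uint16_t *hops, uint8_t n_hops) {
        size_t i = 0;
        buf[i++] = 0x21;                    /* frame type: Create Source Route */
        buf[i++] = 0x00;                    /* frame ID 0: no response */
        for (int b = 7; b >= 0; b--)        /* 64-bit destination, big-endian */
            buf[i++] = (uint8_t)(dest64 >> (8 * b));
        buf[i++] = (uint8_t)(dest16 >> 8);  /* 16-bit destination */
        buf[i++] = (uint8_t)(dest16 & 0xFF);
        buf[i++] = 0x00;                    /* route command options */
        buf[i++] = n_hops;                  /* number of intermediate hops */
        for (uint8_t h = 0; h < n_hops; h++) {
            buf[i++] = (uint8_t)(hops[h] >> 8);
            buf[i++] = (uint8_t)(hops[h] & 0xFF);
        }
        return i;
    }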

Related

broadcast file transfer via CAN bus (CANopen?)

I'm building a system consisting of many (> 100) identical nodes, all connected via CAN bus.
The idea is that all nodes must have the same information; any node may generate an event and will broadcast it via CAN. For those events, the 8-byte payload provided by a CAN frame is enough, and a broadcast reaches all nodes on the bus, so that requirement is met.
Now I also want to distribute firmware updates (or other files) to all nodes via CAN, obviously here I need some sort of fragmentation and the 8 bytes are a bit scarce.
Someone suggested CANopen to me to save me some work, but it appears that it only supports peer-to-peer mode with SDO block transfer and no broadcast.
Is there already a protocol that supports distributing files to all CAN nodes or do I have to come up with my own?
If so, what considerations should I take?
I haven't used CAN before.
In order to send bigger messages you can use the ISO-TP layer. I have used a Python module that implements it, and you can probably find libraries for other devices in other languages since it is quite common. Implementing CANopen just to send messages bigger than 8 bytes is overkill.
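On Linux, the kernel's ISO-TP socket support (the can-isotp module) keeps the application code very small. A minimal sketch with placeholder CAN IDs; note that ISO-TP is point-to-point, so a true broadcast of a firmware image would still need one session per node or a custom fragmentation scheme:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <net/if.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <linux/can.h>
    #include <linux/can/isotp.h>

    int main(void) {
        int s = socket(PF_CAN, SOCK_DGRAM, CAN_ISOTP);
        if (s < 0) { perror("socket"); return 1; }

        struct ifreq ifr;
        strcpy(ifr.ifr_name, "can0");
        ioctl(s, SIOCGIFINDEX, &ifr);

        struct sockaddr_can addr = {0};
        addr.can_family = AF_CAN;
        addr.can_ifindex = ifr.ifr_ifindex;
        addr.can_addr.tp.tx_id = 0x123;   /* our transmit CAN ID (placeholder) */
        addr.can_addr.tp.rx_id = 0x321;   /* peer's flow-control ID (placeholder) */
        if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("bind"); return 1;
        }

        unsigned char chunk[512];          /* e.g. one firmware block */
        memset(chunk, 0xAA, sizeof(chunk));
        write(s, chunk, sizeof(chunk));    /* kernel handles segmentation */

        close(s);
        return 0;
    }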
Yes, PDOs are used for real-time process data, always transferring the same variables; it is not a streaming data protocol.
Maybe you can add a feedback PDO from the slaves to the server. I've worked with some nodes where, to enable them, I had to send an enable command and then wait for a PDO from slave to master in which the slave reported it was enabled.
Or you can use SYNC.

How does an XBee coordinator handle simultaneous data from multiple nodes?

How can I make multiple nodes communicate to a coordinator without loss of data?
When a number of XBee nodes send their data simultaneously to the same XBee coordinator, won't there be problems with congestion? To my knowledge, the answer is yes.
In such case, how can I avoid this congestion? Also, I want the system to work in real time. So there should not be any delay.
I came across the Stack Overflow question XBee - XBee-API and multiple endpoints, which deals with a similar problem.
How was this solved?
As you add devices on a network, the only way to avoid congestion is to transmit less frequently.
If you look at the XBee documentation, most of the modules have a "Transmit Status" frame that the host receives once the message has been successfully delivered (or abandoned due to errors). I believe the success response is triggered by a MAC-level ACK on the network.
If you have smart hosts on your nodes, they can adjust their transmit frequency by waiting for an ACK before sending their next frame, and maybe even using the retries counter in the Transmit Status frame to set a delay before sending.
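A sketch of that idea, assuming a hypothetical blocking send helper that returns the retry count reported in the Transmit Status frame:

    #include <unistd.h>

    /* Hypothetical: send one frame, block for its Transmit Status, and
     * return how many MAC retries the delivery needed. */
    int send_and_get_retries(const void *payload, int len);

    void send_paced(const void *payload, int len) {
        int retries = send_and_get_retries(payload, len);
        if (retries > 0)
            usleep(retries * 10000);   /* back off 10 ms per retry (tunable) */
    }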
While the 802.15.4 protocol sends data at 250 kbit/s, the overhead of headers, relaying of messages across a mesh network, and dealing with collisions brings that down to around 100 kbit/s of useable bandwidth. Try to maximize the payload from your devices, to increase the data-to-headers ratio. Sending five pieces of data in a single frame every five seconds is better than one piece in a frame every second.
How much data do you need to send, and what is your definition of "real time"? Is a 10 ms delay acceptable? How about 100 ms? 500 ms? How many devices will try to send at the same time? How often will they send?
All of those questions will figure into your design, and you may find that 802.15.4 isn't suited for what you need to do.
I have set up 15 nodes of Series 2 XBee. Each node has multiple sensors (light, motion, etc.). The XBee sits on a Fio board and sends data every 3 minutes.
The nodes are in AT mode and the coordinator is in API mode. Messages from the nodes pass through a few router XBees (AT mode). The coordinator (connected to a Raspberry Pi) collects the data and uploads it to a server.
It is not a mesh network and the XBees do not sleep.
So I did not encounter any congestion issues at this scale.
Hope this helps.

multiple or single requests per udp packet?

I am developing my own protocol over UDP (under Linux) for a cache application (similar to memcached) that only executes INSERT/READ/UPDATE/DELETE operations on an object, and I am not sure which design would be best:
Send one request per packet. (client prepares the request and sends it to the server immediately)
Send multiple requests per packet. (client enqueues the requests in a packet and when it is full (close to the MTU size) sends it to the server)
The size of a request (i.e. the record data) can be from 32 bytes to 1400 bytes. I don't know what it will be on average; it depends entirely on the user's application.
If I choose a single request per packet, I will have to manage a lot of small packets and the kernel will be interrupted many times. This will slow things down, since the kernel must save registers when switching between user space and kernel space. There will also be overhead in data transmission: if the user's application sends many 32-byte requests (the per-packet overhead for UDP is about 28 bytes), network traffic will double and transmission speed will suffer. However, high network traffic does not necessarily imply low performance, since the NIC has its own processor and does not make the CPU stall. An additional network card can be installed in case of a network bottleneck.
The big advantage of a single request per packet is that the server and client will be so simple that I will save on instructions and gain speed; at the same time I will have fewer bugs and the project will be finished earlier.
If I use multiple requests per packet, I will have fewer but bigger packets, and therefore more data can be transmitted over the network. I will make fewer system calls, but the added complexity of the server will require more memory and more instructions to execute, so it is unknown whether execution ends up faster this way. It may happen that the CPU becomes the bottleneck, but which is cheaper: adding a CPU or a network card?
The application should handle a heavy data load, like 100,000 requests per second on the latest CPUs. I am not sure which way to go. I am leaning towards 'single request per packet', but before I rewrite all the code I have already written for multiple-request handling, I would like to ask for recommendations.
Thanks in advance.
What do you care about more: latency or bandwidth?
If latency, send the request as soon as possible even if that means a lot of "slack" at the ends of packets and more packets overall.
If bandwidth, bundle multiple requests to eliminate the "slack" and send fewer packets overall.
NOTE: The network, not the CPU, will likely be your major bottleneck in either case, unless you are running over an extremely fast network. And even if you do, the INSERT/READ/UPDATE/DELETE in the database will likely spend more CPU and I/O than the CPU work needed for packets.
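If you do choose the bandwidth option, the batching logic itself can stay small. A minimal sketch with an illustrative MTU budget; a real implementation would also flush on a short timer so a lone request doesn't sit in the buffer forever:

    #include <string.h>
    #include <sys/socket.h>

    #define MTU_BUDGET 1400   /* illustrative soft limit per datagram */

    struct batch {
        int sock;                       /* bound UDP socket */
        struct sockaddr_storage dest;   /* server address */
        socklen_t dest_len;
        size_t used;
        unsigned char buf[MTU_BUDGET];
    };

    void batch_flush(struct batch *b) {
        if (b->used == 0) return;
        sendto(b->sock, b->buf, b->used, 0,
               (struct sockaddr *)&b->dest, b->dest_len);
        b->used = 0;
    }

    void batch_add(struct batch *b, const void *req, size_t len) {
        if (b->used + len > sizeof(b->buf))
            batch_flush(b);             /* would overflow: send what we have */
        memcpy(b->buf + b->used, req, len);
        b->used += len;
    }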
Another trade-off of sending multiple requests per packet:
On the one hand, the unreliable nature of UDP may cause you to drop multiple requests at a time, making retransmissions more expensive.
On the other hand, the kernel will use fewer buffers to deliver your data, reducing the chance of drops.
However, the analysis is incomplete without an understanding of the deployment architecture, such as the buffer sizes of the NICs, switches, routers, and other networking hardware.
My recommendation is to start with a relatively simple implementation (single request per packet), but to write the code in such a way that it will not be too difficult to add more complexity later if needed.

How would one go about to measure differences in clock time on two different devices?

I'm currently in an early phase of developing a mobile app that depends heavily on timestamps.
A master device is connected to several client devices over wifi, and issues various commands to these. When the client devices receive commands, they need to mark the (relative) timestamp when the command is executed.
While all this is simple enough, I haven't come up with a solution for how to deal with clock differences. For example, the master device might have its clock at 12:01:01, while client A is on 12:01:02 and client B on 12:01:03. Mostly, I can expect these devices to be set to similar times, as they sync over NTP. However, the nature of my application requires ms precision, so therefore I would like to safeguard against discrepancies.
A short delay between issuing a command and executing the command is fine, however an incorrect timestamp of when that command was executed is not.
So far, I'm thinking of something along the lines of having the master device ping each client device to determine the transmission time, and then requesting the client to send its "local" time. Based on this, I can calculate the time difference between master and client. Once the time difference is known, the client can adjust its timestamps accordingly.
I am not very familiar with networking, though, and I suspect that pinging a device is not a very reliable way of establishing transmission time, since many factors apply and latency may change.
I assume that there are many real-world settings where such timing issues are important, and thus there should be solutions already. Does anyone know of any? Is it enough to simply divide response time by two?
Thanks!
One heads over to RFC 5905 for NTPv4 and learns from the folks who have really put their noodle to this problem, and how they figured it out.
Or you simply make sure NTP is working properly on your servers so that you don't have this problem in the first place.
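If you do roll your own, the core of NTP's on-wire calculation is small. With the client stamping t0 when the request leaves, the server stamping t1 on receipt and t2 on reply, and the client stamping t3 on receipt, RFC 5905 gives (assuming roughly symmetric paths):

    #include <stdint.h>

    /* t0..t3 in one common unit (e.g. microseconds). */
    void ntp_offset_delay(int64_t t0, int64_t t1, int64_t t2, int64_t t3,
                          int64_t *offset, int64_t *delay) {
        *offset = ((t1 - t0) + (t2 - t3)) / 2;  /* client clock error vs. server */
        *delay  = (t3 - t0) - (t2 - t1);        /* round-trip network delay */
    }

So "dividing by two" is essentially right, but only under the symmetric-path assumption; much of NTP's machinery exists to filter out samples where that assumption fails.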

Maintaining connections with many real-time devices

I'm writing a program on Linux to control about 1000 patient monitors at the same time over UDP sockets. I've successfully written a library to parse messages from, and send messages to, a single patient monitor device in order to collect its data. There are various scheduling constraints on the devices, listed below:-
Each device must constantly receive an alive-request from the computer client within a maximum time period of 300 milliseconds (may differ for different devices), otherwise the connection is lost.
The computer client must send a poll-request to a device within some time period in order to fetch its data. I'm polling for about 5 seconds of averaged data from the patient monitor; therefore, I'm required to send a poll-request every 5 * 3 = 15 seconds. If I fail to send the request within the 15-second time frame, I lose the connection to the device.
Now I'm trying to extend my current program so that it is capable of handling 1000+ devices at the same time. Right now, my program can efficiently handle and parse responses from just one device. When handling multiple devices, it is necessary to synchronize the responses from the different devices, serialize them, and stream them over a TCP socket so that remote computers can also analyze the data. That is not a problem, because it is a well-known multiple-producer, single-consumer problem. My main concern is what approach I should use to maintain alive connections with 1000+ devices.
After reading over Internet and browsing for similar questions on this website, I'm mainly considering two options:-
Use one thread per device. To control 1000+ devices I would end up creating 1000+ threads, which does not look feasible to me.
Use a multiplexing approach, selecting the FDs that require attention and dealing with them one at a time. I'm not sure how I would go about this, or whether a multiplexing approach could maintain alive connections with all the devices given the above two constraints.
I need some suggestions and advice on how to deal with this situation, where you need to control 1000+ real-time devices over UDP sockets. Each device requires an alive-signal every 300 milliseconds (differs per device), and each requires a poll request within about 3 times the time interval mentioned during the association phase. For example, patient monitors in the ICU may require real-time (1-second averaged) data, whereas patient monitors in general wards may require 10-second averaged data; the poll periods for those two devices would therefore be 3*1 (3 seconds) and 3*10 (30 seconds) respectively.
Thanks
Shivam Kalra
For the most part, either approach is at least functionally capable of handling what you describe, but by the sounds of things performance will be a crucial issue. From the figures you have provided, it seems the application could be CPU-bound.
A multithreaded approach has the advantage of using all of the available CPU cores on the machine, but multithreaded programs are notorious for being difficult to make reliable and robust.
You could also use Apache's old tried-and-true forked-worker model: create, say, a separate process to handle a maximum of 100 devices each. You would then need to write code to manage the mapping of connections to processes.
You could also use multiple hosts and some mechanism to distribute devices among them. This would have the advantage of making it easier to handle recovery situations. It sounds like your application could well be mission critical, and it may need to be architected so that if any one piece of hardware breaks then other hardware will take over automatically.
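For the multiplexing option in the question, the usual Linux building blocks are epoll plus one timerfd per device (or a single timing wheel), so alive-requests fire on schedule without threads. A minimal single-device sketch; send_alive_request() and the device bookkeeping are placeholders:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/epoll.h>
    #include <sys/timerfd.h>

    #define MAX_EVENTS 64

    struct device {
        int timer_fd;   /* fires every alive-period (e.g. 300 ms) */
        int sock_fd;    /* UDP socket to the monitor (handled elsewhere) */
    };

    /* Hypothetical: send the device-specific alive-request datagram. */
    void send_alive_request(struct device *dev);

    int main(void) {
        int epfd = epoll_create1(0);
        if (epfd < 0) { perror("epoll_create1"); return 1; }

        /* For each device, arm a periodic timer slightly shorter than its
         * 300 ms deadline and register it with epoll (one device shown). */
        struct device *dev = calloc(1, sizeof *dev);
        dev->timer_fd = timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK);
        struct itimerspec its = {
            .it_interval = { .tv_sec = 0, .tv_nsec = 250 * 1000000L },
            .it_value    = { .tv_sec = 0, .tv_nsec = 250 * 1000000L },
        };
        timerfd_settime(dev->timer_fd, 0, &its, NULL);

        struct epoll_event ev = { .events = EPOLLIN, .data.ptr = dev };
        epoll_ctl(epfd, EPOLL_CTL_ADD, dev->timer_fd, &ev);

        struct epoll_event events[MAX_EVENTS];
        for (;;) {
            int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
            for (int i = 0; i < n; i++) {
                struct device *d = events[i].data.ptr;
                uint64_t expirations;
                read(d->timer_fd, &expirations, sizeof expirations); /* drain */
                send_alive_request(d);
            }
        }
    }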
