No ADDRESS_RESOLUTION agent available - unetstack

I am trying to use UnetStack to develop an underwater sensor network of 400 sensor nodes. The nodes are assigned addresses according to their names. Everything works well until node 255. According to the log file I get this:
startup: No ADDRESS_RESOLUTION available.
Is there a limit on the number of nodes I can use in the network? If not, I would be grateful for help in solving this problem.

A unet node's address is represented using 1 byte. Since the number of nodes will be less than 256 in most practical deployments, using 1-byte addresses helps reduce the packet overhead. This caps the maximum number of nodes in a single network to 256 (2^8).
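As a purely illustrative sketch (not UnetStack code), the effect of a 1-byte address field can be seen by truncating node numbers to 8 bits: node 256 wraps around and collides with node 0.

#include <stdint.h>
#include <stdio.h>

/* Illustration only: a 1-byte address field offers 256 distinct values
 * (0..255), so a 400-node deployment cannot give every node a unique
 * address. This is not UnetStack code. */
int main(void) {
    for (int node = 0; node < 400; node++) {
        uint8_t addr = (uint8_t)node;   /* keeps only the low 8 bits */
        if (addr != node) {
            printf("node %d wraps to address %u\n", node, addr);
            break;                      /* first collision is node 256 -> 0 */
        }
    }
    return 0;
}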

Related

'Light' encrypt for socket tx in C

I have a socket server and client, both written in C.
The clients (there are multiple clients and one server) send information to the server, and these pieces of information must be as small as possible.
Currently, every 'packet' from the client is about 40-60 bytes and my client sends about 10 packets/second.
My protocol sends 2 bytes with the packet size + payload + 2 bytes for 'end of TX', and the server then checks the size and the 2 EOT bytes.
Both the size and the two EOT bytes are needed because those two bytes can also occur inside the payload, so the size has to be taken into account as well. This is working without any problem.
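For reference, a minimal sketch of the sender side of this framing; the EOT byte values, the MAX_PAYLOAD limit and the big-endian length are assumptions made for illustration, and the receiver relies on the length field, not the trailer, to delimit the payload:

#include <stdint.h>
#include <string.h>
#include <sys/socket.h>

#define MAX_PAYLOAD 508     /* generous; packets here are about 40-60 bytes */
#define EOT_BYTE_1  0x04    /* arbitrary 2-byte end-of-TX trailer */
#define EOT_BYTE_2  0x17

/* Send one frame: 2-byte big-endian length, payload, 2-byte EOT trailer. */
static int send_frame(int fd, const uint8_t *payload, uint16_t len) {
    uint8_t buf[2 + MAX_PAYLOAD + 2];
    if (len > MAX_PAYLOAD)
        return -1;
    buf[0] = (uint8_t)(len >> 8);
    buf[1] = (uint8_t)(len & 0xff);
    memcpy(buf + 2, payload, len);
    buf[2 + len]     = EOT_BYTE_1;
    buf[2 + len + 1] = EOT_BYTE_2;
    return send(fd, buf, (size_t)len + 4, 0) == (ssize_t)(len + 4) ? 0 : -1;
}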
Now, I need to 'protect' this payload and I don't know how.
OpenSSL is not an option: I've run some tests and, with a 2048-bit certificate, the smallest 'encrypted' packet is 256 bytes, even when the original data is only 50 bytes.
I don't need such strong encryption... in fact, something simple will be even better, since my client could run on embedded systems with low processing power. Probably something with one fixed 'key' on both ends would be perfect, but I don't have any idea how to do this or if there is already some standard method.
I think that any solution will increase my packet size... that is fine, but not from 50 bytes to 256 bytes (as with OpenSSL).
How can I protect the data with at most a moderate increase in the packet size?

Sending variable sized packets over the network using TCP/IP

I want to send variable sized packets between 2 Linux machines over an internal network. The packet is variable sized, and its length and CRC are indicated in the header, which is sent along with the packet. Something roughly like:
#include <stdint.h>

struct hdr {
    uint32_t crc;       /* CRC computed over the payload */
    uint32_t dataSize;  /* number of payload bytes that follow */
    void *data;         /* payload; on the wire the bytes follow the header, the pointer itself is never sent */
};
I'm using a CRC at the application layer to overcome the inherent limitations of the TCP checksum.
The problem I have is that there is a chance the dataSize field itself is corrupted, in which case I don't know where the next packet starts. At the receiver, when I read the socket buffer, I read n such packets sitting next to one another, so dataSize is the only way I can get to the next packet correctly.
Some ideas I have are:
Restart the connection if a CRC mismatch occurs.
Aggregate X such packets into one big packet of fixed size and discard the big packet if any CRC error is detected. The big packet is there to make sure we lose at most one big packet's worth of data in case of errors.
Any other ideas for these variable sized packets?
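For context, a hedged sketch of the receive path described in the question: read the fixed-size header, then dataSize bytes, then verify the CRC. Here zlib's crc32() stands in for whatever CRC the sender actually uses, and big-endian header fields are assumed; on a mismatch the caller should drop the connection, since dataSize itself may be the corrupted field.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <zlib.h>                       /* crc32(); any CRC routine will do */

/* Read exactly n bytes; TCP is a byte stream, so short reads must be handled. */
static int read_full(int fd, void *buf, size_t n) {
    uint8_t *p = buf;
    while (n > 0) {
        ssize_t r = read(fd, p, n);
        if (r <= 0)
            return -1;                  /* error or peer closed the connection */
        p += r;
        n -= (size_t)r;
    }
    return 0;
}

/* Receive one frame: 4-byte CRC + 4-byte dataSize header, then the payload.
 * Returns 0 and a malloc'ed payload on success, -1 on any error. */
static int recv_frame(int fd, uint8_t **out, uint32_t *out_len) {
    uint8_t hdr[8];
    uint32_t crc, len;

    if (read_full(fd, hdr, sizeof hdr) < 0)
        return -1;
    memcpy(&crc, hdr, 4);
    memcpy(&len, hdr + 4, 4);
    crc = ntohl(crc);
    len = ntohl(len);

    if (len == 0 || len > 16u * 1024 * 1024)
        return -1;                      /* sanity bound on dataSize */

    uint8_t *data = malloc(len);
    if (data == NULL || read_full(fd, data, len) < 0) {
        free(data);
        return -1;
    }
    if (crc32(0L, data, len) != crc) {  /* mismatch: caller should drop the link */
        free(data);
        return -1;
    }
    *out = data;
    *out_len = len;
    return 0;
}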
Since TCP is stream-based, a data length field is the generally used way to extract one full message for processing at the application. If you believe that the length field itself is wrong for some reason, there is not much you can do except discard the packet, "flush" the connection and hope that the sender and receiver re-sync. But the best option is to disconnect the line, unless there is a protocol at the application layer for re-syncing the connection.
Another method, instead of length bytes, would be to use markers: Start-of-Message and End-of-Message. When the application encounters Start-of-Message it should start collecting data until the End-of-Message byte is received, and then process the message. This requires that the marker bytes are escaped appropriately inside the message, as sketched below.
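A rough sketch of such marker-based framing with byte stuffing; the marker and escape values here are arbitrary choices, in the style of SLIP/HDLC framing:

#include <stddef.h>
#include <stdint.h>

enum {
    MARK_START = 0x7E,   /* Start-of-Message */
    MARK_END   = 0x7F,   /* End-of-Message   */
    ESC        = 0x7D,   /* escape prefix    */
    ESC_XOR    = 0x20    /* escaped byte is XORed with this value */
};

/* Encode src into dst with markers and escaping. dst must hold at least
 * 2 * len + 2 bytes (worst case). Returns the encoded length. */
static size_t frame_encode(const uint8_t *src, size_t len, uint8_t *dst) {
    size_t o = 0;
    dst[o++] = MARK_START;
    for (size_t i = 0; i < len; i++) {
        uint8_t b = src[i];
        if (b == MARK_START || b == MARK_END || b == ESC) {
            dst[o++] = ESC;             /* escape bytes that look like markers */
            dst[o++] = b ^ ESC_XOR;
        } else {
            dst[o++] = b;
        }
    }
    dst[o++] = MARK_END;
    return o;
}

The receiver does the reverse: it discards bytes until it sees MARK_START, un-escapes each ESC sequence by XORing the following byte, and treats MARK_END as the end of one message.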
I think that you are dealing with second-order error possibilities, while the major risk lies somewhere else.
When we used serial line transmissions, errors were frequent (one or two every several kilobytes). We used good old Kermit with a CRC and a packet size of about 100 bytes, and that was enough: many times I saw a transfer fail because the line went down, but I never saw a transfer complete "correctly" yet deliver a bad file.
With current networks, unless you have very, very poor lines, the hardware level is not that bad, and in any case the layer 2 data link already has a checksum to verify that each frame was not modified between 2 nodes. HDLC is commonly used at that level and it normally uses a CRC16 or CRC32 checksum, which is a perfectly sound checksum.
So the checksum at TCP level is not meant to detect random errors in the byte stream, but is simply a last line of defense against unexpected errors, for example a router going mad because of an electrical shock and sending pure garbage. I do not have any statistical data on it, but I am pretty sure that the number of errors reaching the TCP level is already very, very low. Said differently, do not worry about it: unless you are dealing with highly sensitive data - and in that case I would prefer to have two different channels, one for the data and the other for a global checksum - TCP/IP is enough.
That being said, adding a check at the application level as an ultimate defense is perfectly acceptable. It will only have to handle errors that went undetected at the data link and TCP levels, or, more probably, errors in the peer application (who wrote it and how was it tested?). So the probability of getting an error is low enough to use a very rough recovery procedure:
close the connection
open a new one
restart after last packet correctly exchanged (if it makes sense) or simply continue sending new packets if you can
But the risk of a physical disconnection, or of a power outage somewhere in the network, is much higher, not to speak of a flaw in the application-level implementations...
And do not forget to fully specify the byte order and the sizes of the crc and dataSize fields...
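For instance, a minimal sketch that pins down both: each field is exactly 32 bits and serialized in network byte order, so both ends agree regardless of host endianness (the function name is just for illustration):

#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

/* Pack the header into an 8-byte wire format: 32-bit CRC, then 32-bit
 * dataSize, both in network (big-endian) byte order. */
static void hdr_pack(uint8_t out[8], uint32_t crc, uint32_t data_size) {
    uint32_t n_crc  = htonl(crc);
    uint32_t n_size = htonl(data_size);
    memcpy(out,     &n_crc,  4);
    memcpy(out + 4, &n_size, 4);
}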

Can XBee count collisions?

I have the task of counting the number of collisions in my S1 XBee network and I can't figure out how to do it. Do you know if there is such a thing in the XBee Arduino API library?
I must stress that I'm not trying to avoid collisions; I'm actually trying to analyze them.
My Setup:
• XBee S1 w/ API 2 (escaped)
• Arduino Uno w/ Shield
Any suggestions would be appreciated.
Take a look at the Transmit Status frames, as they report failures to transmit including CCA (clear channel assessment) failures. You might also need to look into possible settings on retries, since the XBee may successfully send after a few collisions and not report on it.
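As a rough sketch of that approach (not tied to any particular library), the host can inspect each received API frame and count Transmit Status frames (API ID 0x89) whose status byte reports a CCA failure. De-escaping for API mode 2 and checksum verification are assumed to happen elsewhere, and the status codes below are taken from the 802.15.4 XBee documentation as I recall them, so double-check them against your firmware manual:

#include <stdint.h>

#define API_TX_STATUS    0x89   /* S1 Transmit Status frame */
#define TX_STATUS_CCA    0x02   /* status byte: CCA (clear channel) failure */

static unsigned long cca_failures;

/* frame points at the de-escaped API-specific data: [API ID][frame ID][status] */
void count_cca_failures(const uint8_t *frame, uint16_t len) {
    if (len >= 3 && frame[0] == API_TX_STATUS && frame[2] == TX_STATUS_CCA)
        cca_failures++;
}

Keep the caveat above in mind: with retries enabled, a frame that eventually gets through after one or more collisions may still be reported as a success.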
Whatever you come up with, you'll probably need to set up an 802.15.4 sniffer to see if the reported number of collisions matches the number you count manually in a packet capture.

Which module receives the data first

My question is simple: is it the closest XBee that will receive data (from a broadcast) first?
I'm working on a simple way to estimate the position of a module, but I need to know which one is closest to the module that broadcasts. So the first one to read the data would send a message to the broadcaster saying "Hi, am I first?" and wait for the reply "yes, you are the Xth to ask me that".
Thanks
That won't be reliable for at least two reasons:
Broadcast messages are sent three times to ensure receipt by all nodes on the network. You don't know which retransmission a node actually receives.
Host processing likely introduces variable latency on each node -- how often is the host polling for bytes on the XBee module? That latency is likely high compared to the speed at which the RF signal travels through the air.
Most distance estimation on 802.15.4 networks makes use of the received signal strength indicator (RSSI). In an open-air environment with identical antennas, a lower RSSI should indicate a greater distance between nodes. For example, Freescale has published a paper on Position Location Monitoring Using IEEE® 802.15.4/ZigBee® technology.
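As a hedged illustration of RSSI-based ranging, the commonly used log-distance path-loss model estimates distance from the measured RSSI; the reference RSSI at 1 m and the path-loss exponent are assumptions that must be calibrated for your antennas and environment (roughly n = 2 for open air with line of sight):

#include <math.h>

/* Estimate distance (in meters) from an RSSI reading (in dBm) using the
 * log-distance path-loss model: d = 10^((rssi_at_1m - rssi) / (10 * n)). */
double estimate_distance_m(double rssi_dbm, double rssi_at_1m, double path_loss_exp) {
    return pow(10.0, (rssi_at_1m - rssi_dbm) / (10.0 * path_loss_exp));
}

For example, with a calibrated reference of -40 dBm at 1 m and n = 2, a reading of -70 dBm works out to roughly 32 m; in practice multipath and obstacles make the estimate quite noisy.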

How to test the speed of a socket?

I wrote a program which can forward IP packets between 2 servers, so how can I test the speed of the program? Thanks!
There are a number of communication metrics that may be of interest to your potential users.
Latency is the amount of time to send a message, usually quoted in microseconds for co-located devices and in milliseconds for all other scenarios. It is usually quoted as the "zero-byte latency", meaning the time required to transmit just the meta-data of a message. Lower is better.
Bandwidth is measured in bits per second. It is often quoted as "peak bandwidth" and can be obtained by sending a massive amount of data over the line. Higher is better.
CPU utilization is the percent of CPU time required to transmit a message. Network protocols that can offload a message's transmission have low utilization, which means that the communication can "overlap" some other computation in the user's application, which has the effect of hiding latency. Lower is better.
All of these are measured simply by a variation of the ping test, usually called the "ping-pong":
Node 1:
for n = 1 to MAXSIZE, step via n*=2
send message of size n bytes
receive a response of size n bytes
Node 2:
for n = 1 to MAXSIZE, step via n*=2
receive a message of size n bytes
send response of size n bytes
There's also a "ping-ping" test, in which both nodes write to each other at the same time. This requires non-blocking communication to set up.
Just output n and the time required for each iteration. The first time is the zero-byte latency. The largest sustainable n/time is the bandwidth (convert to bits per second to be industry standard). You can also measure the CPU utilization required to run the larger iterations, but that's a tricky topic for a whole different question.
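A hedged sketch of node 1's side of this ping-pong in C, assuming sockfd is an already connected TCP socket and node 2 follows the pseudocode above (it receives the full n bytes before sending n bytes back); MAXSIZE and REPS are arbitrary choices for the sketch:

#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define MAXSIZE (1 << 20)   /* largest message size to test (1 MiB) */
#define REPS    100         /* round trips averaged per message size */

static int write_full(int fd, const uint8_t *p, size_t n) {
    while (n > 0) {
        ssize_t r = write(fd, p, n);
        if (r <= 0) return -1;
        p += r; n -= (size_t)r;
    }
    return 0;
}

static int read_full(int fd, uint8_t *p, size_t n) {
    while (n > 0) {
        ssize_t r = read(fd, p, n);
        if (r <= 0) return -1;
        p += r; n -= (size_t)r;
    }
    return 0;
}

void pingpong(int sockfd) {
    static uint8_t buf[MAXSIZE];
    for (size_t n = 1; n <= MAXSIZE; n *= 2) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < REPS; i++) {
            if (write_full(sockfd, buf, n) < 0) return;   /* send n bytes */
            if (read_full(sockfd, buf, n) < 0) return;    /* await echo   */
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        double rtt  = secs / REPS;                        /* round-trip time */
        printf("%8zu bytes  %10.1f us/rtt  %10.2f Mbit/s\n",
               n, rtt * 1e6, (8.0 * n) / (rtt / 2.0) / 1e6);
    }
}

The smallest sizes approximate the zero-byte latency, and the throughput printed for the largest sizes approximates the peak bandwidth.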
Take a look at iperf. You can find it at http://sourceforge.net/projects/iperf/ If you google around you will find tutorials for it. You can look at the source and might get some good ideas of how it does things. I use it for routine testing and it is quite robust.
