My task is to count the number of collisions in my S1 XBee network, and I can't figure out how to do it. Does anyone know whether there is such a feature in the XBee Arduino API library?
I must stress that I'm not trying to avoid collisions; I'm actually trying to analyze them.
My Setup:
• XBee S1 w/ API 2 (escaped);
• Arduino Uno w/ shield;
Any suggestions would be appreciated.
Take a look at the Transmit Status frames, as they report failures to transmit, including CCA (clear channel assessment) failures. You might also need to look into the retry settings, since the XBee may send successfully after a few collisions and not report them.
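A minimal sketch of that counting, assuming frames have already been un-escaped (API 2 adds 0x7D escapes) and checksum-verified elsewhere; on the S1, API identifier 0x89 marks a TX Status frame and status 0x02 is a CCA failure:

    #include <stdint.h>

    static unsigned long cca_failures = 0;

    /* frame points at the API-specific bytes: [0] = API identifier,
     * [1] = frame ID, [2] = delivery status. */
    void handle_api_frame(const uint8_t *frame, uint16_t len)
    {
        if (len >= 3 && frame[0] == 0x89 && frame[2] == 0x02)
            cca_failures++;
    }

Note that a transmission that collides but succeeds on a retry will still be reported as a success, which is why the retry settings matter here.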
Whatever you come up with, you'll probably need to set up an 802.15.4 sniffer to see if the reported number of collisions matches the number you count manually in a packet capture.
Related
I'm making a project that requires sending data from one microcontroller to another wirelessly (I'm going to use 433 MHz or 2.4 GHz RF modules; I haven't decided yet). Specifically, I'm making a joystick that controls 4 DC motors. So my question is: when I write the code, should I put the command to accelerate motor 'x' into the receiver's microcontroller (the board that controls the motors) or into the transmitter's microcontroller (the joystick's board)? For example, if I push the joystick to the left to accelerate motors 3 and 4, where should I write that code?
I'm doing this project with Arduino (an ATmega328 with the Arduino bootloader).
Well, this is most definitely a "primarily opinion-based" question and may very well get closed... but...
This is your design; you need to design it. There isn't a wrong answer in this case. You can have the joystick board simply pass the joystick's switch state or analog state, depending on what kind it is. Or you can have the joystick board use timing, i.e. how long the joystick is held in a position, perhaps compute what you think the device's position should be, and just pass that ("you need to rotate m steps this way and n steps that way"). Or, depending on your overall system design, you could come up with other solutions to balance the load between the microcontrollers.

I assume what you really want here is "simply" to use the wireless link and the extra MCU as a way to make the joystick unwired. If the joystick were wired, you did it with one MCU, and these are switch-based (not analog-resistance) joysticks, then that one MCU would simply be reading the switch state at some interval, perhaps debouncing. So when you add wireless, you would want the wireless solution to simply pass the same switch state: not in the same register space, and not necessarily in the same bit positions in whatever data, but just the switch state. As an added benefit, the joystick MCU can do the debouncing. It could also send only state changes, rather than bothering the motor MCU every N units of time.
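As a rough sketch of that last idea, with read_switches() and radio_send() as hypothetical stand-ins for your pin sampling and wireless layer:

    #include <stdint.h>

    extern uint8_t read_switches(void);   /* e.g. one bit per direction */
    extern void radio_send(uint8_t state);

    /* Poll the joystick and only bother the motor MCU on a change. */
    void joystick_poll(void)
    {
        static uint8_t last_state = 0xFF;  /* force a send on first poll */
        uint8_t state = read_switches();

        if (state != last_state) {
            radio_send(state);
            last_state = state;
        }
    }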
I tried modules like these recently and was very disappointed, but I'm considering trying them again, or a different vendor/build of them. They should "feel" like a wire for, say, serial data at some slow rate: 4800 baud, 2400 baud, etc. On my first try I had to do things like constantly send a pattern (say the same byte, 0xAA), then every so often have a small payload show up in there: search for 0xAA, and when you see something else, count out so many bytes, and that is the payload. It went downhill from there when I used separate computers/supplies to send the data, as if most of my (marginal) success was perhaps through ground noise and not actually wireless. They sell tons of these and folks use them, so no doubt it was something I did, or I bought garbage modules, or maybe I shouldn't have bought the ones with antennas and soldered those on, etc.
I would recommend two things. First, get both MCUs up and running the application using a wired connection, just some jumper wires. You might have to come back to this as you change your protocol or baud rate, etc., if needed. Perhaps (most likely) you want to send the payload out often even if the data has not changed, if the wireless connection is not perfectly reliable, allowing the occasional lost packet to be handled on the next one. Second, learning to use the wireless connection is a separate experiment; the code on both sides should just be working that one problem, how do I talk over this interface. No motor drivers, no joysticks, just code to talk from one side to the other.
Likely a good idea, especially if you don't need to be constantly transmitting, is to design your own packet. Ideally it has one or more of these features: a sync pattern and a length, or a sync pattern and a checksum, or a sync pattern, length, and checksum (as well as a payload). So maybe a 0x7E byte as a sync pattern; perhaps you are always going to send 32 bits (4 bytes) of payload, so you don't need a length; and then maybe a checksum of the payload, or of the whole thing, sync pattern and payload. Your receiver then takes in bytes from the UART, into a circular buffer basically, or not even that: just use a very simple state machine. If searching for sync and the byte is not 0x7E, discard it. If it is 0x7E, go to the payload state with a count of zero and receive the next four bytes (the payload), summing them into a checksum as each arrives. After the payload, the next byte is the checksum; if it matches, the whole thing is good, so pass the payload to the control system to update the switch state, or just shove it into the global variable the control system is watching, as if it were a direct connection to the joystick switch state.
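A minimal sketch of that state machine, assuming the fixed 4-byte payload and the simple additive checksum described above; deliver_payload() is a hypothetical hook into your control code:

    #include <stdint.h>

    extern void deliver_payload(const uint8_t payload[4]);

    enum rx_state { RX_SYNC, RX_PAYLOAD, RX_CSUM };

    /* Feed each byte received from the UART into this function. */
    void rx_byte(uint8_t b)
    {
        static enum rx_state state = RX_SYNC;
        static uint8_t payload[4];
        static uint8_t count, csum;

        switch (state) {
        case RX_SYNC:                  /* discard until the 0x7E sync */
            if (b == 0x7E) {
                count = 0;
                csum = 0;
                state = RX_PAYLOAD;
            }
            break;
        case RX_PAYLOAD:               /* collect 4 bytes, summing them */
            payload[count++] = b;
            csum += b;
            if (count == 4)
                state = RX_CSUM;
            break;
        case RX_CSUM:                  /* good packet: hand it off */
            if (b == csum)
                deliver_payload(payload);
            state = RX_SYNC;           /* either way, hunt for next sync */
            break;
        }
    }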
You can get as complicated or not as you want, but in this kind of design I would never assume that both sides are always in sync and that every N bytes received are the N bytes sent, in the right positions. Even if you never drop a byte, this is a one-direction communication path, so if the receiver comes up while the transmitter is mid-payload, the receiver starts in the middle of the packet and is forever off in its interpretation of the data. You can do a SLIP/PPP-style framing and ensure the sync pattern never appears in the data, at the cost of more work. Or not; I doubt you need to be that complicated, as I assume you are likely only sending one or a few bytes of actual data per update.
Why did I try the continuous stream? Way back, the first time I tried products like these, long before they were a buck or so from China on eBay, we had to ensure "transition density": you could not string more than a couple of bit cells of either state in a row. You couldn't have more than, say, two zeros or two ones in a row, so you basically had to do a biphase-L (Manchester II) encoding in software before shoving the bytes into the UART, then undo it on the other side. Hopefully the hardware does that for you and you don't have to, but if push comes to shove, you might have to take every byte and make only every other bit payload, with the bit next to it being the inverse. You might have to send the byte 0x01 (00000001) as 0101010101010110, i.e. 0x55 0x56. Hopefully you don't have to do that; it is not much fun.
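A sketch of that bit-pair scheme, which reproduces the 0x01 to 0x55 0x56 example above (each payload bit, MSB first, is followed by its inverse, so no more than two equal bit cells appear in a row in the encoded data):

    #include <stdint.h>

    /* Expand one payload byte into two encoded bytes. */
    void encode_byte(uint8_t in, uint8_t out[2])
    {
        uint16_t word = 0;
        for (int i = 7; i >= 0; i--) {
            uint8_t bit = (in >> i) & 1;
            word = (uint16_t)((word << 2) | (bit << 1) | (bit ^ 1));
        }
        out[0] = (uint8_t)(word >> 8);   /* 0x01 -> 0x55 */
        out[1] = (uint8_t)(word & 0xFF); /* 0x01 -> 0x56 */
    }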
Your microcontroller development toolbox should include some USB-to-serial solutions. You can get them for a buck or two on eBay from China, or from Adafruit or SparkFun for 10-15 bucks. The Chinese ones sometimes have a 5 V / 3.3 V jumper or switch. One way to start with these modules is to hook one to one of these USB-to-serial breakouts, 5 V or 3.3 V as needed for that RX or TX module. For TX, put the UART output (TX) on the data pin; for RX, the RX input on the data pin. Then fire up two copies of minicom or whatever dumb terminal program you use, set both to some slow speed like 1200 baud, and type in one to see if what you type comes out the other. Hold down a character; 'U' is a good one, as with 8N1 it produces a square wave. Does that come through clean? What if you slow down both sides? What if you speed up?

From simple experiments like that you can start to develop your protocol. If you have a scope (having one is easier these days, but it can still be pricey, a few hundred bucks, and access to one is pretty important) you can do this even more easily: generate the signal with the microcontroller, USB-to-serial, or whatever, then look at what shows up on the other side with the scope. How ugly/clean is it? Can you make it better looking by changing the data, or by pounding the interface with continuous data?
My question is simple: is it the closest XBee that receives data (from a broadcast) first?
I'm working on a simple way to estimate the position of a module, but I need to know which one is closest to the module that broadcasts. The first to read the data will send a message to the broadcaster saying "Hi, am I first?" and wait for the reply "yes, you are the Xth to ask me that".
Thanks
That won't be reliable for at least two reasons:
Broadcast messages are sent three times to ensure receipt by all nodes on the network. You don't know which retransmission a node actually receives.
Host processing likely introduces variable latency on each node -- how often is the host polling for bytes from the XBee module? That latency is likely high compared to the time the RF signal takes to travel through the air.
Most distance estimation on 802.15.4 networks makes use of the received signal strength indicator (RSSI). In an open-air environment with identical antennas, a lower RSSI should indicate a greater distance between nodes. For example, Freescale has published a paper on Position Location Monitoring Using IEEE® 802.15.4/ZigBee® technology.
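If you go the RSSI route, a common rough approach is the log-distance path-loss model. A sketch, where the 1 m reference level A and the path-loss exponent n are assumed values you would have to calibrate for your antennas and environment:

    #include <math.h>

    /* rssi = A - 10*n*log10(d), solved for d. n is ~2 in free space
     * and higher indoors; A is the RSSI measured at 1 m. */
    double estimate_distance_m(double rssi_dbm, double a_dbm, double n)
    {
        return pow(10.0, (a_dbm - rssi_dbm) / (10.0 * n));
    }

For example, with A = -40 dBm and n = 2, an RSSI of -60 dBm works out to roughly 10 m. Multipath and obstacles will skew the estimate, so treat it as rough.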
I'm writing a C program to manage certain aspects of a wireless network (access point + client devices).
One part of the program runs on the devices and another runs on the AP. The AP is a simple Linux station (a Cubietruck, later to be exchanged for an Intel Celeron board; the access point is set up with hostapd and dnsmasq).
Some features are already implemented. I've done a lot with cfg80211/nl80211 and a bit with WEXT, and some communication routines over BSD sockets are in place.
But now a problem has come up: in the C program running on the access point I need the received signal strength of the associated devices.
On the devices everything works well. With nl80211 I can get nearly all the information about the connection. But on the access point I don't know how to obtain the RSS. I've tried some nl80211 requests with various attributes but can't get it to work.
Sure, on the devices it's easy, because they have a single connection. On the AP I had expected something like an nl80211 answer with a linked list or nested attributes, but nothing. I checked the attributes contained in the answers to certain requests, and the messages contain nothing usable.
Does somebody know how to solve this? It shouldn't be that big a deal to obtain the received signal strength of the associated devices on a WLAN AP.
It would be really nice if it were doable with nl80211, but another solution would also be welcome.
Maybe with some Wi-Fi packet parsing? I've heard there is something like an RSSI (received signal strength indicator), but I'm not familiar with it.
Thanks in advance
Here's a workaround: the wireless channel attenuation from the AP to a station/device and from that station back to the same AP are identical at any given time. So, if the transmit powers of the AP and the stations are all the same, the stations can report their RSS to the AP using your current solution, and the work is done. Sure, tx powers at different stations may differ, but they are constant, so find them out and make adjustments accordingly. Here's a simple example:
• AP tx power: 20 dBm;
• Station 1 tx power: 15 dBm, with measured RSS -37 dBm;
• The attenuation is therefore 20 - (-37) = 57 dB, so the RSS on the station 1 to AP link should be 15 - 57 = -42 dBm.
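The same adjustment as code, just to pin down the arithmetic:

    /* Channel attenuation is symmetric at a given moment, so the RSS
     * seen by the AP is the station's tx power minus the path loss
     * derived from the station's own measurement. */
    double rss_at_ap(double ap_tx_dbm, double sta_tx_dbm, double sta_rss_dbm)
    {
        double attenuation_db = ap_tx_dbm - sta_rss_dbm;
        return sta_tx_dbm - attenuation_db;  /* rss_at_ap(20, 15, -37) == -42 */
    }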
I have developed a single-server/multiple-client UDP application, where the server can handle x clients at a time. The server has x threads, each dedicated to one client.
The code works perfectly fine. Now I want to check my application against all possible scenarios, i.e. validate it. For this purpose, I need to design a test bed.
Initial Design:
The test bed I initially designed has the following functionality:
• The server GUI has a button on it. When the button is clicked, each thread in the server reads a text file, picks up a few bytes of it, and sends that chunk to its respective client. The thread then picks the next chunk of bytes from the text file, sends it to the client, and so on until EOF is reached.
• The client on the other side keeps receiving these chunks of bytes, creates a text file, and keeps storing the chunks in that file.
• When EOF is received from the server, the client starts sending the completely received text file back to the server over its socket.
• When the file is completely received back (echoed), the server compares the two text files, the sent one and the echoed one. If both files are the same, the communication has occurred without any fault and the communication protocol is validated.
The validation technique described above (sending the text file, receiving the echoed file, and then comparing both) checks the following things:
• The number of bytes sent = the number of bytes received.
• No data is corrupted.
• The data is received in the proper order.
If any of these three conditions is not fulfilled, there is some error in the communication.
Now I have been asked to make changes to this test bed and add more functionality to it. Can the procedure I am using actually check the three conditions mentioned above in all scenarios?
Are there other conditions that must be checked besides those three?
What could be other methods of checking the communication protocol besides the one I designed, i.e. sending a text file, getting it echoed, and comparing?
I have to add more functionality to this test bed to make the validation system more efficient, or completely replace it with some better option.
Please help me with your suggestions.
Thanks in advance :)
The first two of your conditions are guaranteed by UDP. Picking "a few bytes", i.e. anything less than 65535 bytes (64 kiB isn't really a "few" bytes), will result in a single datagram being sent, and anything larger than that will fail. You will not want to max out the largest possible datagram size, though, as it will incur IP fragmentation (staying below 1280 bytes is a good idea).
You will be able to receive exactly the amount you sent or nothing at all, never more or less. UDP does not guarantee that any datagram that is sent out arrives (it cannot guarantee that, since IP does not), but it does guarantee that the entire datagram arrives as-is -- or nothing. Never anything in between.
It further guarantees that the data inside the datagram matches its checksum (the underlying protocols, including IP/Ethernet/ATM, further do their own checksumming) and thus arrives in the same binary representation as it was sent. In other words, data arrives in order (inside the datagram) and is not corrupted.
It is of course in theory possible that a bit error passes all 3 layers of checksums, but this is extremely unlikely and will not happen in practice. Unless you need to guard against someone maliciously tampering with packets, you do not need to worry. The kinds of bit errors that happen accidentally are reliably picked up by the checksums used in the protocols.
If, on the other hand, you do need to guard against malicious modification of your data, you must add a MAC (or a checksum and encrypt the entire packet -- adding a checksum alone is useless).
To ensure that data spanning several datagrams arrives in order, you must add sequence numbers to your packets (in the same manner TCP does). And with that, you might as well use TCP, which is likely more efficient and less error-prone. One of the main reasons to use UDP is normally that in-order delivery and reliability are not needed, or sometimes reliability is needed but in-order delivery is not.
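A minimal sketch of such a sequence-numbered packet, assuming both ends agree on this (hypothetical) header layout and byte order:

    #include <stdint.h>
    #include <arpa/inet.h>

    /* Hypothetical packet layout: a 4-byte sequence number in network
     * byte order, followed by the payload. */
    struct seq_packet {
        uint32_t seq;
        uint8_t  data[1024];
    };

    /* Returns 1 if pkt is the next expected datagram (and advances the
     * counter), 0 if something was lost, duplicated, or reordered. */
    int check_in_order(const struct seq_packet *pkt, uint32_t *expected)
    {
        if (ntohl(pkt->seq) != *expected)
            return 0;
        (*expected)++;
        return 1;
    }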
In-order delivery is the main cause of TCP's latency during packet loss (in the absence of packet loss, TCP is exactly as "fast" as UDP), so if it is needed, there is no sane reason not to use TCP in the first place. It is a protocol that has been fine-tuned and has worked reliably for literally billions of people for four decades.
Also, using one socket and one thread per client is possibly not the best approach. The disk won't read any faster, and the network card won't send any faster either. UDP doesn't need a socket per client, either. When using TCP, you'll have no choice but to use one socket per client, but multiplexing via a readiness-notification system will still give you much better performance and fewer opportunities for threading errors.
Also, sending back a checksum such as one of the SHA family (or a MAC, if it needs to be secure) may be more efficient than echoing back the whole lot of data. The likelihood that the checksum matches while the data accidentally doesn't is negligible.
Entire revision control systems that manage millions of lines of code for millions of people (such as git) rely on exactly this to identify files: hash collisions just don't happen (well, they do happen of course, you just won't live to see one).
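A sketch of that digest comparison, assuming OpenSSL's one-shot SHA256() is available (link with -lcrypto):

    #include <string.h>
    #include <openssl/sha.h>

    /* Compare a locally computed SHA-256 of the sent data against the
     * digest echoed back by the client. */
    int digests_match(const unsigned char *sent, size_t sent_len,
                      const unsigned char echoed[SHA256_DIGEST_LENGTH])
    {
        unsigned char local[SHA256_DIGEST_LENGTH];
        SHA256(sent, sent_len, local);
        return memcmp(local, echoed, SHA256_DIGEST_LENGTH) == 0;
    }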
I have a question here: why UDP and not TCP? Especially when you are worried about packet order and data corruption. As far as I know (I may be wrong), UDP is good only when the data is time-sensitive, like a video stream.
Secondly, yes, there are other methods of checking the integrity of transmitted data. The simplest may be verifying MD5 or SHA-1 checksums.
Can the procedure I am using actually check the three conditions mentioned above in all scenarios?
Yes.
What could be other methods of checking the communication protocol besides the one I designed, i.e. sending a text file, getting it echoed, and comparing?
It doesn't have to be a file, but it has to be something you can check once you get the response. You could just generate some random data and hold on to it until you get the response.
You'd have to tell us what you really want to test. If you are trying to make sure that UDP doesn't give you bad data or out-of-order data, you're using the wrong protocol. By checking that you get the exact data in the exact order you sent it over UDP, you're not testing anything except the networking infrastructure you have in place.
You say you want to test your application for "all possible scenarios", but that doesn't really mean anything. You're testing for a behavior that is part of the UDP specification, hoping to find that it doesn't exist? Well, it does, even if you never see it.
In all honesty, I think the answer is "no"; however, I want to get a second opinion. Basically, I need one microcontroller device to send a steady signal to another one, but the communication between them uses RS-232. So I think I have to create/update the communication messages to get it to do what I want.
What do you think?
You should be able to set something like DTR (Data Terminal Ready, pin 20) or DSR (Data Set Ready, pin 6) high and keep it there as your steady-state signal. This is how modems/terminals detect that there is a device on the other end that is ready to communicate. It all depends on what level of access you have to the hardware through your driver.
[EDIT] This doesn't involve sending data, although you could still do that using TX/RX, pins 2 & 3.
RS-232 Reference on wikipedia
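On a Linux host, for instance, holding DTR high could look like the sketch below; whether the line is honored depends on the driver and on the pin actually being wired through:

    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <termios.h>

    /* Open a serial port and raise DTR as the steady-state signal.
     * Keep the returned fd open to hold the line high. */
    int raise_dtr(const char *dev)
    {
        int fd = open(dev, O_RDWR | O_NOCTTY);
        if (fd < 0)
            return -1;
        int flags = TIOCM_DTR;
        ioctl(fd, TIOCMBIS, &flags);  /* TIOCMBIC would drop it again */
        return fd;
    }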
You mean a fixed voltage, not a square wave (like the letter U produces)? What about a break condition (if you want to call it a command)?
Certainly you can use one of the control lines if that helps... or are you specifically looking for something out of the TX line?
If the question is "can I alter the DC state of the TX line?", then the answer is that many UARTs (including the ones in PCs) can be asked to create a 'break' condition, which is the opposite of the normal idle condition of the line.
So you can turn 'break' on and off and toggle the line that way.
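On Linux, for example, that toggling can be sketched with the TIOCSBRK/TIOCCBRK ioctls (fd being an already-opened serial port):

    #include <sys/ioctl.h>

    /* Hold the TX line in the break (non-idle) state, or release it
     * back to idle. Toggling this flips the DC state of the line. */
    void set_tx_break(int fd, int on)
    {
        ioctl(fd, on ? TIOCSBRK : TIOCCBRK, 0);
    }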
It might be possible to do something like that, provided you don't mind a burst-like interface. One micro could transmit a byte and the other could do something to that byte and send it back as a response.
If you control both ends of the line, you may be able to turn the RS-232 TX and RX lines into regular logic lines to convey that information.
In most situations, however, each end periodically sends a byte of status information that carries 8 digital values, which gives much more status information.
A timer on the receiving end is reset every time a message is received; if the timer times out, the message has taken too long, and you can act on the missing status message.
As others have pointed out, if you're using hardware flow control you have some status lines available as well, though in many cases those lines aren't implemented so that may not be an option.
-Adam
Steady signal could mean:
steady burst of characters: keep the send buffer full
line held high or low : send nothing or send continual breaks
I think it depends to a large extent on the UART you are using and the level of access you have to it in software. If you check the data sheet, there are often ways of controlling most of the pins directly for testing purposes, but you will need to go at it from a pretty low level.
At a higher level, tvanfosson's answer is pretty much how I'd do it.
While the first answer is correct, it may not be possible to use that technique (DTR or DSR) on many microcontrollers, as they may not have those signals; many microcontrollers have just the basic RX/TX lines, and you would often have to use other I/O ports if you wanted extra control/status lines. However, all is not lost: many RS-232 controllers allow you to set the TX line to 'mark' or 'space' (i.e. set the TX line to logic high or low). This would give you your steady-state signal. The RX line on the receiver can then be checked to see whether it is at the mark or space level.