I have tried running the example programs from the arduino-xbee library. I need to send data from one node to another, and at the same time the sending node needs to be ready to read any data that arrives for it. Suppose X sends data to Y, and when Y receives it, Y sends an acknowledgement back to X. If Z then sends data to X (or sends a broadcast), will I be able to read the data from Z at X as well as the acknowledgement from Y to X?
So any pointers on how to send and receive at the same time using arduino-xbee would be very helpful.
Thanks in advance.
If arduino-xbee uses "API mode" for the XBee modules, you'll receive separate frames of data. Each frame will have headers to identify the source of the data, and to match responses to requests (for AT commands).
An XBee module in "AT mode" or "Transparent mode" will simply stream out of the serial port any data that was received over the network on a specific endpoint and cluster. You won't know who sent it, and you need to enter "command mode" to read or write parameters using AT commands.
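For illustration, here is a minimal sketch in C of what an API-mode-1 frame looks like once it has been read from the serial port, following the layout in Digi's 802.15.4 documentation (start delimiter, length, frame data, checksum). It is not the arduino-xbee library itself, which wraps this parsing for you; the function name is just for the example.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Minimal parser for an XBee API-mode-1 frame already read into buf[].
 * Layout per Digi's documentation:
 *   0x7E | length MSB | length LSB | frame data ... | checksum
 * For an 802.15.4 "RX (16-bit address)" frame, the frame data starts with
 * the API identifier 0x81 followed by the 16-bit source address. */
int parse_rx16_frame(const uint8_t *buf, size_t n)
{
    if (n < 5 || buf[0] != 0x7E)
        return -1;                          /* not the start of a frame */

    uint16_t len = (buf[1] << 8) | buf[2];  /* length of the frame data */
    if (n < (size_t)len + 4)
        return -1;                          /* frame not complete yet */

    uint8_t sum = 0;
    for (uint16_t i = 0; i < len; i++)
        sum += buf[3 + i];
    if ((uint8_t)(0xFF - sum) != buf[3 + len])
        return -1;                          /* bad checksum */

    if (buf[3] == 0x81) {                   /* RX frame, 16-bit source address */
        uint16_t src = (buf[4] << 8) | buf[5];
        printf("data from 0x%04X, %d payload bytes\n", src, (int)(len - 5));
    }
    return 0;
}
```

The source-address field is what lets X tell a frame from Z apart from the acknowledgement frame coming from Y.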
I have trouble understanding how to write asynchronous sending/receiving in Contiki. Suppose I am using the xmac layer, or any layer that is based on packetbuf. I am sending a message, or a list of packets. I start sending a message using void (*send)(mac_callback_t sent_callback, void *ptr). This takes the message that is in the global buffer packetbuf and tries to send it. While the send is pending (for example, waiting for the other device to wake up or to acknowledge the transmission), the device receives a packet from a third device.
Will this packet overwrite the packet waiting to be sent that is in the packetbuf? How should I handle this?
I thought that maybe you can't send packets and listen for incoming packets at the same time, but then there is an obvious deadlock: two devices sending messages to each other at the same time.
I am porting a higher-level routing layer to Contiki. This is the second OS I am porting it to, but the previous OS didn't use a single buffer for both incoming and outgoing packets.
The packetbuf is a space for short-term data and metadata storage. It's not meant to be used by code that blocks longer than a few timer ticks. If you can't send the packet immediately from your send() function, do not block there! You need to schedule a timer callback in the future and return MAC_TX_DEFERRED. To store packet data in between invocations of send(), use the queuebuf module.
The fact that there is a single packetbuf for both reception and transmission is not a problem, since the radio is a half-duplex communication medium anyway: it cannot send and receive data at the same time. Similarly, a received packet is first stored in the radio chip's memory; it does not overwrite the packetbuf. Contiki interrupt handlers likewise never write to the packetbuf directly. They simply wake up the rx handler process, which takes the packet from the radio chip and puts it in the packetbuf. Since one process cannot unexpectedly interrupt another, this operation is safe: a process wanting to send a packet cannot interrupt the process reading another packet.
To summarize, the recommendations are:
Do not block in Contiki process context (this is a generic rule when programming this OS, not specific to this question).
Do not expect the contents of the packetbuf to be preserved across yielding execution in Contiki process context. Serialize to a queuebuf if you need this (see the sketch after this list).
Do not access the packetbuf from interrupt context.
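As a rough illustration of that pattern, here is a sketch of a MAC-layer send() that defers transmission, assuming the classic Contiki queuebuf, ctimer and mac_call_sent_callback() APIs (header paths differ slightly between Contiki versions, and receiver_is_awake() is a placeholder for whatever readiness test the real driver uses):

```c
#include "net/packetbuf.h"
#include "net/queuebuf.h"
#include "net/mac/mac.h"
#include "net/netstack.h"
#include "sys/ctimer.h"

static struct queuebuf *pending;      /* saved copy of the deferred packet */
static mac_callback_t pending_cb;     /* caller's "sent" callback */
static void *pending_ptr;
static struct ctimer retry_timer;

static int receiver_is_awake(void);   /* placeholder, provided by the real driver */

static void
retry_send(void *unused)
{
  queuebuf_to_packetbuf(pending);     /* restore the saved packet into packetbuf */
  queuebuf_free(pending);
  pending = NULL;
  NETSTACK_RADIO.send(packetbuf_hdrptr(), packetbuf_totlen());
  mac_call_sent_callback(pending_cb, pending_ptr, MAC_TX_OK, 1);
}

static void
send(mac_callback_t sent, void *ptr)
{
  if(!receiver_is_awake() && pending == NULL) {
    /* Copy the packet out of packetbuf so packetbuf stays free for
     * incoming packets while we wait (queuebuf_new_from_packetbuf()
     * returns NULL if the queuebuf pool is full; not handled here). */
    pending = queuebuf_new_from_packetbuf();
    pending_cb = sent;
    pending_ptr = ptr;
    ctimer_set(&retry_timer, CLOCK_SECOND / 8, retry_send, NULL);
    mac_call_sent_callback(sent, ptr, MAC_TX_DEFERRED, 1);
    return;
  }
  NETSTACK_RADIO.send(packetbuf_hdrptr(), packetbuf_totlen());
  mac_call_sent_callback(sent, ptr, MAC_TX_OK, 1);
}
```

The key point is that send() returns immediately in both branches: the deferred packet lives in a queuebuf, not in packetbuf, until the retry timer fires.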
I have the following setup:
A: 1 x Coordinator connected via USB dongle (sparkfun) to a Windows 10 IoT device - Serial communication
B: 1 x Router connected to an Arduino Fio
C: 1 x Router connected via USB dongle (sparkfun) to a Windows 10 machine running XCTU
All of the above are in API mode 1.
My scenario is as follows:
Every 5 seconds I send a 6-byte message from A to B and C.
B is instructed to reply to that message with another one of the same size.
After some time, typically 40 - 50 minutes, A no longer receives messages from B.
Reads from the serial port are working (a Transmit Status message is received for each message sent by A).
C receives messages as seen in XCTU.
If nothing changes A will never hear from B again.
However, if (by some internal logic) B sends a message to A (other than the reply), or if C sends a 6-byte message (the same as the one A sends to B and C) to B, A suddenly starts receiving messages from B again.
Does anyone know why is this happening?
It was the arduino library that we misused.
It only works in API Mode 2 and we have the module configured for API Mode 1.
(does anyone know why the library has not yet been updated to be used with API Mode 1?)
It was happening only after a while because we have an incremental counter in our message, and at some point that counter reached a value containing a byte that is a special character from the API Mode 2 perspective.
Traffic from XCTU always worked, since there was no incremental logic there.
Many thanks to #tomlogic for his suggestion. Helped a lot!
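For reference, this is roughly the escaping that API Mode 2 adds on top of Mode 1, as described in Digi's documentation. A library that assumes AP=2 will mis-handle a frame from an AP=1 module as soon as the payload happens to contain one of the reserved byte values, which is exactly what an incrementing counter eventually produces. A sketch of the escaping rule:

```c
#include <stdint.h>
#include <stddef.h>

/* API Mode 2 (AP=2) escaping: inside a frame, the bytes 0x7E, 0x7D, 0x11
 * and 0x13 are sent as 0x7D followed by the original byte XORed with 0x20.
 * An AP=1 module neither produces nor expects these escapes.
 * out[] must be able to hold up to 2*n bytes; returns the escaped length. */
static size_t escape_api2(const uint8_t *in, size_t n, uint8_t *out)
{
    size_t o = 0;
    for (size_t i = 0; i < n; i++) {
        uint8_t b = in[i];
        if (b == 0x7E || b == 0x7D || b == 0x11 || b == 0x13) {
            out[o++] = 0x7D;        /* escape marker */
            out[o++] = b ^ 0x20;    /* escaped value */
        } else {
            out[o++] = b;
        }
    }
    return o;
}
```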
I have to send a file via serial port to my program that is running on an embedded device using HyperTerminal and XMODEM protocol. The serial communication is OK (9600 baud, 1 StopBit, No parity, 8 data bits, no flow control), because both sending commands and receiving answers work properly.
When I send the command "upload", the device answers that it is ready and waits for the file. In HyperTerminal, I then go to Transfer->Send File..., select a file and the XMODEM protocol, and click "Send". After clicking Send, the upload doesn't begin and a timeout message appears.
While debugging, I see that the program doesn't receive any bytes from the serial port, but if I send a byte by pressing a key, the program receives it. Can I assume that the problem is that HyperTerminal doesn't send anything? Why is that?
An XMODEM transfer is initiated by the receiver rather than the sender. The transfer starts when the receiving device sends a NAK (XMODEM checksum mode) or 'C' (XMODEM-CRC/1K). If the receiving end does not initiate the transfer, no transfer will occur.
You may find that you have to start the transfer from the sending end first, then initiate it at the receiver. Alternatively, while waiting for the transfer, the receiving end may repeatedly send the start character until it gets a response (or times out).
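As a sketch of that receiver-initiated start-up on the embedded side, assuming hypothetical serial_putc()/serial_getc() helpers in place of the real UART driver:

```c
#include <stdint.h>

#define XMODEM_SOH 0x01   /* start of a 128-byte block, sent by the sender   */
#define XMODEM_NAK 0x15   /* receiver's start request, checksum mode         */
#define XMODEM_CRC 'C'    /* receiver's start request, CRC mode              */

/* Hypothetical serial helpers; serial_getc() returns -1 on a short timeout. */
extern void serial_putc(uint8_t c);
extern int  serial_getc(void);

/* Receiver side of the XMODEM start-up handshake: keep requesting the
 * transfer until the sender answers with SOH, then fall into the normal
 * block-receive loop (not shown). Returns 0 on success, -1 on give-up. */
int xmodem_wait_for_sender(int use_crc, int max_tries)
{
    for (int i = 0; i < max_tries; i++) {
        serial_putc(use_crc ? XMODEM_CRC : XMODEM_NAK);
        int c = serial_getc();
        if (c == XMODEM_SOH)
            return 0;          /* sender has started; read the first block */
    }
    return -1;                 /* sender never answered */
}
```

If the device's "upload" handler never sends this start request, HyperTerminal will sit waiting and eventually report a timeout, which matches the behaviour described.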
I want to set up an XBee network with four Series 1 modules. Any two of them should be able to communicate with each other in both directions. The transmitted data is a string rather than a single byte.
My original design is to set up a non-beacon (with coordinator) network: one module is configured as the coordinator, and the other three modules are configured as end devices. The coordinator broadcasts the data it receives from the end devices.
The communication workflow is: if end device 1 wants to send data to end device 2, it first sends the data to the coordinator. The coordinator then broadcasts the data received from end device 1, end device 2 receives the broadcast, and the communication workflow finishes.
I want the received string to be atomic. If end device 1 and end device 3 send data at the same time, there would be a conflict: the two strings would be interleaved, and end device 2 couldn't tell which byte came from which device. That is, end device 1 sends the string "{AAAA}" (quotes not included) while end device 3 sends the string "<2222>". End device 2 may then receive something like "{A<22AA2A2}>", which isn't what I want. My expected string is "{AAAA}<2222>" or "<2222>{AAAA}".
How do I setup the network to meet my requirements?
There are two ways to achieve atomic transmissions using Digi's XBee modules. The method depends on whether API mode (AP parameter > 0) is in use.
If API mode is not in use (AP = 0) then the atomicity of data can be encouraged by setting the RO time to be greater than the number of characters of the longest string you are going to send from one of your nodes. This will make the XBee buffer wait the specified number of character times (the time it takes to send a character at a particular baud rate) before starting the over-the-air transmission. Note: you'll have to ensure that you send your entire string all at once to the radio in order for this scheme to work.
If API mode is being used (AP > 0) then it is very easy to get the behavior you want. You'll simply use the Tx Request frame (API frame type 0x1) and specify the string data you want to send. The data will always be sent atomically.
If API mode is being used on the receiving node (i.e. in this case, the coordinator) then the frame data will always arrive atomically as well.
Please refer to the Digi XBee 802.15.4 product support page for more information on how to use API mode and search the Internet for the many wonderful XBee libraries which allow you to use Digi XBee modules in API mode easily.
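As a sketch of the API-mode approach, this builds a 16-bit-address TX Request (API ID 0x01) by hand, following the frame layout in Digi's 802.15.4 documentation; an XBee library would normally assemble this frame for you:

```c
#include <stdint.h>
#include <string.h>

/* Build an 802.15.4 "TX Request, 16-bit address" frame (API ID 0x01) around
 * the given payload, per Digi's AP=1 frame layout. Returns the total frame
 * length written into out[], which must hold payload_len + 9 bytes. */
static size_t build_tx16_frame(uint16_t dest, uint8_t frame_id,
                               const uint8_t *payload, uint8_t payload_len,
                               uint8_t *out)
{
    uint16_t len = 5 + payload_len;   /* API ID + frame ID + addr(2) + options + data */
    size_t o = 0;

    out[o++] = 0x7E;                  /* start delimiter */
    out[o++] = len >> 8;              /* length MSB */
    out[o++] = len & 0xFF;            /* length LSB */
    out[o++] = 0x01;                  /* API ID: TX Request, 16-bit address */
    out[o++] = frame_id;              /* non-zero to get a TX Status frame back */
    out[o++] = dest >> 8;             /* destination address (0xFFFF = broadcast) */
    out[o++] = dest & 0xFF;
    out[o++] = 0x00;                  /* options: 0x00 = default behaviour */
    memcpy(&out[o], payload, payload_len);
    o += payload_len;

    uint8_t sum = 0;
    for (size_t i = 3; i < o; i++)    /* checksum covers the frame data only */
        sum += out[i];
    out[o++] = 0xFF - sum;            /* checksum = 0xFF minus the low byte of the sum */
    return o;
}
```

Because the whole string travels inside a single frame, the receiver gets "{AAAA}" either intact or not at all, which is exactly the atomicity asked for.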
I use blocking C sockets on Windows.
I use them to send data updates from the server to the client and vice versa, at a high frequency (every 100 ms). Will the send() function wait for the recipient's recv() to receive the data before returning?
I assume not, if I understand the man page correctly:
"Successful completion of send() does not guarantee delivery of the message."
So what will happen if one side has issued 10 send() calls while the other has only completed 1 recv()?
Do I need to use some sort of acknowledgement system?
Let's assume you are using TCP. When you call send, the data that you are sending is immediately placed on the outgoing queue and send then completes successfully. If, however, send is unable to place the data on the outgoing queue, send will return with an error.
Since TCP is a guaranteed-delivery protocol, data can only be removed from the outgoing queue once an acknowledgement has been received from the remote end, because the data may need to be resent if no ACK arrives in time.
If the remote end is sluggish, the outgoing queue will fill up with data and send will then block until there is space to place the new data on the outgoing queue.
The connection can, however, fail in such a way that no further data can be sent. Once a TCP connection has been closed, any further sends will result in an error, but the user has no way of knowing how much data actually made it to the other side (I know of no way of retrieving TCP bookkeeping from a socket to the user application). Therefore, if confirmation of receipt is required, you should probably implement it at the application level.
For UDP, I think it goes without saying that some way of reporting what has or has not been received is a must.
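A minimal sketch of such an application-level confirmation over TCP, using Winsock calls since the question is about Windows (the 0x06 ACK byte and the function names are arbitrary choices for illustration):

```c
#include <winsock2.h>   /* blocking Winsock sockets, as in the question */

/* send() may accept fewer bytes than asked, so loop until everything is queued. */
static int send_all(SOCKET s, const char *buf, int len)
{
    while (len > 0) {
        int n = send(s, buf, len, 0);
        if (n == SOCKET_ERROR)
            return -1;
        buf += n;
        len -= n;
    }
    return 0;
}

/* Send one update, then block until the peer's application replies with a
 * 1-byte acknowledgement. A TCP-level ACK alone is invisible to us here. */
static int send_update_confirmed(SOCKET s, const char *update, int len)
{
    char ack;
    if (send_all(s, update, len) < 0)
        return -1;
    if (recv(s, &ack, 1, 0) != 1)
        return -1;
    return ack == 0x06 ? 0 : -1;   /* 0x06 (ASCII ACK) chosen arbitrarily */
}
```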
send() blocks until the operating system (kernel) has taken the data and put it into a buffer of outgoing data. It does not wait until the other end has received the data.
If you're sending by TCP, you get guaranteed delivery¹ and the other end will receive the data in the order sent. That might, however, be coalesced together so what you sent as 10 separate updates could be received as a single large packet (or vice versa: a single update could be broken up across an arbitrary number of packets). This means, among other things, that any ACK of any data implicitly acknowledges receipt of all previous data.
If you're using UDP, none of that is true -- data can arrive out of order, or be dropped and never delivered at all. If you care about all the data being received, you just about need to build some sort of acknowledgement system of your own on top of UDP itself.
¹ Of course, there's a limit on the guarantee: if a network cable gets cut (or whatever), packets won't be delivered, but you'll at least get an error message telling you that the connection was lost.
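One consequence of that coalescing is that the receiver has to restore message boundaries itself. A common approach, sketched below with Winsock calls and an assumed 2-byte big-endian length prefix (not part of any existing protocol here), is to read exactly one framed update at a time:

```c
#include <stdint.h>
#include <winsock2.h>

/* recv() may return fewer bytes than requested, so loop until the count is met. */
static int recv_exact(SOCKET s, char *buf, int len)
{
    while (len > 0) {
        int n = recv(s, buf, len, 0);
        if (n <= 0)
            return -1;          /* error or connection closed */
        buf += n;
        len -= n;
    }
    return 0;
}

/* Read one length-prefixed update: 2-byte length, then that many payload bytes.
 * Returns the payload length, independent of how TCP split or merged packets. */
static int recv_update(SOCKET s, char *payload, int max_len)
{
    uint8_t hdr[2];
    if (recv_exact(s, (char *)hdr, 2) < 0)
        return -1;
    int len = (hdr[0] << 8) | hdr[1];
    if (len > max_len)
        return -1;              /* message too large for the caller's buffer */
    if (recv_exact(s, payload, len) < 0)
        return -1;
    return len;
}
```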
If you're using TCP, you get the acknowledgements for free, as that is part of what the protocol does under the hood. But it sounds like for this type of application you would probably want to use UDP. In either case, though, send() will not block until the client has successfully called recv().
If it's crucial that the client receive every message, then use TCP. If it's ok for the client to miss one or more messages, then use UDP.
TCP guarantees delivery at the TCP stack level, below your application. It retries delivery until the receiving end acknowledges that the data was received, but your application may never know about that fact.
Let's say you are sending chunks of data and need to place those chunks somewhere according to some logic. If your application does not know where each individual chunk has to be placed, receiving it at the TCP level may be useless. The original question was about application-level logic.