Receiving a file via XModem on HyperTerminal

I have to send a file via serial port to my program that is running on an embedded device, using HyperTerminal and the XMODEM protocol. The serial communication is OK (9600 baud, 8 data bits, no parity, 1 stop bit, no flow control), because both sending commands and receiving answers work properly.
When I send the command "upload", the device answers that it's ready and waits for the file. In HyperTerminal, I then go to Transfer->Send File..., select a file and the XMODEM protocol, then click "Send". After clicking Send, the upload doesn't begin and a timeout message appears.
While debugging, I see that the program doesn't receive any bytes from the serial port, but if I send a byte by pressing a key, the program receives it. Can I assume that the problem is that HyperTerminal doesn't send anything? Why is that?

XMODEM transfer is initiated by the receiver rather than the sender. The transfer starts when the receiving device sends a NAK (XMODEM checksum mode) or 'C' (XMODEM-CRC/1K); the sender then responds with the first packet, framed with SOH. If the receiving end does not initiate the transfer, no transfer will occur.
You may find that you have to start the transfer from the sending end first, then initiate the transfer at the receiver. Alternatively, while waiting for the transfer, the receiving end may repeatedly send the start character until it gets a response (or times out).
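The receiver-initiated handshake above can be sketched in C. This is a minimal illustration, not a complete XMODEM implementation: `fd` is assumed to be an open serial-port file descriptor, the function names are made up for this example, and a real receiver would also use a read timeout of about 10 seconds per attempt.

```c
#include <stddef.h>
#include <unistd.h>

#define SOH 0x01       /* start of a 128-byte packet, sent by the sender */
#define NAK 0x15       /* start character for checksum-mode XMODEM       */
#define CRC_START 'C'  /* start character for XMODEM-CRC / XMODEM-1K     */

/* 8-bit arithmetic checksum used by classic (non-CRC) XMODEM packets. */
unsigned char xmodem_checksum(const unsigned char *data, size_t len)
{
    unsigned char sum = 0;
    for (size_t i = 0; i < len; i++)
        sum += data[i];
    return sum;
}

/* Repeatedly send the start character until the sender answers with SOH
 * (or we give up). Returns 0 on success, -1 when no sender responded. */
int xmodem_handshake(int fd, int use_crc, int max_tries)
{
    unsigned char start = use_crc ? CRC_START : NAK;
    for (int i = 0; i < max_tries; i++) {
        write(fd, &start, 1);   /* announce "I am ready to receive" */
        unsigned char c;
        /* A real implementation would wrap this read in a timeout,
         * e.g. with select() or termios VMIN/VTIME settings. */
        if (read(fd, &c, 1) == 1 && c == SOH)
            return 0;           /* sender has started the transfer */
    }
    return -1;
}
```

This is why HyperTerminal appears to "send nothing": as the sender it sits silently until it sees the receiver's NAK or 'C' on the line.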

Related

TCP send/receive packet timeout in Linux

I'm using a Raspberry Pi B+ and building a TCP server/client connection in C.
I have a few questions from the client side.
How long does Linux queue packets for the client? When a packet has been received by Linux, what happens if the client is not ready to process it, or the select/epoll call inside the loop has a 1-minute sleep? If there is a timeout, is there a way to adjust it in code or a script?
What is the internal process inside Linux when it receives a packet? (i.e., ethernet port -> kernel -> RAM -> application?)
The Raspberry Pi (with Linux), and any known Linux (or non-Linux) TCP/IP stack, works roughly like this:
You have a kernel buffer in which the kernel stores all the data from the other side; this is the data that has not yet been read by the user process. The kernel has normally acknowledged all of this data to the other side (the acknowledgement states the last byte received and stored in that buffer). The sender side also has a buffer, where it stores all the sent data that has not yet been acknowledged by the receiver (this data must be resent in case of timeout), plus data that is not yet within the window advertised by the receiver. If this buffer fills, the sender is blocked, or a partial write is reported to the user process (depending on options).
That kernel buffer (the read buffer) allows the kernel to keep the data available for the user process while the process is not reading it. If the user process cannot read it, the data remains there until the process makes a read() system call.
The amount of buffer space the kernel still has available (known as the window size) is sent to the other end in each acknowledgement, so the sender knows the maximum amount of data it is authorized to send. When the buffer is full, the window size drops to zero and the receiver announces that it cannot receive more data. This allows a slow receiver to stop a fast sender from filling the network with data that cannot be delivered.
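The advertised window is backed by the socket's kernel receive buffer, whose size you can inspect (or change) with `SO_RCVBUF`. A minimal sketch, assuming a Linux-like sockets API; note that Linux in particular doubles the value you *set* with `SO_RCVBUF` to account for bookkeeping overhead:

```c
#include <sys/socket.h>
#include <unistd.h>

/* Return the default receive-buffer size of a fresh TCP socket in
 * bytes, or -1 on error. This buffer is what backs the TCP window
 * advertised to the peer. */
int tcp_rcvbuf_bytes(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;
    int size = 0;
    socklen_t len = sizeof(size);
    if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &size, &len) < 0)
        size = -1;
    close(fd);             /* only probing the default; done with it */
    return size;
}
```

Calling `setsockopt(fd, SOL_SOCKET, SO_RCVBUF, ...)` before connecting adjusts how much unread data the kernel will queue before the window closes.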
From then on (the zero-window situation), the sender periodically sends a segment with no data at all (or with just one byte of data, depending on the implementation) to check whether some window has opened to allow it to send more data. The acknowledgement of that probe will allow it to start communicating again.
Everything is stopped now, but no timeout happens. Both TCPs continue talking this way until some window is available (meaning the receiver has read() part of the buffer).
This situation can be maintained for days without any problem: the reading process is busy and cannot read the data, and the writing process is blocked in the write call until the kernel on the sending side has buffer space to accommodate the data to be written.
When the reading process reads the data:
An ACK of the last received byte is sent, announcing a new window size larger than zero (larger by the amount freed when the reader process read).
The sender receives this acknowledgement and sends that amount of data from its buffer. If this frees enough room to accommodate the data the writer has requested to write, the writer will be woken up and allowed to continue sending data.
Again, timeouts normally only occur if data is lost in transit.
But...
If you are behind a NAT device, your connection state can be lost from not exercising it (the NAT device maintains a cache of local address/port pairs making connections to the outside), and on the next data transfer from the remote device, the NAT device may (or may not) send an RST, because the packet refers to a connection it no longer knows about (the cache entry expired).
Or, if the next packet comes from the internal device, the connection may be re-cached and continue. What happens depends on which side is the first to send a packet.
Nothing specifies that an implementation must provide a timeout for data to be sent, but some implementations do, aborting the connection with an error if some data remains unacknowledged for a long time. TCP itself specifies no timeout in this case, so it is the process's responsibility to cope with it.
TCP is specified in RFC 793, which must be obeyed by all implementations if they want communications to succeed. You can read it if you like; I think you'll get a better explanation there than the one I give you here.
So, to answer your first question: the kernel will store the data in its buffer for as long as your process wants to wait for it. By default, you just call read() on the socket, and the kernel waits for as long as you (the user) don't decide to stop the process and abort the operation; in that case the kernel will probably close the connection or reset it. The resources are tied to the life of the process, so as long as the process is alive and holding the connection, the kernel will wait for it.
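As for adjusting the timeout in code: the kernel imposes none while waiting for data, but the application can set one per socket. A minimal sketch using `SO_RCVTIMEO` (the helper name is mine); after this call, a blocking `read()`/`recv()` on `fd` fails with `EAGAIN`/`EWOULDBLOCK` once the timeout expires instead of blocking forever:

```c
#include <sys/socket.h>
#include <sys/time.h>

/* Make blocking receives on `fd` give up after `seconds` seconds.
 * Returns 0 on success, -1 on error (as setsockopt does). */
int set_recv_timeout(int fd, int seconds)
{
    struct timeval tv = { .tv_sec = seconds, .tv_usec = 0 };
    return setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));
}
```

Passing a timeout to `select()` or `epoll_wait()`, as the question already suggests, achieves the same effect without touching the socket options.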

Can I exit immediately after receiving the file through the socket in C

I am trying to read a file from a socket.
I use select with timeout to exit after reading.
select(maxfdp1, &rset, NULL, NULL, &timeout);
But if I knew the size of the file being sent right away, I could exit instantly after getting the right amount of bytes.
Can I get the full file size before transferring it?
Or what should I use to exit instantly after the transfer is complete?
Because TCP is a stream-oriented protocol, it has no concept of the size of an application-layer message. If you're defining your own application-layer protocol on top of TCP, you could have your sender first transmit the size of the following data, for example as four bytes in network byte order (big-endian).
Once you've received all of the data you want, you can call close on the socket.
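The length-prefix framing suggested above can be sketched like this; the helper names are illustrative, and the sender/receiver loop is only described in the comments:

```c
#include <arpa/inet.h>   /* htonl / ntohl */
#include <stdint.h>
#include <string.h>

/* Encode a payload length as a 4-byte big-endian (network order)
 * prefix. The sender writes these 4 bytes, then the payload. */
void encode_length(uint32_t len, unsigned char out[4])
{
    uint32_t be = htonl(len);
    memcpy(out, &be, 4);
}

/* Decode the 4-byte prefix back into a host-order length. The
 * receiver reads 4 bytes, calls this, then loops on recv() until it
 * has exactly that many payload bytes, and can close() immediately. */
uint32_t decode_length(const unsigned char in[4])
{
    uint32_t be;
    memcpy(&be, in, 4);
    return ntohl(be);
}
```

With this framing there is no need for a `select()` timeout to detect the end of the file: the receiver knows the byte count up front.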

TCP Window Full STM32

I'm developing an ethernet application on an STM32 Nucleo board using the LWIP library, and I'm getting a TCP Window Full message when my board is receiving data, as you can see in the Wireshark capture. I've tested several times and I've realized that it stops working when the window reaches 2144 bytes.
Does anyone know how to clear/reset this window? I know that I could increase this number, as you can see in the second photo, but I would prefer to be able to reset or clear it after a while, because otherwise I would fill the memory in a few minutes.
Thanks in advance ;)
Wireshark capture:
STM32CubeMX capture of the LWIP Configuration:
"TCP Window Full" happens when your receive window shrinks down to zero, that is - the receive buffers get filled up. It will remain full until you receive the data from the socket.
It typically happens when the sender sends data faster than the receiver processes it, or at least receives it from the socket. When this happens, sender should stop sending more data until receiver is able to again receive more. This happens after you receive data from the socket and there's two ways in which the sender is informed about it:
Receiver sends "TCP Window Update" which indicates how much space in the receive window is available again. This is not a frame that is acknowledged by the sender, it may get lost. Because of this there's also a second way below.
Sender continually polls the receiver by sending TCP Keep-Alive packets (packets with no data). Those packets must be acknowledged by the receiver, and because each TCP segment carries the remote end's window size in its header, the sender learns whether you're able to receive again.
"TCP Window Full" is not an error - Wireshark colors it in black just to indicate that if you have issues with transmission, this may be something you might want to look at. Another example of such coloring is TCP retransmissions.
To summarize: you should read the data from the socket. If you already are doing so, but there is nothing to read (e.g. select indicates no data is available), then this may indicate some other problem in your specific case.
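"Read the data from the socket" concretely means draining everything that is buffered so the advertised window can reopen. A sketch in plain BSD sockets, which lwIP can also provide when `LWIP_SOCKET` is enabled in the configuration; the commented-out `process_chunk()` is a placeholder for whatever your application does with the bytes:

```c
#include <errno.h>
#include <stddef.h>
#include <sys/socket.h>

/* Read all currently buffered data without blocking. Returns the
 * number of bytes consumed, or -1 on a real error. Each successful
 * recv() frees receive-buffer space, letting TCP reopen the window. */
long drain_socket(int fd, char *buf, size_t cap)
{
    long total = 0;
    for (;;) {
        ssize_t n = recv(fd, buf, cap, MSG_DONTWAIT);
        if (n > 0) {
            /* process_chunk(buf, (size_t)n);  -- hand data onward */
            total += n;
        } else if (n == 0) {
            break;   /* peer closed the connection */
        } else if (errno == EAGAIN || errno == EWOULDBLOCK) {
            break;   /* buffer drained; window can grow again */
        } else {
            return -1;
        }
    }
    return total;
}
```

If you are using lwIP's raw/callback API instead of sockets, the equivalent step is calling `tcp_recved()` after consuming a received pbuf, which is what credits the window back.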

Implementing acknowledge packet

I'm trying to implement a simple instant messenger server and came up with the following problem:
How can I implement a protocol with an acknowledge packet?
I think it could be implemented like this:
>> client sends packet with ACKID and waits for ACKID to arrive
<< server receives packet and sends the same ACKID back
now the client knows the packet was fully delivered.
But with this approach the client would block until the ACKID is sent back, and if another packet interrupts this process the client would block forever (or until a timeout occurs).
I assume you are sending data like this at the moment:
Send("mydata");
Now, do this:
Send("mydata");
auto ack = Receive();
assert(ack == "data acknowledged");
(In pseudo-code).
Use a timeout for both operations. Only when the Receive completes without error do you know that the data was received.
The same principle can be translated to async IO APIs. This is immaterial to the question.
(Stop talking about "packets" in the context of TCP. TCP does not know what that is.)
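The send-then-wait pattern from the pseudo-code can be sketched in C with blocking BSD sockets. The 3-byte "ACK" reply and the helper name are illustrative, not a standard protocol; combine this with a receive timeout (e.g. `SO_RCVTIMEO`) so a lost reply cannot block forever:

```c
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Send `len` bytes, then wait for the peer's 3-byte "ACK" reply.
 * Returns 0 when acknowledged, -1 on any failure. */
int send_with_ack(int fd, const void *data, size_t len)
{
    const char *p = data;
    while (len > 0) {              /* send() may accept only part */
        ssize_t n = send(fd, p, len, 0);
        if (n <= 0)
            return -1;
        p += n;
        len -= (size_t)n;
    }
    char reply[3];
    size_t got = 0;
    while (got < sizeof reply) {   /* recv() may also be partial */
        ssize_t n = recv(fd, reply + got, sizeof reply - got, 0);
        if (n <= 0)
            return -1;
        got += (size_t)n;
    }
    return memcmp(reply, "ACK", 3) == 0 ? 0 : -1;
}
```

Note that this only confirms the peer's application saw the data; to avoid the "another packet interrupts" problem from the question, each message would need an ID echoed back in the reply, as the original ACKID idea suggests.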

send and receive at the same time in arduino to xbee

I have tried to run the example programs from the arduino-xbee library. I need to send some data from one node to another and, at the same time, be ready to read data arriving at the sending node itself. Assume X sends data to Y; when Y receives the data, it sends an acknowledgement back to X. But if Z sends data to X, or if Z sends a broadcast, will I be able to read the data from Z at X as well as the acknowledgement from Y to X?
So any pointers to send and receive at the same time using arduino-xbee would be very helpful.
Thanks in advance.
If arduino-xbee uses "API mode" for the XBee modules, you'll receive separate frames of data. Each frame will have headers to identify the source of the data, and to match responses to requests (for AT commands).
An XBee module in "AT mode" or "Transparent mode" will just stream data out of the serial port that was received over the network on a specific endpoint and cluster. You won't know who sent it, and you need to enter "command mode" to read or write parameters using AT commands.
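In API mode, each frame the module emits has the form: 0x7E start delimiter, a 16-bit big-endian length, `length` bytes of frame data (whose first byte identifies the frame type, e.g. received data vs. transmit status), and a 1-byte checksum equal to 0xFF minus the low byte of the sum of the frame-data bytes. A small validation sketch, assuming the frame has already been read into a buffer:

```c
#include <stddef.h>
#include <stdint.h>

/* `frame` points at the 0x7E start delimiter; `n` is the number of
 * bytes available. Returns 1 if the frame is complete and its
 * checksum is valid, 0 otherwise. */
int xbee_frame_valid(const uint8_t *frame, size_t n)
{
    if (n < 5 || frame[0] != 0x7E)
        return 0;
    size_t len = ((size_t)frame[1] << 8) | frame[2];
    if (n < len + 4)        /* delimiter + 2 length bytes + data + checksum */
        return 0;
    uint8_t sum = 0;
    for (size_t i = 0; i < len; i++)   /* frame data starts at offset 3 */
        sum += frame[3 + i];
    /* Valid when frame-data sum plus checksum equals 0xFF (mod 256). */
    return (uint8_t)(sum + frame[3 + len]) == 0xFF;
}
```

Because every incoming frame is delimited and typed this way, data from Z and the acknowledgement from Y arrive as separate, distinguishable frames at X, which is what makes the simultaneous send/receive in the question workable.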
