When will a TCP packet be fragmented at the application layer? When a TCP packet is sent from an application, will the recipient at the application layer ever receive the packet in two or more packets? If so, what conditions cause the packet to be divided? It seems like a packet won't be fragmented until it reaches the Ethernet limit of 1500 bytes (at the link layer). But that fragmentation will be transparent to the recipient at the application layer, since the network layer will reassemble the fragments before sending the packet up to the next layer, right?
It will be split when it hits a network device with a lower MTU than the packet's size. Most Ethernet links use an MTU of 1500 bytes, but it can often be smaller: 1492 if that Ethernet is carried over PPPoE (DSL) because of the encapsulation overhead, and even lower if another layer is added, as with Windows Internet Connection Sharing. And dial-up is normally 576!
In general, though, you should remember that TCP is not a packet protocol. It uses packets at the lowest level to transmit over IP, but as far as the interface of any TCP stack is concerned, it is a stream protocol and has no obligation to give you a 1:1 relationship with the physical packets sent or received (for example, most stacks will hold data until a certain period of time has expired, or until there is enough data to fill an IP packet for the given MTU).
As an example, if you send two "packets" (call your send function twice), the receiving program might only receive one "packet" (the receiving TCP stack might combine them). If you are implementing a message-type protocol over TCP, you should include a header at the beginning of each message (or some other header/footer mechanism) so that the receiving side can split the TCP stream back into individual messages, whether a message arrives in two parts or several messages arrive as one chunk.
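For illustration, here is a minimal sketch of such a framing scheme on the sending side, assuming BSD sockets and a made-up 4-byte length prefix in network byte order (send_message and the layout are illustrative, not any standard API):

#include <arpa/inet.h>    /* htonl */
#include <stdint.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Send one application-level message as [4-byte length][payload].
 * The length prefix lets the receiver split the TCP byte stream
 * back into messages, however the segments happen to arrive. */
static int send_message(int fd, const void *payload, uint32_t len)
{
    uint32_t hdr = htonl(len);
    const uint8_t *parts[2] = { (const uint8_t *)&hdr, payload };
    size_t sizes[2] = { sizeof(hdr), len };

    for (int i = 0; i < 2; i++) {
        size_t off = 0;
        while (off < sizes[i]) {
            /* send() may accept fewer bytes than requested, so loop. */
            ssize_t n = send(fd, parts[i] + off, sizes[i] - off, 0);
            if (n < 0)
                return -1;
            off += (size_t)n;
        }
    }
    return 0;
}

The receiving side then reads the 4-byte length first, converts it with ntohl(), and keeps reading until that many payload bytes have arrived, regardless of how many recv() calls that takes.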
Fragmentation should be transparent to a TCP application. Keep in mind that TCP is a stream protocol: you get a stream of data, not packets! If you are building your application based on the idea of complete data packets then you will have problems unless you add an abstraction layer to assemble whole packets from the stream and then pass the packets up to the application.
The question makes an assumption that is not true -- TCP does not deliver packets to its endpoints, rather, it sends a stream of bytes (octets). If an application writes two strings into TCP, it may be delivered as one string on the other end; likewise, one string may be delivered as two (or more) strings on the other end.
RFC 793, Section 1.5:
"The TCP is able to transfer a
continuous stream of octets in each
direction between its users by
packaging some number of octets into
segments for transmission through the
internet system."
The key words being continuous stream of octets (bytes).
RFC 793, Section 2.8:
"There is no necessary relationship
between push functions and segment
boundaries. The data in any particular
segment may be the result of a single
SEND call, in whole or part, or of
multiple SEND calls."
The entirety of section 2.8 is relevant.
At the application layer there are any number of reasons why the whole 1500 bytes may not show up in one read. Various factors in the operating system and TCP stack may cause the application to get some bytes in one read call and the rest in the next. Yes, the TCP stack has to reassemble the packet before sending it up, but that doesn't mean your app is going to get it all in one shot (it is LIKELY to get it in one read, but it's not GUARANTEED).
TCP tries to guarantee in-order delivery of bytes, with error checking, automatic re-sends, etc. happening behind your back. Think of it as a pipe at the app layer and don't get too bogged down in how the stack actually sends it over the network.
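As a minimal sketch of that abstraction on the receiving side (read_full is a hypothetical helper over BSD sockets, not a standard call), you simply keep calling recv() until everything you asked for has arrived:

#include <stdint.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Read exactly `len` bytes from a connected TCP socket.  recv() is
 * free to return fewer bytes than requested, so loop until the full
 * amount has arrived, the peer closes, or an error occurs.
 * Returns 0 on success, -1 on error or EOF. */
static int read_full(int fd, void *buf, size_t len)
{
    uint8_t *p = buf;
    size_t got = 0;

    while (got < len) {
        ssize_t n = recv(fd, p + got, len - got, 0);
        if (n <= 0)        /* 0 = connection closed, <0 = error */
            return -1;
        got += (size_t)n;
    }
    return 0;
}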
This page is a good source of information about some of the issues others have brought up, namely the need for data encapsulation on an application-protocol-by-application-protocol basis. It's not quite authoritative in the sense you describe, but it has examples and is sourced to some pretty big names in network programming.
If a packet exceeds the MTU of a network device, it will be broken up into multiple packets. (Note that most equipment is set to 1500 bytes, but this is not a necessity.)
The reconstruction of the packet should be entirely transparent to the applications.
Different network segments can have different MTU values. In that case fragmentation can occur. For more information see TCP Maximum segment size
This (de)fragmentation happens below the application, in the TCP/IP stack. At the application layer there are no more packets: TCP presents a contiguous data stream to the application.
A the "application layer" a TCP packet (well, segment really; TCP at its own layer doesn't know from packets) is never fragmented, since it doesn't exist. The application layer is where you see the data as a stream of bytes, delivered reliably and in order.
If you're thinking about it otherwise, you're probably approaching something in the wrong way. However, this is not to say that there might not be a layer above this, say, a sequence of messages delivered over this reliable, in-order bytestream.
Correct. The most informative way to see this is with Wireshark, an invaluable tool. Take the time to figure it out; it has saved me several times and gives a good reality check.
If a 3000-byte packet enters an Ethernet network with the default MTU of 1500 bytes (for Ethernet), it will be fragmented into multiple IP fragments, each no larger than 1500 bytes (each fragment carries its own IP header, so the split isn't exactly two 1500-byte halves). That is the only time I can think of.
Wireshark is your best bet for checking this. I have been using it for a while and am totally impressed.
I have the task of counting the number of collisions in my S1 XBee network, and I can't figure out how to do it. Are you guys aware of whether there is such a thing in the XBee Arduino API library?
I must stress that I'm not trying to avoid collisions; I'm actually trying to analyze them.
My Setup:
• XBee S1 w/ API 2 (escaped);
• Arduino Uno w/ shield.
Any suggestions would be appreciated.
Take a look at the Transmit Status frames, as they report failures to transmit, including CCA (clear channel assessment) failures. You might also need to look into the settings for retries, since the XBee may successfully send after a few collisions and not report on it.
Whatever you come up with, you'll probably need to set up an 802.15.4 sniffer to see if the reported number of collisions matches the number you count manually in a packet capture.
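For what it's worth, here is a rough C sketch of the counting idea, assuming the frame bytes have already been unescaped and checksum-verified; per Digi's S1 API documentation the Transmit Status frame type is 0x89 and a delivery status of 0x02 indicates a CCA failure (the function and counter names are made up):

#include <stddef.h>
#include <stdint.h>

/* Frame type and delivery status for the S1 (802.15.4) TX Status frame. */
#define API_TX_STATUS      0x89
#define TX_STATUS_CCA_FAIL 0x02

static unsigned long cca_failures;   /* running counter of CCA failures */

/* `data`/`len` are the frame-specific bytes of one API frame, i.e. the
 * bytes between the length field and the checksum, already unescaped. */
static void count_cca_failure(const uint8_t *data, size_t len)
{
    /* TX Status layout: [0] = frame type, [1] = frame ID, [2] = status */
    if (len >= 3 && data[0] == API_TX_STATUS && data[2] == TX_STATUS_CCA_FAIL)
        cca_failures++;
}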
I'm writing a C program to manage certain aspects of a wireless network (access point + client devices).
One part of the program runs on the devices and another runs on the AP. The AP is a simple Linux station (a Cubietruck, later to be exchanged for a board holding an Intel Celeron; access point set up with hostapd and dnsmasq).
Some features are already implemented. I've done a lot with cfg80211/nl80211 and a bit with WEXT, and some communication routines over BSD sockets are in place.
But now a problem has come up. In the C program running on the access point I need the received signal strength of the associated devices.
On the devices everything works well. With nl80211 I can get nearly every piece of information about the connection. But on the access point I don't know how to obtain the RSS. I've tried some nl80211 requests with various attributes but can't get it to work.
Sure, on the devices it's easy, because they have a single connection. But on the AP I had expected something like an nl80211 answer with a linked list or nested attributes, yet got nothing. I checked the attributes contained in the answers to certain requests, and the messages contain nothing usable.
Does somebody know how to solve this? It shouldn't be such a big deal to obtain the received signal strength of the associated devices on a WLAN AP.
It would be really nice if it were doable with nl80211, but another solution would also be welcome.
Maybe with some Wi-Fi packet parsing? I've heard that there is something like an RSSI (received signal strength indicator), but I'm not familiar with it.
Thanks in advance
Here's a workaround: the wireless channel attenuation from the AP to a station/device and from that station back to the same AP is identical at any given time. So, if the transmission power of the AP and the stations is the same, the stations can report their RSS to the AP using your current solution, and the work is done. Of course the tx powers of different stations may differ, but they are constant, so find them out and make adjustments accordingly. Here's a simple example:
AP tx power 20 dBm;
Station 1 tx power 15 dBm with RSS -37 dBm;
Then the RSS of the link from station 1 to the AP should be -42 dBm.
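A tiny sketch of that adjustment, assuming symmetric attenuation as described above (the function name is made up):

/* Estimate the RSS the AP would see from a station, given the RSS the
 * station reports for the AP's signal and both transmit powers (dBm).
 * Assumes the channel attenuation is identical in both directions:
 *   attenuation = ap_tx_dbm - rss_at_station_dbm
 *   rss_at_ap   = station_tx_dbm - attenuation
 */
static double rss_at_ap(double ap_tx_dbm, double station_tx_dbm,
                        double rss_at_station_dbm)
{
    return rss_at_station_dbm - (ap_tx_dbm - station_tx_dbm);
}

/* Example from above: rss_at_ap(20, 15, -37) == -42 dBm. */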
I am writing a program that uses libpcap to capture packets and reassemble a TCP stream. My program simply monitors the traffic and so I have no control over the reception and transmittal of packets. My program disregards all non TCP/IP traffic.
I calculate the next expected sequence number from the ISN and then from the successive SEQ numbers. I have it set up so that every TCP connection is uniquely identified by a tuple made up of the source IP, source port, destination IP, and destination port. Everything goes swimmingly until I receive a packet whose sequence number differs from the one I am expecting. I have uploaded screenshots to help illustrate what I am describing here.
My questions are:
1. Where is the data that was in the "lost" packet?
2. How does the SEQ number order recover from this situation?
3. What can I do to handle these occurrences?
Please remember, however, that I am not writing a program that adheres to TCP. I am writing a program that passively monitors network traffic for TCP streams and attempts to save the raw data to disk, and I am confused as to why the above-stated situation happens and how I can program to handle it.
Thank you
Where is the data that was in the "lost" packet?
It got dropped by someone
It got lost on the way (wrong detour) and will arrive later
How does the SEQ number order recover from this situation?
The receiver notices the segment is out of sequence and doesn't send it to the application, thereby fulfilling its contract: in-order reliable byte stream. Now, what actually happens to get the missing piece is quite intricate and varies from stack to stack. In a nutshell the stack waits for the missing piece to arrive.
The receiver can throw away out-of-sequence segments or it can queue them in a reassembly queue
The receiver can wait for the missing segment to arrive or it can immediately send the ACK it already sent before. Duplicate ACKs will alert the peer something is wrong (look for Fast Retransmit)
When sending acknowledgments the TCP can inform the peer some segments arrived successfully - they're just out of sequence (SACK)
What can I do to handle these occurrences?
You can't do anything since you're only monitoring. You could probably get more insight into what is really happening if you also captured the response traffic.
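If you want your monitor to cope with this anyway, one common approach is to stash out-of-order segments and only write data to disk once the gap is filled by a retransmission you capture later. A rough sketch follows (the structures, the write_out callback, and the lack of overlap handling are all simplifying assumptions):

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* One buffered out-of-order TCP segment for a single monitored flow. */
struct ooo_seg {
    uint32_t seq;            /* sequence number of first payload byte */
    uint32_t len;            /* payload length */
    uint8_t *data;           /* copied payload */
    struct ooo_seg *next;
};

struct flow {
    uint32_t next_seq;       /* next byte we expect to write out */
    struct ooo_seg *pending; /* segments captured ahead of next_seq */
};

/* Called for each captured data segment of this flow.  Emit in-order data
 * through write_out(), stash anything that arrives early, and flush the
 * stash whenever the gap closes (i.e. the retransmission is captured). */
void handle_segment(struct flow *f, uint32_t seq,
                    const uint8_t *payload, uint32_t len,
                    void (*write_out)(const uint8_t *, uint32_t))
{
    if (seq == f->next_seq) {
        write_out(payload, len);
        f->next_seq += len;

        /* Flush any stashed segments that now continue the stream. */
        int progress = 1;
        while (progress) {
            progress = 0;
            for (struct ooo_seg **p = &f->pending; *p; p = &(*p)->next) {
                if ((*p)->seq == f->next_seq) {
                    struct ooo_seg *s = *p;
                    write_out(s->data, s->len);
                    f->next_seq += s->len;
                    *p = s->next;
                    free(s->data);
                    free(s);
                    progress = 1;
                    break;
                }
            }
        }
    } else if ((int32_t)(seq - f->next_seq) > 0) {
        /* Arrived ahead of a missing piece: stash a copy for later. */
        struct ooo_seg *s = malloc(sizeof(*s));
        if (!s)
            return;                     /* sketch: drop on allocation failure */
        s->data = malloc(len);
        if (!s->data) {
            free(s);
            return;
        }
        memcpy(s->data, payload, len);
        s->seq = seq;
        s->len = len;
        s->next = f->pending;
        f->pending = s;
    }
    /* else: old or duplicate data (bytes already written); a real tool
     * would handle partial overlaps, this sketch simply ignores it. */
}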
Depending on the window size of the current TCP connection, if the new packet fits within the receive window (a multi-packet buffer), it will be entered into the receive queue (and reordered for in-order delivery to protocol clients).
If the sequence number falls beyond the end of the current window, the packet gets rejected.
See also section 4.4.2 (INPUT PACKET HANDLER) in RFC 675
I'm coding part of a somewhat complex communication protocol to control multiple medical devices from a single computer terminal. The computer terminal needs to manage about 20 such devices, and every device uses the same communication protocol, called DEP. I've created a loop that multiplexes among the different devices to send requests and receive the patient data associated with each device. The structure of this loop, in general, is something like this:
Begin Loop
    Select Device i
        if Device.Socket has Data
            Strip Header
            Copy Data on Queue
        end if
        rem_time = TIMEOUT - (CurrentTime - Device.Session.LastRequestTime)
        if rem_time <= 0
            Send Re-association Request to Device
        else
            Sort Pending Requests According to Time
            Select First Request
                Send the Request
                Set Request Priority Least
            end Select
        end if
    end Select
end Loop
I might have made some mistakes in the above pseudo-code, but I hope I've made clear what this loop is trying to do. I have a priority-list structure that selects the device and the pending request for that device, so that all requests and devices are serviced at reasonable intervals.
I forgot to mention that the above loop does not actually parse the received data; it only strips off the header and puts the data in a queue. The data in the queue is parsed in a different thread and recorded in a file or database.
I wish to add a feature so that other computers can also pull the data and control the devices attached to the computer terminal remotely. For this, I would need to create a socket that listens for commands inside this infinite loop and sends the data in the other thread, where the parsing is performed.
Now, my question to all the concurrency experts is this:
Is it a good design to use a single socket for reading and writing in two different threads, where each thread is strictly involved in either reading or writing, not both? Also, I believe sockets are synchronized at the process level, so do I need locks to synchronize the reads and writes over one socket from the different threads?
There is nothing inherently wrong with having multiple threads handle a single socket; however, there are many good and bad designs based around this one very general idea. If you do not want to rediscover the problems as you code your application, I suggest you search around for designs that best fit your planned particular style of packet handling.
There is also nothing inherently wrong with having a single thread handle a single socket; however, if you put the logic handling on that thread, then you have selected a bad design, as that thread cannot handle new requests while it is "working" on the last request.
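As one example of the split you describe (a sketch only, not a recommendation specific to DEP): dedicate one thread to recv() and one to send() on the same connected socket, with queues linking them to the rest of the program. The queue functions here are placeholders you would supply:

#include <pthread.h>
#include <stdint.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Placeholder queue shared with the parsing/command threads. */
struct msg_queue;
void queue_push(struct msg_queue *q, const uint8_t *buf, size_t len);
int  queue_pop(struct msg_queue *q, uint8_t *buf, size_t cap);  /* blocks */

struct conn {
    int fd;                 /* one connected TCP socket */
    struct msg_queue *rxq;  /* raw bytes captured from the peer */
    struct msg_queue *txq;  /* outgoing data produced elsewhere */
};

/* Reader: the only thread that ever calls recv() on this fd. */
static void *reader_thread(void *arg)
{
    struct conn *c = arg;
    uint8_t buf[4096];

    for (;;) {
        ssize_t n = recv(c->fd, buf, sizeof(buf), 0);
        if (n <= 0)
            break;                        /* error or peer closed */
        queue_push(c->rxq, buf, (size_t)n);
    }
    return NULL;
}

/* Writer: the only thread that ever calls send() on this fd. */
static void *writer_thread(void *arg)
{
    struct conn *c = arg;
    uint8_t buf[4096];

    for (;;) {
        int n = queue_pop(c->txq, buf, sizeof(buf));
        if (n <= 0)
            break;
        ssize_t off = 0;
        while (off < n) {                 /* send() may be partial */
            ssize_t s = send(c->fd, buf + off, (size_t)(n - off), 0);
            if (s < 0)
                return NULL;
            off += s;
        }
    }
    return NULL;
}

/* Start both with pthread_create(&tid, NULL, reader_thread, &c) and
 * pthread_create(&tid, NULL, writer_thread, &c) on the same struct conn. */

With this split the socket itself needs no lock, because each direction has exactly one owning thread; all cross-thread coordination happens in the queues.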
In your particular code, you might have an issue. If your packets support fragmentation, or even if your algorithm gets a little ahead of the hardware due to timing issues, you might have just part of the packet "received" in the buffer. In that case, your algorithm will fail in two ways.
It will process a partial packet, one which has only the first part of its data.
It will mis-process the subsequent packet, as the information in the buffer will not start with a valid packet header.
Such failures are difficult to conceive and diagnose until they are encountered. Perhaps your library already buffers and splits messages, perhaps not.
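The usual defence is to accumulate received bytes per device and only strip a header once the complete packet announced by its length field is present. A rough sketch, assuming a made-up layout with a 2-byte big-endian total-length field (not the actual DEP format):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define BUF_CAP 4096

struct dev_buf {
    uint8_t data[BUF_CAP];
    size_t  used;
};

/* Append newly received bytes, then hand every *complete* packet to
 * handle_packet().  Assumed layout: [2-byte big-endian total length]
 * followed by the rest of the packet. */
void feed_bytes(struct dev_buf *b, const uint8_t *in, size_t n,
                void (*handle_packet)(const uint8_t *, size_t))
{
    if (n > BUF_CAP - b->used)
        n = BUF_CAP - b->used;            /* sketch: drop any overflow */
    memcpy(b->data + b->used, in, n);
    b->used += n;

    for (;;) {
        if (b->used < 2)
            return;                       /* not even a full length field yet */
        size_t plen = ((size_t)b->data[0] << 8) | b->data[1];
        if (plen < 2 || plen > BUF_CAP) { /* corrupt length: resync crudely */
            b->used = 0;
            return;
        }
        if (b->used < plen)
            return;                       /* header arrived, payload has not */
        handle_packet(b->data, plen);
        memmove(b->data, b->data + plen, b->used - plen);
        b->used -= plen;
    }
}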
In short, your design is not dictated by how many threads are accessing your socket: how many threads access your socket is dictated by your design.