This is about at the limits of my technical understanding, so please be kind about any gross errors.
There are lots of articles on the web about how to give UDP some of the various great features of TCP. I have the opposite question: how much of TCP can be stripped away or deactivated to make it act like UDP?
This line of thought came from reading about how you can turn off Nagle's algorithm. I thought, gee, what else could you turn off?
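(For reference, the Nagle switch I mean is the TCP_NODELAY socket option; a minimal sketch, assuming a BSD-style socket API:)

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>   /* TCP_NODELAY */

    /* Disable Nagle's algorithm so small writes go out immediately
     * instead of being coalesced. 'fd' is an existing TCP socket. */
    int disable_nagle(int fd)
    {
        int one = 1;
        return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
    }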
For instance, can you turn off the requirement for guaranteed packet ordering?
Can you turn off the requirement for back-and-forth communication to verify a connection?
Would it make TCP act more like UDP if you only ever sent really small discrete packets and did all the ordering/reordering stuff yourself on the other end?
A reason one might want to do this is to simulate UDP over mobile networks, which tend to only reliably support TCP.
Any help or suggestions greatly appreciated.
For instance, can you turn off the requirement for guaranteed packet ordering?
No. You can increase the acknowledgement window so that not every packet is confirmed individually, but a lost packet will still be detected.
Can you turn off the requirement for back-and-forth communication to verify a connection?
You can increase the timeout up to two minutes, but not higher.
Would it make TCP act more like UDP if you only ever sent really small discrete packets and did all the ordering/reordering stuff yourself on the other end?
If you open a new connection for each message, you will have a lot of overhead.
But all of these operations are deep in the TCP/IP stack and not easy to configure.
A reason one might want to do this is to simulate UDP over mobile networks, which tend to only reliably support TCP.
You can use a VPN, so you have a TCP connection over the air and a UDP connection to your server, but you will have the same latency.
Wireless connections are known for packet loss. It is not a really good idea to use UDP there, and mobile phone networks lose much more than WLAN.
Related
Consider the prototypical multiplayer game server.
Clients connecting to the server are allowed to download maps and scripts. It is straightforward to create a TCP connection to accomplish this.
However, the server must continue to be responsive to the rest of the clients via UDP. If TCP download connections are allowed to saturate available bandwidth, UDP traffic will suffer severely from packet loss.
What might be the best way to deal with this issue? It definitely seems like a good idea to "throttle" the TCP upload connection somehow by keeping track of time, and send() on a regular time interval. This way, if UDP packet loss starts to occur more frequently the TCP connections may be throttled further. Will the OS tend to still bunch the data together rather than sending it off in a steady stream? How often would I want to be calling send()? I imagine doing it too often would cause the data to be buffered together first rendering the method ineffective, and doing it too infrequently would provide insufficient (and inefficient use of) bandwidth. Similar considerations exist with regard to how much data to send each time.
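To make the idea concrete, here is a minimal sketch of the timed send loop I have in mind; the function name, chunk size, and interval are made up for illustration:

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <time.h>

    /* Hypothetical throttled sender: push at most 'chunk' bytes of the
     * map file onto the TCP socket every 'interval_ms' milliseconds.
     * Shrinking 'chunk' (or growing the interval) lowers the rate. */
    void throttled_send(int tcp_fd, const char *data, size_t len,
                        size_t chunk, unsigned interval_ms)
    {
        size_t off = 0;
        struct timespec ts = { interval_ms / 1000,
                               (interval_ms % 1000) * 1000000L };
        while (off < len) {
            size_t n = (len - off < chunk) ? (len - off) : chunk;
            ssize_t sent = send(tcp_fd, data + off, n, 0);
            if (sent <= 0)
                break;              /* error or connection closed */
            off += (size_t)sent;
            nanosleep(&ts, NULL);   /* wait before sending the next chunk */
        }
    }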
It sounds a lot like you're solving a problem the wrong way:
If you're worried about losing UDP packets, you should consider not using UDP.
If you're worried about sharing bandwidth between two functions, you should consider having separate pipes (bandwidth) for them.
Traffic shaping (which is what this sounds like) is typically addressed in the OS. You should look in that direction before making strange changes to your application.
If you haven't already gotten the application working and experienced this problem, you are probably prematurely optimizing.
To avoid saturating the bandwidth, you need to apply some sort of rate limiting. TCP actually already does this, but it might not be effective in some cases. For example, it has no idea whether you consider the TCP or the UDP traffic to be the more important.
To implement any form of rate limiting involving UDP, you will first need to calculate UDP loss rate. UDP packets will need to have sequence numbers, and then the client has to count how many unique packets it actually got, and send this information back to the server. This gives you the packet loss rate. The server should monitor this, and if packet loss jumps after a file transfer is started, start lowering the transfer rate until the packet loss becomes acceptable. (You will probably need to do this for UDP anyway, since UDP has no congestion control.)
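A minimal sketch of that bookkeeping, assuming each datagram carries a small application header with a sequence number (struct and field names are only illustrative, and wraparound is ignored):

    #include <stdint.h>

    /* Hypothetical per-datagram header the sender prepends. */
    struct udp_hdr {
        uint32_t seq;          /* monotonically increasing sequence number */
    };

    /* Receiver side: count unique packets seen so the loss rate can be
     * reported back to the sender periodically. */
    struct loss_stats {
        uint32_t highest_seq;  /* highest sequence number seen so far */
        uint32_t received;     /* number of datagrams actually received */
    };

    void on_datagram(struct loss_stats *s, const struct udp_hdr *h)
    {
        s->received++;
        if (h->seq > s->highest_seq)
            s->highest_seq = h->seq;
    }

    /* Fraction of packets lost, assuming sequence numbers start at 1. */
    double loss_rate(const struct loss_stats *s)
    {
        if (s->highest_seq == 0)
            return 0.0;
        return 1.0 - (double)s->received / (double)s->highest_seq;
    }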
Note that while I mention "server" above, it could really be done in either direction, or both, depending on who needs to send what. Imagine a game with player-created maps that transfers those maps over peer-to-peer connections.
While lowering the transfer rate can be as simple as calling your send function less frequently, attempting to control TCP this way will no doubt conflict with the existing rate control TCP has. As suggested in another answer, you might consider looking into more comprehensive ways to control TCP.
In this particular case, I doubt it would be an issue, unless you really need to send lots of UDP information while the clients are transferring files.
I would expect most games to just show a loading screen or a lobby while this is happening. Neither should require much UDP traffic unless your game has its own VoIP.
Here is an excellent article series that explains some of the possible uses of both TCP and UDP, specifically in the context of network games. TCP vs. UDP
In a later article from the series, he even explains a way to make UDP 'almost' as reliable as TCP (with code examples).
And as always, measure your results. You have no way of knowing whether your code is making the connections faster or slower unless you measure.
"# If you're worried about losing UDP packets, you should consider not using UDP."
Right on. UDP means no guarantee of packet delivery, especially over the internet. Check the TCP speed, which is quite acceptable on modern internet connections for most users playing games.
Hardware:
derived from Sequoia-Platform (AMCC)
Using AMCC PowerPC 440EPx and
Marvell 88E1111 Ethernet PHY, 256 M DDR2 RAM
Linux version 2.6.24.2
I transmit data via a UDP socket at about 60 MB per second in a Linux application (C language). Sometimes my PC test program notices a lost packet; all packets are numbered (GigE Vision Stream Channel protocol). I know that UDP is unreliable, but because I have clean lab conditions and it is always the same last packet that is lost, I think it must be a systematic error somewhere in my code.
I have been trying to find the cause of the missing packet for over a week, but I can't find it.
The issues:
using jumbo frames: packet size 8 KB
it is always the same last packet that is lost
the error is rare (after some hours and thousands of transferred images)
the error rate is higher after connecting or reconnecting the device to the NIC (after auto-negotiation)
I tried:
using another NIC
checking my code: return values and error handling of all functions
logging the outgoing packets on my device
viewing the packets with Wireshark and comparing them with the packets logged on the device
How can I solve this problem?
I know it is difficult because there are so many possible causes of failure.
Questions:
Are there any known bugs in the Linux 2.6.24 Ethernet driver stack (especially after auto-negotiation) which were fixed in later versions?
Should I set special options on my transfer socket? (sock = socket(AF_INET, SOCK_DGRAM, 0);) A sketch of the kind of option I mean follows these questions.
Should I recreate the socket after auto-negotiation?
Should I enable any diagnostic messages in the Linux kernel to find out what is going wrong?
Are there any other recommendations?
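For reference, by socket options I mean something like enlarging the kernel send buffer; this is only a guess at a fix, not something I have verified:

    #include <stdio.h>
    #include <sys/socket.h>

    /* Enlarge the kernel send buffer so a burst of jumbo datagrams at the
     * end of an image is less likely to be dropped inside the sender.
     * The 4 MB figure is only an example; Linux clamps the value to
     * net.core.wmem_max. */
    int set_big_sndbuf(int sock)
    {
        int sndbuf = 4 * 1024 * 1024;
        if (setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf)) < 0) {
            perror("setsockopt(SO_SNDBUF)");
            return -1;
        }
        return 0;
    }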
I have seen similar problems on an application I once developed where one side of the connection was Windows and the other side was an embedded real-time OS (not Linux), and the only thing in between was Cat5 Ethernet cable. Indeed, I found that a certain flurry of UDP messages would almost always cause one of the messages to be lost, and it was always the same message. Very strange, and after a lot of time with Wireshark and other network tools I finally decided that it could only be the fact that UDP is unreliable.
My advice is to switch to TCP and build a small message framer:
http://blog.chrisd.info/tcp-message-framing/
I find TCP to be very reliable, and it can also be very fast if the traffic is "stream like" (meaning: mostly unidirectional). Additionally, building a message framer on top of TCP is much easier than building TCP on top of UDP.
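A minimal sketch of what I mean by a message framer, using a 4-byte length prefix in network byte order (BSD-style sockets, error handling trimmed):

    #include <stdint.h>
    #include <arpa/inet.h>    /* htonl / ntohl */
    #include <sys/types.h>
    #include <sys/socket.h>

    /* Write exactly 'len' bytes, looping over short writes. */
    static int write_all(int fd, const void *buf, size_t len)
    {
        const char *p = buf;
        while (len > 0) {
            ssize_t n = send(fd, p, len, 0);
            if (n <= 0) return -1;
            p += n; len -= (size_t)n;
        }
        return 0;
    }

    /* Read exactly 'len' bytes, looping over short reads. */
    static int read_all(int fd, void *buf, size_t len)
    {
        char *p = buf;
        while (len > 0) {
            ssize_t n = recv(fd, p, len, 0);
            if (n <= 0) return -1;
            p += n; len -= (size_t)n;
        }
        return 0;
    }

    /* Frame = 4-byte big-endian length, followed by the payload. */
    int send_message(int fd, const void *msg, uint32_t len)
    {
        uint32_t hdr = htonl(len);
        if (write_all(fd, &hdr, sizeof(hdr)) < 0) return -1;
        return write_all(fd, msg, len);
    }

    /* Returns the payload length, or -1 on error or if it exceeds bufsize. */
    int recv_message(int fd, void *buf, uint32_t bufsize)
    {
        uint32_t hdr;
        if (read_all(fd, &hdr, sizeof(hdr)) < 0) return -1;
        uint32_t len = ntohl(hdr);
        if (len > bufsize) return -1;
        if (read_all(fd, buf, len) < 0) return -1;
        return (int)len;
    }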
It might be that it's the camera's fault. Surprisingly, even on the camera side, GigE Vision can be more complicated and less deterministic than competing technologies like Camera Link. In particular, I've seen my own strange problems and have been told by manufacturers that many of the cameras have known buffering issues, especially when running at their higher resolutions/framerates.
Check your camera's firmware with the vendor and see if there's an update to address this.
Alternatively, perhaps you have some delay between the previous recvmsg and the one for the last packet, such as processing the frame data before receiving the end-of-frame GVSP packet?
Additional recommendation: make sure no switch or other networking equipment is in the middle between the system and the camera - use a direct Cat-6e cable.
I am working on a project involving a microcontroller communicating to a PC via Modbus over TCP. My platform is an STM32F4 chip, programming in C with no RTOS. I looked around and found LwIP and Freemodbus and have had pretty good success getting them both to work. Unfortunately, I'm now running into some issues which I'm not sure how to handle.
I've noticed that if I establish connection, then lose connection (by unplugging the Ethernet cable) I will not be able to reconnect (once I've plugged back in, of course). Freemodbus only allows one client and still has the first client registered. Any new clients trying to connect are ignored. It won't drop the first client until after a specific timeout period which, as far as I can tell, is a TCP/IP standard.
My thoughts are...
I need a Modbus module that will handle multiple clients. The new client request after communication loss will be accepted and the first client will eventually be dropped due to the timeout.
How do I modify Freemodbus to handle this? Are there examples out there? I've looked into doing it myself and it appears to be a decently sized project.
Are there any good Modbus packages out there that handle multiple clients, are not too expensive, and easy to use? I've seen several threads about various options, but I'm not sure any of them meet exactly what I need. I've had a hard time finding any on my own. Most don't support TCP and the ones that do only support one client. Is it generally a bad idea to support multiple clients?
Is something wrong with how I connect to the microcontroller from my PC?
Why is the PC changing ports every time it tries to reconnect? If it kept the same port it used before, this wouldn't be a problem.
Should I drop the client from Freemodbus as soon as I stop communicating?
This seems to go against standards but might work.
I'm leaning towards the first option, especially since I'm going to need to support multiple connections eventually anyway. Any help would be appreciated.
Thanks.
If you have a limit on the number of Modbus clients, then dropping old connections when a new one arrives is actually suggested in the Modbus implementation guide (https://www.modbus.org/docs/Modbus_Messaging_Implementation_Guide_V1_0b.pdf):
"Nevertheless a mechanism must be implemented in case of exceeding the number of authorized connection. In such a case we recommend to close the oldest unused connection."
It has its own problems but everything is a compromise.
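A minimal sketch of that close-the-oldest policy, assuming a plain BSD-socket accept loop rather than Freemodbus's own TCP glue (the names here are illustrative, not Freemodbus APIs):

    #include <unistd.h>
    #include <sys/socket.h>

    #define MAX_CLIENTS 4

    /* One slot per allowed Modbus/TCP client, with oldest-first bookkeeping. */
    static int  client_fd[MAX_CLIENTS];
    static long client_age[MAX_CLIENTS];   /* smaller = older */
    static long next_age = 0;

    void init_clients(void)
    {
        for (int i = 0; i < MAX_CLIENTS; i++)
            client_fd[i] = -1;
    }

    /* Accept a new client; if the table is full, close the oldest one,
     * as the Modbus messaging implementation guide recommends. */
    void accept_client(int listen_fd)
    {
        int fd = accept(listen_fd, NULL, NULL);
        if (fd < 0)
            return;

        int slot = -1, oldest = 0;
        for (int i = 0; i < MAX_CLIENTS; i++) {
            if (client_fd[i] < 0) { slot = i; break; }
            if (client_age[i] < client_age[oldest]) oldest = i;
        }
        if (slot < 0) {                 /* table full: evict the oldest */
            close(client_fd[oldest]);
            slot = oldest;
        }
        client_fd[slot]  = fd;
        client_age[slot] = next_age++;
    }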
Regarding supporting multiple clients: if you think about a Modbus server on an RS-485 serial line, it could only ever have one master at a time. Then replace the serial cable with TCP and you see why it's not uncommon to only support one client (and of course it's easier to program). It is annoying though.
Depending on what you are doing, you won't need the whole Modbus protocol, and implementing the parts you do need is pretty easy. Of course, if you have to support absolutely everything, it's a different prospect. I haven't used Freemodbus, or any other library appropriate to your setup, so I can't help with suggestions there.
Regarding the PC using a different TCP source port each time: that is how TCP is supposed to work, and it is no fault on your side. If it did reuse the same source port it wouldn't help you, because e.g. the sequence numbers would be wrong.
Regarding dropping clients: you are allowed to drop clients, though it's better not to. Some clients will send a Modbus command, notice the connection has failed, reconnect, but not reissue the command. That may be their problem, but it is still nicer not to see it where possible. Of course, things like battery life might change the calculation.
I'm working on a project with two clients, one for sending and the other for receiving UDP datagrams, between two machines wired directly to each other.
Each datagram is 1024 bytes in size and is sent using Winsock (blocking).
They are both running on very fast (separate) machines with 16 GB RAM, 8 CPUs, and RAID 0 drives.
I'm looking for tips to maximize my throughput. Tips should be at the Winsock level, but other suggestions would be great too.
Currently I'm getting 250-400 Mbit/s transfer speed, and I'm looking for more.
Thanks.
Since I don't know what your applications do besides sending and receiving, it's difficult to know what might be limiting it, but here are a few things to try. I'm assuming that you're using IPv4, and I'm not a Windows programmer.
Maximize the packet size that you are sending when you are using a reliable connection. For 100 Mbit/s Ethernet the maximum frame is 1518 bytes; Ethernet uses 18 of that, IPv4 uses 20-64 (usually 20, though), and UDP uses 8 bytes. That means that typically you should be able to send 1472 bytes of UDP payload per packet.
If you are using gigabit Ethernet equipment that supports it, your maximum frame size increases to 9000 bytes (jumbo frames), so sending something closer to that size should speed things up.
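Put as constants (the jumbo figure depends on what your NICs and switch actually accept):

    /* Largest UDP payload that fits in one Ethernet frame without IP
     * fragmentation, assuming a 20-byte IPv4 header and no IP options. */
    #define ETH_MTU        1500                             /* standard Ethernet   */
    #define JUMBO_MTU      9000                             /* typical jumbo frame */
    #define IP_HDR         20
    #define UDP_HDR        8
    #define MAX_PAYLOAD    (ETH_MTU   - IP_HDR - UDP_HDR)   /* 1472 bytes */
    #define MAX_PAYLOAD_J  (JUMBO_MTU - IP_HDR - UDP_HDR)   /* 8972 bytes */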
If you are sending any acknowledgments from your listener to your sender then try to make sure that they are sent rarely and can acknowledge more than just one packet at a time. Try to keep the listener from having to say much, and try to keep the sender from having to wait on the listener for permission to keep sending.
On the computer that the sender application lives on, consider setting up a static ARP entry for the computer that the receiver lives on. Without this, every few seconds there may be a pause while a new ARP request is made to make sure that the ARP cache is up to date. Some ARP implementations may do this request well before the ARP entry expires, which would decrease the impact, but some do not.
Turn off as many users of the network as possible. If you are using an Ethernet switch, concentrate on the things that introduce traffic to or from the computers and network devices your applications run on or use (this includes broadcast messages, like ARP requests). If it's a hub, you may want to quiet down the entire network. Windows tends to send out a constant stream of junk to the network which in many cases isn't useful.
There may be limits on how much of the network bandwidth one application or user can have, or on how much network bandwidth the OS will let itself use. These can probably be changed in the registry if they exist.
It is not uncommon for network interface chips to not actually support the maximum bandwidth of the network all the time. There are chips which may miss packets because they are busy handling a previous packet, as well as some which just can't send packets as close together as the Ethernet specifications would allow. Additionally, the rest of the system might not be able to keep up even if the NIC can.
Some things to look at:
Connected UDP sockets shortcut several operations in the kernel, so they are faster (see Stevens' UNP book for details); there is a sketch after this list.
Socket send and receive buffers - play with the SO_SNDBUF and SO_RCVBUF socket options to balance out spikes and packet drops.
See if you can bump up link MTU and use jumbo frames.
Use a 1 Gbps network and upgrade your network hardware...
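A minimal sketch of the first point, connecting the UDP socket so later send()/recv() calls skip the per-call address handling; BSD-style calls are shown here, but the same connect()-then-send() pattern works with Winsock:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Connect a UDP socket to a fixed peer. No packets are exchanged;
     * it only fixes the destination, so later send()/recv() calls skip
     * the per-call destination work that sendto()/recvfrom() repeat. */
    int make_connected_udp(const char *peer_ip, unsigned short port)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in peer = {0};

        if (fd < 0)
            return -1;
        peer.sin_family = AF_INET;
        peer.sin_port   = htons(port);
        if (inet_pton(AF_INET, peer_ip, &peer.sin_addr) != 1 ||
            connect(fd, (struct sockaddr *)&peer, sizeof(peer)) < 0) {
            close(fd);
            return -1;
        }
        return fd;
    }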
Test the packet limit of your hardware with an already proven piece of code such as iperf:
http://www.noc.ucf.edu/Tools/Iperf/
I'm linking a Windows build; it might be a good idea to boot off a Linux LiveCD and try a Linux build to compare IP stacks.
More likely your NIC isn't performing well; try an Intel Gigabit Server Adapter:
http://www.intel.com/network/connectivity/products/server_adapters.htm
For TCP connections it has been shown that using multiple parallel connections will better utilize the data connection. I'm not sure if that applies to UDP, but it might help with some of the latency issues of packet processing.
So you might want to try multiple threads of blocking calls.
As well as Nikolai's suggestion of send and recv buffers, if you can, switch to overlapped I/O and have many recvs pending; this also helps to minimise the number of datagrams that are dropped by the stack due to lack of buffer space.
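A minimal sketch of that idea with event-based overlapped Winsock I/O, keeping several receives posted at once (error handling trimmed; link against ws2_32.lib and call WSAStartup first):

    #include <winsock2.h>
    #include <string.h>

    #define PENDING 8
    #define DGRAM   1024

    /* Keep PENDING overlapped WSARecvFrom() calls outstanding so the
     * stack always has application buffers ready for incoming datagrams. */
    void recv_loop(SOCKET s)
    {
        static char   bufs[PENDING][DGRAM];
        WSABUF        wb[PENDING];
        WSAOVERLAPPED ov[PENDING];
        WSAEVENT      ev[PENDING];
        DWORD         flags, bytes;

        for (int i = 0; i < PENDING; i++) {
            wb[i].buf = bufs[i];
            wb[i].len = DGRAM;
            ev[i] = WSACreateEvent();
            memset(&ov[i], 0, sizeof(ov[i]));
            ov[i].hEvent = ev[i];
            flags = 0;
            /* Usually returns SOCKET_ERROR with WSA_IO_PENDING: the buffer
             * is now queued in the stack, waiting for a datagram. */
            WSARecvFrom(s, &wb[i], 1, NULL, &flags, NULL, NULL, &ov[i], NULL);
        }

        for (;;) {
            DWORD idx = WSAWaitForMultipleEvents(PENDING, ev, FALSE,
                                                 WSA_INFINITE, FALSE);
            int i = (int)(idx - WSA_WAIT_EVENT_0);
            WSAGetOverlappedResult(s, &ov[i], &bytes, FALSE, &flags);
            /* ... process bufs[i][0 .. bytes) here ... */
            WSAResetEvent(ev[i]);
            memset(&ov[i], 0, sizeof(ov[i]));
            ov[i].hEvent = ev[i];
            flags = 0;
            WSARecvFrom(s, &wb[i], 1, NULL, &flags, NULL, NULL, &ov[i], NULL);
        }
    }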
If you're looking for reliable data transfer, consider UDT.
Is it possible to capture some packets in promiscuous mode (e.g. using WinPcap) and then force the OS (applications) to receive them as if they were sent to our MAC?
My observation is the following. We can:
capture all network traffic using promiscuous mode (WinPcap)
filter/modify the packets using a firewall-hook/filter-hook
send packets to the network with an altered MAC
I am not sure if a firewall-hook can access all the packets which are available thanks to promiscuous mode. Isn't it at a lower layer? If it can't, would the only solution be to capture the desired packets and then resend them to the network with an altered MAC?
I am networking novice so please be easy on me :)
Any help is appreciated.
Thanks in advance.
You have your toes on the line between white hat and black hat hacking. I know that my company actively watches for promiscuous NICs, hunts down the owners and kills (fires) them. Maybe if you tell us what you're trying to do, we can offer some suggestions.
If you're trying to analyze your network, there are software and/or hardware solutions that will probably do a better job. If you're just trying to watch interesting text flow across your network, well ... maybe you're still in college.
First, yes: if your interface operates in promiscuous mode then you will receive everything 'on the wire'. That is already one difficulty: nowadays many (if not all) networks are switched, which means a piece of hardware external to your system will already do some filtering before packets arrive at your system, so you'll first need to trick the switch into transmitting those packets to your end (this can be done by sending out dummy ARPs, by configuring the switch, or with bad intent ;-) ).
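(For the capture half, a minimal sketch using the pcap API that WinPcap exposes; the device name is a placeholder:)

    #include <pcap.h>
    #include <stdio.h>

    /* Called once per captured frame; here we only print its length. */
    static void on_packet(u_char *user, const struct pcap_pkthdr *h,
                          const u_char *bytes)
    {
        (void)user; (void)bytes;
        printf("captured %u bytes\n", h->caplen);
    }

    int main(void)
    {
        char errbuf[PCAP_ERRBUF_SIZE];
        /* "eth0" is a placeholder; on Windows use the \Device\NPF_... name
         * returned by pcap_findalldevs(). The third argument (1) asks for
         * promiscuous mode. */
        pcap_t *p = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
        if (!p) {
            fprintf(stderr, "pcap_open_live: %s\n", errbuf);
            return 1;
        }
        pcap_loop(p, -1, on_packet, NULL);
        pcap_close(p);
        return 0;
    }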
Then, if these packets arrive at your system, what do you plan to do with them? Those Ethernet frames will carry IP packets, typically with a destination IP address which will not be on your host (and if it is, that implies you have duplicate IP addresses on your network, which causes problems as well).
So the main question is: what do you really, really, really want to do?
Once you have received a packet this way, it has already gone clean through the protocol stack. I don't think Windows gives you the access into the middle of Winsock that would be required to somehow stick it back in.
More importantly, this is a really dodgy thing to be looking to do. Whatever it is you are trying to do, I can guarantee you there is some better way to do it.