UDP client and server not able to communicate across different machines - c

I'm trying to run the UDP client-server example given here: http://www.abc.se/~m6695/udp.html.
When run on the same machine, with #define SRV_IP "999.999.999.999" changed to #define SRV_IP "127.0.0.1", the program works fine.
However, the same program with the server on one system and the client on another, and #define SRV_IP "999.999.999.999" changed to #define SRV_IP "10.60.5.94" (my server's IP), doesn't work. That is, even though the client sends the packets to the server's IP, the server never receives them.
Please suggest the changes required for the code to run across different machines. Thanks in advance.

You should learn to debug this one step at a time.
First, use a sniffer on the client machine to make sure the UDP packet is in fact going out. While you're at it, check the destination address in the packet.
Then use a sniffer on the server machine to see if the packet is in fact coming in. The sniffer will catch packets before they hit the kernel, so this will tell you whether something on the network, or even a firewall on the server, is eating the packet.
Good luck.
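If the sniffer shows datagrams arriving on the server machine but your program never sees them, a stripped-down receiver helps separate the network from your code. A minimal sketch, assuming the tutorial's port 9930 (adjust to whatever your client actually targets); binding to INADDR_ANY rules out accidentally listening on the loopback interface only:

    /* Sketch: minimal standalone UDP receiver for isolating the
       server side; prints where the first datagram came from. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in me, peer;
        socklen_t plen = sizeof(peer);
        char buf[512];

        int s = socket(AF_INET, SOCK_DGRAM, 0);
        if (s < 0) { perror("socket"); exit(1); }

        memset(&me, 0, sizeof(me));
        me.sin_family = AF_INET;
        me.sin_port = htons(9930);                 /* match your client */
        me.sin_addr.s_addr = htonl(INADDR_ANY);    /* all interfaces */
        if (bind(s, (struct sockaddr *)&me, sizeof(me)) < 0) {
            perror("bind");
            exit(1);
        }

        ssize_t n = recvfrom(s, buf, sizeof(buf), 0,
                             (struct sockaddr *)&peer, &plen);
        if (n < 0) { perror("recvfrom"); exit(1); }
        printf("got %zd bytes from %s:%d\n",
               n, inet_ntoa(peer.sin_addr), ntohs(peer.sin_port));
        close(s);
        return 0;
    }

If this receiver sees the packets while the original server does not, the bug is in the original program rather than the network.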

Related

Checking if a UDP message is sent in C

I want to find out why a program is not receiving UDP messages. A client program sends 4 integers every sample interval. My server program uses recvfrom() to read the UDP messages on a port, but it never seems to receive them and just sits blocked in recvfrom(). Are there any debugging techniques I can use to track this down?
Thanks!
There are many possibilities; the common cases are:
Firewall, on your machine or in transit: for example, the other endpoint is behind a firewall that disallows inbound UDP traffic from your address.
Routing: make sure ping works between the two machines.
A bug in your code: make sure the loopback address works first.
To debug, you can install Wireshark and inspect the traffic on the specific interface.
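On top of that, a blocked recvfrom() gives you no information at all; a receive timeout turns the silent hang into an error you can see. A minimal sketch, assuming s is your already-bound UDP socket (the 2-second timeout is an arbitrary choice):

    /* Sketch: arm a receive timeout so a silent socket fails fast
       instead of hanging in recvfrom() forever. */
    #include <stdio.h>
    #include <sys/socket.h>
    #include <sys/time.h>

    void arm_timeout(int s)  /* s: an already-bound UDP socket */
    {
        struct timeval tv = { .tv_sec = 2, .tv_usec = 0 };  /* arbitrary 2 s */
        if (setsockopt(s, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv)) < 0)
            perror("setsockopt(SO_RCVTIMEO)");
    }

After this, recvfrom() returns -1 with errno set to EAGAIN or EWOULDBLOCK when nothing arrives, which at least tells you the socket is alive but no datagrams are reaching it; at that point, check that bind() succeeded and that the port numbers on both ends actually match.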

Deny a client's TCP connect request before accept()

I'm trying to code a TCP server in C. I just noticed that accept() returns only once the connection is already established.
Some clients flood me with random data; others send random data just once. After that, I want to close their current connection and refuse future connections for a few minutes (or more, depending on how much load the program is under).
I can save the bad clients' IP addresses in an array, and their timestamps too, but I can't find any function to abort the current connection or deny future connections from bad clients.
I found a Windows function called WSAAccept that lets you deny connections at your discretion, but I don't use Windows.
I tried coding a raw TCP server, which gives you access to the TCP packet from the start, including the full TCP header, and doesn't accept connections automatically. I handled the connections on the program side, including SYN/ACK and the other TCP signalling. It worked, but then I noticed the raw TCP server receives every packet on my network interface, so when other programs generate heavy traffic my program gets laggy too.
I tried libnetfilter, which lets you filter all traffic on your network interface. It works too, but like the raw TCP server it receives the whole interface's packets, which makes it slow under heavy traffic. I also compared libnetfilter with iptables; libnetfilter is slower.
So, in summary: how can I abort a client's current and future connections without hurting other clients' connections?
I'm on Linux (Debian 10).
Once you do blacklisting at the packet level you can very quickly become vulnerable to trivial attacks based on IP spoofing. Against a naive implementation, an attacker can use your packet-level blacklisting to blacklist anyone he wants just by sending you many packets with a forged source IP address. Usually you don't want to touch that kind of filtering (unless you really know what you are doing); you just trust your firewall and so on.
So I really recommend just closing the file descriptor immediately after getting it from accept().
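A minimal sketch of that approach, assuming lfd is your listening socket and is_blacklisted() is a hypothetical lookup you implement yourself (e.g. scanning your array of offending addresses and timestamps):

    /* Sketch: accept, check the peer against an application-level
       blacklist, and hang up immediately on a match. */
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int is_blacklisted(const struct in_addr *a);  /* hypothetical: your own lookup */

    void serve(int lfd)
    {
        for (;;) {
            struct sockaddr_in peer;
            socklen_t plen = sizeof(peer);
            int cfd = accept(lfd, (struct sockaddr *)&peer, &plen);
            if (cfd < 0)
                continue;
            if (is_blacklisted(&peer.sin_addr)) {
                /* the handshake has already completed in the kernel;
                   the best we can do here is close at once */
                close(cfd);
                continue;
            }
            /* ... hand cfd to the normal client-handling path ... */
        }
    }

Note this does not stop the SYN/ACK exchange itself; by the time accept() returns, the kernel has finished the handshake, which is exactly why the packet-level part of the job belongs to the firewall.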

Possible causes for lack of data loss over my localhost UDP protocol?

I just implemented my first UDP server/client. The server is on localhost.
I'm sending 64 kB of data from the client to the server, which the server is supposed to send back. Then the client checks how many of the 64 kB are still intact, and they all are. Always.
What are the possible causes for this behaviour? I was expecting at least -some- data loss.
client code: http://pastebin.com/5HLkfcqS
server code: http://pastebin.com/YrhfJAGb
PS: I'm a newbie in network programming, so please don't be too harsh. I couldn't find an answer to my problem.
The reason why you are not seeing any lost datagrams is that your network stack is simply not running into any trouble.
Your localhost connection can easily cope with what you throw at it; a loopback connection can move several hundred megabytes of data per second on a decent CPU.
To see dropped datagrams you should increase the probability of interference. You have several options:
increase the load on the network
busy your CPU with other tasks
use a "real" network and transfer data between real machines
run your code over a DSL line
set up a virtual machine and simulate network outages (VMware Workstation can do this)
And this might be an interesting read: What would cause UDP packets to be dropped when being sent to localhost?
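Another way to force drops without leaving localhost at all (not mentioned above, but a standard trick) is to shrink the receiver's socket buffer and read slowly; once the buffer fills, the kernel silently discards further datagrams. A sketch, where the buffer size is an arbitrary small value:

    /* Sketch: make a UDP receiver lossy on purpose by shrinking its
       receive buffer; datagrams arriving while it is full are dropped. */
    #include <stdio.h>
    #include <sys/socket.h>

    void make_lossy(int s)  /* s: the receiver's UDP socket */
    {
        int sz = 2048;  /* tiny; the kernel may impose a larger minimum */
        if (setsockopt(s, SOL_SOCKET, SO_RCVBUF, &sz, sizeof(sz)) < 0)
            perror("setsockopt(SO_RCVBUF)");
        /* now add a sleep between recvfrom() calls in the read loop
           so the sender outpaces the reader */
    }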

How do apps like LogMeIn and TeamViewer work?

There's already a question, How exactly does a remote program like team viewer work, which gives a basic description, but I'm interested in how the comms work once the client has registered with the server. If the client is behind a NAT it won't have its own public IP address, so how can the server (or another client) send a message to it? Or does the client just keep polling the server to see if it's got any requests?
Are there any open source equivalents of LogMeIn or TeamViewer?
The simplest and most reliable way (although not always the most efficient) is to have each client make an outgoing TCP connection to a well-known server somewhere and keep that connection open. As long as the TCP connection is open, data can pass over that TCP connection in either direction at any time. It appears that both LogMeIn and TeamViewer use this method, at least as a fall-back. The main drawbacks for this technique are that all data has to pass through a TeamViewer/LogMeIn company server (which can become a bottleneck), and that TCP doesn't handle dropped packets very well -- it will stall and wait for the dropped packets to be resent, rather than giving up on them and sending newer data instead.
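The client side of that fallback can be very small. A sketch, assuming a hypothetical rendezvous host relay.example.com on port 443; the essential point is that the client dials out once, so the NAT creates a mapping, and then simply keeps reading whatever the server pushes down the same connection:

    /* Sketch: dial out to a well-known relay and keep the TCP
       connection open so the server can push to us at any time. */
    #include <netdb.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int dial_relay(void)
    {
        struct addrinfo hints = {0}, *res;
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;
        /* hypothetical rendezvous endpoint */
        if (getaddrinfo("relay.example.com", "443", &hints, &res) != 0)
            return -1;
        int s = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (s >= 0 && connect(s, res->ai_addr, res->ai_addrlen) < 0) {
            close(s);
            s = -1;
        }
        freeaddrinfo(res);
        return s;  /* now loop on recv(s, ...) for pushed commands */
    }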
The other technique that they can sometimes use (in order to get better performance) is UDP hole-punching. That technique relies on the fact that many firewalls will accept incoming UDP packets from remote hosts that the firewalled host has recently sent an outgoing UDP packet to. Given that, the TeamViewer/LogMeIn company's server can tell both clients to send an outgoing packet to the IP address of the other client's firewall, and after that (hopefully) each firewall will accept UDP packets from the other client's Internet-facing IP address. This doesn't always work, though, since different firewalls work in different ways and may not include the aforementioned UDP-allowing logic.
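A sketch of the punching step itself, assuming both peers have already learned each other's public IP and port from the rendezvous server (peer_ip and peer_port here are placeholders), and that each side keeps using the same socket it used to talk to that server:

    /* Sketch: UDP hole punch. Both peers run this at roughly the
       same time against each other's public endpoint. */
    #include <string.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int punch(int s, const char *peer_ip, unsigned short peer_port)
    {
        struct sockaddr_in peer;
        memset(&peer, 0, sizeof(peer));
        peer.sin_family = AF_INET;
        peer.sin_port = htons(peer_port);
        inet_pton(AF_INET, peer_ip, &peer.sin_addr);

        /* outgoing datagrams make our own NAT accept replies
           from this address */
        for (int i = 0; i < 5; i++)
            sendto(s, "punch", 5, 0, (struct sockaddr *)&peer, sizeof(peer));

        char buf[64];  /* in real code, use a receive timeout here */
        return recvfrom(s, buf, sizeof(buf), 0, NULL, NULL) >= 0;
    }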

TCP client fails for a particular image (.bmp)

I have a simple C program that copies an image from the server over TCP.
The problem is that it always fails with certain images: it only receives 'x' bytes and then times out.
The program itself is not the problem, since I have tried different programs (C, and Python with bigger recv buffers) over TCP and they all fail at the 'x'th byte.
server: VxWorks
client: Linux
If I connect from a SUN client using the same code, it receives the image with no problem. I did a bit of packet sniffing and found that my client requests packet 'A', which contains the 'x'th byte. The server sends it, and retransmits it, but the client never acknowledges it and eventually times out.
Question is: why is this image-specific, and why does it only happen on the Linux client?
The file written on the client is always 'x' bytes long.
It looks like a network issue to me. What is the packet size? It sounds strange, but couldn't there be an MTU black hole between the server and the Linux client?
A friend of mine once ran into this exact same problem, and it turned out that the payload of the binary image he was transferring was triggering a bug in a filtering router along the way. The router would just drop the connection when a particular byte sequence passed through. Bizarre but true.
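One way to test the MTU-black-hole theory from the Linux client is to clamp the MSS it advertises before connecting, so the server never sends segments anywhere near full size; if the transfer then gets past byte 'x', a path-MTU problem becomes very likely. A sketch, where 1000 is an arbitrary low value:

    /* Sketch: clamp the advertised MSS before connect() so inbound
       segments stay small; a test for a path-MTU black hole. */
    #include <stdio.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    void clamp_mss(int s)  /* s: the TCP socket, before connect() */
    {
        int mss = 1000;  /* arbitrary: well below a typical 1460 */
        if (setsockopt(s, IPPROTO_TCP, TCP_MAXSEG, &mss, sizeof(mss)) < 0)
            perror("setsockopt(TCP_MAXSEG)");
    }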
