packet retransmission - c

I have a scenario where multiple clients connect to a TCP server. When any of the clients sends a packet to the server, the server is supposed to have a retransmission timer and keep sending that packet to another server until it receives a reply. How do I go about setting up this retransmission mechanism? I'm doing this on Linux in C.

If you use a TCP socket, retransmission will happen automatically. However, if you want more control, you'll need to use UDP and handle the retransmission yourself.

I'm guessing this is an assignment. I had something similar, where our channel was purposefully being corrupted.
I would suggest you follow a similar scheme (a minimal sketch follows the steps):
1. Send the packet.
2. Start a timer.
3. If an ACK (acknowledgment) is not received within a certain amount of time, go back to step 1.
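Here is a minimal sketch of that loop over UDP, using SO_RCVTIMEO as the retransmission timer. The function name, the 16-byte ACK buffer, and the 1-second timeout are illustrative, not part of the question:

#include <sys/socket.h>
#include <sys/time.h>

/* Send one datagram and retransmit until an ACK arrives.
 * sock: a UDP socket; srv/srvlen: the peer's address, filled in elsewhere.
 * Returns 0 once an ACK is received, -1 after max_tries timeouts. */
static int send_with_retransmit(int sock, const struct sockaddr *srv,
                                socklen_t srvlen, const void *pkt,
                                size_t len, int max_tries)
{
    struct timeval tv = { .tv_sec = 1, .tv_usec = 0 };  /* retransmit timer */
    setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

    for (int attempt = 0; attempt < max_tries; attempt++) {
        char ack[16];

        /* Step 1: send the packet. */
        if (sendto(sock, pkt, len, 0, srv, srvlen) < 0)
            return -1;

        /* Steps 2-3: wait up to the timeout for an ACK;
         * on timeout, loop back and resend. */
        if (recvfrom(sock, ack, sizeof(ack), 0, NULL, NULL) > 0)
            return 0;  /* ACK received */
    }
    return -1;  /* gave up */
}

In a real assignment protocol the ACK would also carry a sequence number, so a late ACK for an old packet isn't mistaken for the current one.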

IIRC, the location of the files that contain these TCP configuration parameters is distro-dependent; they are in different places on Red Hat and Ubuntu.

Related

Can a RAW socket be bound to an ip:port instead of an interface?

I need to write a proxy server in C on Linux (Ubuntu 20.04). The purpose of this proxy server is as follows. There are illogical governmental barriers to accessing the free internet. Some are:
Name resolution: I ping telegram.org and many other sites that the government doesn't want me to access. I ask 8.8.8.8 to resolve the name, but they respond on behalf of the server, claiming that the IP resolves to 10.10.34.35!
Let's concentrate on this one, because when this is solved, many other problems will be solved too. For this, I need to set up the following configuration:
A server outside my country is required. I have prepared it; it's a VPS. Let's call it RS (Remote Server).
A local proxy server is required. Let's call it PS. PS runs on the local machine (the client) and knows RS's IP. I need it to gather all requests about to be sent through the only NIC available on the client, process them, scramble them, and send them to RS in a way that hides them from the government.
The server-side program should run on RS on a specific port, receive each packet, unscramble it, and send it to the internet on behalf of the client. After receiving the response from the internet, it should send it back to the client via PS.
PS will deliver the response to the client application that originated the request. Of course, this happens after it unscrambles the data and extracts the original response from the internet.
This is the design, and some parts remain murky to me. Since I'm not an expert in network programming, I'm going to ask questions about the parts that give me trouble or are unclear to me.
Now, I'm at part 2. Tell me whether I'm right. There are two types of sockets, a RAW socket and a stream socket. A RAW socket is opened this way:
socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
And a stream socket is opened this way:
socket(AF_INET, SOCK_STREAM, 0);
For RAW sockets we use sockaddr_ll, and for stream sockets we use sockaddr_in. May I use stream sockets between the client applications and PS? I think not, because I need the whole RAW packet. I have to know the protocol, and maybe other information from the packet, because the whole packet should be reproduced transparently at RS. For example, I should know whether it was a ping packet (ICMP) or a web request (TCP). For this, I need the packet header in PS, so I can't use a stream socket, because it doesn't give me the packet header. But until now I've only used RAW sockets bound to interfaces, and I have never written a proxy server that receives RAW packets. Is it possible? In other words, I have the following questions before going to the next step:
Can a RAW socket be bound to localhost:port instead of an interface, so that it receives all low-level packets with their headers intact (RAW packets)?
I can configure a proxy server in the browser. But can I put the whole system behind the proxy server, so that packets from other apps, like ping, are routed through it automatically?
Do I really need RAW sockets in PS? Can't I change the design so that the data I get from the packets' payloads suffices?
Maybe I'm wrong about some of the concepts; I would appreciate your guidance.
Thank you
Can a RAW socket be bound to localhost:port instead of an interface, so that it receives all low-level packets with their headers intact (RAW packets)?
No, it doesn't make sense. Raw packets don't have port numbers, so how would the kernel know which socket they should go to?
It looks like you are trying to write a VPN. You can do this on Linux by creating a fake network interface called a "tun interface". You create a tun interface, and whenever Linux tries to send a packet through the interface, instead of going to a network cable, it goes to your program! Then you can do whatever you like with the packet. Of course, it works both ways - you can send packets from your program back to Linux through the tun interface, and Linux will act like they just arrived on a network cable.
Then you can set up your routing table so that all traffic goes to the tun interface, except for traffic to the VPN server ("RS"), which goes to your real ethernet/wifi interface. Otherwise you'd have an endless loop, where your VPN program PS tries to send packets to RS but they just go back to PS.
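Here is a minimal sketch of opening a tun interface, using the standard /dev/net/tun device and the TUNSETIFF ioctl (this needs root or CAP_NET_ADMIN). The interface name "tun0" is illustrative, and bringing the interface up and pointing routes at it still has to be done separately, e.g. with ip link and ip route:

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/if.h>
#include <linux/if_tun.h>

/* Open a tun interface. Packets Linux routes to the interface appear
 * on the returned fd; packets written to the fd enter the kernel as
 * if they had just arrived from the network. */
int tun_alloc(char name[IFNAMSIZ])
{
    struct ifreq ifr;
    int fd = open("/dev/net/tun", O_RDWR);
    if (fd < 0)
        return -1;

    memset(&ifr, 0, sizeof(ifr));
    ifr.ifr_flags = IFF_TUN | IFF_NO_PI;  /* raw IP packets, no extra header */
    strncpy(ifr.ifr_name, name, IFNAMSIZ - 1);

    if (ioctl(fd, TUNSETIFF, &ifr) < 0) {
        close(fd);
        return -1;
    }
    strcpy(name, ifr.ifr_name);           /* kernel reports the final name */
    return fd;
}

int main(void)
{
    char name[IFNAMSIZ] = "tun0";         /* illustrative name */
    int fd = tun_alloc(name);
    if (fd < 0) { perror("tun_alloc"); return 1; }

    unsigned char pkt[2048];
    ssize_t n = read(fd, pkt, sizeof(pkt));   /* one full IP packet per read() */
    printf("got %zd-byte packet on %s\n", n, name);
    return 0;
}

Each read() hands your program one complete IP packet, headers included, which is exactly the transparency the question asks for, without raw sockets.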

Transferring a file on a UDP socket in C! Linux

I'm programming an application for transferring a file between two hosts with a UDP socket.
But it seems that some data arrives corrupted at the client.
My question is: is it possible that if the server is faster than the client, the client could read corrupted data from the socket?
I use sendto() in the server and read() in the client (I call connect() in the client before beginning to transfer the file),
and if yes: how can I stop the server from sending new data until the client has read all the previous data?
is it possible that if the server is faster than the client, the client could read corrupted data from the socket?
No, it is not possible: every datagram you receive is covered by the UDP checksum (plus link-layer CRCs), so a corrupted datagram is normally dropped rather than delivered, and what does arrive will be as it was sent. What a fast sender can cause is datagram loss, when the receiver's socket buffer overflows; if the client writes whatever arrives straight to the file, the missing pieces look like corruption even though every delivered datagram was intact.
how can I stop the server from sending new data until the client has read all the previous data?
Typically you send a small packet, the receiver sends an acknowledgement, and then you send the next one (a sketch of the receiving side follows). The problem with UDP, however, is that packets can get dropped without telling you, or even duplicated; moreover, you can flood the network, as there is no congestion control.
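Here is a minimal sketch of that receiver, assuming the socket is already bound; the one-byte "A" acknowledgement is an illustrative convention, and a real protocol would also put a sequence number in it so retransmitted duplicates can be detected:

#include <stdio.h>
#include <sys/socket.h>

/* Receive datagrams and acknowledge each one, so the sender knows
 * it may transmit the next chunk. sock is a bound UDP socket and
 * out is the destination file. */
static void recv_and_ack(int sock, FILE *out)
{
    char buf[1400];
    struct sockaddr_storage peer;
    socklen_t plen = sizeof(peer);
    ssize_t n;

    while ((n = recvfrom(sock, buf, sizeof(buf), 0,
                         (struct sockaddr *)&peer, &plen)) > 0) {
        fwrite(buf, 1, (size_t)n, out);
        /* ACK this chunk; "A" is an illustrative ACK format. */
        sendto(sock, "A", 1, 0, (struct sockaddr *)&peer, plen);
        plen = sizeof(peer);
    }
}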
So why re-invent the wheel? Use TCP for sending files: it takes care of reliability and congestion control, and everyone has been using it for decades - e.g. this web page was delivered to you over HTTP, which runs on top of TCP. A sketch of the TCP version follows.
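For comparison, here is a minimal sketch of the sending side over TCP; no ACK bookkeeping is needed, and the helper name and 4 KiB buffer are illustrative:

#include <unistd.h>
#include <sys/socket.h>

/* Copy a file descriptor's contents into a connected TCP socket.
 * TCP itself handles retransmission, ordering, and congestion control. */
static int send_file(int sock, int filefd)
{
    char buf[4096];
    ssize_t n;

    while ((n = read(filefd, buf, sizeof(buf))) > 0) {
        char *p = buf;
        ssize_t left = n;
        while (left > 0) {                /* send() may write less than asked */
            ssize_t sent = send(sock, p, (size_t)left, 0);
            if (sent < 0)
                return -1;
            p += sent;
            left -= sent;
        }
    }
    return (n < 0) ? -1 : 0;              /* n == 0 means end of file */
}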

TCP in C Programming

I want to know how a client or server gets an acknowledgement packet from the other side after sending a packet over a TCP channel in C.
Why don't I need to handle this in code? How exactly does the client receive the acknowledgement packet if it sends a data packet to the server? Does this happen internally?
TCP is designed to provide a stream protocol. The typical programming interface is the socket interface, where you hand over a chunk of data that the protocol stack will transfer to the receiver. The implementation hides whether that data has merely been queued in the receiving protocol stack or has actually been handed off to the receiving application; you cannot make this distinction through the socket interface alone.
So what you apparently want is a signalling mechanism to know that, and when, the peer has read the data from the protocol stack. This can be done by designing and implementing a protocol on top of TCP (a sketch follows). When one side has no meaningful data to send, it can send a heartbeat message instead, which indicates that the TCP connection is still alive.
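Here is a minimal sketch of such an application-level acknowledgement on top of TCP; the one-byte 'K' reply is an illustrative convention, not anything TCP itself defines:

#include <string.h>
#include <sys/socket.h>

/* Send a message, then block until the peer application explicitly
 * acknowledges it. TCP's own ACKs only confirm that the bytes reached
 * the peer's protocol stack; this byte confirms the peer application
 * actually read and processed them. A real implementation would loop
 * on partial send(). */
static int send_and_wait_app_ack(int sock, const char *msg)
{
    char ack;

    if (send(sock, msg, strlen(msg), 0) < 0)
        return -1;
    if (recv(sock, &ack, 1, 0) != 1)      /* peer replies with one byte */
        return -1;
    return (ack == 'K') ? 0 : -1;         /* 'K' = illustrative ACK value */
}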
Regarding your questions:
Why don't I need to handle this in code? Because the underlying layer has already done it for you.
How exactly does the client receive the acknowledgement packet if it sends a data packet to the server? You don't need to care, as long as you don't need a heartbeat. TCP provides a stream interface, similar to the interface for file I/O. You don't ask when the data has left the disk cache, do you? If you start to re-implement the internals, you will have to deal with reassembly, the Nagle algorithm, and many other nasty things. Never implement TCP yourself if you have another choice.
Does this happen internally? Yes, fortunately.

Measure the delay between data sent by a UDP server and received by the client

I want to test my UDP server/client application. One of the things I want to check is the time delay between data being sent by the server and received by the client, and vice versa.
I figured out a way: send a message from the server to the client and note the time. When the client receives this message, it sends the same message back to the server. The server gets this echoed message back and again notes the time. The difference between the time at which the message was sent and the time at which the echoed message came back tells me the delay between data sent by the server and received by the client.
Is this approach correct?
I ask because I also foresee a lot of other delays being involved in this approach. What would be a way to calculate more accurate delays?
Waiting for help.
Yes, this is the most traditional way of doing it; you can use this approach.
You can also check with a sniffer, using the relative time between the sender's UDP packet and the receiver's UDP packet. For more accurate results, you would have to go deep into the OS's network stack, to the point where it detects whether a UDP packet has been received. For the timing itself, you can use a real-time clock that resolves delays down to the microsecond. Also remember that you are using UDP, where packets have a high chance of getting lost, unlike TCP, which is much more reliable.
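Here is a minimal sketch of the timing on the server side, assuming a UDP socket already connect()ed to the echoing client. CLOCK_MONOTONIC avoids skew from clock adjustments, and halving the round trip to estimate the one-way delay assumes the path is symmetric:

#include <stdio.h>
#include <time.h>
#include <sys/socket.h>

/* Send a probe, wait for the echo, and report the round-trip time.
 * A lost datagram would block forever at recv(); a real tool would
 * set a receive timeout and retry. */
static void measure_rtt(int sock)
{
    struct timespec t0, t1;
    char probe[] = "ping", echo[16];

    clock_gettime(CLOCK_MONOTONIC, &t0);  /* note the send time */
    send(sock, probe, sizeof(probe), 0);
    recv(sock, echo, sizeof(echo), 0);    /* blocks until the echo returns */
    clock_gettime(CLOCK_MONOTONIC, &t1);  /* note the receive time */

    double rtt_us = (t1.tv_sec - t0.tv_sec) * 1e6
                  + (t1.tv_nsec - t0.tv_nsec) / 1e3;
    printf("RTT: %.1f us, one-way (symmetric path): %.1f us\n",
           rtt_us, rtt_us / 2.0);
}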
What stack are you using? LwIP?

Is there any way I can mess with the TCP stack in Windows?

I want to send a TCP ACK packet a certain number of bytes ahead of the data that I have actually received, in order to "resume" a download. I would also need to change the state of the TCP stack to be in sync with this ACK.
One possible solution would be to gain direct control over lower-level interfaces and transmit my own TCP packets using my own stack; however, that would be inferior to using the Windows TCP stack. Does anyone know how I can get the Windows TCP stack to do this?
Eh, that sounds like a recipe for connection failures. What happens if that ACK arrives at the sender before it sends the bytes that you're ACK-ing?
