Assume the implementation is in C.
If the intermediate link through which a TCP connection was sending data goes down, do the sockets at both ends immediately become unusable for sending and receiving?
If the link comes back up after 5-6 seconds, can the same sockets be used to send and receive packets?
The sockets do not become unusable: the TCP/IP protocol suite was designed to work over unreliable links, and the sending side simply keeps retransmitting unacknowledged data. If the link comes back after a few seconds, the same sockets keep working and the applications only notice a drop in throughput.
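If you do want to bound how long the stack keeps retrying before it gives up, Linux exposes a socket option for that. A minimal sketch, assuming a Linux kernel new enough to support TCP_USER_TIMEOUT (2.6.37 or later); the 30-second figure is only an example value:

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Fail later send()/recv() calls with ETIMEDOUT if transmitted data
 * stays unacknowledged for more than 30 seconds. Without this, the
 * default retransmission policy keeps the connection alive across
 * outages far longer than 5-6 seconds. */
static int set_user_timeout(int fd)
{
    unsigned int timeout_ms = 30 * 1000;   /* example value */
    return setsockopt(fd, IPPROTO_TCP, TCP_USER_TIMEOUT,
                      &timeout_ms, sizeof(timeout_ms));
}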
I'm trying to code a TCP server in C. I just noticed that accept() only returns once a connection has already been established.
Some clients flood me with random data; others send random data just once. After that I want to close their current connection and reject future connections for a few minutes (or longer, depending on how much load the program is under).
I can save the bad clients' IP addresses in an array, and the timings too, but I can't find any function to abort a current connection or to deny future connections from bad clients.
I found a Windows function called WSAAccept that lets you deny connections conditionally, but I don't use Windows.
I tried writing a raw TCP server, which gives you access to the TCP packet from the start, including the full TCP header, and which doesn't accept connections automatically. I handled the connections in my own code, including the SYN/ACK and the other TCP control flags. It worked, but then I noticed the raw server receives every packet on my network interface, so when other programs generate heavy traffic my program becomes laggy too.
I also tried libnetfilter, which lets you filter all the traffic on your network interface. It works too, but like the raw TCP server it receives every packet on the interface, which makes it slow when there is a lot of traffic. I also compared libnetfilter with iptables; libnetfilter is slower than iptables.
So, in summary: how can I abort a client's current and future connections without hurting the other clients' connections?
I'm running Debian 10 Linux.
Once you do blacklisting at the packet level, you very quickly become vulnerable to trivial attacks based on IP spoofing. Against a basic implementation, an attacker could use your packet-level blacklisting to blacklist anyone he wants just by sending you many packets with a forged source IP address. Usually you don't want to touch that kind of filtering (unless you really know what you are doing); just trust your firewall, etc.
So I really recommend simply closing the file descriptor immediately after getting it from accept().
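A minimal sketch of that approach, assuming an IPv4 listener and a small hypothetical table of banned addresses that your own code fills in (the names here are placeholders, not an existing API):

#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

#define MAX_BANNED 128
static struct in_addr banned[MAX_BANNED];   /* filled in by your own logic */
static int nbanned;

static int is_blacklisted(const struct in_addr *addr)
{
    for (int i = 0; i < nbanned; i++)
        if (banned[i].s_addr == addr->s_addr)
            return 1;
    return 0;
}

static void accept_loop(int listen_fd)
{
    for (;;) {
        struct sockaddr_in peer;
        socklen_t len = sizeof(peer);
        int fd = accept(listen_fd, (struct sockaddr *)&peer, &len);
        if (fd < 0)
            continue;                  /* handle errors as you see fit */

        if (is_blacklisted(&peer.sin_addr)) {
            close(fd);                 /* drop the connection right away */
            continue;
        }

        /* ... hand the socket to your normal client handling ... */
    }
}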
I am working on a client/server model based on Berkeley sockets and have almost finished, but I'm stuck on how to know that all of the data has been received whilst minimising the processing executed on the client side.
The client I am working with has very little memory and battery and is to be deployed in remote conditions. This means that wherever possible I am trying to avoid processing (and therefore battery loss) on the client side. The following conditions on the client are outside of my control:
The client sends its data 1056 bytes at a time until it has run out of data to send (I have no idea where the number 1056 came from, but if you think you know I would be very interested)
The client is very unpredictable in when it will send the data (it is attached to a wild animal and sends data determined by connection strength and battery life)
The client has an unknown amount of data to send at any given time
The data is transmitted though a GRSM enabled phone tag (Not sure that this is relevant but I'm assuming that extra information could only help)
(I am emulating the data I expect to receive from the client through localhost; if it seems to work I will ask the company where I am interning to invest in a static IP address to allow "real" TCP transfers, and if it doesn't I won't. I don't think this is relevant, but again I would rather provide too much information than too little)
At the moment I am using a while loop and incrementing the number of bytes received in order to recv() each of the 1056-byte sections. My problem is that the server needs to receive an unknown number of these. To me, the most obvious solutions are to send the number of sections in an initial header from the client, or to mark the last section in some way; however, both of these approaches would require processing on the client side. I was wondering whether there is a way to check from the server side that the client has closed its socket? Or would something like closing the connection from the server after a pre-determined period without data from the client be feasible? If neither is possible, I would love to hear any other suggestions.
TLDR: What condition can I use here to minimise client-side processing?
while (!(/* client has run out of data to send */)) {
    receive1056Section();
}
Also, I know that it is bad practice to make a Stack Overflow account and immediately ask a question; I didn't know what else to do, I'm sorry. Please don't hesitate to be mean if I've missed something very obvious.
Here is a suggestion for how to do the interaction:
The client:
Client connects to the server via TCP.
Client sends chunks of data until all data has been sent. Flush the send buffer after each chunk.
When it is done, the client issues a shutdown on the socket, sleeps for a couple of seconds, and then closes the connection.
The client then sleeps until the next transmission. If the transmission was unsuccessful, the sleep time should be shorter to prevent unsent data from overflowing the available memory.
If the client is unable to connect for an extended period of time, you would have to discard data that doesn't fit in the memory.
I am assuming that sleep reduces power consumption.
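A minimal sketch of the client side described above; the port, chunk handling, and sleep length are placeholders, and error handling is reduced to "try again later":

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Sends buf[0..len) to the server, then signals end-of-data with shutdown(). */
static int transmit(const char *server_ip, const char *buf, size_t len)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5000);                 /* example port */
    inet_pton(AF_INET, server_ip, &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        close(fd);
        return -1;            /* keep the data, retry after a shorter sleep */
    }

    size_t sent = 0;
    while (sent < len) {      /* send() may transmit less than asked for */
        ssize_t n = send(fd, buf + sent, len - sent, 0);
        if (n <= 0) {
            close(fd);
            return -1;
        }
        sent += (size_t)n;
    }

    shutdown(fd, SHUT_WR);    /* tell the server there is no more data */
    sleep(2);                 /* brief pause before closing, as above */
    close(fd);
    return 0;
}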
The server:
The server program can be single-threaded unless you need massive scalability. It listens for incoming connections on the agreed port.
Whenever a client connects, a new socket is created.
Use select() to see which sockets have data (don't forget to include the listening socket!), and use non-blocking reads to read from the sockets.
When you get the appropriate return (no more data to read and the other side has shut down its side of the connection), then you can close that socket.
This should work fine up to a couple of thousand simultaneous connections.
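A compressed sketch of that server loop, assuming the listening socket is already bound and listening on the agreed port and that a small fixed cap on simultaneous clients is acceptable; for brevity it uses blocking recv() calls after select() reports readiness, and real code would need more careful error handling:

#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

#define MAX_CLIENTS 64

static void serve(int listen_fd)
{
    int clients[MAX_CLIENTS];
    int nclients = 0;

    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(listen_fd, &rfds);             /* don't forget the listener */
        int maxfd = listen_fd;
        for (int i = 0; i < nclients; i++) {
            FD_SET(clients[i], &rfds);
            if (clients[i] > maxfd)
                maxfd = clients[i];
        }

        if (select(maxfd + 1, &rfds, NULL, NULL, NULL) < 0)
            continue;

        if (FD_ISSET(listen_fd, &rfds) && nclients < MAX_CLIENTS) {
            int fd = accept(listen_fd, NULL, NULL);
            if (fd >= 0)
                clients[nclients++] = fd;     /* new client socket */
        }

        for (int i = 0; i < nclients; i++) {
            if (!FD_ISSET(clients[i], &rfds))
                continue;
            char buf[1056];
            ssize_t n = recv(clients[i], buf, sizeof(buf), 0);
            if (n > 0) {
                /* ... append buf[0..n) to this client's data ... */
            } else {
                /* 0 means the peer shut down its side; < 0 is an error.
                 * Either way we are done with this connection. */
                close(clients[i]);
                clients[i--] = clients[--nclients];
            }
        }
    }
}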
When I send()/write() a message over a TCP stream, how can I find out if those bytes were successfully delivered?
The receiver acknowledges receiving the bytes via TCP, so the sender's TCP stack should know.
But when I send() some bytes, send() returns immediately, even if the packet could not (yet) be delivered. I tested that on Linux 2.6.30 using strace on netcat, pulling my network cable out before sending some bytes.
I am developing an application where it is very important to know whether a message was delivered, but implementing TCP features myself ("ack for message #123") feels awkward; there must be a better way.
The sending TCP does know when the data gets acknowledged by the other end, but the only reason it does this is so that it knows when it can discard the data (because someone else is now responsible for getting it to the application at the other side).
It doesn't typically provide this information to the sending application, because (despite appearances) it wouldn't actually mean much to the sending application. The acknowledgement doesn't mean that the receiving application has got the data and done something sensible with it - all it means is that the sending TCP no longer has to worry about it. The data could still be in transit - within an intermediate proxy server, for example, or within the receiving TCP stack.
"Data successfully received" is really an application-level concept - what it means varies depending on the application (for example, for many applications it would only make sense to consider the data "received" once it has been synced to disk on the receiving side). So that means you have to implement it yourself, because as the application developer, you're really the only one in a position to know how to do it sensibly for your application.
Having the receiver send back an ack is the best way, even if it "feels awkward". Remember that IP might break your data into multiple packets and re-assemble them, and this could be done multiple times along a transmission if various routers in the way have different MTUs, and so your concept of "a packet" and TCP's might disagree.
Far better to send your "packet", whether it's a string, a serialized object, or binary data, and have the receiver do whatever checks it needs to make sure it's there, and then send back an acknowledgement.
The TCP protocol tries very hard to make sure your data arrives. If there is a network problem, it will retransmit the data a few times. That means anything you send is buffered and there is no timely way to make sure it has arrived (there will be a timeout 2 minutes later if the network is down).
If you need fast feedback, use the UDP protocol. It doesn't carry TCP's overhead, but you must handle all of those problems yourself.
Even if the data got as far as the receiving TCP layer, there's no guarantee that it didn't sit in the application's buffer and the app then crashed before it could process it. Use an acknowledgement; that's what everything else does (e.g. SMTP).
The application layer has no access to the notifications at lower layers (such as the transport layer) unless they are specifically exposed - this is by design. If you want to know what TCP is doing on a per-packet level, you need to work at the layer TCP operates at; that means handling TCP headers and ACK data.
Any protocol you end up using to carry your payload can be used to pass messages back and forth by way of that payload, however. So if you feel awkward using the bits of a TCP header to do this, simply set it up in your application. For instance:
A: Send 450 Bytes
B: Recv 450 Bytes
B: Send 'B received 450 Bytes'
A: Recv 'B received 450 Bytes'
A: Continue
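A minimal sketch of the sender's side of that exchange, assuming both ends have agreed that the reply to each message is the literal 3-byte string "ACK" and that message boundaries are handled elsewhere (e.g. by a length prefix):

#include <string.h>
#include <sys/socket.h>

/* Send a payload and block until the peer's application-level ack arrives.
 * Only returns 0 once the receiving application has confirmed the data. */
static int send_with_ack(int fd, const char *payload, size_t len)
{
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = send(fd, payload + sent, len - sent, 0);
        if (n <= 0)
            return -1;
        sent += (size_t)n;
    }

    char ack[3];
    size_t got = 0;
    while (got < sizeof(ack)) {
        ssize_t n = recv(fd, ack + got, sizeof(ack) - got, 0);
        if (n <= 0)
            return -1;
        got += (size_t)n;
    }
    return memcmp(ack, "ACK", 3) == 0 ? 0 : -1;
}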
This sounds like SCTP could be something to look at; I think it should support what you want. The alternative seems to be to switch to UDP, and if you're switching protocols anyway…
Trying to implement a simple HTTP server in C using the Linux socket interface, I have encountered some difficulties with a certain feature I'd like it to have, namely persistent connections. It is relatively easy to send one file at a time over separate TCP connections, but that doesn't seem to be a very efficient solution (considering the multiple handshakes, for instance). Anyway, the server should handle several requests (HTML, CSS, images) during one TCP connection. Could you give me some clues on how to approach the problem?
It is pretty easy - just don't close the TCP connection after you write the reply.
There are two ways to do this: pipelined and non-pipelined.
In a non-pipelined implementation you read one HTTP request from the socket, process it, write the reply back out on the socket, and then try to read another one. Keep doing that until the remote party closes the socket, or close it yourself after you stop getting requests on the socket for about 10 seconds.
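A rough sketch of the non-pipelined loop, assuming a hypothetical handle_request() that parses one request out of the buffer and writes the reply, and (naively) that each request arrives in a single recv(); the 10-second idle timeout is implemented with poll():

#include <poll.h>
#include <sys/socket.h>
#include <unistd.h>

/* Hypothetical: parses one HTTP request from buf[0..len) and writes the
 * reply to client_fd. Returns 0 on success, -1 if the request is bad. */
int handle_request(int client_fd, const char *buf, size_t len);

static void serve_client(int client_fd)
{
    char buf[8192];

    for (;;) {
        struct pollfd pfd = { .fd = client_fd, .events = POLLIN };
        if (poll(&pfd, 1, 10 * 1000) <= 0)   /* 10 s of idle time */
            break;                           /* timeout or error: close */

        ssize_t n = recv(client_fd, buf, sizeof(buf), 0);
        if (n <= 0)
            break;                           /* peer closed or error */

        if (handle_request(client_fd, buf, (size_t)n) < 0)
            break;
        /* Connection stays open; loop around for the next request. */
    }

    close(client_fd);
}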
In a pipelined implementation, you read as many requests as are on the socket, process them all in parallel, and then write them all back out on the socket in the same order as you received them. You have one thread reading requests all the time and another one writing the replies back out.
You don't have to do it, but you can advertise that you support persistent connections and pipelining by adding the following header to your replies:
Connection: Keep-Alive
Read this:
http://en.wikipedia.org/wiki/HTTP_persistent_connection
By the way, in practice there aren't huge advantages to persistent connections. The overhead of managing the handshake is very small compared to the time taken to read and write data to network sockets. There is some debate about the performance advantages of persistent connections. On the one hand under heavy load, keeping connections open means many fewer sockets on your system in TIME_WAIT. On the other hand, because you keep the socket open for 10 seconds, you'll have many more sockets open at any given time than you would in non-persistent mode.
If you're interested in improving the performance of a self-written server, the best thing you can do for the network "front-end" is to implement an event-based socket management system. Look into libev and libevent.
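To illustrate what "event based" means here, a bare-bones skeleton using Linux epoll (libev and libevent wrap the same idea in a portable way); the per-connection reading and writing is only hinted at:

#include <sys/epoll.h>
#include <sys/socket.h>

static void event_loop(int listen_fd)
{
    int ep = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = listen_fd };
    epoll_ctl(ep, EPOLL_CTL_ADD, listen_fd, &ev);

    for (;;) {
        struct epoll_event events[64];
        int n = epoll_wait(ep, events, 64, -1);   /* block until activity */
        for (int i = 0; i < n; i++) {
            if (events[i].data.fd == listen_fd) {
                int c = accept(listen_fd, NULL, NULL);
                if (c >= 0) {
                    struct epoll_event cev = { .events = EPOLLIN, .data.fd = c };
                    epoll_ctl(ep, EPOLL_CTL_ADD, c, &cev);
                }
            } else {
                /* ... read the request, write the reply, close on EOF ... */
            }
        }
    }
}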
I'm a real noob in C. I'm trying to develop my own lock server in C (just for practice), and I have a question... Let's imagine we have a server written in C and a remote host connected to this server via a socket. When the connection is initiated, my server allocates some memory and keeps a pointer to it. Is it possible to free that memory when the remote host disconnects? How can I catch the disconnect event?
Thank you
In a real-world I/O scenario, you cannot truly detect the disconnection at the moment it happens. Instead you must either:
Receive a packet that indicates the other side intends to disconnect, or
Attempt to transmit a packet, which fails to be delivered because connectivity changed during the "silent" period between communications.
This means that systems which "must" ensure connectivity typically send and receive periodic "dummy" messages to detect the loss of the connection sooner than it would be detected by "regular" traffic alone.
Depending on your application the overhead of the keep-alive messages may not be worth the effort.
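If the overhead is acceptable, on Linux you can let the kernel do this probing for you instead of inventing your own dummy messages. A minimal sketch, assuming the Linux-specific TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT options; the timings are example values:

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Ask the kernel to probe an idle connection and report a dead peer via
 * the usual error returns on send()/recv(). All values are examples. */
static int enable_keepalive(int fd)
{
    int on = 1, idle = 60, interval = 10, count = 3;

    if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0)
        return -1;
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,     sizeof(idle));     /* probe after 60 s idle */
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval)); /* 10 s between probes */
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT,   &count,    sizeof(count));    /* give up after 3 probes */
    return 0;
}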
The "connection" you have on your side of the network is really just a bunch of data structures which allow you to transmit and receive. The lower "IP" layer of "TCP/IP" is connectionless, that means you will not know if your simulated "connection" is available until you attempt to use it (or receive a package telling you explicitly that the other end will not process any more data).
The read(2) system call will return zero when the other end of the socket closes the connection.
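That is the hook for the original question: free the per-connection state exactly when that zero return shows up. A minimal sketch, assuming a hypothetical struct client that was allocated when the remote host connected:

#include <stdlib.h>
#include <sys/socket.h>
#include <unistd.h>

struct client {                  /* hypothetical per-connection state */
    int fd;
    /* ... whatever was allocated when the host connected ... */
};

/* Returns 1 while the connection is alive, 0 once it has been torn down. */
static int service_client(struct client *c)
{
    char buf[512];
    ssize_t n = recv(c->fd, buf, sizeof(buf), 0);

    if (n > 0) {
        /* ... process buf[0..n) ... */
        return 1;
    }

    /* n == 0: orderly close by the peer; n < 0: error (e.g. reset).
     * Either way, release everything tied to this connection. */
    close(c->fd);
    free(c);
    return 0;
}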