Socket stress test - C

What will flood the network better?
1- Opening one socket to an HTTP web server and writing data until something crashes
2- Opening multiple sockets and writing data until something crashes
3- Opening a socket and sending TCP packets until something crashes
4- Opening multiple sockets and sending TCP packets until something crashes?

It sounds like what you are looking to do is to test how the web server reacts to various flavors of Denial of Service attacks.
Something to consider is what the Denial of Service logic in the web server is, and how Denial of Service protection is typically implemented in web servers. For instance, is there logic to cap the number of concurrent connections, or the number of concurrent connections from the same IP, or to monitor the amount of traffic so as to throttle it, or to disconnect if the amount of traffic exceeds some threshold?
One thing to consider is not just pushing lots of bytes through the TCP/IP socket. The web server is interpreting those bytes and expects the HTTP protocol to be used. So what happens if you do strange and unusual things with the HTTP protocol, as well as with other protocols that are built on top of HTTP?
For options 3 and 4, it sounds like you are considering bypassing the TCP/IP stack with its windowing logic and just sending a stream of TCP protocol packets, ignoring reply packets, until something smokes. That would be more a test of the robustness of the server's TCP stack than of the web server itself.

The ability to reach network saturation depends on network conditions. It is possible to write your "flooder" in such a way that it slows you down because you are causing intermediate devices to drop packets, and the server itself ends up not seeing its maximum load.
Your application should start with one connection, and monitor the data rate. Continue to add connections if the aggregate data rate for all connections continues to rise. Once it no longer rises, you have hit a throughput limit. If it starts to drop, you are exceeding capacity of your system, and are either causing congestion control to kick in, or the server is unable to efficiently handle that many connections. If the throughput limit is much lower than what you expected, then you probably need to debug your network, or tune your TCP/socket parameters. If it is the server that is slowing down, you will need to profile it to see why it is not able to handle the connection load.
Also, check the data rate of each connection and see whether certain connections are much faster than others. If that happens, the server has a fairness problem, which should also be addressed. This has less to do with server and network performance than with good user experience, though. The presence of such a problem could be exploited in a denial of service attack.
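To make that concrete, here is a rough C sketch of the ramp-up strategy: keep adding connections and stop once the aggregate send rate stops rising. The target address (192.0.2.10:8080), the two-second measurement window, and the cap of 256 connections are placeholders of mine, and the blocking round-robin sends are a simplification; a real tool would use non-blocking sockets or threads so one stalled connection cannot skew the measurement.

    /* Sketch: add connections while the aggregate send rate still rises. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <time.h>
    #include <unistd.h>

    #define HOST "192.0.2.10"     /* placeholder test server */
    #define PORT 8080
    #define MAX_CONNS 256
    #define WINDOW_SEC 2.0

    static int open_conn(void) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in a = {0};
        a.sin_family = AF_INET;
        a.sin_port = htons(PORT);
        inet_pton(AF_INET, HOST, &a.sin_addr);
        if (fd < 0 || connect(fd, (struct sockaddr *)&a, sizeof a) < 0) {
            if (fd >= 0) close(fd);
            return -1;
        }
        return fd;
    }

    static double elapsed(struct timespec a, struct timespec b) {
        return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
    }

    int main(void) {
        int fds[MAX_CONNS], nconn = 0;
        char buf[16384];
        memset(buf, 'x', sizeof buf);
        double prev_rate = 0.0;

        while (nconn < MAX_CONNS) {
            int fd = open_conn();
            if (fd < 0) break;
            fds[nconn++] = fd;

            /* Measure the aggregate send rate over one window with nconn sockets. */
            long long sent = 0;
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            do {
                for (int i = 0; i < nconn; i++) {
                    ssize_t n = send(fds[i], buf, sizeof buf, 0);
                    if (n > 0) sent += n;
                }
                clock_gettime(CLOCK_MONOTONIC, &t1);
            } while (elapsed(t0, t1) < WINDOW_SEC);

            double rate = sent / elapsed(t0, t1);
            printf("%d connection(s): %.1f MB/s aggregate\n", nconn, rate / 1e6);

            if (rate <= prev_rate) {   /* no longer rising: throughput limit hit */
                printf("throughput peaked at about %d connection(s)\n", nconn - 1);
                break;
            }
            prev_rate = rate;
        }

        for (int i = 0; i < nconn; i++) close(fds[i]);
        return 0;
    }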

Related

Deny a client's TCP connect request before accept()

I'm trying to write a TCP server in C. I just noticed that accept() returns once the connection is already established.
Some clients flood me with random data; others send random data just once. After that I want to close their current connection and reject their future connections for a few minutes (or longer, depending on how much load the program is under).
I can save the bad clients' IP addresses in an array, and I can save timestamps too, but I can't find any function to abort the current connection or to deny future connections from bad clients.
I found a Windows function called WSAAccept that lets you deny connections conditionally, but I'm not on Windows.
I tried writing a raw TCP server, which gives you access to the TCP packet from the start, including the full TCP header, and which doesn't accept connections automatically. I handled connections myself, including SYN/ACK and the other TCP signalling. It worked, but then I noticed that the raw server receives every packet on my network interface, so when other programs generate heavy traffic my program gets laggy too.
I also tried libnetfilter, which lets you filter all traffic on your network interface. It works too, but like the raw TCP server it sees the whole interface's packets, which makes it slow when there is a lot of traffic. I also compared libnetfilter with iptables; libnetfilter is slower than iptables.
So, in summary: how can I abort a client's current connection and deny its future connections without hurting other clients' connections?
I'm running Linux (Debian 10).
Once you do blacklisting at the packet level you can very quickly become vulnerable to trivial attacks based on IP spoofing. With a naive implementation, an attacker could use your packet-level blacklisting to blacklist anyone he wants, just by sending you lots of packets with a forged source IP address. Usually you don't want to touch that kind of filtering (unless you really know what you are doing); you just trust your firewall etc.
So I recommend really just closing the file descriptor immediately after getting it from accept().
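A minimal sketch of that approach, with an application-level ban table consulted right after accept(). The 300-second ban, the fixed-size table, and port 8080 are arbitrary choices of mine, and there is no locking, so it assumes a single-threaded accept loop:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <time.h>
    #include <unistd.h>

    #define MAX_BANNED 128
    #define BAN_SECONDS 300                    /* arbitrary ban length */

    struct ban { in_addr_t addr; time_t until; };
    static struct ban banned[MAX_BANNED];

    /* Is this address currently banned? */
    static int is_banned(in_addr_t addr) {
        time_t now = time(NULL);
        for (int i = 0; i < MAX_BANNED; i++)
            if (banned[i].addr == addr && banned[i].until > now)
                return 1;
        return 0;
    }

    /* Remember a misbehaving client; call this when it starts flooding. */
    static void ban_client(in_addr_t addr) {
        time_t now = time(NULL);
        for (int i = 0; i < MAX_BANNED; i++)
            if (banned[i].until <= now) {      /* reuse an expired slot */
                banned[i].addr = addr;
                banned[i].until = now + BAN_SECONDS;
                return;
            }
    }

    int main(void) {
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        int on = 1;
        setsockopt(lfd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof on);
        struct sockaddr_in a = {0};
        a.sin_family = AF_INET;
        a.sin_addr.s_addr = htonl(INADDR_ANY);
        a.sin_port = htons(8080);              /* arbitrary port */
        bind(lfd, (struct sockaddr *)&a, sizeof a);
        listen(lfd, 128);

        for (;;) {
            struct sockaddr_in peer;
            socklen_t len = sizeof peer;
            int fd = accept(lfd, (struct sockaddr *)&peer, &len);
            if (fd < 0) continue;
            if (is_banned(peer.sin_addr.s_addr)) {
                close(fd);                     /* drop it right away */
                continue;
            }
            /* ... handle the client; if it floods you, call
             *     ban_client(peer.sin_addr.s_addr) and close(fd) ... */
            close(fd);
        }
    }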

Possible causes for lack of data loss over my localhost UDP protocol?

I just implemented my first UDP server/client. The server is on localhost.
I'm sending 64 KB of data from the client to the server, which the server is supposed to send back. Then the client checks how much of the 64 KB is still intact, and all of it is. Always.
What are the possible causes of this behaviour? I was expecting at least some data loss.
client code: http://pastebin.com/5HLkfcqS
server code: http://pastebin.com/YrhfJAGb
PS: A newbie in network programming here, so please don't be too harsh. I couldn't find an answer for my problem.
The reason you are not seeing any lost datagrams is that your network stack is simply not running into any trouble.
Your localhost connection can easily cope with what you are throwing at it; a localhost connection can move several hundred megabytes of data per second on a decent CPU.
To see dropped datagrams you need to increase the probability of interference. You have several options:
increase the load on the network
busy your CPU with other tasks
use a "real" network and transfer data between real machines
run your code over a DSL line
set up a virtual machine and simulate network outages (VMware Workstation can do this)
And this might be an interesting read: What would cause UDP packets to be dropped when being sent to localhost?
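If you want to provoke drops without leaving localhost, one small self-contained experiment (my own illustration, not part of the answer above) is to shrink the receiver's socket buffer and not read from it while you blast datagrams at it; whatever does not fit in the buffer is silently discarded:

    #include <arpa/inet.h>
    #include <fcntl.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int rx = socket(AF_INET, SOCK_DGRAM, 0);
        int tx = socket(AF_INET, SOCK_DGRAM, 0);

        int small = 4096;                      /* ask for a tiny receive buffer */
        setsockopt(rx, SOL_SOCKET, SO_RCVBUF, &small, sizeof small);

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(9999);           /* arbitrary test port */
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
        bind(rx, (struct sockaddr *)&addr, sizeof addr);

        char payload[1024];
        memset(payload, 'x', sizeof payload);
        int sent = 0;
        for (int i = 0; i < 1000; i++)         /* blast 1000 datagrams at once */
            if (sendto(tx, payload, sizeof payload, 0,
                       (struct sockaddr *)&addr, sizeof addr) > 0)
                sent++;

        /* Only now drain what survived in the (overflowed) receive buffer. */
        fcntl(rx, F_SETFL, O_NONBLOCK);
        int got = 0;
        while (recv(rx, payload, sizeof payload, 0) > 0)
            got++;

        printf("sent %d datagrams, received %d (the rest were dropped)\n", sent, got);
        close(rx);
        close(tx);
        return 0;
    }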

HTTP Persistent connection

Trying to implement a simple HTTP server in C using the Linux socket interface, I have run into some difficulties with a certain feature I'd like it to have, namely persistent connections. It is relatively easy to send one file at a time over separate TCP connections, but that doesn't seem to be a very efficient solution (considering the multiple handshakes, for instance). Anyway, the server should handle several requests (HTML, CSS, images) during one TCP connection. Could you give me some clues on how to approach the problem?
It is pretty easy - just don't close the TCP connection after you write the reply.
There are two ways to do this, pipelined, and non pipelined.
In a non-pipelined implementation you read one HTTP request from the socket, process it, write the reply back to the socket, and then try to read another one. Keep doing that until the remote party closes the socket, or close it yourself once you have stopped getting requests on the socket for about 10 seconds.
In a pipelined implementation, you read as many requests as are on the socket, process them all in parallel, and then write them all back out on the socket, in the same order as you received them. You have one thread reading requests all the time, and another writing the replies back out.
You don't have to do it, but you can advertise that you support persistent connections and pipelining by adding the following header to your replies:
Connection: Keep-Alive
Read this:
http://en.wikipedia.org/wiki/HTTP_persistent_connection
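Here is a minimal sketch of the non-pipelined loop described above, assuming the socket has already been accepted. The canned 200 response, the 10-second poll() timeout, and the assumption that a whole request arrives in a single read are all simplifications standing in for real request parsing and file serving:

    #include <poll.h>
    #include <unistd.h>

    static const char reply[] =
        "HTTP/1.1 200 OK\r\n"
        "Connection: Keep-Alive\r\n"
        "Content-Length: 2\r\n"
        "\r\n"
        "ok";

    /* Serve one client on an already-accepted socket, keeping the
     * connection open between requests. */
    static void handle_client(int fd) {
        char buf[8192];
        for (;;) {
            struct pollfd p = { .fd = fd, .events = POLLIN };
            if (poll(&p, 1, 10000) <= 0)   /* quiet for ~10 s: give up */
                break;
            ssize_t n = read(fd, buf, sizeof buf - 1);
            if (n <= 0)                    /* peer closed the connection */
                break;
            buf[n] = '\0';
            /* ... a real server would parse the request line and headers
             *     here and send back whatever file was asked for ... */
            write(fd, reply, sizeof reply - 1);
        }
        close(fd);
    }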
By the way, in practice there aren't huge advantages to persistent connections. The overhead of managing the handshake is very small compared to the time taken to read and write data to network sockets. There is some debate about the performance advantages of persistent connections. On the one hand under heavy load, keeping connections open means many fewer sockets on your system in TIME_WAIT. On the other hand, because you keep the socket open for 10 seconds, you'll have many more sockets open at any given time than you would in non-persistent mode.
If you're interested in improving the performance of a self-written server, the best thing you can do to improve the network "front end" is to implement an event-based socket management system. Look into libev and libevent.
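For reference, a bare-bones libev front end could look roughly like this: one watcher accepts clients and one watcher per client waits for readability. The port (8080) and the skeleton client callback are placeholders of mine, error handling is omitted, and you would link with -lev:

    #include <ev.h>
    #include <netinet/in.h>
    #include <stdlib.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static void client_cb(struct ev_loop *loop, ev_io *w, int revents) {
        char buf[4096];
        ssize_t n = read(w->fd, buf, sizeof buf);
        if (n <= 0) {                      /* peer closed or error: clean up */
            ev_io_stop(loop, w);
            close(w->fd);
            free(w);
            return;
        }
        /* ... parse the request in buf and write the response here ... */
    }

    static void accept_cb(struct ev_loop *loop, ev_io *w, int revents) {
        int fd = accept(w->fd, NULL, NULL);
        if (fd < 0) return;
        ev_io *client = malloc(sizeof *client);
        ev_io_init(client, client_cb, fd, EV_READ);   /* watch for readability */
        ev_io_start(loop, client);
    }

    int main(void) {
        int lfd = socket(AF_INET, SOCK_STREAM, 0);
        int on = 1;
        setsockopt(lfd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof on);
        struct sockaddr_in a = {0};
        a.sin_family = AF_INET;
        a.sin_addr.s_addr = htonl(INADDR_ANY);
        a.sin_port = htons(8080);          /* placeholder port */
        bind(lfd, (struct sockaddr *)&a, sizeof a);
        listen(lfd, 128);

        struct ev_loop *loop = ev_default_loop(0);
        ev_io acceptor;
        ev_io_init(&acceptor, accept_cb, lfd, EV_READ);
        ev_io_start(loop, &acceptor);
        ev_run(loop, 0);                   /* dispatch events forever */
        return 0;
    }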

Check how much data has been delivered to the destination using TCP/IP socket

I am developing a server application that needs to send a lot of data to the client. However, the client can get disconnected at any time, and send()/write() on the socket will return an error in that case. I would like to check how much data was actually delivered before the client got disconnected, so that I can continue sending from where it left off when the client reconnects.
Is it possible to check it using sockets API?
No, the sockets API does not give you this information. In fact, it is not possible in general to know this. Depending on the particular way in which the connection failed, the TCP stack on one side generally can't know how much data successfully made it to the other side. The only thing it can know is how much data was acknowledged, which is not the same thing. And considering that other things than TCP/IP might have failed (the local OS, the remote OS, the remote process, the remote application logic), the amount of data that has been acknowledged at the TCP level probably doesn't mean much anyway.
You need to use an end-to-end application protocol to have the remote end acknowledge the data it has received and successfully processed (and committed, if applicable).
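One hypothetical way to frame such an application-level acknowledgement (my own sketch, not a standard protocol): the sender prefixes each chunk with its byte offset and length, and the receiver replies with the highest offset it has committed; after a reconnect the sender resumes from the last offset the receiver confirmed.

    #include <arpa/inet.h>
    #include <stdint.h>
    #include <sys/socket.h>

    /* Loop until the whole buffer is sent; returns -1 if the connection dies. */
    static int send_all(int fd, const void *buf, size_t len) {
        const char *p = buf;
        while (len > 0) {
            ssize_t n = send(fd, p, len, 0);
            if (n <= 0) return -1;
            p += n;
            len -= (size_t)n;
        }
        return 0;
    }

    /* Loop until the whole buffer is received; returns -1 on EOF or error. */
    static int recv_all(int fd, void *buf, size_t len) {
        char *p = buf;
        while (len > 0) {
            ssize_t n = recv(fd, p, len, 0);
            if (n <= 0) return -1;
            p += n;
            len -= (size_t)n;
        }
        return 0;
    }

    /* Sender side: ship one chunk starting at `offset`, then wait for the
     * receiver to report the highest offset it has committed. Returns that
     * confirmed offset, or UINT32_MAX if the connection died; the caller
     * remembers it and resumes from there after a reconnect. */
    static uint32_t send_chunk(int fd, uint32_t offset,
                               const char *data, uint32_t len) {
        uint32_t hdr[2] = { htonl(offset), htonl(len) };
        if (send_all(fd, hdr, sizeof hdr) < 0 || send_all(fd, data, len) < 0)
            return UINT32_MAX;

        uint32_t acked;
        if (recv_all(fd, &acked, sizeof acked) < 0)
            return UINT32_MAX;
        return ntohl(acked);
    }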

Persistent TCP connections, long timeouts, and IP hopping mobile devices

We have an app with a long polling scheme over HTTP (although this question could apply to any TCP-based protocol). Our timeout is fairly high, 30 minutes or so.
What we see sometimes is mobile devices hopping from IP to IP fairly often, every minute or so, and this causes dozens of long-lived sockets to pile up on the server. I can't help but think this is causing more load than necessary.
So I am guessing that some IP gateways are better than others at closing connections when a device hops off. The strategies I can think of to deal with this are:
Decrease the timeout (at the cost of battery life on the device)
Close the last active connection when a user reconnects (requires cookie or user ID tracking)
Any others?
I would look into closing the last active connection using a cookie or some sort of ID on your server. Yes, it's more work, but as soon as the user hops addresses you can find the old socket and clean up the resources right away. It should be fairly easy to tie to a username or something like that.
The other problem you may run into, even if the user equipment isn't hopping addresses, is that some mobile networks (and maybe your own network) have a stateful firewall that cleans up unused sockets, which will cause connectivity problems since a new connection will require the SYN/SYN-ACK exchange again. Just something to keep in mind if you're noticing connectivity problems.
If you do decide to play with keep-alives, please don't be too aggressive; chatty applications are the plague of mobile networks, and ones that hammer the network when they lose connection to the server can cause all sorts of problems for the network (and for you, if the carrier catches on). At least have some sort of backoff mechanism for retrying connectivity, and maybe try to find out why the device is switching IP addresses every minute. If it's functioning properly, that shouldn't happen.
***I work for a mobile operator in Canada, however, my comments do not reflect the position of my employer.
If you can, turn on TCP keepalive on the sockets, and give them a fairly low timer (e.g. a probe every 1-5 minutes). As long as you're reading from the socket, you'll detect an unreachable peer faster, and with less resource utilization on the phone than decreasing your 30-minute application timeout.
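A sketch of what that looks like on Linux. SO_KEEPALIVE is portable, while TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT are Linux-specific, and the particular values (start probing after 60 seconds of silence, probe every 10 seconds, give up after 5 missed probes) are just an example of a fairly low timer, not numbers from the answer above:

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Enable keepalive probing on an already-connected socket. */
    static int enable_keepalive(int fd) {
        int on = 1, idle = 60, interval = 10, count = 5;
        if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof on) < 0)
            return -1;
        /* Start probing after 60 s of silence on the connection... */
        if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof idle) < 0)
            return -1;
        /* ...send a probe every 10 s after that... */
        if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof interval) < 0)
            return -1;
        /* ...and have the next read fail (ETIMEDOUT) after 5 missed probes. */
        return setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof count);
    }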

Resources