I'm trying to code a TCP server in C. I just noticed that accept() returns once a connection is already established.
Some clients flood the server with random data, and some send random data just once. After that, I want to close their current connection and deny future connections for a few minutes (or more, depending on how much load the program is under).
I can save the bad clients' IP addresses and timestamps in an array, but I can't find any function to abort a current connection or deny future connections from bad clients.
I found a Windows function called WSAAccept that lets you deny connections based on your own criteria, but I don't use Windows.
I tried coding a raw TCP server, which gives you access to TCP packets from the start, including all TCP headers, and doesn't accept connections automatically. I handled connections on the program side, including SYN, ACK, and the other TCP signals. It worked, but then I noticed the raw TCP server receives every packet on my network interface, so when other programs generate heavy traffic my program gets laggy too.
I tried libnetfilter, which lets you filter all the traffic on your network interface. It works too, but like the raw TCP server it receives the whole interface's packets, which makes it slow when there is a lot of traffic. I also compared libnetfilter with iptables: libnetfilter is slower.
So, in summary: how can I abort a client's current and future connections without hurting other clients' connections?
I'm on Linux (Debian 10).
Once you do blacklisting at the packet level, you can quickly become vulnerable to very trivial attacks based on IP spoofing. Against a basic implementation, an attacker could abuse your packet-level blacklisting to blacklist anyone he wants simply by sending you many packets with a forged source IP address. Usually you don't want to touch this kind of filtering (unless you really know what you are doing); just trust your firewall, etc.
So I recommend simply closing the file descriptor immediately after getting it from accept().
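A minimal sketch of that approach, with a stubbed-out is_blacklisted() helper (hypothetical; you would back it with your array of bad IPs and ban expiry times):

#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

/* Hypothetical helper: returns nonzero if this address is currently
 * banned. Stubbed to always allow; back it with your ban table. */
static int is_blacklisted(const struct in_addr *addr)
{
    (void)addr;
    return 0;
}

static void accept_loop(int listen_fd)
{
    for (;;) {
        struct sockaddr_in peer;
        socklen_t len = sizeof(peer);
        int fd = accept(listen_fd, (struct sockaddr *)&peer, &len);
        if (fd < 0)
            continue;

        if (is_blacklisted(&peer.sin_addr)) {
            /* Banned client: the kernel already completed the
             * handshake, but we spend no further resources on it. */
            close(fd);
            continue;
        }

        /* ... hand fd off to the normal client-handling code ... */
    }
}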
I need to write a proxy server in C on Linux (Ubuntu 20.04). The purpose of this proxy server is as follows. There are illogical governmental barriers to accessing the free internet. Some are:
Name resolution: I ping telegram.org and many other sites the government doesn't want me to access. I ask 8.8.8.8 to resolve the name, but they answer on behalf of the server, so the IP may be resolved to 10.10.34.35!
Let's concentrate on this one, because when this is solved many other problems will be solved too. For this, I need to set up the following configuration:
A server outside my country is required. I have prepared it. It's a VPS. Let's call it RS (Remote Server).
A local proxy server is required. Let's call it PS. PS runs on the local machine (the client) and knows RS's IP. I need it to gather all requests about to be sent through the only NIC available on the client, process them, scramble them, and send them to RS in a way that hides them from the government.
The server-side program should run on RS on a specific port, receive each packet, unscramble it, and send it to the internet on behalf of the client. After receiving the response from the internet, it should send it back to the client via PS.
PS will deliver the response to the client application that originated the request. Of course, this happens after it unscrambles the data and recovers the original response from the internet.
This is the design, and some parts remain murky to me. Since I'm not an expert in network programming, I'm going to ask my questions about the parts that are unclear or give me trouble.
Now, I'm at part 2. See whether I'm right. There are two types of sockets, a RAW socket and a stream socket. A RAW socket is opened this way:
socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
And a stream socket is opened this way:
socket(AF_INET, SOCK_STREAM, 0);
For RAW sockets we use sockaddr_ll, and for stream sockets we use sockaddr_in. May I use stream sockets between the client applications and PS? I think not, because I need the whole RAW packet: I have to know the protocol and perhaps other information from the packet, because the whole packet should be recreated transparently at RS. For example, I should know whether it was a ping packet (ICMP) or a web request (TCP). For this, I need the packet header in PS, so I can't use a stream socket, because it doesn't include the packet header. But until now I've only used RAW sockets on interfaces, and I have never written a proxy server that receives RAW packets. Is it possible? In other words, I have the following questions before going to the next step:
Can a RAW socket be bound to localhost:port instead of an interface, so that it may receive all low-level packets including their headers (RAW packets)?
I may define a proxy server for the browser. But can I put the whole system behind the proxy server, so that packets of other apps, like ping, are routed through it automatically?
Do I really need RAW sockets in PS? Can't I change the design so that the data I get from the packets' payloads suffices?
Maybe I'm wrong about some of these concepts; I would appreciate your guidance.
Thank you
Can a RAW socket be bound to localhost:port instead of an interface, so that it may receive all low-level packets including their headers (RAW packets)?
No, that doesn't make sense. Raw packets don't have port numbers, so how would the kernel know which socket to deliver them to?
It looks like you are trying to write a VPN. You can do this on Linux by creating a fake network interface called a "tun interface". You create a tun interface, and whenever Linux tries to send a packet through the interface, instead of going to a network cable, it goes to your program! Then you can do whatever you like with the packet. Of course, it works both ways - you can send packets from your program back to Linux through the tun interface, and Linux will act like they just arrived on a network cable.
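A minimal sketch of creating a tun interface in C, using the standard Linux /dev/net/tun API (error handling trimmed):

#include <fcntl.h>
#include <linux/if.h>
#include <linux/if_tun.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* Open a tun interface and return its file descriptor. Reading from
 * the fd yields IP packets Linux routed to the interface; writing
 * injects packets back as if they arrived from the network. */
int open_tun(char *dev_name, size_t dev_name_len)
{
    int fd = open("/dev/net/tun", O_RDWR);
    if (fd < 0)
        return -1;

    struct ifreq ifr;
    memset(&ifr, 0, sizeof(ifr));
    ifr.ifr_flags = IFF_TUN | IFF_NO_PI;  /* IP packets, no extra header */
    strncpy(ifr.ifr_name, "tun0", IFNAMSIZ - 1);

    if (ioctl(fd, TUNSETIFF, &ifr) < 0) {
        close(fd);
        return -1;
    }

    /* The kernel may have picked a different name; report it back. */
    snprintf(dev_name, dev_name_len, "%s", ifr.ifr_name);
    return fd;
}

After this you bring the interface up and point routes at it, and every packet Linux sends through tun0 shows up on the returned fd.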
Then, you can set up your routing table so that all traffic goes to the tun interface, except for traffic to the VPN server ("RS"), which goes to your real ethernet/wifi interface. Otherwise you'd have an endless loop where your VPN program PS tried to send packets to RS but they just went back to PS.
Looking into nginx: ignore some requests without proper Host header got me thinking that it's not actually possible to close(2) a TCP connection without the OS properly terminating the underlying TCP connection by sending an RST (and/or FIN) to the other end.
One workaround would be to use something like tcpdrop(8); however, as can be seen from usr.sbin/tcpdrop/tcpdrop.c on OpenBSD and FreeBSD, it's implemented through a sysctl-based interface and may have portability issues outside the BSDs. (In fact, it looks like even the sysctl-based implementation may be different enough between OpenBSD and FreeBSD to require a porting layer -- OpenBSD uses the tcp_ident_mapping structure (which, in turn, contains two sockaddr_storage elements plus some other info), whereas FreeBSD, DragonFly and NetBSD use an array of two sockaddr_storage elements directly.) It turns out that OpenBSD's tcpdrop does appear to send an R (RST) packet according to tcpdump(8), which can be confirmed by looking at /sys/netinet/tcp_subr.c :: tcp_drop(), which calls tcp_close() in the end (and tcp_close() is confirmed to send RST elsewhere on SO); so it appears that this approach wouldn't work either.
If I'm establishing the connection myself through C, is there a way to subsequently drop it without an acknowledgement to the other side, e.g., without initiating RST?
If I'm establishing the connection myself through C, is there a way to subsequently drop it without an acknowledgement to the other side, e.g., without initiating RST?
No. Even if there were, anything the peer subsequently sent would be answered with an RST.
NB Normal TCP termination uses a FIN, not an RST.
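For reference, the close-time behavior can be pushed the other way: an abortive close that sends RST instead of FIN can be forced with SO_LINGER and a zero timeout. A minimal sketch:

#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Abortive close: with SO_LINGER enabled and l_linger == 0, close()
 * discards any unsent data and sends RST instead of the normal FIN. */
void abortive_close(int fd)
{
    struct linger lg;
    memset(&lg, 0, sizeof(lg));
    lg.l_onoff  = 1;  /* enable linger */
    lg.l_linger = 0;  /* zero timeout: RST on close */
    setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof(lg));
    close(fd);
}

There is no analogous option to make close() send nothing at all; the stack always terminates the connection one way or the other.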
Cheating an attacker in this way could be a good idea. Of course, in this case you are already reserving server resources for the established connection. In the most basic approach you can use netfilter to drop any outgoing TCP segment with the RST or FIN flag set. These iptables rules could serve as an example:
sudo iptables -A OUTPUT -p tcp --tcp-flags RST RST -j DROP
sudo iptables -A OUTPUT -p tcp --tcp-flags FIN FIN -j DROP
Of course, these rules will affect all your TCP connections. I wrote them just to give you a lead on how you can do it. Go to https://www.netfilter.org/ for more ideas to build your solution on. Basically, you should be able to do the same only for selected connections.
Because of how TCP works, if you implement this, the client (or attacker) will keep the connection open for a long time. To understand the effect it would have on a client, read here: TCP, recv function hanging despite KEEPALIVE, where I provide the results of a test in which the other side doesn't return any TCP segment (not even an ACK). In my configuration, it takes 13 minutes for the socket to enter an error state (that depends on Linux parameters like tcp_retries1 and tcp_retries2).
Just consider that a DoS attack will usually involve connections from thousands of different devices, not necessarily many connections from the same device. That pattern is very easy to detect and block in the firewall, so it's very improbable that you will cause resource exhaustion on the client. Also, this solution will not work against half-open connection attacks.
There's already a question How exactly does a remote program like team viewer work which gives a basic description, but I'm interested in how the comms work once the client has registered with the server. If the client is behind a NAT then it won't have its own public IP address, so how can the server (or another client) send a message to it? Or does the client just keep polling the server to see if it's got any requests?
Are there any open source equivalents of LogMeIn or TeamViewer?
The simplest and most reliable way (although not always the most efficient) is to have each client make an outgoing TCP connection to a well-known server somewhere and keep that connection open. As long as the TCP connection is open, data can pass over that TCP connection in either direction at any time. It appears that both LogMeIn and TeamViewer use this method, at least as a fall-back. The main drawbacks for this technique are that all data has to pass through a TeamViewer/LogMeIn company server (which can become a bottleneck), and that TCP doesn't handle dropped packets very well -- it will stall and wait for the dropped packets to be resent, rather than giving up on them and sending newer data instead.
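A minimal sketch of the client side of that technique (the relay address is a placeholder; error handling trimmed):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Connect out to the well-known relay server and keep the socket open;
 * the server can then push data to us at any time over this connection,
 * NAT or not, because the NAT mapping was created by our outgoing SYN. */
int connect_to_relay(const char *server_ip, unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    /* TCP keepalives stop the NAT mapping from expiring while idle. */
    int on = 1;
    setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, server_ip, &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        close(fd);
        return -1;
    }
    return fd;  /* read()/write() in both directions from here on */
}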
The other technique that they can sometimes use (in order to get better performance) is UDP hole-punching. That technique relies on the fact that many firewalls will accept incoming UDP packets from remote hosts that the firewalled-host has recently sent an outgoing UDP packet to. Given that, the TeamViewer/LogMeIn company's server can tell both clients to send an outgoing packet to the IP address of the other client's firewall, and after that (hopefully) each firewall will accept UDP packets from the other client's Internet-facing IP address. This doesn't always work, though, since different firewalls work in different ways and may not include the aforementioned UDP-allowing logic.
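And a rough sketch of the hole-punching step, assuming the rendezvous server has already told each client the other's public IP and port (the names here are illustrative):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Punch a hole: send a few datagrams to the peer's public endpoint so
 * that our own firewall/NAT starts accepting packets from that address.
 * Both sides do this at roughly the same time. */
void punch_hole(int udp_fd, const char *peer_ip, unsigned short peer_port)
{
    struct sockaddr_in peer;
    memset(&peer, 0, sizeof(peer));
    peer.sin_family = AF_INET;
    peer.sin_port = htons(peer_port);
    inet_pton(AF_INET, peer_ip, &peer.sin_addr);

    for (int i = 0; i < 5; i++) {
        const char probe[] = "punch";
        sendto(udp_fd, probe, sizeof(probe), 0,
               (struct sockaddr *)&peer, sizeof(peer));
        sleep(1);
    }
    /* If it worked, recvfrom() on udp_fd will now start returning
     * datagrams sent from peer_ip:peer_port. */
}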
I have looked through many pages and forums, but I am still unsure about this. I am writing a project where the client reads in a txt file of numbers and sends the numbers to the server, which does some computation and sends the result back to the client. Is it possible to connect a client to multiple servers using UDP? If so, an explanation would be nice; I don't think I fully understand UDP yet. I am writing this in C, too. The reason for connecting to multiple servers from one client is that I need to run the client against 1, 2, 4, and 8 servers (distributing numbers to each server until none are left) and compare the run times. Any quick help would be appreciated.
You can use UDP with multiple servers from the same socket. Probably the simplest way to do it is to have the client assign a session ID to each conversation, include that session ID in each datagram it sends, and have the server return it in each reply datagram. Don't use the source IP address to distinguish which server a packet came from: a server can have more than one IP address, which makes that unreliable.
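A minimal sketch of the sending side under that scheme (the wire format here is made up for illustration: a 4-byte session ID followed by the payload):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <sys/socket.h>

struct request {
    uint32_t session_id;  /* network byte order */
    int32_t  number;      /* the number for the server to crunch */
};

/* Send one number to one server over a shared UDP socket. The same
 * udp_fd is reused for every server; only the destination address
 * passed to sendto() changes. */
int send_number(int udp_fd, const struct sockaddr_in *server,
                uint32_t session_id, int32_t number)
{
    struct request req;
    req.session_id = htonl(session_id);
    req.number     = (int32_t)htonl((uint32_t)number);
    return (int)sendto(udp_fd, &req, sizeof(req), 0,
                       (const struct sockaddr *)server, sizeof(*server));
}

On receive, the client matches the session ID echoed in each reply back to the conversation it belongs to, regardless of which source address the reply arrives from.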
Just remember that if you use UDP, you don't get any of the things TCP adds. If you need any of them, you need to do them yourself. If you need all or most of them, TCP is a much better choice. TCP does:
Session establishment
Session teardown
Retransmissions
Transmit pacing
Backoff and retry
Out of order detection and rearrangement
Sliding windows
Acknowledgments
If you need any of these things and choose to use UDP, you need to do them yourself.
I'm writing a multithreaded Winsock application and I'm having some issues with closing the sockets.
First of all, is there a limit on the number of simultaneously open sockets? Let's say 32 sockets all at once.
I establish a connection on one of the sockets, pass information, and it all goes right.
The problem is when I disconnect the socket and then reconnect to the same destination: I get an RST from the server after my SYN.
I don't have the code for the server app, so I can't debug it.
When I used SO_LINGER, an RST flag was sent at the end of each session, and it worked.
But I don't want to end my connections this way.
When not using SO_LINGER, a FIN flag was sent, but it seems the connection was not really closed.
Any help?
Thanks
On Unix there's a file descriptor limit per process - I'm guessing on Windows it's "handles".
You are probably bind()-ing your client socket to a fixed port. That might be the reason the server is rejecting your subsequent connection. Try normal ephemeral ports.
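In other words, skip the bind() entirely and let the stack pick the source port. A minimal Winsock sketch (assumes WSAStartup() has already been called):

#include <string.h>
#include <winsock2.h>
#include <ws2tcpip.h>

/* Connect without calling bind(): the stack assigns a fresh ephemeral
 * source port for each connection, so a reconnect never collides with
 * the previous connection still sitting in TIME_WAIT. */
SOCKET connect_ephemeral(const char *ip, unsigned short port)
{
    SOCKET s = socket(AF_INET, SOCK_STREAM, 0);
    if (s == INVALID_SOCKET)
        return INVALID_SOCKET;

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    /* No bind() here on purpose. */
    if (connect(s, (struct sockaddr *)&addr, sizeof(addr)) == SOCKET_ERROR) {
        closesocket(s);
        return INVALID_SOCKET;
    }
    return s;
}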
Firstly, I agree with Nikolai, are you binding your client socket?
If so, it sounds like the socket on the server side is still in TIME_WAIT and is discarding the new connection attempt. By binding the client socket you're forcing the server to try to reuse the exact same connection that is currently in the 2MSL wait period; it can't be reused at this point in time, and so you're seeing what you're seeing. There's usually no need to bind the client port; stop doing it and your problem will likely go away.
Secondly, yes, there are limits on the number of open sockets on Windows platforms, but they're resource-related rather than some hard-coded number.
Each open socket uses some 'non paged pool' memory, and each pending read or write request on a socket is also likely to use 'non paged pool' and to have pages of memory locked during I/O (there's a limit to the number of pages that can be locked). That said, on Vista and later there's much more 'non paged pool' available than on earlier versions of Windows, and even then I've managed to achieve more than 70,000 concurrent active connections on a pretty low spec XP box (see here: http://www.lenholgate.com/blog/2005/11/windows-tcpip-server-performance.html). Note that there are some separate limits on the number of outbound connections that you can establish (which is more likely to be of interest to you), but that's around 4000 by default and can be tuned by setting MaxUserPort; see here: Maximum number of concurrent TCP/IP connections - Win XP SP3 for more details.