I'm running a game website where users connect using an Adobe Flash client to a C server running on a Fedora Linux box.
Users often complain about disconnects, usually of the "Connection reset by peer" variety.
Is there any way to make the connection more stable or does it all depend on the route from the user host to my server?
One thing I've tried to make it more stable is sending a clear-text PING every other minute to avoid timeout problems.
Anyone got more ideas?
You aren't exhausting the sockets, memory, or CPU allotted to the server process on the server, are you?
Do check with ulimit.
Also, if possible, try to trace the error in the source code (i.e. when an RST packet comes in and a send() or accept() call returns an error value). In such cases print a debug message to the logs (see the sketch after this list); if you really fancy debugging it, do a simulation of the server:
run it in debug mode on a separate machine (possibly a clone of the server)
simulate thousands of connections (or find a network test-harness program)
backtrace the call and/or sniff the connection
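For example, a minimal sketch of that kind of logging, wrapping send() (accept() failures can be handled the same way); this is only an illustration, not your server's actual code:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* Log errno whenever send() fails, so "Connection reset by peer"
       (ECONNRESET) shows up in the logs together with the client's fd. */
    ssize_t send_logged(int fd, const void *buf, size_t len)
    {
        ssize_t n = send(fd, buf, len, 0);
        if (n < 0)
            fprintf(stderr, "send() to fd %d failed: %s (errno %d)\n",
                    fd, strerror(errno), errno);
        return n;
    }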
Where are you running the server?
At home? At work? At a hosting facility?
This will make a very big difference.
Can you design your app to connect to two sockets on the server and then load balance or make it active/passive (or active/active)?
You can use the SO_KEEPALIVE TCP socket option.
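For example, a minimal sketch of turning it on for an accepted socket; the idle/interval/count tuning uses Linux-specific options and the values below are just examples:

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Enable TCP keepalive on a connected socket; returns 0 on success. */
    int enable_keepalive(int fd)
    {
        int on = 1;
        if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0)
            return -1;

        /* Linux-specific tuning: start probing after 60 s of idle time,
           probe every 10 s, give up after 5 unanswered probes. */
        int idle = 60, intvl = 10, cnt = 5;
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,  sizeof(idle));
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl));
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT,   &cnt,   sizeof(cnt));
        return 0;
    }

Keepalive makes the kernel probe idle peers; a dead connection then surfaces as an error on a later send()/recv() (or as a readable event in your poll loop), so keep logging those errors.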
I'm trying to write a TCP server in C. I just noticed that accept() only returns once a connection has already been established.
Some clients flood the server with random data, and some send random data just once; after that I want to close their current connection and deny future connections for a few minutes (or longer, depending on how much load the program is under).
I can save the bad clients' IP addresses in an array, and I can save timestamps too, but I can't find any function to abort a current connection or to deny future connections from bad clients.
I found a Windows function called WSAAccept that lets you deny connections conditionally, but I'm not on Windows.
I tried writing a raw TCP server, which gives you access to the TCP packet from the start, including the whole TCP header, and which doesn't accept connections automatically. I handled the connections in my own code, including SYN, ACK and the other TCP signals. It worked, but then I noticed the raw server receives every packet on my network interface, so when other programs generate heavy traffic my program gets laggy too.
I tried using libnetfilter, which lets you filter all the traffic on your network interface. That works too, but like the raw TCP server it receives the whole interface's packets, which makes it slow when there is a lot of traffic. I also compared libnetfilter with iptables; libnetfilter is slower than iptables.
So, in summary: how can I abort a client's current and future connections without hurting the other clients' connections?
I'm running Linux, Debian 10.
Once you do blacklisting at the packet level you quickly become vulnerable to very trivial attacks based on IP spoofing. Against a basic implementation, an attacker could abuse your packet-level blacklisting to blacklist anyone he wants simply by sending you many packets with a forged source IP address. Usually you don't want to touch that kind of filtering (unless you really know what you are doing) and instead just trust your firewall etc.
So I really recommend just closing the file descriptor immediately after getting it from accept().
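A rough sketch of that approach, assuming a blacklist lookup you implement yourself (is_blacklisted here is a hypothetical helper over your array of bad IPs and their expiry times):

    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Hypothetical helper: look the address up in your array of bad
       clients and check whether its ban has expired yet. */
    int is_blacklisted(const struct in_addr *addr);

    void accept_loop(int listen_fd)
    {
        for (;;) {
            struct sockaddr_in peer;
            socklen_t len = sizeof(peer);
            int fd = accept(listen_fd, (struct sockaddr *)&peer, &len);
            if (fd < 0)
                continue;
            if (is_blacklisted(&peer.sin_addr)) {
                close(fd);            /* drop the bad client right away */
                continue;
            }
            /* ... hand fd over to the normal client handling ... */
        }
    }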
What I'm doing
I'm implementing a WebSocket server on a Stellaris board, as the title says. At the moment I'm able to establish a connection to the client and send a few frames.
The way I'm implementing the websocket
The way I'm developing it is something like master-slave communication. Whenever the client sends a string, the server decodes it and then answers. At the moment I'm simply responding to the character 'e', which is meant to be just a counter. The thing is, I implemented the client side of the WebSocket to send 'e' whenever it receives a message and then display the message on the page.
The problem
The problem is that it completes about 15 transactions, then I can see the traffic being re-transmitted to and from the Stellaris board, and then the connection closes. After the connection closes I noticed that I can't access any other page on the board. It simply doesn't respond anymore.
My assumptions of what may be causing it
This led me to believe that the transactions are happening too fast and that there may be an implementation bug, an lwIP bug or a hardware bug (I'm using the enet_io example as a base).
My assumptions on how to fix it
After seeing this, I imagine that what I need is to throttle the string being sent to the microcontroller so that it is sent once a second, or maybe even less often, because at the moment it was doing something like 1000 transactions per second, sometimes more.
The question
So ... after my trials I still have a few questions that need answering. Do WebSockets require this kind of relationship, where the client asks and the server serves? Or can I simply stream data from the server to the client as long as the connection is open? Is my assumption correct that slowing down my rates will work?
Do websockets need this kind of relationship [request-response]? Where client asks and server serves? Or can I simply stream data from the server to the client as long as the connection is open?
The WebSocket protocol doesn't require a request-response model (except for the connection-establishing handshake).
The server can stream data to the client without worrying about any response or request from the client.
However, it's common practice to get a response or a ping from a client once in a while, just to know they're alive.
This allows the client to renew a connection if a message or ping fails to reach the server - otherwise the client might not notice an abnormally dropped connection (it will just assume no updates are being sent because there's no new data).
It also allows the server to know a connection is still alive even when no information is being exchanged.
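For what it's worth, a ping is just a tiny control frame. A minimal sketch of building an unmasked server-to-client ping (RFC 6455 forbids masking in the server-to-client direction); how you queue it for sending depends on your lwIP setup:

    #include <stddef.h>
    #include <stdint.h>

    /* Build an empty WebSocket ping frame: FIN = 1, opcode = 0x9,
       no mask, zero-length payload. Returns the frame size (2 bytes). */
    size_t ws_build_ping(uint8_t *buf)
    {
        buf[0] = 0x89;  /* FIN bit set, opcode 0x9 = ping */
        buf[1] = 0x00;  /* mask bit clear, payload length 0 */
        return 2;
    }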
Is my supposition that slowing down my rates will work?
I guess this question becomes less relevant due to the first question's answer... however, I should probably note that the web socket client (often a browser) will have limited resources and a different memory management scheme.
Browsers are easy to overwhelm with too much data because they often keep references to all the exchanges since the page was loaded (or refreshed).
This is especially true when logging events to a browser's console.
I just implemented my first UDP server/client. The server is on localhost.
I'm sending 64 KB of data from the client to the server, which the server is supposed to send back. Then the client checks how much of the 64 KB is still intact, and all of it is. Always.
What are the possible causes for this behaviour? I was expecting at least -some- data loss.
client code: http://pastebin.com/5HLkfcqS
server code: http://pastebin.com/YrhfJAGb
PS: A newbie in network programming here, so please don't be too harsh. I couldn't find an answer for my problem.
The reason why you are not seeing any lost datagrams is that your network stack is simply not running into any trouble.
Your localhost connection can easily cope with what you are sending; a loopback connection can move several hundred megabytes of data per second on a decent CPU.
To see dropped datagrams you have to increase the probability of interference (one way is sketched after this list). You have several options:
increase the load on the network
busy your CPU with other tasks
use a "real" network and transfer data between real machines
run your code over a DSL line
set up a virtual machine and simulate network outages (VMware Workstation can do this)
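The sketch referred to above: drops happen once the receiving socket's buffer overflows, so shrinking that buffer and draining it slowly makes them easy to provoke even on localhost. This is only an illustration of the mechanism, assuming udp_fd is your already-bound server socket:

    #include <sys/socket.h>
    #include <unistd.h>

    /* Make receive-buffer overflow (and therefore drops) easy to provoke:
       request a tiny SO_RCVBUF (the kernel clamps it to a small minimum)
       and drain the socket slowly while the client keeps sending. */
    void make_drops_likely(int udp_fd)
    {
        int tiny = 1;
        setsockopt(udp_fd, SOL_SOCKET, SO_RCVBUF, &tiny, sizeof(tiny));

        char buf[2048];
        for (;;) {
            recv(udp_fd, buf, sizeof(buf), 0);
            usleep(100000);   /* 100 ms pause: the sender easily outruns us */
        }
    }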
And this might be an interesting read: What would cause UDP packets to be dropped when being sent to localhost?
I'm writing a port scanner in C and I want to detect which service is running on an open port, and its version. I've already written the scanner code, but now I have no idea how to detect the running service.
What can I do?
If you are determined to do it in your own code, you can connect to the port, see if you get any data from it, and if nothing arrives, send a few bytes and check again.
Then match what you receive against the expected responses.
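A rough sketch of that probe, assuming fd is already connected to the open port; the HTTP request is just one example of "a few bytes" to send when the service stays silent:

    #include <sys/socket.h>
    #include <sys/time.h>
    #include <sys/types.h>

    /* Grab whatever banner the service offers; if it stays silent,
       poke it with an HTTP request and read again. */
    ssize_t grab_banner(int fd, char *buf, size_t len)
    {
        struct timeval tv = { 3, 0 };              /* 3-second read timeout */
        setsockopt(fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

        ssize_t n = recv(fd, buf, len - 1, 0);     /* many services talk first */
        if (n <= 0) {
            const char probe[] = "HEAD / HTTP/1.0\r\n\r\n";
            send(fd, probe, sizeof(probe) - 1, 0);
            n = recv(fd, buf, len - 1, 0);
        }
        if (n > 0)
            buf[n] = '\0';                         /* compare against known banners */
        return n;
    }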
To get an idea of what you are looking for, you can connect to the port manually with telnet and poke at it. In many cases (a web server is an easy example) you must send some correctly formatted data in order to get a usable response.
nmap has done all this and much more (e.g. extensive checks such as looking at byte ordering and the timing of ARP traffic).
UPDATE: several people have mentioned well-known ports, but that won't help you discover standard services running on nonstandard ports, such as SSH or HTTP servers running on custom ports.
If the server sends something first, use that to identify the protocol.
If not, send something according to some protocol, such as HTTP, and see what the server sends back (a valid response or an error). You may need to make several attempts with different protocols, and a good ordering is important to minimize the connection count.
Some protocols may be very hard to identify, and it is easy to build a custom server with a unique protocol you don't know about, or even to hide a real server behind a simple fake server for another protocol such as HTTP.
If you just want to know what a port is normally used for, check the list of well-known ports and officially registered ports.
Also check nmap source code.
I need to detect the presence/absence of internet connection. More precisely, let us suppose that the application is broken up into 2 parts - A and B.
A is responsible for checking whether or not the system is connected to the internet. If it finds that there is no connection, it starts up part B. And as soon as it detects that there is a network connection, it kills B and continues its own work.
What would be the best way to implement part A of the application? Continual pings sound hideous. There has to be a better way of doing this (preferably in C).
With sufficient privilege you can test the various network interfaces and examine their state. This would tell you whether any of the interfaces is connected to a network and operating. However, it won't tell you whether the connection is actually usable, i.e., connected to the internet (or your local net, if that's all you need). I don't know of any way to determine that short of actually using it.
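A sketch of that interface check using getifaddrs(); note that it only tells you some non-loopback interface is up and running, not that the internet is reachable:

    #include <ifaddrs.h>
    #include <net/if.h>
    #include <stddef.h>

    /* Return 1 if any non-loopback interface is both up and running. */
    int have_active_interface(void)
    {
        struct ifaddrs *ifs, *ifa;
        int found = 0;

        if (getifaddrs(&ifs) < 0)
            return 0;
        for (ifa = ifs; ifa != NULL; ifa = ifa->ifa_next) {
            if ((ifa->ifa_flags & IFF_UP) &&
                (ifa->ifa_flags & IFF_RUNNING) &&
                !(ifa->ifa_flags & IFF_LOOPBACK)) {
                found = 1;
                break;
            }
        }
        freeifaddrs(ifs);
        return found;
    }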
Using ICMP (ping) can be useful at a low level, but presumably what you need is a connection to an actual endpoint via TCP/IP to do real work. I would say that you should change the design of your application so that B is responsible for indicating when it is unable to continue due to the absence of resources that it relies on -- network or otherwise. A and B should communicate so that A is aware of the situation and is able to either kill B or respond to B terminating itself and thus continuing its work.
A lot of companies have measures in place to prevent outgoing ICMP requests, TCP connections to ports other than 80/443 for example, or even to prevent you from reaching the internet directly by (transparently) proxying your traffic.
By an internet connection I mean any way of contacting the outside world, be it UDP, TCP or ICMP. Depending on what your application needs the internet for, I would suggest checking over that same protocol, as it is the only thing that matters to your app.
If your application uses HTTP to communicate with an external source, try connecting to a few sites that you would expect not to be blacklisted and that have reliable uptime, like google.com, microsoft.com, apple.com, and so on...
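For example, a blocking sketch of such a check; a production version would want a connect timeout (non-blocking connect plus select/poll), and the host and port are just examples:

    #include <netdb.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Return 1 if a TCP connection to host:port succeeds, else 0.
       Usage: can_reach("google.com", "80") */
    int can_reach(const char *host, const char *port)
    {
        struct addrinfo hints, *res, *rp;
        int ok = 0;

        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;

        if (getaddrinfo(host, port, &hints, &res) != 0)
            return 0;                        /* name resolution failed */
        for (rp = res; rp != NULL && !ok; rp = rp->ai_next) {
            int fd = socket(rp->ai_family, rp->ai_socktype, rp->ai_protocol);
            if (fd < 0)
                continue;
            if (connect(fd, rp->ai_addr, rp->ai_addrlen) == 0)
                ok = 1;
            close(fd);
        }
        freeaddrinfo(res);
        return ok;
    }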
Edit:
I am unsure what the specifics are, so let me give you an example with a hypothetical situation.
Application A collects data on the system it is running on and forwards it to a Web Service listening on yourserverhost.yourcompany.com:80
Application B would basically take over the job of the Web Service when it is down and log everything so no data is lost.
When all is well, App A will be sending the data to your web service
Once this connection drops, you immediately launch App B (the obvious remark here would be: why not keep App B running as a failsafe?)
App A connects to App B and forwards what it had been buffering
App A continues to try to reestablish the connection to your Web Service and once it is back up will request App B to stop
If the problem you are facing is nothing like this, please provide a more concrete description of what App A and App B are supposed to be doing. I will be more than happy to help.
In your code, you can check whether an internet connection exists by using a socket to open a connection to a website.
First run: ask the user to input the network parameters, like proxy settings. Save this info.
Subsequent runs: use these settings to check for an internet connection. You may simply do a DNS lookup.
If the results are negative, ask the user to check the settings.
Check whether the cable is connected; if so, test your internet connection by pinging any well-known host, such as google.com:
ping google.com