How to force wireguard to handshake immediately? - wireguard

I tried to change my client IP, and the server can't resolve the new client IP until 3 minutes later, when the client handshakes with the server again. How can I force the client/server to handshake immediately, or at least set the handshake timeout to something very short?

The only way I know is with wg-quick down and wg-quick up. In a bash script you can control the timing with a loop and sleep.
If you are using Linux, you can find a script under /usr/share/doc/wireguard-tools/examples/reresolve-dns that allows you to control when the handshake happens.
I haven't tested whether it also works if you just execute "wg set" with your configuration every few seconds.

Related

What happens when a server closes the connection and the client sends some data at the same time?

I have a server written in C that closes the connection if the connection sits idle for a specific time. I have an issue that happens rarely: read fails on the client side with a "connection broken" error. I suspect the server is closing the connection while the client is sending data at the same time.
Consider the following scenario (A is the server, B is the client):
B initiates the connection and the connection between A and B is established.
B is sitting idle and the idle timeout is reached.
A initiates the close
Before B receives the FIN from A, it starts sending a request to A
After B sends the request, it will read the response
Since A has already closed the connection, B is not able to read.
My questions are:
Is this a possible situation?
How to handle idle timeout for clients?
How to close the connection between A and B properly (avoid B sending a request during the process)? In short, how to close the connection atomically?
Speaking from little more than rudimentary network experience, and assuming that you are talking about a connection-oriented protocol like TCP, as opposed to connectionless UDP:
Yes, of course. You cannot avoid it.
There are multiple ways to do it, but all of them boil down to this: send something from the client before the server's timeout elapses. If the client has no data to send, let it send something like a "life sign"; this could be an empty data message, it all depends on your application protocol. Or make the timeout as long as necessary, including some margin; some protocols only time out after 3 times the allowed idle time.
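As an illustration of the "life sign" idea, here is a minimal C sketch. It assumes a hypothetical one-byte heartbeat message and a 60-second server idle timeout; the real message format and interval depend entirely on your application protocol.

    #include <time.h>
    #include <sys/socket.h>

    #define SERVER_IDLE_TIMEOUT 60                        /* seconds, as configured on the server */
    #define HEARTBEAT_INTERVAL (SERVER_IDLE_TIMEOUT / 3)  /* keep a generous margin */

    static time_t last_send;  /* update after every successful send(); set at connect time */

    /* Call this regularly from the client's main loop. */
    void maybe_send_heartbeat(int fd)
    {
        if (time(NULL) - last_send < HEARTBEAT_INTERVAL)
            return;  /* we sent something recently, nothing to do */

        unsigned char heartbeat = 0x00;  /* hypothetical "empty" keep-alive message */
        if (send(fd, &heartbeat, 1, 0) == 1)
            last_send = time(NULL);
    }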
You cannot close the connection atomically, because client and server are separated. Each packet on the network needs some time to be transmitted, and both can start sending at the very same moment, the server its closing message, and the client a new data message. There is nothing that you can do about this.
You need to make the client handle this situation properly. For example, it can accept such a broken connection and interpret it as closed. You should already have some reaction in place for the case where the server closes the connection while the client is idle.
How to close the connection between A and B properly (avoid B sending request during the process).
Server detects timeout
Server sends timeout detection message to the Client
Server waits for a reply (if timeout, assume Client dead)
if Client receives a timeout detection from the Server, it replies with ACK (or something like that)
if Server receives an ACK from the Client, then 'gracefully' closes the connection
from now on, neither the Server nor the Client should send/receive any messages (after sending the ACK, do not immediately close the connection from the client side, do linger for the agreed timeout - see setsockopt: SO_LINGER)
On top of that, like the other answers suggested, the Client should send a heartbeat if idle (to avoid timeout detections).
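A minimal sketch of the client side of the close procedure above, assuming a hypothetical one-byte MSG_ACK code for the acknowledgement; the server's timeout-detection message is assumed to have already been received and parsed, and the linger timeout is a placeholder for whatever the two sides agree on.

    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define MSG_ACK 0x02  /* hypothetical acknowledgement code */

    /* Call this when the client has received the server's timeout-detection message. */
    static void handle_timeout_notice(int fd)
    {
        unsigned char ack = MSG_ACK;
        char buf[128];

        /* Acknowledge the server's timeout notice. */
        if (send(fd, &ack, 1, 0) != 1)
            perror("send");

        /* Linger on close so the queued ACK is actually transmitted
           before the socket is torn down. */
        struct linger lg = { .l_onoff = 1, .l_linger = 5 };  /* 5 s; use the agreed timeout */
        setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof(lg));

        /* Stop sending, then drain until the server closes its side
           (recv() returning 0 means we have seen its FIN). */
        shutdown(fd, SHUT_WR);
        while (recv(fd, buf, sizeof(buf), 0) > 0)
            ;  /* discard anything still in flight */

        close(fd);
    }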

persistent tcp socket connection in c

I have a C service application that uses a TCP socket to connect to a server. The server sends data now and then, and my application sends a heartbeat every 15 seconds. But sometimes my side disconnects while the server seems to think the connection is still live. Now if I try to reconnect, the server refuses, as it holds only one connection per client at a time.
What is the best way to hold a persistent tcp connection?
Edit:
The server usually disconnects after 2 minutes without a heartbeat. So after I find that my connection is closed, it takes 2 minutes before I can successfully reconnect. I want to minimize this time.
The simplest fix is probably for the server to allow a new connection to replace an old connection rather than rejecting it. That would still keep only one connection to each client at a time.
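A sketch of that idea on the server side, assuming a single-client server and a hypothetical client_fd variable that holds the current connection (-1 when there is none):

    #include <unistd.h>
    #include <sys/socket.h>

    static int client_fd = -1;  /* current client connection, -1 if none */

    void accept_client(int listen_fd)
    {
        int new_fd = accept(listen_fd, NULL, NULL);
        if (new_fd < 0)
            return;  /* accept failed; log and handle as appropriate */

        if (client_fd >= 0) {
            /* A stale connection is still open, e.g. the client dropped
               without the server noticing. Close it and let the new
               connection take its place instead of rejecting the client. */
            close(client_fd);
        }
        client_fd = new_fd;
    }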

MQTT recv of a PUBLISH message and MQTT ping in C

I've got this problem in a test program where I'm developing an MQTT client. I'm subscribed to a topic, and after that I wait for PUBLISH messages from the server.
After a successful recv (of a PUBLISH message) or after a recv timeout, I send an MQTT PINGREQ to the server.
After a PINGREQ I wait for a PINGRESP, then I call recv again as if I were waiting for a PUBLISH message.
If the flow is this:
Client -> PINGREQ
Server -> PUBLISH
Server -> PINGRESP
then the server's PUBLISH message is lost. How can I solve this? I'm using MQTT at QoS 0; does it make sense to solve this problem at this QoS level, or is it smarter to handle this case at QoS 1?
I think you've got things a bit confused. PINGREQ/PINGRESP are used when there isn't any other network traffic passing between the client and server, in order to let both the client and server know if the connection drops.
Your client should keep track of when the last outgoing or incoming communication with the server was, and send a PINGREQ if it is about to exceed the keepalive interval it set with its CONNECT command. The server will disconnect the client at 1.5*keepalive if no communication is received. The client should assume the server has been disconnected if it does not receive a PINGRESP within keepalive of sending the PINGREQ.
The QoS level isn't that important, you have to ensure the keepalive timeout is maintained regardless.
It also occurs to me that it sounds like you're using blocking network calls - it might be best to move to non-blocking if you can to get more flexibility.
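A minimal sketch of that keepalive bookkeeping, assuming a 60-second keepalive and a hypothetical send_pingreq() helper that writes the 2-byte PINGREQ packet; a real client would drive this from its (ideally non-blocking) network loop.

    #include <stdbool.h>
    #include <time.h>

    #define KEEPALIVE 60  /* seconds, as announced in the CONNECT packet */

    void send_pingreq(void);      /* hypothetical: writes the 2-byte PINGREQ */

    static time_t last_io;        /* last outgoing or incoming traffic; set at CONNECT time */
    static time_t pingreq_sent;   /* 0 if no PINGREQ is outstanding */

    void on_any_traffic(void) { last_io = time(NULL); }
    void on_pingresp(void)    { pingreq_sent = 0; }

    /* Call this periodically from the main loop.
       Returns false if the connection should be considered dead. */
    bool keepalive_tick(void)
    {
        time_t now = time(NULL);

        if (pingreq_sent && now - pingreq_sent >= KEEPALIVE)
            return false;  /* no PINGRESP in time: assume the server is gone */

        if (!pingreq_sent && now - last_io >= KEEPALIVE) {
            send_pingreq();
            pingreq_sent = now;
            last_io = now;
        }
        return true;
    }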

SYN receives RST,ACK very frequently

Hi Socket Programming experts,
I am writing a proxy server on Linux for SQL server 2005/2008 running on Windows.
The proxy is written in C using BSD sockets, and it works fine apart from the problem described below.
When I use a database client (written in Java and running on a Linux box) to fire queries (with a concurrency of 100 or more) directly at the database server, I do not experience connection resets. But through my proxy I experience many connection resets.
Digging deeper, I came to know that the connection from 'DB client' to 'Proxy' always succeeds,
but when the 'Proxy' tries to connect to the DB server the connection sometimes fails, because the SYN packet gets an RST,ACK in response.
That was to give some background. The question is:
Why does a SYN sometimes receive an RST,ACK?
DB client(linux) to Server(windows) ----> Works fine
DB client(linux) to Proxy(Linux) to Server(windows) -----> problematic
I am aware that this can happen in the "connection refused" case, but this is definitely not that. SYN flooding might be another scenario, but that does not explain the fine behavior when firing at the server directly.
I suspect some socket option may need to be set, something the client does before connecting that my proxy does not. Please shed some light on this. Any help (links or pointers) is most appreciated.
Additional info:
I wrote a C client that makes concurrent connections and takes the concurrency as an argument. Here are my observations:
-> At 5000 concurrency and above, some connects failed with 'connection refused'.
-> Below 2000, it works fine.
But the actual problem is observed even at a concurrency of 100 or more.
Note: The problem is time-dependent; sometimes it never occurs at all and sometimes it is very frequent, while the DB client (directly to the server) works fine at all times.
SQL Server needs worker threads to accept incoming connections. If your server is worker-starved (which can easily be diagnosed by a high number of entries in sys.dm_os_tasks in the PENDING state), then attempting to open a new connection will fail. So what I suspect is happening is that you're pushing more workload to the server than it can handle. You need to optimize the workload or get a beefier server.
Clients like the Java client make effective use of connection pooling and, even under high load, do not need to open new connections, hence you do not see this problem; instead you only see delays in request completion.
A listening socket keeps a queue of established connections and of connections in the process of being established (e.g. SYN received, SYN-ACK replied, but no ACK from the client yet). If the established queue overflows, the IP stack's reaction differs by OS. The most traditional approach was to ignore newly arriving SYNs, waiting until userland accept()s and frees a slot in the queue. With the SYN flooding attacks of the mid-90s, a new method named "SYN cookies" was invented, which removes the need for the establishment queue entirely, at the cost of having to support a special TCP option.
OTOH I have heard that Windows stacks changed their behavior - under some conditions, the reaction to queue overflow is an RST response. In earlier stacks (e.g. Win95) this was the main response, and the client side was correspondingly changed to ignore an RST response to a SYN :(
That's why I guess that some feature of the proxy host triggers the RST in the Windows stack.
Another guess is that the DB server closes the listening socket entirely under some condition (e.g. a detected overload peak) which appears only with the proxy.
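If the RST-on-overflow theory holds, one client-side mitigation in the proxy is to retry the connect a few times with a short backoff, mimicking the "ignore RST to SYN" behavior of the old client stacks mentioned above. A rough sketch, assuming IPv4/TCP and a hypothetical retry count and backoff:

    #include <errno.h>
    #include <unistd.h>
    #include <sys/socket.h>

    /* Returns a connected socket, or -1 if all attempts failed. */
    int connect_with_retry(const struct sockaddr *addr, socklen_t addrlen, int attempts)
    {
        for (int i = 0; i < attempts; i++) {
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            if (fd < 0)
                return -1;
            if (connect(fd, addr, addrlen) == 0)
                return fd;  /* connected */
            int err = errno;
            close(fd);
            if (err != ECONNREFUSED && err != ECONNRESET)
                return -1;  /* some other failure: give up */
            usleep(100 * 1000);  /* 100 ms backoff before the next attempt */
        }
        return -1;
    }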
When the SYN gets an RST response, it should not be a problem in SQL Server itself,
because the application can only accept the socket after the TCP handshake has finished.
Is there any device between the proxy and the SQL Server machines?
Try to make sure that the RST response really comes from the SQL Server machine.
Your connection count is far from a SYN flood, I think.

How to detect connection closed by localhost?

I have written a TCP client and server in C. These programs run on the same computer ONLY.
1) My TCP client sends a command to the server (localhost).
2) The server works hard and gives a response.
The problem is that if the TCP client closes the connection, I am unable to detect the closed connection WHILE the server is doing its long work. I can only detect it with the send function, but by then it is too late because the server has already done the work.
I understand that this detection must be very hard if the machines are remote. In my case it is the same machine, which should make the task easier, but I have not found a solution... Could it be done with the select function?
Thank you.
You can do it with select like this:
Use select for read events on the socket, so select unblocks when the socket becomes readable
When a request arrives at the server, start work in a different thread
When the client closes the connection, select will unblock and recv will read 0 bytes. If so, you can stop the worker thread
If the worker thread finishes the task without being interrupted, it can send the result
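A minimal sketch of that approach, assuming hypothetical start_work(), cancel_work(), work_done() and send_result() hooks into the worker thread; only the socket handling is shown, the request is assumed to have been read already, and MSG_DONTWAIT assumes Linux.

    #include <sys/select.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Hypothetical worker-thread hooks (not part of any library): */
    void start_work(int fd);
    void cancel_work(int fd);
    int  work_done(int fd);
    void send_result(int fd);

    void serve_request(int client_fd)
    {
        start_work(client_fd);  /* kick off the long job in another thread */

        for (;;) {
            fd_set rfds;
            FD_ZERO(&rfds);
            FD_SET(client_fd, &rfds);

            struct timeval tv = { .tv_sec = 1, .tv_usec = 0 };
            int n = select(client_fd + 1, &rfds, NULL, NULL, &tv);

            if (n > 0 && FD_ISSET(client_fd, &rfds)) {
                char buf[64];
                ssize_t r = recv(client_fd, buf, sizeof(buf), MSG_DONTWAIT);
                if (r == 0) {               /* client closed the connection */
                    cancel_work(client_fd); /* stop the worker, its result is no longer needed */
                    close(client_fd);
                    return;
                }
            }
            if (work_done(client_fd)) {
                send_result(client_fd);     /* worker finished: send the response */
                close(client_fd);
                return;
            }
        }
    }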
