Closing websocket connections using libcurl when server sends close signal - c

I'm not an advanced user, so please bear with me.
I'm trying to implement a WebSocket client using libcurl, and I'm good until the last step of a connection - termination.
The general logic is as follows:
Client connects and sends an upgrade request.
Websocket server accepts/upgrades and starts sending gibberish.
Client adds up all the gibberish sizes.
Server sends a closing signal after 10 secs.
So far so good. I'm not processing the payloads of incoming messages, and I don't want to. I have very limited resources and I don't want to experience any performance loss in order to check each payload and search for a close signal.
I'm using libcurl's easy interface and receive data with curl_easy_perform(). Is there any way to detect a close signal, or close the websocket connection after 10 secs?

Close signals are part of the WebSocket protocol at the framing layer (see RFC 6455 Sections 1.4, 5, and 5.5.1).
AFAIK, libcurl doesn't natively support WebSockets, just HTTP (which a WebSocket uses for its opening handshake, so you can fake it with libcurl). So, if libcurl doesn't process the WebSocket frames for you, you would have to process them yourself, even if you ignore their payloads.
Otherwise, just set a 10-second timer for yourself and close the underlying TCP connection directly, which you can get from libcurl using curl_easy_getinfo(CURLINFO_ACTIVESOCKET).
But, if the server is sending you a close signal, you SHOULD send one back, per Section 5.5.1, which means parsing the frames properly:
If an endpoint receives a Close frame and did not previously send a Close frame, the endpoint MUST send a Close frame in response. (When sending a Close frame in response, the endpoint typically echos the status code it received.) It SHOULD do so as soon as practical. An endpoint MAY delay sending a Close frame until its current message is sent (for instance, if the majority of a fragmented message is already sent, an endpoint MAY send the remaining fragments before sending a Close frame). However, there is no guarantee that the endpoint that has already sent a Close frame will continue to process data.
After both sending and receiving a Close message, an endpoint considers the WebSocket connection closed and MUST close the underlying TCP connection. The server MUST close the underlying TCP connection immediately; the client SHOULD wait for the server to close the connection but MAY close the connection at any time after sending and receiving a Close message, e.g., if it has not received a TCP Close from the server in a reasonable time period.
If a client and server both send a Close message at the same time, both endpoints will have sent and received a Close message and should consider the WebSocket connection closed and close the underlying TCP connection.
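If you do end up parsing frames yourself, detecting a Close frame only requires looking at the opcode in the low four bits of the first header byte; you never have to touch the payload. A minimal sketch (my own illustration, assuming server-to-client frames, which are unmasked per RFC 6455, and ignoring 64-bit extended lengths):

```c
#include <stddef.h>
#include <stdint.h>

#define WS_OPCODE_CLOSE 0x8

/* Return the opcode of the frame starting at buf, or -1 if too short. */
static int ws_frame_opcode(const uint8_t *buf, size_t len)
{
    if (len < 2)
        return -1;
    return buf[0] & 0x0F; /* low 4 bits of the first header byte */
}

/* Total frame size (header + payload), so the caller can skip the payload
 * without inspecting it.  Server-to-client frames carry no masking key.
 * Returns 0 if the header is incomplete or uses a 64-bit length. */
static size_t ws_frame_size(const uint8_t *buf, size_t len)
{
    if (len < 2)
        return 0;
    uint8_t plen = buf[1] & 0x7F;
    if (plen == 126)                 /* 16-bit extended payload length */
        return len < 4 ? 0 : 4 + (((size_t)buf[2] << 8) | buf[3]);
    if (plen == 127)                 /* 64-bit length: omitted in this sketch */
        return 0;
    return 2 + plen;
}
```

In a CURLOPT_WRITEFUNCTION callback you could walk the buffer frame by frame with ws_frame_size() and stop once ws_frame_opcode() returns 0x8; returning a byte count different from what libcurl handed you makes curl_easy_perform() abort with CURLE_WRITE_ERROR, which tears down the connection.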

Related

Batch receive with Spring AMQP

We are using @RabbitListener just fine to process one message after another, sending the generated emails via JavaMail to an SMTP server.
Now there is a request to close the connection to the SMTP server after a specific count of messages. I have read something about ChannelAwareMessageListener and manual acks. That way you can acknowledge a whole batch of messages with a single ack, but I need to read some messages and then confirm only those that were sent to SMTP successfully; the others need to be dead-lettered.
Any other ideas how to close the SMTP connection after a count of messages?
Keep a list of the unacked delivery tags, then call channel.basicAck(goodTag, false) for the successful messages and channel.basicReject(badTag, false) for the failed ones when you are ready.
You should only do that on the listener thread.

TCP/IP: receiving messages in order with recv() without keeping a buffer

I have a client-server program. They communicate by exchanging single characters.
ex:
client --send A-> server
then
client <-recv A'-- server
But my server sends the replies back out of order.
ex:
client --A--> --B--> --C--> server
then
client <--A'-- <--C'-- <--B'-- server
what I want:
client <--A'-- <--B'-- <--C'-- server
so I want to handle this situation in the client program.
The only way I can figure out is to keep a buffer of the data received from the server, so the client can check that B' has arrived and then that C' has arrived, in order.
Is there any way to do this in the client without using a buffer?
Ideally you should send a request and then wait for the reply, sending the next request only after receiving the earlier reply.
But apparently your client sends without waiting for replies, and the server replies to whichever request it finishes processing first.
In that case you need to devise a mechanism in your client and server programs based on the data being sent and received.
You could decide that your client and server will exchange data in a (header + data) format, something like:
length (2 bytes) - length of the actual data
sequence number (2 bytes) - incremented for each request
actual data (length bytes)
After sending a request over the socket, keep the request data in a pending list of requests awaiting a reply from the server.
The server, on receiving a request in the above format, writes its reply into the 'actual data' section, updates the length in the header, keeps the sequence number as it is, and sends the reply back to the client.
The client then matches the sequence number of each reply against the items in its pending list to find the request that the reply belongs to.
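That header layout can be sketched in C roughly as follows (my own illustration; the function names are made up, and the two 16-bit fields are assumed to travel in network byte order):

```c
#include <stdint.h>

/* Wire format: length (2 bytes) | sequence number (2 bytes) | data */
#define HDR_SIZE 4

/* Write the header into out[0..3] in network byte order (big-endian). */
static void pack_header(uint8_t out[HDR_SIZE], uint16_t length, uint16_t seq)
{
    out[0] = length >> 8; out[1] = length & 0xFF;
    out[2] = seq >> 8;    out[3] = seq & 0xFF;
}

/* Read the payload length back out of a received header. */
static uint16_t hdr_length(const uint8_t hdr[HDR_SIZE])
{
    return (uint16_t)((hdr[0] << 8) | hdr[1]);
}

/* Read the sequence number back out of a received header. */
static uint16_t hdr_seq(const uint8_t hdr[HDR_SIZE])
{
    return (uint16_t)((hdr[2] << 8) | hdr[3]);
}

/* Find the index of the pending request with this sequence number,
 * or -1 if no such request is waiting. */
static int find_pending(const uint16_t *pending, int npending, uint16_t seq)
{
    for (int i = 0; i < npending; i++)
        if (pending[i] == seq)
            return i;
    return -1;
}
```

On each reply the client would read HDR_SIZE bytes, then hdr_length() more bytes of data, and use hdr_seq() with find_pending() to pair the reply with its request, regardless of arrival order.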

Closing the client after the server closes the file descriptor with close(fd)

I have a scenario where the server should close the connection if the client is inactive for about 120 seconds, so that the fd can be reused after that period of inactivity. I wrote the following code to close the file descriptor:
if ((int)time(NULL) - (int)value_data->timeout > 120)
{
    zlog_warn(_c, "timeout of 120s for device = %s", key);
    close(value_data->fd);
    g_hash_table_iter_remove(&iter);
    (*_collector_free_tcp_cache_cb)(value_data->device_ip, value_data->fd);
    AFREE(value_data->device_ip);
    AFREE(value_data);
}
Using close(fd), the connection fd is closed on the server side. The problem I am facing is that the next time the client connects, the server crashes and the client is not able to send data. Looking at the TCP state on the client side, it shows the connection as:
TCP 192.168.2.138:50296->192.168.2.161:shell (CLOSE_WAIT)
How can I send data when the next client connects? Is there a certain time I need to wait before the same fd can be reused to send data?

FTP implementation: close data socket every time

I'm implementing a sort of FTP protocol in C.
I have a server running.
I start the client, connect to the server, and then send a GET file.txt request.
The client parses the command, sees it's a GET command, and opens a listening socket for the data connection.
The server receives the command, starts the data connection to the client, and starts sending file.txt over that connection.
When the server has sent the file, it closes the data socket.
When I want to GET another file, the port is already in use. How can I prevent this? Should I keep the data connection open for the whole command-connection session? In that case, how can my client know when the file is over?
Thanks
When a socket is closed, it enters the TIME_WAIT state (see the TCP state diagram for the possible states), and no other socket can be bound to the same address/port pair until the socket leaves TIME_WAIT and reaches the CLOSED state.
You can get around this by setting the SO_REUSEADDR socket option, which allows a socket to be bound to an address even if another socket on that address is still in the TIME_WAIT state.
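The option has to be set before bind() to affect the bind. A minimal sketch (the helper name is my own):

```c
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

/* Create a TCP socket that may rebind an address/port pair still
 * sitting in TIME_WAIT.  Returns the fd, or -1 on error. */
static int make_reusable_socket(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    int yes = 1;
    /* Must be set before bind() to have any effect on the bind. */
    if (setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes)) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}
```

The data-connection listener would call this instead of a bare socket(), then bind() and listen() as usual, and the "port already in use" error for the second GET goes away.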
You need to open a new data socket for each transfer, since the server closes it when the transfer finishes.
You will know the file has been downloaded/uploaded by reading the response from the FTP server and checking the status code (226 or 250) - see the list of FTP server return codes:
https://en.wikipedia.org/wiki/List_of_FTP_server_return_codes
In my project I use apache-commons-net: I keep the command connection alive with a heartbeat command and enter local passive mode every time I do a file transfer.
The principle is the same for your situation; I suggest sending an
EPSV
command before GETting file.txt.
Refer: https://commons.apache.org/proper/commons-net/

Handling a streaming server

I have a server that produces data as fast as it can and sends it over a socket. The server uses a queue, with a producer thread and a consumer thread that sends the produced data out over the socket to the client.
The problem is reading the data on the client side. How do I design a client to handle the data without it being out of sync?
If I send an acknowledgement from the client to the server I lose the concurrency speed on the server side. How can I write/design a client to handle the incoming data fast enough?
Do I need to implement a queue on the client side?
Unless you have a requirement that you must use something other than TCP, just let TCP do the job of flow control for you. Let the client consume the data as fast as it wants to, and the server will block after it sends more data than the client is prepared to consume and it fills up the TCP window.
TCP will never get out of sync in the sense that data on the socket will always be delivered in order. But the server may certainly have sent out more data than the client has consumed and so it may have moved on to sending the next batch of data while the client is still consuming the previous one. Is this what you mean by out of sync?
You don't want to make the client send an acknowledge before the server starts on the next task because that will cost an RTT (round trip time, i.e. the time for the last of one batch of data to arrive at the client and for the acknowledge to go back), which will slow down your protocol on a high-latency link.
If you don't want this RTT price, you are inevitably going to have to allow either:
for the client to request more than one batch at a time. You can use a tagged protocol like IMAP for this: the client submits several jobs at once on one socket, each with its own tag. The server responds to each request, with the tags in the header of each response so the client knows which response goes with which request. When the client has seen "enough" responses, it submits more requests. The client gets to control how many requests can be ongoing at the same time. If the client allows only one at a time, this degenerates to the simple ACK case with the RTT cost.
for the server to work a little ahead of the client, sending several responses to the client before the client has acknowledged the first one. After the pipe is filled to the maximum number of unacknowledged jobs the server is willing to allow, it waits for acknowledgements and sends one additional job response for each acknowledge it receives from the client. If the server allows only one outstanding job, this degenerates to the simple ACK case as above. If the server allows too many unacknowledged jobs at a time, this degenerates to just filling up TCP's buffers and counting on TCP flow control to block the server until the client is ready to accept more data.
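The windowing in the second option boils down to a counter of in-flight responses on the server side. A pure-logic sketch (names and window size are my own, for illustration):

```c
#include <stdbool.h>

/* Track how many responses are in flight, capped at a fixed window. */
typedef struct {
    int outstanding;   /* responses sent but not yet acknowledged */
    int window;        /* maximum unacknowledged responses allowed */
} send_window;

/* May the server send another response right now? */
static bool can_send(const send_window *w)
{
    return w->outstanding < w->window;
}

/* Record that one response was sent. */
static void on_send(send_window *w)
{
    if (can_send(w))
        w->outstanding++;
}

/* Record that one acknowledge arrived, freeing a slot. */
static void on_ack(send_window *w)
{
    if (w->outstanding > 0)
        w->outstanding--;
}
```

With window == 1 this degenerates to the simple ACK case; with a very large window it degenerates to relying on TCP flow control alone, exactly as described above.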
