How to cancel transmission of messages of a particular protocol? (UnetStack)

I want to cancel the transmission of messages that use the DATA protocol. How can I use ClearReq to cancel the transmission of messages that use the DATA protocol, but not messages that use other protocols?

ClearReq is supported by some agents (e.g. PHY) to stop any ongoing transmissions/receptions at the next safe opportunity. However, if the transmission was due to a higher-level protocol (e.g. a reliable `DatagramReq`), that protocol may initiate a re-transmission down the road.
DatagramCancelReq is supported by many agents that implement the DATAGRAM service. When supported, this request cancels a specified previous DatagramReq (if the id of that request is given), or all ongoing datagram transmissions by that agent (if no id is specified).
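For illustration, a minimal sketch in the UnetStack (Groovy) shell. The agent reference phy, the destination address, and the exact parameters accepted by DatagramCancelReq are assumptions made for this sketch; check the documentation of the agent you are addressing:

// send a datagram using the DATA protocol, keeping a reference to the request
def req = new DatagramReq(to: 2, protocol: Protocol.DATA, data: [1, 2, 3] as byte[])
phy << req

// cancel only that transmission; datagrams using other protocols are unaffected
// (assumes the agent supports cancellation of a specific datagram by message ID)
phy << new DatagramCancelReq(id: req.messageID)

// ClearReq, by contrast, aborts whatever the agent is currently doing,
// regardless of which protocol the ongoing transmission uses
phy << new ClearReq()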

Related

How does Windows handle timeouts after SetCommTimeouts?

Does Windows reconnect at that level, or do I have to do it in the application layer?
Perhaps you are assuming a TCP/IP session, but there is no such concept in the serial port API.
A serial port is a peer-to-peer, physically cabled connection that allows communication once a program opens the port at each end.
Timeouts are set separately for several read/write parameters; see the API documentation for details.
For both the read and write timeout values, a call to the Read/Write API reports a timeout error if the specified number of bytes cannot be sent or received within the specified time.
Even if those errors occur, the connection between the ports is maintained; there is no concept of, or API for, reconnecting at the serial port level.
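For reference, a minimal sketch of how these timeouts are configured on an already-open port handle (the values are illustrative, not recommendations):

#include <windows.h>

/* Set per-call read/write timeouts on an open serial port handle. */
BOOL set_serial_timeouts(HANDLE hComm)
{
    COMMTIMEOUTS ct = {0};
    ct.ReadIntervalTimeout         = 50;   /* max ms allowed between two received bytes */
    ct.ReadTotalTimeoutMultiplier  = 10;   /* ms added per requested byte */
    ct.ReadTotalTimeoutConstant    = 100;  /* ms added per ReadFile call */
    ct.WriteTotalTimeoutMultiplier = 10;   /* same scheme applies to WriteFile */
    ct.WriteTotalTimeoutConstant   = 100;
    return SetCommTimeouts(hComm, &ct);    /* FALSE on failure; the port stays open */
}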
Rather than closing and re-opening the port whenever an error occurs, the programmer should check that the program conforms to the communication settings and protocol specification of the connected device.
Depending on the device's protocol specification, those errors may simply mean that there is no data to report, or that the device is busy and not yet ready to receive data.
In that case, simply repeat the Read/Write until it succeeds, as in the sketch below.
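Such a retry loop might look like the following sketch, which assumes total timeouts have been configured as above, so that ReadFile returns TRUE with zero bytes read when the timeout expires:

#include <windows.h>

/* Keep reading until data arrives, treating a timeout as "no data yet". */
DWORD read_with_retry(HANDLE hComm, char *buf, DWORD want)
{
    DWORD got = 0;
    for (;;) {
        if (!ReadFile(hComm, buf, want, &got, NULL))
            return 0;      /* hard error: inspect GetLastError() */
        if (got > 0)
            return got;    /* data arrived */
        /* got == 0: the timeout expired with nothing received; per the
           device's protocol this may simply mean "not ready", so retry */
    }
}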
Other devices may have strict state transitions defined by something like a finite state machine, with command/response/error-handling specifications.
Therefore, the question cannot be answered meaningfully without specifying the connected device.

Problem receiving multicast traffic from several groups on one socket

I am working on a C application that listens to several multicast groups on a single socket, with the socket option IP_MULTICAST_ALL disabled.
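For context, a minimal sketch of that setup on Linux (error handling omitted; the group addresses and port are placeholders supplied by the caller):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

/* One non-blocking UDP socket joined to several multicast groups, with
 * IP_MULTICAST_ALL disabled so the socket receives traffic only for the
 * groups it has explicitly joined (IP_MULTICAST_ALL is Linux-specific). */
int open_multicast_socket(const char **groups, int ngroups, unsigned short port)
{
    int fd = socket(AF_INET, SOCK_DGRAM | SOCK_NONBLOCK, 0);
    int off = 0;
    setsockopt(fd, IPPROTO_IP, IP_MULTICAST_ALL, &off, sizeof(off));

    struct sockaddr_in local;
    memset(&local, 0, sizeof(local));
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    local.sin_port = htons(port);
    bind(fd, (struct sockaddr *)&local, sizeof(local));

    for (int i = 0; i < ngroups; i++) {
        struct ip_mreq mreq;
        memset(&mreq, 0, sizeof(mreq));
        inet_pton(AF_INET, groups[i], &mreq.imr_multiaddr);
        mreq.imr_interface.s_addr = htonl(INADDR_ANY);
        setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));
    }
    return fd;
}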
The socket receives traffic from 20 different multicast groups, and this traffic arrives in bursts. One of those groups publishes only one message per second, and no problem has been observed with it.
I also run a reliable protocol (RUDP) on top of these multicast feeds: if a listener misses a message, the protocol recovers it by exchanging messages with the source, which then retransmits it on the same channel as usual.
The problem appears when bursts of messages arrive at the socket and the RUDP protocol forces the retransmission of those messages. The messages arrive without problem, but once the bursty groups stop transmitting because they have no more traffic to send, the socket sometimes (it is quite easy to reproduce) fails to read the pending incoming messages from those groups, and only does so when a periodic message arrives from the different group (the one with tiny, periodic traffic).
The situation up to here: many previously sent messages are pending to be read by the application (no more data is being sent on these groups), while periodic messages keep arriving from the other group.
What I observe is that the application reads one message from the periodic group and then a batch of messages from the other (bursty) groups. The socket is non-blocking, and I get EAGAIN every time such a batch has been read; there is no more data to read until the socket receives a new message from the periodic group, at which point that message is read, followed by another batch of the pending messages from the other groups (the application reads from one single socket only). I made sure the bursty groups produce no more data by stopping the processes that send it, so all the pending messages from those groups had already been sent.
The most surprising fact: if I prevent the process that writes to the periodic group from sending more messages, the listening socket magically receives all the pending traffic from the groups that published bursts earlier. It is as if the traffic of the periodic group somehow stalls the processing of the traffic from the groups that publish no new data, even though the buffers are full of it.
At first I thought it was related to IGMP or to the poll mechanism (my application can do either busy waiting or blocking waiting). The blocking mode is implemented on top of the non-blocking socket: when errno is set to EAGAIN, the app waits on poll for new messages. I get the same behavior in both modes.
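For reference, the blocking-waiting mode described above corresponds to a loop like this sketch, where handle_message is a hypothetical consumer of each datagram:

#include <errno.h>
#include <poll.h>
#include <sys/socket.h>
#include <sys/types.h>

void handle_message(const char *msg, size_t len);   /* hypothetical */

/* Drain the non-blocking socket until EAGAIN, then block in poll(). */
void rx_loop(int fd)
{
    char buf[2048];
    struct pollfd pfd = { .fd = fd, .events = POLLIN };
    for (;;) {
        ssize_t n = recv(fd, buf, sizeof(buf), 0);
        if (n >= 0) {
            handle_message(buf, (size_t)n);   /* one datagram per recv() */
            continue;                         /* keep draining the queue */
        }
        if (errno == EAGAIN || errno == EWOULDBLOCK)
            poll(&pfd, 1, -1);                /* wait for the next datagram */
        else
            break;                            /* hard error */
    }
}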
I don't think it is IGMP, because IGMP snooping is off in the switches, and because I can reproduce the same behavior using one computer's loopback for all of the communication between processes.
I can also reproduce the behavior using kernel-bypass technologies (not using the kernel API for networking), so it does not seem related to the TCP/IP stack. With kernel bypass the model is the same: one message interface that receives all the traffic from all the groups. In that scenario all the processes use this mechanism to communicate, not a mix of kernel sockets and kernel bypass; the model is homogeneous.
How can it be that I receive only batches of messages (but not all of them) while live traffic arrives from several groups, yet I receive all the pending traffic as soon as I stop the periodic traffic coming from the other multicast group? That periodic group sends only one message per second, and the bursty groups publish nothing more, since all their messages have already been sent.
Please, does someone have an idea of what I should check next?

Why doesn't a TCP keep-alive packet trigger an I/O event? Is it because it has no payload, or because its sequence number is one less than the connection's sequence number?

I want my application layer to be notified when my server receives a keep-alive packet, and I am wondering why a keep-alive packet doesn't trigger an I/O event. Is it because the TCP keep-alive packet carries no data, or because its sequence number is one less than the connection's sequence number?
I ran some tests in which my client sent keep-alive packets. My server uses epoll, but it was never triggered.
I am also wondering: if I padded the keep-alive packet with one byte of data/payload, would my application be notified (an I/O event / epoll trigger)?
You should not be surprised by that. For example, you are not notified of RST packets either.
Those are transport-level messaging details. At the application level, TCP gives you a stream of bytes, independent of such low-level details. If you want application-level heartbeats, you should implement them in your application-level protocol.
Your latest edit seems to stem from some confusion. You can't add data to keep-alive packets, for two reasons:
First, they are sent by the TCP stack, and the application has no control over them (besides the timeouts).
More importantly, if by some (dark) magic you managed to interfere with the stack (say, by patching your kernel :) and started putting data into them, they would stop being keep-alive packets and become normal data packets carrying data. Then, of course, your receiver would be notified of the data, which would become part of the message stream.
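To make the contrast concrete, this is roughly how kernel keep-alive probes are configured on Linux. The probes never leave the TCP stack, so epoll will not report them; an application-level heartbeat has to be ordinary payload bytes instead. The interval values here are illustrative:

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Enable kernel TCP keep-alive probes on a connected socket (Linux). */
void enable_keepalive(int fd)
{
    int on = 1, idle = 60, intvl = 10, cnt = 3;
    setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle));    /* idle seconds before first probe */
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl)); /* seconds between probes */
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof(cnt));       /* failed probes before the connection drops */
}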

When during the socket lifetime should I set the TCP_QUICKACK option?

I know why I should use it, but I'm not sure where to put the setsockopt call in my socket code.
It is clear to me that the option can be modified by the socket API's internal mechanisms, but when exactly should I set the TCP_QUICKACK option with setsockopt?
Should I set it at socket creation and then again after (or before?) each receive and send? Or only after receives?
Should I check whether the option is already set?
The IETF offers TCP Tuning for HTTP, draft-stenberg-httpbis-tcp-03. Section 4.4 of the document explains:
Delayed ACK [RFC1122] is a mechanism enabled in most TCP stacks that causes the stack to delay sending acknowledgement packets in response to data. The ACK is delayed up until a certain threshold, or until the peer has some data to send, in which case the ACK will be sent along with that data. Depending on the traffic flow and TCP stack this delay can be as long as 500ms.

This interacts poorly with peers that have Nagle's Algorithm enabled. Because Nagle's Algorithm delays sending until either one MSS of data is provided or until an ACK is received for all sent data, delaying ACKs can force Nagle's Algorithm to buffer packets when it doesn't need to (that is, when the other peer has already processed the outstanding data).

Delayed ACKs can be useful in situations where it is reasonable to assume that a data packet will almost immediately (within 500ms) cause data to be sent in the other direction. In general in both HTTP/1.1 and HTTP/2 this is unlikely: therefore, disabling Delayed ACKs can provide an improvement in latency.

However, the TLS handshake is a clear exception to this case. For the duration of the TLS handshake it is likely to be useful to keep Delayed ACKs enabled.

Additionally, for low-latency servers that can guarantee responses to requests within 500ms, on long-running connections (such as HTTP/2), and when requests are small enough to fit within a small packet, leaving delayed ACKs turned on may provide minor performance benefits. Effective use of switching off delayed ACKs requires extensive profiling.
Later in the document it offers the following:
On recent Linux kernels (since Linux 2.4.4), Delayed ACKs can be disabled like this:
int one = 1;
setsockopt(fd, IPPROTO_TCP, TCP_QUICKACK, &one, sizeof(one));
Unlike disabling Nagle’s Algorithm, disabling Delayed ACKs on Linux is not a one-time operation: processing within the TCP stack can cause Delayed ACKs to be re-enabled. As a result, to use TCP_QUICKACK effectively requires setting and unsetting the socket option during the life of the connection.
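A common pattern, therefore, is to re-apply the option around every receive; a minimal sketch:

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>
#include <unistd.h>

/* Read from the socket and immediately re-arm TCP_QUICKACK, since the
 * stack may have silently re-enabled delayed ACKs in the meantime. */
ssize_t recv_quickack(int fd, void *buf, size_t len)
{
    ssize_t n = read(fd, buf, len);
    int one = 1;
    setsockopt(fd, IPPROTO_TCP, TCP_QUICKACK, &one, sizeof(one));
    return n;
}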

STARTTLS timeout with GMail on a slow processor

We have a datalogger running a relatively slow processor (7.3 MHz) that has instructions to send email using SMTP. The datalogger operating system uses axTLS to support TLS connections. When connecting to a service, such as GMail, that requires the use of TLS, it can take the datalogger some time (12 seconds or more) to perform the calculations required to complete the handshake. While this is taking place, GMail times out our client connection. Is there some kind of SSL heartbeat or keepalive message that can be sent while the datalogger finishes the required calculations?
Just imagine that all clients took such a long time with the SSL handshake. This would tie up lots of resources at the server and would resemble a denial-of-service attempt like slowloris.
A client is therefore expected to be a well-behaved citizen of the internet and to complete the handshake quickly. A processor whose speed was state of the art 30 years ago is simply not sufficient to connect to services designed for current clients.
Thus, if you want to use such services from a lowest-end device, you should instead transmit the data to your own relay server, which is willing to wait that long; the relay can then deliver the data at the expected speed to the public servers.
I found the RFC that defines the Heartbeat extension for TLS (RFC 6520). It contains the following text:
A HeartbeatRequest message can arrive almost at any time during the lifetime of a connection. Whenever a HeartbeatRequest message is received, it SHOULD be answered with a corresponding HeartbeatResponse message.

However, a HeartbeatRequest message SHOULD NOT be sent during handshakes. If a handshake is initiated while a HeartbeatRequest is still in flight, the sending peer MUST stop the DTLS retransmission timer for it. The receiving peer SHOULD discard the message silently, if it arrives during the handshake.
This leads me to believe that, although it MAY be possible to extend the handshake timeout, heartbeat messages are unlikely to achieve it, since they must not be sent during the handshake.
