How to NACK Google Cloud PubSub message through their REST API? - google-cloud-pubsub

The closest thing to NACK a message seems to be to set modifyAckDeadline to 0.
The documentation for modifyAckDeadline mentions that:
This method is useful ... to make the message available for redelivery if the processing was interrupted.
Is this really the default way to do it (it seems a bit hackish), or am I missing something?

Calling modifyAckDeadline with a value of 0 is exactly the way to NACK a message. That is what the client libraries do as well, e.g., Java. This essentially tells Pub/Sub that the client wants no more time to ack the message, which means it becomes a candidate for redelivery.
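For reference, the REST call is a POST to the subscription's modifyAckDeadline method. Below is a minimal sketch using Python's requests library; the project, subscription and ackId values are placeholders, and an OAuth access token is assumed to be available (e.g. from gcloud auth print-access-token).

import requests

# Placeholders: substitute your own project, subscription, ackId and token.
PROJECT = "my-project"
SUBSCRIPTION = "my-subscription"
ACK_ID = "ack-id-returned-by-a-pull-call"
ACCESS_TOKEN = "ya29...."  # e.g. from `gcloud auth print-access-token`

url = (f"https://pubsub.googleapis.com/v1/projects/{PROJECT}"
       f"/subscriptions/{SUBSCRIPTION}:modifyAckDeadline")

# ackDeadlineSeconds=0 tells Pub/Sub the client wants no more time to ack,
# making the message a candidate for redelivery (i.e. a NACK).
body = {"ackIds": [ACK_ID], "ackDeadlineSeconds": 0}

resp = requests.post(url, json=body,
                     headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
resp.raise_for_status()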

Related

TCP Sockets in C: Does the recv() function trigger sending the ACK?

I'm working with TCP sockets in C but don't yet really understand "how far" the delivery of data is ensured.
My main problem is that in my case the server sometimes sends a message to the client and expects an answer shortly after. If the client doesn't answer in time, the server closes the connection.
When reading through the man pages of the recv() function in C, I found the MSG_PEEK flag, which lets me look/peek into the stream without actually reading the data.
But does the server even care if I read from the stream at all?
Let's say the server "pushes" a series of messages into the stream and a client should receive them.
As long as the client doesn't call recv(), those messages will stay in the stream, right?
I know about ACK messages being sent when receiving data, but is the ACK sent when I call the recv() function, or is the ACK already sent when the message successfully reached its destination and could (emphasising could) be received by the client if it chooses to call recv()?
My hope is to trick the server into thinking the message wasn't completely sent yet, because the client has not called recv() yet. Therefore the client could already evaluate the message by using the MSG_PEEK flag and ensure it always answers in time.
Of course I know the timeout thing with my server depends on the implementation. My question basically is whether peeking lets the server think the message hasn't reached its destination yet, or whether the server won't even care, and when the ACK is sent when using recv().
I read the man pages on recv() and the wiki on TCP but couldn't really figure out how recv() takes part in the process. I found some similar questions on SO but no answer to my question.
TL;DR
Does the recv() function trigger sending the ACK?
No, not on any regular OS. Possibly on an embedded platform with an inefficient network stack. But it's almost certainly the wrong problem anyway.
Your question about finessing the details of ACK delivery is a whole can of worms. It's an implementation detail, which means it is highly platform-specific. For example, you may be able to modify the delayed ACK timer on some TCP stacks, but that might be a global kernel parameter, if it even exists.
However, it's all irrelevant to your actual question. There's almost no chance the server is looking at when the packet was received, because it would need its own TCP stack to even guess that, and it still wouldn't be reliable (TCP retransmission can keep backing off and retrying for minutes). The server is looking at when it sent the data, and you can't affect that.
The closest you could get is if the server uses blocking writes and is single-threaded and you fill the receive window with un-acked data. But that will probably delay the server noticing you're late rather than actually deceiving it.
Just make your processing fast enough to avoid a timeout instead of trying to lie with TCP.
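For completeness, here is roughly what peeking versus consuming looks like, as a sketch in Python (whose socket module exposes the same MSG_PEEK flag as the C API; the peer address is just a placeholder). As explained above, neither call affects when the TCP stack sends its ACKs.

import socket

# Placeholder peer; any TCP server that sends data back will do.
sock = socket.create_connection(("example.com", 80))
sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")

# Peek: look at up to 64 bytes without removing them from the receive buffer.
peeked = sock.recv(64, socket.MSG_PEEK)

# A normal recv() still returns those same bytes, because the peek left them
# in place. Neither call has any effect on when the kernel ACKs the data.
data = sock.recv(64)
print(peeked == data[:len(peeked)])
sock.close()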

pubsub streaming pull nack vs no acknowledge behaviour

nack() has the following behaviour:
nack()
"""Decline to acknowldge the given message.
This will cause the message to be re-delivered to the subscription.
Now in streaming pull, I am pulling the taxiride streaming data and tested the following behavior.
with nack()
Streaming pull continues to receive messages which were previously nacked()
Neither nack() nor ack()
Streaming pull reads an initial batch of messages and then waits for a long time. I waited for almost 15 minutes but it didn't pull any new messages.
Now my question is: in streaming pull, when a message is neither ack()ed nor nack()ed, what is the expected behavior and the right way to process these messages?
Let's say I want to count backlog messages every minute as a processing requirement?
When a message is neither acked nor nacked, the Cloud Pub/Sub client library maintains the lease on the message for up to the maxAckExtensionPeriod. Once that time period has passed, the message will be nacked and redelivered. The reason you are not getting any more messages when you neither ack nor nack is likely because you are running into the values specified in the flowControlSettings, which limits the number of messages that can be outstanding and not yet acked or nacked.
It is generally an anti-pattern to neither ack nor nack messages. If you successfully process the message, you should ack it. If you are unable to process it (say some type of transient exception occurs) then you should nack it. Trying to receive messages without performing one of these actions isn't really going to be an effective way to count the number of messages in the backlog.
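To make this concrete, here is a minimal streaming pull sketch with the Python client library, assuming a placeholder subscription path; the flow control settings are what cap the number of outstanding (neither acked nor nacked) messages.

from concurrent.futures import TimeoutError
from google.cloud import pubsub_v1

# Placeholder subscription path; substitute your own project/subscription.
subscription_path = "projects/my-project/subscriptions/taxiride-sub"

subscriber = pubsub_v1.SubscriberClient()

# max_messages caps how many messages may be outstanding (pulled but neither
# acked nor nacked) at once; once that cap is hit the stream stops delivering.
flow_control = pubsub_v1.types.FlowControl(max_messages=100)

def callback(message):
    try:
        print("received", message.data)  # stand-in for real processing
        message.ack()                    # success: remove it from the backlog
    except Exception:
        message.nack()                   # failure: make it redeliverable

future = subscriber.subscribe(subscription_path, callback=callback,
                              flow_control=flow_control)

with subscriber:
    try:
        future.result(timeout=60)        # run for a minute, then stop
    except TimeoutError:
        future.cancel()
        future.result()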

STARTTLS timeout with GMail on a slow processor

We have a datalogger running a relatively slow processor (7.3 MHz) that has instructions to send email using SMTP. The datalogger operating system uses axTLS to support TLS connections. When connecting to a service, such as GMail, that requires the use of TLS, it can take the datalogger some time (12 seconds or more) to perform the calculations required to complete the handshake. While this is taking place, GMail times out our client connection. Is there some kind of SSL heartbeat or keepalive message that can be sent while the datalogger finishes the required calculations?
Just imagine that all clients took such a long time with the SSL handshake. This would tie up lots of resources at the server and would actually look more like a denial-of-service attempt such as slowloris.
Thus the client is expected to be a well-behaved citizen on the internet and be fast enough to handle the handshake in a short time. Using a processor whose speed was state of the art 30 years ago simply is not sufficient to connect to services which are designed to be used by current clients.
Thus, if you want to use such services from a lowest-end device, you should instead transmit the data to your own relay server which is willing to wait that long; this relay can then deliver the data at the expected speed to the public servers.
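If you go the relay route, a very rough sketch of the idea in Python is below; the hostnames, port, addresses and credentials are all placeholders, and error handling is omitted. The relay accepts plaintext from the datalogger on the local network and does the TLS/SMTP work on a machine fast enough to complete the handshake in time.

import smtplib
import socketserver
from email.message import EmailMessage

# Placeholder credentials and addresses for illustration only.
GMAIL_USER = "logger@example.com"
GMAIL_PASS = "app-password"

class Handler(socketserver.StreamRequestHandler):
    def handle(self):
        # Read whatever the datalogger sends until it closes the connection.
        payload = self.rfile.read().decode()
        msg = EmailMessage()
        msg["From"] = GMAIL_USER
        msg["To"] = "alerts@example.com"
        msg["Subject"] = "datalogger report"
        msg.set_content(payload)
        # The relay machine completes the STARTTLS handshake quickly,
        # so Gmail's timeout is no longer an issue.
        with smtplib.SMTP("smtp.gmail.com", 587) as smtp:
            smtp.starttls()
            smtp.login(GMAIL_USER, GMAIL_PASS)
            smtp.send_message(msg)

if __name__ == "__main__":
    with socketserver.TCPServer(("", 2525), Handler) as server:
        server.serve_forever()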
I found the RFC (RFC 6520) that discusses the HeartBeatRequest extension for TLS. This RFC has the following text:
A HeartbeatRequest message can arrive almost at any time during the lifetime of a connection. Whenever a HeartbeatRequest message is received, it SHOULD be answered with a corresponding HeartbeatResponse message.
However, a HeartbeatRequest message SHOULD NOT be sent during handshakes. If a handshake is initiated while a HeartbeatRequest is still in flight, the sending peer MUST stop the DTLS retransmission timer for it. The receiving peer SHOULD discard the message silently, if it arrives during the handshake.
This leads me to believe that, although it MAY be possible to extend the handshake timeout this way, it is unlikely.

Reliable check if tcp packet has been delivered [duplicate]

When I send()/write() a message over a TCP stream, how can I find out if those bytes were successfully delivered?
The receiver acknowledges receiving the bytes via TCP, so the sender's TCP stack should know.
But when I send() some bytes, send() returns immediately, even if the packet could not (yet) be delivered. I tested that on Linux 2.6.30 using strace on netcat, pulling my network cable out before sending some bytes.
I am developing an application where it is very important to know whether a message was delivered, but reimplementing TCP features ("ack for message #123") feels awkward; there must be a better way.
The sending TCP does know when the data gets acknowledged by the other end, but the only reason it does this is so that it knows when it can discard the data (because someone else is now responsible for getting it to the application at the other side).
It doesn't typically provide this information to the sending application, because (despite appearances) it wouldn't actually mean much to the sending application. The acknowledgement doesn't mean that the receiving application has got the data and done something sensible with it - all it means is that the sending TCP no longer has to worry about it. The data could still be in transit - within an intermediate proxy server, for example, or within the receiving TCP stack.
"Data successfully received" is really an application-level concept - what it means varies depending on the application (for example, for many applications it would only make sense to consider the data "received" once it has been synced to disk on the receiving side). So that means you have to implement it yourself, because as the application developer, you're really the only one in a position to know how to do it sensibly for your application.
Having the receiver send back an ack is the best way, even if it "feels awkward". Remember that IP might break your data into multiple packets and re-assemble them, and this could be done multiple times along a transmission if various routers in the way have different MTUs, and so your concept of "a packet" and TCP's might disagree.
Far better to send your "packet", whether it's a string, a serialized object, or binary data, and have the receiver do whatever checks it needs to do to make sure it's there, and then send back an acknowledgement.
The TCP protocol tries very hard to make sure your data arrives. If there is a network problem, it will retransmit the data a few times. That means anything you send is buffered and there is no timely way to make sure it has arrived (there will be a timeout 2 minutes later if the network is down).
If you need a fast feedback, use the UDP protocol. It doesn't use any of the TCP overhead but you must handle all problems yourself.
Even if it got as far as the TCP layer, there's no guarantee that it didn't sit in the application's buffer and then the app crashed before it could process it. Use an acknowledgement; that's what everything else does (e.g. SMTP).
Application layer has no control over the notifications at lower layers (such as the Transport layer) unless they are specifically provided - this is by design. If you want to know what TCP is doing on a per packet level you need to find out at the layer that TCP operates at; this means handling TCP headers and ACK data.
Any protocol you end up using to carry your payload can be used to pass messages back and forth by way of that payload, however. So if you feel awkward using the bits of a TCP header to do this, simply set it up in your application. For instance:
A: Send 450 Bytes
B: Recv 450 Bytes
B: Send 'B received 450 Bytes'
A: Recv 'B received 450 Bytes'
A: Continue
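A rough Python sketch of that exchange, with made-up host, port and framing for illustration, might look like this:

import socket

# --- sender (A) ---
def send_with_app_ack(host, port, payload):
    with socket.create_connection((host, port)) as sock:
        sock.sendall(payload)
        # Block until the receiver explicitly confirms it processed the data.
        reply = sock.recv(64)
        return reply == b"OK"

# --- receiver (B) ---
def serve_once(port):
    with socket.create_server(("", port)) as srv:
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(4096)
            # ... validate / persist the data here ...
            conn.sendall(b"OK")   # the application-level acknowledgement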
This sounds like SCTP could be something to look at; I think it should support what you want. The alternative seems to be to switch to UDP, and if you're switching protocols anyway…

How can I explicitly wait for a TCP ACK before proceeding?

Is there a way to get send() to wait until all the data that has been sent has been ACK-ed (or return -1 if the timeout for an ACK has been reached), or is there some other mechanism to wait for the ACK after the send() but before doing something else?
I am using the standard Unix Berkeley sockets API.
I know I could implement an application-layer ACK, but I'd rather not do that when TCP's ACK serves the purpose perfectly well.
AFAIK there is no way.
Also, it wouldn't be reliable: the ACK only means that the kernel received the data; in the meantime the client or its machine could have crashed. You would think the client received the data, but it actually never processed it.
Unfortunately the standard API doesn't provide any appropriate way to do this. There could be a way to query the current TCP send window size/usage, but unfortunately it cannot be queried by standard means.
Of course there are tricky ways to achieve what you want. For instance, on Windows one may create a network filter driver to monitor packet-level traffic.
I seem to have found a solution. At least on Linux, if you set SO_SNDBUF to a value of 0, it seems to wait for every transaction before allowing the next transfer through. While send() will still return immediately, it will not allow another send to succeed until the previous one has actually been sent. I haven't tried using select(...) to determine if the data has been sent.
This works on my Linux 3.8 kernel, and I am confident it works elsewhere.
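If you want to experiment with that, the setsockopt call itself looks like this in Python; whether a zero-length send buffer actually produces the blocking behaviour described above is platform-specific (some stacks clamp the value to a minimum), and the peer address is a placeholder.

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Ask the kernel for a zero-length send buffer. On the answerer's Linux 3.8
# this reportedly makes each send wait for the previous one to drain; other
# platforms may simply round the value up to a minimum instead.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 0)

sock.connect(("example.com", 80))      # placeholder peer
sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
print("send buffer size now:",
      sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
sock.close()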
You can write a wrapper for send(). I have answered a question similar to this in another thread.
