I need to check whether my subscription has done any pulls in the last 15 days. Which metric should I consider?
I'm new to these Google Pub/Sub Stackdriver metrics and am getting confused by them.
Can anyone tell me the difference between these metrics:
subscription/pull_message_operation_count
subscription/pull_request_count
subscription/pull_ack_message_operation_count
subscription/pull_ack_request_count
The difference between "operation_count" and "request_count" comes from the fact that both pull and ack allow batching, i.e., a pull response can return multiple messages and an ack request can contain the IDs of multiple messages to ack. For pull, subscription/pull_request_count is the number of pull requests you make, while subscription/pull_message_operation_count is the number of messages returned to you in responses to those pull requests. Similarly, subscription/pull_ack_request_count is the number of calls you make to ack, and subscription/pull_ack_message_operation_count is the number of messages acknowledged via those ack requests.
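To make the batching concrete, here is a minimal sketch (my own illustration, not from the question or answer) using the Python client's synchronous pull API; "my-project" and "my-subscription" are placeholders. One pull request returning up to 10 messages increments pull_request_count once but pull_message_operation_count by up to 10, and one acknowledge call carrying all of the ack IDs does the same for the two ack metrics.

from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("my-project", "my-subscription")

# One pull request (one increment of pull_request_count) that may return up to
# 10 messages (each one counted in pull_message_operation_count).
response = subscriber.pull(
    request={"subscription": subscription_path, "max_messages": 10}
)

ack_ids = [msg.ack_id for msg in response.received_messages]

if ack_ids:
    # One acknowledge request (pull_ack_request_count) carrying many ack IDs,
    # each of which is counted in pull_ack_message_operation_count.
    subscriber.acknowledge(
        request={"subscription": subscription_path, "ack_ids": ack_ids}
    )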
I was building a service that runs on Cloud Run and is triggered by Pub/Sub through Eventarc.
Pub/Sub guarantees at-least-once delivery and will retry a message every time the acknowledgement deadline expires. This deadline is set in the subscription details.
When a service receives a Pub/Sub request (which arrives as a POST request to the service), there are two points at which we could send an acknowledgement back:
1. At the beginning of the request, as soon as it is received. The service would then continue to process the request at its own pace. However, this article points out that
When an application running on Cloud Run finishes handling a request, the container instance's access to CPU will be disabled or severely limited. Therefore, you should not start background threads or routines that run outside the scope of the request handlers.
So sending a response at the beginning may not be an option.
2. After the request has been processed by the service. This would mean that, depending on what the service does, we cannot always predict how long processing will take. Hence we cannot set the acknowledgement deadline correctly, resulting in Pub/Sub retries and duplicate requests.
So what is the best practice here? Is there a better way to handle this?
Best practice is generally to ack a message once the processing is complete. In addition to the Cloud Run limitation you linked, consider that if the endpoint acked a message immediately upon receipt and then an error occurred in processing it, your application could lose that message.
To minimize duplicates, you can set the ack deadline to an upper bound of the processing time. (If your endpoint ends up processing messages faster than this, the ack deadline won’t rate-limit incoming messages.) If the 600s deadline is not sufficient, you could consider writing the message to some persistent storage and then acking it. Then, a separate worker can asynchronously process the messages from persistent storage.
Since you are concerned that you might not be able to set the correct acknowledgement deadline, you can use modify_ack_deadline() in your code to dynamically extend the deadline while the process is still running. You can refer to this document for sample code implementations.
Be aware that the maximum acknowledgement deadline is 600 seconds, so make sure that your processing in Cloud Run does not exceed that limit.
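As a hedged sketch of the modify_ack_deadline() approach, assuming the Python client's synchronous pull API: do_work(), the 60-second extension, and the 30-second check interval are illustrative choices, not part of the answer above.

import time
from concurrent.futures import ThreadPoolExecutor

from google.cloud import pubsub_v1


def do_work(message) -> None:
    """Placeholder for the actual long-running processing."""
    time.sleep(90)


subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("my-project", "my-subscription")

response = subscriber.pull(
    request={"subscription": subscription_path, "max_messages": 1}
)

with ThreadPoolExecutor(max_workers=1) as executor:
    for received in response.received_messages:
        future = executor.submit(do_work, received.message)
        while not future.done():
            # Keep pushing the deadline out while processing is still running.
            subscriber.modify_ack_deadline(
                request={
                    "subscription": subscription_path,
                    "ack_ids": [received.ack_id],
                    "ack_deadline_seconds": 60,
                }
            )
            time.sleep(30)
        if future.exception() is None:
            # Acknowledge only after the work finished successfully.
            subscriber.acknowledge(
                request={
                    "subscription": subscription_path,
                    "ack_ids": [received.ack_id],
                }
            )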
Acknowledgements do not apply to Cloud Run here, because acks are for "pull subscriptions", where a process is continuously pulling from the Cloud Pub/Sub API.
To get events from Pub/Sub into Cloud Run, you use "push subscriptions", where Pub/Sub makes an HTTP request to Cloud Run and waits for it to finish.
In this push scenario, Pub/Sub already knows it made a request to you (you received the event), so it does not need a separate acknowledgement of receipt. However, if your endpoint returns an error response code (e.g. HTTP 500), Pub/Sub will make another request to retry (and this behavior is configurable on the push subscription itself).
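For illustration, a minimal sketch of such a push endpoint, assuming a Flask app (Flask and the process() helper are my assumptions, not from the answer). Returning any 2xx tells Pub/Sub the delivery succeeded; an error status or a timeout causes Pub/Sub to retry according to the push subscription's settings.

import base64

from flask import Flask, request

app = Flask(__name__)


def process(data: str) -> None:
    """Placeholder for the real request handling."""
    print(data)


@app.route("/", methods=["POST"])
def handle_push():
    envelope = request.get_json()
    if not envelope or "message" not in envelope:
        return "Bad Request: not a Pub/Sub message", 400

    data = base64.b64decode(envelope["message"].get("data", "")).decode("utf-8")
    try:
        process(data)
    except Exception:
        # A non-2xx response makes Pub/Sub redeliver the message later.
        return "Processing failed", 500

    # Respond only after processing is done; any 2xx acknowledges the delivery.
    return "", 204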
We recently integrated Google Pub/Sub into our app, and some of our long-running tasks are now a problem, as they sometimes take more than 1 minute. We have configured our subscriber's ack deadline to 600 seconds, yet anything that takes more than 600ms is being retried by Pub/Sub.
This is our config:
gcloud pubsub subscriptions describe name
ackDeadlineSeconds: 600
expirationPolicy: {}
messageRetentionDuration: 604800s
Not sure what the issue is. Most of our tasks get repeated because of this.
Pub/Sub has a built-in at-least-once delivery system that retries messages that were not acknowledged. In this case, once the 600s ack deadline has passed, the message you first received becomes unacknowledged, so Pub/Sub redelivers it. It will keep retrying (every 600s in your case) until the messageRetentionDuration is reached or you acknowledge the message.
Keep in mind that the documentation specifies that your subscriber should be idempotent, so making your code able to handle duplicate messages is the best approach to this issue.
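A minimal sketch of that idempotent approach, assuming the Python client's streaming pull; the in-memory set and handle() are illustrative stand-ins, and real code would track processed message IDs in durable storage.

from concurrent.futures import TimeoutError

from google.cloud import pubsub_v1

processed_ids = set()  # illustrative only; not durable across restarts


def handle(data: bytes) -> None:
    """Placeholder for the actual business logic."""
    print(data)


def callback(message: pubsub_v1.subscriber.message.Message) -> None:
    if message.message_id in processed_ids:
        message.ack()  # duplicate delivery that was already handled: just ack
        return
    handle(message.data)
    processed_ids.add(message.message_id)
    message.ack()


subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("my-project", "my-subscription")
streaming_pull_future = subscriber.subscribe(subscription_path, callback=callback)

try:
    streaming_pull_future.result(timeout=60)  # listen for a minute in this sketch
except TimeoutError:
    streaming_pull_future.cancel()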
You could also decrease the messageRetentionDuration to 600s (its minimum), so anything that passes the 10-minute mark will not be retried.
Also, it is stated in the FAQs that:
Why are there too many duplicate messages?
Cloud Pub/Sub guarantees at-least-once message delivery, which means that occasional duplicates are to be expected. However, a high rate of duplicates may indicate that the client is not acknowledging messages within the configured ack_deadline_seconds, and Cloud Pub/Sub is retrying the message delivery. This can be observed in the monitoring metrics: pubsub.googleapis.com/subscription/pull_ack_message_operation_count for pull subscriptions, and pubsub.googleapis.com/subscription/push_request_count for push subscriptions. Look for elevated expired or webhook_timeout values in the /response_code. This is particularly likely if there are many small messages, since Cloud Pub/Sub may batch messages internally and a partially acknowledged batch will be fully redelivered.
Another possibility is that the subscriber is not acknowledging some messages because the code path processing those specific messages fails, and the Acknowledge call is never made; or the push endpoint never responds or responds with an error.
I have set up a watch/subscription on the topic using the following code.
request = {
    'labelIds': ['INBOX'],
    'topicName': 'projects/myproject/topics/mytopic'
}
gmail.users().watch(userId='me', body=request).execute()
How can I get the status of the topic at any given point in time? The problem is that sometimes I am not getting the push from Gmail for incoming emails.
From the Cloud Pub/Sub perspective, if you want to check on the status of messages, you could look at metrics via Stackdriver. There are many Cloud Pub/Sub metrics available. You can create graphs on any of the metrics mentioned below by going to Stackdriver, creating a new dashboard, clicking "Add Chart," and then typing the name of the metric into the "Find resource type and metric" box.
The first thing you have to determine is whether the issue is on the publish side (from Gmail into your topic) or on the subscribe side (from the subscription to your push endpoint). To determine whether the topic is receiving messages, look at the topic/send_message_operation_count metric. This should be non-zero at points where messages were sent from Gmail to the topic. If it is always zero, then it is likely that the connection from Gmail to Cloud Pub/Sub is not set up properly, e.g., you need to grant publish rights on the topic. Note that results are delayed, so it could take up to 5 minutes from the time you expect a message to have been sent until it is reflected on the graph.
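If you prefer to check this metric programmatically rather than through the Stackdriver UI, here is a hedged sketch assuming the google-cloud-monitoring client library; "my-project" and "mytopic" are placeholders, and the 24-hour window is arbitrary.

import time

from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
project_name = "projects/my-project"

now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {
        "end_time": {"seconds": now},
        "start_time": {"seconds": now - 24 * 3600},  # last 24 hours
    }
)

results = client.list_time_series(
    request={
        "name": project_name,
        "filter": (
            'metric.type = "pubsub.googleapis.com/topic/send_message_operation_count" '
            'AND resource.labels.topic_id = "mytopic"'
        ),
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)

for series in results:
    for point in series.points:
        print(point.interval.end_time, point.value.int64_value)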
If the messages are successfully being sent to Pub/Sub, then you'll want to see the status of attempts to receive those messages. If your subscription is a push subscription, then you'll want to look at subscription/push_request_count for the subscription. Results are grouped by response code. If the responses are in the 400 or 500 ranges, then Cloud Pub/Sub is attempting to deliver messages to your subscriber, but the subscriber is returning errors. In this case, it is likely an issue with your subscriber itself.
If you are using the Cloud Pub/Sub client libraries, then you'll want to look at metrics like subscription/streaming_pull_message_operation_count to determine whether your subscriber is managing to fetch messages for the subscription. If you are calling the pull method directly in your subscriber, then you'll want to look at subscription/pull_message_operation_count to see if pull requests are returning successfully to your subscriber.
If the metrics for push, pull, or streaming pull indicate errors, that should help to narrow down the problem. If there are no requests at all, then it indicates that the subscribers may not be running at all. There could also be permission problems, e.g., the subscriber is running as a user that doesn't have permission to read from subscriptions.
What is the difference between topic/send_message_operation_count and topic/send_request_count in Google Pub/Sub?
The requirement is: although I can calculate the number of messages sitting in the subscription via num_undelivered_messages, I need to calculate how many messages are published to the topic.
Thanks,
Santosh
The docs for topic/send_message_operation_count say:
Cumulative count of publish message operations, grouped by result.
The docs for topic/send_request_count say:
Cumulative count of publish requests, grouped by result.
Pub/Sub allows multiple messages to be batched in a single API request. send_request_count is the number of publish API calls, while send_message_operation_count is the number of actual messages in those calls.
So if on average each request sends 10 messages, send_message_operation_count will be 10x send_request_count. If you publish one message per API call, they will be the same.
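For illustration, a hedged sketch of publisher-side batching with the Python client (the specific BatchSettings values are my assumptions): with these settings, up to 10 messages are combined into a single publish request, so send_request_count goes up by one while send_message_operation_count goes up by as many as 10.

from concurrent import futures

from google.cloud import pubsub_v1

batch_settings = pubsub_v1.types.BatchSettings(
    max_messages=10,        # combine up to 10 messages into one request
    max_bytes=1024 * 1024,  # or send earlier if the batch reaches 1 MB
    max_latency=1.0,        # or after waiting at most 1 second
)

publisher = pubsub_v1.PublisherClient(batch_settings=batch_settings)
topic_path = publisher.topic_path("my-project", "my-topic")

publish_futures = [
    publisher.publish(topic_path, f"message {i}".encode("utf-8")) for i in range(10)
]
futures.wait(publish_futures, return_when=futures.ALL_COMPLETED)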
Earlier today I noticed the following error in my logs:
503 Too many outstanding pull requests for subscription '<...>'.
Please reduce the number of simultaneous Pull() requests invoked for
this subscription. (POST https://pubsub.googleapis.com/v1/projects/<...>:pull)
I tried searching for the exact quantity of allowed simultaneous pull requests, but can't seem to find it anywhere. The only mention of this error I find is here in the docs, but again, no numbers are stated.
I have 40 processes pulling from the subscription.
This error happens when there is a transient overload on a specific instance of a Cloud Pub/Sub server to which some of your requests are being routed. The error message is admittedly poor; it should not tell you to reduce your pull requests, and I will remedy that.
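If you want to absorb these transient 503s in your own pulling code, one hedged option (my suggestion, not part of the answer) is to catch ServiceUnavailable and retry with exponential backoff; note that the Python client already retries many transient errors internally, so an outer loop like this is only a fallback.

import time

from google.api_core.exceptions import ServiceUnavailable
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("my-project", "my-subscription")


def pull_with_backoff(max_attempts: int = 5):
    delay = 1.0
    for _ in range(max_attempts):
        try:
            return subscriber.pull(
                request={"subscription": subscription_path, "max_messages": 10}
            )
        except ServiceUnavailable:
            # Transient server-side overload: back off and try again.
            time.sleep(delay)
            delay *= 2
    raise RuntimeError("pull kept returning 503 after repeated retries")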