It was working fine until yesterday and then suddenly stopped pushing to the endpoint. I checked all the settings, including the endpoint URL, and everything is unchanged. Can you suggest possible causes?
Not receiving a message on a push endpoint could happen for many reasons. The first thing to do would be to go to Stackdriver and create a graph for the subscription/push_request_count metric. You can break this down by response_code to see how many requests Cloud Pub/Sub is sending to your push endpoint and what response codes it is returning. If there are requests being delivered that are returning errors, this graph will show that.
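If you prefer to query the metric programmatically rather than building a chart, a minimal sketch with the Cloud Monitoring (Stackdriver) client library looks something like the following; the project ID, subscription name, and one-hour window are placeholders:
import time
from google.cloud import monitoring_v3

# Placeholders: substitute your own project and subscription.
project_id = "my-project"
subscription_id = "my-push-subscription"

client = monitoring_v3.MetricServiceClient()
now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": now}, "start_time": {"seconds": now - 3600}}
)

# Count push requests over the last hour, labeled by HTTP response code.
results = client.list_time_series(
    request={
        "name": f"projects/{project_id}",
        "filter": (
            'metric.type = "pubsub.googleapis.com/subscription/push_request_count" '
            f'AND resource.labels.subscription_id = "{subscription_id}"'
        ),
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)
for series in results:
    code = series.metric.labels.get("response_code")
    total = sum(point.value.int64_value for point in series.points)
    print(f"response_code={code}: {total} push requests")
Swapping the metric type for "pubsub.googleapis.com/topic/send_message_operation_count" gives you the publish-side view described next.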
It might also be worth checking the publish side to ensure messages are still being published as expected. You can look at the topic/send_message_operation_count metric, which can also be broken down by response_code, to make sure the publish requests are all returning success.
You should also check to ensure the subscription still exists using the Pub/Sub Subscriptions page in the Cloud console. After 30 days of inactivity (including inability to successfully deliver a message to a push endpoint), subscriptions are potentially deleted.
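If you'd rather check from code than the console, something along these lines (project and subscription names are placeholders) will tell you whether the subscription is still there:
from google.api_core import exceptions
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("my-project", "my-push-subscription")

try:
    sub = subscriber.get_subscription(request={"subscription": subscription_path})
    # For a push subscription, the configured endpoint is visible here.
    print("Subscription exists, push endpoint:", sub.push_config.push_endpoint)
except exceptions.NotFound:
    print("Subscription no longer exists; it may have expired and been deleted.")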
If the issue is still unresolved after those steps, it is best to contact Google Cloud support with your project ID and subscription name so that your specific case can be investigated.
I have a Cloud Pub/Sub push subscription that pushes multiple instances of the same message to a processing endpoint in GAE. I can track the message ID, and it's the same message that gets pushed multiple times.
I have set the ack deadline to 600 seconds, but it still pushes multiple instances of some of the messages. Apart from the message not getting acked, what can trigger this behavior? Has anyone had the same problem?
The issue seems to get worse the more instances I run, but even with basic_scaling and max_instances: 1 the problem remains.
I can see a bunch of 503 errors in GAE, but if I understand it correctly, that is not an issue since those messages automatically get retried by Pub/Sub.
As it turns out, this is a well-known characteristic of Pub/Sub. Pub/Sub offers at-least-once delivery, so duplicates are to be expected. To handle this, see the following for some inspiration on making your handlers idempotent: https://cloud.google.com/blog/products/serverless/cloud-functions-pro-tips-building-idempotent-functions
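As a rough sketch of the idempotency idea from that post, one common approach is to deduplicate on the Pub/Sub message ID before doing any work. Everything here is illustrative only: the Flask endpoint and the in-process set are placeholders, and a real deployment would use a shared store (e.g. Datastore or Redis) that expires old entries:
import base64
import json

from flask import Flask, request

app = Flask(__name__)

# Hypothetical store of already-processed message IDs.
# An in-process set only works for a single instance; it is used here for brevity.
processed_message_ids = set()

def do_work(payload):
    # Placeholder for the real processing logic.
    print("processing:", payload)

@app.route("/push-endpoint", methods=["POST"])
def handle_push():
    envelope = json.loads(request.data)
    message = envelope["message"]
    message_id = message["messageId"]

    if message_id in processed_message_ids:
        # Duplicate delivery: acknowledge again without redoing the work.
        return ("", 204)

    payload = base64.b64decode(message.get("data", "")).decode("utf-8")
    do_work(payload)
    processed_message_ids.add(message_id)

    # Returning a 2xx response acknowledges the message to Pub/Sub.
    return ("", 204)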
I am posting this as an answer because I don't have enough reputation to post it as a comment. :)
As you have already figured out, once Pub/Sub sends a message to a subscriber, the subscriber should acknowledge it. Cloud Pub/Sub will repeatedly attempt to deliver any message that has not been acknowledged (check here). This means that occasional duplicates are to be expected. However, a high rate of duplicates may indicate that the client is not acknowledging messages within the configured ack_deadline_seconds, and Cloud Pub/Sub is retrying the message delivery.
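If you want to double-check what deadline is actually configured on the subscription, and raise it if needed, a minimal sketch with the Python client library (project and subscription names are placeholders) would be:
from google.cloud import pubsub_v1
from google.protobuf import field_mask_pb2

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("my-project", "my-subscription")

# Check the currently configured deadline.
sub = subscriber.get_subscription(request={"subscription": subscription_path})
print("current ack_deadline_seconds:", sub.ack_deadline_seconds)

# Raise it, up to the maximum of 600 seconds, if the handler needs more time.
sub.ack_deadline_seconds = 600
update_mask = field_mask_pb2.FieldMask(paths=["ack_deadline_seconds"])
subscriber.update_subscription(
    request={"subscription": sub, "update_mask": update_mask}
)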
You could use Stackdriver to monitor whether Pub/Sub is delivering successfully and your messages are being acknowledged (check here & here), or whether there are too many duplicates (check here & here).
I am attempting to use the Gmail API to synchronize all the emails from a user's Gmail inbox. I am using the Partial Synchronization technique described in Gmail's "Synchronizing Clients" [1] documentation. One of the listed limitations is that, in rare cases, the historyId of certain emails is unavailable. Under these circumstances, the client is advised to fall back on Full Synchronization, where the documentation states that the client should "retrieve and store as many of the most recent messages or threads as are necessary for your purpose".
This all makes sense. When I have issues with Partial Synchronization, I attempt to look through an inbox's messages by time range. To do this, I effectively store a record of the (emailAddress, historyId, internalDate) of each email I sync, and then, when falling back on Full Synchronization, I attempt to sync all email since the most recent internalDate that I have already synced.
My issue is that the cases that seem to cause Partial Synchronization to fail also seem to cause Full Synchronization to fail, and many of these cases are caused by emails with internalDates in the future (I can't share these examples for privacy reasons). The failure case looks something like the following:
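For context, the flow looks roughly like the sketch below. The function names and stored-record lookup are simplified placeholders rather than my real code, and the fallback uses the after: operator with an epoch-seconds timestamp, which matches the intent of my hour-based newer_than query:
from googleapiclient.errors import HttpError


def sync_inbox(gmail, user_id, last_history_id, last_internal_date_ms):
    """Partial sync via history.list, falling back to a time-range full sync."""
    try:
        # Partial Synchronization: list changes since the stored historyId.
        resp = gmail.users().history().list(
            userId=user_id, startHistoryId=last_history_id
        ).execute()
        return resp.get("history", [])
    except HttpError as err:
        if err.resp.status != 404:
            raise
        # Full Synchronization fallback. internalDate is in milliseconds,
        # while the after: search operator expects whole seconds.
        after_seconds = last_internal_date_ms // 1000
        resp = gmail.users().messages().list(
            userId=user_id, q=f"after:{after_seconds}"
        ).execute()
        return resp.get("messages", [])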
I sync email E with historyId H and an internalDate I that is some time in the future
Some time passes
I receive a push notification from Google indicating that there are new emails to sync
I look up the most recent message that I have synced for this inboxId, finding email E
I attempt a partial sync using the listHistory [2] endpoint with historyId H
The listHistory request fails with a 404
I attempt a full sync using the listMessages [3] endpoint with the query newer_than:{hours_since-internalDate-I}, but this query makes no sense because the internalDate of this message is in the future
I can imagine a few different solutions to this problem. Perhaps I should simply ignore these emails as spam, or perhaps I should store a timestamp of when I synced each email and then perform a Full Synchronization on the timestamp I have stored.
Either way, this seems like a bug in the Gmail API, as the internalDate should really be when Gmail received the email. I initially suspected that this might be caused by Gmail's new scheduled-send feature and that the internalDate might be the time the email was scheduled for in the future, but I confirmed that some of the examples I have are definitely emails that the user's inbox received, not sent. I'm really not sure what to make of this edge case in the internalDate API.
So my question is: what is the advised way to handle bogus future internalDates? And is it a bug?
[1] https://developers.google.com/gmail/api/guides/sync
[2] https://developers.google.com/gmail/api/v1/reference/users/history/list
[3] https://developers.google.com/gmail/api/v1/reference/users/messages/list
If you're sure this is a bug, you can head to Google's Issue Tracker (template here) and report it so their engineering team can take a look and see what is causing this error. Alternatively, if this persists with other mails or users, you can open a support ticket directly with Google by going to your admin dashboard and selecting 'Contact Support' in the ? menu in the top right. This way Google can look into the erroneous internalDates without you needing to post any potentially sensitive data in a public forum.
In the meantime, you can work around this dynamically by making sure that you don't fetch mails with a time in the future, along these lines:
import time

# before:/after: accept an epoch timestamp in whole seconds.
now = int(time.time())
q = "newer_than:1h before:%d" % now
gmail.users().messages().list(userId="user@domain.ext", q=q).execute()
But remember that Gmail's internalDate field is in milliseconds since the Unix epoch, while the before:/after: search operators expect seconds, so convert accordingly.
I have set up a watch/subscription on the topic using the following code.
request = {
    'labelIds': ['INBOX'],
    'topicName': 'projects/myproject/topics/mytopic'
}
# The response includes the mailbox's current historyId and the watch expiration (epoch ms).
gmail.users().watch(userId='me', body=request).execute()
How can I get the status of the topic at any given point in time? The problem is, sometimes I am not getting the push from Gmail for any incoming emails.
From the Cloud Pub/Sub perspective, if you want to check on the status of messages, you could look at metrics via Stackdriver. There are many Cloud Pub/Sub metrics available. You can create graphs on any of the metrics mentioned below by going to Stackdriver, creating a new dashboard, clicking "Add Chart," and then typing the name of the metric into the "Find resource type and metric" box.
The first thing to determine is whether the issue is on the publish side (from Gmail into your topic) or on the subscribe side (from the subscription to your push endpoint). To determine whether the topic is receiving messages, look at the topic/send_message_operation_count metric. This should be non-zero at points where messages were sent from Gmail to the topic. If it is always zero, then it is likely that the connection from Gmail to Cloud Pub/Sub is not set up properly, e.g., you need to grant publish rights to the topic. Note that results are delayed, so it could take up to 5 minutes from the time you expect a message to have been sent until it is reflected on the graph.
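If the send count stays at zero, the usual fix is to grant Gmail's push service account publish rights on the topic. Assuming the standard gmail-api-push@system.gserviceaccount.com account described in the Gmail push notification docs, and placeholder project/topic names, a sketch with the Pub/Sub client library would be:
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("myproject", "mytopic")

# Add the Gmail push service account as a publisher on the topic.
policy = publisher.get_iam_policy(request={"resource": topic_path})
policy.bindings.add(
    role="roles/pubsub.publisher",
    members=["serviceAccount:gmail-api-push@system.gserviceaccount.com"],
)
publisher.set_iam_policy(request={"resource": topic_path, "policy": policy})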
If the messages are successfully being sent to Pub/Sub, then you'll want to see the status of attempts to receive those messages. If your subscription is a push subscription, then you'll want to look at subscription/push_request_count for the subscription. Results are grouped by response code. If the responses are in the 400 or 500 ranges, then Cloud Pub/Sub is attempting to deliver messages to your subscriber, but the subscriber is returning errors. In this case, it is likely an issue with your subscriber itself.
If you are using the Cloud Pub/Sub client libraries, then you'll want to look at metrics like subscription/streaming_pull_message_operation_count to determine whether your subscriber is attempting to fetch messages for the subscription. If you are calling the pull method directly in your subscriber, then you'll want to look at subscription/pull_message_operation_count to see if there are pull requests returning successfully to your subscriber.
If the metrics for push, pull, or streaming pull indicate errors, that should help narrow down the problem. If there are no requests at all, it indicates that the subscribers may not be running at all, or that there could be permission problems, e.g., the subscriber is running as a user that doesn't have permission to read from the subscription.
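As a quick sanity check on the subscribe side, you can also do a one-off synchronous pull yourself and see whether anything comes back. This only works if the subscription is a pull subscription (a push subscription would need its push config cleared first); the subscription name is a placeholder, and the snippet acknowledges whatever it receives, so only run it against a subscription you don't mind draining:
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path("myproject", "mytopic-sub")

# Pull up to 10 messages synchronously.
response = subscriber.pull(
    request={"subscription": subscription_path, "max_messages": 10}
)
print(f"received {len(response.received_messages)} message(s)")

# Acknowledge whatever was received so it is not redelivered.
ack_ids = [m.ack_id for m in response.received_messages]
if ack_ids:
    subscriber.acknowledge(
        request={"subscription": subscription_path, "ack_ids": ack_ids}
    )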
I'm looking to implement Google Wallet for Digital Goods subscriptions on my website.
I understand how it works with postbacks on start and cancellation.
I'm worried about what happens if a cancellation postback fails to reach my server. As I have a rather large number of subscriptions, checking manually would be bothersome, so I was wondering if there is any way to check subscription state by contacting Google Wallet servers (like the PayPal API).
How do you manage failed cancellation postbacks?
Thanks,
AFAIK, there is no API to "query" - it would be nice to have :) I recall asking a similar question in one of Google's developer hangouts about "repurposing" some of the now-deprecated Google Checkout API, which did have "query" APIs.
I'd suggest you mitigate things by logging all notifications - aka "notification history". If you experience a processing error on your end, you'd still have access to the "raw data".
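As a rough sketch of that idea (the web framework and file-based store are just placeholders, nothing Google Wallet specific), the key point is to persist the raw postback body before attempting any processing:
from flask import Flask, request

app = Flask(__name__)

def save_raw_notification(body):
    # Placeholder: append to a log table / file ("notification history").
    with open("notification_history.log", "a") as f:
        f.write(body + "\n")

def process_notification(body):
    # Placeholder for the real postback handling.
    pass

@app.route("/wallet/postback", methods=["POST"])
def handle_postback():
    raw_body = request.get_data(as_text=True)

    # Persist the raw notification first, so a later processing error
    # doesn't lose the data.
    save_raw_notification(raw_body)

    try:
        process_notification(raw_body)
    except Exception:
        # Already logged above; investigate and replay later.
        pass
    return ("", 200)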
Of course, this assumes two things: (1) Google never fails to send you a postback, and (2) your server(s) are always ready (if they're down, they can't receive).
Unless I'm corrected by a Googler, I don't believe I've seen a "retry policy" for errors on either end - e.g., in the GCO API, postbacks were resent until the merchant successfully "acknowledged" receipt of the postback. Until then, I think you're down to looking at the Merchant Center (manually).
Hth...
I've followed the Order Processing tutorial to receive and handle order notifications from Google Checkout in my GAE application.
Everything works OK in the sandbox environment. I can send a fake order and the app gets a notification and handles the order.
When I switch to the production environment and make a real order, I can see the order in the Google Checkout Merchant account but I don't receive any notifications.
To switch to production, I simply edit my ApiContext object to use Environment.Production and the real merchant ID and key. The integration settings are the same. I've also tried changing the callback URL to use HTTPS (https://blah.appspot.com/not), but still nothing.
What am I missing?
The "Integration Console" in the Checkout Merchant Center gives you information about the callbacks (XML sent, XML received, HTTP errors, etc.). Hopefully you can figure out the problem from that data. Some related links below:
https://checkout.google.com/support/sell/bin/answer.py?hl=en&answer=72217
http://code.google.com/apis/checkout/developer/Google_Checkout_XML_API.html#integration_issues_console
http://code.google.com/apis/checkout/articles/Troubleshoot_Integration_Console_Errors.html