As part of the SNS configuration, I followed all the steps described at https://docs.snowflake.com/en/user-guide/data-load-snowpipe-auto-s3.html#option-2-configuring-amazon-sns-to-automate-snowpipe-using-sqs-notifications. But I still don't see any message on the SQS queue, even when a new file is dropped into the S3 bucket. One thing I noticed is that the "Create subscription" step for adding the Snowflake SQS queue is missing. I went ahead and added the Snowflake SQS ARN as a subscription on the SNS topic, but the status shows "Pending confirmation". I am not sure where this confirmation can be approved. Any help on this is greatly appreciated.
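For what it's worth, you can at least see which subscriptions are stuck in that state programmatically. A minimal sketch, assuming boto3 and the `list_subscriptions_by_topic` call (until the endpoint confirms, the `SubscriptionArn` field literally reads "PendingConfirmation"); the filter itself is plain Python:

```python
# Sketch: find SNS subscriptions on a topic that are still pending
# confirmation. The boto3 wiring is shown in comments only.

def pending_subscriptions(subscriptions):
    """Return the subscriptions whose ARN is still 'PendingConfirmation'.

    `subscriptions` is the list of dicts returned by
    sns.list_subscriptions_by_topic(TopicArn=...)["Subscriptions"].
    """
    return [s for s in subscriptions
            if s.get("SubscriptionArn") == "PendingConfirmation"]

# With boto3 (assumed setup, not verified against your account):
#   import boto3
#   sns = boto3.client("sns", region_name="us-east-1")
#   subs = sns.list_subscriptions_by_topic(TopicArn=topic_arn)["Subscriptions"]
#   print(pending_subscriptions(subs))
```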
I have a GAE/P/Standard/FirstGen app that sends a lot of email with Sendgrid. Sendgrid sends my app a lot of notifications when email is delivered, opened, etc.
This is how I process the Sendgrid notifications:
1. My handler processes the Sendgrid notification and adds a task to a pull queue.
2. About once every minute, I lease a batch of tasks from the pull queue to process them.
This works great except when I am sending more emails than usual. When I am adding tasks to the pull queue at a high rate, the lease requests fail (they respond with TransientError), so the pull queue keeps filling up.
What is the best way to scale this procedure?
If I create a second pull queue and split the tasks between the two of them, will that double my capacity? Or is there something else I should consider?
====
This is how I add tasks:
from google.appengine.api import taskqueue

q = taskqueue.Queue("pull-queue-name")
q.add(taskqueue.Task(payload=data, method="PULL", tag=tag_name))
I have found some information about this in the Google documentation here. According to it, the solution for TransientError is to:
catch these exceptions, back off from calling lease_tasks(), and then
try again later.
etc.
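That advice can be sketched as a small retry wrapper. This is illustrative only: the `TransientError` class below stands in for `taskqueue.TransientError`, and `lease_fn` stands in for a call like `lambda: q.lease_tasks(3600, 100)`:

```python
import random
import time

class TransientError(Exception):
    """Stand-in for google.appengine.api.taskqueue.TransientError."""

def lease_with_backoff(lease_fn, max_attempts=5, base_delay=1.0):
    """Call lease_fn(), backing off exponentially on TransientError."""
    for attempt in range(max_attempts):
        try:
            return lease_fn()
        except TransientError:
            if attempt == max_attempts - 1:
                raise  # out of attempts; let the caller handle it
            # Exponential backoff with a little jitter: ~1s, 2s, 4s, ...
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```

This doesn't raise your throughput ceiling, but it stops a burst of TransientErrors from turning into a stream of failed lease calls.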
Actually, I suppose this is the App Engine Task Queue, not Cloud Tasks, which is a different product.
According to my understanding, there is no option to scale this better. It seems the solution might be to migrate to Cloud Tasks and Pub/Sub, which is a better way to manage queues in GAE, as you may find here.
I hope this helps somehow... :)
I have set up a watch/subscription on the topic using the following code.
request = {
    'labelIds': ['INBOX'],
    'topicName': 'projects/myproject/topics/mytopic'
}
gmail.users().watch(userId='me', body=request).execute()
How can I get the status of the topic at any given point in time? The problem is that sometimes I do not get the push from Gmail for incoming emails.
From the Cloud Pub/Sub perspective, if you want to check on the status of messages, you can look at metrics via Stackdriver. There are many Cloud Pub/Sub metrics available. You can create graphs on any of the metrics mentioned below by going to Stackdriver, creating a new dashboard, clicking "Add Chart," and then typing the name of the metric into the "Find resource type and metric" box.
The first thing you have to determine is whether the issue is on the publish side (from Gmail into your topic) or on the subscribe side (from the subscription to your push endpoint). To determine if the topic is receiving messages, look at the topic/send_message_operation_count metric. This should be non-zero at points where messages were sent from Gmail to the topic. If it is always zero, then it is likely that the connection from Gmail to Cloud Pub/Sub is not set up properly, e.g., you need to grant publish rights to the topic. Note that results are delayed, so from the time you expect a message to have been sent to when it would be reflected on the graph could be up to 5 minutes.
If the messages are successfully being sent to Pub/Sub, then you'll want to see the status of attempts to receive those messages. If your subscription is a push subscription, then you'll want to look at subscription/push_request_count for the subscription. Results are grouped by response code. If the responses are in the 400 or 500 ranges, then Cloud Pub/Sub is attempting to deliver messages to your subscriber, but the subscriber is returning errors. In this case, it is likely an issue with your subscriber itself.
If you are using the Cloud Pub/Sub client libraries, then you'll want to look at metrics like subscription/streaming_pull_message_operation_count to determine if your subscriber is managing to try to fetch messages for a subscription. If you are calling the pull method directly in your subscriber, then you'll want to look at subscription/pull_message_operation_count to see if there are pull requests returning successfully to your subscriber.
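If you'd rather pull these numbers programmatically than through the dashboard, the Monitoring API takes a filter string naming the metric type and the subscription. A sketch of building that filter, using the metric names above (`my-sub` is a placeholder subscription ID):

```python
# Sketch: build a Cloud Monitoring filter for a per-subscription
# Pub/Sub metric. The metric names are the ones discussed above.

def pubsub_metric_filter(metric, subscription_id):
    """Return a Monitoring API filter string for a subscription metric.

    `metric` is a suffix like "push_request_count" or
    "pull_message_operation_count".
    """
    return (
        'metric.type = "pubsub.googleapis.com/subscription/{}" AND '
        'resource.labels.subscription_id = "{}"'.format(metric, subscription_id)
    )

# The resulting filter would be passed to the Monitoring API's
# projects.timeSeries.list method (or the google-cloud-monitoring client).
print(pubsub_metric_filter("push_request_count", "my-sub"))
```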
If the metrics for push, pull, or streaming pull indicate errors, that should help to narrow down the problem. If there are no requests at all, then it indicates that the subscribers may not be running at all. There could also be permission problems, e.g., the subscriber is running as a user that doesn't have permission to read from subscriptions.
It was working fine until yesterday and suddenly stopped pushing to the endpoint. I checked all settings, including the endpoint URL, and found everything unchanged. Can you suggest possible causes?
Not receiving a message on a push endpoint could happen for many reasons. The first thing to do would be to go to Stackdriver and create a graph for the subscription/push_request_count metric. You can break this down by response_code to see how many requests Cloud Pub/Sub is sending to your push endpoint and what response codes it is returning. If there are requests being delivered that are returning errors, this graph will show that.
It might also be worth checking the publish side to ensure messages are still being published as expected. You can look at the topic/send_message_operation_count metric, which can also be broken down by response_code, to make sure the publish requests are all returning success.
You should also check to ensure the subscription still exists using the Pub/Sub Subscriptions page in the Cloud console. After 30 days of inactivity (including inability to successfully deliver a message to a push endpoint), subscriptions are potentially deleted.
If the issue is still unsolved after those steps, it is best to contact Google Cloud support with your project ID and subscription name so that your specific case can be investigated.
I know that in Google Cloud Pub/Sub, messages are lost after 7 days regardless of their acknowledgement state. Is there any way to send and store those messages in a file, CSV, or MQ beyond the 7 days? My aim is that whenever the publisher publishes a message, the message is also stored somewhere else.
Thanks,
santosh
There is no automatic way to store the messages that are published into Google Cloud Pub/Sub, but you could set up a subscriber that would store the messages as they are published. You would create a separate subscription on your topic that would be used for making the backups. Then, you would write a subscriber that reads messages using this subscription and immediately persists them in the desired place and format. You could use Cloud Dataflow to solve this by connecting a PubSubIO on the input side with a TextIO on the output side.
I'm looking to implement Google Wallet for digital goods subscriptions on my website.
I understand how it works with postback on start and cancellation.
I'm worried about the cancellation postback failing to reach my server. As I have a rather large number of subscriptions, checking manually would be bothersome, so I was wondering if there is any way to check a subscription's state by contacting Google Wallet's servers (like the PayPal API).
How do you manage failed cancellation postbacks?
Thanks,
AFAIK, there is no API to "query" - it would be nice to have :) I recall asking a similar question back in one of Google's developer hangouts about "repurposing" some of the now-deprecated Google Checkout API, which did have query APIs.
I'd suggest you mitigate things by logging all notifications - aka "notification history". If you experience a processing error on your end, you'd still have access to the "raw data".
Of course, this assumes 2 things: (1) Google never fails to send you a postback, and (2) your servers are always ready (if they're down, they can't receive).
Unless I'm corrected by a Googler, I don't believe I've seen a "retry policy" for errors on either end - e.g., in the GCO API, postbacks were resent until the merchant successfully "acknowledged" receipt of the postback. Until then, I think you're down to looking at Merchant Center (manually).
Hth...