Background:
So I'm a novice to the whole App Engine thing: I have made an app on Google App Engine whose main page accepts user input and sends it to a handler, which then uses the BigQuery API to run a synchronous query against some tables I have uploaded to BigQuery. The handler then sends back the results of the query as JSON.
Problem:
In deployment it mostly works, except that a user can sometimes run into this error while trying to make the synchronous query:
Error in runSyncQuery:
    {
      "error": {
        "errors": [
          {
            "domain": "global",
            "reason": "termsOfServiceNotAccepted",
            "message": "BigQuery Terms of Service have not been accepted"
          }
        ],
        "code": 403,
        "message": "BigQuery Terms of Service have not been accepted"
      }
    }
After doing some searching I came across this:
https://groups.google.com/forum/#!msg/bigquery-announce/l0kwVNpQX5A/ct0MglMmYMwJ
If you make API calls that are authenticated as an end user, then API calls will soon return errors when the end user has not accepted the updated Terms of Service. Apps built against the BigQuery API should ideally look for those errors and direct the user to the Google APIs Console to accept the new terms.
Except I don't really understand how to do this.
Also, all of the user accounts I have tested my app with have access to a specific project that has the BigQuery API enabled, and they can use the API, so why does this error pop up?
Also, there are times when a specific account does not run into this problem. For instance, if I keep refreshing and retrying, eventually the app will work without the error. But the next time, the error resurfaces.
I don't understand how a user can have accepted the Terms of Service at one point in time and then not at some later point.
Yes, any end users who authorize access to the BigQuery API must accept the Terms of Service (ToS) provided by the Google Developers Console at https://developers.google.com/console.
It is possible for the Terms of Service to change, and some of your project members may not yet have accepted the updated BigQuery ToS. If one of your users receives this message when authorizing access to the BigQuery API, you should redirect them to https://developers.google.com/console to accept the Terms of Service.
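As a minimal sketch of what "look for those errors and direct the user" could mean in practice: inspect the error payload for the termsOfServiceNotAccepted reason and hand the client a redirect target. The handler shape and response format here are illustrative, not part of the BigQuery API; only the error structure follows the example above.

    import json

    CONSOLE_URL = "https://developers.google.com/console"

    def handle_bigquery_error(error_body):
        # error_body is the JSON error document returned by the BigQuery API,
        # with the same structure as the example above.
        parsed = json.loads(error_body)
        errors = parsed.get("error", {}).get("errors", [])
        if any(e.get("reason") == "termsOfServiceNotAccepted" for e in errors):
            # Tell the client to send the user to the console to accept the ToS.
            return {"redirect": CONSOLE_URL,
                    "message": "Please accept the updated BigQuery Terms of Service."}
        # Any other error: surface the message to the caller.
        return {"error": parsed.get("error", {}).get("message", "Unknown error")}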
Re: "specific account does not run into this problem" - can you confirm this is happening consistently with a specific account on a specific project?
Related
It was working fine until yesterday and then suddenly stopped pushing to the endpoint. I checked all settings, including the endpoint URL, and everything is unchanged. Can you suggest possible causes?
Not receiving a message on a push endpoint could happen for many reasons. The first thing to do would be to go to Stackdriver and create a graph for the subscription/push_request_count metric. You can break this down by response_code to see how many requests Cloud Pub/Sub is sending to your push endpoint and what response codes it is returning. If there are requests being delivered that are returning errors, this graph will show that.
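If you prefer to pull the same numbers programmatically rather than graphing them, here is a rough sketch using the google-cloud-monitoring client; the project ID is a placeholder and the one-hour window is arbitrary:

    import time
    from google.cloud import monitoring_v3

    client = monitoring_v3.MetricServiceClient()
    project_name = "projects/your-project-id"  # placeholder

    now = int(time.time())
    interval = monitoring_v3.TimeInterval(
        {"end_time": {"seconds": now}, "start_time": {"seconds": now - 3600}}
    )

    # Push requests over the last hour, labelled by HTTP response code.
    results = client.list_time_series(
        request={
            "name": project_name,
            "filter": 'metric.type = "pubsub.googleapis.com/subscription/push_request_count"',
            "interval": interval,
            "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
        }
    )
    for series in results:
        print(series.resource.labels["subscription_id"],
              series.metric.labels["response_code"],
              sum(p.value.int64_value for p in series.points))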
It might also be worth checking the publish side to ensure messages are still being published as expected. You can look at the topic/send_message_operation_count metric, which can also be broken down by response_code, to make sure the publish requests are all returning success.
You should also check to ensure the subscription still exists using the Pub/Sub Subscriptions page in the Cloud console. After 30 days of inactivity (including inability to successfully deliver a message to a push endpoint), subscriptions are potentially deleted.
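You can also verify the subscription's existence from code. A small sketch with the google-cloud-pubsub client (project and subscription names are placeholders):

    from google.api_core import exceptions
    from google.cloud import pubsub_v1

    subscriber = pubsub_v1.SubscriberClient()
    sub_path = subscriber.subscription_path("your-project-id", "your-subscription")

    try:
        sub = subscriber.get_subscription(request={"subscription": sub_path})
        print("Subscription exists; pushing to:", sub.push_config.push_endpoint)
    except exceptions.NotFound:
        print("Subscription is gone; it may have been removed after inactivity.")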
If the issue is still unsolved after those steps, it is best to contact Google Cloud support with your project ID and subscription name so that things can be investigated for your specific case.
I am suddenly getting a 403 error when I try to POST an update to the Retrieve and Rank service. This code is under development but it has been working up until yesterday. The failure occurs only when doing a POST to /v1/solr_clusters/{solr_cluster_id}/solr/{collection_name}/update, and it fails the same way whether I do it via my program, the Swagger API documentation, or cURL. All other operations to this service that I've tried work fine when using the same credentials that I'm using with this POST. The error message I'm getting back is
Error: WRRCSH004: Service [1d111267-76b7-417a-98bd-4e9a58072ef9] is not authorized for cluster [sc262b05e8_dcf5_40b4_b662_ae85058ff07f]!. I don't know where the identifier (1d111267-76b7-417a-98bd-4e9a58072ef9) is coming from; that's not the userid I'm sending in.
Looking into your issue it appears your Bluemix organization has multiple service instances. The 403 issue you were seeing is because you're trying to access a Solr cluster using credentials from one of your instances against a cluster in the other instance. The 1d111267-76b7-417a-98bd-4e9a58072ef9 represents one of these service instances—but the issue is that the cluster you're trying to access is not part of that instance. A good way to test this is to ensure you're using the same credentials that generate the 403 but simply try to list the Solr clusters you have created by doing a GET against https://gateway.watsonplatform.net/retrieve-and-rank/api/v1/solr_clusters/.
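If it helps, that sanity check is only a few lines in Python; the credentials below are placeholders for the same username/password that produce the 403:

    import requests

    # Use the exact credentials that trigger the 403 on the update call.
    resp = requests.get(
        "https://gateway.watsonplatform.net/retrieve-and-rank/api/v1/solr_clusters/",
        auth=("service-username", "service-password"),
    )
    print(resp.status_code)
    print(resp.json())  # lists only the clusters owned by that service instance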
As for the 500 issue, I wasn't able to see anything on our end. If you're still experiencing that I would suggest posting another question and we can look into things again.
Thanks,
-Scott
I am using the Gmail API to put messages into a Google Apps email account. I use the OAuth 2.0 authentication protocol with a service account. This is more or less working fine. One of our customers has asked us to put messages directly into a Google Vault. I don't see a Vault API, but I did find this information related to the "insert" method (which is what we use to add messages to a normal account):

parameter "deleted" (boolean): Mark the email as permanently deleted (not TRASH) and only visible in Google Apps Vault to a Vault administrator. Only used for Google Apps for Work accounts.
When I do this, some messages are accepted, but frequently I get HTTP error 500 in response to the POST. The error text says "Backend Error". I thought the pattern was that the first time the message was posted it would work, but the second time would generate the error, so I suspected a duplicate-check issue. However, I now see some examples of messages that fail immediately. The POST URL looks like this:

https://www.googleapis.com/upload/gmail/v1/users/user#domain.com/messages?uploadType=multipart&internalDateSource=dateHeader&deleted=true&access_token=ABC...

As I mentioned, the same message to the same URL (without deleted=true) will always work. Any ideas what is causing the error?
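For reference, the call is roughly equivalent to this sketch with the google-api-python-client; the credentials object and the MIME payload are placeholders, not my exact code:

    import base64
    from googleapiclient.discovery import build
    from googleapiclient.errors import HttpError

    # creds: a service-account credential delegated to the target user (placeholder).
    service = build("gmail", "v1", credentials=creds)

    raw = base64.urlsafe_b64encode(mime_message_bytes).decode("ascii")
    try:
        service.users().messages().insert(
            userId="user@domain.com",        # placeholder address
            internalDateSource="dateHeader",
            deleted=True,                    # import as Vault-only, not TRASH
            body={"raw": raw},
        ).execute()
    except HttpError as err:
        print("Insert failed:", err.resp.status, err.content)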
Was just fighting this issue myself. Apparently the error has something to do with whether the message is compatible with the Google Vault retention policies:
If I turn on a default policy of "Retain everything" then I've been able to get the messages to import correctly. HTH!
I'm using the import API method, and the backendError seems to be related to filters/policies. For example, we asked Google to reject messages with .xls attachments and macros, and we get the error on mail with that kind of attachment.
UPDATE: Please, if anyone can help: Google is waiting for inputs and examples of this problem on their bug tracking tool. If you have reproducible steps for this issue, please share them on: https://code.google.com/p/googleappengine/issues/detail?id=10937
I'm trying to fetch data from the StackExchange API using a Google App Engine backend. As you may know, some of StackExchange's APIs are site-specific, requiring developers to run queries against every site the user is registered in.
So, here's my backend code for fetching timeline data from these sites. The feed_info_site variable holds the StackExchange site name (such as 'security', 'serverfault', etc.).
    data = json.loads(urllib.urlopen("%sme/timeline?%s" % (
        self.API_BASE_URL,
        urllib.urlencode({
            "pagesize": 100,
            "fromdate": se_since_timestamp,
            "filter": "!9WWBR(nmw",
            "site": feed_info_site,
            "access_token": decrypt(self.API_ACCESS_TOKEN_SECRET, self.access_token),
            "key": self.API_APP_KEY,
        }),
    )).read())
    for item in data['items']:
        ...  # code for parsing timeline items
When running this query on all sites except Stack Overflow, everything works OK. What's weird is, when the feed_info_site variable is set to 'stackoverflow', I get the following error from Google App Engine:
HTTPException: Invalid and/or missing SSL certificate for URL:
https://api.stackexchange.com/2.2/me/timeline?filter=%219WWBR%28nmw&access_token=<ACCESS_TOKEN_REMOVED>&fromdate=1&pagesize=100&key=<API_KEY_REMOVED>&site=stackoverflow
Of course, if I run the same query in Safari, I get the JSON results I'm expecting from the API. So the problem really lies in Google's URLfetch service. I found several topics here on Stack Overflow related to similar HTTPS/SSL exceptions, but no accepted answer solved my problems. I tried removing cacerts.txt files. I also tried making the call with validate_certificate=False, with no success.
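For completeness, the validate_certificate variant I tried looked roughly like this, where url stands for the same query URL built above (a sketch using App Engine's urlfetch, not my exact code):

    import json
    from google.appengine.api import urlfetch

    # url: the same StackExchange query URL built above (placeholder here).
    result = urlfetch.fetch(url, validate_certificate=False)
    data = json.loads(result.content)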
I think the problem is not strictly related to HTTPS/SSL. If so, how would you explain that changing a single API parameter makes the request fail?
Wait for the next update to App Engine (one is scheduled soon), then update.
Replace browserid.org/verify with another service (verifier.login.persona.org/verify is a good service hosted by Mozilla that could be used).
Make sure cacerts.txt doesn't exist (it looks like you have that sorted, but just in case :-) ).
Attempt again.
Good luck!
-Brendan
I was facing the same error. Google has updated App Engine now and the error is resolved; please check the updated docs.
I'm looking to implement Google Wallet for digital goods subscriptions on my website.
I understand how it works with postbacks on start and cancellation.
I'm worried about the case where a cancellation postback fails to reach my server. As I have a rather large number of subscriptions, checking manually would be bothersome, so I was wondering if there is any way to check subscription state by contacting the Google Wallet servers (like the PayPal API).
How do you manage failed cancellation postbacks?
Thanks,
AFAIK, there is no API to "query" - it would be nice to have :) I recall asking a similar question back in one of Google's developer hangouts about "repurposing" some of the now deprecated Google Checkout API which did have "query apis".
I'd suggest you mitigate things by logging all notifications - aka "notification history". If you experience a processing error on your end, you'd still have access to the "raw data".
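A minimal sketch of that idea, assuming a Flask endpoint receives the postbacks; the route, the log file, and the JWT handling are all placeholders (Google Wallet for digital goods posts a signed JWT, which you would still verify as usual):

    import time
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/wallet/postback", methods=["POST"])
    def wallet_postback():
        # Persist the raw postback before any processing, so a later
        # processing error never loses the original notification.
        raw = request.get_data(as_text=True)
        with open("notification_history.log", "a") as log:
            log.write("%d\t%s\n" % (int(time.time()), raw))

        # ... verify the JWT and update the subscription state here ...

        return "", 200  # acknowledge receipt so Wallet treats it as delivered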
Of course this assumes 2 things, (1) Google will never fail sending you a postback, and (2) your server/s are always ready (if they're down, then they can't receive).
Unless I'm corrected by a Googler, I don't believe I've seen a "retry policy" for errors on either end. For comparison, in the GCO API, postbacks were re-sent until the merchant successfully "acknowledged" receipt. Until then, I think you're down to looking at Merchant Center (manually).
Hth...