In the Google Cloud Endpoints: serving your API to the world talk at the Google Cloud Next '17 conference, it was presented that some request errors, such as calling a non-existent path or providing a malformed JSON object in the body, can be handled by the ESP container and never reach the actual application code. So you get, for example, a nicely formatted NotFound error JSON object like this one:
{
"code": 5,
"message": "Method does not exist.",
"details": [
{
"#type": "type.googleapis.com/google.rpc.DebugInfo",
"stackEntries": [],
"detail": "service_control"
}
]
}
But I don't seem to get this functionality with my app in the App Engine standard environment. What I get when I call a non-existent path is a plain-text "Not Found" response. I tried specifying x-google-allow: configured in my OpenAPI spec (which should be the default behavior, as I understood it), but that didn't work either.
Should it work in AE standard and if so, how should I modify my OpenAPI spec to make it work?
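For reference, a minimal sketch of where I mean the extension to sit in the spec (the host, path and names below are placeholders, not my real service):
swagger: "2.0"
info:
  title: Example API          # placeholder
  version: "1.0.0"
host: example-project.appspot.com   # placeholder host
x-google-allow: configured          # only paths configured below should be served
schemes:
  - https
paths:
  /echo:
    post:
      operationId: echo
      responses:
        "200":
          description: Echo response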
Related
What's the process for verifying the HTTP request from Google Cloud Scheduler? The docs (https://cloud.google.com/scheduler/docs/creating) mention that you can create a job with a target of any publicly available HTTP endpoint, but they do not mention how the server verifies the cron/scheduler request.
[Update May 28, 2019]
Google Cloud Scheduler now has two command line options:
--oidc-service-account-email=<service_account_email>
--oidc-token-audience=<service_endpoint_being_called>
These options add an additional header to the request that Cloud Scheduler makes:
Authorization: Bearer ID_TOKEN
You can process the ID_TOKEN inside your endpoint code to verify who is calling your endpoint.
For example, you can make an HTTP request to decode the ID Token:
https://oauth2.googleapis.com/tokeninfo?id_token=ID_TOKEN
This will return JSON like this:
{
"aud": "https://cloudtask-abcdefabcdef-uc.a.run.app",
"azp": "0123456789077420983142",
"email": "cloudtask#development.iam.gserviceaccount.com",
"email_verified": "true",
"exp": "1559029789",
"iat": "1559026189",
"iss": "https://accounts.google.com",
"sub": "012345678901234567892",
"alg": "RS256",
"kid": "0123456789012345678901234567890123456789c3",
"typ": "JWT"
}
Then you can check that the service account email matches the one that you authorized Cloud Scheduler to use and that the token has not expired.
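As an illustration (my own sketch, not something from the Cloud Scheduler docs), that check could look roughly like this in Python, assuming the requests library is available and that the expected email and audience values are ones you configure yourself:
import time
import requests  # third-party HTTP client, assumed to be available

EXPECTED_EMAIL = "scheduler-sa@my-project.iam.gserviceaccount.com"  # placeholder service account
EXPECTED_AUDIENCE = "https://my-service.example.com/handler"        # placeholder audience

def verify_scheduler_token(id_token):
    # Ask Google's tokeninfo endpoint to decode and validate the ID token.
    resp = requests.get("https://oauth2.googleapis.com/tokeninfo",
                        params={"id_token": id_token}, timeout=10)
    if resp.status_code != 200:
        return False
    claims = resp.json()
    return (claims.get("email") == EXPECTED_EMAIL
            and claims.get("email_verified") == "true"
            and claims.get("aud") == EXPECTED_AUDIENCE
            and int(claims.get("exp", "0")) > time.time())
In your endpoint handler you would read the Authorization header, strip the Bearer prefix, and pass the remaining token to a function like this.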
[End Update]
You will need to verify the request yourself.
Google Cloud Scheduler includes several Google-specific headers, such as User-Agent: Google-Cloud-Scheduler. Refer to the documentation link below.
However, anyone can forge HTTP headers. You need to create a custom something that you include as an HTTP header or in the HTTP body and that you know how to verify. Using a signed JWT would be secure and easy to create and verify.
When you create a Google Cloud Scheduler Job you have some control over the headers and body fields. You can embed your custom something in either one.
Scheduler Jobs
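To make this concrete, here is a minimal sketch (mine, not from the docs) of the "custom something" idea using a shared secret in a custom header; SCHEDULER_SECRET and X-My-Scheduler-Token are made-up names you would choose yourself:
import hmac
import os

from flask import Flask, abort, request

app = Flask(__name__)

# Placeholder: the same value you put in the job's custom header when creating it.
SCHEDULER_SECRET = os.environ.get("SCHEDULER_SECRET", "")

@app.route("/cron-handler", methods=["POST"])
def cron_handler():
    sent = request.headers.get("X-My-Scheduler-Token", "")
    # Constant-time comparison so the secret can't be guessed via timing.
    if not SCHEDULER_SECRET or not hmac.compare_digest(sent, SCHEDULER_SECRET):
        abort(403)
    # ... run the scheduled work ...
    return "ok"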
[Update]
Here is an example (Windows command line) using gcloud so that you can set the HTTP headers and body. This example calls a Cloud Function on each trigger, showing how to include an APIKEY. The Google Console does not have this level of support yet.
gcloud beta scheduler ^
--project production ^
jobs create http myfunction ^
--time-zone "America/Los_Angeles" ^
--schedule="0 0 * * 0" ^
--uri="https://us-central1-production.cloudfunctions.net/myfunction" ^
--description="Job Description" ^
--headers="{ \"Authorization\": \"APIKEY=AUTHKEY\", \"Content-Type\": \"application/json\" }" ^
--http-method="POST" ^
--message-body="{\"to\":\"/topics/allDevices\",\"priority\":\"low\",\"data\":{\"success\":\"ok\"}}"
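On the receiving side, a rough sketch (not part of the original answer) of checking that Authorization header, assuming a Python HTTP Cloud Function and that the expected key is stored in an AUTHKEY environment variable:
import os

def myfunction(request):
    # Reject requests that don't carry the APIKEY header set on the Scheduler job.
    expected = "APIKEY=" + os.environ.get("AUTHKEY", "")
    if request.headers.get("Authorization") != expected:
        return ("Forbidden", 403)
    payload = request.get_json(silent=True) or {}
    # ... handle the scheduled work using payload ...
    return ("ok", 200)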
Short answer
If you host your app in Google Cloud, just check whether the header X-Appengine-Queuename equals __scheduler. However, this is undocumented behaviour; for more information, read below.
Furthermore, if possible use Pub/Sub instead of HTTP requests, as Pub/Sub messages are delivered internally (and are therefore of implicitly verified origin).
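A minimal Flask sketch of that check (the route name is a placeholder):
from flask import Flask, abort, request

app = Flask(__name__)

@app.route("/scheduled-task", methods=["GET", "POST"])
def scheduled_task():
    # On App Engine this header is stripped from external requests (see the
    # experiment below), so only Cloud Scheduler / Cloud Tasks can set it.
    if request.headers.get("X-Appengine-Queuename") != "__scheduler":
        abort(403)
    # ... do the scheduled work ...
    return "ok"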
Experiment
As I've found here, Google strips certain headers from incoming requests [1], but not all of them [2]. Let's find out whether there are such headers for Cloud Scheduler.
[1] E.g. you can't send any X-Google-* headers (found experimentally, read more)
[2] E.g. you can send X-Appengine-* headers (found experimentally)
Flask app used in the experiment:
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route('/echo_headers')
def echo_headers():
    headers = {h[0]: h[1] for h in request.headers}
    print(headers)
    return jsonify(headers)
Request headers sent by Cloud Scheduler
{
"Host": []
"X-Forwarded-For": "0.1.0.2, 169.254.1.1",
"X-Forwarded-Proto": "http",
"User-Agent": "AppEngine-Google; (+http://code.google.com/appengine)",
"X-Appengine-Queuename": "__scheduler",
"X-Appengine-Taskname": [private]
"X-Appengine-Taskretrycount": "1",
"X-Appengine-Taskexecutioncount": "0",
"X-Appengine-Tasketa": [private]
"X-Appengine-Taskpreviousresponse": "0",
"X-Appengine-Taskretryreason": "",
"X-Appengine-Country": "ZZ",
"X-Cloud-Trace-Context": [private]
"X-Appengine-Https": "off",
"X-Appengine-User-Ip": [private]
"X-Appengine-Api-Ticket": [private]
"X-Appengine-Request-Log-Id": [private]
"X-Appengine-Default-Version-Hostname": [private]
}
Proof that header X-Appengine-Queuename is stripped by GAE
Limitations
This method is most likely not covered by Google's SLAs and deprecation policies, since it's not documented. Also, I'm not sure whether the header can be forged when the request originates from within Google Cloud (maybe such headers are only stripped at the outer layer). I've tested with an app in GAE; results may or may not vary for other deployment options. In short, use at your own risk.
This header should work:
map (key: string, value: string)
HTTP request headers. This map contains the header field names and values. Headers can be set when the job is created.
Cloud Scheduler sets some headers to default values:
User-Agent: By default, this header is "AppEngine-Google; (+http://code.google.com/appengine)". This header can be modified, but Cloud Scheduler will append "AppEngine-Google; (+http://code.google.com/appengine)" to the modified User-Agent.
X-CloudScheduler: This header will be set to true.
X-CloudScheduler-JobName: This header will contain the job name.
X-CloudScheduler-ScheduleTime: For Cloud Scheduler jobs specified in the unix-cron format, this header will contain the job schedule time in RFC3339 UTC "Zulu" format.
If the job has a body, Cloud Scheduler sets the following headers:
Content-Type: By default, the Content-Type header is set to "application/octet-stream". The default can be overridden by explicitly setting Content-Type to a particular media type when the job is created. For example, Content-Type can be set to "application/json".
Content-Length: This is computed by Cloud Scheduler. This value is output only. It cannot be changed.
The headers below are output only. They cannot be set or overridden:
X-Google-*: For Google internal use only.
X-AppEngine-*: For Google internal use only.
In addition, some App Engine headers, which contain job-specific information, are also sent to the job handler.
An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.
https://cloud.google.com/scheduler/docs/reference/rest/v1/projects.locations.jobs#appenginehttptarget
# Django: the X-CloudScheduler header appears in request.META as HTTP_X_CLOUDSCHEDULER
if request.META.get('HTTP_X_CLOUDSCHEDULER') == 'true':
    print("True")
I have an IBM Cloud Function (OpenWhisk) that invokes a Watson Conversation Service.
We are using Java.
The documentation for the Java SDK (https://github.com/watson-developer-cloud/java-sdk) suggests that the credentials would be picked up from the binding.
When I list the binding I get this:
>bx wsk action get talksmall parameters
ok: got action talksmall, displaying field parameters
[
{
"key": "__bx_creds",
"value": {
"conversation": {
"credentials": "Credentials-SmallTalk",
"instance": "<INSTANCE>",
"password": "<PASSWORD>",
"url": "https://gateway.watsonplatform.net/conversation/api",
"username": "<USERNAME>"
}
}
}
]
But when I use the SDK like this:
Conversation conversationService = new Conversation(Conversation.VERSION_DATE_2017_05_26);
I get an error
{
"error": "An error has occured while invoking the action (see logs for details): java.lang.IllegalArgumentException: apiKey or username and password were not specified"
}
When I add the line:
conversationService.setUsernameAndPassword(userName, password);
It works.
Maybe the VCAP_SERVICES way of binding does not work with Cloud Functions?
The Cloud Function runs in the same IBM Cloud organization and space.
I opened an issue against the SDK documentation, which talks about "running in Bluemix". IBM Cloud offers infrastructure, OpenWhisk / Cloud Functions, Cloud Foundry and more. Bluemix originated from Cloud Foundry, and the automatic binding via VCAP_SERVICES is a Cloud Foundry feature.
From my experience using IBM Cloud Functions with Python and Node.js, you need to call the API functions to set credentials explicitly. With the service binding feature you can easily make credentials of provisioned services available to the action's context within IBM Cloud Functions, as shown in the __bx_creds parameter in your output above.
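To illustrate the pattern with a sketch of my own (Python, not the Java SDK): in a Cloud Functions action the bound credentials arrive as an ordinary __bx_creds parameter, and the code still has to read them and pass them to the client explicitly:
def main(params):
    # Hypothetical IBM Cloud Functions (OpenWhisk) Python action.
    creds = params.get("__bx_creds", {}).get("conversation", {})
    username = creds.get("username")
    password = creds.get("password")
    url = creds.get("url")
    if not (username and password):
        return {"error": "conversation credentials not bound"}
    # ... build the Watson Conversation client here, passing username,
    # password and url explicitly, as in the Java snippet above ...
    return {"url": url, "username": username}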
We would like to automatically create a project ID and install our ULAPPH Cloud Desktop application using the App Engine Admin API (REST) and Golang.
https://cloud.google.com/appengine/docs/admin-api/?hl=en_US&_ga=1.265860687.1935695756.1490699302
https://ulapph-public-1.appspot.com/articles?TYPE=ARTICLE&DOC_ID=3&SID=TDSARTL-3
We were able to get a token but when we tried to create a project ID, we get the error below.
[Response OK] Successful connection to Appengine Admin API.
[Token] { "access_token" : "TOKEN_HERE", "expires_in" : 3599, "token_type" : "Bearer" }
[Response Code] 403
[Response Body] { "error": { "code": 403, "message": "Operation not allowed", "status": "PERMISSION_DENIED", "details": [ { "#type": "type.googleapis.com/google.rpc.ResourceInfo", "resourceType": "gae.api", "description": "The \"appengine.applications.create\" permission is required." } ] } }
We are just using the REST API calls. The request for a token was successful, as you can see above, and the scope is OK as well. However, when we posted the request to create the application, we got the error saying the "appengine.applications.create" permission is required.
How do we specify the permission?
What are the possible reasons why we are getting that error? Did we miss sending a field in the JSON or in the query?
As per the link below, we just need to pass the JSON containing the id and location. We also just need to pass the token in the Authorization header. I have used the same logic successfully to access the YouTube, Drive and other APIs, so I'm not sure what needs to be done, since I have followed the available docs.
I have also posted the same issue in Google Groups and am now waiting for their reply.
It seems you've given no details about how you set up the account you're using to authorize the request. You'll need to make sure the appengine.applications.create permission is given to the account you're using, as mentioned in the error text. You can use the Google Identity and Access Management (IAM) API for this.
(by the way, I'd given this answer in the original thread, although you didn't reply or seem to take action on it. check it out! this is likely the solution you need!)
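For illustration (my suggestion, not something from the original thread), one way to grant it with the gcloud CLI is to bind a role that contains appengine.applications.create, such as App Engine Creator; PROJECT_ID and the member address below are placeholders:
gcloud projects add-iam-policy-binding PROJECT_ID ^
  --member="serviceAccount:deployer@PROJECT_ID.iam.gserviceaccount.com" ^
  --role="roles/appengine.appCreator"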
Using an OAuth access token, I am able to retrieve the user's info through:
https://api.pinterest.com/v1/me/?fields=first_name%2Cid%2Clast_name%2Curl%2Cusername%2Cimage&access_token=xxxx
which from a desktop or even ec2 returns:
{
"data": {
"username": "yyyt",
"first_name": "yyyr",
"last_name": "",
"url": "https:\/\/www.pinterest.com\/yyyt\/",
"image": {
"60x60": {
"url": "https:\/\/s-passets-cache-ak0.pinimg.com\/images\/user\/default_60.png",
"width": 60,
"height": 60
}
},
"id": "1234567890"
}
}
However, when the same query is made from App Engine, a 403 error is returned with the details:
{
"message": "Forbidden",
"status": 403
}
I can't find any information about why Google App Engine may be specifically blocked, and since their API has come out of beta, I'm not sure of a reason why it would be.
This earlier question, Pinterest API - returning 403 on EC2 Instance, suggested that they were blocking EC2 because the API was still unofficial, but EC2 access does in fact seem fine now, so I'm not sure why they would block Google.
Can anyone suggest a workaround not involving a proxy, or refer me to a reason why the access might be forbidden?
or refer me to a reason why the access might be forbidden?
Unfortunately I ran into the same issue today when I tried to access the Pinterest web-site (not the API) via App Engine.
Looking at the 403 error page that Pinterest returns following an HTTP request from App Engine, it seems that the reason is that Pinterest doesn't like bots and intentionally rejects HTTP requests from App Engine or the App Engine dev server.
When trying to access Pinterest via curl, I noticed that Pinterest rejects all HTTP requests that have the string App Engine in the User-Agent HTTP request header, but it happily accepts any other (random) User-Agent string.
Because App Engine, as stated in the documentation, automatically appends the string "AppEngine-Google (+http://code.google.com/appengine; appid: APPID)" to the User-Agent HTTP request header, I suspect there is no way of circumventing this.
I wrote a web app with an AngularJS frontend, Google App Engine for storing data, and Google Cloud Endpoints for API access from the frontend client. I tested everything fine locally, but after deploying, accessing the API from the frontend JavaScript client gives me the following error:
[
{
"error": {
"code": 403,
"message": "Access Not Configured",
"data": [
{
"domain": "usageLimits",
"reason": "accessNotConfigured",
"message": "Access Not Configured"
}
]
},
"id": "gapiRpc"
}
]
I've checked the production API Explorer after deployment and it works fine. Also, I tried directly accessing the API by URL, which also works fine. Just the frontend client does not work. Any ideas?
Turns out I set the API key in the client with gapi.client.setApiKey(API_KEY); where the API key is the browser key from the Cloud Console. I removed this and it works fine. I have no idea what the API key is for.
I'm looking at the problem now on one of my projects. It might be that the IPv6 address must be registered for the project. Take a look at this post: Google API returning Access Not Configured
The usual reason for this is that the API being queried is not yet enabled in the Google Console at the time of the request. Once it is turned on, the error goes away.