Deploying to Google App Engine flexible environment - google-app-engine

I am following
https://cloud.google.com/endpoints/docs/quickstart-app-engine
but when I run
gcloud service-management deploy openapi.yaml
I am hitting:
ERROR: (gcloud.service-management.deploy) PERMISSION_DENIED: Not allowed to get project settings for project instasmarttagger-162719
I am not sure what I have to do to resolve it.
The openapi.yaml looks like this:
VSKUMAR-mac:appengine vskumar$ vi openapi.yaml
- "application/json"
responses:
200:
description: "Authenication info."
schema:
$ref: "#/definitions/authInfoResponse"
x-security:
- google_id_token:
audiences:
# Your OAuth2 client's Client ID must be added here. You can add
# multiple client IDs to accept tokens from multiple clients.
- "YOUR-CLIENT-ID"
definitions:
echoMessage:
properties:
message:
type: "string"
authInfoResponse:
properties:
id:
type: "string"
email:
type: "string"
# This section requires all requests to any path to require an API key.
security:
- api_key: []
securityDefinitions:
# This section configures basic authentication with an API key.
api_key:
type: "apiKey"
name: "key"
in: "query"
# This section configures authentication using Google API Service Accounts
# to sign a json web token. This is mostly used for server-to-server
# communication.
google_jwt:
authorizationUrl: ""
flow: "implicit"
type: "oauth2"
# This must match the 'iss' field in the JWT.
x-google-issuer: "jwt-client.endpoints.sample.google.com"
# Update this with your service account's email address.
x-google-jwks_uri: "https://www.googleapis.com/service_accounts/v1/jwk/YOUR-SERVICE-ACCOUNT-EMAIL"
# This section configures authentication using Google OAuth2 ID Tokens.
# ID Tokens can be obtained using OAuth2 clients, and can be used to access
# your API on behalf of a particular user.
google_id_token:
authorizationUrl: ""
flow: "implicit"
type: "oauth2"
x-google-issuer: "accounts.google.com"
x-google-jwks_uri: "https://www.googleapis.com/oauth2/v1/certs"

It looks like I was signed into a different account and was trying to deploy to an app hosted under another account. Running gcloud projects list helped me identify it.
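If anyone else hits this, a quick way to check which account and project gcloud is actually using, and to switch them (a minimal sketch; the account and project ID are placeholders):
# Show the credentialed accounts and which one is active.
gcloud auth list
# Show the active configuration, including the default project.
gcloud config list
# Switch to the intended account and project.
gcloud config set account you@example.com
gcloud config set project instasmarttagger-162719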

Related

ShinyProxy 2.6.1 access with Identity Server

I followed the instructions from this website to add authentication with Identity Server. The configuration is quite simple:
proxy:
  title: Open Analytics Shiny Proxy
  port: 8080
  authentication: openid
  openid:
    auth-url: https://identityserverurl/connect/authorize
    token-url: https://identityserverurl/connect/token
    jwks-url: https://identityserverurl/.well-known/openid-configuration/jwks
    logout-url: https://identityserverurl/Account/Logout?return=http://yourshinyproxy:8080/
    client-id: ShinyProxy
    client-secret: secret
    scopes: [ "openid", "profile", "roles" ]
    username-attribute: aud
    roles-claim: role
And the authentication seems to be working. However, when I add access-groups to display an app only for a particular role, it doesn't work:
specs:
- id: 01_hello
  display-name: Hello Application
  description: Application which demonstrates the basics of a Shiny app
  container-cmd: ["R", "-e", "shinyproxy::run_01_hello()"]
  container-image: openanalytics/shinyproxy-demo
  container-network: sp-example-net
  access-groups: 200122-user
The same configuration works with version 2.4.3 of ShinyProxy.
Is there anything I missed in this configuration for ShinyProxy 2.6.1?
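One way to narrow this down (a debugging sketch, not a confirmed fix) is to check that the claim named in roles-claim is actually present in the token Identity Server issues. Assuming a standard three-part JWT is stored in $TOKEN:
# Decode the JWT payload (the second dot-separated segment).
# JWT segments are base64url-encoded; convert to base64 and re-pad first.
payload=$(printf '%s' "$TOKEN" | cut -d '.' -f 2 | tr '_-' '/+')
while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done
# Inspect the claims the proxy config relies on (username-attribute and roles-claim).
printf '%s' "$payload" | base64 -d | jq '.aud, .role'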

GCP Extensible Service Proxy encounters error when forwarding request

I have the following setup:
1. Application (Java microservice) deployed on App Engine.
2. Custom domain mapped to this service: myfavmicroservice.project-amazing.dev.corporation.com
3. This endpoint is secured to require authentication by enabling IAP.
4. ESP configured to intercept, authenticate, and fulfill requests to all backend microservices (like the one above) through a common gateway endpoint.
5. Microservice deployed using app.yaml.
6. ESP endpoint configured using api.yaml (OpenAPI API surface document).
This is the tutorial I am following:
https://cloud.google.com/endpoints/docs/openapi/get-started-app-engine-standard
app.yaml to deploy the microservice:
runtime: java11
entrypoint: java -jar tar/worker.jar
instance_class: F2
service: myfavmicroservice
handlers:
- url: /.*
  script: this field is required, but ignored
The ESP api.yaml describing the microservice API surface looks like this:
swagger: "2.0"
info:
title: "My fav micro Service"
description: "Serve my favorite microservice content"
version: "1.0.0"
# This field will be replaced by the deploy_api.sh script.
host: microservice-system-gateway-5c4s43dedq-ue.a.run.app
schemes:
- https
produces:
- application/json
paths:
/myfavmicroservice:
get:
summary: Greet the user
operationId: hello
description: "Get helloworld mainpage"
x-google-backend:
address: https://myfavmicroservice.project amazing.dev.corporation.com
jwt_audience: .....
responses:
'200':
description: "Success."
schema:
type: string
'400':
description: "The IATA code is invalid or missing."
schema:
type: string
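For reference, a surface document like this is deployed to Endpoints with the following (a sketch matching the linked tutorial; newer gcloud releases use the endpoints command group in place of service-management):
gcloud endpoints services deploy api.yaml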
But the problem is that whenever I make a request to the endpoint like this:
GET https://microservice-system-gateway-5c4s43dedq-ue.a.run.app/myfavmicroservice
I always get a gateway 500 error. Upon inspection of the ESP logs I primarily find:
1. SSL handshake error with error no 40
2. upstream server temporarily disabled while SSL handshaking to upstream
3. request: "GET /metadatasvc-hello HTTP/1.1", upstream: "https://[3461:f4f0:5678:a13::63]:443/myfavmicroservice"
So the ESP is intercepting my request correctly, and perhaps forwarding it in the correct format as well, as evidenced by #3. But I am getting an SSL error.
Why am I getting this error?
OK, figured out the issue. For the benefit of the Stack Overflow community I am posting the solution here.
I found that if, in the OpenAPI configuration you deploy to ESP, you point x-google-backend at a custom domain that you mapped to App Engine, like this, the SSL handshake fails:
x-google-backend:
  address: https://my-microservice.my-custom-domain.company.com
However, if you use the default URL assigned by App Engine when the microservice starts up, like this, everything is fine:
x-google-backend:
  address: https://my-microservice.appspot.com
So I am still trying to figure out how to use custom domain mappings in the ESP OpenAPI configuration. For now, if I do that, the SSL proxying does not work inside ESP.
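One thing that may help in the meantime (an assumption, not a confirmed fix for the custom-domain case): for a non-default service, App Engine also exposes a per-service default URL in the SERVICE-dot-PROJECT form, which avoids the custom domain entirely. A sketch with placeholder names:
x-google-backend:
  address: https://myfavmicroservice-dot-project-amazing.appspot.com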

Google App Engine: getting 403 Forbidden trying to update cron.yaml

I am following the docs on how to back up Datastore using App Engine.
I am performing a gcloud app deploy cron.yaml command on a GCE VM that is meant to update a cron job in App Engine. The GCE VM and the App Engine cron job are in the same project, and I have granted App Engine Admin to the GCE VM via its default service account. When I run this on my local machine, it updates fine. However, issues arise on the GCE instance.
Here are the files:
app.yaml
runtime: python27
api_version: 1
threadsafe: true
service: cloud-datastore-admin
libraries:
- name: webapp2
  version: "latest"
handlers:
- url: /cloud-datastore-export
  script: cloud_datastore_admin.app
  login: admin
cron.yaml
cron:
- description: "Daily Cloud Datastore Export"
  url: /cloud-datastore-export?namespace_id=&output_url_prefix=gs://<my-project-id>-bucket
  target: cloud-datastore-admin
  schedule: every 24 hours
cloud_datastore_admin.py (the handler module referenced by app.yaml)
import datetime
import httplib
import json
import logging
import webapp2

from google.appengine.api import app_identity
from google.appengine.api import urlfetch


class Export(webapp2.RequestHandler):

    def get(self):
        access_token, _ = app_identity.get_access_token(
            'https://www.googleapis.com/auth/datastore')
        app_id = app_identity.get_application_id()
        timestamp = datetime.datetime.now().strftime('%Y%m%d-%H%M%S')

        output_url_prefix = self.request.get('output_url_prefix')
        assert output_url_prefix and output_url_prefix.startswith('gs://')
        if '/' not in output_url_prefix[5:]:
            # Only a bucket name has been provided - no prefix or trailing slash
            output_url_prefix += '/' + timestamp
        else:
            output_url_prefix += timestamp

        entity_filter = {
            'kinds': self.request.get_all('kind'),
            'namespace_ids': self.request.get_all('namespace_id')
        }
        request = {
            'project_id': app_id,
            'output_url_prefix': output_url_prefix,
            'entity_filter': entity_filter
        }
        headers = {
            'Content-Type': 'application/json',
            'Authorization': 'Bearer ' + access_token
        }
        url = 'https://datastore.googleapis.com/v1/projects/%s:export' % app_id
        try:
            result = urlfetch.fetch(
                url=url,
                payload=json.dumps(request),
                method=urlfetch.POST,
                deadline=60,
                headers=headers)
            if result.status_code == httplib.OK:
                logging.info(result.content)
            elif result.status_code >= 500:
                logging.error(result.content)
            else:
                logging.warning(result.content)
            self.response.status_int = result.status_code
        except urlfetch.Error:
            logging.exception('Failed to initiate export.')
            self.response.status_int = httplib.INTERNAL_SERVER_ERROR


app = webapp2.WSGIApplication(
    [
        ('/cloud-datastore-export', Export),
    ], debug=True)
The error I'm getting is:
Configurations to update:
  descriptor: [/usr/local/sbin/pluto/<my-project-id>/datastore/cron.yaml]
  type: [cron jobs]
  target project: [<my-project-id>]
Do you want to continue (Y/n)?
Updating config [cron]...failed.
ERROR: (gcloud.app.deploy) Server responded with code [403]:
Forbidden Unexpected HTTP status 403.
You do not have permission to modify this app (app_id=u'e~<my-project-id>').
I have checked other posts related to this; however, they seem to deal with an old version/deployment of App Engine.
Service accounts!
From Deploying using IAM roles:
To grant a user account the ability to deploy to App Engine:
1. Click Add member to add the user account to the project, then select all of the roles for that account by using the dropdown menu.
Required roles to allow an account to deploy to App Engine:
a. Set one of the following roles:
- Use the App Engine > App Engine Deployer role to allow the account to deploy a version of an app.
- To also allow the dos.yaml or dispatch.yaml files to be deployed with an app, use the App Engine > App Engine Admin role instead.
The user account now has adequate permission to use the Admin API to deploy apps.
b. To allow use of App Engine tooling to deploy apps, you must also give the user account the Storage > Storage Admin role so that the tooling has permission to upload to Cloud Storage.
2. Optional. Give the user account the following roles to grant permission for uploading additional configuration files:
- Cloud Scheduler > Cloud Scheduler Admin role: permissions for uploading cron.yaml files.
Potentially of interest:
- Deployments with predefined roles
- Predefined roles comparison matrix
Okay, after some tinkering, I added the Project Editor role to the service account linked to the GCE instance running my server. I am not sure whether this is the least-privileged role that makes this work.
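Since Project Editor is much broader than what deploying cron.yaml needs, here is a sketch of a narrower setup, assuming the binding is applied to the VM's default service account (the project ID and service-account email are placeholders):
# Check which service account the VM is running as.
gcloud auth list
# Grant App Engine Admin, which per the docs above covers cron.yaml updates.
gcloud projects add-iam-policy-binding <my-project-id> \
  --member="serviceAccount:123456789-compute@developer.gserviceaccount.com" \
  --role="roles/appengine.appAdmin"
# Note: the VM's access scopes also cap what its service account can do;
# the instance may need the cloud-platform scope for gcloud app deploy to work.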

How to enable API-key auth for all versions when deploying multiple versions to the same configuration in Google Cloud Endpoints

I deployed 2 versions of an openapi.yaml file to Google Cloud Endpoints using the Endpoints versioning feature (i.e. gcloud service-management deploy openapi_v1.yaml openapi_v2.yaml). Each version of the yaml file contains a version number and basepath different from the other, one endpoint that uses api-key authentication, and a definition for the api-key authentication tag. After deploying to Endpoints, the configuration shows both yaml files; however, deploying an API to GAE using this configuration will only have api-key authentication turned on for the newer version.
Does anyone know if this is a known bug, or is there something else I need to do to enable authentication for all versions?
The .yaml file looks like the following. The two versions I used to test are identical except for version and basepath:
swagger: "2.0"
info:
description: "This API is used to connect 3rd-party ids to a common user identity"
version: "0.0.1"
title: "****"
host: "uie-dot-user-id-exchange.appspot.com"
basePath: "/v0"
...
- "https"
x-google-allow: all
paths:
...
/ids/search:
get:
operationId: "id_search"
produces:
- "application/json"
security:
- api_key: []
tags:
- "Ids"
summary: "Privileged endpoint. Provide any id (3rd party or otherwise) and get a hash of all ids associated with it."
parameters:
- in: "query"
name: "id_type"
description: "Type of id to search"
required: true
type: string
- in: "query"
name: "id_value"
description: "Value of id to search"
required: true
type: string
responses:
200:
description: "AssociatedIdsHash"
schema:
$ref: '#/definitions/AssociatedIdsHash'
400:
description: "Bad request. Requires both id_type and id_value query parameters."
401:
description: "Unauthorized. Please provide a valid api-key in the \"api-key\" header."
404:
description: "Not found - no entry found for key provided"
...
################ SECURITY DEFINITIONS ################
securityDefinitions:
# This section configures basic authentication with an API key.
api_key:
type: "apiKey"
name: "key"
in: "query"
I can replicate this issue, and it appears to be a bug.
What does work is adding the API-key restriction at the global level for both versions rather than at the per-path level. Perhaps this workaround will suffice for your use case:
...
security:
- api_key: []
paths:
...
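While testing, you can also confirm which configuration the service is actually serving (a sketch; the service name is the host from the yaml above, and CONFIG_ID is a placeholder):
# List the service's configurations, newest first.
gcloud endpoints configs list --service=uie-dot-user-id-exchange.appspot.com
# Describe a specific configuration to inspect the merged security settings.
gcloud endpoints configs describe CONFIG_ID --service=uie-dot-user-id-exchange.appspot.com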

Authorization Header for WebHDFS with Azure Data Lake

I'm trying to use WebHDFS with Azure Data Lake. According to Microsoft's documentation, the steps one should follow are:
1. Create a new application in Azure AD with a key and delegated permissions to Azure Management Services.
2. Using the client_id, tenant_id, and secret key, make a request to the OAUTH2 endpoint:
curl -X POST https://login.microsoftonline.com/<TENANT-ID>/oauth2/token \
-F grant_type=client_credentials \
-F resource=https://management.core.windows.net/ \
-F client_id=<CLIENT-ID> \
-F client_secret=<AUTH-KEY>
Upon success, you then get back some JSON including an "access_token" object, whose content you should include with subsequent WebHDFS requests by adding the header
Authorization: Bearer <content of "access_token">
where <content of "access_token"> is the long string in the "access_token" object.
Once you have included that header, you should be able to make WebHDFS calls. For example, to list directories, you could do:
curl -i -X GET -H "Authorization: Bearer <REDACTED>" https://<yourstorename>.azuredatalakestore.net/webhdfs/v1/?op=LISTSTATUS
Having followed all those steps, I am getting an HTTP 401 error when running the above curl command to list directories:
WWW-Authenticate: Bearer authorization_uri="https://login.windows.net/<REDACTED>/", error="invalid_token", error_description="The access token is invalid."
with the body
{"error":{"code":"AuthenticationFailed","message":"Failed to validate the access token in the 'Authorization' header."}}
Does anyone know what might be the problem?
I pasted the token into jwt.io and it is valid (I didn't check the signature). The content is something like this:
{
  typ: "JWT",
  alg: "RS256",
  x5t: "MnC_VZcATfM5pOYiJHMba9goEKY",
  kid: "MnC_VZcATfM5pOYiJHMba9goEKY"
}.
{
  aud: "https://management.core.windows.net",
  iss: "https://sts.windows.net/<TENANT-ID>/",
  iat: 1460908119,
  nbf: 1460908119,
  exp: 1460912019,
  appid: "<APP-ID>",
  appidacr: "1",
  idp: "https://sts.windows.net/<TENANT-ID>/",
  oid: "34xxxxxx-xxxx-xxxx-xxxx-5460xxxxxxd7",
  sub: "34xxxxxx-xxxx-xxxx-xxxx-5460xxxxxxd7",
  tid: "<TENANT-ID>",
  ver: "1.0"
}.
Please click the Data Explorer button, then highlight the root folder and click Access. Then grant your AAD app permissions to WebHDFS there. I believe what you have done so far only grants that AAD app permission to manage your Azure Data Lake Store with the portal or Azure PowerShell; you haven't actually granted WebHDFS (filesystem) permissions yet. Further reading on security is here.
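The same filesystem grant can also be scripted (a sketch, assuming the Azure CLI's Data Lake Store Gen1 command group; the account name and the app's service-principal object ID are placeholders):
# Grant the AAD app's service principal rwx on the root folder.
az dls fs access set-entry --account <yourstorename> --path / \
  --acl-spec user:<APP-OBJECT-ID>:rwx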