We have created an App Engine instance to work as a backend, and a separate Cloud Function. The Cloud Function needs to call the App Engine API within the same Google Cloud project. This works fine if the App Engine firewall allows everyone, but in our case we need to restrict access to the Cloud Function only.
I'm new to GCP, so I would highly appreciate your suggestions. Thanks in advance.
The best solution would be to activate IAP (Identity-Aware Proxy) for App Engine. Here you can find a guide on how to activate IAP on App Engine.
IAP will block access to your App Engine instance for everyone and everything except the identities you explicitly allow. In your situation, you will need to allow the Cloud Functions service account to access your application.
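For the grant itself, a minimal sketch (assuming the function runs as the default App Engine service account; substitute whichever service account your function actually uses) is to bind the IAP-secured Web App User role:
gcloud projects add-iam-policy-binding PROJECT-ID \
    --member=serviceAccount:PROJECT-ID@appspot.gserviceaccount.com \
    --role=roles/iap.httpsResourceAccessor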
You can check this guide on how to achieve that programmatically from Cloud Functions; it has examples for C#, Python, Java and PHP. For example, in Python:
import google.auth
import google.auth.app_engine
import google.auth.compute_engine.credentials
import google.auth.iam
from google.auth.transport.requests import Request
import google.oauth2.credentials
import google.oauth2.service_account
import requests
import requests_toolbelt.adapters.appengine
IAM_SCOPE = 'https://www.googleapis.com/auth/iam'
OAUTH_TOKEN_URI = 'https://www.googleapis.com/oauth2/v4/token'
def make_iap_request(url, client_id, method='GET', **kwargs):
    """Makes a request to an application protected by Identity-Aware Proxy.

    Args:
      url: The Identity-Aware Proxy-protected URL to fetch.
      client_id: The client ID used by Identity-Aware Proxy.
      method: The request method to use
              ('GET', 'OPTIONS', 'HEAD', 'POST', 'PUT', 'PATCH', 'DELETE')
      **kwargs: Any of the parameters defined for the request function:
                https://github.com/requests/requests/blob/master/requests/api.py
                If no timeout is provided, it is set to 90 by default.

    Returns:
      The page body, or raises an exception if the page couldn't be retrieved.
    """
    # Set the default timeout, if missing
    if 'timeout' not in kwargs:
        kwargs['timeout'] = 90

    # Figure out what environment we're running in and get some preliminary
    # information about the service account.
    bootstrap_credentials, _ = google.auth.default(
        scopes=[IAM_SCOPE])
    if isinstance(bootstrap_credentials,
                  google.oauth2.credentials.Credentials):
        raise Exception('make_iap_request is only supported for service '
                        'accounts.')
    elif isinstance(bootstrap_credentials,
                    google.auth.app_engine.Credentials):
        requests_toolbelt.adapters.appengine.monkeypatch()

    # For service accounts using the Compute Engine metadata service,
    # service_account_email isn't available until refresh is called.
    bootstrap_credentials.refresh(Request())

    signer_email = bootstrap_credentials.service_account_email
    if isinstance(bootstrap_credentials,
                  google.auth.compute_engine.credentials.Credentials):
        # Since the Compute Engine metadata service doesn't expose the service
        # account key, we use the IAM signBlob API to sign instead.
        # In order for this to work:
        #
        # 1. Your VM needs the https://www.googleapis.com/auth/iam scope.
        #    You can specify this specific scope when creating a VM
        #    through the API or gcloud. When using Cloud Console,
        #    you'll need to specify the "full access to all Cloud APIs"
        #    scope. A VM's scopes can only be specified at creation time.
        #
        # 2. The VM's default service account needs the "Service Account Actor"
        #    role. This can be found under the "Project" category in Cloud
        #    Console, or roles/iam.serviceAccountActor in gcloud.
        signer = google.auth.iam.Signer(
            Request(), bootstrap_credentials, signer_email)
    else:
        # A Signer object can sign a JWT using the service account's key.
        signer = bootstrap_credentials.signer

    # Construct OAuth 2.0 service account credentials using the signer
    # and email acquired from the bootstrap credentials.
    service_account_credentials = google.oauth2.service_account.Credentials(
        signer, signer_email, token_uri=OAUTH_TOKEN_URI, additional_claims={
            'target_audience': client_id
        })

    # service_account_credentials gives us a JWT signed by the service
    # account. Next, we use that to obtain an OpenID Connect token,
    # which is a JWT signed by Google.
    google_open_id_connect_token = get_google_open_id_connect_token(
        service_account_credentials)

    # Fetch the Identity-Aware Proxy-protected URL, including an
    # Authorization header containing "Bearer " followed by a
    # Google-issued OpenID Connect token for the service account.
    resp = requests.request(
        method, url,
        headers={'Authorization': 'Bearer {}'.format(
            google_open_id_connect_token)}, **kwargs)
    if resp.status_code == 403:
        raise Exception('Service account {} does not have permission to '
                        'access the IAP-protected application.'.format(
                            signer_email))
    elif resp.status_code != 200:
        raise Exception(
            'Bad response from application: {!r} / {!r} / {!r}'.format(
                resp.status_code, resp.headers, resp.text))
    else:
        return resp.text


def get_google_open_id_connect_token(service_account_credentials):
    """Get an OpenID Connect token issued by Google for the service account.

    This function:
      1. Generates a JWT signed with the service account's private key
         containing a special "target_audience" claim.
      2. Sends it to the OAUTH_TOKEN_URI endpoint. Because the JWT in #1
         has a target_audience claim, that endpoint will respond with
         an OpenID Connect token for the service account -- in other words,
         a JWT signed by *Google*. The aud claim in this JWT will be
         set to the value from the target_audience claim in #1.

    For more information, see
    https://developers.google.com/identity/protocols/OAuth2ServiceAccount .
    The HTTP/REST example on that page describes the JWT structure and
    demonstrates how to call the token endpoint. (The example on that page
    shows how to get an OAuth2 access token; this code is using a
    modified version of it to get an OpenID Connect token.)
    """
    service_account_jwt = (
        service_account_credentials._make_authorization_grant_assertion())
    request = google.auth.transport.requests.Request()
    body = {
        'assertion': service_account_jwt,
        'grant_type': google.oauth2._client._JWT_GRANT_TYPE,
    }
    token_response = google.oauth2._client._token_endpoint_request(
        request, OAUTH_TOKEN_URI, body)
    return token_response['id_token']
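For example, from the Cloud Function the call would look something like this (both values below are placeholders; the client ID is the IAP OAuth client ID shown on the IAP page of the Cloud Console):
# Hypothetical values for illustration.
APP_URL = 'https://my-project.appspot.com/api/endpoint'
IAP_CLIENT_ID = '12345-abcdef.apps.googleusercontent.com'

page_body = make_iap_request(APP_URL, IAP_CLIENT_ID)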
In case you use a Cloud Function on Node.js, a Stack Overflow user created an example of how to achieve the same for Node.js in this post.
Related
I am trying to invoke a Cloud Run service using Cloud Tasks as described in the docs here.
I have a running Cloud Run service. If I make the service publicly accessible, it behaves as expected.
I have created a Cloud Tasks queue and I schedule the task with a local script, which uses my own account. The script looks like this:
from google.cloud import tasks_v2

client = tasks_v2.CloudTasksClient()

project = 'my-project'
queue = 'my-queue'
location = 'europe-west1'
url = 'https://url_to_my_service'

parent = client.queue_path(project, location, queue)

task = {
    'http_request': {
        'http_method': 'GET',
        'url': url,
        'oidc_token': {
            'service_account_email': 'my-service-account@my-project.iam.gserviceaccount.com'
        }
    }
}

response = client.create_task(parent, task)
print('Created task {}'.format(response.name))
I see the task appear in the queue, but it fails and retries immediately. The reason for this (by checking the logs) is that the Cloud Run service returns a 401 response.
My own user has the roles "Service Account Token Creator" and "Service Account User". It doesn't have the "Cloud Tasks Enqueuer" explicitly, but since I am able to create the task in the queue, I guess I have inherited the required permissions.
The service account "my-service-account#my-project.iam.gserviceaccount.com" (which I use in the task to get the OIDC token) has - amongst others - the following roles:
Cloud Tasks Enqueuer (Although I don't think it needs this one as I'm creating the task with my own account)
Cloud Tasks Task Runner
Cloud Tasks Viewer
Service Account Token Creator (I'm not sure whether this should be added to my own account - the one who schedules the task - or to the service account that should perform the call to Cloud Run)
Service Account User (same here)
Cloud Run Invoker
So I did a dirty trick: I created a key file for the service account, downloaded it locally and impersonated the account locally by adding it to my gcloud config with the key file. Next, I ran:
curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" https://url_to_my_service
That works! (By the way, it also works when I switch back to my own account)
Final tests: if I remove the oidc_token from the task when creating the task, I get a 403 response from Cloud Run! Not a 401...
If I remove the "Cloud Run Invoker" role from the service account and try again locally with curl, I also get a 403 instead of a 401.
If I finally make the Cloud Run service publicly accessible, everything works.
So, it seems that the Cloud Task fails to generate a token for the service account to authenticate properly at the Cloud Run service.
What am I missing?
I had the same issue; here was my fix:
Diagnosis: Generating OIDC tokens currently does not support custom domains in the audience parameter. I was using a custom domain for my Cloud Run service (https://my-service.my-domain.com) instead of the Cloud Run-generated url (found in the Cloud Run service dashboard) that looks like this: https://XXXXXX.run.app
Masking behavior: in the task being enqueued to Cloud Tasks, if the audience field for the oidc_token is not explicitly set, then the target url from the task is used as the audience in the request for the OIDC token.
In my case this meant that when enqueueing a task to be sent to the target https://my-service.my-domain.com/resource, the audience for generating the OIDC token was set to my custom domain https://my-service.my-domain.com/resource. Since custom domains are not supported when generating OIDC tokens, I was receiving 401 not authorized responses from the target service.
My fix: Explicitly populate the audience with the Cloud Run generated URL, so that a valid token is issued. In my client I was able to globally set the audience for all tasks targeting a given service with the base url: 'audience' : 'https://XXXXXX.run.app'. This generated a valid token. I did not need to change the url of the target resource itself. The resource stayed the same: 'url' : 'https://my-service.my-domain.com/resource'
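Adapted to the Python client from the question, a minimal sketch looks like this (the run.app host is a placeholder for your service's generated URL):
task = {
    'http_request': {
        'http_method': 'GET',
        'url': 'https://my-service.my-domain.com/resource',  # custom domain stays
        'oidc_token': {
            'service_account_email': 'my-service-account@my-project.iam.gserviceaccount.com',
            # Audience pinned to the Cloud Run-generated URL so a valid token is issued.
            'audience': 'https://XXXXXX.run.app'
        }
    }
}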
More Reading:
I've run into this problem before when setting up service-to-service authentication: Google Cloud Run Authentication Service-to-Service
1. I created a private Cloud Run service using this code:
import os

from flask import Flask
from flask import request

app = Flask(__name__)


@app.route('/index', methods=['GET', 'POST'])
def hello_world():
    target = os.environ.get('TARGET', 'World')
    print(target)
    return str(request.data)


if __name__ == "__main__":
    app.run(debug=True, host='0.0.0.0', port=int(os.environ.get('PORT', 8080)))
2. I created a service account with --role=roles/run.invoker that I will associate with the Cloud Task:
gcloud iam service-accounts create SERVICE-ACCOUNT_NAME \
    --display-name "DISPLAYED-SERVICE-ACCOUNT_NAME"

gcloud iam service-accounts list

gcloud run services add-iam-policy-binding SERVICE \
    --member=serviceAccount:SERVICE-ACCOUNT_NAME@PROJECT-ID.iam.gserviceaccount.com \
    --role=roles/run.invoker
3. I created a queue:
gcloud tasks queues create my-queue
4. I created a test.py:
from google.cloud import tasks_v2
from google.protobuf import timestamp_pb2
import datetime

# Create a client.
client = tasks_v2.CloudTasksClient()

# TODO(developer): Uncomment these lines and replace with your values.
project = 'your-project'
queue = 'your-queue'
location = 'europe-west2'  # app engine locations
url = 'https://helloworld/index'
payload = 'Hello from the Cloud Task'

# Construct the fully qualified queue name.
parent = client.queue_path(project, location, queue)

# Construct the request body.
task = {
    'http_request': {  # Specify the type of request.
        'http_method': 'POST',
        'url': url,  # The full url path that the task will be sent to.
        'oidc_token': {
            'service_account_email': "your-service-account"
        },
        'headers': {
            'Content-Type': 'application/json',
        }
    }
}

# Convert "seconds from now" into an rfc3339 datetime string.
d = datetime.datetime.utcnow() + datetime.timedelta(seconds=60)

# Create Timestamp protobuf.
timestamp = timestamp_pb2.Timestamp()
timestamp.FromDatetime(d)

# Add the timestamp to the tasks.
task['schedule_time'] = timestamp
task['name'] = 'projects/your-project/locations/app-engine-location/queues/your-queue/tasks/your-task'

converted_payload = payload.encode()

# Add the payload to the request.
task['http_request']['body'] = converted_payload

# Use the client to build and send the task.
response = client.create_task(parent, task)

print('Created task {}'.format(response.name))
# return response
5. I ran the code in Google Cloud Shell with my user account, which has the Owner role.
6. The response received has the form:
Created task projects/your-project/locations/app-engine-location/queues/your-queue/tasks/your-task
7. I checked the logs: success.
The next day I am no longer able to reproduce this issue. I can reproduce the 403 responses by removing the Cloud Run Invoker role, but I no longer get 401 responses with exactly the same code as yesterday.
I guess this was a temporary issue on Google's side?
Also, I noticed that it takes some time before updated policies are actually in place (1 to 2 minutes).
For those like me struggling through documentation and Stack Overflow while getting continuous UNAUTHORIZED responses on Cloud Tasks HTTP requests:
As was written in this thread, you should provide an audience for the oidcToken you send to Cloud Tasks, and ensure it exactly matches your resource.
For instance, if you have a Cloud Function named my-awesome-cloud-function and your task request url is https://REGION-PROJECT-ID.cloudfunctions.net/my-awesome-cloud-function/api/v1/hello, you need to ensure that you set the function url itself as the audience:
{
  serviceAccountEmail: 'SERVICE-ACCOUNT_NAME@PROJECT-ID.iam.gserviceaccount.com',
  audience: 'https://REGION-PROJECT-ID.cloudfunctions.net/my-awesome-cloud-function'
}
Otherwise it seems the full url is used as the audience, which leads to an error.
I am using the app.yaml's login: admin in handlers to restrict access to my app only to selected Google accounts (which I can edit in IAM). I'm using the python27 standard environment on GAE.
I would like to use the JSON API my app exposes from another server app (not hosted on GAE). Using a service account looks like a straightforward solution, but I am unable to get the scopes or the request itself right so that the endpoint sees an authenticated Google user.
The service-user currently has Project/Viewer role in the IAM. I tried a few more like AppEngine/Viewer, AppEngine/Admin. I also tried some more scopes.
My test code:
"""Try do do an API request to a deployed app
with the current service account.
https://google-auth.readthedocs.io/en/latest/user-guide.html
"""
import sys
from google.auth.transport.requests import AuthorizedSession
from google.oauth2 import service_account
def main():
if len(sys.argv) < 2:
sys.exit("use: %s url" % sys.argv[0])
credentials = service_account.Credentials.from_service_account_file(
'service-user.json')
scoped_credentials = credentials.with_scopes(
['https://www.googleapis.com/auth/cloud-platform.read-only'])
authed_http = AuthorizedSession(scoped_credentials)
response = authed_http.request('GET', sys.argv[1])
print response.status_code, response.reason
print response.text.encode('utf-8')
if __name__ == '__main__':
main()
There is no error; the request simply behaves as if unauthenticated. I checked the headers on the server: when requesting from the browser there are several session cookies, while the AuthorizedSession request contains a single Authorization: Bearer .. header.
Normally the role you would need is App Engine Admin; it's designed for this purpose. It should also work with the viewer/editor/owner primitive roles. That being said, to make sure it's not a role issue, simply give it the project Owner role plus the explicit App Engine Admin role and try again; this will eliminate any role-based issue.
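For example, a grant like the following (the project ID and service account address are placeholders) adds the App Engine Admin role to the service account:
gcloud projects add-iam-policy-binding PROJECT-ID \
    --member=serviceAccount:service-user@PROJECT-ID.iam.gserviceaccount.com \
    --role=roles/appengine.appAdmin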
Let me know if that works for you.
I have a web service hosted on a home PC which is on a public dynamic IP. I also have a domain name hosted by Google Domains (I will refer to it in this post as www.example.com).
Via Google App Engine I'm able to store the dynamic IP in memcache, and once a client points to the myhome subdomain it is redirected to the host IP (e.g. myhome.example.com -> 111.111.1.100).
Here is my Python code running on App Engine:
import json

from webapp2 import RequestHandler, WSGIApplication
from google.appengine.api import memcache


class MainPage(RequestHandler):
    # Memcache keys
    key_my_current_ip = "my_current_ip"

    def get(self):
        response_text = "example.com is online!"

        # MyHome request
        if (self.request.host.lower().startswith('myhome.example.com') or
                self.request.host.lower().startswith('myhome-xyz.appspot.com')):
            my_current_ip = memcache.get(self.key_my_current_ip)
            if my_current_ip:
                url = self.request.url.replace(self.request.host, my_current_ip)
                # move to https
                if url.startswith("http://"):
                    url = url.replace("http://", "https://")
                return self.redirect(url, True)
            response_text = "myhome is offline!"

        # Default site
        self.response.headers['Content-Type'] = 'text/plain'
        self.response.write(response_text)

    def post(self):
        try:
            # Store request remote ip
            body = json.loads(self.request.body)
            command = body['command']
            if command == 'ping':
                memcache.set(key=self.key_my_current_ip, value=self.request.remote_addr)
        except:
            raise Exception("bad request!")


app = WSGIApplication([('/.*', MainPage), ], debug=True)
What I found is that, by adding an "A" record for the subdomain, myhome.example.com will point directly to that IP without the need for a redirect.
Is there a way I can update that "A" record using Google App Engine or the Google Domains API?
def post(self):
    try:
        # Store request remote ip
        body = json.loads(self.request.body)
        command = body['command']
        if command == 'ping':
            # --> UPDATE "A" record with this IP: <self.request.remote_addr>
    except:
        raise Exception("bad request!")
As Google states, you can use an HTTPS POST to achieve this.
You can perform updates manually with the API by making a POST request (GET is also allowed) to the following url:
https://domains.google.com/nic/update
The API requires HTTPS. Here’s an example request:
https://username:password@domains.google.com/nic/update?hostname=subdomain.yourdomain.com&myip=1.2.3.4
Note: You must set a user agent in your request as well. Web browsers will generally add this for you when testing via the above url. In any case, the final HTTP request sent to our servers should look something like this:
Example HTTP query:
POST /nic/update?hostname=subdomain.yourdomain.com&myip=1.2.3.4 HTTP/1.1
Host: domains.google.com
Authorization: Basic base64-encoded-auth-string
User-Agent: Chrome/41.0 your_email@yourdomain.com
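Putting this together with the question's handler, here is a minimal sketch using urlfetch on the python27 runtime. USERNAME and PASSWORD are the Dynamic DNS credentials generated in the Google Domains console for the subdomain; the function name is hypothetical:
import base64

from google.appengine.api import urlfetch

USERNAME = 'generated-username'  # from the Dynamic DNS section of Google Domains
PASSWORD = 'generated-password'


def update_a_record(hostname, ip):
    # Basic auth header built from the Dynamic DNS credentials.
    auth = base64.b64encode('%s:%s' % (USERNAME, PASSWORD))
    url = ('https://domains.google.com/nic/update'
           '?hostname=%s&myip=%s' % (hostname, ip))
    result = urlfetch.fetch(
        url,
        method=urlfetch.POST,
        headers={'Authorization': 'Basic ' + auth,
                 # A User-Agent is required by the API.
                 'User-Agent': 'example-dyndns-client/1.0'})
    # The body is a status string such as 'good 1.2.3.4' or 'nochg 1.2.3.4'.
    return result.content
In the post() handler above, the marked line would then become update_a_record('myhome.example.com', self.request.remote_addr).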
Summary: Using installed (native) application credentials, I am able to authorize and use a pull task queue using OAuth 2.0 and the REST API v1beta2. This is OK, but I would prefer to authorize and use the task queue under a service account, since using installed app credentials requires me to use my own email address in the task queue ACLs (see below). However, when I authorize using a service account (2-legged OAuth 2.0), after adding the service account email address to the queue ACL, I am getting permission errors using the queue. Is there a way to use a task queue under a service account, from a native client application?
Details: I was a bit confused about the task queue ACL, which specifies permissions using an email address, while installed app credentials don't seem to have an associated email address. But basically it uses whatever Google account you are logged into in the browser when you first authorize via 3-legged OAuth. So when I include my own email address under the queue ACL, it works fine:
queue:
- name: my-queue
  mode: pull
  acl:
  - user_email: my-email@example.com
  - writer_email: my-email@example.com
But if I add my service account email address the same way, and authorize using the service account, I get an error 403: you are not allowed to make this api call.
queue:
- name: my-queue
  mode: pull
  acl:
  - user_email: 12345-abcd@developer.gserviceaccount.com
  - writer_email: 12345-abcd@developer.gserviceaccount.com
I have double-checked that the service account email address is correct, and tried quoting it in the queue.yaml file; I get the same error.
This indicates to me that the task queue doesn't associate my authorization with the service account email address for some reason. Either it doesn't like 2-legged OAuth, or something else?
Note that the service-account authorization itself does work (Ruby code below); it's the subsequent API call that doesn't, and clearly the ACL permissions are the immediate cause of the error.
# issuer == service account email address
def service_auth!(issuer, p12_file)
  key = Google::APIClient::KeyUtils.load_from_pkcs12(p12_file, 'notasecret')
  client.authorization = Signet::OAuth2::Client.new(
    :token_credential_uri => 'https://accounts.google.com/o/oauth2/token',
    :audience => 'https://accounts.google.com/o/oauth2/token',
    :scope => 'https://www.googleapis.com/auth/prediction',
    :issuer => issuer,
    :signing_key => key)
  client.authorization.fetch_access_token!
  api = client.discovered_api(TASKQUEUE_API, TASKQUEUE_API_VERSION)
  return client, api
end
I think the issue may just be your scope ('https://www.googleapis.com/auth/prediction').
I'm able to authorize and make calls to the TaskQueue API using a service account without any issues. My example is in Python however (sorry I am not a Rubyist):
queue.yaml:
queue:
- name: pull
  mode: pull
  retry_parameters:
    task_retry_limit: 5
  acl:
  - user_email: 12345-abcd@developer.gserviceaccount.com
  - writer_email: 12345-abcd@developer.gserviceaccount.com
source:
import httplib2

from apiclient.discovery import build
from oauth2client.client import SignedJwtAssertionCredentials

client_email = '12345-abcd@developer.gserviceaccount.com'
with open('service_account.p12', 'rb') as f:
    private_key = f.read()

credentials = SignedJwtAssertionCredentials(client_email, private_key,
    ['https://www.googleapis.com/auth/taskqueue',
     'https://www.googleapis.com/auth/taskqueue.consumer'])
http = httplib2.Http()
http = credentials.authorize(http)

api = build('taskqueue', 'v1beta2', http=http)

# Do stuff with api ...
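As a usage sketch, leasing a task from that pull queue looks roughly like this (values are placeholders; note that with the v1beta2 API many users report having to pass the project in its "s~" form):
# Lease one task for 60 seconds from the 'pull' queue.
lease = api.tasks().lease(
    project='s~your-project', taskqueue='pull',
    leaseSecs=60, numTasks=1).execute()
for task in lease.get('items', []):
    print(task['id'])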
I am trying to build a Gmail service which will read a user's emails, once their IT admin has authenticated the App on the Apps marketplace. From the documentation, it seemed service accounts would be the right fit, for which I tried both:
scope = "https://www.googleapis.com/auth/gmail.readonly"
project_number = "c****io"
authorization_token, _ = app_identity.get_access_token(scope)
logging.info("Using token %s to represent identity %s",
             authorization_token, app_identity.get_service_account_name())
# authorization_token = "OAuth code pasted from playground"
response = urlfetch.fetch(
    "https://www.googleapis.com/gmail/v1/users/me/messages",
    method=urlfetch.GET,
    headers={"Content-Type": "application/json",
             "Authorization": "OAuth " + authorization_token})
and
credentials = AppAssertionCredentials(scope=scope)
http = credentials.authorize(httplib2.Http(memcache))
gmail_service = build(serviceName='gmail', version='v1', http=http)
listReply = gmail_service.users().messages().list(userId='me', q='').execute()
I then started dev_appserver.py as per "Unable to access BigQuery from local App Engine development server".
However, I get an HTTP error code 500 "Backend Error". Same code, but when I paste the access_token from the OAuth playground, it works fine (HTTP 200). I'm on my local machine in case that makes any difference. Wondering if I'm missing anything? I'm trying to find all emails for all users of a particular domain where their IT admin has installed my Google Marketplace App.
Thanks for the help!
To do this type of impersonation, you should create a JWT and set the "sub" field to the email address of the user whose mailbox you want to access. Developer documentation: Using OAuth 2.0 for Server to Server Applications: Additional claims.
The Python code to construct the credentials will look something like this:
from oauth2client.client import SignedJwtAssertionCredentials

credentials = SignedJwtAssertionCredentials(
    "<service account email>@developer.gserviceaccount.com",
    file("secret-privatekey.pem", "rb").read(),
    scope=["https://www.googleapis.com/auth/gmail.readonly"],
    sub="<user to impersonate>@your-domain.com"
)
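From there, a minimal usage sketch (assuming the oauth2client/apiclient stack of that era, and that domain-wide delegation has been granted to the service account in the Admin console):
import httplib2

from apiclient.discovery import build

http = credentials.authorize(httplib2.Http())
gmail_service = build('gmail', 'v1', http=http)

# Lists message IDs in the impersonated user's mailbox.
listReply = gmail_service.users().messages().list(userId='me').execute()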