Injecting secrets from CI with GCP AppEngine's app.yml - google-app-engine

Thanks for your help in advance!
I'm trying to create a simple but secure way to inject secrets from a commandline call to gcloud app deploy app.yml.
I'm relying on secret information stored in environment variables in my app's runtime, which I can set using the following app.yml:
runtime: nodejs
env: flex
env_variables:
SECRET_KEY: "passed-in-value"
For example, I'd like to be able to do something like
SECRET_KEY=${LOCAL_SECRET_VALUE} gcloud app deploy app.yml
or use a cli argument if there is one, but I don't see any here.
At the end of the day, I just need a simple way to inject secrets so I can deploy to my testing environment from my local machine, or to production from a GitHub Action. I could write the app.yml dynamically from my CI script and inject local environment variables (a sketch of that workaround is below), but it seems like there must be a more canonical way.
I would like a solution which works with both standard and flex App Engine if possible.
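A minimal sketch of that dynamic-write workaround, assuming a template file app.tpl.yml and the envsubst tool from GNU gettext (the template name and variable are placeholders):
# app.tpl.yml keeps the placeholder line:  SECRET_KEY: "${SECRET_KEY}"
export SECRET_KEY="${LOCAL_SECRET_VALUE}"
envsubst < app.tpl.yml > app.yml   # render the real app.yml on the fly
gcloud app deploy app.yml          # deploy with the rendered file (keep it out of git)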

Google Cloud has quietly released Secret Manager in beta. The APIs are available but the client libraries aren't yet. The service will be announced soon and will help you manage your secrets.
The principle is simple: in your yaml files and in your Git repository, you only save a reference to the secret, for example mySecret#2, where 2 is the version number and mySecret is the name of your secret.
Then, perform a simple call to the API to get access to the secret:
https://secretmanager.googleapis.com/v1beta1/projects/<myProject>/secrets/<MySecret>/versions/<Version>:access
The call must be secured with a Bearer access token in the header. Be sure that the App Engine service account (default: YOUR_PROJECT_ID@appspot.gserviceaccount.com) has the required role, at least roles/secretmanager.secretAccessor.
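For reference, a minimal sketch of that raw call from a local shell, assuming gcloud and jq are installed and the placeholders <myProject>, <MySecret> and <Version> are replaced with your own values:
TOKEN=$(gcloud auth print-access-token)
curl -s -H "Authorization: Bearer ${TOKEN}" \
  "https://secretmanager.googleapis.com/v1beta1/projects/<myProject>/secrets/<MySecret>/versions/<Version>:access" \
  | jq -r '.payload.data' | base64 --decode   # the secret payload is returned base64-encoded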
Note: I don't know how the library will be implemented if you request the secret without giving an explicit version. Will the first version be taken, or the latest?
For now, there is a lot of manual overhead (get the token, build the URL, manage errors, ...). If you can wait a couple of weeks, life will be easier with the client libraries!

Related

Is there a way to dynamically inject sensitive environment variables into a serverless React frontend application using Azure/Github Actions?

I'm sort of restricted to Azure since that is what the client is already using.
But basically, we have this React website which is your typical react-scripts no server website, which means that there's nowhere in Azure Static Webapps to set environment variables for a frontend application.
I saw this on the Azure Static Web Apps configuration page, but given the following restrictions it won't work for my use case, because there is no backend API for my frontend application - the backend associated with the frontend is published separately to Azure App Services. And I need the secrets on the frontend to use some npm packages that require them, which I would prefer to do on the frontend instead of the backend.
Are available as environment variables to the backend API of a static web app
Can be used to store secrets used in authentication configuration
Are encrypted at rest
Are copied to staging and production environments
May only be alphanumeric characters, ., and _
I was doing some more research, and this seems to sort of be up the alley of what I'm looking for:
https://learn.microsoft.com/en-us/answers/questions/249842/inject-environment-variables-from-pipeline-to-azur.html
Essentially, I really want to avoid hardcoding secrets into the React code because that's bad practice.
I personally see a few different (potential) options:
Create an endpoint on the backend Spring Boot api that simply serves all environment variables
This is the most trivial to implement, but I have concerns about security with this approach. As my frontend has no access to any kind of secrets, there's no way for it to pass a secure token to the backend or otherwise authenticate the request, so someone could conceivably open the Chrome network inspector, see that I'm making a request to /getEnvironmentVariables, and recreate the request. The only way I can see to prevent this is to enforce IP restrictions on the backend API, so it only accepts incoming requests from the IP address of my frontend website, but honestly that just sounds like a lot of overhead to worry about - especially because we're building the product as more of a POC, so we don't have access to their production environments and can't just test it like that.
Have the Azure Static Webapps Github Actions workflow somehow inject environment variables
So I've actually done something similar with GCP before. In order to log in to a GCP service account to deploy the app during continuous build, the workaround was to encrypt a publicly viewable file that could be freely uploaded to whatever public repo, and that could only (realistically) be decrypted using secrets set on the CI/CD pipeline, which would be Travis CI or, in my case, GitHub Actions. And I know how to set secrets on GitHub Actions, but I'm not sure how realistic a solution like that would be for my use case, because decrypting a file is not enough: it has to be formatted or rewritten in such a way that React is able to read it, and I know React is a nightmare to work with fs and whatnot, so I'm really worried about the viability of going down a path like that. Maybe a more rudimentary approach would be writing some kind of bash script that could run in the GitHub Actions workflow, using the GitHub Actions secrets to store the environment variables I want to inject, and running a rudimentary file edit on a small React file that is responsible for just dispensing environment variables, before packaging with npm and deploying to Azure (a sketch of this is below).
TLDR: in GitHub Actions I have a window where I have access to a Linux environment and any environment variables I want, and I want to somehow inject those into React during CI/CD before deployment.
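A minimal sketch of that last idea as a GitHub Actions step, assuming a react-scripts app (which bakes REACT_APP_* variables into the bundle at build time) and a repository secret named MY_API_KEY - both names are placeholders:
- name: Build with injected env vars
  env:
    REACT_APP_API_KEY: ${{ secrets.MY_API_KEY }}   # hypothetical secret name
  run: |
    npm ci
    npm run build   # react-scripts reads REACT_APP_* variables at build time
Keep in mind that anything embedded this way still ends up readable in the shipped frontend bundle.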

How to mix Cloud Run and App Engine deployments in one project?

I have a Quarkus application already deployed on Google Cloud Run.
It depends on MySQL, hence there is an instance started on Cloud SQL.
Next step in my deployment process is to add keycloak. From what I've read the best option seems to be Google App Engine.
The accepted answer to this question gave me some good insight into what needs to be done ... mostly.
What I did was:
Locally I made a sub-directory in the main project.
In that directory I added the app.yaml and the Dockerfile (as described here for instance).
There I executed those two commands: gcloud init and gcloud app deploy.
I had my doubts about this set up and they were backed up by the error I got eventually:
ERROR: (gcloud.app.deploy) INVALID_ARGUMENT: The first service (module) you upload to a new application must be the 'default' service (module). Please upload a version of the 'default' service (module) before uploading a version for the 'morph-keycloak-service' service (module).
I understand my setup breaks the overall structure of the project, but I'm not sure how to mix those two applications with the right services.
I understand Keycloak is a stateful application, hence it cannot live on Cloud Run (by the way, the intention is for Keycloak to use the same database instance shared with the application).
So does anyone know a more sensible setup, or what I can move in mine in order to fix it?
In short:
The answer really is in reading the error message (thanks @gaefan) - the error itself explains enough. So I just commented out the service: my-keycloak-service line in the app.yaml (thus letting gcloud implicitly mark it as the default service) and the deployment continued.
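For reference, a sketch of the adjusted app.yaml, assuming the Dockerfile-based flexible setup described in the question:
runtime: custom   # assumption: built from the Dockerfile in the same directory
env: flex
# service: my-keycloak-service   # commented out, so gcloud treats this deploy as the 'default' service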
In the end Keycloak didn't connect to the database, but if I don't manage to adjust the configuration that will probably be the subject of a different question.
On the point of project structure and functionality:
First off, thanks @NoCommandLine and @guillaume-blaquiere for your input!
@NoCommandLine: the application on Cloud Run is sort of a headless, REST-API-only backend. Most of the API calls are secured by Keycloak. A next step in the deployment process would be to port an existing UI (React) client to Firebase Hosting (or to another suitable service - I'm still not completely sure which approach is best), and in order for users to work with this client properly they must first go through SSO via Keycloak.
I'm quite new to GCP and the number of available options and their variants is still overwhelming to me - one must get familiar with the nuances, but I guess that takes time. So I'm still taking suggestions on how to adjust my project structure to better fit the services stack. Thanks!

How to handle secrets in Google App Engine?

My application needs a bunch of secrets to run: database credentials, API credentials, etc. It's running in Google App Engine Standard Java 11. I need these secrets as environment variables or as arguments to my application, so that my framework can pick them up and establish the connections accordingly. My particular framework is Spring Boot, but I believe Django, Rails and many others use the same methods.
What's the best way of doing this?
One of the answers I get to this question is to use Google Cloud Key Management, which looks promising, but I can't figure out how to turn those values into environment variables in App Engine. Is it possible? I've read Setting Up Authentication for Server to Server Production Applications, but I don't see any indication there about how to turn the secrets into environment variables in App Engine (am I missing it?).
The other alternatives I've seen include hard-coding them in app.yaml or another file that is never committed and lives on my machine, which means I'm the only one who can deploy... I can't even deploy from another machine. This is problematic for me.
Another potential solution I've seen is to delegate the problem to Google Cloud Build, so that it fetches a value/file from CKM and pushes it to App Engine (1, 2). I'm not using GCB and I doubt I will, since it's so basic.
I really wish App Engine had a environment variables page like Heroku does.
[Update] (as of Feb 2020) GCP's Secret Manager is in beta, see:
https://cloud.google.com/secret-manager/docs/overview
For Java-specific implementation, see:
https://cloud.google.com/secret-manager/docs/creating-and-accessing-secrets#secretmanager-access-secret-version-java
Your specific solution would depend how your app is set up, but you should be able to access the secret(s) and create environment variables with the values or otherwise pass them to your app.
You can use GCP IAM to create a service account to manage access, or add a role like Secret Manager Secret Accessor to an existing member/service (e.g., in this case I added that permission to the App Engine default service account).
I tried it out with Node.js on GAE standard, and it seems to work well; I didn't do any performance tests but it should be fine, particularly if you primarily need the secrets on app start or as part of a build process.
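As a quick sanity check outside any client library, a secret can also be read with the gcloud CLI (a sketch, assuming a secret named my-db-password already exists in the current project):
gcloud secrets versions access latest --secret=my-db-password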
For local (non-GCP) development/testing, you can create a service account with appropriate secret manager permissions and get the json service key. You then set an environment variable named GOOGLE_APPLICATION_CREDENTIALS to the path of the file, e.g.:
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/local_service_key.json
and the app running in that shell session should pick up the permissions without any additional auth code.
See: https://cloud.google.com/docs/authentication/getting-started
(You would want to exclude the key file from version control.)
At this date, App Engine standard does not have a Google-provided solution for storing application secrets.
[UPDATE]
I noticed your comment on another answer that you require environment variables to be valid before you have application control. In that case, you have no options for App Engine today. I would deploy to a different service (Kubernetes) better suited to your system goals that can provide managed secrets.
[END UPDATE]
You have two choices for secrets for App Engine Standard:
Store the secrets as environment variables in app.yaml
Store the secrets someplace else.
For both options, you can add a layer of security by encrypting them. However, encryption adds another secret (the decryption key) that you must somehow provide to your app - a chicken-or-egg situation.
App Engine Standard uses a Service Account. This service account can be used as an identity to control access to other resources. Examples of other resources are KMS and Cloud Storage. This means that you can securely access KMS or Cloud Storage without adding another secret to App Engine.
Let's assume that your company wants all application secrets encrypted. We can use the App Engine Service Account as the identity authorized to access KMS for a single key.
Note: The following examples use Windows syntax. Replace the line continuation ^ with \ for Linux/macOS.
Create the KMS Keyring. Keyrings cannot be deleted, so this is a one-time operation.
set GCP_KMS_KEYRING=app-keyring
set GCP_KMS_KEYNAME=app-keyname
gcloud kms keyrings create %GCP_KMS_KEYRING% --location global
Create the KMS Key.
gcloud kms keys create %GCP_KMS_KEYNAME% ^
--location global ^
--keyring %GCP_KMS_KEYRING% ^
--purpose encryption
Add the service account to the KMS policy for the keyring and key that we created.
This will allow App Engine to decrypt data without requiring secrets for KMS. The service account identity provides access control. No roles are required for KMS. You will need to provide the KMS Keyring and Keyname which can be included in app.yaml.
set GCP_SA=<replace with the app engine service account email address>
set GCP_KMS_ROLE=roles/cloudkms.cryptoKeyDecrypter
gcloud kms keys add-iam-policy-binding %GCP_KMS_KEYNAME% ^
--location global ^
--keyring %GCP_KMS_KEYRING% ^
--member serviceAccount:%GCP_SA% ^
--role %GCP_KMS_ROLE%
For this example, let's assume that you need to access a MySQL database. We will store the credentials in a JSON file and encrypt it. The file is named config.json.
{
  "DB_HOST": "127.0.0.1",
  "DB_PORT": "3306",
  "DB_USER": "Roberts",
  "DB_PASS": "Keep-This-Secret"
}
Encrypt config.json using Cloud KMS and store the encrypted results in config.enc:
call gcloud kms encrypt ^
--location=global ^
--keyring %GCP_KMS_KEYRING% ^
--key=%GCP_KMS_KEYNAME% ^
--plaintext-file=config.json ^
--ciphertext-file=config.enc
The encrypted file can be stored in Cloud Storage. Since it is encrypted, you could store the file with your build files, but I do not recommend that.
The final piece is the Java code in your program that uses KMS to decrypt the file config.enc. Google has a number of examples of KMS decryption:
Java KMS Decrypt
Java Samples
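To verify the round trip locally before wiring the decryption into the app, the file can be decrypted with the same key (a sketch in the same Windows syntax as above; the output filename is arbitrary):
call gcloud kms decrypt ^
 --location=global ^
 --keyring %GCP_KMS_KEYRING% ^
 --key=%GCP_KMS_KEYNAME% ^
 --ciphertext-file=config.enc ^
 --plaintext-file=config.decrypted.json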
You can pass secrets as env variables at build time. This example retrieves a Stripe API key and updates app.yaml within Cloud Build, ensuring the local file is never accidentally checked into source control.
First, make sure the Cloud Build service account has the IAM role Secret Manager Secret Accessor.
An app.dev.yaml file with a placeholder for the env variable:
runtime: python39
env: standard
instance_class: F4
automatic_scaling:
  max_instances: 1
env_variables:
  STRIPE_API_KEY: STRIPE_API_VAR
  # etc
  # etc
cloudbuild.yaml to retrieve the secret and insert it at build time:
steps:
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: 'bash'
  args:
  - -c
  - |
    echo 'my api key from secret manager is '$$STRIPE_API_VAR
    sed -i "s|STRIPE_API_VAR|$$STRIPE_API_VAR|g" app.dev.yaml
    cat app.dev.yaml  # you can now see the secret value inserted as the env variable
    gcloud app deploy --appyaml=app.dev.yaml  # deploy with the updated app.yaml; the local copy of the file is not changed
  secretEnv: ['STRIPE_API_VAR']
availableSecrets:
  secretManager:
  - versionName: projects/$PROJECT_ID/secrets/stripe-api-key/versions/latest
    env: 'STRIPE_API_VAR'
Berglas looks interesting.
Another option is to put the secrets in app.yaml file(s) (you can have more than one) and encrypt them before committing them to version control.
There are many tools to encrypt secrets before putting them in version control, like https://github.com/StackExchange/blackbox (a sketch of the typical workflow follows the pros/cons below).
Pros:
Very versatile
I find it simple to understand compared to other options
Easy to get started
Cons:
You can't really remove access for a person (since the file could always be copied), so you have to rotate secrets sometimes
It can be hard to keep the unencrypted files out of the repo. Once you get used to it, and have ignore files and/or scripts, it's usually OK.
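A sketch of the typical workflow with the blackbox tool mentioned above (command names are from the project's README; verify against the repo before relying on them):
blackbox_initialize                  # set up the .blackbox directory in the repo
blackbox_addadmin you@example.com    # register a GPG key that is allowed to decrypt
blackbox_register_new_file app.yaml  # encrypt app.yaml and commit app.yaml.gpg
blackbox_decrypt_all_files           # decrypt locally (or in CI) right before deploying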
For secret management, I'm personally a fan of the Berglas project. It's based on KMS and, in addition, manages DEKs and KEKs.
Today it's written in Go and there is no Java client. I wrote a Python library for some colleagues; I can write a Java package if you plan to use it. It's not very hard.
Let me know.

Securing GAE env variables by using gsutil builder to source app.yaml during build?

I have the same problem as the one mentioned here: Securely storing environment variables in GAE with app.yaml - namely:
"I need to store API keys and other sensitive information in app.yaml as environment variables for deployment on GAE. The issue with this is that if I push app.yaml to GitHub, this information becomes public (not good)."
Additionally I'm looking to check the following boxes:
Prevent vendor lock-in (as much as possible) & ability to take my dockerfile elsewhere.
Ease of deployment with GitHub. I want a push to the master which triggers a build.
Minimal setup, or a suitable effort and workflow for a solo-dev or small team.
My research yielded the following:
Securely storing environment variables in GAE with app.yaml
How to set environment variables/app secrets in Google App Engine
GAE : How to deploy various environments with secrets?
appengine and OS environment variables
How to pass environment variables to the app.yaml using cloud build
A lot of good information came from GAE : How to deploy various environments with secrets?, where the author listed three workarounds and the reasons not to use them:
Use Google KMS - allows us to put encrypted secrets directly into the project, but it requires us to put custom code in our apps to decrypt them. It creates a different environment management between local, staging and production. It increases the risk of bugs due to the complexity.
Store secrets in Google Datastore - I tried it, I created a helper that searches env vars in process.env, then in cache and ultimately in Datastore. But like KMS, it increases complexity a lot.
Store secrets in a JSON file and put it on Google Cloud Storage: again, it requires loading env variables through a helper that checks env vars, then loads the file, etc.
However the best solution for me came from How to pass environment variables to the app.yaml using cloud build
It allows me to have the following deployment flow using GAE flexible environment for nodejs:
A merge to my Github master branch triggers a cloud build
My first step in my cloudbuild.yaml sources my app.yaml file using the gsutil builder (see the sketch after this list), since app.yaml is not in source control
My app.yaml points to my dockerfile for my runtime and has my env variables
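A minimal sketch of that cloudbuild.yaml, assuming the un-committed app.yaml is kept at gs://my-config-bucket/app.yaml (the bucket name is a placeholder):
steps:
# fetch the app.yaml that is deliberately kept out of source control
- name: 'gcr.io/cloud-builders/gsutil'
  args: ['cp', 'gs://my-config-bucket/app.yaml', 'app.yaml']
# deploy using the fetched file
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: 'gcloud'
  args: ['app', 'deploy', 'app.yaml']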
This checks all my boxes and was a fairly easy solution, but it definitely doesn't seem to be a popular one, so am I missing something here?
Most importantly are there any security concerns?
I am amazed at how you did your research; you actually collected all the possible ways to achieve it.
As you mentioned, there are many ways to pass the variables to the application, but I believe that the solution you propose (storing the variables in Google Cloud Storage and retrieving them with Google Cloud Build) is optimal for your purposes. It doesn't require much code and it's elegant; I hope this post helps people become aware of this solution. Regarding your security concerns, this solution offers a high degree of security, as you can set the file in the bucket to be accessible only to Google Cloud Build and the owner of the project.
Another solution I've employed is to store the env variables directly in the Cloud Build trigger substitution variables and use a custom Cloud Builder, envsubst, to render a templated app.yaml.
I could not find documentation on how the substitution variables are stored in the Cloud Build trigger (any reference here would be helpful). However, I think most data in Google Cloud is encrypted at rest and encrypted in use and transfer. The main drawback is that the values are shown in plain text, so sensitive information like API keys is not obscured, and anyone who has access to the trigger can see it.
One benefit is that this keeps the templated app.yaml close to the code you'll be using it with, and can be reviewed in the same pull request. Also you don't need to use another service, like Google Storage.
Steps:
Add the envsubst Cloud builder to your project, see instructions here.
Create a templated app.yaml file, e.g.
runtime: <your runtime>
service: ${GAE_SERVICE}
env_variables:
  MY_VAR: ${MY_VAR}
  MY_VAR_2: ${MY_VAR_2}
Add an app.yaml template rendering step in cloudbuild.yaml
steps:
- id: "render-app-yaml"
  name: "gcr.io/${PROJECT_ID}/envsubst"
  env:
  - "GAE_SERVICE=${_GAE_SERVICE}"
  - "MY_VAR=${_MY_VAR}"
  - "MY_VAR_2=${_MY_VAR_2}"
  args: ["app.yaml"]
Add the substitution variables in the Cloud Build trigger, e.g. _GAE_SERVICE, _MY_VAR, and _MY_VAR_2. Note: user-defined variables in the trigger are prefixed with a _.
When I was doing my research, I couldn't find any solution like this one either. Any feedback is welcome.

Redeploying OpenAPI spec into App Engine standard environment

When I make changes to the OpenAPI spec which don't involve any changes in the code, do I need to redeploy the code along with the new specification?
When I deploy the OpenAPI spec with the gcloud service-management deploy command, I get back in its output a new service configuration version, which I should set as the ENDPOINTS_SERVICE_VERSION parameter in the app.yaml file.
I'm not sure, but it looks like I have to redeploy the app every time I deploy a new version of my OpenAPI spec: even when the application code doesn't change, I still need to point it to the right service configuration version with the new ENDPOINTS_SERVICE_VERSION value. Is that right?
If so, then it's different from what is described in the How API Deployment Works document for AE flex environment under the "Redeployment" section, where it says the following:
You can use the gcloud service-management deploy command to update just the API specification without redeploying your backend API server code or the Extensible Service Proxy. This is useful if you are changing a configuration-only detail.
When you change the API specification, the Service Control API backing your running service instances will pick up the change because it depends on the same service configuration.
Maybe someone could help to clarify how exactly it works in case of AE standard environment?
Are you using the Endpoints Frameworks? If so, then you will have to deploy the app when you make a new service config version, because, as you mentioned, the app.yaml specifies which service config to use.
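For reference, a sketch of the app.yaml fragment in question for an Endpoints deployment - the service name and the config version string (the one printed by gcloud service-management deploy) are placeholders:
env_variables:
  ENDPOINTS_SERVICE_NAME: my-api.endpoints.my-project.cloud.goog
  ENDPOINTS_SERVICE_VERSION: 2017-01-01r0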
