How to use Google Cloud Secret Manager secrets in Cloud Run with React?

I'm developing a react website that uses some sensitive API keys.
I'm hosting the application on Google Cloud Run, via a container.
I would like to access the API keys through Google Cloud Secret Manager, but I am not able to: when I try to access them, the value returned is "undefined".
Here is my code snippet:
console.log(process.env.REACT_APP_API_KEY)
And the Dockerfile:
FROM node:14-alpine AS builder
WORKDIR /app
COPY package.json ./
COPY yarn.lock ./
RUN yarn install --frozen-lockfile
COPY . .
RUN yarn build
FROM nginx:1.19-alpine AS server
COPY ./nginx.conf /etc/nginx/conf.d/default.conf
COPY --from=builder ./app/build /usr/share/nginx/html
I'm deploying the application using the gcloud command:
gcloud run deploy test-gcr-react \
--image gcr.io/test-gcr-react-app/test-gcr-react \
--region=southamerica-east1 \
--set-secrets=REACT_APP_API_KEY=REACT_APP_API_KEY:latest \
--allow-unauthenticated
PS: I have already granted the "Default compute service account" the "Secret Manager Secret Accessor" role on the secret REACT_APP_API_KEY.

It's a common issue. Think about the role and the behavior of each layer:
NGINX serves the React JS files directly.
The React JS runs directly in the user's browser.
Therefore the env var (your secret) is NEVER read: NGINX doesn't care about it, and the user's browser can't access Cloud Run env vars.
The trick here is to set the API key in a config file of your React app at build time (in your Dockerfile). The secret must be in the JS before runtime; NGINX only serves the JS, it doesn't update/change it.
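A minimal sketch of that idea, assuming you build the image yourself and inject the key as a build argument (the ARG plumbing is my assumption, not part of your original setup). Keep in mind this bakes the key into the public JS bundle, so it only makes sense for keys that are safe to expose to browsers:
FROM node:14-alpine AS builder
# Hypothetical build arg: Create React App inlines any REACT_APP_* variable
# that is set in the environment when `yarn build` runs
ARG REACT_APP_API_KEY
ENV REACT_APP_API_KEY=$REACT_APP_API_KEY
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
COPY . .
RUN yarn build

FROM nginx:1.19-alpine AS server
COPY ./nginx.conf /etc/nginx/conf.d/default.conf
COPY --from=builder /app/build /usr/share/nginx/html
You would then fetch the secret once at build time, for example:
docker build \
  --build-arg REACT_APP_API_KEY="$(gcloud secrets versions access latest --secret=REACT_APP_API_KEY)" \
  -t gcr.io/test-gcr-react-app/test-gcr-react .
With this, the --set-secrets flag on gcloud run deploy no longer does anything useful, since the runtime env var is never read.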

I guess you mean Secret Manager Secret Accessor role.
In order to do so you can do this:
gcloud secrets add-iam-policy-binding secret-id \
--member="serviceAccount:<project-number>-compute@developer.gserviceaccount.com" \
--role="roles/secretmanager.secretAccessor"
But make sure your Cloud Run service is really using that service account; if not, change the member field in the command above to the appropriate service account.
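You can check which service account the service actually runs as with something like this (service name and region taken from the question; the --format string is just one way to print the field):
gcloud run services describe test-gcr-react \
  --region=southamerica-east1 \
  --format='value(spec.template.spec.serviceAccountName)'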

Related

GitHub PAT on deployment

I have a project where I need to use a GitHub personal access token to hit the GitHub API endpoint. My project is frontend-only, using React, and I deployed it using gh-pages. So I believe my problem is in my YAML file, where I create a .env file during the build and write in the environment variable I set in the GitHub secrets/environment settings. During the CI/CD process my token gets revoked, even though it's a PAT I made "scopeless", so according to the documentation it shouldn't be revoked:
OAuth tokens and personal access tokens pushed to public repositories and public gists will only be revoked if the token has scopes.
Have I misunderstood something here? I am trying to understand what's going on. What I think is happening is that this new .env file is being deployed with the project; I assumed that after the job is complete it's wiped, but clearly that's not what's happening, as the stated cause for the token being revoked is that it appeared somewhere in my code. It does not appear in my script itself.
- run: |
    cd ./frontend
    touch .env
    # .env files use KEY=value syntax (no spaces, no colon)
    echo "REACT_APP_GHTOKEN=${{ env.REACT_APP_GHTOKEN }}" >> .env
    npm install
    npm run build
Any advice?

Start React app using domain instead of localhost

I start my React app using:
npm start
This starts the app on localhost:3000
I would like to have it start with a domain instead of localhost. When testing locally, I have a local domain (example mydomain.com) set to IP address 127.0.0.1. This allows me to use the actual domain in code when making requests to the backend. Without this, my code would need to support both localhost and my domain and swap them out in production.
npm start is only a command to run your React app in development mode, for faster builds and hot reloading.
So for production, you will use npm run build to get a production build. This won't be automatically deployed or hosted on your localhost:3000.
For a production deployment you will have to host your compiled build files using IIS or some other hosting application. There you can configure your preferred DNS, whichever domain name you want to use.
You can also customise the URL the development server uses.
In your .env.development add this line
HOST='mydomain.com'
If you want to deploy at https, add this line also in same file
HTTPS=true
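For completeness, the full local setup would then be (a sketch, assuming Create React App; the hosts entry matches what the question already describes):
# /etc/hosts — make the domain resolve locally
127.0.0.1 mydomain.com
# .env.development — read by react-scripts on npm start
HOST=mydomain.com
HTTPS=true
The dev server started by npm start is then reachable at https://mydomain.com:3000.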

Setup react app build folder onto Google Kubernetes

Currently, I have a repo that contains both a Node.js Express backend and a React frontend. The repo's image is in Google Container Registry and is used on a Google Kubernetes cluster. A load balancer provides a URL, which is the backend URL serving the static build folder. In the future, I want to separate the backend/frontend into two different repos (one for the backend and one for the frontend).
I believe making changes for the backend in the cluster won't be difficult, but I am having trouble figuring out how to add the React frontend, since the build folder will be in a different repo than the backend. I read online that to serve a React app on GCP, you would upload the build folder to a bucket and have that bucket served via App Engine, which provides a URL to access it on the web.
I'm wondering if this is how it would be done on a Kubernetes cluster, or if there is a different approach, since it is not using App Engine but rather Google Kubernetes Engine.
I hope this makes sense (I am still fairly new to Google Cloud) and any feedback/tips will be appreciated!
Thanks!
There are different approaches to this.
Approach 1: Serve your frontend via Google Cloud Storage.
There is a guide in the GCP documentation, Hosting a static website, to set this up. After the build, copy all the files to Cloud Storage and you are done.
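A sketch of that copy step (the bucket name is illustrative):
# Build, then sync the static output to the bucket and allow public reads
npm run build
gsutil -m rsync -r ./build gs://my-frontend-bucket
gsutil iam ch allUsers:objectViewer gs://my-frontend-bucket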
Approach 2: Add your frontend to your backend while building the Docker image
Build your frontend and pack it into a Docker image with something like this:
FROM node AS build
WORKDIR /app
COPY . .
RUN npm ci && npm run build
FROM scratch
COPY --from=build /app/dist /app
Build your backend and copy the frontend:
FROM myapp/frontend as frontend
FROM node
# build backend
COPY --from=frontend /app /path/where/frontend/belongs
This decouples both builds but you will always have to deploy the backend for a frontend change.
Approach 3: Serve your frontend with nginx (or another web server)
FROM node AS build
WORKDIR /app
COPY . .
RUN npm ci && npm run build
FROM nginx
COPY --from=build /app/dist /usr/share/nginx/html
You might also adapt the nginx.conf to enable routing without hash paths. See this article by codecentric for more information on that.
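For reference, a minimal server block for that kind of SPA routing could look like this (a sketch; the root path matches the image above):
server {
    listen 80;
    root /usr/share/nginx/html;
    index index.html;
    location / {
        # serve the file if it exists, otherwise fall back to index.html
        # so client-side routes survive a hard refresh
        try_files $uri $uri/ /index.html;
    }
}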

How to use a private, self-hosted NPM package with the Google App Engine Node.js standard environment

I have an NPM package hosted on a private Bitbucket git repo (not in the official NPM registry).
I have this in my package.json, under the "dependencies" key:
"a-private-package" git+ssh://git#bitbucket.org:myusername/a-private-package.git
It works when I run npm install locally as my SSH keys are used.
But when I use gcloud app deploy to deploy to the app engine standard environment for node, I get a Host key verification failed from Google Cloud Build.
I have tried:
Adding a custom SSH key to Cloud Build.
https://cloud.google.com/cloud-build/docs/access-private-github-repos
Issue: No access to cloudbuild.yaml for GAE standard; cannot tell git to use the SSH key.
Adding my private git repo to Google Sources.
Issue: No access to cloudbuild.yaml for GAE standard; cannot tell git to use the SSH key.
npm pack; npm install
Issue: Does not keep repo history/URL.
Is it actually possible?
It is not possible to modify the cloudbuild.yaml when you are running gcloud app deploy. Instead, you must create a new cloudbuild.yaml and execute it with gcloud builds submit --config=cloudbuild.yaml . In this case, the gcloud app deploy is going to be executed inside the cloudbuild.yaml.
I have tried the steps described for connecting to a private GitHub repository, changing the values to fit Bitbucket, but was not able to make it work. Thus, I have created a Feature Request for better documentation.
Using Cloud Source Repositories
I believe that, as you already have a dependency on the private repo, it will be simpler to host your entire app on it. Given this, you will have to clone the entire repo, run npm install, and deploy.
In this case, Cloud Source Repositories has a built in feature to mirror directly to Bitbucket (public and private repos).
Steps:
1) Create a cloudbuild.yaml in your app root folder with the following code:
steps:
# NPM install
- name: 'gcr.io/cloud-builders/npm'
  args: ['install']
# Test
- name: 'gcr.io/cloud-builders/npm'
  args: ['test']
# Deploy
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['app', 'deploy']
2) Connect Cloud Source Repositories to Bitbucket
3) Create a Cloud Build Trigger (so new code pushed to the repo will be automatically deployed)
4) Push the root folder containing the app.yaml and the cloudbuild.yaml to the repo
It should now be synced to Cloud Source Repositories, and new pushes should trigger the Cloud Build for the deploy.
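To test the build once manually before relying on the trigger, you can run it from the app root with the command mentioned above:
gcloud builds submit --config=cloudbuild.yaml .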
Unfortunately you will need to embed a username/password in the package.json, but you can probably use the https endpoint:
"a-private-package": "git+https://myusername:password#bitbucket.org/myusername/a-private-package.git"
If this works for you I'd suggest creating a separate account on bitbucket and restricting it to view-only on that repo.

Multiple services with different dockerfiles on GAE Flexible

I'm using Google AppEngine Flexible with python environment. Right now I have two services: default and worker that share the same codebase, configured by app.yaml and worker.yaml. Now I need to install native C++ library, so I had to switch to Custom runtime and added Dockerfile.
Here is the Dockerfile generated by gcloud beta app gen-config --custom command
FROM gcr.io/google-appengine/python
LABEL python_version=python3.6
RUN virtualenv --no-download /env -p python3.6
# Set virtualenv environment variables. This is equivalent to running
# source /env/bin/activate
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
ADD requirements.txt /app/
RUN pip install -r requirements.txt
ADD . /app/
CMD exec gunicorn --workers=3 --threads=3 --bind=:$PORT aces.wsgi
Previously, my app.yaml and worker.yaml each had its own entrypoint: config that specified the command needed to start the service.
So, my question is how can I use two different commands to start the services?
EDIT 1
So far I have been able to solve this by rewriting the CMD line in the Dockerfile for each deploy of each service. However, I'm not quite satisfied with this solution.
The gcloud app deploy command has an --image-url flag that allows setting the image URL from GCR. I haven't researched that yet, but it seems that I can just upload images to GCR and use the URLs, since they don't change that often.
Yes, as you mentioned, I think using the --image-url flag is a good option here.
Specify a custom runtime.
Build the image locally, tag it, and push it to Google Container Registry (GCR).
Then deploy your service, specifying a custom service file, and specifying the remote image on GCR using the --image-url option.
Here's an example that accomplishes different entrypoints in 2 services that share the same code:
...this is assuming that the "flex" and not "standard" app engine offering is being used.
Let's say you have a project called my-proj
with a default service that is not important
and a second service called queue-processor which uses much of the same code from the same directory.
Create a separate Dockerfile for it called QueueProcessorDockerfile
and a separate app.yaml called queue-processor-app.yaml to tell Google App Engine what I want to happen.
QueueProcessorDockerfile
FROM node:10
# Create app directory
WORKDIR /usr/src/app
COPY package.json ./
COPY yarn.lock ./
RUN npm install -g yarn
RUN yarn
# Bundle app source
COPY . .
CMD [ "yarn", "process-queue" ]
*Of course, I have a "process-queue" script in my package.json
queue-processor-app.yaml
runtime: custom
env: flex
... other stuff...
...
Build and tag the Docker image.
Check out Google's guide here -> https://cloud.google.com/container-registry/docs/pushing-and-pulling
docker build -t eu.gcr.io/my-proj/queue-processor -f QueueProcessorDockerfile .
Push it to GCR:
docker push eu.gcr.io/my-proj/queue-processor
Deploy the service, specifying which YAML config file Google should use, as well as the image URL you have pushed:
gcloud app deploy queue-processor-app.yaml --image-url eu.gcr.io/my-proj/queue-processor
Since the Dockerfile name cannot be changed, the only way to not have to modify the Dockerfile would be to store each service in its own, separate directory. Clean separation, each service has its own Dockerfile and/or startup configuration.
But this raises a question: how to deal with the code shared by multiple services? Using symlinks (which works great for sharing code across standard env services) doesn't work for the flexible env services, see Sharing code between flexible environment modules in a GAE project.
I see a few possible approaches, none really ideal, but maybe more appealing than what you currently have:
hard-link each and every shared source code file (since hardlinking directories is not possible). A bit tedious and error-prone, but you only have to do that once per file
package and publish your shared code as an external library, added to the requirements.txt file of each service using it
split the shared code in a separate repository and have a copy of that repository in each service using it (maybe as a git submodule if using git?). You just need to ensure at the service deployment time that the shared repository is pulled at the proper version - can be quite reliably done through automation. A bit more complicated if you have uncommited changes in this repo - you'd have to patch the same changes in all services.
have multiple copies of the Dockerfile with different names, which you simply copy over instead of always editing the same file (see the sketch after this list). Symlinking instead of copying might work as well: the symlink doesn't need to be followed outside of the service directory, so if it's just replicated as a symlink it'll work.
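A sketch of that last approach (the Dockerfile names are illustrative):
# deploy the default service with its own Dockerfile
cp Dockerfile.default Dockerfile
gcloud app deploy app.yaml
# deploy the worker service, which only differs in its CMD
cp Dockerfile.worker Dockerfile
gcloud app deploy worker.yaml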
So I had a very similar issue with my Java applications. We were looking to migrate from Heroku to GAE and were attempting to simulate the Heroku Procfile with GAE services. Effectively, what we did was create separate directories in our application, src/main/appengine/web and src/main/appengine/worker, where each directory contained the app.yaml and Dockerfile specific to the process. Then, using the mvn appengine:deploy capabilities, we specified -Dapp.stage.dockerDirectory and -Dapp.stage.appEngineDirectory respectively for each service we wanted to deploy. With just some parameters we were able to basically script out parallel deployments of each service from the same code base. Not sure if this works in your situation, but it was very useful for us. Here are the two example commands in their entirety:
Web Process:
mvn appengine:deploy -Dapp.stage.dockerDirectory=src/main/appengine/web -Dapp.stage.appEngineDirectory=src/main/appengine/web -Dapp.stage.stagingDirectory=target/appengine-web -Dapp.deploy.projectId=${project-id} -Dapp.deploy.version=${project-version}
Worker Process:
mvn appengine:deploy -Dapp.stage.dockerDirectory=src/main/appengine/worker -Dapp.stage.appEngineDirectory=src/main/appengine/worker -Dapp.stage.stagingDirectory=target/appengine-worker -Dapp.deploy.projectId=${project-id} -Dapp.deploy.version=${project-version}
