Consume Cloud Run environment variables inside a Next.js app - reactjs

I've recently built a Next.js app, which I am hosting on Google Cloud Run.
My app makes some requests to external APIs from the getStaticProps() method.
I would like to be able to point to a different API host depending on the environment (e.g. prod or dev) using environment variables that would be set differently for each environment.
I know I can store these variables in environment-specific files like .env.development and .env.production, but I would like to store them in the environment variables field of the Cloud Run service in the Google Cloud console and skip the files altogether.
I have tried adding the variables to Cloud Run, but it does not work. I have also tried prefixing the variables with NEXT_PUBLIC_..., with no luck.
Does anyone have any tips on how to accomplish this?

Ok... I think I have figured it out now.
I was using Cloud Build to build my container, and the container runs npm run build before it runs npm run start.
I assume that my Cloud Run variables aren't available at the point when Cloud Build is building the project.
So I think my solution is probably to inject the variables at build time, using substitution variables.
EDIT: Confirmed. If I start Next.js in dev mode, such that the page is rendered on the server for each request, then the Cloud Run environment variables are used.
To build the Next.js app for production, I include the environment variables in the Dockerfile that is built by Cloud Build.
EDIT: as requested, an example of a Dockerfile setting the environment variable:
FROM node:16.13-alpine
RUN mkdir -p /usr/src
WORKDIR /usr/src
COPY . /usr/src
ENV NEXT_PUBLIC_MY_API_HOST='https://some.host.com'
RUN npm install --only=production
RUN npm run build
EXPOSE 3000
CMD npm run start
Then you can just reference the environment variable from within your code using process.env.NEXT_PUBLIC_MY_API_HOST
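For reference, here is a rough sketch of the substitution-variable approach mentioned above, so the value does not have to be hard-coded in the Dockerfile. The cloudbuild.yaml below, the image name my-next-app, and the _NEXT_PUBLIC_MY_API_HOST substitution are illustrative assumptions, not the original setup:
# cloudbuild.yaml (hypothetical): pass a substitution into the Docker build as a build arg
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args:
      - 'build'
      - '--build-arg'
      - 'NEXT_PUBLIC_MY_API_HOST=${_NEXT_PUBLIC_MY_API_HOST}'
      - '-t'
      - 'gcr.io/$PROJECT_ID/my-next-app'
      - '.'
images:
  - 'gcr.io/$PROJECT_ID/my-next-app'
substitutions:
  _NEXT_PUBLIC_MY_API_HOST: 'https://some.host.com'
The Dockerfile would then declare the build arg instead of hard-coding the value:
# replaces the hard-coded ENV line above (sketch)
ARG NEXT_PUBLIC_MY_API_HOST
ENV NEXT_PUBLIC_MY_API_HOST=$NEXT_PUBLIC_MY_API_HOST
The substitution can then be overridden per environment, for example with gcloud builds submit --substitutions=_NEXT_PUBLIC_MY_API_HOST=https://dev.host.com, or in the build trigger settings.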

Related

How do I access environment variables in a React application hosted on Amazon S3, given that I have stored the Firebase and Google API keys in env variables?

REACT_APP_API_KEY=**
REACT_APP_FIREBASE_API_KEY=**
I use them in the following style in React for the local environment:
process.env.REACT_APP_FIREBASE_API_KEY
Note: I am using AWS CodePipeline and CodeBuild for CI/CD.
In CodeBuild you can edit the Environment and add environment variables. Click Additional configuration in the console to see it. You can export REACT_APP_FIREBASE_API_KEY there and it should be picked up by npm run build.
You could also use a .env.production file and commit it to source control. These values aren't secrets anyway; they get embedded in the deployed code, so there shouldn't be a security issue.
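If you go the CodeBuild route and want the values versioned with the build config rather than set in the console, a buildspec.yml along these lines could work; the variable values and the artifacts section below are only illustrative:
version: 0.2
env:
  variables:
    # non-secret build-time values; real secrets are better kept in Parameter Store or Secrets Manager
    REACT_APP_API_KEY: "replace-me"
    REACT_APP_FIREBASE_API_KEY: "replace-me"
phases:
  install:
    commands:
      - npm ci
  build:
    commands:
      # create-react-app inlines REACT_APP_* variables into the static bundle at this step
      - npm run build
artifacts:
  files:
    - '**/*'
  base-directory: build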

Using create-react-app hosted on S3 to hit a Heroku API

Background
The backend is a Rails API-only app running on Heroku (on port 5000 during local development).
The frontend is made with create-react-app and is hosted in an S3 bucket behind AWS CloudFront (on port 3000 during local development).
Development Setup
Locally, the frontend's package.json includes:
"proxy": "http://localhost:5000",
This lets me run a JavaScript command like…
await fetch(`/someEndpoint`)
…and the development server running on localhost:3000 forwards the response from http://localhost:5000/someEndpoint. This works great and my frontend processes the responses successfully 👍.
Production Issues
Now I'm running on production.
If I change nothing, the same line will try to load from https://asdfasdfasdf.cloudfront.net/someEndpoint which will 404.
Instead I want to configure the app so that in production it will hit the Heroku app https://my-awesome-rails-app.herokuapp.com/someEndpoint where the Rails API is running.
Questions
(1) Is what I want possible without changing the code
await fetch(`/someEndpoint`)
either by configuring the frontend to use my-awesome-rails-app.herokuapp.com, or by configuring CloudFront to forward some requests to S3 and others to Heroku?
(2) If neither are possible, how should I write the fetch() command to let me hit Heroku in production but still keep the local proxying behavior during local development? Some use of environment variables maybe?
In my opinion, it's better to follow the same process (to get the API hostname) in all environments. Also, setting up CloudFront forwarding just for this purpose doesn't seem worthwhile unless you have other reasons to do so.
I think you could use an environment variable and here is how you can do it.
Using environment variables in a React project
react-scripts supports environment variables via the dotenv library.
You build the React app for production with the variable set inline:
REACT_APP_API_DOMAIN=https://my-awesome-rails-app.herokuapp.com npm run build
Or you can define it in a .env.production file (which npm run build uses, since NODE_ENV is set to production there), keeping the local value in .env.development:
REACT_APP_API_DOMAIN=http://localhost:5000
Then in your code file, you can access it from process.env
await fetch(`${process.env.REACT_APP_API_DOMAIN}/someEndpoint`)
Please read the reference; there is a naming pattern you must follow for the environment variables to work. For example, each environment variable you define must start with REACT_APP_.
Reference:
https://create-react-app.dev/docs/adding-custom-environment-variables/
hope this helps.
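If you would rather keep the proxy behaviour during local development, another option (just a sketch; the helper name and the JSON return handling are assumptions) is to prefix the domain only when the variable is defined:
// api.js (hypothetical helper)
// REACT_APP_API_DOMAIN is unset in development, so requests stay relative and go
// through the package.json proxy; production builds get the Heroku domain prefixed.
const API_DOMAIN = process.env.REACT_APP_API_DOMAIN || '';

export async function apiFetch(path, options) {
  const response = await fetch(`${API_DOMAIN}${path}`, options);
  return response.json();
}

// usage: const data = await apiFetch('/someEndpoint');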

Continuous Deployment - Modify backend url in the frontend before deployment

My current situation:
I have a Jenkins pipeline to dockerize my Node/Express backend and build+dockerize my React frontend after every commit to GitHub. This works so far. I am using Docker and Jenkins on Ubuntu 18.
Problem:
My frontend (of course) can't connect to the backend on the live server, because the route to the backend is http://127.0.0.1:8080. My first idea was to use environment variables, but this does not work since React can't read env variables after it is built (the output is pure HTML/CSS/JS). What are common solutions to this problem? I don't want to change the backend URL to the actual domain every time before I push to the repository and change it back to 127.0.0.1 to work on it again.
I researched some more, and environment variables CAN be replaced by their values at build time (which is what I want) if, instead of using an npm package like dotenv, you define variables that start with REACT_APP_.
More Information
"The environment variables are embedded during the build time" - should have read that before.
You can use env files to define different variables based on the environment.
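In a dockerized Jenkins build, one common approach is to pass the value in as a build argument so it is baked in while npm run build runs inside the image. A rough sketch; the base images, the REACT_APP_API_URL name, and the example URL are assumptions:
# Dockerfile for the React frontend (sketch)
FROM node:16-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
# injected at build time; REACT_APP_* values are inlined by npm run build
ARG REACT_APP_API_URL
ENV REACT_APP_API_URL=$REACT_APP_API_URL
RUN npm run build

# serve the static output
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
The Jenkins stage would then build it with something like: docker build --build-arg REACT_APP_API_URL=https://api.example.com -t frontend .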

create-react-app build/serve environment variables

Relatively new to working with React. I have an application that is working fine in local Docker. I populate a bunch of REACT_APP_ environment variables by exporting them to the environment before starting the Docker container.
Now I'm trying to deploy this to a Kubernetes pod by running a yarn build and then serving up the build. I see that the environment variables are available on the pod itself by looking at printenv, but the application doesn't appear to be picking them up.
Is there something special about serving a production build of a React app that I'm missing, to get it to see the exported environment variables?
I don't want to embed an .env file into the built Docker image for security reasons, so I'm hoping that a React build served via serve can still pick up exported REACT_APP_ environment variables that are set via Kubernetes secrets.
So apparently, whenever you build a React application with npm, static files are produced that don't know anything about any environment variables you may try to inject at runtime with Kubernetes.
The article below does a good job of explaining this, and why they choose to attach the environment variables to the JavaScript window object, since it is available application-wide at runtime.
Making a React application container environment-aware at Kubernetes deployment
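The gist of that pattern, roughly (the file names docker-entrypoint.sh and env-config.js and the window._env_ key below are illustrative, not an exact copy of the article): generate a small script when the container starts, after the static build already exists, and read it from the app instead of process.env.
#!/bin/sh
# docker-entrypoint.sh (sketch): turn current REACT_APP_* env vars into a JS file
# served next to the static build; naive parsing, fine for simple values
echo "window._env_ = {" > /usr/share/nginx/html/env-config.js
env | grep '^REACT_APP_' | sed 's/^\(.*\)=\(.*\)$/  \1: "\2",/' >> /usr/share/nginx/html/env-config.js
echo "};" >> /usr/share/nginx/html/env-config.js
exec nginx -g 'daemon off;'
The app then loads env-config.js via a script tag in public/index.html and reads window._env_.REACT_APP_WHATEVER at runtime, so the values can come from Kubernetes secrets at pod start rather than at build time.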

Multiple services with different dockerfiles on GAE Flexible

I'm using Google App Engine Flexible with the Python environment. Right now I have two services, default and worker, which share the same codebase and are configured by app.yaml and worker.yaml. Now I need to install a native C++ library, so I had to switch to a custom runtime and added a Dockerfile.
Here is the Dockerfile generated by gcloud beta app gen-config --custom command
FROM gcr.io/google-appengine/python
LABEL python_version=python3.6
RUN virtualenv --no-download /env -p python3.6
# Set virtualenv environment variables. This is equivalent to running
# source /env/bin/activate
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
ADD requirements.txt /app/
RUN pip install -r requirements.txt
ADD . /app/
CMD exec gunicorn --workers=3 --threads=3 --bind=:$PORT aces.wsgi
Previously my app.yaml and worker.yaml each had its own entrypoint: config that specified the command to run to start the service.
So, my question is how can I use two different commands to start the services?
EDIT 1
So far I have been able to solve this by rewriting the CMD line in the Dockerfile for each deploy of each service. However, I'm not quite satisfied with this solution.
The gcloud app deploy command has an --image-url flag that allows setting the image URL from GCR. I haven't researched that yet, but it seems that I can just upload images to GCR and use the URLs, since they don't change that often.
Yes, as you mentioned, I think using the --image-url flag is a good option here.
Specify a custom runtime.
Build the image locally, tag it, and push it to Google Container Registry (GCR).
Then deploy your service, specifying a custom service file and the remote image on GCR using the --image-url option.
Here's an example that accomplishes different entrypoints in two services that share the same code:
...this assumes that the "flex" (not "standard") App Engine offering is being used.
Let's say you have a project called my-proj,
with a default service that is not important here,
and a second service called queue-processor which uses much of the same code from the same directory.
Create a separate Dockerfile for it called QueueProcessorDockerfile,
and a separate app.yaml called queue-processor-app.yaml to tell Google App Engine what you want to happen.
QueueProcessorDockerfile
FROM node:10
# Create app directory
WORKDIR /usr/src/app
COPY package.json ./
COPY yarn.lock ./
RUN npm install -g yarn
RUN yarn
# Bundle app source
COPY . .
CMD [ "yarn", "process-queue" ]
*Of course, I have a "process-queue" script in my package.json.
queue-processor-app.yaml
runtime: custom
env: flex
... other stuff...
...
Build and tag the Docker image.
Check out Google's guide here: https://cloud.google.com/container-registry/docs/pushing-and-pulling
docker build -t eu.gcr.io/my-proj/queue-processor -f QueueProcessorDockerfile .
Push it to GCR:
docker push eu.gcr.io/my-proj/queue-processor
Deploy the service, specifying which YAML config file Google should use, as well as the image URL you have pushed:
gcloud app deploy queue-processor-app.yaml --image-url eu.gcr.io/my-proj/queue-processor
Since the Dockerfile name cannot be changed, the only way to avoid modifying the Dockerfile would be to store each service in its own separate directory. Clean separation: each service has its own Dockerfile and/or startup configuration.
But this raises a question: how to deal with the code shared by multiple services? Using symlinks (which works great for sharing code across standard env services) doesn't work for the flexible env services, see Sharing code between flexible environment modules in a GAE project.
I see a few possible approaches, none really ideal, but maybe more appealing than what you currently have:
hard-link each and every shared source code file (since hard-linking directories is not possible). A bit tedious and error-prone, but you only have to do it once per file
package and publish your shared code as an external library, added to the requirements.txt file of each service using it
split the shared code into a separate repository and have a copy of that repository in each service using it (maybe as a git submodule, if using git?). You just need to ensure at service deployment time that the shared repository is pulled at the proper version - this can be done quite reliably through automation. It's a bit more complicated if you have uncommitted changes in this repo - you'd have to patch the same changes in all services.
have multiple copies of the Dockerfile with different names, which you simply copy over instead of always editing the same file (see the sketch below). Symlinking instead of copying might work as well: the symlink doesn't need to be followed outside of the service directory, so if it's just replicated as a symlink it will work.
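As a rough illustration of that last option, with file names made up to match the app.yaml/worker.yaml setup from the question:
# keep one Dockerfile per service and copy the right one into place before deploying
cp Dockerfile.default Dockerfile
gcloud app deploy app.yaml

cp Dockerfile.worker Dockerfile
gcloud app deploy worker.yaml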
So I had a very similar issue with my Java applications. We were looking to migrate from Heroku to GAE and were attempting to simulate the Heroku Procfile with GAE services. Effectively what we did was to create separate directories in our application, src/main/appengine/web and src/main/appengine/worker, where each directory contained the app.yaml and Dockerfile specific to the process. Then, using the mvn appengine:deploy capabilities, we specified -Dapp.stage.dockerDirectory and -Dapp.stage.appEngineDirectory respectively for each service we wanted to deploy. Then, using just some parameters, we were able to basically script out parallel deployments of each service from the same code base. Not sure if this works in your situation, but it was very useful for us. Here are the two example commands in their entirety:
Web Process:
mvn appengine:deploy -Dapp.stage.dockerDirectory=src/main/appengine/web -Dapp.stage.appEngineDirectory=src/main/appengine/web -Dapp.stage.stagingDirectory=target/appengine-web -Dapp.deploy.projectId=${project-id} -Dapp.deploy.version=${project-version}
Worker Process:
mvn appengine:deploy -Dapp.stage.dockerDirectory=src/main/appengine/worker -Dapp.stage.appEngineDirectory=src/main/appengine/worker -Dapp.stage.stagingDirectory=target/appengine-worker -Dapp.deploy.projectId=${project-id} -Dapp.deploy.version=${project-version}
