Adding Environment Variables to Azure Static Web App Release Task - reactjs

My react frontend SPA has some environment variables I would like to pass at deploy time using the Static Web App Deploy task in Azure ADO.
If I'm doing this in a pipeline, it's simple to add directly to the YAML https://learn.microsoft.com/en-us/azure/developer/javascript/how-to/with-authentication/static-web-app-with-api/deploy-static-web-app-to-azure#add-react-client-environment-variables-to-workflow-configuration-file
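For reference, the pipeline YAML approach from that doc looks roughly like this (task inputs and variable names below are placeholders, not taken from my actual setup):

```yaml
steps:
- task: AzureStaticWebApp@0
  inputs:
    app_location: "/"
    output_location: "build"
    azure_static_web_apps_api_token: $(deployment_token)
  env:
    # non-secret build-time value; CRA inlines REACT_APP_* during `npm run build`
    REACT_APP_SERVER_URL: $(ServerUrl)
```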
However, within the "Releases", I don't see where I can add these env variables since I can't directly edit the YAML file.
My questions are:
Is this not possible? Is this limitation due to the fact that the Deploy Static Web App task is technically still in "Preview"?
Is it possible to write a custom task/YAML code to run during the release? I would prefer to keep the CD in the Releases section so I can keep the approvals/rollback functionality, rather than convert to a multi-stage pipeline.
Is there a better or more correct way to pass these environment variables? They are not secrets, just, for example, the API URL. Passing them as env variables would let me split deployment into dev/prod and pass different values for each environment.
Thank you!


Is there a way to dynamically inject sensitive environment variables into a serverless React frontend application using Azure/Github Actions?

I'm sort of restricted to Azure since that is what the client is already using.
But basically, we have this React website, your typical react-scripts site with no server, which means there's nowhere in Azure Static Web Apps to set environment variables for a frontend application.
I saw this on Azure Static Web Apps Configuration, but it won't work for my use case because there is no backend API for my frontend application; the backend associated with the frontend is published separately to Azure App Service. And I need the secrets on the frontend to use some npm packages that require them, which I would prefer to do on the frontend instead of the backend. Per the docs, application settings:
- Are available as environment variables to the backend API of a static web app
- Can be used to store secrets used in authentication configuration
- Are encrypted at rest
- Are copied to staging and production environments
- May only be alphanumeric characters, ., and _
I was doing some more research, and this seems to sort of be up the alley of what I'm looking for:
https://learn.microsoft.com/en-us/answers/questions/249842/inject-environment-variables-from-pipeline-to-azur.html
Essentially, I really want to avoid hardcoding secrets into the React code because that's bad practice.
I personally see a few different (potential) options:
Create an endpoint on the backend Spring Boot api that simply serves all environment variables
This is the most trivial to implement, but I have security concerns. Since my frontend has no access to any kind of secrets, there's no way for it to pass a secure token to authenticate the request, so someone could conceivably open the network tab in Chrome DevTools, see that I'm making a request to /getEnvironmentVariables, and recreate the request. The only prevention I can see is IP restrictions on the backend API, so it only accepts incoming requests from my frontend website's IP address, but honestly that sounds like a lot of overhead to worry about. Especially because we're building the product as more of a POC, so we don't have access to the client's production environments and can't just test it like that.
Have the Azure Static Webapps Github Actions workflow somehow inject environment variables
So I've actually done something similar with GCP before. To log in to a GCP service account during continuous deployment, the workaround was to encrypt a file so it could be freely uploaded to a public repo, decryptable only (realistically) with secrets set on the CI/CD pipeline, which would be travis-ci, or in my case, GitHub Actions. And I know how to set secrets on GitHub Actions, but I'm not sure how realistic such a solution would be for my use case, because decrypting a file is not enough: it has to be formatted or rewritten in a way that React is able to read, and I know React is a nightmare to work with fs and the like, so I'm really worried about the viability of going down a path like that. Maybe a more rudimentary approach would be a bash script run in the GitHub Actions workflow that uses Actions secrets to store the environment variables I want to inject, performs a simple file edit on a small React file responsible for just disbursing environment variables, and then packages with npm and deploys to Azure?
TLDR: I have a window in github actions when I have access to a linux environment and any environment variables I want, in which I want to somehow inject into React during ci/cd before deployment.
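A minimal sketch of that rudimentary approach, assuming a small env.js file the app imports instead of reading process.env (all names and values here are hypothetical):

```shell
#!/bin/sh
# Sketch: run in the CI job before `npm run build`.
# API_URL would come from a GitHub Actions secret; hard-coded here for demo.
API_URL="https://api.example.com"

# env.js stands in for a one-file module that just disburses config values
cat > env.js <<'EOF'
window.__ENV__ = {
  API_URL: "__API_URL__"
};
EOF

# rudimentary file edit: swap the placeholder for the real value
sed -i "s|__API_URL__|${API_URL}|" env.js
cat env.js
```

The app would then read window.__ENV__.API_URL; the trade-off is that the value is still baked in at build time, so per-environment builds are needed.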

Process.Env Isn't Populated in React App on Azure

I'm trying to get a React app running in Azure App Services and having difficulties getting environmental variables to work.
Locally
When running locally I'm using a .env.local file to load environmental variables (which isn't checked into git).
REACT_APP_SERVER_URL=http://localhost:5000
I can then use it in my application by referencing it using process.env.REACT_APP_SERVER_URL. Everything is working as expected.
On Azure
Now I'm trying to get the same thing working in Azure. I've set up configuration using Application Settings (described in their documentation here and here).
I can use kudu to get to the debug console and see that the environmental variable is being set.
But using process.env.REACT_APP_SERVER_URL is not pulling in that value. If I console.log(process.env), my url doesn't show up.
My React app was created using the create-react-app CLI a couple of days ago and is deployed via GitHub Actions with the following startup command: pm2 serve /home/site/wwwroot/build --no-daemon --spa
From what I've read online it "should" just work, so I'm not sure where the breakdown is. My thought is that pm2 might be blocking environmental variables, but I'm not sure.
This guy seems to be having a similar problem, but no answer is elaborated on that question. And I don't want to store config in package.json, since that's exactly what Azure's configuration is supposed to handle for me.
A React single-page app is, from an infrastructure perspective, just a bunch of static files: HTML, JS, CSS, and so on. It cannot reach into the server to read those variables because no server-side code is running.
You're going to have to bundle your environment variables, whether inside .env or package.json or wherever.
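To illustrate: create-react-app reads REACT_APP_* variables at build time and inlines them as literals into the bundle, so the variable must exist when `npm run build` runs, not when pm2 later serves the files. A toy demonstration of the effect (file names here are illustrative):

```shell
#!/bin/sh
# Simulate what the bundler effectively does with REACT_APP_* references:
# the env var is read at build time and the reference becomes a literal.
REACT_APP_SERVER_URL="https://api.example.com"

echo 'const url = process.env.REACT_APP_SERVER_URL;' > source.js
sed "s|process\.env\.REACT_APP_SERVER_URL|\"${REACT_APP_SERVER_URL}\"|" source.js > bundle.js
cat bundle.js
# prints: const url = "https://api.example.com";
```

Setting the variable in Azure Application Settings only affects the server process (which is why it shows up in kudu); the prebuilt static bundle never consults it.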

Is it possible to configure React application to use container's environment variables in Kubernetes?

To begin with, let us suppose we have a React application. We want to build it and deploy it to 3 environments: dev, test and production. Like every front-end app, it needs to call some APIs. API addresses will vary between the environments, so they should be stored as environment variables.
As every modern, progressive developer we want to use containers. In particular Kubernetes.
We want to build our web application and deploy it on K8S cluster. The container image should be built and kind of sealed for changes, then before deployment to each particular environment the variables should be injected.
But it seems there's one great impossibility here. With .NET apps, for example, once we have the .dll compiled, it reads a config file at runtime. That's not the case with React: after we generate a build, we have just static files. The variables are replaced with static values in the process of building the React app. It seems there's no way to update them after that point. Or is there?
The way you are trying to solve your problem is not correct.
You don't need to know anything about the addresses of the backend services in your react app. Only the frontend server/gateway that is serving your react app needs to know about the backend services. Any request from the react app should be proxied via the gateway.
See API gateway pattern - https://microservices.io/patterns/apigateway.html
You can use config map to store the API endpoint address and refer it as environment variable in the pod.
If you want to change some values while the pod is running you can mount the config map and any change to it will be synced in the pod
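A sketch of the ConfigMap approach described above (all names, keys, and images are placeholders):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  API_URL: "https://api.dev.example.com"
---
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: web
    image: my-frontend:latest
    env:
    - name: API_URL              # injected as an env var at container start
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: API_URL
    volumeMounts:
    - name: config               # mounted copy stays in sync with the ConfigMap
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: app-config
```

Note that env vars set this way are only synced on pod restart; only the mounted-file form is updated while the pod runs.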

Securing GAE env variables by using gsutil builder to source app.yaml during build?

I have the same problem as the one mentioned here: Securely storing environment variables in GAE with app.yaml - namely:
"I need to store API keys and other sensitive information in app.yaml as environment variables for deployment on GAE. The issue with this is that if I push app.yaml to GitHub, this information becomes public (not good)."
Additionally I'm looking to check the following boxes:
Prevent vendor lock-in (as much as possible) & ability to take my dockerfile elsewhere.
Ease of deployment with GitHub. I want a push to the master which triggers a build.
Minimal setup, or a suitable effort and workflow for a solo-dev or small team.
My research yielded the following:
Securely storing environment variables in GAE with app.yaml
How to set environment variables/app secrets in Google App Engine
GAE : How to deploy various environments with secrets?
appengine and OS environment variables
How to pass environment variables to the app.yaml using cloud build
A lot of good information from GAE : How to deploy various environments with secrets?
where the author listed the three workarounds and their reason to not be used:
Use Google KMS: allows us to put encrypted secrets directly into the project, but it requires us to put custom code in our apps to decrypt them. It creates a different environment management between local, staging and production. It increases the risk of bugs due to the complexity.
Store secrets in Google Datastore: I tried it, I created a helper that searches env vars in process.env, then in cache and ultimately in Datastore. But like KMS, it increases complexity a lot.
Store secrets in a JSON file and put it on Google Cloud Storage: again, it requires loading env variables through a helper that checks env vars, then loads the file, etc.
However the best solution for me came from How to pass environment variables to the app.yaml using cloud build
It allows me to have the following deployment flow using GAE flexible environment for nodejs:
A merge to my Github master branch triggers a cloud build
My first step in my cloudbuild.yaml sources my app.yaml file using the gsutil builder, since app.yaml is not in source control
My app.yaml points to my dockerfile for my runtime and has my env variables
This checks all my boxes and was a fairly easy solution, but it definitely doesn't seem to be a popular one, so am I missing something here?
Most importantly are there any security concerns?
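The flow above might look roughly like this in cloudbuild.yaml (bucket name is a placeholder):

```yaml
steps:
# fetch app.yaml (kept out of source control) from a private bucket
- name: "gcr.io/cloud-builders/gsutil"
  args: ["cp", "gs://my-config-bucket/app.yaml", "app.yaml"]
# deploy using the fetched app.yaml, which carries the env_variables
- name: "gcr.io/cloud-builders/gcloud"
  args: ["app", "deploy", "app.yaml"]
```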
I am amazed at how you did your research; you have collected all the possible ways to achieve it.
As you mentioned there are many ways to pass the variables to the application but I believe that the solution you propose ( storing the variables in Google Cloud Storage and retrieving them with Google Cloud Build ) is optimal for your purposes. It doesn't require much code and it's elegant, I hope this post helps people to be aware of this solution. Regarding your security concerns, this solution includes a high degree of security as you can set the file in the bucket to only be accessible from Google Cloud Build and the owner of the project.
Another solution I've employed is to store the env variables directly in the Cloud Build trigger's substitution variables and use a custom envsubst Cloud Builder to render a templated app.yaml.
I could not find documentation on how the substitution variables are stored in the Cloud Build trigger (any reference here would be helpful). However, I think most data in Google Cloud is encrypted at rest and encrypted in use and transfer. The main drawback is that the values are shown in plain text, so sensitive information like API keys is not obscured, and anyone who has access to the trigger can see it.
One benefit is that this keeps the templated app.yaml close to the code you'll be using it with, and can be reviewed in the same pull request. Also you don't need to use another service, like Google Storage.
Steps:
Add the envsubst Cloud builder to your project, see instructions here.
Create a templated app.yaml file, e.g.
runtime: <your runtime>
service: ${GAE_SERVICE}
env_variables:
  MY_VAR: ${MY_VAR}
  MY_VAR_2: ${MY_VAR_2}
Add an app.yaml template rendering step in cloudbuild.yaml
steps:
- id: "render-app-yaml"
  name: "gcr.io/${PROJECT_ID}/envsubst"
  env:
  - "GAE_SERVICE=${_GAE_SERVICE}"
  - "MY_VAR=${_MY_VAR}"
  - "MY_VAR_2=${_MY_VAR_2}"
  args: ["app.yaml"]
Add the substitution variables in the Cloud Build trigger, e.g. _GAE_SERVICE, _MY_VAR, and _MY_VAR_2. Note: user-defined variables in the trigger are prefixed with a _.
When I was doing my research, I couldn't find any solution like this one either. Any feedback is welcome.

Can I update only the app.yaml file without uploading the whole project

Is there a way to update selected files when using the App Engine Flexible env?
I'm facing an issue: whenever I make a small change in the app.yaml file, testing it requires deploying the whole application, which takes ~5 mins.
Is there a way to update only the config file? Or is there a way to test these files locally?
Thanks!
The safe/blanket answer would be no, as the flex env docker image would need to be updated regardless of how tiny the changes are; see How can I speed up Rails Docker deployments on Google Cloud Platform?
However, there might be something to try (YMMV).
From App Engine Flexible Environment:
You always have root access to Compute Engine VM instances. SSH access to VM instances in the flexible environment is disabled by default. If you choose, you can enable root access to your app's VM instances.
So you might be able to login as root on your GAE instance VM and try to manually modify a particular app artifact. Of course, you'd need to locate the artifact first.
Some artifacts might not even be present in the VM image itself (those used exclusively by the GAE infra, queue definitions, for example). But it should be possible to update these artifacts without updating the docker image, since they aren't part of the flex env service itself.
Other artifacts might be read-only and it might not be possible to change them to read-write.
Even if possible, such manual changes would be volatile, they would not survive an instance reload (which would be using the unmodified docker image), which might be required for some changes to take effect.
Lots of "might"s, lots of risks (manual fiddling with the app code could negatively impact its functionality), up to you to determine if a try is really worthy.
Update: it seems this is actually documented and supported, see Accessing Google App Engine Python App code in production
