Cognito Auth Works in Prod but Not Locally - reactjs

This will be a brief explanation, since I've got no idea what is going on or how to solve it. I've recently been asked to work on a project (React frontend) that uses Amplify with Cognito for auth. The only problem I've had lately is that my local environment does not perform the authentication for some reason. I've noticed that whenever the prod website makes a Sign In request it uses a certain ClientId, but my local environment uses a completely different one. I know for a fact the code is the same in prod and on my machine, since I've run a lot of tests with Amplify. Does anyone have a clue what is happening, or what to do to make it work locally?

When we work with Amplify, there are certain steps that need to be taken to set up the local environment. Your Amplify project must first pull down the resources it has created, in your case the Cognito Auth resource.
To do that, run the command amplify pull. This reads the team-provider-info.json file in your Amplify project and builds the configs required for you to work locally. This assumes that, after pulling the code down, you initialised the Amplify project using the amplify init command.
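If the pull succeeds, the ClientId the frontend sends on Sign In comes from the generated config file. Here is a minimal sketch of that wiring, assuming the standard aws-exports.js setup; a stale local copy of this file is the usual cause of a mismatched ClientId:

```js
// src/index.js - minimal sketch, assuming the standard Amplify setup.
// aws-exports.js is (re)generated by `amplify pull`; a stale copy makes the
// app sign in with a different Cognito app client than production.
import { Amplify } from 'aws-amplify';
import awsconfig from './aws-exports';

Amplify.configure(awsconfig);

// The ClientId sent on Sign In requests comes from this field:
console.log(awsconfig.aws_user_pools_web_client_id);
```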
The following docs should guide you in this regard: Docs

Related

Connection to AWS database only works in Development

Problem
I created an app and deployed it via AWS Amplify. The app works, but every time I try to do an operation which uses my database I get an error. The peculiar thing is that when I am developing on localhost and connecting to the database, everything works.
Debugging
I checked whether the environment variables are set correctly and they are. When checking the cloud logs, I can see this error: code: 'ER_GET_CONNECTION_TIMEOUT'.
Could this be a problem with the security group, or something else? There are no problems connecting from my local IP. There is only one inbound rule specified.
I am not really well versed in all the IAM management stuff, so there is a good chance that I have messed this up. Any hints or help are very welcome. Thanks in advance.
If you use amplify mock function ... to test a Lambda, I believe it runs using the permissions of the amplify-cli user, and not necessarily the Lambda's actual permissions.
Try amplify env checkout prod so your local environment is pointing to the 'production' environment on AWS. Test the front-end (carefully, knowing you're making changes in production) and see if that works.
You'll probably need to log out of the front-end website and log back in using a production user.
If that fails, then I suspect something is different between your dev and prod environments. Look at your environment variables. Make sure you didn't hard-code any table names with -dev instead of -${process.env.ENV}, etc. (as sketched below).
If the above test does work, then consider the differences between your production and development environments. If everything is managed by Amplify, then they should be the same. If you have some pre-existing resources, then you'll need to examine the permissions your Amplify resources have to talk to those pre-existing resources. Did you grab an ARN from somewhere in dev and not from prod? etc.
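For example, here is a hypothetical Amplify-managed Lambda showing the table-name pattern referred to above. Amplify injects ENV into its functions; the table name is made up:

```js
// lambda/src/index.js - hypothetical handler illustrating the env suffix.
// Amplify sets process.env.ENV to the environment name ('dev', 'prod', ...).
const TABLE_NAME = `Orders-${process.env.ENV}`; // e.g. Orders-prod

exports.handler = async () => {
  // A hard-coded 'Orders-dev' here would work in dev and break in prod.
  return { statusCode: 200, body: `Using table ${TABLE_NAME}` };
};
```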

Is there a way to dynamically inject sensitive environment variables into a serverless React frontend application using Azure/Github Actions?

I'm sort of restricted to Azure since that is what the client is already using.
But basically, we have this React website, which is your typical react-scripts no-server website, which means there's nowhere in Azure Static Web Apps to set environment variables for a frontend application.
I saw this in the Azure Static Web Apps configuration docs, but it is subject to the following restrictions and won't work for my use case, because there is no backend API for my frontend application - the backend associated with the frontend is published separately to Azure App Services. And I need the secrets on the frontend to use some npm packages that require them, which I would prefer to do on the frontend instead of the backend. The documented restrictions say that application settings:
Are available as environment variables to the backend API of a static web app
Can be used to store secrets used in authentication configuration
Are encrypted at rest
Are copied to staging and production environments
May only be alphanumeric characters, ., and _
I was doing some more research, and this seems to be sort of along the lines of what I'm looking for:
https://learn.microsoft.com/en-us/answers/questions/249842/inject-environment-variables-from-pipeline-to-azur.html
Essentially, I really want to avoid hardcoding secrets into the React code because that's bad practice.
I personally see a few different (potential) options:
Create an endpoint on the backend Spring Boot API that simply serves all environment variables
This is the most trivial to implement, but I have concerns about security with this approach. Since my frontend has no access to any kind of secrets, there's no way for it to pass a secure token to the backend or otherwise authenticate the request, so someone could conceivably have the Chrome network inspector tab open, see that I'm making a request to /getEnvironmentVariables, and recreate the request. The only way I can see to prevent this is to put IP restrictions on the backend API so that it only accepts incoming requests from the IP address of my frontend website, but honestly that sounds like a lot of overhead to worry about, especially because we're building the product as more of a POC, so we don't have access to the client's production environments and can't just test it like that.
Have the Azure Static Web Apps GitHub Actions workflow somehow inject environment variables
I've actually done something similar with GCP before. In order to log in to a GCP service account to deploy the app during continuous build, the workaround was to encrypt a file so it could be freely uploaded to a public repo, decryptable only (realistically) with secrets set on the CI/CD pipeline, which was Travis CI, or in my case, GitHub Actions. I know how to set secrets on GitHub Actions, but I'm not sure how realistic a solution like that would be for my use case, because decrypting a file is not enough: it has to be formatted or rewritten in such a way that React is able to read it, and I know React is a nightmare when working with fs and the like, so I'm worried about the viability of going down that path. Maybe a more rudimentary approach would be some kind of bash script that runs in GitHub Actions, uses the GitHub Actions secrets to store the environment variables I want to inject, and performs a simple file edit on a small React file responsible for just disbursing environment variables, before packaging with npm and deploying to Azure? (A sketch of this idea follows the TLDR below.)
TLDR: during CI/CD I have a window in GitHub Actions where I have access to a Linux environment and any environment variables I want, and I want to somehow inject those variables into React before deployment.
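One way the rudimentary variant could look: a small Node script run inside the GitHub Actions job before npm run build, with the secrets mapped into the step's env block (e.g. env: REACT_APP_API_KEY: ${{ secrets.API_KEY }}). This is a sketch under assumptions - it assumes create-react-app, and the variable names are hypothetical:

```js
// scripts/write-env.js - sketch; run in CI before `npm run build`.
// create-react-app inlines REACT_APP_* values from .env.production at build
// time, so writing the file here is enough for React to "read" the secrets.
const fs = require('fs');

const names = ['REACT_APP_API_KEY', 'REACT_APP_SERVER_URL']; // hypothetical
const lines = names
  .filter((name) => process.env[name] !== undefined)
  .map((name) => `${name}=${process.env[name]}`);

fs.writeFileSync('.env.production', lines.join('\n') + '\n');
console.log(`Wrote ${lines.length} variable(s) to .env.production`);
```

One caveat worth stating plainly: anything injected this way is inlined into the shipped JS bundle, so it is hidden from the repo but still readable by anyone who opens the site's sources.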

Pivotal Cloud Foundry - Connect React FE to Spring Boot BE

We have two applications deployed to PCF:
A Spring Boot REST Backend on a route e.g. backend.route1.net deployed with java_buildpack_offline
We also have our React frontend on the same instance, on a route e.g. frontend.route1.net, pushed with a Staticfile whose single line of content is: pushstate: enabled
Both work separately, which we can confirm by accessing the endpoints directly: they clearly return the expected values, so both are working well.
However, as soon as we try to access a page on the frontend that needs to talk to the backend using Axios, we get an ERR_CERT_COMMON_NAME_INVALID error.
The generic advice I find on Google or in the PCF documentation doesn't seem to help. Any tips on where I should continue looking, or what the actual issue could be?
If you need further info please ask.
Thanks!
/András

How to mix Cloud Run and App Engine deployments in one project?

I have a Quarkus application already deployed on Google Cloud Run.
It depends on MySQL, hence there is an instance started on Cloud SQL.
The next step in my deployment process is to add keycloak. From what I've read, the best option seems to be Google App Engine.
The accepted answer to this question gave me some good insight into what needs to be done ... mostly.
What I did was:
Locally I made a sub-directory in the main project.
In that directory I added the app.yaml and the Dockerfile (as described here for instance).
In that directory I executed the two commands from the guide: gcloud init and gcloud app deploy.
I had my doubts about this setup, and they were confirmed by the error I eventually got:
ERROR: (gcloud.app.deploy) INVALID_ARGUMENT: The first service (module) you upload to a new application must be the 'default' service (module). Please upload a version of the 'default' service (module) before uploading a version for the 'morph-keycloak-service' service (module).
I understand my setup breaks the overall structure of the project, but I'm not sure how to mix those two applications with the right services.
I understand keycloak is a stateful application and hence cannot live on Cloud Run (by the way, the intention is for keycloak to use the same database instance, shared with the application).
So does any one know a more sensible set up, or what can I move in mine in order to fix it?
In short:
The answer really is in reading the error message (thanks @GAEfan) - it explains enough about the error itself. So I just commented out the service: my-keycloak-service line in app.yaml (thus leaving gcloud to implicitly mark it as the default service) and the deployment continued.
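For reference, the top of the resulting app.yaml might look roughly like this. This is an illustrative sketch; the runtime lines assume the Dockerfile-based flexible environment from the guide linked above:

```yaml
# app.yaml - illustrative sketch of the fix described above.
runtime: custom
env: flex
# service: my-keycloak-service   # commented out so this deploys as 'default'
```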
Eventually keycloak didn't connect to the database, but if I don't manage to adjust the configuration, that will probably be the subject of a different question.
On the point of project structure and functionality:
First off, thanks @NoCommandLine and @guillaume-blaquiere for your input!
@NoCommandLine: the application on Cloud Run is sort of a headless, REST-API-enabled backend. Most of the API calls are secured by keycloak. A next step in the deployment process would be to port an existing React UI client to Firebase Hosting (or to another suitable service - I'm still not completely sure which approach is best), and in order for users to work with this client properly, they must first do an SSO through keycloak.
I'm quite new to GCP, and the number and variety of the available options are still overwhelming to me - one must get familiar with the nuances, and I guess that takes time. So I'm still taking suggestions on how to adjust my project structure to better fit the services stack. Thanks!

Process.Env Isn't Populated in React App on Azure

I'm trying to get a React app running in Azure App Services, and I'm having difficulty getting environment variables to work.
Locally
When running locally, I'm using a .env.local file (which isn't checked into git) to load environment variables:
REACT_APP_SERVER_URL=http://localhost:5000
I can then use it in my application by referencing process.env.REACT_APP_SERVER_URL. Everything works as expected.
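For completeness, here is a minimal sketch of the consuming code; the /health endpoint and the fallback URL are hypothetical. One caveat: create-react-app only reads .env.local when the dev server starts, so the server needs a restart after the file changes:

```js
// src/api.js - minimal sketch; endpoint and fallback are hypothetical.
const SERVER_URL = process.env.REACT_APP_SERVER_URL || 'http://localhost:5000';

export async function fetchHealth() {
  const res = await fetch(`${SERVER_URL}/health`);
  return res.json();
}
```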
On Azure
Now I'm trying to get the same thing working in Azure. I've set up configuration using Application Settings (described in their documentation here and here).
I can use Kudu to get to the debug console and see that the environment variable is being set.
But process.env.REACT_APP_SERVER_URL is not pulling in that value. If I console.log(process.env), my URL doesn't show up.
My React app was created using the create-react-app CLI a couple of days ago. It is deployed via GitHub Actions with the following startup command: (pm2 serve /home/site/wwwroot/build --no-daemon --spa)
From what I've read online it "should" just work, so I'm not sure where the breakdown is. My thought is that something with pm2 might be blocking environment variables, but I'm not sure.
This guy seems to be having a similar problem to me, but no answer is elaborated on that question. And I don't want to store config in package.json, since that's what Azure Config is supposed to help me do.
A React single-page app is, from an infrastructure perspective, just a bunch of static files: HTML, JS, CSS, whatever. It cannot reach into the server to read those variables, because there is no server-side code running.
You're going to have to bundle your environment variables at build time, whether via .env or package.json or wherever.
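To make the "bundle at build time" point concrete, here is a sketch of what create-react-app's build actually does with these references; the URL is illustrative:

```js
// Source, as written in the app:
const serverUrl = process.env.REACT_APP_SERVER_URL;

// What ends up in the bundle after
// `REACT_APP_SERVER_URL=https://api.example.com npm run build` (roughly):
// const serverUrl = "https://api.example.com";
```

In this setup that means the Azure Application Settings do exist on the server at runtime, but the bundle was already built in GitHub Actions, and pm2 is only serving the finished files; the variables have to be present during the build step instead.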
