I've got an app that I'm working on that consists of a Frontend - written using CRA - and a Backend - written in Java. We would like to support deploying this in multiple environments throughout our process, namely:
Local development machine - This is easy. Just run "npm start" and use the "development" environment
Local End-to-end tests - these build an infrastructure in Docker consisting of the Frontend, Backend and Database, and run all of the tests against that
Development CI system
QA CI System
Pre-production System
Production System
The problem that I've got is: across this myriad of different systems, what is the best way for the Frontend to know where the Backend is? I'm also hoping that the Frontend can be deployed onto a CDN, so if it can be plain static files with minimal complexity, that would be preferred.
I've got the first two of these working so far - they work by using .env files with a hard-coded hostname to call.
Once I get to the third and beyond - which is my next job - this falls down, because each of those environments needs a different hostname to call, but the same npm run build output has to work for all of them.
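For context, the working setup is just the standard CRA environment-file mechanism, roughly like this (the hostname is my local one):

```js
// .env.development (picked up automatically by `npm start`):
//   REACT_APP_BACKEND=http://localhost:8080/backend
//
// Anywhere in the React code, the value is baked in at build time:
const backendUrl = process.env.REACT_APP_BACKEND;
fetch(`${backendUrl}/some/endpoint`);
```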
I've seen suggestions of:
Hard-code every option and determine it based on the current browser location in a big switch statement. That just scares me - not only do I have to maintain my deployment properties inside my source code, but the Production source will know the internal hostnames, which is arguably a security risk.
Run different builds with provided environment variables - e.g. REACT_APP_BACKEND=http://localhost:8080/backend npm run build. This will work, but then the code going into QA, PreProd and Prod are actually different and could arguably invalidate testing.
Add another JavaScript file that gets deployed separately onto the server and is loaded from index.html (a rough sketch of this is below the list). That again means that the different test environments are technically different code.
Use DNS Tricks so that the hostnames are the same everywhere. That's just a mess.
Serve the files from an active web server, and have it dynamically write the URL out somehow. That would work, but we're hoping that the Production one can just be a simple CDN that just serves up static files.
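For concreteness, I imagine the "separate JavaScript file" option would look something like this (the file name, global variable and hostname are made up):

```js
// env.js - deployed next to (but never inside) the `npm run build` output,
// with one copy per environment; index.html loads it before the app bundle:
//   <script src="%PUBLIC_URL%/env.js"></script>
window.APP_CONFIG = {
  backendUrl: "https://backend.qa.example.internal",
};

// In the React code, read the runtime value and fall back to the dev default:
const backendUrl =
  (window.APP_CONFIG && window.APP_CONFIG.backendUrl) ||
  "http://localhost:8080/backend";
fetch(`${backendUrl}/some/endpoint`);
```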
So - what is the best way to manage this?
Cheers
Related
I'm no React developer, and I've been doing a Docker course that uses a multi-stage build Dockerfile with node and nginx to dockerize a React app. Why is nginx needed? And why can't we simply use npm start in production? Doesn't it already start a server and expose the port for React to run?
You are correct, there is really nothing stopping you from just doing npm start even for production. For development purposes, using an Nginx server is kind of overkill. However, the story is different for production environments. There are many reasons to use a "proper" webserver. Here are some points:
Performance and obfuscation when doing a production build of the React code: In order to improve performance you should do an npm run build to obtain minified and optimized code. This will reduce the file size of your application, which in turn reduces storage, memory, processing and networking resources. The result of npm run build is a bunch of static files which can be served from any web server.
This will also obfuscate your code, making it harder for others to see what the code does and harder to exploit potential bugs and weaknesses.
Obfuscation of infrastructure: A front-facing webserver can act as a "protective layer" towards the internet; hiding your infrastructure from the outside is good for security purposes.
More performance and security: A battle-tested production web server such as Nginx is highly performant and has HTTPS capabilities built in. A dev server usually won't have the same abilities; it will perform worse and won't offer nearly the same level of security.
Convenience: Handling a production environment can be significantly different from a dev environment. Nginx provides built-in logging, and you can easily restrict/allow/redirect calls to your server, set up load balancing, caching, A/B testing and much more, all of which can be essential in production.
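To make the multi-stage setup concrete, such a Dockerfile typically looks roughly like this (image tags and paths are examples; your course's version will differ in the details):

```dockerfile
# Stage 1: build the optimized static bundle with node
FROM node:18-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: serve only the static output with nginx; node itself never ships to production
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
```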
I guess your course just sets up example cases that are also relevant for people wanting to create production ready systems.
When your React app is ready and you publish it (npm run build), you get a bunch of static HTML and JS files. You need to serve them somehow, and this is where a webserver such as nginx comes in. (You can use any other webserver, for example serve.)
You can also serve with Node if you create a server that serves the static files. npm start is for debugging: it uses much more resources (RAM, CPU) and the files are bigger, so the app loads much more slowly.
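If you do want to serve the build output from Node rather than nginx, a minimal sketch with Express (file names are just examples) would be:

```js
// server.js - serves the static "build" folder produced by `npm run build`
const express = require("express");
const path = require("path");

const app = express();
app.use(express.static(path.join(__dirname, "build")));

// send index.html for any other route so client-side routing keeps working
app.get("*", (req, res) => {
  res.sendFile(path.join(__dirname, "build", "index.html"));
});

app.listen(process.env.PORT || 3000);
```

The serve package mentioned above does essentially the same thing with `npx serve -s build`.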
We have multiple environments, and we have environment files containing the backend configuration, which we use during the builds.
Example: ng build --c UAT
But I have an issue here: we have now decided to build only once and deploy the same artifact to multiple environments.
I know this is quite achievable using an Angular service and the APP_INITIALIZER token, but for some reason we can't use this.
So I decided that, after the build, I would modify the compiled JS files (main.js) with the respective environment configuration values. But it's becoming difficult because of the growing number of environment variables and their patterns.
So I thought of following the process below; please advise whether it is usable or not:
1. I'll build the UAT webpack (dist/artifact) using "ng build --c UAT".
2. I'll do the same for all other environments, so that I end up with 3 dist folders (webpacks) in total.
3. I'll deploy the UAT artifact to all environments, but before deploying it to Preprod I'll replace the "main.js" file with the Preprod artifact's main.js (because only main.js contains the environment configuration) and keep all the other JS files the same.
4. I'll repeat the same for the Prod deployment.
Please advise on this approach.
You made a good choice in deciding against environment-specific builds, as they'll always come back to haunt you. But with your approach you've only shifted the problem, since you still need to tweak the build artifact. If your configuration is highly dynamic, I would suggest reconsidering your decision not to use a Service to load the data dynamically at runtime, or at least stating the constraints that prevent that approach from working for you.
Assuming you still want to rely on static file content, the article How to use environment variables to configure your Angular application without a rebuild could be of interest to you. It suggests loading the data from an embedded env.js script and exposing it from there as an Angular service. While one could argue that this also just shifts the problem further, it at least allows your build artifact to remain untouched. For example, if you run your app from, say, an nginx Docker container, you could replace the values in env.js dynamically before the webserver starts. I'm using a simplified version of this approach, and while it still feels a bit hacky, it does work! Good luck!
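A minimal sketch of that idea, assuming a global like window.__env (the property names are placeholders; the linked article goes into more detail):

```ts
// env.js (loaded via a plain <script> tag in index.html, NOT bundled by the Angular build):
//   window.__env = { apiUrl: 'https://backend.uat.example.com' };

// env.service.ts - exposes whatever env.js defined at runtime
import { Injectable } from '@angular/core';

@Injectable({ providedIn: 'root' })
export class EnvService {
  // fall back to a dev default when env.js is absent (e.g. under `ng serve`)
  readonly apiUrl: string = (window as any).__env?.apiUrl ?? 'http://localhost:8080/backend';
}
```

At deploy time, the container's entrypoint can then rewrite env.js (with sed, envsubst or similar) before nginx starts, so the same artifact runs everywhere.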
I'm currently working on a live project. The frontend part of the system is in ReactJS. We are using create-react-app as the starter kit.
We are facing some issues in deploying the application on the live server. Earlier we followed the strategy of pushing the code to the server and then creating the build on it. But we noticed that while the build was being generated, our site became unavailable, which does not seem right. Hence we decided to create the build folder on the developer's local machine and push the build to the server. But now we are receiving a lot of change requests and feature requests, so I'm planning to move to a robust Git branching model. I believe this will create problems with the way we currently handle our deployment strategy (which is to move the build to production).
It would be really helpful if someone could point us in the right direction for handling deployment of ReactJS apps.
You can use Jenkins, which can be configured to trigger the build as soon as code in a branch is checked in to Git. I have not worked with Jenkins myself, but I have certainly seen people using it for such things.
Jenkins will run the build in its own environment (or in a temporary folder while the output is being generated, if Jenkins operates on the server directly), which produces the output bundle. So your code is never removed from the server in the meantime, and you can then patch the new files into the actual folder (which can also be automated using Jenkins), roughly as sketched below.
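A rough sketch of that "build elsewhere, then switch" idea (paths are placeholders; any CI tool could run these steps):

```sh
# run by Jenkins - the live site is only switched once the new build is complete
npm ci && npm run build
RELEASE=/var/www/releases/$(date +%Y%m%d%H%M%S)
mkdir -p "$RELEASE" && cp -r build/* "$RELEASE"/
# the webserver's document root points at this symlink, so the switch is atomic
ln -sfn "$RELEASE" /var/www/current
```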
My use case is composed of several ReactJs projects, where we collaborate using Git.
We are building a workflow around Git, and this is our current thinking:
Each programmer works locally, fetching the next branch
They work on their own branches, but in the end it is all merged into next
When all the pending tasks are done, we create a branch test from next
Once test is fine, it is branched to beta
When stable, it is branched to stable (see the git sketch below)
This is the development phase.
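Roughly, in git commands (branch names taken from the list above; the feature branch name is made up):

```sh
git checkout next && git pull          # everyone starts from next
git checkout -b feature/my-task        # work happens on individual branches
# ...commit, push, and merge back into next...
git checkout -b test next              # cut test from next when the pending tasks are done
git checkout -b beta test              # promote test to beta once it looks fine
git checkout -b stable beta            # promote beta to stable when it is stable
```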
For deployment, our first thought is to "build" the bundle on test, beta and stable and copy it to the respective servers for running, since we keep built bundles on a normal filesystem (this is how we do it today: we keep several bundles for several versions, not using Git).
Our production environment has dozens of servers at different customers, and every time we need to update we have to copy the respective bundle from the correct directory to the server and install it (all bundles are built with an installation tool).
So, I have 2 doubts here:
a) Is the development workflow a good practice? Any suggestions?
b) How can we make the deployment workflow smoother? Should we keep the built bundles in Git alongside the code? Should we use something different?
Ideally we would need the servers to auto-update on our command. What would be the correct way to accomplish that?
How can docker help automation testers?
I know it provides Linux containers, which are similar to virtual machines, but how can I use those containers in software automation testing?
Short answer
You can use Docker to easily create an isolated, reproducible and portable environment for testing. Every dependency goes into an image, and whenever you need an environment to test your application you just run those images.
Long answer
Applications have a lot of dependencies
A typical application has a lot of dependencies on other systems. You might have a database, an LDAP server, a Memcache or many other things your system depends on. The application itself needs a certain runtime (Java, Python, Ruby) in a specific version (Java 7 or Java 8). You might also need a server (Tomcat, Jetty, NGINX) with settings for your application. You might need a special folder structure for your application, and so on.
Setting up a test environment becomes complicated
All these things make up the environment you need for your application. You need this environment to run your application in production, to develop it and to test it (manually or automated). This environment can become quite complicated, and maintaining it will cost you a lot of time and trouble.
Dependencies become images
This is where Docker comes into play: Docker lets you put your database (with your application's initial data already set up) into a Docker image. The same goes for your LDAP, your Memcache and all the other applications you depend on. Docker even lets you package your own application into an image which provides the correct runtime, server, folder structure and configuration.
Images make your environment easily reproducible
Those images are self-contained, isolated and portable. This means you can pull them onto any machine and just run them as they are. Instead of installing a database, LDAP and Memcache and configuring all of them, you just pull the images and run them. This makes it super easy to spin up a new, fresh environment in seconds whenever you need one.
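For example, a whole test environment can be described in one Compose file and started with a single command (service names and images below are purely illustrative):

```yaml
# docker-compose.test.yml - the test environment as code
services:
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: test
  cache:
    image: memcached:alpine
  app:
    build: .              # your own application image
    depends_on: [db, cache]
    ports:
      - "8080:8080"
```

`docker compose -f docker-compose.test.yml up -d` brings it up fresh, and `docker compose -f docker-compose.test.yml down -v` throws away everything the tests left behind.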
Testing becomes easier
And that's the basis for your tests, because you need a clean, fresh and reproducible environment to run tests against. "Reproducible" and "fresh" are especially important. If you run automated tests (locally on the developer machine or on your build server) you must use the same environment, otherwise your tests are not reliable. Fresh is important because it means you can just stop all containers when your tests are finished, and every data mess your tests created is gone. When you run your tests again, you just spin up a new environment which is clean and in its initial state.