ReactJS typical workflow - from development to deployment

My use case involves several ReactJS projects on which we collaborate using Git.
We are building a Git-based workflow, and this is our current thinking:
Each programmer works locally, fetching the next branch
They build their own branches, but in the end everything is merged back to next
When all the pending tasks are done, we create a branch test from next
Once test is fine, it is branched to beta
When beta is stable, it is branched to stable
This is the development phase.
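The branch flow described above can be sketched with plain git commands. The demo below runs in a throwaway repo; the branch names (next, test, beta, stable) are the ones from the workflow, everything else is illustrative:

```shell
#!/bin/sh
# Demonstrates the described flow in a throwaway repo.
set -e
cd "$(mktemp -d)"
git init -q .
git config user.email "dev@example.com"   # local identity so commits work anywhere
git config user.name "Dev"
git checkout -q -b next
git commit -q --allow-empty -m "initial commit on next"

# Each programmer works on their own branch off next:
git checkout -q -b feature/my-task
git commit -q --allow-empty -m "task work"
git checkout -q next
git merge -q --no-ff -m "merge task into next" feature/my-task

# When the pending tasks are done, cut the promotion branches:
git checkout -q -b test next
git checkout -q -b beta test      # once test is fine
git checkout -q -b stable beta    # once beta is stable
git branch --list
```

The promotion branches all start from a known-good point, so each stage only ever moves forward by branching from the previous one.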
For deployment, our first thought is to build the bundle on test, beta and stable and copy it to the respective servers for running, as we keep built bundles on a normal filesystem (this is how we do it today: we keep several bundles for several versions, not using Git).
Our production environment has dozens of servers at different customers, and every time we need to update, we copy the respective bundle from the correct directory to the server and install it (all bundles are built with an installation tool).
So, I have two doubts here:
a) Is the development workflow a good practice? Any suggestions?
b) How do we make the deployment workflow smoother? Should we keep the bundles together with the code in Git? Should we use something different?
Ideally we would need the servers to auto-update on our command. What would be the correct way to accomplish that?
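For the "auto-update on our command" part, one common push-based approach is a script that walks a server list, pushes the bundle with rsync, and runs the installer over ssh. A sketch, where the hostnames, paths and install.sh entry point are all hypothetical, and a dry-run flag previews the actions:

```shell
#!/bin/sh
# Push-based fleet update (sketch). Hostnames, paths and install.sh are
# assumptions; DRY_RUN=1 (the default here) only prints what would happen.
set -e
BUNDLE="dist/"
SERVERS="customer1.example.com customer2.example.com"
DRY_RUN=${DRY_RUN:-1}

deployed=""
for host in $SERVERS; do
  if [ "$DRY_RUN" = "1" ]; then
    echo "would rsync $BUNDLE to deploy@$host:/opt/app/ and run the installer"
  else
    rsync -az --delete "$BUNDLE" "deploy@$host:/opt/app/"
    ssh "deploy@$host" '/opt/app/install.sh'
  fi
  deployed="$deployed $host"
done
echo "processed:$deployed"
```

Keeping the server list in one file (or in your CI tool) turns "update everyone" into a single command, which is usually enough before reaching for heavier configuration-management tooling.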

Related

Replacing the main.js (compiled JS) file from one Angular build artifact in another build artifact

We have multiple environments, and we have environment files containing backend configurations, which we use during the builds.
Example: ng build -c UAT
But I have an issue here: we have now decided to build only once and deploy the same artifact to multiple environments.
I know this is quite achievable using an Angular service and the APP_INITIALIZER token, but for some reason we can't use this.
So I decided to modify the compiled JS files (main.js) after the build with the respective environment configuration values. But this is becoming difficult because of the increasing number of environment variables and their patterns.
So I thought of following the process below; please suggest whether it is usable or not:
1. I'll build the UAT webpack bundle (dist/artifact) using ng build -c UAT.
2. I'll do the same for all other environments, so I have three dist folders (webpacks) in total.
3. I'll deploy the UAT artifact to all environments, but before deploying to Preprod I'll replace the main.js file with the main.js from the Preprod artifact, because only main.js holds the environment configuration, and keep all other JS files the same.
4. I'll repeat the same for the Prod deployment.
Please suggest on this approach.
You made a good choice in deciding against environment-specific builds, as they'll always come back to haunt you. But with your approach you have only shifted the problem, since you still need to tweak the build artifact. When your configuration is highly dynamic, I would suggest reconsidering your decision not to use a service to load the data dynamically at runtime, or at least stating the constraints that rule this approach out for you.
Assuming you still want to rely on static file content, the article How to use environment variables to configure your Angular application without a rebuild could be of interest to you. It suggests loading the data from an embedded env.js script and exposing it from there as an Angular service. While one could argue that this also just shifts the problem further, it at least allows your build artifact to remain untouched. For example, if you run your app from, say, an nginx Docker container, you could replace values in env.js dynamically prior to the webserver start. I'm using a simplified version of this approach, and while it still feels a bit hacky, it does work! Good luck!
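A minimal sketch of that env.js idea. The placeholder token __API_URL__ and the variable names are my own, not from the article: the build ships env.js with a placeholder, index.html loads it before the bundle, and the container entrypoint rewrites it with sed before the web server starts.

```shell
#!/bin/sh
# Build step ships a placeholder env.js (part of the artifact, identical
# for every environment):
set -e
cd "$(mktemp -d)"
mkdir dist
cat > dist/env.js <<'EOF'
// Loaded by index.html before main.js; the app reads window.__env at runtime.
(function (window) {
  window.__env = window.__env || {};
  window.__env.apiUrl = '__API_URL__';
})(this);
EOF

# Entrypoint step (e.g. in a docker-entrypoint.sh, before starting nginx):
API_URL="https://api.preprod.example.com"
sed "s|__API_URL__|$API_URL|" dist/env.js > dist/env.js.tmp \
  && mv dist/env.js.tmp dist/env.js
grep apiUrl dist/env.js
```

An Angular service can then read window.__env instead of the compiled environment files, so the artifact itself never needs to differ per environment.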

What is the right way to upload build folder to production server for create-react-app?

I'm currently working on a live project. The frontend part of the system is in ReactJS. We are using create-react-app as the starter kit.
We are facing some issues in deploying the application on the live server. Earlier we followed the strategy of pushing the code to the server and then creating the build on it. But we noticed that while the build was being generated, our site became unavailable, which did not seem right. Hence we decided to create the build folder on the developer's local machine and push the build to the server. But now we are receiving a lot of change requests and feature requests, so I'm planning to move to a robust Git branching model. I believe this will create problems with the way we currently handle our deployment strategy (which is to move the build to production).
It would be really helpful if someone could point us in the right direction for handling deployment of ReactJS apps.
You can use Jenkins, which can be configured to trigger a build as soon as code is checked in to a branch in Git. I have not worked with Jenkins myself, but I have certainly seen people using it for such things.
Jenkins will run the build in its own environment (or you can create a temp folder for the time the build is being generated, if Jenkins operates on the server directly), which will produce the output bundle. That way your code is not removed from the server in the meantime, and you can then patch the new files into the actual folder (which can also be automated with Jenkins).
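The "build aside, then swap" step can be made atomic with a releases directory and a current symlink, so the site never serves a half-written build. A local sketch, where the directory layout is illustrative and npm run build is simulated by writing one file:

```shell
#!/bin/sh
# Build into a fresh release dir, then atomically repoint "current".
set -e
APP_ROOT="$(mktemp -d)"     # stand-in for e.g. /var/www/myapp
RELEASE="$APP_ROOT/releases/$(date +%Y%m%d%H%M%S)"
mkdir -p "$RELEASE"

# Here you would run: npm run build && cp -r build/. "$RELEASE"/
echo "console.log('bundle');" > "$RELEASE/main.js"

# ln -sfn replaces the symlink in one step; the web server's docroot
# points at $APP_ROOT/current, so visitors always see a complete build.
ln -sfn "$RELEASE" "$APP_ROOT/current"
ls "$APP_ROOT/current"
```

Keeping a few old release directories around also gives you instant rollback: repoint the symlink at the previous release.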

Deploying Create-React-App applications into different environments

I've got an app that I'm working on that consists of a Frontend - written using CRA - and a Backend - written in Java. We would like to support deploying this in multiple environments throughout our process, namely:
Local development machine - This is easy. Just run "npm start" and use the "development" environment
Local End-to-end tests - these build an infrastructure in Docker consisting of the Frontend, Backend and Database, and runs all of the tests against that
Development CI system
QA CI System
Pre-production System
Production System
The problem that I've got is - in the myriad of different systems, what is the best way for the Frontend to know where the Backend is? I'm also hoping that the Frontend can be deployed onto a CDN, so if it can be static files with the minimal complexity that would be preferred.
I've got the first two of these working so far - they work through the use of .env files and a hard-coded hostname to call.
Once I get to the third and beyond - which is my next job - this falls down, because each case has a different hostname to call, but the same output of npm run build doing the work.
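For reference, the .env mechanism that covers the first two environments looks like this (values are placeholders; CRA only exposes variables prefixed with REACT_APP_, and they are baked in at build time, which is exactly why this stops scaling past a couple of environments):

```shell
# .env.development  (picked up by "npm start")
REACT_APP_BACKEND=http://localhost:8080/backend

# .env.production   (picked up by "npm run build")
REACT_APP_BACKEND=https://backend.example.com/backend
```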
I've seen suggestions of:
Hard-code every option and determine it based on the current browser location in a big switch statement. That just scares me - not only do I have to maintain my deployment properties inside my source code, but the Production source will know the internal hostnames, which is arguably a security risk.
Run different builds with provided environment variables - e.g. REACT_APP_BACKEND=http://localhost:8080/backend npm run build. This will work, but then the code going into QA, PreProd and Prod are actually different and could arguably invalidate testing.
Adding another Javascript file that gets deployed separately onto the server and is loaded from index.html. That again means that the different test environments are technically different code.
Use DNS Tricks so that the hostnames are the same everywhere. That's just a mess.
Serve the files from an active web server, and have it dynamically write the URL out somehow. That would work, but we're hoping that the Production one can just be a simple CDN that just serves up static files.
So - what is the best way to manage this?
Cheers

Incremental deployment, why is that?

So far, I've encountered different scenarios of deployment, at least three types.
Full Build, Full Deploy. Like most Java and .NET applications: a Jenkins job builds the whole application and deploys the whole application.
No Build, Incremental Deploy. Like Mainframe, Informatica, Vitria, and even databases; these types of applications deploy only the files that changed.
Full Build, Incremental Deploy. E.g. a poorly structured Java application: the build generates seven jars in total, but only one of them actually changed, and they want to deploy only that jar. Same for .NET applications.
After googling, I believe the 3rd case does not follow best practice and should be resolved at the application architecture level. How to structure, partition, and build large MVC application for deployment in small incremental pieces?
The 1st case is simple: we take everything from the source control system, build it and deploy it.
The tricky one is the 2nd. I have to generate a list of the files changed in this build and deploy only them. Any good experience with how to handle this well?
For incremental deployment, I use rsync. It's a program for synchronizing files, directories, permissions, etc. By default, it performs the task of figuring out which files have changed. When you have gigs of data and fifty thousand files, re-deploying is fast and can be automated.

Continuous Delivery for multi component Project

In our project we have multiple components developed by separate teams, each with its own Git repo.
All components have a commit job and a packaging job, and publish their artifacts to Artifactory.
The problem comes when we want to deploy all the components as a system.
Since all these components deploy to separate servers and then interact with each other, inconsistencies often arise when a newer version of a component is deployed to one of the servers.
For example, I have components A, B, C and want to move versions A1, B1, C1 through the deployment and testing pipeline. How can I ensure that no newer version of a component is deployed to the QA environment (servers)? I am using Jenkins as my CI/CD tool. It seems I need some integration or lightweight configuration-management tool to manage the versioning of my system as a whole, comprising all components, which I can promote through the deployment pipeline.
I hope I have described my question clearly. Suggestions on how to tackle this situation would be really helpful.
Thanks,
We use this pattern:
For every customer that uses our products there is one "project": it contains almost no code, just configuration. We use this naming scheme: coreapp_customerslug.
The project depends on N applications and pins the exact versions of all its dependencies.
During CI we do this:
Install project P and all the pinned dependencies.
Update all dependencies to their latest versions.
Run all tests.
If all tests succeed, update the pinned versions of the dependencies and increment the version of the project.
Now the project has a new, stable release.
Deploy the new release (at the moment we don't do this automatically, but we will in the near future).
With this pattern (the "project" is a container for the apps) you can handle the versioning problem. If you have several servers, the update process should be fast, to avoid running different versions at the same time.
Update
The CI maintains the pinned versions. We use Python and pip, and the file requirements.txt gets updated by a script. We use the version scheme YYYY.N; N gets incremented if all tests pass.
Attention: if app1 has latest version N, this does not mean that it works in all projects. If you have two projects, P1 and P2, this can happen: app1 at latest version N works well in project P1 but fails in P2. This means you can't create a new stable version of project P2. Sometimes this is annoying, but it keeps a constant flow of updates alive: we always use the latest versions of our apps in our projects.
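The YYYY.N bump this answer describes can be scripted roughly like this. The VERSION file name is my assumption, and the pip/pytest lines are left as comments since they depend on the project:

```shell
#!/bin/sh
# Try the latest deps; only on green tests pin them and bump YYYY.N.
set -e
cd "$(mktemp -d)"
echo "2023.4" > VERSION       # pretend this was the last release

# pip install -U -r requirements-loose.txt   # update deps to latest
# pytest                                     # run all tests
# pip freeze > requirements.txt              # pin the versions that passed

year=$(date +%Y)
old=$(cat VERSION)
case "$old" in
  "$year".*) n=$(( ${old#*.} + 1 )) ;;   # another release this year
  *)         n=1 ;;                      # first release of a new year
esac
echo "$year.$n" > VERSION
cat VERSION
```

Because the pinned requirements.txt and the bumped VERSION are committed together, a project release always names one reproducible set of component versions, which is exactly what the questioner needs to promote through QA.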
