How much does Spring Native Image affect the CI/CD pipeline? Do we need to create a build in each environment? - continuous-deployment

We are planning to switch to Spring Native using graalvm-ce-java8-21.2.0. My question is: do we need to build and test a copy in every environment, or can we just copy the executable or Docker image directly into the testing and then the production environment? Thanks.
At the moment, I have tried deployment on different machines (same OS) and it looks smooth.
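For context, a minimal sketch of the build-once, promote-everywhere idea being asked about, assuming a hypothetical Spring Boot project and a Docker multi-stage build (image tags, paths and the Maven profile are assumptions, not taken from the question):

```dockerfile
# Build the native executable once in CI (tags and paths are hypothetical)
FROM ghcr.io/graalvm/graalvm-ce:java8-21.2.0 AS build
RUN gu install native-image
WORKDIR /app
COPY . .
RUN ./mvnw -Pnative -DskipTests package

# Ship only the executable; this single image would then be promoted
# unchanged through testing and production
FROM debian:stable-slim
COPY --from=build /app/target/demo /app/demo
ENTRYPOINT ["/app/demo"]
```

Because the executable is compiled ahead of time, the same image can be tagged and promoted from one environment to the next instead of being rebuilt in each.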

Related

Runtime Environment Variables in React for UAT and Live for a single build - devops

Sorry if you think this has been asked before, but there doesn't seem to be a good solution anywhere.
I have a build pipeline that packages up my react app into a single artifact.
The release pipeline pushes that artifact to different Azure storage accounts for each environment (Dev, UAT, Live).
Surely there is a way to use DevOps variables to configure variables in my package per environment.
Other solutions:
One build per environment - I don't want to do this because I would need to create a branch for each environment, a pipeline for each, the env configs for each, and a release pipeline for each. This means a change to 1 environment takes 3 times as long. Also, the time to build these environments trebles.
Using a JSON file and swapping it out on deployment - this didn't work because webpack imported the JSON file into the build, so although I transformed the config.json files, it was too late. This seems similar to using .env.development and .env.live and would mean 3 builds.
Pulling the environment out of the request URL and calling an endpoint - seems like my only option, but it definitely has flaws.
This isn't an issue in .NET (or Java I believe, .NET is my background) and was solved years ago with web.config and appSettings.
Please let me know if you have solved this, and how.
Thanks for your help
We could use the task Replace Tokens to use DevOps variables to configure variables in the package per environment:
The format of a variable in the .json file is #{TestVar}#.
Then define the keys' values under Variables, based on the stages:
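As a minimal sketch (variable names are hypothetical), the packaged config.json would contain tokens instead of values:

```json
{
  "apiUrl": "#{ApiUrl}#",
  "environmentName": "#{EnvironmentName}#"
}
```

and each stage of the release pipeline would run the marketplace task before pushing to storage, for example:

```yaml
# Replace Tokens marketplace task; the major version may differ
- task: replacetokens@3
  inputs:
    targetFiles: '**/config.json'
```

With ApiUrl and EnvironmentName defined per stage under Variables, the same artifact picks up the right values in Dev, UAT and Live.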
Hope this helps.

What is the right way to upload build folder to production server for create-react-app?

I'm currently working on a live project. The frontend part of the system is in ReactJS. We are using create-react-app as the starter kit.
We are facing some issues deploying the application to the live server. Earlier we followed the strategy of pushing the code to the server and then creating the build on it, but we noticed that while the build was being generated, our site became unavailable, which does not seem right. Hence we decided to create the build folder on the developer's local machine and push the build to the server. But now we are receiving a lot of change and feature requests, so I'm planning to move to a robust Git branching model. I believe this will create problems with the way we are currently handling our deployment strategy (which is to move the build to production).
It would be really helpful if someone could point us in the right direction for handling deployment of ReactJS apps.
You can use Jenkins, which can be configured to trigger the build as soon as code in a branch is checked in to Git. I have not worked with Jenkins myself, but I have seen people use it for exactly this.
Jenkins will run the build in its own environment (or, if Jenkins operates on the server directly, you can create a temp folder for the duration of the build), which will generate the output bundle. That way your code is never removed from the server while the build runs, and you can then copy the new files into the actual folder (which can also be automated with Jenkins).
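A minimal declarative Jenkinsfile sketch of that flow (the branch name and deploy path are hypothetical):

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'npm ci'            // install exact dependencies
                sh 'npm run build'     // bundle lands in ./build; the live site is untouched
            }
        }
        stage('Deploy') {
            when { branch 'master' }   // hypothetical branch
            steps {
                // swap the finished bundle into the served folder in one step
                sh 'rsync -a --delete build/ /var/www/my-app/'
            }
        }
    }
}
```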

Deploying Create-React-App applications into different environments

I've got an app that I'm working on that consists of a Frontend - written using CRA - and a Backend - written in Java. We would like to support deploying this in multiple environments throughout our process, namely:
Local development machine - This is easy. Just run "npm start" and use the "development" environment
Local End-to-end tests - these build an infrastructure in Docker consisting of the Frontend, Backend and Database, and runs all of the tests against that
Development CI system
QA CI System
Pre-production System
Production System
The problem that I've got is: across this myriad of different systems, what is the best way for the Frontend to know where the Backend is? I'm also hoping that the Frontend can be deployed onto a CDN, so static files with minimal complexity would be preferred.
I've got the first two of these working so far - they work by using .env files and a hard-coded hostname to call.
Once I get to the third and beyond - which is my next job - this falls down, because each case needs a different hostname to call, yet the same npm run build output has to do the work.
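For reference, a minimal sketch of that first approach (the hostname is hypothetical); Create React App only exposes variables prefixed with REACT_APP_, and they are baked in at build time:

```sh
# .env.development
REACT_APP_BACKEND=http://localhost:8080/backend
```

which the code then reads as process.env.REACT_APP_BACKEND.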
I've seen suggestions of:
Hard-code every option and determine it based on the current browser location in a big switch statement. That just scares me - not only do I have to maintain my deployment properties inside my source code, but the Production source will know the internal hostnames, which is arguably a security risk.
Run different builds with provided environment variables - e.g. REACT_APP_BACKEND=http://localhost:8080/backend npm run build. This will work, but then the code going into QA, PreProd and Prod are actually different and could arguably invalidate testing.
Adding another Javascript file that gets deployed separately onto the server and is loaded from index.html (see the sketch after this list). That again means that the different test environments are technically different code.
Use DNS Tricks so that the hostnames are the same everywhere. That's just a mess.
Serve the files from an active web server, and have it dynamically write the URL out somehow. That would work, but we're hoping that the Production one can be a simple CDN that just serves static files.
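For the separately-deployed-file option mentioned above, a minimal sketch (file and property names are hypothetical): ship the identical bundle everywhere, and have each environment serve its own tiny config file next to it.

```html
<!-- public/index.html: load the per-environment config before the bundle -->
<script src="%PUBLIC_URL%/config.js"></script>
```

```js
// config.js, deployed per environment and never bundled (values are hypothetical)
window.APP_CONFIG = { backendUrl: "https://qa.example.internal/backend" };

// anywhere in the app: prefer the runtime value, fall back to the build-time one
const backendUrl =
  (window.APP_CONFIG && window.APP_CONFIG.backendUrl) ||
  process.env.REACT_APP_BACKEND;
```

The bundle itself stays byte-for-byte identical from QA through Production; only the one-line config file differs per environment.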
So - what is the best way to manage this?
Cheers

How can docker help software automation testers?

How can docker help automation testers?
I know it provides Linux containers, which are similar to virtual machines, but how can I use those containers in software automation testing?
Short answer
You can use Docker to easily create an isolated, reproducible and portable environment for testing. Every dependency goes into an image, and whenever you need an environment to test your application you just run those images.
Long answer
Applications have a lot of dependencies
A typical application has a lot of dependencies on other systems. You might have a database, an LDAP server, a Memcache instance or many more things your system depends on. The application itself needs a certain runtime (Java, Python, Ruby) in a specific version (Java 7 or Java 8). You might also need a server (Tomcat, Jetty, NGINX) with settings for your application. You might need a special folder structure for your application, and so on.
Setting up a test environment becomes complicated
All these things make up the environment you need for your application. You need this environment to run your application in production, to develop it and to test it (manually or automated). This environment can become quite complicated, and maintaining it will cost you a lot of time and trouble.
Dependencies become images
This is where Docker comes into play: Docker lets you put your database (with the initial data of your application already set up) into a Docker image. The same goes for your LDAP, your Memcache and all other applications you depend on. Docker even lets you package your own application into an image which provides the correct runtime, server, folder structure and configuration.
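As a minimal sketch (the image tag and file names are hypothetical), packaging your own application with its runtime, server and configuration could look like:

```dockerfile
# Hypothetical image for a Java web application on Tomcat 8 / Java 8
FROM tomcat:8-jre8
COPY conf/server.xml /usr/local/tomcat/conf/server.xml
COPY target/myapp.war /usr/local/tomcat/webapps/ROOT.war
EXPOSE 8080
```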
Images make your environment easily reproducible
Those images are self-contained, isolated and portable. This means you can pull them onto any machine and just run them as they are. Instead of installing a database, LDAP and Memcache and configuring all of them, you just pull the images and run them. This makes it super easy to spin up a new and fresh environment in seconds whenever you need one.
Testing becomes easier
And that's the basis for your tests, because you need a clean, fresh and reproducible environment to run tests against. "Reproducible" and "fresh" are especially important. If you run automated tests (locally on the developer machine or on your build server) you must use the same environment, otherwise your tests are not reliable. Fresh is important because it means you can just stop all containers when your tests are finished, and every data mess your tests created is gone. When you run your tests again, you just spin up a new environment which is clean and in its initial state.
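A minimal sketch of such a throwaway environment using Docker Compose (service names and images are hypothetical):

```yaml
# docker-compose.test.yml
services:
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: test
  cache:
    image: memcached:1.6
  app:
    image: myorg/myapp:latest
    depends_on: [db, cache]
    ports:
      - "8080:8080"
```

```sh
docker compose -f docker-compose.test.yml up -d   # fresh environment in seconds
# ... run the test suite against http://localhost:8080 ...
docker compose -f docker-compose.test.yml down -v # tear down, including data volumes
```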

Deploying AngularJs + Sinatra to AWS

I have an AngularJS site consuming an API written in Sinatra.
I'm simply trying to deploy these 2 components together on an AWS EC2 instance.
How would one go about doing that? What tools do you recommend? What structure do you think is most suitable?
Cheers
This is based upon my experience of using the HashiCorp line of tools.
Manual: Launch an Ubuntu image, gem install sinatra and deploy your code. Take a snapshot for safekeeping. This one-off approach is good for a development box, to iron out the configuration process. Write down the commands you run and any options you may need.
Automated: Use the Packer EC2 Builder and Shell Provisioner to automate your commands from the previous manual approach. This will give you a configured AMI that can be launched.
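A minimal sketch of such a template in Packer's legacy JSON format (region, source AMI and packages are hypothetical):

```json
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-1",
    "source_ami": "ami-xxxxxxxx",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "sinatra-app-{{timestamp}}"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": [
      "sudo apt-get update",
      "sudo apt-get install -y ruby",
      "sudo gem install sinatra"
    ]
  }]
}
```

Running packer build on this produces a freshly configured AMI each time, which is the immutable image mentioned below.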
You can apply different methods of getting to an AMI using different toolsets. However, in the end, you want a single immutable image that can be deployed repeatedly.
