Bitbucket pipeline failing while running a Newman collection with images as multipart form data

I want to run a Newman collection that also requires images as multipart form data.
How can I run the Newman test collection on a Bitbucket pipeline so that it can pick up the images too? I couldn't find anything about this.
Script that I used:
image: postman/newman_alpine33
pipelines:
  default:
    - step:
        script:
          - newman --version
          - newman run ./ocrCollection/Pipelinedemo.postman_collection.json
I have added the image to the collection folder in the repo as well.
I am getting this error:
How can I run the tests that use the images?
Thanks in advance.

Related

Azure DevOps - React App - Set Environment Variables in Release Pipeline

We have a React app in AzureDevOps. We build it using npm install/npm run build and then upload the zip file. From there we'll do a release to multiple stages/environments. Due to SOX compliance we're trying to maintain a single build/artifact no matter what the environment.
What I'm trying to do is set the environment variables during the release pipeline, for instance substitute the value of something like process.env.REACT_APP_CONFIG_VALUE.
I've tried setting that in the Pipeline variables during the release but it does not seem to work. Is this possible or do I have to use a json config of some sort instead of using process.env?
Thanks
You cannot achieve this by setting pipeline variables during the release.
I suggest using the RegEx Match & Replace extension task to achieve this. You can use a site such as Regex Generator to build the regular expression.
Here is an example:
this._baseUrl = process.env.REACT_APP_CONFIG_VALUE;
This extension task will use regular expressions to match fields in the file.
You can verify the substitution by checking the published JS.
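If you would rather not install an extension, a roughly equivalent inline script step in the release can do the substitution. A minimal sketch, assuming the build bakes a sentinel placeholder (__REACT_APP_CONFIG_VALUE__) into the bundle and RELEASE_CONFIG_VALUE is supplied as a release variable (both names and the artifact path are assumptions):

# Replace the baked-in placeholder in the published JS with the release-time value
find "./drop/static/js" -name '*.js' \
  -exec sed -i "s|__REACT_APP_CONFIG_VALUE__|${RELEASE_CONFIG_VALUE}|g" {} +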
This is how I did it -
Step 1: Add all those files (.env, .env.development, .env.production) to the Azure DevOps library as secure files. We can download these secure files on the build machine using a DownloadSecureFile@1 pipeline task (YAML). This way we make sure the correct .env file is present on the build machine before the yarn build --mode development task runs in the pipeline.
Step 2: Add the following tasks to your Azure YAML pipeline in the appropriate place. I have created a GitHub repo https://github.com/mail4hafij/react-yarn-azure-pipeline if you want to see a complete example.
# Download secure file from the Azure library
- task: DownloadSecureFile@1
  inputs:
    secureFile: '.env.development'

# Copy the .env file to the project root
- task: CopyFiles@2
  inputs:
    sourceFolder: '$(Agent.TempDirectory)'
    contents: '**/*.env.development'
    targetFolder: '$(YOUR_DEFINED_PROJECT_ROOT_FOLDER_VARIABLE)'
    cleanTargetFolder: false
Keep in mind that secure files can't be edited, but you can always re-upload them.
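For context, a hedged sketch of the build step that would follow and pick up the copied .env.development file (the yarn commands and displayName are assumptions based on the yarn build --mode development command mentioned above):

# Assumed follow-up step: build the app once .env.development is in the project root
- script: |
    yarn install
    yarn build --mode development
  displayName: 'yarn build (development)'
  workingDirectory: '$(YOUR_DEFINED_PROJECT_ROOT_FOLDER_VARIABLE)'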

How to automatically clear cache on user's already loaded website whenever a new build is generated?

I am using this react boilerplate code - https://github.com/react-boilerplate/react-boilerplate and webpack - https://webpack.js.org/configuration/
The problem I am facing is that whenever I do npm run build and push the build folder to prod, users have to manually do Ctrl+Shift+R (a hard refresh) to see the latest changes.
I have tried many methods (hashing scripts, etc.) to clear the cache on a new build, but none of them is working.
One of the methods I tried is https://dev.to/ammartinwala52/clear-cache-on-build-for-react-apps-1k8j
But the thing is, in my build folder there isn't any public folder or any meta.json.
Any help will be appreciated.

Build output on AWS CodeDeploy pipeline

I have set up a pipeline on AWS CodeDeploy.
My buildspec.yml has a line that runs the React build script, and judging by the pipeline build output the build runs OK.
Yet, in the final image that is deployed, only the repo files exist; React's build output folder is not there.
On localhost this works OK.
I have read a ton of AWS documentation and googled examples, and I can't understand what's wrong.
The build output will not be added to the CodeCommit repository. It will be stored in the S3 bucket you specified in the artifacts section of CodeBuild.
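For reference, a minimal buildspec.yml sketch with an artifacts section that captures React's build folder (the phase commands and paths are assumptions):

version: 0.2
phases:
  build:
    commands:
      - npm install
      - npm run build
artifacts:
  # Upload the contents of React's build folder as the build artifact
  base-directory: build
  files:
    - '**/*'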
If you still want to store the build in CodeCommit, then create an event from that S3 bucket to a Lambda, and use the boto3 put_file operation of CodeCommit in the Lambda to push that S3 file to CodeCommit. For reference: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/codecommit.html#codecommit
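A minimal sketch of such a Lambda, assuming it is triggered by an S3 ObjectCreated event; the repository name, branch and target path are placeholders:

# Lambda sketch: copy a build artifact from S3 into CodeCommit via put_file.
# REPO_NAME, BRANCH_NAME and the file path are assumptions for illustration.
import boto3

s3 = boto3.client('s3')
codecommit = boto3.client('codecommit')

REPO_NAME = 'my-repo'
BRANCH_NAME = 'build-output'

def lambda_handler(event, context):
    record = event['Records'][0]['s3']
    bucket = record['bucket']['name']
    key = record['object']['key']

    # Read the artifact that CodeBuild uploaded to S3
    body = s3.get_object(Bucket=bucket, Key=key)['Body'].read()

    # put_file needs the current head commit of the target branch as parent
    parent_commit = codecommit.get_branch(
        repositoryName=REPO_NAME, branchName=BRANCH_NAME
    )['branch']['commitId']

    codecommit.put_file(
        repositoryName=REPO_NAME,
        branchName=BRANCH_NAME,
        parentCommitId=parent_commit,
        filePath=key,
        fileContent=body,
        commitMessage='Add build artifact {} from s3://{}'.format(key, bucket),
    )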

Best practice for build / deployment of docker images

I've just finished the basic pipeline for my angular application, which runs in a Node image in docker. So, the process works as follows: push to Gitlab > Hook to Jenkins Build > Deployment script to docker build image and push to Quay > Publish script to prompt Rancher service to upgrade the container and refresh the image > Complete.
Now, the problem I have is that the base Node image is quite large, meaning that when I push a simple change it takes a long while to complete the build pipeline (~8 minutes). This seems unreasonable for every tiny change, and the push to Quay and the subsequent publish to the Rancher platform mean I'm moving around 250 MB up to Quay and 250 MB to Rancher.
I have several "micro-services" planned for deployment, but if each time I want to deploy one to a development environment I have to move that much data around, it seems somewhat counterproductive... Am I doing something wrong, what am I missing, and are there any guidelines for best practice when building/deploying/hosting container-based services?
First some info on images, builds, registries and clients.
Images and Layers
Docker image builds work with layers. Each step in your Dockerfile commits a layer that is overlaid on top of the previous.
FROM node              ---> a6b9ffdcf522
RUN apt-get update -y  ---> 72886b467bd2
RUN git clone whatever ---> 430615b3487a
RUN npm install        ---> 4f8ddac8d3b5  (tagged mynode:latest)
Each layer that makes up an image is individually identified by a sha256 checksum. The IMAGE ID in docker images -a is a short snippet of that.
Running dockviz images -t on a build host will give you a better idea of the tree of layers that can build up. While a build is running you can see a branch grow, and then the final layer is eventually tagged, but that layer stays in the tree and maintains a link to its parents.
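If you don't have dockviz to hand, the stock CLI shows much of the same information; a couple of illustrative commands (the image name is an assumption):

# Show the layer stack (one row per Dockerfile step) of a locally built image
docker history mynode:latest
# List all images on the build host, including intermediate layers
docker images -a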
Build Caching
Docker builds are cached by default at each build step. If the RUN command in the Dockerfile hasn't changed, or the source files you COPY haven't changed, then that build step should not need to run again. The layer stays the same, as does its sha256 checksum ID, and docker moves on to build the next layer.
When docker gets to a step that does need to be rebuilt, the image "tree" that dockviz presents will branch off to create the new layer with a new checksum. Any steps after this then need to run again and create a layer on the new branch.
Registries
Registries understand this layering too. If you only change the topmost layer in your newly tagged image, that's the only layer that should need to be uploaded to the registry (there are caveats to this; it works best with a recent Docker, 1.10.1+, and Registry 2.3+). The registry will already have a copy of most of the image IDs that make up your new image, and only the new layers will need to be sent.
Clients
Docker registry clients deal with layers in the same way. When pulling an image, the client actually downloads the individual layers (blobs) that make up the image. You can see this in the list of image IDs printed when you docker pull or docker run a new image. Again, if most of the layers are the same, the update only needs to download the topmost layers that have changed, saving precious time.
Minimising build time
So the things you want to focus on are
Keep the image sizes small
Take advantage of build caching
Make use of common "tagged" parent images.
Keep the image sizes small
The main way to save time is to not have anything to do in the first place.
The less data in an image the better.
If you can avoid using a full OS, do it. When you can run apps on the busybox or alpine image, it makes the Docker gods smile. Alpine + a Node.js build is less than 50 MB. Go binaries are a great example of minimising size too: they can be statically compiled and have no dependencies, so they can even be run on the blank scratch image.
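As an illustration of the size difference, a minimal sketch of the same kind of Node.js app on an Alpine base (the tag and file layout are assumptions):

# Alpine-based Node image keeps the final image far smaller than the full node image
FROM node:alpine
WORKDIR /app
COPY package.json /app/package.json
RUN npm install --production && rm -rf ~/.npm
COPY . /app/
CMD [ "node", "/app/server.js" ]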
Take advantage of Dockers build caching
It's important to have your most frequently changing artefacts (most likely your code) as a late entry in your Dockerfile. The build will slow down if it has to rebuild and re-upload the complete 50 MB of data because one little file change invalidated the cache for an early build step.
There will always be some changes that invalidate the entire cache (like updating the base node image). These you just have to live with once in a while.
Anything else in the build that updates infrequently should go towards the top of the Dockerfile.
Make use of common "tagged" parent images
Although image checksumming has been somewhat fixed from Docker 1.10 onwards, using a common parent image guarantees that you will be starting from the same shared image ID wherever you use that image with FROM.
Prior to Docker 1.10, image IDs were just random UUIDs. If you had builds running on multiple hosts, layers could all be invalidated and replaced depending on which host built them, even if the layers were in fact the same thing.
Common parent images also help when you have multiple services and multiple Dockerfiles that are largely the same. Whenever you start repeating build steps in multiple Dockerfiles, pull those steps out into a common parent image so layers are definitely shared between all your services. You are essentially getting this already by using the node image as your base.
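A hedged sketch of what that can look like, where the parent image name myorg/node-base and its contents are assumptions:

# Dockerfile.base -- built once and pushed, e.g. as myorg/node-base:latest
FROM node
RUN apt-get update -y \
 && apt-get install -y git build-essential

# Each service's Dockerfile then reuses the exact same shared layers
FROM myorg/node-base:latest
WORKDIR /app
COPY . /app/
CMD [ "node", "/app/server.js" ]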
Node.js tricks
If you are running an npm install after your code deploy on every build and you have a number of dependencies, the npm install causes a lot of repeated work that doesn't actually change much between builds. It can be worthwhile to structure the build so node_modules is built before the code is copied in. Then the npm install only needs to run when package.json is updated:
FROM node
WORKDIR /app
COPY package.json /app/package.json
RUN npm install && rm -rf ~/.npm
COPY . /app/
CMD [ "node", "/app/server.js" ]
Staged build
If you rely on npm packages with native modules, you will sometimes need to install a complete build chain in the container to run the npm install. Staged builds can now easily separate the build image from the run image.
FROM node:8 AS build
WORKDIR /build
RUN apt-get update \
 && apt-get install -y build-essential
COPY package.json /build/package.json
RUN npm install \
 && rm -rf ~/.npm
# Stage 2 app image
FROM node:8-slim
WORKDIR /app
COPY --from=build /build/node_modules /app/node_modules
COPY . /app/
CMD [ "node", "/app/server.js" ]
Other things
Make sure your build host has SSDs and a good internet connection, because there will be times when you have to do a full rebuild, so the quicker it is, the better. AWS usually works well because the packages and images you are pulling and pushing are probably hosted on AWS as well. AWS also provides an image registry service (ECR) for only the cost of storage.

Jenkins Docker plugin is not showing the Tag-on-completion checkbox

I am using the Jenkins Docker Plugin.
I am supposed to see the Tag-on-completion checkbox so that the images will be retained in Docker.
Reference: https://wiki.jenkins-ci.org/display/JENKINS/Docker+Plugin
However, in my Jenkins I do not see the Tag-on-completion checkbox.
The wiki document is outdated; the Tag-on-completion checkbox has been moved into the job configuration.
The options are now called:
Commit on successful build [x]
Clean local images []
See source code config.jelly
