Copy a Docker ARG into an AngularJS config file

I have a simple AngularJS application that is built through a Jenkins pipeline and a Dockerfile. When the Jenkins job runs, the environment is set, and the app is built for one of two environments: dev or integration. What I need is a way to get that variable into the Angular app.
The Dockerfile uses the environment to pull in different config settings, like:
ARG env
COPY build_config/${env} /opt/some/path...
I need to get that env into one of the controllers. Is there a way to copy env into a controller? I attempted something like the following:
COPY ${env} path/to/angular/file/controller
I have searched and tried different methods but cannot find a solution that works for the Jenkins-with-Docker pipeline.

You can just use RUN to write a string to a file:
RUN echo "$env" > path/to/angular/file/controller
If you want to append to the file instead of overwriting it, use
RUN echo "$env" >> path/to/angular/file/controller

Related

Docker command not able to pass a parameter at runtime to an appConfig.json file

Hi, I am new to Docker (version 19.03.8) and I have an AngularJS project (dummyPoject) which contains an appConfig.json file at the path dummyPoject\src\assets\conf\appConfig.json. The JSON file contains the following variable:
{
  "baseUrl": "MAPPED-URL"
}
I want to override the MAPPED-URL property with the value that I pass while executing the docker command.
Based on the online documentation, I found that it can be passed as an environment variable when running the docker command, as below:
docker run -e baseUrl=http://localhost:8081/dummyUrl/ -p 8000:8080 -d --name cms test:1.0
I was expecting MAPPED-URL to change to http://localhost:8081/dummyUrl/, but that is not the case.
Is there anything I am missing here?
By adding -e baseUrl=http://localhost:8081/dummyUrl/ to docker run you have successfully added an environment variable to your Docker container. But this value will not magically replace values in your appConfig.json file.
You will need some sort of script that extracts the baseUrl variable from the environment and replaces the value in the file. This could be done using a shell script that runs when the container starts and rewrites the line "baseUrl": "MAPPED-URL" using the environment variable you added.
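A minimal sketch of such a startup script, assuming the built app is served from /usr/share/nginx/html (a hypothetical web root; adjust to your image):
#!/bin/sh
# Replace the MAPPED-URL placeholder with the baseUrl environment variable.
sed -i "s|MAPPED-URL|${baseUrl}|g" /usr/share/nginx/html/assets/conf/appConfig.json
# Hand control to the main container command.
exec "$@"
COPY this script into the image and set it as the ENTRYPOINT so it runs before your web server starts.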
Update:
This question inspired me to create a small Node.js command-line package that should help solve your issue. The package is called replace-env.
You can add replace-env to your package.json dependencies. You can then run the command as part of your Dockerfile build process, or you can have it modify the file at runtime by customizing your CMD instruction.

Django Cookiecutter using environment variables pattern in production

I am trying to understand how to work with production .env files in a Django cookiecutter generated project.
The documentation for this is here:
https://cookiecutter-django.readthedocs.io/en/latest/developing-locally-docker.html#configuring-the-environment
The project is generated and creates .local and .production folders for environment variables.
I am attempting to deploy to a Docker droplet on DigitalOcean.
Is my understanding correct:
The .production folder is NEVER checked into source control and is only generated as an example of what to create on a production machine when I am ready to deploy?
So when I do deploy, as part of that process I need to do a pull/clone of the project on the Docker droplet and then either
manually create the .production folder with the production environment variables folder structure,
OR
run merge_production_dotenvs_in_dotenv.py locally to create a .env file that I copy onto production, and then configure my production.yml to use that?
The production env files are NOT checked into source control; only the local ones are. At least, that is the intent: production env files should not be in source control, as they contain secrets.
However, they are added to the Docker image by docker-compose when you run it. You may create a Docker machine using the DigitalOcean driver, activate it from your terminal, and start the image you've built by running docker-compose -f production.yml up -d.
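A sketch of that flow; the machine name is a placeholder, and DIGITALOCEAN_ACCESS_TOKEN is assumed to hold your DigitalOcean API token:
# Provision a droplet as a remote Docker host.
docker-machine create --driver digitalocean --digitalocean-access-token $DIGITALOCEAN_ACCESS_TOKEN production-host
# Point the local docker and docker-compose clients at the droplet.
eval $(docker-machine env production-host)
# Build and start the stack on it.
docker-compose -f production.yml up -d --build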
Django cookiecutter does add .envs/.production, and in fact everything in the .envs/ folder, into source control. You can verify this by checking the .gitignore file: it does not contain .envs, meaning the .envs/ folder is checked into source control.
So when you want to deploy, you clone/pull the repository into your server and your .production/ folder will be there too.
You can also run merge_production_dotenvs_in_dotenv.py to create a .env file, but the .env would not be checked into source control, so you would have to copy the file to your server. Then you can configure your docker-compose file to include path/to/your/project/.env as the env_file for any service that needs the environment variables in the file.
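The env_file wiring would look roughly like this fragment of production.yml ("django" is a placeholder service name):
services:
  django:
    env_file:
      - ./.env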
You can use scp to copy files from your local machine to your server easily like this:
scp /path/to/local/file username@domain-or-ipaddress:/path/to/destination

Getting container's IP inside a dockerfile

I am running a ReactJS app in a Docker container, and we are using a mock API and the UI. I am running those inside a single Docker container as 2 separate processes. However, in the .env file of the ReactJS app, the environment variables are mapped to localhost like below:
REACT_APP_MOCK_API_URL="http://localhost:8080/API"
REACT_APP_MOCK_API_URL_AUTH="http://localhost:8080/API/AUTH"
REACT_APP_MOCK_API_URL_PRESENTATION="http://localhost:8080/API/PRESENTATION"
Since the Docker container's IP is dynamic, I need to override these with the dynamic IP that the container gets at run time.
Is there a way to do this inside the Dockerfile?
PS: I tried assigning a static IP to these environment variables inside the Dockerfile and it works. However, I am not sure how to get the IP dynamically and pass it inside the Dockerfile itself.
That's intrinsically not something you can directly set up inside the Dockerfile. You usually don't care about the container-internal IP addresses at all: from other containers you should use Docker's internal DNS service, and from outside a container you can access published ports (docker run -p option) via the host's IP address.
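As an illustration of the DNS route (a sketch; the network, container, and image names are placeholders):
# Containers on a user-defined network resolve each other by name.
docker network create mynet
docker run -d --name mock-api --network mynet mock-api-image
# Inside the ui container, http://mock-api:8080/API now resolves via Docker's DNS.
docker run -d --name ui --network mynet -p 3000:3000 ui-image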
In many cases you can glean enough information from HTTP headers to construct valid links within an application. You might be able to set these variables to just e.g. REACT_APP_MOCK_API_URL="/API"; if that's interpreted relative to some other URL in the application then it will inherit the correct host name.
If none of this works, you can use an entrypoint script to set these variables. This might look something like:
#!/bin/sh
if [ -n "$URL_PREFIX" ]; then
  # Set these three variables, if they're not already set
  : ${REACT_APP_MOCK_API_URL:="${URL_PREFIX}/API"}
  : ${REACT_APP_MOCK_API_URL_AUTH:="${URL_PREFIX}/API/AUTH"}
  : ${REACT_APP_MOCK_API_URL_PRESENTATION:="${URL_PREFIX}/API/PRESENTATION"}
  # Export them to other processes
  export REACT_APP_MOCK_API_URL REACT_APP_MOCK_API_URL_AUTH
  export REACT_APP_MOCK_API_URL_PRESENTATION
fi
# Launch the main container command
exec "$@"
In your Dockerfile you'd COPY this script in and run it as the ENTRYPOINT:
...
COPY docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD [...]
Then when you finally run the container, you can dynamically inject the URL prefix, including whatever port you choose.
docker run -e URL_PREFIX="http://$(hostname):3456" -p 3456:8080 ...
The entrypoint script will set the other variables based on the URL_PREFIX variable, then run whatever command was set as the CMD in the Dockerfile or was named on the docker run command line. (If you docker run -it ... sh, the entrypoint will run and as its last step launch the interactive shell, which is useful for debugging.)

I'm new to command line. I get a lot of messages like 'command not found' and 'no such file or directory'

Trying to run gcloud init to initialize the Google App Engine SDK by typing ./google-cloud-sdk/bin/gcloud init, but it showed: no such file or directory or command not found. Is something wrong with my PATH? My path is:
/Users/AnneLutz/Documents/google-cloud-sdk
If you are typing ./google-cloud-sdk/bin/gcloud init and you installed the Cloud SDK in /Users/AnneLutz/Documents/google-cloud-sdk, then your current directory has to be /Users/AnneLutz/Documents for what you type to work.
That said, you should add /Users/AnneLutz/Documents/google-cloud-sdk/bin to your PATH. To do this, assuming you are using bash, you can run
source /Users/AnneLutz/Documents/google-cloud-sdk/path.bash.inc
To make this happen every time you start your shell, add it to your shell profile. For example, you can add the above source command at the end of your ~/.bash_profile file.
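For instance (a sketch; it assumes the install path above and bash as your login shell):
# Append the Cloud SDK setup line to the bash profile, then reload it.
echo 'source /Users/AnneLutz/Documents/google-cloud-sdk/path.bash.inc' >> ~/.bash_profile
source ~/.bash_profile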
It looks like you used the option to download the SDK zip file and are now trying to configure your environment for that download option. If you aren't comfortable with setting environment variables, you might instead try the "interactive" installer, which automates the steps that make the commands always available on your system.
The directions are here, but for macOS users they are basically:
Enter the following at a command prompt:
curl https://sdk.cloud.google.com | bash
Restart your shell:
exec -l $SHELL
Run gcloud init to initialize the gcloud environment:
gcloud init
For many, this procedure is easier than getting everything configured manually.

Windows batch file run php serve & gulp

Every time I start working on a Laravel project, I need to run php artisan serve to launch the virtual server, and then gulp, which runs the gulp file with browser-sync proxying the virtual server.
I am thinking of simplifying the process with a batch file. But the problem is that once it runs the first command, it stops there. How can I get the 2 commands called one after another with just a double-click on the batch file?
I don't know much about Windows batch scripting, so I hope this helps:
Write a batch script that opens TWO terminal windows; the first one will run php artisan serve and the second will run gulp.
You can't chain them like php artisan serve && gulp, because the first command never ends unless you hit Ctrl+C, so the second one would never start.
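A minimal sketch of such a batch file (run it from the project folder; the window titles are arbitrary):
@echo off
REM start opens a new window and returns immediately; cmd /k keeps
REM each window running after its command is launched.
start "artisan" cmd /k php artisan serve
start "gulp" cmd /k gulp
Because each start call returns right away, both long-running processes get their own window.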
