Why does cookiecutter-django not set DATABASE_URL and CELERY_BROKER_URL during entrypoint execution?

cookiecutter-django does not set the DATABASE_URL and CELERY_BROKER_URL environment variables during "entrypoint" file execution in the local development environment.
After I manually exported DATABASE_URL and CELERY_BROKER_URL, they appeared in the environment. Why is that?
By manually, I mean I opened a shell inside the Docker container and ran:
export DATABASE_URL="postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}"
and
export CELERY_BROKER_URL="${REDIS_URL}"

The export command passes a variable to child processes: the variable is included in the environment of every process started from that shell, without affecting any other environment.
Even when you set it manually as you did, it only applies to that shell session. Exit and start another session, and you will see that the variable has disappeared.
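You can see this scoping for yourself with a quick sketch like the following, run inside the container (the variable name is just an example):
$ export MY_VAR=hello
$ sh -c 'echo $MY_VAR'                 # a child process inherits the exported variable
hello
$ exit                                 # leave this shell session
$ docker exec -it container_name sh    # start a fresh session
$ echo $MY_VAR                         # empty: the export only lived in the old session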
You can pass environment variables to your containers with the -e flag:
docker exec -it -e ENV_NAME='my_var' container_name sh
Or, to set a variable for the container's main process and every exec session, set it in your docker compose file:
app:
  image: myimage:latest
  environment:
    APP_ENV: my_env
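In a cookiecutter-django project the same idea is usually expressed with env_file entries rather than inline environment keys. A sketch, assuming the .envs layout that recent cookiecutter-django versions generate (check the paths in your own local.yml):
django:
  build: .
  env_file:
    - ./.envs/.local/.django     # would hold CELERY_BROKER_URL etc.
    - ./.envs/.local/.postgres   # would hold the POSTGRES_* variables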

Related

React environment variables inside docker container not working

I have an issue where, inside the Docker container of my React app, my env variables are not working (I get undefined).
My Dockerfile:
FROM <my nginx image>
COPY build/. /usr/share/nginx/html
COPY config/nginx.conf /etc/nginx/nginx.conf
EXPOSE 8080 80
My .env file (in the root of the project):
REACT_APP_VAR=HELLO
And in my code, I access that env variable through process.env.REACT_APP_VAR.
However, when I run docker exec client env on my production Linux server, I do see all the env variables, including REACT_APP_VAR, PATH, HOSTNAME, etc.
Importantly, this issue only occurs in Docker (on the prod server); on my Windows development machine it works fine (without Docker).
Also, I can't add ENV inside my Dockerfile, and I would rather not use the docker compose YAML files.

Mongodb running on Docker is wiping the collection after restart

I have to build a small application that reads data from MongoDB running in Docker and uses it for further processing.
The problem is that after I stop the container, the local instance of the database is deleted as well. How can I stop that?
The MONGODB_URI is mongodb://localhost:27017. What options should I add to the docker command to avoid this? Should I avoid using localhost? docker-compose seems confusing to me, so I use a Dockerfile.
So what exactly should the docker run command be to avoid it? Is it one of these?
docker run -d --name mongo-on-docker -p 27017:27017 mongo
docker run -d --name sample --link mongo-on-docker web app
Also, which data directory should I use to persist the data?
Docker containers are ephemeral: whatever is written to a container's filesystem is gone once the container is removed. To persist data, you should mount a named volume, a host folder, or a file into the container.
In MongoDB's case, try:
docker run -d --name mongo-on-docker -p 27017:27017 -v mongo_data:/data/db mongo
Here mongo_data is a named volume, a special Docker entity that is mounted as a folder inside the container and can even be mounted into several containers at the same time.
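To convince yourself the data survives, a quick sketch (assuming mongosh is installed on the host; the test database and collection names are just examples):
$ docker run -d --name mongo-on-docker -p 27017:27017 -v mongo_data:/data/db mongo
$ mongosh mongodb://localhost:27017/test --eval 'db.items.insertOne({x: 1})'
$ docker rm -f mongo-on-docker        # remove the container entirely...
$ docker run -d --name mongo-on-docker -p 27017:27017 -v mongo_data:/data/db mongo
$ mongosh mongodb://localhost:27017/test --eval 'db.items.find()'   # ...the document is still there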
This is not a new question; see:
How to set docker mongo data volume

Getting container's IP inside a dockerfile

I am running a React app in a Docker container, together with a mock API, as two separate processes inside the same container. However, in the React app's .env file the environment variables point to localhost, like below:
REACT_APP_MOCK_API_URL="http://localhost:8080/API"
REACT_APP_MOCK_API_URL_AUTH="http://localhost:8080/API/AUTH"
REACT_APP_MOCK_API_URL_PRESENTATION="http://localhost:8080/API/PRESENTATION"
Since the container's IP is dynamic, I need to override these with the IP that the container gets at run time.
Is there a way to do this inside the Dockerfile?
PS: I tried assigning a static IP to these environment variables inside the Dockerfile, and it works. However, I am not sure how to obtain the IP dynamically and pass it in the Dockerfile itself.
That's intrinsically not something you can directly set up inside the Dockerfile. You usually don't care about the container-internal IP addresses at all: from other containers you should use Docker's internal DNS service, and from outside a container you can access published ports (docker run -p option) via the host's IP address.
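For the container-to-container case, the internal DNS piece looks something like this sketch (the network and image names here are hypothetical):
docker network create appnet
docker run -d --name mock-api --network appnet my-mock-api-image
# any other container on appnet can now reach it by container name instead of IP:
docker run -d --name ui --network appnet \
  -e REACT_APP_MOCK_API_URL="http://mock-api:8080/API" my-ui-image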
In many cases you can glean enough information from HTTP headers to construct valid links within an application. You might be able to set these variables to just e.g. REACT_APP_MOCK_API_URL="/API"; if that's interpreted relative to some other URL in the application then it will inherit the correct host name.
If none of this works, you can use an entrypoint script to set these variables. This might look something like:
#!/bin/sh
if [ -n "$URL_PREFIX" ]; then
  # Set these three variables, if they're not already set
  : ${REACT_APP_MOCK_API_URL:="${URL_PREFIX}/API"}
  : ${REACT_APP_MOCK_API_URL_AUTH:="${URL_PREFIX}/API/AUTH"}
  : ${REACT_APP_MOCK_API_URL_PRESENTATION:="${URL_PREFIX}/API/PRESENTATION"}
  # Export them to other processes
  export REACT_APP_MOCK_API_URL REACT_APP_MOCK_API_URL_AUTH
  export REACT_APP_MOCK_API_URL_PRESENTATION
fi
# Launch the main container command
exec "$@"
In your Dockerfile you'd COPY this script in and run it as the ENTRYPOINT:
...
COPY docker-entrypoint.sh /
# ensure the script is executable (skip if it already is in your build context)
RUN chmod +x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD [...]
Then when you run the container, you can dynamically inject the URL prefix, including whatever port you choose:
docker run -e URL_PREFIX="http://$(hostname):3456" -p 3456:8080 ...
The entrypoint script will set the other variables based on the URL_PREFIX variable, then run whatever command was set as the CMD in the Dockerfile or was named on the docker run command line. (If you docker run -it ... sh, the entrypoint will run and as its last step launch the interactive shell, which is useful for debugging.)

Get an environment variable defined in the Linux bash shell into webpack

I have checked out tons of SO questions about "environment variables in webpack", e.g. using the DefinePlugin:
new webpack.DefinePlugin({'ENV': JSON.stringify('staging')})
but I cannot for the life of me find a way to inject an environment variable defined in the Linux bash shell, instead of using the hard-coded 'staging' string.
In my production and staging environments I have variables such as $ENV and $API_KEY defined, and I want to use their values in my webpack / ReactJS code.
Edit
I notice that if I run the webpack command from the CLI:
$ ENVIRONMENT=staging
$ node_modules/.bin/webpack -p
and my webpack.config.js defines
new webpack.DefinePlugin({'ENV': JSON.stringify(process.env.ENVIRONMENT)})
this does not work (ENV is undefined in my JS code).
However, if I put it all on one line, it works: ENVIRONMENT is then available in webpack.config.js:
$ ENVIRONMENT=staging node_modules/.bin/webpack -p
I would really like to make this work without having to define the ENVIRONMENT variable on the same line as the webpack command.
In Node.js you can read environment variables via the process.env object. In your case, use process.env.ENV and process.env.API_KEY to read $ENV and $API_KEY respectively (the $ is shell syntax, not part of the variable name).
Stupid me, I forgot to export the variable in the bash shell:
$ export ENVIRONMENT=staging
$ node_modules/.bin/webpack -p
works fine and, as pointed out, one can then access the variable as process.env.ENVIRONMENT inside webpack.config.js.
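The difference is easy to demonstrate with node itself; a sketch (any variable name works, assuming it is not already exported):
$ ENVIRONMENT=staging                                     # plain shell variable, not exported
$ node -e 'console.log(process.env.ENVIRONMENT)'          # prints: undefined
$ export ENVIRONMENT=staging                              # now part of child process environments
$ node -e 'console.log(process.env.ENVIRONMENT)'          # prints: staging
$ API_KEY=abc node -e 'console.log(process.env.API_KEY)'  # one-off: set for this command only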

Passing Tomcat parameters to Docker

I am new to Docker and I have a question that I can't seem to find the answer to.
I took a Docker image (consol/tomcat-7.0) and wrote a Dockerfile that loads this image, copies my WAR files, and copies in a server.xml with unique database connection details and default host, producing a new image.
If I am running many containers from this image, what is the proper way to have each one use the same WAR files but connect to different databases and have different URLs in server.xml?
I currently rebuild the image from the Dockerfile with different details each time I want a new instance, which seems wasteful.
So each time I want a new instance, I run 'build' using this Dockerfile:
FROM consol/tomcat-7.0:latest
MAINTAINER xxx
LABEL version="1.0"
EXPOSE 80 443
RUN mkdir /vhost/
# my war files - same on every instance
COPY FILES /vhost/ /vhost/
# my config file - different on each instance
COPY FILES/server.xml /opt/tomcat/conf/
And then run this new image.
What is the proper way of doing this?
The typical method for Docker containers is to pass configuration via environment variables.
Expanding on a solution that passes the port on the command line: server.xml needs to be modified so that it picks up properties supplied via JAVA_OPTS.
For example, in server.xml:
<GlobalNamingResources>
  <Resource name="jdbc/Addresses"
            auth="Container"
            type="javax.sql.DataSource"
            username="auser"
            password="Secret"
            driverClassName="com.mysql.jdbc.Driver"
            description="Global Address Database"
            url="${jdbc.url}" />
</GlobalNamingResources>
Then you can supply the value of ${jdbc.url} as a system property on the command line:
JAVA_OPTS="-Djdbc.url=jdbc:mysql://mysqlhost:3306/"
When running the Docker image, use the -e flag to set this environment variable at run time:
$ docker run -it -e "JAVA_OPTS=-Djdbc.url=jdbc:mysql://mysqlhost:3306/" --rm myjavadockerimage /opt/tomcat/bin/deploy-and-run.sh
Optionally, also add --add-host if you need to map mysqlhost to a specific IP address, as sketched below.
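A sketch of the --add-host variant (the IP address here is just an example):
$ docker run -it --rm \
    --add-host mysqlhost:192.168.1.50 \
    -e "JAVA_OPTS=-Djdbc.url=jdbc:mysql://mysqlhost:3306/" \
    myjavadockerimage /opt/tomcat/bin/deploy-and-run.sh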
There are at least two options I can think of:
If server.xml supports environment variables, you could pass database connection details to the container via --env or even --env-file. Note that this has certain security implications.
Another option would be to mount server.xml for a particular instance into the container via --volume.
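A sketch of that second option (the image name and host path are hypothetical):
$ docker run -d -p 8080:8080 \
    -v "$PWD/instance1/server.xml:/opt/tomcat/conf/server.xml:ro" \
    mytomcatimage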
