Getting the container's IP inside a Dockerfile - reactjs

I am running a ReactJS app in a Docker container, together with a mock API; both run inside a single container as two separate processes. However, in the .env file of the ReactJS app the environment variables are mapped to localhost, like below:
REACT_APP_MOCK_API_URL="http://localhost:8080/API"
REACT_APP_MOCK_API_URL_AUTH="http://localhost:8080/API/AUTH"
REACT_APP_MOCK_API_URL_PRESENTATION="http://localhost:8080/API/PRESENTATION"
Since the Docker container's IP is dynamic, I need to override these with the IP the container gets at run time.
Is there a way to do this inside the Dockerfile?
PS: I tried assigning a static IP to these environment variables inside the Dockerfile and it works. However, I am not sure how to get the IP dynamically and pass it inside the Dockerfile itself.
Please help.
Thanks.

That's intrinsically not something you can directly set up inside the Dockerfile. You usually don't care about the container-internal IP addresses at all: from other containers you should use Docker's internal DNS service, and from outside a container you can access published ports (docker run -p option) via the host's IP address.
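For example, a minimal sketch of the inter-container DNS approach (the network, image, and container names here are illustrative): containers on the same user-defined network can reach each other by container name.
docker network create mynet
docker run -d --network mynet --name mock-api my-mock-api-image
docker run -d --network mynet -e REACT_APP_MOCK_API_URL="http://mock-api:8080/API" my-ui-image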
In many cases you can glean enough information from HTTP headers to construct valid links within an application. You might be able to set these variables to just e.g. REACT_APP_MOCK_API_URL="/API"; if that's interpreted relative to some other URL in the application then it will inherit the correct host name.
If none of this works, you can use an entrypoint script to set these variables. This might look something like:
#!/bin/sh
if [ -n "$URL_PREFIX" ]; then
  # Set these three variables, if they're not already set
  : ${REACT_APP_MOCK_API_URL:="${URL_PREFIX}/API"}
  : ${REACT_APP_MOCK_API_URL_AUTH:="${URL_PREFIX}/API/AUTH"}
  : ${REACT_APP_MOCK_API_URL_PRESENTATION:="${URL_PREFIX}/API/PRESENTATION"}
  # Export them to other processes
  export REACT_APP_MOCK_API_URL REACT_APP_MOCK_API_URL_AUTH
  export REACT_APP_MOCK_API_URL_PRESENTATION
fi
# Launch the main container command
exec "$@"
In your Dockerfile you'd COPY this script in and run it as the ENTRYPOINT:
...
COPY docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD [...]
Then when you finally run the container, you can dynamically inject the URL prefix, including whatever port you choose:
docker run -e URL_PREFIX="http://$(hostname):3456" -p 3456:8080 ...
The entrypoint script will set the other variables based on the URL_PREFIX variable, then run whatever command was set as the CMD in the Dockerfile or was named on the docker run command line. (If you docker run -it ... sh, the entrypoint will run and as its last step launch the interactive shell, which is useful for debugging.)
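To check that the script did its job, you can override the CMD with an interactive shell as described above (a quick sketch; the image name is illustrative):
docker run --rm -it -e URL_PREFIX="http://example.com:3456" my-react-image sh
# then, inside the container:
echo "$REACT_APP_MOCK_API_URL"    # should print http://example.com:3456/API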

Related

React environment variables inside docker container not working

I have an issue where, inside the Docker container of my React app, my env variables are not working (I get undefined).
My Dockerfile:
FROM <my nginx image>
COPY build/. /usr/share/nginx/html
COPY config/nginx.conf /etc/nginx/nginx.conf
EXPOSE 8080 80
My .env file (in the root of the project):
REACT_APP_VAR=HELLO
And in my code, I access that env variable through process.env.REACT_APP_VAR.
However, when I run docker exec client env on my production Linux server, I do see all the env variables, including REACT_APP_VAR, PATH, HOSTNAME, etc.
Importantly, this issue only happens in Docker (on the prod server); on my Windows development machine (without Docker) it works fine.
Also, I can't add ENV inside my Dockerfile, and I would rather not use Docker Compose YAML files.

Why does cookiecutter-django not set DATABASE_URL and CELERY_BROKER_URL during entrypoint execution?

cookiecutter-django does not set the DATABASE_URL and CELERY_BROKER_URL environment variables during "entrypoint" file execution in the local development environment.
After I manually exported DATABASE_URL and CELERY_BROKER_URL, they appeared among the environment variables. Why is that?
By manually I mean I got a shell inside the Docker container and wrote:
export DATABASE_URL="postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}"
and
export CELERY_BROKER_URL="${REDIS_URL}"
The export command is used to pass variables to child processes: an exported variable is included in the environment of every child process, without affecting other environments.
Even when setting it manually as you did, this only applies to the current shell session. Exit and start another session, and you will see the variable has disappeared.
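A quick demonstration of that scoping (the container name app is illustrative):
docker exec -it app sh -c 'export DATABASE_URL=demo; echo "$DATABASE_URL"'   # prints demo
docker exec -it app sh -c 'echo "$DATABASE_URL"'                             # prints nothing: a new session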
You can pass environment variables to your containers with the -e flag:
docker exec -it -e ENV_NAME='my_var' container_name sh
Or to set a variable globally, set it in docker compose:
app:
  image: myimage:latest
  environment:
    APP_ENV: my_env
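As for the original question: an entrypoint only sets variables for the process it launches with exec; a docker exec shell bypasses the entrypoint entirely, which is why the variables appear to be missing there. A minimal sketch of such an entrypoint (mirroring the idea, not quoting cookiecutter-django's actual file):
#!/bin/sh
# Derive the URLs for the main process only; docker exec sessions won't see these.
export DATABASE_URL="postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}"
export CELERY_BROKER_URL="${REDIS_URL}"
exec "$@"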

Connect to docker sqlserver via ssh

I've created a Docker container that contains an MSSQL database. On the command line, ip a gives an IP address for the container; however, trying to ssh into it with username@docker_ip_address yields ssh: connect to host ip_address port 22: Connection refused. So I'm wondering whether I can even ssh into the container, so that I don't always have to use docker exec ..., and if so, how would I go about doing that?
To ssh into a container you need to fulfil the following:
An SSH server (OpenSSH) should be installed within the container and the ssh service should be running
Port 22 should be published from the container (when you run the container); more info here > Publish ports on Docker
The docker ps command should display port 22 as mapped
Hope the above information helps you to understand the situation.
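A quick sketch of those three points together, assuming an image that already runs sshd (see the last answer below for how to build one; names and ports are illustrative):
docker run -d -p 2222:22 --name mssql-ssh my-mssql-ssh-image
docker ps --filter name=mssql-ssh    # should show 0.0.0.0:2222->22/tcp
ssh -p 2222 root@localhost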
If your container contains a database server, the normal way to interact with it will be through an SQL client that connects to it; Google suggests SQL Server Management Studio, and connector libraries exist for popular languages. I'm not clear what you would do given a shell in the container, and my main recommendation here would be to focus on working with the server in the normal way.
Docker containers normally run a single process, and that's normally the main server process. In this case, the container runs only SQL Server. As some other answers here suggest, you'd need to significantly rearchitect the container to even have it be possible to run an ssh daemon, at which point you need to worry about a bunch of other things like ssh host keys and user accounts and passwords that a typical Docker image doesn't think about at all.
Also note that the Docker-internal IP address (what you got from ip addr; what docker inspect might tell you) is essentially useless. There are always better ways to reach a container (using inter-container DNS to communicate between containers; using the host's IP address or DNS name to reach published ports from the same or other hosts).
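For completeness, a sketch of the published-port approach (the image tag, password, and client tool are illustrative assumptions):
docker run -d --name mssql -e ACCEPT_EULA=Y -e SA_PASSWORD='Secret123!' \
  -p 1433:1433 mcr.microsoft.com/mssql/server:2019-latest
sqlcmd -S localhost,1433 -U sa -P 'Secret123!'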
Basically, alter your Dockerfile to something like the following - that will install openssh-server, alter the prohibitive default config, and start the service:
# FROM a-image-with-mssql
RUN echo "root:toor" | chpasswd
RUN apt-get update && apt-get install -y openssh-server
COPY entrypoint.sh .
RUN cd / && wget https://gist.githubusercontent.com/spekulant/e04521d6c6e1ccffbd3455c673518c5b/raw/1e4f6f2cb32caf3a4a9f73b02efdcbd5dde4ba7a/sshd_config
RUN rm /etc/ssh/sshd_config && cp sshd_config /etc/ssh/
ENTRYPOINT ["./entrypoint.sh"]
# further commands
Now you've got yourself an image with an SSH server inside; all you have to do is start the service. You can't do RUN service ssh start because it won't work - a RUN step only executes at build time, and anything it starts is gone in the running container (refer to the documentation). You have to use an entrypoint like the following:
#!/bin/bash
set -e
service ssh start
exec "$@"
Put it in a file entrypoint.sh next to your Dockerfile - and remember to chmod 755 entrypoint.sh. There's one thing to mention here: you still wouldn't be able to ssh into the container - the default SSH server configuration doesn't allow login to the root account using a password. So you either change the config yourself and provide it to the image, or you can trust me and use the file I created - inspect it with the link from the Dockerfile - nothing malicious there, only a change from prohibit-password to yes.
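If you'd rather not download the gist, a hedged alternative is to patch the stock config in place with sed instead of replacing the whole file:
# enable root login with a password, whether the directive is commented out or not
RUN sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config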
Fortunately for us, the official MSSQL images are based on Ubuntu, so all the commands above fit perfectly into that environment.
Edit
Be sure to ask if something is unclear or I'm jumping too fast.

I can't locate my host directory, which I attached to Docker

I have a slight problem: I've been trying to get persistent data with ArangoDB and Docker. I passed the argument to Docker that attaches a host directory to a path within the container. It's all fine up to this point, but I'm stuck on the enigma of where the hell this directory is.
1) This is a sample command which resembles mine:
docker run -e ARANGO_ROOT_PASSWORD='mypass' -p 80:8529 -d -v myhostfolder:/var/lib/arangodb3 arangodb
So, the problem is I can't find myhostfolder anywhere on my host machine which runs Docker. The data within it is persistent and I can access it, but only through the Docker container. I think the data is somewhere on my host machine; I've been trying to pass a couple of these "relative" folders and they all keep persistent data, so I doubt the data is in the actual Docker container.
2) If I do something like this (providing an absolute path)
docker run -e ARANGO_ROOT_PASSWORD='mypass' -p 80:8529 -d -v /home/myhostfolder:/var/lib/arangodb3 arangodb
then I have no issues with locating the /home/myhostfolder.
So my question is, where on my OS X 10.12 machine is the myhostfolder from example 1)?
Thanks for your help!
The host-dir can either be an absolute path or a name value. If you supply an absolute path for the host-dir, Docker bind-mounts to the path you specify. If you supply a name, Docker creates a named volume by that name.
Refer https://docs.docker.com/engine/tutorials/dockervolumes/#mount-a-host-directory-as-a-data-volume.
In your case, since myhostfolder is a name, Docker creates a named volume. Execute the command below, which lists the volumes; a volume named myhostfolder will be shown.
docker volume ls
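To find out where Docker keeps that volume, you can inspect it. Note that on Docker for Mac the reported Mountpoint is a path inside Docker's Linux VM, not a directory you can open directly in the OS X Finder:
docker volume inspect myhostfolder    # the "Mountpoint" field shows where the data lives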

Passing Tomcat parameters to Docker

I am new to Docker and I have a question that I can't seem to find the answer to.
I am taking a Docker image (consol/tomcat-7.0) and have written a Dockerfile that loads this image, copies my war files, and copies a server.xml with unique database connection details and default host into a new image.
If I am running many containers from this image, what is the proper way to have each one use the same war files but connect to different databases and have different URLs in server.xml?
I am currently rebuilding the image from the Dockerfile with different details each time I want a new instance, and this seems wasteful.
So each time I want a new instance, I run 'build' using this Dockerfile:
FROM consol/tomcat-7.0:latest
MAINTAINER xxx
LABEL version="1.0"
EXPOSE 80 443
RUN mkdir /vhost/
# my war files - same on every instance
COPY FILES /vhost/ /vhost/
# my config file - different on each instance
COPY FILES/server.xml /opt/tomcat/conf/
And then run this new image.
What is the proper way of doing this?
The typical method for Docker containers is to pass configuration via environment variables.
Expanding on a solution for passing the port via the command line: server.xml needs to be modified so it picks up properties from JAVA_OPTS.
For example, in server.xml:
<GlobalNamingResources>
  <Resource name="jdbc/Addresses"
            auth="Container"
            type="javax.sql.DataSource"
            username="auser"
            password="Secret"
            driverClassName="com.mysql.jdbc.Driver"
            description="Global Address Database"
            url="${jdbc.url}" />
</GlobalNamingResources>
Then you can pass the value of ${jdbc.url} as a system property on the command line:
JAVA_OPTS="-Djdbc.url=jdbc:mysql://mysqlhost:3306/"
When running the Docker image, you use the -e flag to set this environment variable at run time:
$ docker run -it -e "JAVA_OPTS=-Djdbc.url=jdbc:mysql://mysqlhost:3306/" --rm myjavadockerimage /opt/tomcat/bin/deploy-and-run.sh
Optionally, also add --add-host if you need to map mysqlhost to a specific IP address.
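For example (the IP address is illustrative):
docker run -it --add-host mysqlhost:192.0.2.10 \
  -e "JAVA_OPTS=-Djdbc.url=jdbc:mysql://mysqlhost:3306/" \
  --rm myjavadockerimage /opt/tomcat/bin/deploy-and-run.sh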
There are at least two options I can think of:
If server.xml supports environment variables, you could pass database connection details to the container via --env or even --env-file. Note that this has certain security implications.
Another option would be to mount server.xml for a particular instance into the container via --volume.
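A sketch of that second option, reusing one image for two differently configured instances (paths are illustrative):
docker run -d -v "$PWD/instance1/server.xml:/opt/tomcat/conf/server.xml" myjavadockerimage
docker run -d -v "$PWD/instance2/server.xml:/opt/tomcat/conf/server.xml" myjavadockerimage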