I am new to Docker and I have a question that I can't seem to find the answer to.
I am using a Docker image (consol/tomcat-7.0) and have written a Dockerfile that builds on this image, copying in my war files and a server.xml with unique database connection details and a default host.
If I am running many containers from this image, what is the proper way to have each one use the same war files but connect to a different database and have a different URL in server.xml?
I am currently rebuilding the image with different details each time I want a new instance, and this seems wasteful.
So each time I want a new instance, I run 'build' using this Dockerfile:
FROM consol/tomcat-7.0:latest
MAINTAINER xxx
LABEL version="1.0"
EXPOSE 80 443
RUN mkdir /vhost/
# my war files - same on every instance
COPY FILES/ /vhost/
# my config file - different on each instance
COPY FILES/server.xml /opt/tomcat/conf/
And then run this new image.
What is the proper way of doing this?
The typical method for Docker containers is to pass configuration via environment variables.
Expanding on a solution that passes the port via the command line, server.xml needs to be modified so that it picks up properties from JAVA_OPTS.
For example, in server.xml:
<GlobalNamingResources>
  <Resource name="jdbc/Addresses"
            auth="Container"
            type="javax.sql.DataSource"
            username="auser"
            password="Secret"
            driverClassName="com.mysql.jdbc.Driver"
            description="Global Address Database"
            url="${jdbc.url}" />
</GlobalNamingResources>
Then you can pass the value of ${jdbc.url} as a system property on the command line:
JAVA_OPTS="-Djdbc.url=jdbc:mysql://mysqlhost:3306/"
When running the Docker image, use the -e flag to set this environment variable at run time:
$ docker run -it -e "JAVA_OPTS=-Djdbc.url=jdbc:mysql://mysqlhost:3306/" --rm myjavadockerimage /opt/tomcat/bin/deploy-and-run.sh
Optionally, also add --add-host if you need to map mysqlhost to a specific IP address.
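For instance, a sketch assuming mysqlhost should resolve to 192.168.1.10 (a placeholder address); --add-host writes the mapping into the container's /etc/hosts:
$ docker run -it \
    -e "JAVA_OPTS=-Djdbc.url=jdbc:mysql://mysqlhost:3306/" \
    --add-host mysqlhost:192.168.1.10 \
    --rm myjavadockerimage /opt/tomcat/bin/deploy-and-run.sh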
There are at least two options I can think of:
If server.xml supports environment variables, you could pass database connection details to the container via --env or even --env-file. Note that this has certain security implications.
Another option would be to mount server.xml for a particular instance into the container via --volume.
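For example, a sketch with an illustrative host path, reusing the image name from above:
$ docker run -d \
    -v /srv/instance1/server.xml:/opt/tomcat/conf/server.xml:ro \
    myjavadockerimage
Each instance then gets its own server.xml from the host while sharing the same image and war files.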
Background
I am writing a .NET 5 application and using .NET user secrets for my secret keys (database connections & passwords).
Recently I decided to learn Docker and update my application to work with it, so using Visual Studio I generated a Dockerfile for my API project and then created a docker-compose file that includes the API project and the database (and some more things irrelevant to this question).
Almost everything works well. Technically, I can hard-code the secrets and the application will work.
I have several secrets and most of them work fine. For example, the database connection secret works well: in the C# code I call the following and it gets the value from the .NET user secrets:
config.GetConnectionString("Default");
Code Details
I have a secret key that contains a SQL password for the sa user.
dotnet user-secrets set "SA_PASSWORD" "<MySecretPassword>"
Then I have the docker-compose file, which targets Linux containers; this is the relevant part:
sql_in_dc:
  build:
    context: .
    dockerfile: items/sql/sql.Dockerfile
  restart: always
  ports:
    - "1440:1433"
  environment:
    - ACCEPT_EULA=Y
    - SA_PASSWORD=$SA_PASSWORD
    - ASPNETCORE_ENVIRONMENT=Development
    - USER_SECRETS_ID=80a155b1-fb7a-44de-8788-4f5759c60ff6
  volumes:
    - $APPDATA/Microsoft/UserSecrets/$USER_SECRETS_ID:/root/.microsoft/usersecrets/$USER_SECRETS_ID
    - $HOME/.microsoft/usersecrets/$USER_SECRETS_ID:/root/.microsoft/usersecrets/$USER_SECRETS_ID
As you can see, it uses sql.Dockerfile, which is:
FROM mcr.microsoft.com/mssql/server
ARG PROJECT_DIR=/tmp/devdatabase
RUN mkdir -p $PROJECT_DIR
WORKDIR $PROJECT_DIR
COPY items/sql/InitializeDatabase.sql ./
COPY items/sql/wait-for-it.sh ./
COPY items/sql/entrypoint.sh ./
COPY items/sql/setup.sh ./
CMD ["/bin/bash", "entrypoint.sh"]
Then the setup.sh is:
# Wait for SQL Server to be started and then run the sql script
./wait-for-it.sh sql_in_dc:1433 --timeout=0 --strict -- sleep 5s && \
/opt/mssql-tools/bin/sqlcmd -S localhost -i InitializeDatabase.sql -U sa -P "$SA_PASSWORD"
The Problem
The setup.sh file doesn't recognize the $SA_PASSWORD environment variable when it comes from the secrets file.
It works well if I change the docker-compose.yml file to:
- SA_PASSWORD=SomePassword
Notes
I searched Google and tried many things, but couldn't find anything that matches my case exactly.
I know it is possible to use Docker Swarm for the secrets, but for now I want to do it without it. I am still learning and prefer that the code work well first; the next step will be to use Docker Swarm / Kubernetes / etc.
I would be happy to know if there is a fast solution even if it is not the ideal one. Later I will improve it and use better techniques.
I have included the code I think is relevant to the case, but if you need any more details, let me know and I will add them.
I have it in GitHub in a public repository, in a pushed branch; if you want, I can share the code with you.
Really big thanks in advance!
The docker-compose.yml is evaluated on your host OS (so it can use OS environment variables, variables from a .env file, values from the compose file itself, ...).
The running container has its own set of environment variables; in your case that means the running container has no SA_PASSWORD variable.
Your use case would work if you had set the SA_PASSWORD variable on your host OS.
You can check which variables are set in your container with the following (if your image comes with bash):
docker exec -it [container id] bash
printenv
The dotnet user-secrets environment variables are created implicitly at runtime by Visual Studio (see the entry in the project file).
So, as you mentioned, the compose file can't recognize dotnet user secrets.
You can use one of the following instead (see the sketch after this list):
a *.env file
passing it to the compose command with -e
plain text in the compose yml: - SA_PASSWORD=thepassword
a host OS variable
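A minimal sketch of the .env option (the password value is a placeholder):
# .env - placed next to docker-compose.yml; docker-compose picks it up automatically
SA_PASSWORD=SomePassword
With that file in place, the existing - SA_PASSWORD=$SA_PASSWORD line in the compose file is substituted at compose time; running docker-compose config shows the resolved values.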
Keep in mind that Visual Studio adds some magic when running or debugging your Docker container. See Visual Studio container volume mapping: for ASP.NET Core web apps, there might be two additional folders for the SSL certificate and the user secrets, which is explained in more detail in the next section.
I am running a ReactJS app in a Docker container, together with a mock API and the UI. I am running those inside a single Docker container as two separate processes. However, in the .env file of the ReactJS app the environment variables are mapped to localhost, like below:
REACT_APP_MOCK_API_URL="http://localhost:8080/API"
REACT_APP_MOCK_API_URL_AUTH="http://localhost:8080/API/AUTH"
REACT_APP_MOCK_API_URL_PRESENTATION="http://localhost:8080/API/PRESENTATION"
Since the Docker container's IP is dynamic, I need to override these with the IP that the container gets at run time.
Is there a way to do this inside the Dockerfile?
PS: I tried assigning a static IP for these environment variables inside the Dockerfile and it works. However, I am not sure how to get the IP dynamically and pass it inside the Dockerfile itself.
Please help.
Thanks.
That's intrinsically not something you can directly set up inside the Dockerfile. You usually don't care about the container-internal IP addresses at all: from other containers you should use Docker's internal DNS service, and from outside a container you can access published ports (docker run -p option) via the host's IP address.
In many cases you can glean enough information from HTTP headers to construct valid links within an application. You might be able to set these variables to just e.g. REACT_APP_MOCK_API_URL="/API"; if that's interpreted relative to some other URL in the application then it will inherit the correct host name.
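For instance, if the UI and the mock API are served from the same origin (an assumption about your setup), the .env could use relative paths throughout:
REACT_APP_MOCK_API_URL="/API"
REACT_APP_MOCK_API_URL_AUTH="/API/AUTH"
REACT_APP_MOCK_API_URL_PRESENTATION="/API/PRESENTATION"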
If none of this works, you can use an entrypoint script to set these variables. This might look something like:
#!/bin/sh
if [ -n "$URL_PREFIX" ]; then
  # Set these three variables, if they're not already set
  : ${REACT_APP_MOCK_API_URL:="${URL_PREFIX}/API"}
  : ${REACT_APP_MOCK_API_URL_AUTH:="${URL_PREFIX}/API/AUTH"}
  : ${REACT_APP_MOCK_API_URL_PRESENTATION:="${URL_PREFIX}/API/PRESENTATION"}
  # Export them to other processes
  export REACT_APP_MOCK_API_URL REACT_APP_MOCK_API_URL_AUTH
  export REACT_APP_MOCK_API_URL_PRESENTATION
fi
# Launch the main container command
exec "$@"
In your Dockerfile, you'd COPY this script in and run it as the ENTRYPOINT:
...
COPY docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD [...]
Then when you finally run the container, you can dynamically inject the URL prefix, including whatever port you choose:
docker run -e URL_PREFIX="http://$(hostname):3456" -p 3456:8080 ...
The entrypoint script will set the other variables based on the URL_PREFIX variable, then run whatever command was set as the CMD in the Dockerfile or was named on the docker run command line. (If you docker run -it ... sh, the entrypoint will run and as its last step launch the interactive shell, which is useful for debugging.)
I've created a Docker container that contains an MSSQL database. On the command line, ip a gives an IP address for the container; however, trying to ssh into it with username@docker_ip_address yields ssh: connect to host ip_address port 22: Connection refused. So I'm wondering if I am even able to ssh into the container, so I don't always have to use docker exec ...., and if so, how I would go about doing that?
To ssh into a container, all of the following must be fulfilled (a sketch follows this list):
An SSH server (OpenSSH) must be installed within the container, and the ssh service must be running.
Port 22 must be published from the container when you run it (more info here: Publish ports on Docker).
The docker ps command should then display port 22 as mapped.
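As a hedged sketch (the host port 2222 and the image name my-mssql-sshd are illustrative):
$ docker run -d -p 2222:22 my-mssql-sshd
$ ssh -p 2222 username@localhost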
Hope the above information helps you understand the situation.
If your container contains a database server, the normal way to interact with will be through an SQL client that connects to it; Google suggests SQL Server Management Studio and that connector libraries exist for popular languages. I'm not clear what you would do given a shell in the container, and my main recommendation here would be to focus on working with the server in the normal way.
Docker containers normally run a single process, and that's normally the main server process. In this case, the container runs only SQL Server. As some other answers here suggest, you'd need to significantly rearchitect the container to even have it be possible to run an ssh daemon, at which point you need to worry about a bunch of other things like ssh host keys and user accounts and passwords that a typical Docker image doesn't think about at all.
Also note that the Docker-internal IP address (what you got from ip addr; what docker inspect might tell you) is essentially useless. There are always better ways to reach a container (using inter-container DNS to communicate between containers; using the host's IP address or DNS name to reach published ports from the same or other hosts).
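If what you actually need is an occasional shell or SQL prompt, docker exec already covers it with no ssh setup at all; a sketch assuming a container named my-mssql and the sqlcmd tool bundled with the official image:
$ docker exec -it my-mssql bash
$ docker exec -it my-mssql /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P 'YourPassword'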
Basically, alter your Dockerfile to something like the following - it will install openssh-server, replace a prohibitive default config, and start the service:
# FROM a-image-with-mssql
RUN echo "root:toor" | chpasswd
RUN apt-get update
RUN apt-get install -y openssh-server
COPY entrypoint.sh .
RUN cd /;wget https://gist.githubusercontent.com/spekulant/e04521d6c6e1ccffbd3455c673518c5b/raw/1e4f6f2cb32caf3a4a9f73b02efdcbd5dde4ba7a/sshd_config
RUN rm /etc/ssh/sshd_config; cp sshd_config /etc/ssh/
ENTRYPOINT ["./entrypoint.sh"]
# further commands
Now you've got yourself an image with an SSH server inside. All you have to do is start the service; you can't do RUN service ssh start because it won't work (a Docker specific - refer to the documentation). You have to use an entrypoint like the following:
#!/bin/bash
set -e
sh -c 'service ssh start'
exec "$@"
Put it in a file entrypoint.sh next to your Dockerfile (remember to chmod 755 entrypoint.sh). One thing to mention here: you still wouldn't be able to ssh into the container - the default SSH server configuration doesn't allow logging into the root account using a password. So you either change the config yourself and provide it to the image, or you can trust me and use the file I created - inspect it with the link from the Dockerfile - nothing malicious there, only a change from prohibit-password to yes.
Fortunately for us, the official MSSQL images are based on Ubuntu, so all the commands above fit perfectly into the environment.
Edit
Be sure to ask if something is unclear or I'm jumping too fast.
I'm new to NiFi and I want to connect a SQL Server database to NiFi and create a data flow with the processors. How can I do this? Can anyone help me with this clearly?
Thanks in Advance
sam
Here are two great articles on getting information in and out of databases with NiFi:
http://www.batchiq.com/database-injest-with-nifi.html
http://www.batchiq.com/database-extract-with-nifi.html
They describe/illustrate how to configure a DBCPConnectionPool service to provide connection(s) to an RDBMS, and example flows to extract data and ingest data.
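As a rough sketch of what the DBCPConnectionPool settings look like for SQL Server (all values are illustrative, and the driver location assumes you have placed the jar there yourself):
Database Connection URL: jdbc:sqlserver://192.168.1.201:1433;databaseName=mydb
Database Driver Class Name: com.microsoft.sqlserver.jdbc.SQLServerDriver
Database Driver Location(s): file:///usr/lib/jvm/jre/lib/jdbc/mssql-jdbc-6.2.2.jre8.jar
Database User: sa
Password: <your password>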
Expanding on mattyb's answer
If you are using the latest Hortonworks sandbox, or other setup that uses docker containers, read below.
You have to install the JDBC jar file inside the Docker container. For SQL Server, it should be version 6.2 or above.
docker ps
docker exec -it <mycontainer uuid> bash
The question "How do I get into a Docker container's shell?" will help you log into the container.
cd /usr/lib/jvm/jre/lib/
mkdir jdbc
cd ./jdbc
wget https://download.microsoft.com/download/3/F/7/3F74A9B9-C5F0-43EA-A721-07DA590FD186/sqljdbc_6.2.2.0_enu.tar.gz
tar xvzf sqljdbc_6.2.2.0_enu.tar.gz
cp ./sqljdbc_6.2/enu/mssql-jdbc-6.2.2.jre8.jar ./
Then use these values in the DBCPConnectionPool service:
Database Connection URL: jdbc:sqlserver://192.168.1.201:1433;databaseName=[your database]
Database Driver Class Name: com.microsoft.sqlserver.jdbc.SQLServerDriver
You might need to replace the IP address with the IPv4 address of your host, found via ipconfig on Windows or ifconfig on Mac/Linux.
You may change file:///usr/lib/jvm/jre/lib/ to any path you desire.
Expanding on TamusJRoyce's answer
If you are running nifi via a docker image like apache/nifi or the aforementioned Hortonworks sandbox, the following should help you get the required driver on the image so that you don't need to exec into the container to do it manually.
See the comments below the Dockerfile.
FROM apache/nifi
USER root
RUN mkdir /lib/jdbc
WORKDIR /lib/jdbc
RUN wget https://download.microsoft.com/download/3/F/7/3F74A9B9-C5F0-43EA-A721-07DA590FD186/sqljdbc_6.2.2.0_enu.tar.gz
RUN tar xvzf sqljdbc_6.2.2.0_enu.tar.gz
RUN cp ./sqljdbc_6.2/enu/mssql-jdbc-6.2.2.jre8.jar ./
USER nifi
EXPOSE 8080 8443 10000 8000
WORKDIR ${NIFI_HOME}
ENTRYPOINT ["../scripts/start.sh"]
The above image uses apache/nifi as the base image. You can use any NiFi Docker image as a base if you would like.
You can specify any location for /lib/jdbc; just remember to use that path when referencing the driver, i.e. file:///lib/jdbc/mssql-jdbc-6.2.2.jre8.jar.
Lastly, switch back to the nifi user and finish off with the standard nifi image details. This will allow the image to run correctly.
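To build and run the resulting image (the tag name is illustrative):
$ docker build -t my-nifi-mssql .
$ docker run -d -p 8080:8080 my-nifi-mssql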
I have a slight problem. I've been trying to get some persistent data with ArangoDB and Docker, and I passed the argument to Docker that attaches a host directory to a path within the container. It's all fine to this point, but I'm stuck on the enigma of where this directory actually is.
1) This is a sample command which resembles mine:
docker run -e ARANGO_ROOT_PASSWORD='mypass' -p 80:8529 -d -v myhostfolder:/var/lib/arangodb3 arangodb
So, the problem is I can't find myhostfolder anywhere on my host machine which runs Docker. The data within it is persistent and I can access it, but only through the Docker container. I think the data is somewhere on my host machine; I've been trying to pass a couple of these "relative" folders and they all keep persistent data, so I doubt that the data is in the actual Docker container.
2) If I do something like this (providing an absolute path)
docker run -e ARANGO_ROOT_PASSWORD='mypass' -p 80:8529 -d -v /home/myhostfolder:/var/lib/arangodb3 arangodb
then I have no issues with locating the /home/myhostfolder.
So my question is, where on my OS X 10.12 system is myhostfolder from example 1)?
Thanks for your help!
The host-dir can either be an absolute path or a name value. If you supply an absolute path for the host-dir, Docker bind-mounts to the path you specify. If you supply a name, Docker creates a named volume by that name.
Refer to https://docs.docker.com/engine/tutorials/dockervolumes/#mount-a-host-directory-as-a-data-volume.
In your case, since myhostfolder is a name, Docker creates a named volume. Execute the command below, which lists the volumes; a volume named myhostfolder will be shown.
docker volume ls
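To see where the data actually lives, inspect the volume:
$ docker volume inspect myhostfolder
The Mountpoint field in the output (typically /var/lib/docker/volumes/myhostfolder/_data) is where Docker stores the data. Note that on OS X this path exists inside the Docker for Mac virtual machine, not directly on your Mac's filesystem, which is why you can't find it by browsing the host.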