I am struggling with creating a user automatically in a dockerized Oracle DB 12c. I managed to build an instance of Oracle DB, but I would also like to create a user there just after initialization is complete (it takes about 10 minutes to be up and running).
My Dockerfile:
FROM example/database:12c
COPY entry_point.sh ./
ENV ORACLE_SID="ORCLCDB"
ENV ORACLE_PDB="ORCLCPDB1"
ENV ORACLE_PWD="example123"
ENTRYPOINT ./entry_point.sh
The entry_point.sh:
#!/bin/bash
exec $ORACLE_BASE/$RUN_FILE
sqlplus SYS/example123 as SYSDBA <<EOF <--- part that does not run at all
alter session set container=ORCLCPDB1;
create user EXUSER identified by example1234;
grant create session to EXUSER;
EOF
Docker commands:
docker image build -t oracle-db .
docker run --rm --name oracle-container -p 1521:1521 -p 5500:5500 -d oracle-db
After running those snippets the database comes up properly; I can log in and so on (e.g. create a user manually with exactly the same commands). The only issue is that entry_point.sh runs only exec $ORACLE_BASE/$RUN_FILE, and the remaining part where the admin logs in to the DB and creates a user is somehow omitted. I know exec replaces the current shell with the started process in this case, but when I remove it and run a standalone script nothing changes at all. I have tried many different approaches to automate creating a user just after the DB is up, but without success. Do you have any suggestions?
You can create an external file (named, for example, name_of_above_file.sql) with this content:
alter session set container=ORCLCPDB1;
create user EXUSER identified by example1234;
grant create session to EXUSER;
exit
Transfer it into the Docker image and then run the command this way:
sqlplus SYS/example123 as SYSDBA @name_of_above_file.sql
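Putting it together in the entrypoint, a minimal sketch (the polling loop, credentials, and file name are illustrative, and it assumes sqlplus is on the PATH inside the container) could look like this:
#!/bin/bash
# Start the image's own run script in the background instead of exec'ing it,
# so the lines below still get a chance to execute.
$ORACLE_BASE/$RUN_FILE &
# Poll until the database accepts SYSDBA logins (startup takes ~10 minutes).
until echo exit | sqlplus -L SYS/example123 AS SYSDBA > /dev/null 2>&1; do
  sleep 10
done
# Run the user-creation script, then keep the container attached to the DB process.
sqlplus SYS/example123 AS SYSDBA @name_of_above_file.sql
wait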
The following docker file creates a custom SQL server image with a database restored from a backup (rmsdev.bak).
FROM mcr.microsoft.com/mssql/server:2019-latest
ENV MSSQL_PID=Developer
ENV SA_PASSWORD=Password1?
ENV ACCEPT_EULA=Y
USER mssql
COPY rmsdev.bak /var/opt/mssql/backup/
# Launch SQL Server, confirm startup is complete, restore the database, then terminate SQL Server.
RUN ( /opt/mssql/bin/sqlservr & ) | grep -q "Service Broker manager has started" \
&& /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P $SA_PASSWORD -Q 'RESTORE DATABASE rmsdev FROM DISK = "/var/opt/mssql/backup/rmsdev.bak" WITH MOVE "rmsdev" to "/var/opt/mssql/data/rmsdev.mdf", MOVE "rmsdev_Log" to "/var/opt/mssql/data/rmsdev_log.ldf", NOUNLOAD, STATS = 5' \
&& pkill sqlservr
CMD ["/opt/mssql/bin/sqlservr"]
The issue is that, once the restore is complete, the backup file is not required anymore and I would like to remove it from the image.
Unfortunately, due to how Docker images are formed (layers), I cannot simply rm the file as I would like to.
A multi-stage Dockerfile is not as easily applicable here as it is in a typical compile-and-copy build scenario.
Another way would be to run the container, restore the backup and then commit a new image, but what I am looking to do is to use only docker build with the proper Dockerfile.
Does anyone know a way?
If you know where the data directory is in the image, and the image does not declare that directory as a VOLUME, then you can use a multi-stage build for this. The first stage would set up the data directory as you show. The second stage would copy the populated data directory from the first stage but not the backup file. This trick might depend on the two stages running identical builds of the underlying software.
For SQL Server, the Docker Hub page and GitHub repo are both tricky to find, and surprisingly neither addresses the issue of data storage (as @HansKillian notes in a comment, you would almost always want to store the database data in some sort of volume). The GitHub repo does include a Helm chart built around a Kubernetes StatefulSet, and from that we can discover that a data directory would be mounted on /var/opt/mssql.
So I might write a multi-stage build like so:
# Put common setup steps in an initial stage
FROM mcr.microsoft.com/mssql/server:2019-latest AS setup
ENV MSSQL_PID=Developer
# (weak password, easily extracted with `docker inspect`)
ENV SA_PASSWORD=Password1?
# (legally, the end user probably needs to accept this, not the image builder)
ENV ACCEPT_EULA=Y
# Have a stage specifically to populate the data directory
FROM setup AS data
# (copy-and-pasted from the question)
USER mssql
# not under /var/opt/mssql
COPY rmsdev.bak /
RUN ( /opt/mssql/bin/sqlservr & ) | grep -q "Service Broker manager has started" \
&& /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P $SA_PASSWORD -Q 'RESTORE DATABASE rmsdev FROM DISK = "/rmsdev.bak" WITH MOVE "rmsdev" to "/var/opt/mssql/data/rmsdev.mdf", MOVE "rmsdev_Log" to "/var/opt/mssql/data/rmsdev_log.ldf", NOUNLOAD, STATS = 5' \
&& pkill sqlservr
# Final stage that will actually be run.
FROM setup
# Copy the prepopulated data tree, but not the backup file
COPY --from=data /var/opt/mssql /var/opt/mssql
# Use the default USER, CMD, etc. from the base SQL Server image
The standard Docker Hub open-source database images like mysql and postgres generally declare a VOLUME in their Dockerfile for the database data, which forces the data to be stored in a volume. Importantly, this means you can't set up data in the image like this; you have to populate the data externally and manage the populated data tree outside of the Docker image system.
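As a sketch of that external approach with the official postgres image (names and paths are illustrative): run it once with a bind mount so the data tree lands on the host, load your data, and stop it.
# Run the image once with a bind mount so the data tree lands on the host.
docker run -d --name seed -e POSTGRES_PASSWORD=secret \
  -v "$PWD/pgdata:/var/lib/postgresql/data" postgres
# ...load data here, e.g. with psql or the image's init scripts...
docker stop seed && docker rm seed
# ./pgdata now holds the populated data tree, kept outside the image system.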
I installed oracle db version 12c in my docker environment.
I used the following command:
docker run -d --name oracle -p 8080:8080 -p 1521:1521 quay.io/maksymbilenko/oracle-12c
I connected to the DB and everything went well but I wanted to enable unified audit.
In order to do that, you must first shut down the database, and all the instructions I found say to use sqlplus as follows:
sqlplus / as sysoper
SQL> shutdown immediate
SQL> exit
I connected successfully to the DB using the next command:
docker exec -it oracle "bash"
and then I ran the sqlplus command and I received "command not found"
[root@f30cc670f85f /]# sqlplus / as sysoper
bash: sqlplus: command not found
Am I doing it wrong?
What should I do in order to have sqlplus on my oracle DB?
I looked for it and didn't find anything that helped me.
I have a Mac, if it's relevant.
I think that Docker image is just the database and enough of the OS to run the database. I don't think it includes client software such as SQL*Plus.
You need to have SQL*Plus installed on your Mac. If you haven't already, download the Oracle Instant Client for macOS, including the SQL*Plus extension. Or why not treat yourself and install the new-fangled SQLcl tool? It is easier to install and has all the SQL*Plus capabilities and a whole bunch more features. Find it here.
Whatever client you choose, once it's installed on your Mac you run it like any other app: when prompted for connection you give the string Maksym provides:
system/oracle@//localhost:1521/xe
If you need to connect as sys that would look like this:
sys/oracle@//localhost:1521/xe as sysdba
Sourcing the .bashrc should work to connect to sqlplus as sysdba.
docker-compose exec db bash -c "source /home/oracle/.bashrc; sqlplus sys/Oradoc_db1@ORCLCDB as sysdba;"
With this, you enter the container:
docker exec -it oracle /bin/bash
after that, you can use:
sqlplus sys as sysdba
When using the Docker image store/oracle/database-enterprise:12.2.0.1-slim, the sqlplus and sqlldr tools are only available after the container has started.
You can't do the following in a Dockerfile:
RUN sqlplus sys/password AS SYSDBA @create_database.sql
The container images can be configured to run scripts after setup and on startup. Currently sh and sql extensions are supported.
In your Dockerfile, copy the SQL script into the setup directory:
COPY create_database.sql /opt/oracle/scripts/setup/01_create_database.sql
The database will be created on first startup of the container.
I don't have any experience with docker, but it looks for all the world like you are getting to a bash environment, so there we are on solid ground. The returned error ("bash: sqlplus: command not found") simply means that the executable (sqlplus) was not found in any directory listed in your PATH environment variable, as it exists within your shell environment. You actually need to set three variables: ORACLE_SID needs to be set to the value of your database name. ORACLE_HOME needs to be set to the value of the directory where your oracle binaries are installed. And PATH needs to have $ORACLE_HOME/bin added to it:
export PATH=$ORACLE_HOME/bin:$PATH
Obviously, since you are using the value of ORACLE_HOME in setting PATH, ORACLE_HOME needs to be set first.
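For example (the ORACLE_HOME path below is a guess at a typical 12c layout; substitute the actual values for your installation):
export ORACLE_SID=ORCLCDB                                    # your database name
export ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1   # illustrative path
export PATH=$ORACLE_HOME/bin:$PATH                           # ORACLE_HOME must be set first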
For Windows OS:
Type docker ps in command line to show running containers and check container id.
Type docker exec -it container_id //bin/bash
Log in via the sqlplus command
Or the simplest way
docker exec -it container_id bash -c "source /home/oracle/.bashrc; sqlplus sys/Oradoc_db1@ORCLCDB as sysdba;"
More info is here: https://hub.docker.com/u/cgmmathaw/content/sub-90f0c051-b514-4b7b-a0fe-fc9d6f2172fa
I'm creating a custom Docker image from the existing mcr.microsoft.com/mssql/server:2017-latest-ubuntu by adding some databases with Flyway and then creating backups of the databases within the container.
The idea is to start the Docker container and have it import all databases from the backup files it finds in a certain directory.
If my Docker entrypoint just starts sqlservr and afterwards I exec the shell scripts that restore the DBs from outside, everything works (except that sometimes SQL Server seems to be up, but really isn't).
According to https://learn.microsoft.com/en-us/sql/linux/sql-server-linux-configure-docker?view=sql-server-2017#customcontainer, I should start my shell scripts before sqlservr starts, since it's the foreground process which defines whether the container is up or not.
But it's a chicken/egg situation: How can I execute SQL commands within my shell scripts before SQL Server is running?
I'm aware that normally one would do this with a volume, but this is a test database, and I don't care what happens to the data once the test has finished.
Here's my Docker entrypoint script (I have some custom scripts in /opt/mssql-tools/bin/):
#!/bin/sh
if ! whoami > /dev/null 2>&1; then
if [ -w /etc/passwd ]; then
echo "${USER_NAME:-sqlservr}:x:$(id -u):0:${USER_NAME:-sqlservr} user:${HOME}:/sbin/nologin" >> /etc/passwd
fi
fi
/opt/mssql-tools/bin/restore-databases.sh & sqlservr
Thanks for any hints and suggestions what "the Docker way" should be.
Here is what I do on my side. It's not ideal, as it involves a sleep to wait for SQL Server to be up and running before executing the SQL, but it works.
I have the following folder structure:
Dockerfile
init/
database.sql
entrypoint.sh
setup.sh
My Dockerfile calls my entrypoint.sh as follows:
Dockerfile
FROM microsoft/mssql-server-linux:latest
# Project files
ARG PROJECT_DIR=/srv/db-sql-server
RUN mkdir -p $PROJECT_DIR
WORKDIR $PROJECT_DIR
COPY ./init/ ./
# Grant permissions for scripts to be executable
RUN chmod +x $PROJECT_DIR/entrypoint.sh
RUN chmod +x $PROJECT_DIR/setup.sh
CMD ["/bin/bash", "entrypoint.sh"]
My entrypoint.sh starts SQL Server, calls another custom shell script setup.sh to set up the database, and then keeps the container alive:
entrypoint.sh
#start SQL Server, start the script to create the DB and import the data, start the app
/opt/mssql/bin/sqlservr & ./setup.sh & sleep infinity & wait
My custom shell script setup.sh waits 30s for SQL Server to be up and running and then executes my SQL file database.sql to create the database / restore the data:
setup.sh
# Wait for SQL Server to be started
sleep 30s
# Run the setup script to create the database
/opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P $SA_PASSWORD -d master -i database.sql
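A more robust variant of setup.sh (a sketch; the timeout is arbitrary) polls the server instead of sleeping a fixed 30 seconds:
#!/bin/bash
# Poll SQL Server for up to ~60s until it accepts connections, then run the setup.
for i in $(seq 1 60); do
  /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "$SA_PASSWORD" -Q "SELECT 1" > /dev/null 2>&1 && break
  sleep 1
done
/opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "$SA_PASSWORD" -d master -i database.sql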
I bet sp_procoption could work in your case. This allows you to designate a stored procedure to run each time the instance starts and after all databases are mounted and the relational engine begins listening for commands.
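A sketch of that (the procedure name is hypothetical, and the procedure must live in the master database):
# Mark a stored procedure in master to run automatically at every instance startup.
/opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "$SA_PASSWORD" -d master \
  -Q "EXEC sp_procoption @ProcName = 'dbo.restore_databases', @OptionName = 'startup', @OptionValue = 'on'"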
I've created a Docker container that contains an MSSQL database. On the command line, ip a gives an IP address for the container; however, trying to ssh into it with username@docker_ip_address yields ssh: connect to host ip_address port 22: Connection refused. So I'm wondering whether I am even able to ssh into the container, so I don't have to always be using the docker exec ... tool, and if so, how would I go about doing that?
To ssh into a container, you should fulfil the following:
An SSH server (OpenSSH) should be installed within the container, and the ssh service should be running.
Port 22 should be published from the container (when you run it); more info here: Publish ports on Docker.
The docker ps command should display port 22 as mapped.
Hope the above information helps you understand the situation.
If your container contains a database server, the normal way to interact with will be through an SQL client that connects to it; Google suggests SQL Server Management Studio and that connector libraries exist for popular languages. I'm not clear what you would do given a shell in the container, and my main recommendation here would be to focus on working with the server in the normal way.
Docker containers normally run a single process, and that's normally the main server process. In this case, the container runs only SQL Server. As some other answers here suggest, you'd need to significantly rearchitect the container to even have it be possible to run an ssh daemon, at which point you need to worry about a bunch of other things like ssh host keys and user accounts and passwords that a typical Docker image doesn't think about at all.
Also note that the Docker-internal IP address (what you got from ip addr; what docker inspect might tell you) is essentially useless. There are always better ways to reach a container (using inter-container DNS to communicate between containers; using the host's IP address or DNS name to reach published ports from the same or other hosts).
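For example (connection details illustrative), you would publish the port and connect through the host rather than the container IP:
# Publish the SQL Server port when starting the container...
docker run -d --name mssql -p 1433:1433 \
  -e ACCEPT_EULA=Y -e SA_PASSWORD='Password1?' \
  mcr.microsoft.com/mssql/server:2019-latest
# ...then connect from the host (or another machine) via the published port.
sqlcmd -S localhost,1433 -U SA -P 'Password1?'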
Basically, alter your Dockerfile to something like the following; it will install openssh-server, replace the prohibitive default config, and start the service:
# FROM a-image-with-mssql
RUN echo "root:toor" | chpasswd
RUN apt-get update
RUN apt-get install -y openssh-server
COPY entrypoint.sh .
RUN cd /;wget https://gist.githubusercontent.com/spekulant/e04521d6c6e1ccffbd3455c673518c5b/raw/1e4f6f2cb32caf3a4a9f73b02efdcbd5dde4ba7a/sshd_config
RUN rm /etc/ssh/sshd_config; cp sshd_config /etc/ssh/
ENTRYPOINT ["./entrypoint.sh"]
# further commands
Now you've got yourself an image with an SSH server inside; all you have to do is start the service. You can't do RUN service ssh start because it won't work (Docker specifics; refer to the documentation). You have to use an entrypoint like the following:
#!/bin/bash
set -e
sh -c 'service ssh start'
exec "$#"
Put it in a file entrypoint.sh next to your Dockerfile; remember to chmod 755 entrypoint.sh. One thing to mention here: you still wouldn't be able to ssh into the container, because the default SSH server configuration doesn't allow logging into the root account with a password. So you either change the config yourself and provide it to the image, or you can trust me and use the file I created; inspect it with the link from the Dockerfile (nothing malicious there, only a change from prohibit-password to yes).
Fortunately for us - MSSQL official images start from Ubuntu so all the commands above fit perfectly into the environment.
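Once the image is built, usage would look roughly like this (image name and host port are illustrative):
# Publish the container's SSH port on the host and log in as root.
docker run -d -p 2222:22 --name mssql-ssh my-mssql-ssh-image
ssh root@localhost -p 2222   # password "toor", as set in the Dockerfile above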
Edit
Be sure to ask if something is unclear or I'm jumping too fast.
I am using the postgres Docker image in my project. For initialization, I am using the following command to create and init my database (tables, views, data, ...):
COPY sql_dump.sql /docker-entrypoint-initdb.d/
Is it possible to persist this data after the container is stopped and removed? For instance, when I run the postgres image, it should create the database with this data without loading the script on every container start; it should just load the data created on the first run.
I did some research and found the VOLUME command, but I don't know how to use it for my purpose; I am new to Docker. Thanks for any help. I am using Docker for Windows v18.
You can use Docker named volumes; more information can be found here.
This will create a named volume called postgres-data:
docker volume create postgres-data
Say this is your command to create the container:
docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
Change it to this:
docker run --name some-postgres -v postgres-data:/var/lib/postgresql/data -e POSTGRES_PASSWORD=mysecretpassword -d postgres
This mounts the postgres-data volume at /var/lib/postgresql/data (the directory the official image uses for its data). You can then initialize your DB, and when you stop and start the container it will contain the persisted data.
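To see the persistence in action (a quick sketch):
# Remove the container, then start a fresh one against the same named volume;
# the data initialized on the first run is still there.
docker rm -f some-postgres
docker run --name some-postgres -v postgres-data:/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=mysecretpassword -d postgres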
-HTH