How can I drop into a bash shell with docker-compose? - sql-server

Intention:
I am trying to figure out how to use the command docker-compose up --build to build a machine with an Apache/PHP 7.1 service acting as a web server and an MSSQL service as the database for the app that is served from the web server.
I need to be able to type commands into the docker container while I build this development environment (so that I can look at the PHP logs, etc.). But since the web server is running, my terminal is occupied by the web server's output, and when I press Ctrl+Z, it actually puts the docker process in the background.
The question:
Is there any way that I can run this docker-compose.yml file and have it drop me into a shell on the guest machine?
Service 1:
webserver:
  image: php:7.1-apache
  stdin_open: true
  tty: true
  volumes:
    - ./www:/var/www/html
  ports:
    - 5001:80
Service 2:
mssql:
  image: microsoft/mssql-server-linux
  stdin_open: true
  tty: true
  environment:
    - ACCEPT_EULA=Y
  volumes:
    - ./www:/var/www/html
  ports:
    - 1433:1433
  depends_on:
    - webserver

You can exec a new process in a running container.
docker-compose exec [options] SERVICE COMMAND [ARGS...]
In your case
docker-compose exec webserver bash
The -i and -t options from docker exec are assumed in docker-compose exec
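Putting it together, a minimal session might look like this (the error-log path assumes the stock Apache configuration in the php:7.1-apache image):

```shell
# Start the stack in the background so the terminal stays free,
# then open an interactive shell in the running webserver container:
docker-compose up --detach --build
docker-compose exec webserver bash

# Inside the container, inspect the PHP/Apache logs, e.g.:
tail -f /var/log/apache2/error.log
```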

First of all, same as docker run, docker-compose has an option for running the services/containers in the background.
docker-compose up --detach --build
After that, list the running containers with:
docker-compose ps
And connect to the container as @Matt already answered.

Related

Can't run mssql docker image: permissions_check.sh: exec format error

I've been trying to run an MSSQL docker container on Linux (Ubuntu 20.04) installed on a Raspberry Pi, and I got the following error:
mssql | exec /opt/mssql/bin/permissions_check.sh: exec format error
mssql exited with code 1
My docker compose file is the following:
version: "3.7"
services:
  sql-server-db:
    container_name: mssql
    image: mcr.microsoft.com/mssql/server:2022-latest
    #image: mcr.microsoft.com/mssql/server:2019-CU18-ubuntu-20.04
    ports:
      - "1433:1433"
    environment:
      MSSQL_SA_PASSWORD: ******
      ACCEPT_EULA: Y
    volumes:
      - ./Data/mssql:/var/opt/mssql/data
Also, I've tried to run the container like this:
docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=******" -p 1433:1433 --name mssql -d mcr.microsoft.com/mssql/server:2019-CU18-ubuntu-20.04
Additionally, I've tried more than one image, with no success.
I've tried everything I have at hand, and I'm starting to lose my mind over this....
Things I've tried:
- I read an article on Stack Overflow about defining a volume for the mssql image inside the docker-compose.yml file.
- I've added and removed single and double quotation marks in the docker-compose.yml file.
- As mentioned above, I tried the docker run (...) command as another approach.
I expected these to work since I thought I had a spelling mistake, or a comma or quotation mark wrong, etc. All with no success.
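For what it's worth, an "exec format error" usually means the image was built for a different CPU architecture than the host, which is a common situation on a Raspberry Pi. A quick way to check (standard Docker CLI commands; the exact output will vary by host and image tag):

```shell
# Architecture the Raspberry Pi host reports (e.g. aarch64 or armv7l):
uname -m

# Platforms the image manifest is actually published for:
docker manifest inspect mcr.microsoft.com/mssql/server:2022-latest \
  | grep -i architecture
```

If the image only lists amd64 and the host is ARM, the binary inside the container simply cannot execute, regardless of compose-file tweaks.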

SQL scripts in Dockerfile is skipped to run if I attach the deployment to a persistent volume

I created a new image based on the "mcr.microsoft.com/mssql/server" image.
Then, in the Dockerfile, I have a script that creates a new database with some tables and seeded data.
FROM mcr.microsoft.com/mssql/server
USER root
# CreateDb
COPY ./CreateDatabaseSchema.sql ./opt/scripts/
ENV ACCEPT_EULA=true
ENV MSSQL_SA_PASSWORD=myP#ssword#1
# Create database
RUN /opt/mssql/bin/sqlservr & sleep 60; /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P ${MSSQL_SA_PASSWORD} -d master -i /opt/scripts/CreateDatabaseSchema.sql
I can see the database created by my script if I don't attach a persistent volume, and I do NOT see the new database if I do attach one. I checked the log and don't see any error. It looks like the system skips processing that file. What might cause the environment to skip the SQL script defined in the Dockerfile?
Thanks,
Austin
The problem with using a persistent volume is that everything the base image put in that directory is hidden by the mount. I need to learn how to create the database after the volume mounts:
volumeMounts:
  - mountPath: /var/opt/mssql
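One way around this, as a sketch: move the database creation out of the image build (RUN) and into a startup wrapper, so the script runs after the volume is mounted. The paths and the 60-second wait mirror the Dockerfile above; the filename entrypoint.sh is an assumption, and the sleep is a crude readiness wait that may need tuning:

```shell
#!/bin/bash
# entrypoint.sh (hypothetical): start SQL Server in the background,
# give it time to come up, run the schema script against the now-mounted
# volume, then keep the server process in the foreground.
/opt/mssql/bin/sqlservr &
sleep 60
/opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "$MSSQL_SA_PASSWORD" \
  -d master -i /opt/scripts/CreateDatabaseSchema.sql
wait
```

The Dockerfile would then end with something like ENTRYPOINT ["/entrypoint.sh"] instead of the RUN line, so creation happens at container start rather than at build time.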
You can use docker-compose.yml and Dockerfile. Both can work together.
version: '3.9'
services:
  mysqlserver:
    build:
      context: ..
      dockerfile: Dockerfile
    restart: always
    volumes:
      - make/my/db/persistent:/var/opt/mssql
Then you can run it with:
docker-compose -f docker-compose.yml up
Have fun

Transferring a data from a mongodb on my local machine to a docker container

I'm trying to deploy a site and I am stuck trying to get my MongoDB data into my docker container. My API seems to work just fine without the docker container, but when it is run using the docker container, it throws errors because the database is empty. I'm looking for a way to transfer previously stored data from my local MongoDB to the MongoDB in my container. Any solutions for this?
Below is my docker-compose.yml file:
version: "2"
services:
  web:
    build: .
    ports:
      - "3030:3030"
    depends_on:
      - mongo
  mongo:
    image: mongo
    ports:
      - "27018:27017"
I was told using mongodump and mongorestore could be helpful but haven't had much luck with mongorestore.
Currently, I have a dump folder with the db that I'm trying to transfer on my local machine. What steps should I take next to get it into docker?
Found the issue, for anyone attempting to populate their mongo database in docker.
Here are the steps I took:
1. Use mongodump to copy the database into a dump folder:
mongodump --db <db-name>
2. Use docker cp to copy that dump folder into the docker container:
docker cp ~/dump/ <container-name>:/usr/
3. Use mongorestore inside of the docker container:
- Open the mongo container's shell in Docker Desktop, or run docker exec -it <container-name> bash
- cd into the /usr directory
- Run mongorestore --db=<db-name> --collection=<collection-name> ./dump/<db-name>/<collection-name>.bson
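As a consolidated sketch of the steps above, with hypothetical names mydb for the database and mongo_1 for the container:

```shell
# On the host: dump the local database, then copy the dump into the container
mongodump --db mydb
docker cp ~/dump/ mongo_1:/usr/

# Inside the container: restore all collections of the dumped database
docker exec -it mongo_1 bash -c \
  "cd /usr && mongorestore --db=mydb ./dump/mydb/"
```

Note that pointing mongorestore at the database's dump directory restores every collection in it, which saves repeating the per-collection command.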

Create SQL Server database from a script in docker

Simple question I hope. I cannot find anything anywhere.
How do you create a database in a Microsoft SQL Server Docker container?
Dockerfile:
I am looking at the following Dockerfile:
FROM microsoft/mssql-server-windows-developer:latest
ENV sa_password ab873jouehxaAGR
WORKDIR /
COPY /db-scripts/create-db.sql .
# here is the existing run command in the image microsoft/mssql-server-windows-developer:latest
#CMD ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';", ".\\start -sa_password $env:sa_password -ACCEPT_EULA $env:ACCEPT_EULA -attach_dbs \\\"$env:attach_dbs\\\" -Verbose" ]
RUN (sqlcmd -S localhost -U SA -P ab873jouehxaAGR -i create-db.sql)
docker-compose.yml:
I have put together the following docker compose file:
version: '3.4'
services:
  sql.data:
    image: ${DOCKER_REGISTRY}myfirst-mssql:latest
    container_name: myfirst-mssql_container
    build:
      context: .
      dockerfile: ./Dockerfile
    environment:
      SA_PASSWORD: ab873jouehxaAGR
      ACCEPT_EULA: Y
Bringing it together
I am running the command docker-compose up against the above. Assume the create-db.sql file simply creates a database; its contents are not relevant here, to keep things minimal.
Errors
The error I get is that the login for SA is invalid when the .sql script runs:
Step 7/7 : RUN (sqlcmd -S localhost -U SA -P ab873jouehxaAGR -i create-db.sql)
---> Running in 2ac5644d0bd9
Sqlcmd: Error: Microsoft ODBC Driver 13 for SQL Server : Login failed for user 'SA'..
It looks like this runs before the password has been changed to ab873jouehxaAGR, which is what the existing command from mssql-server-windows-developer:latest (visible when inspecting the image in VS Code), \start -sa_password $env:sa_password -ACCEPT_EULA $env:ACCEPT_EULA -attach_dbs, actually does.
Environment
I am running docker Docker version 18.06.1-ce, build e68fc7a on Windows 10.
Attach or script
I am not attaching a database via the attach_dbs environment variable, which I see in many examples.
I am trying to find a best practice for managing a SQL container from the point of view of end-to-end testing, and a lot of articles seem not to cover the data aspect, i.e. the development workflow.
I would be interested to hear thoughts in the comments on these two approaches in the Docker world.
Using the following commands solved this problem for me:
docker-compose up --build -d
version: '3.4'
services:
  sql.data:
    image: ${DOCKER_REGISTRY}myfirst-mssql:latest
    container_name: myfirst-mssql_container
    environment:
      SA_PASSWORD: ab873jouehxaAGR
      ACCEPT_EULA: Y
and after that:
docker exec myfirst-mssql_container sqlcmd \
  -d master \
  -S localhost \
  -U "sa" \
  -P "ab873jouehxaAGR" \
  -Q 'select 1'
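The same pattern can then run the original create-db.sql once the server is up, assuming the script was copied to the image root as in the Dockerfile above:

```shell
# Run the schema script against the already-running container, where the
# SA password has been set by the image's own startup command:
docker exec myfirst-mssql_container sqlcmd \
  -S localhost -U "sa" -P "ab873jouehxaAGR" \
  -i /create-db.sql
```

The key difference from the failing RUN instruction is timing: this executes against a started container, not during the image build when no SQL Server is listening yet.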

database lost on docker restart

I'm running influxdb and grafana on Docker with Windows 10.
Every time I shut down Docker, I lose my database.
Here's what I know:
- I have tried adjusting the retention policies, with no effect on the outcome.
- I can shut down and restart the containers (docker-compose down) and the database is still there. Only when I shut down Docker for Windows do I lose the database.
- I don't see any new folders in the mapped directory when I create a new database (/data/influxdb/data/). Only the '_internal' folder persists, which I assume corresponds to the persisting database called '_internal'.
Here's my yml file. Any help greatly appreciated.
version: '3'
services:
  # Define an InfluxDB service
  influxdb:
    image: influxdb
    volumes:
      - ./data/influxdb:/var/lib/influxdb
    ports:
      - "8086:8086"
      - "80:80"
      - "8083:8083"
  grafana:
    image: grafana/grafana
    volumes:
      - ./data/grafana:/var/lib/grafana
    container_name: grafana
    ports:
      - "3000:3000"
    env_file:
      - 'env.grafana'
    links:
      - influxdb
  # Define a service for using the influx CLI tool.
  # docker-compose run influxdb-cli
  influxdb-cli:
    image: influxdb
    entrypoint:
      - influx
      - -host
      - influxdb
    links:
      - influxdb
If you are using docker-compose down/up, keep in mind that this is not a "restart" because:
docker-compose up creates new containers and
docker-compose down removes them:
docker-compose up
Builds, (re)creates, starts, and attaches to containers for a service.
docker-compose down
Stops containers and removes containers, networks, volumes, and images created by up.
So, removing the containers + not using a mechanism to persist data (such as volumes) means that you lose your data ☹️
On the other hand, if you keep using:
docker-compose start
docker-compose stop
docker-compose restart
you deal with the same containers, the ones created when you ran docker-compose up.
Note that docker-compose down should not remove named volumes unless specified (with the -v flag):
https://docs.docker.com/compose/reference/down/
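Concretely, per the docker-compose reference (this applies to named volumes; anonymous volumes attached to containers are also removed with -v):

```shell
docker-compose down      # removes containers and networks; named volumes are kept
docker-compose down -v   # additionally removes named volumes declared in the
                         # compose file's volumes section
```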
I tried the following docker-compose.yaml file which persist the data even with down or rm docker commands.
version: '3'
services:
  influxdb:
    image: influxdb:2.0
    ports:
      - 8086:8086
    volumes:
      - influxdb-data:/var/lib/influxdb2
    restart: always
volumes:
  influxdb-data:
    external: true
I think the problem is related to the mounted volume, not docker or influxdb. You should first find where influxdb stores data (by default in your home folder, "~/.influxdb", on Windows), then generate an influxdb.conf file, and finally mount the volumes.
This seemed to work for me, but just in case someone else is reading this with the same problem as mine: the connection with my Docker WordPress compose site was lost.
It seems as though it needed restarting.
Following the advice from @tgogos, in a shell terminal in the docker root folder I typed the command:
docker-compose restart
However, before doing this I edited docker-compose.yml to also include:
restart: always
per the advice on the linode.com site.