I have a docker-compose.yml containing these lines:
version: "3.9"
services:
mssql:
image: localhost/local_mssql_server:mssqlserver
ports:
- "1433:1433"
volumes:
- sqlfolder1234:/var/opt/mssql
The container starts up successfully and serves data.
But I seldom work with Windows, and I would like to know: where is the host folder sqlfolder1234?
I tried searching for that folder with Windows Explorer; after an hour the search still hadn't finished.
Where is the folder sqlfolder1234 on my host system?
This volume type is called a named volume. Here is some description of the short syntax from the official documentation:
In the absence of named volumes with specified sources, Docker creates an anonymous volume for each task backing a service. Anonymous volumes do not persist after the associated containers are removed.
volumes:
  # Just specify a path and let the Engine create a volume
  - /var/lib/mysql
  # Specify an absolute path mapping
  - /opt/data:/var/lib/mysql
  # Path on the host, relative to the Compose file
  - ./cache:/tmp/cache
  # User-relative path
  - ~/configs:/etc/configs/:ro
  # Named volume
  - datavolume:/var/lib/mysql
If you want to keep the persistent data visible on your host computer, I would use a different volume type. The "path on the host, relative to the Compose file" variant shown above lets you see the volume folder right next to the docker-compose file:
volumes:
  - ./sqlfolder1234:/var/opt/mssql
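If you just want to locate the existing named volume, docker volume inspect prints its mount point. A hedged aside: Compose normally prefixes the volume name with the project name, and on Docker Desktop for Windows the reported path lives inside Docker's WSL 2 VM rather than directly on the Windows filesystem, which is why Explorer never finds it.
# list volumes to find the exact (usually project-prefixed) name
docker volume ls
# print where Docker stores the volume's data
docker volume inspect --format '{{ .Mountpoint }}' <project>_sqlfolder1234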
I'm trying to get React to reload the site's content whenever a file is saved.
I'm using VS Code, which doesn't have safe write. I'm using docker-compose on Windows via Docker Desktop.
Dockerfile:
FROM node:17-alpine
WORKDIR /front
ARG FRONT_CMD
ARG API_HOSTNAME
ENV REACT_APP_API_HOSTNAME=$API_HOSTNAME
COPY . .
RUN npm i @emotion/react @emotion/styled
CMD $FRONT_CMD
The relevant part of docker-compose.yml:
frontend:
  volumes:
    - ./frontend/src:/front/src
    - /front/node_modules
  build:
    context: ./frontend
    dockerfile: Dockerfile
    args:
      - FRONT_CMD=${FRONT_CMD}
      - API_HOSTNAME=${API_HOSTNAME}
  env_file:
    - .env.dev
  networks:
    - internal
  environment:
    - CHOKIDAR_USEPOLLING=true
    - FAST_REFRESH=false
    - NODE_ENV=development
Everything is running behind traefik. CHOKIDAR_USEPOLLING and FAST_REFRESH seem to make no difference. I start with 'docker-compose --env-file .env.dev up'; within that file FRONT_CMD="npm start", which behaves just fine. The .env.dev file should be a clear indication of a dev build to React (and it is; it works the same without the addition), but I added NODE_ENV just to be safe. I tried adding all of them into the build args just to be super sure, but nothing changes. The React files live in the 'frontend' folder, which is in the same location as docker-compose.yml.
Every time React says it compiled successfully and warns me that it's a development build.
The only suspicion I have left is that there's some issue with files being updated locally on Windows while Docker uses Linux, but I have no idea where to go from there even if that's the case.
The shortest way I found was to start from the other side: attach the editor to the container instead of updating the container based on changes to local files. I followed the guide here: https://code.visualstudio.com/docs/remote/attach-container
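One more thing worth checking (an assumption about the toolchain, since the react-scripts version isn't shown in the question): with react-scripts 5 / webpack 5 the file watcher is watchpack, so the polling switch becomes WATCHPACK_POLLING rather than CHOKIDAR_USEPOLLING:
environment:
  - WATCHPACK_POLLING=true  # react-scripts 5+ replacement for CHOKIDAR_USEPOLLING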
I spent most of the morning trying to figure out not only how to copy an initial SQL dump into the container, but also how to auto-import (execute) the dump into the DB. I have read countless other posts, none of which seem to work. I have the following docker-compose file:
version: '3.8'
services:
  db:
    image: mariadb:10.5.8
    restart: always
    container_name: database
    environment:
      MYSQL_ROOT_PASSWORD: default
    volumes:
      - db-data:/var/lib/mysql
      - ./db-init:/docker-entrypoint-initdb.d
volumes:
  db-data:
The SQL dump is found in the db-init folder. I got the docker-entrypoint-initdb.d from the official docs on DockerHub.
After docker-compose up, the SQL file is correctly copied into docker-entrypoint-initdb.d but is never run against the DB, i.e. the dump is never imported and the DB remains empty.
I have tried moving the volumes directive around in the docker-compose file, as this was suggested in another post. From what I've read, the SQL dump should be imported automatically when the volume is mounted.
Is there no way to accomplish this via the docker-compose.yml only?
Edit: Switching the version to 2.x did not work
EDIT2: Container logs:
2021-02-10 17:53:09+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/wordpress.sql
ERROR 1046 (3D000) at line 10: No database selected
From your logs, a quick Google search pointed to this post. Adding MYSQL_DATABASE to the environment should solve the issue, and the .sql should then be imported correctly on startup.
The final docker-compose should look like this:
services:
  db:
    image: mariadb:10.5.8
    restart: always
    container_name: database
    environment:
      MYSQL_DATABASE: wordpress
      MYSQL_ROOT_PASSWORD: default
    volumes:
      - db-data:/var/lib/mysql
      - ./db-init:/docker-entrypoint-initdb.d/
volumes:
  db-data:  # top-level declaration required for the named volume
Maybe it's not worded as strongly as it should be, but the docs do mention this: "SQL files will be imported by default to the database specified by the MYSQL_DATABASE variable."
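Alternatively (not from the quoted docs, just standard SQL): the "No database selected" error can also be avoided by selecting the database at the top of the dump itself, e.g. in wordpress.sql:
-- create and select the target schema before the dumped statements run
CREATE DATABASE IF NOT EXISTS wordpress;
USE wordpress;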
I want to create a SQL Server database in a Kubernetes pod using a SQL script file. I have the SQL script, which creates the database and inserts the master data. As I'm new to Kubernetes, I'm struggling to run the SQL script in a pod. I know the script can be executed manually with a separate kubectl exec command, but I want it to run automatically as part of the pod's deployment YAML.
Is there a way to mount the script file into the pod's volume and run it after starting the container?
You could use Kubernetes lifecycle hooks for that case. There are two of them: PostStart and PreStop.
PostStart executes immediately after a container is created.
PreStop, on the other hand, is called immediately before a container is terminated.
There are two types of hook handlers that can be implemented: Exec or HTTP.
Exec executes a specific command, such as pre-stop.sh, inside the cgroups and namespaces of the container; resources consumed by the command are counted against the container.
HTTP executes an HTTP request against a specific endpoint on the container.
PostStart is the one to go with here; however, please note that the hook runs in parallel with the main process.
It does not wait for the main process to start up fully, and until the hook completes, the container stays in a Waiting state.
As a small workaround, you can add a sleep command at the start of your script so it waits a bit for the main container process to come up.
Your script file can be stored in the container image or mounted into a volume shared with the pod using a ConfigMap. Here are some examples of how to do that:
kind: ConfigMap
apiVersion: v1
metadata:
  namespace: <your-namespace>
  name: poststarthook
data:
  poststart.sh: |
    #!/bin/bash
    echo "It's done"
Make sure your script does not exceed the 1 MiB size limit for a ConfigMap.
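As a usage aside, the same ConfigMap can also be generated straight from the script file instead of being written by hand:
kubectl create configmap poststarthook --from-file=poststart.sh --namespace <your-namespace>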
After you define the ConfigMap, you will have to mount it using volumes:
spec:
  containers:
    - image: <your-image>
      name: example-container
      volumeMounts:
        - mountPath: /opt/poststart.sh
          subPath: poststart.sh
          name: hookvolume
  volumes:
    - name: hookvolume
      configMap:
        name: poststarthook
        defaultMode: 0755  # please remember to set proper (executable) permissions
And then you can define postStart in your spec:
spec:
  containers:
    - name: example-container
      image: <your-image>
      lifecycle:
        postStart:
          exec:
            command: ["/bin/sh", "-c", "/opt/poststart.sh"]
You can read more about hooks in the Kubernetes documentation and in this article. Let me know if that was helpful.
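Tying this back to SQL Server, poststart.sh could look something like the sketch below. This is only a sketch under assumptions: the sqlcmd path is the one shipped in the mssql-server-linux images, and the dump location /opt/init.sql and the SA_PASSWORD variable are illustrative. Polling for readiness is more robust than a fixed sleep.
#!/bin/bash
# wait until SQL Server accepts connections (sqlcmd path assumed from mssql-server-linux images)
until /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P "$SA_PASSWORD" -Q "SELECT 1" > /dev/null 2>&1; do
  sleep 2
done
# run the dump that creates the database and inserts the master data (assumed mount path)
/opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P "$SA_PASSWORD" -i /opt/init.sql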
I have a Visual Studio 2017 (15.3) solution with two projects:
An API written in ASP.NET Core 2 MVC
Database Project
I was able to "dockerize" the MVC project easily (right-click, Add Docker Support), but while trying to dockerize the Database project I keep getting the error: Value cannot be null. Parameter name: stream. My Google-fu is failing me; the closest resource I found is for Visual Studio 15.2.
How I've Set Up the Database Project So Far
Added a Dockerfile to the root:
FROM microsoft/mssql-server-linux:latest
EXPOSE 1433
ENV ACCEPT_EULA=Y
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US:en
ENV LC_ALL en_US.UTF-8
ENV MSSQL_TCP_PORT=1433
# Add Database project output from VS build process
RUN mkdir --parents /_scripts/generated
COPY ./_scripts /_scripts/
COPY ./_scripts/generated/*.sql /_scripts/generated/
# Add shell script that starts MSSQL server, waits 60 seconds, then executes script to build out DB (script generated from VS build process)
CMD /bin/bash /_scripts/entrypoint.sh
Modified the docker-compose.yml file to include the new project:
version: '3'
services:
  webapp-api-service:
    image: webapp-api
    build:
      context: ./src/API
      dockerfile: Dockerfile
  webapp-db-service:
    image: webapp-db
    build:
      context: ./src/Database
      dockerfile: Dockerfile
Modified the docker-compose.override.yml file to expose the port for dev SSMS access:
version: '3'
services:
  webapp-api-service:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    ports:
      - "80"
  webapp-db-service:
    ports:
      - "1433"
Here's the build output
2>C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\Sdks\Microsoft.Docker.Sdk\build\Microsoft.VisualStudio.Docker.Compose.targets(279,5): error : Value cannot be null.
2>C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\MSBuild\Sdks\Microsoft.Docker.Sdk\build\Microsoft.VisualStudio.Docker.Compose.targets(279,5): error : Parameter name: stream
2>Done building project "docker-compose.dcproj" -- FAILED.
Thanks in advance!
I ran into this same issue yesterday and solved it by removing the build portion of the database service. I'll just have to build the database project manually for now.
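In compose terms, that means the database service keeps only the prebuilt image (a sketch based on the compose file above; the webapp-db image must now be built and tagged by hand):
webapp-db-service:
  image: webapp-db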
You can add a file named AppType.cache to /obj/Docker with content AspNetCore as a workaround.
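For example, from a bash prompt in the folder that holds docker-compose.dcproj (a hedged sketch of that workaround):
mkdir -p obj/Docker
echo AspNetCore > obj/Docker/AppType.cache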
Imagine a non-trivial docker compose app, with nginx in front of a webapp, and a few linked data stores:
web:
  build: my-django-app
  volumes:
    - .:/code
  ports:
    - "8000:8000"
  links:
    - redis
    - mysql
    - mongodb
nginx:
  image: nginx
  links:
    - web
redis:
  image: redis
  expose:
    - "6379"
mysql:
  image: mysql
  volumes:
    - /var/lib/mysql
  environment:
    - MYSQL_ALLOW_EMPTY_PASSWORD=yes
    - MYSQL_DATABASE=myproject
mongodb:
  image: mongo
The databases are pretty easy to configure (for now); the containers expose pretty nice environment variables to control them (see the mysql container). But what about nginx? We'll need to template a vhost file for that, right?
I don't want to roll my own image, as it would need rebuilding for every config change, across different devs' setups, test, staging, and production. And what if we want to do lightweight A/B testing by flipping a config option?
Some centralised config management is needed here, maybe something controlled by docker-compose that can write out config files to a shared volume?
This will only get more important as new services are added (imagine a microservice cloud, rather than, as in this example, a monolithic web app)
What is the correct way to manage configuration in a docker-compose project?
In general, you'll find that most containers use entrypoint scripts to configure applications by populating configuration files with environment variables. For an advanced example of this approach, see the entrypoint script for the official WordPress image.
Because this is a common pattern, Jason Wilder created the dockerize project to help automate the process.
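A minimal sketch of that entrypoint-script pattern for the nginx case, assuming a vhost template at /etc/nginx/templates/default.conf.tmpl and variables SERVER_NAME and UPSTREAM set via docker-compose environment (all names here are illustrative; envsubst ships with the gettext package):
#!/bin/sh
# render the vhost file from the template using the container's environment
envsubst '$SERVER_NAME $UPSTREAM' \
  < /etc/nginx/templates/default.conf.tmpl \
  > /etc/nginx/conf.d/default.conf
# hand control to nginx in the foreground
exec nginx -g 'daemon off;'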