I am trying to connect to a MySQL database on my host using a Docker container (so I can use it in other containers).
I would like to do it this way because I cannot connect to the database from a Docker container directly: it throws Connection refused, since the container's IP is not allowed.
I tried mounting the sock file using the following compose file:
version: '3'
services:
  mysql:
    image: mariadb:10.3
    volumes:
      - /var/run/mysqld/mysqld.sock:/var/run/mysqld/mysqld.sock
    networks:
      - database
networks:
  database:
    external: true
but it fails with the following error (even though the host database already contains many databases):
2022-06-07 20:14:46+00:00 [ERROR] [Entrypoint]: Database is uninitialized and password option is not specified
You need to specify one of MARIADB_ROOT_PASSWORD, MARIADB_ALLOW_EMPTY_ROOT_PASSWORD and MARIADB_RANDOM_ROOT_PASSWORD,
and when I added the volume - /var/lib/mysql:/var/lib/mysql, I got a Connection refused error (as if I were connecting to it over TCP and not via the Unix socket).
Here you go; consider checking the documentation before asking those types of questions:
https://hub.docker.com/_/mariadb
version: '3'
services:
  sql:
    image: mariadb:10.3
    container_name: <container_name>
    restart: always
    networks:
      - database
    volumes:
      - <volume>:/var/lib/mysql
      - /var/run/mysqld/mysqld.sock:/var/run/mysqld/mysqld.sock
    environment:
      MARIADB_RANDOM_ROOT_PASSWORD: 1
      MARIADB_USER: <USER_TO_BE_CREATED>
      MARIADB_DATABASE: <DATABASE_TO_BE_CREATED>
      MARIADB_PASSWORD: <PASSWORD_TO_BE_CREATED>
networks:
  database:
    external: true
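For the original goal of using this database from other containers, a hedged usage sketch (the app service and image are purely illustrative; the rest mirrors the placeholders above): any service attached to the same external database network can reach MariaDB by the service name sql on port 3306.
services:
  app:
    image: your-app-image          # illustrative only
    networks:
      - database
    environment:
      DB_HOST: sql                 # the mariadb service name resolves on the shared network
      DB_PORT: "3306"
      DB_USER: <USER_TO_BE_CREATED>
      DB_PASSWORD: <PASSWORD_TO_BE_CREATED>
networks:
  database:
    external: true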
Related
I created a Docker Compose file to run a Spring Boot application (2.5.7) and an MS SQL Server instance:
version: '3.8'
services:
  db:
    ports:
      - 1434:1433
    build: ./db
  api:
    depends_on:
      - db
    build: ./mdm-web-api
    restart: on-failure
    env_file: ./.env
    ports:
      - $SPRING_LOCAL_PORT:$SPRING_DOCKER_PORT
    environment:
      - SPRING_DATASOURCE_URL=jdbc:sqlserver://db:1434;databaseName=dwh-demo;encrypt=true;logingTimeout=30
    stdin_open: true
    tty: true
I defined a Dockerfile for MS SQL in the folder 'db' and successfully tested the SQL Server alone.
Then I added my Spring Boot application, which I also tested separately.
I have an application.properties file as follows (I replaced sensitive values):
logging.level.root=INFO
spring.datasource.username=<my-db-user>
spring.datasource.password=<my-ultra-strong-pw>
spring.datasource.driver-class-name=com.microsoft.sqlserver.jdbc.SQLServerDriver
spring.jpa.hibernate.ddl-auto=none
spring.jpa.properties.hibernate.format_sql=true
spring.jpa.properties.hibernate.default_schema:entity_reference
spring.jpa.hibernate.use-new-id-generator-mappings=false
spring.jpa.generate-ddl=false
spring.jpa.show-sql=true
spring.jpa.hibernate.naming.physical-strategy=org.hibernate.boot.model.naming.PhysicalNamingStrategyStandardImpl
spring.data.rest.basePath=/api
springdoc.api-docs.path=/api-docs
I just added the spring.datasource.url property in docker-compose.yml so it points to the database server using the service name:
environment:
  - SPRING_DATASOURCE_URL=jdbc:sqlserver://db:1434;databaseName=dwh-demo;encrypt=true;logingTimeout=30
It seems like the SPRING_DATASOURCE_URL value is completely ignored and I'm getting the following error:
Failed to determine suitable jdbc url
It shouldn't be a problem to add configuration in both places, but maybe I'm wrong.
What could be the problem, and what alternatives do I have?
Additional details:
java openjdk 19
maven 3.8.5
Docker on Windows
SQL Server 2019-latest for Linux
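A side note while debugging (hedged, not a confirmed diagnosis): inside the Compose network the api container reaches the db service on its container port 1433; the published 1434 only applies to connections made from the host. The environment entry would then look something like this (loginTimeout is the driver's spelling of that option):
    environment:
      # db:1433 -> the sqlserver container port; the published 1434 is only for connections from the host
      - SPRING_DATASOURCE_URL=jdbc:sqlserver://db:1433;databaseName=dwh-demo;encrypt=true;loginTimeout=30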
I am trying to deploy a WordPress instance on my Pi using Docker. Unfortunately I am receiving an error that the app cannot establish a DB connection.
All containers run in the bridged network. I am exposing port 80 of the app on 8882 and port 3306 of the DB on 3382.
A second WordPress installation on ports 8881 (app) and 3381 (DB) in the same network is working perfectly; where is the flaw in my setup?
version: '2.1'
services:
  wordpress:
    image: wordpress
    network_mode: bridge
    restart: always
    ports:
      - 8882:80
    environment:
      PUID: 1000
      PGID: 1000
      WORDPRESS_DB_HOST: [addr. of PI]:3382
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: secret
      WORDPRESS_DB_NAME: wordpress
    volumes:
      - wordpress:/var/www/html
  db:
    image: ghcr.io/linuxserver/mariadb
    network_mode: bridge
    environment:
      - PUID=1000
      - PGID=1000
      - MYSQL_ROOT_PASSWORD=secret
      - TZ=Europe/Berlin
      - MYSQL_DATABASE=wordpress
      - MYSQL_USER=wordpress
      - MYSQL_PASSWORD=secret # Must match the above password
    volumes:
      - db:/config
    ports:
      - 3382:3306
    restart: unless-stopped
volumes:
  db:
  wordpress:
When containers are on the same bridge network, they can talk to each other using their service names as hostnames. In your case, the wordpress container can talk to the database container using the hostname db. Since it's not talking via the host, any port mapping is irrelevant and you just connect on port 3306.
So if you change
WORDPRESS_DB_HOST: [addr. of PI]:3382
to
WORDPRESS_DB_HOST: db
it should work.
You can remove the port mapping on the database container if you don't need to access the database directly from the host.
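To make that concrete, a sketch of the relevant changes (hedged; only the lines that differ from the compose file above are shown, and note that service-name lookup works on a user-defined or Compose-created network, so the network_mode: bridge entries would also need to go, for example by letting Compose create its default network):
services:
  wordpress:
    image: wordpress
    environment:
      WORDPRESS_DB_HOST: db    # service name; the container port 3306 is implied
      # ...other WORDPRESS_DB_* variables unchanged
  db:
    image: ghcr.io/linuxserver/mariadb
    # the "ports: - 3382:3306" mapping is only needed for access from the host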
OK, learned something today, like every day.
It's better to keep such installations neatly separated in different networks, and also better not to reuse the same container names, such as DB. Better to separate them like DB-WP1, DB-WP2, etc.
In my setup I couldn't see any reason why they should interfere with each other, but doing the above won't harm anything at all.
You should create a network:
version: '2.1'
services:
  wordpress:
    image: wordpress
    networks:
      - db_net
    restart: always
    ports:
      - 8882:80
    environment:
      PUID: 1000
      PGID: 1000
      WORDPRESS_DB_HOST: [addr. of PI]:3382
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: secret
      WORDPRESS_DB_NAME: wordpress
    volumes:
      - wordpress:/var/www/html
  db:
    image: ghcr.io/linuxserver/mariadb
    networks:
      - db_net
    environment:
      - PUID=1000
      - PGID=1000
      - MYSQL_ROOT_PASSWORD=secret
      - TZ=Europe/Berlin
      - MYSQL_DATABASE=wordpress
      - MYSQL_USER=wordpress
      - MYSQL_PASSWORD=secret # Must match the above password
    volumes:
      - db:/config
    ports:
      - 3382:3306
    restart: unless-stopped
volumes:
  db:
  wordpress:
networks:
  db_net:
    driver: bridge
In golang-migrate's documentation, it is stated that you can run this command to run all the migrations in one folder.
docker run -v {{ migration dir }}:/migrations --network host migrate/migrate
-path=/migrations/ -database postgres://localhost:5432/database up 2
How would you do this to fit the syntax of the new docker-compose, which discourages the use of --network?
And more importantly: how would you connect to a database in another container instead of to one running on your localhost?
Adding this to your docker-compose.yml will do the trick:
db:
  image: postgres
  networks:
    new:
      aliases:
        - database
  environment:
    POSTGRES_DB: mydbname
    POSTGRES_USER: mydbuser
    POSTGRES_PASSWORD: mydbpwd
  ports:
    - "5432"
migrate:
  image: migrate/migrate
  networks:
    - new
  volumes:
    - .:/migrations
  command: ["-path", "/migrations", "-database", "postgres://mydbuser:mydbpwd@database:5432/mydbname?sslmode=disable", "up", "3"]
  links:
    - db
networks:
  new:
Instead of using the --network host option of docker run, you set up a network called new. All the services inside that network gain access to each other through a defined alias (in the above example, you can access the db service through the database alias). Then, you can use that alias just like you would use localhost, that is, in place of an IP address. That explains this connection string:
"postgres://mydbuser:mydbpwd@database:5432/mydbname?sslmode=disable"
The answer provided by @Federico worked for me at the beginning; nevertheless, I realised I was getting a connect: connection refused the first time docker-compose was run in a brand new environment, but not the second time. This means that the migrate container runs before the database is ready to process operations. Since migrate/migrate from Docker Hub runs the migration command as soon as the container starts, it's not possible to add a wait_for_it.sh script to wait for the db to be ready. So we have to add depends_on and healthcheck entries to manage the execution order.
So this is my docker-compose file:
version: '3.3'
services:
  db:
    image: postgres
    networks:
      new:
        aliases:
          - database
    environment:
      POSTGRES_DB: mydbname
      POSTGRES_USER: mydbuser
      POSTGRES_PASSWORD: mydbpwd
    ports:
      - "5432"
    healthcheck:
      test: pg_isready -U mydbuser -d mydbname
      interval: 10s
      timeout: 3s
      retries: 5
  migrate:
    image: migrate/migrate
    networks:
      - new
    volumes:
      - .:/migrations
    command: ["-path", "/migrations", "-database", "postgres://mydbuser:mydbpwd@database:5432/mydbname?sslmode=disable", "up", "3"]
    links:
      - db
    depends_on:
      - db
networks:
  new:
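A note to complement this (not part of the original answer): a plain depends_on only waits for the db container to start, not for its healthcheck to pass. With a Compose version that supports the long form of depends_on (the 2.x file formats or the current Compose specification), the healthcheck above can actually gate the migrate container; a minimal sketch:
  migrate:
    image: migrate/migrate
    depends_on:
      db:
        condition: service_healthy   # do not start until pg_isready succeeds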
As of Compose file format version 2 you do not have to set up a network.
As stated in the Docker networking documentation: "By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name."
So in your case you could do something like:
version: '3.8'
services:
  # note: this service name, databaseservicename, is what we will use instead
  # of localhost when running migrate, as Compose assigns the service name as the host.
  # For example, if another container in the same compose file wanted to access
  # this service on port 2000, it would use databaseservicename:2000
  databaseservicename:
    image: postgres:13.3-alpine
    restart: always
    ports:
      - "5432"
    environment:
      POSTGRES_PASSWORD: password
      POSTGRES_USER: username
      POSTGRES_DB: database
    volumes:
      - pgdata:/var/lib/postgresql/data
  # if another container in the same compose file wanted to access the migrate
  # container on, let's say, port 1000, it would use migrate:1000
  migrate:
    image: migrate/migrate
    depends_on:
      - databaseservicename
    volumes:
      - path/to/your/migration/folder/in/local/computer:/database
    # here, instead of localhost, we use databaseservicename as the host,
    # since that is the name we gave to the postgres service
    command:
      [ "-path", "/database", "-database", "postgres://databaseusername:databasepassword@databaseservicename:5432/database?sslmode=disable", "up" ]
volumes:
  pgdata:
I'm developing a .NET Core web API with an MS SQL Server database. I containerized both and use Docker Compose to spin them up, but my API cannot connect to the database.
The following error occurs:
Application startup exception: System.Data.SqlClient.SqlException (0x80131904): A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 40 - Could not open a connection to SQL Server)
Code in my startup:
services.AddDbContext<ProductServiceDbContext>(options =>
options.UseSqlServer("server=sqlserver;port=1433;user id=sa;password=docker123!;database=ProductService;"));
Dockerfile:
FROM microsoft/dotnet:2.1-sdk AS build
WORKDIR /app
COPY *.csproj ./
RUN dotnet restore dockerapi.csproj
COPY . ./
RUN dotnet publish dockerapi.csproj -c Release -o out
FROM microsoft/dotnet:2.1-aspnetcore-runtime AS runtime
WORKDIR /app
COPY --from=build /app/out .
ENTRYPOINT ["dotnet", "dockerapi.dll"]
Docker-compose:
version: '3.4'
services:
  productservice:
    image: productservice/api
    container_name: productservice_api
    build:
      context: ./ProductService/ProductService
    depends_on:
      - sqlserver
    ports:
      - "5000:80"
  sqlserver:
    image: microsoft/mssql-server-linux:latest
    container_name: sqlserver
    ports:
      - "1433"
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=docker123!
I've tried several things:
Adding links in the docker-compose file (sqlserver in productservice)
Adding networks to the docker-compose file
Changing the connection string to
"server=sqlserver;user id=sa;password=docker123!;database=ProductService;" or "server=sqlserver,1433;user id=sa;password=docker123!;database=ProductService;"
Hey, I was having a similar issue and stumbled upon your post. It was not the same issue I was having, but I think I know the problem you were hitting (sorry this reply is so late).
Looking at your connection string server=sqlserver;port=1433;user id=sa;password=docker123!;database=ProductService;, you actually have to specify the port another way: server=sqlserver,1433;user id=sa;password=docker123!;database=ProductService;
Hope this helps!
Try these changes:
sqlserver:
  image: microsoft/mssql-server-linux:latest
  container_name: sqlserver
  ports:
    - "1433:1433" # map the ports
  environment:
    - ACCEPT_EULA=Y
    - SA_PASSWORD=docker123!
sqlserver:
  image: "microsoft/mssql-server-linux:latest"
  extra_hosts:
    - "localhost:192.168.65.2"
  container_name: sqlserver
  hostname: sqlserver
  environment:
    ACCEPT_EULA: "Y"
    SA_PASSWORD: "Your_password123"
  ports:
    - "49173:1433"
Changed the connection string to "localhost,49173;Database=ProductService;User Id=sa;Password=Your_password123;"
Try another port because it can cause a conflict with an existing one. e.g. "1433:1401";
Disable your antivirus for a while and see if it works;
Create a catalog with the name "ProductService", as defined in your appSettings.json, before running the container;
I have not tested this. Try at your own risk.
I can use Traefik for web sites, since HTTP clients send a Host header when they connect.
But I want to have multiple different instances of SQL Server running through Docker which will be externally available (outside the Docker host, potentially outside the local network).
So, is there anything which allows connecting to different SQL Server instances running on the same Docker host WITHOUT having to give them different ports or external IP addresses, such that someone could access
sql01.docker.local,1433 AND sql02.docker.local,1433 from SQL tools?
Start Additional Question
Since there have been no replies: perhaps there is a way to have different instances like sql.docker.local\instance1 and sql.docker.local\instance2, though I imagine that may also not be possible.
End Additional Question
This is an example of the docker-compose file I was trying to use (before I realised that queries to SQL Server don't send a host header - or am I wrong about that?):
version: '2.1'
services:
  traefik:
    container_name: traefik
    image: stefanscherer/traefik-windows
    command: --docker.endpoint=tcp://172.28.80.1:2375 --logLevel=DEBUG
    ports:
      - "8080:8080"
      - "80:80"
      - "1433:1433"
    volumes:
      - ./runtest:C:/etc/traefik
      - C:/Users/mvukomanovic.admin/.docker:C:/etc/ssl
    networks:
      - default
    restart: unless-stopped
    labels:
      - "traefik.enable=false"
  whoami:
    image: stefanscherer/whoami
    labels:
      - "traefik.backend=whoami"
      - "traefik.frontend.entryPoints=http"
      - "traefik.port=8080"
      - "traefik.frontend.rule=Host:whoami.docker.local"
    networks:
      - default
    restart: unless-stopped
  sql01:
    image: microsoft/mssql-server-windows-developer
    environment:
      - ACCEPT_EULA=Y
    hostname: sql01
    domainname: sql01.local
    networks:
      - default
    restart: unless-stopped
    labels:
      - "traefik.frontend.rule=Host:sql01.docker.local,sql01,sql01.local"
      - "traefik.frontend.entryPoints=mssql"
      - "traefik.port=1433"
      - "traefik.frontend.port=1433"
  sql02:
    image: microsoft/mssql-server-windows-developer
    environment:
      - ACCEPT_EULA=Y
    hostname: sql02
    domainname: sql02.local
    networks:
      - default
    restart: unless-stopped
    labels:
      - "traefik.frontend.rule=Host:sql02.docker.local,sql02,sql02.local"
      - "traefik.frontend.entryPoints=mssql"
      - "traefik.port=1433"
      - "traefik.frontend.port=1433"
networks:
  default:
    external:
      name: nat
As mentioned earlier, Traefik is not the right solution here, since it is an HTTP-only load balancer.
I can think of three different ways to achieve what you want to do (a sketch of the first is shown after this list):
Use a TCP load balancer like HAProxy
Set up your servers in Docker Swarm mode (https://docs.docker.com/engine/swarm/), which will allow binding the same port with transparent routing between them
Use a service discovery tool like Consul with SRV records, which can abstract away port numbers (this might be overkill for your needs and complex to set up)
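As an illustration of the first option, a hedged sketch of what an HAProxy front could look like in Compose terms; the image and config path come from the official haproxy image, the sql01/sql02 names mirror the compose file above, and the actual TCP frontends/backends still have to be defined in haproxy.cfg (one frontend per published port, since raw TDS traffic carries no hostname to route on):
haproxy:
  image: haproxy:2.8
  volumes:
    # haproxy.cfg must declare `mode tcp` frontends, e.g. one listening on 1433
    # forwarding to sql01:1433 and one on 1434 forwarding to sql02:1433
    - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
  ports:
    - "1433:1433"
    - "1434:1434"
  networks:
    - default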
You can't use Traefik, because it's an HTTP reverse proxy.
Your SQL Server instances listen and communicate over plain TCP.
I don't understand what your final goal is.
Why are you using two different SQL Servers?
It depends on what you want, but you may have two options:
Can you use a simpler solution? Different databases, roles and permissions within one instance for separation.
You can look into the documentation for SQL Server Always On, but it doesn't seem easy to route queries to a specific server.
There is no "virtual" access to databases the way there is for HTTP servers, so additional hostnames pointing to the same IP cannot help you.
If you insist on port 1433 for all of your instances, then I see no way around using two different external IPs.
If you were on a Linux box you might try some iptables magic, but it isn't elegant and would allow access to only one of your instances at any single moment. Windows may have an iptables equivalent (I've never heard of one), but you still cannot escape the only-one-at-a-time limitation.
My advice: use more than one port to expose your servers.
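Following that advice, a minimal sketch of publishing the two instances on different host ports (the image tag and password are placeholders, not taken from the question; clients would then connect to <docker-host>,1433 and <docker-host>,1434 respectively):
services:
  sql01:
    image: mcr.microsoft.com/mssql/server:2019-latest   # placeholder image
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=Your_password123
    ports:
      - "1433:1433"   # reachable as <docker-host>,1433
  sql02:
    image: mcr.microsoft.com/mssql/server:2019-latest   # placeholder image
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=Your_password123
    ports:
      - "1434:1433"   # reachable as <docker-host>,1434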