Imagine a non-trivial docker compose app, with nginx in front of a webapp, and a few linked data stores:
web:
  build: my-django-app
  volumes:
    - .:/code
  ports:
    - "8000:8000"
  links:
    - redis
    - mysql
    - mongodb
nginx:
  image: nginx
  links:
    - web
redis:
  image: redis
  expose:
    - "6379"
mysql:
  image: mysql
  volumes:
    - /var/lib/mysql
  environment:
    - MYSQL_ALLOW_EMPTY_PASSWORD=yes
    - MYSQL_DATABASE=myproject
mongodb:
  image: mongo
The databases are fairly easy to configure (for now): the containers expose convenient environment variables to control them (see the mysql container). But what about nginx? We'll need to template a vhost file for that, right?
I don't want to roll my own image, since that would need rebuilding for every config change, across different devs' setups, test, staging, and production. And what if we want to do lightweight A/B testing by flipping a config option?
Some centralised config management is needed here, maybe something controlled by docker-compose that can write out config files to a shared volume?
This will only get more important as new services are added (imagine a microservice cloud, rather than, as in this example, a monolithic web app)
What is the correct way to manage configuration in a docker-compose project?
In general you'll find that most containers use entrypoint scripts to configure applications by populating configuration files from environment variables. For an advanced example of this approach, see the entrypoint script for the official Wordpress image.
Because this is a common pattern, Jason Wilder created the dockerize project to help automate the process.
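As a rough illustration of this pattern (not the actual Wordpress script; the NGINX_* variable names and the output path are made up for this sketch), an entrypoint renders the nginx vhost from environment variables and then hands off to the main process:

```shell
#!/bin/sh
# Sketch of a docker-entrypoint.sh that templates an nginx vhost from
# environment variables. NGINX_SERVER_NAME, NGINX_UPSTREAM and
# NGINX_CONF_PATH are invented names for this example.
: "${NGINX_SERVER_NAME:=localhost}"
: "${NGINX_UPSTREAM:=web:8000}"
# In a real image this would default to /etc/nginx/conf.d/default.conf
CONF="${NGINX_CONF_PATH:-./default.conf}"

cat > "$CONF" <<EOF
server {
    listen 80;
    server_name ${NGINX_SERVER_NAME};
    location / {
        proxy_pass http://${NGINX_UPSTREAM};
    }
}
EOF

# In the real image the script would end with: exec "$@"
```

In docker-compose you would then set NGINX_SERVER_NAME per environment under `environment:`, and the same image serves dev, staging, and production without rebuilding.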
Related
I have a docker-compose.yml containing these lines:
version: "3.9"
services:
  mssql:
    image: localhost/local_mssql_server:mssqlserver
    ports:
      - "1433:1433"
    volumes:
      - sqlfolder1234:/var/opt/mssql
The container starts up successfully and serves data.
But I seldom work with Windows, and I'd like to know: where is the host folder sqlfolder1234?
I tried searching for that folder with Windows Explorer; after an hour it still hadn't finished.
Where is the folder sqlfolder1234 on my host system?
This volume type is called a named volume. Here is some description from the official documentation (short syntax):
In the absence of named volumes with specified sources, Docker creates an anonymous volume for each task backing a service. Anonymous volumes do not persist after the associated containers are removed.
volumes:
  # Just specify a path and let the Engine create a volume
  - /var/lib/mysql
  # Specify an absolute path mapping
  - /opt/data:/var/lib/mysql
  # Path on the host, relative to the Compose file
  - ./cache:/tmp/cache
  # User-relative path
  - ~/configs:/etc/configs/:ro
  # Named volume
  - datavolume:/var/lib/mysql
If you want to keep persistent data somewhere visible on your host computer, I would use a different volume style: a path on the host, relative to the Compose file, which lets you see the volume folder right next to the docker-compose.yml:
volumes:
  - ./sqlfolder1234:/var/opt/mssql
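To answer the literal question of where Docker keeps the named volume, you can ask Docker itself (the volume name below assumes the default Compose project-name prefix):

```shell
# Compose prefixes volume names with the project (folder) name,
# so look for something like <project>_sqlfolder1234:
docker volume ls

# Print where Docker stores it on disk:
docker volume inspect --format '{{ .Mountpoint }}' myproject_sqlfolder1234
```

Note that on Docker Desktop for Windows this Mountpoint is a path inside Docker's Linux VM, not a regular Windows folder, which is why Windows Explorer never finds it.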
I want to configure CI/CD from Cloud Repositories so that my CMS (Directus) is built when I push to the main repository.
In the build-time, the project needs to access Cloud SQL. But I get this error:
I tried this database configuration with gcloud app deploy, and there it connects to Cloud SQL and runs.
cloudbuild.yaml (it crashes at the second step, so I omitted the other steps for simplicity):
steps:
  - name: node:16
    entrypoint: npm
    args: ['install']
    dir: cms
  - name: node:16
    entrypoint: npm
    args: ['run', 'start']
    dir: cms
    env:
      - 'NODE_ENV=PRODUCTION'
      - 'EXTENSIONS_PATH="./extensions"'
      - 'DB_CLIENT=pg'
      - 'DB_HOST=/cloudsql/XXX:europe-west1:XXX'
      - 'DB_PORT="5432"'
      - 'DB_DATABASE="XXXXX"'
      - 'DB_USER="postgres"'
      - 'DB_PASSWORD="XXXXXX"'
      - 'KEY="XXXXXXXX"'
      - 'SECRET="XXXXXXXXXXXX"'
node-pg (the node-postgres library) automatically appends /.s.PGSQL.5432, which is why it is not written in DB_HOST.
IAM roles:
How can I solve this error? I have read many answers on Stack Overflow, but none of them helped me. I found this article (https://cloud.google.com/sql/docs/postgres/connect-build) but didn't fully understand how to apply it to my case.
Without your full Cloud Build yaml it's hard to say for sure, but it looks like you aren't following the steps in the documentation correctly.
Roughly what you should be doing is:
Downloading the cloud_sql_proxy into your container space
In a follow-up step, start the cloud_sql_proxy and then (in the same step) run your script, connecting to the proxy via either TCP or a Unix socket.
I don't see your yaml describing the proxy at all.
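A rough sketch of what those two steps could look like in cloudbuild.yaml (the proxy version/URL, the connection name, and the sleep are assumptions; adapt them from the linked documentation):

```yaml
steps:
  # 1. Download the Cloud SQL Auth proxy into the shared /workspace volume.
  - name: gcr.io/cloud-builders/wget
    args:
      - '-O'
      - /workspace/cloud_sql_proxy
      - https://storage.googleapis.com/cloudsql-proxy/v1.33.2/cloud_sql_proxy.linux.amd64

  # 2. Start the proxy in the background, then run the app in the same step.
  - name: node:16
    entrypoint: bash
    dir: cms
    args:
      - '-c'
      - |
        chmod +x /workspace/cloud_sql_proxy
        mkdir -p /cloudsql
        /workspace/cloud_sql_proxy -dir=/cloudsql -instances=XXX:europe-west1:XXX &
        sleep 2  # give the proxy a moment to create the unix socket
        npm run start
```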
I spent most of the morning trying to figure out not only how to copy an initial SQL dump into the container, but also how to auto-import (execute) the dump into the DB. I have read countless other posts, none of which seem to work. I have the following docker compose file:
version: '3.8'
services:
  db:
    image: mariadb:10.5.8
    restart: always
    container_name: database
    environment:
      MYSQL_ROOT_PASSWORD: default
    volumes:
      - db-data:/var/lib/mysql
      - ./db-init:/docker-entrypoint-initdb.d
volumes:
  db-data:
The SQL dump is in the db-init folder. I got the docker-entrypoint-initdb.d approach from the official docs on Docker Hub.
After docker-compose up, the SQL file is correctly copied into docker-entrypoint-initdb.d but is never run against the DB; that is, the dump is never imported and the DB remains empty.
I have tried moving the volumes directive around in the docker compose file, as suggested in another post. From what I've read, the SQL dump should be imported automatically when the volume is mounted.
Is there no way to accomplish this via the docker-compose.yml only?
Edit: Switching the version to 2.x did not work.
EDIT2: Container logs:
2021-02-10 17:53:09+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/wordpress.sql
ERROR 1046 (3D000) at line 10: No database selected
From your logs, a quick Google search pointed to this post. Adding MYSQL_DATABASE to the environment should solve the issue, and the .sql should then be imported correctly on startup.
The final docker-compose.yml should look like this:
services:
  db:
    image: mariadb:10.5.8
    restart: always
    container_name: database
    environment:
      MYSQL_DATABASE: wordpress
      MYSQL_ROOT_PASSWORD: default
    volumes:
      - db-data:/var/lib/mysql
      - ./db-init:/docker-entrypoint-initdb.d/
volumes:
  db-data:
Maybe not worded as strongly as it should be, but the docs do mention this: "SQL files will be imported by default to the database specified by the MYSQL_DATABASE variable."
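If for some reason you can't set MYSQL_DATABASE, an alternative is to make the dump select its own database, e.g. by beginning the .sql file with something like this (the database name here is just an example):

```sql
-- First lines of db-init/wordpress.sql
CREATE DATABASE IF NOT EXISTS wordpress;
USE wordpress;
```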
My project uses Go modules hosted in private GitHub repositories.
Those are listed in my go.mod file, among the public ones.
On my local computer, I have no issue authenticating to the private repositories, by using the proper SSH key or API token in the project’s local git configuration file. The project compiles fine here.
Neither the git configuration nor the .netrc file is taken into account during the deployment (gcloud app deploy) and the build phase in the cloud, so my project's compilation fails there with an authentication error for the private modules.
What is the best way to fix that? I would like to avoid the workaround of including the private modules' source code in the deployed files; I would rather find a way to make the remote go or git use credentials I can provide.
You could try deploying it directly from a build. According to Accessing private GitHub repositories, you can set up git with a key and domain in one of the build steps.
After that you can specify a step to run the gcloud app deploy command, as suggested in the Quickstart for automating App Engine deployments with Cloud Build.
An example of the cloudbuild.yaml necessary to do this would be:
steps:
  # Decrypt the file containing the key
  - name: 'gcr.io/cloud-builders/gcloud'
    args:
      - kms
      - decrypt
      - --ciphertext-file=id_rsa.enc
      - --plaintext-file=/root/.ssh/id_rsa
      - --location=global
      - --keyring=my-keyring
      - --key=github-key
    volumes:
      - name: 'ssh'
        path: /root/.ssh
  # Set up git with key and domain.
  - name: 'gcr.io/cloud-builders/git'
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        chmod 600 /root/.ssh/id_rsa
        cat <<EOF >/root/.ssh/config
        Hostname github.com
        IdentityFile /root/.ssh/id_rsa
        EOF
        mv known_hosts /root/.ssh/known_hosts
    volumes:
      - name: 'ssh'
        path: /root/.ssh
  # Deploy app
  - name: "gcr.io/cloud-builders/gcloud"
    args: ["app", "deploy"]
timeout: "16000s"
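For the go tool specifically, it can also help to rewrite module-fetch URLs to use your SSH key and to mark the private modules so the public module proxy is skipped. This is a sketch of such a build step; `your-org` is a placeholder:

```shell
# Rewrite HTTPS GitHub URLs to SSH so git uses the deploy key set up above
# (alternatively, an access token can be injected with an https insteadOf rule).
git config --global url."git@github.com:".insteadOf "https://github.com/"

# Tell the go tool not to use the public module proxy and checksum database
# for your private modules (the pattern is a placeholder).
export GOPRIVATE='github.com/your-org/*'
```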
I'm trying to run a JHipster project based on this tutorial by the creator himself (https://www.youtube.com/watch?v=d1MEM8PdAzQ), but it can't connect to Postgres.
See errors below:
Caused by: org.postgresql.util.PSQLException: The server requested password-based authentication, but no password was provided.
at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:473)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:203)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:65)
at org.postgresql.jdbc2.AbstractJdbc2Connection.<init>(AbstractJdbc2Connection.java:146)
at org.postgresql.jdbc3.AbstractJdbc3Connection.<init>(AbstractJdbc3Connection.java:35)
at org.postgresql.jdbc3g.AbstractJdbc3gConnection.<init>(AbstractJdbc3gConnection.java:22)
at org.postgresql.jdbc4.AbstractJdbc4Connection.<init>(AbstractJdbc4Connection.java:47)
at org.postgresql.jdbc42.AbstractJdbc42Connection.<init>(AbstractJdbc42Connection.java:21)
at org.postgresql.jdbc42.Jdbc42Connection.<init>(Jdbc42Connection.java:28)
at org.postgresql.Driver.makeConnection(Driver.java:415)
at org.postgresql.Driver.connect(Driver.java:282)
at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:95)
at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:101)
at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:316)
at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:518)
How do I connect JHipster with PostgreSQL? I am a newbie with JHipster.
JHipster creates three configuration files:
application.yml - the main Spring Boot configuration file
application-dev.yml
application-prod.yml
The application.yml file contains common properties; the other two hold properties specific to the development and production environments.
If you look at application-dev.yml, you'll see something like the following:
datasource:
  type: com.zaxxer.hikari.HikariDataSource
  url: jdbc:postgresql://localhost:5432/myapp
  username: myapp
  password:
However, you still have to create your PostgreSQL database. The easiest way is via the pgAdmin tool, but you can also create it with command-line tools; a quick Google search will help you there.
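For the command-line route, the role and database from the application-dev.yml snippet above could be created like this (the names match that example; adjust the password and your pg_hba.conf auth method to your setup):

```sql
-- Run inside psql as the postgres superuser (e.g. sudo -u postgres psql).
CREATE ROLE myapp LOGIN PASSWORD '';
CREATE DATABASE myapp OWNER myapp;
```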
If you don't want to work with a version 3 Compose file in Docker Swarm mode, where the best practice is to use Docker secrets, you can create an .application.env file and reference it in your version 2 docker-compose.yml with env_file:
$ cat .application.env
SPRING_PROFILES_ACTIVE=prod,swagger
SPRING_DATASOURCE_URL=jdbc:postgresql://postgresql:5432/database_name
SPRING_DATASOURCE_USER=database_user
SPRING_DATASOURCE_PASSWORD=database_password
JHIPSTER_SLEEP=10
[...]
At any rate, I use it this way to keep the credentials out of my JHipster projects, which are on GitHub, where I also want to commit the *.yml files.
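Wiring that file into a version 2 Compose file then looks like this (the service and image names are examples):

```yaml
version: '2'
services:
  app:
    image: my-jhipster-app
    env_file:
      - .application.env
```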