MongoDB in a container: seed with multiple collections - database

I'm trying to seed my MongoDB instance running in a container with existing collections that live outside the container.
My docker-compose.yml looks like this:
version: "3"
services:
webapi:
image: webapp:develop
container_name: web_api
build:
args:
buildconfig: Debug
context: ../src/api
dockerfile: Dockerfile
environment:
- ASPNETCORE_ENVIRONMENT=Development
- ASPNETCORE_URLS=http://+:5003
ports:
- "5003:5003"
depends_on:
- mongodb
mongodb:
image: mongo:latest
container_name: mongodb
ports:
- "27017:27017"
mongo-seed:
build: ./mongo-seed
links:
- mongodb
mongo-seed/Dockerfile:
FROM mongo
COPY initA.json /initA.json
CMD mongoimport --host mongodb --db Database --collection A --type json --file /initA.json --jsonArray --mode merge
FROM mongo
COPY initB.json /initB.json
CMD mongoimport --host mongodb --db TestListDb --collection B --type json --file /initB.json --jsonArray --mode merge
But this doesn't do the trick: only collection 'B' ends up in the database, because a Dockerfile honors only the last CMD (and the second FROM starts a new build stage), so the first import never runs.
How can I import multiple collections into one database?

I found a solution for this. The answer also shows how to configure the network so the web app can see the mongodb container.
Structure of files:
Web.Application
├── docker-compose.yml
└── mongo
    ├── dump
    │   └── DatabaseDb
    ├── Dockerfile
    └── restore.sh
docker-compose.yml
version: '3.4'
services:
  webapp:
    container_name: webapp
    image: ${DOCKER_REGISTRY-}webapp
    build:
      context: ./Web.Application/
      dockerfile: Dockerfile
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    depends_on:
      - mongo
    networks:
      clusternetwork:
        ipv4_address: 1.1.0.1
  mongo:
    container_name: mongo
    build:
      context: ./Web.Application/mongo/
      dockerfile: Dockerfile
    networks:
      clusternetwork:
        ipv4_address: 1.1.0.12
networks:
  clusternetwork:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 1.1.0.0/24
./Web.Application/mongo/Dockerfile:
FROM mongo AS start
COPY . .
COPY restore.sh /docker-entrypoint-initdb.d/
./Web.Application/mongo/restore.sh:
#!/bin/bash
mongorestore --db DatabaseDb ./dump/DatabaseDb
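As an aside, if you prefer the mongoimport approach from the question, a minimal sketch that seeds both collections from one seed container by chaining both imports in a single CMD (file names taken from the question; both imports target the Database db here, since the goal is one database):
FROM mongo
COPY initA.json /initA.json
COPY initB.json /initB.json
# Chain both imports in the one CMD that the final image will run
CMD mongoimport --host mongodb --db Database --collection A --type json --file /initA.json --jsonArray --mode merge && \
    mongoimport --host mongodb --db Database --collection B --type json --file /initB.json --jsonArray --mode merge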

Related

Traefik Django & React setup

Recently I came across server configuration using GitLab CI/CD and docker-compose. I have two separate repositories on GitLab, one for Django and the other for React JS.
The Django repo contains the following production.yml file:
version: '3'
volumes:
  production_postgres_data: {}
  production_postgres_data_backups: {}
  production_traefik: {}
services:
  django: &django
    build:
      context: .
      dockerfile: ./compose/production/django/Dockerfile
    image: one_sell_production_django
    platform: linux/x86_64
    expose: # new
      - 5000
    depends_on:
      - postgres
      - redis
    env_file:
      - ./.envs/.production/.django
      - ./.envs/.production/.postgres
    command: /start
    labels: # new
      - "traefik.enable=true"
      - "traefik.http.routers.django.rule=Host(`core.lwe.local`)"
  postgres:
    build:
      context: .
      dockerfile: ./compose/production/postgres/Dockerfile
    image: one_sell_production_postgres
    expose:
      - 5432
    volumes:
      - production_postgres_data:/var/lib/postgresql/data:Z
      - production_postgres_data_backups:/backups:z
    env_file:
      - ./.envs/.production/.postgres
  traefik: # new
    image: traefik:v2.2
    ports:
      - 80:80
      - 8081:8080
    volumes:
      - "./compose/production/traefik/traefik.dev.toml:/etc/traefik/traefik.toml"
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
  redis:
    image: redis:6
This works perfectly with Traefik. I also have the following compose file in the React JS repo:
version: '3.8'
services:
  frontend:
    build:
      context: ./
      dockerfile: Dockerfile
    expose:
      - 3000
    labels: # new
      - "traefik.enable=true"
      - "traefik.http.routers.django.rule=Host(`lwe.local`)"
    restart: 'always'
    env_file:
      - .env
Now I don't know how to connect the Django and React JS repos using Traefik, nor how the CI/CD configuration should look. The following is the CI/CD configuration for the Django repo (I omitted unnecessary info and included just the deploy stage):
deploy:
  stage: deploy
  tags:
    - docker
  when: always
  before_script:
    - mkdir -p .envs/.production/
    - touch .envs/.production/.django
    - touch .envs/.production/.postgres
    - touch .env
    - chmod +x ./setup_env.sh
    - sh setup_env.sh
    - less .envs/.production/.django
    - less .envs/.production/.postgres
    - docker-compose -f production.yml build
    - docker-compose -f production.yml run --rm django python manage.py migrate
  script:
    - docker-compose -f local.yml up -d

React App doesn't refresh on changes using Docker-Compose

Consider this docker-compose.yml:
version: '3'
services:
  frontend:
    build:
      context: ./frontend
    container_name: frontend
    command: npm start
    stdin_open: true
    tty: true
    volumes:
      - ./frontend:/usr/app
    ports:
      - "3000:3000"
  backend:
    build:
      context: ./backend
    container_name: backend
    command: npm start
    environment:
      - PORT=3001
      - MONGO_URL=mongodb://api_mongo:27017
    volumes:
      - ./backend/src:/usr/app/src
    ports:
      - "3001:3001"
  api_mongo:
    image: mongo:latest
    container_name: api_mongo
    volumes:
      - mongodb_api:/data/db
    ports:
      - "27017:27017"
volumes:
  mongodb_api:
And the React Dockerfile:
FROM node:14.10.1-alpine3.12
WORKDIR /usr/app
COPY package.json .
RUN npm i
COPY . .
Folder structure:
.
├── frontend
├── backend
└── docker-compose.yml
When I change files inside src, the changes aren't reflected inside the container.
How can we fix this?
Here is the answer:
If you are running on Windows, please read this: Create React App has some issues detecting when files get changed on Windows-based machines. To fix this, please do the following:
In the root project directory, create a file called .env
Add the following text to the file and save it: CHOKIDAR_USEPOLLING=true
That's all!
Don't use the same directory name for different services. Instead of /usr/app for both, use /client/app for the client and /server/app for the backend; then it all works. Also set CHOKIDAR_USEPOLLING=true in the environment, use FROM node:16.5.0-alpine, and you can keep stdin_open: true (see the sketch below).
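For reference, here is a minimal sketch of the frontend service with these suggestions applied; the /client/app path follows the advice above (the Dockerfile's WORKDIR would change to match), everything else comes from the question's compose file:
frontend:
  build:
    context: ./frontend
  container_name: frontend
  command: npm start
  stdin_open: true
  tty: true
  environment:
    - CHOKIDAR_USEPOLLING=true # force polling so CRA sees file changes on Windows
  volumes:
    - ./frontend:/client/app # a mount path not shared with other services
  ports:
    - "3000:3000"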

How can I use my database in a container after pushing the container to Docker Hub? [duplicate]

I am trying to distribute a set of connected applications running in several linked containers that includes a mongo database, which is required to:
- be distributed containing some seed data;
- allow users to add additional data.
Ideally the data will also be persisted in a linked data volume container.
I can get the data into the mongo container using a mongo base instance that doesn't mount any volumes (Docker Hub image: psychemedia/mongo_nomount - this is essentially the base mongo Dockerfile without the VOLUME /data/db statement) and a Dockerfile config along the lines of:
ADD . /files
WORKDIR /files
RUN mkdir -p /data/db && mongod --fork --logpath=/tmp/mongodb.log && sleep 20 && \
mongoimport --db testdb --collection testcoll --type csv --headerline --file ./testdata.csv #&& mongod --shutdown
where ./testdata.csv is in the same directory (./mongo-with-data) as the Dockerfile.
My docker-compose config file includes the following:
mongo:
  #image: mongo
  build: ./mongo-with-data
  ports:
    - "27017:27017"
  #Ideally we should be able to mount this against a host directory
  #volumes:
  #  - ./db/mongo/:/data/db
  #volumes_from:
  #  - devmongodata
#devmongodata:
#  command: echo created
#  image: busybox
#  volumes:
#    - /data/db
Whenever I try to mount a VOLUME it seems as if the original seeded data, which is stored in /data/db, is deleted. I guess that when a volume is mounted to /data/db it replaces whatever is there already.
That said, the Docker user guide suggests that "Volumes are initialized when a container is created. If the container's base image contains data at the specified mount point, that existing data is copied into the new volume upon volume initialization." So I expected the data to persist if I placed the VOLUME command after the seeding RUN command.
So what am I doing wrong?
The long view is that I want to automate the build of several linked containers, and then distribute a Vagrantfile/docker-compose YAML file that will fire up a set of linked apps, that includes a pre-seeded mongo database with a (partially pre-populated) persistent data container.
I do this using another docker container whose only purpose is to seed mongo, then exit. I suspect this is the same idea as ebaxt's, but when I was looking for an answer to this, I just wanted to see a quick-and-dirty, yet straightforward, example. So here is mine:
docker-compose.yml
mongodb:
  image: mongo
  ports:
    - "27017:27017"
mongo-seed:
  build: ./mongo-seed
  depends_on:
    - mongodb
# my webserver which uses mongo (not shown in example)
webserver:
  build: ./webserver
  ports:
    - "80:80"
  depends_on:
    - mongodb
mongo-seed/Dockerfile
FROM mongo
COPY init.json /init.json
CMD mongoimport --host mongodb --db reach-engine --collection MyDummyCollection --type json --file /init.json --jsonArray
mongo-seed/init.json
[
  {
    "name": "Joe Smith",
    "email": "jsmith@gmail.com",
    "age": 40,
    "admin": false
  },
  {
    "name": "Jen Ford",
    "email": "jford@gmail.com",
    "age": 45,
    "admin": true
  }
]
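To run this seed, presumably a plain docker-compose up is enough; the mongo-seed container runs its import against mongodb and then exits:
docker-compose up -d
# or, to re-run just the seed afterwards:
docker-compose run --rm mongo-seed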
I have found it useful to use a custom Docker image and volumes, instead of creating another container for seeding.
File Structure
.
├── docker-compose.yml
└── mongo
    ├── data
    ├── Dockerfile
    └── init-db.d
        └── seed.js
Every file location mentioned in the Dockerfile/docker-compose.yml is relative to the location of docker-compose.yml.
Dockerfile
FROM mongo:3.6
COPY ./init-db.d/seed.js /docker-entrypoint-initdb.d
docker-compose.yml
version: '3'
services:
  db:
    build: ./mongo
    restart: always
    volumes:
      - ./mongo/data:/data/db # helps to store MongoDB data in ./mongo/data
    environment:
      MONGO_INITDB_ROOT_USERNAME: {{USERNAME}}
      MONGO_INITDB_ROOT_PASSWORD: {{PWD}}
      MONGO_INITDB_DATABASE: {{DBNAME}}
seed.js
// Since seeding in Mongo is done in alphabetical order, it is important to keep
// file names alphabetically ordered if multiple files are to be run.
db.test.drop();
db.test.insertMany([
  {
    _id: 1,
    name: 'Tensor',
    age: 6
  },
  {
    _id: 2,
    name: 'Flow',
    age: 10
  }
]);
docker-entrypoint-initdb.d can also be used for creating users and other MongoDB administration tasks; just create an alphabetically ordered JS script that calls createUser etc. (a sketch follows).
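For example, a minimal user-creation script could look like the sketch below; the user name, password, and role are placeholders, and inside docker-entrypoint-initdb.d scripts db is bound to the database named by MONGO_INITDB_DATABASE:
// 00-create-user.js - named so it sorts before/after seed.js as required
db.createUser({
  user: 'appUser',    // placeholder user name
  pwd: 'appPassword', // placeholder; prefer Docker secrets (see below)
  roles: [{ role: 'readWrite', db: db.getName() }]
});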
For more details on how to customize the MongoDB Docker service, read this.
Also, it is good to keep your usernames and passwords out of public view: DO NOT push credentials to a public git repository; use Docker secrets instead. Also read this tutorial on secrets.
Do note that it is not necessary to go into Docker Swarm mode to use secrets; Compose files support secrets as well. Check this.
Secrets can also be used in MongoDB Docker services; a sketch follows.
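A minimal sketch of file-based secrets in a Compose file, assuming the official mongo image's *_FILE variants of the init variables and illustrative file paths:
version: '3.7'
services:
  db:
    build: ./mongo
    environment:
      MONGO_INITDB_ROOT_USERNAME_FILE: /run/secrets/mongo_root_user
      MONGO_INITDB_ROOT_PASSWORD_FILE: /run/secrets/mongo_root_password
    secrets:
      - mongo_root_user
      - mongo_root_password
secrets:
  mongo_root_user:
    file: ./secrets/mongo_root_user.txt # illustrative path; keep out of git
  mongo_root_password:
    file: ./secrets/mongo_root_password.txt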
Current answer, based on @Jeff Fairley's answer and updated according to the new Docker docs:
docker-compose.yml
version: "3.5"
services:
mongo:
container_name: mongo_dev
image: mongo:latest
ports:
- 27017:27017
networks:
- dev
mongo_seed:
container_name: mongo_seed
build: .
networks:
- dev
depends_on:
- mongo
networks:
dev:
name: dev
driver: bridge
Dockerfile
FROM mongo:latest
COPY elements.json /elements.json
CMD mongoimport --host mongo --db mendeleev --collection elements --drop --file /elements.json --jsonArray
You probably need to rebuild current images.
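If in doubt, a full rebuild along these lines should pick up the changes (assuming the compose file is in the current directory):
docker-compose build --no-cache
docker-compose up --force-recreate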
You can use this image, which provides a docker container for many jobs (import, export, dump).
Look at the example using docker-compose.
You can use the Mongo Seeding Docker image.
Why?
- You have the Docker image ready to go
- You are not tied to JSON files: JavaScript and TypeScript files are supported as well (including optional model validation with TypeScript)
Example usage with Docker Compose:
version: '3'
services:
  database:
    image: 'mongo:3.4.10'
    ports:
      - '27017:27017'
  api:
    build: ./api/
    command: npm run dev
    volumes:
      - ./api/src/:/app/src/
    ports:
      - '3000:3000'
      - '9229:9229'
    links:
      - database
    depends_on:
      - database
      - data_import
    environment:
      - &dbName DB_NAME=dbname
      - &dbPort DB_PORT=27017
      - &dbHost DB_HOST=database
  data_import:
    image: 'pkosiec/mongo-seeding:3.0.0'
    environment:
      - DROP_DATABASE=true
      - REPLACE_ID=true
      - *dbName
      - *dbPort
      - *dbHost
    volumes:
      - ./data-import/dev/:/data-import/dev/
    working_dir: /data-import/dev/data/
    links:
      - database
    depends_on:
      - database
Disclaimer: I am the author of this library.
Here is a working MongoDB database seed with docker-compose; use the files below to seed the database (a run command sketch follows the compose file).
Dockerfile
FROM mongo:3.6.21
COPY init.json /init.json
CMD mongoimport --uri mongodb://mongodb:27017/testdb --collection users --type json --file /init.json --jsonArray
docker-compose.yml
version: "3.7"
services:
mongodb:
container_name: mongodb
image: mongo:3.6.21
environment:
- MONGO_INITDB_DATABASE=testdb
volumes:
- ./data:/data/db
ports:
- "27017:27017"
mongo_seed:
build: ./db
depends_on:
- mongodb
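The command itself is not shown above; presumably a plain build-and-up is all that is needed, with mongo_seed running its import against mongodb and exiting:
docker-compose up --build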
To answer my own question:
- a simple YAML file creates a simple mongo container linked to a data volume container, fired up by Vagrant docker-compose;
- in the Vagrantfile, code along the lines of:
config.vm.provision :shell, :inline => <<-SH
  docker exec -it -d vagrant_mongo_1 mongoimport --db a5 --collection roads --type csv --headerline --file /files/AADF-data-minor-roads.csv
SH
imports the data;
- package the box;
- distribute the box.
For the user, a simple Vagrantfile loads the box and runs a simple docker-compose YAML script to start the containers and mount the mongo db against the data volume container.

MongoNetwork ECONNREFUSED when renaming service and database

I have a problem starting MongoDB with Docker. I have some code which I want to reuse for a different purpose. After I made a copy of that code everything worked just fine, but after renaming the service and database and building everything again with
docker-compose -f docker-compose.dev.yml build
and running with
docker-compose -f docker-compose.dev.yml up
mongodb won't start and I get the ECONNREFUSED error. I tried to remove all the services and containers with
docker-compose -f docker-compose.dev.yml rm
docker rm $(docker ps -a -q)
but nothing seems to help. I also tried to discard all the changes I made (to the point where it worked) but it still doesn't work. I am quite new to programming itself and have no idea what is happening. What am I missing?
Also including my config.js, .env and docker-compose.dev.yml files.
config.js
const config = {
  http: {
    port: parseInt(process.env.PORT) || 9000,
  },
  mongo: {
    host: process.env.MONGO_HOST || 'mongodb://localhost:27017',
    dbName: process.env.MONGO_DB_NAME || 'myresume',
  },
};
module.exports = config;
.env
NODE_ENV=development
MONGO_HOST=mongodb://db:27017
MONGO_DB_NAME=myresume
PORT=9001
docker-compose.dev.yml
version: "3"
services:
myresume-service:
build: .
container_name: myresume-service
command: npm run dev
ports:
- 9001:9001
links:
- mongo-db
depends_on:
- mongo-db
env_file:
- .env
volumes:
- ./src:/usr/myresume-service/src
mongo-db:
container_name: mongo-db
image: mongo
ports:
- 27017:27017
volumes:
- myresume-service-mongodata:/data/db
environment:
MONGO_INITDB_DATABASE: "myresume"
volumes:
myresume-service-mongodata:
I am not completely sure, but I think your service needs the env var MONGO_HOST=mongodb://mongo-db:27017 instead of the one you have; the two services are only visible to each other that way. I believe you also need a network to connect the two of them.
Something like this:
version: "3"
networks:
my-network:
external: true
services:
myresume-service:
build: .
container_name: myresume-service
command: npm run dev
ports:
- 9001:9001
links:
- mongo-db
depends_on:
- mongo-db
env_file:
- .env
volumes:
- ./src:/usr/myresume-service/src
networks:
- my-network
mongo-db:
container_name: mongo-db
image: mongo
ports:
- 27017:27017
volumes:
- myresume-service-mongodata:/data/db
environment:
MONGO_INITDB_DATABASE: "myresume"
networks:
- my-network
volumes:
myresume-service-mongodata:
You probably need to create the network first, using the command:
docker network create my-network

Docker - run Dockerfile before or after compose, and how?

Via a docker-compose.yml I compose an MSSQL server.
version: "3"
services:
db:
image: mcr.microsoft.com/mssql/server:2017-latest
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=SecretPassword
- MSSQL_PID=Express
- MSSQL_LCID=1031
- MSSQL_COLLATION=Latin1_General_CI_AS
- MSSQL_MEMORY_LIMIT_MB=8192
- MSSQL_AGENT_ENABLED=true
- TZ=Europe/Berlin
ports:
- 1433:1433
- 49200:1433
volumes:
- ./data:/var/opt/mssql/data
- ./backup:/var/opt/mssql/backup
restart: always
This works fine. But how can I extend this image with mssql-server-fts?
On GitHub I found this, but how can I combine a docker-compose.yml with a Dockerfile?
https://github.com/Microsoft/mssql-docker/blob/master/linux/preview/examples/mssql-agent-fts-ha-tools/Dockerfile
Here is the documentation on the docker-compose file.
To use a Dockerfile from the docker-compose.yml, you need to add a build section. If the Dockerfile and docker-compose.yml are in the same directory, the service section of the docker-compose.yml would look like the following:
version: '3'
services:
  webapp:
    build:
      context: .
      dockerfile: Dockerfile
context is set to the root directory; this path is relative to the location of the docker-compose.yml file.
dockerfile is set to the name of the Dockerfile, in this case Dockerfile.
I hope that this helps. A sketch applying this to the question's setup follows.
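Applied to the question, it could look like the following; this assumes the linked mssql-agent-fts-ha-tools Dockerfile has been saved as Dockerfile next to the compose file:
version: "3"
services:
  db:
    build:
      context: .
      dockerfile: Dockerfile # the Dockerfile from the linked mssql-docker example
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=SecretPassword
    ports:
      - 1433:1433
    volumes:
      - ./data:/var/opt/mssql/data
    restart: always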
Add the path to the Dockerfile you want to include in your docker-compose.
For example:
version: "3"
services:
  dockerFileExample:
    build:
      context: .
      dockerfile: Dockerfile # or a custom file name, e.g. ./docker-file-frontend
Note that the build shorthand takes a directory (the build context), not a file, so a custom Dockerfile name belongs under the dockerfile key.
Here is link to the documentation: https://docs.docker.com/compose/reference/build/
