How to run init.sql using docker-compose.yml? - sql-server

I'm creating a SQL Server instance in a Docker container using docker-compose.yml, but I can't get my init.sql to execute when I run Docker.
Code from docker-compose.yml:
version: "3.9"
services:
mssql-service:
image: mcr.microsoft.com/mssql/server:2019-latest # Or whatever version you want
container_name: mssql
restart: unless-stopped
ports:
- "1433:1433"
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=my_password
- MSSQL_PID=Developer
- MSSQL_AGENT_ENABLED=True
volumes:
- sqlvolume:/var/opt/mssql
- C:/Users/joel_/Documents/GitLab/Emaresa/apps/api/init.sql:/docker-entrypoint-initdb.d/init.sql
Code from init.sql:
USE master;
GO
IF NOT EXISTS (SELECT name FROM sys.databases WHERE name = 'my_database')
CREATE DATABASE my_database;
GO
My folder structure:
project/
│ .gitignore
│ alembic.ini
│ database.env
│ docker-command.bash
│ docker-compose.yml
│ init.sql
│ poetry.lock
│ pyproject.toml
I hope someone can help me with this, because I haven't figured out how to execute init.sql and create my_database.
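For context on what is going wrong: the /docker-entrypoint-initdb.d convention comes from the official postgres and mysql images; the mcr.microsoft.com/mssql/server image does not run scripts from that directory, so the mounted init.sql is silently ignored. A common workaround is a one-shot companion service that runs the script with the sqlcmd tool shipped in the image (which also handles the GO batch separators). A minimal sketch, not an official mechanism of the image, assuming init.sql sits next to docker-compose.yml as in the folder listing above and reusing the sa password from the compose file:
  mssql-init:
    image: mcr.microsoft.com/mssql/server:2019-latest
    depends_on:
      - mssql-service
    volumes:
      - ./init.sql:/init.sql
    # crude fixed wait for the server to come up; a retry loop would be more robust
    command: /bin/bash -c 'sleep 20; /opt/mssql-tools/bin/sqlcmd -S mssql-service -U sa -P my_password -d master -i /init.sql'
This service goes under services: alongside mssql-service and exits once the script has run.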

Related

Ingress configuration as String with Flink statefun

What I am trying to do
After following the Python walkthrough, I am trying to modify the module.yaml file so the ingress and egress use String instead of Protobuf. I have not really modified most of the files, only module.yaml (to configure a string ingress) and greeter.py (to ignore both state and protobuf messages and only print the input received from the ingress).
The architecture of the project has not been changed:
$ tree statefun-walkthrough
statefun-walkthrough
├── Dockerfile
├── docker-compose.yml
├── generator
│ ├── Dockerfile
│ ├── event-generator.py
│ └── messages_pb2.py
├── greeter
│ ├── Dockerfile
│ ├── greeter.py
│ ├── messages.proto
│ ├── messages_pb2.py
│ └── requirements.txt
└── module.yaml
The configuration files and Python application used:
docker-compose.yml
version: "2.1"
services:
zookeeper:
image: wurstmeister/zookeeper
ports:
- "2181:2181"
kafka-broker:
image: wurstmeister/kafka:2.12-2.0.1
ports:
- "9092:9092"
environment:
HOSTNAME_COMMAND: "route -n | awk '/UG[ \t]/{print $$2}'"
KAFKA_CREATE_TOPICS: "names:1:1,greetings:1:1"
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
depends_on:
- zookeeper
volumes:
- /var/run/docker.sock:/var/run/docker.sock
master: # for flink-statefun
build:
context: .
expose:
- "6123"
ports:
- "8081:8081"
environment:
- ROLE=master
- MASTER_HOST=master
volumes:
- ./checkpoint-dir:/checkpoint-dir
worker: # for flink-statefun
build:
context: .
expose:
- "6121"
- "6122"
depends_on:
- master
- kafka-broker
links:
- "master:master"
- "kafka-broker:kafka-broker"
environment:
- ROLE=worker
- MASTER_HOST=master
volumes:
- ./checkpoint-dir:/checkpoint-dir
python-worker: # greeter application
build:
context: ./greeter
expose:
- "8000"
event-generator: # reading and writting in kafka topic
build:
context: generator
dockerfile: Dockerfile
depends_on:
- kafka-broker
module.yaml
version: "1.0"
module:
meta:
type: remote
spec:
functions:
- function:
meta:
kind: http
type: example/greeter
spec:
endpoint: http://python-worker:8000/statefun
maxNumBatchRequests: 500
timeout: 2min
ingresses:
- ingress:
meta:
type: statefun.kafka.io/ingress
id: example/names
spec:
address: kafka-broker:9092
consumerGroupId: my-group-id
topics:
- topic: names
valueType: io.statefun.types/string
targets:
- example/greeter
egresses:
- egress:
meta:
type: statefun.kafka.io/egress
id: example/greets
spec:
address: kafka-broker:9092
deliverySemantic:
type: exactly-once
transactionTimeoutMillis: 100000
greeter.py
from statefun import StatefulFunctions
from statefun import RequestReplyHandler
from statefun import kafka_egress_record

functions = StatefulFunctions()

@functions.bind("example/greeter")
def greet(context, message):
    # only print the input received from the ingress
    print(type(message), message)

handler = RequestReplyHandler(functions)

#
# Serve the endpoint
#
from flask import request
from flask import make_response
from flask import Flask

app = Flask(__name__)

@app.route('/statefun', methods=['POST'])
def handle():
    response_data = handler(request.data)
    response = make_response(response_data)
    response.headers.set('Content-Type', 'application/octet-stream')
    return response

if __name__ == "__main__":
    app.run()
The error
After running docker-compose up -d --build, the Flink master stops with the following error:
2022-02-14 18:11:14,795 INFO org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] - Shutting StatefulFunctionsClusterEntryPoint down with application status FAILED. Diagnostics org.apache.flink.util.FlinkException: Could not create the DispatcherResourceManagerComponent.
    at org.apache.flink.runtime.entrypoint.component.DefaultDispatcherResourceManagerComponentFactory.create(DefaultDispatcherResourceManagerComponentFactory.java:256)
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:219)
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$0(ClusterEntrypoint.java:172)
    at org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:171)
    at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:520)
    at org.apache.flink.statefun.flink.launcher.StatefulFunctionsClusterEntryPoint.main(StatefulFunctionsClusterEntryPoint.java:99)
Caused by: org.apache.flink.util.FlinkRuntimeException: Could not retrieve the JobGraph.
    at org.apache.flink.runtime.dispatcher.runner.JobDispatcherLeaderProcessFactoryFactory.createFactory(JobDispatcherLeaderProcessFactoryFactory.java:57)
    at org.apache.flink.runtime.dispatcher.runner.DefaultDispatcherRunnerFactory.createDispatcherRunner(DefaultDispatcherRunnerFactory.java:51)
    at org.apache.flink.runtime.entrypoint.component.DefaultDispatcherResourceManagerComponentFactory.create(DefaultDispatcherResourceManagerComponentFactory.java:194)
    ... 6 more
Caused by: org.apache.flink.util.FlinkException: Could not create the JobGraph from the provided user code jar.
    at org.apache.flink.statefun.flink.launcher.StatefulFunctionsJobGraphRetriever.retrieveJobGraph(StatefulFunctionsJobGraphRetriever.java:107)
    at org.apache.flink.runtime.dispatcher.runner.JobDispatcherLeaderProcessFactoryFactory.createFactory(JobDispatcherLeaderProcessFactoryFactory.java:55)
    ... 8 more
Caused by: org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: There are no routers defined.
    at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:302)
    at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:198)
    at org.apache.flink.client.program.PackagedProgramUtils.getPipelineFromProgram(PackagedProgramUtils.java:150)
    at org.apache.flink.client.program.PackagedProgramUtils.createJobGraph(PackagedProgramUtils.java:77)
    at org.apache.flink.statefun.flink.launcher.StatefulFunctionsJobGraphRetriever.retrieveJobGraph(StatefulFunctionsJobGraphRetriever.java:101)
    ... 9 more
Caused by: java.lang.IllegalStateException: There are no routers defined.
    at org.apache.flink.statefun.flink.core.StatefulFunctionsUniverseValidator.validate(StatefulFunctionsUniverseValidator.java:31)
    at org.apache.flink.statefun.flink.core.StatefulFunctionsJob.main(StatefulFunctionsJob.java:76)
    at org.apache.flink.statefun.flink.core.StatefulFunctionsJob.main(StatefulFunctionsJob.java:52)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:288)
    ... 13 more
I do not know whether this exception (Caused by: org.apache.flink.client.program.ProgramInvocationException: The main method caused an error: There are no routers defined.) is the main problem, or why it is happening.
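One observation that may explain the failure (an assumption, not a confirmed fix): valueType: io.statefun.types/string belongs to the StateFun 3.x module format, while the rest of this module.yaml (version: "1.0", kind: http) is the 2.x format used by the walkthrough. In the 2.x runtime, the routers the validator complains about are generated from the targets of a routable Kafka ingress, which in the original Protobuf walkthrough looks roughly like this (the typeUrl value is assumed from that walkthrough):
    ingresses:
      - ingress:
          meta:
            type: statefun.kafka.io/routable-protobuf-ingress
            id: example/names
          spec:
            address: kafka-broker:9092
            consumerGroupId: my-group-id
            topics:
              - topic: names
                typeUrl: com.googleapis/example.GreetRequest
                targets:
                  - example/greeter
With a plain statefun.kafka.io/ingress and no routable targets recognized by the 2.x loader, no routers get created, which matches the "There are no routers defined" error.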

Connect to Docker Compose SQL Server from Visual Studio's SQL Server Object Explorer and save database to local PC

Two questions:
Is it possible to connect to the Docker Compose SQL Server from Visual Studio's SQL Server Object Explorer? If so, how?
Visual Studio 2019 usually saves local databases under C:\Users\Username\ProjectName.mdf. Can I make docker-compose save the database on my local PC instead of inside the Linux container? For example in C:\SkybotDb.
docker-compose.yml
version: '3.4'
services:
db:
container_name: skybotdb
image: mcr.microsoft.com/mssql/server:2019-latest
environment:
SA_PASSWORD: "SkybotPassword123456"
ACCEPT_EULA: "Y"
ports:
- 1433:1433
restart: unless-stopped
networks:
- webnet
skybot.web:
image: ${DOCKER_REGISTRY-}skybotweb
build:
context: .
dockerfile: src/Skybot.Web/Dockerfile
environment:
- ASPNETCORE_ENVIRONMENT=Development
- ASPNETCORE_URLS=https://0.0.0.0:5001
- "UseInMemoryDatabase=true"
- "ConnectionStrings__DefaultConnection=Server=db;Database=SkybotDb;User=sa;Password=SkybotPassword123456;MultipleActiveResultSets=true"
- ElasticConfiguration__Uri=http://es01:9200
ports:
- 5000:5000
- 5001:5001
restart: on-failure
networks:
- webnet
depends_on:
- db
- es01
es01:
image: docker.elastic.co/elasticsearch/elasticsearch:7.12.0
container_name: es01
environment:
- node.name=es01
- cluster.name=es-docker-cluster
- cluster.initial_master_nodes=es01
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- data01:/usr/share/elasticsearch/data
ports:
- 9200:9200
restart: unless-stopped
networks:
- webnet
kib01:
image: docker.elastic.co/kibana/kibana:7.10.1
container_name: kib01
ports:
- 5601:5601
environment:
ELASTICSEARCH_URL: http://es01:9200
ELASTICSEARCH_HOSTS: http://es01:9200
restart: unless-stopped
networks:
- webnet
volumes:
data01:
driver: local
networks:
webnet:
driver: bridge
Skybot.Web.Dockerfile
#See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.
FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app
EXPOSE 5000
EXPOSE 5001
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY ["src/Skybot.Web/Skybot.Web.csproj", "src/Skybot.Web/"]
COPY ["src/Skybot.Application/Skybot.Application.csproj", "src/Skybot.Application/"]
COPY ["src/Skybot.Domain/Skybot.Domain.csproj", "src/Skybot.Domain/"]
COPY ["src/Skybot.Infrastructure/Skybot.Infrastructure.csproj", "src/Skybot.Infrastructure/"]
RUN dotnet restore "src/Skybot.Web/Skybot.Web.csproj"
COPY . .
WORKDIR "/src/src/Skybot.Web"
RUN dotnet build "Skybot.Web.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "Skybot.Web.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "Skybot.Web.dll"]

React App doesn't refresh on changes using Docker-Compose

Consider this Docker Compose file:
version: '3'
services:
  frontend:
    build:
      context: ./frontend
    container_name: frontend
    command: npm start
    stdin_open: true
    tty: true
    volumes:
      - ./frontend:/usr/app
    ports:
      - "3000:3000"
  backend:
    build:
      context: ./backend
    container_name: backend
    command: npm start
    environment:
      - PORT=3001
      - MONGO_URL=mongodb://api_mongo:27017
    volumes:
      - ./backend/src:/usr/app/src
    ports:
      - "3001:3001"
  api_mongo:
    image: mongo:latest
    container_name: api_mongo
    volumes:
      - mongodb_api:/data/db
    ports:
      - "27017:27017"
volumes:
  mongodb_api:
And the React Dockerfile:
FROM node:14.10.1-alpine3.12
WORKDIR /usr/app
COPY package.json .
RUN npm i
COPY . .
Folder structure:
-frontend
-backend
-docker-compose.yml
(Screenshots of the frontend and src folder contents omitted.)
When I change files inside src, the changes are not reflected on the Docker side.
How can we fix this?
Here is the answer:
If you are running on Windows, please read this: Create React App has some issues detecting when files get changed on Windows-based machines. To fix this, please do the following:
In the root project directory, create a file called .env
Add the following text to the file and save it: CHOKIDAR_USEPOLLING=true
That's all!
Don't use the same directory name for different services. You currently use /usr/app for both; change it to /client/app for the frontend and /server/app for the backend, and then it all works. Also set the environment variable CHOKIDAR_USEPOLLING=true, use FROM node:16.5.0-alpine, and you can keep stdin_open: true.
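Combined, the two suggestions amount to something like the following for the frontend service (a sketch; it assumes the Dockerfile's WORKDIR is also changed to /client/app to match the new mount point):
  frontend:
    build:
      context: ./frontend
    container_name: frontend
    command: npm start
    stdin_open: true
    tty: true
    environment:
      - CHOKIDAR_USEPOLLING=true # make CRA's watcher poll, so it sees changes through the bind mount
    volumes:
      - ./frontend:/client/app # a directory not shared with the backend
    ports:
      - "3000:3000"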

MongoDb in a container seed with multiple collections

I'm trying to seed my Mongo instance running in a container with existing collections that live outside the container.
docker-compose.yml looks like this:
version: "3"
services:
webapi:
image: webapp:develop
container_name: web_api
build:
args:
buildconfig: Debug
context: ../src/api
dockerfile: Dockerfile
environment:
- ASPNETCORE_ENVIRONMENT=Development
- ASPNETCORE_URLS=http://+:5003
ports:
- "5003:5003"
depends_on:
- mongodb
mongodb:
image: mongo:latest
container_name: mongodb
ports:
- "27017:27017"
mongo-seed:
build: ./mongo-seed
links:
- mongodb
mongo-seed/Dockerfile:
FROM mongo
COPY initA.json /initA.json
CMD mongoimport --host mongodb --db Database --collection A --type json --file /initA.json --jsonArray --mode merge
FROM mongo
COPY initB.json /initB.json
CMD mongoimport --host mongodb --db TestListDb --collection B --type json --file /initB.json --jsonArray --mode merge
But this doesn't do the trick: with two FROM/CMD pairs, only the last stage ends up in the final image, so only the 'B' collection is imported in this case.
How can I import multiple collections to one database?
I found a solution for this.
The answer also shows how to configure the network so the web app can see the mongodb container.
Structure of files:
Web.Application
.
+-- docker-compose.yml
+-- mongo
|   +-- dump
|   |   +-- DatabaseDb
|   +-- Dockerfile
|   +-- restore.sh
docker-compose.yml
version: '3.4'
services:
  webapp:
    container_name: webapp
    image: ${DOCKER_REGISTRY-}webapp
    build:
      context: ./Web.Application/
      dockerfile: Dockerfile
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
    depends_on:
      - mongo
    networks:
      clusternetwork:
        ipv4_address: 1.1.0.1
  mongo:
    container_name: mongo
    build:
      context: ./Web.Application/mongo/
      dockerfile: Dockerfile
    networks:
      clusternetwork:
        ipv4_address: 1.1.0.12
networks:
  clusternetwork:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 1.1.0.0/24
./Web.Application/mongo/Dockerfile:
FROM mongo AS start
COPY . .
COPY restore.sh /docker-entrypoint-initdb.d/
./Web.Application/mongo/restore.sh:
#!/bin/bash
mongorestore --db DatabaseDb ./dump/DatabaseDb
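If mongoimport is preferred over mongorestore, the same single-init-script idea works for the original two-collection setup: collapse the two CMD lines into one script (a sketch reusing the question's own flags; note both imports target the same database, since the goal is one database with multiple collections):
mongo-seed/seed.sh:
#!/bin/bash
# import both collections into the same database; --mode merge avoids overwriting existing documents
mongoimport --host mongodb --db Database --collection A --type json --file /initA.json --jsonArray --mode merge
mongoimport --host mongodb --db Database --collection B --type json --file /initB.json --jsonArray --mode merge
mongo-seed/Dockerfile:
FROM mongo
COPY initA.json initB.json seed.sh /
CMD ["bash", "/seed.sh"]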

Docker - run Dockerfile before or after compose, and how?

Via a docker-compose.yml I compose an MSSQL server.
version: "3"
services:
db:
image: mcr.microsoft.com/mssql/server:2017-latest
environment:
- ACCEPT_EULA=Y
- SA_PASSWORD=SecretPassword
- MSSQL_PID=Express
- MSSQL_LCID=1031
- MSSQL_COLLATION=Latin1_General_CI_AS
- MSSQL_MEMORY_LIMIT_MB=8192
- MSSQL_AGENT_ENABLED=true
- TZ=Europe/Berlin
ports:
- 1433:1433
- 49200:1433
volumes:
- ./data:/var/opt/mssql/data
- ./backup:/var/opt/mssql/backup
restart: always
This works fine.
But how can I expand this image with mssql-server-fts?
On GitHub I found this, but how can I combine a docker-compose.yml with a Dockerfile?
https://github.com/Microsoft/mssql-docker/blob/master/linux/preview/examples/mssql-agent-fts-ha-tools/Dockerfile
Here is the documentation on the docker-compose.yml file: docker-compose file
To use a Dockerfile in the docker-compose.yml, one needs to add a build section. If the Dockerfile and docker-compose.yml are in the same directory, that section of the docker-compose.yml would look like the following:
version: '3'
services:
  webapp:
    build:
      context: .
      dockerfile: Dockerfile
context is set to the root directory; this is relative to the location of the docker-compose.yml file.
dockerfile is set to the name of the Dockerfile, in this case Dockerfile.
I hope that this helps.
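With a build section in place, Compose builds the image from the Dockerfile before starting the service; a typical invocation is:
# build (or rebuild) the images declared with a build: section, then start the services
docker-compose up -d --build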
Add the path to the Dockerfile you want to use to the build section of your docker-compose file.
For example:
version: "3"
services:
  dockerFileExample:
    build:
      context: .
      dockerfile: Dockerfile # or a custom file name, e.g. docker-file-frontend
Note that the short form build: ./dir takes a directory (the build context), not a Dockerfile path; to point at a specific file, use the context/dockerfile form shown above.
Here is a link to the documentation: https://docs.docker.com/compose/reference/build/
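Putting the pieces together for the full-text search case: a minimal sketch of a Dockerfile, adapted from the linked Microsoft example and trimmed to the FTS package (the repository-setup steps are assumptions drawn from that example, not verified here), placed next to the docker-compose.yml:
FROM mcr.microsoft.com/mssql/server:2017-latest
# add the Microsoft package repository and install the full-text search package
RUN export DEBIAN_FRONTEND=noninteractive && \
    apt-get update && \
    apt-get install -yq curl apt-transport-https gnupg && \
    curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - && \
    curl https://packages.microsoft.com/config/ubuntu/16.04/mssql-server-2017.list | tee /etc/apt/sources.list.d/mssql-server.list && \
    apt-get update && \
    apt-get install -y mssql-server-fts && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists
Then, in the compose file, replace the db service's image: line with a build section so Compose builds this image first:
services:
  db:
    build:
      context: .
      dockerfile: Dockerfile
    # ...the rest of the db service stays as above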
