Data loss on Azure Container Instance with mounted volume - sql-server

I just created a container instance on Azure with a SQL Server Docker image and a mounted file share as a volume. The container got stuck, so I restarted it.
After the restart, all data was gone. When I restart a Docker container locally, the data persists because of volumes, so I cannot understand this behaviour on Azure.
Any clue about this?
Here is the CLI command I ran to create the container:
az container create --resource-group myresource-rg \
--name project-test-db \
--image mcr.microsoft.com/mssql/server:2019-latest \
--location westus2 \
--ports 1433 \
--memory 5 \
--environment-variables SA_PASSWORD=Password ACCEPT_EULA=Y \
--ip-address public \
--azure-file-volume-account-name projectteststorageacc \
--azure-file-volume-account-key MyKey \
--azure-file-volume-share-name project-test-file-share \
--azure-file-volume-mount-path /databases

Try editing your command as below.
Wrap each key-value pair in double quotes, and replace SA_PASSWORD with MSSQL_SA_PASSWORD:
--environment-variables "MSSQL_SA_PASSWORD=Password" "ACCEPT_EULA=Y" \
The SQL Server image requires a strong password: at least 8 characters, meeting the SQL Server password policy. If the password doesn't meet those requirements, the container fails to start and goes into a restart loop. Given an appropriately strong password and a valid storage key, the commands below work just fine for me.
PS /home/karthik> $Password = "MyStrongPassword"
PS /home/karthik> $key = "FO/R6WkZELhMzX02wi9KahtLtKppoSIJg/EcJLEnZajRm2uxXs0sb/APaCk1eRsNW31yijSjS1hFm5Rd4rdTew=="
az container create --resource-group Myrg \
--name project-test-db \
--image mcr.microsoft.com/mssql/server:2019-latest \
--location westus2 \
--ports 1433 \
--memory 5 \
--environment-variables "SA_PASSWORD=$Password" "ACCEPT_EULA=Y" \
--ip-address public \
--azure-file-volume-account-name kteststoragee \
--azure-file-volume-account-key $key \
--azure-file-volume-share-name ktestfs2 \
--azure-file-volume-mount-path /databases
When you have a misbehaving container in Azure Container Instances, start by viewing its logs with az container logs, and stream its standard out and standard error with az container attach.
The az container attach command provides diagnostic information during container startup. Once the container has started, it streams STDOUT and STDERR to your local console.
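For example, with the resource group and container name from the question:

```shell
# View the container's logs (useful after a failed start)
az container logs --resource-group myresource-rg --name project-test-db

# Attach to stream startup diagnostics, then STDOUT/STDERR
az container attach --resource-group myresource-rg --name project-test-db
```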
Refer: Quickstart: Run SQL Server container images with Docker and Docker run command fails with Accept-Eula Agreement error #199

Related

When docker deploys the openGauss database, how to change one master one standby to one master two standby?

docker run --network opengaussnetwork --ip $MASTER_IP --privileged=true \
--name $MASTER_NODENAME -h $MASTER_NODENAME -p $MASTER_HOST_PORT:$MASTER_HOST_PORT -d \
-e GS_PORT=$MASTER_HOST_PORT \
-e OG_SUBNET=$OG_SUBNET \
-e GS_PASSWORD=$GS_PASSWORD \
-e NODE_NAME=$MASTER_NODENAME \
-e REPL_CONN_INFO="replconninfo1 = 'localhost=$MASTER_IP localport=$MASTER_LOCAL_PORT localservice=$MASTER_HOST_PORT remotehost=$SLAVE_1_IP remoteport=$SLAVE_1_LOCAL_PORT remoteservice=$SLAVE_1_HOST_PORT'\n" \
enmotech/opengauss:$VERSION -M primary \
|| {
echo ""
echo "ERROR: OpenGauss Database Master Docker Container was NOT successfully created."
exit 1
}
echo "OpenGauss Database Master Docker Container created."
Add a second replication channel ('replconninfo2') to each of the three nodes of the openGauss database.
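As a sketch, the master's REPL_CONN_INFO could carry two channels, one per standby. The SLAVE_2_* values below are hypothetical placeholders mirroring the SLAVE_1_* variables from the script above; each standby likewise needs its own replconninfo entries pointing at the master and the other standby.

```shell
# Hypothetical placeholder addresses/ports; substitute your own topology.
MASTER_IP=172.18.0.101;  MASTER_LOCAL_PORT=5433;  MASTER_HOST_PORT=5432
SLAVE_1_IP=172.18.0.102; SLAVE_1_LOCAL_PORT=5433; SLAVE_1_HOST_PORT=5432
SLAVE_2_IP=172.18.0.103; SLAVE_2_LOCAL_PORT=5433; SLAVE_2_HOST_PORT=5432

# Two replication channels on the master: replconninfo1 -> standby 1,
# replconninfo2 -> standby 2.
REPL_CONN_INFO="replconninfo1 = 'localhost=$MASTER_IP localport=$MASTER_LOCAL_PORT localservice=$MASTER_HOST_PORT remotehost=$SLAVE_1_IP remoteport=$SLAVE_1_LOCAL_PORT remoteservice=$SLAVE_1_HOST_PORT'\nreplconninfo2 = 'localhost=$MASTER_IP localport=$MASTER_LOCAL_PORT localservice=$MASTER_HOST_PORT remotehost=$SLAVE_2_IP remoteport=$SLAVE_2_LOCAL_PORT remoteservice=$SLAVE_2_HOST_PORT'\n"

# Show the expanded configuration that would be passed via -e REPL_CONN_INFO
printf '%b' "$REPL_CONN_INFO"
```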

Cannot Connect Keycloak docker to SQL database

I'm trying to connect Keycloak with docker to a SQL Server database located on another server, but I'm not getting a connection.
This is the command I'm typing:
docker run --name keycloak \
--net keycloak-network \
-p 8080:8080 \
-e DB_VENDOR=mssql \
-e DB_USER=*** \
-e DB_PASSWORD=*** \
-e DB_ADDR=172.... \
-e DB_DATABASE=Keycloak \
-e KEYCLOAK_USER=user \
-e KEYCLOAK_PASSWORD=password \
jboss/keycloak
Could someone help me solve it, please?
Apparently it is an SSL error caused by an RSA 1024-bit key:
Caused by: java.security.cert.CertificateException: Certificates do not conform to algorithm constraints
at java.base/sun.security.ssl.AbstractTrustManagerWrapper.checkAlgorithmConstraints(SSLContextImpl.java:1681)
at java.base/sun.security.ssl.AbstractTrustManagerWrapper.checkAdditionalTrust(SSLContextImpl.java:1606)
at java.base/sun.security.ssl.AbstractTrustManagerWrapper.checkServerTrusted(SSLContextImpl.java:1550)
at java.base/sun.security.ssl.CertificateMessage$T12CertificateConsumer.checkServerCerts(CertificateMessage.java:638)
... 78 more
Caused by: java.security.cert.CertPathValidatorException: Algorithm constraints check failed on keysize limits: RSA 1024 bit key used with certificate: CN=SSL_Self_Signed_Fallback
at java.base/sun.security.util.DisabledAlgorithmConstraints$KeySizeConstraint.permits(DisabledAlgorithmConstraints.java:889)
at java.base/sun.security.util.DisabledAlgorithmConstraints$Constraints.permits(DisabledAlgorithmConstraints.java:507)
at java.base/sun.security.util.DisabledAlgorithmConstraints.permits(DisabledAlgorithmConstraints.java:247)
at java.base/sun.security.util.DisabledAlgorithmConstraints.permits(DisabledAlgorithmConstraints.java:193)
at java.base/sun.security.provider.certpath.AlgorithmChecker.check(AlgorithmChecker.java:292)
at java.base/sun.security.ssl.AbstractTrustManagerWrapper.checkAlgorithmConstraints(SSLContextImpl.java:1677)
... 81 more
These are the errors that appear.
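The trace above means the JVM is rejecting SQL Server's self-signed fallback certificate because its RSA key size falls below the JDK's minimum. The cleanest fix is to install a certificate with a 2048-bit (or larger) key on the SQL Server. As a workaround, the JDK's algorithm constraints can be relaxed with a custom java.security overlay; this is a sketch only, since the exact default property values vary per JDK build, and it assumes the jboss/keycloak image honours JAVA_OPTS_APPEND.

```shell
# Workaround sketch: relax the key-size constraint that rejects the
# 1024-bit self-signed certificate. Note this weakens TLS checking;
# prefer installing a >=2048-bit certificate on SQL Server instead.
cat > custom.java.security <<'EOF'
jdk.tls.disabledAlgorithms=SSLv3, RC4, MD5withRSA, DH keySize < 768
jdk.certpath.disabledAlgorithms=MD2, MD5, RSA keySize < 1024
EOF

# Mount the overlay and point the JVM at it; add the DB_* and
# KEYCLOAK_* environment variables from the question as before.
docker run --name keycloak \
  --net keycloak-network \
  -p 8080:8080 \
  -v "$PWD/custom.java.security:/tmp/custom.java.security" \
  -e JAVA_OPTS_APPEND="-Djava.security.properties=/tmp/custom.java.security" \
  jboss/keycloak
```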

Auto generate models in Nest js from existing SQL Server DB using Sequelize

I am working on an API using NestJS that has to connect to an existing database.
There are so many tables that creating the entity classes manually in Nest is not feasible.
I am using Sequelize.
Is there a way to auto-generate the models?
sequelize-auto seems to only work well for Express. I need something that can generate class-based model entities.
I found a solution: you can use the sequelize-typescript-generator library to create the entities and models automatically from the schema and connection.
From the description:
You can run it globally once the package is installed.
Usage of the command:
-h, --host Database IP/hostname
-p, --port Database port. Defaults:
- MySQL/MariaDB: 3306
- Postgres: 5432
- MSSQL: 1433
-d, --database Database name
-s, --schema Schema name (Postgres only). Default:
- public
-D, --dialect Dialect:
- postgres
- mysql
- mariadb
- sqlite
- mssql
-u, --username Database username
-x, --password Database password
-t, --tables Comma-separated names of tables to process
-T, --skip-tables Comma-separated names of tables to skip
-i, --indices Include index annotations in the generated models
-o, --out-dir Output directory. Default:
- output-models
-C, --case Transform tables and fields names
with one of the following cases:
- underscore
- camel
- upper
- lower
- pascal
- const
You can also specify a different
case for model and columns using
the following format:
<model case>:<column case>
-S, --storage SQLite storage. Default:
- memory
-L, --lint-file ES Lint file path
-l, --ssl Enable SSL
-r, --protocol Protocol used. Default:
- tcp
-a, --associations-file Associations file path
-g, --logs Enable Sequelize logs
-n, --dialect-options Dialect native options passed as json string.
-f, --dialect-options-file Dialect native options passed as json file path.
-R, --no-strict Disable strict typescript class declaration.
Example of the command:
stg \
-D mysql \
-h localhost \
-p 3306 \
-d myDatabase \
-u myUsername \
-x myPassword \
--indices \
--case camel \
--out-dir pathofthegeneratedfiletakeplace \
--clean
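Since the question targets an existing SQL Server database, a hypothetical invocation with the mssql dialect might look like this (host, credentials, and output directory are placeholders):

```shell
# Generate class-based TypeScript models from a SQL Server schema;
# replace host, database and credentials with your own values.
stg \
  -D mssql \
  -h localhost \
  -p 1433 \
  -d myDatabase \
  -u myUsername \
  -x myPassword \
  --indices \
  --case camel \
  --out-dir ./src/models \
  --clean
```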

Add a static file with react-router to pass lets encrypt verification

I'm trying to set up HTTPS on GitLab Pages, for a React site using React Router.
Certbot is asking me to add a page with a code:
Make sure your web server displays the following content at
http://YOURDOMAIN.org/.well-known/acme-challenge/5TBu788fW0tQ5EOwZMdu1Gv3e9C33gxjV58hVtWTbDM
before continuing:
5TBu788fW0tQ5EOwZMdu1Gv3e9C33gxjV58hVtWTbDM.ewlbSYgvIxVOqiP1lD2zeDKWBGEZMRfO_4kJyLRP_4U
#
# output omitted
#
Press ENTER to continue
It's a single-page site, so I don't really know how to serve a static page at the URL http://YOURDOMAIN.org/.well-known/acme-challenge/5TBu788fW0tQ5EOwZMdu1Gv3e9C33gxjV58hVtWTbDM
Is there a way to do it?
I couldn't do it with a static page, but Certbot offers an alternative method: the DNS challenge.
You need to put a TXT record in your DNS hosted zone, and Certbot will compare it with the value it asks for.
To get the TXT content, run Certbot like this (I use Docker):
docker run -it --rm --name certbot \
-v "$PWD/letsencrypt:/etc/letsencrypt" \
-v "$PWD/lib/letsencrypt:/var/lib/letsencrypt" \
certbot/certbot \
certonly \
-m email@company.com \
--manual \
--preferred-challenges dns-01 \
--no-eff-email \
--manual-public-ip-logging-ok \
--keep-until-expiring \
--agree-tos \
-d mydomain.com \
--server https://acme-v02.api.letsencrypt.org/directory
Just change the email and the domain you want to work with.
More details in the GitLab docs.
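Before pressing ENTER at the Certbot prompt, you can check that the TXT record has propagated (mydomain.com is a placeholder):

```shell
# Query the TXT record Certbot is waiting for; it should print the
# challenge value once your DNS change has propagated.
dig -t TXT _acme-challenge.mydomain.com +short
```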
Hope it helps

How can I restore a database to an InfluxDB container

Sorry for my ignorance. I have InfluxDB running in Docker with docker-compose as below:
influxdb:
image: influxdb:alpine
ports:
- 8086:8086
volumes:
- ./influxdb/config/influxdb.conf:/etc/influxdb/influxdb.conf:ro
- ./influxdb/data:/var/lib/influxdb
I need to restore a backup of a database from a remote server to this InfluxDB container. I took the backup on the remote server as below:
influxd backup -database tech_db /tmp/tech_db
I read the documentation and couldn't find a way to restore the DB to the Docker container. Can anyone give me a pointer on how to do this?
I have also had this issue. You cannot stop the influxd process inside a running container (killing it stops the container), so the restore has to run in a separate, ephemeral container:
# Restoring a backup requires that influxd is stopped (note that stopping the process kills the container).
docker stop "$CONTAINER_ID"
# Run the restore command in an ephemeral container.
# This affects the previously mounted volume mapped to /var/lib/influxdb.
docker run --rm \
--entrypoint /bin/bash \
-v "$INFLUXDIR":/var/lib/influxdb \
-v "$BACKUPDIR":/backups \
influxdb:1.3 \
-c "influxd restore -metadir /var/lib/influxdb/meta -datadir /var/lib/influxdb/data -database foo /backups/foo.backup"
# Start the container just like before, and get the new container ID.
CONTAINER_ID=$(docker run --rm \
--detach \
-v "$INFLUXDIR":/var/lib/influxdb \
-v "$BACKUPDIR":/backups \
-p 8086 \
influxdb:1.3
)
More information is here
