I want to set up a ddev (v1.5.2) project without a database. When I try to overwrite the image in a Docker-Compose YAML it stops with an error.
As suggested for the dba container, I have overridden the db image in an additional docker-compose.database.yaml in the .ddev folder.
version: '3.6'
services:
  db:
    image: "busybox"
I expected it to start without the database and it does, but it seems to do a health check for the database which fails.
Failed to start sitzplan: db container failed: log=, err=container exited, please use 'ddev logs -s db' to find out why it failed
The project is running, but it's not usable, because ddev won't run my post-start hooks, which are necessary. That means I can't simply ignore the error.
First, note that there's now explicit support for just turning off the dba/phpmyadmin container, omit_containers: dba (can also be done in the global ddev config, ~/.ddev/global_config.yaml).
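For example, in the project's .ddev/config.yaml (a minimal sketch; recent ddev versions expect the list form, so check the docs for your version):

omit_containers: [dba]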
And of course I would recommend just letting the regular db container run and not using it.
But here's a docker-compose.database.yaml that does what you ask for:
version: '3.6'
services:
  db:
    image: "busybox:latest"
    # Keep the container alive so ddev has something to supervise.
    command: sh -c "while true; do sleep 1000; done"
    # A health check that always succeeds, so ddev stops waiting for a real database.
    healthcheck:
      test: ["CMD", "true"]
Related
Description
I'm trying out the service containers for integrated database tests in azure devops pipelines.
As per this open-sourced dummy CI/CD pipeline project, https://dev.azure.com/funktechno/_git/dotnet%20ci%20pipelines, I was experimenting with azure devops service containers for integrated pipeline testing. I got postgres and mysql to work. I'm having issues with microsoft sql server.
yml file
resources:
  containers:
    - container: mssql
      image: mcr.microsoft.com/mssql/server:2017-latest
      ports:
        # - 1433
        - 1433:1433
      options: -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=yourStrong(!)Password' -e 'MSSQL_PID=Express'

- job: unit_test_db_mssql
  # condition: eq('${{ variables.runDbTests }}', 'true')
  # continueOnError: true
  pool:
    vmImage: 'ubuntu-latest'
  services:
    localhostsqlserver: mssql
  steps:
  - task: UseDotNet@2
    displayName: 'Use .NET Core sdk 2.2'
    inputs:
      packageType: sdk
      version: 2.2.207
      installationPath: $(Agent.ToolsDirectory)/dotnet
  - task: NuGetToolInstaller@1
  - task: NuGetCommand@2
    inputs:
      restoreSolution: '$(solution)'
  - task: Bash@3
    inputs:
      targetType: 'inline'
      script: 'env | sort'
      # echo Write your commands here...
      # echo ${{agent.services.localhostsqlserver.ports.1433}}
      # echo Write your commands here end...
  - task: CmdLine@2
    displayName: 'enabledb'
    inputs:
      script: |
        cp ./MyProject.Repository.Test/Data/appSettings.devops.mssql.json ./MyProject.Repository.Test/Data/AppSettings.json
  - task: DotNetCoreCLI@2
    inputs:
      command: 'test'
      workingDirectory: MyProject.Repository.Test
      arguments: '--collect:"XPlat Code Coverage"'
  - task: PublishCodeCoverageResults@1
    inputs:
      codeCoverageTool: 'Cobertura'
      summaryFileLocation: '$(Agent.TempDirectory)\**\coverage.cobertura.xml'
db connection string
{
  "sqlserver": {
    "ConnectionStrings": {
      "Provider": "sqlserver",
      "DefaultConnection": "User ID=sa;Password=yourStrong(!)Password;Server=localhost;Database=mockDb;Pooling=true;"
    }
  }
}
Debugging
The most I can tell is that when I run Bash@3 to check the environment variables, postgres and mysql print something similar to
/bin/bash --noprofile --norc /home/vsts/work/_temp/b9ec7d77-4bc2-47ab-b767-6a5e95ec3ea6.sh
"id": "b294d39b9cc1f0d337bdbf92fb2a95f0197e6ef78ce28e9d5ad6521496713708"
"pg11": {
}
while the mssql fails to print an id
========================== Starting Command Output ===========================
/bin/bash --noprofile --norc /home/vsts/work/_temp/70ae8517-5199-487f-9067-aee67f8437bb.sh
}
}
Update: this doesn't happen when using ubuntu-latest, but I still have the mssql connection issue.
Database Error logging.
I'm currently getting this error in the pipeline
Error Message:
Failed for sqlserver
providername:Microsoft.EntityFrameworkCore.SqlServer m:A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 40 - Could not open a connection to SQL Server)
In TestDbContext.cs, my error message comes from the code below. If people have pointers for getting more details, it would be appreciated.
catch (System.Exception e)
{
    // Report which provider failed plus the innermost exception message.
    var assertMsg = "Failed for " + connectionStrings.Provider + "\n" + " providername:" + dbContext.Database.ProviderName + " m:";
    if (e.InnerException != null)
        assertMsg += e.InnerException.Message;
    else
        assertMsg += e.Message;
    _exceptionMessage = assertMsg;
}
Example pipeline: https://dev.azure.com/funktechno/dotnet%20ci%20pipelines/_build/results?buildId=73&view=logs&j=ce03965c-7621-5e79-6882-94ddf3daf982&t=a73693a5-1de9-5e3d-a243-942c60ab4775
Notes
I already know that the azure devops pipeline mssql server doesn't work on the windows agents, because they are windows server 2019 and the windows container version of mssql server is not well supported; it only works on windows server 2016. It fails on the initialize container step when I try that.
I tried several things for unit_test_db_mssql: changing the ubuntu version, changing parameters, changing the mssql server version. All give about the same error.
If people know of command line methods that work in linux to test whether the mssql docker instance is ready, that may help as well (one candidate is sketched below).
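A readiness check I would try (an assumption, not a verified fix for this pipeline): poll the server with sqlcmd from Microsoft's mssql-tools image until a trivial query succeeds.

# Retry a trivial query until SQL Server accepts logins
# (assumes host networking and the sa password from the pipeline above).
until docker run --rm --network host mcr.microsoft.com/mssql-tools \
    /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P 'yourStrong(!)Password' -Q 'SELECT 1' \
    >/dev/null 2>&1; do
  echo 'waiting for mssql...'
  sleep 2
done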
Progress Update
Got mssql working in github actions and gitlab pipelines so far.
Postgres and mysql work just fine on azure devops and bitbucket pipelines.
Still no luck getting mssql working on azure devops. Bitbucket didn't work either, although it did give a lot of docker details about the running database; then again, nobody really cares about bitbucket pipelines with their measly 50 minutes. You can see all of these pipelines in the referenced repository; they are all public, and pull requests are also possible since the project is open-sourced.
Per the help from Starain Chen [MSFT] in https://developercommunity.visualstudio.com/content/problem/1159426/working-examples-using-service-container-of-sql-se.html, it looks like a 10-second delay is needed to wait for the container to be ready.
adding
- task: PowerShell@2
  displayName: 'delay 10'
  inputs:
    targetType: 'inline'
    script: |
      # Write your PowerShell commands here.
      start-sleep -s 10
gets the db connection to work. I'm assuming the mssql docker container is ready by then.
I ran into this issue over the past several days and came upon your post. I was getting the same behavior and then something clicked.
IMPORTANT NOTE: If you are using PowerShell on Windows to run these commands use double quotes instead of single quotes.
that note is from https://hub.docker.com/_/microsoft-mssql-server
I believe that changing your line from
options: -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=yourStrong(!)Password' -e 'MSSQL_PID=Express'
to
options: -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=yourStrong(!)Password" -e "MSSQL_PID=Express"
should get it to work. I think it's also interesting that in the link you posted above, the password is passed in via its own env block (example below), which probably resolves the issue, not necessarily the 10-second delay.
- container: mssql
  image: mcr.microsoft.com/mssql/server:2017-latest
  env:
    ACCEPT_EULA: Y
    SA_PASSWORD: Password123
    MSSQL_PID: Express
I've been studying "Kubernetes Up and Running" by Hightower et al. (first edition), Chapter 13, where they discuss creating a reliable MySQL singleton. (Since I just discovered that there is a second edition, I guess I'll be buying it soon.)
Using their MySQL reliable singleton example as a model, I've been looking for some sample YAML files to make a similar deployment with Microsoft SQL Server (Express) on Docker Desktop for Kubernetes.
Apparently I need YAML files to deploy
Persistent Volume
Volume claim (should this be NFS?)
SQL Server (Express edition) replica set (in spite of the fact that this is just a singleton).
I've tried this example but I'm confused because it does not contain a persistent volume & claim and it does not work. I get the error
Error: unable to recognize "sqlserver.yml": no matches for kind "Deployment" in version "apps/v1beta1"
Can someone please point me to some sample YAML files that are not Azure specific that will work on Docker Desktop Kubernetes for Windows 10? After debugging my application, I'll want to deploy this to Azure (AKS).
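For context, an observation rather than something from the original post: apps/v1beta1 was removed in newer Kubernetes releases, so a Deployment manifest for a current cluster needs the stable API group:

apiVersion: apps/v1   # apps/v1beta1 no longer exists in current clusters
kind: Deployment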
Wed Jul 15 2020 Update
I left out the "-n namespace" for the helm install command (possibly because I'm using Helm v3 and you are using Helm v2?).
That install command still did not work. Then I did a
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
Now this command works:
helm install todo-app-database stable/mssql-linux
Progress!
When I do a "k get pods" I see that my todo-app-database-mssql-linux pod is in the Pending state. So I did a
kubectl get events
and I see
Warning FailedScheduling pod/todo-app-database-mssql-linux-8668d9b88c-lsh5l 0/1 nodes are available: 1 Insufficient memory.
I've been google searching for "Kubernetes insufficient memory" and can find no match.
I suspect this is a problem specific to "Docker Desktop Kubernetes".
When I look at the output for
helm -n ns-todolistdemo template todo-app-database stable/mssql-linux
I see the deployment is asking for 2Gi. (Interesting: when I use the template command, the "-n ns-todolistdemo" does not cause an error like it does with the install command).
So I do
kubectl describe deployment todo-app-database-mssql-linux >todo-app-database-mssql-linux.yaml
I edit the yaml file to change 2Gi to 1Gi.
kubectl apply -f todo-app-database-mssql-linux.yaml
I get this error:
error: error parsing todo-app-database-mssql-linux.yaml: error converting YAML to JSON: yaml: line 9: mapping values are not allowed in this context
Hmm... that did not work. I try delete:
kubectl delete deployment todo-app-database-mssql-linux
kubectl create -f todo-app-database-mssql-linux.yaml
I get this error:
error: error validating "todo-app-database-mssql-linux.yaml": error validating data: invalid object to validate; if you choose to ignore these errors, turn validation off with --validate=false
So I try apply:
kubectl apply -f todo-app-database-mssql-linux.yaml
Same error!
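(A likely explanation, offered as an observation: kubectl describe prints human-readable text, not YAML, so the saved file cannot be parsed back by create/apply. Exporting the live object with -o yaml produces a file those commands can consume:)

kubectl get deployment todo-app-database-mssql-linux -o yaml > todo-app-database-mssql-linux.yaml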
Shucks.... Is there a way to adjust the memory allocation for Docker Desktop?
Thank you
Siegfried
Short answer
https://github.com/helm/charts/blob/master/stable/mssql-linux/templates/pvc-master.yaml
Detailed Answer
Docker Desktop already comes with a default StorageClass:
This storage class is responsible for auto-provisioning a PV whenever you create a PVC.
If you have a YAML definition of a PVC (persistent volume claim), you just need to leave storageClassName empty, so it will use the default.
k get storageclass
NAME                 PROVISIONER          AGE
hostpath (default)   docker.io/hostpath   11d
This is fair enough, as the Docker-for-Desktop cluster is a one-node cluster: if your DB crashes and the cluster brings it up again, it will not move to another node, because you simply have a single node :)
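For illustration, a PVC that relies on that default class could look like this (a hypothetical sketch; the name and size are made up):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql-data                # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi                # size is an assumption
  # storageClassName is omitted on purpose, so the default (hostpath) class is used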
Now, should you write the PVC YAML from scratch?
No, you don't need to, because Helm should be your best friend.
(I explain below why you should use Helm here, and it does not even require a steep learning curve.)
Fortunately, the community provides a chart called stable/mssql-linux.
Let's run it together:
helm -n <your-namespace> install todo-app-database stable/mssql-linux
# helm -n <namespace> install <release-name> <chart-name-from-community>
If you want to check the YAML (namely the PVC) that Helm computed, you can run template instead of install:
helm -n <your-namespace> template todo-app-database stable/mssql-linux
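And if the computed resources do not fit your machine (like the 2Gi memory request mentioned above), Helm can usually override chart values at install time instead of you editing rendered YAML. A sketch, assuming the chart exposes a standard resources value; check the chart's values.yaml for the real key:

helm -n <your-namespace> install todo-app-database stable/mssql-linux \
  --set resources.requests.memory=1Gi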
Why do I give you the answer with Helm?
Writing YAML from scratch means reinventing a wheel that others have already built.
The most efficient way is to reuse what the community has prepared for you.
You may ask: how can I reuse what others have done?
That's where Helm comes in.
Helm is your installer for any application on top of Kubernetes, regardless of how much YAML the app requires.
Install it now and hit the ground running: choco install kubernetes-helm
I tried the command below
docker run --rm -t -e LICENSE=accept --net=host -v "$(pwd)":/installer/cluster ibmcom/icp-inception:2.1.0 install
and the response is
Waiting for cloudant initialization
I entered the command and received the logs shown in the image; no error is shown. Please suggest a solution.
From the error message, the cloudant database initialization issue may be caused by the cloudant docker image being pulled from Docker Hub during the ICP installation. The cloudant docker image is big; you can run the command below to check whether the image is already present in your environment.
$ docker images | grep icp-datastore
If the cloudant docker image is ready in your environment and the ICP installation still hits the cloudant database initialization issue, you can try to install the latest ICP 2.1.0.3 Community Edition. As of 2.1.0.3, ICP removes the cloudant database. The ICP 2.1.0.3 installation documentation:
https://www.ibm.com/support/knowledgecenter/en/SSBS6K_2.1.0.3/installing/install_containers_CE.html
If you still want to debug the cloudant database initialization issue in an ICP 2.1.0.1 environment, you can:
Ensure your ICP nodes match the system and hardware requirements first:
https://www.ibm.com/support/knowledgecenter/en/SSBS6K_2.1.0/supported_system_config/system_reqs.html
Share the ICP installation configuration; you can check the contents of the config.yaml and hosts files.
Check the system logs (in the /var/log/messages or /var/log/syslog file) to find the relevant errors.
Run the 'docker logs ' command to check the container logs or errors.
I've been holding off posting here because I feel like this issue could be too vague. I will try my best to explain. I have been through all of the existing questions but they don't seem relevant to what I am doing.
Basically, I have inherited 3 Ec2 Instances that are Dev / Staging / Live web applications in my new role. I use Ansible playbooks to migrate the Database between all environments. We recently had a new website that was deployed onto all three existing instances.
The Dev box recently died, so I blew it away and launched a new one. The website looks fine; however, exporting and importing the database no longer works (on the new instance).
Below is the Ansible output:
TASK: [Export database to migrate] ********************************************
failed: [172.**.**.***] => {"changed": true, "cmd": "wp db export dbv2.sql --tables=t*******0_links,t*******0_options,t*******0_postmeta,t*******0_posts,taxlt4ws0_rg_form,taxlt4ws0_rg_form_meta,taxlt4ws0_rg_form_view,t*******0_term_relationships,t*******0_term_taxonomy,t*******0_termmeta,t*******0_terms,t*******0_usermeta,t*******0_users", "delta": "0:00:00.001594", "end": "2017-09-01 10:21:25.225355", "rc": 127, "start": "2017-09-01 10:21:25.223761", "warnings": []}
stderr: /bin/sh: 1: wp: not found
FATAL: all hosts have already failed -- aborting
Things I've checked:
Chmod on the folders it imports/exports to/from.
IAM Role is set
Used Shell instead of Command in the Playbook
Configs for each environment
I'm really stumped; my Ansible knowledge is quite limited, as I only picked it up a couple of months ago and hadn't run into any issues (even with a new website) until the Dev box had to be replaced.
I think Ansible is referring to wp-cli; it is not able to find its executable.
If this is the case, you need to install it with another task before that one, e.g. the sketch below.
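A hypothetical version of that task (the URL is the official wp-cli phar build; the destination path and privilege escalation are assumptions about your hosts):

- name: Install wp-cli
  become: true
  get_url:
    url: https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar
    dest: /usr/local/bin/wp
    mode: '0755'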
Basically what this is complaining about is that whatever script you are using in the "Export database" task is not able to find a wp script or executable.
stderr: /bin/sh: 1: wp: not found
I would recommend running 'which wp', or maybe doing a 'find', on the staging or live instances to see where it lives, and then install/copy it over to the Dev instance.
You can test this hypothesis by using a small test script:
#!/bin/sh
wp
Create this script, say test.sh, give it executable permissions, and run it in all the environments to see where it fails.
So as the title says, I'm trying to run liferay inside a docker container, and from there connect to a database on an outside node.
I can successfully ping the server that the SQL Server is running on from inside the docker container. However, when I try to connect to the database through liferay's configuration interface, it simply says a connection could not be established, and the logs state that the login for the user failed.
If it's not possible, I understand, just trying to get a better idea of this little mess.
======================================================================
Just to note, I've been using snasello's docker image for liferay, except taking out the preconfigured database to force liferay to go to the configuration page. I'm starting the container with
docker run --rm -it -p 8080:8080 {whatever the local name of the image is}
00:00:34,301 WARN [C3P0PooledConnectionPoolManager[identityToken->21r35xoL]-HelperThread-#6][BasicResourcePool:1851] com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask@3b17c58d -- Acquisition Attempt Failed!!! Clearing pending acquires. While trying to acquire a needed new resource, we failed to succeed more than the maximum number of allowed acquisition attempts (3). Last acquisition attempt exception:
java.sql.SQLException: Cannot open database "lportal" requested by the login. The login failed.
    at net.sourceforge.jtds.jdbc.SQLDiagnostic.addDiagnostic(SQLDiagnostic.java:368)
    at net.sourceforge.jtds.jdbc.TdsCore.tdsErrorToken(TdsCore.java:2820)
    at net.sourceforge.jtds.jdbc.TdsCore.nextToken(TdsCore.java:2258)
    at net.sourceforge.jtds.jdbc.TdsCore.login(TdsCore.java:603)
    at net.sourceforge.jtds.jdbc.ConnectionJDBC2.<init>(ConnectionJDBC2.java:345)
    at net.sourceforge.jtds.jdbc.ConnectionJDBC3.<init>(ConnectionJDBC3.java:50)
    at net.sourceforge.jtds.jdbc.Driver.connect(Driver.java:184)
    at com.mchange.v2.c3p0.DriverManagerDataSource.getConnection(DriverManagerDataSource.java:146)
    at com.mchange.v2.c3p0.WrapperConnectionPoolDataSource.getPooledConnection(WrapperConnectionPoolDataSource.java:195)
    at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPool$1PooledConnectionResourcePoolManager.acquireResource(C3P0PooledConnectionPool.java:211)
    at com.mchange.v2.resourcepool.BasicResourcePool.doAcquire(BasicResourcePool.java:1086)
    at com.mchange.v2.resourcepool.BasicResourcePool.doAcquireAndDecrementPendingAcquiresWithinLockOnSuccess(BasicResourcePool.java:1073)
    at com.mchange.v2.resourcepool.BasicResourcePool.access$800(BasicResourcePool.java:44)
    at com.mchange.v2.resourcepool.BasicResourcePool$ScatteredAcquireTask.run(BasicResourcePool.java:1810)
    at com.mchange.v2.async.ThreadPoolAsynchronousRunner$PoolThread.run(ThreadPoolAsynchronousRunner.java:648)
00:00:34,301 WARN [C3P0PooledConnectionPoolManager[identityToken->21r35xoL]-HelperThread-#6][BasicResourcePool:894] Having failed to acquire a resource, com.mchange.v2.resourcepool.BasicResourcePool@80d65ef is interrupting all Threads waiting on a resource to check out. Will try again in response to new client requests.
00:00:34,303 WARN [C3P0PooledConnectionPoolManager[identityToken->21r35xoL]-HelperThread-#9][BasicResourcePool:894] Having failed to acquire a resource, com.mchange.v2.resourcepool.BasicResourcePool@80d65ef is interrupting all Threads waiting on a resource to check out. Will try again in response to new client requests.
00:00:34,304 WARN [C3P0PooledConnectionPoolManager[identityToken->21r35xoL]-HelperThread-#1][BasicResourcePool:894] Having failed to acquire a resource, com.mchange.v2.resourcepool.BasicResourcePool@80d65ef is interrupting all Threads waiting on a resource to check out. Will try again in response to new client requests.
You should link the mysql container to the liferay container using the --link docker flag. The alias you provide to the mysql container should be db_lep.
docker run -d --name mysqldb --env-file=.credentials mysql
docker run -d --link mysqldb:db_lep -p 8080:8080 {whatever the local name of the image is}
If you look at https://github.com/snasello/docker-liferay-6.2/blob/master/lep/portal-bd-MYSQL.properties, the host for the database is db_lep. If you provide your own properties file, then you should change the alias to whatever is in your properties. If you are using localhost, then instead of linking you should make the containers share the same network, as sketched below.
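On current Docker versions, a user-defined network with an alias is the usual replacement for --link (a sketch; the network name liferay-net is made up):

docker network create liferay-net
docker run -d --name mysqldb --network liferay-net --network-alias db_lep --env-file=.credentials mysql
docker run -d --network liferay-net -p 8080:8080 {whatever the local name of the image is}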
Rechecking the errors, it turned out there was an issue with SQL Server's authentication. Solved via this helpful post.
Thanks guys!