First of all: I don't want to connect to a Docker container running mongo.
I am building a docker container that should access the mongo database I have installed on my running Ubuntu 18.04 machine.
Docker suggests this can be done fairly easily by just adding the -p flag to the run command, so I did this:
docker run -p 27017:27017 --name mycontainer myimage
Port 27017 is the default port for mongo (see here), and running netstat -pna | grep 27017 confirms this by returning the following:
tcp   0   0 127.0.0.1:27017   0.0.0.0:*         LISTEN      -
tcp   0   0 127.0.0.1:27017   127.0.0.1:55880   ESTABLISHED -
tcp   0   0 127.0.0.1:55882   127.0.0.1:27017   ESTABLISHED -
tcp   0   0 127.0.0.1:55880   127.0.0.1:27017   ESTABLISHED -
tcp   0   0 127.0.0.1:27017   127.0.0.1:55884   ESTABLISHED -
tcp   0   0 127.0.0.1:27017   127.0.0.1:55882   ESTABLISHED -
tcp   0   0 127.0.0.1:55884   127.0.0.1:27017   ESTABLISHED -
unix  2   [ ACC ] STREAM LISTENING 77163 - /tmp/mongodb-27017.sock
But when I run the docker command shown above, I get an error indicating that the port cannot be bound because it is already in use (which is actually the whole point of connecting to it):
docker: Error response from daemon: driver failed programming external connectivity on endpoint mycontainer (1c69e178b48ee51ab2765479e0ecf3eea8f43e6420e34b878882db8d8d5e07dd): Error starting userland proxy: listen tcp4 0.0.0.0:27017: bind: address already in use.
ERRO[0000] error waiting for container: context canceled
How should I proceed? What did I do wrong?
This depends on how your application connects to its database.
Almost all languages need the connection parameters.
Example with node & mysql:
const knex = require('knex')({
  client: 'mysql',
  connection: {
    host: '10.10.10.10',
    user: 'root',
    password: 'changeme',
    database: 'stackoverflow'
  },
  debug: true
});
Example with python & mongo:

import pymongo
conn = pymongo.MongoClient('mongodb://root:pass@10.10.10.10:27017/')
Traditionally, these connection parameters are stored in a properties or configuration file, one per environment: dev, staging, prod, etc.
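For illustration, such a file might look like this (file name and keys hypothetical; values match the examples below):

# application.properties (hypothetical file name and keys)
database.host=10.10.10.10
database.port=27017
database.user=root
database.password=changeme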
Configuration file
If your application uses this method to get the connection parameters, you just need to follow these steps:
Set the IP, port, user, and password in the configuration file, usually inside your source code: application.properties, config.yml, parameters.ini, etc.
Perform a docker build ... of your app.
Perform a docker run ... of your app. In this step, you don't need to pass any mongo parameters because they are already "inside" your app. Check this and this to understand why localhost is not used in docker; see also the sketch below.
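On that last point: inside a container, localhost refers to the container itself, not the machine running Docker. A hedged sketch of one way to reach the host's mongo from a container on Linux (requires Docker 20.10+ for the host-gateway alias; container and image names taken from the question above):

# no -p mapping is needed just to connect *out* to the host's mongod
docker run --add-host=host.docker.internal:host-gateway \
    --name mycontainer myimage
# inside the container, the host's mongo is then reachable at
# mongodb://host.docker.internal:27017
# caveat: the netstat output above shows mongod listening on 127.0.0.1
# only, so mongod's bindIp must also include an address reachable from
# the container (e.g. via /etc/mongod.conf)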
Disadvantage: This approach works in simple scenarios, but if you have several environments like dev, testing, staging, pre-prod, prod, etc., you will need to perform a build for each environment, because the connection parameters are baked into your app.
Environment variables
This is my favorite, and it is also the approach recommended on several platforms like Heroku, OpenShift, Cloud Foundry, etc.
In this approach you just need one build. The image can be deployed to any environment simply by setting the correct parameters before the run.
Example with node & mysql using environment variables:
const knex = require('knex')({
  client: 'mysql',
  connection: {
    host: process.env.database_host,
    user: process.env.database_user,
    password: process.env.database_password,
    database: process.env.database_database
  },
  debug: true
});
Example with python & mongo using environment variables:
import pymongo
import os

database_host = os.environ['database_host']
database_user = os.environ['database_user']
database_password = os.environ['database_password']

urlConnect = "mongodb://{}:{}@{}:27017/".format(database_user, database_password, database_host)
conn = pymongo.MongoClient(urlConnect)
As you can see, the source code does not need to read a properties file to get the connection parameters, because it expects them to be available as environment variables.
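If a variable might be missing during local development, a hedged variant is os.environ.get with a fallback (default values hypothetical):

import os

# os.environ.get returns the fallback instead of raising KeyError
database_host = os.environ.get('database_host', 'localhost')
database_user = os.environ.get('database_user', 'root')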
Finally, the steps with this approach will be:
Perform a docker build ... of your app.
Perform a docker run ... of your app. In this case, you need to send the variables from the host to your container:
docker run -it -p 8080:80 \
-e "database_host=10.10.10.10" \
-e "database_user=root" \
-e "database_password=pass" \
--name my_app my_container:1.0
Remote variables
If you have a distributed, scalable environment, you will want to manage your variables centrally.
Basically, you will have a web console to create, edit, delete, and export your variables. These variables must also be injected into your docker container in an easy way; see the sketch below.
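One easy injection mechanism is docker's --env-file flag; a sketch follows (file name and values hypothetical):

# variables.env (hypothetical), one KEY=value pair per line:
#   database_host=10.10.10.10
#   database_user=root
#   database_password=pass
docker run -it -p 8080:80 --env-file ./variables.env \
    --name my_app my_container:1.0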
Heroku, for example, offers a web console to manage your variables.
Check: #4 Centralized and manageable configuration
Description
I'm trying out service containers for integrated database tests in Azure DevOps pipelines.
As per this open-sourced dummy CI/CD pipeline project https://dev.azure.com/funktechno/_git/dotnet%20ci%20pipelines, I was experimenting with Azure DevOps service containers for integrated pipeline testing. I got postgres and mysql to work. I'm having issues with Microsoft SQL Server.
yml file
resources:
  containers:
  - container: mssql
    image: mcr.microsoft.com/mssql/server:2017-latest
    ports:
      # - 1433
      - 1433:1433
    options: -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=yourStrong(!)Password' -e 'MSSQL_PID=Express'

- job: unit_test_db_mssql
  # condition: eq('${{ variables.runDbTests }}', 'true')
  # continueOnError: true
  pool:
    vmImage: 'ubuntu-latest'
  services:
    localhostsqlserver: mssql
  steps:
  - task: UseDotNet@2
    displayName: 'Use .NET Core sdk 2.2'
    inputs:
      packageType: sdk
      version: 2.2.207
      installationPath: $(Agent.ToolsDirectory)/dotnet
  - task: NuGetToolInstaller@1
  - task: NuGetCommand@2
    inputs:
      restoreSolution: '$(solution)'
  - task: Bash@3
    inputs:
      targetType: 'inline'
      script: 'env | sort'
      # echo Write your commands here...
      # echo ${{agent.services.localhostsqlserver.ports.1433}}
      # echo Write your commands here end...
  - task: CmdLine@2
    displayName: 'enabledb'
    inputs:
      script: |
        cp ./MyProject.Repository.Test/Data/appSettings.devops.mssql.json ./MyProject.Repository.Test/Data/AppSettings.json
  - task: DotNetCoreCLI@2
    inputs:
      command: 'test'
      workingDirectory: MyProject.Repository.Test
      arguments: '--collect:"XPlat Code Coverage"'
  - task: PublishCodeCoverageResults@1
    inputs:
      codeCoverageTool: 'Cobertura'
      summaryFileLocation: '$(Agent.TempDirectory)\**\coverage.cobertura.xml'
db connection string
{
  "sqlserver": {
    "ConnectionStrings": {
      "Provider": "sqlserver",
      "DefaultConnection": "User ID=sa;Password=yourStrong(!)Password;Server=localhost;Database=mockDb;Pooling=true;"
    }
  }
}
Debugging
The most I can tell is that when I run Bash@3 to check the environment variables, postgres and mysql print something similar to
/bin/bash --noprofile --norc /home/vsts/work/_temp/b9ec7d77-4bc2-47ab-b767-6a5e95ec3ea6.sh
"id": "b294d39b9cc1f0d337bdbf92fb2a95f0197e6ef78ce28e9d5ad6521496713708"
"pg11": {
}
while the mssql one fails to print an id:
========================== Starting Command Output ===========================
/bin/bash --noprofile --norc /home/vsts/work/_temp/70ae8517-5199-487f-9067-aee67f8437bb.sh
}
}
Update: this doesn't happen when using ubuntu-latest, but the mssql connection issue remains.
Database Error logging.
I'm currently getting this error in the pipeline
Error Message:
Failed for sqlserver
providername:Microsoft.EntityFrameworkCore.SqlServer m:A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 40 - Could not open a connection to SQL Server)
In TestDbContext.cs, my error handling looks like this. If people have pointers for getting more details, it would be appreciated (see the sketch after this code).
catch (System.Exception e)
{
    var assertMsg = "Failed for " + connectionStrings.Provider + "\n" + " providername:" + dbContext.Database.ProviderName + " m:";
    if (e.InnerException != null)
        assertMsg += e.InnerException.Message;
    else
        assertMsg += e.Message;
    _exceptionMessage = assertMsg;
}
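As one pointer, hedged rather than definitive: Exception.ToString() includes the exception type, message, any inner exceptions, and the stack trace, so a variant like this captures more detail than the innermost message alone:

catch (System.Exception e)
{
    // ToString() walks the whole chain, including inner exceptions and stack trace
    _exceptionMessage = "Failed for " + connectionStrings.Provider
        + "\n providername:" + dbContext.Database.ProviderName
        + "\n " + e.ToString();
}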
Example pipeline: https://dev.azure.com/funktechno/dotnet%20ci%20pipelines/_build/results?buildId=73&view=logs&j=ce03965c-7621-5e79-6882-94ddf3daf982&t=a73693a5-1de9-5e3d-a243-942c60ab4775
Notes
I already know that the Azure DevOps pipeline mssql server doesn't work on the Windows agents, because they are Windows Server 2019 and the Windows container version of mssql server is not well supported and only works on Windows Server 2016. It fails on the initialize container step when I try that.
I tried several things for unit_test_db_mssql: changing the ubuntu version, changing parameters, changing the mssql server version. All give about the same error.
If people know of command-line methods that work in Linux to test whether the mssql docker instance is ready, this may help as well (see the sketch below).
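On the readiness question, a hedged sketch: poll the server with sqlcmd until a trivial query succeeds. This assumes sqlcmd is available (it ships in the mssql-tools package, and inside the server container it lives at /opt/mssql-tools/bin/sqlcmd) and reuses the password from the yml above:

# retry a trivial query until SQL Server accepts logins
for i in $(seq 1 30); do
  if /opt/mssql-tools/bin/sqlcmd -S localhost,1433 -U sa \
      -P 'yourStrong(!)Password' -Q 'SELECT 1' > /dev/null 2>&1; then
    echo "SQL Server is ready"
    break
  fi
  echo "waiting for SQL Server (attempt $i)..."
  sleep 2
done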
Progress Update
Got mssql working in GitHub Actions and GitLab pipelines so far.
Postgres and mysql work just fine on Azure DevOps and Bitbucket pipelines.
Still no luck getting mssql working on Azure DevOps. Bitbucket wouldn't work either, although it did give a lot of docker details about the running database; then again, nobody really cares about Bitbucket pipelines with their measly 50 minutes. You can see all of these pipelines in the referenced repository; they are all public, and pull requests are also possible since it's open-sourced.
Per the help from Starain Chen [MSFT] in https://developercommunity.visualstudio.com/content/problem/1159426/working-examples-using-service-container-of-sql-se.html, it looks like a 10-second delay is needed to wait for the container to be ready.
Adding

- task: PowerShell@2
  displayName: 'delay 10'
  inputs:
    targetType: 'inline'
    script: |
      # Write your PowerShell commands here.
      start-sleep -s 10
gets the db connection to work. I'm assuming the mssql docker container is ready by then.
I ran into this issue over the past several days and came upon your post. I was getting the same behavior and then something clicked.
IMPORTANT NOTE: If you are using PowerShell on Windows to run these commands use double quotes instead of single quotes.
that note is from https://hub.docker.com/_/microsoft-mssql-server
I believe that changing your line from
options: -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=yourStrong(!)Password' -e 'MSSQL_PID=Express'
to
options: -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=yourStrong(!)Password" -e "MSSQL_PID=Express"
should get it to work. I think it's also interesting that in the link you posted above, the password is passed in on its own line, which probably resolves the issue, not necessarily the 10-second delay (example below):
- container: mssql
  image: mcr.microsoft.com/mssql/server:2017-latest
  env:
    ACCEPT_EULA: Y
    SA_PASSWORD: Password123
    MSSQL_PID: Express
I'm trying to run Lando remotely to avoid consuming local resources. Sometimes I need to work on a laptop and lando+xdebug is a hungry beast.
Local
I don't have Lando running locally. I'm synchronizing my files using PHPStorm and Lando is running remotely.
Remote
I have a DigitalOcean droplet set up and running a Lando (drupal8) site. I can access the site and it's running as normal at:
http://165.xxx.xxx.xxx:ppppp
165.xxx.xxx.xxx being the IP of the droplet, and
ppppp being the port that Lando (docker) exposes for the container.
.lando.yml
name: XXXXXX
recipe: drupal8
config:
  php: 7.1
  webroot: ./docroot
  xdebug: false # overridden later
services:
  appserver:
    build:
      - composer install
  ruby:
    type: ruby:2.4
    run:
      - "cd $LANDO_MOUNT && gem install compass"
tooling:
  blt:
    service: appserver
    cmd: /app/vendor/acquia/blt/bin/blt
  gem:
    service: ruby
  compass:
    service: ruby
  fix-compass:
    service: ruby
    cmd: "gem install compass"
.lando.local.yml
Since I don't want this config for my fellow developers:
config:
  xdebug: true
  config:
    php: .lando.php.ini
.lando.php.ini
xdebug.remote_enable = 1
xdebug.remote_autostart = 1
xdebug.remote_connect_back = 0
xdebug.remote_host = localhost
xdebug.remote_port = 9002
xdebug.remote_log = /xdebug.log
xdebug.remote_mode = req
xdebug.idekey = PHPSTORM
PHPStorm Server
Host: localhost
Port: 9002
Debugger: Xdebug
Use path mappings (checked)
-- project --> /app
Steps I take to run this
Start listening for debug connections in PHPStorm
Create SSH tunnel with ssh -R 9002:localhost:9002 root@165.xxx.xxx.xxx
Refresh http://165.xxx.xxx.xxx:ppppp
Findings
Using lando php -i, I can see that xdebug is running (and all of my php.ini config is set) as it should, on port 9002.
Using nc -z localhost 9002 || echo 'no tunnel open', I can also tell that SSH tunnel is open for 9002, as it should be.
I don't get any prompt for incoming connections
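One hedged way to isolate where the connection stalls: temporarily stop PHPStorm's listener and listen on the port with netcat instead. If raw DBGp XML shows up when you reload the page, the tunnel path works and the problem is on the IDE side:

# on the local machine, with PHPStorm's debug listener stopped
# (some netcat variants want: nc -l -p 9002)
nc -l 9002
# then reload http://165.xxx.xxx.xxx:ppppp and watch for incoming XML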
Update:
Some progress when I forced 9002 open with:
sudo iptables -A INPUT -p tcp -d 0/0 -s 0/0 --dport 9002 -j ACCEPT
However, now I get this error
Log opened at 2019-08-20 02:54:17
I: Connecting to configured address/port: 165.xxx.xxx.xxx:9002.
W: Creating socket for '165.xxx.xxx.xxx:9002', poll success, but error: Operation now in progress (29).
E: Could not connect to client. :-(
Log closed at 2019-08-20 02:54:17
So I've been experiencing this off and on this year. I originally solved it a few weeks ago by upgrading PHPStorm from 2018.1 to 2019.1. It was indicated to me (in some thread on the internet) that the XML schema for the next version of xdebug was different. It worked immediately after updating. It seems, though, that this broke again in 2019.2.
The last time I got this to work was 2019.1.3. Best of luck!
Update: As I've continued to update both storm and php/xdebug, I've found that this has stabilized. It was all over the place for a while.
I have been facing a problem linking MongoDB Compass and the online Mongo Atlas, but when I type the mongoimport command, complete with its parameters extracted from the "Command Line Options" in my Atlas account, it throws up the error: error connecting to db server: no reachable servers.
I am running MongoDB Enterprise version 3.0.15 to connect Compass and my Atlas account, on Windows 7. I have tried various methods already described in the following links:
mongodb Failed: error connecting to db server: no reachable servers
mongoimport error - Failed: error connecting to db server: no reachable servers
mongorestore Failed: no reachable servers
including:
Specifying the configuration file with net parameters adjusted to bind IP 0.0.0.0 and port 27017, as described in some threads on this error. Also note that my configuration file did not have any replication parameters, so removing replication parameters was out of the question, as suggested in some posts.
Explicitly specifying/allowing inbound traffic in the Windows firewall for port 27017.
Resetting the replica set, although I could not understand why I would need to do that in the mongoimport case when my mongod instance is not even started with the --replSet rs0 option. The following link was used for resetting the replica set, as suggested in some posts* (https://vitalflux.com/mongodb-how-to-reset-mongo-replica-set/).
I also verified which ports mongo is listening on using db.serverCmdLineOpts(), with output like this: { "argv" : [ "mongod" ], "parsed" : { }, "ok" : 1 }
*https://serverfault.com/questions/424465/how-to-reset-mongodb-replica-set-settings/424714#424714
Mongoimport command used is:
mongoimport --host Cluster0-shard-0/cluster0-shard-00-00-1jypq.mongodb.net:27017,cluster0-shard-00-01-1jypq.mongodb.net:27017,cluster0-shard-00-02-1jypq.mongodb.net:27017 \
  --ssl --username <username> --password <password> \
  --authenticationDatabase admin --db tutorial \
  --collection somedocs --type CSV --file retail.csv --headerline
The error message is as follows:
2019-05-10T13:22:32.509+0500 [........................] tutorial.somedocs 4.0 KB/42.4 MB (0.0%)
2019-05-10T13:22:32.860+0500 Failed: error connecting to db server: no reachable servers
2019-05-10T13:22:32.860+0500 imported 0 documents
At this point, I am really out of ideas and do not know how to proceed. Looking forward to your valuable ideas in this regard.
Thanks,
I'm using Java but this isn't necessarily a Java question. Google's "java-compat" image is Debian (3.16.7-ckt20-1+deb8u3~bpo70+1 (2016-01-19)).
Here is my Dockerfile:
FROM gcr.io/google_appengine/java-compat
RUN apt-get -qqy update && apt-get -qqy install curl xvfb x11vnc
RUN mkdir -p ~/.vnc
RUN x11vnc -storepasswd xxxxxxxx ~/.vnc/passwd
EXPOSE 5900
ADD . /app
And in the Admin Console I created a firewall rule to open up 5900. And lastly I am calling the vnc server itself in the "_ah/start" startup hook with this command:
x11vnc -forever -usepw -create
All seems to be setup correctly but I'm unable to connect with TightVNC. I use the public (ephemeral) IP address for the instance I find in the Admin Console followed by ::5900 (TightVNC requires two colons for some reason). I'm getting a message that the server refused the connection. And indeed when I try to telnet to port 5900 it's blocked.
Next I SSH into the container machine and when I test the port on the container with wget xxx.xxx.xxx.xxx:5900 I get a connection. So it seems to me the container is not accepting connections on port 5900. Am I getting this right? Is it possible to open up ports and route my VNC client into the docker container? Any help appreciated.
Why I can't use Compute Engine: just to preempt some comments about using Google's Compute Engine environment instead of Managed VMs, I make heavy use of the Datastore and Task Queues in my code. I don't think those can run (or run natively/efficiently) on Compute Engine. But I may pose that as a separate question.
Update: Per Paul in the comments... having learned some of the docker terminology: Can I publish a port on the container in Google's environment?
Out of curiosity - why are you trying to VNC into your instances? If it's just for management purposes, you can SSH into Managed VM instances.
That having been said - you can use the network/forwarded_ports config to route traffic from the VM to the application container:
network:
  forwarded_ports:
    - 5900
  instance_tag: vnc
Put that in your app.yaml, and re-deploy your app. You'll also need to open the port in your firewall (if you intend on accessing this from the public internet):
gcloud compute firewall-rules create default-allow-vnc \
--allow tcp:5900 \
--target-tags vnc \
--description "Allow vnc traffic on port 5900"
Hope this helps!
I am trying to run a node.js server and a Redis server on an Amazon AWS EC2 micro instance.
I have installed the Redis server, and the redis-server command runs fine.
I use Forever to keep the Redis server running, and it works fine.
But when I start my Node server, it fails to connect to the Redis server.
It gives the following error:
Error Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED
Running forever list shows that the redis server is running fine:
info: Forever processes running
data: uid command script forever pid logfile uptime
data: [0] _pXw node app.js 26670 26671 /home/ubuntu/.forever/_pXw.log 0:0:0:13.463
data: [1] ylT1 node redis-server 25013 26681
I have verified that when redis-server starts, it starts on port 6379.
Can anyone explain why this error is happening and how I can fix it?
I use the following code to connect to Redis. I have the Redis client libraries installed.
var redis = require("redis"),
client = redis.createClient();
Everything runs fine when I run the code on my localhost.
If you are going to use Redis from outside of AWS, you can try the following steps, which helped me connect to a Redis server running on AWS from my local Node.js application:
1) On AWS: back up the config first with sudo cp /etc/redis/redis.conf /etc/redis/redis.conf.backup. A backup saves you a lot of energy figuring out what's wrong :)
2) On AWS: stop redis-server: sudo /etc/init.d/redis-server stop
3) On AWS: open /etc/redis/redis.conf and find the line bind 127.0.0.1. Copy and paste a new line below it: bind 0.0.0.0. So you could have several lines with the bind parameter. BTW, the connection port can be changed in redis.conf as well.
4) On AWS: start redis-server: sudo /etc/init.d/redis-server start
5) On AWS: type redis-cli ping; you should see a PONG message if redis-server started OK.
6) On AWS: now open the Security Group for your running instance and add a new rule with Type: Custom TCP Rule, Port Range: 6379.
7) In your local Node.js application:
var redis = require("redis");
var redisClient = redis.createClient(redis_port, redis_host);
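For completeness, a minimal sketch of the remote connection with an error listener (the host value is a hypothetical placeholder; use your instance's public IP or DNS name):

var redis = require("redis");

// hypothetical placeholders; substitute your instance's address
var redis_host = "xx.xx.xx.xx";
var redis_port = 6379;

var redisClient = redis.createClient(redis_port, redis_host);

// without an error listener, connection failures crash the process
redisClient.on("error", function (err) {
    console.log("Redis error: " + err);
});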
Have you checked the redis client-server connection on AWS using the ping-pong routine? Next, maybe you should try running it without forever, as root.