I have been facing a problem linking MongoDB Compass and the online MongoDB Atlas: when I run the mongoimport command, complete with the parameters taken from the "Command Line Options" section of my Atlas account, it throws the error error connecting to db server: no reachable servers.
I am running MongoDB Enterprise version 3.0.15 to connect Compass and my Atlas account, on the Windows 7 platform. I have tried various methods that are already described in these links:
mongodb Failed: error connecting to db server: no reachable servers
mongoimport error - Failed: error connecting to db server: no reachable servers
mongorestore Failed: no reachable servers
including:
Specifying the configuration file with the net parameters adjusted to bindIp 0.0.0.0 and port 27017, as described in some threads on this error (see the sketch after this list). Note that my configuration file did not have any replication parameters, so removing them, as suggested in some posts, was out of the question.
Explicitly allowing inbound traffic on port 27017 in the Windows firewall.
Resetting the replica set, although I could not understand why I would need to do that in the mongoimport case when my mongod instance is not even started with the --replSet rs0 option. The following link was used for resetting the replica set, as suggested in some posts* (https://vitalflux.com/mongodb-how-to-reset-mongo-replica-set/)
Verifying which ports mongod is listening on using db.serverCmdLineOpts(), with output like this: { "argv" : [ "mongod" ], "parsed" : { }, "ok" : 1 }
*https://serverfault.com/questions/424465/how-to-reset-mongodb-replica-set-settings/424714#424714
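For reference, the net section I adjusted looks roughly like this (a minimal sketch of the configuration file; the exact path and surrounding options depend on your installation):
net:
  bindIp: 0.0.0.0
  port: 27017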
The mongoimport command used is:
mongoimport --host Cluster0-shard-0/cluster0-shard-00-00-1jypq.mongodb.net:27017,
cluster0-shard-00-01-1jypq.mongodb.net:27017,
cluster0-shard-00-02-1jypq.mongodb.net:27017 --ssl --username <username>
--password <password> --authenticationDatabase admin --db tutorial
--collection somedocs --type CSV --file retail.csv --headerline
The error message is as follows:
2019-05-10T13:22:32.509+0500 [........................] tutorial.somedocs 4.0 KB/42.4 MB (0.0%)
2019-05-10T13:22:32.860+0500 Failed: error connecting to db server: no reachable servers
2019-05-10T13:22:32.860+0500 imported 0 documents
At this point I am really out of ideas and do not know how to proceed. Any suggestions or pointers would be much appreciated.
Thanks,
This question already has answers here:
From inside of a Docker container, how do I connect to the localhost of the machine?
(40 answers)
Closed 1 year ago.
First of all: I don't want to connect to a Docker container running mongo.
I am building a Docker container that should access the mongo database installed on my running Ubuntu 18.04 machine.
Docker suggests this can be done fairly easily by just adding the -p flag to the run command, so I did this:
docker run -p 27017:27017 --name mycontainer myimage
Port 27017 is the default port for mongo (see here), and running netstat -pna | grep 27017 confirms this by returning the following:
tcp 0 0 127.0.0.1:27017 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:27017 127.0.0.1:55880 ESTABLISHED -
tcp 0 0 127.0.0.1:55882 127.0.0.1:27017 ESTABLISHED -
tcp 0 0 127.0.0.1:55880 127.0.0.1:27017 ESTABLISHED -
tcp 0 0 127.0.0.1:27017 127.0.0.1:55884 ESTABLISHED -
tcp 0 0 127.0.0.1:27017 127.0.0.1:55882 ESTABLISHED -
tcp 0 0 127.0.0.1:55884 127.0.0.1:27017 ESTABLISHED -
unix 2 [ ACC ] STREAM LISTENING 77163 - /tmp/mongodb-27017.sock
But when I run the docker command shown above, I get an error indicating that the port cannot be bound because it is already in use (which is actually the whole point, since I want to connect to it):
docker: Error response from daemon: driver failed programming external connectivity on endpoint mycontainer (1c69e178b48ee51ab2765479e0ecf3eea8f43e6420e34b878882db8d8d5e07dd): Error starting userland proxy: listen tcp4 0.0.0.0:27017: bind: address already in use.
ERRO[0000] error waiting for container: context canceled
How should I proceed? What did I do wrong?
This depends on how your application connects to the database.
Almost all languages need the connection parameters.
Example with Node.js & MySQL:
const knex = require('knex')({
client: 'mysql',
connection: {
host: '10.10.10.10',
user: 'root',
password: 'changeme',
database: 'stackoverflow'
},
debug: true
});
Example with Python & Mongo:
import pymongo
conn = pymongo.MongoClient('mongodb://root:pass@10.10.10.10:27017/')
Traditionally these connection parameters are stored in a properties or configuration file. One per environment: dev, staging, prod, etc
Configuration file
If your application uses this method to get the connection parameters, you just need to follow these steps (a minimal sketch follows the list):
Set the IP, port, user, and password in the configuration file, usually inside your source code: application.properties, config.yml, parameters.ini, etc.
Perform a docker build ... of your app.
Perform a docker run ... of your app. In this step, you don't need to pass any mongo parameters because they are already "inside" your app. Check this and this to understand why localhost is not used in Docker.
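As a sketch of the first step, assuming a hypothetical config.ini and the pymongo driver used elsewhere in this answer, the application might read the parameters like this:
import configparser
import pymongo

# config.ini (hypothetical) would contain a [database] section with
# host, user and password entries baked into the image at build time.
config = configparser.ConfigParser()
config.read('config.ini')
db = config['database']

url = "mongodb://{}:{}@{}:27017/".format(db['user'], db['password'], db['host'])
conn = pymongo.MongoClient(url)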
Disadvantage: this approach works in simple scenarios, but if you have several environments like dev, testing, staging, pre-prod, prod, etc., you will need to perform a build for each environment, because the connection parameters are baked into your app.
Environment variables
This is my favorite, and it is also recommended on several platforms like Heroku, OpenShift, Cloud Foundry, etc.
In this approach you just need one build. That image can be deployed to any environment simply by supplying the correct parameters at run time.
Example with Node.js & MySQL using environment variables:
const knex = require('knex')({
client: 'mysql',
connection: {
host: process.env.database_host,
user: process.env.database_user,
password: process.env.database_password,
database: process.env.database_database
},
debug: true
});
Example with Python & Mongo using environment variables:
import pymongo
import os
database_host = os.environ['database_host']
database_user = os.environ['database_user']
database_password = os.environ['database_password']
urlConnect = "mongodb://{}:{}@{}:27017/".format(database_user, database_password, database_host)
conn = pymongo.MongoClient(urlConnect)
As you can see, the source code does not need to read a properties file to get the connection parameters, because it expects them to be available as environment variables.
Finally, the steps with this approach will be:
Perform a docker build ... of your app.
Perform a docker run ... of your app. In this case, you need to send the variables from the host to your container:
docker run -it -p 8080:80 \
-e "database_host=10.10.10.10" \
-e "database_user=root" \
-e "database_password=pass" \
--name my_app my_container:1.0
Remote variables
If you have a distributed, scalable environment, you will want to manage your variables centrally.
Basically, you will have a web console to create, edit, delete, and export your variables. These variables must also be injected into your Docker container in an easy way.
Example of how Heroku offers you a way to manage your variables:
Check:
#4 Centralized and manageable configuration
I am trying to connect to our AWS DocumentDB, but it fails with the following error:
2019-12-04T17:46:52.551-0800 W CONTROL [main] Option: ssl is deprecated. Please use tls instead.
2019-12-04T17:46:52.551-0800 W CONTROL [main] Option: sslCAFile is deprecated. Please use tlsCAFile instead.
2019-12-04T17:46:52.551-0800 W CONTROL [main] Option: sslAllowInvalidHostnames is deprecated. Please use tlsAllowInvalidHostnames instead.
MongoDB shell version v4.2.1
connecting to: mongodb://insights-db-2019-08-12-18-32-13.cih94xwdmniv.us-west-2.docdb.amazonaws.com:27017/?compressors=disabled&gssapiServiceName=mongodb
2019-12-04T17:46:52.684-0800 E NETWORK [js] SSL peer certificate validation failed: Certificate trust failure: CSSMERR_CSP_UNSUPPORTED_KEY_SIZE; connection rejected
2019-12-04T17:46:52.685-0800 E QUERY [js] Error: couldn't connect to server insights-db-2019-08-12-18-32-13.cih94xwdmniv.us-west-2.docdb.amazonaws.com:27017, connection attempt failed: SSLHandshakeFailed: SSL peer certificate validation failed: Certificate trust failure: CSSMERR_CSP_UNSUPPORTED_KEY_SIZE; connection rejected :
connect@src/mongo/shell/mongo.js:341:17
@(connect):2:6
2019-12-04T17:46:52.687-0800 F - [main] exception: connect failed
2019-12-04T17:46:52.687-0800 E - [main] exiting with code 1
The command I use:
mongo --ssl --host MY_DOCUMENT_DB_HOST_AND_PORT --sslCAFile MY_KEY_PATH --username MY_USERNAME --password MY_PASSWORD
A couple of troubleshooting steps I already tried:
Sent the exact same command and key to another Mac OS X machine on the same network --> worked fine
Uninstalled and reinstalled my mongo app mongodb-community@4.2
Try adding the rds-combined-ca-bundle.pem certificate to your Mac. I had a very similar error when trying to connect to DocumentDB using localhost through a forwarded port. The command I ran is:
sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain rds-combined-ca-bundle.pem
I got this command from this answer
For those hitting this issue post 2020, see the last reply in this thread: https://forums.aws.amazon.com/message.jspa?messageID=936916
Mac OS X Catalina has updated the requirements for trusted certificates. Trusted certificates must now be valid for 825 days or fewer (see https://support.apple.com/en-us/HT210176). Amazon DocumentDB instance certificates are valid for over four years, longer than the Mac OS X maximum. In order to connect directly to an Amazon DocumentDB cluster from a computer running Mac OS X Catalina, you must allow invalid certificates when creating the TLS connection. In this case, invalid certificates mean that the validity period is longer than 825 days. You should understand the risks before allowing invalid certificates when connecting to your Amazon DocumentDB cluster.
To connect to an Amazon DocumentDB cluster from OS X Catalina using the mongo shell, use the tlsAllowInvalidCertificates parameter.
mongo --tls --host <hostname> --username <username> --password <password> --port 27017 --tlsAllowInvalidCertificates
Basically, just ignore invalid certificates.
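If you need the same workaround from a driver instead of the shell, here is a minimal sketch using pymongo (assuming pymongo 3.9 or later, where the tls* options exist; the host, username, and password are placeholders):
import pymongo

# Accept certificates that macOS Catalina considers invalid (validity > 825 days)
conn = pymongo.MongoClient(
    "mongodb://<username>:<password>@<your-docdb-host>:27017/",
    tls=True,
    tlsCAFile="rds-combined-ca-bundle.pem",
    tlsAllowInvalidCertificates=True,
)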
I want others to be able to access MongoDB on my OS X machine. The firewall is off and my configuration file is:
systemLog:
destination: file
path: /usr/local/var/log/mongodb/mongo.log
logAppend: true
storage:
dbPath: /data/db
net:
bindIp: 0.0.0.0
port: 27017
and I start the service like this:
sudo mongod --config /usr/local/etc/mongod.conf
I can always access the database using mongo and mongo 127.0.0.1
but when I use mongo xx.xxx.xxx.xxx (my IP address),
the access fails with this output:
mongo xx.xxx.xxx.xxx
MongoDB shell version v3.4.17
connecting to: mongodb://xx.xxx.xxx.xxx:27017/test
2018-10-04T09:05:54.316+0800 W NETWORK [thread1] Failed to connect to xx.xxx.xxx.xxx:27017 after 5000ms milliseconds, giving up.
2018-10-04T09:05:54.317+0800 E QUERY [thread1] Error: couldn't connect to server xx.xxx.xxx.xxx:27017, connection attempt failed :
connect@src/mongo/shell/mongo.js:240:13
@(connect):1:6
exception: connect failed
I'm really puzzled here. Obviously it's a timeout, but my network connection is fine. I used Homebrew to install MongoDB, and I've tried every version from 3.0 to 4.0 with the same result. Google didn't help either.
Here is my MongoDB version and environment info:
root#Backup:~# mongod --version
db version v3.2.22
git version: 105acca0d443f9a47c1a5bd608fd7133840a58dd
OpenSSL version: OpenSSL 1.0.1f 6 Jan 2014
allocator: tcmalloc
modules: none
build environment:
distmod: ubuntu1604
distarch: x86_64
target_arch: x86_64
What I did was update
bindIp: 0.0.0.0 in both files: /etc/mongod.conf and /etc/mongodb.conf. After the service restart, I was able to connect to MongoDB remotely.
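To double-check that the change took effect, something like the following should show mongod listening on 0.0.0.0:27017 rather than 127.0.0.1:27017 (a sketch; the restart command depends on how your service is managed):
sudo service mongod restart          # or: sudo systemctl restart mongod
sudo netstat -plnt | grep 27017      # expect a LISTEN entry on 0.0.0.0:27017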
I am new to MongoDB and I am trying to get it configured and running on my Ubuntu server. When I enter this command in my terminal:
sudo service mongod start
I get the following output
start: Job is already running: mongod
So, when I try to enter the shell with
mongo
I get the following output
2015-02-24T14:54:39.557-0800 warning: Failed to connect to 127.0.0.1:27017, reason: errno:111 Connection refused
2015-02-24T14:54:39.559-0800 Error: couldn't connect to server 127.0.0.1:27017 (127.0.0.1), connection attempt failed at src/mongo/shell/mongo.js:146
I know I'm not working locally, so I headed over to the mongod.conf file and changed the following:
port = 5000
# Listen to local interface only. Comment out to listen on all interfaces.
bind_ip = 10.0.1.51
Here bind_ip is now my Ubuntu server's address and the port is 5000 as shown, so now I restart the service with
sudo service mongod restart
which outputs
mongod start/running, process 1755
And now I try to re-enter the shell with
mongo
and i still get the same error messages
MongoDB shell version: 2.6.7
connecting to: test
2015-02-24T15:01:26.229-0800 warning: Failed to connect to 127.0.0.1:27017, reason: errno:111 Connection refused
2015-02-24T15:01:26.230-0800 Error: couldn't connect to server 127.0.0.1:27017 (127.0.0.1), connection attempt failed at src/mongo/shell/mongo.js:146
exception: connect failed
Can someone help me out with this issue? I've been going through the forums and nothing appears to be working. Thanks.
If anyone is having trouble, I looked into mongod --help and found the following solutions:
mongod --smallfiles
or
mongod --nojournal
Hope this helps someone.
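If you prefer to keep these settings in the ini-style config file shown in the question, the equivalents would be (a sketch, assuming the 2.6-era configuration format):
smallfiles = true
nojournal = true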
When running SchemaSpy I get this error:
Connection failed because of the following error: "no pg_hba.conf entry for host "xxx.xxx.xxx.xxx", user "xxxx", database "xxx", SSL off"
The error occurs because the database does require an SSL connection.
Is there a way to turn on the SSL flag for a connection in SchemaSpy? I opened up the jar file but couldn't find anything. I know the PostgreSQL JDBC driver supports SSL, so this should theoretically be possible.
Otherwise, if anyone knows any open-source/freeware tools for reverse engineering a PostgreSQL database over an SSL connection, that would help a lot.
Thanks.
Do it like this:
java -jar schemaSpy_5.0.0 -t pgsql -host your-host-url -db your-database-name -s your-database-schema -u your-username -p your-password -connprops "ssl\=true;sslfactory\=org.postgresql.ssl.NonValidatingFactory" -o path-to-your-output-directory -dp path-to-your-jdbc-driver-jar-file
The trick is adding some additional parameters using the -connprops option: we set SSL to true (the ssl parameter) and ask the client (i.e., the driver) to unconditionally accept the SSL connection (the sslfactory parameter).
Per the PgJDBC documentation, use the ssl=true option in your URL's parameters, e.g.
jdbc:postgresql://myhost/mydb?ssl=true
If the host doesn't have a valid certificate or the cert doesn't match its hostname you can disable SSL validation too.
SchemaSpy accepts a JDBC URL for the connection, so this will work fine.
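If you also need to skip certificate validation, the same sslfactory trick shown in the other answer can go directly in the URL (a sketch; only do this if you accept the security trade-off of not validating the server certificate):
jdbc:postgresql://myhost/mydb?ssl=true&sslfactory=org.postgresql.ssl.NonValidatingFactory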