Recover MongoDB data from inaccessible replica set

I had previously created a 3-node Docker cluster of MongoDB with port 27017 mapped to that of the respective hosts.
I had then created a replica set rs0 with its members being host1.mydomain.com:27017, host2.mydomain.com:27017 and host3.mydomain.com:27017. Please note that while creating the replica set, I specified the members with their mydomain.com addresses and not as ${IP1}:27017, etc. I had the respective DNS records set up for each host.
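For reference, the set would have been initiated along these lines (a reconstruction from the description above, not the exact original command):
rs.initiate({
    _id: "rs0",
    members: [
        { _id: 0, host: "host1.mydomain.com:27017" },
        { _id: 1, host: "host2.mydomain.com:27017" },
        { _id: 2, host: "host3.mydomain.com:27017" }
    ]
})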
Thus, I could connect to this cluster with the connection string:
mongodb+srv://admin:<pass>@host1.mydomain.com,host2.mydomain.com,host3.mydomain.com/admin?replicaSet=rs0
Unfortunately, I have lost access to mydomain.com, as it has expired and been scooped up by another buyer.
I can still SSH into the individual hosts, log into the Docker containers, type mongo, then use admin, and successfully authenticate with db.auth(<user>, <pass>). However, I cannot connect to the replica set, nor can I export the data out of it.
Here's what I get when I SSH into one of the nodes and try to access the data:
$ mongo
MongoDB shell version v3.6.8
connecting to: mongodb://127.0.0.1:27017
Implicit session: session { "id" : UUID("fc3cf772-b437-47ab-8faf-5e0d16158ff0") }
MongoDB server version: 4.4.10
> use admin;
switched to db admin
> db.auth('admin', <pass>)
1
> show dbs;
2022-07-22T13:37:38.013+0000 E QUERY [thread1] Error: listDatabases failed:{
    "topologyVersion" : {
        "processId" : ObjectId("62da79de34490970182aacee"),
        "counter" : NumberLong(1)
    },
    "ok" : 0,
    "errmsg" : "not master and slaveOk=false",
    "code" : 13435,
    "codeName" : "NotPrimaryNoSecondaryOk"
} :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
Mongo.prototype.getDBs@src/mongo/shell/mongo.js:67:1
shellHelper.show@src/mongo/shell/utils.js:860:19
shellHelper@src/mongo/shell/utils.js:750:15
@(shellhelp2):1:1
> rs.slaveOk();
> show dbs;
2022-07-22T13:38:04.016+0000 E QUERY [thread1] Error: listDatabases failed:{
    "topologyVersion" : {
        "processId" : ObjectId("62da79de34490970182aacee"),
        "counter" : NumberLong(1)
    },
    "ok" : 0,
    "errmsg" : "node is not in primary or recovering state",
    "code" : 13436,
    "codeName" : "NotPrimaryOrSecondary"
} :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
Mongo.prototype.getDBs@src/mongo/shell/mongo.js:67:1
shellHelper.show@src/mongo/shell/utils.js:860:19
shellHelper@src/mongo/shell/utils.js:750:15
@(shellhelp2):1:1
How do I go about this? The DB contains important data that I would like to export or simply have the cluster (or one of the mongo hosts) running again.
Thanks!

Add the following records to the /etc/hosts file on each container running MongoDB, and on the client you are connecting from:
xxx.xxx.xxx.xxx host1.mydomain.com
yyy.yyy.yyy.yyy host2.mydomain.com
zzz.zzz.zzz.zzz host3.mydomain.com
Replace xxx.xxx.xxx.xxx, yyy.yyy.yyy.yyy, and zzz.zzz.zzz.zzz with the actual IP addresses that listen on port 27017.
If the client is Windows, the hosts file is located at %SystemRoot%\System32\drivers\etc\hosts
If the replica set recovers, you will be able to connect to the database without the +srv scheme:
mongodb://admin:<pass>@host1.mydomain.com,host2.mydomain.com,host3.mydomain.com \
?authSource=admin&replicaSet=rs0
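To check whether the set has actually recovered once the hosts entries are in place, here is a minimal status check to run from inside one of the containers (a sketch, assuming the admin credentials from the question):
// mongo --host host1.mydomain.com -u admin -p <pass> --authenticationDatabase admin
rs.status().members.forEach(function (m) {
    // expect one PRIMARY and two SECONDARY members once the set is healthy
    print(m.name + " " + m.stateStr);
});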
If you don't know the network configuration, or the replica set did not recover for any reason, you can still connect to the individual nodes as standalone instances.
Restart mongod without the --replSet parameter on the command line (somewhere in your Dockerfile) or without the replication section in mongodb.conf. That will resolve the "NotPrimaryOrSecondary" error.
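Once a node is running as a standalone instance, you can export everything from it with mongodump. A rough sketch, assuming the container is named mongo1 and reusing the admin credentials from the question (adjust names and paths to your setup):
docker exec -it mongo1 mongodump \
    --host 127.0.0.1 --port 27017 \
    --username admin --password <pass> --authenticationDatabase admin \
    --out /dump
You can then mongorestore the dump into a fresh replica set whose members are registered by IP, or by a domain you still control.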

Related

Phpstan doctrine database connection error

I'm using Doctrine in a project (not Symfony). In this project I also use PHPStan; I installed both phpstan/phpstan-doctrine and phpstan/extension-installer.
My phpstan.neon is like this:
parameters:
    level: 8
    paths:
        - src/
    doctrine:
        objectManagerLoader: tests/object-manager.php
Inside tests/object-manager.php, it returns the result of a call to a function that returns the entity manager.
Here is the code that creates the entity manager:
$database_url = $_ENV['DATABASE_URL'];
$isDevMode = $this->isDevMode();
$proxyDir = null;
$cache = null;
$useSimpleAnnotationReader = false;
$config = Setup::createAnnotationMetadataConfiguration(
    [$this->getProjectDirectory() . '/src'],
    $isDevMode,
    $proxyDir,
    $cache,
    $useSimpleAnnotationReader
);
// database configuration parameters
$conn = [
    'url' => $database_url,
];
// obtaining the entity manager
$entityManager = EntityManager::create($conn, $config);
When I run vendor/bin/phpstan analyze I get this error:
Internal error: An exception occurred in the driver: SQLSTATE[08006] [7] could not translate host name "postgres_db" to address: nodename nor servname provided, or not known
This appears because I'm using Docker and my database URL is postgres://user:password@postgres_db/database. postgres_db is the name of my database container, so the hostname is only known inside the Docker network.
When I run phpstan inside the container I do not get the error.
So is there a way to run phpstan outside Docker? I'm pretty sure that when I push my code, the GitHub workflow will fail because of this.
Does phpstan need to reach the database?
I opened an issue on the phpstan-doctrine GitHub repository and got an answer from @jlherren that explained:
The problem is that Doctrine needs to know what version of the DB server it is working with in order to instantiate the correct AbstractPlatform implementation, of which there are several available for the same DB vendor (e.g. PostgreSQL94Platform or PostgreSQL100Platform for postgres, and similarly for other DB drivers). To auto-detect this information, it will simply connect to the DB and query the version.
I just changed my database URL from:
DATABASE_URL=postgres://user:password@database_ip/database
To:
DATABASE_URL=postgres://user:password@database_ip/database?serverVersion=14.2
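If you build the connection parameters as an array instead of a plain URL, the same hint can be passed as a serverVersion entry. A sketch, assuming the same $config as in the code above:
// database configuration parameters, with the server version pinned so that
// Doctrine can pick the right platform class without connecting
$conn = [
    'url' => $_ENV['DATABASE_URL'],
    'serverVersion' => '14.2', // assumed PostgreSQL version, matching the URL above
];
$entityManager = EntityManager::create($conn, $config);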

Why does the WordPress host not work in BigDump?

I'm trying to import a big SQL file for a WordPress project that should be really simple.
I would like to use BigDump because phpMyAdmin gives a 413 error.
BigDump does not connect to the server.
I get this error (Hôte inconnu is French for "unknown host"):
Warning: mysqli::__construct(): (HY000/2002): php_network_getaddresses: getaddrinfo failed: Hôte inconnu.
Database connection failed due to php_network_getaddresses: getaddrinfo failed: Hôte inconnu
I copied the settings from wp-config.php into bigdump.php like this:
$db_server <=> DB_HOST
$db_name <=> DB_NAME
$db_username <=> DB_USER
$db_password <=> DB_PASSWORD
Note: DB_HOST is neither localhost nor 127.0.0.1.
This is the PHP code which runs this part:
$mysqli = new mysqli($db_server, $db_username, $db_password, $db_name);
if (mysqli_connect_error()) {
    echo ("<p class=\"error\">Database connection failed due to ".mysqli_connect_error()."</p>\n");
    echo ("<p>Edit the database settings in BigDump configuration, or contact your database provider.</p>\n");
    $error = true;
}
It's so weird. Should I put something else in $db_server?
Could someone help me?

Error in createIndexes: Failed to send "createIndexes" command with database "mydb": Failed to read 4 bytes: socket error or timeout

I have recently been migrating Python code to C using libmongoc-1.0 1.15. I am having trouble creating indexes. I am following the example here. I think it has something to do with my using MongoDB 4.2, since it changed all indexes to be built in the background by default, but I thought version 1.15.3 of libmongoc supports everything new in 4.2.
{ "createIndexes" : "mycol", "indexes" : [ { "key" : { "x" : 1, "y" : 1 }, "name" : "x_1_y_1" } ] }
{ }
Error in createIndexes: Failed to send "createIndexes" command with database "mydb": Failed to read 4 bytes: socket error or timeout
Any thoughts?
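For reference, the command document above corresponds to roughly this libmongoc call (a reconstructed sketch, not the original program; the connection string and names are placeholders, and error handling is trimmed):
#include <stdio.h>
#include <mongoc/mongoc.h>

int main (void)
{
    mongoc_init ();
    mongoc_client_t *client = mongoc_client_new ("mongodb://localhost:27017");
    mongoc_database_t *db = mongoc_client_get_database (client, "mydb");

    /* { "createIndexes" : "mycol", "indexes" : [ { "key" : { "x" : 1, "y" : 1 }, "name" : "x_1_y_1" } ] } */
    bson_t *cmd = BCON_NEW ("createIndexes", BCON_UTF8 ("mycol"),
                            "indexes", "[",
                                "{", "key", "{", "x", BCON_INT32 (1), "y", BCON_INT32 (1), "}",
                                     "name", BCON_UTF8 ("x_1_y_1"), "}",
                            "]");

    bson_t reply;
    bson_error_t error;
    if (!mongoc_database_write_command_with_opts (db, cmd, NULL, &reply, &error)) {
        fprintf (stderr, "Error in createIndexes: %s\n", error.message);
    }

    bson_destroy (&reply);
    bson_destroy (cmd);
    mongoc_database_destroy (db);
    mongoc_client_destroy (client);
    mongoc_cleanup ();
    return 0;
}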
"failed to send "createIndexes" command with database "testdb" mongodb error"
I was having a similar issue. In our case, one of the replica sets was the problem; after fixing that replica set and restarting the cluster, the issue was solved.

sh.isBalancerRunning(): false

I am trying to shard a MongoDB database like this:
1- Start each member of the shard replica set
mongod --shardsvr --port 27100 --replSet r1 --dbpath <some_path>\shardsvr\shardsvr1
mongod --shardsvr --port 27200 --replSet r2 --dbpath <some_path>\shardsvr\shardsvr2
2- Start each member of the config server replica set
mongod --configsvr --port 27020 --replSet cfg1 --dbpath <some_path>\configsvr\configsvr1
3- Connect to config server replica set
mongo --port 27020
4- Initiate the replica set
conf = {
    _id: "cfg1",
    members: [
        {
            _id: 0,
            host: "localhost:27020"
        }
    ]
}
rs.initiate(conf)
5- Start the mongos and specify the --configdb parameter
mongos --configdb cfg1/localhost:27020 --port 28000
6- Initiate the replica set of each shard
mongo --port 27100
var config = {_id: "r1", members: [{_id:0, host:"localhost:27100"}]}
rs.initiate(config)
exit
mongo --port 27200
var config = {_id: "r2", members: [{_id:0, host:"localhost:27200"}]}
rs.initiate(config)
exit
7- Connect to mongos to add shards
mongo --port 28000
sh.addShard("r1/localhost:27100")
sh.addShard("r2/localhost:27200")
8- Add some data
use sharddb
for (i = 10000; i < 30000; i++) {
    db.example.insert({
        author: "author" + i,
        post_title: "Blog Post by Author " + i,
        date: new Date()
    });
}
db.example.count()
9- Enable sharding
sh.enableSharding("sharddb")
10- Create the index as part of sh.shardCollection()
db.example.ensureIndex({author : 1}, true)
sh.shardCollection("sharddb.example", {author: 1})
11- Check if balancer is running
sh.isBalancerRunning()
However, in this step I get false as the response, and I don't know what I did wrong. I followed the steps of this tutorial.
With only 20000 documents that are ~100 bytes each, there is probably only 1 chunk.
Check with
use sharddb
db.printShardingStatus()
I repeated the steps you listed above, and got the following result:
{ "_id" : "sharddb", "primary" : "shard02", "partitioned" : true }
sharddb.example
shard key: { "author" : 1 }
unique: false
balancing: true
chunks:
shard02 1
{ "author" : { "$minKey" : 1 } } -->> { "author" : { "$maxKey" : 1 } } on : shard02 Timestamp(1, 0)
The mongos will monitor what it has added to each chunk, and notify the config server to consider splitting when it has seen enough data added. Then the balancer will automatically be activated when one shard contains several more chunks than another.
If you insert enough documents to trigger automatic splitting, or manually split the chunk, the balancer will begin doing its thing.
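For example, a manual split along the shard key could look like this (a sketch; the split point "author20000" is just an arbitrary mid-range value for the documents inserted above):
mongo --port 28000
sh.splitAt("sharddb.example", { author: "author20000" })
sh.status()  // should now show two chunks, which lets the balancer migrate one to the other shard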

Mongoose opening multiple unwanted TCP sockets on reconnect

Wanting to test a MongoDB server up/down procedure with Node/Mongoose, we found out that Mongoose can sometimes open hundreds of TCP sockets (which is unnecessary and potentially blocking for a user who is limited to a certain number of sockets). This occurs in the following case and environment:
Node supervised with PM2 and MongoDB supervised with daemontools
At a normal and clean startup:
$ netstat -alpet | grep mongo
tcp 0 0 *:27017 *:* LISTEN mongo 65910844 22930/mongod
tcp 0 0 localhost.localdomain:27017 localhost.localdomain:54595 ESTABLISHED mongo 65911104 22930/mongod
The last "ESTABLISHED" line repeated 5 times since the option (poolSize: 5) is specified in Mongoose ("mongo" is the user running mongod under daemontools)
When we have the Node procedure:
mongoose.connection.on('disconnected', function () {
    var options = {
        server: {
            auto_reconnect: true,
            poolSize: 5,
            socketOptions: { connectTimeoutMS: 5000 }
        }
    };
    console.log('Mongoose default connection disconnected ' + mongoose.connection.readyState);
    mongoose.connect(dbURI, options);
});
and we bring down MongoDB via daemontools (mongodbdaemon is a simple mongod command):
svc -d /service/mongodbdaemon
there is of course no mongod running on the system (verified with the netstat command), and the web server pages that use Mongoose report, as expected:
{"name":"MongoError","message":"topology was destroyed"}
The problem occurs at this stage. From the moment we bring MongoDB down, Mongoose accumulates all the connect() calls made in the 'disconnected' event handler. This means that the longer we wait before bringing MongoDB back up, the more TCP connections will be opened.
So bringing up MongoDB by
svc -u /service/mongodbdaemon
gives the following:
$ netstat -alpet | grep mongo | wc -l
850 'ESTABLISHED' TCP connections to mongod!
If we bring mongod down again, the hundreds of connections remain in the TIME_WAIT state until Linux cleans up the socket pool.
Questions
Can we check whether a MongoDB instance is available before connecting to it?
Can we configure Mongoose not to accumulate reconnection attempts every millisecond or so?
Is there a buffer for pending connection operations (as there is for mongoose.insert[...]) that we can access or clean manually?
The problem is reproducible on CentOS 6.7 / MongoDB 3.0.6 / mongoose 4.1.8 / node 4.0.0
Edit:
On the official Mongoose site, where I posted this question after posting it here, I received an answer: with auto_reconnect: true on the initial connect() operation (which is set by default), there is no reason to call connect() again in a 'disconnected' event callback.
This is true and it works just fine, but the question is now why this happens and how to avoid it (it is serious enough at the Linux system level to be an issue in Mongoose).
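In other words, the fix is to connect once and let the driver reconnect on its own. A sketch based on the quoted answer, reusing the options from the snippet above (the URI is a placeholder):
var mongoose = require('mongoose');
var dbURI = 'mongodb://localhost:27017/mydb'; // placeholder URI

// connect once at startup; auto_reconnect (the default) handles outages
var options = {
    server: {
        auto_reconnect: true,
        poolSize: 5,
        socketOptions: { connectTimeoutMS: 5000 }
    }
};
mongoose.connect(dbURI, options);

// log state changes, but do NOT call connect() again here
mongoose.connection.on('disconnected', function () {
    console.log('Mongoose default connection disconnected ' + mongoose.connection.readyState);
});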
Thanks!
