MongoDB replication (secondary server)

I set up a master-slave replication of MongoDB using 2 servers. The problem is that I always have to run rs.slaveOk() on the slave server after inserting data on the master. I want the secondary to be readable automatically (without having to run rs.slaveOk()). What configuration do I need to change? Thanks!
This is my rs.conf() for the master-slave replication:
rs2:PRIMARY> rs.conf()
{
  "_id" : "rs2",
  "version" : 3,
  "protocolVersion" : NumberLong(1),
  "members" : [
    {
      "_id" : 0,
      "host" : "192.168.56.104:27017",
      "arbiterOnly" : false,
      "buildIndexes" : true,
      "hidden" : false,
      "priority" : 1,
      "tags" : {
      },
      "slaveDelay" : NumberLong(0),
      "votes" : 1
    },
    {
      "_id" : 1,
      "host" : "192.168.56.106:27017",
      "arbiterOnly" : false,
      "buildIndexes" : true,
      "hidden" : false,
      "priority" : 0,
      "tags" : {
      },
      "slaveDelay" : NumberLong(0),
      "votes" : 1
    }
  ],
  "settings" : {
    "chainingAllowed" : true,
    "heartbeatIntervalMillis" : 2000,
    "heartbeatTimeoutSecs" : 10,
    "electionTimeoutMillis" : 10000,
    "getLastErrorModes" : {
    },
    "getLastErrorDefaults" : {
      "w" : 1,
      "wtimeout" : 0
    },
    "replicaSetId" : ObjectId("5a1e37704f3b7025eccaa874")
  }
}

You can create a file /etc/mongorc.js and add rs.slaveOk() there. The file is evaluated on every shell startup.
You can have a look at the mongo shell documentation on .mongorc.js for more clarification.
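For example, a minimal sketch of what /etc/mongorc.js could contain (the isMaster() guard is my own addition, not part of the original answer; it simply avoids calling rs.slaveOk() when the shell is connected to the primary or to a standalone server):
// /etc/mongorc.js -- evaluated on every mongo shell startup
try {
    if (db.isMaster().secondary) {
        // enable reads on this secondary for the current shell session
        rs.slaveOk();
    }
} catch (e) {
    // ignore errors if the server is not a replica set member
}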
Or, another way is to start mongo with the following command:
> mongo --port 27012 --eval "rs.slaveOk()" --shell
rs.slaveOk() is only valid for the shell session in which it is executed, so you need to either pass it in via a script (or --eval) or stay connected to the console with the --shell argument.

Related

How to make new array from another array during aggregation?

I have the following document:
{
  "subscriptionIds" : [
    ObjectId("60c312c6dbb5a49fbbf560ea")
  ],
  "gps" : {
    "type" : "Point",
    "coordinates" : [
      23.942706,
      54.932539
    ]
  },
  "online" : false,
  "suspended" : false,
  "hwModel" : "",
  "fw" : "",
  "lastSeen" : ISODate("2021-06-16T04:43:36.682Z"),
  "lastSimRequest" : ISODate("2021-06-16T04:34:59.749Z"),
  "lastLocation" : "LT",
  "lastLocationType" : "gps",
  "createdAt" : ISODate("2021-05-20T10:37:16.025Z"),
  "updatedAt" : ISODate("2021-06-11T07:37:56.981Z"),
  "notes" : "",
  "psk_seed" : "QTAebOeNP4nIs-JJSNNlkAQ78N_VaxOq98-_lQPCyZQ=",
  "lastOnline" : ISODate("2021-06-15T08:01:59.886Z"),
  "lastOffline" : ISODate("2021-06-16T04:43:36.682Z"),
  "onlineReason" : "deviceOnlineStatusFromAC",
  "offlineReason" : "deviceOfflineStatusTimeout",
  "allocationSettings" : "dataplan",
  "subscriptionDataplans" : [
    {
      "_id" : ObjectId("5fae82fc1224cc8d62b5bf17"),
      "organizationId" : ObjectId("5dd63d1c1d042f3018e8374e"),
      "organizationName" : "",
      "name" : "Naujas plan Telia 75GB",
      "enabled" : true,
      "contractsId" : [
        ObjectId("5e32847ab8013befcc14bb1b"),
        ObjectId("5e32847ab8013befcc14bb1b")
      ],
      "simQuota" : 0,
      "periodQuota" : NumberLong(0),
      "allocationRules" : null,
      "createdAt" : ISODate("2020-11-13T12:58:36.650Z"),
      "updatedAt" : ISODate("2021-06-14T08:08:28.728Z"),
      "notes" : "",
      "allowRoaming" : false,
      "enablePriorityOrdering" : false,
      "priorityOrdering" : ""
    },
    {
      "_id" : ObjectId("5fcf25662b1c7d9bab8c1f7d"),
      "organizationId" : ObjectId("5dd63d1c1d042f3018e8374e"),
      "organizationName" : "",
      "name" : "London test",
      "enabled" : true,
      "contractsId" : [
        ObjectId("5e5dfea1efcf754767408eae")
      ],
      "simQuota" : 0,
      "periodQuota" : NumberLong(0),
      "createdAt" : ISODate("2020-12-08T07:04:06.255Z"),
      "updatedAt" : ISODate("2021-06-15T09:28:07.472Z"),
      "notes" : "",
      "allowRoaming" : true,
      "enablePriorityOrdering" : false,
      "priorityOrdering" : ""
    }
  ]
}
Is there a way to build the following array from the "_id" and "allowRoaming" fields?
"dataplanRoaming": [
  {
    "_id" : ObjectId("5fae82fc1224cc8d62b5bf17"),
    "allowRoaming" : false
  },
  {
    "_id" : ObjectId("5fcf25662b1c7d9bab8c1f7d"),
    "allowRoaming" : true
  }
]
My best result so far is below. I tried $project, $addFields, etc., but I still can't get the structure I want. The rest of the query works fine; only this part is missing:
"dataplanRoaming" : [
  [
    false,
    true
  ],
  [
    ObjectId("5fae82fc1224cc8d62b5bf17"),
    ObjectId("5fcf25662b1c7d9bab8c1f7d")
  ]
],
I hoped that
{$addFields: {dataplanRoaming: ["$subscriptionDataplans.allowRoaming", "$subscriptionDataplans._id"]}}
would give me the wanted result, but it just made an array with _id and allowRoaming as separate sub-arrays.
Is there a way to create my wanted result using aggregation?
Use $map to iterate over the subscriptionDataplans array and return only the needed fields:
db.collection.aggregate([
  {
    $addFields: {
      dataplanRoaming: {
        $map: {
          input: "$subscriptionDataplans",
          in: {
            _id: "$$this._id",
            allowRoaming: "$$this.allowRoaming"
          }
        }
      }
    }
  }
])
Playground
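If you do not need the rest of the document in the result, a variation of the same idea (my own sketch, not part of the original answer) is to use $project instead of $addFields, so the output contains only the new array plus _id:
db.collection.aggregate([
  {
    $project: {
      dataplanRoaming: {
        $map: {
          input: "$subscriptionDataplans",
          in: {
            _id: "$$this._id",
            allowRoaming: "$$this.allowRoaming"
          }
        }
      }
    }
  }
])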

Insert numbers into mongo-db collection

I am currently learning MongoDB and trying to insert numbers into my "numbers" collection (in the mongo command shell).
//This works :
db.numbers.insertMany([{"number":1},{"number":2}]);
//This doesn't
db.numbers.insertMany([1,2,3,4,5,6]);
(1) Does that mean that a number is not a valid document, or am I missing a very basic concept? (See the sketch after the shell output below.)
(2) Why is MongoDB not assigning an ObjectId to the numbers automatically in this case?
// actual output from the mongo shell (version 4.2.6) command line
> db.numbers.insertMany([{"number":1},{"number":2}]);
{
"acknowledged" : true,
"insertedIds" : [
ObjectId("5f79fa89d04cd9e2b3acbf03"),
ObjectId("5f79fa89d04cd9e2b3acbf04")
]
}
> db.numbers.find();
{ "_id" : ObjectId("5f79fa89d04cd9e2b3acbf03"), "number" : 1 }
{ "_id" : ObjectId("5f79fa89d04cd9e2b3acbf04"), "number" : 2 }
> db.numbers.insertOne({number:[1,2,3,4,5,6]});
{
"acknowledged" : true,
"insertedId" : ObjectId("5f79fa9ed04cd9e2b3acbf05")
}
> db.numbers.find();
{ "_id" : ObjectId("5f79fa89d04cd9e2b3acbf03"), "number" : 1 }
{ "_id" : ObjectId("5f79fa89d04cd9e2b3acbf04"), "number" : 2 }
{ "_id" : ObjectId("5f79fa9ed04cd9e2b3acbf05"), "number" : [ 1, 2, 3, 4, 5, 6 ] }
>
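Regarding question (1): top-level inserts must be documents (key/value objects), not bare values, which is why insertMany([1,2,3,4,5,6]) is rejected. As a hedged sketch (not from the original post), one way to insert plain numbers is to wrap each one in a document first:
// wrap each number in a document before inserting; the mongo shell runs JavaScript,
// so Array.prototype.map works here
db.numbers.insertMany([1, 2, 3, 4, 5, 6].map(function (n) { return { number: n }; }));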

MongoDB - fetching subdocuments in an array

My document has the following structure:
{
  "exp" : "2020-01-27",
  "session" : [
    {
      "parameters" : {
        "run" : "2020-01-27-23-01-32_experiment"
      },
      "session" : "2eb2e35a-69ea-11ea-b7b6-005056b146e8",
      "stage" : "intro",
      "is_finished" : false,
      "last_modified" : "2020-03-19 15:01:51"
    },
    {
      "parameters" : {
        "run" : "2020-01-27-23-01-32_experiment"
      },
      "session" : "32edb74c-69ea-11ea-b7b6-005056b146e8",
      "stage" : "experiment",
      "is_finished" : true,
      "last_modified" : "2020-03-19 15:02:22"
    },
    {
      "session" : "ffe003e2-69ed-11ea-b7b6-005056b146e8",
      "parameters" : {
        "run" : "2020-01-27-23-01-32_experiment"
      },
      "stage" : "intro",
      "is_finished" : true,
      "last_modified" : "2020-03-19 15:29:19"
    }
  ]
}
I would like to receive all subdocuments where is_finished = true. I tried:
db.getCollection("sess").find(
  {
    "session.is_finished" : true
  },
  {
    "session.$.session" : 1
  }
);
But I only receive the first element that matches the criteria, not both subdocuments.
How can I get all subdocuments where is_finished = true?
You can take advantage of aggregation:
db.getCollection("sess").aggregate([
  { $unwind: '$session' },
  { $match: { 'session.is_finished': true } }
])
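Note that $unwind produces one result document per matching array element. An alternative sketch (my own addition, not part of the original answer) keeps the matching subdocuments inside a single array per document by using $filter:
db.getCollection("sess").aggregate([
  {
    $project: {
      session: {
        $filter: {
          input: "$session",
          as: "s",
          cond: { $eq: ["$$s.is_finished", true] }
        }
      }
    }
  }
])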

MongoDB Array Data removing

This is the data I have. I want to remove the array element with a particular monitorId when the document matches the userId:
{
  "_id" : ObjectId("5afd8d562b2de0034953fdae"),
  "isActiveEnabled" : true,
  "isFrEnabled" : null,
  "isDriveEnabled" : true,
  "organization" : "5747f009544abb2ecbccae5f",
  "monitorList" : [
    {
      "timeFailSmsAlert" : false,
      "emailAlert" : true,
      "alcoholSmsAlert" : true,
      "failEmailAlert" : false,
      "displayName" : "test",
      "username" : "test",
      "monitorId" : "5748fcb6c9e3deeb30d8c74f",
      "organization" : "5747f009544abb2ecbccae5f"
    }
  ],
  "userId" : "5afd8d542b2de0034953fdac"
}
This is my query:
db.getCollection("userconfigs").update({'userId':'5b2f276ea93966a93474006e'},{$pull:{'monitorlist':{'monitorId':'5b30a4002dea1a0fd6597b79'}}})
This is the output I got, but nothing was removed:
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 0 })
Changing the field name from monitorlist to monitorList in the $pull query solved the issue, thanks.
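For reference, the corrected query based on the fix described above (field names are case-sensitive, so monitorList must match the document exactly):
db.getCollection("userconfigs").update(
  { userId: "5b2f276ea93966a93474006e" },
  { $pull: { monitorList: { monitorId: "5b30a4002dea1a0fd6597b79" } } }
)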

Is it possible to automatically delete data older than, for instance, 10 days from Elasticsearch in real time?

curl -XGET '127.0.0.1:9200/messages/message/_search?pretty' returns data like that shown below. I wonder whether it is possible to automatically delete data older than, for instance, 10 days from Elasticsearch, preferably in real time. I added my example data because there is a date field that could be used in this case. Or is there a different, more recommended method?
{
  "took" : 22,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : 2,
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "messages",
        "_type" : "message",
        "_id" : "1",
        "_score" : 1.0,
        "_source" : {
          "message" : "example message1"
        }
      },
      {
        "_index" : "messages",
        "_type" : "message",
        "_id" : "ZODslt0LZ1T6GMrC",
        "_score" : 1.0,
        "_source" : {
          "date" : "2018-05-25T10:06:06Z",
          "message" : "example message1"
        }
      }
    ]
  }
}
Elasticsearch Curator is exactly what you are looking for. You should create a separate index for each day.
For example, if your index names follow the pattern YOUR_INDEX_NAME-%{+YYYY.MM.dd}, then you can apply the configuration below:
actions:
  1:
    action: delete_indices
    options:
      ignore_empty_list: True
      timeout_override:
      continue_if_exception: False
      disable_action: False
    filters:
    - filtertype: pattern
      kind: prefix
      value: YOUR_INDEX_NAME-
      exclude:
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'   # date pattern in your index name
      unit: days
      unit_count: 10           # delete indices older than this many days
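A usage sketch (my own addition, not part of the original answer; the file paths are placeholders): save the action file above, point Curator at it together with a client config file, and preview the result with --dry-run first. Scheduling this command (for example via cron) gets you close to the "automatic" deletion asked about, though it is not strictly real time.
curator --config /path/to/curator.yml /path/to/delete_old_indices.yml --dry-run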
