Update unique compound indexes on an existing data set - database

Problem:
I'm trying to update a unique compound index on an existing data set, and MongoDB isn't updating the index.
Background:
In the database for our web app we have a unique compound index on a user's clubID and email. This means emails must be unique within a given clubID.
I'm in the process of updating this index to allow users to share emails. We added a new property to the user model called 'primaryAccountHolder'.
I want the new compound index to allow users with the same clubID to share an email, but only one user in the same club may have the field primaryAccountHolder set to true. I have this index working locally, but updating it on our existing data set is unsuccessful.
I believe this is because we have existing entries in our DB that won't allow this index to be updated. So my question is:
how can I update a compound index that maintains uniqueness on an existing data set?
Below are the indexes I have created using Mongoose / TypeScript. These work locally but not on our existing DB.
Old Index:
UserSchema.index({ email: 1, clubID: 1 }, { unique: true })
// Won't allow a user of the same club to have the same email. This is the index on our DB.
New Index:
UserSchema.index({ email: 1, clubID: 1 }, { unique: true, partialFilterExpression: { email: { $exists: true }, primaryAccountHolder: { $eq: true } } })
// Will allow users to share an email but only one of them can have the primary account holder field set to true.
The new index uses a partial filter expression. This is the part that isn't created on the existing data set.
Thanks for the help!
Sam Gruse

You'll have to drop and recreate the index. Note that dropIndex and createIndex are collection-level operations rather than Mongoose schema methods, so run them against the collection (assuming it is named users), e.g. in the mongo shell:
db.users.dropIndex({ email: 1, clubID: 1 })
And then recreate it:
db.users.createIndex(
  { email: 1, clubID: 1 },
  {
    unique: true,
    partialFilterExpression: {
      email: { $exists: true },
      primaryAccountHolder: { $eq: true }
    }
  }
)
from MongoDB Documentation:
https://docs.mongodb.com/manual/tutorial/manage-indexes/#modify-an-index

MongoDB cannot update an existing index. You need to drop the current index and create the new one.
From https://docs.mongodb.com/manual/tutorial/manage-indexes/#modify-an-index:
To modify an existing index, you need to drop and recreate the index. The exception to this rule is TTL indexes, which can be modified via the collMod command in conjunction with the index collection flag.
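Before recreating the index, it can help to see which existing documents would violate the new constraint. A minimal sketch in the mongo shell (the collection name users is an assumption), grouping primary-account-holder documents by email and clubID and reporting duplicates:

```js
// Find (email, clubID) pairs that appear on more than one
// primaryAccountHolder: true document -- these will block the
// unique partial index from being built.
db.users.aggregate([
  { $match: { email: { $exists: true }, primaryAccountHolder: true } },
  { $group: {
      _id: { email: "$email", clubID: "$clubID" },
      count: { $sum: 1 },
      ids: { $push: "$_id" }
  } },
  { $match: { count: { $gt: 1 } } }
])
```

Fix or reassign the reported documents first, and the new index should build cleanly.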

Related

How do I insert a document along with the result of a find query together in a collection in a single query in MongoDB?

Suppose I have a collection named oldCollection which has a record like
{
  name: "XYZ"
}
Now I want to insert this data into a new collection named newCollection. But I also want to add another key-value pair (say, a boolean field exists) to this same record, like:
{
  name: "XYZ",
  exists: true
}
I am using a find query to extract the required data and insert it into the new collection, but how can I add more fields (like exists in the above example) to the same record?
Use the $out aggregation stage:
db.oldCollection.aggregate([
  {
    "$addFields": { "exists": true }
  },
  {
    "$out": "newCollection"
  }
])
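Note that $out replaces the target collection wholesale. If the target may already contain documents you want to keep, $merge (available since MongoDB 4.2) writes into it without dropping existing data; a minimal sketch:

```js
db.oldCollection.aggregate([
  { "$addFields": { "exists": true } },
  // $merge upserts into the target instead of replacing it.
  { "$merge": { "into": "newCollection" } }
])
```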

Query using ObjectId in MongoDB

I have a notes collection as:
{
  note: {
    type: String,
  },
  createdBy: {
    type: String,
    required: true
  },
}
where "createdBy" contains the _id of a user from the users collection.
First question: should I define it as String or ObjectId?
Second question:
When querying the data as db.notes.find({ createdBy: ObjectId(userid) }, 'note'), is it an O(1) operation?
Or do I have to create an index for that to be O(1)?
If your users collection uses ObjectId for _id, you should also use ObjectId in the notes collection, since you may want to $lookup between the two.
By default, only the _id field is indexed when a collection is created. You need to create an index on createdBy yourself; strictly speaking a B-tree index lookup is O(log n) rather than O(1), but it avoids a full collection scan.
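A minimal sketch of creating that index in the mongo shell (the collection name notes is taken from the question):

```js
// Index createdBy so lookups by user avoid a full collection scan.
db.notes.createIndex({ createdBy: 1 })

// This query can then be served by the index:
db.notes.find({ createdBy: ObjectId(userid) })
```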

Indexing a large array in MongoDB

According to the MongoDB documentation, it's not recommended to create a multikey index for large arrays, so what is the alternative?
I want to notify my app users whenever one of their contacts also starts using the app, so I have to upload and manage the contact list of each user.
We are using MongoDB with a replica set of one primary and two secondaries.
Can Mongo handle multikey indexing for arrays with hundreds of values?
Hundreds of contacts for hundreds of thousands of users can be very hard to manage.
The multikey solution looks like this:
{
  customerId: "id1",
  contacts: ["aaa", "aab", "aac", .... "zzz"]
}
index: createIndex({ contacts: 1 }).
Another solution is to save each contact in its own document and store all the app users related to it:
{
  phone: "aaa",
  contacts: ["id1", "id2", "id3"]
},
{
  phone: "aab",
  contacts: ["id1"]
},
{
  phone: "aac",
  contacts: ["id1"]
},
......
{
  phone: "zzz",
  contacts: ["id1"]
}
index: createIndex( { phone: 1 } )
Both have poor write performance when uploading the contact list: the first maintains a huge index, and the second updates many documents concurrently.
Is there a better way to do it?
I'm using a replica set with two secondaries; could a shard key help?
Thanks
To index a field that holds an array value, MongoDB creates an index key for each element in the array. These multikey indexes support efficient queries against array fields.
So if I were you, my data model would be like this:
{
  customerId: "id1",
  contacts: ["_idx", "_idy", "_idw", .... "_idz"]
}
Then create your index on contacts. MongoDB creates an index on _id by default. You will also have to create documents for the non-app users; just add a field like "app_user": true/false to distinguish them.
For index performance, you can build the index in the background without any issues, and this works on replica sets as well.
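A sketch of the index build for the model above (the collection name is an assumption; the background option is relevant only on MongoDB versions before 4.2, which changed how index builds work):

```js
// Multikey index: one index key per element of the contacts array.
db.customers.createIndex({ contacts: 1 }, { background: true })
```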
As for sharding, it won't help you yet, because you can't shard anything with a single primary node in your cluster. Sharding needs at least two shards, each backed by its own replica set. In your case, you could add a fourth server, form two replica sets of one primary and one secondary each, and transform your system into two replicated shards.
Once that is done, MongoDB will balance the load between the two shards, even though a few hundred documents per user isn't really much for MongoDB to deal with.
On the other hand, if you do go for sharding, you will need more setup, including config servers, if you're using MongoDB 3.4 or higher.

Cannot update item where two columns combination must be unique

I am making an application with Azure Mobile Services which stores users' hours and points in a table called AttendedClubs. The problem I have is that when I try to update one specific user, it selects all of the users with the same club ID. I need an update function that finds the user whose club ID and UniqueUserID together are unique, then updates the hours and points for that one result.
Controller Code
$scope.saveChanges = function() {
  $scope.show($ionicLoading);
  var query = client.getTable('AttendedClubs')
    .update({
      id: clubID.getJson(),
      UniqueUserID: memberID.getJson(),
      Hours: $scope.profile.Hours,
      Points: $scope.profile.Points
    })
    .done(function(results) {
      $scope.hide($ionicLoading);
    }, function(error) {
      $scope.hide($ionicLoading);
      alertDialogue.pop("No Internet Connection", "Check Connection");
    });
}
You are using the clubID as the record's id, which is why the update applies to every user in that club. The id needs to be unique per record, so construct one yourself. When you create your object, do something like:
var table = client.getTable('AttendedClubs');
table.insert({
  id: uuid.v4(),
  clubID: clubID.getJson(),
  UniqueUserID: memberID
  ...
});
To update all records, first do a fetch, then do an update on a per-record basis. Use the id to uniquely identify the record.
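A hedged sketch of that fetch-then-update flow with the Azure Mobile Services JavaScript client (field names mirror the question; error handling trimmed for brevity):

```js
var table = client.getTable('AttendedClubs');

// Fetch the one record matching both keys, then update it by its own id.
table.where({ clubID: clubID.getJson(), UniqueUserID: memberID.getJson() })
  .read()
  .done(function(results) {
    var record = results[0];
    record.Hours = $scope.profile.Hours;
    record.Points = $scope.profile.Points;
    table.update(record); // record.id uniquely identifies the row
  });
```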

ServiceStack.OrmLite returning "empty records"

I'm starting with ServiceStack and using OrmLite to access my database. I used the Northwind example that comes bundled and modified it to access a SQL Server database.
I changed the name of the table (Customer to Client) and the POCO class (Customer.cs) attributes so they match the correct ones in my table. When the request is made, the returned data consists of an array containing N empty objects, N being the number of records in the desired table.
If I add/remove records in the table, this action is reflected in the returned data. So OrmLite is querying the table, but I can't understand why my records are not populated.
The original json output:
{
  Customers: [
    { Id: "...", CompanyName: "..." },
    { Id: "...", CompanyName: "..." },
    { Id: "...", CompanyName: "..." }
  ],
  ResponseStatus: { ... }
}
After modification, I'm receiving:
{
  Clients: [
    {},
    {},
    {}
  ],
  ResponseStatus: {}
}
Note the array with the N empty objects as value of the Clients key.
