Why does using $[] together with $ cause a write conflict?
db.projectionTesting.updateOne({"metaData.title": "BCDe"} , {$set : {
"metaData.0.title" : "efde" ,
"metaData.$[].hasUpdateddd": 76
}} )
WriteError({
"index" : 0,
"code" : 40,
"errmsg" : "Updating the path 'metaData.$[].hasUpdateddd' would create a conflict at 'metaData'",
"op" : {
"q" : {
"metaData.title" : "BCDe"
},
"u" : {
"$set" : {
"metaData.0.title" : "efde",
"metaData.$[].hasUpdateddd" : 76
}
},
"multi" : false,
"upsert" : false
}
})
If the following two updates work fine, why doesn't the one above? I want to understand what exactly the problem is with using $[] together with $ in an update operation.
db.projectionTesting.updateOne({"metaData.title": "BCDe"} ,
{$set : {"metaData.0.title" : "efde" ,
"metaData.$.hasUpdateddd": 76}} )
db.projectionTesting.updateOne({"metaData.title": "BCDe"} ,
{$set : {"metaData.$.title" : "efde" ,
"metaData.$.hasUpdateddd": 76}} )
In order to ensure consistency among replica set members, MongoDB replication requires that operations added to the oplog be idempotent. To achieve this, update operators such as $inc and most array update operators are recorded in the oplog as a $set of the field's resulting value.
When you combine two updates that both use metaData.$, you are updating one specific element.
When you combine metaData.0 and metaData.$[], you are making unrelated updates to a single element and to the entire array.
The query executor has not been given logic to be able to ensure an idempotent operation in that situation, so it balks with a write conflict.
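A minimal workaround sketch, assuming the same collection and fields as above, is simply to split the work into two update operations so that a numeric-index path and the all-positional $[] path never appear in the same $set:
// Sketch: issue the writes separately instead of combining
// metaData.0 with metaData.$[] in a single $set.
// Set hasUpdateddd on every element first (titles are untouched,
// so the filter still matches afterwards).
db.projectionTesting.updateOne(
    {"metaData.title": "BCDe"},
    {$set: {"metaData.$[].hasUpdateddd": 76}}
)
// Then update the title of the first array element.
db.projectionTesting.updateOne(
    {"metaData.title": "BCDe"},
    {$set: {"metaData.0.title": "efde"}}
)
Note that splitting the update means the two writes are no longer applied as one atomic operation.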
I'm new to MongoDB. I have the object below:
{
"_id" : "ABCDEFGH1234",
"level" : 0.6,
"pumps" : [
{
"pumpNo" : 1
},
{
"pumpNo" : 2
}
]
}
And I just want to move the level field into each object of the pumps array, like this:
{
"_id" : "ABCDEFGH1234",
"pumps" : [
{
"pumpNo" : 1,
"level" : 0.6
},
{
"pumpNo" : 2,
"level" : 0.6
}
]
}
I've checked the Aggregation section of the MongoDB docs but didn't find anything. In SQL I could do this with a JOIN or a subquery, but here it's NoSQL.
Could you please help me with this? Thank you.
Try this on for size:
db.foo.aggregate([
// Run the existing pumps array through $map and for each
// item (the "in" clause), create a doc with the existing
// pumpNo and bring in the level field from doc. All "peer"
// fields to 'pumps' are addressable as $field.
// By $projecting to a same-named field (pumps), we effectively
// overwrite the old pumps array with the new.
{$project: {pumps: {$map: {
input: "$pumps",
as: "z",
in: {pumpNo:"$$z.pumpNo", level:"$level"}
}}
}}
]);
I strongly recommend exploring the power of $map, $reduce, $concatArrays, $slice, and the other array operators that make the MongoDB query language different from the more scalar-based approach of SQL.
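If the goal is to rewrite the stored documents rather than just reshape them at query time, one possible sketch, assuming MongoDB 4.2+ pipeline-style updates and the same foo collection, is:
// Sketch: persist the reshaping with a pipeline-style update (MongoDB 4.2+).
db.foo.updateMany({}, [
    // Copy level into every element of pumps.
    {$set: {pumps: {$map: {
        input: "$pumps",
        as: "z",
        in: {pumpNo: "$$z.pumpNo", level: "$level"}
    }}}},
    // Drop the now-redundant top-level field.
    {$unset: "level"}
]);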
I have this simple database with one single collection, and when I try a simple query on a field and value that exist, it returns nothing.
One document from the collection:
{
"title" : "Cupone Salice Salentino",
"sku" : 1000126,
"vendor" : "messapia-tesori-del-salento",
"image" : "",
"estimatedprice" : 21,
"finalprice" : 21,
"qty" : 1,
"category" : "Vins & alcools",
"status" : "fulfilled"
}
Code:
db.orders.find(); // this works
db.orders.find({qty : 2}); // this returns nothing
I think you have not shown the whole document here. Most likely "qty" sits inside an array in your document, which is why the top-level query returns nothing.
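If that is the case, a hedged sketch of the fix, assuming a hypothetical array field named items that holds the qty values, is to query into the array with dot notation:
// Sketch: if qty lives inside an array of subdocuments ("items" is a
// hypothetical name), dot notation matches any element of that array.
db.orders.find({"items.qty": 2});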
Currently we are performing full-text search in MSSQL with the query:
select * from contract where number like 'word%'
The problem is that a contract number may look like
АА-1641471
TST-100069
П-5112-90-00230
001-1000017
1617/292/000001
and ES splits all of these into tokens.
How do I configure ES not to split these contract numbers into tokens, and to perform the same kind of search as the SQL query above?
The closest solution I've found is to perform a query like this:
{
"size": 10,
"query": {
"regexp": {
"contractNumber": {
"value": ".*п-11.*"
}
}
}
}
This solution works the same as MSSQL LIKE 'word%' with values like 1111, 2568, etc., but fails with п-11.
One option could be to use the wildcard query, which can handle any wildcard combination, i.e. %val%, %val, or val%:
{
"query": {
"wildcard" : { "contractNumber" : "*11" }
}
}
NOTE: It's not recommended to start the search term with a wildcard; it could be extremely slow.
To prevent string values from being tokenized, you need to update your index mapping and keep the analyser away from the field. One way of doing that is to define the property as type keyword instead of text:
PUT /_template/template_1
{
  "index_patterns" : ["your_index*"],
  "order" : 0,
  "settings" : {
    "number_of_shards" : 1
  },
  "mappings" : {
    "your_document_type" : {
      "properties" : {
        "contractNumber" : {
          "type" : "keyword"
        }
      }
    }
  }
}
NOTE: replace your_index with your index name and your_document_type with the document type.
Once the template is added, delete the current index and recreate it; it will then pick up the template's properties and your contractNumber will be indexed as a keyword.
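With the field mapped as keyword, a sketch of the equivalent of LIKE 'word%', assuming the index and field names above, is a prefix query, which avoids the leading wildcard:
GET your_index/_search
{
  "query": {
    "prefix": { "contractNumber": "П-5112" }
  }
}
Note that keyword fields are not lowercased, so this match is case-sensitive unless you add a normalizer to the mapping.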
Assume we have the following collection, which I have a question about:
{
"_id" : 1,
"user_id" : 12345,
"items" : [
{
"item_id" : 1,
"value" : 21,
"status" : "active"
},
{
"item_id" : 2,
"value" : 22,
"status" : "active"
},
{
"item_id" : 3,
"value" : 23,
"status" : "active"
},
...
{
"item_id" : 1000,
"value" : 1001,
"status" : "active"
},
]
}
In the collection I have a lot of documents (as many as there are users in the system, about 100K documents). Every document has around 1000 subdocuments inside the "items" array.
The list of operations that will be used:
Read the whole document when a user logs in to the system (rare operation).
Update a single element of the nested items array, setting "value" and "status", on almost every user click (frequent operation):
db.items.update({_id : 1 , "items.item_id" : 1000} , {$set: {"items.$.value": 1000}})
Insert a new document with 1000 elements in the nested array into the collection. This happens on new user registration (very rare operation).
The question is: Do I need to create a compound index like
db.items.createIndex( { "_id": 1, "items.item_id": 1 } )
to help MongoDB update a particular subdocument inside the array, or does MongoDB search the whole document regardless of the compound index? Or can someone propose a different schema for such a scenario?
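For what it's worth, one way to check whether the filter part of such an update can use an index is to explain the equivalent find; this is only a sketch, using the field names from the example above, and the output details vary by server version:
// Sketch: look for IXSCAN (index scan) versus COLLSCAN (collection scan)
// in the winning plan for the update's filter.
db.items.find({_id: 1, "items.item_id": 1000}).explain("queryPlanner")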
I have a Person collection that is made up of the following structure
{
"_id" : ObjectId("54ddd6795218e7964fa9086c"),
"_class" : "uk.gov.gsi.hmpo.belt.domain.person.Person",
"imagesMatch" : true,
"matchResult" : {
"_id" : null,
"score" : 1234,
"matchStatus" : "matched",
"confirmedMatchStatus" : "notChecked"
},
"earlierImage" : DBRef("image", ObjectId("54ddd6795218e7964fa9086b")),
"laterImage" : DBRef("image", ObjectId("54ddd67a5218e7964fa908a9")),
"tag" : DBRef("tag", ObjectId("54ddd6795218e7964fa90842"))
}
Notice that the "tag" is a DBRef.
I've got a Spring Data finder that looks like the following:
Page<Person> findByMatchResultNotNullAndTagId(@Param("tagId") String tagId, Pageable page);
When this code is executed the find query looks like the following:
{ matchResult: { $ne: null }, tag: { $ref: "tag", $id: ObjectId('54ddd6795218e7964fa90842') } } sort: {} projection: {} skip: 0 limit: 1
Which is fine: I get a collection of 1 person back (limit=1). However, the page details are not correct. I have 31 persons in the collection, so I should have 31 pages. What I get is the following:
"page" : {
"size" : 1,
"totalElements" : 0,
"totalPages" : 0,
"number" : 0
}
The count query looks like the following:
{ count: "person", query: { matchResult: { $ne: null }, tag.id: "54ddd6795218e7964fa90842" } }
That tag.id doesn't look correct to me compared with the equivalent find query above.
I've found that if I add a new method to org.springframework.data.mongodb.core.MongoOperations:
public interface MongoOperations {
public long count(Query query, Class<?> entityClass, String collectionName);
}
and then re-jig AbstractMongoQuery.execute(Query query) to use that method instead of the similar method without the entityClass parameter, I get the correct paging results.
Question: Am I doing something wrong or is this a bug in Spring Data Mongo?
Edit
Taking inspiration from Christoph, I've added the following test code on GitHub: https://github.com/tedp/Spring-Data-Test
The information contained in the returned Page depends on the query executed. Assuming a total of 31 elements in your collection, only a few of them, or even just one, might match the given criteria by referencing the tag with id 54ddd6795218e7964fa90842. Therefore you only get the total number of elements that match the query, not the total number of elements in your collection.
This bug was actually fixed in DATAMONGO-1120, as pointed out by Christoph. I needed to override the Spring Data version to 1.6.2.RELEASE until the next iteration of Spring Boot, which will presumably uplift Spring Data to at least 1.6.2.RELEASE.