I have the following CSV:
matchId, score, players.Name, players.Goals
2730319610399, 5-0, John, 3
When I use mongoimport in Studio 3T, it is imported in the form I need because of the dot notation:
{
"matchId" : "2730319610399",
"score" : "5-0",
"players" : {
"Name" : "John",
"Goals" : "3"
}
}
My issue is that the CSV actually has one more player that I want to include in this import: the "players" array has two entries.
This is the actual CSV format:
matchId, score, players.Name, players.Goals, players.Name, players.Goals
2730319610399, 5-0, John, 3, Kyle, 2
But this does not work, and I get an error:
Every row will be parsed as one column.
The header contains a duplicate name "players.Name" in columns: [3, 5]
Is it possible to format the CSV so that I can add multiple values into the "players" array? I was thinking of naming the columns something like players[0].Name and players[1].Name, but that doesn't work because it creates two separate arrays: players[0] and players[1].
This is what I need the database structure to look like:
{
"matchId" : "2730319610399",
"score" : "5-0",
"players" : [
{
"Name" : "John",
"Goals" : "3"
},
{
"Name" : "Kyle",
"Goals" : "2"
}
]
}
Try with this:
matchId, score, players.Name, players.Goals
2730319610399, 5-0, John.Kyle, 3.2
then, use:
db.whatevercollection.find().snapshot().forEach(function (test) {
    // Split the dot-separated values back into arrays
    test.players.Name = test.players.Name.split('.');
    test.players.Goals = test.players.Goals.split('.');
    db.whatevercollection.save(test);
});
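If you then want "players" to be an array of subdocuments rather than two parallel arrays (to match the structure you posted), you could run one more pass; this is just a sketch, with whatevercollection standing in for your real collection name:
db.whatevercollection.find().snapshot().forEach(function (test) {
    // Zip the parallel Name/Goals arrays into an array of
    // { Name, Goals } subdocuments
    var names = test.players.Name;
    var goals = test.players.Goals;
    test.players = names.map(function (name, i) {
        return { Name: name, Goals: goals[i] };
    });
    db.whatevercollection.save(test);
});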
OK, so the best option would be to import a JSON file instead of a CSV.
For example:
{ "matchId":"2730319610399","score":"5-0","players":[{"Name":"John","Goals":"3"},{"Name":"Kyle","Goals":"2"}]}
{ "matchId":"2830319610399","score":"1-0","players":[{"Name":"Mauri","Goals":"1"}]}
then use mongoimport as follows:
mongoimport --db=TestDB --collection=TestCol --type=json example_file.json
You should then see the documents, with the nested "players" array, in Robo 3T.
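You can also verify the import from the mongo shell; assuming the database and collection names used above:
// Select the database used in the import, then inspect one document
use TestDB
db.TestCol.find({ matchId: "2730319610399" }).pretty()
// players should come back as an array of { Name, Goals } subdocuments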
Related
I'm new to MongoDB. I have the object below:
{
"_id" : "ABCDEFGH1234",
"level" : 0.6,
"pumps" : [
{
"pumpNo" : 1
},
{
"pumpNo" : 2
}
]
}
And I just want to move the level field into the pumps array's objects, like this:
{
"_id" : "ABCDEFGH1234",
"pumps" : [
{
"pumpNo" : 1,
"level" : 0.6
},
{
"pumpNo" : 2,
"level" : 0.6
}
]
}
I've checked the Aggregation section of the MongoDB docs but didn't find anything. In SQL I could do this with a JOIN or a subquery, but this is NoSQL.
Could you please help me with this? Thank you.
Try this on for size:
db.foo.aggregate([
// Run the existing pumps array through $map and for each
// item (the "in" clause), create a doc with the existing
// pumpNo and bring in the level field from doc. All "peer"
// fields to 'pumps' are addressable as $field.
// By $projecting to a same-named field (pumps), we effectively
// overwrite the old pumps array with the new.
{$project: {pumps: {$map: {
input: "$pumps",
as: "z",
in: {pumpNo:"$$z.pumpNo", level:"$level"}
}}
}}
]);
I strongly recommend you explore the power of $map, $reduce, $concatArrays, $slice, and the other array functions that make the MongoDB query language different from the more scalar-based approach in SQL.
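If you want to write that shape back to the collection instead of just projecting it, a pipeline-style update can do both steps at once; a minimal sketch, assuming MongoDB 4.2+ and the same foo collection:
db.foo.updateMany({}, [
  // Copy level into every pumps element...
  { $set: {
      pumps: {
        $map: {
          input: "$pumps",
          as: "z",
          in: { pumpNo: "$$z.pumpNo", level: "$level" }
        }
      }
  }},
  // ...then drop the old top-level field
  { $unset: "level" }
]);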
I have this collection:
{
username : "user1",
arr : [
{
name : "test1",
times : 0
},
{
name : "test2",
times : 5
}
]
}
I have an array with some objects. These objects have a name and a times value. Now I want to add new objects if my array doesn't already contain them. Example:
I already have the two objects with the names "test1" and "test2" in the collection. Now I want to insert the objects "test2", "test3" and "test4". It should only add the objects "test3" and "test4" to the array, and not "test2" again. The times value doesn't matter in this case; it should just be 0 when the object gets inserted.
Is there a way to do this with one query?
If you can insert test1, test2,... one by one, then you can do something like this.
db.collection.update(
{username : "user1", 'arr.name': {$ne: 'test2'}},
{$push: {
arr: {'name': 'test2', 'times': 0}
}
})
The $ne condition will prevent the update if the name is already present in arr.
You can also use the $addToSet operator, which is built just for this: it adds a value to an array only if the value is not already present. Note that it compares the whole embedded document, so two entries with the same name but different times count as different values.
Documentation: https://docs.mongodb.com/manual/reference/operator/update/addToSet/
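A minimal sketch of adding several objects in one query with $addToSet and $each, using the document from the question (db.collection is a placeholder):
db.collection.update(
  { username: "user1" },
  { $addToSet: {
      arr: { $each: [
        { name: "test3", times: 0 },
        { name: "test4", times: 0 }
      ]}
  }}
);
// Elements already present (as exact whole-document matches) are skipped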
I have the following db collection of users.
[{
name : "abc",
obj : { id : 123, arr : [ { fid : "a123", field : "0" }, { fid : "b123", field : "0" } ] }
},
{
name : "pqr",
obj : { id : 456, arr : [ { fid : "a456", field : "0" }, { fid : "b456", field : "0" } ] }
}]
I want to update the field value of the element with fid: "b456" to 1 in MongoDB.
How do I write a query for this?
Use the positional $ operator:
db.users.update({ "obj.arr.fid": "b456" }, { $set: { "obj.arr.$.field": "1" } })
You can do it as below:
db.users.update({ "obj.arr.fid": "b456" }, { $set: { "obj.arr.$.field": 1 } })
The positional $ operator acts as a placeholder for the first element
that matches the query document.
Have you tried something?
Maybe this can help you:
db.users.update({ name: "pqr" }, { $set: { "obj.arr.1.field": "1" } })
For more info, take a look at $set.
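If you are on MongoDB 3.6 or newer, arrayFilters is another way to target that element without relying on the positional $; a minimal sketch:
db.users.updateOne(
  { "obj.arr.fid": "b456" },
  { $set: { "obj.arr.$[elem].field": "1" } },
  { arrayFilters: [ { "elem.fid": "b456" } ] }
);
// Only the array element whose fid is "b456" gets field set to "1"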
My document structure is as below.
{
"_id" : {
"timestamp" : ISODate("2016-08-27T06:00:00.000+05:30"),
"category" : "marketing"
},
"leveldata" : [
{
"level" : "manager",
"volume" : [
"45",
"145",
"2145"
]
},
{
"level" : "engineer",
"volume" : [
"2145"
]
}
]
}
"leveldata.volume" embedded array document field can have around 60 elements in it.
In this case, "leveldata" is an array document.
And "volume" is another array field inside "leveldata".
We have a requirement to fetch specific elements from the "volume" array field.
For example, elements from specific positions, For Example, position 1-5 within the array element "volume".
Also, we have used positional operator to fetch the specific array element in this case based on "leveldata.level" field.
I tried using the $slice operator. But, it seems to work only with arrays not with array inside array fields, as that
is the case in my scenario.
We can do it from the application layer, but that would mean loading the entire the array element from mongo db to memory and
then fetching the desired elements. We want to avoid doing that and fetch it directly from mongodb.
The below query is what I had used to fetch the elements as required.
db.getCollection('mycollection').find(
{
"_id" : {
"timestamp" : ISODate("2016-08-26T18:00:00.000-06:30"),
"category" : "sales"
}
,
"leveldata.level":"manager"
},
{
"leveldata.$.volume": { $slice: [ 1, 5 ] }
}
)
Can you please let us know your suggestions on how to address this issue?
Thanks,
mongouser
Well, yes, you can use $slice to get that data, like this:
db.getCollection('mycollection').find({"leveldata.level":"manager"} , { "leveldata.volume" : { $slice : [3 , 1] } } )
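If you also need to restrict the output to the matching "leveldata" element and slice its "volume" at the same time, an aggregation pipeline is one option; a minimal sketch, assuming MongoDB 3.2+ and taking the first five positions:
db.getCollection('mycollection').aggregate([
  { $match: { "leveldata.level": "manager" } },
  { $project: {
      leveldata: {
        $map: {
          // Keep only the elements whose level matches...
          input: {
            $filter: {
              input: "$leveldata",
              as: "ld",
              cond: { $eq: ["$$ld.level", "manager"] }
            }
          },
          as: "ld",
          // ...and slice the first five entries of each volume array
          in: {
            level: "$$ld.level",
            volume: { $slice: ["$$ld.volume", 0, 5] }
          }
        }
      }
  }}
]);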
I want to employ hashtag searching in combination with the standard text search.
Here is the kind of query I wish to be able to make:
"leather trousers #vintage #london"
So in effect I want to strip off the #hashtagged elements and search for them by name, in a cumulative sense. First I want it to prioritise an exact match on the search string, then near matches on the string plus hashtags, then lastly, if there is no match on the search string, matches via the hashtags alone.
So items with both Vintage and London would be placed higher than ones with either Vintage or London.
Here is my mapping:
{
"title" : {
"type" : "string",
"analyzer" : "standard"
},
"hashtags" : {
"properties" : {
"id" : "integer",
"name" : "string"
}
}
}
So the query I want to make is
"exact or near match string" + "optional cumulative array match (preferably with fuzziness)"
or, in relation to my mapping,
"near or exact match on 'title'" + "cumulative array match with fuzziness on hashtags.name"
I've tried a fuzzy match but get back too many results with not enough clarity. I've tried a simple_query_string query but it returns weird results, and I tried a bool match but get back nothing when I add the array.
Any help anyone can offer will be more than gratefully accepted. Let me know if you need more info. Many thanks in advance for taking the time to read this.
maybe a "dis_max" query can work for you. it enable to make multiples differents queries and concat the results. So her it make a first queries where "hashtags = 'vintage london'" then "hashtags = 'vintage'" then "hashtags = 'london'". you can also add wildcards (*) in the researched data like "hashtags = 'london*'"
{
"fields" : ["hashtags", "title"],
"query" : {
"dis_max" : {
"tie_breaker" : 0,
"queries" : [ {
"wildcard" : {
"hashtags" : "vintage london"
}
}, {
"wildcard" : {
"hashtags" : "vintage"
}
}, {
"wildcard" : {
"hashtags" : "london"
}
}
]
}
},
"sort" : {
"_score" : "desc"
} }