I'm running the following query, and I want to get its executionTime (including the population):
const managerId = "023492745";
const company = await Companies.find({
  _id: "1234"
})
  .populate({
    path: "employees",
    match: {
      _id: { $ne: managerId },
    },
  })
  .explain();
I tried to use explain() on the query, but it only retrieves information about the find() part and not about the populate() part. How can I get the executionTime of the whole query?
explain is a command executed by the mongodb server, while populate is a function executed on the client side by mongoose.
The populate function works by receiving the results of the find from the server, then submitting additional queries to retrieve the corresponding data to place in each document.
The response to the explain command does not contain the found documents, only the statistics and metadata about the query, so there is nothing for populate to operate on.
Instead of explain, you might try increasing the log verbosity or enabling profiling on the mongod server to capture the subsequent queries.
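Since explain() runs on the server and populate() runs on the client, another option (a minimal sketch, not part of the original answer) is to time the whole round trip on the client. The timeIt helper below is hypothetical:

```javascript
// Hypothetical helper: measure the wall-clock time of any async operation.
async function timeIt(label, fn) {
  const start = process.hrtime.bigint();
  const result = await fn();
  const ms = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`${label}: ${ms.toFixed(1)} ms`);
  return result;
}

// Usage sketch, assuming the Companies model from the question:
// const company = await timeIt('find + populate', () =>
//   Companies.find({ _id: '1234' })
//     .populate({ path: 'employees', match: { _id: { $ne: managerId } } })
// );
```

Note this measures the client-observed latency, not server execution stats. Separately, mongoose.set('debug', true) will log every query mongoose sends, including the extra find() that populate() issues, which helps confirm what is actually hitting the server.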
In my React app I'm reading my Firestore collections and using a conditional query for filtering. The query contains multiple orderBy clauses (price, date, type), each of which can be ascending or descending. I use it like so:
const constraints = [];
if (price)
  constraints.push(orderBy("price", price == "1" ? "desc" : "asc"));
if (date)
  constraints.push(orderBy("postedDate", date == "1" ? "desc" : "asc"));

const posts = collection(db, "allPosts");
let q = query(posts, ...constraints);
const qSnapshot = await getDocs(q);
When running this and filtering by only one of them, it works. But when I use them together, it only works for the first query, in this case price, no matter if I change the value before or after.
What is the solution for this? Also, does this happen with where queries as well?
Every query that you execute against Firestore needs a matching index. For single-field queries, the indexes are automatically generated. But for queries involving multiple fields (including ordering results) you will often need to explicitly define the composite index yourself.
If the index that is needed for a query is not found, the server sends back an error and the SDK raises that error. So if you catch errors in your code and log them, you'll find the error message in your logging output. In that error message you'll find a direct link to the Firestore console to generate the exact index that is needed.
So:
Catch and log the error.
Find the message in the logging output.
Click the link in the error message.
Tell Firestore to generate the index with a single click.
Be patient while your existing data is indexed.
Try the query again. :)
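If you deploy with the Firebase CLI, the composite index can also be declared in firestore.indexes.json instead of clicking the console link. A sketch for the query above (collection and field names taken from the question; note that each ascending/descending combination you actually use needs its own composite index):

```json
{
  "indexes": [
    {
      "collectionGroup": "allPosts",
      "queryScope": "COLLECTION",
      "fields": [
        { "fieldPath": "price", "order": "DESCENDING" },
        { "fieldPath": "postedDate", "order": "DESCENDING" }
      ]
    }
  ]
}
```

Deploy it with `firebase deploy --only firestore:indexes`.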
I am trying to retrieve data from my MongoDB database. Given the document below, I want to select the Password for a given Username. So in this case, I will be looking through the database for a Username of 'e' and retrieving the password associated with that specific Username. I've tried looking everywhere but I can't seem to find a solution on how to do it. I am using Express, Node, and MongoDB for this personal project. What I have so far just looks up the database with .find({ Username: Username }), and it outputs the entire JSON object.
To clarify, I will be sending a request with a Username of value 'e' and looking it up in the database, trying to retrieve the value of Password.
{
  _id: 62d7712e6d6732706b46094e,
  Username: 'e',
  Password: 'hi',
  __v: 0
}
find takes multiple arguments; you can pass the projection in find itself, so the query will be like:
db.collectionName.find({ Username: 'e' }, { _id: 0, Password: 1 })
Mongo fetches _id by default, so you need to specifically tell it not to fetch _id, thus _id: 0.
For such scenarios there are two options. If Username is unique, I would suggest going with findOne rather than find:
db.collectionName.findOne({ Username: 'e' }).Password
The same will work if you have multiple records with the same Username but only want the first record. But if you want the passwords of all matching records as an array:
db.collectionName.find({ Username: 'e' }, { _id: 0, Password: 1 }).map(function (u) { return u.Password; })
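To illustrate that last variant: find() returns the matching documents, and map() then extracts just the password from each. A pure-JavaScript sketch of the mapping step (sample data invented for illustration; with Mongoose you would first do `await User.find({ Username: 'e' }, { _id: 0, Password: 1 })`):

```javascript
// Sample documents as they would come back with the projection applied
const docs = [
  { Password: 'hi' },
  { Password: 'hello' },
];

// The same .map() step from the answer, keeping only the password values
const passwords = docs.map(function (u) { return u.Password; });
console.log(passwords); // [ 'hi', 'hello' ]
```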
I have several IDs (usually 2 or 3) of users whom I need to fetch from the database. Thing is, I also need to know the distance from a certain point. Problem is, my collection has 1,000,000 documents (users) in it, and it takes upwards of 30 seconds to fetch the users.
Why is this happening? When I just use the $in operator for the _id it works fine and returns everything in under 200ms, and when I just use the $geoNear operator it also works fine, but when I use the 2 together everything slows down insanely. What do I do? Again, all I need is a few users with the IDs from the userIds array and their distance from a certain point (user.location).
EDIT: I also wanted to mention that when I use $nin instead of $in, the query performs perfectly. Only $in causes the problem when combined with $geoNear.
const user = await User.findById('logged in users id');
const userIds = ['id1', 'id2', 'id3'];

const results = await User.aggregate([
  {
    $geoNear: {
      near: user.location,
      distanceField: 'distance',
      query: {
        _id: { $in: userIds }
      }
    }
  }
]);
I found a workaround: I just query by the ID field, and later use a library to determine the distance of the returned docs from the central point.
Indexing your data could be a solution to your problem. Without an index, MongoDB has to scan through all documents.
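A sketch of how that might look with Mongoose (buildGeoNearPipeline is a hypothetical helper): $geoNear needs a geospatial index on the location field, and note that aggregate() does not cast strings to ObjectId the way regular Mongoose queries do, so convert the ids yourself:

```javascript
// Hypothetical helper assembling the pipeline from the question
function buildGeoNearPipeline(near, ids) {
  return [
    {
      $geoNear: {
        near,
        distanceField: 'distance',
        query: { _id: { $in: ids } },
      },
    },
  ];
}

// Usage sketch (assumes mongoose and the User model/schema from the question):
// userSchema.index({ location: '2dsphere' });  // geospatial index for $geoNear
// const ids = userIds.map((id) => new mongoose.Types.ObjectId(id)); // aggregate() does not auto-cast
// const users = await User.aggregate(buildGeoNearPipeline(user.location, ids));
```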
I'm hoping this is something really simple that I've misunderstood, as I'm new to both DynamoDB and Node.js.
I have a table that has id, siteId.
These are being used to create a Global Secondary Index named IdxSiteId.
I need to be able to grab some of the other item's values (not shown in the screenshot) using the siteId. From what I've read, the best option is to use Query rather than BatchGetItem or Scan. Through trial and error I got to this point:
const getWarnings = async (siteId) => {
  const params = {
    TableName: process.env.WARNINGS_TABLE_NAME,
    IndexName: 'IdxSiteId',
    KeyConditionExpression: 'SiteId = :var_siteId',
    ExpressionAttributeValues: {
      ':var_siteId': siteId
    },
    ProjectionExpression: 'id, endTime, startTime, warningSubType',
    ScanIndexForward: false
  };
  return new DynamoDB.DocumentClient().query(params).promise();
};
While this is the closest it has appeared to working, I'm getting the following error: Error retrieving current warnings ValidationException: Query condition missed key schema element: siteId. Looking at the examples online, I don't know what I'm doing wrong at this point.
As I said I'm sure this is super simple, but I could really do with a pointer.
Field names are case sensitive. Your GSI partition key is siteId, but your query uses SiteId = :var_siteId. It should be siteId = :var_siteId.
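Applied to the code from the question, the corrected params would look like this (a sketch; buildWarningsQuery is a hypothetical helper extracted so the one-character fix is easy to see):

```javascript
// Hypothetical helper building the query input; the only change from the
// question is the lowercase "siteId" in KeyConditionExpression.
function buildWarningsQuery(siteId, tableName) {
  return {
    TableName: tableName,
    IndexName: 'IdxSiteId',
    KeyConditionExpression: 'siteId = :var_siteId',
    ExpressionAttributeValues: { ':var_siteId': siteId },
    ProjectionExpression: 'id, endTime, startTime, warningSubType',
    ScanIndexForward: false,
  };
}

// Usage sketch, matching the question's DocumentClient call:
// const result = await new DynamoDB.DocumentClient()
//   .query(buildWarningsQuery(siteId, process.env.WARNINGS_TABLE_NAME))
//   .promise();
```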
I'd like my users to be able to update the slug on the URL, like so:
url.co/username/projectname
I could use the primary key, but unfortunately Firestore does not allow any modification of the assigned uid once set, so I created a unique slug field.
Example of structure:
projects: {
  P10syfRWpT32fsceMKEm6X332Yt2: {
    slug: "majestic-slug",
    ...
  },
  K41syfeMKEmpT72fcseMlEm6X337: {
    slug: "beautiful-slug",
    ...
  },
}
A way to modify the slug would be to delete the document and copy the data to a new one, but doing this becomes complicated as I have subcollections attached to the document.
I'm aware I can query by document key like so:
var doc = db.collection("projects");
var query = doc.where("slug", "==", "beautiful-slug").limit(1).get();
Here comes the questions.
Wouldn't this be highly impractical if I have more than 1000 docs in my database? Each time I call up a project (url.co/username/projectname), wouldn't it cost 1000+ reads as it has to query through all the documents? If yes, what would be the correct way?
As stated in this answer on Stack Overflow: https://stackoverflow.com/a/49725001/7846567, only the documents returned by a query are counted as read operations.
Now for your special case:
doc.where("slug", "==", "beautiful-slug").limit(1).get();
Thanks to Firestore's automatic single-field indexes, the server does not need to scan every document to answer this query; it looks up matches in the index on slug. More importantly for billing, only the documents actually returned to the client are counted, and by using limit(1) you will receive at most a single document, so only a single read operation is counted against your limits.
Using the where() function is the correct and recommended approach to your problem.