Firebase Firestore compound orderBy query not working - reactjs

In my React app I'm reading my Firestore collections and using a conditional query for filtering. The query contains multiple orderBy constraints (price, date, type), each of which can be ascending or descending. I use it like so:
import { collection, getDocs, orderBy, query } from "firebase/firestore";

const constraints = [];
if (price)
  constraints.push(orderBy("price", price == "1" ? "desc" : "asc"));
if (date)
  constraints.push(orderBy("postedDate", date == "1" ? "desc" : "asc"));
const posts = collection(db, "allPosts");
let q = query(posts, ...constraints);
const qSnapshot = await getDocs(q);
When running this and filtering by only one of them, it works. But when I use them together, only the first constraint is applied, in this case price, no matter what value I set for the other one before or after.
What is the solution for this? And does this happen with where queries as well?

Every query that you execute against Firestore needs a matching index. For single-field queries, the indexes are generated automatically, but for queries involving multiple fields (including ordering on multiple fields) you often need to explicitly define the composite index yourself.
If the index that a query needs is not found, the server sends back an error and the SDK raises that error. So if you catch and log errors in your code, you'll find the error message in your logging output (see the sketch after the steps below). That error message contains a direct link to the Firestore console that generates the exact index that is needed.
So:
1. Catch and log the error.
2. Find the message in the logging output.
3. Click the link in the error message.
4. Tell Firestore to generate the index with a single click.
5. Be patient while your existing data is indexed.
6. Try the query again. :)
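For example, a minimal sketch of steps 1-3, reusing the question's query (the logging details are illustrative):
try {
  const qSnapshot = await getDocs(query(posts, ...constraints));
} catch (err) {
  // A missing composite index surfaces as a failed-precondition error whose
  // message contains a direct link to the Firestore console to create it.
  console.error("Query failed:", err.code, err.message);
}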

Related

Why is my MongoDB aggregation query so slow

I have several IDs (usually 2 or 3) of users whom I need to fetch from the database. Thing is, I also need to know the distance from a certain point. Problem is, my collection has 1,000,000 documents (users) in it, and it takes upwards of 30 seconds to fetch the users.
Why is this happening? When I just use the $in operator on _id it works fine and returns everything in under 200ms, and when I just use the $geoNear operator it also works fine; but when I use the two together everything slows down insanely. What do I do? Again, all I need is a few users with the IDs from the userIds array and their distance from a certain point (user.location).
EDIT: I also want to mention that when I use $nin instead of $in the query performs perfectly. Only $in causes the problem when combined with $geoNear.
const user = await User.findById('logged in users id');
const userIds = ['id1', 'id2', 'id3'];
const results = await User.aggregate([
  {
    $geoNear: {
      near: user.location,
      distanceField: 'distance',
      query: {
        _id: { $in: userIds }
      }
    }
  }
]);
I found a work-around: I just query by the ID field, and later I use a library to determine the distance of the returned docs from the central point.
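A hedged sketch of that work-around; the geolib package and the GeoJSON location field are my assumptions, not from the original post:
const { getDistance } = require('geolib');

// Fetch by ID only -- fast, since _id is always indexed.
const users = await User.find({ _id: { $in: userIds } });

// Compute each user's distance from the central point client-side.
// GeoJSON stores coordinates as [longitude, latitude].
const [lng, lat] = user.location.coordinates;
const withDistance = users.map((u) => ({
  ...u.toObject(),
  distance: getDistance( // meters
    { latitude: lat, longitude: lng },
    { latitude: u.location.coordinates[1], longitude: u.location.coordinates[0] }
  ),
}));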
Indexing your data could be a solution to your problem. Without an index, MongoDB has to scan through all documents.
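For instance, in the mongo shell (collection and field names assumed; note that $geoNear additionally requires a geospatial index on the field it searches):
db.users.createIndex({ location: "2dsphere" });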

Firebase pagination using a timestamp/date as the orderBy

I am struggling to get pagination working when I use a date (firebase timestamp) to retrieve data.
This is basically what I do:
let jobsRef = db.collection("jobs")
.orderBy('createdAt', 'desc')
.limit(QUERY_LIMIT);
jobsRef = jobsRef.startAfter(this.props.jobs[this.props.jobs.length - 1].createdAt);
However it seems that I sometimes get back items that I have already received. I am guessing because of similar dates?
So how could I basically return a list of jobs ordered by createdAt and have an offset/limit (pagination)?
createdAt looks like the timestamp type: 23 October 2020 at 17:26:31 UTC+2
When I log createdAt however I see this: {seconds: 1603537477, nanoseconds: 411000000}
Maybe I should be storing createdAt as a unix timestamp? Or what is the ideal way to deal with this?
If multiple documents can have the same value for the field you're sorting on, passing in a value for that field is not guaranteed to point to a unique document. So you indeed may be passing in an ambiguous instruction, leading to an unwanted result.
When possible, I highly recommend passing the entire document to the Firestore API. This leaves it up to Firestore to take the necessary data from that document to uniquely/unambiguously find the anchor for your query.
So instead of:
jobsRef.startAfter(this.props.jobs[this.props.jobs.length - 1].createdAt);
Do:
jobsRef.startAfter(this.props.jobs[this.props.jobs.length - 1]);
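A sketch of that pattern with the question's v8-style API (the state handling is illustrative):
let lastVisible = null; // last document snapshot of the previous page

async function fetchNextPage() {
  let jobsRef = db.collection("jobs")
    .orderBy("createdAt", "desc")
    .limit(QUERY_LIMIT);
  if (lastVisible) {
    // Pass the whole snapshot, not just its createdAt value, so the cursor
    // stays unambiguous even when several jobs share the same createdAt.
    jobsRef = jobsRef.startAfter(lastVisible);
  }
  const snapshot = await jobsRef.get();
  lastVisible = snapshot.docs[snapshot.docs.length - 1];
  return snapshot.docs.map((doc) => ({ id: doc.id, ...doc.data() }));
}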
I was facing a similar problem; after many hours I finally found a solution. All you need to do is convert that number to Firestore's Timestamp:
import { Timestamp } from '@angular/fire/firestore';
let createdAt: number = 56772766688383;
let timestamp = Timestamp.fromMillis(createdAt);
//then pass that to startAfter
startAfter(timestamp)

Querying Firestore without Primary Key

I'd like my users to be able to update the slug on the URL, like so:
url.co/username/projectname
I could use the primary key, but unfortunately Firestore does not allow any modification of the assigned uid once set, so I created a unique slug field.
Example of structure:
projects: {
  P10syfRWpT32fsceMKEm6X332Yt2: {
    slug: "majestic-slug",
    ...
  },
  K41syfeMKEmpT72fcseMlEm6X337: {
    slug: "beautiful-slug",
    ...
  },
}
A way to modify the slug would be to delete and copy the data on a new document, doing this becomes complicated as I have subcollections attached to the document.
I'm aware I can query by the slug field like so:
var doc = db.collection("projects");
var query = doc.where("slug", "==", "beautiful-slug").limit(1).get();
Here comes the questions.
Wouldn't this be highly impractical? If I have more than 1,000 docs in my database, wouldn't each visit to a project (url.co/username/projectname) cost 1,000+ reads, since the query has to look through all the documents? If yes, what would be the correct way?
As stated in this answer on StackOverflow: https://stackoverflow.com/a/49725001/7846567, only the document returned by a query is counted as a read operation.
Now for your special case:
doc.where("slug", "==", "beautiful-slug").limit(1).get();
Firestore queries are served from indexes, so the server does not have to read every document to find the matching one. And since you use limit(1), at most a single document is returned, so only a single read operation is counted against your limits.
Using the where() function is the correct and recommended approach to your problem.
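For completeness, the same lookup as a sketch with the v9 modular SDK (my choice of API version, not the OP's):
import { collection, getDocs, limit, query, where } from "firebase/firestore";

const q = query(
  collection(db, "projects"),
  where("slug", "==", "beautiful-slug"),
  limit(1)
);
const snapshot = await getDocs(q);
// Only the documents actually returned are billed as reads; with limit(1)
// that is at most one.
const project = snapshot.empty ? null : snapshot.docs[0].data();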

Error launching query in GAE Firestore DatastoreException: no matching index found

I have a problem with executing a query on Firestore in Google App Engine. Insertion is successful, but when the application tries to run a simple query I get the following error:
com.google.cloud.datastore.DatastoreException: no matching index found.
at com.google.cloud.datastore.spi.v1.HttpDatastoreRpc.translate(HttpDatastoreRpc.java:128)
at com.google.cloud.datastore.spi.v1.HttpDatastoreRpc.translate(HttpDatastoreRpc.java:113)
at com.google.cloud.datastore.spi.v1.HttpDatastoreRpc.runQuery(HttpDatastoreRpc.java:181)
at com.google.cloud.datastore.DatastoreImpl$1.call(DatastoreImpl.java:180)
at com.google.cloud.datastore.DatastoreImpl$1.call(DatastoreImpl.java:177)
at com.google.api.gax.retrying.DirectRetryingExecutor.submit(DirectRetryingExecutor.java:105)
at com.google.cloud.RetryHelper.run(RetryHelper.java:76)
at com.google.cloud.RetryHelper.runWithRetries(RetryHelper.java:50)
at com.google.cloud.datastore.DatastoreImpl.runQuery(DatastoreImpl.java:176)
at com.google.cloud.datastore.QueryResultsImpl.sendRequest(QueryResultsImpl.java:73)
at com.google.cloud.datastore.QueryResultsImpl.<init>(QueryResultsImpl.java:57)
at com.google.cloud.datastore.DatastoreImpl.run(DatastoreImpl.java:170)
at com.google.cloud.datastore.DatastoreImpl.run(DatastoreImpl.java:161)
...
Caused by:
com.google.datastore.v1.client.DatastoreException: no matching index found., code=FAILED_PRECONDITION
at com.google.datastore.v1.client.RemoteRpc.makeException(RemoteRpc.java:136)
at com.google.datastore.v1.client.RemoteRpc.makeException(RemoteRpc.java:185)
at com.google.datastore.v1.client.RemoteRpc.call(RemoteRpc.java:96)
at com.google.datastore.v1.client.Datastore.runQuery(Datastore.java:119)
at com.google.cloud.datastore.spi.v1.HttpDatastoreRpc.runQuery(HttpDatastoreRpc.java:179)
at com.google.cloud.datastore.DatastoreImpl$1.call(DatastoreImpl.java:180)
at com.google.cloud.datastore.DatastoreImpl$1.call(DatastoreImpl.java:177)
at com.google.api.gax.retrying.DirectRetryingExecutor.submit(DirectRetryingExecutor.java:105)
at com.google.cloud.RetryHelper.run(RetryHelper.java:76)
at com.google.cloud.RetryHelper.runWithRetries(RetryHelper.java:50)
at com.google.cloud.datastore.DatastoreImpl.runQuery(DatastoreImpl.java:176)
at com.google.cloud.datastore.QueryResultsImpl.sendRequest(QueryResultsImpl.java:73)
at com.google.cloud.datastore.QueryResultsImpl.<init>(QueryResultsImpl.java:57)
at com.google.cloud.datastore.DatastoreImpl.run(DatastoreImpl.java:170)
at com.google.cloud.datastore.DatastoreImpl.run(DatastoreImpl.java:161)
The query I run is as follows:
Query<Entity> q = Query.newEntityQueryBuilder()
    .setKind(tableName)
    .setOrderBy(OrderBy.asc("t"))
    .setFilter(PropertyFilter.le("t", 1000))
    .build();
QueryResults<Entity> result = datastore.run(q);
It doesn't seem to me to be a query that needs an index: for a single property, I read that the index is created automatically. I even created a single-field index in Firebase, but I always get the same error.
Can someone help me?
Thanks
The problem in your Datastore query is the order of the builder calls: filter first, then order by.
Query<Entity> q = Query.newEntityQueryBuilder()
    .setKind(tableName)
    .setFilter(PropertyFilter.le("t", 1000))
    .setOrderBy(OrderBy.asc("t"))
    .build();
QueryResults<Entity> result = datastore.run(q);
The built-in indexes can be used only in very specific cases. Some queries always require a composite index, and yours appears to be one of them. From Index configuration:
For more complex queries, an application must define composite, or manual, indexes. Composite indexes are required for queries of the following form:
...
Queries with one or more filters and one or more sort orders
You have both a filter and a sort order in your query.
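If you manage indexes yourself, the composite index could be declared in index.yaml roughly like this (the kind name is a placeholder) and deployed with gcloud datastore indexes create index.yaml:
indexes:
- kind: YourKind   # whatever you pass to setKind(tableName)
  properties:
  - name: t
    direction: asc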

Solr Custom RequestHandler - optimizing results

Yet another potentially embarrassing question. Please feel free to point out any obvious solution that may have been overlooked - I have searched for solutions previously and found nothing, but sometimes it's a matter of choosing the wrong keywords to search for.
Here's the situation: I coded my own RequestHandler a few months ago for an enterprise-y system, in order to inject a few necessary security parameters as an extra filter in all queries made to the Solr core. Everything runs smoothly until the part where the docs resulting from a query to the index are collected and returned to the user.
Basically, after the filter is created and the query is executed, we get a set of document ids (and scores), but then we have to iterate through the ids to build the result set, one hit at a time - which is a good 10x slower than querying the standard request handler, and only bound to get worse as the number of results increases. Even worse, since our schema relies heavily on dynamic fields for flexibility, there is no way (that I know of) to retrieve in advance the list of fields to fetch per document, other than testing all possible combinations per doc.
The code below is a simplified version of the one running in production, for querying the SolrIndexSearcher and building the response.
Without further ado, my questions are:
is there any way of retrieving all results at once, instead of building a response document by document?
is there any possibility of getting the list of fields on each result, instead of testing all possible combinations?
any particular WTFs in this code that I should be aware of? Feel free to kick me!
//function that queries the index and handles results
private void searchCore(SolrIndexSearcher searcher, Query query,
    Filter filter, int num, SolrDocumentList results) throws IOException {
  //Executes the query
  TopDocs col = searcher.search(query, filter, num);
  //results
  ScoreDoc[] docs = col.scoreDocs;
  IndexReader reader = searcher.getIndexReader();
  //iterate & build documents
  for (ScoreDoc hit : docs) {
    Document doc = reader.document(hit.doc);
    SolrDocument sdoc = new SolrDocument();
    for (Object f : doc.getFields()) {
      Field fd = (Field) f;
      //strings
      if (fd.isStored() && (fd.stringValue() != null))
        sdoc.addField(fd.name(), fd.stringValue());
      else if (fd.isStored()) {
        //Dynamic Longs
        if (fd.name().matches(".*_l")) {
          ByteBuffer a = ByteBuffer.wrap(fd.getBinaryValue(),
              fd.getBinaryOffset(), fd.getBinaryLength());
          long testLong = a.getLong(0);
          sdoc.addField(fd.name(), testLong);
        }
        //Dynamic Dates
        else if (fd.name().matches(".*_dt")) {
          ByteBuffer a = ByteBuffer.wrap(fd.getBinaryValue(),
              fd.getBinaryOffset(), fd.getBinaryLength());
          Date dt = new Date(a.getLong());
          sdoc.addField(fd.name(), dt);
        }
        //...
      }
    }
    results.add(sdoc);
  }
}
Per the OP's request, posting my comment as an answer:
Although this doesn't answer your specific question, I would suggest another option to solve your problem.
To add a filter to all queries, you can add an "appends" section to the StandardRequestHandler in the solrconfig.xml file. Add an "fq" (filter query) parameter with your filter; every request piped through the StandardRequestHandler will then have the filter appended to it automatically.
This filter is treated like any other, so it is cached in the filterCache. The result is fairly fast filtering (through docIds) at query time. This may allow you to avoid having to pull the individual documents in your solution to apply the filtering criteria.
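A sketch of that solrconfig.xml section (the handler name and filter value are illustrative):
<requestHandler name="/select" class="solr.StandardRequestHandler">
  <lst name="appends">
    <!-- this fq is appended to every query routed through this handler -->
    <str name="fq">aclGroup:engineering</str>
  </lst>
</requestHandler>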
