React Native Firebase: issues with batch-querying arrays - reactjs

Thank you all for helping people with their code issues. I have one too, and at this point I've run out of ideas; I've read through every blog and gist I could find on this, but none of them worked for me.
THE PROBLEM:
I'm working on pagination in my app, which was working fine until I added a feature: users can add posts they don't want to see to a blacklist field, and those posts won't be shown to them. Now some users have blacklisted more than 10 ids (say, 15).
Because of Firestore's query limitation on the not-in / in operators, if the array you are filtering against has more than 10 values, you have to split it into chunks of 10 and run a batch of queries, typically in a loop, passing one chunk at a time.
Here's the thing: on the first iteration I successfully get the posts I want, and I store the last document of the query snapshot so I can pass it to startAfter on the second iteration. THIS IS WHERE THE ISSUE IS: for some reason I still get posts whose ids are in the second chunk. I've logged everything I could but couldn't trace it; I suspect this might be an issue with startAfter(doc) from RNFirebase, but I'm not sure.
I'll be really happy if you can help me with this; I've been stuck on it for weeks. Here's my code:
import firestore from '@react-native-firebase/firestore';

export const getFilteredPosts = async (
  postsBlacklisted,
  lastItem,
  limit = 10,
) => {
  try {
    // const baseUrl = firestore().collectionGroup('AllPosts').orderBy('postedOn');
    // const baseUrl = firestore().collectionGroup('AllPosts').orderBy('postID');
    const baseUrl = firestore()
      .collection('AllPosts')
      .orderBy('postID', 'desc');
    const postCondition = lastItem ? baseUrl.startAfter(lastItem) : baseUrl;
    if (postsBlacklisted.length === 0) {
      console.log('FETCHING WITHOUT FILTER, NO BLACKLISTS ', postsBlacklisted);
      // just get posts without any filtering
      let response = await postCondition.limit(limit).get();
      if (response) {
        const lastVisibleItem = response.docs[response.docs.length - 1];
        let results = response.docs.map(result => ({
          id: result.id,
          ...result.data(),
        }));
        return {postsArray: results, lastVisibleItem: lastVisibleItem};
      } else {
        console.log('check ur filterOutBlackList method');
      }
    } else {
      console.log(' posts black list passed!!!', postsBlacklisted);
    }
    if (postsBlacklisted.length > 0) {
      console.log('FETCHING with FILTER, see BLACKLISTS ', postsBlacklisted);
      var copiedPostsBlacklisted = [...postsBlacklisted];
      var batches = [];
      var index = 0;
      var batchChunks = [...splitArrayInChunks(copiedPostsBlacklisted, 10)];
      var lastQueryResult;
      while (index < batchChunks.length) {
        if (lastQueryResult) {
          // subsequent chunks: resume after the last doc of the previous query
          const results = await postCondition
            .where('postID', 'not-in', batchChunks[index])
            .startAfter(lastQueryResult)
            .limit(10)
            .get();
          if (results) {
            batches.push(results.docs);
          }
        } else {
          // first chunk: no cursor yet, remember the last doc for the next pass
          const results = await postCondition
            .where('postID', 'not-in', batchChunks[index])
            .limit(10)
            .get();
          if (results) {
            lastQueryResult = results.docs[results.docs.length - 1];
            batches.push(results.docs);
          }
        }
        index++;
      }
      var allPosts = batches.flat().map(result => ({
        id: result.id,
        ...result.data(),
      }));
      return {
        lastVisibleItem: lastQueryResult,
        postsArray: allPosts,
      };
    }
  } catch (error) {
    console.log('FATAL ERROR ', error.message);
  }
};
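For context, a fallback I've been considering (not what I want long-term, and only a sketch assuming the same 'AllPosts' collection and 'postID' field as above) is to paginate without the server-side filter and drop blacklisted ids client-side:

// Sketch: paginate unfiltered, then filter the blacklist out in memory.
// (This belongs inside an async function, as above.)
const blacklist = new Set(postsBlacklisted);
let query = firestore()
  .collection('AllPosts')
  .orderBy('postID', 'desc')
  .limit(limit);
if (lastItem) {
  query = query.startAfter(lastItem);
}
const snap = await query.get();
const lastVisibleItem = snap.docs[snap.docs.length - 1];
const postsArray = snap.docs
  .filter(doc => !blacklist.has(doc.data().postID))
  .map(doc => ({id: doc.id, ...doc.data()}));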
The function that splits the array:
// Generator that splits an array into chunks of n values each
// (here called with n = 10, matching Firestore's operator limit)
export function* splitArrayInChunks(arr, n) {
  for (let i = 0; i < arr.length; i += n) {
    yield arr.slice(i, i + n);
  }
}
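For illustration, with 15 ids and n = 10 this yields two chunks:

// fifteenIds is a hypothetical array of 15 post ids
const chunks = [...splitArrayInChunks(fifteenIds, 10)];
// chunks[0] holds the first 10 ids, chunks[1] the remaining 5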
This is how I'm calling getFilteredPosts:
const querySnapshot = await getFilteredPosts(
  blackListedPosts,
  afterDoc,
  limit,
);
// Initially "afterDoc" is null because it's the first call. I've disabled the
// load-more functionality so I can focus on debugging where the weird blacklist
// issue is coming from. "blackListedPosts" looks like [more than 10 post ids],
// and "limit" is exactly 10.
Here are a few dependencies from my package.json:
"#react-native-firebase/app": "^14.7.0",
"#react-native-firebase/auth": "^14.7.0",
"#react-native-firebase/firestore": "^14.7.0",
"#react-native-firebase/storage": "^14.7.0",
"react": "17.0.2",
"react-native": "0.67.3",

Related

How to get all subcollection documents with subcollection name as a date? [duplicate]

Say I have this minimal database stored in Cloud Firestore. How could I retrieve the names of subCollection1 and subCollection2?
rootCollection {
  aDocument: {
    someField: { value: 1 },
    anotherField: { value: 2 },
    subCollection1: ...,
    subCollection2: ...,
  }
}
I would expect to be able to just read the ids off of aDocument, but only the fields show up when I get() the document.
rootRef.doc('aDocument').get()
  .then(doc =>
    // only logs [ "someField", "anotherField" ], no collections
    console.log( Object.keys(doc.data()) )
  )
It is not currently supported to get a list of (sub)collections from Firestore in the client SDKs (Web, iOS, Android).
In server-side SDKs this functionality does exist. For example, in Node.js you'll be after the ListCollectionIds method:
var firestore = require('firestore.v1beta1');
var client = firestore.v1beta1({
  // optional auth parameters.
});

// Iterate over all elements.
var formattedParent = client.anyPathPath("[PROJECT]", "[DATABASE]", "[DOCUMENT]", "[ANY_PATH]");
client.listCollectionIds({parent: formattedParent}).then(function(responses) {
  var resources = responses[0];
  for (var i = 0; i < resources.length; ++i) {
    // doThingsWith(resources[i])
  }
})
.catch(function(err) {
  console.error(err);
});
It seems like they have added a method called getCollections() to Node.js:
firestore.doc(`/myCollection/myDocument`).getCollections().then(collections => {
  for (let collection of collections) {
    console.log(`Found collection with id: ${collection.id}`);
  }
});
This example prints out all subcollections of the document at /myCollection/myDocument
Isn't this detailed in the documentation?
/**
 * Delete a collection, in batches of batchSize. Note that this does
 * not recursively delete subcollections of documents in the collection
 */
function deleteCollection(db, collectionRef, batchSize) {
  var query = collectionRef.orderBy('__name__').limit(batchSize);
  return new Promise(function(resolve, reject) {
    deleteQueryBatch(db, query, batchSize, resolve, reject);
  });
}

function deleteQueryBatch(db, query, batchSize, resolve, reject) {
  query.get()
    .then((snapshot) => {
      // When there are no documents left, we are done
      if (snapshot.size == 0) {
        return 0;
      }
      // Delete documents in a batch
      var batch = db.batch();
      snapshot.docs.forEach(function(doc) {
        batch.delete(doc.ref);
      });
      return batch.commit().then(function() {
        return snapshot.size;
      });
    }).then(function(numDeleted) {
      // Nothing was deleted, so the collection is empty
      if (numDeleted === 0) {
        resolve();
        return;
      }
      // Recurse on the next process tick, to avoid
      // exploding the stack.
      process.nextTick(function() {
        deleteQueryBatch(db, query, batchSize, resolve, reject);
      });
    })
    .catch(reject);
}
This answer is in the docs
Sadly the docs aren't clear what you import.
Based on the docs, my code ended up looking like this:
import admin, { firestore } from 'firebase-admin'
let collections: string[] = null
const adminRef: firestore.DocumentReference<any> = admin.firestore().doc(path)
const collectionRefs: firestore.CollectionReference[] = await adminRef.listCollections()
collections = collectionRefs.map((collectionRef: firestore.CollectionReference) => collectionRef.id)
This is of course Node.js server side code. As per the docs, this cannot be done on the client.

How To Circumvent 504 Errors

I am working in ReactJs and one of the main aspects of our project is the ability to upload a scorecard and have all of its results parsed and placed into objects. However, due to the nature of these pdfs that get uploaded, there's a LOT of information, an average of 12-14 pages.
Most of the information is irrelevant, I usually will only need pages 5-7, but users will be users, and they upload all 12.
I am using the pdfParser API, which is very good; we're not looking for replacements on that. However, because of how large the file is, if I am somewhere with only a half-decent connection I am hit with a 504 error since the process takes so long. With a good to great connection there's no issue.
This being said, I have two questions:
Is there a way to extend the amount of time that can elapse before the process gives up (so the 504 doesn't fire)?
Is there a way to parse only SOME of the pages that get submitted? (See the sketches after the code below.)
The relevant code will be shown below...
// Assumed imports (xlsx here is the node-xlsx package)
const fs = require('fs');
const path = require('path');
const request = require('request');
const xlsx = require('node-xlsx');

var url = 'https://pdftables.com/api?key=770oukvvx1wl&format=xlsx-single';

const pdfToExcel = (pdfFile) => {
  var req = request.post({encoding: null, url: url}, async function (err, resp, body) {
    if (!err && resp.statusCode == 200) {
      fs.writeFile(`${pdfFile.path}.xlsx`, body, function(err) {
        if (err) {
          console.log('error writing file');
        }
      });
    } else {
      console.log('error retrieving URL');
    };
  });
  var form = req.form();
  form.append('file', fs.createReadStream(`./${pdfFile.path}`));
}
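On the first question: a 504 means some gateway upstream gave up waiting, so the knobs are the timeouts on each hop. A hedged sketch; the request library's timeout option and http.Server#setTimeout are assumptions about this stack, not something shown in the code above:

// Sketch: raise the client-side timeout on the pdftables call (value in ms).
var req = request.post({encoding: null, url: url, timeout: 5 * 60 * 1000}, callback);

// If the 504 comes from your own Node/Express server in front of the parser,
// raise its socket timeout too (hypothetical `app` from express()):
const server = app.listen(3000);
server.setTimeout(5 * 60 * 1000);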
const parseExcel = async (file) => {
  let workSheetsFromFile;
  if (file.path.search(".xlsx") === -1) {
    // path.resolve and the unlink/parse calls are synchronous; awaiting them is a no-op
    const filePath = path.resolve(`./${file.path}.xlsx`);
    workSheetsFromFile = xlsx.parse(`./${file.path}.xlsx`);
    fs.unlinkSync(`./${file.path}`);
    fs.unlinkSync(filePath);
    return workSheetsFromFile[0].data;
  }
  if (file.path.search(".xlsx") !== -1) {
    const filePath = path.resolve(`./${file.path}`);
    workSheetsFromFile = xlsx.parse(`./${file.path}`);
    fs.unlinkSync(filePath);
    return workSheetsFromFile[0].data;
  }
}
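On the second question: the pdftables endpoint itself won't skip pages, but you can trim the PDF before uploading it. A minimal sketch using pdf-lib (an assumption, it isn't in the project above) to keep only pages 5-7:

const fs = require('fs');
const { PDFDocument } = require('pdf-lib');

// Hypothetical helper: copy only the given (0-based) pages into a new PDF.
async function extractPages(inputPath, outputPath, pageIndices) {
  const srcBytes = fs.readFileSync(inputPath);
  const srcDoc = await PDFDocument.load(srcBytes);
  const outDoc = await PDFDocument.create();
  const pages = await outDoc.copyPages(srcDoc, pageIndices);
  pages.forEach(page => outDoc.addPage(page));
  fs.writeFileSync(outputPath, await outDoc.save());
}

// e.g. keep pages 5-7 (indices 4-6) before calling pdfToExcel on the trimmed file:
// await extractPages('./scorecard.pdf', './scorecard-trimmed.pdf', [4, 5, 6]);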

How to for loop all documents in a collection - Azure CosmosDB - Nodejs

I have looked around at a few answers/questions regarding this issue but yet to find a solution.
I have a collection with documents (simplified) as such:
{
  "id": 123,
  "stuff": "abc",
  "array": [
    {
      "id2": 456,
      "properties": [
        {
          "id3": 789,
          "important": true
        }
      ]
    }
  ]
}
I want to check for each document in my collection, for each array object within array, for each properties, if it has important: true for example. Then return:
"id": 123
"id2": 456
"id3": 789
I have tried using:
client.queryDocuments(self.collection._self, querySpec).toArray(function(err, results) {
  if (err) {
    callback(err);
  } else {
    callback(null, results[0]);
  }
});
But the issue is an array has a maximum character limit. If my collection has millions of documents, this would presumably be exceeded. (Javascript Increase max array size)
Or am I misunderstanding the linked question? Is it talking about the number of objects in an array (where each object can be of unlimited length)?
Thus I am looking a for loop-esque solution, where each document is returned, I do my analysis, then move to then next/do them in parallel.
Any insight would be greatly appreciated.
But the issue is an array has a maximum character limit. If my
collection has millions of documents, this would presumably be
exceeded. (Javascript Increase max array size)
Based on my research, the longest possible array in JS can have 2^32 - 1 = 4,294,967,295 (about 4.29 billion) elements, which is more than enough for your millions of documents. That said, you certainly can't query such a huge volume of data directly in one shot.
Whether because of throughput constraints (RU settings) or query efficiency, you should batch large volumes of data anyway.
Thus I am looking a for loop-esque solution, where each document is
returned, I do my analysis, then move to then next/do them in
parallel.
Maybe you could use the v2 JS SDK for the Cosmos DB SQL API. Please refer to this sample code:
const cosmos = require('@azure/cosmos');
const CosmosClient = cosmos.CosmosClient;

const endpoint = "https://***.documents.azure.com:443/"; // Add your endpoint
const masterKey = "***"; // Add the masterkey of the endpoint
const client = new CosmosClient({ endpoint, auth: { masterKey } });
const databaseId = "db";
const containerId = "coll";

async function run() {
  const { container, database } = await init();
  const querySpec = {
    query: "SELECT r.id,r._ts FROM root r"
  };
  const queryOptions = {
    maxItemCount: -1
  }
  const queryIterator = await container.items.query(querySpec, queryOptions);
  while (queryIterator.hasMoreResults()) {
    const { result: results, headers } = await queryIterator.executeNext();
    console.log(results)
    console.log(headers)
    // do what you want to do
    if (results === undefined) {
      // no more results
      break;
    }
  }
}

async function init() {
  const { database } = await client.databases.createIfNotExists({ id: databaseId });
  const { container } = await database.containers.createIfNotExists({ id: containerId });
  return { database, container };
}

run().catch(err => {
  console.error(err);
});
More details about the continuation token are covered in my previous case. If you have any concerns, please let me know.
I am using Cosmos DB SQL API Node.js library. I am unable to find the Continuation Token from this library so that I can return it to client. The idea is to get it back from the client for the next pagination request.
I have a working code which iterates multiple times to get all the documents. What changes will be required here to get the continuation token?
function queryCollectionPaging() {
  return new Promise((resolve, reject) => {
    function executeNextWithRetry(iterator, callback) {
      iterator.executeNext(function (err, results, responseHeaders) {
        if (err) {
          return callback(err, null);
        }
        else {
          documents = documents.concat(results);
          if (iterator.hasMoreResults()) {
            executeNextWithRetry(iterator, callback);
          }
          else {
            callback();
          }
        }
      });
    }

    let options = {
      maxItemCount: 1,
      enableCrossPartitionQuery: true
    };

    let documents = []
    let iterator = client.queryDocuments( collectionUrl, 'SELECT r.partitionkey, r.documentid, r._ts FROM root r WHERE r.partitionkey in ("user1", "user2") ORDER BY r._ts', options);

    executeNextWithRetry(iterator, function (err, result) {
      if (err) {
        reject(err)
      }
      else {
        console.log(documents);
        resolve(documents)
      }
    });
  });
};
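For reference, in this DocumentDB-style SDK the continuation token rides on the response headers of each page. A hedged sketch of surfacing it from the executeNext callback (the 'x-ms-continuation' header name comes from the Cosmos DB REST API, so treat it as an assumption):

iterator.executeNext(function (err, results, responseHeaders) {
  if (err) {
    return callback(err, null);
  }
  // The continuation token for the next page; return it to the client, then
  // pass it back in via options.continuation on the next request to resume.
  const continuationToken = responseHeaders['x-ms-continuation'];
  callback(null, {results: results, continuation: continuationToken});
});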

mongoose update array and add new element

I'm pretty sure this question has been asked many times, but I cannot find an answer to my particular problem, which seems simple yet I just can't get it working. I have a user model with a cart schema array embedded in it. I am trying to add an object to the array, and if it already exists only update its quantity and price; if it doesn't, add it to the array. What happens with my code is that it adds a new item when the array is empty and it updates an existing item's quantity and price, but it won't add a new item. I read a bit about this, and as far as I understood I cannot use two different db methods in one request. I would appreciate any help on this; it's the first time I am actually using mongoose.
const CartItem = require('../models/cartModel');
const User = require('../models/userModel');

exports.addToCart = (req, res) => {
  const cartItem = new CartItem.model(req.body);
  const user = new User.model();
  User.model
    .findById(req.params.id)
    .exec((err, docs) => {
      if (err) res.sendStatus(404);
      let cart = docs.cart;
      if (cart.length == 0) {
        docs.cart.push(cartItem);
      }
      // (the original snippet redeclared `let cart` here, which throws a SyntaxError)
      // filter returns an array, which is always truthy, so the else below never runs
      let isInCart = cart.filter((item) => {
        console.log(item._id, req.body._id);
        if (item._id == req.body._id) {
          item.quantity += req.body.quantity;
          item.price += req.body.price;
          return true;
        }
      });
      if (isInCart) {
        console.log(cart.length)
      } else {
        cart.push(cartItem);
        console.log(false);
      }
      docs.save(function (err, docs) {
        if (err) return (err);
        res.json(docs);
      });
    });
};
I actually managed to get it working like this
exports.addToCart = (req, res) => {
  const cartItem = new Cart.model(req.body);
  const user = new User.model();
  User.model
    .findById(req.params.id)
    .exec((err, docs) => {
      if (err) res.sendStatus(404);
      let cart = docs.cart;
      let isInCart = cart.some((item) => {
        console.log(item._id, req.body._id);
        if (item._id == req.body._id) {
          item.quantity += req.body.quantity;
          item.price += req.body.price;
          return true;
        }
      });
      if (!isInCart) {
        console.log(cart.length)
        cart.push(cartItem);
      }
      if (cart.length == 0) {
        cart.push(cartItem);
      }
      docs.save(function (err, docs) {
        if (err) return (err);
        res.json(docs);
      });
    });
};
I don't know if this is the right way to do it, but I can both add a new product to my array and update the values of existing ones.
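For what it's worth, a common alternative that avoids the read-modify-save cycle entirely is two targeted updates: an atomic $inc against the matching cart element, then a $push only if nothing matched. A sketch under the models assumed above (field names are guesses from the snippets):

// Sketch: bump quantity/price on an existing cart line, if there is one.
const result = await User.model.updateOne(
  {_id: req.params.id, 'cart._id': req.body._id},
  {$inc: {'cart.$.quantity': req.body.quantity, 'cart.$.price': req.body.price}},
);

// Push a new line only when no element matched.
// (matchedCount on mongoose 6+; older versions report it as result.n)
if (result.matchedCount === 0) {
  await User.model.updateOne(
    {_id: req.params.id},
    {$push: {cart: req.body}},
  );
}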
Maybe your problem has a simpler solution: check the mongoose documentation. There is a compatibility issue between some versions of MongoDB and mongoose, and you may need to edit your model code to look like this:
const mongoose = require("mongoose");

const CartSchema = new mongoose.Schema({
  //your code here
}, { usePushEach: true });

module.exports = mongoose.model("Cart", CartSchema);
You can find more information here: https://github.com/Automattic/mongoose/issues/5924
Hope it helps.

How to store data from firebaselistobservable to an array?

I'm trying to copy the data from Firebase to an array using Angular 2, but I'm unable to push the data into the array.
Here's the code:
Variables:
uid: string = '';
agencyItems: FirebaseListObservable<any[]>;
trackerItems: FirebaseListObservable<any[]>;
agencyID: any[] = [];
getData():
this.af.auth.subscribe(auth => {
  if (auth) {
    this.uid = auth.auth.uid;
  }
});
this.getAgencyData();
console.log("AgentID: ", this.agencyID);
console.log("Array Length = ", this.agencyID.length); // PROBLEM HERE: agencyID is still empty.
this.getTrackerData();
getAgencyData():
console.log("Fetching agency data");
this.agencyItems = this.af.database.list('/agencies/',{preserveSnapshot:true});
this.agencyItems.subscribe(snapshots => {
snapshots.forEach(snapshot => {
console.log(snapshot.val()._id);
this.agencyID.push(snapshot.val()._id);
});
});
getTrackerData():
for (let i = 0; i < this.agencyID.length; i++) {
  console.log("Fetching Tracker data");
  this.trackerItems = this.af.database.list('/tracker/' + this.agencyID[i]);
  this.trackerItems.subscribe(trackerItems =>
    trackerItems.forEach(Titem =>
      console.log("Tracker name: " + Titem.name),
    ),
  );
}
Here is the debug console screenshot:
Since I'm a newbie to web programming, some of this code may seem completely unnecessary.
What am I doing wrong in this code, and how can I fix it?
The problem is the location where, or better WHEN, you are checking the length of the array. You make an asynchronous call when you fetch the data, but you are checking the length of the array before the data has been returned. Therefore the array is still empty.
Try the following in getAgencyData():
console.log("Fetching agency data");
this.agencyItems = this.af.database.list('/agencies/',{preserveSnapshot:true});
this.agencyItems.subscribe(snapshots => {
snapshots.forEach(snapshot => {
console.log(snapshot.val()._id);
this.agencyID.push(snapshot.val()._id);
console.log("Array Length = ",this.agencyID.length); // See the length of the array growing ;)
});
// EDIT
this.getTrackerData();
});
