How to add multiple docs to a collection in firebase? - reactjs

I'm working with React Native and react-native-firebase.
My objective is to add multiple docs (objects) to a collection at once.
Currently, I have this:
const array = [
  {
    name: 'a'
  },
  {
    name: 'b'
  }
];

array.forEach((doc) => {
  firebase.firestore().collection('col').add(doc);
});
This triggers an update on other devices for each update made to the collection.
How can I batch these docs together for ONE update?

You can create a batch write like this:
var db = firebase.firestore();
var batch = db.batch();
Then loop over your array and add the writes to the batch:
array.forEach((doc) => {
  var docRef = db.collection("col").doc(); // automatically generates a unique id
  batch.set(docRef, doc);
});
Finally, you have to commit the batch:
batch.commit();

You can execute multiple write operations as a single batch that contains any combination of set(), update(), or delete() operations. A batch of writes completes atomically and can write to multiple documents.
var db = firebase.firestore();
var batch = db.batch();
array.forEach((doc) => {
  batch.set(db.collection('col').doc(), doc);
});

// Commit the batch
batch.commit().then(function () {
  // ...
});

Version 9 of the Web API is slightly different; the docs include this example:
import { writeBatch, doc } from "firebase/firestore";
// Get a new write batch
const batch = writeBatch(db);
// Set the value of 'NYC'
const nycRef = doc(db, "cities", "NYC");
batch.set(nycRef, {name: "New York City"});
// Update the population of 'SF'
const sfRef = doc(db, "cities", "SF");
batch.update(sfRef, {"population": 1000000});
// Delete the city 'LA'
const laRef = doc(db, "cities", "LA");
batch.delete(laRef);
// Commit the batch
await batch.commit();
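For the original question (writing an array of docs in one batch with auto-generated IDs), a minimal v9 sketch would look like the following, assuming db is your initialized Firestore instance and 'col' is your collection name:
import { writeBatch, collection, doc } from "firebase/firestore";

const array = [{ name: 'a' }, { name: 'b' }];

const batch = writeBatch(db);
array.forEach((item) => {
  // doc(collectionRef) with no path returns a new reference with an auto-generated ID
  const newRef = doc(collection(db, 'col'));
  batch.set(newRef, item);
});
// The batch commits atomically, so listeners see all the new docs in a single update
await batch.commit();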

The batch also has a create function that adds a new document to a collection and throws an error if the document already exists; we just need a reference to the document. Please note that this function only exists in the Firebase Admin SDK.
const batch = db.batch();
users.forEach((item) => {
  const docRef = db.collection(COLLECTION_NAME).doc(); // doc() is synchronous and returns a new ref
  batch.create(docRef, item);
});
const result = await batch.commit();

A batched write can contain up to 500 operations. Each operation in the batch counts separately towards your Cloud Firestore usage.
Note: For bulk data entry, use a server client library with parallelized individual writes. Batched writes perform better than serialized writes but not better than parallel writes. You should use a server client library for bulk data operations and not a mobile/web SDK.
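As a rough illustration of that note, here's a minimal sketch of parallelized individual writes with the Admin SDK on a server (the 'col' collection name and docs array are assumptions; unlike a batch, this is not atomic):
const admin = require('firebase-admin');
admin.initializeApp();
const db = admin.firestore();

const docs = [{ name: 'a' }, { name: 'b' }];

// Fire the individual writes in parallel and wait for all of them to finish
async function bulkInsert() {
  await Promise.all(docs.map((item) => db.collection('col').add(item)));
  console.log('all writes finished');
}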

Related

Firestore: How to map data, to an array, from a document that contains an array of maps?

I am self taught and very new, so please excuse any dumb questions.
I am trying to fetch a stored document using getDoc, map that data to an array, and use that array as data in a table. I believe I correctly stored my data as a document that contains an object(dataExcel) that contains an array of maps in Firestore.
const submitGrades = async () => {
await setDoc(doc(db, 'EventData', 'Game 1' + ' PlayData'), {dataExcel})
await setDoc(doc(db, "EventData", 'Game 1' + ' PlayerData'), {playerData})
}
I know this isn't ideal for complex querying, but it works for me since I have no problem pulling the entire document anyway and then using JavaScript to sort the data the way I want. I am able to get the document successfully using:
const Fetch = async () => {
  const dataRef = doc(db, 'EventData', 'Game 1 PlayData');
  const data = await getDoc(dataRef);
  console.log(data);
}
However, I can't figure out how to pull out the array of maps, set it in state, and make it usable as data in a table.
(Screenshots of my current data structure and the desired table layout are not reproduced here.)
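A minimal sketch of one way to pull the array out and put it in state, assuming the stored field is named dataExcel (as in the setDoc call above) and that this lives inside your component:
import { useState } from 'react';
import { doc, getDoc } from 'firebase/firestore';

// inside your component
const [rows, setRows] = useState([]);

const fetchPlayData = async () => {
  const snap = await getDoc(doc(db, 'EventData', 'Game 1 PlayData'));
  if (snap.exists()) {
    // snap.data() is the stored object; dataExcel is the array of maps inside it
    setRows(snap.data().dataExcel);
  }
};
rows is then a plain array of objects you can hand to your table component.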

Updating state when database gets updated

I have a schema looking something like this:
const mongoose = require("mongoose");
const Schema = mongoose.Schema;
//Create Schema
const PhoneNumbersSchema = new Schema({
  phone_numbers: {
    phone_number: "072382838232",
    code: "",
    used: false
  },
});

module.exports = PhoneNumbers = mongoose.model(
  "phonenumbers",
  PhoneNumbersSchema
);
And then I got an end-point that gets called from a 3rd party application that looks like this:
let result = await PhoneNumbers.findOneAndUpdate(
  { country_name: phoneNumberCountry },
  { $set: { "phone_numbers.$[elem1].services.$[elem2].sms_code": 393 } },
  { arrayFilters: [{ "elem1.phone_number": simNumberUsed }, { "elem2.service_name": "steam" }] }
);
Basically the end-point updates the "code" from the phone numbers in the database.
In react this is how I retrieve my phone numbers from the state:
const phonenumbers_database = useSelector((state) => {
console.log(state);
return state.phonenumbers ? state.phonenumbers.phone_numbers_details : [];
});
Every time the code gets changed in my database from the API call I would like to update "phonenumbers_database" in my state automatically.
How would I be able to do that?
MongoDB can actually watch for changes to a collection or a DB by opening a Change Stream.
First, you would open up a WebSocket from your React app to the server using something like Socket.io, and then watch for changes on your model:
PhoneNumbers
.watch()
.on('change', data => socket.emit('phoneNumberUpdated', data));
Your third-party app will make its changes to the database through your API, and the changes will then be automatically pushed back to the client.
You could do polling and check the database every N seconds, or use change streams.
After that, to notify your frontend app, you need to use WebSockets; check out Socket.IO.
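On the client side, a minimal sketch of wiring this up, assuming socket.io-client v3+ and a hypothetical Redux action type PHONE_NUMBERS_UPDATED that your reducer handles:
import { io } from 'socket.io-client';
import { useEffect } from 'react';
import { useDispatch } from 'react-redux';

const socket = io('http://localhost:5000'); // assumed server URL

function usePhoneNumberUpdates() {
  const dispatch = useDispatch();
  useEffect(() => {
    // Update the store whenever the server reports a change
    socket.on('phoneNumberUpdated', (data) => {
      dispatch({ type: 'PHONE_NUMBERS_UPDATED', payload: data }); // hypothetical action
    });
    return () => socket.off('phoneNumberUpdated');
  }, [dispatch]);
}
Your reducer would then merge the payload into state.phonenumbers so the useSelector above picks it up.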

Firestore Snapshot Read only last document

fire.firestore().collection('Customer').get()
  .then(data => {
    data.docs.forEach(doc => {
      let db = fire.firestore().collection(`Customer`)
      db.where("updated", ">=", 0).limit(100).onSnapshot(async doc => {
        try {
          await doc.docs.map(each => {
            setDatas([...datas, { ...each.data() }])
          })
        } catch (err) {
          console.log(err)
        }
      })
    })
  })
I am trying to append each object returned by the Firestore query to the array in state.
However, somehow only the last document is read.
Please also let me know if there is a way to store this in state without using an array.
Checking your code and running it against sample data, I noticed it is getting all the results and not just the last one, so something else may be going on. It also looks like an unnecessary double query.
Instead, you can fetch all the docs you want with a single where("updated", ">=", 0).limit(100) query rather than a query inside a query.
You can use the sample code from the docs to build something simpler:
const admin = require('firebase-admin');
admin.initializeApp();
let db = admin.firestore();
const citiesRef = db.collection('Customer');
const snapshot = await citiesRef.where('updated', '>=', 100).limit(100).get();
if (snapshot.empty) {
console.log('No matching documents.');
return;
}
snapshot.forEach(doc => {
//Your code here to add to your array
});
This code is for Node.js, but you can adapt it to React.
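For reference, a rough adaptation to the web SDK used in the question (v8-style fire.firestore(), assuming setDatas is your state setter) could look like this:
const fetchCustomers = async () => {
  const snapshot = await fire.firestore()
    .collection('Customer')
    .where('updated', '>=', 0)
    .limit(100)
    .get();
  // Build the whole array first, then set state once,
  // instead of calling setDatas inside a loop
  setDatas(snapshot.docs.map((doc) => ({ ...doc.data() })));
};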

papaparse too fast for database angular 5

I'm using Papa Parse to read a CSV file and fetch the records, but the issue is that it is too fast for the database. It reads the CSV file in an instant, while each record is still being processed by the database API; as a result only some records are processed, asynchronously and in a random order, and not all of them end up in the database. My code is:
onSubmit() {
  this.papa.parse(this.csvFile, {
    step: (row) => {
      var jsonObj = this.arrayToJSON(row.data[0]);
      console.log(jsonObj);
      this.apiService.addEmployees(jsonObj).subscribe(
        add => this.arrayToJSON(jsonObj),
        error => console.log("Error :: " + error));
    }
  });
}
I want Papa Parse to wait for each DB request to finish processing before fetching the next record.
You can use Observable.concat for that:
onSubmit() {
  // First of all, prepare an array of observables
  const databaseWrites = [];
  this.papa.parse(this.csvFile, {
    step: (row) => {
      let jsonObj = this.arrayToJSON(row.data[0]);
      console.log(jsonObj);
      databaseWrites.push(this.apiService.addEmployees(jsonObj));
    },
    complete: () => {
      // Once parsing has finished, use concat to run each write one after another
      Observable.concat(...databaseWrites).subscribe(() => { console.log('write success !'); });
    }
  });
}
Because Observables are cold, you can prepare your requests and subscribe to the concat to have each operation executed one after another.
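As a side note, if you're on RxJS 6 or newer, the static Observable.concat is gone; a rough equivalent (assuming the same databaseWrites array) would be:
import { concat } from 'rxjs';

// Runs each write observable to completion before subscribing to the next one
concat(...databaseWrites).subscribe(() => console.log('write success !'));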

How to get a count of number of documents in a collection with Cloud Firestore [duplicate]

This question already has answers here:
Cloud Firestore collection count
(29 answers)
Closed 10 months ago.
In Firestore, how can I get the total number of documents in a collection?
For instance if I have
/people
    /123456
        /name - 'John'
    /456789
        /name - 'Jane'
I want to query how many people I have and get 2.
I could do a query on /people and then get the length of the returned results, but that seems a waste, especially because I will be doing this on larger datasets.
You currently have 3 options:
Option 1: Client side
This is basically the approach you mentioned. Select all from collection and count on the client side. This works well enough for small datasets but obviously doesn't work if the dataset is larger.
Option 2: Write-time best-effort
With this approach, you can use Cloud Functions to update a counter for each addition and deletion from the collection.
This works well for any dataset size, as long as additions/deletions occur at a rate of at most 1 per second. It gives you a single document to read for an almost-current count immediately.
If you need to exceed 1 per second, you need to implement distributed counters per our documentation.
Option 3: Write-time exact
Rather than using Cloud Functions, in your client you can update the counter at the same time as you add or delete a document. This means the counter will also be current, but you'll need to make sure to include this logic anywhere you add or delete documents.
Like option 2, you'll need to implement distributed counters if you want to exceed 1 write per second.
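For reference, a minimal sketch of the distributed (sharded) counter pattern from the Firestore docs, adapted to the web v8 SDK used elsewhere in this thread (the 'counters/people' path and shard count are assumptions):
const db = firebase.firestore();
const NUM_SHARDS = 10;
const counterRef = db.collection('counters').doc('people');

// Increment one randomly-picked shard on each write
function incrementCounter() {
  const shardId = Math.floor(Math.random() * NUM_SHARDS).toString();
  return counterRef.collection('shards').doc(shardId)
    .set({ count: firebase.firestore.FieldValue.increment(1) }, { merge: true });
}

// Read the total by summing all shards
async function getCount() {
  const shards = await counterRef.collection('shards').get();
  return shards.docs.reduce((total, doc) => total + (doc.data().count || 0), 0);
}
Spreading the writes across shards is what lets the counter sustain more than 1 write per second.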
Aggregations are the way to go (Cloud Functions look like the recommended way to update these aggregations, since doing it client side exposes info to the user that you may not want exposed): https://firebase.google.com/docs/firestore/solutions/aggregation
Another way (NOT recommended), which is not good for large lists and involves downloading the whole list, is res.size, like this example:
db.collection("logs")
.get()
.then((res) => console.log(res.size));
If you use AngularFire2, you can do (assuming private afs: AngularFirestore is injected in your constructor):
this.afs.collection(myCollection).valueChanges().subscribe( values => console.log(values.length));
Here, values is an array of all items in myCollection. You don't need metadata so you can use valueChanges() method directly.
Be careful counting the number of documents in large collections with a Cloud Function. It is a little bit complex with Firestore if you want a precalculated counter for every collection.
Code like this doesn't work in this case:
export const customerCounterListener =
  functions.firestore.document('customers/{customerId}')
    .onWrite((change, context) => {
      // on create
      if (!change.before.exists && change.after.exists) {
        return firestore
          .collection('metadatas')
          .doc('customers')
          .get()
          .then(docSnap =>
            docSnap.ref.set({
              count: docSnap.data().count + 1
            }))
      // on delete
      } else if (change.before.exists && !change.after.exists) {
        return firestore
          .collection('metadatas')
          .doc('customers')
          .get()
          .then(docSnap =>
            docSnap.ref.set({
              count: docSnap.data().count - 1
            }))
      }
      return null;
    });
The reason is that every Cloud Firestore trigger has to be idempotent, as the Firestore documentation says: https://firebase.google.com/docs/functions/firestore-events#limitations_and_guarantees
Solution
So, in order to prevent multiple executions of your code, you need to track event IDs and use transactions. This is my particular way of handling large collection counters:
const executeOnce = (change, context, task) => {
  const eventRef = firestore.collection('events').doc(context.eventId);
  return firestore.runTransaction(t =>
    t
      .get(eventRef)
      .then(docSnap => (docSnap.exists ? null : task(t)))
      .then(() => t.set(eventRef, { processed: true }))
  );
};

const documentCounter = collectionName => (change, context) =>
  executeOnce(change, context, t => {
    // on create
    if (!change.before.exists && change.after.exists) {
      return t
        .get(firestore.collection('metadatas')
          .doc(collectionName))
        .then(docSnap =>
          t.set(docSnap.ref, {
            count: ((docSnap.data() && docSnap.data().count) || 0) + 1
          }));
    // on delete
    } else if (change.before.exists && !change.after.exists) {
      return t
        .get(firestore.collection('metadatas')
          .doc(collectionName))
        .then(docSnap =>
          t.set(docSnap.ref, {
            count: docSnap.data().count - 1
          }));
    }
    return null;
  });
Use cases here:
/**
* Count documents in articles collection.
*/
exports.articlesCounter = functions.firestore
.document('articles/{id}')
.onWrite(documentCounter('articles'));
/**
* Count documents in customers collection.
*/
exports.customersCounter = functions.firestore
.document('customers/{id}')
.onWrite(documentCounter('customers'));
As you can see, the key to preventing multiple executions is the eventId property in the context object. If the function is invoked multiple times for the same event, the event ID will be the same in every case. Unfortunately, you must have an "events" collection in your database.
Please check the answer below, which I found on another thread. Your count should be atomic, so you should use the FieldValue.increment() function in this case.
https://stackoverflow.com/a/49407570/3337028
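For context, a minimal sketch of an atomic increment with the web v8 SDK (the 'counters/people' document path is an assumption):
const counterRef = firebase.firestore().collection('counters').doc('people');

// increment() is applied server-side, so concurrent writers don't clobber each other
counterRef.update({ count: firebase.firestore.FieldValue.increment(1) });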
firebase-admin offers select(fields) which allows you to only fetch specific fields for documents within your collection. Using select is more performant than fetching all fields. However, it is only available for firebase-admin and firebase-admin is typically only used server side.
select can be used as follows:
select('age', 'name') // fetch the age and name fields
select() // select no fields, which is perfect if you just want a count
select is available for Node.js servers but I am not sure about other languages:
https://googleapis.dev/nodejs/firestore/latest/Query.html#select
https://googleapis.dev/nodejs/firestore/latest/CollectionReference.html#select
Here's a server-side Cloud Function written in Node.js which uses select to count a filtered collection and to get the IDs of all resulting documents. It's written in TS but easily converted to JS.
import admin from 'firebase-admin'
// https://stackoverflow.com/questions/46554091/cloud-firestore-collection-count
// we need to use admin SDK here as select() is only available for admin
export const videoIds = async (req: any): Promise<any> => {
const id: string = req.query.id || null
const group: string = req.query.group || null
let processed: boolean = null
if (req.query.processed === 'true') processed = true
if (req.query.processed === 'false') processed = false
let q: admin.firestore.Query<admin.firestore.DocumentData> = admin.firestore().collection('videos')
if (group != null) q = q.where('group', '==', group)
if (processed != null) q = q.where('flowPlayerProcessed', '==', processed)
// select restricts returned fields such as ... select('id', 'name')
const query: admin.firestore.QuerySnapshot<admin.firestore.DocumentData> = await q.orderBy('timeCreated').select().get()
const ids: string[] = query.docs.map((doc: admin.firestore.QueryDocumentSnapshot<admin.firestore.DocumentData>) => doc.id) // ({ id: doc.id, ...doc.data() })
return {
id,
group,
processed,
idx: id == null ? null : ids.indexOf(id),
count: ids.length,
ids
}
}
The cloud function HTTP request completes within 1 second for a collection of 500 docs where each doc contains a lot of data. Not amazingly performant but much better than not using select. Performance could be improved by introducing client side caching (or even server side caching).
The cloud function entry point looks like this:
exports.videoIds = functions.https.onRequest(async (req, res) => {
const response: any = await videoIds(req)
res.json(response)
})
The HTTP request URL would be:
https://SERVER/videoIds?group=my-group&processed=true
The Firebase Functions deploy output tells you where the server (the SERVER part of the URL) is located.
Following Dan's answer: you can have a separate counter in your database and use Cloud Functions to maintain it (write-time best-effort).
// Example of performing an increment when item is added
module.exports.incrementIncomesCounter = collectionRef.onCreate(event => {
  const counterRef = event.data.ref.firestore.doc('counters/incomes')
  counterRef.get()
    .then(documentSnapshot => {
      const currentCount = documentSnapshot.exists ? documentSnapshot.data().count : 0
      counterRef.set({
        count: Number(currentCount) + 1
      })
        .then(() => {
          console.log('counter has increased!')
        })
    })
})
This code shows you the complete example of how to do it:
https://gist.github.com/saintplay/3f965e0aea933a1129cc2c9a823e74d7
Get a new write batch:
WriteBatch batch = db.batch();
Add a new document to the "cities" collection:
DocumentReference nycRef = db.collection("cities").document();
batch.set(nycRef, new City());
Maintain a document with ID "count" and an initial value of total = 0.
During an add operation, do this:
DocumentReference countRef = db.collection("cities").document("count");
batch.update(countRef, "total", FieldValue.increment(1));
During a delete operation, do this:
DocumentReference countRef = db.collection("cities").document("count");
batch.update(countRef, "total", FieldValue.increment(-1));
Always get the document count from:
DocumentReference countRef = db.collection("cities").document("count");
I created an NPM package to handle all counters:
First install the module in your functions directory:
npm i adv-firestore-functions
then use it like so:
import { eventExists, colCounter } from 'adv-firestore-functions';

functions.firestore
  .document('posts/{docId}')
  .onWrite(async (change: any, context: any) => {
    // don't run if repeated function
    if (await eventExists(context)) {
      return null;
    }
    await colCounter(change, context);
  });
It handles events, and everything else.
If you want to make it a universal counter for all functions:
import { eventExists, colCounter } from 'adv-firestore-functions';

functions.firestore
  .document('{colId}/{docId}')
  .onWrite(async (change: any, context: any) => {
    const colId = context.params.colId;
    // don't run if repeated function
    if (await eventExists(context) || colId.startsWith('_')) {
      return null;
    }
    await colCounter(change, context);
  });
And don't forget your rules:
match /_counters/{document} {
allow read;
allow write: if false;
}
And of course access it this way:
const collectionPath = 'path/to/collection';
const colSnap = await db.doc('_counters/' + collectionPath).get();
const count = colSnap.get('count');
Read more: https://code.build/p/9DicAmrnRoK4uk62Hw1bEV/firestore-counters
GitHub: https://github.com/jdgamble555/adv-firestore-functions
Use a transaction to update the count inside the success listener of your database write.
FirebaseFirestore.getInstance().runTransaction(new Transaction.Function<Long>() {
    @Nullable
    @Override
    public Long apply(@NonNull Transaction transaction) throws FirebaseFirestoreException {
        DocumentSnapshot snapshot = transaction
                .get(pRefs.postRef(forumHelper.getPost_id()));
        long newCount;
        if (b) {
            newCount = snapshot.getLong(kMap.like_count) + 1;
        } else {
            newCount = snapshot.getLong(kMap.like_count) - 1;
        }
        transaction.update(pRefs.postRef(forumHelper.getPost_id()),
                kMap.like_count, newCount);

        return newCount;
    }
});
