How to efficiently organise firestore methods and connections - reactjs

I'm currently working with Firebase Firestore and Next.js. I've googled how to organise a Firestore project, but most of the results (all of them, actually) aren't scalable.
What I have tried to do is to have a folder containing all the Firebase-related components, such as the configuration and utility methods. The most challenging part has been writing a general-purpose function that gets a collection/document ref and supports all the relevant methods, namely .orderBy(), .limit(), .where() and .doc(). It's also really tough to write a transformer that converts the data returned by the database into another format.
Here's what I have implemented:
Here, getDocRef.js is a helper function that puts all the methods mentioned above together, getOnce.js and observe.js expose methods that interact with the database, and db.js contains the configuration.
Also, for anyone who is interested, here's my naive solution for the getDocRef() function:
import db from '../db';

/*
Options:
- ref: Specify the ref of a document
- collectionName: Specify the collection name
- queryArgs: Specify the arguments to be passed down to .where()
- orderByArgs: Specify the arguments to be passed down to .orderBy()
- limit: Specify the fetching limit
- docName: Specify the document name/id
*/
export default options => {
  const { ref, collectionName, queryArgs, orderByArgs, limit, docName } = options;
  if (ref != null) return ref;
  const initRef = db.collection(collectionName);
  if (docName != null) return initRef.doc(docName);
  if (queryArgs != null) {
    if (orderByArgs != null) {
      if (limit != null)
        return initRef
          .where(...queryArgs)
          .orderBy(...orderByArgs)
          .limit(limit);
      return initRef.where(...queryArgs).orderBy(...orderByArgs);
    }
    return initRef.where(...queryArgs);
  }
  return initRef;
};
So, I would love to know whether my current implementation of Firebase is okay. If not, what project structure should I apply? How should I improve my current structure to make it more efficient? And last but not least, is there an alternative to my naive JS solution posted above? Thanks in advance.

My personal approach:
Extract all credentials to .env with the dotenv package
A directory called /lib/db with two files in it:
init.js to initialise Firebase/Firestore
Another file with a class (or plain functions) exposing some methods for CRUD
If your project is getting big, I suggest extracting each collection's related methods into its own file in /lib/db and organising them there (somewhat like state management). A minimal sketch of this layout follows below.
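For illustration only, here's a rough sketch of that layout, assuming the web SDK and a users collection; the file names other than init.js, the env variable names and the helper names are my own placeholders, not part of the original answer:
// lib/db/init.js
import firebase from 'firebase/app';
import 'firebase/firestore';

if (!firebase.apps.length) {
  firebase.initializeApp({
    apiKey: process.env.FIREBASE_API_KEY,
    projectId: process.env.FIREBASE_PROJECT_ID,
    // ...remaining credentials read from .env
  });
}

export default firebase.firestore();

// lib/db/users.js - one file per collection, holding its CRUD helpers
import db from './init';

const users = db.collection('users');

export const getUser = id => users.doc(id).get();
export const addUser = data => users.add(data);
export const updateUser = (id, data) => users.doc(id).update(data);
export const removeUser = id => users.doc(id).delete();

// As an alternative to the nested ifs in getDocRef: since .where(), .orderBy()
// and .limit() each return a new query, they can be chained conditionally.
export const queryUsers = ({ queryArgs, orderByArgs, limit } = {}) => {
  let query = users;
  if (queryArgs != null) query = query.where(...queryArgs);
  if (orderByArgs != null) query = query.orderBy(...orderByArgs);
  if (limit != null) query = query.limit(limit);
  return query.get();
};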

Related

Is there a way to batch read firebase documents

I am making a mobile app using Flutter with Firebase as my backend.
I have a collection of user documents that stores user information. One of the fields is an array of references (to documents in another collection) which I want to use in a batch-like operation that would then allow me to read all of those documents.
I know a batch only allows writes to the database. My second option would be a Transaction, which requires writes after reads and which I am trying to avoid.
Is there a way to read multiple documents in one operation without having to use a Transaction?
Firestore doesn't offer a formal batch read API. As Frank mentions in his comment, there is a way to use an IN query to fetch multiple documents from a single collection using their IDs (a minimal sketch follows below). However, all of the documents must be in the same collection, and you can't exceed 10 documents per query. You might as well just get() each document individually, as the IN query has limitations and isn't guaranteed to execute any faster than the individual gets. Neither solution is guaranteed to be "consistent", so any one of the documents fetched could be "more fresh" than the others at any given moment in time.
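A minimal sketch of that IN query, assuming the web SDK (v8-style syntax); the users collection and the IDs are just placeholders:
const ids = ['Jkd94kdmdks', '8nkdjsld']; // at most 10 IDs per query

firebase.firestore()
  .collection('users')
  .where(firebase.firestore.FieldPath.documentId(), 'in', ids)
  .get()
  .then(snapshot => {
    const docs = snapshot.docs.map(doc => ({ id: doc.id, ...doc.data() }));
    console.log(docs);
  });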
If you know the document IDs and the collection paths of the documents that need to be fetched, you could always use the getAll() method which is exposed in the Firebase Admin SDK (at least for Node.js environments).
Then, for example, you could write an HTTPS Callable Function that would accept a list of absolute document paths and perform a "batch get" operation on them using the getAll() method.
e.g.
// Import firebase functionality
const functions = require('firebase-functions');
const admin = require('firebase-admin');

// Configure firebase app
admin.initializeApp(functions.config().firebase);

// HTTPS callable function
exports.getDocs = functions.https.onCall((data, context) => {
  const docPathList = data.list; // e.g. ["users/Jkd94kdmdks", "users/8nkdjsld", etc...]
  const firestore = admin.firestore();

  // Build a DocumentReference for each requested path
  const docList = [];
  for (let i = 0; i < docPathList.length; i++) {
    const docPath = docPathList[i];
    const doc = firestore.doc(docPath);
    docList.push(doc);
  }

  // Get all
  return firestore.getAll(...docList)
    .then(results => {
      return { data: results.map(doc => doc.data()) };
    })
    .catch(err => {
      return { error: err };
    });
});
Not sure what the limit (if any) is for the number of documents you can fetch using getAll(), but I do know my application is able to fetch at least 50 documents per call successfully using this method.
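On the client side, invoking that callable function could look something like the following sketch, assuming the web SDK's functions() client and the getDocs name used above:
const getDocs = firebase.functions().httpsCallable('getDocs');

getDocs({ list: ['users/Jkd94kdmdks', 'users/8nkdjsld'] })
  .then(result => {
    console.log(result.data.data); // array of document data returned by the function
  })
  .catch(err => console.error(err));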
Firestore has a REST API that allows you to do batch GETs with document paths, which may be what you need.
See https://firebase.google.com/docs/firestore/reference/rest/v1beta1/projects.databases.documents/batchGet

test database for react-native app development

I'm in the early stages of developing an app with React Native, and I need a DB implementation for testing and development. I thought the obvious choice would be to use simple JSON files included with the source, but the only way I can see to load JSON files requires knowing the file name ahead of time. This means that the following does not work:
getTable = (tableName) => require('./table-' + tableName + '.json') // ERROR!
I cannot find a simple way to load files at runtime.
What is the proper way to add test data to a react-native app?
I cannot find a simple way to load files at runtime.
In Node you can use dynamic import(), though I'm not sure if this is available in React Native. The syntax would be something like:
async function getTable(tableName) {
  const fileName = `./table-${tableName}.json`
  try {
    const file = await import(fileName)
    return file
  } catch (err) {
    console.log(err)
  }
}
Though like I said, I do not know if this is available in React Native's JavaScript environment, so YMMV.
Unfortunately dynamic import is not supported by React Native, but there is a way to do this:
import tableName1 from './table/tableName1.json';
import tableName2 from './table/tableName2.json';
Then create your own object like:
const tables = {
  tableName1,
  tableName2,
};
After that, you can access a table through bracket notation:
getTable = (tableName) => tables[tableName];

Ways to access firebase storage (photos) via web app

I'm confused as to the appropriate way to access a bunch of images stored in Firebase Storage from a React/Redux Firebase web app. In short, I'd love a walkthrough of: once a photo has been uploaded to Firebase Storage, how you'd go about linking it to a Firebase DB (like what exactly from the returned snapshot you'd store), how you'd then access it (if it's not just <img src={data.downloadURL} />), and also how you'd handle (if necessary) updating that link when the photo gets overwritten. If you can answer that, feel free to skip the rest of this...
Two options I came across are either
store the full URL in my firebase DB, or
store something less, like the path within the bucket, then call downloadURL() for every photo... which seems like a lot of unnecessary traffic, no?
My db structure at the moment is like so:
{
  <someProjectId>: {
    imgs: {
      <someAutoGenId>: {
        "name": "photo1.jpg",
        "url": "https://<bucket, path, etc>token=<token>"
      },
      ...
    },
    <otherProjectDetails>: "",
    ...
  },
  ...
}
Going forward with that structure and the first idea listed, I ran into trouble when a photo was overwritten: I would need to go through the list of images and remove the db record matching the name (or find it and update its URL). I could do this (at most, there would be two refs with the old token that I would need to replace), but then I saw people doing it via option 2, though not necessarily with my exact situation.
The last thing I did see a few times, were similar questions with generic responses pointing to Cloud Functions, which I will look into right after posting, but I wasn't sure if that was overcomplicating things in my case, so I figured it couldn't hurt too much to ask. I initially saw/read about Cloud Functions and the fact that Firebase's db is "live," but wasn't sure if that played well in a React/Redux environment. Regardless, I'd appreciate any insight, and thank you.
In researching Cloud Functions, I realized that using Cloud Functions wasn't an entirely separate option, but rather a way to accomplish the first option I listed above (and probably the second as well). I really tried to make this clear, but I'm pretty confident I failed... so my apologies. Here's my (2-part) working solution for syncing references in the Firebase DB to Firebase Storage URLs (in a React/Redux web app, though I think Part One should be applicable regardless):
PART ONE
Follow along here https://firebase.google.com/docs/functions/get-started to get cloud functions enabled.
The part of my database with the info I was storing relating to the images was at /projects/detail/{projectKey}/imgs and had this structure:
{
  <autoGenKey1>: {
    name: 'image1.jpg',
    url: <longURLWithToken>
  },
  <moreAutoGenKeys>: {
    ...
  },
  ...
}
My cloud function looked like this:
exports.updateURLToken = functions.database.ref(`/projects/detail/{projectKey}/imgs`)
  .onWrite(event => {
    const projectKey = event.params.projectKey
    const newObjectSet = event.data.val()
    const newKeys = Object.keys(newObjectSet)
    const oldObjectSet = event.data.previous.val()
    const oldKeys = Object.keys(oldObjectSet)
    let newObjectKey = null

    // If something was removed, none of this is necessary - return
    if (oldKeys.length > newKeys.length) {
      return null
    }

    // Looking for the new object -> it will be missing in oldObjectSet
    for (let i = 0; i < newKeys.length; ++i) {
      const key = newKeys[i]
      if (oldKeys.indexOf(key) === -1) { // Found new object
        newObjectKey = key
        break
      }
    }

    // Checking if the new object overwrote an existing object (same name)
    if (newObjectKey !== null) {
      const newObject = newObjectSet[newObjectKey]
      let duplicateKey = null
      for (let i = 0; i < oldKeys.length; ++i) {
        const oldObject = oldObjectSet[oldKeys[i]]
        if (newObject.name === oldObject.name) { // Duplicate found
          duplicateKey = oldKeys[i]
          break
        }
      }
      if (duplicateKey !== null) { // Remove duplicate
        return event.data.ref.child(duplicateKey).remove((error) => error ? 'Error removing duplicate project detail image' : true)
      }
    }

    return null
  })
After loading this function, it would run every time anything changed at that location (projects/detail/{projectKey}/imgs). So I uploaded the images, added a new object to my db with the name and url, then this would find the new object that was created, and if it had a duplicate name, that old object with the same name was removed from the db.
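For context, here is a rough sketch of the client-side flow described above (upload the file to Storage, then push the { name, url } object that triggers the function). The storage path, projectKey and variable names are my own assumptions, not code from the original answer:
// Assumes `file` is a File from an <input type="file"> and `projectKey` is already known.
const storageRef = firebase.storage().ref(`projects/${projectKey}/${file.name}`);

storageRef.put(file)
  .then(snapshot => snapshot.ref.getDownloadURL())
  .then(url => {
    // This write is what the updateURLToken function above listens for.
    return firebase.database()
      .ref(`projects/detail/${projectKey}/imgs`)
      .push({ name: file.name, url });
  })
  .catch(err => console.error(err));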
PART TWO
So now my database had the correct info, but unless I refreshed the page after every image upload, adding the new object to my database still left me (locally) with all the duplicate refs, and this is where the realtime database came into play.
Inside my container, I have:
function mapDispatchToProps (dispatch) {
  syncProjectDetailImages(dispatch) // the relevant line -> imported from api.js
  return bindActionCreators({
    ...projectsContentActionCreators,
    ...themeActionCreators,
    ...userActionCreators,
  }, dispatch)
}
Then my api.js holds that syncProjectDetailImages function:
const SAVING_PROJECT_SUCCESS = 'SAVING_PROJECT_SUCCESS'

export function syncProjectDetailImages (dispatch) {
  ref.child(`projects/detail`).on('child_changed', (snapshot) => {
    dispatch(projectDetailImagesUpdated(snapshot.key, snapshot.val()))
  })
}

function projectDetailImagesUpdated (key, updatedProject) {
  return {
    type: SAVING_PROJECT_SUCCESS,
    group: 'detail',
    key,
    updatedProject
  }
}
And finally, dispatch is figured out in my modules folder (I used the same function I would when saving any part of an updated project with redux - no new code was necessary)
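To make the data flow concrete, here is a purely hypothetical reducer sketch of how the SAVING_PROJECT_SUCCESS action above could be folded into state; the original answer reused existing code, so this is illustrative only:
function projects (state = { detail: {} }, action) {
  switch (action.type) {
    case 'SAVING_PROJECT_SUCCESS':
      return {
        ...state,
        [action.group]: {
          ...state[action.group],
          [action.key]: action.updatedProject,
        },
      }
    default:
      return state
  }
}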

Partial Updates (aka PATCH) using a $resource based service?

We're building a web application using Django/TastyPie as the back-end REST service provider, and building an AngularJS based front end, using lots of $resource based services to CRUD objects on the server. Everything is working great so far!
But, we would like to reduce the amount of data that we're shipping around when we want to update only one or two changed fields on an object.
TastyPie supports this using the HTTP PATCH method. We have defined a .diff() method on our objects, so we can determine which fields we want to send when we do an update. I just can't find any documentation on how to define/implement the method on the instance object returned by $resource to do what we want.
What we want to do is add another method to the object instances, (as described in the Angular.js documentation here) like myobject.$partialupdate() which would:
Call our .diff() function to determine which fields to send, and then
Use an HTTP PATCH request to send only those fields to the server.
So far, I can't find any documentation (or other SO posts) describing how to do this, but would really appreciate any suggestions that anyone might have.
thank you.
I would suggest using
update: {
  method: 'PATCH',
  transformRequest: dropUnchangedFields
}
where
var dropUnchangedFields = function (data, headerGetter) {
  /* compute the unchanged fields from data using your .diff method, e.g.: */
  var unchangedFields = ['name', 'street'];

  /* delete the unchanged fields from data, e.g. with a for loop */
  delete data['name'];
  delete data['street'];

  return data;
}
PS: I'm not sure from memory whether data is a reference to your resource or a copy of it, so you may need to create a copy of data before deleting fields.
Also, instead of return data, you may need to return JSON.stringify(data).
Source (search for "transformRequest" on the documentation page)
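Putting it together, a minimal sketch of declaring that custom PATCH action on a $resource; the URL, factory name and usage are placeholders, and it assumes dropUnchangedFields from above is in scope:
app.factory('Item', ['$resource', function ($resource) {
  return $resource('/api/items/:id', { id: '@id' }, {
    update: {
      method: 'PATCH',
      transformRequest: dropUnchangedFields
    }
  });
}]);

// Usage: sends a PATCH containing only the fields left in place by dropUnchangedFields
Item.update({ id: item.id }, item);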
We implemented $patch using ngResource, but it's a bit involved (we use Django Rest Framework on the server side). As for the diff component, I'll leave that to your own implementation. We use a pristine cache to track changes of resources, so I can poll a given object and see what (if any) has changed.
I leverage underscore's _.pick() method to pull the known fields to save off the existing instance, create a copy (along with the known primary key) and save that using $patch.
We also use some utility classes to extend the built-in resources.
app.factory 'PartUpdateMixin', ['$q', '_', ($q, _) ->
  PartUpdateMixin = (klass) ->
    partial_update: (keys...) ->
      deferred = $q.defer()
      params = _.pick(@, 'id', keys...)
      o = new klass(params)
      o.$patch(deferred.resolve, deferred.reject)
      return deferred.promise
]
Here's the utility classes to enhance the Resources.
app.factory 'extend', ->
  extend = (obj, mixins...) ->
    for mixin in mixins
      obj[name] = method for name, method of mixin
    obj

app.factory 'include', ['extend', (extend) ->
  include = (klass, mixins...) ->
    extend klass.prototype, mixins...
  return include
]
Finally, we can enhance our Resource
include TheResource, PartUpdateMixin(TheResource)
resourceInstance = TheResource.get(id: 1234)
# Later...
updatedFields = getChangedFields(resourceInstance)
resourceInstance.partial_update(updatedFields...)
I would suggest using Restangular over ngResource. The Angular team keeps improving ngResource with every version, but Restangular still does a lot more, including allowing actions like PATCH that ngResource doesn't. Here's a great SO question comparing the two: What is the advantage of using Restangular over ngResource?
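For comparison, a rough sketch of the same partial update done with Restangular, assuming Restangular is configured with your API's base URL; the accounts route and field name are placeholders:
Restangular.one('accounts', 123).get().then(function (account) {
  // Sends an HTTP PATCH containing only the changed fields
  account.patch({ name: 'New name' });
});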

Salesforce Metadata APIs

I want to retrieve a list of metadata components, such as ApexClass, using the Salesforce Metadata API.
I'm getting a list of all the Apex classes (2246 in total) that are in Salesforce using the following code, and it's taking too much time to retrieve these file names:
ListMetadataQuery query = new ListMetadataQuery();
query.type = "ApexClass";
double asOfVersion = 23.0;

// Assume that the SOAP binding has already been established.
FileProperties[] lmr = metadataService.listMetadata(
    new ListMetadataQuery[] { query }, asOfVersion);

if (lmr != null)
{
    foreach (FileProperties n in lmr)
    {
        string filename = n.fileName;
    }
}
My requirement is to get the list of metadata components (Apex classes) that were developed by my organisation only, so that I fetch only the Salesforce metadata components that are relevant to me and possibly save time by not getting all the classes.
How can I achieve this?
Reply as soon as possible.
Thanks in advance.
I've not used the Metadata API directly, but I'd suggest either trying to filter on the created-by field, or using a prefixed name on your classes so you can filter on that.
Not sure if filters are possible though! As for speed, my experience of using the Meta-Data API via Eclipse is that it's always pretty slow and there's not much you can do about it!
