Firebase Realtime Database - filtering a query on the server rather than the client (Web/React)

I'm quite new to Firebase and I'm struggling with the logic for querying/filtering only the data I need.
I have my users stored under /users, and each user has a list of projects, like this:
users: {
  userA: {
    projects: {
      projectId1: true,
      projectId2: true
    },
    ...
  },
  ...
}
And I have the projects themselves stored like this:
projects: {
  projectId1: {
    name: "bla"
  },
  ...
}
I want a user to be able to query all the projects in their projects list, based on those IDs.
Right now the only thing I've managed is to query every single project in the database and then filter on the client side. That has serious security implications (I don't want anyone to be able to query and fetch all the projects), and it hurts loading time as well. I can add security rules, but then I have access to nothing, since I can no longer query /projects/ as a whole and would need to request specific children.
I'm using https://github.com/CSFrequency/react-firebase-hooks/tree/master/database
and getting the data as such:
const [projects, loading, error] = useListVals(firebase.db.ref("projects"), {
  keyField: "uid",
});
So I'd like to be able to pass an array of project IDs to this request, something like where({ id is included in [projectIds] }).

You'll need to load each individual project for the user separately, pretty much like a client-side join operation. This is not nearly as slow as you may think, as Firebase pipelines the operations over a single connection.
I don't see anything built into the library you use for such client-side joins, but in regular JavaScript it's something like this:
// Read the current user's list of project keys, then fetch each project.
let userRef = firebase.database().ref('users')
  .child(firebase.auth().currentUser.uid).child('projects');
userRef.once('value').then((projectSnapshot) => {
  let promises = [];
  projectSnapshot.forEach((projectKey) => {
    let key = projectKey.key;
    let projectRef = firebase.database().ref('projects').child(key);
    promises.push(projectRef.once('value'));
  });
  // Firebase pipelines these reads over a single connection, so this stays fast.
  Promise.all(promises).then((snapshots) => {
    console.log(snapshots.map(snapshot => snapshot.val()));
  });
});
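This also addresses the security concern from the question: because each project is now read individually by key, your rules no longer need to allow reads on /projects/ as a whole. A minimal rules sketch along those lines, assuming the data layout from the question:

{
  "rules": {
    "projects": {
      "$projectId": {
        ".read": "root.child('users').child(auth.uid).child('projects').child($projectId).exists()"
      }
    }
  }
}

A query against /projects itself is still rejected under these rules, but each per-key read in the join above passes.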

Related

I can't order my Firestore data by multiple fields (React)

I have a collection called list which I'm trying to order by 3 different fields: important, unimportant, and date, so that the unimportant items come first (they'll be at the bottom of the list), the important ones last, and everything sorted by timestamp.
If I query to order my docs by any one of the above on its own, it works; the trouble starts when I try to put them together as per the Firestore documentation. So I have the following code:
const q = query(listRef, orderBy("important", "desc"), orderBy("unimportant"), orderBy("date"));
which gets me Uncaught Error in snapshot listener. This is how I get my data from Firestore:
const getData = () => { // get data from firestore to app
  onSnapshot(q, (snapshot) => {
    firestoreList = [];
    firestoreIds = [];
    snapshot.docs.forEach((doc) => {
      firestoreList.push({ ...doc.data(), id: doc.id });
      !firestoreIds.includes(doc.id) && firestoreIds.push(doc.id);
    });
    if (firestoreList.length === 0) {
      setItems(items.concat(newItem));
    } else {
      setItemIds(firestoreIds);
      setItems(firestoreList);
    }
  });
}

useEffect(() => {
  getData();
}, []);
I'm using onSnapshot because I need the user to be able to add, remove, and do other things with the data and see the outcome reflected immediately, both for them and for other users who'll be using the app simultaneously.
Ordering/filtering on multiple fields requires that your database contains a so-called composite index on those fields. And unlike single-field indexes, which are created automatically, composite indexes are only created when you explicitly tell the database to do so.
If you log the warning/error message you get, it contains a long direct link to the Firestore console to create the exact index the query needs. The link has all details already filled in, so all you have to do is click the link and then click the button to start creating the index.
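If you prefer to manage indexes in code rather than through that console link, the equivalent definition in a firestore.indexes.json file (deployed with the Firebase CLI) would look roughly like this sketch; the field order mirrors the query above, but verify it against what the console generates:

{
  "indexes": [
    {
      "collectionGroup": "list",
      "queryScope": "COLLECTION",
      "fields": [
        { "fieldPath": "important", "order": "DESCENDING" },
        { "fieldPath": "unimportant", "order": "ASCENDING" },
        { "fieldPath": "date", "order": "ASCENDING" }
      ]
    }
  ]
}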
Also see:
Firestore order by two fields
How to query one field then order by another one in Firebase cloud Firestore?
the Firestore documentation on ordering a Firestore query on multiple fields.

Ways to access firebase storage (photos) via web app

I'm confused as to the appropriate way to access a bunch of images stored in Firebase storage with a react redux firebase web app. In short, I'd love to get a walkthrough of, once a photo has been uploaded to firebase storage, how you'd go about linking it to a firebase db (like what exactly from the snapshot returned you'd store), then access it (if it's not just <img src={data.downloadURL} />), and also how you'd handle (if necessary) updating that link when the photo gets overwritten. If you can answer that, feel free to skip the rest of this...
Two options I came across are either
store the full URL in my firebase DB, or
store something less, like the path within the bucket, then call getDownloadURL() for every photo... which seems like a lot of unnecessary traffic, no?
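For concreteness, option 2 would amount to something like this sketch, assuming a storage path stored in the db (imagePath is a placeholder):

// Resolve a fresh download URL from a stored bucket path at render time.
firebase.storage().ref(imagePath).getDownloadURL().then((url) => {
  // then render <img src={url} />
});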
My db structure at the moment is like so:
{
  <someProjectId>: {
    imgs: {
      <someAutoGenId>: {
        "name": "photo1.jpg",
        "url": "https://<bucket, path, etc>token=<token>"
      },
      ...
    },
    <otherProjectDetails>: "",
    ...
  },
  ...
}
Going forward with that structure and the first idea listed, I ran into trouble when a photo was overwritten, so I would need to go through the list of images and remove the db record that matches the name (or find it and update its URL). I could do this (at most, there would be two refs with the old token that I would need to replace), but then I saw people doing it via option 2, though not necessarily with my exact situation.
The last thing I did see a few times, were similar questions with generic responses pointing to Cloud Functions, which I will look into right after posting, but I wasn't sure if that was overcomplicating things in my case, so I figured it couldn't hurt too much to ask. I initially saw/read about Cloud Functions and the fact that Firebase's db is "live," but wasn't sure if that played well in a React/Redux environment. Regardless, I'd appreciate any insight, and thank you.
In researching Cloud Functions, I realized that the use of Cloud Functions wasn't an entirely separate option, but rather a way to accomplish the first option I listed above (and probably the second as well). I really tried to make this clear, but I'm pretty confident I failed... so my apologies. Here's my (2-Part) working solution to syncing references in Firebase DB to Firebase Storage urls (in a React Redux Web App, though I think Part One should be applicable regardless):
PART ONE
Follow along here https://firebase.google.com/docs/functions/get-started to get cloud functions enabled.
The part of my database with the info I was storing relating to the images was at /projects/detail/{projectKey}/imgs and had this structure:
{
  <autoGenKey1>: {
    name: 'image1.jpg',
    url: <longURLWithToken>
  },
  <moreAutoGenKeys>: {
    ...
  },
  ...
}
My cloud function looked like this:
// Note: this uses the pre-v1.0 Cloud Functions API (event.params / event.data).
exports.updateURLToken = functions.database.ref(`/projects/detail/{projectKey}/imgs`)
  .onWrite(event => {
    const projectKey = event.params.projectKey
    // Guard against null snapshots (first write / full deletion)
    const newObjectSet = event.data.val() || {}
    const newKeys = Object.keys(newObjectSet)
    const oldObjectSet = event.data.previous.val() || {}
    const oldKeys = Object.keys(oldObjectSet)
    let newObjectKey = null
    // If something was removed, none of this is necessary - return
    if (oldKeys.length > newKeys.length) {
      return null
    }
    // Looking for the new object -> it will be missing in oldObjectSet
    for (let i = 0; i < newKeys.length; ++i) {
      const key = newKeys[i]
      if (oldKeys.indexOf(key) === -1) { // Found new object
        newObjectKey = key
        break
      }
    }
    // Checking if the new object overwrote an existing object (same name)
    if (newObjectKey !== null) {
      const newObject = newObjectSet[newObjectKey]
      let duplicateKey = null
      for (let i = 0; i < oldKeys.length; ++i) {
        const oldObject = oldObjectSet[oldKeys[i]]
        if (newObject.name === oldObject.name) { // Duplicate found
          duplicateKey = oldKeys[i]
          break
        }
      }
      if (duplicateKey !== null) { // Remove the duplicate
        return event.data.ref.child(duplicateKey).remove((error) =>
          error ? 'Error removing duplicate project detail image' : true)
      }
    }
    return null
  })
After deploying this function, it would run every time anything changed at that location (projects/detail/{projectKey}/imgs). So I uploaded the images and added a new object to my db with the name and url; the function would then find the newly created object, and if it had a duplicate name, the old object with the same name was removed from the db.
PART TWO
So now my database had the correct info, but unless I refreshed the page after every upload, adding the new object to my database still left me (locally) with all the duplicate refs. This is where the realtime database came into play.
Inside my container, I have:
function mapDispatchToProps (dispatch) {
  syncProjectDetailImages(dispatch) // the relevant line -> imported from api.js
  return bindActionCreators({
    ...projectsContentActionCreators,
    ...themeActionCreators,
    ...userActionCreators,
  }, dispatch)
}
Then my api.js holds that syncProjectDetailImages function:
const SAVING_PROJECT_SUCCESS = 'SAVING_PROJECT_SUCCESS'

export function syncProjectDetailImages (dispatch) {
  ref.child(`projects/detail`).on('child_changed', (snapshot) => {
    dispatch(projectDetailImagesUpdated(snapshot.key, snapshot.val()))
  })
}

function projectDetailImagesUpdated (key, updatedProject) {
  return {
    type: SAVING_PROJECT_SUCCESS,
    group: 'detail',
    key,
    updatedProject
  }
}
And finally, the dispatch is handled in my modules folder (I used the same function I would use when saving any part of an updated project with redux - no new code was necessary).
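For readers who don't have that existing reducer, a hypothetical fragment handling the action above might look like this (all names assumed, mirroring the action shape from api.js):

// Hypothetical reducer: merge the updated project into state by group and key.
function projects (state = {}, action) {
  switch (action.type) {
    case SAVING_PROJECT_SUCCESS:
      return {
        ...state,
        [action.group]: {
          ...state[action.group],
          [action.key]: action.updatedProject,
        },
      }
    default:
      return state
  }
}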

CouchDB update design doc

I have a Node.js application where I connect to my CouchDB using nano with the following script:
const { connectionString } = require('../config');
const nano = require('nano')(connectionString);

// creates the database, or fails silently if it exists
nano.db.create('foo');

module.exports = {
  foo: nano.db.use('foo')
}
This script is running on every server start, so it tries to create the database 'foo' every time the server (re)starts and just fails silently if the database already exists.
I like this idea a lot because this way I'm actually maintaining the database at the application level and don't have to create databases manually when I decide to add a new database.
Taking this approach one step further I also tried to maintain my design docs from application level.
...
nano.db.create('foo');
const foo = nano.db.use('foo');

const design = {
  _id: "_design/foo",
  views: {
    by_name: {
      map: function(doc) {
        emit(doc.name, null);
      }
    }
  }
}

foo.insert(design, (err) => {
  if (err)
    console.log('design insert failed');
})

module.exports = {
  foo
}
Obviously this will only insert the design doc if it doesn't already exist. But what if I update my design doc and want to push the change?
I tried:
foo.get("_design/foo", (err, doc) => {
if(err)
return foo.insert(design);
design._rev = doc._rev
foo.insert(design);
})
The problem now is that the design document is updated every time the server restarts (i.e. it gets a new _rev on every restart).
Now... my question(s) :)
1: Is this a bad approach for bootstrapping my CouchDB with databases and designs? Should I consider some migration steps as part of my deployment process?
2: Is it a problem that my design doc gets many _revs, basically one for every deployment and server restart, even if the document itself hasn't changed? And if so, is there a way to only update the document if it changed? (I thought of manually setting the _rev to some value in my application, but I'm very unsure that would be a good idea.)
Your approach seems quite reasonable. If the checks happen only at restarts, this won't even be a performance issue.
Too many _revs can become a problem. The history of _revs is kept as _revs_info and stored with the document itself (see the CouchDB docs for details). Depending on your setup, it might be a bad decision to create unnecessary revisions.
We had a similar challenge with some server-side scripts that required certain views. Our solution was to calculate a hash over the old and new design document and compare them. You can use any hashing function for this job, such as sha1 or md5.
Just remember to remove the _rev from the old document before hashing it, or otherwise you will get different hash values every time.
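As a rough illustration, the comparison might look like the sketch below (plain Node.js with the built-in crypto module; names are mine, and note the serialization caveat raised in the next answer):

const crypto = require('crypto');

// Serialize view functions to strings, since that is how CouchDB stores them.
function serializeDesign(design) {
  return JSON.stringify(design, (key, value) =>
    typeof value === 'function' ? value.toString() : value);
}

// Hash a design doc, ignoring the volatile _rev field.
function designDocHash(design) {
  const { _rev, ...rest } = design;
  return crypto.createHash('md5').update(serializeDesign(rest)).digest('hex');
}

// Only write when the hashes differ.
foo.get('_design/foo', (err, existing) => {
  if (err) return foo.insert(design);
  if (designDocHash(existing) !== designDocHash(design)) {
    design._rev = existing._rev;
    foo.insert(design);
  }
});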
I tried the md5 comparison like @Bernhard Gschwantner suggested. But I ran into some difficulties because in my case I'd like to write the map/reduce functions of the design documents as pure JavaScript in my code:
const design = {
  _id: "_design/foo",
  views: {
    by_name: {
      map: function(doc) {
        emit(doc.name, null);
      }
    }
  }
}
while getting the design doc from CouchDb returns the map/reduce functions converted as strings:
...
"by_name": {
  "map": "function (doc) {\n emit(doc.name, null);\n }"
},
...
Obviously md5 comparison does not really work here.
I ended up with the very simple solution by just putting a version number on the design doc:
const design = {
  _id: "_design/foo",
  version: 1,
  views: {
    by_name: {
      map: function(doc) {
        emit(doc.name, null);
      }
    }
  }
}
When I update the design doc, I simply increment the version number and compare it with the version number in database:
const fooDesign = {...}

foo.get('_design/foo', (err, design) => {
  if (err)
    return foo.insert(fooDesign);
  console.log('comparing foo design version', design.version, fooDesign.version);
  if (design.version !== fooDesign.version) {
    fooDesign._rev = design._rev;
    foo.insert(fooDesign, (err) => {
      if (err)
        return console.log('error updating foo design', err);
      console.log('foo design updated to version', fooDesign.version)
    });
  }
});
Revisiting your question again: in a recent project I used the great couchdb-push module by Johannes Schmidt. You get conditional updates for free, along with many other benefits inherited from its dependency couchdb-compile.
That library turned out to be a hidden gem for me. HIGHLY recommended!
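For illustration, usage is roughly like the following sketch (the database URL and source directory are placeholders, and the exact signature is from memory; check the couchdb-push README):

const push = require('couchdb-push');

// Push the documents under ./couchdb (compiled by couchdb-compile) into the
// 'foo' database; the module only writes docs that actually changed.
push('http://localhost:5984/foo', './couchdb', (err, response) => {
  if (err) return console.error('push failed', err);
  console.log('push response', response);
});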

Is there a way to query multiple tables at the same time in Sails?

I've been tasked with adding an Angular Typeahead search field to a site and the data needs to come from multiple tables. It needs to be a "search all the things" kind of query which looks for people, servers, and applications in one spot.
I was thinking the best way to do this would be to have a single API endpoint in Sails which could pull from 3 tables on the same DB and send the results, but I'm not quite sure how to go about it.
Use the built-in bluebird library, specifically Promise.all(). To handle the results, use .spread(). Example controller code (modify to suit your case):
var Promise = require('bluebird');

module.exports = {
  searchForStuff: function(req, res) {
    var params = req.allParams();
    // Replace the 'find' criteria with whatever is suitable for your case
    var requests = [
      Person.find({name: params.searchString}),
      Server.find({name: params.searchString}),
      Application.find({name: params.searchString})
    ];
    Promise.all(requests)
      .spread(function(people, servers, applications) {
        return res.json({
          people: people,
          servers: servers,
          applications: applications
        })
      })
  }
}
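On newer Node versions you can skip bluebird's .spread() entirely and use native promises with array destructuring; a minimal equivalent, assuming an async controller action:

// Native Promise.all resolves to an array in the same order as the requests.
const [people, servers, applications] = await Promise.all(requests);
return res.json({ people, servers, applications });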

How to execute a relay mutation asynchronously?

I have a relay mutation that posts some data to my server. My app shouldn't wait for the response before continuing.
I know I can execute arbitrary queries with the following:
const query = Relay.createQuery(Relay.QL`
  query {
    viewer {
      searchInterests(prefix: $prefix, first: 10) {
        edges {
          node {
            id
            name
          }
        }
      }
    }
  }
`, {prefix: input});

Relay.Store.primeCache({query}, readyState => {
  if (readyState.done) {
    // When all data is ready, read the data from the cache:
    const data = Relay.Store.readQuery(query)[0];
    ...
  }
});
How can I fire off mutations asynchronously without my app waiting for the response?
When designing a fat query, consider all of the data that might change as a result of the mutation – not just the data currently in use by your application. We don't need to worry about overfetching; this query is never executed without first intersecting it with a ‘tracked query’ of the data our application actually needs. If we omit fields in the fat query, we might observe data inconsistencies in the future when we add views with new data dependencies, or add new data dependencies to existing views.
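To make that concrete, here is a hypothetical sketch of a Relay Classic mutation (PostDataMutation and its payload fields are illustrative names, not from the question). Relay.Store.commitUpdate sends the mutation without blocking: the onSuccess/onFailure callbacks are optional, so the app doesn't wait for the server response.

// Hypothetical mutation; adapt the schema names to your server.
class PostDataMutation extends Relay.Mutation {
  getMutation() {
    return Relay.QL`mutation { postData }`;
  }
  getFatQuery() {
    // List everything that *could* change, not just what the current views use;
    // Relay intersects this with the tracked query before fetching.
    return Relay.QL`
      fragment on PostDataPayload {
        viewer
      }
    `;
  }
  getConfigs() {
    return [{
      type: 'FIELDS_CHANGE',
      fieldIDs: { viewer: this.props.viewerId },
    }];
  }
  getVariables() {
    return { data: this.props.data };
  }
}

// Fire-and-forget: commitUpdate returns immediately; callbacks are optional.
Relay.Store.commitUpdate(new PostDataMutation({ viewerId, data }));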
