Every day I am importing products from external retailers into a Google Cloud Firestore database.
During that process, products can be either new (a new document will be added to the database) or existing (an existing document will be updated in the database).
It should be noted that I am importing about 10 million products each day, so I am not querying the database for each product to check whether it already exists.
I am currently using set with merge, which is exactly what I need as it creates a document if it doesn't exist or updates specific fields of an existing document.
Now the question is: how can I keep a createdAt timestamp, given that the provided fields will be updated and the original createdAt timestamp will therefore be lost on update? Is there any way to not update a specific field if that field already exists in the document?
I suggest using a Cloud Function that creates a new dedicated field when a Firestore doc is created. This field should not be included in the object you pass to the set() method when you update the docs.
Here is a proposal for the Cloud Function code:
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.productsCreatedDate = functions.firestore
  .document('products/{productId}')
  .onCreate((snap, context) => {
    return snap.ref.set(
      { calculatedCreatedAt: admin.firestore.FieldValue.serverTimestamp() },
      { merge: true }
    )
    .catch(error => {
      console.log(error);
      return false;
    });
  });
Based on Bob Snyder's comment above, note that you could also do:
const docCreatedTimestamp = snap.createTime;
return snap.ref.set(
  { calculatedCreatedAt: docCreatedTimestamp },
  { merge: true }
)
.catch(...)
In Firebase v9 (modular SDK), the equivalent is:
import { serverTimestamp } from "firebase/firestore";
// ...
return snap.ref.set(
  { calculatedCreatedAt: serverTimestamp() },
  { merge: true }
);
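To see concretely why the original timestamp would otherwise be lost, here is a minimal plain-JavaScript sketch of set-with-merge semantics (no Firebase SDK involved; `mergeSet` and `preserveCreatedAt` are hypothetical names illustrating the client-side workaround of stripping createdAt from update payloads):

```javascript
// Simulates Firestore's set(..., { merge: true }): incoming fields
// overwrite existing ones, so an incoming createdAt clobbers the original.
function mergeSet(existing, incoming) {
  return { ...existing, ...incoming };
}

// Hypothetical helper: strip createdAt from the update payload so the
// original value survives; only set it when the document is new.
function preserveCreatedAt(existing, incoming, now) {
  const { createdAt, ...rest } = incoming;
  return existing === undefined
    ? { ...rest, createdAt: now }
    : mergeSet(existing, rest);
}

const original = { name: 'Widget', createdAt: 100 };
const update = { name: 'Widget v2', createdAt: 200 };

console.log(mergeSet(original, update).createdAt);                // 200 - original lost
console.log(preserveCreatedAt(original, update, 200).createdAt);  // 100 - preserved
console.log(preserveCreatedAt(undefined, update, 300).createdAt); // 300 - set on create
```

The Cloud Function approach above achieves the same outcome server-side, which is safer because importers never need to know about the field.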
I am building an app using Firebase Firestore as a BaaS.
But I am facing a problem when I try to create a feed/implement full-text-search on my app.
I want to be able to search through all the users posts, the problem is, the users posts are structured like this in the Firestore Database:
Posts(collection) -> UserID(Document) -> user posts(subcollection that holds all userID posts) -> actual posts(separate documents within that collection)
I want to loop through every user's userPosts subcollection and fetch all data for the feed, and also implement full-text search with a service like Algolia or ES.
I can loop through a specific user ID (code below), but being a beginner, I couldn't find a way to loop through all of them and fetch all of their posts.
firebase.firestore()
  .collection('allPosts')
  .doc('SPECIFIC_USER_ID') // -> Here I have to loop through all docs in that collection
  .collection('userPosts')
  .orderBy("creation", "asc")
  .get()
  .then((snapshot) => {
    let posts = snapshot.docs.map(doc => {
      const data = doc.data();
      const id = doc.id;
      return { id, ...data }
    })
    setUserPosts(posts)
  })
Would love some help!
Collection Group Query
You can query in all collections named X using a collection group query.
var query = db.collectionGroup('userPosts').orderBy('creation').limit(10);
query.get().then((querySnapshot) => {
  let posts = querySnapshot.docs.map(doc => {
    const data = doc.data();
    const id = doc.id;
    return { id, ...data }
  })
  setUserPosts(posts)
});
Source: https://firebase.google.com/docs/firestore/query-data/queries#collection-group-query
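Conceptually, the collection group query flattens every userPosts subcollection into one result set ordered by the query field. A rough in-memory sketch of that behavior (plain JavaScript, no Firebase SDK; the sample data and the `collectionGroup` function here are illustrative):

```javascript
// In-memory stand-in for allPosts/{userID}/userPosts/{postID}
const allPosts = {
  alice: { p1: { creation: 3, text: 'hi' }, p2: { creation: 1, text: 'first' } },
  bob:   { p3: { creation: 2, text: 'yo' } },
};

// Flatten every user's subcollection and order by 'creation',
// like db.collectionGroup('userPosts').orderBy('creation')
function collectionGroup(root) {
  return Object.values(root)
    .flatMap(userPosts =>
      Object.entries(userPosts).map(([id, data]) => ({ id, ...data })))
    .sort((a, b) => a.creation - b.creation);
}

console.log(collectionGroup(allPosts).map(p => p.id)); // ['p2', 'p3', 'p1']
```

The real query does this server-side with an index, so you never download posts you don't need.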
Algolia implementation
You will need to use Cloud Functions to migrate fields to a dedicated collection specifically for Algolia. Many users have found nested SubCollections to be problematic with Algolia's setup.
You do this by duplicating the user post data into this new public collection as a 'source'; using the Firebase Algolia Extension, you can then sync it directly.
exports.bakePosts = functions.firestore
  .document('allPosts/{userID}/userPosts/{postID}')
  .onWrite((change, context) => {
    // Get an object with the current document value.
    // If the document does not exist, it has been deleted.
    const document = change.after.exists ? change.after.data() : null;
    if (document != null) {
      return db.collection("posts").doc(context.params.postID).set(document);
    } else {
      return db.collection("posts").doc(context.params.postID).delete();
    }
  });
Algolia Extension:
https://firebase.google.com/products/extensions/firestore-algolia-search
You can avoid most of the above if you simply submit posts to a master collection and have the userID as the 'owner' property within the document. The approach above also has benefits, but they relate more to blog-post scenarios where users may have a "work in progress" version vs. a live one.
The Algolia Extension has the full guide on how to set it up and if you need to customize the extensions, the source code is also available.
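The duplication step this setup relies on can be sketched without the SDK: on every write, mirror the post into a flat posts map keyed by post ID (a simplified in-memory model; `syncPost` is a hypothetical name):

```javascript
// Flat mirror collection, keyed by postID
const posts = {};

// Mirror a write on allPosts/{userID}/userPosts/{postID} into 'posts'.
// A null document means the source was deleted.
function syncPost(postID, document) {
  if (document !== null) {
    posts[postID] = document;   // create or update the mirror
  } else {
    delete posts[postID];       // remove the mirror on delete
  }
}

syncPost('p1', { text: 'hello' });
syncPost('p2', { text: 'world' });
syncPost('p1', null);

console.log(Object.keys(posts)); // ['p2']
```

The flat collection is what Algolia (or any search indexer) then watches, sidestepping the nested-subcollection problem entirely.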
I don't know how to go about accessing several documents inside a document of unknown name. Here is my Firebase Firestore structure:
BUSINESS
unknownbusinessid1
PROMOTIONS
unknownpromotionid1
unknownpromotionid2
(...)
unknownbusinessid2
(...)
unknownbusinessid3
(...)
what I would like to achieve is to retrieve all promotion documents (unknownpromotionid1, unknownpromotionid2, etc.) without knowing the business ids (unknownbusinessid1, unknownbusinessid2, unknownbusinessid3, etc.)
it should be something like this:
const getItem = db.doc(`BUSINESS/$whatever`).collection("PROMOTIONS")
.get().then((snapshot) => {
snapshot.docs.map(doc => {
console.log("this is your promotion", doc)
})
return promotionsArray
})
how can this be accomplished?
thank you!
You can use a collection group query, as detailed in the docs; it allows accessing subcollections with a specific name from every document at once:
db.collectionGroup('PROMOTIONS').get().then(snapshot => {
  snapshot.forEach(doc => {
    console.log("this is your promotion", doc)
  })
});
Note that you will need to set up an index as well as an appropriate security rule for Firestore:
match /{path=**}/PROMOTIONS/{id} {
allow read: if true;
}
So I am curious: when does the onDataChange method occur?
It seems like it is activated when a user adds new information or changes already-existing data.
However, what I am trying to do is this: before adding new data, I want to check whether the item already exists in the database. If there is an identical item, the new data won't be added; if there is no such item, it should be added to the database.
So my actual question is: can this process of checking all the database items be done without using the onDataChange method?
You basically set up a subscription with onDataChange, so it is actually watching Firebase for changes.
But for checking, you could iterate through the results, or do a one-time query to the exact path your data is held at.
It may also be a better choice to record everything and then remove the data when it is no longer needed.
import { AngularFireDatabase } from 'angularfire2/database';
import { map } from 'rxjs/operators';
import { Subscription } from 'rxjs';
import firebase from 'firebase/app';

private mysubscription: Subscription;
public items: any[] = [];

constructor(public _DB: AngularFireDatabase) {
  try {
    // subscription using AngularFire
    this.mysubscription = this._DB.list("myFireBaseDataPath").snapshotChanges().pipe(
      map(actions => actions.map(action => ({ key: action.key, val: action.payload.val() })))
    )
    .subscribe(items => {
      this.items = items;
      console.log("db results", this.items);
      for (let i in this.items) {
        console.log("key", this.items[i].key);
        console.log("val", this.items[i].val);
        console.log("----------------------------------");
        // checking if something exists
        if (this.items[i].key == 'SomeNodePath') {
          var log = this.items[i].val;
        }
      }
    });
  } catch (e) {
    console.error(e);
  }
}

ngOnDestroy() {
  this.mysubscription.unsubscribe();
}
// or we can do a one-time query using just the firebase module
try {
  return firebase.database().ref("myFireBaseDataPath").once('value')
    .then(snapshot => snapshot.val())
    .then(res => {
      for (let myNode in res) {
        console.log(res[myNode]);
        console.warn(res[myNode].myChildPath);
        console.log("----------------------------------");
      }
    })
    .catch(error => console.log(error));
} catch (e) {
  console.error(e);
}
// however it may be better practice to log all data and then call firebase.database().ref("/logs").remove(); to clear the entire log when not needed
var desc = "abc";
let newPostKey = firebase.database().ref("/logs").push();
newPostKey.set({
  'info': desc,
  'datetime': new Date().toISOString()
});
When does onDataChange method occur?
The onDataChange method is called for every change in the database reference it is attached to. It is also called for every visit to the database reference it is attached to.
For example,
final FirebaseDatabase database = FirebaseDatabase.getInstance();
DatabaseReference ref = database.getReference("some/database/reference");
ref.addValueEventListener(new ValueEventListener() {
    @Override
    public void onDataChange(DataSnapshot dataSnapshot) {
        // This method will be fired for any change in the
        // "some/database/reference" part of the database.
        // It will also be fired anytime you request data from the
        // "some/database/reference" part of the database.
    }

    @Override
    public void onCancelled(DatabaseError databaseError) {
        System.out.println("The read failed: " + databaseError.getCode());
        // This method will be fired anytime a request for data in the
        // "some/database/reference" part of the database fails.
    }
});
Before adding new data, I want to check if the item is existing in database....if there is an identical item, adding new data won't be done, or if there is no such item, then it should be added to database.
This can be done by calling the exists() method on the snapshot retrieved from your database query.
Check this stackoverflow question Checking if a particular value exists in the firebase database for an answer to that
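The check-then-add flow can be sketched with an in-memory object standing in for the database (plain JavaScript; `addIfAbsent` is a hypothetical name). Note that in a real multi-client app the check and the write should happen atomically, e.g. in a transaction, or two clients can race:

```javascript
const db = {}; // in-memory stand-in for the database reference

// Emulates the snapshot.exists() check: only write when no identical item is present.
function addIfAbsent(key, value) {
  if (Object.prototype.hasOwnProperty.call(db, key)) {
    return false; // identical item exists; skip the write
  }
  db[key] = value;
  return true;
}

console.log(addIfAbsent('item1', { name: 'apple' })); // true - added
console.log(addIfAbsent('item1', { name: 'apple' })); // false - already there
```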
So, my actual question is that, this process "Checking all the database items", can it be done without using onDataChange method?
No. The onDataChange method is the callback used to retrieve data from the database. Even if you use the equalTo() method on a query, you'll still have to use the onDataChange method.
I am not a Firebase specialist, though. There are folks who work at Firebase on here who could give you more information.
PS: Please make your own research on your questions first before asking. Some questions are already answered in the documentation and on stackoverflow.
This question already has answers here:
Cloud Firestore collection count
(29 answers)
Closed 10 months ago.
In Firestore, how can I get the total number of documents in a collection?
For instance if I have
/people
/123456
/name - 'John'
/456789
/name - 'Jane'
I want to query how many people I have and get 2.
I could do a query on /people and then get the length of the returned results, but that seems a waste, especially because I will be doing this on larger datasets.
You currently have 3 options:
Option 1: Client side
This is basically the approach you mentioned. Select all from collection and count on the client side. This works well enough for small datasets but obviously doesn't work if the dataset is larger.
Option 2: Write-time best-effort
With this approach, you can use Cloud Functions to update a counter for each addition and deletion from the collection.
This works well for any dataset size, as long as additions/deletions only occur at the rate less than or equal to 1 per second. This gives you a single document to read to give you the almost current count immediately.
If you need to exceed 1 per second, you need to implement distributed counters per our documentation.
Option 3: Write-time exact
Rather than using Cloud Functions, in your client you can update the counter at the same time as you add or delete a document. This means the counter will also be current, but you'll need to make sure to include this logic anywhere you add or delete documents.
Like option 2, you'll need to implement distributed counters if you want to exceed 1 write per second.
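The distributed-counter idea from the documentation can be sketched in plain JavaScript: spread increments across several shard documents so no single document takes more than ~1 write per second, and sum the shards on read (the shard count of 4 here is arbitrary):

```javascript
const NUM_SHARDS = 4;
const shards = Array.from({ length: NUM_SHARDS }, () => ({ count: 0 }));

// Each increment hits one random shard, spreading write load
// so no single shard document is a hot spot.
function incrementCounter() {
  const i = Math.floor(Math.random() * NUM_SHARDS);
  shards[i].count += 1;
}

// Reading the counter sums all shards.
function getCount() {
  return shards.reduce((total, shard) => total + shard.count, 0);
}

for (let i = 0; i < 10; i++) incrementCounter();
console.log(getCount()); // 10
```

In Firestore the shards would be documents in a subcollection and each increment a transaction or FieldValue.increment on one of them.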
Aggregations are the way to go (Firebase Functions look like the recommended way to update these aggregations, since doing it client side exposes info to the user that you may not want exposed): https://firebase.google.com/docs/firestore/solutions/aggregation
Another way (NOT recommended), which is not good for large lists since it involves downloading the whole list, is res.size, as in this example:
db.collection("logs")
.get()
.then((res) => console.log(res.size));
If you use AngulareFire2, you can do (assuming private afs: AngularFirestore is injected in your constructor):
this.afs.collection(myCollection).valueChanges().subscribe( values => console.log(values.length));
Here, values is an array of all items in myCollection. You don't need metadata so you can use valueChanges() method directly.
Be careful counting the number of documents for large collections with a cloud function. It is a little bit complex with the Firestore database if you want to have a precalculated counter for every collection.
Code like this doesn't work in this case:
export const customerCounterListener =
functions.firestore.document('customers/{customerId}')
.onWrite((change, context) => {
// on create
if (!change.before.exists && change.after.exists) {
return firestore
.collection('metadatas')
.doc('customers')
.get()
.then(docSnap =>
docSnap.ref.set({
count: docSnap.data().count + 1
}))
// on delete
} else if (change.before.exists && !change.after.exists) {
return firestore
.collection('metadatas')
.doc('customers')
.get()
.then(docSnap =>
docSnap.ref.set({
count: docSnap.data().count - 1
}))
}
return null;
});
The reason is that every Cloud Firestore trigger has to be idempotent, as the Firestore documentation says: https://firebase.google.com/docs/functions/firestore-events#limitations_and_guarantees
Solution
So, in order to prevent multiple executions of your code, you need to track events and use transactions. This is my particular way of handling large collection counters:
const executeOnce = (change, context, task) => {
const eventRef = firestore.collection('events').doc(context.eventId);
return firestore.runTransaction(t =>
t
.get(eventRef)
.then(docSnap => (docSnap.exists ? null : task(t)))
.then(() => t.set(eventRef, { processed: true }))
);
};
const documentCounter = collectionName => (change, context) =>
executeOnce(change, context, t => {
// on create
if (!change.before.exists && change.after.exists) {
return t
.get(firestore.collection('metadatas')
.doc(collectionName))
.then(docSnap =>
t.set(docSnap.ref, {
count: ((docSnap.data() && docSnap.data().count) || 0) + 1
}));
// on delete
} else if (change.before.exists && !change.after.exists) {
return t
.get(firestore.collection('metadatas')
.doc(collectionName))
.then(docSnap =>
t.set(docSnap.ref, {
count: docSnap.data().count - 1
}));
}
return null;
});
Use cases here:
/**
* Count documents in articles collection.
*/
exports.articlesCounter = functions.firestore
.document('articles/{id}')
.onWrite(documentCounter('articles'));
/**
* Count documents in customers collection.
*/
exports.customersCounter = functions.firestore
.document('customers/{id}')
.onWrite(documentCounter('customers'));
As you can see, the key to preventing multiple executions is the property called eventId in the context object. If the function has been invoked multiple times for the same event, the event ID will be the same in all cases. Unfortunately, you must have an "events" collection in your database.
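The eventId deduplication can be sketched with a Set standing in for the events collection: the handler runs its task only the first time a given event ID is seen (plain JavaScript, in-memory model):

```javascript
const processedEvents = new Set(); // stand-in for the 'events' collection
let count = 0;                     // stand-in for the counter document

// Run task at most once per eventId, like the transaction in executeOnce.
function executeOnce(eventId, task) {
  if (processedEvents.has(eventId)) return; // duplicate delivery; skip
  task();
  processedEvents.add(eventId);
}

// Cloud Functions may redeliver the same event; the counter stays correct.
executeOnce('evt-1', () => { count += 1; });
executeOnce('evt-1', () => { count += 1; }); // redelivery, ignored
executeOnce('evt-2', () => { count += 1; });

console.log(count); // 2
```

The real version needs a Firestore transaction for the has/add pair, because two concurrent deliveries could both pass an unguarded check.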
Please check the answer below, which I found on another thread. Your count should be atomic; it's required to use the FieldValue.increment() function in such cases.
https://stackoverflow.com/a/49407570/3337028
firebase-admin offers select(fields) which allows you to only fetch specific fields for documents within your collection. Using select is more performant than fetching all fields. However, it is only available for firebase-admin and firebase-admin is typically only used server side.
select can be used as follows:
select('age', 'name') // fetch the age and name fields
select() // select no fields, which is perfect if you just want a count
select is available for Node.js servers but I am not sure about other languages:
https://googleapis.dev/nodejs/firestore/latest/Query.html#select
https://googleapis.dev/nodejs/firestore/latest/CollectionReference.html#select
Here's a server-side cloud function written in Node.js which uses select to count a filtered collection and to get the IDs of all resulting documents. It's written in TS but easily converted to JS.
import admin from 'firebase-admin'
// https://stackoverflow.com/questions/46554091/cloud-firestore-collection-count
// we need to use admin SDK here as select() is only available for admin
export const videoIds = async (req: any): Promise<any> => {
const id: string = req.query.id || null
const group: string = req.query.group || null
let processed: boolean = null
if (req.query.processed === 'true') processed = true
if (req.query.processed === 'false') processed = false
let q: admin.firestore.Query<admin.firestore.DocumentData> = admin.firestore().collection('videos')
if (group != null) q = q.where('group', '==', group)
if (processed != null) q = q.where('flowPlayerProcessed', '==', processed)
// select restricts returned fields such as ... select('id', 'name')
const query: admin.firestore.QuerySnapshot<admin.firestore.DocumentData> = await q.orderBy('timeCreated').select().get()
const ids: string[] = query.docs.map((doc: admin.firestore.QueryDocumentSnapshot<admin.firestore.DocumentData>) => doc.id) // ({ id: doc.id, ...doc.data() })
return {
id,
group,
processed,
idx: id == null ? null : ids.indexOf(id),
count: ids.length,
ids
}
}
The cloud function HTTP request completes within 1 second for a collection of 500 docs where each doc contains a lot of data. Not amazingly performant but much better than not using select. Performance could be improved by introducing client side caching (or even server side caching).
The cloud function entry point looks like this:
exports.videoIds = functions.https.onRequest(async (req, res) => {
const response: any = await videoIds(req)
res.json(response)
})
The HTTP request URL would be:
https://SERVER/videoIds?group=my-group&processed=true
Firebase functions detail where the server is located on deployment.
Following Dan's answer: you can keep a separate counter in your database and use Cloud Functions to maintain it (write-time best-effort).
// Example of performing an increment when item is added
module.exports.incrementIncomesCounter = collectionRef.onCreate(event => {
const counterRef = event.data.ref.firestore.doc('counters/incomes')
counterRef.get()
.then(documentSnapshot => {
const currentCount = documentSnapshot.exists ? documentSnapshot.data().count : 0
counterRef.set({
count: Number(currentCount) + 1
})
.then(() => {
console.log('counter has increased!')
})
})
})
This code shows you the complete example of how to do it:
https://gist.github.com/saintplay/3f965e0aea933a1129cc2c9a823e74d7
Get a new write batch
WriteBatch batch = db.batch();
Add a new document to the "cities" collection:
DocumentReference nycRef = db.collection("cities").document();
batch.set(nycRef, new City());
Maintain a document with ID "count" and an initial value of total = 0.
During an add operation, perform the following:
DocumentReference countRef= db.collection("cities").document("count");
batch.update(countRef, "total", FieldValue.increment(1));
During a delete operation, perform the following:
DocumentReference countRef= db.collection("cities").document("count");
batch.update(countRef, "total", FieldValue.increment(-1));
Always get the document count from:
DocumentReference countRef = db.collection("cities").document("count");
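FieldValue.increment works because increments are applied server-side as commutative deltas, so concurrent writers never need to read the current total first. A plain-JavaScript sketch of that idea (in-memory stand-in for the count document):

```javascript
// Server-side state; clients never read it before writing.
const countDoc = { total: 0 };

// Emulates batch.update(countRef, "total", FieldValue.increment(delta)):
// the client only sends a delta, the server applies it atomically.
function applyIncrement(doc, field, delta) {
  doc[field] = (doc[field] || 0) + delta;
}

// Two adds and one delete, applied in any order, yield the same total.
applyIncrement(countDoc, 'total', 1);
applyIncrement(countDoc, 'total', 1);
applyIncrement(countDoc, 'total', -1);

console.log(countDoc.total); // 1
```

This is why the batch above never has to fetch the count document before updating it.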
I created an NPM package to handle all counters:
First install the module in your functions directory:
npm i adv-firestore-functions
then use it like so:
import { eventExists, colCounter } from 'adv-firestore-functions';
functions.firestore
  .document('posts/{docId}')
  .onWrite(async (change: any, context: any) => {
    // don't run if repeated function
    if (await eventExists(context)) {
      return null;
    }
    await colCounter(change, context);
  });
It handles events, and everything else.
If you want to make it a universal counter for all functions:
import { eventExists, colCounter } from 'adv-firestore-functions';
functions.firestore
  .document('{colId}/{docId}')
  .onWrite(async (change: any, context: any) => {
    const colId = context.params.colId;
    // don't run if repeated function
    if (await eventExists(context) || colId.startsWith('_')) {
      return null;
    }
    await colCounter(change, context);
  });
And don't forget your rules:
match /_counters/{document} {
allow read;
allow write: if false;
}
And of course access it this way:
const collectionPath = 'path/to/collection';
const colSnap = await db.doc('_counters/' + collectionPath).get();
const count = colSnap.get('count');
Read more: https://code.build/p/9DicAmrnRoK4uk62Hw1bEV/firestore-counters
GitHub: https://github.com/jdgamble555/adv-firestore-functions
Use Transaction to update the count inside the success listener of your database write.
FirebaseFirestore.getInstance().runTransaction(new Transaction.Function<Long>() {
    @Nullable
    @Override
    public Long apply(@NonNull Transaction transaction) throws FirebaseFirestoreException {
        DocumentSnapshot snapshot = transaction
                .get(pRefs.postRef(forumHelper.getPost_id()));
        long newCount;
        if (b) {
            newCount = snapshot.getLong(kMap.like_count) + 1;
        } else {
            newCount = snapshot.getLong(kMap.like_count) - 1;
        }
        transaction.update(pRefs.postRef(forumHelper.getPost_id()),
                kMap.like_count, newCount);
        return newCount;
    }
});
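A simplified sketch of the read-modify-write pattern the transaction above performs (plain JavaScript, in-memory document; real Firestore transactions retry automatically when a concurrent write invalidates the snapshot read, which a single-threaded sketch cannot show):

```javascript
// In-memory stand-in for the post document.
const post = { like_count: 5, version: 0 };

// Read a snapshot, compute the new count, and commit only if nothing
// changed in between; otherwise retry, as Firestore transactions do.
function runTransaction(doc, liked) {
  for (;;) {
    const snapshotVersion = doc.version;
    const newCount = liked ? doc.like_count + 1 : doc.like_count - 1;
    if (doc.version === snapshotVersion) { // commit check (trivially true single-threaded)
      doc.like_count = newCount;
      doc.version += 1;
      return newCount;
    }
  }
}

console.log(runTransaction(post, true));  // 6
console.log(runTransaction(post, false)); // 5
```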
Is it possible to implement reactivity for a sub-class with a transformed collection?
This is example code from jamgold on the Meteor forum; the sub-class subcollection is joined to the main-class collection_name. If something changes in the collection_name collection, Meteor is indeed reactive. However, when something changes in the subcollection, it is not reactively pushed through this publish/subscription.
Collection = new Meteor.Collection('collection_name');
if(Meteor.isServer)
{
Meteor.publish('collection', function(query,options) {
var self = this;
var handler = null;
query = query == undefined ? {} : query;
options = options == undefined ? {} : options;
//
handler = Collection.find(query,options).observeChanges({
added: function (id, doc) {
doc.object = Meteor.subcollection.findOne({_id: doc.objectId});
self.added('collection_name', id, doc);
},
changed: function (id, fields) {
self.changed('collection_name', id, fields);
},
removed: function (id) {
self.removed('collection_name', id);
}
});
self.ready();
self.onStop(function () {
if(handler) handler.stop();
});
});
}
if(Meteor.isClient)
{
Meteor.subscribe('collection');
}
To make it reactive for the SubCollection, you would need to observe its changes as well. Keep in mind that this becomes very complex fast, and my example only works if there is a 1-to-1 relationship between your Collection and SubCollection. You could implement something that works for a 1-to-many relationship, but you will have some logic issues to address (e.g. when a doc in SubCollection changes, does that invalidate all associated Collection docs that were already published with that SubCollection doc? If so, do you emit a removed and then an added to re-send them with their updated SubCollection doc, etc.).
Here is the full example.
const Collection = new Meteor.Collection('collection_name');
const SubCollection = new Meteor.Collection('sub_collection_name');
if (Meteor.isServer) {
Meteor.publish('collection', function(query,options) {
var self = this;
var handler = null;
query = query == undefined ? {} : query;
options = options == undefined ? {} : options;
// enable reactivity for Collection
handler = Collection.find(query, options).observeChanges({
added: function (id, doc) {
// find the associated object (using it's id) and add it to the doc
doc.object = SubCollection.findOne({_id: doc.objectId});
// now pass the original doc + the associated object down to client
self.added('collection_name', id, doc);
},
changed: function (id, fields) {
// doc.object is assumed to already exist on the doc...so just notify the subscriber
// of the changes in Collection
self.changed('collection_name', id, fields);
},
removed: function (id) {
// doc.object is assumed to already exist on the doc...so just notify the subscriber
// of the changes in Collection
self.removed('collection_name', id);
}
});
// enable reactivity for SubCollection
subhandler = SubCollection.find().observeChanges({
added: function (id, doc) {
// find the doc from Collection that has a reference to the new SubCollection doc
var parentCollectionDoc = Collection.findOne({objectId: id});
// only do something if one exists
if (parentCollectionDoc) {
// remove the previously published doc since the SubCollection doc changed (if it was previously published)
self.removed('collection_name', parentCollectionDoc._id);
// store the new SubCollection doc in Collection.object
parentCollectionDoc.object = doc;
// send down the Collection doc (with new SubCollection doc attached)
self.added('collection_name', parentCollectionDoc._id, parentCollectionDoc);
}
},
changed: function (id, fields) {
// get the full SubCollection doc (since we only get the fields that actually changed)
var doc = SubCollection.findOne({_id: id});
// find the doc from Collection that has a reference to the new SubCollection doc
var parentCollectionDoc = Collection.findOne({objectId: id});
// only do something if one exists
if (parentCollectionDoc) {
// remove the previously published doc since the SubCollection doc changed (if it was previously published)
self.removed('collection_name', parentCollectionDoc._id);
// store the new SubCollection doc in Collection.object
parentCollectionDoc.object = doc;
// send down the Collection doc (with new SubCollection doc attached)
self.added('collection_name', parentCollectionDoc._id, parentCollectionDoc);
}
},
removed: function (id) {
// find the doc from Collection that has a reference to the new SubCollection doc
var parentCollectionDoc = Collection.findOne({objectId: id});
// only do something if one exists
if (parentCollectionDoc) {
// remove the previously published doc since the SubCollection doc no longer exists (if it was previously published)
self.removed('collection_name', parentCollectionDoc._id);
}
}
});
self.ready();
self.onStop(function () {
if (handler) handler.stop();
if (subhandler) subhandler.stop();
});
});
}
With that said, if you are only trying to achieve reactive joins then you really should look into the Meteor Publish Composite package. It handles reactive joins very easily and will keep your publication up to date with the parent collection changes or any of the child collections change.
Here is what a publication would look like (based on your example) using publish composite.
const Collection = new Meteor.Collection('collection_name');
const SubCollection = new Meteor.Collection('sub_collection_name');
Meteor.publishComposite('collection', function(query, options) {
query = query == undefined ? {} : query;
options = options == undefined ? {} : options;
return {
find: function() {
return Collection.find(query,options);
},
children: [{
find: function(collectionDoc) {
return SubCollection.find({_id: collectionDoc.objectId});
}
}],
};
});
With this example, anytime Collection or associated SubCollection docs change they will be sent to the client.
The only gotcha with this approach is that it publishes the docs into their respective collections. So you would have to perform the join (SubDocument lookup) on the client. Assuming we have subscribed to the above publication and we wanted to get a SubCollection doc for a certain Collection doc on the client, then it would look like this.
// we are on the client now
var myDoc = Collection.findOne({ //..search selector ..// });
myDoc.object = SubCollection.findOne({_id: myDoc.objectId});
The composite publication ensures that the latest SubCollection doc is always on the client. The only problem with the above approach is that if your SubCollection doc changes and is published to the client, your data will be stale, because you have stored a static (and unreactive) version of the SubCollection doc in myDoc.object.
The way around this is to only perform your join when you need it and don't store the results. Or, another option is to use the Collection Helpers package and create a helper function that dynamically does the join for you.
// we are on the client now
Collection.helpers({
  object: function() {
    return SubCollection.findOne({_id: this.objectId});
  },
});
With this helper in place, anytime you need access to the joined SubCollection doc you would access it like this.
var myDoc = Collection.findOne({ //..search selector ..// });
console.dir(myDoc.object);
Under the covers, the collection helper does the SubCollection lookup for you.
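The helper-based join can be sketched in plain JavaScript: instead of storing a stale copy of the sub-document, define a getter that performs the lookup at access time, so it always reflects the current SubCollection state (an in-memory Map stands in for SubCollection):

```javascript
const SubCollection = new Map([['s1', { title: 'v1' }]]);

// Helper-style join: resolve the sub-document on every access
// instead of storing a snapshot of it on the parent doc.
const myDoc = {
  objectId: 's1',
  get object() {
    return SubCollection.get(this.objectId);
  },
};

console.log(myDoc.object.title); // 'v1'
SubCollection.set('s1', { title: 'v2' }); // sub-doc changes later
console.log(myDoc.object.title); // 'v2' - the join stays fresh
```

In Meteor, the helper additionally re-runs inside reactive computations whenever SubCollection changes, which a plain getter cannot do on its own.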
So long story short, take your pick (roll your own reactive join publication or use Publish Composite + Collection Helpers). My recommendation is to use the packages because it's a tried and true solution that works as advertised out of the box (fyi...I use this combination in several of my Meteor apps).