MongoDB - Update subdocuments using JavaScript - arrays

The documents in my apps collection each contain a subcollection of users. Now I need to update a single user per app, given a set of _ids for the apps collection, using JavaScript. I cannot use a regular call to update() for this, as the data being inserted will be encrypted using a public key stored within the app document. The data written into the user subdocument is therefore dependent on the app document it is contained in. Pseudo-code of what I need to do:
foreach app in apps:
app.users.$.encryptedData = encrypt(data, app.publicKey)
One way to do it would be to find all the apps and then use forEach() to update each one. However, this seems quite inefficient to me, as every app document would have to be fetched from the database twice: once to gather all of them, and then again to update each individual document. There has to be a more efficient way.

The short answer is that no, you cannot update a document in MongoDB with a value from that document.
Have a look at
https://stackoverflow.com/a/37280419/5293110
for ideas other than doing the iteration yourself.
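If you do end up iterating yourself, batching the writes at least avoids one round trip per app. A minimal sketch with PyMongo (Python rather than the shell; app_ids, user_id, data, and the encrypt() helper are hypothetical stand-ins for the question's own values):
from pymongo import MongoClient, UpdateOne

client = MongoClient()
apps = client.mydb.apps

ops = []
for app in apps.find({"_id": {"$in": app_ids}}, {"publicKey": 1}):
    # Encrypt per app: the ciphertext depends on that app's own key.
    ops.append(UpdateOne(
        {"_id": app["_id"], "users._id": user_id},
        {"$set": {"users.$.encryptedData": encrypt(data, app["publicKey"])}},
    ))

if ops:
    apps.bulk_write(ops)  # all updates travel in a single round trip
Each app is still matched twice (once by the find, once by its update), but the network overhead drops to two round trips in total.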

Related

Adding Prefix to Firestore Default Document Id

Regarding Firebase document Ids, I am trying to set a prefix before the default Firebase docId generation. For example, if the default document Id generated is 23492drf94fl, then I would write it as somePrefix:23492drf94fl.
Currently, I understand that there are two ways to do this: one is to generate your own custom UUID on the client side with a custom prefix, and the other is to rewrite the document Id after initially writing it in Firestore.
Is there any shorthand method or function I could use in React (Node.js) to just use the default Firestore docId generation w/ a specified prefix?
Thanks in advance.
There isn't a shorter/easier method or function that would allow you to set a prefix on the auto-generated id. You will need to do it manually, as you mentioned, and even done manually it's not a very good option, as it will impact more of your application and, of course, spend part of your quotas on an extra read and write every time a new doc is created.
However, if you really would like, or need, to have the document id with a prefix, I would recommend using a second field, where you would copy the value of the document id and add the prefix. This way, you won't affect the default field created - which could impact its uniform distribution, since it's automatically generated - and you would still be able to have a MATH:235E23 or SCI:2309F4 in your database that you can use as a regular field.
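A minimal sketch of that second-field approach with the Python Firestore client (the items collection and MATH prefix are placeholders; the same pattern applies in the Node.js SDK, since a document reference exposes its id before the write):
from google.cloud import firestore

db = firestore.Client()
doc_ref = db.collection("items").document()   # auto-generated id, no write yet
doc_ref.set({
    "prefixedId": "MATH:" + doc_ref.id,       # second field carrying the prefix
    # ...the rest of the document's fields...
})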
Besides that, in case you feel this could be a good improvement to the system, please, consider raising a Feature Request in Google's Issue Tracker, so they can check about the possibility of implementing it in the future.
Let me know if the information helped you!

TimelineItem id vs. sourceItemId

If two timeline items are inserted with the same sourceItemId, the Mirror API creates a second timeline item and does not automatically update the first. Is it correct that I must store the Mirror API timeline id after insert, map it to the sourceItemId on creation, and then use update or patch to modify the item later? How are others maintaining consistency between the mirror data and app data?
The sourceItemId is fully in your control, and there might be use cases where you want multiple timeline items with the same sourceItemId (for example, multiple comments referring to the same article), so the Mirror API doesn't check this parameter.
Mapping timeline ids to your sourceItemId in your datastore is probably the best and most efficient solution.
Alternatively, you can use the timeline.list method, which allows searching for all items with a specified sourceItemId, and then update the existing timeline item when found, or create a new one otherwise: https://developers.google.com/glass/v1/reference/timeline/list
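A hedged sketch of that lookup-then-update flow with the Python client library; service is assumed to be an already-authorized Mirror API client, and the sourceItemId value is illustrative:
result = service.timeline().list(sourceItemId="article-42").execute()
existing = result.get("items", [])
body = {"text": "Updated content", "sourceItemId": "article-42"}
if existing:
    # Update the first match instead of inserting a duplicate.
    service.timeline().update(id=existing[0]["id"], body=body).execute()
else:
    service.timeline().insert(body=body).execute()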
With the currently rather limited API quota you will want to avoid the second solution though.

Objects not saving using Objectify and GAE

I'm trying to save an object and verify that it is saved right after, and it doesn't seem to be working.
Here is my object:
import com.googlecode.objectify.annotation.Entity;
import com.googlecode.objectify.annotation.Id;
import java.util.ArrayList;

@Entity
public class PlayerGroup {
    @Id public String n;         // e.g. "sharks"
    public ArrayList<String> m;  // members, e.g. [39393, 23932932, 3223]
}
Here is the code for saving and then trying to load right after:
playerGroup = new PlayerGroup();
playerGroup.n = reqPlayerGroup.n;
playerGroup.m = reqPlayerGroup.m;
ofy().save().entity(playerGroup).now();
response.i = playerGroup;
PlayerGroup newOne = ofy().load().type(PlayerGroup.class).id(reqPlayerGroup.n).get();
But the "newOne" object is null, even though I just finished saving it. What am I doing wrong?
--Update--
If I try later (like minutes later), sometimes I do see the object, but not right after saving. Does this have to do with the High Replication datastore?
I had the same behavior some time ago and asked a question on the Google Groups objectify list. Here is the answer I got:
You are seeing the eventual consistency of the High-Replication Datastore. There has been a lot of discussion of this exact subject on the Objectify list in Google Groups, including several links to the Google documentation on the subject.
Basically, any kind of query which does not include an ancestor() may return results from a stale view of the datastore.
Jeff
I also got another good answer on how to deal with the behavior:
For deletes, query for keys and then batch-get the entities. Make sure
your gets are set to strong consistency (though I believe this is the
default). The batch-get should return null for the deleted entities.
When adding, it gets a little trickier. Index updates can take a few seconds. AFAIK, there are three ways out of this:
1. Use precomputed results (avoiding the query entirely). If your next view is the user's recently created entities, keep a list of those keys in the user entity, and update that list when a new entity is created. That list will always be fresh, no query required. Besides avoiding stale indexes, this also speeds up your app. The more result sets you can reliably manage, the more queries you can avoid.
2. Hide the latency by "enhancing" the query results with the recently added entities. Depending on the rate at which you're adding entities, either inject only the most recent key, or combine this with the solution in 1.
3. Hide the latency by taking the user through some unaffected views before landing on your query-based view. This strategy definitely has a smell over it. You need to make sure those extra steps are relevant to the user, or you'll give a poor experience.
Butterflies, Joakim
You can read it all here:
How come If I dont use async api after I'm deleting an object i still get it in a query that is being done right after the delete or not getting it right after I add one
Another good answer to a similar question: Objectify doesn't store synchronously, even with now

Nested structs on GAE datastore using Go

I'm trying to figure out how to get nested structs to work with the GAE datastore using Go. I know the datastore doesn't specifically support nested structs. I need a simple way of including user information with a post when it is sent out to a user as JSON.
One thing I thought of was to use two fields for the user: one for the ID/key referencing the user, and another for the user struct, which would be filled in when the post is loaded from the datastore. Extra fields seem silly, so I'm hoping there is a better solution for this.
There are two entity types or structs: POST and USER
Posts need to contain information about the user who made the post.
The structure for the JSON I'm going to output for users is as follows:
POST
  field1
  field2
  USER
    user_field1
    user_field2
Go's App Engine datastore API provides the PropertyLoadSaver interface for this sort of thing: https://developers.google.com/appengine/docs/go/datastore/reference#PropertyLoadSaver
You structure your struct however you want and then implement the Load and Save methods of that interface to populate it correctly. It means you write the serialization code yourself, but it gives you full freedom in how you structure your data. This will still allow you to filter over the fields and have a nested struct.
The Python runtime has the ndb library, which supports nested structures like this. Go does not, so I can think of two solutions:
1. In the POST kind, have a user field that is a key referencing a USER kind with the necessary fields. This requires two fetches and round trips.
2. Make the user field in the POST kind a blob: a string that is [de]serialized in Go. This means you can't search or filter on any of the user data, but it allows you to store everything in one entity.
Choose based on the needs of your app: if you need users to be first-class objects, use 1; if users aren't objects you need to work with (i.e., just data to display), you can use 2.

Updating Model Schema in Google App Engine?

Google proposes changing one entry at a time to the default values:
http://code.google.com/appengine/articles/update_schema.html
I have a model with a million rows, and doing this with a web browser will take me ages. Another option is to run it using task queues, but that will cost me a lot of CPU time.
Is there any easy way to do this?
Because the datastore is schema-less, you do literally have to add or remove properties on each instance of the Model. Using Task Queues should use the exact same amount of CPU as doing it any other way, so go with that.
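As a hedged sketch of the task-queue route, using the deferred library to walk the datastore in batches (MyModel and its priority property are hypothetical stand-ins for your own model):
from google.appengine.ext import db, deferred

class MyModel(db.Model):            # hypothetical model
    name = db.StringProperty()
    priority = db.StringProperty()  # the newly added property

BATCH_SIZE = 100

def migrate_batch(cursor=None):
    query = MyModel.all()
    if cursor:
        query.with_cursor(cursor)
    batch = query.fetch(BATCH_SIZE)
    if not batch:
        return                      # all entities migrated
    for entity in batch:
        if entity.priority is None:
            entity.priority = 'Priority 3'
    db.put(batch)                   # one batched write per task
    deferred.defer(migrate_batch, query.cursor())  # chain the next batch
Kicking it off is a single deferred.defer(migrate_batch) call; each task then schedules the next, so no single request runs long enough to hit the deadline.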
Before you go through all of that work, make sure that you really need to do it. As noted in the article you link to, it is not the case that all entities of a particular model need to have the same set of properties. Why not change your Model class to check for the existence of new or removed properties and update the entity whenever you happen to be writing to it anyhow?
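A minimal sketch of that lazy approach, using the same hypothetical MyModel: backfill the property whenever the entity passes through normal request handling anyway:
from google.appengine.ext import db

class MyModel(db.Model):            # hypothetical model, as above
    name = db.StringProperty()
    priority = db.StringProperty()  # the newly added property

def get_with_default(key):
    entity = db.get(key)
    if entity.priority is None:     # entity predates the new property
        entity.priority = 'Priority 3'
        entity.put()                # backfilled only when touched anyway
    return entity
The trade-off is that entities you never touch are never migrated, so any code reading old entities must tolerate the missing value.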
Instead of what the docs suggest, I would suggest using the low-level GAE datastore API to migrate.
The following code will migrate all the items of kind DbMyModel:
new_attribute will be added if it does not exist.
old_attribute will be deleted if it exists.
changed_attribute will be converted from boolean to string (True to 'Priority 1', False to 'Priority 3').
Please note that query.Run() returns an iterator yielding Entity objects, which behave like dicts:
from google.appengine.api.datastore import Query, Put

query = Query("DbMyModel")
for item in query.Run():
    if 'new_attribute' not in item:
        item['new_attribute'] = some_value
    if 'old_attribute' in item:
        del item['old_attribute']
    if item['changed_attribute'] is True:
        item['changed_attribute'] = 'Priority 1'
    elif item['changed_attribute'] is False:
        item['changed_attribute'] = 'Priority 3'
    # ...and so on...
    # Put the item back to the datastore:
    Put(item)
In case you need to select only some records, see the google.appengine.api.datastore module's source code for extensive documentation and examples of how to create a filtered query.
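For instance, the second argument to Query takes a dict of filters; this hedged example selects only the entities that still hold the old boolean value:
from google.appengine.api.datastore import Query, Put

query = Query("DbMyModel", {"changed_attribute =": True})
for item in query.Run():
    item['changed_attribute'] = 'Priority 1'
    Put(item)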
Using this approach, it is simpler to add/remove properties, and you avoid the issues GAE's suggested approach runs into once you have already updated your application model. For example, now-required fields might not exist yet, causing errors while migrating, and deleting fields does not work for static properties.
This doesn't help the OP, but it may help googlers with a tiny app: I did what Alex suggested, but simpler. Obviously this isn't appropriate for production apps.
deploy App Engine Console
write code right inside the web interpreter against your live datastore
like so:
from models import BlogPost

for item in BlogPost.all():
    item.attr = "defaultvalue"
    item.put()
