I'm building a client/server app where I want to sync data. I'm thinking about including the largest key from the local client database in the query so the server can fetch all entities added after that entity (with key > largest_local_key).
Can I be sure that Google App Engine always increases the IDs of new entities?
Is that a good way to implement synchronization?
No, IDs do not increase monotonically.
Consider synchronizing based on a create/update timestamp.
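A minimal sketch of that approach with the Python db API (the Item kind, its fields, and the 500-entity page size are assumptions for illustration): the client sends the newest updated value it has seen instead of its largest key.

from google.appengine.ext import db

class Item(db.Model):                                # hypothetical synced kind
    payload = db.TextProperty()
    updated = db.DateTimeProperty(auto_now=True)     # refreshed on every put()

def changes_since(last_sync):
    # Everything created or updated after the client's last sync timestamp.
    return Item.all().filter('updated >', last_sync).order('updated').fetch(500)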
I have an application that uses Cloud Datastore via App Engine to save data.
I need to refresh the clients when an object is put in the database. To do this, after the object is put in the database, the server sends a sync message to the clients. The clients read the sync message and query the server, and the server runs a query to return the new results.
The problem is that when the query runs, the object that was just put doesn't appear in the results. Reading the documentation, I suppose the reason is that the put hasn't reached Milestone B yet (see https://cloud.google.com/appengine/articles/transaction_isolation), because an object put in a later call does appear.
How can I know when a put reaches Milestone B? If that isn't possible, how can I implement this logic (refresh clients after a put)?
You can ensure up-to-date query results by using an ancestor query, or, if you know the key of the specific entity you need to retrieve, you can fetch it by key rather than using a query.
This page discusses the trade-offs of using ancestor queries.
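For example, a quick sketch with the Python db API (the Account parent and Message child kinds here are assumptions, not taken from the question):

from google.appengine.ext import db

class Message(db.Model):                    # hypothetical child kind
    body = db.TextProperty()

parent_key = db.Key.from_path('Account', 'some-account-id')
key = Message(parent=parent_key, body='hello').put()   # put() returns the key

# Ancestor queries are strongly consistent, so the fresh put is visible.
latest = Message.all().ancestor(parent_key).fetch(100)

# A lookup by key is strongly consistent as well.
entity = db.get(key)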
The data do not appear in the result of your query because the indexes have not been updated yet.
There is some latency before the indexes are updated, and unfortunately there is no way to know when this will happen.
The only way to handle this case is to fetch the entity by its key; the key is the only index that is guaranteed to be updated as soon as the entity is stored.
https://cloud.google.com/appengine/docs/java/datastore/entities
We're using Cloudant as the remote database for our app. The database contains documents for each user of the app. When the app launches, we need to query the database for all the documents belonging to a user. What we found is that the CDTDatastore API only allows pulling the entire database and storing it inside the app, then performing the query on the local copy. The initial pull to the local datastore takes about 10 seconds, and I imagine it will take longer as more users are added.
Is there a way I can save only part of the remote database to the local datastore? Or, are we using the wrong service for our app?
You can use a server-side replication filter function; you'll need to add information about your filter to the pull replicator. However, replication will take a performance hit when using a filter function.
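For instance, a sketch of such a filter pushed as a design document over the Cloudant HTTP API (the database name, design document name, and owner field are assumptions):

import requests

ACCOUNT = 'https://myaccount.cloudant.com'   # hypothetical account
design_doc = {
    '_id': '_design/sync',
    'filters': {
        # Replicate only the documents that belong to the requesting user.
        'by_user': "function(doc, req) { return doc.owner === req.query.owner; }"
    }
}
requests.put(ACCOUNT + '/userdocs/_design/sync',
             json=design_doc, auth=('api_key', 'api_password'))

The pull replication in the app would then reference the filter as sync/by_user and pass the owner as a filter parameter; check the CDTDatastore replication docs for the exact property names.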
That being said, a common pattern is to use one database per user; however, this has other trade-offs and is something you should read up on. There is some information on the one-database-per-user pattern here.
I am building an Azure chargeback solution, and for that I am pulling Azure usage data from the Azure Billing REST APIs for multiple subscriptions and different dates. I need to store this in a custom MS SQL database as per the customer's requirements. I get various usage records from Azure.
Problem: From these usage records, I am not able to find any combination of columns in the data I receive that will give me a unique key to identify a usage record for a particular subscription and a specific date. The only column I see differing is Quantity, but even that can be duplicated. E.g., if there are 2 VMs of type A1 with no data or applications on them, in the same cloud service, they will have the exact same quantity of usage. I do not get the exact name of the VM or any other resource via the Usage APIs.
One custom solution (ineffective): I can append a counter or unique ID to the usage records, but if I fetch the data next time, the order may shuffle or new data may be introduced, thereby affecting the logic for uniqueness. Any logic I build to check whether any data is missing in the DB will be buggy if there is any alteration in the order the usage records are returned (for a specific subscription for a specific date).
I am sure that Microsoft stores this data in some database. I can’t find the unique id to identify a usage record from many records returned by the Billing API. Maybe I am missing something here.
I would appreciate any help or pointers on this.
When you call the Usage API, set the ShowDetails parameter to true: &showDetails=true
MSDN Doc
This will populate the instance data in the returned JSON with the unique URI for the resource, which includes the resource name. For example:
Website:
"instanceData": "{\"Microsoft.Resources\":{\"resourceUri\":\"/subscriptions/xxx-xxxx/resourceGroups/mygoup/providers/Microsoft.Web/sites/WebsiteName\",\"...
Virtual Machine:
"instanceData": "{\"Microsoft.Resources\":{\"resourceUri\":\"/subscriptions/xxx-xxxx/resourceGroups/TESTINGBillGroup/providers/Microsoft.Compute/virtualMachines/TestingBillVM\",\...
If ShowDetails is false, all your resources will be aggregated on the server side based on the resource type; for example, all your websites will show as one entry.
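A hedged sketch of such a call in Python (the endpoint, api-version, and datetime format shown are the Usage Aggregates API as I know it; verify them against the MSDN doc above):

import requests

access_token = '<Azure AD bearer token for the subscription tenant>'
url = ('https://management.azure.com/subscriptions/xxx-xxxx/providers/'
       'Microsoft.Commerce/UsageAggregates')
params = {
    'api-version': '2015-06-01-preview',
    'reportedStartTime': '2017-01-01 00:00:00',
    'reportedEndTime': '2017-01-02 00:00:00',
    'aggregationGranularity': 'Daily',
    'showDetails': 'true',               # adds per-resource instanceData
}
resp = requests.get(url, params=params,
                    headers={'Authorization': 'Bearer ' + access_token})
usage_records = resp.json()['value']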
The resource URI, date range, and meter ID will form a unique key, as far as I know.
NOTE: If you are using the legacy API, your VMs will be aggregated under the cloud service hosting them.
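Under that assumption, a minimal way to build such a composite key from each returned record (the field names instanceData, usageStartTime, usageEndTime, and meterId are what I see in the Usage Aggregates payload; adjust to what you actually receive):

import json, hashlib

def usage_record_key(record):
    props = record['properties']
    instance = json.loads(props['instanceData'])      # arrives as a JSON string
    resource_uri = instance['Microsoft.Resources']['resourceUri']
    raw = '|'.join([resource_uri,
                    props['usageStartTime'],
                    props['usageEndTime'],
                    props['meterId']])
    # Hash to get a fixed-width value usable as a SQL primary key.
    return hashlib.sha256(raw.encode('utf-8')).hexdigest()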
I understand that the Google App Engine Datastore changed its default policy on how IDs are auto-generated.
We have application code that expects all IDs to be less than the maximum value of an Integer. When trying to create sample data using the dashboard ("Datastore Viewer"), there is a way to create entities manually. However, when I do this, there appears to be no place to manually set the ID, and the auto-generated ID is larger than the maximum Integer value.
Setting <auto-id-policy>legacy</auto-id-policy> in appengine-web.xml and re-deploying did not seem to help.
I understand when you create Entities programmatically, you can specify your own ID number. Is there any way to do this from the Dashboard, or at least use "legacy" auto-id generation?
No, the Datastore Viewer only allows auto-generated IDs. :(
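Programmatically it is possible, though; a minimal sketch with the Python ndb client (the kind name and ID are made up, and the same thing can be done in Java by building the Key with a chosen numeric ID):

from google.appengine.ext import ndb

class SampleEntity(ndb.Model):              # hypothetical kind
    name = ndb.StringProperty()

# Passing id= pins the numeric ID instead of letting Datastore allocate one.
SampleEntity(id=42, name='fixture row').put()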
I have an Account entity that has a facebook id.
Sometimes, the client might send all facebook ids (the client's facebook friends) to the server.
We want to select all Accounts IN the facebook ids the client provided.
Looping and calling get on each facebook id seems rather slow, considering people might have 1000+ friends. Furthermore, GAE limits IN filters to 30 values per query.
Has anyone had a similar situation? How did you handle it?
Thanks!
You can set up a model that uses the facebook ID as its key name, which allows you to use Model.get_by_key_name(key_names=fb_ids) to fetch all the models with keys in fb_ids at once.
e.g.
from google.appengine.ext import db

class FBModel(db.Model):
    account = db.ReferenceProperty(reference_class=Account)
When creating the model:
model = FBModel(key_name=fb_id)
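The lookup then becomes one batch call, followed by a batch get for the referenced Accounts (a sketch building on the model above):

# One round trip for all friends; entries are None where no FBModel exists.
fb_models = FBModel.get_by_key_name(fb_ids)

# Grab the Account keys without dereferencing each ReferenceProperty,
# then fetch every Account in a single batch get.
account_keys = [FBModel.account.get_value_for_datastore(m)
                for m in fb_models if m is not None]
accounts = db.get(account_keys)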