TimelineItem id vs. sourceItemId - google-mirror-api

If two timeline items are inserted with the same sourceItemId, the Mirror API creates a second timeline item and does not automatically update the first. Is it correct that I must store the Mirror API timeline id after insert, map it to the sourceItemId, and then use update or patch to modify the item later? How are others maintaining consistency between the mirror data and their app data?

The sourceItemId is fully under your control, and there are use cases where you want multiple timeline items with the same sourceItemId (for example, multiple comments referring to the same article), so the Mirror API doesn't check this parameter.
Mapping timeline ids to your sourceItemId in your datastore is probably the best and most efficient solution.
Alternatively, you can use the timeline.list method, which lets you search for all items with a given sourceItemId, then update the existing timeline item when one is found or insert a new one otherwise: https://developers.google.com/glass/v1/reference/timeline/list
With the currently rather limited API quota you will want to avoid the second solution though.
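If you do go that route, here is a rough sketch of the list-then-update-or-insert flow against the REST endpoints from the timeline reference above. It assumes a valid OAuth access token and a fetch-style HTTP client; adapt it to whichever Google client library you actually use.

const BASE = 'https://www.googleapis.com/mirror/v1/timeline';

async function upsertTimelineItem(accessToken, sourceItemId, body) {
  const headers = {
    'Authorization': 'Bearer ' + accessToken,
    'Content-Type': 'application/json'
  };

  // timeline.list can filter on sourceItemId (see the reference linked above)
  const listRes = await fetch(BASE + '?sourceItemId=' + encodeURIComponent(sourceItemId), { headers });
  const list = await listRes.json();
  const existing = list.items && list.items[0];

  // update the item we found, otherwise insert a new one
  const url = existing ? BASE + '/' + existing.id : BASE;
  const method = existing ? 'PUT' : 'POST';
  return fetch(url, {
    method,
    headers,
    body: JSON.stringify({ ...body, sourceItemId })
  });
}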

Related

React Query useInfiniteQuery invalidate individual items

How can I invalidate a single item when working with useInfiniteQuery?
Here is an example that demonstrates what I am trying to accomplish.
Let's say I have a list of members and each member has a follow button. When I press the follow button, there is a separate call to the server to mark that the given user is now following another user. After this, I have to invalidate the entire infinite query to reflect the new follow state of a single member. That means I might have a lot of users loaded in the infinite query, and I need to refetch all the items that were already loaded just to reflect the change for one item.
I know I can change the value with queryClient.setQueryData when the follow request succeeds, but without following that up with an invalidation and refetch of the member, I am basically going out of sync with the server and relying on local data.
Any possible ways to address this issue?
I think it is not currently possible because react-query has no normalized caching and no underlying schema. So one entry in a list (doesn't matter if it's infinite or not) does not correspond to a detail query in any way.
If you prefix the query-keys with the same string, you can utilize the partial query key matching to invalidate in one go:
['users', 'all']
['users', 1]
['users', 2]
queryClient.invalidateQueries(['users']) will invalidate all three queries.
But yes, it will refetch the whole list, and if you don't want to set the data manually with setQueryData, I don't see any other way currently.
If you return the whole detail data for one user from your mutation, I don't see why setting it with setQueryData would get you out-of-sync with the backend though. We are doing this a lot :)
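For what it's worth, a minimal sketch of that approach with react-query v3, assuming a hypothetical followUser() call that returns the full updated user and an infinite list query whose pages expose an items array:

import { useMutation, useQueryClient } from 'react-query';

function useFollowUser() {
  const queryClient = useQueryClient();

  // followUser(userId) is assumed to be your API call returning the updated user
  return useMutation(followUser, {
    onSuccess: (updatedUser) => {
      // write the server response straight into the detail cache entry
      queryClient.setQueryData(['users', updatedUser.id], updatedUser);

      // and patch that one user inside the cached infinite list pages,
      // so no list-wide invalidation/refetch is needed
      queryClient.setQueryData(['users', 'all'], (old) => {
        if (!old) return old;
        return {
          ...old,
          pages: old.pages.map((page) => ({
            ...page,
            items: page.items.map((u) => (u.id === updatedUser.id ? updatedUser : u)),
          })),
        };
      });
    },
  });
}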

MongoDB - Update subdocuments using javascript

The documents in my apps collection each contain a subcollection of users. Now I need to update a single user per app, given a set of _ids for the apps collection, using JavaScript. I cannot use a regular call to update() for this, as the data inserted will be encrypted using a public key stored within the app document. Therefore the data written into the user subdocument depends on the app document it is contained in. Pseudo-code of what I need to do:
foreach app in apps:
    app.users.$.encryptedData = encrypt(data, app.publicKey)
One way to do it would be to find all the apps and then use forEach() to update every single app. However, this seems quite inefficient to me, as every app document would have to be found twice in the database: once to gather all of them and again to update each document. There has to be a more efficient way.
The short answer is: no, you cannot update a document in MongoDB with a value from that same document.
Have a look at https://stackoverflow.com/a/37280419/5293110 for ideas other than doing the iteration yourself.
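If you do iterate yourself, here is a sketch using the Node.js mongodb driver, batching everything into one bulkWrite; appIds, targetUserId, data and encrypt() are placeholders for your own values and helper.

const apps = await db.collection('apps')
  .find({ _id: { $in: appIds } }, { projection: { publicKey: 1 } })
  .toArray();

const ops = apps.map((app) => ({
  updateOne: {
    filter: { _id: app._id, 'users._id': targetUserId },
    // the positional $ targets the matched user subdocument
    update: { $set: { 'users.$.encryptedData': encrypt(data, app.publicKey) } },
  },
}));

if (ops.length) {
  // each app document is read once; all updates go out in a single round trip
  await db.collection('apps').bulkWrite(ops);
}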

Backbone: Many small requests for model changes vs one collection sync?

What is the generally accepted good practice when some action changes multiple models in Backbone.js:
Trigger multiple PUT requests, one per model.save()
Issue a single request to sync the entire collection
If more than one model has changed, it should definitely be the second option.
Usually, good REST API practice suggests that you update, save, create, and delete single instances of persistent elements. In fact, you will find that a Backbone.Collection object does not implement these methods.
Also, if you use a standard URI scheme for your data access point, you will notice that a collection does not have a unique id.
GET    /models      //to get all items
GET    /models/:id  //to read an element
PUT    /models/:id  //to update an element
POST   /models      //to create an element
DELETE /models/:id  //to delete an element
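As a sketch, this is roughly how Backbone maps a single model's persistence onto that scheme (Item and the /models endpoint are hypothetical names here):

var Item = Backbone.Model.extend({ urlRoot: '/models' });
var Items = Backbone.Collection.extend({ model: Item, url: '/models' });

new Items().fetch();                 // GET    /models

new Item({ name: 'foo' }).save();    // POST   /models    (no id yet, so create)

var existing = new Item({ id: 7 });
existing.save({ name: 'bar' });      // PUT    /models/7  (id present, so update)
existing.destroy();                  // DELETE /models/7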
If you need to update every model of a collection on the server at once, maybe you need to ask why and there might be some re-thinking of the model structure. Maybe there should be a separate model holding that common information.
As suggested by Bart, you could implement a PATCH method to update only changed attributes of a particular element, thus saving bandwidth.
I like the first option, but I'd recommend you implement a PATCH behavior (only send updated attributes) to keep the requests as small as possible. This method gives you a more native "auto-save" feel like Google Docs. Of course, this all depends on your app and what you are doing.
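A minimal sketch of that PATCH behaviour, assuming Backbone 0.9.9 or later and a hypothetical Product model:

var Product = Backbone.Model.extend({ urlRoot: '/models' });
var product = new Product({ id: 42, title: 'Widget', price: 12.5 });

// {patch: true} makes save() issue HTTP PATCH with only the attributes
// passed in, instead of PUTting the whole model
product.save({ price: 9.99 }, {
  patch: true,                      // PATCH /models/42 with body {"price": 9.99}
  success: function (model) {
    console.log('saved', model.id);
  }
});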

Retrieving common data on different forms

Let's take the example of a WinForms application for making invoices. On the Invoice form we retrieve a list of products, so the user will be able to pick products for the current invoice. Let's also say that during this process the user realizes he needs to add a new product (or edit a current one) in the ProductList before he can place it on the invoice. So he opens a ProductForm, where all the products are retrieved (again).
It could also happen in the opposite order: the user first edits products and then, without closing the Products form, opens a new Invoice. The point is that the data is loaded twice, and effectively it is the same data.
What is the proper way to handle this scenario, so we can tell one form that the data is already loaded and have it retrieve that data from memory? And when all consumers (forms) of the data are closed, should the data also be released from memory? Or am I going in the wrong direction, and is there a better way?
Thanks,
Goran
Definitely go with the data loaded "twice", or you will introduce much worse problems.
Sharing the data means sharing the ObjectContext. Even in a WinForms application this is considered a bad approach. Check this article (it is about NHibernate, but the description is valid for EF as well).
The problem is that the ObjectContext is a unit of work. If you share the context between two windows, you can easily get into a situation where you modify data in the first window (without saving it!) and then continue in the second window, where you push the save button, but it will save the data from both windows! You can't selectively save data from only one window when you share the context.
If the controls that are using the data are all child controls of a shared parent control, then you could just pass around the data context, so that they all share the same data context.
However, the general use case with databases, which is what backs EF in most cases, is to read the data in each time that it is needed.
A solution to this, if as you say you already have the item in use on one form, is to simply pass a reference to that item into your new form.
So in the case where you have an invoice that has a product list and you want to add to that product list, you could pass the product list from the invoice to the product form being opened.
There are some issues with this:
Another user changing the data source while someone has it open (i.e. concurrency)
Handling save/don't-save scenarios, where the user may have made a change in one area that they don't actually want added to the data
However, unless it is a true performance issue, I would just load the data every time. You can simplify this a lot by using the repository pattern, so you can call a single method to get a list of products, an invoice, or whatever piece of data you need.

Best way to implement supplemental analytics

I want to be able to allow my writers to see how much traffic their articles are getting. I can do this in Google Analytics, but I can't figure out how to share this data with them without giving them access to all the data, so I was thinking of adding another analytics service that would insert a unique code for each author on their articles. I already have the GA code and the Quantcast code, so I don't want to bog down my site much more. Should I use a pixel tracker or a JavaScript tracker?
UPDATE: Here is the code I use in analytics to track my authors.
try {
  var pageTracker = _gat._getTracker("UA-xxxxxxx-x");
  pageTracker._trackPageview();
  <?php if ( is_singular()) { ?>
  pageTracker._trackEvent('Authors', 'viewed', '<?php the_author_meta('ID'); ?>');
  <?php } ?>
} catch(err) {}
You could use a custom field to track the writers by a unique id that they probably have. Then you could use GA's API to pull data where the custom field value equals that unique id, and display it in their profile or wherever you want them to see it.
One option would be to use a server-local Redis instance and use the PHP Redis library to increment a local counter using the author ID and article IDs.
For example, if in Redis you use a sorted set with the author ID as the key and the article ID (or however you identify an article) as a member that you increment with zincrby on each page load, you'll have the data readily available and under your control. You could then have a PHP page that pulls the author's data from Redis and displays it in whatever format you need: a table showing the traffic for each of their articles, or pretty graphs, for example. You could extend this to per-day traffic by using a key structure of "AUTHORID:YYYY-MM-DD" instead of just the author ID.
The hit penalty for tracking this is much lower than reaching out to an external site; it should be on the order of single-digit milliseconds. Even if your Redis instance were elsewhere, the response times should still be lower than with external tracking. I know you are using GA, but this is a simple-to-implement method you could consider.
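A small sketch of that counter, swapped to Node's redis client purely for illustration (the answer above suggests the PHP Redis library); the key layout and function names are made up:

import { createClient } from 'redis';

const redis = createClient();
await redis.connect();

// on every article view: bump that article's score in the author's sorted set
async function trackView(authorId, articleId) {
  const day = new Date().toISOString().slice(0, 10);      // e.g. "2013-07-04"
  await redis.zIncrBy(authorId + ':' + day, 1, String(articleId));
}

// on the author's stats page: read back per-article counts for one day
async function viewsFor(authorId, day) {
  return redis.zRangeWithScores(authorId + ':' + day, 0, -1);
}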
This depends slightly on how many authors you have and your level of involvement. The main approaches I would use are:
Create a separate view per author and filter in his or her traffic
Use a Google Docs plugin to pull down the author's data and share it
Use the API to pull down the relevant information
Happy to give more specifics if you can describe in more detail what you want.
