My API calls are automatically being cached in the default cache provided by $cacheFactory ({cache: true}). If I add another record to the DB I want to be able to add the relevant information (which is returned from the post API call to add the record) to the cached information I have rather than deleting the cache and having it re-request all the data.
var $httpDefaultCache = $cacheFactory.get('$http');
var data = $httpDefaultCache.get(key);
The key is the path of my API request. Data is just an array. Element 1 is my stored data, which is a hash containing arrays. I planned to simply add my new record to it. However, when I retrieve it ($httpDefaultCache.get(key)[1]), I don't get a hash; I get a string. I could just take said string and transform it into a hash, or simply add to the string, but I think I'm missing a key component of retrieving data from the cache. Thoughts?
Here's how I went forward, in case anyone runs into a similar problem.
Let's say you used $http to make a GET request to an API with cache set to true.
Set the return value of this request within your controller to whatever; we'll use $scope.politicians.
Since we used the default cache, we can retrieve the cached entry with $cacheFactory.get('$http').get(cachedUrl).
Remove the cached entry by doing $cacheFactory.get('$http').remove(urlOfGetRequest).
Make any changes you want to the $scope value that was initially set to the return value from the API call, and then reset the cache to that scope value when finished.
$scope.politicians[newKey] = newValue;
Set the cache again, but use your scope as the value.
$cacheFactory.get('$http').put(urlOfGetRequest, $scope.politicians);
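Putting those steps together, here is a minimal sketch of the whole flow (the URL, response shape, and newRecord are illustrative, not from the original code):

var url = '/api/politicians';
var httpCache = $cacheFactory.get('$http');

// Initial GET, stored in $http's default cache.
$http.get(url, { cache: true }).then(function (resp) {
  $scope.politicians = resp.data;
});

// Later, after a POST returns the newly created record (newRecord):
$scope.politicians[newRecord.id] = newRecord;
httpCache.remove(url);
httpCache.put(url, $scope.politicians);
// $http resolves a non-array cached value as the response body (status 200),
// so subsequent cached GETs see the updated object.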
Using a non-default cache makes this process rather easier.
I swear there must be a more direct way of doing this. But I could not find it.
I'm learning how to use Apollo Client for React and how to manage local state using the cache. From the docs, it's as simple as writing to the cache using cache.writeData and reading from it with a simple query. Their example is:
const GET_VISIBILITY_FILTER = gql`
  {
    visibilityFilter @client
  }
`;
I then wrap my JSX in a Query component and can read the value fine (in my case, loggedIn):
return <Query query={GET_LOGGED_IN}>
  {({ loading, error, data }) => {
    const { loggedIn } = data;
    // ...render based on loggedIn
  }}
</Query>;
I'm curious, though, why I don't need to write a resolver for this to work. Is it because, with scalar values, if a value exists at the root of an object (here, at the top level of the cache), Apollo/GraphQL automatically just grabs that value and sends it to you without a resolver?
What are the limits of this? That is, could you grab arrays at the root level without writing a resolver? Objects? I'm assuming not, as these don't seem to be scalars. But if the data is hard-coded, that is, doesn't require any DB lookup, is the answer still no?
From the docs:
[The @client directive] tells Apollo Client to fetch the field data locally (either from the cache or using a local resolver), instead of sending it to our GraphQL server.
If the directive is present on a field, Apollo will attempt to resolve the field using the provided resolver, or fall back to fetching directly from the cache if one doesn't exist. You can initialize your cache with pretty much any sort of data at the root level (taking care to include __typename fields for objects) and you should be able to fetch it without having to also provide a resolver for it. On the other hand, providing a resolver can provide you with more granular control over what's actually fetched from the cache -- i.e. you could initialize the cache with an array of items, but use a resolver to provide a way to filter or sort them.
There's an important nuance here: fetching without a resolver only works when there's data in the cache to fetch. That's why it's important to provide initial state for these fields when building your client. If you have a more deeply nested @client field (for example, maybe you're including additional information alongside data fetched from the server), you also technically don't have to write a resolver. But we typically do write them, because there is no existing data in the cache for those nested fields.
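For example, seeding the cache at the root level might look like this (a minimal sketch against the apollo-client 2.x cache.writeData API; the field names are illustrative):

import { InMemoryCache } from 'apollo-cache-inmemory';

const cache = new InMemoryCache();

// Seed root-level fields; @client queries for these need no resolver.
cache.writeData({
  data: {
    loggedIn: false,
    visibilityFilter: 'SHOW_ALL',
    // Objects (and arrays of them) work too, as long as they carry a __typename.
    draft: { __typename: 'Draft', id: 'draft-1', text: '' },
  },
});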
In addition to a great (as usual) answer from Daniel, I would like to add a few words.
You can read/write objects to the cache and manipulate their properties directly.
Using resolvers and mutations for local data can help with readability, unified data access/change, overall manageability, and future changes (moving features/settings to the server).
You can find a more practical/advanced example of local state management in the apollo-universal-starter-kit project.
So I have an endpoint which returns the contents of many entities in parallel.
I have a shared service which calls this endpoint and puts them into a shared $cacheFactory.
When the GET /base_entity/<id>/all route is hit first, a subsequent GET /entity/<id> should return the cached copy.
What's best practice for telling the GET /entity/<id> service not to perform an HTTP GET until GET /base_entity/<id>/all has had a chance to complete?
The $broadcast/$emit approach seems odd. I suppose I could use that shared $cacheFactory with cache.put('START /all for ID:' + id) and cache.put('FIN /all for ID:' + id), but I'm not sure if that's a strange way of solving the problem.
Ended up creating a new view and controller. The controller's constructor calls GET /base_entity/<id>/all and caches the result, then does a $state.go, passing along the current $stateParams. Concurrently, the view shows a shiny graphic loading directive.
Now when the /entity/<id> state is transitioned to, the service first checks the ALL cache and updates its own cache accordingly; it then returns the cached entity in a $q promise, or hits $http otherwise.
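A minimal sketch of that service-side check (the module, cache, and URL names are illustrative):

app.factory('entityService', function ($http, $q, $cacheFactory) {
  // the same shared cache that the /base_entity/<id>/all handler writes into
  var cache = $cacheFactory.get('entities') || $cacheFactory('entities');

  return {
    get: function (id) {
      var cached = cache.get('/entity/' + id);
      if (cached) {
        return $q.when(cached); // resolve immediately from the cache
      }
      return $http.get('/entity/' + id).then(function (resp) {
        cache.put('/entity/' + id, resp.data);
        return resp.data;
      });
    }
  };
});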
I'm trying to find a generic solution to commit/rollback a model alongside ng-resource.
When to commit/save the model:
Successful HTTP write methods (PUT, POST, PATCH).
When to rollback/reset the model:
Failing HTTP write methods.
The user deciding to cancel their local changes (before a PUT).
I've found pieces of this solution scattered, but no comprehensive strategy.
My initial strategy was to make use of the default ng-resource cache. The controller starts by GETting an item. If a subsequent PUT/POST/PATCH failed (caught either in the controller's promise or in the $resource.<write_method>'s interceptor.responseError), re-run the GET that first got the object and replace my model with that, as on initial load. However, this doesn't help if there's a successful PUT followed by a failed one, because I'd be replacing my model with a stale object. The same problem occurs when a user tries to cancel their form changes before submitting: the last successful GET may be stale.
I've got a model with ~10 properties, and multiple <form>s (for HTML/design's sake) that all interact with this one object, which gets PUT upon any of the forms being submitted:
{
  'location_id': 1234,
  'address': {
    'line1': '123 Fake St',
    ...
  },
  'phone': '707-123-4567',
  ...
}
Models already have nice rollback features, but getting ng-resource to use them seems tricky, and I really want to reset the entire object/model. Using a custom cache alongside my $resource instance seems like a decent way to go, but it's equally tricky to give user interaction a way to roll back. Another idea was to store a separate object when loading into scope: $scope.location = respData; $scope._location = angular.copy(respData); that I could load from when wanting to roll back, by way of $scope.location = $scope._location;. But I really don't want to clutter my controllers site-wide with these shadow objects/functions (DRY).
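One way to keep that shadow-copy idea DRY is a small helper factory. This is only a sketch of the snapshot/restore idea, not an ng-resource integration; the names are illustrative:

app.factory('rollback', function () {
  return function (model) {
    var snapshot = angular.copy(model);
    return {
      commit: function () { snapshot = angular.copy(model); }, // after a successful PUT
      reset: function () { angular.copy(snapshot, model); }    // restore the model in place
    };
  };
});

// In a controller:
// $scope.location = respData;
// var tx = rollback($scope.location);
// ...on a successful PUT: tx.commit();
// ...on a failed PUT or user cancel: tx.reset();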
Suppose that one needs to send the same collection of 10,000 documents down to every client for a Meteor app.
At a high level, I'm aware that the server does some bookkeeping for every client subscription - namely, it tracks the state of the subscription so that it can send the appropriate changes for the client. However, this is horribly inefficient if each client has the same large data set where each document has many fields.
It seems that there used to be a way to send a "static" publish down the wire, where the initial query was published and never changed again. This seems like a much more efficient way to do this.
Is there a correct way to do this in the current version of Meteor (0.6.5.1)?
EDIT: As a clarification, this question isn't about client-side reactivity. It's about reducing the overhead of server-side tracking of client collections.
A related question: is there a way to tell Meteor a collection is static (will never change)?
Update: It turns out that doing this in Meteor 0.7 or earlier will incur some serious performance issues. See https://stackoverflow.com/a/21835534/586086 for how we got around this.
http://docs.meteor.com/#find:
Statics.find({}, { reactive: false });
Edited to reflect comment:
Do you have some information that the reactive: false param is only client-side? You may be right; it's a reasonable, maybe likely, interpretation. I don't have time to check, but I thought this might also be a server-side directive, telling it not to poll the Mongo result set. Willing to learn...
You say
However, this is horribly inefficient if each client has the same large data set where each document has many fields.
Now we are possibly discussing the efficiency of the server code and its polling of the Mongo source for updates that happen outside of the server. Please make that another question, which is far above my ability to answer! I doubt that is happening once per connected client; more likely it is a sync between app server info and the Mongo server.
The client requests you issue, including sorting, should all be labelled non-reactive. That is separate from whether you can issue them with sorting instructions, or whether they can be re-triggered through other reactivity, which need not include a trip to the server. Once each document reaches the client side, it is cached. You can still do whatever minimongo does; no loss in ability. There is no client asking the server if there are updates, so you don't need to shut that off. The server pushes only when needed.
I think using the manual publish (this.added) still works to get rid of the overhead created by the server observing data for changes. The observers either need to be added manually or are created by returning a Collection cursor.
If the data set is big you might also be concerned about the overhead of a merge box holding a copy of the data for each client. To get rid of that you could copy the collection locally and stop the subscription.
var staticData = new Meteor.Collection("staticData");

if (Meteor.isServer) {
  var dataToPublish = staticData.find().fetch(); // query mongo when the server starts

  Meteor.publish("publishOnce", function () {
    var self = this;
    dataToPublish.forEach(function (doc) {
      // sends data to the client and will not continue to observe the collection
      self.added("staticData", doc._id, doc);
    });
    self.ready(); // mark the subscription ready so subHandle.ready() becomes true on the client
  });
}

if (Meteor.isClient) {
  // fills the client 'staticData' collection but also leaves a merge-box copy of the data on the server
  var subHandle = Meteor.subscribe("publishOnce");
  var staticDataLocal = new Meteor.Collection(null); // to store data after the subscription stops

  Deps.autorun(function () {
    if (subHandle.ready()) {
      staticData.find({}).forEach(function (doc) {
        staticDataLocal.insert(doc); // move all data to the local copy
      });
      // removes the 'publishOnce' data from the merge box on the server,
      // but leaves the 'staticData' collection empty on the client
      subHandle.stop();
    }
  });
}
Update: I added comments to the code to make my approach clearer. The Meteor docs for stop() on the subscription handle say "This will typically result in the server directing the client to remove the subscription's data from the client's cache", so maybe there is a way to stop the subscription (remove it from the merge box) that leaves the data on the client. That would be ideal and would avoid the copying overhead on the client.
Anyway, the original approach with set and flush would also have left the data in the merge box, so maybe that is all right.
As you've already pointed out yourself on Google Groups, you should use a Meteor Method for sending static data to the client.
And there is this neat package for working with Methods without async headaches.
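A minimal sketch of the Method approach (the method name is illustrative; it reuses the staticData and staticDataLocal collections from the snippet above):

if (Meteor.isServer) {
  Meteor.methods({
    getStaticData: function () {
      // one-shot fetch: no observers, no per-client merge-box bookkeeping
      return staticData.find().fetch();
    }
  });
}

if (Meteor.isClient) {
  Meteor.call('getStaticData', function (err, docs) {
    if (!err) {
      docs.forEach(function (doc) {
        staticDataLocal.insert(doc); // local, unsynchronized collection
      });
    }
  });
}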
Also, you could script out the data to a JS file, as either an array or an object, minify it, then link to it as a distinct resource. See http://developer.yahoo.com/performance/rules.html for "Add an Expires or a Cache-Control Header". You probably don't want Meteor to bundle it for you.
This would be the least traffic, and could make subsequent loads of your site much swifter.
As a response to a Meteor call, return an array of documents (use fetch()): no reactivity or logging. On the client, create a dep when you do a query, or retrieve the key from the session, and it is reactive on the client.
Minimongo just does JS array/object manipulation, with a syntax-interpreting DSL between you and your data.
The new fast-render package makes a one-time publish to a client collection possible.
var staticData = new Meteor.Collection('staticData');

if (Meteor.isServer) {
  FastRender.onAllRoutes(function () {
    this.find(staticData, {});
  });
}
What technique shall one use to implement batch insert/update for Backbone.sync?
I guess it depends on your usage scenarios, and how much you want to change the calling code. I think you have two options:
Option 1: No change to client (calling) code
Oddly enough the annotated source for Backbone.sync gives 'batching' as a possible reason for overriding the sync method:
Use setTimeout to batch rapid-fire updates into a single request.
Instead of actually saving on sync, add the request to a queue, and only batch-save every so often. _.throttle or _.delay might help you here.
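A sketch of that queueing idea, using _.debounce in place of a bare setTimeout (the /batch endpoint and payload shape are assumptions, not part of Backbone):

Backbone.originalSync = Backbone.sync;

var queue = [];
var flush = _.debounce(function () {
  var batch = queue;
  queue = [];
  Backbone.ajax({
    url: '/batch', // hypothetical bulk endpoint
    type: 'POST',
    contentType: 'application/json',
    data: JSON.stringify(_.map(batch, function (item) {
      return item.model.toJSON();
    }))
  });
}, 100);

Backbone.sync = function (method, model, options) {
  if (method === 'update') {
    queue.push({ model: model, options: options });
    flush(); // debounced: rapid-fire updates collapse into one request
    return;
  }
  return Backbone.originalSync(method, model, options);
};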
Option 2: Change client code
Alternatively, instead of calling save on your models, you could add some sort of save method to collections. You'd have to track which models were actually modified and hence in need of update, since as far as I can tell, Backbone only knows whether they're new or not (but I could be wrong about that).
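A sketch of what that collection-level save could look like (the 'change'-event dirty tracking and the /batch URL are assumptions):

var BatchCollection = Backbone.Collection.extend({
  initialize: function () {
    this._dirty = {};
    // remember which models were modified since the last batch save
    this.on('change', function (model) {
      this._dirty[model.cid] = model;
    }, this);
  },

  saveChanged: function () {
    var models = _.values(this._dirty);
    this._dirty = {};
    return Backbone.ajax({
      url: _.result(this, 'url') + '/batch', // hypothetical bulk endpoint
      type: 'PUT',
      contentType: 'application/json',
      data: JSON.stringify(_.invoke(models, 'toJSON'))
    });
  }
});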
Here's how I did it:
Backbone.originalSync = Backbone.sync;

Backbone.sync = function (method, model, options) {
  //
  // code to extend sync
  //

  // call the original sync, returning its result (the jqXHR/promise)
  return Backbone.originalSync(method, model, options);
};
Works fine for me, and I use it to control every AJAX request coming out of any model or collection.