I'm creating a class with an ExtJS record in its config, and I keep a reference to this record. When the class instance is created I inspect the record and it's completely fine.
Later, in one of the object's methods, I try to use this record. Quite unexpectedly, the record has lost its store reference. Apart from that, the record looks fine, with the right data.
I worked around the situation by keeping a reference to the store during the initialization phase:
this.store = this.recordLoosingItsStore.store;
And later I can retrieve my record like this:
var recordOK = this.store.getById( this.recordLoosingItsStore.getId() );
Can anyone share an opinion on what's happening and where my record.store disappears? How can I debug this easily? I tried listening for the store's datachanged event, but this didn't produce any positive results.
If you check the documentation, it states that a record instance may only belong to one store instance at a time. Is it possible that you are assigning it to a different store at some point in your code and then destroying that other store? If that is the case, using the .copy method when assigning the record to a different store should solve the issue.
More info: http://docs.sencha.com/ext-js/3-4/#!/api/Ext.data.Record
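To illustrate that suggestion, here is a minimal sketch (the store and variable names are hypothetical, using the Ext JS 3.4 API linked above):
var original = firstStore.getById(recordId); // record owned by firstStore
// record.copy() returns a new Record with the same data, so only the copy
// joins the second store and the original keeps its store reference
secondStore.add(original.copy());
// destroying the second store later no longer touches the original record
secondStore.destroy();
console.log(original.store === firstStore); // still true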
It's hard to say where it's losing it without looking at your code. Try to post some code here. If it's too big, try to extract the places where you first load the record, and then all the places where you do something with the store/record.
How can I invalidate a single item when working with useInfiniteQuery?
Here is an example that demonstrates what I am trying to accomplish.
Let's say I have a list of members, and each member has a follow button. When I press the follow button, there is a separate call to the server to mark that the given user is following another user. After this, I have to invalidate the entire infinite query to reflect the follow state of a single member. That means I might have a lot of users loaded in the infinite query, and I need to re-fetch all the items that were already loaded just to reflect the change for one item.
I know I can change the value with queryClient.setQueryData when the follow request returns success, but without following this up with an invalidation and refetch of the member, I am basically going out of sync with the server and relying on local data.
Any possible ways to address this issue?
I think it is not currently possible because react-query has no normalized caching and no underlying schema. So one entry in a list (doesn't matter if it's infinite or not) does not correspond to a detail query in any way.
If you prefix the query keys with the same string, you can utilize partial query key matching to invalidate them in one go:
['users', 'all']
['users', 1]
['users', 2]
queryClient.invalidateQueries(['users']) will invalidate all three queries.
But yes, it will refetch the whole list, and if you don't want to set the data manually with setQueryData, I don't see any other way currently.
If you return the whole detail data for one user from your mutation, I don't see why setting it with setQueryData would get you out of sync with the backend though. We are doing this a lot :)
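As a minimal sketch of that approach (react-query v3 style; the ['users', 'all'] key is taken from above, while followUser and the items page shape are assumptions about your setup):
import { useMutation, useQueryClient } from 'react-query';
// followUser(userId) is assumed to POST the follow and return the updated user
function useFollowUser() {
  const queryClient = useQueryClient();
  return useMutation(followUser, {
    onSuccess: (updatedUser) => {
      // patch the single user inside the cached infinite pages, no refetch
      queryClient.setQueryData(['users', 'all'], (old) => ({
        ...old,
        pages: old.pages.map((page) => ({
          ...page,
          items: page.items.map((user) =>
            user.id === updatedUser.id ? updatedUser : user
          ),
        })),
      }));
    },
  });
}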
Good day to all.
There is a problem.
I create one object, $rootScope.metaContent (it needs to be on $rootScope). Later, to work with the object's data, I create a new one based on the first: $scope.addMeta = $rootScope.metaContent. Needless to say, when I work with the second one (adding to the object, changing its data, or clearing it), the first one changes as well. But I need to work with the second one ($scope.addMeta) in such a way that the first is not changed. It must be possible to restore the data to the original by pressing a single "Undo" button.
How can I decouple them from each other?
Thanks in advance!
You can create a deep copy of an object or an array in Angular using angular.copy:
$scope.addMeta = angular.copy($rootScope.metaContent);
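A minimal controller sketch of the copy-and-undo flow (the module and controller names are hypothetical):
angular.module('app').controller('MetaCtrl', function ($scope, $rootScope) {
  // edit a detached deep copy so $rootScope.metaContent stays untouched
  $scope.addMeta = angular.copy($rootScope.metaContent);
  // wired to the "Undo" button: discard local edits, start from the original again
  $scope.undo = function () {
    $scope.addMeta = angular.copy($rootScope.metaContent);
  };
});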
I have a WPF application that uses Entity Framework. I am going to implement a repository pattern to make interactions with EF simple and more testable. Multiple clients can use this application, connect to the same database, and perform CRUD operations. I am trying to think of a way to synchronize the clients' repositories when one of them makes a change to the database. Could anyone give me some direction on how to solve this type of issue, and some patterns that would be beneficial for this type of problem?
I would be very open to any information/books on how to keep clients synchronized, and even on being alerted to what other clients are doing (the only thing I could think of was having a server process running that passes messages around). Thank you.
The easiest way by far to keep every client UI up to date is simply to refresh the data every so often. If it's really that important, you can set a DispatcherTimer to tick every minute, at which point you can fetch the latest version of the data being displayed.
Clearly, I'm not suggesting that you refresh an item that is being edited, but if you fetch the fresh data, you can certainly compare the collections with what's currently displayed. Rather than just replacing the old collection items with the new ones, you can be more user friendly and just add the new ones, remove the deleted ones and update the changed ones.
You could even detect whether an item currently being edited has been saved by another user since the current user opened it, and alert them to that fact. So rather than concentrating on some system to track all data changes, you should put your effort into being able to detect changes between two sets of data and then seamlessly integrate them into the current UI state.
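A minimal sketch of the polling part (the view model and method names are hypothetical, not from your code):
using System;
using System.Windows.Threading;

public class ReleasesViewModel
{
    private DispatcherTimer refreshTimer;

    public void StartPolling()
    {
        // tick once a minute on the UI thread, then merge fresh data into
        // the displayed collection instead of replacing it wholesale
        refreshTimer = new DispatcherTimer { Interval = TimeSpan.FromMinutes(1) };
        refreshTimer.Tick += (sender, args) => RefreshDisplayedData();
        refreshTimer.Start();
    }

    private void RefreshDisplayedData()
    {
        // fetch only what the current view shows, compare each item with its
        // on-screen counterpart, then add/remove/update as needed
    }
}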
UPDATE >>>
There is absolutely no benefit in holding a complete set of data in your application (or repository). In fact, you may well find that it has detrimental effects, due to the extra RAM requirements. If you are polling the data every few minutes, then it will always be up to date anyway.
So rather than asking for all of the data all of the time, just ask for what the user wants to see (dependent on which view they are currently in) and update it every now and then. I do this by simply fetching the same data that the view required when it was first opened. I wrote some methods that compare every property of every item with its older counterpart in the UI and switch old for new.
Think of the Equals method... You could do something like this:
// assumes the base class declares a virtual Equals(Release) that compares its
// own properties; otherwise drop the override keyword and the base.Equals call
public override bool Equals(Release otherRelease)
{
    return base.Equals(otherRelease) && Title == otherRelease.Title &&
           Artist.Equals(otherRelease.Artist) &&
           Artists.SequenceEqual(otherRelease.Artists); // element-wise, not reference equality
}
(Don't actually use the Equals method though, or you'll run into problems later). And then something like this:
if (!oldRelease.Equals(newRelease)) oldRelease.UpdatePropertyValues(newRelease);
And/Or this:
if (!oldReleases.Contains(newRelease)) oldReleases.Add(newRelease);
I'm guessing that you get the picture now.
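For completeness, a hypothetical sketch of the UpdatePropertyValues method used above (the property names are taken from the Equals example):
public void UpdatePropertyValues(Release newRelease)
{
    // copy each displayed property across; the setters are assumed to raise
    // PropertyChanged so the UI updates in place
    Title = newRelease.Title;
    Artist = newRelease.Artist;
    Artists = newRelease.Artists;
}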
What is generally good practice when some action changes multiple models in Backbone.js:
1. Trigger multiple PUT requests, one model.save() per model
2. Send a single request to sync the entire collection
If the number of changed models is greater than one, it should definitely be the second option.
Usually, good REST API practice seems to suggest that you should update, save, create, and delete single instances of persistent elements. In fact, you will find that a Backbone.Collection object does not implement these methods.
Also, if you use a standard URI scheme for your data access point, you will notice that a collection does not have a unique id.
GET /models //to get all items,
GET /models/:id //to read an element,
PUT /models/:id //to update an element,
POST /models //to create an element,
DELETE /models/:id //to delete an element.
If you need to update every model of a collection on the server at once, maybe you should ask why; there might need to be some rethinking of the model structure. Maybe there should be a separate model holding that common information.
As suggested by Bart, you could implement a PATCH method to update only changed attributes of a particular element, thus saving bandwidth.
I like the first option, but I'd recommend you implement a PATCH behavior (only send updated attributes) to keep the requests as small as possible. This method gives you a more native "auto-save" feel like Google Docs. Of course, this all depends on your app and what you are doing.
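A minimal sketch of that PATCH behavior (assuming Backbone 0.9.9+, where model.save accepts a patch option):
// sends HTTP PATCH /models/:id with only the attributes passed in
model.save({ title: 'New title' }, {
  patch: true,
  success: function (model, response) {
    // the server acknowledged the partial update
  }
});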
I'm trying to save an object and verify that it is saved right after, and it doesn't seem to be working.
Here is my object
import com.googlecode.objectify.annotation.Entity;
import com.googlecode.objectify.annotation.Id;
import java.util.ArrayList;

@Entity
public class PlayerGroup {
    @Id public String n; // e.g. "sharks"
    public ArrayList<String> m; // members, e.g. [39393, 23932932, 3223]
}
Here is the code for saving then trying to load right after.
playerGroup = new PlayerGroup();
playerGroup.n = reqPlayerGroup.n;
playerGroup.m = reqPlayerGroup.m;
ofy().save().entity(playerGroup).now();
response.i = playerGroup;
PlayerGroup newOne = ofy().load().type(PlayerGroup.class).id(reqPlayerGroup.n).get();
But the "newOne" object is null. Even though I just got done saving it. What am I doing wrong?
--Update--
If I try later (minutes later), I sometimes do see the object, but not right after saving. Does this have to do with the High Replication datastore?
I had the same behavior some time ago and asked a question on the Objectify Google Group.
Here is the answer I got:
You are seeing the eventual consistency of the High-Replication Datastore. There has been a lot of discussion of this exact subject on the Objectify list in Google Groups, including several links to the Google documentation on the subject.
Basically, any kind of query which does not include an ancestor() may return results from a stale view of the datastore.
Jeff
I also got another good answer about dealing with this behavior:
For deletes, query for keys and then batch-get the entities. Make sure
your gets are set to strong consistency (though I believe this is the
default). The batch-get should return null for the deleted entities.
When adding, it gets a little trickier. Index updates can take a few seconds. AFAIK, there are three ways out of this:
1. Use precomputed results (avoiding the query entirely). If your next view is the user's recently created entities, keep a list of those keys in the user entity, and update that list when a new entity is created. That list will always be fresh, no query required. Besides avoiding stale indexes, this also speeds up your app. The more result sets you can reliably manage, the more queries you can avoid.
2. Hide the latency by "enhancing" the query results with the recently added entities. Depending on the rate at which you're adding entities, either inject only the most recent key, or combine this with the solution in 1.
3. Hide the latency by taking the user through some unaffected views before landing on your query-based view. This strategy definitely has a smell over it. You need to make sure those extra steps are relevant to the user, or you'll give a poor experience.
Butterflies, Joakim
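As a hedged sketch of the keys-then-batch-get idea from that answer (assuming an Objectify 4.x-style API, the usual static ofy() import, and java.util plus com.googlecode.objectify.Key imports):
// 1. query for keys only; the index may briefly be stale after a write
List<Key<PlayerGroup>> keys = ofy().load().type(PlayerGroup.class).keys().list();
// 2. batch-get those keys, which is strongly consistent; entities that were
//    actually deleted will not come back, so render only what the get returns
Map<Key<PlayerGroup>, PlayerGroup> fresh = ofy().load().keys(keys);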
You can read it all here:
How come If I dont use async api after I'm deleting an object i still get it in a query that is being done right after the delete or not getting it right after I add one
Another good answer to a similar question : Objectify doesn't store synchronously, even with now