NHibernate: Reset an object's original state - WPF

I have a fairly basic question. I am using WPF binding to edit an object loaded by an ISession. Because of two-way binding and a stateful session, whenever I close the session, any changes made to the object in the form are stored back to the database. What is the best way to avoid this?
The methods I know:
Shadow copy the object and use the copied object as the DataContext (the method I am using as of now).
ISession.Clear
Use IStatelessSession.
Is there any way to reset the object to its original state before closing the ISession?

If you look here: http://nhforge.org/wikis/howtonh/finding-dirty-properties-in-nhibernate.aspx
It is an example of finding dirty properties. NHibernate internally tracks a persistent object's state by way of the EntityEntry object.
This is useful for you because, with a little modification to the method above, you can get the old values back, which you can use to reset the properties.
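As a rough sketch of that idea (the ResetToOriginalState helper name is mine, not NHibernate's; EntityEntry, LoadedState and the persister are real NHibernate internals, but the exact SetPropertyValue signature varies between NHibernate versions):

using NHibernate;
using NHibernate.Engine;

public static class SessionExtensions
{
    public static void ResetToOriginalState(this ISession session, object entity)
    {
        ISessionImplementor impl = session.GetSessionImplementation();
        EntityEntry entry = impl.PersistenceContext.GetEntry(entity);
        if (entry == null || entry.LoadedState == null)
            return; // not a persistent entity in this session

        // LoadedState holds the property values as originally read from the
        // database, in the same order as the persister's property list.
        for (int i = 0; i < entry.Persister.PropertyNames.Length; i++)
        {
            // Older NHibernate versions also take an EntityMode argument here.
            entry.Persister.SetPropertyValue(entity, i, entry.LoadedState[i]);
        }
    }
}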
As for closing your session causing the object to be flushed to the database: you can set the session's FlushMode to FlushMode.Never. This means no database synchronization occurs until you explicitly call ISession.Flush().
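For example (sessionFactory, Customer and id are illustrative names, not from the question):

using (ISession session = sessionFactory.OpenSession())
{
    // With FlushMode.Never, disposing or closing the session will not push
    // dirty objects to the database; only an explicit Flush() will.
    session.FlushMode = FlushMode.Never;
    var customer = session.Get<Customer>(id);
    // ... bind customer to the form; edits stay in memory only ...
}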
Alternatively, you can hook into IFlushEntityEventListener to reset the object state. There are some reasonable examples of using the NHibernate event system available online.

See Managing the caches on NHibernate Forge:
When Flush() is subsequently called, the state of that object will be synchronized with the database. If you do not want this synchronization to occur or if you are processing a huge number of objects and need to manage memory efficiently, the Evict() method may be used to remove the object and its collections from the first-level cache.
I think that sounds like what you want.
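In code this is a one-liner (myObject stands for whatever entity you loaded):

// Evict removes the object and its collections from the session's first-level
// cache, so a later Flush() or Close() will not synchronize its changes.
session.Evict(myObject);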

I would suggest the use of transactions. You could just roll back the transaction in that case. What do you think?
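A minimal sketch of that idea (again, sessionFactory, Customer and id are illustrative):

using (ISession session = sessionFactory.OpenSession())
using (ITransaction tx = session.BeginTransaction())
{
    var customer = session.Get<Customer>(id);
    // ... user edits the bound object in the form ...
    tx.Rollback(); // discard any flushed changes instead of committing them
}

Note that a rollback prevents the changes from being persisted, but it does not by itself restore the in-memory state of the object.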


Why do we need dispatcher in Flux?

This is not a React-specific question. I'm thinking of implementing Flux in Aurelia/AngularJS.
While reading up on Flux, I'm not convinced of the need for the dispatcher step. Why can't a component call the store directly to update and retrieve data? Is there anything wrong with that approach?
For example: if I have a CarStore that can create new cars, update cars and get a list of cars (just a thin layer on the CRUD API), I should be able to retrieve/update the list by directly calling the store from the car-grid component. Since the store is a singleton, whenever the list updates, car-grid should automatically get the new items. What is the benefit of using a dispatcher in this scenario?
I've created several large apps using React Native with Redux as the store / view-state updater.
The dispatch action is synchronous regardless. There's a big disadvantage to using dispatchers: you lose the function signature (debugging and auto-catching type errors suffer, refactoring support is lost, you get multiple declarations of the same function; the list goes on).
I've never had to use a dispatcher and it's caused no issues. Within the actions we simply call getState().dispatch. The store is a singleton anyhow; it's heavily recommended you don't have multiple stores. (Why would you do that...)
You can see here why dispatchers are important (check out the section Why We Need a Dispatcher). The way I see it, the idea is basically being able to access various stores in a synchronous way (one callback finishes before another one is called). You can do this thanks to the waitFor method, which allows you to wait for a store (or more than one) to finish processing an action. There is a good example in the docs. For instance, your application may grow, and instead of having just that CarStore you may have another store whose updates depend on the CarStore's updates.
If you will only ever have one store, then a dispatcher is redundant in my opinion. If you have multiple stores however, then a dispatcher is important so that actions don't need to know about each of these stores.
Please note that I am not saying that you should ditch the dispatcher if you only have one store. It's still a good pattern as it gives you the option of supporting multiple stores if you ever need to in the future.
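To make the pattern concrete, here is a deliberately minimal, language-agnostic sketch of a dispatcher, written in C# since that's what the rest of this page uses (all names are illustrative; real Flux dispatchers also implement waitFor, which needs extra bookkeeping not shown here):

using System;
using System.Collections.Generic;

public class Dispatcher
{
    private readonly List<Action<IDictionary<string, object>>> _callbacks =
        new List<Action<IDictionary<string, object>>>();

    public void Register(Action<IDictionary<string, object>> storeCallback) =>
        _callbacks.Add(storeCallback);

    public void Dispatch(IDictionary<string, object> action)
    {
        // Synchronous broadcast: every registered store sees every action and
        // decides for itself whether to react, so action creators never need
        // a reference to any particular store.
        foreach (var callback in _callbacks)
            callback(action);
    }
}

// Usage: adding a second store requires no change to the actions.
// dispatcher.Register(carStore.HandleAction);
// dispatcher.Register(statsStore.HandleAction); // depends on car data
// dispatcher.Dispatch(new Dictionary<string, object> { ["type"] = "CAR_ADDED" });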

Can I save changes to objects in a different transport request than the one they are locked in?

When I try to switch to edit mode for a Report source, a popup comes up telling me
"A new task will be created for the following request of user XXX".
A transport request is also being suggested.
I don't want to save my changes in this request, however, but in another existing one. I am not aware of any versioning system being implemented in my system, and I don't know how to check that.
Is what I'm trying to achieve possible? And if so, how?
No, this is not possible. There are very good reasons for this being an exclusive lock -- reasons that you should know about before you attempt to change anything. Briefly speaking:
The CTS only notes that an object was touched, not what change was made.
When the transport is released, the entire object in its current state is exported - there is no delta/diff logic involved.
Therefore you can't transport changes to the same development object separately. Furthermore, if you serialize this manually, the second transport will always contain the changes of the first one.
Things get slightly more complicated with partial objects - you can have LIMU METH objects (methods of a class) in different transports, but as soon as you try to lock the R3TR CLAS main class, you'll have to resolve that.

onSync Event and its properties

Hello people of Stack Overflow. I am using a persistent SharedObject on Adobe Media Server to store and share data in real time between multiple clients. I am using the SyncEvent to dispatch any event that has been updated.
Reading through the documentation, the SyncEvent contains numerous properties. What I want to achieve is to use a remote shared object to store a list of people who are online, so that when one client disconnects, all the other listed clients are notified of the disconnection.
The Adobe docs unfortunately don't provide any examples of how to do this.
Would the best approach be to create a changeList array that contains the properties of all members and then execute a loop?
Or can anyone suggest another method?
Thanks
The changeList property of the event contains only the properties that were changed. So, if your shared object contains the list of IDs, you should be able to get what you want to achieve.
Note that the notification is done for the top-level properties stored in the shared object. So what you want would probably look like:
idSo.setProperty("1", true);
while adding a user. To remove the user, you would use:
idSo.setProperty("1", null);
To reiterate: having
idSo.setProperty("ids", <array of ids>)
would send the whole array whenever it's updated, so that would be a bad approach.
This sync event would be sent to all the connected shared objects.

Short lived DbContext in WPF application reasonable?

In his book on DbContext, Rowan Miller shows how to use the DbSet.Local property to avoid (1) unnecessary roundtrips to the database and (2) passing around collections (created with e.g. ToList()) in the application (page 24). I then tried to follow this approach. However, I noticed that from one using {} block to another, the DbSet.Local property becomes empty:
ObservableCollection<Destination> destinationsList;
using (var context = new BAContext())
{
    var query = from d in context.Destinations …; // (query body elided)
    query.Load();
    destinationsList = context.Destinations.Local; // Non-empty here.
}
// Do stuff with destinationsList.
using (var context = new BAContext())
{
    // context.Destinations.Local is empty here again;
    // so there is no way of getting the in-memory data from the previous using block here?
    // Do I have to do another roundtrip to the database here to get the same data
    // I wanted to cache locally?
}
Then, what is the point of page 24? How can I avoid passing my collections around if DbSet.Local is only usable inside the using {} block? Furthermore, how can I benefit from change tracking if these short-lived context instances don't hand any cached data over to each other under the hood? So, if contexts should be short-lived to free resources such as connections, do I have to give up caching? In other words, I can't have both at the same time (short-lived connections but a long-lived cache)? So my only option would be to store the results returned by the query in my own variables, which is exactly what is discouraged in the motivation on page 24?
I am developing a WPF application which may also become multi-tiered in the future, involving WCF. I know Julia has an example of this in her book, but I currently don't have access to it. I found several others on the web, e.g. http://msdn.microsoft.com/en-us/magazine/cc700340.aspx (the old ObjectContext, but good at explaining the inter-layer collaboration). There, a long-lived context is used (the disadvantages are mentioned, but no solution to them is provided).
It's not only the single Destinations.Local that gets lost; as you surely know, all the other entities fetched by the query are, too.
[Edit]:
After some more reading in Julia Lerman's book, it seems to boil down to this: EF does not have second-level caching by default; with some (considerable, I think) effort, however, one can add third-party caching solutions, as is also described in the book and in various articles on MSDN, CodeProject etc.
I would have appreciated it if the section about DbSet.Local in the DbContext book had mentioned this problem, namely that it is in fact a first-level cache which is destroyed when the using {} block ends (just my proposal to make it more transparent to readers). After the first reading I had the impression that DbSet.Local would always return the same reference (singleton-style), even in the second using {} block, despite the new DbContext instance.
But I am still unsure whether the second-level cache is the way to go for my WPF application (Julia mentions the second-level cache in her article in the context of distributed applications). Or is the way to go to load the aggregate root instances (DDD, Eric Evans) of my domain model into memory with one or a few queries in a using {} block, dispose the DbContext, and only hold the references to the aggregate instances, thereby avoiding a long-lived context? It would be great if you could help me with this decision.
http://msdn.microsoft.com/en-us/magazine/hh394143.aspx
http://www.codeproject.com/Articles/435142/Entity-Framework-Second-Level-Caching-with-DbConte
http://blog.3d-logic.com/2012/03/31/using-tracing-and-caching-provider-wrappers-with-codefirst/
The Local property provides a “local view of all Added, Unchanged, and Modified entities in this set”. Like all change tracking, it is specific to the context you are currently using.
The DbContext is a workspace for loading data and preparing changes.
If two users were to prepare changes at the same time, they must not see each other's changes before they are saved. One of them may discard their prepared changes, which would suddenly lead to problems for the other user as well.
A DbContext should indeed be short-lived, but it may live longer than "super short" when necessary. Also consider that you may not save resources by keeping it short-lived if you do not load and discard data but only add changes you will save. It is not only about resources, though, but also about the database state potentially changing while the DbContext is still active and has data loaded; this may be important to keep in mind for longer-living contexts.
If you do not yet know all the related changes you want to save to the database at once, then I suggest you do not use the DbContext to store your changes in memory, but rather a data structure in your own code.
You can of course use entity objects for this without an active DbContext. This makes sense if you do not have another appropriate data class for it and do not want to create one, or if you decide that preparing the changes in the entities makes more sense. You can then use DbSet.Attach to attach the entities to a DbContext to save the changes when you are ready.
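A minimal sketch of that flow, using the BAContext and Destination types from the question (the other names are illustrative):

// EntityState lives in System.Data.Entity in EF6 (System.Data in earlier versions).
Destination destination;
using (var context = new BAContext())
{
    destination = context.Destinations.Find(1); // becomes detached on dispose
}

destination.Name = "New name"; // prepare changes while no context is alive

using (var context = new BAContext())
{
    context.Destinations.Attach(destination);
    context.Entry(destination).State = EntityState.Modified; // mark for UPDATE
    context.SaveChanges();
}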

WPF and Active Objects

I have a collection of "active objects", that is, objects that need to periodically update themselves. In turn, these objects should be used to update a WPF-based GUI.
In the past I would just have each object contain its own thread, but that only makes sense when working with a finite number of objects with well-defined life cycles. Now I'm using objects that only exist when needed by a form, so their life cycle is unpredictable. Also, I can have dozens of objects all making database and web service calls.
Under normal circumstances the update interval is 1 second, but it can take up to 30 seconds due to timeouts.
So, what design would you recommend?
You may use one dispatcher (scheduler) for all active objects, or one per group of them. The dispatcher can process high-priority tasks first and the other ones afterwards.
You can look at this article about long-running active objects, which includes code showing how to do it. Additionally, I recommend looking at the Half-Sync/Half-Async pattern.
If you have questions, feel free to ask.
I am not an expert, but I would just have the objects fire an event indicating when they've changed. The GUI can then refresh the necessary parts of itself (easy when using data binding and INotifyPropertyChanged) whenever it receives such an event.
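A minimal sketch of that approach (the ActiveItem type and Status property are illustrative):

using System.ComponentModel;

public class ActiveItem : INotifyPropertyChanged
{
    private string _status;

    public string Status
    {
        get { return _status; }
        set
        {
            if (_status == value) return;
            _status = value;
            // WPF bindings listening to this object refresh themselves.
            var handler = PropertyChanged;
            if (handler != null)
                handler(this, new PropertyChangedEventArgs("Status"));
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;
}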
I'd probably try to generalize out some sort of data bus, if possible, and when objects are 'active' have them add themselves to a list of objects to be updated. I'd especially be tempted to use this pattern if the objects are backed by a database, because that way you can aggregate multiple queries instead of having to issue a separate query for each object.
If there end up being no listeners for a specific object, no big deal, the data just goes nowhere.
The core updater code can then use a single timer (or multiple timers, or whatever is appropriate) to determine when to fetch updates. Treating this as a dataflow rather than a 'state update' will probably save a lot of sanity in the end.
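A hedged sketch of that data-bus idea (all names are illustrative; note that System.Timers.Timer raises Elapsed on a thread-pool thread, so a real implementation needs locking around the list and must marshal UI updates to the WPF dispatcher):

using System.Collections.Generic;
using System.Timers;

public interface IRefreshable
{
    void Refresh(); // fetch new data and raise change notifications
}

public class UpdateBus
{
    private readonly List<IRefreshable> _activeObjects = new List<IRefreshable>();
    private readonly Timer _timer = new Timer(1000); // 1-second interval

    public UpdateBus()
    {
        _timer.Elapsed += (sender, args) =>
        {
            // One pass per tick; a real implementation could batch the
            // database queries for all registered objects into one roundtrip.
            foreach (var obj in _activeObjects)
                obj.Refresh();
        };
        _timer.Start();
    }

    public void Register(IRefreshable obj) { _activeObjects.Add(obj); }
    public void Unregister(IRefreshable obj) { _activeObjects.Remove(obj); }
}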
