When does the deferred execution occur? - wpf

I've got a situation which I want to fetch data from a database, and assign it to the tooltips of each row in a ListView control in WPF. (I'm using C# 4.0.) Since I've not done this sort of thing before, I've started a smaller, simpler app to get the ideas down before I attempt to use them in my main WPF app.
One of my concerns is the amount of data that could potentially come down. For that reason I thought I would use LINQ to SQL, which uses deferred execution. I thought that would help by not pulling down the data until the user passes their mouse over the relevant row. To do this, I'm going to use a separate function to assign the values to the tooltip from the database, based upon the parameters I need to pass to the relevant stored procedures. I'm doing 2 queries using LINQ to SQL, using 2 different stored procedures, and assigning the results to 2 different DataGrids.
Even though I know that LINQ to SQL does use deferred execution, I'm beginning to wonder if some of the code I'm writing may defeat my whole intent of using LINQ to SQL. For example, in testing in my simpler app, I am choosing several different values to see how it works. One selection of values brought back no data, as there was no data for the given parameters. I thought this could potentially cause the user confusion, so I thought I would check the Count property of the list that I assign from running the DBML-associated method (related to the stored procedure). Thinking about it, I would think it would be necessary for LINQ to run the query in order to give me a result for the Count property. Am I not correct?
If I eliminate the call to the list's Count property, I'm still wondering if I might have a problem; whether LINQ may still be invoked simply because I'm associating the tooltip with the control via a function call.

You are correct: when you call the Count property, it iterates over the result set. I'm not clear on your last question, but LINQ probably gets called at the point where you populate your DataGrids, well after the tooltip comes into play.
EDIT: However, this does not mean there is anything wrong with deferred execution or your use of it; it executes at the latest possible stage, right when you need the data. If you still want to check the count ahead of actually fetching all the data, you could have a simple LINQ to SQL function that checks whether there are Any() rows. (Actually, Any() is probably what you want more than Count > 0.)

You should use Any(), not Count(), but even Any() will cause the query to be executed - after all, it can't determine whether or not there are any rows in the result set without executing the query. But there's executing the query, and there's fetching the result set. Any() will fetch one row, Count() will fetch them all.
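To make the distinction concrete, here is a minimal sketch assuming a LINQ to SQL DataContext with an Orders table; MyDataContext, Orders, CustomerId and HasTooltipData are made-up names for illustration, not anything from the question:

using System.Linq;

// Hypothetical helper: returns true if the tooltip would have any rows to show.
private bool HasTooltipData(int customerId)
{
    using (var db = new MyDataContext())
    {
        var query = db.Orders.Where(o => o.CustomerId == customerId); // deferred: no SQL sent yet

        // Any() executes the query here, but it translates to an EXISTS, so the
        // database only has to find one matching row. Materializing the results
        // (e.g. ToList()) and reading Count would pull the whole result set across.
        return query.Any();
    }
}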
That said, I think that having a non-instantaneous operation that occurs on mouseover is just a bad idea. There was a build of Outlook, once, that displayed a helpful tooltip when you moused over the Print button. Less helpfully, it got the data for that tooltip by calling the system function that finds out what printers are available. So you'd be reaching for a menu, and the button would grab the mouse pointer and the UI would freeze for two seconds while it went out and figured out how to display a tooltip that you weren't even asking for. I still hate this program today. Don't be this guy.
A better approach would be to get your tooltip data asynchronously after populating the visible data on the screen. It's easy enough to create a BackgroundWorker that fetches the data into a DataTable, and then make the DataTable available to the view models in the RunWorkerCompleted event handler. (Do it there so that you don't update UI-bound data from a background thread.) You can implement a ToolTip property in your view model that returns a default value (probably null, but maybe something like "Fetching data...") if the DataTable containing tooltip data is null, and that calculates the value if it's not. That should work admirably. You can even implement property-change notification so that the ToolTip will still get updated if the user keeps the mouse pointer over it while you're fetching the data.
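As a rough sketch of that shape (the view model, helper methods and placeholder text are assumptions made up for the example, not a definitive implementation):

using System.ComponentModel;
using System.Data;

public class RowViewModel : INotifyPropertyChanged
{
    private DataTable toolTipData; // null until the background fetch completes

    public event PropertyChangedEventHandler PropertyChanged;

    public string ToolTip
    {
        get
        {
            // Return a placeholder until the data has arrived.
            return toolTipData == null ? "Fetching data..." : BuildToolTipText(toolTipData);
        }
    }

    public void LoadToolTipDataAsync()
    {
        var worker = new BackgroundWorker();
        worker.DoWork += (s, e) =>
        {
            // Runs on a background thread: a safe place for the database call.
            e.Result = FetchToolTipTable(); // hypothetical data-access method
        };
        worker.RunWorkerCompleted += (s, e) =>
        {
            // Runs back on the UI thread, so UI-bound data can be updated safely.
            toolTipData = (DataTable)e.Result;
            var handler = PropertyChanged;
            if (handler != null)
                handler(this, new PropertyChangedEventArgs("ToolTip"));
        };
        worker.RunWorkerAsync();
    }

    private DataTable FetchToolTipTable()
    {
        // Placeholder for the real LINQ to SQL / stored procedure call.
        return new DataTable();
    }

    private string BuildToolTipText(DataTable table)
    {
        // Placeholder: format whatever columns the tooltip should show.
        return string.Format("{0} related rows", table.Rows.Count);
    }
}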

Alex is correct that calling Count() or Any() will enumerate the LINQ expression, causing the query to execute. I would recommend rethinking your design, as you probably don't want a query against the database executed every time the user moves his/her mouse. There is also the issue of the delay in querying the database. What might be instantaneous on your dev box with a local database might have a multi-second delay on a heavily loaded server. I would recommend creating a DisplayTooltip() function that takes a lazily evaluated LINQ expression. You can then cache the results or apply other heuristics to decide whether you should actually query the database or not.
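For instance, a minimal sketch of that caching idea, assuming each row can provide a string key and a delegate that wraps the deferred query (all of these names are invented for illustration):

using System;
using System.Collections.Generic;

public class TooltipCache
{
    private readonly Dictionary<string, List<string>> cache =
        new Dictionary<string, List<string>>();

    public void DisplayTooltip(string rowKey, Func<IEnumerable<string>> lazyQuery, Action<IEnumerable<string>> show)
    {
        List<string> lines;
        if (!cache.TryGetValue(rowKey, out lines))
        {
            // First request for this row: evaluate the deferred query now.
            lines = new List<string>(lazyQuery());
            cache[rowKey] = lines;
        }

        // Subsequent mouse-overs reuse the cached result instead of hitting the database.
        show(lines);
    }
}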

More on ui-grid row filtering

Long version of the question
I have a complex filtering operation that I'm trying to implement for a ui-grid application. Essentially, I have a big grid with lots of columns, each having the typical filter fields at the top of the columns. That works great.
Then I have an extra analysis step that the user can turn on (which involves looking for sets of rows that meet a certain criterion, and then marking rows visible or not based on the results) that MUST be applied logically after all the other filters (i.e. it does not share the 'commutative property' that all the column-top filters do). This extra analysis/filter step is intended to take the row set that is produced by the column-top filters and then apply this one final, mother-of-all-complex-filtration steps.
I am able to get that filtration logic to produce initially correct results - when the user first clicks into the special mode, I perform the analysis and save the necessary info in a hidden column of the grid, and then a RowsProcessor sets the row.visible attribute accordingly. (Perhaps I didn't need the RowsProcessor, and maybe I could have just set the visibility in the analysis subroutine.) But whatever - the point is that the rows are marked visible or not.

The problem occurs when the user subsequently adds/removes/changes one of the column-top filters. That extra analysis step by necessity needs to be based upon the rows that are visible according to the column-top filters. The first time into the special filtering routine, a call to gridApi.core.getVisibleRows() returns exactly that rowset. But after that, the visible rowset has already been reduced by the prior execution of the special filtering. I need to get back to the rowset (i.e. a complete recalculation of the row.visible attributes) produced by just the column-top filters, without the special final filtration. Is there a way to do that - to effectively undo the filtration effects of the RowsProcessor?
Short version of the question
Is there some way to force recalculation of the visible row set based on the column-top filters, and to do so in a way that gives me control back so additional filtration steps can be executed?
I've looked at various things in the APIs but cannot tell which, if any, might help me. For example:
In the ui.grid (Grid) portion of the API, I see many different flavors of refresh methods that may help, but no distinction is given that I understand. I hope the one that I need is not refreshRows(), which is described as "not functional at present".
Also, the GridRow 'class' seems to have various methods that speak of visibility "overrides" - that sounds possibly like what I might need (my final visibility result possibly being an override to those calculated by the column-top filters). But I tried using those methods instead of directly setting row.visible and I did not see any difference.
Can anyone suggest a direction for me to try?
and even better, is there any written description that provides a high-level overview of ui-grid functionality? I love the package, but using it for the first time, I'm just having a hard time with what are probably basic concepts, and possibly I'm thinking about this problem all wrong.
Once again, thanks for any assistance.
Whenever the rowsProcessors run they start by setting all rows to visible, then each rowsProcessor runs in turn with the results from the previous rowsProcessor being passed to the next one. RowsProcessors have a priority, so you can set your processor to run at the appropriate place in the sequence.
It sounds like your problem is that you're using getVisibleRows to calculate what to do, rather than looking at the rows that are passed in to your rows processor, and evaluating based on which rows are visible in that input.
My guess is that you would be better off setting your rowsProcessor to a high (late) priority, and then doing all of your calculations within that processor rather than attempting to cache them on the data set itself. If you need to extract the visible rows from the set of renderableRows that are passed to your processor, you could do it with:
var visibleRows = renderableRows.filter( function(row) { return row.visible; });

Keeping repository synced with multiple clients

I have a WPF application that uses Entity Framework. I am going to be implementing a repository pattern to make interactions with EF simpler and more testable. Multiple clients can use this application, connect to the same database and do CRUD operations. I am trying to think of a way to synchronize the clients' repositories when one makes a change to the database. Could anyone give me some direction on how to solve this type of issue, and some possible patterns that would be beneficial for this type of problem?
I would be very open to any information/books on how to keep clients synchronized, and even on being alerted of things other clients are doing (the only thing I could think of was having a server process running that passes messages around). Thank you.
The easiest way by far to keep every client UI up to date is just to refresh the data every so often. If it's really that important, you can set a DispatcherTimer to tick every minute or so, at which point you can fetch the latest version of the data that is being displayed.
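A minimal sketch of that polling approach (DisplayRefresher and the refresh delegate are made-up names, and the one-minute interval is only a placeholder):

using System;
using System.Windows.Threading;

public class DisplayRefresher
{
    private readonly DispatcherTimer timer;

    public DisplayRefresher(Action refreshDisplayedData)
    {
        timer = new DispatcherTimer { Interval = TimeSpan.FromMinutes(1) };
        timer.Tick += (s, e) =>
        {
            // Re-query only what the user is currently looking at, then merge
            // the results into the existing collections rather than replacing them.
            refreshDisplayedData();
        };
        timer.Start();
    }

    public void Stop()
    {
        timer.Stop();
    }
}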
Clearly, I'm not suggesting that you refresh an item that is being edited, but if you get the fresh data, you can certainly compare collections with what's being displayed currently. Rather than just replacing the old collection items with the new, you can be more user friendly and just add the new ones, remove the deleted ones and update the newer ones.
You could even detect whether an item being currently edited has been saved by another user since the current user opened it and alert them to the fact. So rather than concentrating on some system to track all data changes, you should put your effort into being able to detect changes between two sets of data and then seamlessly integrating it into the current UI state.
UPDATE >>>
There is absolutely no benefit in holding a complete set of data in your application (or repository). In fact, you may well find that it has detrimental effects, due to the extra RAM requirements. If you are polling data every few minutes, then it will always be up to date anyway.
So rather than asking for all of the data all of the time, just ask for what the user wants to see (dependent on which view they are currently in) and update it every now and then. I do this by simply fetching the same data that the view requires when it is first opened. I wrote some methods that compare every property of every item with its older counterpart in the UI and switch old for new.
Think of the Equals method... You could do something like this:
public override bool Equals(Release otherRelease)
{
    return base.Equals(otherRelease) && Title == otherRelease.Title &&
        Artist.Equals(otherRelease.Artist) && Artists.Equals(otherRelease.Artists);
}
(Don't actually use the Equals method though, or you'll run into problems later). And then something like this:
if (!oldRelease.Equals(newRelease)) oldRelease.UpdatePropertyValues(newRelease);
And/Or this:
if (!oldReleases.Contains(newRelease)) oldReleases.Add(newRelease);
I'm guessing that you get the picture now.
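Putting those pieces together, here is a rough sketch of the merge step described above. The Release members shown (Id, ValuesEqual, UpdatePropertyValues) are stand-ins I've invented for the example, not a real API:

using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Linq;

public class Release
{
    public int Id { get; set; }
    public string Title { get; set; }

    // Hypothetical members standing in for the comparison/update logic above.
    public bool ValuesEqual(Release other) { return Title == other.Title; }
    public void UpdatePropertyValues(Release other) { Title = other.Title; }
}

public static class CollectionMerger
{
    public static void Merge(ObservableCollection<Release> oldReleases, IList<Release> newReleases)
    {
        // Remove items that no longer exist in the fresh data.
        foreach (var stale in oldReleases.Where(o => newReleases.All(n => n.Id != o.Id)).ToList())
            oldReleases.Remove(stale);

        foreach (var newRelease in newReleases)
        {
            var oldRelease = oldReleases.FirstOrDefault(o => o.Id == newRelease.Id);
            if (oldRelease == null)
                oldReleases.Add(newRelease);                  // new item
            else if (!oldRelease.ValuesEqual(newRelease))
                oldRelease.UpdatePropertyValues(newRelease);  // changed item
        }
    }
}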

Comparing IDataObject from the Clipboard

My WPF application checks the data on the clipboard to see if it can work with the data or not. Because I set some buttons to be enabled/disabled based on the data (via an ICommand implementation), this code is called frequently.
The work to determine if my application can work with the data can be non-trivial at times, and thus is causing my application to "hang" randomly. I don't believe I can push this work off to another thread since the WPF runtime is expecting a response quickly.
In order to solve this issue, I thought I would compare the IDataObjects (the current one from the clipboard vs. a cached one from the previous attempt). A straight comparison (and even object.ReferenceEquals) does not return the desired results, so I thought I would try the method Clipboard.IsCurrent. The description sounds like exactly what I want, but when I evaluate the following:
Clipboard.IsCurrent(Clipboard.GetDataObject())
the result is false. The current workaround is to compare the data formats on the IDataObject, but that's not a good answer, since my application can handle some files from the file system but not all. So even though the formats are identical, the answer to whether my application can handle the data may not always be the same.
Unfortunately, IsCurrent does not work in conjunction with GetDataObject. MSDN's description of OleIsCurrentClipboard (which IsCurrent uses internally) is quite explicit about this:
OleIsCurrentClipboard only works for the data object used in the OleSetClipboard function. It cannot be called by the consumer of the data object to determine if the object that was on the clipboard at the previous OleGetClipboard call is still on the clipboard.
A workaround could be to subscribe to clipboard updates (see e.g. Clipboard event C#) and evaluate the data only when it changes, potentially in a background thread.
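For reference, a minimal sketch of hooking clipboard updates from a WPF window using the Win32 AddClipboardFormatListener API (EvaluateClipboard is a made-up method standing in for your format check):

using System;
using System.Runtime.InteropServices;
using System.Windows;
using System.Windows.Interop;

public partial class MainWindow : Window
{
    private const int WM_CLIPBOARDUPDATE = 0x031D;

    [DllImport("user32.dll", SetLastError = true)]
    private static extern bool AddClipboardFormatListener(IntPtr hwnd);

    protected override void OnSourceInitialized(EventArgs e)
    {
        base.OnSourceInitialized(e);
        var source = (HwndSource)PresentationSource.FromVisual(this);
        source.AddHook(WndProc);
        AddClipboardFormatListener(source.Handle);
    }

    private IntPtr WndProc(IntPtr hwnd, int msg, IntPtr wParam, IntPtr lParam, ref bool handled)
    {
        if (msg == WM_CLIPBOARDUPDATE)
        {
            // The clipboard changed: re-evaluate once here (or on a background
            // thread) and cache the result for the ICommand CanExecute checks.
            EvaluateClipboard();
        }
        return IntPtr.Zero;
    }

    private void EvaluateClipboard()
    {
        // Placeholder: inspect Clipboard.GetDataObject() and cache whether the
        // application can handle it.
    }
}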

How do you create a deconstructor (or similar) in APEX that runs right before an object is destroyed?

My application has many aggregate fields that need to be updated when any related record is changed, added or deleted. The relationships and calculations are somewhat involved, so I created a class that handles all of the calculations for all of the related tables. There is some SOQL and DML overhead involved in the calculations, so the class handles everything in bulk.
I would like to have the updateAll() method on this class run no more than once per request on all of the records that have been added to its queue. But, there doesn't appear to be "deconstructor-like" functionality in APEX that would automatically get called right before this calculator object was destroyed.
What is the best way to implement this pattern in APEX?
Yes, there is no way to detect or predict object destruction, since it's essentially Java in the background (shhh, they don't want you to know, it's the "no software" thing ;)). It probably follows Java's lifetime mechanisms, but you can't rely on that.
We actually handle our aggregation in triggers or in the reporting (depending on whether the aggregation needs to be stored). Triggers also receive batches as a List rather than one row at a time, which allows for batch aggregation and lets us satisfy the pesky governor limits. Unfortunately, if you have multi-table aggregates, you'll need triggers for all of them and will have to rerun them for every batch.
Here's what I did. I created a Calculator class that recalculates every related aggregate/calculated field in a ~10 table/object relationship. I used triggers on each of those objects to make the calculator class run on the set of object families related to the objects that were changed. I used a static variable on the calculator class to check whether the calculator was running, so that each trigger would only call the calculator if it wasn't already running. It works well enough. A bit inefficient, but it stays below governor limits and works in bulk very well. And I can grow with it...

WPF and Active Objects

I have a collection of "active objects"; that is, objects that need to periodically update themselves. In turn, these objects should be used to update a WPF-based GUI.
In the past I would just have each object include its own thread, but that only makes sense when working with a finite number of objects with well-defined life cycles. Now I'm using objects that only exist when needed by a form, so the life cycle is unpredictable. Also, I can have dozens of objects all making database and web service calls.
Under normal circumstances the update interval is 1 second, but it can take up to 30 seconds due to timeouts.
So, what design would you recommend?
You could use one dispatcher (scheduler) for all of the active objects, or for groups of them. The dispatcher can process high-priority tasks first and the other ones afterwards.
You can look at this article about long-running active objects, which includes code showing how to do it. In addition, I recommend looking at the Half-Sync/Half-Async pattern.
If you have questions, you're welcome to ask.
I am not an expert, but I would just have the objects fire an event indicating when they've changed. The GUI can then refresh the necessary parts of itself (easy when using data binding and INotifyPropertyChanged) whenever it receives an event.
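A minimal sketch of that event-based approach, with INotifyPropertyChanged doing the notification (ActiveItem and Status are invented names):

using System.ComponentModel;

public class ActiveItem : INotifyPropertyChanged
{
    private string status;

    public event PropertyChangedEventHandler PropertyChanged;

    public string Status
    {
        get { return status; }
        set
        {
            if (status == value) return;
            status = value;
            // Raise the change notification so any bound UI refreshes itself.
            var handler = PropertyChanged;
            if (handler != null) handler(this, new PropertyChangedEventArgs("Status"));
        }
    }
}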
I'd probably try to generalize out some sort of data bus, if possible, and when objects are 'active' have them add themselves to a list of objects to be updated. I'd especially be tempted to use this pattern if the objects are backed by a database, as that way you can aggregate multiple queries instead of having to do a separate query for each object.
If there end up being no listeners for a specific object, no big deal, the data just goes nowhere.
The core updater code can then use a single timer (or multiple, or whatever is appropriate) to determine when to get updates. Doing this as more of a dataflow, and less of a 'state update' will probably save a lot of sanity in the end.
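Here is a rough sketch of that kind of central updater, assuming the active objects register themselves through a small interface; IActiveObject, UpdateBus and Refresh are all invented for the example:

using System;
using System.Collections.Generic;
using System.Timers;

public interface IActiveObject
{
    void Refresh();   // hits the database / web service and raises change events
}

public class UpdateBus
{
    private readonly List<IActiveObject> subscribers = new List<IActiveObject>();
    private readonly Timer timer = new Timer(1000); // nominal 1-second interval
    private readonly object gate = new object();

    public UpdateBus()
    {
        timer.AutoReset = true;
        timer.Elapsed += OnTick;
        timer.Start();
    }

    public void Register(IActiveObject item)
    {
        lock (gate) subscribers.Add(item);
    }

    public void Unregister(IActiveObject item)
    {
        lock (gate) subscribers.Remove(item);
    }

    private void OnTick(object sender, ElapsedEventArgs e)
    {
        // Elapsed fires on a thread-pool thread; take a snapshot so registration
        // changes during the pass don't break the enumeration.
        IActiveObject[] snapshot;
        lock (gate) snapshot = subscribers.ToArray();

        // One pass per tick; a real implementation could batch database queries
        // here instead of letting each object query on its own.
        foreach (var item in snapshot)
            item.Refresh();
    }
}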
