My WPF application checks the data on the clipboard to see if it can work with the data or not. Because I set some buttons to be enabled/disabled based on the data (via an ICommand implementation), this code is called frequently.
The work to determine if my application can work with the data can be non-trivial at times, and thus is causing my application to "hang" randomly. I don't believe I can push this work off to another thread since the WPF runtime is expecting a response quickly.
In order to solve this issue, I thought I would compare the IDataObjects (the current one from the clipboard vs. a cached one from the previous attempt). A straight comparison (and even object.ReferenceEquals) does not return the desired results, so I thought I would try the Clipboard.IsCurrent method. The description sounds like exactly what I want, but when I evaluate the following:
Clipboard.IsCurrent(Clipboard.GetDataObject())
the result is false. The current workaround is to compare the data formats on the IDataObject, but that's not a good answer since my application can handle some files from the file system, but not all. So even though the formats are identical, the result on whether my application can handle the data may not always be the same.
Unfortunately, IsCurrent does not work in conjunction with GetDataObject. MSDN's description of OleIsCurrentClipboard (which IsCurrent uses internally) is quite explicit about this:
OleIsCurrentClipboard only works for the data object used in the OleSetClipboard function. It cannot be called by the consumer of the data object to determine if the object that was on the clipboard at the previous OleGetClipboard call is still on the clipboard.
A workaround could be to subscribe to clipboard updates (see e.g. Clipboard event C#) and evaluate the data only when it changes, potentially in a background thread.
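For example, a minimal sketch of listening for WM_CLIPBOARDUPDATE in a WPF window via the Win32 AddClipboardFormatListener function, so that the expensive "can I handle this data?" check runs only when the clipboard actually changes. The window class and the body of the handler here are illustrative, not the asker's code:

using System;
using System.Runtime.InteropServices;
using System.Windows;
using System.Windows.Interop;

public partial class MainWindow : Window
{
    private const int WM_CLIPBOARDUPDATE = 0x031D;

    [DllImport("user32.dll", SetLastError = true)]
    private static extern bool AddClipboardFormatListener(IntPtr hwnd);

    protected override void OnSourceInitialized(EventArgs e)
    {
        base.OnSourceInitialized(e);
        var source = (HwndSource)PresentationSource.FromVisual(this);
        source.AddHook(WndProc);
        AddClipboardFormatListener(source.Handle);   // ask Windows to send WM_CLIPBOARDUPDATE
    }

    private IntPtr WndProc(IntPtr hwnd, int msg, IntPtr wParam, IntPtr lParam, ref bool handled)
    {
        if (msg == WM_CLIPBOARDUPDATE)
        {
            // Re-evaluate whether the clipboard data is usable (potentially on a
            // background thread) and cache the answer for the ICommand's CanExecute.
        }
        return IntPtr.Zero;
    }
}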
When I try to switch to edit mode for a Report source, a popup comes up telling me
"A new task will be created for the following request of user XXX".
A transport request is also being suggested.
I don't want to save my changes in this request however, but in another existing one. I am not aware of any versioning systems being implemented in my system, and don't know how to check that.
Is what I'm trying to achieve possible? And if so, how?
No, this is not possible. There are very good reasons for this being an exclusive lock -- reasons that you should know about before you attempt to change anything. Briefly speaking:
The CTS only notes that an object was touched, not what change was made.
When the transport is released, the entire object in its current state is exported - there is no delta/diff logic involved.
Therefore you can't separately transport changes to the same development object. Furthermore, if you serialize this manually, the second transport will always comprise the changes of the first one.
Things get slightly more complicated with partial objects - you can have LIMU METH objects (methods of a class) in different transports, but as soon as you try to lock the R3TR CLAS main class, you'll have to resolve that.
I have a WPF application that uses Entity Framework. I am going to be implementing a repository pattern to make interactions with EF simpler and more testable. Multiple clients can use this application, connect to the same database, and do CRUD operations. I am trying to think of a way to synchronize clients' repositories when one makes a change to the database. Could anyone give me some direction on how one would solve this type of issue, and some possible patterns that would be beneficial for this type of problem?
I would be very open to any information/books on how to keep clients synchronized, and even be alerted of things other clients are doing (the only thing I could think of was having a server process running that passes messages around). Thank you
The easiest way by far to keep every client UI up to date is just to refresh the data every so often. If it's really that important, you can set a DispatcherTimer to tick every minute, at which point you can fetch the latest version of the data that is being displayed.
Clearly, I'm not suggesting that you refresh an item that is being edited, but if you get the fresh data, you can certainly compare collections with what's being displayed currently. Rather than just replacing the old collection items with the new, you can be more user friendly and just add the new ones, remove the deleted ones and update the newer ones.
You could even detect whether an item being currently edited has been saved by another user since the current user opened it and alert them to the fact. So rather than concentrating on some system to track all data changes, you should put your effort into being able to detect changes between two sets of data and then seamlessly integrating it into the current UI state.
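As a rough sketch of that polling approach (RefreshDisplayedData is a hypothetical method on your view model that fetches fresh data and merges it into the bound collections):

// e.g. in the view model's constructor; DispatcherTimer lives in System.Windows.Threading
var refreshTimer = new DispatcherTimer { Interval = TimeSpan.FromMinutes(1) };
refreshTimer.Tick += (s, e) => RefreshDisplayedData();   // fetch the latest data and merge it in
refreshTimer.Start();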
UPDATE >>>
There is absolutely no benefit in holding a complete set of data in your application (or repository). In fact, you may well find that it has a detrimental effect, due to the extra RAM requirements. If you are polling data every few minutes, then it will always be up to date anyway.
So rather than asking for all of the data all of the time, just ask for what the user wants to see (dependent on which view they are currently in) and update it every now and then. I do this by simply fetching the same data that the view requires when it is first opened. I wrote some methods that compare every property of every item with their older counterparts in the UI and switch old for new.
Think of the Equals method... You could do something like this:
public bool Equals(Release otherRelease)
{
    return base.Equals(otherRelease) && Title == otherRelease.Title &&
           Artist.Equals(otherRelease.Artist) && Artists.Equals(otherRelease.Artists);
}
(Don't actually use the Equals method though, or you'll run into problems later). And then something like this:
if (!oldRelease.Equals(newRelease)) oldRelease.UpdatePropertyValues(newRelease);
And/Or this:
if (!oldReleases.Contains(newRelease)) oldReleases.Add(newRelease);
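Putting those pieces together, a merge between a freshly fetched set of data and the displayed collection might look roughly like this. It's only a sketch: the Id key property is an assumption about your model, and Release and UpdatePropertyValues are the names from the snippets above.

using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Linq;

public static class CollectionMerge
{
    public static void MergeReleases(ObservableCollection<Release> oldReleases, IEnumerable<Release> newReleases)
    {
        var fresh = newReleases.ToList();

        // remove items that another client has deleted
        foreach (var stale in oldReleases.Where(o => !fresh.Any(n => n.Id == o.Id)).ToList())
            oldReleases.Remove(stale);

        foreach (var newRelease in fresh)
        {
            var oldRelease = oldReleases.FirstOrDefault(o => o.Id == newRelease.Id);
            if (oldRelease == null)
                oldReleases.Add(newRelease);                  // added by another client
            else if (!oldRelease.Equals(newRelease))
                oldRelease.UpdatePropertyValues(newRelease);  // changed by another client
        }
    }
}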
I'm guessing that you get the picture now.
I've got a situation which I want to fetch data from a database, and assign it to the tooltips of each row in a ListView control in WPF. (I'm using C# 4.0.) Since I've not done this sort of thing before, I've started a smaller, simpler app to get the ideas down before I attempt to use them in my main WPF app.
One of my concerns is the amount of data that could potentially come down. For that reason I thought I would use LINQ to SQL, which uses deferred execution. I thought that would help by not pulling down the data until the user passes their mouse over the relevant row. To do this, I'm going to use a separate function that assigns the values to the tooltip from the database, based upon the parameters I need to pass to the relevant stored procedures. I'm doing 2 queries using LINQ to SQL, using 2 different stored procedures, and assigning the results to 2 different DataGrids.
Even though I know that LINQ to SQL does use deferred execution, I'm beginning to wonder if some of the code I'm writing may defeat my whole intent of using LINQ to SQL. For example, in testing in my simpler app, I am choosing several different values to see how it works. One selection of values brought no data back, as there was no data for the given parameters. I thought this could potentially cause the user confusion, so I thought I would check the Count property of the list that I assign from running the DBML associated method (related to the stored procedure). Thinking about it, I would think it would be necessary for LINQ to run the query, in order to give me a result for the Count property. Am I not correct?
If I eliminate the call to the list's Count property, I'm still wondering if I might have a problem; might the query still be executed because I'm associating the tooltip with the control via a function call?
You are correct: when you call the Count property, it iterates over the result set. I'm not clear on your last question, but the LINQ query probably gets executed at the point where you populate your DataGrids, way after the tooltip comes into play.
EDIT: However, this does not mean there is anything wrong with deferred execution or your use of it; it executes at the latest possible stage, right when you need the data. If you still want to check for results ahead of actually fetching all the data, you could have a simple LINQ to SQL query that checks Any(). (Actually, Any() is probably what you want more than Count > 0.)
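For example, with a DBML-generated stored procedure wrapper (GetImagesForVideo is a made-up name), the check could be as simple as:

// Any() still executes the query, but it only needs to read the first row of the result.
var rows = dataContext.GetImagesForVideo(videoId);
bool hasData = rows.Any();
// int count = rows.Count();   // by contrast, this would enumerate every row returned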
You should use Any(), not Count(), but even Any() will cause the query to be executed - after all, it can't determine whether or not there are any rows in the result set without executing the query. But there's executing the query, and there's fetching the result set. Any() will fetch one row, Count() will fetch them all.
That said, I think that having a non-instantaneous operation that occurs on mouseover is just a bad idea. There was a build of Outlook, once, that displayed a helpful tooltip when you moused over the Print button. Less helpfully, it got the data for that tooltip by calling the system function that finds out what printers are available. So you'd be reaching for a menu, and the button would grab the mouse pointer and the UI would freeze for two seconds while it went out and figured out how to display a tooltip that you weren't even asking for. I still hate this program today. Don't be this guy.
A better approach would be to get your tooltip data asynchronously after populating the visible data on the screen. It's easy enough to create a BackgroundWorker that fetches the data into a DataTable, and then make the DataTable available to the view models in the RunWorkerCompleted event handler. (Do it there so that you don't update UI-bound data from the background thread; RunWorkerCompleted runs back on the UI thread.) You can implement a ToolTip property in your view model that returns a default value (probably null, but maybe something like "Fetching data...") if the DataTable containing tool tip data is null, and that calculates the value if it's not. That should work admirably. You can even implement property-change notification so that the ToolTip will still get updated if the user keeps the mouse pointer over it while you're fetching the data.
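A rough sketch of that shape, where the class, property and method names are placeholders rather than an existing API:

using System.ComponentModel;
using System.Data;

public class RowViewModel : INotifyPropertyChanged
{
    private DataTable _toolTipData;   // stays null until the background fetch completes

    public event PropertyChangedEventHandler PropertyChanged;

    public void BeginLoadToolTipData()
    {
        var worker = new BackgroundWorker();
        worker.DoWork += (s, e) => e.Result = QueryToolTipData();   // runs on a thread-pool thread
        worker.RunWorkerCompleted += (s, e) =>
        {
            _toolTipData = (DataTable)e.Result;                     // back on the UI thread here
            OnPropertyChanged("ToolTip");                           // bindings re-read the property
        };
        worker.RunWorkerAsync();
    }

    public object ToolTip
    {
        get { return _toolTipData == null ? "Fetching data..." : BuildToolTip(_toolTipData); }
    }

    private DataTable QueryToolTipData()
    {
        // placeholder: run the stored procedure / LINQ query and fill a DataTable
        return new DataTable();
    }

    private object BuildToolTip(DataTable data)
    {
        // placeholder: turn the rows into whatever the tooltip should display
        return data.Rows.Count + " row(s)";
    }

    private void OnPropertyChanged(string name)
    {
        var handler = PropertyChanged;
        if (handler != null) handler(this, new PropertyChangedEventArgs(name));
    }
}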
Alex is correct that calling Count() or Any() will enumerate the LINQ expression causing the query to execute. I would recommend re-thinking your design as you probably don't want a query to the database executed every time the user moves his/her mouse. There is also the issue of the delay to query the database. What might be instantaneous on your dev box with a local database might have a multi-second delay on a heavily loaded server. I would recommend creating a DisplayTooltip() function that takes a lazily evaluated LINQ expression. You can then cache the results or apply other heuristics to decide whether you should actually be querying the database or not.
I have a collection of "active objects", that is, objects that need to periodically update themselves. In turn, these objects should be used to update a WPF-based GUI.
In the past I would just have each object include its own thread, but that only makes sense when working with a finite number of objects with well-defined life cycles. Now I'm using objects that only exist when needed by a form, so the life cycle is unpredictable. Also, I can have dozens of objects all making database and web service calls.
Under normal circumstances the update interval is 1 second, but it can take up to 30 seconds due to timeouts.
So, what design would you recommend?
You may use one dispatcher (scheduler) for all active objects, or one per group of them. The dispatcher can process high-priority tasks first, then the other ones.
You can look at this article about long-running active objects, which includes code showing how to do it. In addition, I recommend looking at the Half-Sync/Half-Async pattern.
If you have questions, you're welcome to ask.
I am not an expert, but I would just have the objects fire an event indicating when they've changed. The GUI can then refresh the necessary parts of itself (easy when using data binding and INotifyPropertyChanged) whenever it receives an event.
I'd probably try to generalize out some sort of data bus, if possible, and when objects are 'active' have them add themselves to a list of objects to be updated. I'd especially be tempted to use this pattern if the objects are backed by a database, as that way you can aggregate multiple queries, instead of having to do a single query per each object.
If there end up being no listeners for a specific object, no big deal, the data just goes nowhere.
The core updater code can then use a single timer (or multiple, or whatever is appropriate) to determine when to get updates. Doing this as more of a dataflow, and less of a 'state update' will probably save a lot of sanity in the end.
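As a sketch of that single-updater idea (IActiveObject, Update() and the one-second interval are assumptions, not an existing API):

using System;
using System.Collections.Generic;
using System.Threading;

public interface IActiveObject
{
    void Update();   // does the database / web service call and raises its own change notification
}

public class ActiveObjectUpdater : IDisposable
{
    private readonly object _sync = new object();
    private readonly List<IActiveObject> _objects = new List<IActiveObject>();
    private readonly Timer _timer;   // System.Threading.Timer: callbacks run on the thread pool

    public ActiveObjectUpdater()
    {
        _timer = new Timer(_ => UpdateAll(), null, TimeSpan.Zero, TimeSpan.FromSeconds(1));
    }

    public void Register(IActiveObject obj)   { lock (_sync) _objects.Add(obj); }      // forms add themselves when opened
    public void Unregister(IActiveObject obj) { lock (_sync) _objects.Remove(obj); }   // and remove themselves when closed

    private void UpdateAll()
    {
        IActiveObject[] snapshot;
        lock (_sync) snapshot = _objects.ToArray();
        // A real implementation would also guard against a slow (30 s) update overlapping the next tick.
        foreach (var obj in snapshot)
            obj.Update();   // off the UI thread; the object notifies the GUI itself (e.g. via INotifyPropertyChanged)
    }

    public void Dispose() { _timer.Dispose(); }
}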
Basically I'm working on a program that processes a lot of large video and image files, and I'm struggling with the memory management side of it because I've never dealt with anything quite like this before.
For instance, it stores all these images in a database, and loads a list of videos, and then you can switch between the videos and view images from the video. Right now, it's keeping all of those images in memory all the time, which is eating up a lot of space. I know I can lazy load the images, but once you've switched back and forth you get all of them stuck in memory.
I want to take advantage of the WPF databinding functionality and MVVM as much as possible, but if I need to look at a different architecture I will.
I'm just looking for general advice, tips, links to articles, or anything that could help.
One of the things you could look at is data virtualization, which is not provided in WPF by default (they provide UI virtualization instead). Data virtualization can say 'load and bind the data for an item / range of items while visible, then unload when not visible'.
Here's a great article that describes a concrete implementation that you may be able to use as-is or adapt:
http://www.codeproject.com/KB/WPF/WpfDataVirtualization.aspx
It sounds like the main problem you're having is not so much the performance-intensiveness of the application (which things like fixed-size buffers and static allocation will help with) but its overall memory footprint. The way to control that is with virtualization.
Lazy loading gets you halfway there: you don't actually create the object until something needs it. That's fine, but the longer the user works with the application and the more objects he visits in the UI, the more objects get created, and eventually the application runs out of memory.
So you want to throw away objects that the user doesn't need anymore. Figuring out which objects the user doesn't need can be a hard problem, but it can also be as easy as assuming that the user doesn't need the object that he used least recently. You use a least-recently-used (LRU) cache to do this.
This is totally consistent with the MVVM pattern. In your view class, you make your property getter for the object use this pseudocode:
if object hasn't been loaded
load object
add object to the LRU cache (whether you loaded it or not)
return object
The LRU cache I wrote keeps a simple queue of the objects it contains. When you add an object to the cache, if it's not already in the queue it gets added to the back, and if it is already in the queue it gets moved to the back.
If the queue's at its capacity when you add an object, it pops off whatever is at the front of the queue (which is the one that was used least recently) and raises the DiscardingOldestItem event.
This event is the object's chance to tell anything that holds a reference to it (i.e. the view object that it's a property of) that it needs to be discarded (probably by raising an event of its own). The view object's event handler should first raise the PropertyChanged event. If the property getter gets called when it does this, there's a binding somewhere that's still looking at the property, so it shouldn't be discarded yet. (Also, since the getter was called, the object just got moved to the back of the queue.) Otherwise, it can be thrown away.
(Note that if you have more objects visible in the UI than the cache can hold, this little dance becomes an infinite loop and you'll get a stack overflow.)
A more sophisticated approach would have the LRU cache start discarding old items when the application started running low on memory (it uses a fixed capacity right now). That's a straightforward change, but if you make that change, the scenario described in the previous paragraph is something you need to give more thought to; one very large object could result in the whole UI going kablooey.
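A minimal sketch of such a cache (illustrative only, not the exact class referred to above) could look like this:

using System;
using System.Collections.Generic;

public class LruCache<T>
{
    private readonly int _capacity;
    private readonly LinkedList<T> _queue = new LinkedList<T>();   // front = least recently used

    public event Action<T> DiscardingOldestItem;

    public LruCache(int capacity) { _capacity = capacity; }

    public void Add(T item)
    {
        var node = _queue.Find(item);
        if (node != null)
        {
            _queue.Remove(node);    // already cached: it will be re-added at the back
        }
        else if (_queue.Count >= _capacity)
        {
            var oldest = _queue.First.Value;
            _queue.RemoveFirst();   // evict the least recently used item
            var handler = DiscardingOldestItem;
            if (handler != null) handler(oldest);   // the evicted object can now tell its holders it may be discarded
        }
        _queue.AddLast(item);       // most recently used goes to the back
    }
}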
It seems that to increase raw performance you would actually want to avoid patterns. They have their uses, don't get me wrong, but if you're trying to blast video at the highest performance possible, the last thing you need to do is introduce abstraction layers that are designed to produce higher-quality code, not increase application performance.
This article on InformIT has a lot of good info on the subject, although it is more C and C++. It suggests:
Static Allocation Pattern: Allocates memory up front
Pool Allocation Pattern: Preallocates pools of needed objects
Fixed Sized Buffer Pattern: Allocates memory in same-sized blocks
Smart Pointer Pattern: Makes pointers reliable
Garbage Collection Pattern: Automatically reclaims lost memory
Garbage Compactor Pattern: Automatically defragments and reclaims memory
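As a tiny C# illustration of the pool idea (the article's examples are C/C++, and the names here are made up):

using System.Collections.Generic;

// Pool Allocation Pattern sketch: frame buffers are allocated once up front and
// reused, instead of being allocated (and garbage collected) per video frame.
public class FrameBufferPool
{
    private readonly Stack<byte[]> _free = new Stack<byte[]>();
    private readonly int _bufferSize;

    public FrameBufferPool(int bufferCount, int bufferSize)
    {
        _bufferSize = bufferSize;
        for (int i = 0; i < bufferCount; i++)
            _free.Push(new byte[bufferSize]);   // preallocate everything up front
    }

    public byte[] Rent()
    {
        return _free.Count > 0 ? _free.Pop() : new byte[_bufferSize];   // fall back to a fresh allocation if exhausted
    }

    public void Return(byte[] buffer)
    {
        _free.Push(buffer);   // make the buffer available for reuse
    }
}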
"I know I can lazy load the images,
but once you've switched back and
forth you get all of them stuck in
memory."
This is not true, to my understanding. The images can get garbage collected just like anything else, by removing all references. Are you sure you don't have a reference to them somewhere? Try a memory profiler like memprofiler or ANTS to see what's happening.
To those who have found this question looking for general patterns (not WPF) to reduce memory, the famous one (which I have never seen used!) is the Flyweight pattern.
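For what it's worth, a tiny sketch of the Flyweight idea in C# (the types here are made up): identical immutable data is shared through a factory instead of being duplicated per use.

using System.Collections.Generic;

public class Glyph
{
    public char Character { get; private set; }
    public Glyph(char character) { Character = character; }
}

public static class GlyphFactory
{
    private static readonly Dictionary<char, Glyph> _cache = new Dictionary<char, Glyph>();

    public static Glyph Get(char c)
    {
        Glyph glyph;
        if (!_cache.TryGetValue(c, out glyph))
        {
            glyph = new Glyph(c);   // created once: the shared, intrinsic state
            _cache[c] = glyph;
        }
        return glyph;               // every caller reuses the same instance
    }
}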