I'm attempting to pan around the contents of a ScrollViewer in the same way you would pan around in a PDF document (scroll to zoom in/out, click + drag to pan). ScrollViewer has this functionality built in for touch events (PanningMode); however, this doesn't seem to translate to click + drag with the mouse. Is there a way to tell it to do this, or to emulate this functionality?
Panning is enabled internally by four virtual methods implemented by ScrollViewer:
OnManipulationCompleted
OnManipulationDelta
OnManipulationInertiaStarting
and OnManipulationStarting
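For orientation, here is a quick sketch of what overriding a couple of these on a ScrollViewer subclass looks like (illustration only; these overrides fire for touch/manipulation input, which is exactly the limitation in question):
// Sketch: a ScrollViewer subclass that just logs two of the manipulation overrides.
public class LoggingScrollViewer : ScrollViewer
{
    protected override void OnManipulationStarting(ManipulationStartingEventArgs e)
    {
        Debug.WriteLine("ManipulationStarting");
        base.OnManipulationStarting(e);   // keep the built-in panning behaviour
    }

    protected override void OnManipulationDelta(ManipulationDeltaEventArgs e)
    {
        Debug.WriteLine("ManipulationDelta: " + e.DeltaManipulation.Translation);
        base.OnManipulationDelta(e);
    }
}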
So where are these virtual methods defined? Let's go up the hierarchy. We see that OnManipulationCompleted is called on UIElement from within OnManipulationCompletedThunk (I'm sure there are corresponding thunks for the rest as well).
Everything is still private at this point, and we want something to tap into. Unfortunately this is where both Reflector and ILSpy seemed to fail me (actually they didn't; the call site is in a different DLL, PresentationCore, which I didn't have loaded). Once we look in PresentationCore, knowing that routed events and dependency properties are registered statically, we find the .cctor. There are a couple of lines of interest here.
ManipulationCompletedEvent = Manipulation.ManipulationCompletedEvent.AddOwner(typeof(UIElement));
and
EventManager.RegisterClassHandler(typeof(UIElement), ManipulationCompletedEvent, new EventHandler(UIElement.OnManipulationCompletedThunk));
We see that OnManipulationCompletedThunk is the registered callback for this class handler, which listens for ManipulationCompletedEvent. Also, ManipulationCompletedEvent is not originally defined on UIElement; it is borrowed from the Manipulation static class via AddOwner.
Doing a search for the Manipulation class, I see that it's in the System.Windows.Input namespace within the same assembly. Is it public? Yep. Cool! So at this point I know that if I fire the ManipulationCompletedEvent or any of its buddies, it will eventually call into ScrollViewer. http://msdn.microsoft.com/en-us/library/system.windows.input.manipulation.aspx
In the documentation for this public static class, I see there are a bunch of interesting and possibly useful methods. The only one that isn't readily obvious is AddManipulator. What does this thing do? Clicks.. reads.. oh, "Each touch point is an IManipulator object. For example, if you use two fingers to resize an object, a TouchDevice, which implements IManipulator, is created for each finger." So TouchDevice is an IManipulator. Maybe that will give me some idea of how to create my own manipulator.
The properties on TouchDevice give some clue as to the features it supports. It's sort of like a MouseDevice (it has the concept of capture, DirectlyOver, etc.), but it doesn't support manipulation in the same way; rather, we want to do manipulation in response to mouse events. Let's look harder at TouchDevice to see how it really implements some of these features.
The IManipulator methods TouchDevice implements are GetPosition and ManipulationEnded.
GetPosition returns this.GetTouchPoint(relativeTo).Position;
relativeTo is a parameter
ManipulationEnded calls OnManipulationEnded, forwarding a bool parameter named cancel. Not sure what cancel does yet; it turns out it's not used, which is weird but fine. This basically sets capture to null. Kind of at the end of the rabbit hole here, so we'll have to back up and reevaluate.
All I really want to do is raise the events manually on the UIElement and see if it works. The RaiseEvent method on UIElement should work for this. Wait, I missed something: all the events defined on the Manipulation class are marked as internal.
Clearly these events are meant only for internal consumption, and short of using reflection we don't have an avenue there.
I think using the Manipulation feature may be overkill for what you're trying to do. There is probably a way to implement this with just drag events and a canvas.
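For example, here is a minimal sketch of that simpler approach, applied directly to the ScrollViewer's offsets (the handler wiring and names here are mine, not from the question; put this in e.g. the window's constructor, with scrollViewer defined in XAML):
// Sketch: click + drag panning without touching the Manipulation events.
Point dragStart = new Point();
Point scrollStart = new Point();

scrollViewer.PreviewMouseLeftButtonDown += (s, e) =>
{
    dragStart = e.GetPosition(scrollViewer);
    scrollStart = new Point(scrollViewer.HorizontalOffset, scrollViewer.VerticalOffset);
    scrollViewer.CaptureMouse();
};

scrollViewer.PreviewMouseMove += (s, e) =>
{
    if (scrollViewer.IsMouseCaptured)
    {
        // Moving the mouse pans the content, like grabbing a PDF page.
        Vector delta = dragStart - e.GetPosition(scrollViewer);
        scrollViewer.ScrollToHorizontalOffset(scrollStart.X + delta.X);
        scrollViewer.ScrollToVerticalOffset(scrollStart.Y + delta.Y);
    }
};

scrollViewer.PreviewMouseLeftButtonUp += (s, e) => scrollViewer.ReleaseMouseCapture();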
Also, found this while googling and thought it might be of some relevance http://multitouchvista.codeplex.com/
I have inherited a large MFC application which contains a CComboBox subclass that overrides OnPaint. Currently it does all its drawing by hand (with lines and rectangles), and renders a combo box that looks decidedly Windows 98-style. However, it otherwise works great and provides a lot of useful custom functionality that we rely on, and rewriting the entire control is probably not an option.
I would like to modernize it so that the OnPaint draws in Aero style where available (falling back to the old code when modern theming is unavailable). I've done this with some other custom controls we have, like buttons, and it works great for our purposes. I know there are some tiny behaviors that it won't get right, like gentle highlights on mouse-hover, but that's not a big deal for this app.
I have access to the CVisualStylesXP class, so I've already got the infrastructure to make calls like OpenThemeData, GetThemeColor or DrawThemeBackground pretty easily (via LoadLibrary, so we don't force Vista as a minimum system). Unfortunately, I don't know the proper sequence of calls to get a nice-looking combo box with the theme-appropriate border and drop-down button.
Anyone know what to do here?
Honestly, I don't know why they originally tried to override OnPaint. Is there a good reason? I'm thinking that at least 99% of the time you are just going to want to override the drawing of the items in the ComboBox. For that, you can override DrawItem, MeasureItem, and CompareItem in a derived combo box to get the functionality you want. In that case, the OS will draw the non-item content of the control correctly for each version of the OS.
I think your best shot, without diving into the depths of XP theming and various system metrics, is to take a look at this project: http://www.codeproject.com/Articles/2584/AdvComboBox-Version-2-1
Check the OnPaint of the CAdvComboBox class; there is a full implementation of the control's repainting, including the XP-theme-related issues.
Not sure if it's the same situation - but when I faced this problem (in my case with subclassed CButtons), solving it only required changing the control declaration to a pointer and creating the control dynamically.
Let's assume that your subclassed control is called CComboBoxExt.
Where you had
CComboBoxExt m_cComboBoxExt;
You'll now have
CComboBoxExt* m_pcComboBoxExt;
And in OnInitDialog of the window where the control is placed, you create it using
m_pcComboBoxExt = new CComboBoxExt();
m_pcComboBoxExt->Create(...)
Since this is now a pointer, don't forget to call DestroyWindow() and delete the pointer on termination.
This solved my particular problem - if your control is declared in the same way, consider giving it a try.
I have been getting more involved with WPF for about a year now. A lot of things are new and sometimes it is hard to get my head wrapped around it.
At the same time I am rereading the GOF Design Patterns book.
A few times I would stop in the middle because I would realize that a certain pattern is the very one used in some WPF functionality. Whenever such a realization hits me, I feel like my understanding of the related WPF principle just took a big leap. It's kind of like an aha-effect.
I also realized that I had a much easier time understanding Prism for example because the documentation does such a great job at explaining the patterns involved.
So here is my "question" (more like an effort): in order to help us all to understand WPF better, it would be great if anyone who also "spotted" a design pattern in WPF could give a short explanation.
One pretty obvious example that I found is the Routed Event:
If an event is detected by a child control and no handler has been specified, it passes it along to its parent, and so on, until it is finally handled or no parent is found anymore.
Let's say we have an image on a button that is inside a StackPanel that is inside a window. If the user clicks the image, the event will either be handled by it (if handling code has been specified) or "bubble" up until one of the controls handles it. So each control will get a chance to react, in this order:
Image
Button
StackPanel
Window
Once a control handles it, the bubbling will stop. This is the short explanation; for a more precise one, consult the WPF literature.
This kind of functionality represents the "Chain of Responsibility" design pattern, which states that if there is a request, it gets passed along a chain of responsibility to give each object in it a chance to handle it. The sender of the request has no idea who will handle it, which ensures decoupling. For a more thorough explanation, follow the link.
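A tiny illustration of this in code (my own sketch, not from the original post): the window can listen for a Click that bubbles up from any button it contains, and marking the event as handled stops the chain.
// Sketch: handling a bubbled routed event at an ancestor, Chain of Responsibility style.
public partial class MainWindow : Window
{
    public MainWindow()
    {
        InitializeComponent();
        // Listen for any Button.Click that bubbles up to the window.
        AddHandler(Button.ClickEvent, new RoutedEventHandler(OnAnyClick));
    }

    private void OnAnyClick(object sender, RoutedEventArgs e)
    {
        // e.OriginalSource is the element the event started from (e.g. the Image).
        MessageBox.Show("Handled at the window, raised by " + e.OriginalSource);
        e.Handled = true;   // mark it handled so the event does not continue along the chain
    }
}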
The purpose here is merely to show how this seemingly old (10+ years) idea found its way into our current technology, and to offer another way of looking at it.
I think this is enough for a start and hope more parallels will be collected here.
Cheers, Thorsten
I don't think it is specific to WPF, but the observer design pattern seems to be the foundation on which all event handling in .NET and WPF is based.
The observer design pattern is described as "Define a one-to-many dependency between objects so that when one object changes state, all its dependents are notified and updated automatically." In .NET you subscribe to such a change in state with the += operator and subsequently unsubscribe with the -= operator.
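A small sketch of that in code (my own example):
// Sketch: the observer pattern via .NET events; the button is the subject,
// the handler is the observer. Somewhere in the window's code:
myButton.Click += OnMyButtonClick;   // subscribe (attach an observer)
myButton.Click -= OnMyButtonClick;   // unsubscribe (detach the observer)

private void OnMyButtonClick(object sender, RoutedEventArgs e)
{
    // Called whenever the observed object (the button) raises Click.
}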
I'd say CommandBindings are pretty important and fundamental to the way I develop.
WPF Tutorial - Command Bindings and Custom Commands
MSDN Overview
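For example, wiring up one of the built-in ApplicationCommands in a window's constructor looks roughly like this (a sketch, not taken from the linked articles):
// Sketch: binding ApplicationCommands.Open on a window.
CommandBindings.Add(new CommandBinding(
    ApplicationCommands.Open,
    (s, e) => MessageBox.Show("Open executed"),   // Executed handler
    (s, e) => e.CanExecute = true));              // CanExecute handler

// Any Button or MenuItem with Command="ApplicationCommands.Open" now invokes
// the Executed handler and enables/disables itself based on CanExecute.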
I've made it a personal rule to inherit every UI control before using it. In a previous life I'd always considered this one of the less useful things you could do because the justification always seemed to be "I might want to change the font on all the buttons at once"... a motivation that never paid off... once... ever.
Two recent projects have changed my mind about the practice, though. In the first, we needed a consistent "ValueChanged" event so that we could easily implement a "dirty" flag on our forms without a massive switch statement to choose between a Textbox's "TextChanged" event, or a ListBox's "SelectedIndexChanged" etc. We just wanted one consistent thing to listen for on all controls, and subclassing the built-in controls bought us that pretty easily.
In the second project, every effort was made to get by with the base controls because the UI was expected to be pretty simple, but a few months in, it became obvious that they just weren't going to cut it anymore, and we purchased the Telerik control suite. If we had inherited all the controls to begin with, then changing our derived controls to inherit from the Telerik controls would have applied the changes for us globally. Instead, we had to do some searching and replacing in all the form designers.
So here's my question: What are the relative strengths and weaknesses of
Simply adding a Class, and making it inherit from a control.
Adding a new "Custom Control" and inheriting.
Adding a new "Component" and inheriting.
All three have the same effect in the end: you get a new type of Button to put on your forms. I've seen all three used by different people, and everyone seems to think that their way is the best. I thought I should put this discussion on StackOverflow, and maybe we can nail down a consensus as a community as to which one is the "right" way.
Note: I already have my personal opinion of which is "right", but I want to see what the world thinks.
If both 1 & 2 are inheriting, then they are functionally identical, no? Should one of them be encapsulating a control? In which case you have a lot of pass-thru members to add. I wouldn't recommend it.
Personally, I simply wouldn't add extra inheritance without a very good reason... for example, the "changed event" could perhaps have been handled with some overloads etc. With C# 3.0 this gets even cleaner thanks to extension methods - i.e. you can have things like:
public static class ControlExtensions {
    public static void AddChangeHandler(
        this TextBox textbox, EventHandler handler) {
        textbox.TextChanged += handler;
    }
    public static void AddChangeHandler(
        this SomethingElse control, EventHandler handler) {
        control.Whatever += handler;
    }
}
and just use myControl.AddChangeHandler(handler); (relying on the static type of myControl to resolve the appropriate extension method).
Of course, you could take a step back and listen to events on your own model, not the UI - let the UI update the model in a basic way, and have logic in your own object model (that has nothing to do with controls).
I use composition. I simply create a new UserControl and add the controls I need. This works fine, because:
I never use that many properties anyway, so pass-through methods are kept to a minimum.
I can start with a naive approach and refine it later.
Properties for look and feel should be set consistently across the site. Now I can set them once and for all.
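A rough sketch of what I mean (all names here are made up):
// Sketch: composition instead of inheritance; wrap the inner control in a UserControl
// and expose only what the rest of the application actually needs.
public partial class SearchBox : UserControl   // designer/XAML contains a TextBox named innerTextBox
{
    public event EventHandler ValueChanged;

    public SearchBox()
    {
        InitializeComponent();
        innerTextBox.TextChanged += delegate { OnValueChanged(); };
    }

    // One consistent "dirty" notification, regardless of which control is wrapped inside.
    private void OnValueChanged()
    {
        EventHandler handler = ValueChanged;
        if (handler != null) handler(this, EventArgs.Empty);
    }

    // Minimal pass-through property; add more only as they are actually needed.
    public string Value
    {
        get { return innerTextBox.Text; }
        set { innerTextBox.Text = value; }
    }
}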
I have an MVVM application. In one of the ViewModels is the 'FindFilesCommand' which populates an ObservableCollection. I then implement a 'RemoveFilesCommand' in the same ViewModel. This command then brings up a window to get some more user input.
Where/what is the best way to do this whilst keeping with the MVVM paradigm? Somehow
doing:
new WhateverWindow( ).Show( )
in the ViewModel seems wrong.
Cheers,
Steve
I personally look at this scenario as one where the main window view model wants to surface a task for the end user to complete.
It should be responsible for creating the task, and initializing it. The view should be responsible for creating and showing the child window, and using the task as the newly instantiated window's view model.
The task can be canceled or committed. It raises a notification when it is completed.
The window uses the notification to close itself. The parent view model uses the notification to do additional work once the task has committed if there is followup work.
I believe this is as close as you can get to the natural/intuitive thing people do with the code-behind approach, but refactored to split the UI-independent concerns into a view model, without introducing additional conceptual overhead such as services etc.
I have an implementation of this for Silverlight. See http://www.nikhilk.net/ViewModel-Dialogs-Task-Pattern.aspx for more details... I'd love to hear comments/further suggestions on this.
In the Southridge Realty example by Jaime Rodriguez and Karl Shifflett, they create the window in the view model, more specifically in the execute part of a bound command:
protected void OnShowDetails(object param)
{
    // DetailsWindow window = new DetailsWindow();
    ListingDetailsWindow window = new ListingDetailsWindow();
    window.DataContext = new ListingDetailsViewModel(param as Listing, this.CurrentProfile);
    ViewManager.Current.ShowWindow(window, true);
}
Here is the link:
http://blogs.msdn.com/jaimer/archive/2009/02/10/m-v-vm-training-day-sample-application-and-decks.aspx
I guess that's not a big problem. After all, the ViewModel acts as the 'glue' between the view and the business layer/data layer, so IMHO it's normal for it to be somewhat coupled to the view (UI)...
Onyx (http://www.codeplex.com/wpfonyx) will provide a fairly nice solution for this. As an example, look at the ICommonDialogProvider service, which can be used from a ViewModel like this:
ICommonDialogProvider provider = this.View.GetService<ICommonDialogProvider>();
IOpenFileDialog openDialog = provider.CreateOpenFileDialog();
// configure the IOpenFileDialog here... removed for brevity
openDialog.ShowDialog();
This is very similar to using the concrete OpenFileDialog, but is fully testable. The amount of decoupling you really need would be an implementation detail for you. For instance, in your case you may want a service that entirely hides the fact that you are using a dialog. Something along the lines of:
public interface IRemoveFiles
{
string[] GetFilesToRemove();
}
IRemoveFiles removeFiles = this.View.GetService<IRemoveFiles>();
string[] files = removeFiles.GetFilesToRemove();
You then have to ensure the View has an implementation for the IRemoveFiles service, for which there's several options available to you.
Onyx isn't ready for release yet, but the code is fully working and usable, at the very least as a reference point. I hope to stabilize the V1 interface very shortly, and will release as soon as we have decent documentation and samples.
I have run into this issue with MVVM as well. My first thought is to try to find a way to not use the dialog. Using WPF it is a lot easier to come up with a slicker way to do things than with a dialog.
When that is not possible, the best option seems to be to have the ViewModel call a Shared class to get the info from the user. The ViewModel should be completely unaware that a dialog is being shown.
So, as a simple example, if you needed the user to confirm a deletion, the ViewModel could call DialogHelper.ConfirmDeletion(), which would return a boolean of whether the user said yes or no. The actual showing of the dialog would be done in the Helper class.
For more advanced dialogs, returning lots of data, the helper method should return an object with all the info from the dialog in it.
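A sketch of what such a helper might look like (ConfirmDeletion is from the example above; the body is my own, assuming a plain WPF MessageBox is acceptable):
// Sketch: the ViewModel only sees a method call and a return value;
// the helper owns the fact that a dialog is involved.
public static class DialogHelper
{
    public static bool ConfirmDeletion()
    {
        MessageBoxResult result = MessageBox.Show(
            "Are you sure you want to delete the selected items?",
            "Confirm Deletion",
            MessageBoxButton.YesNo);
        return result == MessageBoxResult.Yes;
    }
}

// In the ViewModel:
// if (DialogHelper.ConfirmDeletion()) { /* remove the files */ }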
I agree it is not the smoothest fit with the rest of MVVM, but I haven't found any better examples yet.
I'd have to say, Services are the way to go here.
The service interface provides a way of returning the data. Then the actual implementation of that service can show a dialog or whatever to get the information needed in the interface.
That way, you can mock the service interface in your tests, and the ViewModel is none the wiser. As far as the ViewModel is concerned, it asked a service for some information and it received what it needed.
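For example (a sketch with made-up names, along the same lines as the IRemoveFiles service above), a test can substitute a canned implementation:
// Sketch: the ViewModel depends only on the interface, so a test can supply
// a fake that never shows any UI.
public interface IRemoveFilesDialog
{
    string[] GetFilesToRemove();
}

public class FakeRemoveFilesDialog : IRemoveFilesDialog
{
    public string[] GetFilesToRemove()
    {
        return new[] { @"C:\temp\a.txt", @"C:\temp\b.txt" };
    }
}

// In a test (assuming the ViewModel takes the service in its constructor):
// var viewModel = new RemoveFilesViewModel(new FakeRemoveFilesDialog());
// viewModel.RemoveFilesCommand.Execute(null);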
What we are doing is something like what is described here:
http://www.codeproject.com/KB/WPF/DialogBehavior.aspx?msg=3439968#xx3439968xx
The ViewModel has a property called ConfirmDeletionViewModel. As soon as I set the property, the behavior opens the dialog (modal or not) and uses the ConfirmDeletionViewModel as its view model. In addition, I pass in a delegate that is executed when the user wants to close the dialog; this is basically a delegate that sets the ConfirmDeletionViewModel property back to null.
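Roughly, the view model side looks like this (a sketch of the idea; RaisePropertyChanged and the ConfirmDeletionViewModel constructor are placeholders, not code from the linked article):
// Sketch: the parent view model exposes a child view model as a property; an attached
// behavior in the view watches it and opens/closes the dialog accordingly.
private ConfirmDeletionViewModel _confirmDeletion;

public ConfirmDeletionViewModel ConfirmDeletion
{
    get { return _confirmDeletion; }
    set { _confirmDeletion = value; RaisePropertyChanged("ConfirmDeletion"); }
}

private void OnRemoveFiles()
{
    // Setting the property makes the behavior show the dialog; the delegate passed in
    // sets it back to null, which makes the behavior close the dialog again.
    ConfirmDeletion = new ConfirmDeletionViewModel(() => ConfirmDeletion = null);
}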
For dialogs of this sort, I define the dialog as a nested class of the FindFilesCommand. If the basic dialog is used among many commands, I define it in a module accessible to those commands and have each command configure the dialog accordingly.
The command objects are enough to show how the dialog interacts with the rest of the software. In my own software the command objects reside in their own libraries, so dialogs are hidden from the rest of the system.
Doing anything fancier is overkill in my opinion. In addition, trying to keep it at the highest level often involves creating a lot of extra interfaces and registration methods. It is a lot of coding for little gain.
As with any framework, slavish devotion will lead you down some strange alleyways. You need to use judgment to see if there are other techniques to use when you get a bad code smell. Again, in my opinion, dialogs should be tightly bound and defined next to the commands that use them. That way, five years later I can come back to that section of the code and see everything that command is dealing with.
Again, in the few instances where a dialog is useful to multiple commands, I define it in a module common to all of them. However, in my software maybe 1 out of 20 dialogs is like this, the main exception being the file open/save dialog. If a dialog is used by dozens of commands, then I would go the full route of defining an interface, creating a form to implement that interface, and registering that form.
If localization for international use is important to your application, you will need to make sure you account for that with this scheme, as the forms are not all in one module.
Background: I have a little video playing app with a UI inspired by the venerable Sasami2k, just updated to use VMR9 (i.e. Direct3D9 with DirectShow) and be less unstable. Currently, it's a C++ app using raw Win32, through necessity: none of the various toolkits are worth a damn. WPF, in particular, was not possible, due to its airspace restrictions.
OK, so, now that D3DImage exists it might be viable to mix and match D3D/VMR9/DirectShow and WPF. Given past frustrations with Win32's inextensibility, this seems like a good thing.
But y'know, I'm falling at the first hurdle here.
With Win32 I have created (very easily) a borderless window that's resizable, resizes proportionately, snaps to the screen edges, and takes up the whole screen (including taskbar area) when maximized. It's a video app, so these are all pretty desirable properties.
OK, so, how to do the same with WPF?
In Win32, I use:
WM_GETMINMAXINFO to control the maximize behaviour
WM_NCHITTEST to control the resize borders
WM_MOVING to control the snap-to-screen-edges
WM_SIZING to control the resize aspect ratio
However, looking at WPF it seems that the various events arrive too late, unless I'm misunderstanding the documentation?
For example, I don't know when I'm mid-move, as LocationChanged says it fires only once the window has moved (which is too late).
Similarly, it appears that StateChanged only fires once the window has been restored/maximized (when I need the information prior to the maximize, to tell the system the correct maximize size).
And I seem to be completely overlooking where the system tells me about resizes. Likewise the hit testing.
So, uh, am I missing something here, or do I have no choice but to drop back to hooking the wndproc of this thing anyway? Can I do what I want without hooking the WndProc?
If I have to use the WndProc I might as well stick with my existing codebase; I want to have simpler, cleaner UI code, and moving away from the WndProc is fundamental to this.
If I do have to hook the WndProc, I have to wonder--why? Win32 has got the sizing/sized, moving/moved, poschanging/poschanged window messages, and they're all useful. Why wouldn't WPF replicate the same set of events? It seems like an unnecessary gap in functionality.
Plus, it means that WPF is tied to a specific USER32-dependent implementation. This means that MS can't (in Windows 7 or 8, say) invert the display layer to make WPF "native" and emulate HWNDs and WndProcs for legacy apps--even though this is precisely what MS should be doing.
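For reference, the WndProc fallback from WPF would look roughly like this (a sketch using HwndSource.AddHook from System.Windows.Interop; exactly the kind of code I was hoping to leave behind):
// Sketch: hooking the underlying HWND's message loop from a WPF Window subclass.
protected override void OnSourceInitialized(EventArgs e)
{
    base.OnSourceInitialized(e);
    HwndSource source = (HwndSource)PresentationSource.FromVisual(this);
    source.AddHook(WndProcHook);
}

private IntPtr WndProcHook(IntPtr hwnd, int msg, IntPtr wParam, IntPtr lParam, ref bool handled)
{
    const int WM_GETMINMAXINFO = 0x0024;
    const int WM_SIZING = 0x0214;
    const int WM_MOVING = 0x0216;

    switch (msg)
    {
        case WM_GETMINMAXINFO:   // control the maximize size here
        case WM_SIZING:          // constrain the aspect ratio here
        case WM_MOVING:          // snap to the screen edges here
            break;
    }
    return IntPtr.Zero;
}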
OK, to answer my own question, I was missing Adorners (never came back in any of the searches I did, so it doesn't seem that they're as widely known as they perhaps should be).
They seem rather more complex than the WndProc overrides, unfortunately, but I think it should be possible to manhandle them into doing what I want.
And I seem to be completely overlooking where the system tells me about resizes. Likewise the hit testing.
For the resizing you're indeed missing the SizeChanged event.
AFAIK there are sadly no OnSizeChanging, OnLocationChanging or OnStateChanging events on a Window in .NET.
I saw that one, but as far as I can tell it only fires after the size has changed, whereas I need the event to fire during the resize. Unless I'm misreading the docs and it actually fires continuously?
It does not fire continuously, but you could probably use the ResizeBegin and ResizeEnd events to achieve that.
Aren't they WinForms events?
Hmm, you're right.
In code you can set the WindowStyle property to "None" and WindowState to "Maximized".
I'm not sure what the XAML would look like.
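Something like this, presumably (a sketch; the equivalent XAML attributes on the Window element would be WindowStyle="None" and WindowState="Maximized"):
// Sketch: borderless, full-screen window set up in its constructor.
public MainWindow()
{
    InitializeComponent();
    WindowStyle = WindowStyle.None;        // no chrome/border
    WindowState = WindowState.Maximized;   // covers the screen, including the taskbar area
}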
Can you perhaps override ArrangeOverride and/or MeasureOverride to make up for those missing resize events? Measure is the first pass and occurs when layout needs to adjust for a new size, so it's kind of like a size-changing event.
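Something along these lines (a sketch; whether it fires early enough for the aspect-ratio case is another question):
// Sketch: watching layout passes on a Window subclass by overriding MeasureOverride.
protected override Size MeasureOverride(Size availableSize)
{
    // Runs whenever layout needs to re-measure for a new size,
    // so it behaves somewhat like a "size changing" notification.
    Debug.WriteLine("Measuring against " + availableSize);
    return base.MeasureOverride(availableSize);
}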