Getting started with UIAutomation - WPF

I'm trying to find a good resource to get started with UIAutomation. I need to simulate mouse input in a WPF application. Are there any good examples out there? I couldn't find any, and the MSDN documentation seems too extensive.

UI Automation is not really intended to simulate mouse clicks. It is meant to expose the user interface in a programmatically-accessible fashion.
It organizes controls in a hierarchy that can be easily traversed/navigated by screen readers or similar applications, and it uses control patterns to let client applications interact with the controls.
A Button, for example, can expose the InvokePattern via its automation peer. You can simulate a click by calling the Invoke method on that pattern. This happens independently of the mouse, so there would be no mouse over/enter/leave/down events, just a Click event.
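For example, here is a minimal sketch using the managed UI Automation client API (the UIAutomationClient and UIAutomationTypes assemblies). The AutomationId "MyButton" is a placeholder for your own element:

    using System.Windows.Automation;

    class InvokeExample
    {
        static void ClickButton()
        {
            // Search the automation tree from the desktop root; in a real
            // application you would scope this to your window's element.
            AutomationElement button = AutomationElement.RootElement.FindFirst(
                TreeScope.Descendants,
                new PropertyCondition(AutomationElement.AutomationIdProperty, "MyButton"));

            object pattern;
            if (button != null &&
                button.TryGetCurrentPattern(InvokePattern.Pattern, out pattern))
            {
                // Raises the button's Click without moving the mouse, so no
                // mouse over/enter/leave/down events occur.
                ((InvokePattern)pattern).Invoke();
            }
        }
    }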

You can use the Mouse class together with UIAutomation. But as CodeNaked rightly said, you should use the UIAutomation patterns for mouse-like operations; driving the mouse directly is not good practice.
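The "mouse class" here likely refers to a test helper such as the Mouse class in Microsoft's TestApi. If you really do need raw mouse events (over/down/up rather than a pattern-based Invoke), one common fallback is the Win32 mouse_event API via P/Invoke. A rough sketch; in practice you would take the coordinates from the target element's BoundingRectangle:

    using System;
    using System.Runtime.InteropServices;

    class MouseSimulator
    {
        private const uint MOUSEEVENTF_LEFTDOWN = 0x0002;
        private const uint MOUSEEVENTF_LEFTUP = 0x0004;

        [DllImport("user32.dll")]
        private static extern bool SetCursorPos(int x, int y);

        [DllImport("user32.dll")]
        private static extern void mouse_event(uint dwFlags, uint dx, uint dy,
            uint dwData, UIntPtr dwExtraInfo);

        public static void LeftClickAt(int screenX, int screenY)
        {
            SetCursorPos(screenX, screenY);                           // move the pointer
            mouse_event(MOUSEEVENTF_LEFTDOWN, 0, 0, 0, UIntPtr.Zero); // press
            mouse_event(MOUSEEVENTF_LEFTUP, 0, 0, 0, UIntPtr.Zero);   // release
        }
    }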
You can refer to this Code Project article to get started with UIAutomation.
I hope this helps.

Related

How to use a right-click gesture in Kinect v2?

I am using Kinect (v2) to develop a gesture-based application in WPF. I am able to scroll the image, zoom it, and get the click event. What I need now is a right-click gesture. Is there a standard gesture for right-click with Kinect, or do I need to write custom code to achieve it? Could you please help me with it?
Implementing a gestural system is not the same as implementing a WIMP (Windows-Icons-Menu-Pointer) interface. You should probably abandon the idea of interacting with a desktop by "using a mouse via gestures" altogether; if you decide to follow that (wrong) idea anyway, you risk producing a not-so-usable interface.
In any case, there is no standard "right-click gesture", precisely because a gestural interface is different from a classical desktop GUI.
You will indeed have to write a custom gesture for this.
For more information, take a look at Microsoft's Human Interface Guidelines 2.0.
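As an illustration of such a custom gesture, here is a hedged sketch using the Kinect v2 SDK (Microsoft.Kinect) that treats the right hand's "lasso" state (index finger extended), held for half a second, as a right-click trigger. The hold threshold and the RightClick event are illustrative choices, not a standard:

    using System;
    using Microsoft.Kinect;

    class RightClickGesture
    {
        private readonly Body[] _bodies;
        private DateTime _lassoStart = DateTime.MinValue;
        private static readonly TimeSpan HoldTime = TimeSpan.FromMilliseconds(500);

        public event EventHandler RightClick;

        public RightClickGesture(KinectSensor sensor)
        {
            _bodies = new Body[sensor.BodyFrameSource.BodyCount];
            BodyFrameReader reader = sensor.BodyFrameSource.OpenReader();
            reader.FrameArrived += OnFrameArrived;
        }

        private void OnFrameArrived(object sender, BodyFrameArrivedEventArgs e)
        {
            using (BodyFrame frame = e.FrameReference.AcquireFrame())
            {
                if (frame == null) return;
                frame.GetAndRefreshBodyData(_bodies);

                foreach (Body body in _bodies)
                {
                    if (body == null || !body.IsTracked) continue;

                    if (body.HandRightState == HandState.Lasso)
                    {
                        if (_lassoStart == DateTime.MinValue)
                        {
                            _lassoStart = DateTime.Now;            // gesture started
                        }
                        else if (DateTime.Now - _lassoStart > HoldTime)
                        {
                            if (RightClick != null)
                                RightClick(this, EventArgs.Empty); // fire once per hold
                            _lassoStart = DateTime.MinValue;
                        }
                    }
                    else
                    {
                        _lassoStart = DateTime.MinValue;           // hand relaxed, reset
                    }
                }
            }
        }
    }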

MVVM navigation how-to

I don't really expect an answer, but I will try to be clear.
I tried Caliburn Micro. At first it seemed fine and everything I needed; it has some of the features I want, but not others.
All I wanted was a single window with some views as UserControls and multiple dialogs on each view. Using a Conductor.OneActive I could achieve the first with little pain. However, switching between views, even in the official example, meant casting Parent to the Conductor and calling a method on it.
Even the Caliburn Micro sample did this cast. Calling .Close(false) on a screen behaved the same as Close(true), killing the view and recreating it, which caused lag on the lowest-end Atom PC.
So the only solution was to cast to the parent.
Dialogs
I tried tons of dialog examples, but none worked, and they made my life hard.
MessageBox and the like were dead easy, but if you wanted multiple dialogs you were out of luck.
If you put code in the close callback to open another dialog, you got a bonus StackOverflowException as it got confused (endless loop).
I couldn't come up with a good dialog implementation that cached the view and at the same time displayed multiple dialogs efficiently.
Event Aggregator
Also, I can't figure out how on earth the EventAggregator is suitable for switching views. If you have multiple conductors it could be hell to manage.
To show a dialog - as in a modal dialog that blocks the view that showed it - you should be using IWindowManager.ShowDialog.
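A minimal sketch, assuming the window manager is injected into your view model (MyDialogViewModel is hypothetical):

    using Caliburn.Micro;

    public class MyDialogViewModel : Screen { }

    public class ShellViewModel : Screen
    {
        private readonly IWindowManager _windowManager;

        public ShellViewModel(IWindowManager windowManager)
        {
            _windowManager = windowManager;
        }

        public void OpenDialog()
        {
            // Blocks until the dialog closes and returns its dialog result.
            bool? result = _windowManager.ShowDialog(new MyDialogViewModel());
        }
    }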
You should take a look at the Prism library: http://compositewpf.codeplex.com/
See the navigation chapter: http://msdn.microsoft.com/en-us/library/gg430861%28v=pandp.40%29.aspx
But I don't know how the EventAggregator could help you switch views… you could subscribe to receive an event on a closing view, but…
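For what it's worth, one hedged way the EventAggregator can drive view switching is to let the shell conductor handle a navigation message and activate the requested screen, so child view models never need to cast Parent. NavigateToMessage is a hypothetical type:

    using Caliburn.Micro;

    // Hypothetical message carrying the screen to activate.
    public class NavigateToMessage
    {
        public IScreen Target { get; set; }
    }

    public class ShellViewModel : Conductor<IScreen>.Collection.OneActive,
                                  IHandle<NavigateToMessage>
    {
        public ShellViewModel(IEventAggregator events)
        {
            events.Subscribe(this); // receive NavigateToMessage from any view model
        }

        public void Handle(NavigateToMessage message)
        {
            // The conductor decides what becomes active, so child view models
            // publish a message instead of casting Parent.
            ActivateItem(message.Target);
        }
    }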
You might want to take a look at Catel. It has a UIVisualizerService which allows you to show windows based on their view model.
It also has a ViewManager (IViewManager) which allows you to manage all views in your whole application. Besides that, it provides a ViewModelManager (IViewModelManager) which does the same for your view models. Best of all, you can find all views that are connected to a specific view model anywhere in your application and interact with them.
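A hedged sketch of the Catel approach (exact overloads vary between Catel versions; MyDialogViewModel is hypothetical):

    using Catel.IoC;
    using Catel.MVVM;
    using Catel.Services;

    public class MyDialogViewModel : ViewModelBase { }

    public class SomeViewModel : ViewModelBase
    {
        public void ShowDialog()
        {
            var uiVisualizerService =
                ServiceLocator.Default.ResolveType<IUIVisualizerService>();

            // Catel resolves the view registered for MyDialogViewModel
            // and shows it modally.
            uiVisualizerService.ShowDialog(new MyDialogViewModel());
        }
    }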

Design Time Extension Properties In Winforms

I have built a couple of keyboards for a touch-aware app we are building at work. Since we use a controller that knows when the app is in touch-screen mode, I thought it would be nice if, at design time, we could associate a control with a keyboard type and have the controller take care of calling up the keyboard.
One thing I do not want to do is subclass each type of control just to add this property, as that feels very heavy for small gain. I had thought of using the Tag property, but it is not available on all controls because it is already in use.
I was wondering if there is a way of attaching a property to a control at design time for the purpose of adding this metadata. So, to recap: I would like to be able to give each control a value that the controller reads to decide which keyboard to show.
Yes, the designer supports this. Good examples of existing components that do this are ErrorProvider and HelpProvider. Note how they add properties to existing control types.
You'll want to implement your own component; it needs to implement the IExtenderProvider interface. The MSDN Library article for it has a good example that should help you get it right.
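A trimmed sketch in the style of ErrorProvider/HelpProvider: a component implementing IExtenderProvider that adds a KeyboardType property to every control on the form at design time. The names are illustrative:

    using System.Collections.Generic;
    using System.ComponentModel;
    using System.Windows.Forms;

    [ProvideProperty("KeyboardType", typeof(Control))]
    public class KeyboardProvider : Component, IExtenderProvider
    {
        private readonly Dictionary<Control, string> _keyboardTypes =
            new Dictionary<Control, string>();

        // Offer the extender property to any control except the form itself.
        public bool CanExtend(object extendee)
        {
            return extendee is Control && !(extendee is Form);
        }

        [DefaultValue(null)]
        public string GetKeyboardType(Control control)
        {
            string type;
            _keyboardTypes.TryGetValue(control, out type);
            return type;
        }

        public void SetKeyboardType(Control control, string value)
        {
            if (string.IsNullOrEmpty(value))
                _keyboardTypes.Remove(control);   // clearing removes the entry
            else
                _keyboardTypes[control] = value;  // the controller reads this later
        }
    }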

WPF WindowsFormsHost

Hi everybody,
I am working on a WPF application.
I needed to use MS Reports in my application, so I used a WindowsFormsHost.
The problem is that the WindowsFormsHost can't handle touch events.
How can I handle touch events in my report inside the WindowsFormsHost?
Please, I need your help.
Thanks.
The question is as old as multi-touch. As long as the control doesn't support it, you have to resort to tricks, and whether they work depends on the control. The Bing map control was successfully made to respond to touches; I think there's a touch-enabled layer that takes touches and translates them into window messages. This can get really messy.
Please keep in mind that even if you get such a solution to work, you won't get a touch-optimized control. The control will probably have a bad, bad user experience...
If it's all about displaying data, I've described a possible alternative implementation in my answer to How to display a PDF document in a Microsoft Surface application?

Coded UI Test - get my custom object (WinForms)?

I want to create an automated UI test that exercises my Syncfusion grid. My problem is that the recorder can't recognize this control (or any Syncfusion control). I've searched the internet a lot, but I couldn't find any extension that makes the recorder recognize my controls (I'm using WinForms, not WPF!), or at least a way to extend the recorder's abilities so that Syncfusion's controls are recognized somehow.
Is there any easy way to extend the recorder? Or is there any extension available?
Or could I maybe get the grid object from the WinClient code that the recorder generates?
Thanks!
Start your program. Run the Spy++ utility. Press Ctrl+F to start the finder tool and drag the bulls-eye onto your form. OK, Synchronize, and have a look at the windows that are visible in the tree. If you see regular Windows Forms controls, like a Button or a Label, but none of the Syncfusion controls, then you've probably found the source of the problem.
Component vendors that try to improve on .NET controls typically do so by creating 'window-less' controls. They are not really controls: they don't derive from the Control class and don't have a Handle property. They use the surface of their parent to draw themselves, which makes them look just like controls. The .NET ToolStripItem classes do this, and it is also the approach WPF uses.
The big advantage is that they render quickly and support all kinds of effects that regular controls can't, like transparency, rotation and anti-aliased window edges. The big disadvantage is that a tool like yours suddenly goes blind and can't locate the control. Such tools work by finding the underlying Windows window on your form, and for these controls there is no window.
This is a hard problem to solve; the 'control' exists only in memory and there's no good way for a tool to find it. Using Accessibility is about the only other way I can think of for such a tool to locate a control, and that would have to be implemented by the control vendor first, a somewhat obscure feature that is easily overlooked. You really do need the vendor's help to find a workaround. That shouldn't be a problem; it's why you paid them the big money.
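As a small diagnostic in the spirit of the Spy++ suggestion, you can enumerate the real child windows (HWNDs) of a form yourself; window-less controls never show up in this list, which is exactly why recorders can't see them:

    using System;
    using System.Runtime.InteropServices;
    using System.Text;

    class WindowLister
    {
        private delegate bool EnumChildProc(IntPtr hWnd, IntPtr lParam);

        [DllImport("user32.dll")]
        private static extern bool EnumChildWindows(IntPtr hWndParent,
            EnumChildProc lpEnumFunc, IntPtr lParam);

        [DllImport("user32.dll", CharSet = CharSet.Unicode)]
        private static extern int GetClassName(IntPtr hWnd,
            StringBuilder lpClassName, int nMaxCount);

        // Pass a form's Handle; prints one line per real child HWND.
        public static void DumpChildWindows(IntPtr formHandle)
        {
            EnumChildWindows(formHandle, delegate(IntPtr hWnd, IntPtr lParam)
            {
                var className = new StringBuilder(256);
                GetClassName(hWnd, className, className.Capacity);
                Console.WriteLine("{0:X8}  {1}", hWnd.ToInt64(), className);
                return true; // keep enumerating
            }, IntPtr.Zero);
        }
    }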
This is Rajadurai from Syncfusion. Thank you for your interest in Syncfusion products. To make UI test automation recognize Syncfusion grids (WinForms), some internal support needs to be provided in the grid; its implementation is in progress and about to be completed. Please submit an incident through Direct-Trac for any further related inquiries at the following link:
http://www.syncfusion.com/Account/Logon?ReturnUrl=%2fsupport%2fdirecttrac
You can also contact us through support#syncfusion.com. We are happy to assist you.
Regards,
Rajadurai
