Is there any other sensible way to fire a touch event in a Windows Mobile app?
I have a Hangman game. I have a bunch of images which are the alphabet.
I've put all of the images inside a ScrollViewer so the user can scroll through the list.
As a precaution, I've wired the click handling to the MouseLeftButtonUp event, because if I use MouseLeftButtonDown it triggers the image's code when the user really just wanted to scroll.
However, even with MouseLeftButtonUp, I still have the same problem. When I test it, the list scrolls, but if my finger is still on the same image when I release it, the code behind that image fires anyway...
How can I overcome this problem? I hope my situation makes sense.
Perhaps you want to use touch events rather than mouse events, since Windows Mobile devices are designed for touch input rather than mouse clicks.
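For example, if this is a Silverlight for Windows Phone project targeting OS 7.1 or later (an assumption on my part), each letter image exposes a Tap event that fires only for a genuine tap and not when the gesture turns into a pan, so the ScrollViewer can scroll freely. A rough sketch; the handler name and the GuessLetter method are illustrative, not from your code:

// XAML (illustrative): <Image Source="A.png" Tag="A" Tap="LetterImage_Tap" />

// Tap fires only when the touch is released without panning, so scrolling
// through the ScrollViewer no longer triggers the letter's code.
private void LetterImage_Tap(object sender, System.Windows.Input.GestureEventArgs e)
{
    var image = (Image)sender;
    GuessLetter((string)image.Tag);  // hypothetical method in the Hangman game
}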
Related
We've been working on an application for the last few months that's aimed at Windows 7 tablet PCs. So we've used the Surface 2 SDK for most controls and it's all touch-happy.
I have noticed recently, though, that one of our custom controls isn't working as it should. This control provides popout menus, and these are achieved through the Popup control. On a developer's laptop, this works fine and the menus vanish when you click away from them. I've noticed, though, that on our test tablet they have a tendency to stay open.
I found that there was a SurfacePopup in the first Surface SDK, but I can't find one in the Surface 2 SDK. Did they get rid of it? Is there a 'best practice' approach?
If there's no simple solution, I may have to go old-school and add a window-sized hidden SurfaceButton below the menu when it appears, that hides itself and the menu when clicked or touched.
Beyond that, I've noticed that sometimes the SurfaceScrollViewer within the popups won't work. I'm guessing this is because it's not picking up touch events properly. I tried adding this extension method call to the window:
this.EnableSurfaceInput();
...but I get a NullReferenceException in System.Windows.Input.Mouse.get_LeftButton(), which bizarrely suggests that it can only enable Surface input for controls when there's a mouse plugged in.
Any ideas? They'll all be welcomed with open arms!
There's no SurfacePopup in the Surface SDK 2.0; however, you can use a normal WPF Popup. Then you need to make sure that it receives touch events by using the extension method you suggested above on the popup, not the window:
((HwndSource)HwndSource.FromVisual(popup)).EnableSurfaceInput();
Edit: As I just found out, this only works when the popup is initially open. To get it to work when the popup is opened later on, you shouldn't use the popup itself but the parent of its child (see this question).
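As a rough sketch of that (assuming the EnableSurfaceInput extension method from the Surface SDK is in scope; the exact wiring is illustrative):

// Illustrative: enable Surface input each time the popup opens, using the
// popup child's parent (the popup's root visual) rather than the popup itself,
// since the popup has no HwndSource until it has actually opened.
popup.Opened += (sender, args) =>
{
    var popupRoot = VisualTreeHelper.GetParent(popup.Child) as Visual;
    if (popupRoot != null)
    {
        var source = HwndSource.FromVisual(popupRoot) as HwndSource;
        if (source != null)
        {
            source.EnableSurfaceInput();  // extension method from the Surface SDK, as above
        }
    }
};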
For the benefit of Daniel, and anyone else who needs a solution to this, I'll try to cast my mind back two years and explain how we got this working.
As far as I can remember, the answer was to use an adorner layer instead of a popup. Basically, every WPF control has an adorner layer, which sits above the control's UI stack. By default it contains nothing, but you can add whatever you like to it.
I got this all working by writing a custom control that allows you to place that control, with content, in the XAML and then show and hide it whenever you need to. When it's shown, it moves its contents into the adorner layer of the containing window, and when it's hidden it moves the contents back into the control itself, which is hidden from the user.
Afraid I can't go into any more detail than that, but as far as I can remember this was the ultimate solution: replacing popups (which never quite worked very well) with a custom control that uses the adorner layer.
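As a rough illustration of that idea (the class and member names here are mine, not the original control's), a minimal adorner that hosts arbitrary content above the adorned element could look like this:

using System.Windows;
using System.Windows.Controls;
using System.Windows.Documents;
using System.Windows.Media;

// Minimal sketch: an adorner that hosts arbitrary content in the adorner layer.
public class PopoutAdorner : Adorner
{
    private readonly ContentPresenter _presenter;

    public PopoutAdorner(UIElement adornedElement, object content)
        : base(adornedElement)
    {
        _presenter = new ContentPresenter { Content = content };
        AddVisualChild(_presenter);
    }

    protected override int VisualChildrenCount
    {
        get { return 1; }
    }

    protected override Visual GetVisualChild(int index)
    {
        return _presenter;
    }

    protected override Size MeasureOverride(Size constraint)
    {
        _presenter.Measure(constraint);
        return _presenter.DesiredSize;
    }

    protected override Size ArrangeOverride(Size finalSize)
    {
        _presenter.Arrange(new Rect(finalSize));
        return finalSize;
    }
}

// Showing and hiding the "popup" (illustrative; hostElement must sit under an
// AdornerDecorator, which a normal Window provides by default):
// var layer = AdornerLayer.GetAdornerLayer(hostElement);
// var adorner = new PopoutAdorner(hostElement, menuContent);
// layer.Add(adorner);      // show
// layer.Remove(adorner);   // hide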
Hope that helps!
I'm attempting to create a UI where I interact with normal WPF controls without the mouse. I want to support multiple cursors, so any regular input simulation (such as SendInput) doesn't work. I also tried interlacing SendInput messages to simulate two mouse cursors, but that didn't work either (I only got one mouse input). I also have the constraint that I do not want to use Windows MultiPoint.
I've tried sending events to the controls (testing it on Button) through mouse events when I detect my cursor position is above the controls using: MouseEnterEvent, MouseLeaveEvent, MouseMove, MouseDownEvent and MouseUpEvent. But except for the MouseDownEvent, none of it seems to work.
Here is an example of how I send the MouseEnterEvent:
// Build the event args from the primary mouse device and a timestamp,
// then route the MouseEnter event to the element under the cursor.
System.Windows.Input.MouseEventArgs e =
    new System.Windows.Input.MouseEventArgs(
        System.Windows.Input.Mouse.PrimaryDevice, DateTime.Now.Millisecond);
e.RoutedEvent = System.Windows.Input.Mouse.MouseEnterEvent;
elementUnderCursor.RaiseEvent(e);
Where elementUnderCursor is a UIElement.
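For comparison, raising the MouseDown event (the one case that did seem to work) would look something like the following; this is a sketch along the same lines, not the original code:

// Sketch: MouseDown needs MouseButtonEventArgs, since that event carries
// which button was pressed.
var downArgs = new System.Windows.Input.MouseButtonEventArgs(
    System.Windows.Input.Mouse.PrimaryDevice,
    Environment.TickCount,
    System.Windows.Input.MouseButton.Left);
downArgs.RoutedEvent = System.Windows.Input.Mouse.MouseDownEvent;
elementUnderCursor.RaiseEvent(downArgs);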
I think I wanted the same thing for my multi-mouse Kinect app, but if you have an emulated mouse driver, you would just have to figure out how to tell the app that an event happened with that mouse driver instead of your USB or PS/2 mouse driver.
For instance,
send the mouse-down and mouse-up events to simulate a mouse click for mouse #1 and #2. Also, update the mouse position for mouse #1 and #2.
My reasoning for the above is that I wanted it to work in any application, by running a similar program in the background as a service.
Update: DSF (the Device Simulation Framework) works for simulating virtual mouse devices. I'm working on a multiple-mouse project with Kinect, so please look at the current progress at the site: http://kinectmultipoint.codeplex.com.
Be warned: multiple mouse drivers can't be built overnight.
Also, download the Windows DDK to simulate the mouse devices. The testgenerichid.wsf script has to be changed for your scenario, but it's possible.
I am currently working on a WPF project that involves creating a touch-screen application for Windows XP Embedded. Since Windows XP wasn't built for touch interaction, there are some problems and issues with developing such applications.
An example would be a click: on Windows XP, a click is a mouse-down event followed by a mouse-up event. However, if you use your finger instead of the mouse, you might get a drag motion instead of a click, because as you press down, your finger might move slightly to the side from the initial position, and you get a drag instead of a click. This is just a single example of the problems you run into when developing a touch-screen app for Windows XP.
If someone has been working on a WPF touch-screen application for Windows XP, could you share some knowledge and point out the pitfalls you have encountered? If you know of any resources on this topic, please share them as well.
I would agree with #bflosabre91. With a mouse you could have the same problems; in fact, it happens quite frequently when someone is learning to use a mouse. I think this problem is more relevant at the hardware level and in how the touchscreen actually interprets what the user is doing.
On the software side, you COULD add some logic along the lines of the following (sketched in code below):
On mouse down: record the coordinates and maybe the control (button, etc.) that is under the pointer.
On mouse up: compare the recorded coordinates with the current coordinates. If they're within x pixels, either do a "control.click" or move the mouse to the old coordinates and tell the mouse to click.
The hardware may already be doing something like this...
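A rough sketch of that idea in a WPF window's code-behind (the tolerance value and the HandleClick method are illustrative, not part of any existing API):

private Point _downPosition;
private const double TapTolerance = 12.0;  // pixels; tune for the touchscreen

private void Target_MouseDown(object sender, MouseButtonEventArgs e)
{
    // Remember where the press started, relative to the window.
    _downPosition = e.GetPosition(this);
}

private void Target_MouseUp(object sender, MouseButtonEventArgs e)
{
    // If the pointer barely moved, treat the gesture as a click, not a drag.
    Vector delta = e.GetPosition(this) - _downPosition;
    if (delta.Length <= TapTolerance)
    {
        HandleClick(sender);  // hypothetical stand-in for the "control.click" above
    }
}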
I have a WPF touch-screen application and it is running on kiosks with XP (although it's not XP Embedded like you said). I haven't had any issues with any type of click event or anything like that. I programmed it using all the normal mouse click events, so it technically works with a mouse or with the touch screen. As long as you build the controls to be large enough to account for the fact that a finger will be touching them instead of a mouse pointer, I did not come across any issues.
When I have 2 apps open and one has the focus but I want to execute a command in the other app, it requires a click to regain focus and another to execute the command. Is there some good reason why I couldn't take focus on MouseOver? I'm working with a WPF app if that is pertinent. TIA
EDIT: Oddly enough the MouseOvers work without focus.
I would not recommend doing this. This is not a standard way of working in Windows, so you will confuse your users. People are used to clicking into an application (or tabbing) to provide focus.
However, this is a configurable setting via the Accessibility Tools in Windows. It can be enabled globally by choosing "Activate a window by hovering over it with the mouse". Let your users specify this behavior if they want it.
The setting is configurable at a system-wide level. You should never ever override the user's current setting regarding this.
MS Windows Vista -- focus follows mouse (There's also a link on how to do it on XP.)
Edit: Normally, you can click a button on a form and both bring focus to the window and click the button at the same time. The current setting's behavior of "eating" the initial mouse click that brings focus to a window originated as a fix for a bug in the Ribbon UI. The discussion is somewhere in this video: The Story of the Ribbon. Sorry I can't narrow it down more than that, but at least the video is a great insight and worth watching - maybe you can send a message to Jensen Harris if you need a faster answer.
Edit 2: I just added a button to a WPF window, and I'm able to click it as long as I can see it - whether or not the window has focus.
You can take focus on MouseOver manually.
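For example, a minimal sketch in the window's code-behind (wire it up in the constructor; adapt as needed):

// Sketch: activate the window as soon as the mouse enters it, so the first
// click lands on the control instead of merely giving the window focus.
this.MouseEnter += (sender, e) =>
{
    if (!this.IsActive)
    {
        this.Activate();
    }
};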
I've been working on injecting input into a WPF application. What makes this project hard is that I need to be able to inject the input into the application even though it's running in the background (i.e. another application has the input focus). Using the SendInput() function is therefore out of the question.
So far, I've got keyboard input working but am having trouble injecting mouse input.
I used Spy++ to observe the window messages that get sent to the WPF window when I physically click the mouse button. I then simply craft these same mouse messages (such as WM_LBUTTONDOWN and WM_LBUTTONUP) manually and send them explicitly to the WPF window to emulate mouse input.
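Roughly, the idea looks like this (a sketch of the approach rather than the original code; the handle and coordinates are illustrative client-area values packed into lParam):

using System;
using System.Runtime.InteropServices;

static class MouseInjector
{
    const uint WM_LBUTTONDOWN = 0x0201;
    const uint WM_LBUTTONUP   = 0x0202;
    const int  MK_LBUTTON     = 0x0001;

    [DllImport("user32.dll")]
    static extern bool PostMessage(IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam);

    // Pack client-area x/y into lParam the way the real messages carry them.
    static IntPtr MakeLParam(int x, int y)
    {
        return (IntPtr)(((y & 0xFFFF) << 16) | (x & 0xFFFF));
    }

    public static void Click(IntPtr hwnd, int clientX, int clientY)
    {
        IntPtr lParam = MakeLParam(clientX, clientY);
        PostMessage(hwnd, WM_LBUTTONDOWN, (IntPtr)MK_LBUTTON, lParam);
        PostMessage(hwnd, WM_LBUTTONUP, IntPtr.Zero, lParam);
    }
}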
Unfortunately, this doesn't work as expected (not even when I, for testing purposes, have set the WPF window as the foreground window).
I've added a button to my test WPF window which when clicked displays a message box. Injecting the appropriate mouse messages when I've manually positioned the cursor over the button doesn't cause the button to be clicked, however (i.e. the clicked event isn't fired by the WPF framework).
If I add a handler for mouse clicks on the actual dialog (the client area), that handler does get called if I position the cursor over the dialog itself and inject the same window messages as before:
this.MouseLeftButtonDown += WndMouseDown;
public void WndMouseDown(object sender, MouseButtonEventArgs e)
{
    // This handler does get called when the injected messages arrive over the client area.
    ...
}
Strangely enough, if I change the push mode of the button to Press (i.e. it's considered clicked on mouse down rather than the default mouse up), the button clicked event is now fired when I inject the same messages as before. (It's worth mentioning that the handler from the example above correctly fires for both mouse downs and ups, so it'd seem the WPF framework does process both messages successfully.)
It seems like there are some other criteria that need to be fulfilled in order for a mouse clicked event to be fired by the WPF framework. Does anybody know how mouse input is handled internally in WPF, or why it's not interpreting my mouse up and down messages as a click on the button?
(It's worth mentioning that this approach [sending window messages] works fine on ordinary Win32 windows, such as the Start->Run dialog. The difference here is that WPF only has one physical Win32 window and the rest is WPF specific, which means all window messages go to that top-level window rather than the actual button.)
I've been searching high and low for an answer to this and would appreciate any thoughts or ideas.
I'd highly suggest going the UI Automation route: you create an AutomationElement from the window handle, crawl to the button, and invoke it. I'd just like to know how you managed to get the keyboard input working. I am currently trying to resolve the converse issue: how to get a WPF window (I've managed to get an hWnd to it via Win32 calls) to respond to virtual keyboard messages. I've logged Spy++ sessions on the window in question and replicated its input without success.
Use UI Automation to do this - trying to manually simulate input via window messages is a textbook mistake, like trying to start a land war against Russia.
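To illustrate that route (the window handle and the button name "OK" here are placeholders), something along these lines using System.Windows.Automation:

using System.Windows.Automation;

// Sketch: find the target button inside the WPF window by name and invoke it,
// without synthesizing any mouse or keyboard input at all.
AutomationElement window = AutomationElement.FromHandle(targetHwnd);  // hWnd of the WPF window

AutomationElement button = window.FindFirst(
    TreeScope.Descendants,
    new PropertyCondition(AutomationElement.NameProperty, "OK"));     // illustrative button name

if (button != null)
{
    var invoke = (InvokePattern)button.GetCurrentPattern(InvokePattern.Pattern);
    invoke.Invoke();
}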
Your strategy is basically sound, but in order to send a message to a window owned by another process, you must first register the message.
Here is an article explaining the whole business. The sample code is unfortunately in VB but I'm sure that won't stop you.