Touch screen operations for a .NET Windows application? - winforms

We are building a Windows application in .NET, and one of its requirements is a touch screen monitor. Other than that, it's a normal Windows Forms based application. But apart from making UI items a little bigger for touch, I can't find anything I as a developer need to do for this requirement, since touch screen input is basically mouse operations. Am I missing something?

No, you are not missing anything. Do get the actual hardware hooked up so you can test it; "a little bigger" invariably underestimates the problem of fat fingers. Everything should work from a single click, right-clicks are horribly impractical, and double-clicks are best avoided.
The only other thing you'll want to do is go into the Control Panel + Display applet and change the size of standard Windows UI elements. Pick a large window caption font if you want to allow the user to drag or close windows. Make the scrollbars at least twice as wide, and enlarge the menu and message box fonts as well. Go into the Mouse applet to increase the double-click area and time if you want to support double-clicks.

If you do not need touch-specific event handling, I think that's all you have to do. But touch can mean more than that, and you may want to support it in a better way: http://archive.msdn.microsoft.com/WindowsTouch/Release/ProjectReleases.aspx?ReleaseId=2127
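If you do decide to go beyond plain mouse emulation, a minimal sketch of opting a WinForms window into raw touch input might look like the following. This assumes Windows 7 or later (XP and Vista do not deliver WM_TOUCH); decoding the individual touch points via GetTouchInputInfo is left out:

using System;
using System.Runtime.InteropServices;
using System.Windows.Forms;

public class TouchForm : Form
{
    const int WM_TOUCH = 0x0240;

    [DllImport("user32.dll")]
    static extern bool RegisterTouchWindow(IntPtr hWnd, uint ulFlags);

    protected override void OnHandleCreated(EventArgs e)
    {
        base.OnHandleCreated(e);
        // Ask Windows to send WM_TOUCH instead of emulated mouse messages.
        RegisterTouchWindow(Handle, 0);
    }

    protected override void WndProc(ref Message m)
    {
        if (m.Msg == WM_TOUCH)
        {
            // Touch points arrive here; decode them with GetTouchInputInfo.
        }
        base.WndProc(ref m);
    }
}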

Related

Click a button in second life menu automatically

I recently started to play Second Life and I would like to know if there is a way to write a program outside of the SL viewer that allows clicking on the SL menu's buttons automatically.
Not unless you really feel like writing your own third-party viewer. The only time I've seen this done is through SmartBots, but even that uses a custom-coded viewer to host the bot.
You can always use external programs such as AutoHotkey to do clicks at certain locations on-screen.
Do note that all UI elements in the SL viewer are drawn not with the OS's GUI component system but by the SL viewer itself using OpenGL calls, so you'll have to do the coordinate calculations yourself and click at a certain coordinate relative to the viewer's window.
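To illustrate the coordinate approach, here is a rough C# sketch that finds the viewer window, converts a client-relative point to screen coordinates, and synthesizes a click there. The window title passed to FindWindow is an assumption; check what the actual viewer uses:

using System;
using System.Runtime.InteropServices;

static class ExternalClicker
{
    const uint MOUSEEVENTF_LEFTDOWN = 0x0002;
    const uint MOUSEEVENTF_LEFTUP = 0x0004;

    [StructLayout(LayoutKind.Sequential)]
    struct POINT { public int X; public int Y; }

    [DllImport("user32.dll")] static extern IntPtr FindWindow(string className, string windowTitle);
    [DllImport("user32.dll")] static extern bool ClientToScreen(IntPtr hWnd, ref POINT pt);
    [DllImport("user32.dll")] static extern bool SetCursorPos(int x, int y);
    [DllImport("user32.dll")] static extern void mouse_event(uint flags, uint dx, uint dy, uint data, UIntPtr extraInfo);

    // Click at a point given relative to the viewer's client area.
    public static void ClickAt(int clientX, int clientY)
    {
        IntPtr viewer = FindWindow(null, "Second Life"); // window title is an assumption
        if (viewer == IntPtr.Zero) return;

        var pt = new POINT { X = clientX, Y = clientY };
        ClientToScreen(viewer, ref pt);   // relative -> absolute screen coordinates
        SetCursorPos(pt.X, pt.Y);
        mouse_event(MOUSEEVENTF_LEFTDOWN, 0, 0, 0, UIntPtr.Zero);
        mouse_event(MOUSEEVENTF_LEFTUP, 0, 0, 0, UIntPtr.Zero);
    }
}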

How does a GUI framework switch windows/window views/forms on Windows?

From what I understand, a GUI will have its windows and window classes, and will use these for the main windows and all the buttons, tabs, etc.
These would all have handles and be rendered either with the Windows GDI or another backend such as OpenGL. When a user interacts, say by clicking on a widget, there will be a callback function/event handler and it'll do its job. But what is happening when the user clicks a button that switches the "form"? (I'm not sure what to call this; by "form" I mean the visible set of all menus, widgets, and so on. For example, in Google Chrome I have this tab open right now, and I could move to another one that displays a different website and GUI.)
How does the GUI framework change all the windows on the screen? I can understand it could change what's being rendered with the API of choice, like OpenGL, but how does it get rid of all the old windows and load the new ones? Does it disable all the child windows through their handles, and just leave them there on the screen, but unseen and not accepting input? Does it delete everything and create new windows? How does it actually perform this change (efficiently too)? I may be making a mountain out of a molehill here - if I'm overthinking this please let me know!
I once made a very bad game, using C, Win32, the GDI, and Direct2D, and when you pressed "play" it would go to the game, but I just had to hide the buttons in a very glitchy fashion; I had no clue how to perform the "switch."
I have never used a "proper" GUI framework like Qt, nor have I ever built one myself, so apologies for any errors in the question; please correct me. I ask because I want to make my own GUI framework as a long-term project (nothing special, just something I can say that I've achieved), and I am at a loss as to how I can implement this from a low-level perspective, or rather how industry standards such as Qt implement this at the lowest possible level.
Any answers would preferably not refer to managed code, any scripting languages, or external libraries; I want to know how to do this in C with Win32 plus any arbitrary graphics API. Thanks in advance.
This is accomplished by altering the z-order of children at the appropriate level (the idea being that the windows form a stack, from closest to the user to furthest away). The direct children of every window are in some z-order even if they are arranged such that they don't actually overlap.
For example, in the case of a tab control there will likely be a single child associated with each tab, that child representing the view for that tab. When a button is clicked, the child for that tab is moved in the z-order so that it is above all of its siblings (the views for the other tabs). Those tab children will all be the same size (the empty area of the tab's client window), so bringing the child to the top of its parent's z-order will cover all other views.
In the case of the Windows API you alter z-order placement via SetWindowPos; if you are going to roll your own (as WPF does), you will need to re-implement this idea in some manner.
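To make that concrete, here is a small sketch of the SetWindowPos call. It is written in C# via P/Invoke only to match the rest of this page; the native call in C takes exactly the same arguments:

using System;
using System.Runtime.InteropServices;

static class ZOrder
{
    static readonly IntPtr HWND_TOP = IntPtr.Zero; // place at the top of the z-order
    const uint SWP_NOMOVE = 0x0002;                // keep the current position
    const uint SWP_NOSIZE = 0x0001;                // keep the current size

    [DllImport("user32.dll", SetLastError = true)]
    static extern bool SetWindowPos(IntPtr hWnd, IntPtr hWndInsertAfter,
        int x, int y, int cx, int cy, uint flags);

    // Bring the view for the selected tab above its sibling views.
    public static void ShowTabView(IntPtr tabViewHandle)
    {
        SetWindowPos(tabViewHandle, HWND_TOP, 0, 0, 0, 0, SWP_NOMOVE | SWP_NOSIZE);
    }
}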

Information on developing WPF touch-screen application for Windows XP

I am currently working on a WPF project which involves creating a touch-screen application for Windows XP Embedded. As Windows XP wasn't built for touch interaction, there are some problems and issues with developing such applications.
An example would be a click: on Windows XP a click is a mouse-down plus a mouse-up event. However, if you use your finger instead of the mouse, you might get a drag motion instead of a click: when you press down, your finger might move slightly to the side from the initial position, and you will get a drag instead of a click. This is just a single example of the problems you get when developing a touch-screen app for Windows XP.
If someone has been working on a WPF touch-screen application for Windows XP, could you share some knowledge and point out the pitfalls you have encountered? If you know of any resources on this topic, could you please share them?
I would agree with @bflosabre91. With a mouse you could have the same problems, and in fact it happens quite frequently when someone is learning to use a mouse. I think this problem is more relevant at the hardware level and how the touchscreen actually interprets what the user is doing.
On the software side, you COULD add some logic along the following lines (a rough sketch follows below):
On mouse down: record the coordinates and maybe the control (button, etc.) that is under the pointer.
On mouse up: compare the recorded coordinates with the current coordinates. If they're within x pixels, either do a "control.click" or move the mouse to the old coordinates and tell the mouse to click.
The hardware may already be doing something like this...
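A minimal WPF sketch of that threshold idea, assuming a hypothetical TouchClickBorder element and a 10-pixel tolerance (both made up for illustration; tune the value for your hardware):

using System;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Input;

public class TouchClickBorder : Border
{
    public event EventHandler TouchClick;  // raised for "close enough" taps

    const double ClickTolerance = 10.0;    // pixels; assumed value, tune as needed
    Point downPoint;

    protected override void OnMouseDown(MouseButtonEventArgs e)
    {
        downPoint = e.GetPosition(this);   // record where the finger first landed
        base.OnMouseDown(e);
    }

    protected override void OnMouseUp(MouseButtonEventArgs e)
    {
        base.OnMouseUp(e);
        Point upPoint = e.GetPosition(this);
        if (Math.Abs(upPoint.X - downPoint.X) <= ClickTolerance &&
            Math.Abs(upPoint.Y - downPoint.Y) <= ClickTolerance)
        {
            // The press stayed within tolerance, so treat it as a click.
            var handler = TouchClick;
            if (handler != null) handler(this, EventArgs.Empty);
        }
    }
}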
I have a WPF touch screen application and it is running on kiosks with XP (although it's not XP Embedded like you said). I haven't had any issue with any type of click event or anything like that. I programmed it using all the normal mouse click events, so it technically works with a mouse or with the touch screen. As long as the controls are built large enough to account for the fact that a finger will be touching them instead of a mouse pointer, I did not come across any issues.

Track "commands" send to WPF window by touchpad (Bamboo)

I just bought a touchpad which allows drawing and multitouch. The API is not fully supported by Windows 7, so I have to rely on the built-in config dialog.
The basic features are working, so if I draw something in my WPF tool and use both fingers to do a right click, I can e.g. change the color. What I want to do now is assign other functions to special features in WPF.
Does anybody know how to find out in what way the pad communicates with the app? It works in Firefox, e.g. to scroll, like it should (shown in the photo below). But I do not know how to hook up the scroll event. I tried a ScrollViewer (which ignores my scroll attempts) and I also hooked up a key-press event, but it does not fire (I assume the pad does not "press a key" but somehow sends the "scroll" command directly). How can I catch that command in WPF?
Thanks a lot,
Chris
[EDIT] I got the scroll to work, but only up and down, not left and right. It was just a stupid "listbox in scrollviewer" mistake. But I'm still not sure about commands like ZOOM in (which works even in Paint). Which API contains such things?
[EDIT2] Funny: the zoom works in Firefox, but the horizontal scrolling does not. In Paint, however, the horizontal scrolling works...
[EDIT 3] I just asked in the Wacom forum; let's see about the vendor's support reaction time...
http://forum.wacom.eu/viewtopic.php?f=4&t=1939
Here is a picture of the config surface to get an idea of what I am talking about (Bamboo settings; I try to catch these commands in WPF):
http://img340.imageshack.us/img340/3751/20091008210914.jpg
Have you had a look at this yet?
WPF 3.5 does not natively support multi-touch (it is coming in WPF 4.0); however, the samples in that kit should get you started using the Windows 7 Integration Library, which accesses the native Win32 APIs to provide the required support. (Don't worry, it's not real ugly :)
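For reference, once WPF 4.0 is out the same gestures surface as native events with no interop library at all. A minimal sketch, assuming a plain Window whose content should pan and zoom:

using System;
using System.Windows;
using System.Windows.Input;

public class TouchWindow : Window
{
    public TouchWindow()
    {
        IsManipulationEnabled = true;          // opt in to pan/zoom gesture events
        ManipulationDelta += OnManipulationDelta;
    }

    void OnManipulationDelta(object sender, ManipulationDeltaEventArgs e)
    {
        double zoom = e.DeltaManipulation.Scale.X;    // pinch-zoom factor
        Vector pan = e.DeltaManipulation.Translation; // scroll/pan offset
        // Apply zoom and pan to the content's RenderTransform here.
    }
}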

Making a WPF application retain focus at all times

I've got an issue with a WPF application that I'm writing. The app needs to be able to keep focus at all times. The computer it's running on is a highly specialized machine with the only purpose of running this application.
There is no keyboard connected to the machine (it has a touch screen), so the only things that can steal focus are Windows' own "needy windows", such as Windows Update etc.
How can I make it so that my app retains focus at all times? Is it possible to make the entire app modal?
EDIT:
Thank you both for your answers. I think I'll end up using Topmost for now, but I'll definitely check out the source of BabySmash, as that application works exactly the way I want mine to with regard to the way it handles focus.
Look at the source of BabySmash. It is specifically designed to keep focus even under quite bizarre circumstances. (It is a program designed to run at full screen and let babies smash on a keyboard, so quite some effort went into capturing all kinds of weird keyboard combinations and alert messages.)
I would use
<Window ... Topmost="True">
in XAML. But maybe this is not what you are looking for.
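If Topmost alone is not enough, a minimal sketch of the usual kiosk trick is to also grab focus back whenever the window is deactivated. This is an assumption about your setup, not something taken from BabySmash itself, and it will fight with any system dialog that genuinely needs input:

using System;
using System.Windows;

public class KioskWindow : Window
{
    public KioskWindow()
    {
        Topmost = true;                      // stay above other windows
        WindowStyle = WindowStyle.None;      // optional: no caption bar to click away
        Deactivated += (s, e) => Activate(); // pull focus back when something steals it
    }
}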
