Click a button in a Second Life menu automatically - linden-scripting-language

I recently started playing Second Life and I would like to know if there is a way to write a program, outside of the SL viewer, that can click on an SL menu's buttons automatically.

Not unless you really feel like writing your own third-party viewer. The only time I've seen this done is through SmartBots, but even that uses a custom-coded viewer to host the bot.

You can always use external programs such as AutoHotkey to do clicks at certain locations on-screen.
Do note that the UI elements in the SL viewer are not drawn with the OS's GUI component system; they are drawn by the viewer itself using OpenGL calls, so you'll have to do the coordinate calculations yourself and click at coordinates relative to the viewer's window.
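If you'd rather roll this yourself instead of using AutoHotkey, here is a rough C# sketch of the same idea. The window title and the click offsets are assumptions for illustration only; you'd have to work out the real coordinates of the menu button yourself.

using System;
using System.Runtime.InteropServices;

static class ViewerClicker
{
    [DllImport("user32.dll", CharSet = CharSet.Unicode)]
    static extern IntPtr FindWindow(string className, string windowTitle);

    [DllImport("user32.dll")]
    static extern bool GetWindowRect(IntPtr hWnd, out RECT rect);

    [DllImport("user32.dll")]
    static extern bool SetCursorPos(int x, int y);

    [DllImport("user32.dll")]
    static extern void mouse_event(uint flags, uint dx, uint dy, uint data, UIntPtr extraInfo);

    [StructLayout(LayoutKind.Sequential)]
    struct RECT { public int Left, Top, Right, Bottom; }

    const uint MOUSEEVENTF_LEFTDOWN = 0x0002;
    const uint MOUSEEVENTF_LEFTUP = 0x0004;

    // Clicks at (offsetX, offsetY) measured from the viewer window's top-left corner.
    public static void ClickInViewer(int offsetX, int offsetY)
    {
        IntPtr viewer = FindWindow(null, "Second Life");  // assumed window title
        if (viewer == IntPtr.Zero) return;

        RECT r;
        GetWindowRect(viewer, out r);
        SetCursorPos(r.Left + offsetX, r.Top + offsetY);
        mouse_event(MOUSEEVENTF_LEFTDOWN, 0, 0, 0, UIntPtr.Zero);
        mouse_event(MOUSEEVENTF_LEFTUP, 0, 0, 0, UIntPtr.Zero);
    }
}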

Related

How does a GUI framework switch windows/window views/forms on Windows?

From what I understand, a GUI application will have its windows and window classes, and will use these for the main window and all the buttons, tabs, etc.
These would all have handles and be rendered either with the Windows GDI or another backend such as OpenGL. When a user interacts, say by clicking on a widget, there will be a callback function/event handler and it'll do its job. But what is happening when the user clicks a button that switches the "form"? (I'm not sure what to call this, so I'll call it a "form" - by this I mean the visible set of all menus and widgets and things, like how in Google Chrome I have this tab open right now and could move to another one that displays a different website and GUI.)
How does the GUI framework change all the windows on the screen? I can understand it could change what's being rendered with the API of choice, like OpenGL, but how does it get rid of all the old windows and load the new ones? Does it disable all the child windows through their handles, and just leave them there on the screen, but unseen and not accepting input? Does it delete everything and create new windows? How does it actually perform this change (efficiently too)? I may be making a mountain out of a molehill here - if I'm overthinking this please let me know!
I once made a very bad game using C Win32, GDI and Direct2D, and when you pressed "play" it'd go to the game, but I just had to hide the buttons in a very glitchy fashion - I had no clue how to perform the "switch."
I have never used a "proper" GUI framework like Qt, nor have I ever built one myself, so apologies for any errors in the question; please correct me. I ask because I want to make my own GUI framework as a long-term project (nothing special, just something I can say I've achieved) and I am at a loss as to how to implement this from a low-level perspective, or rather how industry standards such as Qt implement this at the lowest possible level.
Any answers would preferably not refer to managed code, scripting languages, or external libraries - I want to know how to do this in C Win32 plus any arbitrary graphics API. Thanks in advance.
This is accomplished by altering the z-order (the idea being that the windows form a stack from closest to the user to furthest away) of children at the appropriate level. The direct children of every window are in some z-order even if they are arranged such that they don't actually overlap.
For example, in the case of a tab control there will likely be a single child associated with each tab, that child representing the view for that tab. When a tab's button is clicked, the child for that tab is moved in the z-order so that it is above all of its siblings (the forms for the other tabs). The windows for the tab children will all be the same size (the empty area of the tab's client window), so bringing the child to the top of its parent's z-order will cover all other views.
In the case of the Windows API you alter z-order placement via SetWindowPos; if you are going to roll your own (as WPF does), then you will need to re-implement this idea in some manner.
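As a rough illustration of that idea (sketched in C# for brevity, but the SetWindowPos call is exactly the one you would make from C Win32; all names here are placeholders):

using System;
using System.Runtime.InteropServices;
using System.Windows.Forms;

static class TabSwitcher
{
    [DllImport("user32.dll", SetLastError = true)]
    static extern bool SetWindowPos(IntPtr hWnd, IntPtr hWndInsertAfter,
        int x, int y, int cx, int cy, uint flags);

    static readonly IntPtr HWND_TOP = IntPtr.Zero;
    const uint SWP_NOMOVE = 0x0002;
    const uint SWP_NOSIZE = 0x0001;

    // All tab views are same-sized siblings; bringing one to the top of the
    // parent's z-order is the whole "switch".
    public static void ShowTabView(Control tabView)
    {
        SetWindowPos(tabView.Handle, HWND_TOP, 0, 0, 0, 0, SWP_NOMOVE | SWP_NOSIZE);
    }
}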

Is there a SurfacePopup control in Surface 2?

We've been working on an application for the last few months that's aimed at Windows 7 tablet PCs. So we've used the Surface 2 SDK for most controls and it's all touch-happy.
I have noticed recently, though, that one of our custom controls isn't working as it should. This control provides popout menus, and these are achieved through the Popup control. On a developer's laptop, this works fine and the menus vanish when you click away from them. I've noticed, though, that on our test tablet they have a tendency to stay open.
I found that there was a SurfacePopup in the first Surface SDK, but I can't find one in the Surface 2 SDK. Did they get rid of it? Is there a 'best practice' approach?
If there's no simple solution, I may have to go old-school and add a window-sized hidden SurfaceButton below the menu when it appears, that hides itself and the menu when clicked or touched.
Beyond that, I've noticed that sometimes the SurfaceScrollViewer within the popups won't work. I'm guessing this is because it's not picking up touch events properly. I tried calling this extension method on the window..
this.EnableSurfaceInput();
..but I get a NullReferenceException on System.Windows.Input.Mouse.get_LeftButton(), which bizarrely suggests that it can only enable surface input for controls when there's a mouse plugged in.
Any ideas? They'll all be welcomed with open arms!
There's no SurfacePopup in the Surface SDK 2.0; however, you can use a normal WPF Popup. You then need to make sure that it receives touch events by using the extension method you suggested above on the popup, not the window:
((HwndSource)HwndSource.FromVisual(popup)).EnableSurfaceInput();
Edit: As I just found out, this only works when the popup is initially open. To get it to work when the popup is opened later on, you need to call it not on the popup itself but on the parent of its child (see this question).
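Something along these lines should do it (a sketch only; myPopup is a placeholder for your Popup, and EnableSurfaceInput is the Surface SDK extension method used above):

// In the window's constructor (needs System.Windows.Media and System.Windows.Interop):
myPopup.Opened += OnPopupOpened;

private void OnPopupOpened(object sender, EventArgs e)
{
    // Once the popup is open, its content is hosted in its own HwndSource;
    // resolve it through the parent of the popup's child, as described above.
    var popupRoot = VisualTreeHelper.GetParent(myPopup.Child) as Visual;
    if (popupRoot == null) return;

    var source = (HwndSource)HwndSource.FromVisual(popupRoot);
    if (source != null)
        source.EnableSurfaceInput();
}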
For the benefit of Daniel, and anyone else who needs a solution to this, I'll try to cast my mind back two years and explain how we got this working.
As far as I can remember, the answer was to use an adorner layer instead of a popup. Basically, every WPF control has an adorner layer, which sits above the control's UI stack. By default it contains nothing, but you can add whatever you like to it.
I got this all working by writing a custom control that allows you to place that control, with content, in the XAML and then show and hide it whenever you need to. When it's shown, it moves its contents into the adorner layer of the containing window, and when it's hidden it moves the contents back into the control itself, which is hidden from the user.
Afraid I can't go into any more detail than that, but as far as I can remember this was the ultimate solution: replacing popups (which never quite worked well) with a custom control that uses the adorner layer.
Hope that helps!
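For anyone who wants a starting point, here's a rough sketch of the adorner-layer approach described above. The names are illustrative; this is not the original control from that project.

using System.Windows;
using System.Windows.Controls;
using System.Windows.Documents;
using System.Windows.Media;

// Hosts arbitrary content in the adorner layer of the element it adorns.
public class OverlayAdorner : Adorner
{
    private readonly ContentPresenter presenter;

    public OverlayAdorner(UIElement adornedElement, object content)
        : base(adornedElement)
    {
        presenter = new ContentPresenter { Content = content };
        AddVisualChild(presenter);
    }

    protected override int VisualChildrenCount
    {
        get { return 1; }
    }

    protected override Visual GetVisualChild(int index)
    {
        return presenter;
    }

    protected override Size MeasureOverride(Size constraint)
    {
        presenter.Measure(constraint);
        return presenter.DesiredSize;
    }

    protected override Size ArrangeOverride(Size finalSize)
    {
        presenter.Arrange(new Rect(finalSize));
        return finalSize;
    }
}

"Showing" the menu is then just adding an adorner to the window's root element, e.g. AdornerLayer.GetAdornerLayer(rootElement).Add(new OverlayAdorner(rootElement, menuContent)), and hiding it again is removing that adorner from the layer.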

Touch screen operations for a .NET Windows application?

We are building a Windows application in .NET and one of its requirements is a touch screen monitor. Other than that, it's a normal Windows Forms based application. But except for making the UI items a little bigger for touch, I can't find anything I as a developer need to do for this requirement, since touch screen input is basically mouse operations. Am I missing something?
No, you are not missing anything. Do get the actual hardware hooked up so you can test it; "a little bigger" invariably underestimates the problem of fat fingers. Everything should work from a single click, right-clicks are horribly impractical, and double-clicks are best avoided.
The only other thing you'll want to do is go into the Control Panel + Display applet and change the size of the standard Windows UI elements. Pick a large window caption font if you want to allow the user to drag or close windows. Make the scrollbars at least twice as wide, and increase the menu and message box fonts. Go into the Mouse applet to increase the double-click range and time if you want to support double-clicking.
If you do not need touch-specific event handling, I think that's all you have to do. But touch can mean more than that, and you may want to support it in a better way: http://archive.msdn.microsoft.com/WindowsTouch/Release/ProjectReleases.aspx?ReleaseId=2127
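If you do decide you need real touch events in a WinForms app, the underlying route (which the library linked above wraps for you) is registering the window for WM_TOUCH messages; a minimal sketch, assuming Windows 7 or later:

using System;
using System.Runtime.InteropServices;
using System.Windows.Forms;

public class TouchForm : Form
{
    [DllImport("user32.dll")]
    static extern bool RegisterTouchWindow(IntPtr hWnd, uint flags);

    const int WM_TOUCH = 0x0240;

    protected override void OnHandleCreated(EventArgs e)
    {
        base.OnHandleCreated(e);
        // Opt this window in to raw touch messages instead of mouse emulation.
        RegisterTouchWindow(Handle, 0);
    }

    protected override void WndProc(ref Message m)
    {
        if (m.Msg == WM_TOUCH)
        {
            // Unpack the touch points here with GetTouchInputInfo;
            // the interop library linked above does this for you.
        }
        base.WndProc(ref m);
    }
}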

WPF Printing in XPS Document Writer

I've implemented a printing feature to print some of the financial charts in my WPF application using the PrintVisual method. Since the user is free to change his/her window size and/or screen resolution, I've used a LayoutTransform and the Measure and Arrange methods to make sure that the printed charts get spread evenly across the entire page, irrespective of the size of the application window. All works absolutely well when the user prints on an actual printer or selects a PDF print driver: the layout transform takes effect behind the scenes and shows up in the print, but the user doesn't experience any flicker or change in the display on screen.
The problem comes when the user selects the XPS Document Writer. When the user does that, the layout on the screen also changes: when the "Save As" dialog box comes up, the screen layout changes based on the LayoutTransform provided, which makes the charts go smaller or bigger. The moment the user saves the XPS file or hits Cancel on the Save As dialog box, the layout goes back to normal. The strange part is that this happens only with the XPS Document Writer.
But the user doesn't want to see this. What can I do to prevent it from happening in the case of the XPS Document Writer? Please suggest. Thanks.
Perhaps you can make a clone of your canvas or visual prior to applying your transform. Cloning is not built in to WPF UIElements, but you can use XamlWriter.Save() and XamlReader.Load() to clone via an XmlReader. Google "wpf clone UIElement", or I can post some code if you feel that's the way to go.
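The gist would be something like this (a sketch only; chartElement, the scale factors, and printDialog are placeholders for your own objects):

using System.IO;
using System.Windows;
using System.Windows.Markup;
using System.Windows.Media;
using System.Xml;

// Serialize the on-screen element and load it back as an independent copy,
// so the transform for printing never touches what the user sees.
string xaml = XamlWriter.Save(chartElement);
var clone = (FrameworkElement)XamlReader.Load(XmlReader.Create(new StringReader(xaml)));

// Apply the print layout to the clone only.
clone.LayoutTransform = new ScaleTransform(scaleX, scaleY);
var pageSize = new Size(printDialog.PrintableAreaWidth, printDialog.PrintableAreaHeight);
clone.Measure(pageSize);
clone.Arrange(new Rect(pageSize));

printDialog.PrintVisual(clone, "Financial chart");

One thing to be aware of: XamlWriter.Save serializes the current values of bound properties rather than the bindings themselves, so the clone is a snapshot of how the chart looks at that moment.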

Hooking into Forms redrawing

I'm looking for a way to overlay the graphical output of a third-party application with some lines, arcs, etc. The application accepts a handle of a window in which it will then display its output.
Using VC++ I put together a Windows Forms app in Visual Studio that draws (non-static) stuff in the OnPaint method of a form. Passing this form's handle to the other app, of course, means my graphics get overwritten every time the other app redraws.
Can I somehow hook into this redrawing process to add my graphics after the other app redraws? Overlaying the form with a transparent panel onto which I draw could be an alternative, but real transparency for controls seems to be a problem of its own in Windows ...
You can't do this easily without getting notifications from the app. If it doesn't provide them, that would require setting a global hook with SetWindowsHookEx() so you can see the WM_ERASEBKGND and WM_PAINT messages. That's hard to get right, and you cannot write such a hook in managed code, since it requires injecting a DLL into the target process.
The only other option is to put a transparent overlay on top of your form: another form that has its TransparencyKey property set. The basic code you need to get that right is available in my answer in this thread; you just need to tweak it so it is permanent.
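The gist of that overlay, sketched in C# WinForms for brevity (hostForm stands in for the form whose handle you hand to the other app):

using System;
using System.Drawing;
using System.Windows.Forms;

// A borderless form whose background colour is keyed out; only what you
// paint on it is visible, and everything else lets the host form show through.
var overlay = new Form
{
    FormBorderStyle = FormBorderStyle.None,
    ShowInTaskbar = false,
    BackColor = Color.Magenta,
    TransparencyKey = Color.Magenta,
    StartPosition = FormStartPosition.Manual
};

overlay.Paint += (s, e) =>
{
    // Your lines and arcs go here, drawn on top of the other app's output.
    e.Graphics.DrawLine(Pens.Red, 10, 10, 200, 120);
};

// Keep the overlay glued to the host form's client area.
EventHandler reposition = delegate
{
    overlay.Bounds = hostForm.RectangleToScreen(hostForm.ClientRectangle);
};
hostForm.Move += reposition;
hostForm.Resize += reposition;

overlay.Show(hostForm);   // owned by hostForm, so it stays above it
reposition(null, EventArgs.Empty);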
