My custom answer box does not work like it does on Windows - LiveCode mobile

Hi. Is there a way to make my custom answer/ask box appear the same on a smartphone? When deployed it filled the whole screen of the phone and did not look like it does on the computer (I used a substack).

On mobile operating systems the app runs as a single stack, with no way to show other stacks, so any answer dialog uses the OS's standard style.
Otherwise, have a look at http://ekkotek.com/index.php/products/livecode-tools/wheelib
They have developed a custom GUI for mobile.

On mobile systems, stacks always occupy the entire screen space, so if you want a custom answer dialog you must simulate a floating window using a group.
Try creating a group containing all the substack controls you already have. Then, instead of calling the substack, show and hide the group.
Create a graphic inside the group and, before making the group visible, set the rectangle of that graphic to the rectangle of the current card. Then put your dialog in the center; this way you will have a floating, personalized window.

Related

Click a button in a Second Life menu automatically

I recently started playing Second Life and I would like to know if there is a way to write a program, outside of the SL viewer, that can click buttons in the SL menus automatically.
Not unless you really feel like writing your own third-party viewer. The only time I've seen this done is through SmartBots, but even that is using a custom-coded viewer to host the bot.
You can always use external programs such as AutoHotkey to do clicks at certain locations on-screen.
Do note that all UI elements in the SL viewer are not drawn using the OS's GUI component system, but are drawn by the SL viewer itself using OpenGL calls, so you'll have to do the coordinate calculations yourself and click at a coordinate relative to the viewer's window.
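Roughly, that coordinate-based approach (the same thing a tool like AutoHotkey automates for you) looks like the sketch below. The window title "Second Life" and the click offsets are assumptions, not values taken from the viewer.

```cpp
// Sketch: send a synthetic left click at a point relative to a target
// window's top-left corner. The window title is an assumption; adjust it
// to whatever the viewer's title actually is.
#include <windows.h>

bool ClickAtRelative(const wchar_t* windowTitle, int relX, int relY)
{
    HWND hwnd = FindWindowW(nullptr, windowTitle);
    if (!hwnd) return false;

    RECT rc;
    if (!GetWindowRect(hwnd, &rc)) return false;

    // Move the cursor to the absolute screen position of the target point.
    SetCursorPos(rc.left + relX, rc.top + relY);

    // Press and release the left mouse button at the current cursor position.
    INPUT inputs[2] = {};
    inputs[0].type = INPUT_MOUSE;
    inputs[0].mi.dwFlags = MOUSEEVENTF_LEFTDOWN;
    inputs[1].type = INPUT_MOUSE;
    inputs[1].mi.dwFlags = MOUSEEVENTF_LEFTUP;
    return SendInput(2, inputs, sizeof(INPUT)) == 2;
}

int main()
{
    // Hypothetical offsets: 100 px right, 200 px down from the window corner.
    ClickAtRelative(L"Second Life", 100, 200);
    return 0;
}
```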

How does a GUI framework switch windows/window views/forms on Windows?

From what I understand, a GUI will have its windows, window classes, and use these for the main windows and all the buttons and tabs etc.
These would all have handles and be rendered either with the Windows GDI or another backend such as OpenGL. When a user interacts, say by clicking on a widget, there will be a callback function/event handler and it'll do its job. But what is happening when the user clicks on a button that switches the "form"? (I'm not sure what to call this, so I'll call it a "form" - by this I mean the visible set of all menus, widgets, and so on; for example, in Google Chrome I have this tab open right now and I could move to another one that displays a different website and GUI.)
How does the GUI framework change all the windows on the screen? I can understand it could change what's being rendered with the API of choice, like OpenGL, but how does it get rid of all the old windows and load the new ones? Does it disable all the child windows through their handles, and just leave them there on the screen, but unseen and not accepting input? Does it delete everything and create new windows? How does it actually perform this change (efficiently too)? I may be making a mountain out of a molehill here - if I'm overthinking this please let me know!
I once made a very bad game using C, Win32, the GDI, and Direct2D, and when you pressed "play" it'd go to the game, but I just had to hide the buttons in a very glitchy fashion - I had no clue how to perform the "switch."
I have never ever used a "proper" GUI framework like Qt nor have I ever built one myself so apologies for any errata in the question, please correct me. I ask because I want to make my own GUI framework as a long term project (nothing special just something I can say that I've achieved) and I am at a loss as to how I can implement this from a low-level perspective, or rather how industry standards such as Qt will implement this at the lowest possible level.
Any answers would preferably not refer to managed code, scripting languages, or external libraries - I want to know how to do this in C with Win32 plus any arbitrary graphics API. Thanks in advance.
This is accomplished by altering the z-order (the idea being that the windows form a stack from closest to the user to furthest away) of children at the appropriate level. The direct children of every window are in some z-order even if they are arranged such that they don't actually overlap.
For example, in the case of a tab control there will likely be a single child window associated with each tab, that child representing the view for that tab. When a tab is clicked, the child for that tab is moved in the z-order so that it is above all of its siblings (the views for the other tabs). Those child windows will all be the same size (the empty area of the tab control's client window), so bringing the child to the top of its parent's z-order covers all the other views.
In the case of the Windows API you alter z-order placement via SetWindowPos; if you are going to roll your own (as WPF does) then you will need to re-implement this idea in some manner.
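A minimal sketch of that idea with raw Win32 calls, assuming hwndNewView is a (hypothetical) handle to the child window hosting the newly selected tab's view:

```cpp
// Bring the child window hosting the newly selected tab's view to the top of
// its parent's z-order. The sibling views are the same size, so the one on
// top completely covers the others.
#include <windows.h>

void BringTabViewToTop(HWND hwndNewView)
{
    SetWindowPos(hwndNewView,
                 HWND_TOP,          // place above all sibling windows
                 0, 0, 0, 0,        // position/size ignored due to the flags below
                 SWP_NOMOVE | SWP_NOSIZE | SWP_NOACTIVATE);
}
```

Some frameworks instead hide the outgoing view with ShowWindow(hwndOldView, SW_HIDE) and show the incoming one; either way the child windows stay alive rather than being destroyed and recreated on every switch.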

Touch screen operations for .NET windows application?

We are building a Windows application in .NET and one of its requirements is a touch screen monitor. Other than that, it's a normal Windows Forms-based application. But except for making UI items a little bigger for touch, I can't find anything I as a developer need to do for this requirement, since touch screen input is basically mouse operations. Am I missing something?
No, you are not missing anything. Do get the actual hardware hooked up so you can test it; "a little bigger" invariably underestimates the problem of fat fingers. Everything should work from a single click, right-clicks are horribly impractical, and double-clicks are best avoided.
The only other thing you'll want to do is go into the Control Panel + Display applet and change the size of standard Windows UI elements. Pick a large window caption font if you want to allow the user to drag or close windows. Make the scrollbars at least twice as wide, and do the same for the menu and message box fonts. Go into the Mouse applet to increase the double-click range and time if you want to support that.
If you do not need touch-specific event handling, I think that's all you have to do. But touch can mean more than that, and you may want to support it in a better way: http://archive.msdn.microsoft.com/WindowsTouch/Release/ProjectReleases.aspx?ReleaseId=2127
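For reference, the managed samples linked above sit on top of the native Windows Touch API (Windows 7 and later). A minimal sketch of the underlying WM_TOUCH handling, with hypothetical handler logic, looks roughly like this:

```cpp
// Sketch of raw WM_TOUCH handling (Windows 7 and later). A WinForms app
// would normally use a managed wrapper, but this is what sits underneath.
#define WINVER 0x0601
#define _WIN32_WINNT 0x0601
#include <windows.h>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_CREATE:
        // Opt this window in to WM_TOUCH instead of the default mouse emulation.
        RegisterTouchWindow(hwnd, 0);
        return 0;

    case WM_TOUCH:
    {
        UINT count = LOWORD(wParam);
        TOUCHINPUT inputs[16];
        if (count > 16) count = 16;
        if (GetTouchInputInfo((HTOUCHINPUT)lParam, count, inputs, sizeof(TOUCHINPUT)))
        {
            for (UINT i = 0; i < count; ++i)
            {
                if (inputs[i].dwFlags & TOUCHEVENTF_DOWN)
                {
                    // Coordinates arrive in hundredths of a pixel, screen space.
                    LONG x = inputs[i].x / 100;
                    LONG y = inputs[i].y / 100;
                    // Hypothetical: hit-test your own oversized controls at (x, y).
                    (void)x; (void)y;
                }
            }
            CloseTouchInputHandle((HTOUCHINPUT)lParam);
            return 0;
        }
        break;
    }
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}
```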

Get the Bitmap/DC of tabbed MDI forms

I have a set of forms which are visualized as MDI tab children of a main form (through an Infragistics UltraTabbedMDIManager, but this API is not so important)
I use GetDC(), CreateCompatibleDC(), CreateCompatibleBitmap(), SelectObject(), BitBlt().. to blit the bitmap of the device contexts of these forms into some memory.
This works, but only for the active MDI child form, the one that is visible to the user.
If I do it for forms that are not active (any tabs that are not currently shown), I get a black screen in the memory area, or I even get a "copy" of the screen that's above it.
If I do it for forms that are no longer visible, I also get a black screen.
What should I do to get a bitmap of these hidden forms? Do I have to resort to caching or is there some other trickery I can use?
I cannot use the WinForms DrawToBitmap() function, because the forms contain some low-level graphical things that cannot be retrieved with it.
How can I use the winapi to retrieve a bitmap of these "hidden" forms' DC?
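For context, the GetDC/BitBlt capture path described in the question looks roughly like the sketch below (not the exact code in use); it copies whatever is currently on screen, which is why covered or hidden forms come back black:

```cpp
// Sketch of the GDI capture path described above: blit a form's client area
// into a memory bitmap. This only reflects what is actually on screen, so
// covered or hidden forms come back black (or show the window on top).
#include <windows.h>

HBITMAP CaptureVisibleWindow(HWND hwnd)
{
    RECT rc;
    GetClientRect(hwnd, &rc);

    HDC hdcWindow = GetDC(hwnd);
    HDC hdcMem    = CreateCompatibleDC(hdcWindow);
    HBITMAP hbm   = CreateCompatibleBitmap(hdcWindow, rc.right, rc.bottom);
    HGDIOBJ old   = SelectObject(hdcMem, hbm);

    BitBlt(hdcMem, 0, 0, rc.right, rc.bottom, hdcWindow, 0, 0, SRCCOPY);

    SelectObject(hdcMem, old);
    DeleteDC(hdcMem);
    ReleaseDC(hwnd, hdcWindow);
    return hbm;   // caller owns the bitmap and must DeleteObject() it
}
```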
I managed to do it using the PrintWindow API in user32.dll.
It solves the MDI tabs problem; however, it didn't solve the problem for hidden forms.
I solved that problem by showing the forms briefly in some off-screen location.
It seems the "ultimate" way is to use the (undocumented) dwm.dll, but this is not really advisable because its interfaces differ between versions of Windows.

Is it possible to build a WinForms app (or another type of .NET app) which allows me to interact with other windows outside the application itself?

I'm learning Chinese at the moment and I have gotten my hands on a Chinese dictionary's definitions.
Now I would like to make an interface.
All I really want the application to do is this: when I point my mouse pointer at any text on the screen (in any window), it would identify the text I am pointing at and then display a small form over it showing the Chinese translation.
Is that possible to do? Can a WinForms app interact with windows outside of its own application?
In C# you can get the text under the mouse cursor by P/Invoking
GetCursorPos
GetClassName
SendMessage
WM_GETTEXT
WM_GETTEXTLENGTH
WindowFromPoint
This approach has been described in other answers elsewhere; a rough sketch of the same idea in C++ follows.
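A minimal sketch of that sequence, written directly in C++ against the same Win32 calls rather than through P/Invoke:

```cpp
// Sketch: read the cursor position, find the window underneath it, and ask
// that window for its text via WM_GETTEXT. This only works for windows that
// expose their content as window text (buttons, labels, edit controls, etc.).
#include <windows.h>
#include <string>

std::wstring TextUnderCursor()
{
    POINT pt;
    GetCursorPos(&pt);

    HWND hwnd = WindowFromPoint(pt);
    if (!hwnd) return L"";

    wchar_t cls[256] = {};
    GetClassNameW(hwnd, cls, 256);   // class name tells you what kind of control it is

    LRESULT len = SendMessageW(hwnd, WM_GETTEXTLENGTH, 0, 0);
    if (len <= 0) return L"";

    std::wstring text(static_cast<size_t>(len), L'\0');
    SendMessageW(hwnd, WM_GETTEXT, len + 1, reinterpret_cast<LPARAM>(&text[0]));
    return text;
}
```

SendMessage marshals WM_GETTEXT across process boundaries, so this works on other applications' windows too; but as the next answer points out, many applications draw their text themselves, and for those this returns nothing useful.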
A WinForms application can interact with the windows of other applications. Window handles exist in a global namespace, so if you can get the handle of another application's window, you can send it messages. You will have to use P/Invoke to do some of this; have a look at WindowFromPoint.
However, there is no standardized way to display text in a window; there are dozens of APIs for displaying text. So when you point at text with a mouse, you can only get the pixels, but not necessarily the text.
Some window classes will allow you to send class-specific messages to query for the text at a specific location, but many will not. Your best bet is probably to use the same methods that screen readers for the blind use: http://en.wikipedia.org/wiki/Screen_reader

Resources