I am working on an OS-independent file manager (using SDL). I am trying to use native functions as much as possible (with the appropriate #ifdefs), and I am having a problem with Windows. When I use
CopyFileEx()
for example, if there is a problem it pops up a modal dialog, and the user has to press some buttons to get rid of it. I want to handle the errors myself, in my program, to make it less annoying.
Is there any way to disable those modal windows?
I noticed that if I start my application from a debugger (Insight) it will not display those messages.
Thanks in advance!
P.S. The language I am using is plain C.
You might want the SetErrorMode function.
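For example, a minimal sketch (paths are placeholders, error handling trimmed) of turning off the critical-error and open-file error boxes before calling CopyFileEx():

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        /* Ask Windows not to show the critical-error / "no disk" style
           message boxes; failures are reported to the process instead. */
        SetErrorMode(SEM_FAILCRITICALERRORS | SEM_NOOPENFILEERRORBOX);

        /* Placeholder paths, just to illustrate the call. */
        if (!CopyFileEx(TEXT("C:\\src.txt"), TEXT("D:\\dst.txt"),
                        NULL, NULL, NULL, 0))
        {
            /* Handle the error yourself instead of letting Windows ask the user. */
            fprintf(stderr, "CopyFileEx failed, error %lu\n", GetLastError());
        }
        return 0;
    }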
For a Windows file manager, SHFileOperation() is possibly a better fit than CopyFileEx(). It will use the native shell dialogs for progress, conflict resolution, etc., and the levels of progress and error reporting can all be controlled.
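If you prefer to let the shell do the work, a rough sketch of a copy via SHFileOperation() might look like this (paths are placeholders; note that the path lists must be double-null terminated, and the FOF_* flags decide how much UI is shown):

    #include <windows.h>
    #include <shellapi.h>   /* link with shell32.lib */

    /* Copy one file using the shell; FOF_* flags control how much UI appears. */
    int shell_copy(void)
    {
        SHFILEOPSTRUCT op = {0};
        /* Path lists must be double-null terminated; the extra \0 plus the
           implicit string terminator provides that. Paths are placeholders. */
        TCHAR from[] = TEXT("C:\\src.txt\0");
        TCHAR to[]   = TEXT("D:\\dst.txt\0");

        op.wFunc  = FO_COPY;
        op.pFrom  = from;
        op.pTo    = to;
        op.fFlags = FOF_NOCONFIRMATION | FOF_NOERRORUI | FOF_SILENT;

        return SHFileOperation(&op);   /* 0 on success */
    }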
Is there a way to detect input areas such as textboxes and checkboxes within an application? I want to label each input area with a number so I can jump between input fields with AHK using my keyboard.
For example: once the script is activated and the active window is Google Chrome, Chrome could have its address bar labeled #1. When I press "1", the cursor will be directed to that area.
I'm basically trying to create a workaround for applications that are not very keyboard friendly.
Most Windows applications use standard Windows controls.
For these:
https://autohotkey.com/docs/commands/WinGet.htm - WinGet, with the ControlList parameter, gets a list of all standard controls in a window.
For each of those controls:
https://autohotkey.com/docs/commands/ControlGet.htm - ControlGet can get the type of the control, and
https://autohotkey.com/docs/commands/ControlGetPos.htm - ControlGetPos can get the position and dimensions of the control.
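For context, these commands are thin wrappers over the ordinary Win32 child-control enumeration APIs. A minimal C sketch of the same idea (the window title and output format are purely illustrative):

    #include <windows.h>
    #include <stdio.h>

    /* Roughly what WinGet ControlList / ControlGetPos do under the hood:
       enumerate the standard child controls of a window and report
       each control's class name and position. */
    static BOOL CALLBACK print_control(HWND hwnd, LPARAM lparam)
    {
        char cls[256];
        RECT rc;
        (void)lparam;
        GetClassNameA(hwnd, cls, (int)sizeof(cls));
        GetWindowRect(hwnd, &rc);
        printf("%-20s at (%ld,%ld) %ldx%ld\n", cls,
               rc.left, rc.top, rc.right - rc.left, rc.bottom - rc.top);
        return TRUE;   /* keep enumerating */
    }

    int main(void)
    {
        /* "Untitled - Notepad" is just an example window title. */
        HWND target = FindWindowA(NULL, "Untitled - Notepad");
        if (target)
            EnumChildWindows(target, print_control, 0);
        return 0;
    }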
Some can also be controlled through COM: https://gist.github.com/kheybot/7026077#automation-of-office-applications
Commandline and console programs can sometimes be communicated with directly, using the standard streams (STDIN, STDOUT, STDERR, LPTn, PRN, NUL), or you can communicate with the terminal that displays the program using COM or WSH:
https://gist.github.com/kheybot/7026077#interact-with-command-line
This is important for a lot of legacy data-entry programs.
Browsers (e.g. Chrome), unfortunately, don't expose page elements as these heavyweight controls, because there may be far too many on a page, but there are other options for communicating with them, such as COM, DDE, etc., to talk to the DOM:
https://gist.github.com/kheybot/7026077#browser-automation
For a web browser, I'd be inclined to go for a hybrid approach, combining AHK-handling of the web browser's input areas (address bar, etc) with a Greasemonkey/Tampermonkey script to handle input fields within the web page itself - the Javascript will be better able to handle input areas using the DOM than any screen-scraping software could. There's also the possibility of using a functional-testing suite like Selenium for automation, and using the browser's plug-in functionality to write an extension to handle its UI.
This would mean that you now have TWO programming problems, of course...
Java applications, Flash applications, HTML5 applications, some graphic design software, and just about all computer games are essentially just graphics, with no way of externally identifying controls.
For these, you have to use basic screen scraping techniques: http://www.autohotkey.com/docs/commands/ImageSearch.htm and http://www.autohotkey.com/docs/commands/PixelSearch.htm to identify specific areas, which can only really be done by individually programming the specific control.
One option for generic detection, though, is to have something that detects shadows (drop shadows, buttonized components, etc.) and allows you to tab between, and send a click to, the components detected that way. Unfortunately, modern flat design means this won't always work, so you could also try searching for flat-colored rectangles... except sometimes they have curved corners. Because graphic designers hate people.
At this point, you will hopefully see that what you have here is an infinite rabbithole of fractal complexity.
You can make a simple ControlGet solution which doesn't work for a lot of applications you would use regularly... or you can create a hybrid approach that targets many applications individually, while also trying to have a generic solution for unrecognized apps.
If you are creating this for your own use, I'd say aim for making it work with the apps you know and use regularly, and that should be enough.
If you're writing it as accessibility software for others to use, I'd say aim for having it user-configurable for each application: let them control what input element they want to click, and in what order, because auto-detection will never work perfectly, and will only rarely pick the ideal solution.
The answer is yes, if the number of check boxes and their position in the application is fixed and you know on which machine the automation takes place.
Please research ImageSearch on how to locate them from screenshots.
If you know the X/Y position of the checkbox in the window, you can also use PixelGetColor to check if a check is visible or not.
You should also examine your application with the included AutoIt Spy. This program shows you what it can see in the application window.
To get your labelling, check out the Gui commands. If you make your GUI transparent and don't give it focus, you can draw labels on top of the application.
In my app I have an annoying behavior. It is causing problems for my customers.
The app has several points where I need to show a modal Dialog, so the users can fill in some fields and then close the dialog, and the system follows its natural path.
At certain moments this works fine: the dialog is shown, the user interacts with it, closes it, and so on.
But at other moments (same code) the dialog doesn't appear automatically. The user needs to perform some external action on the device (like changing its orientation, touching the center of the screen, or making a scroll gesture), some action that isn't intuitive at that moment.
This behavior makes the user think my app froze.
It is clear to me that the dialog was called; it simply wasn't drawn on the screen.
I tried to read about this problem and looked through similar questions without success.
I guess the cause is related to the EDT.
In short: how can I call a modal Dialog without breaking the EDT rules?
And more specifically, how can I resolve this problem?
When I request a dialog to be displayed on the screen, I want it to actually appear in 100% of cases. Today it works randomly.
Additional information:
My app still uses Java 5.
Do you recommend migrating to Java 8?
======= Additional Information (1) ===========
This problem is strongly dependent on the device model.
On a Moto G3 (Android 6) the problem is the exception; it rarely occurs.
On my Galaxy Note 8 it is the opposite: it always occurs.
On a Lenovo Vibe 5 (Android 6) it occurs frequently.
I added this information because it may help to complete the picture of the problem.
Additional question:
Is it possible to write a snippet that I can use as a template
to show a modal Dialog without breaking any EDT rules?
Turn on the EDT violation detection tool in the simulator which should detect such issues. Inspect potentially problematic cases of Dialog calls and post them specifically if you don't know how to fix them.
Java 8 is unrelated although migrating a project is non-trivial.
The app uses my library, which uses threads to do some operations; it also uses a SIP VoIP library (which obviously uses threads as well). The GUI is bound to the interfaces of both libraries.
I noticed a weird behavior in my app. Usually it works just fine, but sometimes, after some time (3-5 minutes), it suddenly closes.
It is too irregular to debug or diagnose.
Anyone had that kind of problem? Any idea what could be the reason for that?
I would recommend you add an application-level error handler so that you can log any errors that are occurring that you might be missing. It is as simple as
Application.Current.DispatcherUnhandledException += HandleApplicationException;
Here is an MSDN article that describes it:
http://msdn.microsoft.com/en-us/library/system.windows.application.dispatcherunhandledexception.aspx
I want to make a project for my final year in college.
Someone suggested that I make a Remote Desktop in C.
I know the basic socket functions for Windows in C, i.e. I know how to make an echo server in C.
But I don't know what to do next. I searched on the internet but couldn't find anything informative.
Could someone suggest how to approach it from this point, any tutorial or any source?
I think this is do-able. For a college project, you don't need to have something as complex and as full-featured as VNC. Even demonstrating simple keyboard and mouse control and screen feedback would be enough, in my opinion, and that's well within reach.
If you're doing everything from scratch and using Win32, you can get the remote screen using the regular "printscreen" example all around the internet.
http://www.codeproject.com/KB/cpp/Screen_Capture__Win32_.aspx has it, for one. You can then compress the image with a third-party library, or just send it raw; this wouldn't be very efficient but it would still be a viable demonstration.
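A minimal sketch of that capture step using plain GDI (error handling and the actual network send are left out; how you encode and transmit the pixels is up to you):

    #include <windows.h>

    /* Grab the primary screen into a memory bitmap that can later be
       encoded and sent over the network. The caller must DeleteObject()
       the returned bitmap and add proper error handling. */
    HBITMAP capture_screen(void)
    {
        int w = GetSystemMetrics(SM_CXSCREEN);
        int h = GetSystemMetrics(SM_CYSCREEN);

        HDC screen  = GetDC(NULL);                 /* DC for the whole screen */
        HDC mem     = CreateCompatibleDC(screen);  /* memory DC to copy into  */
        HBITMAP bmp = CreateCompatibleBitmap(screen, w, h);

        HGDIOBJ old = SelectObject(mem, bmp);
        BitBlt(mem, 0, 0, w, h, screen, 0, 0, SRCCOPY);
        SelectObject(mem, old);

        DeleteDC(mem);
        ReleaseDC(NULL, screen);
        return bmp;   /* pixels can be read out with GetDIBits() */
    }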
Apart from capturing the screen data remotely and showing it in the local window, you'll need to listen for local window messages for mouse and keyboard events, send them to the remote host, and then play them back. http://msdn.microsoft.com/en-us/library/ms646310%28VS.85%29.aspx will probably do that for you.
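On the playback side, a sketch using SendInput() could look roughly like this (the helper name and the way the key code arrives over the network are assumptions about your own protocol):

    #include <windows.h>

    /* Replay one key press/release pair received from the remote viewer.
       vk is a virtual-key code such as VK_TAB or 'A'. */
    void play_key(WORD vk)
    {
        INPUT in[2] = {0};

        in[0].type       = INPUT_KEYBOARD;
        in[0].ki.wVk     = vk;                /* key down */

        in[1].type       = INPUT_KEYBOARD;
        in[1].ki.wVk     = vk;
        in[1].ki.dwFlags = KEYEVENTF_KEYUP;   /* key up */

        SendInput(2, in, sizeof(INPUT));
    }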
Check out TightVNC. TightVNC is a free remote control software package, and the source code is also available.
For sending the image of the screen I would probably use RTP. JRTPLIB is really handy for that.
And yes, as KevinDTimm says, an echo server is the very easiest part.
KevinDTimm may well be right; writing an RDP client would be a fairly significant undertaking. To give you some idea, the current spec, available at the top of this page, is 419 pages long and includes references to several additional documents for specific aspects of RDP like Audio Redirection and Clipboards.
Edit: In addition to the bounty, we're willing to pay $250 to have this bug fixed in the Firefox/Gecko codebase. Here is a simple test project (Visual Studio 2008 C#) that reproduces the problem.
Edit #2: we're willing to pay $600 to have this bug fixed. See above for a sample project that reproduces the problem.
We have a Firefox (Gecko) ActiveX control on our C# Windows Form to display HTML.
When this Firefox ActiveX control is on our form, about 2-3% of our key presses don't make it through. Or rather, a different Windows message is sent:
We hold down the TAB key to tab through 3 regular WinForms text boxes. It will behave correctly 97% of the time. Spy++ tells us WM_KEYDOWN message is sent properly:
normal behavior http://judahhimango.com/images/normaltab.jpg
But randomly, maybe 2-3% of the time, the tab key (or other key) isn't processed right. Spy++ tells us WM_CHAR is being sent instead:
odd behavior http://judahhimango.com/images/screwytab.png
When the odd behavior occurs, either the key is not processed at all, or it is processed incorrectly (such as inserting a '\t' character into a textbox that doesn't support tab characters).
This only occurs if the Firefox ActiveX control is on our form.
Our question is: does Firefox/Gecko engine install some kind of keyboard hook that might cause these side effects? Or better yet, how do we fix this problem?
The WM_CHAR message is generated by the TranslateMessage call, so a good place to start looking would be the TranslateMessage calls in the Gecko source code.
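For reference, WM_CHAR only exists because some message loop called TranslateMessage() on a WM_KEYDOWN; the canonical Win32 pump looks like the sketch below, so a second pump inside the embedded engine that also translates messages can produce the stray WM_CHAR you are seeing:

    #include <windows.h>

    /* The canonical Win32 message pump. TranslateMessage() is the call that
       turns a WM_KEYDOWN for a character key into a posted WM_CHAR. */
    static void run_message_loop(void)
    {
        MSG msg;
        while (GetMessage(&msg, NULL, 0, 0) > 0)
        {
            TranslateMessage(&msg);   /* WM_KEYDOWN -> WM_CHAR */
            DispatchMessage(&msg);    /* hands the message to the window proc */
        }
    }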
In the first example code you provided, the function is imported by only two libraries: mozctl.dll and xul.dll. Since you claim that the same error also happens with GeckoFX, we can take mozctl.dll out of the equation. That leaves us with xul.dll, so given the Gecko source code I would suggest looking into widget\src\windows\nsToolkit.cpp. I am not sure if that code runs when the engine is embedded, but if it does, the library starts a whole new message pump in a different thread, which is bound to break things.
Unfortunately I can't run or compile the code on my machine (Windows 7 x64 w/o the Mozilla ActiveX control installed), so I can't verify any of this with a debugger. Hope it helps someone to track it down further.
The root problem is that when Mozilla is embedded in another application, it incorrectly pumps Windows messages when it dispatches internal events. Mozilla uses an event system to coordinate across threads or to schedule deferred processing on a thread (see nsIThread, nsIEventTarget). If you embed a web page with a lot of active XMLHTTPRequests, for example, Mozilla will use its event dispatching interface to dispatch events back to javascript and it will pump windows messages as a side effect. Once Mozilla events are fully dispatched, it goes back to the main event loop.
When Mozilla pumps windows messages, it doesn't include the extra processing done by the application's event loop - IsDialogMessage(), TranslateMessage(), PreTranslateMessage(), or any other pre-processing are skipped when Mozilla gets into this state. Symptoms therefore include tab key presses getting inserted as a character instead of being used for dialog navigation, keyboard hotkeys being sporadically ignored, or custom message pre-processing being sporadically skipped. For example, the Outlook 2007/2010 "Compose" screen sporadically loses keystrokes because it relies on custom message pre-processing to handle keyboard input.
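To make the symptom concrete, here is a sketch of the pre-processing a typical application loop performs and which a private message pump inside an embedded library bypasses (hDlg stands in for whatever form or dialog needs TAB navigation):

    #include <windows.h>

    /* Sketch of a dialog-aware application loop. When an embedded engine
       pumps messages on its own, the IsDialogMessage() step below is skipped,
       so TAB arrives as a raw character instead of moving the focus. */
    static void run_dialog_loop(HWND hDlg)
    {
        MSG msg;
        while (GetMessage(&msg, NULL, 0, 0) > 0)
        {
            if (IsDialogMessage(hDlg, &msg))  /* TAB navigation, default button */
                continue;                     /* already handled: don't translate */

            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
    }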
See https://bugzilla.mozilla.org/show_bug.cgi?id=582790 for a patch that addresses the problem.
I have Snoop Free and PSM Anti-Keylogger.
One of them detected Firefox trying to install a keyboard hook:
the Mozilla/Firefox file xul.dll attempted to install a keyboard hook.
DENIED.
I noticed that you have implemented all of the interoperability yourself. Can you try this with the GeckoFX project and see if you get the same error? I use this project at work and haven't encountered any issues yet.