what is a console app mainly used for? - winforms

I have always used WinForms for my projects, but I have never really explored the console application. I have seen videos of different programs done as console applications, for example games, engines, programs that display images using keyboard characters, and more, but what exactly is a console app for?

More backend types of things; typically apps that require no user interaction (backup jobs and the like that run on a scheduled basis).

This question is in danger of being closed ... but I'll try to answer.
Console applications really have a history: there was a time when a console, anything from a teletype with a single output line to a tty-style terminal with a screen of several lines, was the main UI for computers, from mainframes down to personal computers running CP/M, DOS, or one of hundreds of different operating systems.
Whilst #james_schorr is partially correct, some technical people still find console apps to be efficient ways of performing certain sorts of operations, even with user interaction. For example, I use git and mercurial from the command line in preference to a UI like Tortoise.
They have some advantages (especially on Unix and Linux) because you can combine the commands in scripts using pipes (i.e. where standard out of one is fed directly into standard input of the next) to form very powerful operations.
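To be pipe-friendly, a console program just needs to read standard input and write standard output. A minimal sketch in C (the uppercasing behavior is only an example):

    #include <stdio.h>
    #include <ctype.h>

    /* A tiny pipe-friendly filter: reads stdin, uppercases it, writes stdout.
       Usable as e.g.:  cat notes.txt | ./upcase | sort  */
    int main(void)
    {
        int c;
        while ((c = getchar()) != EOF)
            putchar(toupper(c));
        return 0;
    }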

I use them primarily for command line apps that perform repetitive tasks. You can support input parameters through the args array parameter to the Main method.
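In C# those parameters arrive in Main's args array; the C analogue is the argc/argv parameters to main. A minimal sketch:

    #include <stdio.h>

    /* Prints each command line argument on its own line.
       argv[0] is the program name; argv[1..argc-1] are the parameters. */
    int main(int argc, char *argv[])
    {
        for (int i = 1; i < argc; i++)
            printf("arg %d: %s\n", i, argv[i]);
        return 0;
    }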

Simplicity. Throwaway apps. Test apps. Logging consoles. Ensuring it will work on Mono and/or from a command line. Or just laziness. Take your pick.
Specifically text-based programs, which represent decades of our programming heritage.
Before windowing became mainstream, users were quite happy to work in console-based text programs.
I can remember when we could write real business/financial apps in DOS or green-terminal UNIX that people actually wanted.

They can be used for virtually any task, but are mainly used for automated tasks: tasks that don't require much interactive user input, tasks where a GUI would consume too much CPU power, tasks where large amounts of data are used as inputs or outputs that the user doesn't need to view, etc.

Related

Can I make a terminal program in C to edit photos in GIMP [macOS]?

I am editing a large batch of photos using the same steps, and want to create a program to run through terminal that will run the process for me. I am comfortable with writing in C, but I am unsure of how to start on the code/what commands to use.
When I am in GIMP, I start by opening a .xcf file, and importing the photo I wish to edit in as the bottom layer. Next, I resize the layer to 1000px wide. After that, I edit the curves with a preset I have saved, and then do the same with the brightness controls. Finally, I export the file as a .png with a specific name: 01-0xx.png, based on the number of the photo in the set.
This sounds like a job for macros or the automation tools available in Gimp:
Ref: Gimp Automate Editing https://www.gimp.org/tutorials/Automate_Editing_in_GIMP/
This tutorial will describe and provide examples for two types of automation functions. The first function is a tool to capture and execute “Macro” commands. The second function is a set of Automation Tools to capture and run a “Flow” or “Process”. The code for this tutorial is written using Gimp-Python and should be platform portable, able to run on either Linux or Windows operating systems.
The goal of these functions is to provide tools that speed up the editing process, make the editing process more repeatable, and reduce the amount of button pushing the user has to do. Taking over the button pushing and book-keeping chores allows the user to focus on the more creative part of the editing process.
I haven't ever used GIMP, but programs of this sort typically have automation scripting support, and this is the right place to start.
Could be done with C, but the learning curve is steep.
You can write Gimp scripts in Scheme (Lisp) or Python, and if you know C you can learn enough Python in a couple of hours. See an example of a Python batch script here.
Side note #1: Curves + Brightness/Contrast can be done in a single call to Curves (with a different curve, of course). Each operation entails some color loss, so the fewer, the better.
Side note #2: It may be simpler to do this without Gimp, using either of the following (a sketch of the first option follows below):
The ImageMagick tool box (command called from a shell script)
An image library with any language (e.g. Pillow for Python).
Your Curves preset is just what is called a "CLUT" (Color Look-Up Table).
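As a rough illustration of the ImageMagick route, here is a sketch using the MagickWand C API. The file names are placeholders, and the curves/CLUT step is omitted; only the resize-to-1000px-and-export-PNG part of the workflow is shown:

    #include <stdlib.h>
    #include <MagickWand/MagickWand.h>  /* ImageMagick 7; IM6 uses <wand/MagickWand.h> */

    int main(void)
    {
        MagickWandGenesis();
        MagickWand *wand = NewMagickWand();

        /* Hypothetical input/output names following the 01-0xx.png scheme. */
        if (MagickReadImage(wand, "photo-001.jpg") == MagickFalse)
            exit(1);

        /* Resize to 1000 px wide, preserving aspect ratio. */
        size_t w = MagickGetImageWidth(wand);
        size_t h = MagickGetImageHeight(wand);
        MagickScaleImage(wand, 1000, h * 1000 / w);

        MagickWriteImage(wand, "01-001.png");

        wand = DestroyMagickWand(wand);
        MagickWandTerminus();
        return 0;
    }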

Autohotkey - How to detect all input areas/checkboxes in an application?

Is there a way to detect input areas such as textboxes and checkboxes within an application? I want to label each input area with a number so I can jump between input fields with AHK using my keyboard.
For example: Once the script is activated and active window is Google Chrome, Chrome could have its address bar labeled #1. When I press "1", the cursor will be directed to that area.
I'm basically trying to create a workaround for applications that are not very keyboard friendly.
Most Windows applications use standard Windows controls.
For these...
https://autohotkey.com/docs/commands/WinGet.htm - with the ControlList parameter, gets a list of all standard controls.
For those:
https://autohotkey.com/docs/commands/ControlGet.htm - can get the type of control, and
https://autohotkey.com/docs/commands/ControlGetPos.htm - can get position and dimensions of the control.
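Under the hood, a ControlList query amounts to enumerating child windows. A rough C equivalent using the Win32 API (the target window title is just an example):

    #include <stdio.h>
    #include <windows.h>

    /* Called once per child control of the target window. */
    static BOOL CALLBACK EnumProc(HWND hwnd, LPARAM lParam)
    {
        char cls[256];
        RECT r;
        GetClassNameA(hwnd, cls, sizeof(cls));
        GetWindowRect(hwnd, &r);
        printf("%-20s at (%ld,%ld) %ldx%ld\n",
               cls, r.left, r.top, r.right - r.left, r.bottom - r.top);
        return TRUE;  /* keep enumerating */
    }

    int main(void)
    {
        /* "Untitled - Notepad" is just an example target window. */
        HWND target = FindWindowA(NULL, "Untitled - Notepad");
        if (target)
            EnumChildWindows(target, EnumProc, 0);
        return 0;
    }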
Some can also be controlled through COM: https://gist.github.com/kheybot/7026077#automation-of-office-applications
Commandline and console programs can sometimes be communicated with directly, using the standard streams (STDIN, STDOUT, STDERR, LPTn, PRN, NUL), or you can communicate with the terminal that displays the program using COM or WSH:
https://gist.github.com/kheybot/7026077#interact-with-command-line
This is important for a lot of legacy data-entry programs.
Browsers (e.g. Chrome), unfortunately, don't use these heavyweight standard controls for page content, because there may be far too many elements on a page. But there are other options for communicating with them, such as COM, DDE, etc. to talk to the DOM:
https://gist.github.com/kheybot/7026077#browser-automation
For a web browser, I'd be inclined to go for a hybrid approach: combine AHK handling of the browser's own input areas (address bar, etc.) with a Greasemonkey/Tampermonkey script to handle input fields within the web page itself. The JavaScript will be better able to handle input areas through the DOM than any screen-scraping software could. There's also the possibility of using a functional-testing suite like Selenium for automation, or using the browser's plug-in functionality to write an extension to handle its UI.
This would mean that you now have TWO programming problems, of course...
Java applications, Flash applications, HTML5 applications, some graphic design software, and just about all computer games are essentially just graphics, with no way of externally identifying controls.
For these, you have to use basic screen scraping techniques: http://www.autohotkey.com/docs/commands/ImageSearch.htm and http://www.autohotkey.com/docs/commands/PixelSearch.htm to identify specific areas, which can only really be done by individually programming the specific control.
One option for generic detection, though, is to have something that detects shadows (drop shadows, buttonized components, etc.) and allows you to tab between and send a click to the components detected that way. Unfortunately, modern flat design means this won't always work, so you could also try searching for flat-colored rectangles... except sometimes they have curved corners. Because graphic designers hate people.
At this point, you will hopefully see that what you have here is an infinite rabbithole of fractal complexity.
You can make a simple ControlGet solution which doesn't work for a lot of applications you would use regularly... or you can create a hybrid approach that targets many applications individually, while also trying to have a generic solution for unrecognized apps.
If you are creating this for your own use, I'd say aim for making it work with the apps you know and use regularly, and that should be enough.
If you're writing it as accessibility software for others to use, I'd say aim for having it user-configurable for each application: let them control what input element they want to click, and in what order, because auto-detection will never work perfectly, and will only rarely pick the ideal solution.
The answer is yes, if the number of checkboxes and their positions in the application are fixed and you know which machine the automation will run on.
Research ImageSearch for how to locate them from screenshots.
If you know the X/Y position of the checkbox in the window, you can also use PixelGetColor to check whether a check mark is visible.
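PixelGetColor roughly corresponds to Win32's GetPixel; a minimal C sketch of the same check (the coordinates are placeholders):

    #include <stdio.h>
    #include <windows.h>

    int main(void)
    {
        HDC screen = GetDC(NULL);                 /* DC for the whole screen */
        COLORREF c = GetPixel(screen, 100, 200);  /* x/y of the checkbox */
        printf("R=%d G=%d B=%d\n", GetRValue(c), GetGValue(c), GetBValue(c));
        ReleaseDC(NULL, screen);
        return 0;
    }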
You should also examine your application with the included AutoIt Spy. This program shows you what it can see in the application window.
To get your labelling, check out the Gui commands. If you make your GUI transparent and prevent it from taking focus, you can write labels on top of the application.

OS X Intercept Keyboard Events to Password Forms Elements

I am currently creating a C program that counts all of the keys I press in a day and sorts the key types by amount, so I can tell which ones I press most often. It was more of a side project than anything else, but I have become annoyed with the fact that my program doesn't seem to be able to intercept any input to password fields. I suppose this is a good thing, but I have been spending hours looking at documentation and trying to figure out how to do this. I am not trying to create any sort of malicious software. Is there a way around this? My program is running as root. I am using the ApplicationServices framework and CGEventRef and the CGEventTapCreate function. Should I be using a different framework or struct? Also, is there a difference between kCGHIDEventTap, kCGSessionEventTap, and kCGAnnotatedSessionEventTap? I have tried using each of them and it does not seem to make a difference in my program.
I am running this on OS X 10.9
UPDATE
Apparently I cannot capture keystrokes going to terminal either, which is where I spend most of my time on my laptop. This is a problem.
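For reference, a minimal C sketch of the CGEventTapCreate setup described above (compile with -framework ApplicationServices). Note that a user-space tap like this stops receiving keystrokes whenever secure input is enabled, such as in password fields or Terminal's Secure Keyboard Entry:

    #include <stdio.h>
    #include <stdbool.h>
    #include <ApplicationServices/ApplicationServices.h>

    /* Called for every key-down event delivered to the tap. */
    static CGEventRef callback(CGEventTapProxy proxy, CGEventType type,
                               CGEventRef event, void *refcon)
    {
        if (type == kCGEventKeyDown) {
            int64_t keycode =
                CGEventGetIntegerValueField(event, kCGKeyboardEventKeycode);
            printf("keycode: %lld\n", (long long)keycode);
        }
        return event;  /* pass the event through unmodified */
    }

    int main(void)
    {
        CFMachPortRef tap = CGEventTapCreate(
            kCGHIDEventTap,              /* earliest point in the event stream */
            kCGHeadInsertEventTap,
            kCGEventTapOptionListenOnly,
            CGEventMaskBit(kCGEventKeyDown),
            callback, NULL);
        if (!tap)
            return 1;   /* needs root or accessibility permission */

        CFRunLoopSourceRef src =
            CFMachPortCreateRunLoopSource(kCFAllocatorDefault, tap, 0);
        CFRunLoopAddSource(CFRunLoopGetCurrent(), src, kCFRunLoopCommonModes);
        CGEventTapEnable(tap, true);
        CFRunLoopRun();
        return 0;
    }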
What you want is fairly complicated and requires a kernel extension. The interprocess communication is also not trivial. Take a look at logKext, specifically logKext.cpp. That project actually logs the keystrokes to an encrypted file. You should be able to pull everything you need from it.

Avoiding all system messages and messages from other software

Here is the situation. The company I work for builds this piece of software in c that can make a Windows computer act a bit like a TV. Essentially, our piece of software is meant to be played full screen and content is displayed from the internet without the user having to ever touch the computer again.
The problem is that once in a while, the system brings up pop-ups like "Your Windows system is ready for an upgrade." or "Please renew your Norton subscription" etc. which the user has to periodically and manually remove.
Is there a way to display content full screen without being bothered by those warnings?
Yah, whether or not the development community agrees, Microsoft has several standards for when and why it might be acceptable to have exclusive use of the monitor.
The most official strategy is to use DirectX in exclusive mode. This is what games do, and what Windows Media Player does in full screen video with hardware acceleration enabled, etc. If your application is multimedia intensive (as suggested by TV-like functionality), you should probably be using DirectX too. Besides giving you exclusive display access, it will also increase your application's performance while lowering the CPU load (as it will offload graphics work to the video card when possible).
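For illustration only, a sketch of acquiring the display exclusively with the Direct3D 9 C API, assuming an already-created window and a fixed 1920x1080 mode:

    #include <d3d9.h>

    /* Sketch: create a full-screen (exclusive mode) Direct3D 9 device.
       Assumes 'hwnd' is an existing top-level window. */
    IDirect3DDevice9 *create_exclusive_device(HWND hwnd)
    {
        IDirect3D9 *d3d = Direct3DCreate9(D3D_SDK_VERSION);
        IDirect3DDevice9 *dev = NULL;
        D3DPRESENT_PARAMETERS pp = {0};

        if (!d3d)
            return NULL;

        pp.Windowed         = FALSE;                 /* exclusive, not windowed */
        pp.SwapEffect       = D3DSWAPEFFECT_DISCARD;
        pp.BackBufferWidth  = 1920;
        pp.BackBufferHeight = 1080;
        pp.BackBufferFormat = D3DFMT_X8R8G8B8;
        pp.hDeviceWindow    = hwnd;

        IDirect3D9_CreateDevice(d3d, D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd,
                                D3DCREATE_HARDWARE_VERTEXPROCESSING, &pp, &dev);
        return dev;   /* NULL if the mode or hardware isn't available */
    }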
If DirectX is not an option, there are a great number of hacks available that all seem to behave differently across various generations of Windows. So you might have to be prepared to implement several techniques to cover each OS you plan to support.
One technique is to set your application as the currently running screensaver. A screensaver is really just an EXE renamed to SCR, with certain command line switches it should support. You can write your own application to be such a screensaver, plus a little launcher stub that sets it as the screensaver and launches it. Upon exit, the application should restore the original screensaver settings (perhaps the launcher waits for the process to exit, so that it can restore the settings after both graceful exits and unplanned terminations, i.e. an app crash). I'm not sure if this behavior is consistent across platforms though; you'll have to test it.
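The switches a screensaver is conventionally expected to handle are /s (run full screen), /c (show the configuration dialog), and /p <hwnd> (render a preview into a given window). A minimal dispatcher sketch in C:

    #include <stdio.h>
    #include <string.h>

    /* Minimal screensaver-style argument dispatch (sketch; real code
       should also accept the uppercase variants /S, /C, /P). */
    int main(int argc, char *argv[])
    {
        if (argc > 1 && strncmp(argv[1], "/s", 2) == 0)
            puts("run full screen");           /* normal screensaver launch */
        else if (argc > 1 && strncmp(argv[1], "/p", 2) == 0)
            puts("render preview");            /* argv[2] is a parent HWND */
        else
            puts("show configuration dialog"); /* /c or no arguments */
        return 0;
    }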
Preventing other applications from creating window handles is truly a hack in my opinion and pretty bad one that I wouldn't appreciate as a customer of such software.
A constant BringWindowToTop() call to keep you in front is better (it doesn't break other software) but still a little hack-ish.
Catch window creation messages with a global hook. This way you can close or hide unwanted windows before they become visible.
EDIT: If you definitely want to avoid hooks, then you can call a function periodically, which puts your window to the top of the z-stack.
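As a sketch of the hook idea: SetWinEventHook can watch window-show events across processes without the DLL that a classic global SetWindowsHookEx hook needs. The hide action below is a placeholder for your own filtering logic:

    #include <stdio.h>
    #include <windows.h>

    /* Fires whenever a window is shown anywhere in the current session. */
    static void CALLBACK OnWinEvent(HWINEVENTHOOK hook, DWORD event, HWND hwnd,
                                    LONG idObject, LONG idChild,
                                    DWORD thread, DWORD time)
    {
        char title[256] = "";
        if (event != EVENT_OBJECT_SHOW || idObject != OBJID_WINDOW)
            return;
        GetWindowTextA(hwnd, title, sizeof(title));
        printf("window shown: %s\n", title);
        /* Decide here whether this is an unwanted popup; if so:
           ShowWindow(hwnd, SW_HIDE);  or bring your own window back on top. */
    }

    int main(void)
    {
        HWINEVENTHOOK hook = SetWinEventHook(
            EVENT_OBJECT_SHOW, EVENT_OBJECT_SHOW,  /* only "shown" events */
            NULL, OnWinEvent, 0, 0,                /* all processes/threads */
            WINEVENT_OUTOFCONTEXT);                /* no DLL injection needed */

        MSG msg;                                   /* hook needs a message loop */
        while (GetMessage(&msg, NULL, 0, 0) > 0)
            DispatchMessage(&msg);

        UnhookWinEvent(hook);
        return 0;
    }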
You could disable system updates (http://support.microsoft.com/kb/901037) and remove the Norton malware.
You could also connect a second screen so that the notification bubbles appear on the first monitor while your full-screen content runs on the second.
Or you could rewrite it for Linux or Windows CE.
One final option is to install software that reconfigures your os into a kiosk http://shop.inteset.com/Products/9-securelockdown.aspx
If you don't need keyboard or mouse input, how about running your application as a screensaver?
A lot of those messages are triggered/managed by Windows Explorer.
Just replace it with your own dummy C#/WinForms shell.
By changing the registry value
    [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon]
    "Shell"="Explorer.exe"
You can specify virtually any exe as an alternative to explorer.exe
That's the way all Windows-based embedded systems (ATMs & co) do it.
There are still a few adjustments to make (disable services you don't need, Dr. Watson, and others), and of course you'll want to keep a "restart explorer.exe" backdoor.
But that's a good start.

Is there a way for my binary to react to some global hotkeys in Linux?

Is it possible to listen for a certain hotkey (e.g:Ctrl-I) and then perform a specific action? My application is written in C, will only run on Linux, and it doesn't have a GUI. Are there any libraries that help with this kind of task?
EDIT: As an example, Amarok has global shortcuts: if you map a combination of keys to an action (let's say Ctrl-+, i.e. Ctrl and +), that action is executed when you press those keys. If I mapped Ctrl-+ to the volume-increase action, each time I pressed Ctrl-+ the volume would increase by a certain amount.
Thanks
How global do your hotkeys need to be? Is it enough for them to be global for a X session? In that case you should be able to open an Xlib connection and listen for the events you need.
Ordinarily keyboard events in X are delivered to the window that currently has the focus, and propagated up the hierarchy until they are handled. Clearly this is not what we want. We need to process the event before any other window can get to it. We need to call XGrabKey on the root window with the keycode and modifiers of our hotkey to accomplish this.
I found a good example here.
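A minimal sketch of that XGrabKey approach, grabbing Ctrl-I as in the question (compile with -lX11; real code would also grab the NumLock/CapsLock modifier variants):

    #include <stdio.h>
    #include <X11/Xlib.h>
    #include <X11/keysym.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy)
            return 1;

        Window root = DefaultRootWindow(dpy);
        KeyCode code = XKeysymToKeycode(dpy, XK_i);

        /* Grab Ctrl-I globally; the event now comes to us before any window. */
        XGrabKey(dpy, code, ControlMask, root, False,
                 GrabModeAsync, GrabModeAsync);
        XSelectInput(dpy, root, KeyPressMask);

        for (;;) {
            XEvent ev;
            XNextEvent(dpy, &ev);
            if (ev.type == KeyPress)
                printf("hotkey pressed\n");   /* perform your action here */
        }
    }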
I think smoofra is on the right track here; you're looking to register a global hotkey with X so that you can intercept keypresses and take appropriate action. Xlib is probably what you want, and XGrabKey is the function, I think.
It's not easy to learn, I'm afraid; I did locate this example that seems useful: TinyWM. I also found an example using Java/JNI (accessing the same underlying Xlib function).
You should look at the source code of xbindkeys.
Xlib programming is pretty arcane, documentation is hard to find, and there are subtle portability issues. You'll be better off copying some battle-hardened code.
One way to do it is to have your application listen on a certain port, or socket file, for incoming requests.
Then you can write a small client application that connects to that port or socket file and sends commands to the running application.
Then you can configure your window manager to bind certain key combinations to launch your small client app.
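A sketch of the listening side using a Unix domain socket; the socket path and command name are placeholders:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    int main(void)
    {
        int srv = socket(AF_UNIX, SOCK_STREAM, 0);
        struct sockaddr_un addr = {0};
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, "/tmp/myapp.sock", sizeof(addr.sun_path) - 1);

        unlink(addr.sun_path);  /* remove any stale socket file */
        bind(srv, (struct sockaddr *)&addr, sizeof(addr));
        listen(srv, 1);

        for (;;) {
            int client = accept(srv, NULL, NULL);
            char cmd[64] = {0};
            read(client, cmd, sizeof(cmd) - 1);
            if (strncmp(cmd, "volume-up", 9) == 0)
                puts("increasing volume");   /* placeholder action */
            close(client);
        }
    }

The window manager keybinding can then run any small client that connects and writes the command, e.g. something like echo volume-up | socat - UNIX-CONNECT:/tmp/myapp.sock.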
In UNIX, your access to a commandline shell is via a terminal. This harks back to the days when folks accessed their big shared computers literally via terminals connected directly to the machines (e.g. by a serial cable).
In fact, the 'xterm' program or whatever derivative you use on your UNIX box is properly referred to as a terminal emulator - it behaves (from both your point of view and that of the operating system) much like one of those old-fashioned terminal machines.
This makes it slightly complicated to handle input in interesting ways, since there are lots of different kinds of terminals, and your UNIX system has to know about the capabilities of each kind. These capabilities were traditionally stored in a termcap file, and I think more modern systems use terminfo instead. Try
man 5 terminfo
on a Linux system for more information.
Now, the good news is that you don't need to do too much messing about with terminal capabilities to have a command-line application that does interesting things with input or windowing features. There's a library, curses, that will help. Look up
man 3 ncurses
on your Linux system for more information. You will probably be able to find a decent tutorial on using curses online.
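For example, a minimal curses program showing the basic pattern (link with -lncurses):

    #include <ncurses.h>

    int main(void)
    {
        initscr();              /* enter curses mode, take over the terminal */
        cbreak();               /* deliver keys immediately, no line buffering */
        noecho();               /* don't echo typed characters */
        keypad(stdscr, TRUE);   /* decode arrow/function keys for us */

        printw("Press any key (q to quit)\n");
        int ch;
        while ((ch = getch()) != 'q')
            printw("got key code %d\n", ch);

        endwin();               /* restore the terminal */
        return 0;
    }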
