Splitting screen into ncurses and non-ncurses areas - c

I am a beginner C programmer and one of my assignments asks me to write an interpreter for the Logo programming language. To that end, I was wondering whether it is possible, when using the ncurses library, to split the screen so that one half retains basic terminal properties with regular text I/O while the other half is formatted in ncurses mode.
My wish is to create a UI where users can type Logo-style commands in one half, and in the other half those commands are carried out by a little icon.

There is an application called screen which can split the terminal into multiple areas. What it does is implement its own terminal emulator, which runs inside another terminal emulator. That's the only way to do it, because the terminal itself has no concept of screen areas. So you would basically have to implement a terminal emulator on top of ncurses to serve as the "non-ncurses area".
Perhaps a different approach would be easier. Does it need to run in a terminal? If not, you could use the terminal for regular I/O only and create a GUI window of some sort beside it. Or not use the terminal at all, and instead embed a terminal widget in your GUI (most GUI toolkits provide one, I suppose).
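If the whole UI can live inside ncurses, a common compromise is to split the ncurses screen itself into two windows: a scrolling window that behaves roughly like a plain terminal for typing Logo commands, and a second window used as a drawing canvas for the icon. This is not a true non-ncurses area (as explained above, that would need its own terminal emulator), but it may be close enough for the assignment. A minimal sketch; the layout, prompt, and placeholder "icon" are my own choices, not anything from the question:

    #include <ncurses.h>

    int main(void)
    {
        initscr();
        cbreak();

        int half = COLS / 2;

        /* Left half: scrolling "console-like" window for Logo commands. */
        WINDOW *console = newwin(LINES, half, 0, 0);
        scrollok(console, TRUE);   /* let output scroll like a plain terminal */
        echo();                    /* echo typed characters as usual */

        /* Right half: drawing area for the icon. */
        WINDOW *canvas = newwin(LINES, COLS - half, 0, half);
        box(canvas, 0, 0);
        mvwaddch(canvas, LINES / 2, (COLS - half) / 2, '@');  /* the "icon" */
        wrefresh(canvas);

        char line[256];
        wprintw(console, "logo> ");
        wrefresh(console);
        while (wgetnstr(console, line, sizeof line - 1) != ERR) {
            wprintw(console, "you typed: %s\nlogo> ", line); /* interpret here */
            wrefresh(console);
        }

        endwin();
        return 0;
    }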

Related

Can I make a terminal program in C to edit photos in GIMP [macOS]?

I am editing a large batch of photos using the same steps, and want to create a program to run through terminal that will run the process for me. I am comfortable with writing in C, but I am unsure of how to start on the code/what commands to use.
When I am in GIMP, I start by opening a .xcf file, and importing the photo I wish to edit in as the bottom layer. Next, I resize the layer to 1000px wide. After that, I edit the curves with a preset I have saved, and then do the same with the brightness controls. Finally, I export the file as a .png with a specific name: 01-0xx.png, based on the number of the photo in the set.
This sounds like a job for macros or the automation tools available in Gimp:
Ref: Gimp Automate Editing https://www.gimp.org/tutorials/Automate_Editing_in_GIMP/
This tutorial will describe and provide examples for two types of automation functions. The first function is a tool to capture and execute “Macro” commands. The second function is a set of Automation Tools to capture and run a “Flow” or “Process”. The code for this tutorial is written using Gimp-Python and should be platform portable – able to run on either Linux or Windows operating systems.
The goal of these functions is to provide tools that speed up the editing process, make the editing process more repeatable, and reduce the amount of button pushing the user has to do. Taking over the button pushing and book-keeping chores allows the user to focus on the more creative part of the editing process.
I haven't ever used GIMP, but programs of this sort typically have automation scripting support, and this is the right place to start.
It could be done in C, but the learning curve is steep.
You can write Gimp scripts in Scheme (Lisp) or Python, and if you know C you can learn enough Python in a couple of hours. See an example of a Python batch script here.
Side note #1: Curves + Brightness/Contrast can be done in a single call to Curves (with a different curve, of course). Each operation entails some color loss, so the fewer, the better.
Side note #2: It may be simpler to do this without Gimp, using:
The ImageMagick toolbox (commands called from a shell script)
An image library in any language ("Pillow" for Python).
Your Curves preset is just what is called a "CLUT" (Color Look-Up Table).
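For example, the ImageMagick route can be driven from C simply by building command lines and calling system(). Everything in the sketch below is an assumption made for illustration: the input names, the "magick" command from ImageMagick 7 (use "convert" on version 6), and -brightness-contrast standing in for the saved Curves preset, which would really need to be recreated as a CLUT.

    /* Hypothetical batch-resize/export loop that shells out to ImageMagick.
       Output names follow the 01-0xx.png pattern from the question. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        char cmd[512];
        for (int i = 1; i <= 99; i++) {          /* assumes photos 1..99 */
            /* assumes inputs are named photo-001.jpg, photo-002.jpg, ... */
            snprintf(cmd, sizeof cmd,
                     "magick photo-%03d.jpg -resize 1000x "
                     "-brightness-contrast 5x10 01-0%02d.png", i, i);
            if (system(cmd) != 0)
                fprintf(stderr, "command failed: %s\n", cmd);
        }
        return 0;
    }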

Autohotkey - How to detect all input areas/checkboxes in an application?

Is there a way to detect input areas such as textboxes and checkboxes within an application? I want to label each input area with a number so I can jump between input fields with AHK using my keyboard.
For example: Once the script is activated and active window is Google Chrome, Chrome could have its address bar labeled #1. When I press "1", the cursor will be directed to that area.
I'm basically trying to create a workaround for applications that are not very keyboard friendly.
Most Windows applications use standard Windows controls.
For these:
https://autohotkey.com/docs/commands/WinGet.htm - with the ControlList parameter, gets a list of all standard controls in a window.
For each of those controls:
https://autohotkey.com/docs/commands/ControlGet.htm - can get the type of control, and
https://autohotkey.com/docs/commands/ControlGetPos.htm - can get the position and dimensions of the control.
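To see what those commands can and cannot find, the same information is available directly from the Win32 API that AHK presumably wraps. A rough C sketch; the Notepad window title is just a placeholder:

    /* List the class names and positions of the standard controls inside a
       window - roughly what WinGet ControlList / ControlGetPos expose. */
    #include <windows.h>
    #include <stdio.h>

    static BOOL CALLBACK list_control(HWND hwnd, LPARAM lparam)
    {
        char cls[128];
        RECT r;
        (void)lparam;
        GetClassNameA(hwnd, cls, sizeof cls);
        GetWindowRect(hwnd, &r);
        printf("%-20s at (%ld,%ld) size %ldx%ld\n", cls, r.left, r.top,
               r.right - r.left, r.bottom - r.top);
        return TRUE;   /* keep enumerating */
    }

    int main(void)
    {
        HWND top = FindWindowA(NULL, "Untitled - Notepad");  /* placeholder */
        if (!top) {
            fprintf(stderr, "window not found\n");
            return 1;
        }
        EnumChildWindows(top, list_control, 0);
        return 0;
    }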
Some can also be controlled through COM: https://gist.github.com/kheybot/7026077#automation-of-office-applications
Command-line and console programs can sometimes be communicated with directly, using the standard streams (STDIN, STDOUT, STDERR, LPTn, PRN, NUL), or you can communicate with the terminal that displays the program using COM or WSH:
https://gist.github.com/kheybot/7026077#interact-with-command-line
This is important for a lot of legacy data-entry programs.
Browsers (e.g. Chrome), unfortunately, can't use these heavyweight controls for page content, because there may be far too many elements on a page, but there are other options for communicating with them, such as COM, DDE, etc. to talk to the DOM:
https://gist.github.com/kheybot/7026077#browser-automation
For a web browser, I'd be inclined to go for a hybrid approach: combine AHK handling of the browser's own input areas (address bar, etc.) with a Greasemonkey/Tampermonkey script to handle input fields within the web page itself - the JavaScript will be better able to handle input areas via the DOM than any screen-scraping software could. There's also the possibility of using a functional-testing suite like Selenium for automation, or using the browser's plug-in functionality to write an extension to handle its UI.
This would mean that you now have TWO programming problems, of course...
Java applications, Flash applications, HTML5 applications, some graphic design software, and just about all computer games are essentially just graphics, with no way of externally identifying controls.
For these, you have to use basic screen scraping techniques: http://www.autohotkey.com/docs/commands/ImageSearch.htm and http://www.autohotkey.com/docs/commands/PixelSearch.htm to identify specific areas, which can only really be done by individually programming the specific control.
One option for generic detection, though, is to have something that detects shadows (drop shadows, buttonized components, etc.) and allows you to tab between and send a click to the components detected that way. Unfortunately, modern flat design means this won't always work, so you could also try searching for flat-colored rectangles... except sometimes they have curved corners. Because graphic designers hate people.
At this point, you will hopefully see that what you have here is an infinite rabbithole of fractal complexity.
You can make a simple ControlGet solution which doesn't work for a lot of applications you would use regularly... or you can create a hybrid approach that targets many applications individually, while also trying to have a generic solution for unrecognized apps.
If you are creating this for your own use, I'd say aim for making it work with the apps you know and use regularly, and that should be enough.
If you're writing it as accessibility software for others to use, I'd say aim for having it user-configurable for each application: let them control what input element they want to click, and in what order, because auto-detection will never work perfectly, and will only rarely pick the ideal solution.
The answer is yes, if the number of checkboxes and their positions in the application are fixed and you know on which machine the automation takes place.
Please look into ImageSearch to learn how to locate them from screenshots.
If you know the X/Y position of the checkbox in the window, you can also use PixelGetColor to check whether a check mark is visible or not.
You should also examine your application with the included AutoIt Spy. This program shows you what it can see in the application window.
To get your labelling, check out the Gui commands. If you make your GUI transparent and don't give it focus, you can draw labels on top of the application.

Terminal behavior within program

I'm using the termcap library for my UI, and I wish to know if there is some way to change how the terminal emulator behaves.
e.g.: enable the terminal scrollback buffer (termcap flags 'da' and 'db' set to one)
Thank you
The termcap library does not modify the behavior of the terminal emulator. Instead, it provides an application with details about the capabilities of the terminal. Because different terminals may have similar capabilities, there are conventional names for the more common features.
The features you asked about are summarized in the terminfo(5) manual page as
memory_above    da    da    display may be retained above the screen
memory_below    db    db    display may be retained below the screen
The descriptions are terse, and might be improved by relating them to examples. However, these features are not often implemented in terminals because they do not correspond to anything in the ECMA-48 standard (also too terse). Looking at the terminal database, most of those which implemented them are HP terminals (and the emulator hpterm). Having used HP terminals (long ago), I think these capabilities describe a full-screen mode in which the terminal would echo cursor-keys as actual cursor movement, and allow vertical scrolling as a side effect. When doing this, the screen contents were not lost, but retained, and could be scrolled back into view.
None of the terminals you are likely to encounter support a feature like this.
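From the application side, the most termcap/terminfo can do is report whether the terminal claims the capability; there is no request to switch it on. A small sketch using the ncurses terminfo interface (link with -lncurses or -ltinfo):

    /* Query the da/db boolean capabilities for the current terminal.
       tigetflag() returns 1 if the flag is present, 0 if absent, and -1 if
       the name is not a boolean capability at all. */
    #include <stdio.h>
    #include <curses.h>
    #include <term.h>

    int main(void)
    {
        int err;
        if (setupterm(NULL, 1, &err) != OK) {
            fprintf(stderr, "setupterm failed (%d)\n", err);
            return 1;
        }
        printf("da (memory_above): %d\n", tigetflag("da"));
        printf("db (memory_below): %d\n", tigetflag("db"));
        return 0;
    }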

Hook keyboard shortcuts from Windows lock screen

I have an Arduino in keyboard-emulation mode that sends keystrokes to the computer it's connected to; the computer appends a log line to a local webpage upon receiving each keystroke. The log program is coded in C using the Win32 API.
Now, since it's supposed to function at work (the idea is to get a log file online of when pushbuttons on my desk have been activated), I will be locking my computer...
How can I keep processing CTRL+ALT+key strokes from the windows lock screen?
Thanks,
Mister Mystère
This seems to work: https://www.codeproject.com/Articles/19004/A-Simple-C-Global-Low-Level-Keyboard-Hook
When you run the compiled executable, the keys A and B are detected globally even when the screen is locked.
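For reference, the core of that technique translated into plain C/Win32 looks roughly like the sketch below (a WH_KEYBOARD_LL hook, not the article's own code). Whether such a hook still receives keys typed on the lock screen's secure desktop is exactly the point in dispute here, so treat it as something to test rather than a guarantee:

    #include <windows.h>
    #include <stdio.h>

    static LRESULT CALLBACK kbd_proc(int code, WPARAM wparam, LPARAM lparam)
    {
        /* WM_SYSKEYDOWN covers Alt combinations such as Ctrl+Alt+key. */
        if (code == HC_ACTION &&
            (wparam == WM_KEYDOWN || wparam == WM_SYSKEYDOWN)) {
            KBDLLHOOKSTRUCT *k = (KBDLLHOOKSTRUCT *)lparam;
            printf("vk=0x%02lX\n", k->vkCode);   /* log the virtual-key code */
        }
        return CallNextHookEx(NULL, code, wparam, lparam);
    }

    int main(void)
    {
        HHOOK hook = SetWindowsHookExW(WH_KEYBOARD_LL, kbd_proc,
                                       GetModuleHandleW(NULL), 0);
        if (!hook) {
            fprintf(stderr, "SetWindowsHookEx failed\n");
            return 1;
        }
        MSG msg;                     /* the hook needs a message loop to run */
        while (GetMessageW(&msg, NULL, 0, 0) > 0) {
            TranslateMessage(&msg);
            DispatchMessageW(&msg);
        }
        UnhookWindowsHookEx(hook);
        return 0;
    }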
I ended up downloading a third-party lock screen, and my program works in its background since it is a standard program. For those of you in the same situation: as far as I can tell after all that research, I'm afraid you'll have to do the same, as it does not seem to be possible starting from Vista.

Opening Windows console programs in Full Screen Mode

I am developing a C program that prints out a message. The problem is that when I run its .exe file, it does not run in full screen until I press Alt+Enter to force it. I want the program to start in full-screen mode by itself when I run it. Is there any way I can do that?
Thanks in advance.
You could call SetConsoleDisplayMode() to force CONSOLE_FULLSCREEN_MODE. Beware that support for this has been disappearing. The last machine I owned that could still do this has been gathering dust for quite a while already. Along with the memory of the loud relay clicking sound, mixed with the high-pitched wail of the flyback transformer in the CRT.
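If you want to try it anyway, a minimal sketch looks like this; expect the call to fail with an error on recent versions of Windows, for the reasons above:

    /* Ask the console to switch to hardware full-screen mode. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE out = GetStdHandle(STD_OUTPUT_HANDLE);
        COORD dim;
        if (!SetConsoleDisplayMode(out, CONSOLE_FULLSCREEN_MODE, &dim)) {
            printf("SetConsoleDisplayMode failed, error %lu\n", GetLastError());
            return 1;
        }
        printf("full screen: %d x %d characters\n", dim.X, dim.Y);
        return 0;
    }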
