Reading keyboard state in C

I am writing a C console application (for the Windows platform, using the Microsoft C++ compiler) which requires reading the state of some keyboard keys at very short time intervals (on the order of a couple of milliseconds). The read state is then fed into an FSM which provides rich key events (KEY_UP, KEY_RELEASED, KEY_DOWN, KEY_HELD_FOR_LONG_PERIOD, etc.) to the rest of the application logic. (Basically, I am porting an embedded application to the Windows platform.)
I don't know how to read key states, so I googled and came across this answer. From what I understand, it basically scans the console input events for any key (or mouse) events.
The provided answer is a good starting point, but the problem I face is that between successive 'reads' of the keyboard state (when the time lag between successive reads is less than 50 ms), I get different answers (at times pressed, at times released) even when the key remains physically pressed. This messes up the FSM logic. This is probably expected behavior, considering the console might not receive new keyboard events in such a short time span, but unfortunately it doesn't solve my problem.
So how can I:
1. Read the RAW state of keyboard keys through some API? (It has to be consistent between successive reads in short time frames.)
2. Or get a .NET-like equivalent of KEY_UP/KEY_DOWN events (or messages, callbacks, whatever is possible in C) over which I can write a little wrapper so that I don't have to change the FSM logic?
I have a limited understanding of the Windows APIs available for the task at hand. I am mainly an embedded/C# guy who either works bare-metal (when developing firmware) or uses the .NET Framework (when developing for Windows).

You can read the raw keyboard state with the GetAsyncKeyState or GetKeyboardState APIs.
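For example, a minimal polling sketch using GetAsyncKeyState might look like this (the 2 ms interval and the FSM hook are placeholders). The high-order bit of the return value reflects whether the key is physically down at the moment of the call, independent of the console event queue:

#include <windows.h>
#include <stdbool.h>

// Poll the physical key state directly; successive reads stay consistent
// while the key is held down, even at millisecond intervals.
bool is_key_down(int vkey)
{
    // High-order bit set => key is currently down.
    return (GetAsyncKeyState(vkey) & 0x8000) != 0;
}

int main(void)
{
    for (;;)
    {
        bool a_down = is_key_down('A');        // letters/digits use their ASCII code
        bool esc    = is_key_down(VK_ESCAPE);  // other keys use VK_* constants

        // feed a_down into the FSM here ...
        if (esc)
            break;

        Sleep(2);  // ~2 ms polling interval (placeholder)
    }
    return 0;
}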

After arx's suggestion, I looked back at the API and the keyboard events and figured out that it was my design for dealing with events that was giving me a problem. I was polling the events as if they were 'signals' for the current state of keys, whereas events actually represent a 'change' in the signal.
The original design was something like:
if ( there_is_an_event_for(KEY_A) )   // I assumed that an event represents a key-high state
    Update_fsm_for_KEY_A(with_high_signal);
else
    Update_fsm_for_KEY_A(with_low_signal);
Since the console only gets events upon key_pressed and key_released, this design was inappropriate. Here is the new design:
static bool last_key_state = false;

if ( there_is_a_key_pressed_event_for(KEY_A) )
    last_key_state = true;
else if ( there_is_a_key_released_event_for(KEY_A) )
    last_key_state = false;

Update_fsm_for_KEY_A(last_key_state);
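As a rough, concrete sketch of the same idea using the console input API (the state variable and the commented-out FSM hook are the hypothetical names from above):

#include <windows.h>
#include <stdbool.h>

// Console key events report *changes*, so the last reported state is
// remembered between polls and fed into the FSM on every cycle.
static bool key_a_state = false;   // last known state of 'A'

void poll_console_key_events(void)
{
    HANDLE hin = GetStdHandle(STD_INPUT_HANDLE);
    DWORD pending = 0;

    GetNumberOfConsoleInputEvents(hin, &pending);
    while (pending--)
    {
        INPUT_RECORD rec;
        DWORD read = 0;
        ReadConsoleInput(hin, &rec, 1, &read);

        if (rec.EventType == KEY_EVENT &&
            rec.Event.KeyEvent.wVirtualKeyCode == 'A')
        {
            // bKeyDown is TRUE for press/auto-repeat, FALSE for release.
            key_a_state = (rec.Event.KeyEvent.bKeyDown != FALSE);
        }
    }
    // Update_fsm_for_KEY_A(key_a_state);  // hypothetical FSM hook from above
}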
Thank you everyone for your efforts and help. I am posting and accepting this as an answer; who knows, this might help someone who is struggling with the same problem.

Related

C design pattern for performing a list of actions without blocking?

Embedded C. I have a list of things I want to do, procedurally, mostly READ, WRITE and MODIFY actions, each acting on the result of the last step. They can take up to 2 seconds each, and I can't block.
Each action can have states of COMPLETE and ERROR, where ERROR has sub-states for the reason the error occurred. On COMPLETE I'll want to check or modify some data.
Each list of actions is a big switch, and to re-enter it I keep track of which step I'm on; on success I do step++ and come back in further down the list next time (see the sketch below).
Pretty simple, but I'm finding that to avoid blocking I'm spending a ton of effort checking states, errors and edge cases, constantly, over and over.
I would say 80% of my code is just checks and moving the system along. There has to be a better way!
Are there any design patterns for "do a thing asynchronously and come back later for the results" that efficiently handle some of this exception/edge handling?
Edit: I know how to use callbacks, but I don't really see that as "a solution", as I just need to get back to a different part of the same list for the next thing to do. Maybe it would be beneficial to know how async and await work under the hood in other languages?
Edit2: I do have an RTOS for other projects, but for this specific question assume no threads/tasks, just a bare-metal superloop.
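For reference, a rough sketch of the re-entrant "big switch" pattern described above, with all names hypothetical and the status checks reduced to trivial stubs:

#include <stdbool.h>

/* Hypothetical stand-ins for polling real hardware or driver status. */
static bool read_complete(void)  { return true; }   /* stub */
static bool read_error(void)     { return false; }  /* stub */
static bool write_complete(void) { return true; }   /* stub */
static bool write_error(void)    { return false; }  /* stub */
static void modify_data(void)    { }                /* stub */

typedef enum { STEP_READ, STEP_MODIFY, STEP_WRITE, STEP_DONE, STEP_FAILED } step_t;
static step_t step = STEP_READ;

/* Called from the superloop; does at most one small, non-blocking piece of
 * work per call and remembers where to resume next time. */
void run_action_list(void)
{
    switch (step)
    {
    case STEP_READ:
        if (read_complete())       step = STEP_MODIFY;
        else if (read_error())     step = STEP_FAILED;
        break;
    case STEP_MODIFY:
        modify_data();             /* quick, synchronous */
        step = STEP_WRITE;
        break;
    case STEP_WRITE:
        if (write_complete())      step = STEP_DONE;
        else if (write_error())    step = STEP_FAILED;
        break;
    default:
        break;                     /* STEP_DONE / STEP_FAILED: nothing left to do */
    }
}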
Your predicament is a perfect fit for state machines (really, probably UML statecharts). Each request can be handled by its own state machine, which handles events (such as COMPLETE or ERROR indications) in a non-blocking, run-to-completion manner. As the events come in, the request's state machine moves through its different states towards completion.
For embedded systems, I often use the QP event-driven framework for such cases. In fact, when I looked up this link, I noticed the very first paragraph uses the term "non-blocking". The framework provides much more than state machines with hierarchy (states within states), which is already very powerful.
The site also has some good information on approaches to your specific problem. I would suggest starting with the site's Key Concepts page.
To give you a taste of the content and its relevance to your predicament:
In spite of the fundamental event-driven nature, most embedded systems are traditionally programmed in a sequential manner, where a program hard-codes the expected sequence of events by waiting for the specific events in various places in the execution path. This explicit waiting for events is implemented either by busy-polling or blocking on a time-delay, etc.

The sequential paradigm works well for sequential problems, where the expected sequence of events can be hard-coded in the sequential code. Trouble is that most real-life systems are not sequential, meaning that the system must handle many equally valid event sequences. The fundamental problem is that while a sequential program is waiting for one kind of event (e.g., timeout event after a time delay) it is not doing anything else and is not responsive to other events (e.g., a button press).

For this and other reasons, experts in concurrent programming have learned to be very careful with various blocking mechanisms of an RTOS, because they often lead to programs that are unresponsive, difficult to reason about, and unsafe. Instead, experts recommend [...] event-driven programming.
You can also do state machines yourself without using an event-driven framework like the QP, but you will end up re-inventing the wheel IMO.
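As a minimal illustration of the hand-rolled alternative, here is a sketch of a small event-driven state machine in plain C (not QP; all names are hypothetical). Unlike the step-polling switch sketched in the question, the caller delivers COMPLETE/ERROR events and each dispatch runs to completion without blocking:

/* Events delivered by the superloop or driver code when something finishes. */
typedef enum { EV_START, EV_COMPLETE, EV_ERROR } event_t;
typedef enum { ST_IDLE, ST_READING, ST_WRITING, ST_DONE, ST_FAILED } state_t;

typedef struct {
    state_t state;
} request_fsm_t;

/* Run-to-completion event handler: never blocks, just moves the FSM along. */
void request_fsm_dispatch(request_fsm_t *fsm, event_t ev)
{
    switch (fsm->state)
    {
    case ST_IDLE:
        if (ev == EV_START)      { /* start_read(); */  fsm->state = ST_READING; }
        break;
    case ST_READING:
        if (ev == EV_COMPLETE)   { /* start_write(); */ fsm->state = ST_WRITING; }
        else if (ev == EV_ERROR) fsm->state = ST_FAILED;
        break;
    case ST_WRITING:
        if (ev == EV_COMPLETE)   fsm->state = ST_DONE;
        else if (ev == EV_ERROR) fsm->state = ST_FAILED;
        break;
    default:
        break;   /* ST_DONE / ST_FAILED are terminal */
    }
}

The superloop (or interrupt-driven driver code) calls request_fsm_dispatch() whenever a hardware or timer event arrives, so no step ever busy-waits.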

X11/Xlib: virtual keyboard input and keyboard mapping synchronization issue

For an automated test application I have to simulate a large amount of Unicode keyboard input into an old X11 application (to which I have no source access).
My program takes the input from a UCS-2 LE encoded input stream via stdin, and the basic operation is as follows:
1. Save the current keyboard layout and lock modifiers (XDisplayKeycodes, XGetKeyboardMapping, XkbGetState)
2. Unlock the active modifiers (XkbLockModifiers)
3. Disable all X11 slave keyboard devices via the XInput2 extension
4. Read input into a key-press queue until n unique symbols are encountered, where n is the number of possible keycodes as returned by XDisplayKeycodes
5. Map these n unique X11 KeySyms onto the n available KeyCodes via XChangeKeyboardMapping
6. Type the correct KeyCodes for all enqueued KeySyms via XTestFakeKeyEvent (a rough sketch of steps 5 and 6 follows below)
7. Clear the queue and continue at 4. until no more input is available
8. Reactivate the keyboards and restore the initial modifiers and mappings
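A rough, simplified sketch of steps 5 and 6 (error handling, the queueing logic and modifier handling are omitted; one KeySym per KeyCode is assumed, and n_syms must not exceed the number of available KeyCodes):

#include <X11/Xlib.h>
#include <X11/extensions/XTest.h>

void map_and_type(Display *display, KeySym *syms, int n_syms)
{
    int min_kc, max_kc;
    XDisplayKeycodes(display, &min_kc, &max_kc);

    /* Step 5: remap the available KeyCodes to the queued KeySyms
       (keysyms_per_keycode == 1 for simplicity). */
    XChangeKeyboardMapping(display, min_kc, 1, syms, n_syms);
    XFlush(display);

    /* Step 6: synthesize press/release for each remapped KeyCode. */
    for (int i = 0; i < n_syms; ++i)
    {
        XTestFakeKeyEvent(display, (unsigned)(min_kc + i), True, 0);
        XTestFakeKeyEvent(display, (unsigned)(min_kc + i), False, 0);
    }
    XFlush(display);
}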
Basically, this system works better and is much more performant than any virtual X11 key-input tool I've seen so far.
However, there is an issue I can currently only fix using ugly delays:
Like any other X11 application, the target application receives a MappingNotify (request == Keyboard) event from the X server after my application succeeds in changing the keyboard mapping table.
The usual response of an X11 client is to call XRefreshKeyboardMapping to update Xlib's knowledge of the new keyboard layout.
Now, if the client has some lag in processing its X11 event queue, the XRefreshKeyboardMapping call might return a mapping that is too recent, i.e. already some generations too far in the future.
E.g. my input generator has already done the fourth XChangeKeyboardMapping when the target application just arrived at handling the second MappingNotify event in its XEvent queue handler.
Actually it should get the second generation of the map, which isn't available at the X server anymore at that time.
Unfortunately there is no map id or version of any kind in the keyboard MappingNotify event so that XRefreshKeyboardMapping could refer to a specific map ... and the X server does not seem to keep a history either.
The result is that the X11 application's KeyCode to KeySym conversion operates with an invalid layout and generates wrong KeySyms.
So basically I have to wait until all clients (or at least the one with the input focus) have requested and received my last XChangeKeyboardMapping map before I am allowed to do the next XChangeKeyboardMapping.
I can fix 99.9% of the errors using a delay before XChangeKeyboardMapping; that delay is calculated by some ugly witchcraft (amount of key strokes etc.) and is way too high if 100% accuracy has to be achieved.
So my question is whether there is any way to be programmatically notified, or to check, that an X11 client has completed XRefreshKeyboardMapping, or that its map is in sync with the server's map?
If not, is there a way to get another X11 client's current mapping via Xlib (to check whether its map is current)?
Thanks for any hints!
I've done something similar on Windows in the past. I had the luxury of being able to use the SendInput function, which accepts a KEYBDINPUT structure with the KEYEVENTF_UNICODE flag. Unfortunately, X11 does not support directly synthesizing keystrokes for Unicode characters.
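For reference, a minimal sketch of that Windows-side technique might look like the following (one UTF-16 code unit per call; characters outside the BMP would need a surrogate pair, i.e. two such events):

#include <windows.h>

/* Inject a single Unicode code unit via SendInput with KEYEVENTF_UNICODE. */
void send_unicode_char(wchar_t ch)
{
    INPUT in[2] = {0};

    in[0].type = INPUT_KEYBOARD;
    in[0].ki.wScan = ch;                 /* UTF-16 code unit */
    in[0].ki.dwFlags = KEYEVENTF_UNICODE;

    in[1] = in[0];
    in[1].ki.dwFlags = KEYEVENTF_UNICODE | KEYEVENTF_KEYUP;

    SendInput(2, in, sizeof(INPUT));
}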
Since I cannot comment yet, I'm forced to give a suggestion as an answer:
Have you considered using the clipboard instead, in order to transfer your "Unicode input" into this X11 application's input field?
You also might consider using direct Unicode input if that application uses a toolkit that supports this:
E.g. programs based on GTK+ (that includes all GNOME applications) support Unicode input.
Hold Ctrl + Shift and type u followed by the Unicode hex digits, then release Ctrl and Shift again.
I guess it should be easy to synthesize these sequences using the XTest extension.
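For instance, a rough (untested) sketch of synthesizing that sequence with XTest might look like this, assuming the focused widget is a GTK+ entry that honours the Ctrl+Shift+U input method:

#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/keysym.h>
#include <X11/extensions/XTest.h>

/* Press and release the key for one KeySym. */
static void tap(Display *d, KeySym sym)
{
    KeyCode kc = XKeysymToKeycode(d, sym);
    XTestFakeKeyEvent(d, kc, True, 0);
    XTestFakeKeyEvent(d, kc, False, 0);
}

void type_unicode_gtk(Display *d, unsigned long codepoint)
{
    char hex[16];
    snprintf(hex, sizeof hex, "%lx", codepoint);

    KeyCode ctrl  = XKeysymToKeycode(d, XK_Control_L);
    KeyCode shift = XKeysymToKeycode(d, XK_Shift_L);

    /* Hold Ctrl+Shift, type 'u' and the hex digits, then release. */
    XTestFakeKeyEvent(d, ctrl,  True, 0);
    XTestFakeKeyEvent(d, shift, True, 0);
    tap(d, XK_u);
    for (const char *p = hex; *p; ++p) {
        char s[2] = { *p, '\0' };
        tap(d, XStringToKeysym(s));   /* '0'-'9', 'a'-'f' */
    }
    XTestFakeKeyEvent(d, shift, False, 0);
    XTestFakeKeyEvent(d, ctrl,  False, 0);
    XFlush(d);
}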

Differences between working with states and windows (time) in Flink streaming

Let's say we want to compute the sum and average of the items, and we can work either with states or with windows (time).
Example working with windows -
https://ci.apache.org/projects/flink/flink-docs-release-0.10/apis/streaming_guide.html#example-program
Example working with states -
https://github.com/dataArtisans/flink-training-exercises/blob/master/src/main/java/com/dataartisans/flinktraining/exercises/datastream_java/ride_speed/RideSpeed.java
Can I ask what the reasons would be for making this decision? Can I infer that if the data arrives very irregularly (50% arrives within the defined window length and the other 50% doesn't), the result of the window approach is more biased (because that 50% of the events is dropped)?
On the other hand, do we spend more time checking and updating state when working with states?
First, it depends on your semantics... The two examples use different semantics and are thus not directly comparable. Furthermore, windows work with state internally, too. It is hard to say in general which approach is the better one.
As Flink's window semantics are very rich, I would suggest using windows. If you cannot express your semantics with windows, using state can be a good alternative. Using windows has the additional advantage that state handling (which is hard to get right) is done automatically for you.
The decision is definitely independent of your data arrival rate. Flink does not drop any data. If you work with event time (rather than processing time), your result will be the same regardless of the data arrival rate.

Radio buttons not selecting in old program

I wrote a large, complex C program around 20(!) years ago. As far as I can recall it worked fine at the time in all respects; it was probably running on Windows 95.
Now I need to use it again. Unfortunately, the radio buttons in it no longer appear to work properly (the ordinary push buttons are all behaving correctly). As I click on a radio button, I get some feedback that Windows is acknowledging my click, inasmuch as I see a dotted line appear around the button's text and the circle of the button goes grey for as long as my finger is on the button, but when I take my finger off I see that the selected button has not changed.
My suspicion is that I was perhaps getting away with some bad practice at the time which worked with Windows 95 but no longer works on newer versions of Windows, but I'm struggling to work out what I did wrong. Any ideas?
EDIT: It's difficult to extract the relevant code because the message handling in this program was a tangled nightmare. Many buttons were created programmatically at runtime, and there were different message loops running when the program was in different modes of operation. The program was a customisable environment for running certain types of experiment. It even had its own built-in interpreted language! So I'm not expecting an answer like "you should have a comma instead of a semicolon at line 47", but perhaps something more like "I observed similar symptoms once in my program and it turned out to be ....." or perhaps "the fact that the dotted rectangle is appearing means that process AAA has happened, but maybe step BBB has gone wrong".
EDIT: I've managed to extract some key code which may contain an error...
char *process_messages_one_at_a_time()
{
    MSG msg;
    int temp;

    temp = PeekMessage(&msg, winh, 0, 0, PM_NOREMOVE);
    if (temp)
    {
        GetMessage(&msg, NULL, 0, 0);
        if (msg.message == WM_LBUTTONUP)
        {
            mouse_just_released_somewhere = TRUE;
        }
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }

    if (button_command_waiting)
    {
        button_command_waiting = FALSE;
        return (button_command_string);
    }
    else
    {
        return (NULL);
    }
}
There are two simple things to check when using radio buttons. The first is to make sure that each has the BS_AUTORADIOBUTTON style set. The second is to make sure that the first button in the tab order and the next control after the set of buttons (typically a group box) have the WS_GROUP style set, while the other buttons have it clear.
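As a hedged illustration (the control IDs, positions, and parent/instance handles are placeholders), creating such a group programmatically might look roughly like this:

#include <windows.h>

#define ID_OPT1 101   /* hypothetical control IDs */
#define ID_OPT2 102

/* The first button opens the group with WS_GROUP; the following buttons must
 * not set WS_GROUP, and whatever control follows the group should set it
 * again to close the group. */
void create_radio_group(HWND parent, HINSTANCE hInst)
{
    CreateWindow(TEXT("BUTTON"), TEXT("Option 1"),
        WS_CHILD | WS_VISIBLE | WS_TABSTOP | WS_GROUP | BS_AUTORADIOBUTTON,
        10, 10, 120, 24, parent, (HMENU)(INT_PTR)ID_OPT1, hInst, NULL);

    CreateWindow(TEXT("BUTTON"), TEXT("Option 2"),
        WS_CHILD | WS_VISIBLE | BS_AUTORADIOBUTTON,
        10, 40, 120, 24, parent, (HMENU)(INT_PTR)ID_OPT2, hInst, NULL);
}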
A few suggestions:
I'd try using Spy++ to monitor the messages in that dialog box, particularly to and from the radio-button controls. I wonder if you'll see a BM_SETCHECK that your program is sending (i.e., somewhere you're unchecking the button programmatically).
Any chance your code ever checks the Windows version number? I've been burned a few times with an == where I should have used a >= to ensure version checking compatibility.
Do you sub-class any controls? I don't remember, but it seems to me there were a few ways sub-classing could go wrong (and the effects weren't immediately noticeable until newer versions of Windows rolled in).
Owner-drawing the control? It's really easy for owner-draw to not work with newer Windows GUI styles.
Working with old code like that, the memories come back to me in bits and pieces, rather than a flood, so it usually takes some time before it dawns on me what I was doing back then.
If you just want to get the program running to use it, might I suggest "compatibility mode".
http://www.howtogeek.com/howto/windows-vista/using-windows-vista-compatibility-mode/
However, if you expect a longer useful life for the software, you might want to consider rewriting it. Rewriting it is nowhere near the complexity or work of the initial write, because of a few factors:
Developing the requirements of a program is a substantial part of the required work in making a software package (the requirements are already done)
A lot of the code is already written and only parts may need to be slightly refactored in order to be updated
New library components may be more stable alternatives to parts of the existing codebase
You'll learn how to write current applications with current library facilities
You'll have an opportunity to comment or just generally refactor and cleanup the code (thus making it more maintainable for the anticipated, extended life)
The codebase will be more maintainable/compatible going forward for additional changes in both requirements and operating systems (both because it's updated and because you've had the opportunity to re-understand the entire codebase)
Hope that helps...

GLUT doesn't properly detect more than 2 keys pressed?

I'm trying to make a small game using (free)GLUT. I know that it's old and there are better alternatives, but currently I prefer to stick with it and use it as much as possible. I program with C.
I'm currently trying to make GLUT detect properly all the keys I press.
I use glutKeyboardFunc, glutKeyboardUpFunc, glutSpecialFunc and glutSpecialUpFunc to detect pressed keys and I store their state in a short array I created (I currently have only 5 usable keys, so I just created a specific array for them).
However, while everything works fine for 2 keys or fewer, the game doesn't properly detect 3 keys or more. For some keys it detects the combination properly (that actually happens for only 1 specific combination), but for others the functions simply don't detect the third key that I press.
I checked my code a few times, and there is nothing special about the combination that does work.
I also made glutKeyboardFunc and glutSpecialFunc directly print every key press that they receive, and it seems they simply stop working after I press more than 2 keys.
Is it a known issue with GLUT or something? I googled a lot and didn't find anyone with a similar issue.
I am not very familiar with GLUT, but as far as I know you should make sure that your keyboard supports more than 2 key presses at once. This feature is called n-key rollover. This page says that 2-key rollover may be a common limit for some keyboards, but you don't need to trust this source.
I'll clarify a point: glutKeyboardFunc registers a callback, i.e., it is invoked for every key press, executed over and over again, and all the if-else (or switch-case) statements for the various key combinations are evaluated each time. What this means is: if you were to press 'A', right arrow and 'D' all at once, the callback will be executed according to whichever key-press event was received first, sometimes with a delay, and sometimes the on-screen animation may stop momentarily.
GLUT is purely for educational/learning purposes and is not good for full-blown applications, since that's not what it was designed for. You end up using OS-specific libraries or other toolkits (e.g., Qt) to embed an OpenGL "window" and handle the keyboard events, etc. The event handling in those toolkits (and/or OS-specific frameworks) is radically different from (and better than) GLUT's.
You may want to keep your simultaneous key presses to a minimum. You may augment the keyboard with the mouse to get rid of the jerky response/processing...
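For completeness, here is a minimal sketch of the key-state-array approach the question describes (freeGLUT assumed; the array names and per-frame polling are illustrative only). Note that no software workaround helps if the keyboard hardware itself only reports 2-key rollover:

#include <GL/freeglut.h>

/* The game logic polls key_state[]/special_state[] once per frame instead of
 * acting inside the callbacks. */
static unsigned char key_state[256];
static unsigned char special_state[256];

static void key_down(unsigned char key, int x, int y) { key_state[key] = 1; }
static void key_up(unsigned char key, int x, int y)   { key_state[key] = 0; }
static void special_down(int key, int x, int y)       { if (key >= 0 && key < 256) special_state[key] = 1; }
static void special_up(int key, int x, int y)         { if (key >= 0 && key < 256) special_state[key] = 0; }

void register_keyboard_callbacks(void)
{
    glutKeyboardFunc(key_down);
    glutKeyboardUpFunc(key_up);
    glutSpecialFunc(special_down);
    glutSpecialUpFunc(special_up);
    glutIgnoreKeyRepeat(1);   /* report each physical press/release only once */
}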
