NT Registry Handle Behaviour

I am working on an application virtualization project, so I hook applications at the NT level and redirect their registry calls to my virtual registry. When I run an application and go to File -> Open..., I see a few registry calls like the following:
ZwOpenKey(registry key path) -> produces a handle, e.g. 0x04e8
ZwQueryKey(0x4ea, ...)
Process Monitor shows both the open and the query being performed on the same key, and I have tested and confirmed myself that it is the same key.
The query also returned the correct result for the ZwQueryKey API.
This difference of 2 in the handle value does not occur for every open/query pair.
How and why does the application change the handle from 0x4e8 to 0x4ea before it invokes ZwQueryKey?
I have also checked whether ZwDuplicateObject is invoked between the open and the query, but it is not.
Can anyone explain how this handle changes?

The lowest two bits of a handle aren't used by the kernel, so applications are free to set them to other values, and some APIs use these bits as additional flags rather than taking an extra parameter:
0x4ea & 0xffc == 0x4e8 & 0xffc
Raymond Chen did a series discussing possible uses for these bits:
Kernel handles are always a multiple of four; the bottom two bits are available for applications to use. But why would an application need those bits anyway?
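To make the arithmetic concrete, here is a minimal sketch (not from the question or the linked series) that compares two handle values after masking off the two low-order bits; it reports that 0x4e8 and 0x4ea refer to the same kernel object:

#include <windows.h>
#include <stdio.h>

/* Clear the two low-order "tag" bits before comparing handle values. */
static int same_kernel_handle(HANDLE a, HANDLE b)
{
    const ULONG_PTR mask = ~(ULONG_PTR)3;
    return ((ULONG_PTR)a & mask) == ((ULONG_PTR)b & mask);
}

int main(void)
{
    /* 0x4e8 and 0x4ea differ only in the bits the kernel ignores. */
    printf("%d\n", same_kernel_handle((HANDLE)0x4e8, (HANDLE)0x4ea)); /* prints 1 */
    return 0;
}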

How can I intercept and block the caps lock toggle event globally, leaving other events intact?

I need to check the toggled state of caps lock and block it.
I have tried using a low-level keyboard hook (SetWindowsHookEx with WH_KEYBOARD_LL), checking for wParam == WM_KEYDOWN || wParam == WM_SYSKEYDOWN and, in the KBDLLHOOKSTRUCT pointed to by lParam, vkCode == VK_CAPITAL || scanCode == 0x3A, but this results in me intercepting/blocking Caps Lock when it is held down/pressed, not when it is actually toggled.
It's important that I intercept the toggle event exclusively, because I don't want to rely on a single press of Caps Lock toggling its state, and I don't want to disrupt other events in case Caps Lock is being used as a modifier.
I'm currently using GetKeyState(VK_CAPITAL) & 1 to check the Caps Lock state in my window callback and forcing it back off with SendInput, but I would rather intercept/block it if at all possible.
I have tried Raw Input as well, and it generates a pair of RI_KEY_BREAK and RI_KEY_MAKE messages when Caps Lock gets toggled, but (unless I'm mistaken) there is no way to block keys based on WM_INPUT messages, and trying to synchronize a hook with Raw Input seems difficult because the hook always gets the events first.
Using GetKeyState or GetAsyncKeyState from a hook also doesn't seem to work, as they appear to see the event only after the hook.
Use GetAsyncKeyState to detect when/if the caps key is hit, and its current state (up or down).
Then call keybd_event (or SendInput) to programmatically set the caps key back to the state that you want it to be.
The following code snippet (along with other setup code) is included in this link, and will toggle CAPS lock on or off when executed:
RUN keybd_event ({&VK_CAPITAL}, 0, {&KEYEVENTF_KEYUP}, 0, OUTPUT intResult).
RUN keybd_event ({&VK_CAPITAL}, 0, {&KEYEVENTF_EXTENDEDKEY}, 0, OUTPUT intResult).
RUN keybd_event ({&VK_SHIFT}, 0, {&KEYEVENTF_KEYUP}, 0, OUTPUT intResult).
RUN keybd_event ({&VK_SHIFT}, 0, {&KEYEVENTF_EXTENDEDKEY}, 0, OUTPUT intResult).
The recommended way to deploy this implementation (the GetAsyncKeyState / keybd_event combination) within your application is to encapsulate it in a worker thread running in an endless loop, with Sleep() used so the state is sampled approximately every 100 ms.
(Note: I believe GetAsyncKeyState() is an improvement over GetKeyState() for what you want to do here, because GetKeyState() returns the key status from the thread's message queue, and that status does not reflect the interrupt-level state associated with the hardware. GetAsyncKeyState() specifies whether the key was pressed since the last call to GetAsyncKeyState(), and whether the key is currently up or down.) A reasonable and appropriate sample cycle using GetAsyncKeyState() is sufficient here.
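For illustration only, a rough sketch of that worker-thread idea (not the code from the link above): it watches for a physical Caps Lock press with GetAsyncKeyState and, after the key is released, sends a synthetic press+release via SendInput to toggle the state straight back. Very fast taps between samples can still slip through.

#include <windows.h>

/* Send one synthetic Caps Lock press+release, which toggles CAPS again. */
static void send_caps_toggle(void)
{
    INPUT in[2];
    ZeroMemory(in, sizeof in);
    in[0].type = in[1].type = INPUT_KEYBOARD;
    in[0].ki.wVk = in[1].ki.wVk = VK_CAPITAL;
    in[1].ki.dwFlags = KEYEVENTF_KEYUP;
    SendInput(2, in, sizeof(INPUT));
}

static DWORD WINAPI CapsWatcher(LPVOID unused)
{
    (void)unused;
    for (;;) {
        if (GetAsyncKeyState(VK_CAPITAL) & 0x8000) {          /* key is physically down */
            while (GetAsyncKeyState(VK_CAPITAL) & 0x8000)     /* wait for the release */
                Sleep(10);
            send_caps_toggle();    /* undo the toggle caused by the user's press */
        }
        Sleep(100);                /* ~100 ms sample cycle, as suggested above */
    }
    return 0;
}

int main(void)
{
    CreateThread(NULL, 0, CapsWatcher, NULL, 0, NULL);
    Sleep(INFINITE);               /* keep the demo process alive */
    return 0;
}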
The approach above consists of functions that run in user mode, so it is almost certainly limited to reaction algorithms (detect the toggle, then execute another toggle) as opposed to a true prevention algorithm (i.e. one that either remaps a key to a no-op at run time or traps the request at a low level).
Most true prevention algorithms would make use of kernel-mode driver calls, which are accessible and implementable via the WinAPI; the concepts are introduced (among other places) by burrowing down from the RAWKEYBOARD documentation into areas such as keyboard and mouse HID drivers.
A key-mapping approach
The method described below meets the primary need, i.e. to stop the Caps Lock key from toggling the keyboard into CAPS mode. However, once the key has been re-mapped it can no longer be used as a modifier (one of the criteria you list).
The uncap project worked (almost out-of-the-box) for me to disable the Caps Lock key.
Before trying it, I recommend going through the README.md to get the details. In short, it uses a key-map approach that allows keys to be mapped to different locations. I found it essentially does what it claims in terms of disabling Caps Lock, and it is capable of doing much more, which could be good or bad. Having the source code available allows you to create a pared-down version that simply disables the Caps Lock key, or to make other modifications as needed.
While exploring it, I found a couple of issues that I describe below under problems.
Note that the default behavior is to map Caps Lock key to VK_ESCAPE upon startup. I commented out the following line in the parseArguments(...) function to disable that feature so I could experiment with other mappings...
/*my.keymap[VK_CAPITAL] = VK_ESCAPE;*/
I used uncap.c as the only source file and the following on a Windows 10 machine:
gcc.exe -Wall -g -std=c89 -I"C:\Program Files (x86)\CodeBlocks\MinGW\mingw32\lib" -c C:\play_cb\uncap\uncap.c -o obj\Debug\uncap.o
Problems
It builds with a few warnings about a wrong number of arguments or format specifiers in sprintf, but once those issues were addressed the code worked as described in its documentation.
Although the feature list claims you can "Disable key mappings easily by stopping Uncap", that did not work for me; the normal key mappings were only restored once the PC was rebooted.
If the keyboard is in CAPS mode when uncap is started, it stays in CAPS mode and the Caps Lock key is no longer able to undo it :)
I found this link useful when experimenting with mappings: Virtual key codes
You could set a low-level hook with SetWindowsHookEx. Refer to the thread: Best way to intercept pressing of Caps Lock
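For reference, a minimal sketch of that low-level hook idea (not code from the linked thread): it simply swallows every Caps Lock event, so note that it also blocks the key's use as a modifier, which the question wants to keep.

#include <windows.h>

static HHOOK g_hook;

static LRESULT CALLBACK LowLevelKeyboardProc(int nCode, WPARAM wParam, LPARAM lParam)
{
    if (nCode == HC_ACTION) {
        const KBDLLHOOKSTRUCT *kb = (const KBDLLHOOKSTRUCT *)lParam;
        if (kb->vkCode == VK_CAPITAL)
            return 1;                      /* eat the event: the toggle never happens */
    }
    return CallNextHookEx(g_hook, nCode, wParam, lParam);
}

int main(void)
{
    g_hook = SetWindowsHookEx(WH_KEYBOARD_LL, LowLevelKeyboardProc,
                              GetModuleHandle(NULL), 0);
    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0)) {  /* a message loop keeps the hook alive */
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    UnhookWindowsHookEx(g_hook);
    return 0;
}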

Requesting irq for a multi channel device

Assume a PCI driver for the Linux kernel. This device can have multiple channels that can be brought "up" or "down" individually.
Each "up" calls the function .ndo_open and each "down" calls .ndo_stop.
This device needs only one interrupt line, which can be requested with request_irq(). Each request will allocate one interrupt line.
It is important to note here that interrupt lines are scarce and should not be allocated mindlessly.
My question for this situation is: where should I call request_irq()?
In my opinion there are two possible solutions.
Right in probe(). This allocates only one interrupt line, but it is always allocated as soon as the device is probed (effectively whenever the PC is turned on), so it might go unused.
In .ndo_open. This allocates the interrupt line only when it is needed, but on a multi-channel device .ndo_open can be called multiple times, which would result in multiple calls to request_irq().
I was not able to find any information about this situation in the kernel docs. If there is a guideline for this, could you please explain it or point me to it? I also checked other PCI drivers from the git repo, but none (or at least none of the ones I checked) had this problem.
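For illustration only (this is not from the question or from any particular driver), one hedged way to combine the two options is to request the shared IRQ on the first .ndo_open and release it on the last .ndo_stop, guarded by a counter. my_adapter, my_channel, my_isr and the irq_users field are hypothetical names for a per-PCI-device structure shared by all channels and its per-channel data:

/* Fragment; assumes <linux/netdevice.h>, <linux/interrupt.h> and <linux/pci.h>. */
static int my_ndo_open(struct net_device *ndev)
{
	struct my_channel *ch = netdev_priv(ndev);
	struct my_adapter *adap = ch->adap;
	int err = 0;

	if (adap->irq_users++ == 0) {           /* first channel coming up */
		err = request_irq(adap->pdev->irq, my_isr, IRQF_SHARED,
				  ndev->name, adap);
		if (err)
			adap->irq_users--;
	}
	return err;
}

static int my_ndo_stop(struct net_device *ndev)
{
	struct my_channel *ch = netdev_priv(ndev);
	struct my_adapter *adap = ch->adap;

	if (--adap->irq_users == 0)             /* last channel going down */
		free_irq(adap->pdev->irq, adap);
	return 0;
}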

X11/Xlib: virtual keyboard input and keyboard mapping synchronization issue

For an automated test application I have to simulate a large amount of Unicode keyboard input into an old X11 application (to whose source I have no access).
My program takes its input from a UCS-2 LE encoded stream on stdin, and its basic operation is as follows:
1. Save the current keyboard layout and lock modifiers (XDisplayKeycodes, XGetKeyboardMapping, XkbGetState)
2. Unlock the active modifiers (XkbLockModifiers)
3. Disable all X11 slave keyboard devices via the XInput2 extension
4. Read input into a key press queue until n unique symbols are encountered, where n is the number of possible keycodes as returned by XDisplayKeycodes
5. Map these n unique X11 KeySyms via XChangeKeyboardMapping onto the n available KeyCodes
6. Type the correct KeyCodes for all enqueued KeySyms via XTestFakeKeyEvent (see the sketch after this list)
7. Clear the queue and continue at 4. until no input is available
8. Reactivate the keyboards and restore the initial modifiers and mappings
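For illustration, a hedged sketch of steps 5 and 6 under simplifying assumptions (one KeySym per KeyCode; min_kc comes from XDisplayKeycodes; the batch of unique KeySyms is a hypothetical array built from the queue), not the actual implementation:

#include <X11/Xlib.h>
#include <X11/extensions/XTest.h>

/* Install a batch of KeySyms on the available KeyCodes, then replay them. */
static void flush_batch(Display *dpy, int min_kc, const KeySym *batch, int n)
{
    /* one KeySym per KeyCode (keysyms_per_keycode == 1) */
    XChangeKeyboardMapping(dpy, min_kc, 1, (KeySym *)batch, n);
    XSync(dpy, False);

    for (int i = 0; i < n; ++i) {
        KeyCode kc = (KeyCode)(min_kc + i);
        XTestFakeKeyEvent(dpy, kc, True, CurrentTime);   /* press  */
        XTestFakeKeyEvent(dpy, kc, False, CurrentTime);  /* release */
    }
    XFlush(dpy);
}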
Basically this system works better and is much more performant than any virtual X11 key input tool I've seen so far.
However, there is an issue I can currently only fix using ugly delays:
Like any other X11 application, the target application receives a MappingNotify (request == Keyboard) event from the X server after my application has succeeded in changing the keyboard mapping table.
The usual response of a X11 client is to call XRefreshKeyboardMapping to update Xlib's knowledge of the new keyboard layout.
Now if the client lags in processing its X11 event queue, the XRefreshKeyboardMapping call might return a mapping that is too recent, i.e. already some generations too far in the future.
E.g. my input generator has already done the fourth XChangeKeyboardMapping when the target application is just getting around to handling the second MappingNotify event in its XEvent queue handler.
It should actually get the second generation of the map, which is no longer available on the X server at that point.
Unfortunately there is no map id or version of any kind in the keyboard MappingNotify event so that XRefreshKeyboardMapping could refer to a specific map ... and the X server does not seem to keep a history either.
The result is that the X11 application's KeyCode to KeySym conversion operates with an invalid layout and generates wrong KeySyms.
So basically I have to wait until all clients (or at least the one with the input focus) have requested and received my last XChangeKeyboardMapping map before I am allowed to do the next XChangeKeyboardMapping.
I can fix 99.9% of the errors by using a delay before XChangeKeyboardMapping; that delay is calculated by some ugly witchcraft (number of key strokes etc.) and is way too high if 100% accuracy has to be achieved.
So my question is whether there is any way to be programmatically notified, or to check, that an X11 client has completed XRefreshKeyboardMapping, or that its map is in sync with the server's map.
If not, is there a way to get another X11 client's current mapping via Xlib (to check whether its map is current)?
Thanks for any hints!
I've done something similar on Windows in the past. There I had the luxury of being able to use the SendInput function, which accepts a KEYBDINPUT structure with the KEYEVENTF_UNICODE flag. Unfortunately X11 does not support synthesizing keystrokes for Unicode characters directly.
Since I cannot comment yet, I'm forced to give this suggestion as an answer:
Have you considered using the clipboard instead to transfer your Unicode input into the X11 application's input field?
You also might consider using direct Unicode input if that application uses a toolkit that supports this:
E.g. programs based on GTK+ (that includes all GNOME applications) support Unicode input.
Hold Ctrl + Shift and type u followed by the Unicode hex digits and release Ctrl and Shift again.
I guess it should be easy to synthesize these sequences using the XTest extension.
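As an illustration only (assuming the hex digits and the letter u are reachable as unshifted keys in the active layout), synthesizing that Ctrl + Shift + U sequence with XTest could look roughly like this:

#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/keysym.h>
#include <X11/extensions/XTest.h>

/* Press or release the KeyCode currently bound to the given KeySym. */
static void fake_key(Display *dpy, KeySym sym, Bool press)
{
    XTestFakeKeyEvent(dpy, XKeysymToKeycode(dpy, sym), press, CurrentTime);
}

/* Type one code point via GTK+'s Ctrl+Shift+U hex entry. */
static void type_unicode_gtk(Display *dpy, unsigned int codepoint)
{
    char hex[16];
    snprintf(hex, sizeof hex, "%x", codepoint);

    fake_key(dpy, XK_Control_L, True);
    fake_key(dpy, XK_Shift_L, True);
    fake_key(dpy, XK_u, True);
    fake_key(dpy, XK_u, False);

    for (const char *p = hex; *p; ++p) {
        char name[2] = { *p, '\0' };
        KeySym sym = XStringToKeysym(name);   /* "0".."9", "a".."f" */
        fake_key(dpy, sym, True);
        fake_key(dpy, sym, False);
    }

    fake_key(dpy, XK_Shift_L, False);
    fake_key(dpy, XK_Control_L, False);
    XFlush(dpy);
}

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) return 1;
    type_unicode_gtk(dpy, 0x20ac);   /* EURO SIGN into the focused GTK+ widget */
    XCloseDisplay(dpy);
    return 0;
}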

Block a URL path on Google Appengine

I would like to block a specific path (e.g. https://myapp.appspot.com/foo/bar) from being accessed on the server, such that the caller gets a 404 or something to that effect. Please note that I have regex-based handlers installed (e.g. /foo/.* triggers a handler), so by default /foo/bar is being directed to that handler. I would like to add a specific handler for '/foo/bar' at a higher level, before the more general /foo/.* handler.
One way to do this is to add a URL handler and direct it to a not_found app handler, such as:
- url: /foo/bar.*
  script: not_found.app
If there is a better way to do this, please do share; it would be highly appreciated.
Essentially, I have a rogue client that is using a bot to hit my server continuously and is consuming undesired resources. The specific URL being called by this bot is one that I could completely disable. Any tips on how one could take such URLs and direct them to a lower-priority instance would also be very helpful.
Btw, I have already added the range of IPs used by this bot to dos.yaml, but that has not helped since the bot keeps changing its IP address.
I am sure this is a pretty typical scenario on which webmasters have expert advice (any help/recommendation is highly welcome - pardon my pedestrian question).
You can force-route requests to any module of your choosing with dispatch.yaml:
dispatch:
  - url: "*/foo/bar*"
    module: cheapmodule
and then in cheapmodule.yaml you make sure you have at most a single instance of the cheapest kind, say basic scaling with instance_class B1 and max_instances 1. (I'm not sure what happens if cheapmodule is specified to have zero instances, e.g. manual scaling with instances 0, or instances 1 to start with but then in its _ah/start handler it calls google.appengine.api.modules.modules.set_num_instances_async(instances, module='cheapmodule') -- perhaps worth experimenting with.)

Identifying that a resolution is virtual on an X11 screen via its API (or extensions)

I'm working on an embedded application on Linux that can be used with different PC hardware (displays, specifically).
This application should set the environment to the highest allowed resolution (obtained via the XRRSizes function from libXrandr).
The problem is: with some hardware, trying to set the highest option creates a virtual desktop, i.e. a desktop where the real resolution is smaller and you have to scroll with the mouse at the edges of the screen to access all of it.
Is there a way to detect from within Xlib (or one of its siblings) that I am working with a virtual resolution (in other words, that the resize didn't go as expected)?
Hints for a workaround for this situation would also be appreciated...
Thanks
Read this: http://cgit.freedesktop.org/xorg/proto/randrproto/tree/randrproto.txt
You need to learn the difference between "screen", "output" and "crtc". You need to check the modes available for each of the outputs you want to use, and then properly set the modes you want on the CRTCs, associate the CRTCs with the outputs, and then make the screen size fit the values you set on each output.
Take a look at the xrandr source code for examples: http://cgit.freedesktop.org/xorg/app/xrandr/tree/xrandr.c
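To make that concrete, here is a hedged sketch (assuming a simple setup where the active CRTCs together are supposed to cover the whole screen): it compares the logical screen size with the area actually scanned out by the active CRTCs; any remainder is a virtual/panning region.

#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xrandr.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) return 1;

    int scr = DefaultScreen(dpy);
    int screen_w = DisplayWidth(dpy, scr);      /* logical screen size */
    int screen_h = DisplayHeight(dpy, scr);
    int scanned_w = 0, scanned_h = 0;

    XRRScreenResources *res = XRRGetScreenResources(dpy, RootWindow(dpy, scr));
    for (int i = 0; i < res->ncrtc; ++i) {
        XRRCrtcInfo *crtc = XRRGetCrtcInfo(dpy, res, res->crtcs[i]);
        if (crtc->mode != None) {                /* active CRTC: extend scanned-out area */
            if ((int)(crtc->x + crtc->width)  > scanned_w) scanned_w = crtc->x + crtc->width;
            if ((int)(crtc->y + crtc->height) > scanned_h) scanned_h = crtc->y + crtc->height;
        }
        XRRFreeCrtcInfo(crtc);
    }
    XRRFreeScreenResources(res);

    printf("screen %dx%d, scanned-out %dx%d -> %s\n",
           screen_w, screen_h, scanned_w, scanned_h,
           (scanned_w < screen_w || scanned_h < screen_h)
               ? "part of the screen is virtual (panning)"
               : "resolution is fully displayed");
    XCloseDisplay(dpy);
    return 0;
}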
