Multiple raw input buffers in C

Is there a way to have one raw input buffer per device?
I would like one buffer for the mouse and another for the keyboard.
Is that possible?

Yes, try SetWindowsHookEx. You will have to convert WM_KEY* messages to WM_CHAR yourself, though.
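For illustration, here is a minimal sketch of that approach using a low-level keyboard hook; the KeyboardProc and InstallKeyboardHook names are mine, and it assumes a message loop runs on the hooking thread:

#include <windows.h>

/* Collect keystrokes via a low-level hook; a WH_MOUSE_LL hook could feed a
   separate mouse buffer the same way. */
static LRESULT CALLBACK KeyboardProc(int code, WPARAM wParam, LPARAM lParam)
{
    if (code == HC_ACTION) {
        const KBDLLHOOKSTRUCT *kb = (const KBDLLHOOKSTRUCT *) lParam;
        /* Buffer kb->vkCode / kb->scanCode here; producing WM_CHAR-style
           characters (layout, shift state) is up to you, e.g. via ToUnicode(). */
        (void) kb;  /* placeholder: real code would queue this event */
    }
    return CallNextHookEx(NULL, code, wParam, lParam);
}

void InstallKeyboardHook(void)
{
    HHOOK hook = SetWindowsHookExW(WH_KEYBOARD_LL, KeyboardProc,
                                   GetModuleHandleW(NULL), 0);
    if (hook == NULL) {
        /* GetLastError() explains why installation failed. */
    }
}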

The answer to your question is yes: the RAWINPUTDEVICE struct permits this via its usUsagePage and usUsage members.
usUsagePage identifies the type of device (a partial list is below); 1 means 'generic desktop controls' and covers all the usual input devices. The usUsage value then selects a specific device within the 'generic desktop controls' group. A registration sketch follows the two lists.
1 - generic desktop controls // we use this
2 - simulation controls
3 - VR
4 - sport
5 - game
6 - generic device
7 - keyboard
8 - LEDs
9 - button
usUsage values when usUsagePage is 1:
0 - undefined
1 - pointer
2 - mouse // we use this
3 - reserved
4 - joystick
5 - game pad
6 - keyboard // we use this
7 - keypad
8 - multi-axis controller
9 - Tablet PC controls
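As a minimal sketch (the hwnd handle and the RegisterInputDevices wrapper are assumptions on my part), registering both devices at once looks roughly like this:

#include <windows.h>

/* Register the mouse and the keyboard as raw input devices for hwnd. */
BOOL RegisterInputDevices(HWND hwnd)
{
    RAWINPUTDEVICE rid[2];

    rid[0].usUsagePage = 0x01;   /* generic desktop controls */
    rid[0].usUsage     = 0x02;   /* mouse */
    rid[0].dwFlags     = 0;
    rid[0].hwndTarget  = hwnd;

    rid[1].usUsagePage = 0x01;   /* generic desktop controls */
    rid[1].usUsage     = 0x06;   /* keyboard */
    rid[1].dwFlags     = 0;
    rid[1].hwndTarget  = hwnd;

    return RegisterRawInputDevices(rid, 2, sizeof(rid[0]));
}

In the WM_INPUT handler, RAWINPUT.header.dwType is RIM_TYPEMOUSE or RIM_TYPEKEYBOARD, which is what lets you route the data into separate per-device buffers.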
I wrote this article on Code Project that may be helpful.

Multiple Sliding Window on a single Data Stream

I am currently working on a problem in Flink where I have to compute aggregate functions for three different sliding windows, with sizes of 7 days, 14 days, and 1 month.
From what I've understood, I'd have to run three different consumers in parallel, one per window size. Is there a way to implement three sliding windows over a single data stream using a single consumer?
Some code or a reference for implementing this in Flink would be much appreciated.
What I know:
consumer 1 computes over a sliding window of size 7 days
consumer 2 computes over a sliding window of size 14 days
and so on.
What I want:
consumer 1 computing all these sliding windows simultaneously for a single data stream.
Is it possible to implement this in Flink?
The various windows can share a single stream produced by one Kafka consumer, like this:
consumer = new FlinkKafkaConsumer<>("topic", new topicSchema(), kafkaProps);
stream = env.addSource(consumer);

w1 = stream.keyBy(key)
        .window(SlidingEventTimeWindows.of(Time.days(7), Time.days(1)))
        .process(...);

w2 = stream.keyBy(key)
        .window(SlidingEventTimeWindows.of(Time.days(14), Time.days(1)))
        .process(...);
Or, to be more efficient, you might pre-aggregate with a daily tumbling window, so that the sliding windows only combine one record per key per day instead of reprocessing every raw event:
consumer = new FlinkKafkaConsumer<>("topic", new topicSchema(), kafkaProps);
stream = env.addSource(consumer);

dayByDay = stream.keyBy(key)
        .window(TumblingEventTimeWindows.of(Time.days(1)))
        .process(...);

w1 = dayByDay.keyBy(key)
        .window(SlidingEventTimeWindows.of(Time.days(7), Time.days(1)))
        .process(...);

w2 = dayByDay.keyBy(key)
        .window(SlidingEventTimeWindows.of(Time.days(14), Time.days(1)))
        .process(...);
Note, however, that there is no Time.months(), so if you want windows aligned to month boundaries, I guess you'll have to figure that part out.

How can I get updated system DPI information from X11 in a C program?

I'm trying to create a DPI aware app which responds to user requested DPI change events by resizing the window.
The program in question is written in C and uses SDL2; however, to retrieve system DPI information I use Xlib directly, as SDL's DPI support on X11 is lacking.
I found two ways to get the correct DPI information at program startup, both of which read the Xft.dpi setting from the X resource database: one is to use XGetDefault(display, "Xft", "dpi"), while the other is to use XResourceManagerString, XrmGetStringDatabase and XrmGetResource. Both return the correct DPI value when the program starts.
The problem is that if the user changes the system scale while the program is running, both XGetDefault and XrmGetResource still return the old DPI value, even though when I run "xrdb -query | grep Xft.dpi" the value has indeed changed.
Does anyone know a way to get the updated Xft.dpi value?
I found out a way to do exactly what I wanted, even though it's rather hackish.
The solution (using XLib) is to create a new, temporary connection to the X server using XOpenDisplay and XCloseDisplay, and poll the resource information from that new connection.
The reason this is needed is that Xlib fetches the resource information only once per connection and never updates it. By opening a new connection, you get the updated X resource data, which can then be used on the old main connection.
Be mindful that constantly opening and closing new X connections may not be great for performance, so only do it when you absolutely need to. In my case, since the window has borders, I only check for DPI changes when the title height has changed, as a DPI change will change the size of your title border due to font size differences.
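A minimal sketch of that trick (the query_current_dpi name and the 96.0 fallback are my choices; it assumes XrmInitialize() was called once at startup):

#include <stdlib.h>
#include <X11/Xlib.h>
#include <X11/Xresource.h>

/* Open a throwaway connection so the server hands us a fresh copy of the
   resource database, read Xft.dpi from it, then close the connection. */
static double query_current_dpi(void)
{
    double dpi = 96.0;                       /* fallback default */
    Display *tmp = XOpenDisplay(NULL);
    if (tmp == NULL)
        return dpi;

    char *rms = XResourceManagerString(tmp); /* owned by Xlib; do not free */
    if (rms != NULL) {
        XrmDatabase db = XrmGetStringDatabase(rms);
        if (db != NULL) {
            char *type = NULL;
            XrmValue value;
            if (XrmGetResource(db, "Xft.dpi", "Xft.Dpi", &type, &value) &&
                value.addr != NULL)
                dpi = atof(value.addr);
            XrmDestroyDatabase(db);
        }
    }
    XCloseDisplay(tmp);
    return dpi;
}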
First off, it must be noted that the value of the Xft.dpi resource isn't necessarily accurate -- it depends on whether the system and/or user login scripts have set it correctly.
Also it is important to remember that the Xft.dpi resource is intended to be used by the Xft library, not by arbitrary programs looking for the screen resolution.
The Xft.dpi resource can be set as follows. This example effectively deals only with a display that has a single screen, and note that it uses xdpyinfo. It also shows how the value might not be exact but rounded. Finally, the example computes both the horizontal and vertical resolution, though Xft only takes a single dpi value:
SCREENDPI=$(xdpyinfo | sed -n 's/^[ ]*resolution:[ ]*\([^ ][^ ]*\) .*$/\1/p;//q')
SCREENDPI_X=$(expr "$SCREENDPI" : '\([0-9]*\)x')
SCREENDPI_Y=$(expr "$SCREENDPI" : '[0-9]*x\([0-9]*\)')
# Default to the true resolution; rounded values may override these below.
FontXDPI=$SCREENDPI_X
FontYDPI=$SCREENDPI_Y
# N.B.: If the true screen resolution is within 10% of 100DPI it makes the most
# sense to claim 100DPI to avoid font-scaling artifacts for bitmap fonts.
if expr \( $SCREENDPI_X / 100 = 1 \) \& \( $SCREENDPI_X % 100 \<= 10 \) >/dev/null; then
    FontXDPI=100
fi
if expr \( $SCREENDPI_Y / 100 = 1 \) \& \( $SCREENDPI_Y % 100 \<= 10 \) >/dev/null; then
    FontYDPI=100
fi
echo "Xft.dpi: ${FontYDPI}" | xrdb -merge
I really wish I knew why Xft doesn't at least try to find out the screen's resolution itself instead of relying entirely on its "dpi" resource being set, but as far as I've found the current implementation only uses the resource setting, so something like the above is always necessary to set the resource properly (and one must also make sure the X server itself has been configured with the correct physical screen dimensions).
From a C program you want to do just what xdpyinfo itself does, skipping all the nonsense about Xft's resources. Here's the xdpyinfo code, paraphrased:
char *displayname = NULL;  /* NULL means use $DISPLAY */
Display *dpy;
int scr;

dpy = XOpenDisplay(displayname);
for (scr = 0; scr < ScreenCount(dpy); scr++) {
    int xres, yres;

    /*
     * There are 2.54 centimeters to an inch; so there are 25.4 millimeters.
     *
     *     dpi = N pixels / (M millimeters / (25.4 millimeters / 1 inch))
     *         = N pixels / (M inch / 25.4)
     *         = N * 25.4 pixels / M inch
     */
    xres = ((((double) DisplayWidth(dpy, scr)) * 25.4) /
            ((double) DisplayWidthMM(dpy, scr))) + 0.5;
    yres = ((((double) DisplayHeight(dpy, scr)) * 25.4) /
            ((double) DisplayHeightMM(dpy, scr))) + 0.5;
}
XCloseDisplay(dpy);
Note also that if you are, for some odd reason, scaling your whole display (e.g. with xrandr), then you presumably want the fonts to scale along with everything else. It is a horribly bad hack to use whole-screen scaling to scale just the fonts, especially when for most purposes it's simpler to tell the application to use properly scaled fonts that display at a constant on-screen point size (which is exactly what Xft uses the "dpi" resource for). I'm guessing Ubuntu does something odd to change the screen resolution, e.g. using xrandr to scale up the apparent size of icons and other on-screen widgets so that applications don't have to know about screen size and resolution, and then it has to lie to Xft by rewriting the Xft.dpi resource.
Note that if you avoid whole-screen scaling, applications that don't use Xft can still get proper font scaling by requesting a properly scaled font; even with bitmap fonts you can get the proper physical on-screen size by putting the screen's actual resolution in the font spec. E.g., continuing from the shell fragment above:
# For pre-Xft applications we can specify physical font text sizes IFF we also tell
# it the screen's actual resolution when requesting a font. Note the use of the
# rounded values here.
#
DecentDeciPt="80"
DecentPt="8"
export DecentDeciPt DecentPt
#
# Best is to arrange one's font-path to get the desired one first, but....
# If you know the name of a font family that you like and you can be sure
# it is installed and in the font-path somewhere....
#
DefaultFontSpec='-*-liberation mono-medium-r-*-*-*-${DecentDeciPt}-${FontXDPI}-${FontYDPI}-m-*-iso10646-1'
export DefaultFontSpec
#
# For Xft we have set the Xft.dpi resource so this allows the physical font size to
# be specified (e.g. with Xterm's "-fs" option) and for a decent scalable font
# to be chosen:
#
DefaultFTFontSpec="-*-*-medium-r-*-*-*-*-0-0-m-*-iso10646-1"
DefaultFTFontSpecL1="-*-*-medium-r-*-*-*-*-0-0-m-*-iso8859-1"
export DefaultFTFontSpec DefaultFTFontSpecL1
# Set a default font that should work for everything
#
eval echo "*font: ${DefaultFontSpec}" | xrdb -merge
Finally here's an example of starting an xterm (that's been compiled to use Xft) with the above settings (i.e. the Xft.dpi resource and the shell variables above) to show text at physical size of 10.0 Points on the screen:
xterm -fs 10 -fa "$DefaultFTFontSpec"
You could try to use xdpyinfo(1); on my system it outputs, among a lot of other things:
dimensions: 1280x1024 pixels (332x250 millimeters)
resolution: 98x104 dots per inch
depths (7): 24, 1, 4, 8, 15, 16, 32
I don't know whether this helps, because I don't know how you change the DPI of your screen, but chances are it will work. Good luck!
--- UPDATE after comment ---
In a comment below, the OP says that "there is a setting to change the DPI"... though I still don't know which one. Anyway, I tried Ctrl+Alt+Plus and Ctrl+Alt+Minus to change the resolution of the X server on the fly. After changing the resolution, and seeing everything bigger than before, I ran xdpyinfo again. IT DIDN'T WORK: still the same output. But maybe the method you use (which one?) works instead...

Difference between frames and items in libsndfile?

I am writing software that processes audio files. I am using the libsndfile library for reading wave file data, and I came across a question that isn't answered by its documentation: what is the difference between the functions that read items and the functions that read frames? In other words, do I get the same results if I interchange sf_read_short and sf_readf_short?
I have read in some questions that an audio frame equals a single sample, so I thought that what libsndfile calls items might be the same thing. In my tests they seemed to be the same.
I was concerned too and found the answer.
Q12 : I'm looking at sf_read*. What are items? What are frames?
An item is a single sample of the data type you are reading; i.e. a
single short value for sf_read_short or a single float for
sf_read_float. For a sound file with only one channel, a frame is the
same as an item (i.e. a single sample), while for multi-channel sound
files a single frame contains one item for each channel.
Here are two simple, correct examples, both of which are assumed to be
working on a stereo file, first using items:
#define CHANNELS 2
short data [CHANNELS * 100] ;
sf_count_t items_read = sf_read_short (file, data, 200) ;
assert (items_read == 200) ;
and now reading the exact same amount of data using frames:
#define CHANNELS 2
short data [CHANNELS * 100] ;
sf_count_t frames_read = sf_readf_short (file, data, 100) ;
assert (frames_read == 100) ;
This is copied from the libsndfile FAQ, question 12.

MediaElement.NaturalDuration is less than the actual duration of the audio

On some audio files, the value of MediaElement.NaturalDuration is less than the actual duration of the audio. When I open the file in Windows Media Player the duration is correct (the same goes for the file's properties). Although the value of NaturalDuration is incorrect, the audio plays in full, but at some point the value of the Position property becomes greater than NaturalDuration, which, as I understand it, should never happen.
I have created a simple application to reproduce the problem: https://skydrive.live.com/redir?resid=ACF8BFD4384116CE!2908&authkey=!AG-wF6Ae-7EAYk8
The duration of the audio file used in the application is 00:02:54, but the value of the NaturalDuration property is 00:01:59.
Does anyone know why and if there is a workaround for this?
Thanks in advance for any help.
OK, this is not an answer, but here are the results of a short investigation that give some clues why it behaves like that and where those numbers come from (2:58 and 1:59). First look at this thread: Calculating the length of MP3 Frames in milliseconds
Two things that we will use from there:
1) Frame length (in ms) = (samples per frame / sample rate (in Hz)) * 1000, and
duration (in s) = frame length (in ms) * number of frames / 1000
2) There are standards for the number of samples per frame in the different MPEG versions:
Samples per frame:
MPEG Version 1:
384  // Layer 1
1152 // Layer 2
1152 // Layer 3
MPEG Version 2 & 2.5:
384  // Layer 1
1152 // Layer 2
576  // Layer 3
Now let's check what Winamp says about the file's format info:
MPEG-2.5 Layer 3
16 kbps, 2482 frames
Now if you take frames = 2482 and samples per frame = 576 (MPEG-2.5 Layer 3), you get a duration of 2:58. But it looks like Silverlight and iTunes for some reason use samples per frame = 384, which gives 1:59. A next step could be to check the actual values in the file's headers, and whether they are consistent; if a correct duration can be calculated from them, you could cook up some hack to get the duration separately (from the server, for example). But I'm pretty sure that file has some defects (inconsistent headers and content), and some players can handle that while others cannot.
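To make the arithmetic concrete, here is a small sketch. The 8000 Hz sample rate is an assumption on my part (a common rate for 16 kbps MPEG-2.5 Layer 3; the thread doesn't state it), and the frame count is taken from the Winamp info above:

#include <stdio.h>

int main(void)
{
    const double frames = 2482.0;       /* from Winamp's format info */
    const double sample_rate = 8000.0;  /* assumed; not stated in the thread */

    /* duration = frames * samples_per_frame / sample_rate */
    double dur576 = frames * 576.0 / sample_rate;  /* MPEG-2.5 Layer 3 value */
    double dur384 = frames * 384.0 / sample_rate;  /* Layer 1 value */

    printf("576 samples/frame: %.1f s (~%d:%02d)\n",
           dur576, (int) dur576 / 60, (int) dur576 % 60);
    printf("384 samples/frame: %.1f s (~%d:%02d)\n",
           dur384, (int) dur384 / 60, (int) dur384 % 60);
    return 0;
}

Same frame count, different samples-per-frame assumption: that reproduces the 2:58 versus 1:59 discrepancy.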

De-indexing Collada

How can I de-index the triangles of my COLLADA mesh? My goal is to get something like:
<triangles material="mat0" count="12">
    <input semantic="VERTEX" source="#mesh1"/>
    <input semantic="NORMAL" source="#norm1"/>
    <p>
        0 0 1 1 4 4 3 3 5 5 7 7 6 6 8 8 .... <- same indices
    </p>
</triangles>
Is this possible? I am using C and the OpenGL API, and I want to use VBOs.
I still use COLLADA Refinery to fix my mesh data:
http://collada.org/mediawiki/index.php/COLLADA_Refinery
I have a script that goes through all my COLLADA files performing different operations. It might have the operations you are looking for. Note that the last release was in 2007.
Full list of conditioners :
http://collada.org/mediawiki/index.php/Portal:Conditioners_directory
Deindexer
http://collada.org/mediawiki/index.php/Deindexer_conditioner
Rearranges vertex indices so that each vertex references the
corresponding position, normal, and texcoord with the same index number.
The size of the sources for position, normal, and texcoord might increase.
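If you'd rather de-index yourself at load time, the idea is to walk the <p> index pairs and emit one combined vertex per entry, so position and normal share a single ordering. A minimal sketch in C (all the names here are mine, not from any COLLADA library):

#include <stddef.h>

typedef struct { float px, py, pz, nx, ny, nz; } Vertex;

/* positions/normals are the raw <source> float arrays (xyz triples);
   pos_idx/nrm_idx are the de-interleaved index lists from <p>;
   out must hold tri_count * 3 vertices. Returns the vertex count. */
size_t deindex(const float *positions, const float *normals,
               const unsigned *pos_idx, const unsigned *nrm_idx,
               size_t tri_count, Vertex *out)
{
    size_t n = tri_count * 3;
    for (size_t i = 0; i < n; i++) {
        const float *p  = &positions[pos_idx[i] * 3];
        const float *nv = &normals[nrm_idx[i] * 3];
        out[i] = (Vertex){ p[0], p[1], p[2], nv[0], nv[1], nv[2] };
    }
    return n;
}

The resulting array can be uploaded directly as one interleaved VBO; if memory matters, a follow-up pass can weld duplicate vertices back into a shared index buffer.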
With meshtool you can run the following:
meshtool --load_collada file.dae \
         --normalize_indices \
         --save_collada file-normalized.dae
