How do I retrieve system image list for given DPI?
When an application is system DPI-aware, the SHGetFileInfo and similar functions return a handle to a correctly scaled system image list. C++ example:
SHFILEINFO fileInfo = {};
HIMAGELIST handle = (HIMAGELIST)
    SHGetFileInfo(L"", 0, &fileInfo, sizeof(fileInfo),
                  SHGFI_SYSICONINDEX | (large ? SHGFI_LARGEICON : SHGFI_SMALLICON));
But with per-monitor DPI awareness, that's not enough, as the application can run on a monitor that does not use the system DPI (or the application can have multiple windows, each on a different monitor with a different DPI).
For example, on a 168 DPI (175% zoom) monitor, with a standard 96 system DPI, you get small unscaled 16x16 icons.
So I'm hoping there's a DPI-aware variant of SHGetFileInfo (or similar), the way there are DPI-aware variants of other functions, like:
GetSystemMetricsForDpi for GetSystemMetrics;
SystemParametersInfoForDpi for SystemParametersInfo;
OpenThemeDataForDpi for OpenThemeData.
As a quick solution, I ended up using SHGetImageList, as suggested by @MickyD.
As mentioned in the function documentation (and as suggested by @JonathanPotter):
The IImageList pointer type, such as that returned in the ppv parameter, can be cast as an HIMAGELIST as needed; for example, for use in a list view.
Hence I use SHGetImageList to collect all available system image list sizes, calling it for each index from 0 to SHIL_LAST.
For each returned image list, I query its icon size using ImageList_GetIconSize and cache them all.
Then, when an image list is needed for a particular DPI, I pick the closest size available.
An obvious drawback is that on a multi-monitor system with a high system DPI but one low-DPI monitor, a reasonably sized small-icon list cannot be retrieved for the low-DPI monitor.
Depending on which skin I use in the simulator, the result from the following method differs:
Display.getInstance().getDeviceDensity();
The results have nothing to do with the real device density, since for a Xoom skin it outputs 30 (149 ppi in reality) and for an iPhone 6 it outputs 50 (329 ppi in reality).
I noticed this because I need to translate a character height measured in Gimp (72 dpi) into device units, so that text looks the same size on an image.
Any help on that topic would be appreciated!
Cheers
The JavaDocs for getDeviceDensity state:
Returns one of the density variables appropriate for this device,
notice that density doesn't always correspond to resolution and an
implementation might decide to change the density based on DPI
constraints.
Returns:
one of the DENSITY constants of Display
The DENSITY constants are the ones defined on the Display class.
Notice you can also use convertToPixels, which is probably a far better API to use. The density API is mostly used to pick the right multi-image and should rarely be used in user code.
I have two black-and-white printers, two color printers, and some virtual printers (Fax, CutePDF Writer, etc).
According to the DC_COLORDEVICE query to DeviceCapabilities, only the Fax virtual printer is black-and-white.
According to PLANES and BITSPIXEL queries to GetDeviceCaps, all of the printers have one plane, and only Fax and CutePDF have 1 bit/pixel (are black-and-white).
According to the NUMCOLORS query to GetDeviceCaps, only Fax is black-and-white.
I'm not excited about querying the driver directly, so I haven't tried it yet.
How do I accurately detect a color printer with Win32?
Bummer that DC_COLORDEVICE doesn't give the right answer. The rest of your findings don't surprise me.
You could try creating an information context for the printer with CreateIC, and then use GetDeviceCaps to check the COLORRES property.
(An information context is like a device context that you can query but can't actually draw to. It's useful when you want to know what a printer driver is going to do without actually creating a real device context, which may require the printer being online.)
Checking the number of planes is useless, since everything (to a good approximation) uses a single plane. The number of bits per pixel doesn't actually tell you if those pixels can be color or just grayscale (or just palette entries).
Another idea is to look at the dmColor field in the default DEVMODE for the device.
I had to solve the same problem many, many years ago (before DeviceCapabilities), but I don't remember how I did it.
UPDATE 2022-12-27: I just came across my own answer while trying to figure out how to handle the Fax virtual printer. When querying DeviceCapabilitiesW with DC_COLORDEVICE, the Fax driver returns a value of -1, and GetLastError reports 122 (ERROR_INSUFFICIENT_BUFFER: "The data area passed to a system call is too small."). That's weird, since there's no requirement to pass a buffer for this query.
My current solution is to check everything: if DeviceCapabilities with DC_COLORDEVICE doesn't explicitly say color, or the DEVMODE's dmFields bitmask doesn't have the DM_COLOR bit set, or the DEVMODE's dmColor field isn't explicitly DMCOLOR_COLOR, or GetDeviceCaps NUMCOLORS isn't at least 8, then I assume it's a monochrome printer or that the user selected monochrome for this print job.
I've been using 24-bit .png files with alpha, from Photoshop, and just tried a .psd, which worked fine with OpenGL ES, but Metal didn't see the alpha channel.
What's the absolutely most performant texture format for particles within SceneKit?
Here's a sheet to test on, if need be.
It looks white... right-click and "save as" in the blank space. It's an alpha-heavy set of rings; you can probably barely make them out if you squint at the screen.
exaggerated example use case:
https://www.dropbox.com/s/vu4dvfl0aj3f50o/circless.mov?dl=0
// Additional points for anyone who can guess the difference between the left and right rings in the video.
Use a grayscale/alpha PNG, not an RGBA one. Since it uses 16 bits per pixel (8+8) instead of 32 (8+8+8+8), the initial texture load will be faster and it may (depending on the GPU) use less memory as well. At render time, though, you’re not going to see much of a speed difference, since whatever the texture format is it’s still being drawn to a full RGB(A) render buffer.
There’s also PVRTC, which can get you down as low as 2–4 bits per pixel, but I tried Imagination's tool on your image and even the highest quality settings caused a bunch of visible artifacts.
Long story short: go with a grayscale+alpha PNG, which you can easily export from Photoshop. If your particle system is hurting your frame rate, reduce the number and/or size of the particles—in this case you might be able to get away with layering a couple of your particle images on top of each other in the source texture atlas, which may not be too noticeable if you pick ones that differ in size enough.
I'm trying to get pixel data from my X11 instance. I've seen this thread (How do take a screenshot correctly with xlib?), but the double for loop it uses takes far too long for me (over a million iterations; the system I'm building requires the highest efficiency possible, and sitting around for 600 milliseconds is just not an option). Is there a way to get a raw array of pixels and avoid the loop? I know the XImage struct has a "data" member which is supposed to contain all of the pixels, but the layout it uses is foreign to me. Any help would be greatly appreciated! The best end game here would be for me to just be able to have X11 write directly to /dev/fb0.
Given an image (e.g. a newspaper, a scanned newspaper, a magazine), how do I detect the regions containing text? I only need to know the regions and remove them; I don't need to do text recognition.
The purpose is that I want to remove these text areas to speed up my feature extraction procedure, as they are meaningless for my application. Does anyone know how to do this?
BTW, it would be good if this could be done in Matlab!
Best!
You can use Stroke Width Transform (SWT) to highlight text regions.
Using my mex implementation posted here, you can do:
img = imread('http://i.stack.imgur.com/Eyepc.jpg');
[swt swtcc] = SWT( img, 0, 10 );
Playing with internal parameters of the edge-map extraction and image filtering in SWT.m can help you tweak the resulting mask to your needs.
To get this result, I used these parameters for the edge map computation in SWT.m:
edgeMap = single( edge( img, 'canny', [0.05 0.25] ) );
Text detection in natural images is an active area of research in the computer vision community; you can refer to the ICDAR papers. But in your case I think it should be simple enough: since you have text from newspapers or magazines, it should be of a fixed size and horizontally oriented.
So you can apply a scanning window of a fixed size, say 32x32. Train it on the ICDAR 2003 training dataset, using windows that contain text as positives. You can use a small feature set of color and gradients and train an SVM that classifies each window as containing text or not.
For reference, go to http://crypto.stanford.edu/~dwu4/ICDAR2011.pdf. For code, you can try the authors' homepages.
This example in the Computer Vision System Toolbox in Matlab shows how to detect text using MSER regions.
If your image is well binarized and you know the usual size of the text, you could use the HorizontalRunLengthSmoothing and VerticalRunLengthSmoothing algorithms. They are implemented in the open source library AForge.NET, but it should be easy to reimplement them in Matlab.
The intersection of the result images from these algorithms will give you a good indication of which regions contain text; it is not perfect, but it is fast.