How to detect a color printer with Win32?

I have two black-and-white printers, two color printers, and some virtual printers (Fax, CutePDF Writer, etc).
According to the DC_COLORDEVICE query to DeviceCapabilities, only the Fax virtual printer is black-and-white.
According to the PLANES and BITSPIXEL queries to GetDeviceCaps, all of the printers have one plane, and only Fax and CutePDF have 1 bit/pixel (i.e., are black-and-white).
According to the NUMCOLORS query to GetDeviceCaps, only Fax is black-and-white.
I'm not excited about querying the driver directly, so I haven't tried it yet.
How do I accurately detect a color printer with Win32?

Bummer that DC_COLORDEVICE doesn't give the right answer. The rest of your findings don't surprise me.
You could try creating an information context for the printer with CreateIC, and then use GetDeviceCaps to check the COLORRES property.
(An information context is like a device context that you can query but can't actually draw to. It's useful when you want to know what a printer driver is going to do without actually creating a real device context, which may require the printer being online.)
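For example, a minimal sketch (error handling omitted; "My Printer" is a placeholder for a name obtained, e.g., from EnumPrinters):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* "My Printer" is a placeholder; substitute a real printer name. */
    HDC ic = CreateICW(L"WINSPOOL", L"My Printer", NULL, NULL);
    if (ic != NULL) {
        /* COLORRES reports the device's actual color resolution in
           bits per pixel; BITSPIXEL and NUMCOLORS are also worth
           comparing against it. */
        int colorres = GetDeviceCaps(ic, COLORRES);
        wprintf(L"COLORRES = %d\n", colorres);
        DeleteDC(ic);
    }
    return 0;
}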
Checking the number of planes is useless, since everything (to a good approximation) uses a single plane. The number of bits per pixel doesn't actually tell you if those pixels can be color or just grayscale (or just palette entries).
Another idea is to look at the dmColor field in the default DEVMODE for the device.
I had to solve the same problem many, many years ago (before DeviceCapabilities), but I don't remember how I did it.
UPDATE 2022-12-27: I just came across my own answer while trying to figure out how to handle the Fax virtual printer. When querying DeviceCapabilitiesW with DC_COLORDEVICE, the Fax driver returns a value of -1, and GetLastError reports 122 (ERROR_INSUFFICIENT_BUFFER: "The data area passed to a system call is too small."). That's weird, since there's no requirement to pass a buffer for this query.
My current solution is to check everything. If DeviceCapabilities with DC_COLORDEVICE doesn't explicitly say color, or if the DEVMODE's dmFields bitmask doesn't have the DM_COLOR bit set, or if the DEVMODE's dmColor field isn't explicitly DMCOLOR_COLOR, or if GetDeviceCaps NUMCOLORS isn't at least 8, then I assume it's a monochrome printer or that the user selected monochrome for this print job.
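In outline, that combined check might look like the following sketch (error handling omitted; pPrinterName, pPort, and pDevMode are assumed to come from elsewhere, e.g. EnumPrinters and DocumentProperties; link with winspool.lib):

#include <windows.h>

/* Heuristic described above: treat the printer as color only if
   every signal explicitly says color. */
BOOL IsColorPrinter(LPCWSTR pPrinterName, LPCWSTR pPort,
                    const DEVMODEW *pDevMode)
{
    /* 1. The driver must explicitly claim to be a color device. */
    if (DeviceCapabilitiesW(pPrinterName, pPort, DC_COLORDEVICE,
                            NULL, NULL) != 1)
        return FALSE;

    /* 2. The DEVMODE must mark dmColor as valid and set to color. */
    if (!(pDevMode->dmFields & DM_COLOR) ||
        pDevMode->dmColor != DMCOLOR_COLOR)
        return FALSE;

    /* 3. An information context must report at least 8 colors. */
    BOOL isColor = FALSE;
    HDC ic = CreateICW(L"WINSPOOL", pPrinterName, NULL, pDevMode);
    if (ic != NULL) {
        isColor = GetDeviceCaps(ic, NUMCOLORS) >= 8;
        DeleteDC(ic);
    }
    return isColor;
}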

Related

How can I get current microphone input level with C WinAPI?

Using the Windows API, I want to implement something like the following, i.e., getting the current microphone input level.
I am not allowed to use external audio libraries, but I can use Windows libraries. So I tried using waveIn functions, but I do not know how to process audio input data in real time.
This is the method I am currently using:
Record for 100 milliseconds
Select highest value from the recorded data buffer
Repeat forever
But I think this is way too hacky, and not a recommended way. How can I do this properly?
Having built a tuning wizard for a very dated, but well known, A/V conferencing application, I can say that what you describe is nearly identical to what I did.
A few considerations:
Enqueue 5 to 10 of those 100 ms buffers into the audio device via waveInAddBuffer; IIRC, weird things happen when the waveIn queue goes empty. Then, as the waveInProc callbacks occur, search the completed buffer for the sample with the highest absolute value, as you describe, and plot that onto your visualization. Requeue the completed buffers.
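A skeletal version of that flow might look like the following sketch (error checking, cleanup, and the actual drawing are omitted; the buffer count and format are assumptions; link with winmm.lib):

#include <windows.h>
#include <mmsystem.h>
#include <stdlib.h>

#define NUM_BUFFERS    8
#define SAMPLE_RATE    44100
#define BUFFER_SAMPLES (SAMPLE_RATE / 10)   /* ~100 ms of 16-bit mono */

static WAVEHDR g_hdrs[NUM_BUFFERS];
static short   g_data[NUM_BUFFERS][BUFFER_SAMPLES];

static void CALLBACK WaveInProc(HWAVEIN hwi, UINT msg, DWORD_PTR inst,
                                DWORD_PTR param1, DWORD_PTR param2)
{
    if (msg != WIM_DATA)
        return;
    WAVEHDR *hdr = (WAVEHDR *)param1;
    const short *samples = (const short *)hdr->lpData;
    int count = (int)(hdr->dwBytesRecorded / sizeof(short));

    /* Peak of this ~100 ms buffer: highest absolute sample value. */
    int peak = 0;
    for (int i = 0; i < count; i++) {
        int v = abs(samples[i]);
        if (v > peak)
            peak = v;
    }
    /* ...hand `peak` (0..32768) to the visualization here... */

    /* Requeue the buffer so the queue never runs dry. (Strictly, MSDN
       advises signaling another thread to do this rather than calling
       waveIn* functions from inside the callback.) */
    waveInAddBuffer(hwi, hdr, sizeof(*hdr));
}

static void StartMeter(void)
{
    WAVEFORMATEX fmt = {0};
    fmt.wFormatTag      = WAVE_FORMAT_PCM;
    fmt.nChannels       = 1;
    fmt.nSamplesPerSec  = SAMPLE_RATE;
    fmt.wBitsPerSample  = 16;
    fmt.nBlockAlign     = fmt.nChannels * fmt.wBitsPerSample / 8;
    fmt.nAvgBytesPerSec = fmt.nSamplesPerSec * fmt.nBlockAlign;

    HWAVEIN hwi;
    waveInOpen(&hwi, WAVE_MAPPER, &fmt, (DWORD_PTR)WaveInProc, 0,
               CALLBACK_FUNCTION);

    for (int i = 0; i < NUM_BUFFERS; i++) {
        g_hdrs[i].lpData         = (LPSTR)g_data[i];
        g_hdrs[i].dwBufferLength = sizeof(g_data[i]);
        waveInPrepareHeader(hwi, &g_hdrs[i], sizeof(g_hdrs[i]));
        waveInAddBuffer(hwi, &g_hdrs[i], sizeof(g_hdrs[i]));
    }
    waveInStart(hwi);
}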
It might seem obvious to map the sample value onto your visualization linearly. For example, to plot a 16-bit sample:
// convert sample magnitude from 0..32768 to 0..N
length = (sample * N) / 32768;
DrawLine(length);
But then, when you speak into the microphone, that visualization won't seem very "active" or "vibrant". A better approach is to give more strength to the lower-energy samples. An easy way to do this is to replot along the μ-law curve (or use a table lookup):
length = (sample * N) / 32768;              // map to 0..N linearly
length = N * log(1 + length) / log(1 + N);  // compress along the log curve
length = min(length, N);                    // clamp to the meter length
DrawLine(length);
You can tweak the above approach to whatever looks good.
Instead of computing the values yourself, you can rely on values from Windows. These are, in fact, the values displayed in your screenshot from the Windows Settings.
See the following sample for the IAudioMeterInformation interface:
https://learn.microsoft.com/en-us/windows/win32/coreaudio/peak-meters
The sample is written for playback, but you can use it for capture as well.
One remark: if you open IAudioMeterInformation for a microphone while no application has a stream open on that microphone, the level will be 0.
That means that to display your microphone peak meter, you will still need to keep a microphone stream open, as you already do.
Also read the documentation for IAudioMeterInformation; it may not be what you need, since it reports the peak value. It depends on what you want to do with it.
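For capture, a minimal C sketch of such a polling loop might look like this (COM error handling omitted; as noted, some capture stream must be open for the meter to report non-zero values):

#define COBJMACROS
#include <initguid.h>
#include <windows.h>
#include <mmdeviceapi.h>
#include <endpointvolume.h>  /* IAudioMeterInformation */
#include <stdio.h>

int main(void)
{
    CoInitialize(NULL);

    IMMDeviceEnumerator *enumerator = NULL;
    IMMDevice *mic = NULL;
    IAudioMeterInformation *meter = NULL;

    CoCreateInstance(&CLSID_MMDeviceEnumerator, NULL, CLSCTX_ALL,
                     &IID_IMMDeviceEnumerator, (void **)&enumerator);

    /* Default capture endpoint (the microphone). */
    IMMDeviceEnumerator_GetDefaultAudioEndpoint(enumerator, eCapture,
                                                eConsole, &mic);
    IMMDevice_Activate(mic, &IID_IAudioMeterInformation, CLSCTX_ALL,
                       NULL, (void **)&meter);

    /* Poll the meter ten times a second for ten seconds. */
    for (int i = 0; i < 100; i++) {
        float peak = 0.0f;   /* 0.0 .. 1.0 */
        IAudioMeterInformation_GetPeakValue(meter, &peak);
        printf("\rpeak: %.3f", peak);
        Sleep(100);
    }

    IAudioMeterInformation_Release(meter);
    IMMDevice_Release(mic);
    IMMDeviceEnumerator_Release(enumerator);
    CoUninitialize();
    return 0;
}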

Per monitor DPI aware Windows system image list

How do I retrieve system image list for given DPI?
When an application is system DPI-aware, the SHGetFileInfo and similar functions return a handle to a correctly scaled system image list. C++ example:
handle = SHGetFileInfo(L"", 0, &fileInfo, sizeof(fileInfo),
                       SHGFI_SYSICONINDEX |
                           (large ? SHGFI_LARGEICON : SHGFI_SMALLICON));
But with per-monitor DPI awareness, that's not enough, as the application can run on a monitor that does not use system DPI (or the application can have multiple windows, each on different monitor, with different DPI).
For example, on a 168 DPI (175% zoom) monitor with a standard 96 DPI system setting, you get small unscaled 16x16 icons.
So I'm hoping that there's a DPI-aware variant of SHGetFileInfo (or similar), the way there are DPI-aware variants of other functions, like:
GetSystemMetricsForDpi for GetSystemMetrics;
SystemParametersInfoForDpi for SystemParametersInfo;
OpenThemeDataForDpi for OpenThemeData.
As a quick solution, I ended up using SHGetImageList, as suggested by @MickyD.
As mentioned in the function documentation (and as suggested by @JonathanPotter):
The IImageList pointer type, such as that returned in the ppv parameter, can be cast as an HIMAGELIST as needed; for example, for use in a list view.
Hence I use SHGetImageList to collect all available system image list sizes by calling it for 0..SHIL_LAST.
For each returned image list, I query its icon size using ImageList_GetIconSize and cache them all.
Then, when an image list is needed for a particular DPI, I pick the closest size available.
An obvious drawback is that on a multi-monitor system with a high system DPI but one low-DPI monitor, one cannot retrieve a reasonable size for the small icons on the low-DPI monitor.
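The enumeration and lookup might be sketched like this in C (error handling omitted; the helper names SysImageList, CollectSystemImageLists, and PickImageList are hypothetical):

#include <initguid.h>
#include <windows.h>
#include <shlobj.h>         /* SHGetImageList, SHIL_* */
#include <commctrl.h>       /* ImageList_GetIconSize */
#include <commoncontrols.h> /* IImageList, IID_IImageList */
#include <stdlib.h>

typedef struct { HIMAGELIST himl; int cx; } SysImageList;

/* Collect every available system image list and cache its icon size. */
static int CollectSystemImageLists(SysImageList *lists, int maxCount)
{
    int count = 0;
    for (int i = 0; i <= SHIL_LAST && count < maxCount; i++) {
        IImageList *piml = NULL;
        if (SUCCEEDED(SHGetImageList(i, &IID_IImageList, (void **)&piml))) {
            /* Per the documentation quoted above, the IImageList
               pointer can be cast to HIMAGELIST. */
            HIMAGELIST himl = (HIMAGELIST)piml;
            int cx, cy;
            ImageList_GetIconSize(himl, &cx, &cy);
            lists[count].himl = himl;
            lists[count].cx = cx;
            count++;
        }
    }
    return count;
}

/* Pick the cached list whose icon size is closest to the size wanted
   for a window's DPI (e.g. wantedCx = MulDiv(16, dpi, 96)). */
static HIMAGELIST PickImageList(const SysImageList *lists, int count,
                                int wantedCx)
{
    int best = 0;
    for (int i = 1; i < count; i++)
        if (abs(lists[i].cx - wantedCx) < abs(lists[best].cx - wantedCx))
            best = i;
    return lists[best].himl;
}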

How should the result of getDeviceDensity() method from Codename One be used?

Depending on which skin I use in the simulator, the result of the following method differs:
Display.getInstance().getDeviceDensity();
The results have nothing to do with the real device density: for a Xoom skin it outputs 30 (149 ppi in reality), and for an iPhone 6 skin it outputs 50 (329 ppi in reality).
I noticed this because I need to translate a character height measured in Gimp (72 dpi) into device units so that it looks the same as in the image.
Any help on that topic would be appreciated!
Cheers
The JavaDocs for getDeviceDensity state:
Returns one of the density variables appropriate for this device,
notice that density doesn't always correspond to resolution and an
implementation might decide to change the density based on DPI
constraints.
Returns:
one of the DENSITY constants of Display
The DENSITY constants refer to one of these.
Note that you can also use convertToPixels, which is probably a far better API to use. The density API is mostly used to pick the right multi-image and should rarely be used in user code.

How to detect text region in image?

Given an image (e.g., a newspaper, a scanned newspaper, a magazine, etc.), how do I detect the regions containing text? I only need to know the regions and remove them; I don't need to do text recognition.
The purpose is to remove these text areas to speed up my feature extraction procedure, as these areas are meaningless for my application. Does anyone know how to do this?
BTW, it will be good if this can be done in Matlab!
Best!
You can use Stroke Width Transform (SWT) to highlight text regions.
Using my mex implementation posted here, you can
img = imread('http://i.stack.imgur.com/Eyepc.jpg');
[swt swtcc] = SWT( img, 0, 10 );
Playing with internal parameters of the edge-map extraction and image filtering in SWT.m can help you tweak the resulting mask to your needs.
To get that result, I used these parameters for the edge map computation in SWT.m:
edgeMap = single( edge( img, 'canny', [0.05 0.25] ) );
Text detection in natural images is an active area of research in the computer vision community; you can refer to the ICDAR papers. But in your case, I think it should be simple enough: since the text comes from newspapers or magazines, it should be of a fairly fixed size and horizontally oriented.
So you can apply a scanning window of a fixed size, say 32x32. Train it on the ICDAR 2003 training dataset, where positive windows are those containing text. You can use a small feature set of color and gradients and train an SVM that classifies each window as containing text or not.
For reference, see http://crypto.stanford.edu/~dwu4/ICDAR2011.pdf. For code, you can try the authors' homepages.
This example in the Computer Vision System Toolbox in Matlab shows how to detect text using MSER regions.
If your image is well binarized and you know the usual size of the text, you could use the HorizontalRunLengthSmoothing and VerticalRunLengthSmoothing algorithms. They are implemented in the open-source library AForge.NET, but it should be easy to reimplement them in Matlab.
The intersection of the result images from these algorithms gives you a good indication that a region contains text; it is not perfect, but it is fast.
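For illustration, here is a minimal sketch of the horizontal pass (in C; the vertical pass is analogous and the logic ports directly to Matlab). The 0/255 image layout, the helper name horizontal_rlsa, and the threshold are assumptions:

#include <string.h>

/* Horizontal run-length smoothing (RLSA) on a binarized image.
   img: width*height bytes, 0 = black (ink), 255 = white (background).
   White gaps shorter than `threshold` between two black pixels are
   filled with black, merging characters into solid text blobs. */
static void horizontal_rlsa(unsigned char *img, int width, int height,
                            int threshold)
{
    for (int y = 0; y < height; y++) {
        unsigned char *row = img + (size_t)y * width;
        int last_black = -threshold - 1;  /* position of last black pixel */
        for (int x = 0; x < width; x++) {
            if (row[x] == 0) {
                /* Fill the short white gap since the previous black pixel. */
                if (x - last_black <= threshold)
                    memset(row + last_black + 1, 0, x - last_black - 1);
                last_black = x;
            }
        }
    }
}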

How do I detect whether the sample supplied by VideoSink.OnSample() is right-side up?

We're currently using the Silverlight VideoSink to capture video from users' local webcams, kinda like so:
protected override void OnSample(long sampleTime, long frameDuration, byte[] sampleData)
{
    if (FrameShouldBeSubmitted())
    {
        byte[] resampledData = ResizeFrame(sampleData);
        mediaController.SetVideoFrame(resampledData);
    }
}
Now, on most of the machines that we've tested, the video sample provided in the byte[] sampleData parameter is upside-down, i.e., if you try to take the RGBA data and turn it into, say, a WriteableBitmap, the bitmap will be upside-down. That's odd, but fairly easy to correct, of course -- you just have to reverse the array as you encode it.
The problem is that at least on some machines (e.g., the single Macintosh in our test environment), the video sample provided is no longer upside-down, but right-side up, and hence, flipping the image actually results in an image that's received upside-down on the far side.
I reported this to MS as a bug, but their (terse) response was that it was "As Designed". Further attempts at clarification have so far been ignored.
Now, I'll grant that it's kinda entertaining to imagine the discussions behind this design decision: "OK, just to make it interesting, let's play the video rightside up on a Mac, but let's turn it upside down for Windows!" "Great idea!" "Yeah, that'll keep those developers guessing!" But beyond that, I can't find this, umm, "feature" documented anywhere, nor can I find any documentation on how one is supposed to be able to tell that a given video sample is upside down or rightside up. Any thoughts on how to tell this?
EDIT 3/29/10 4:50 pm - I got a response from MS which said that the appropriate way to tell was through the Stride property on the VideoFormat object, i.e., if the stride value is negative, the image will be upside-down. However, my own testing indicates that unless I'm doing something wrong, this isn't the case. At least on my own machine, whether the stride value is zero or negative (the only options I see), the sampled image is still upside-down.
I was going to suggest looking at the VideoFormat.Stride provided at VideoSink.OnFormatChange, but then I noticed your edit. I went ahead and tested it on my dev machine: the image is upside down and the stride is negative, as expected. Have you checked again recently?
Even though stride makes perfect sense for native applications (which use it in pointer arithmetic), I agree that the current behavior is not what you'd expect from a modern API. Performance-wise, however, it is better not to transform the data received from the native API.
And while we are talking about performance: why not provide samples in formats other than PixelFormatType.Format32bppArgb, so that color-space conversion can be avoided? There is a VideoCaptureDevice.DesiredFormat property, but it effectively only controls resolution, since there is no alternative to PixelFormatType.Format32bppArgb.
