Depending on which skin I use in the simulator, the result of the following method differs:
Display.getInstance().getDeviceDensity();
The results have nothing to do with the real device density: for a Xoom skin it outputs 30 (149 ppi in reality), and for an iPhone 6 it outputs 50 (329 ppi in reality).
I noticed this because I need to translate a character height measured in GIMP (at 72 dpi) into device pixels so that it looks the same on an image.
Any help on that topic would be appreciated!
Cheers
The JavaDocs for getDeviceDensity state:
Returns one of the density variables appropriate for this device,
notice that density doesn't always correspond to resolution and an
implementation might decide to change the density based on DPI
constraints.
Returns:
one of the DENSITY constants of Display
The DENSITY constants refer to one of these.
Notice that you can also use convertToPixels, which is probably a far better API to use. The density API is mostly used to pick the right multi-image and should rarely be used in user code.
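For example, a minimal sketch (the 3 is an arbitrary value; convertToPixels takes a millimeter count and an orientation flag):

// Size things in physical units instead of branching on the density constant
Display d = Display.getInstance();
int heightPx = d.convertToPixels(3, false); // ~3mm along the vertical axis
int widthPx  = d.convertToPixels(3, true);  // ~3mm along the horizontal axis

That way a character height tuned in millimeters comes out at roughly the same physical size on every device, regardless of which density bucket the skin reports.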
I have a question that's very specific, yet very general at the same time. (Also, I don't know if this is quite the right site for this.)
The Scenario
Let's say I have an uncompressed video vid.avi. It is then run through [Some compression algorithm], which is lossy. I want to compare vid.avi and the new, compressed file to determine just how much data was lost in the compression. How can I compare the files, and how can I measure the difference between the two, using the original as the reference point? Is it possible at all? I would prefer a generic answer that works with any language, but I would also gladly accept an answer that's specific to one.
EDIT: Let me be more specific. I want something that compares two video files in a similar way that the Notepad++ Compare plugin compares text files. I just want to find out how close each individual pixel's colour is to the original file's colour for that pixel.
Thanks in advance, and thank you for taking the time to read this question.
It is generally the change in video quality that people want to measure when comparing compression methods, rather than a loss of data.
If you did want to measure the data loss somehow, you would have to define what you mean by 'data' and how you want to measure it. Video compression is quite complex, and the approach may even differ frame by frame within a video. 'Data' could mean the colour depth of each pixel, the number of frames per second, whether a frame is encoded as a delta against other frames, etc.
Video quality is subjective, so the reduction in quality after compression will not be an absolute value. The usual way to measure it is similar to the technique used for audio, the Mean Opinion Score: https://en.wikipedia.org/wiki/Mean_opinion_score. It essentially uses a well-defined process to apply some objectivity to a test audience's subjective experience.
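That said, if you do want the simple per-pixel comparison from your edit, you can decode both files and difference the corresponding frames. A rough Matlab sketch, assuming both videos decode to the same resolution and frame count (the file names are placeholders):

% Mean absolute per-pixel colour difference, frame by frame
ref = VideoReader('vid.avi');
cmp = VideoReader('vid_compressed.avi');
totalErr = 0; nFrames = 0;
while hasFrame(ref) && hasFrame(cmp)
    a = double(readFrame(ref));
    b = double(readFrame(cmp));
    totalErr = totalErr + mean(abs(a(:) - b(:)));
    nFrames  = nFrames + 1;
end
fprintf('Mean per-pixel difference over %d frames: %.3f\n', nFrames, totalErr / nFrames);

Objective metrics such as PSNR are normalized versions of this same per-pixel error, and they share its caveat: a numerically small difference can still be very visible, which is why subjective scoring is preferred.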
How do I retrieve system image list for given DPI?
When an application is system DPI-aware, the SHGetFileInfo and similar functions return a handle to a correctly scaled system image list. C++ example:
handle =
    SHGetFileInfo(L"", 0, &fileInfo, sizeof(fileInfo),
                  SHGFI_SYSICONINDEX | (large ? SHGFI_LARGEICON : SHGFI_SMALLICON));
But with per-monitor DPI awareness, that's not enough, as the application can run on a monitor that does not use the system DPI (or the application can have multiple windows, each on a different monitor with a different DPI).
For example, on a 168 DPI (175% zoom) monitor, with a standard 96 system DPI, you get small, unscaled 16x16 icons.
So I'm hoping that there's a DPI-aware variant of SHGetFileInfo (or similar), the way there are DPI-aware variants of other functions, like:
GetSystemMetricsForDpi for GetSystemMetrics;
SystemParametersInfoForDpi for SystemParametersInfo;
OpenThemeDataForDpi for OpenThemeData.
As a quick solution, I ended up using SHGetImageList, as suggested by @MickyD.
As mentioned in the function documentation (and as suggested by @JonathanPotter):
The IImageList pointer type, such as that returned in the ppv parameter, can be cast as an HIMAGELIST as needed; for example, for use in a list view.
Hence I use SHGetImageList to collect all available system image list sizes by calling it for indices 0..SHIL_LAST.
For each returned image list, I query its icon size using ImageList_GetIconSize and cache them all.
Then, when an image list is needed for a particular DPI, I pick the closest size available.
An obvious drawback is that on a multi-monitor system with a high system DPI but one low-DPI monitor, one cannot retrieve a reasonably sized small-icon list for the low-DPI monitor.
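For reference, a condensed C++ sketch of the caching approach described above (error handling and COM lifetime management omitted):

#include <windows.h>
#include <commctrl.h>
#include <shlobj.h>
#include <commoncontrols.h>
#include <climits>
#include <cstdlib>
#include <map>

std::map<int, HIMAGELIST> g_listsBySize; // icon width -> system image list

void CacheSystemImageLists()
{
    for (int i = 0; i <= SHIL_LAST; ++i)
    {
        IImageList *list = nullptr;
        if (SUCCEEDED(SHGetImageList(i, IID_PPV_ARGS(&list))))
        {
            // Per the documentation, the IImageList pointer can be cast to HIMAGELIST
            HIMAGELIST handle = reinterpret_cast<HIMAGELIST>(list);
            int cx = 0, cy = 0;
            ImageList_GetIconSize(handle, &cx, &cy);
            g_listsBySize[cx] = handle;
        }
    }
}

// Pick the cached list closest to the size wanted for the target DPI,
// e.g. desired = 16 * dpi / 96 for small icons
HIMAGELIST GetImageListForDesiredSize(int desired)
{
    HIMAGELIST best = nullptr;
    int bestDiff = INT_MAX;
    for (const auto &entry : g_listsBySize)
    {
        int diff = std::abs(entry.first - desired);
        if (diff < bestDiff) { bestDiff = diff; best = entry.second; }
    }
    return best;
}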
I've been using 24-bit .png files with alpha, exported from Photoshop. I just tried a .psd, which worked fine with OpenGL ES, but Metal didn't see the alpha channel.
What's the absolutely most performant texture format for particles within SceneKit?
Here's a sheet to test on, if need be.
It looks white: right-click and 'Save as' in the blank space. It's an alpha-heavy set of rings; you can probably barely make them out if you squint at the screen.
Exaggerated example use case:
https://www.dropbox.com/s/vu4dvfl0aj3f50o/circless.mov?dl=0
// Additional points for anyone who can guess the difference between the left and right rings in the video.
Use a grayscale/alpha PNG, not an RGBA one. Since it uses 16 bits per pixel (8+8) instead of 32 (8+8+8+8), the initial texture load will be faster and it may (depending on the GPU) use less memory as well. At render time, though, you’re not going to see much of a speed difference, since whatever the texture format is it’s still being drawn to a full RGB(A) render buffer.
There’s also PVRTC, which can get you down as low as 2–4 bits per pixel, but I tried Imagine’s tool out on your image and even the highest quality settings caused a bunch of artifacts like the below:
Long story short: go with a grayscale+alpha PNG, which you can easily export from Photoshop. If your particle system is hurting your frame rate, reduce the number and/or size of the particles—in this case you might be able to get away with layering a couple of your particle images on top of each other in the source texture atlas, which may not be too noticeable if you pick ones that differ in size enough.
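If it helps, here's a minimal Swift sketch of pointing a SceneKit particle system at such a texture (the file name and tuning values are placeholders):

import SceneKit

let particles = SCNParticleSystem()
particles.particleImage = "rings_grayalpha.png" // grayscale+alpha asset in the bundle
particles.blendMode = .additive                 // a common choice for glowing rings
particles.birthRate = 200
particles.particleSize = 0.05

let emitter = SCNNode()
emitter.addParticleSystem(particles)
scene.rootNode.addChildNode(emitter)            // 'scene' is your SCNScene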
Given an image (e.g. a newspaper page, a scanned newspaper, a magazine), how do I detect the regions containing text? I only need to find those regions and remove them; I don't need to do text recognition.
The purpose is to remove these text areas so as to speed up my feature extraction procedure, since they are meaningless for my application. Does anyone know how to do this?
BTW, it would be great if this could be done in Matlab!
Best!
You can use Stroke Width Transform (SWT) to highlight text regions.
Using my mex implementation posted here, you can:
img = imread('http://i.stack.imgur.com/Eyepc.jpg');  % read the input image
[swt, swtcc] = SWT(img, 0, 10);                      % stroke-width map and its connected components
Playing with internal parameters of the edge-map extraction and image filtering in SWT.m can help you tweak the resulting mask to your needs.
To get this result, I used these parameters for the edge map computation in SWT.m:
edgeMap = single(edge(img, 'canny', [0.05 0.25]));
Text detection in natural images is an active area of research in the computer vision community; you can refer to the ICDAR papers. But in your case I think it should be simple enough: since the text comes from newspapers or magazines, it should be of a fixed size and horizontally oriented.
So you can apply a scanning window of a fixed size, say 32x32, as sketched below. Train it on the ICDAR 2003 training dataset, with positive examples being windows that contain text. You can use a small feature set of colour and gradients and train an SVM that classifies each window as containing text or not.
For reference, see http://crypto.stanford.edu/~dwu4/ICDAR2011.pdf. For code, you can try the authors' homepages.
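A rough sketch of the scanning-window loop (the raw-pixel feature and the trained model are placeholders; in practice you'd use the colour/gradient features and an SVM trained on ICDAR patches, e.g. via fitcsvm):

win = 32; step = 16;
gray = rgb2gray(imread('page.jpg'));   % placeholder file name
mask = false(size(gray));
for r = 1:step:size(gray,1)-win+1
    for c = 1:step:size(gray,2)-win+1
        patch = gray(r:r+win-1, c:c+win-1);
        feat  = double(patch(:))';     % stand-in for colour/gradient features
        if predict(model, feat) == 1   % assumes label 1 = window contains text
            mask(r:r+win-1, c:c+win-1) = true;
        end
    end
end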
This example in the Computer Vision System Toolbox in Matlab shows how to detect text using MSER regions.
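A minimal sketch along the lines of that example (the file name and area range are placeholders):

gray    = rgb2gray(imread('page.jpg'));
regions = detectMSERFeatures(gray, 'RegionAreaRange', [30 8000]); % prune tiny/huge blobs
figure; imshow(gray); hold on;
plot(regions, 'showPixelList', true, 'showEllipses', false);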
If your image is well binarized and you know the usual size of the text, you could use the HorizontalRunLengthSmoothing and VerticalRunLengthSmoothing algorithms. They are implemented in the open-source library AForge.NET, but it should be easy to reimplement them in Matlab.
The intersection of the result images from these two algorithms will give you a good indication of which regions contain text; it is not perfect, but it is fast.
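Since the algorithm is simple, here is a compact Matlab sketch of the horizontal pass (bw is the binarized page with text = true; the threshold depends on your text size):

function out = hrls(bw, thresh)
    % Fill background runs shorter than 'thresh' between text pixels
    out = bw;
    for r = 1:size(bw,1)
        d = diff([1, bw(r,:), 1]);    % pad so border runs are handled too
        gapStart = find(d == -1);     % text -> background transitions
        gapEnd   = find(d == 1) - 1;  % background -> text transitions
        for k = 1:numel(gapStart)
            if gapEnd(k) - gapStart(k) + 1 <= thresh
                out(r, gapStart(k):gapEnd(k)) = true;
            end
        end
    end
end

Running the same function on the transposed image gives the vertical pass, so the intersection is hrls(bw, 30) & hrls(bw.', 30).' (the 30 is a placeholder threshold).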
I am working on an application that displays surface winds. Wind speed and direction will be displayed using "wind barb" symbols, as described here: Plotted Winds
My question: Are there any standards for the angles and lengths of the "flags" in relation to the wind-barb "pole"?
Eyeballing the diagrams I've seen, I think that an angle of 60 degrees and a flag length about a third of the pole length would look fine, but if there are any officially defined standards for these symbols, I'd like to follow them.
Note: This app will not be used for navigation, so it is not very important that it look exactly like an official chart. I just don't want it to be ugly, or to look obviously wrong to a knowledgeable user.
I found this program that draws weather maps. I think you can get the source code.
http://www.ncarg.ucar.edu//supplements/wmap/index.html#HEADING1-139
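Until you find an official spec, here's a quick Matlab plotting sketch using the eyeballed values from the question (60-degree barb angle, flag length a third of the pole length); it makes no claim to match any standard:

poleLen = 1;
flagLen = poleLen / 3;
ang     = deg2rad(60);                           % angle between flag and pole

figure; hold on; axis equal off;
plot([0 0], [0 poleLen], 'k', 'LineWidth', 1.5); % the pole
for y = poleLen : -0.12*poleLen : 0.75*poleLen   % three flags near the tip
    plot([0, flagLen*sin(ang)], [y, y + flagLen*cos(ang)], 'k', 'LineWidth', 1.5);
end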