Background:
I'm controlling a long (8.4 meter) ws2812b strip using a Raspberry Pi. There are a total of 506 pixels in the strip. In addition to the power supplied by the RPi, I have an external power supply. The external power supply is connected at both ends of the strip. It supplies 5 Volts at a maximum of 40 amps. Each connection to the strip has about 3 meters of wire to connect to the power supply. I am using the neopixel library to control the strip.
According to what I read, each pixel should take a maximum of 0.060 amps to achieve maximum brightness. Given that I have 506 pixels, that works out to 30.36 amps needed (i.e. 506 * 0.060 = 30.36). Since I have a 40 Amp power supply, I should be able to set all the pixels to maximum brightness.
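For reference, here is that current-budget arithmetic as a quick Python sketch (the 60 mA per pixel is the commonly quoted worst-case figure, not something I've measured):

```python
# Rough current budget for the strip described above.
NUM_PIXELS = 506
MAX_AMPS_PER_PIXEL = 0.060   # commonly quoted worst case at full white (255, 255, 255)
SUPPLY_AMPS = 40

full_white_draw = NUM_PIXELS * MAX_AMPS_PER_PIXEL       # 30.36 A
print(f"Full-white draw: {full_white_draw:.2f} A")
print(f"Supply headroom: {SUPPLY_AMPS - full_white_draw:.2f} A")  # 9.64 A
```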
Problem:
I am able to light up the whole strip to minimal white brightness (a value of 8,8,8). But whenever I try to maximize the brightness (255,255,255), the first few pixels are fully bright and white, but the rest fade from yellow to red along the length.
What am I missing? Why can't I make the whole length glow bright white?
Related
I'm writing a program that changes all the pixels of an image to grayscale except for the red ones. At first I thought it would be easy, but I'm having trouble finding the best way to determine whether a pixel is red or not.
The first method I tried was a formula: Green < Red/2 && Blue < Red/1.5
Results: [images: michael jordan, goldhill]
Michael Jordan's image shows some non-red pixels that pass the formula, like #7F3222 and #B15432. So I tried a different method, hue >= 345 || hue <= 9, trying to restrict the match to just the red part of the color wheel.
Results: [images: michael jordan 2, goldhill 2]
Michael Jordan's image now has fewer non-red pixels, and goldhill's image has more red pixels than before, but it's still not what I want.
Are my methods incorrect, or are they just missing some adjustments? If they're incorrect, how can I solve this problem?
Your question "How to identify 'real' red pixels", begs the question "what a red pixel actually is, especially if it has to be 'real'".
The RGB (red, green, blue) color model is not well suited to answer that question, therefore you should use the HSV (hue, saturation, value) model.
Hue defines the color in degrees (0 - 360 degrees)
Saturation defines the intensity of the color (0 - 100 %)
Value or Brightness defines the luminosity (0 - 100 %)
Steps:
convert RGB to HSV
if the H value is not red (you'll have to define a threshold range of what you consider to be red, e.g. +/- 30 degrees around 0; 'real' red would be exactly 0 degrees)
set S to 0 (zero), by doing so we remove the saturation of the color, which results in a gray shade
leave the brightness (V) as it is (or play around with it and see how it affects the results)
convert HSV to RGB
Convert from RGB to HSV and vice versa:
RGB to HSV
HSV to RGB
More info on HSV:
https://en.wikipedia.org/wiki/HSL_and_HSV
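A minimal sketch of those steps, assuming Pillow for image I/O and Python's built-in colorsys for the conversions; the 30-degree tolerance is just the example threshold from above and should be tuned (function name is hypothetical):

```python
import colorsys
from PIL import Image   # assumes Pillow is installed

def keep_only_red(path_in, path_out, hue_tolerance_deg=30.0):
    """Desaturate every pixel whose hue is further than the tolerance from red (0 degrees)."""
    img = Image.open(path_in).convert("RGB")
    pixels = img.load()
    for y in range(img.height):
        for x in range(img.width):
            r, g, b = pixels[x, y]
            # colorsys works in the 0..1 range; hue 0.0 corresponds to 0 degrees (red)
            h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
            hue_deg = h * 360.0
            distance_from_red = min(hue_deg, 360.0 - hue_deg)
            if distance_from_red > hue_tolerance_deg:
                s = 0.0                       # removing the saturation gives a gray shade
            r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)
            pixels[x, y] = (int(r2 * 255), int(g2 * 255), int(b2 * 255))
    img.save(path_out)
```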
"All cats are gray in the dark"
Implement a dynamic color range: adjust the 'red' hue range based on the brightness and/or saturation of the current pixel. Give the saturation and brightness values weights (how much each affects the range, in percent) to determine your range, and experiment to achieve the best results.
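One hypothetical way to express that weighting (the numbers here are made up and would need tuning):

```python
def red_hue_tolerance(s, v, base_deg=15.0, sat_weight=0.5, val_weight=0.5):
    """Widen the accepted hue range for bright, saturated pixels and
    narrow it for dull, dark ones. s and v are in the 0..1 range."""
    return base_deg * (1.0 + sat_weight * s + val_weight * v)
```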
You used an RGB method and an HSV method, which is good; both are fine.
The problem is defining red. Hue (or R) alone is not enough: it covers many other colours in the broader sense. Browns are dark or unsaturated reds (or oranges), and pink is a tint of red (red plus white, so unsaturated).
So, in your first method, I would add a condition: R > 127 (you should find a good threshold yourself), possibly tighten the other conditions with a higher ratio of R to G and B, and possibly also add a ratio of R to (G + B). The new first condition is about brightness (reds rather than dark reds/browns), your two original conditions are about hue (hue is determined by the top two channel values), and the last condition I suggested is about saturation.
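In code, that first method plus the extra conditions might look something like this (all thresholds are hypothetical starting points to tune against your own images):

```python
def looks_red(r, g, b):
    bright_enough = r > 127                     # brightness: rejects dark reds / browns
    red_dominates = g < r / 2 and b < r / 1.5   # your original two conditions (hue)
    saturated     = r > (g + b)                 # ratio of R to (G + B) (saturation)
    return bright_enough and red_dominates and saturated
```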
You can do something similar with HSV: filter H (as you did), but also filter V (you want only bright reds) and require a high saturation, so in the end you filter all three channels.
You should test the saturation levels yourself. The problem is that our eyes adapt quickly to colours, so an image containing a lot of reddish colours looks normal (less reddish) to a human but reads as more red in the calculation above. For this kind of work there are usually sliders to adjust; you can try to automate it, but then you need to estimate the overall hue and brightness of the image, and possibly use more complex methods (see CIECAM).
Similar to calibrating a single camera 2D image with a chessboard, I wish to determine the width/height of the chessboard (or of a single square) in pixels.
I have a camera aimed vertically at the ground, ensured to be perfectly level with the surface below. I am using the camera to determine the translation between consecutive frames (successfully achieved using Fourier phase correlation). At the moment my result returns the translation in pixels; however, I would like to use techniques similar to calibration, where I move the camera over a chessboard lying flat on the ground, to automatically determine the size of the chessboard in pixels relative to my image height and width.
Knowing the size of the chessboard in millimetres, I can then convert a pixel unit to a real-world unit in millimetres, i.e. 1 pixel will represent a distance proportional to the height of the camera above the ground. This will allow me to convert a translation in pixels to a translation in millimetres, recalibrating every time I change the height of the camera.
What would be the recommended way of achieving this? Surely it must be simpler than single camera 2D calibration.
OpenCV can give you the position of the chessboard's corners with cv::findChessboardCorners().
I'm not sure if the perspective distortion will affect your calculations, but if the chessboard is perfectly aligned beneath the camera, it should work.
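A rough Python sketch of that approach, assuming a roughly fronto-parallel view, a board with 7x7 inner corners, 25 mm squares, and an image file called board.png (all of these are placeholders for your own values):

```python
import cv2
import numpy as np

PATTERN_SIZE = (7, 7)        # inner corners; a standard 8x8-square board has 7x7 (assumption)
SQUARE_SIZE_MM = 25.0        # measured size of one square in mm (assumption)

img = cv2.imread("board.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file name
found, corners = cv2.findChessboardCorners(img, PATTERN_SIZE)
if found:
    corners = corners.reshape(-1, 2)
    cols = PATTERN_SIZE[0]
    # Average distance between horizontally adjacent inner corners = square size in pixels
    deltas = [np.linalg.norm(corners[i + 1] - corners[i])
              for i in range(len(corners) - 1)
              if (i + 1) % cols != 0]                  # skip the jump between rows
    square_px = float(np.mean(deltas))
    mm_per_pixel = SQUARE_SIZE_MM / square_px
    print(f"square ~ {square_px:.1f} px, scale ~ {mm_per_pixel:.3f} mm per pixel")
```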
This is just an idea, so don't hit me... but maybe use the natural contrast of the chessboard?
"At some point it will switch from bright to dark pixels, and that should happen (I can't remember the number of columns on a chessboard) times" should be a doable algorithm.
What is the meaning of the statement below:
My system resolution is 1024 x 768 at 96 DPI
I am not able to understand the internal maths of why, when we increase the DPI at a fixed resolution, a user interface developed in VC++/MFC or a C#/WinForms application expands (looks larger than it does at 96 DPI).
For example, we develop the user interface at 96 DPI, which means 96 dots per inch. When we increase the DPI we are increasing the dots per inch, so I would expect the user interface to look compressed rather than enlarged.
I am doing this on a Windows 7 machine.
Please help!!
My system resolution is 1024 x 768 at 96 DPI
This means that your computer thinks that your monitor has 96 dots (pixels) per inch (at this resolution). When a program does graphical calculations, it uses this setting to convert between real lengths (in inches or centimetres) and pixels.
This will work out correctly if the 96 DPI setting matches your monitor (i.e. the display area is 1024/96=10.67 by 768/96=8 inches).
Why do things get larger when you increase this setting? Let's say we want to make a button 1 inch high, and your monitor's real DPI is 96, but you have set it to 150. One inch times 150 dots per inch gives us 150 pixels, so we will draw our button 150 pixels high. But our monitor's real DPI is 96, so this appears as 150 pixels / 96 dpi = 1.56 inches high.
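The same arithmetic as a tiny script (Python just for brevity; the 150 DPI figure is the example value from above):

```python
# The 1-inch button from the example above, drawn under a mismatched DPI setting.
intended_inches = 1.0
configured_dpi = 150     # what Windows has been told
actual_dpi = 96          # what the monitor really is

pixels_drawn = intended_inches * configured_dpi   # 150 pixels
apparent_inches = pixels_drawn / actual_dpi       # about 1.56 inches
print(f"{pixels_drawn:.0f} px drawn -> appears {apparent_inches:.2f} inches tall")
```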
there is no "DPI"-setting for your monitor, this device only knows about pixels. DPI = either printers or preformatted documents which need to be viewed in special devices. You CAN, however, calculate how many pixels would be needed to display something with the physical size (hence DPI) of X ... which is a rather unprecise calculation, by the way.
If you're calculating physical sizes you're either developing computer-games, writing your own printing-driver or need to fulfill extraordinary project-tasks
Does anyone know what measurement units are used by Silverlight/WFP? For example, if I create a new button and set its height to 150, is that 150 pixels? points? millimeters?
I design all of my applications in Adobe Illustrator before proceeding to code, and although I try and set everything to the dimensions in my Illustrator file, the Silverlight application is usually larger.
Although in theory, 1 unit in WPF is 1/96th of an inch, that's frequently not the case in practice.
It's usually true when printing. But it's rarely true on screen. The reason for this is that Windows almost always knows the true resolution of a printer, but almost never knows the true resolution of a screen.
For example, I have three screens attached to my computer. Windows thinks that they all have a resolution of 96 pixels per inch. Actually they don't. Two of them have a resolution of 101 pixels per inch, and one has a resolution of 94 pixels per inch. (Why? Because Windows has no way of working the true resolutions out for itself, and I haven't told it. The fiction that they all have the same pixel size is close to the truth, and turns out to be a convenient fiction.)
So when I create, say, a Rectangle in WPF with Width and Height both set to 96, the size of the Rectangle actually depends on which screen it appears on. Windows thinks that all 3 screens have a resolution of 96 pixels per inch, and so it'll render the rectangle as being 96 pixels tall and wide no matter which screen it appears on. That'll make it appear 0.95 inches tall on two of the screens, and 1.02 inches tall on the third.
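The same numbers, worked through as a small script (Python only for illustration):

```python
# The 96x96-unit Rectangle from the paragraph above: Windows assumes 96 px/inch,
# so it is always rendered as 96 device pixels; its physical size depends on the screen.
wpf_units = 96
rendered_pixels = wpf_units * 96 / 96      # 96 px on every screen

for real_ppi in (101, 101, 94):            # the three screens described above
    print(f"{real_ppi} ppi screen: {rendered_pixels / real_ppi:.2f} inches tall")
```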
So in practice, that means that units in WPF on my computer here are either 1/100th of an inch, or 1/94th of an inch in practice. (I.e., in practice, the size of 1 unit in WPF is exactly the size of 1 pixel on my particular setup, no matter how big the pixels happen to be.)
I could change that. I could reconfigure Windows - I could tell it the actual resolution of all 3 screens, in which case the nominal and actual WPF unit sizes would coincide. Or I could lie - I could claim that I have 200 pixel per inch screens, in which case everything would be massive...
The basic problem here is that there is no standard way for the computer to discover the true size of the physical pixels on the screen, and very few people bother to set it up by hand. (And in fact you can cause problems by configuring it 'correctly', because a lot of software doesn't behave correctly when you do.) So the majority of Windows computers don't report physical pixel sizes correctly to WPF - they can't because they don't know.
Consequently, there's no reliable answer to the question - 1 unit in WPF could be pretty much anything on screen. (In practice, most of the time, it turns out to be 1 pixel, simply because if you don't tell Windows anything else, it defaults to assuming that your screens have pixels that are 1/96th of an inch tall, which is the same as 1 WPF unit. And for most desktop screens, that's actually quite likely to be a good guess. But this isn't universal. On systems configured with what used to be called 'large fonts' for example, you'll find a different nominal screen resolution, and 1 WPF unit will correspond to slightly more than 1 physical pixel - 1.25 in fact.)
With printers, it's all much more predictable. Printers are invariably able to report their resolutions correctly. So if you print something that's 96 WPF units high, you can be confident that it will be 1 inch high.
MSDN's documentation states that the FrameworkElement.Height property (for Silverlight) refers to:
The height, in pixels, of the object
However, for WPF it refers to:
a device-independent unit (1/96th inch) measurement
So, to answer your question... pixels for Silverlight, device-independent units for WPF.
The documentation refers to pixels; however, these are pixels of which there are 96 per inch. A line of Width 96, when displayed on a 120 DPI display, will be 120 actual device pixels. Similarly, the same line drawn on printer output at 600 DPI will be 600 dots long.
They are Device Independent Units.
You can find more detailed explanations here.
I'm developing a web based mobile application and I was thinking about the default background color.
Do different color backgrounds use different amounts of battery life? For the best battery life should I choose black or white or some other color?
I would assume that, because there's a backlight behind the LCD, white would use the least amount of power since no pixels would have to be turned on. Is this assumption correct?
For most devices the background colour you use has no effect on the battery usage. The backlight intensity isn't changed.
However on AMOLED displays the power consumption can vary "significantly". See the wikipedia page for details:
"For example, our measurement shows that a commercial QVGA OLED display consumes 3 and 0.7 Watts showing black text on a white background and white text on a black background, respectively."
With ordinary LCD displays, the backlight consumes far more power than any number of pixels. If your device has a fixed brightness, you can pick any color you want; the difference in power usage will be minuscule.
On the other hand, if you can adjust the brightness, then what you want is the color scheme that gives the best contrast/visibility at the lowest possible screen brightness.
If it's an embedded device and you're in total control of features like screen brightness, this might be worth investing a little time. But if you're writing an app to run in an existing OS framework, something like overall display brightness probably won't be something you will be allowed to control.
For LED displays the background color does not matter.
For AMOLED displays, which are used in many of the newer smartphones, a black background saves a significant amount of energy.
If battery life is the only concern, use a black background.
Latest research confirms that lighter backgrounds use more energy
Microsoft blog on battery conservation