Silverlight - Brush banding

Can someone tell me how to increase the number of "color bands" on a Silverlight gradient brush?
The question is marked as answered below, but using bitmaps instead of gradient brushes really isn't a solution for me, as my users can modify their background dynamically. The banding is most obvious when full-screen gradients are used.
How can I make the brush smooth, without visible lines across the middle?

Unfortunately, I can only give you half an answer: the general principles of why this happens and how to avoid it. As to the implementation in Silverlight, I can only offer some general ideas, as I'm not an expert there.
Those bands result from the limited color depth of the display. To reduce the effect, the gradient has to be made slightly noisy. The simplest way to do this is to calculate the color from the gradient as a floating-point number, and then round it to an integer in a weighted, random manner, like this:
double correctValue = LinearGradient(x, y);     // e.g. 25.3 (your gradient function)
int wholePart = (int)Math.Floor(correctValue);  // 25
double fracPart = correctValue - wholePart;     // 0.3
int colorUsed = wholePart;
if (rng.NextDouble() < fracPart)                // rng is a shared System.Random
    colorUsed += 1;                             // colorUsed == 25 with 70% chance,
                                                //               26 with 30% chance
[Example image: on the left, the effect of simply rounding to the nearest integer; on the right, rounding with noise.]
As to how to implement that in your case: I assume there would be some way to create a custom Brush that does the calculation per pixel, or alternatively some sort of shader functionality which could help. If those are not possible, you can simply generate a bitmap using this logic and use that bitmap as the brush (I believe this would be an ImageBrush, but that's just from googling).
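To flesh out the bitmap route, here is a minimal sketch assuming Silverlight's WriteableBitmap and ImageBrush; the size, the vertical white-to-black gradient, and the element name are placeholders:

// Uses System.Windows.Media and System.Windows.Media.Imaging.
var rng = new Random();
int width = 800, height = 600;                       // size to your element
var bmp = new WriteableBitmap(width, height);
for (int y = 0; y < height; y++)
{
    // Ideal (fractional) gray level for this row, from 255 down to 0.
    double ideal = 255.0 * (height - 1 - y) / (height - 1);
    int whole = (int)Math.Floor(ideal);
    double frac = ideal - whole;
    for (int x = 0; x < width; x++)
    {
        int g = whole;
        if (rng.NextDouble() < frac) g++;            // noisy rounding, per pixel
        bmp.Pixels[y * width + x] = (255 << 24) | (g << 16) | (g << 8) | g;
    }
}
bmp.Invalidate();
var brush = new ImageBrush { ImageSource = bmp };
// e.g. myRootGrid.Background = brush;               // hypothetical element name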
Perhaps somebody with more Silverlight knowledge can chime in and provide the best implementation in this case.

Related

WPF dynamically scale TextBlock Text without filling a container

I have a set of pages with their content in Grids that use * Heights and Widths, so the grid scales correctly when the entire window resizes. I would like the text to resize with the grid as the user resizes the window, preserving the white space around it.
One way to do this would be to wrap the TextBlock in a Viewbox with margins on the right and bottom (for Grid.Row="3") to account for white space. But because I have several pages with different text lengths and line counts, I would have to set the margin specifically for each page, otherwise the text sizes would differ from page to page. Is there a better way to do this?
I don't think there is a better way to do this. There are different ways, but I'd argue it isn't just a matter of opinion that they would not be better.
Ways I can think of:
Render your text offscreen, use RenderTargetBitmap to capture it as a picture, then change your TextBlocks on screen to Images and stretch those.
Or
Work out the size your text wants to be, then do some calculation that comes up with a different font size which is "better". This is a lot easier to describe than to do.
In my opinion, a Viewbox is easier to implement, way less error-prone than calculations, and will give at least as good results as rendering to a picture.
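For what it's worth, a minimal code-behind sketch of the Viewbox route (in XAML this is just a Viewbox wrapping the TextBlock; the margin values below are placeholders for the white space):

// Uses System.Windows, System.Windows.Controls and System.Windows.Media.
var viewbox = new Viewbox
{
    Stretch = Stretch.Uniform,                  // scale the text uniformly with the cell
    Margin = new Thickness(0, 0, 40, 40)        // hypothetical right/bottom white space
};
viewbox.Child = new TextBlock { Text = "Page text here" };
// Grid.SetRow(viewbox, 3);                     // hypothetical placement in the grid
// myGrid.Children.Add(viewbox);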
I just want to add one more solution to the ones suggested by Andy, which is more of a scientific approach and takes a bit of practice to master.
Suppose you have to find a function F that maps one or more variables to a desired single value. In your case that would be a function F that takes the aspect ratio of the window as input and outputs an appropriate font size.
How can you find such a function?
Well... you don't need to do any math yourself!
First, you need some data to begin with:
1. Resize the window randomly
2. Calculate the aspect ratio (X)
3. Pick an appropriate font size that looks good enough (Y)
4. Repeat the measurement 7 to 10 times (sorry, data scientists)
5. Enter the data in Excel - one column for X and another one for Y
6. Insert a scatter chart
7. Choose the best trendline for your data, but avoid the polynomial one
8. Display the trendline equation and use the expression in your code (see the sketch below)
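For instance, if Excel fits a linear trendline, using it in code is a one-liner. A minimal sketch, with made-up coefficients:

// The coefficients below are hypothetical; substitute the ones from
// your own Excel trendline equation.
double FontSizeForAspectRatio(double aspectRatio) =>
    12.5 * aspectRatio + 4.2;

// e.g. in a SizeChanged handler (names hypothetical):
// myTextBlock.FontSize = FontSizeForAspectRatio(ActualWidth / ActualHeight);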
Now I should mention the pros and cons of this regression technique.
Pros:
1. It can solve a wide range of tricky problems:
"I use this 3rd party control, but when the text is too long it overlaps the title bar. How to trim it so it doesn't go beyond the top border?. Deadline is coming!"
2. Even if it doesn't solve the problem perfectly, the results are often acceptable.
3. It takes minutes to try out, unlike spending a day refreshing your math skills.
Cons:
1. The biggest problem is that, to keep it simple, you often lower the number of variables by assuming some of them to be constant. In this post I've assumed, for example, that the font family won't change, and neither will the font weight.
2. If any of the assumptions does not hold, the final result could be even worse.
This technique is fragile, but powerful. Use it as your weapon of last resort, and never leave a magic expression like
fontSize = (int)(0.76 + 1.2 * aspectRatio)
without documenting how it came to be.

Blob detection in C (not with OpenCV)

I am trying to do my own blob detection, which will receive real-time video and try to detect a white sheet of paper, even if something is written on the paper. I need to detect the paper and its corners, because what I really want is to draw an OpenGL polygon over the paper, with each corner of the polygon at a corner of the paper. Then I need the coordinates of the paper to do other stuff.
So I need to:
- detect a square white blob.
- get the coordinates of the corners.
- draw a polygon over the white sheet.
Any ideas how I can do that?
Much depends on context. For example, suppose that you:
know that the paper is always roughly centered (i.e. W/2, H/2 is always inside the blob), and rotated no more than 45 degrees (30 would be better)
have a suitable border around the sheet so that the corners never touch the edges of the FOV
are able (through analysis of local variance, or if you're lucky, check of background color or luminance) to say whether a point is inside or outside the blob
the inside/outside function never fails (except possibly in the close vicinity of a border)
then you could walk a line between a point on the border (surely outside) and the center (surely inside), even by bisection, and find a point - an areal - on the edge.
Two edge points give a line (two areals give a beam), two lines give an intersection (two beams give a larger areal) - and there's your corner. You should carry the detection uncertainty (areal radius) along in order to validate corners (another, less elegant approach is to roughly calculate where the corner is and pinpoint it with a spiral search or drunkard's walk).
This algorithm is amenable to parallelization and, as long as the hypotheses hold, should be really fast.
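To make the bisection step concrete, here is a minimal sketch; isInside stands for the (hypothetical) inside/outside test described above:

using System;

// 'outside' is a border point, 'inside' the center; the result is within
// 'tolerance' pixels of the true edge.
static (double X, double Y) FindEdgePoint(
    (double X, double Y) outside,
    (double X, double Y) inside,
    Func<double, double, bool> isInside,
    double tolerance = 0.5)
{
    // Halve the segment until its (Manhattan) length drops below the tolerance.
    while (Math.Abs(outside.X - inside.X) + Math.Abs(outside.Y - inside.Y) > tolerance)
    {
        var mid = (X: (outside.X + inside.X) / 2, Y: (outside.Y + inside.Y) / 2);
        if (isInside(mid.X, mid.Y)) inside = mid;
        else outside = mid;
    }
    return inside;
}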
All that said, it remains a hack - I agree with unwind: why reinvent the wheel? If you have memory or CPU constraints (embedded systems, etc.), I believe there ought to be "lite" ports of OpenCV and e-Vision for ARM and embedded platforms too.
(Sorry for my terminology - I'm monkey-translating from Italian. An "areal" is likely to correspond to your "blob"; a "beam" is the family of lines joining all pairs of points in two different areals, the intensity of each line being the product of its endpoints' distances from their areals' centers.)
I am trying to do my own blob detection, which will receive real-time video and try to detect a white sheet of paper.
Your first shot could be a simple flood-fill. That is, select a good threshold to binarize the image and apply the algorithm. The threshold can be fixed if you know the paper is always brighter than X and the background is always darker than this. Or this can be an adaptive threshold, for example Otsu's method. OpenCV offers this for free.
If you'd need to speed it up you could use a union-find data structure.
Finally, you'd need to come up with some heuristic to identify the corners (e.g. the four extreme values in the x/y directions); a sketch of that idea follows below.
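Here is a minimal sketch of that extreme-value heuristic, assuming an 8-bit grayscale frame in a row-major byte array and a made-up fixed threshold:

static (int X, int Y)[] FindCorners(byte[] gray, int width, int height, byte threshold)
{
    (int X, int Y) top = (0, 0), bottom = (0, 0), left = (0, 0), right = (0, 0);
    bool found = false;
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
        {
            if (gray[y * width + x] < threshold) continue;   // background pixel
            if (!found) { top = bottom = left = right = (x, y); found = true; continue; }
            if (y < top.Y)    top    = (x, y);   // topmost paper pixel
            if (y > bottom.Y) bottom = (x, y);   // bottommost
            if (x < left.X)   left   = (x, y);   // leftmost
            if (x > right.X)  right  = (x, y);   // rightmost
        }
    // For a rotated rectangle these four extremes approximate its corners.
    return found ? new[] { top, right, bottom, left } : new (int X, int Y)[0];
}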
Then I need [...] the coordinates of the corners [...]
Then you don't need blob detection, but corner detection or contour detection in the first place. OpenCV has some nice functionality for exactly this.
If you can't use it, I would suggest binarizing the image as above and using a Harris detector to find the corners of the object.
OpenCV's TBB support could also come in quite handy if you use it and have trouble meeting your real-time requirements.

Image processing: background subtraction

I have a sequence of images taken from a camera. The images consist of a hand and its surroundings. I need to remove everything except the hand.
I am new to image processing. Would anyone help me with the above question? I am comfortable using C and Matlab.
A really simple approach if you have a stationary background and a moving hand (and quite a few images!) is simply to take the average of the set of images away from each image. If nothing else, it's a gentle introduction to Matlab.
The name of the problem you are trying to solve is "image segmentation". The Wikipedia page on image segmentation is a good start.
If lighting consistency isn't a problem for you, I'd suggest starting with simple RGB thresholding and see how far that gets you before trying anything more complicated.
Have a look at OpenCV, a FOSS library for computer vision applications. Specifically, see the Video Surveillance module. For a walk through of background subtraction in MATLAB, see this EETimes article.
Can you specify what kind of images you have? Is the background moving or static? For a static background it is fairly straightforward: you simply subtract the incoming image from the background image, and you can use some morphological operations to make the result look better. It all depends on the quality of the images you have. If you have a moving background, I would suggest colour-based segmentation: convert the image to YCbCr, then threshold appropriately. I know there are some papers available on this (however, I don't have time to locate them); I suggest reading them first. Here is one link which might help you; read the skin segmentation part.
http://www.stanford.edu/class/ee368/Project_03/Project/reports/ee368group08.pdf
Background subtraction is simple to implement (estimate the background as the average of all frames, then subtract each frame from the background and threshold the resulting absolute difference; see the sketch below), but unfortunately it only works well if:
1. the camera has manual gain and exposure
2. lighting conditions do not change
3. the background is stationary
4. the background is visible for much longer than the foreground
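A minimal sketch of that scheme, assuming 8-bit grayscale frames held in equal-length byte arrays and a made-up threshold:

using System;
using System.Collections.Generic;

static byte[] ForegroundMask(IReadOnlyList<byte[]> frames, byte[] current, byte threshold)
{
    int n = current.Length;
    var background = new double[n];
    foreach (var f in frames)                       // estimate the background as the
        for (int i = 0; i < n; i++)                 // per-pixel average of all frames
            background[i] += f[i];
    for (int i = 0; i < n; i++)
        background[i] /= frames.Count;

    var mask = new byte[n];
    for (int i = 0; i < n; i++)                     // threshold the absolute difference
        mask[i] = Math.Abs(current[i] - background[i]) > threshold ? (byte)255 : (byte)0;
    return mask;
}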
Given your description, I assume these conditions do not hold, so what you can use, as already pointed out, is colour as a means of segmenting foreground from background. As it's a hand you are trying to isolate, your best bet is to learn the hand colour. OpenCV provides some means of doing this. If you want to do it yourself, you just take the colour of some of the hand pixels (you would need to specify this manually for at least one frame), convert them to hue (which encapsulates the colour in a brightness-independent way; skin colour has a very constant hue), and make a hue histogram. Compare this to the rest of the pixels and then decide if the hue is similar enough.
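If you do roll it yourself, the RGB-to-hue conversion is the standard HSV formula; here is a minimal sketch, with an arbitrary 36-bin histogram granularity:

using System;

static double Hue(byte r, byte g, byte b)
{
    double rf = r / 255.0, gf = g / 255.0, bf = b / 255.0;
    double max = Math.Max(rf, Math.Max(gf, bf));
    double min = Math.Min(rf, Math.Min(gf, bf));
    double delta = max - min;
    if (delta < 1e-9) return 0;                    // gray: hue is undefined
    double h = max == rf ? ((gf - bf) / delta) % 6
             : max == gf ? (bf - rf) / delta + 2
             :             (rf - gf) / delta + 4;
    return (h * 60 + 360) % 360;                   // degrees in [0, 360)
}

// Training: for each manually marked hand pixel (r, g, b):
//     handHist[(int)(Hue(r, g, b) / 10)]++;       // 36 bins of 10 degrees each
// Classifying: a pixel is "hand-like" if its hue falls in a well-populated bin.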

ImageProcessing in WPF (Fant BitmapScalingMode)

My application presents an image that can be scaled to a certain size. I'm using the WPF Image control with the Fant scaling mode (BitmapScalingMode.Fant).
However, there is no documentation of how this scaling algorithm works.
Can anyone reference me to the relevant link for this algorithm description?
Nir
Avery Lee of VirtualDub states that it's a box filter for downscaling and linear interpolation for upscaling. If I'm not mistaken, "box filter" here basically means that each output pixel is a "flat" average of several input pixels.
In practice, it's a lot more blurry for downscaling than GDI's cubic downscaling, so the theory about averaging sounds about right.
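To illustrate that "flat average" idea (this is just an illustration, not WPF's actual implementation), here is a minimal box-filter sketch for integer downscaling of a grayscale image:

static byte[] BoxDownscale(byte[] src, int width, int height, int factor)
{
    int ow = width / factor, oh = height / factor;
    var dst = new byte[ow * oh];
    for (int oy = 0; oy < oh; oy++)
        for (int ox = 0; ox < ow; ox++)
        {
            // Each output pixel is the flat average of a factor-by-factor block.
            int sum = 0;
            for (int dy = 0; dy < factor; dy++)
                for (int dx = 0; dx < factor; dx++)
                    sum += src[(oy * factor + dy) * width + (ox * factor + dx)];
            dst[oy * ow + ox] = (byte)(sum / (factor * factor));
        }
    return dst;
}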
I know what it is, but I couldn't find much on Google either :(
I think http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=4056711 is the appropriate paper; it's behind a paywall, though.
You don't need to understand the algorithm to use it. You should explicitly make the choice, each time you create a bitmap control that is scaled, whether you want high-quality or low-quality scaling.
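For reference, here is a minimal sketch of making that choice explicitly from code-behind; the XAML equivalent is setting RenderOptions.BitmapScalingMode="Fant" on the element. The image path is a placeholder.

// Uses System.Windows.Controls, System.Windows.Media and System.Windows.Media.Imaging.
var image = new Image { Source = new BitmapImage(new Uri("photo.jpg", UriKind.Relative)) };
RenderOptions.SetBitmapScalingMode(image, BitmapScalingMode.Fant);          // high quality
// RenderOptions.SetBitmapScalingMode(image, BitmapScalingMode.LowQuality); // faster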

From an image, how do I determine the shade?

For a database app I'm trying to determine the average shade of a section of a photo, against a colour scale.
Being a novice I'm finding this very difficult to explain so I've created a simple diagram showing exactly what I'm trying to achieve.
http://www.knockyoursocksoff.com/shade/
If anybody has the time to give me some ideas I'd be very grateful.
Best wishes,
Warren.
If you are using color photos, you should first convert the selected area from RGB (red, green, blue) to HSL/HSV (article).
HSL stands for "hue, saturation, lightness".[1] The number you are interested in is the lightness.
In the most general terms, the lightness refers to how you perceive the brightness of a colored surface. It's hard to use the red/green/blue components to say whether a patch of red is brighter/darker than, say, a patch of blue. Converting to HSL takes care of that problem.
Once you have done the conversion, you can simply average the lightness values of your selected area.
Quick note on lightness values: Technically, you can't simply average the lightness values because the perception of lightness is not linear (article). But, unless you are writing a deeply scientific application, simply averaging the lightness will give you an "accurate enough" value.
[1] In Adobe Photoshop, they call it HSB (hue, saturation, brightness).
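To make the averaging described above concrete, here is a minimal sketch, with a hypothetical getPixel-style accessor standing in for whatever imaging API you use:

using System;

static double AverageLightness(Func<int, int, (byte R, byte G, byte B)> getPixel,
                               int startX, int startY, int endX, int endY)
{
    double total = 0;
    int count = 0;
    for (int y = startY; y <= endY; y++)
        for (int x = startX; x <= endX; x++)
        {
            var (r, g, b) = getPixel(x, y);
            double max = Math.Max(r, Math.Max(g, b)) / 255.0;
            double min = Math.Min(r, Math.Min(g, b)) / 255.0;
            total += (max + min) / 2;    // HSL lightness of this pixel
            count++;
        }
    return total / count;                // 0.0 = black, 1.0 = white
}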
I think I would start by just averaging the pixel values:
long total = 0;
for (int x = startX; x <= endX; x++)
    for (int y = startY; y <= endY; y++)
        total += getPixel(x, y);            // e.g. an 8-bit grayscale value
double shade = (double)total / (xlen * ylen);
It's going to be more complicated if you're doing it in color.
