CN1: Gradient with alpha channel - codenameone

I would like to have a gradient which goes from black to transparent (not white). How can I achieve this?
From my attempt below, I assume the gradient style color's alpha value is not considered:
gui_Footer.allStyles.apply {
    backgroundType = Style.BACKGROUND_GRADIENT_LINEAR_VERTICAL
    border = RoundRectBorder.create().topOnlyMode(true).cornerRadius(1f)
    backgroundGradientEndColor = ColorUtil.BLACK
    backgroundGradientStartColor = ColorUtil.argb(0, 255, 255, 255)
}

Gradients in Codename One ignore the alpha byte. While we could technically add support for alpha gradients, it's not something that's planned at this time. You could generate such an image yourself by manipulating the pixel data, but it would be more efficient to just generate an image of the gradient (including its alpha channel) once and draw it scaled.
Notice that this is generally the most efficient approach, since the GPU draws textures very efficiently. If an image's dimensions are powers of 2 (e.g. 256x128 pixels) it fits perfectly into a texture and is drawn very fast, much faster than our built-in gradients.
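For illustration, here is a minimal sketch of the pixel loop (written in plain C; the math is the same in any language, and the helper name is made up). In Codename One you would fill an int[] the same way and wrap it with Image.createImage(rgb, width, height), then draw that image scaled:
#include <stdint.h>

/* Fill a width x height ARGB buffer with a vertical gradient running from
   fully transparent (top) to opaque black (bottom). */
void fill_black_to_transparent(uint32_t* rgb, int width, int height)
{
    uint32_t denom = (height > 1) ? (uint32_t)(height - 1) : 1;
    for (int y = 0; y < height; y++) {
        uint32_t alpha = (255u * (uint32_t)y) / denom; /* alpha ramps 0..255 */
        uint32_t pixel = alpha << 24;  /* black with varying alpha: 0xAA000000 */
        for (int x = 0; x < width; x++)
            rgb[y * width + x] = pixel;
    }
}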

Related

How do we detect edges of an image with the same background?

I am trying to find the contours using
cvFindContours( gray, storage, &contour, sizeof(CvContour), CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE, cvPoint(0, 0));
and Canny, but I am not able to detect them.
Is there any way I can detect them? (All the processing is done in C.)
First, equalize the image for better contrast. This will amplify noise and other textures as well, but the object will become quite visible.
Next, compute the gradient; this suppresses most gradual color changes, noise, and some texture.
Then you can experiment with thresholding; it will give good edges and regions around both the shadow and the object.
To remove noise and texture, the whole image can be median-blurred with a large kernel and then thresholded to get a mask of the interesting region.
You can also try multiple iterations of the above method. If a lot of accuracy is not required, try blurring out the texture.
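For reference, a minimal sketch of that pipeline in the same legacy C API as the question (the function name is made up, and kernel sizes and thresholds are illustrative; they will need tuning for your images):
#include <opencv/cv.h>

/* Returns an 8-bit mask of the interesting region, following the steps above. */
IplImage* interesting_region_mask(IplImage* gray)   /* 8-bit, single channel */
{
    CvSize size = cvGetSize(gray);
    IplImage* eq   = cvCreateImage(size, IPL_DEPTH_8U, 1);
    IplImage* dx   = cvCreateImage(size, IPL_DEPTH_16S, 1);
    IplImage* dy   = cvCreateImage(size, IPL_DEPTH_16S, 1);
    IplImage* mag  = cvCreateImage(size, IPL_DEPTH_8U, 1);
    IplImage* tmp  = cvCreateImage(size, IPL_DEPTH_8U, 1);
    IplImage* mask = cvCreateImage(size, IPL_DEPTH_8U, 1);

    cvEqualizeHist(gray, eq);                    /* equalize for contrast      */
    cvSobel(eq, dx, 1, 0, 3);                    /* gradient in x and y        */
    cvSobel(eq, dy, 0, 1, 3);
    cvConvertScaleAbs(dx, mag, 1, 0);            /* cheap magnitude: |dx|+|dy| */
    cvConvertScaleAbs(dy, tmp, 1, 0);
    cvAdd(mag, tmp, mag, NULL);
    cvSmooth(mag, tmp, CV_MEDIAN, 15, 0, 0, 0);  /* large median blur          */
    cvThreshold(tmp, mask, 0, 255,               /* Otsu picks the threshold   */
                CV_THRESH_BINARY | CV_THRESH_OTSU);

    cvReleaseImage(&eq);  cvReleaseImage(&dx);  cvReleaseImage(&dy);
    cvReleaseImage(&mag); cvReleaseImage(&tmp);
    return mask;
}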

Detect Eye using HSV value in OpenCV

I want to detect an eye. I have some code that can detect a blue-colored object; if I modified it (how?), would it be possible to detect an eye? Since each color has its own specific range of values, can I detect an eye with this method by specifying the eye color's HSV range?
The code below detects a BLUE-colored object. Please tell me where to make changes in my code so that I could detect an EYE using OpenCV.
IplImage* GetThresholdedImage(IplImage* img)
{
    // Convert the image into an HSV image
    IplImage* imgHSV = cvCreateImage(cvGetSize(img), 8, 3);
    cvCvtColor(img, imgHSV, CV_BGR2HSV);
    IplImage* imgThreshed = cvCreateImage(cvGetSize(img), 8, 1);
    // For detecting BLUE color I have this HSV value,
    cvInRangeS(imgHSV, cvScalar(112, 100, 100), cvScalar(124, 255, 255), imgThreshed); // this will not recognize the yellow color
    cvReleaseImage(&imgHSV);
    return imgThreshed;
}
Eye detection is much easier with a Haar classifier.
link here
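For example, here's a sketch against the OpenCV 2.x C API (the function name is made up, the cascade file is whichever haarcascade_eye.xml ships with your OpenCV install, and all detector parameters are illustrative):
#include <opencv/cv.h>

void detect_eyes(IplImage* img)   /* BGR input frame */
{
    CvHaarClassifierCascade* cascade =
        (CvHaarClassifierCascade*)cvLoad("haarcascade_eye.xml", NULL, NULL, NULL);
    CvMemStorage* storage = cvCreateMemStorage(0);

    CvSeq* eyes = cvHaarDetectObjects(img, cascade, storage,
                                      1.1,            /* scale step per pyramid level */
                                      3,              /* min neighbouring detections  */
                                      CV_HAAR_DO_CANNY_PRUNING,
                                      cvSize(20, 20), /* smallest eye to look for     */
                                      cvSize(0, 0));  /* no upper size limit          */

    /* Draw a green box around each detected eye */
    for (int i = 0; i < (eyes ? eyes->total : 0); i++) {
        CvRect* r = (CvRect*)cvGetSeqElem(eyes, i);
        cvRectangle(img, cvPoint(r->x, r->y),
                    cvPoint(r->x + r->width, r->y + r->height),
                    CV_RGB(0, 255, 0), 2, 8, 0);
    }
    cvReleaseMemStorage(&storage);
}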
Such a simple method may work for extracting a blue object with thresholding, but even if it could be adapted, which colour would you use: black? blue? green? Everyone has different eye colours. I don't see a non-hacky way of making blob extraction based on an HSV threshold value work for you. This method works well on large blocks of the same colour, e.g. removing a blue background.
Look at shape instead: everyone has differently coloured eyes, but the shape is roughly circular/elliptical. There are variants of the Hough Transform for detecting circles.
...the Hough transform has been extended to identifying positions of
arbitrary shapes, most commonly circles or ellipses.
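If you go the shape route, here's a sketch of circle detection in the legacy C API (the function name is made up and all parameters are illustrative; they need tuning for your image scale):
#include <opencv/cv.h>

void detect_circles(IplImage* img)   /* BGR input frame */
{
    IplImage* gray = cvCreateImage(cvGetSize(img), 8, 1);
    cvCvtColor(img, gray, CV_BGR2GRAY);
    cvSmooth(gray, gray, CV_GAUSSIAN, 9, 9, 0, 0);     /* denoise before Hough */

    CvMemStorage* storage = cvCreateMemStorage(0);
    CvSeq* circles = cvHoughCircles(gray, storage, CV_HOUGH_GRADIENT,
                                    2,                 /* accumulator resolution  */
                                    gray->height / 4,  /* min distance of centres */
                                    200, 100,          /* Canny / accumulator thresholds */
                                    10, 80);           /* min / max radius, pixels */

    /* Draw each detected circle in green */
    for (int i = 0; i < circles->total; i++) {
        float* c = (float*)cvGetSeqElem(circles, i);   /* c[0],c[1] centre, c[2] radius */
        cvCircle(img, cvPoint(cvRound(c[0]), cvRound(c[1])),
                 cvRound(c[2]), CV_RGB(0, 255, 0), 2, 8, 0);
    }
    cvReleaseMemStorage(&storage);
    cvReleaseImage(&gray);
}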

How can I create beveled corners on a border in WPF?

I'm trying to do simple drawing in a subclass of a decorator, similar to what they're doing here...
How can I draw a border with squared corners in wpf?
...except with a single-pixel border thickness instead of the two pixels they're using there. However, no matter what I do, WPF decides it needs to do its 'smoothing' (e.g. instead of rendering a single-pixel line, it renders a two-pixel line with each 'half' at about 50% opacity). In other words, it's trying to anti-alias the drawing. I do not want anti-aliased drawing. I want that if I draw a line from 0,0 to 10,0, I get a single-pixel-wide line that's exactly 10 pixels long, without smoothing.
Now I know WPF does that, but I thought that's specifically why they introduced SnapsToDevicePixels and UseLayoutRounding, both of which I've set to 'True' in the XAML. I'm also making sure that the numbers I'm using are actual integers and not fractional numbers, but still I'm not getting the nice, crisp, one-pixel-wide lines I'm hoping for.
Help!!!
Mark
Aaaaah... got it! WPF considers a line from 0,0 to 10,0 to lie literally on that logical line, not on the row of pixels as it does in GDI. To better explain, think of WPF's coordinates as the lines drawn on a piece of graph paper, whereas the pixels are the squares those lines make up (assuming 96 DPI, that is; you'd need to adjust accordingly if yours differs).
So... to get the drawing to refer to pixel locations, we need to shift the drawing from the lines themselves to the centers of the pixels (the squares on the graph paper), which means shifting all drawing by 0.5, 0.5 (again, assuming a DPI of 96).
So if it is a 96 DPI setting, simply adding this in the OnRender method worked like a charm...
drawingContext.PushTransform(new TranslateTransform(.5, .5));
Hope this helps others!
M
Have a look at this article: Draw lines exactly on physical device pixels
UPD
Some valuable quotes from the link:
The reason why the lines appear blurry is that our points are the center points of the lines, not the edges. With a pen width of 1, the edges are drawn exactly between two pixels.
A first approach is to round each point to an integer value (snap to a logical pixel) and give it an offset of half the pen width. This ensures that the edges of the line align with logical pixels.
Fortunately the developers of the milcore (MIL stands for media integration layer, which is WPF's rendering engine) give us a way to guide the rendering engine to align a logical coordinate exactly on physical device pixels. To achieve this, we need to create a GuidelineSet:
protected override void OnRender(DrawingContext drawingContext)
{
    Pen pen = new Pen(Brushes.Black, 1);
    Rect rect = new Rect(20, 20, 50, 60);
    double halfPenWidth = pen.Thickness / 2;

    // Create a guidelines set
    GuidelineSet guidelines = new GuidelineSet();
    guidelines.GuidelinesX.Add(rect.Left + halfPenWidth);
    guidelines.GuidelinesX.Add(rect.Right + halfPenWidth);
    guidelines.GuidelinesY.Add(rect.Top + halfPenWidth);
    guidelines.GuidelinesY.Add(rect.Bottom + halfPenWidth);

    drawingContext.PushGuidelineSet(guidelines);
    drawingContext.DrawRectangle(null, pen, rect);
    drawingContext.Pop();
}

Generate Texture in Silverlight imitate leather

I would like to display textures in different colors, pretty much like the texture below.
How do I do this in Silverlight?
Thanks!
(texture image: http://a.imageshack.us/img535/5255/leathertexture.png)
Turn your texture into an alpha texture. The exact steps will depend on your image-manipulation software. After that, simply place your texture on top of a colored rectangle.
You could write a pixel shader for an even better result, but that would be overkill in your case.

opengl invert framebuffer pixels

I was wondering what the best way to invert the color pixels in the frame buffer is. I know it's possible to do with glReadPixels() and glDrawPixels(), but the performance hit of those calls is pretty big.
Basically, what I'm trying to do is have an inverted color cross-hair which is always visible no matter what's behind it. For instance, I'd have an arbitrary alpha mask bitmap or texture, have it render without depth test after the scene is drawn, and all the frame buffer pixels under the masked (full alpha) pixels of the texture would be inverted.
I've been trying to do this with a texture, but I'm getting some strange results, and I still find all the blending options confusing.
Give something like this a try:
glEnable(GL_COLOR_LOGIC_OP);
glLogicOp(GL_XOR);
// render geometry
glDisable(GL_COLOR_LOGIC_OP);
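For the crosshair use-case specifically, here's a sketch in fixed-function GL (cx/cy are hypothetical crosshair coordinates, and a 2D ortho projection matching the window is assumed). Drawing in pure white under GL_XOR flips every color bit underneath, i.e. inverts whatever is behind the crosshair:
glDisable(GL_DEPTH_TEST);            /* draw on top of the finished scene */
glEnable(GL_COLOR_LOGIC_OP);
glLogicOp(GL_XOR);
glColor3f(1.0f, 1.0f, 1.0f);         /* all bits set, so XOR == invert */
glBegin(GL_LINES);                   /* simple cross centred at (cx, cy) */
glVertex2f(cx - 10.0f, cy); glVertex2f(cx + 10.0f, cy);
glVertex2f(cx, cy - 10.0f); glVertex2f(cx, cy + 10.0f);
glEnd();
glDisable(GL_COLOR_LOGIC_OP);
glEnable(GL_DEPTH_TEST);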
how about:
glEnable(GL_BLEND);
glBlendFunc(GL_ONE_MINUS_DST_COLOR, GL_ZERO);
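Note that for this blend to produce an inversion, the mask must be drawn in pure white: the blend then computes (1 - dst) * src + 0 * dst = 1 - dst, i.e. the inverse of the framebuffer color.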
