Linear gradient x proportion - Codename One

I am using a horizontal linear gradient. I want more of the starting gradient color than the ending one; at present both colors get an equal share, i.e. they meet in the center. How can I get more of one color than the other? I have also learned that setBackgroundGradientRelativeX and related methods only apply to radial gradients, not linear ones. Moreover, I have lots of simple gradients in the designs, so I don't want to use images everywhere; that would be troublesome.
categoryTitle.setUIID("partyCategoryTitle");
categoryTitle.getAllStyles().setBackgroundGradientStartColor(0x73a0ff);
categoryTitle.getAllStyles().setBgImage(null);
categoryTitle.getAllStyles().setBackgroundGradientEndColor(0xffffff);
categoryTitle.getAllStyles().setBackgroundGradientRelativeX(1);
categoryTitle.getAllStyles().setBackgroundGradientRelativeY(10);
categoryTitle.getAllStyles().setBackgroundType(Style.BACKGROUND_GRADIENT_LINEAR_HORIZONTAL);

As gradients currently don't really work on Android, I'd avoid them.
I can't stress enough how badly gradients perform in the current implementation when they do work. Adding features to something that isn't working properly is not a priority at this time.
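If you still need an uneven gradient without falling back to images, one possible workaround (a rough sketch, not taken from the answer above, assuming Codename One's Painter interface, Style.setBgPainter and Graphics.fillLinearGradient; the 60/40 split is an arbitrary illustrative ratio) is a custom background painter that keeps the start color solid over part of the width and only fades over the remainder:

import com.codename1.geom.Rectangle;
import com.codename1.ui.Graphics;
import com.codename1.ui.Painter;

// Keep the start color solid over roughly 60% of the width,
// then fade to the end color over the remaining 40%.
Painter biasedGradient = new Painter() {
    public void paint(Graphics g, Rectangle rect) {
        int x = rect.getX();
        int y = rect.getY();
        int w = rect.getSize().getWidth();
        int h = rect.getSize().getHeight();
        int solid = (int) (w * 0.6f);
        g.setColor(0x73a0ff);
        g.fillRect(x, y, solid, h);
        g.fillLinearGradient(0x73a0ff, 0xffffff, x + solid, y, w - solid, h, true);
    }
};
categoryTitle.getAllStyles().setBgPainter(biasedGradient);

Whether this performs any better than the built-in gradient background on a given device is something you would have to measure.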

Related

iOS 6 AutoLayout Scale and Translate Animation

My aim is to have 3 images shrink, grow, and move along a horizontal axis depending on selection. Using Auto Layout seems to make the images jump about as they try to fulfil the Top space to superview / Bottom space to superview constraints.
So to combat this I have put all the images inside their own UIView. The UIView is set to the maximum size the images can grow to, it is centred on the horizontal axis. So now all the images must do is stay centred inside their corresponding UIView. This has fixed my problem as the UIViews perform the horizontal translation, while the images shrink/grow inside while remaining centred. My question is - is this the correct way to do this? It seems very long and like I am perhaps misusing the ability of Auto Layout. I have to perform similar tasks with more images and so any advice is welcome! Thanks.
I've just written a little essay on this topic here:
How do I adjust the anchor point of a CALayer, when Auto Layout is being used?
Basically autolayout does not play at all well with any kind of view transform. The easiest solution is to take your view out of autolayout's control altogether, but alternatively you can give it only constraints that won't fight back against the particular kind of transform you intend to apply. That second solution sounds like just the sort of thing you're doing.

Display percentage value as a fill in a custom shape

I'm looking at some new options for displaying a percentage value as a fill in a custom shape. Consider the effect to be similar to a "progress thermometer" in a traditional dashboard UI sense.
Considerations
Goal - a graphic element showing a percentage value for a custom report.
Format - Either a full graphic (or infographic) itself, or part of a PDF via Photoshop/InDesign or even iBooks (as an excuse to use it).
Usage - I'd like the process to be programmatic, for re-use. It needs to be accurate, and I'd like the solution to be somewhat object oriented to apply to other datasets and graphical containers. This rules out hand-drawn charting.
Source data - currently a pivot table in Excel, but I can work with any other host as required.
Shape - a custom vector shape that will originate from Illustrator/Inkscape; the final format is whatever best fits the resolution and rendering of the report. I would also be interested in any other generative shape ideas (such as Java/JavaScript).
Fill - I'd like to be able to represent the fill as both an actual percentage of total area (true up), and as a percentage of the vertical scale. I'd imagine this flexibility would also help reuse of the method as a fill value against selected object variables (height, area, whatever).
I know I'm being slightly vague in the programming languages or hosts side of things, but this gives me an opportunity to break out of the usual analytic toolchain and scope out some innovative or new solutions. I'm specifically interested in open source solutions, but I'm very keen to review other current methods you might suggest.
This might be a little open ended for you, but d3.js is very powerful. There might be some useful templates on the site, or you can build your own from the library.
If you limit yourself to shapes where the percentage can be easily converted into a new shape by varying one of the dimensions, then the display part can be covered by creating a second shape based on the first one, and filling in 100% of the second shape.
This obviously works best with simple shapes like squares, rectangles, circles, etc, where it is simple to convert "50% of the area" or "75% of the height" into manipulation of vector nodes.
However, things get significantly more difficult if you want to support genuinely arbitrary custom shapes. One way to handle that would be to break a complex "progress bar" up into "progress pieces" (e.g. a thermometer bulb that represents 10% of total progress, then a simple bar for the remaining 90%).
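For the "percentage of the vertical scale" variant, here is a rough, purely illustrative Java2D sketch (desktop Java rather than d3; the class name, the stand-in ellipse shape and the output file are all made up): clip to the custom outline and fill a rectangle whose height matches the percentage.

import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.Shape;
import java.awt.geom.Ellipse2D;
import java.awt.geom.Rectangle2D;
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class PercentFill {
    // Fill 'pct' (0..1) of the shape's height, measured from the bottom up.
    static void fillByHeight(Graphics2D g, Shape outline, double pct, Color fill) {
        Rectangle2D b = outline.getBounds2D();
        double fillHeight = b.getHeight() * pct;
        Shape oldClip = g.getClip();
        g.setClip(outline); // restrict painting to the custom shape
        g.setColor(fill);
        g.fill(new Rectangle2D.Double(b.getX(), b.getMaxY() - fillHeight, b.getWidth(), fillHeight));
        g.setClip(oldClip);
        g.setColor(Color.BLACK);
        g.draw(outline); // outline drawn on top
    }

    public static void main(String[] args) throws Exception {
        BufferedImage img = new BufferedImage(200, 300, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = img.createGraphics();
        Shape bulb = new Ellipse2D.Double(50, 50, 100, 200); // stand-in for an imported vector outline
        fillByHeight(g, bulb, 0.75, Color.RED); // 75% full
        g.dispose();
        ImageIO.write(img, "png", new File("thermometer.png"));
    }
}

A true-area fill would instead require solving numerically for the cut height at which the clipped region's area equals the target percentage of the shape's total area.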
As has been mentioned, D3 seems like it would meet your needs - here are some simple examples of what I think you are asking:
Changing the fill color of a distinct shape: http://jsfiddle.net/jsl6906/YCMb8/
Changing the 'fill amount' of a simple shape: http://jsfiddle.net/jsl6906/YCMb8/1/

Add convex effect to image programmatically

I am looking for algorithms to add a convex mirror effect and a concave mirror effect to an image. I also want to know how to do this efficiently: by applying the algorithm to the image data, or by overlaying a transparent image that contains the effect. But I don't think the second choice is applicable in this case.
If you are doing it manually instead of using hardware primitives, then the Bresenham interpolation algorithm (usually used for line drawing) is the way to go: error propagation is far more efficient than other, more complex, methods.
What Bresenham does is just interpolation. Don't miss the opportunity to use its efficient design elsewhere (slope calculation for line-drawing is just one of the many applications of interpolation: you can interpolate along another dimension: 2D, 3D, transparency, reflection, colors, etc.).
25 years ago, I remember having used it to resize bitmaps and even do texture mapping in a real-time 3D engine! That was at a time when graphics-accelerated video boards cost a fortune...
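To make the error-propagation idea concrete, here is a minimal 1D sketch in Java (the resample name and the nearest-neighbour behaviour are just illustrative; the same stepping pattern generalises to 2D remapping):

// Bresenham-style 1D resampling: walk the destination, stepping through the
// source with integer error accumulation instead of per-pixel multiply/divide.
static int[] resample(int[] src, int dstLen) {
    int[] dst = new int[dstLen];
    int srcLen = src.length;
    int srcIndex = 0, error = 0;
    for (int i = 0; i < dstLen; i++) {
        dst[i] = src[srcIndex];
        error += srcLen;
        while (error >= dstLen) { // advance the source pointer once enough error has accumulated
            error -= dstLen;
            srcIndex++;
        }
    }
    return dst;
}

The inner loop only adds and compares integers; there is no per-pixel multiply or divide, which is what makes the Bresenham approach cheap.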
The CImg library has a fisheye sample in examples\CImg_demo.cpp. The core algorithm seems very simple (and fast, as is generally the case with this library). I think it's an approximation of the real optical effect, but it could be modified to handle the convex mirroring. I don't know if it could be extended to handle 'negative' curvature.
You can use a pre-calculated sin() table and interpolate values to match the size of your bitmap. The inverse effect is achieved by either using an offset or a larger table.
Reminds me of the (great times of the) DOS demos in the 80s...
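Here is a rough sketch of the whole remapping idea in Java (not the CImg code; the mirrorEffect name, the sin/asin curves standing in for a precalculated table, and the nearest-neighbour sampling are all illustrative choices): for every destination pixel, measure its distance from the centre, push that normalised radius through a curve, and sample the source at the remapped radius. One curve magnifies the centre and compresses the edges (a bulge), the other does the opposite (a pinch); pick whichever matches the mirror effect you are after.

import java.awt.image.BufferedImage;

// Inverse mapping: for each output pixel, work out where to sample the input.
static BufferedImage mirrorEffect(BufferedImage src, boolean bulge) {
    int w = src.getWidth(), h = src.getHeight();
    BufferedImage dst = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
    double cx = w / 2.0, cy = h / 2.0;
    double maxR = Math.min(cx, cy);
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            double dx = x - cx, dy = y - cy;
            double r = Math.sqrt(dx * dx + dy * dy) / maxR; // normalised radius
            int sx = x, sy = y;                             // outside the lens: copy through
            if (r > 0 && r < 1) {
                double rr = bulge ? Math.asin(r) / (Math.PI / 2) // centre magnified, edges compressed
                                  : Math.sin(r * Math.PI / 2);   // centre minified, edges stretched
                sx = (int) (cx + dx * (rr / r));
                sy = (int) (cy + dy * (rr / r));
            }
            if (sx >= 0 && sx < w && sy >= 0 && sy < h) {
                dst.setRGB(x, y, src.getRGB(sx, sy));
            }
        }
    }
    return dst;
}

Replacing the per-pixel Math.sin/Math.asin with a precomputed table indexed by the quantised radius, plus interpolation when sampling, gives the faster and smoother version the answers above describe.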

Image processing: background subtraction

I have a sequence of images taken from a camera. The images consist of a hand and its surroundings. I need to remove everything except the hand.
I am new to image processing. Could anyone help me with the above question? I am comfortable using C and Matlab.
A really simple approach, if you have a stationary background and a moving hand (and quite a few images!), is simply to subtract the average of the set of images from each image. If nothing else, it's a gentle introduction to Matlab.
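As a concrete illustration of that averaging idea, here is a minimal sketch in plain Java (the asker mentioned C and Matlab; this is just to show the steps, assuming 8-bit grayscale frames of equal size held as int[][] arrays and a hand-picked threshold):

// Average N grayscale frames to estimate the background, then threshold the
// absolute difference of one frame against it to get a foreground mask.
static boolean[][] foregroundMask(int[][][] frames, int frameIndex, int threshold) {
    int n = frames.length, h = frames[0].length, w = frames[0][0].length;
    double[][] background = new double[h][w];
    for (int[][] f : frames)
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                background[y][x] += f[y][x] / (double) n;

    boolean[][] mask = new boolean[h][w];
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            mask[y][x] = Math.abs(frames[frameIndex][y][x] - background[y][x]) > threshold;
    return mask;
}

For colour frames you would do this per channel or on a grayscale conversion, and the mask will still contain noise and shadows; the morphological clean-up mentioned below helps with that.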
The name of the problem you are trying to solve is "Image Segmentation". The Wikipedia page on image segmentation is a good start.
If lighting consistency isn't a problem for you, I'd suggest starting with simple RGB thresholding and see how far that gets you before trying anything more complicated.
Have a look at OpenCV, a FOSS library for computer vision applications. Specifically, see the Video Surveillance module. For a walk through of background subtraction in MATLAB, see this EETimes article.
Can you specify what kind of images you have? Is the background moving or static? For a static background it is fairly straightforward: you simply need to subtract the incoming image from the background image. You can use some morphological operations to make it look better. It all depends on the quality of the images that you have. If you have a moving background, I would suggest you go for color-based segmentation: convert the image to YCbCr, then threshold appropriately. I know there are some papers available on it (however, I don't have time to locate them); I suggest reading them first. Here is one link which might help you. Read the skin segmentation part.
http://www.stanford.edu/class/ee368/Project_03/Project/reports/ee368group08.pdf
Background subtraction is simple to implement (estimate the background as the average of all frames, then subtract each frame from the background and threshold the resulting absolute difference), but unfortunately it only works well if: 1. the camera has manual gain and exposure; 2. lighting conditions do not change; 3. the background is stationary; 4. the background is visible for much longer than the foreground.
Given your description I assume these are not the case, so what you can use, as already pointed out, is colour as a means of segmenting foreground from background. As it's a hand you are trying to isolate, your best bet is to learn the hand colour. OpenCV provides some means of doing this. If you want to do it yourself, you just take the colour of some of the hand pixels (you would need to specify this manually for at least one frame), convert them to hue (which encapsulates the colour in a brightness-independent way; skin colour has a very constant hue) and then build a hue histogram. Compare this to the rest of the pixels and then decide if the hue is similar enough.
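A minimal sketch of that hue-histogram idea in Java (the skinMask name, the 36 bins and the 5% acceptance fraction are all assumptions made for illustration; the sample hues would come from manually picked hand pixels in one frame):

import java.awt.Color;
import java.awt.image.BufferedImage;

// Learn hue bins from hand-labelled sample pixels, then mark pixels in later
// frames whose hue falls in a well-populated bin.
static boolean[][] skinMask(BufferedImage frame, float[] sampleHues) {
    int bins = 36;
    int[] hist = new int[bins];
    for (float hue : sampleHues)
        hist[(int) (hue * bins) % bins]++;

    int accept = Math.max(1, sampleHues.length / 20); // bin must hold at least ~5% of samples
    int w = frame.getWidth(), h = frame.getHeight();
    boolean[][] mask = new boolean[h][w];
    float[] hsb = new float[3];
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int rgb = frame.getRGB(x, y);
            Color.RGBtoHSB((rgb >> 16) & 0xff, (rgb >> 8) & 0xff, rgb & 0xff, hsb);
            mask[y][x] = hist[(int) (hsb[0] * bins) % bins] >= accept;
        }
    }
    return mask;
}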

ImageProcessing in WPF (Fant BitmapScalingMode)

My application presents an image that can be scaled to a certain size. I'm using the Image WPF control with the scaling method of FANT.
However, there is no documentation how this scaling algorithm works.
Can anyone reference me to the relevant link for this algorithm description?
Nir
Avery Lee of VirtualDub states that it's a box filter for downscaling and linear for upscaling. If I'm not mistaken, "box filter" here means basically that each output pixel is a "flat" average of several input pixels.
In practice, it's a lot more blurry for downscaling than GDI's cubic downscaling, so the theory about averaging sounds about right.
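For reference, "a flat average of several input pixels" means something like the following simplified Java sketch of box-filter downscaling by an integer factor (the boxDownscale name and the integer-factor restriction are mine; this is not WPF's actual implementation, which handles arbitrary scale factors):

import java.awt.image.BufferedImage;

// Box-filter downscale: each output pixel is the plain (unweighted) average
// of a factor-by-factor block of input pixels.
static BufferedImage boxDownscale(BufferedImage src, int factor) {
    int w = src.getWidth() / factor, h = src.getHeight() / factor;
    BufferedImage dst = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int r = 0, g = 0, b = 0;
            for (int dy = 0; dy < factor; dy++) {
                for (int dx = 0; dx < factor; dx++) {
                    int p = src.getRGB(x * factor + dx, y * factor + dy);
                    r += (p >> 16) & 0xff;
                    g += (p >> 8) & 0xff;
                    b += p & 0xff;
                }
            }
            int n = factor * factor;
            dst.setRGB(x, y, ((r / n) << 16) | ((g / n) << 8) | (b / n));
        }
    }
    return dst;
}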
I know what it is, but I couldn't find much on Google either :(
http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=4056711 is the appropriate paper I think; behind a pay-wall.
You don't need to understand the algorithm to use it. You should explicitly choose, each time you create a scaled bitmap control, whether you want high-quality or low-quality scaling.
