I'm working with the MediaPipe Face Mesh landmarks model. What I want is to find the 468 landmarks for a face and then filter out any faces with occluded landmarks. The model defines 'visibility' and 'presence' attributes on its landmarks, but when I print out these values for all the landmarks, they all appear to be 0. For a frontal face they should at least have a value greater than 0.5.
I'm writing a program that changes all the image pixels to grayscale except for the red ones. At first I thought it would be easy, but I'm having trouble finding the best way to determine whether a pixel is red or not.
The first method I tried was a formula: Green < Red/2 && Blue < Red/1.5
Results: [images: Michael Jordan, Goldhill]
Michael Jordan's image shows some non-red pixels that pass the formula, like #7F3222 and #B15432. So I tried a different method, hue >= 345 || hue <= 9, trying to limit matches to only the red part of the color wheel.
Results: [images: Michael Jordan 2, Goldhill 2]
Michael Jordan's image now has fewer non-red pixels, and Goldhill's image has more red pixels than before, but it is still not what I want.
Are my methods incorrect, or are they just missing some adjustments? If they're incorrect, how can I solve this problem?
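(For reference, the two tests the question describes, written as a quick TypeScript sketch; the function names are mine.)

    // The two tests from the question as predicates (names are mine).

    // First method: plain channel-ratio checks on 8-bit RGB values.
    function isRedByRatio(r: number, g: number, b: number): boolean {
      return g < r / 2 && b < r / 1.5;
    }

    // Second method: limit the hue (in degrees) to the red part of the color wheel.
    function isRedByHue(hue: number): boolean {
      return hue >= 345 || hue <= 9;
    }

    // Example: #7F3222 (r = 127, g = 50, b = 34) passes the ratio test,
    // but its hue is about 10 degrees, so the hue test rejects it.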
Your question "How to identify 'real' red pixels" raises the question of what a red pixel actually is, especially if it has to be 'real'.
The RGB (red, green, blue) color model is not well suited to answering that question; use the HSV (hue, saturation, value) model instead.
Hue defines the color in degrees (0 - 360 degrees)
Saturation defines the intensity of the color (0 - 100 %)
Value or Brightness defines the luminosity (0 - 100 %)
Steps:
convert RGB to HSV
if the H value is not within the red range (e.g. +/- 30 degrees; you'll have to define the threshold range of what you consider to be red, 'real' red being 0 degrees):
set S to 0 (zero); by removing the saturation of the color we get a gray shade
leave the brightness (V) as it is (or play around with it and see how it affects the results)
convert HSV to RGB
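A minimal TypeScript sketch of these steps (the +/- 30 degree window is just an example threshold to tune; the conversions are the standard formulas):

    type RGB = { r: number; g: number; b: number }; // channels in [0, 255]
    type HSV = { h: number; s: number; v: number }; // h in [0, 360), s and v in [0, 1]

    function rgbToHsv({ r, g, b }: RGB): HSV {
      const rn = r / 255, gn = g / 255, bn = b / 255;
      const max = Math.max(rn, gn, bn), min = Math.min(rn, gn, bn), d = max - min;
      let h = 0;
      if (d > 0) {
        if (max === rn) h = 60 * (((gn - bn) / d + 6) % 6);
        else if (max === gn) h = 60 * ((bn - rn) / d + 2);
        else h = 60 * ((rn - gn) / d + 4);
      }
      return { h, s: max === 0 ? 0 : d / max, v: max };
    }

    function hsvToRgb({ h, s, v }: HSV): RGB {
      const f = (n: number) => {
        const k = (n + h / 60) % 6;
        return v - v * s * Math.max(0, Math.min(k, 4 - k, 1));
      };
      return { r: Math.round(f(5) * 255), g: Math.round(f(3) * 255), b: Math.round(f(1) * 255) };
    }

    function keepOnlyRed(pixel: RGB): RGB {
      const hsv = rgbToHsv(pixel);
      // "Red" here means hue within +/- 30 degrees of 0; tune this to taste.
      const isRed = hsv.h <= 30 || hsv.h >= 330;
      if (!isRed) hsv.s = 0;  // desaturating yields a gray shade
      return hsvToRgb(hsv);   // brightness (v) is left unchanged
    }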
Convert from RGB to HSV and vice versa:
RGB to HSV
HSV to RGB
More info on HSV:
https://en.wikipedia.org/wiki/HSL_and_HSV
"All cats are gray in the dark"
Implement a dynamic color range: adjust the 'red' range based on the brightness and/or saturation of the current pixel. Put a weight (in %, for how much each affects the range) on the saturation and brightness values to determine your range ... play around to achieve the best results.
You used an RGB method and an HSV method, which is good, and both are OK.
The problem is defining red. Hue (or R) alone is not enough: it covers many other colours in the broader sense. Browns are dark or unsaturated reds (or oranges), and pink is a tint of red (red plus white, hence unsaturated).
So in your first method I would add a condition R > 127 (you should find a good threshold yourself), possibly tighten the other conditions with a higher ratio of R to G and to B, and possibly also add a condition on the ratio of R to (G+B). The newly added condition is about brightness, separating reds from dark reds/browns; your two original conditions are about hue (hue is determined by the two largest channels); and the last condition I suggested is about saturation.
You can do the same with HSV: filter H (as you did), but you must also filter V (you want only bright reds) and require a high saturation, so you end up filtering all the channels.
You should test the saturation levels yourself. The problem is that eyes adapt quickly to colours, so an image with a lot of reddish colours is seen as normal (less reddish) by humans but counts as more red in the calculation above. For this kind of work there are usually sliders to adjust; you can try to automate them, but then you need to find the overall hue and brightness of the image, possibly with more complex methods (see CIECAM).
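A sketch of these tightened tests in TypeScript (every threshold below is a starting point to tune, not a tested value):

    function isRedRgb(r: number, g: number, b: number): boolean {
      return (
        r > 127 &&       // brightness: rejects dark reds and browns
        g < r / 2 &&     // hue: R clearly above G (raise the ratio to taste)
        b < r / 1.5 &&   // hue: R clearly above B (raise the ratio to taste)
        g + b < r * 1.2  // saturation: R must dominate G and B combined
      );
    }

    function isRedHsv(h: number, s: number, v: number): boolean {
      // h in degrees, s and v in [0, 1]
      const hueIsRed = h >= 345 || h <= 9;
      return hueIsRed && s > 0.5 && v > 0.5; // keep only strong, bright reds
    }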
I am trying to implement the following behaviour inside a Recharts LineChart: the Tooltip value is relative to the blue point, because my mouse happens to be near it, and farther from the grey point with the same x-value. If I move the mouse closer to the grey point, the tooltip content changes.
However, all available examples show that a Recharts Tooltip receives data about all the data series being drawn, and it seems impossible to single out the point nearest to the mouse so that the Tooltip can show only its value.
Is there a way to specify for which dot I want to send data to the Tooltip?
At the end of a long fruitless search, I decided to solve this problem myself.
The minimal code is published in this Github gist.
The basic problem to solve is that any standard Recharts tooltip receives information about:
the x-value where the mouse pointer is at the moment, expressed in pixels on the chart canvas
the y-values for all the data series in closest position to the mouse x-value, expressed in the y-axis real-world unit (euros, kilograms, etc.)
It is therefore necessary to also feed the custom tooltip the y-axis mouse position, expressed in pixels on the chart canvas.
The tooltip can then calculate which data series is closest to the vertical mouse position and display only the value belonging to that data series.
Extracting the y-position in pixels is tricky, because Recharts changes the mapping between pixel and ordinate values each time it redraws the chart. But there is a chart component that must know this mapping very well, in order to place itself at the right vertical position and display the corresponding real-world ordinate value: every tick on the y-axis.
The problem is: how do we plug into the Recharts drawing workflow in order to learn the mapping?
Here's how: the tick property of the Recharts YAxis component allows you to provide a custom React component, albeit one not documented with examples.
This custom component is instantiated once for each tick that Recharts decides to place on the y-axis.
By trial and error I found out that my custom Tick component receives the following properties:
{ x, y, payload, ...anyCustomPropertyAddedByMe }
Where x and y are the Cartesian coordinates of the tick (canvas pixels) and payload is an object of this shape:
{ coordinates, isShow, offset, tickCoord, value }
Where value is expressed in real-world y-axis units.
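In TypeScript terms, the shape found by trial and error looks roughly like this (field types are guesses from the observed values, not an official Recharts definition):

    // Rough typing of the props observed by trial and error; the field types
    // are guesses from the observed values, not an official Recharts type.
    interface CustomTickProps {
      x: number;               // tick abscissa, in canvas pixels
      y: number;               // tick ordinate, in canvas pixels
      payload: {
        coordinates: unknown;  // observed but not used here
        isShow: boolean;
        offset: number;
        tickCoord: number;
        value: number;         // the tick's value, in real-world y-axis units
      };
      [custom: string]: unknown; // any custom property added by me
    }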
The idea is to find the pair (y, value) for the lowest and highest tick in each drawing and calculate the conversion factor between pixels and real-world values.
This will allow the custom tooltip to perform the computations mentioned above.
(Strictly speaking it would be enough to collect two pairs from the first two ticks instantiated at each chart repaint, but choosing the two farthest apart gives more precision.)
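The conversion itself is then plain linear interpolation between those two pairs; a TypeScript sketch (names are mine):

    // Linear mapping from canvas pixels to real-world values, built from the
    // two most distant (y, value) tick pairs. Names are illustrative.
    type TickPair = { y: number; value: number };

    function pixelToValue(yPx: number, lowest: TickPair, highest: TickPair): number {
      // Pixels grow downwards, so lowest.y > highest.y while lowest.value < highest.value.
      const valuePerPixel = (highest.value - lowest.value) / (highest.y - lowest.y);
      return lowest.value + (yPx - lowest.y) * valuePerPixel;
    }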
The whole algorithm is divided among three components (condensed sketches follow this list):
a tooltipCollector: this is a JavaScript module that presents two methods:
collect(y, value), invoked by the customized tick, which stores all the pairs (y, value) in a private array _collection
maxAndMin(), invoked by the custom tooltip, which reads the _collection array and returns the two items in the collection that represent the lowest and highest ticks (watch out: vertical pixel values on a canvas grow downwards!)
a CustomizedTick React component that:
receives the tooltip collector among its custom properties
sends its y and payload.value to the collector by invoking its collect(y, value) method
returns a very simple JSX tick markup that makes use of y (to place itself at the right vertical position) and payload.value (to show the user the real-world value the tick indicates)
a CustomTooltip React component that:
receives the tooltip collector among its custom properties and invokes its maxAndMin() method
verifies (by considering its prop coordinate.y) whether it's close enough to one of the chart data series, using a threshold value; this ensures that the tooltip is drawn only when the mouse cursor is very close to a point on the graph
modifies its returned JSX markup to contain only the value belonging to the data series the mouse is closest to; if several points in the chart are closer than the threshold, the tooltip will present more than one value
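Condensed TypeScript sketches of the three pieces described above (simplified; prop and helper names are illustrative, not the gist's exact code):

    import React from "react";

    // 1. The collector: a plain module that stores (y, value) pairs during a repaint.
    function makeTooltipCollector() {
      let _collection: { y: number; value: number }[] = [];
      return {
        reset() {
          _collection = []; // call before a repaint to drop stale pairs
        },
        collect(y: number, value: number) {
          _collection.push({ y, value });
        },
        maxAndMin() {
          // Vertical pixel values grow downwards, so the lowest tick has the largest y.
          const byY = [..._collection].sort((a, b) => a.y - b.y);
          return { highest: byY[0], lowest: byY[byY.length - 1] };
        },
      };
    }

    // 2. The customized tick: reports its (y, value) pair, then renders as usual.
    const CustomizedTick = (props: any) => {
      const { x, y, payload, collector } = props; // collector is a custom prop
      collector.collect(y, payload.value);
      return (
        <text x={x} y={y} textAnchor="end">
          {payload.value}
        </text>
      );
    };

    // 3. The custom tooltip: converts the mouse y-position into a real-world
    // value and keeps only the series within a threshold of it.
    const CustomTooltip = (props: any) => {
      const { active, payload, coordinate, collector, threshold = 0.5 } = props;
      if (!active || !payload) return null;
      const { highest, lowest } = collector.maxAndMin();
      const valuePerPixel = (highest.value - lowest.value) / (highest.y - lowest.y);
      const mouseValue = lowest.value + (coordinate.y - lowest.y) * valuePerPixel;
      const near = payload.filter(
        (entry: any) => Math.abs(entry.value - mouseValue) <= threshold
      );
      if (near.length === 0) return null;
      return (
        <div>
          {near.map((entry: any) => (
            <p key={entry.name} style={{ color: entry.color }}>
              {entry.name}: {entry.value}
            </p>
          ))}
        </div>
      );
    };

    // Usage sketch (threshold is in real-world y units, a hypothetical default):
    //   const collector = makeTooltipCollector();
    //   <YAxis tick={<CustomizedTick collector={collector} />} />
    //   <Tooltip content={<CustomTooltip collector={collector} threshold={0.5} />} />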
The code in my gist has been simplified to remove all unnecessary JSX markup. It presents a chart component that puts all the above-mentioned components to work.
Please note that the standard Recharts behaviour of highlighting all the data series' points at the tooltip's abscissa has not been changed. It is therefore good practice to color-code the tooltip content to make clear which data series the displayed value belongs to.
I am trying to find a y-axis value from an x-axis value, but so far no luck. My graph is plotted with 10-15 points and with smooth curve set to true. What I need to locate is the probable value of y for any given x-value, similar to what occurs on clicking the graph (I think the nearest value is displayed on the tracker). Please note that I can't use tracker or click events; the x-axis value will be provided from external input.
Sample image: http://i.stack.imgur.com/jMwJo.png
I am using JFreeChart to make XYLineCharts with a Logarithmic y-axis but am facing an issue that I am unable to resolve.
My x, y values are very low in some cases (in one such case, the y-axis values for the dataset range between 4.5e-8 and 1.7). I plot these values on an XYLineChart using a logarithmic axis for the y-axis (and using LogAxis.createLogTickUnits(Locale.ENGLISH) and .setExpTickLabelsFlag(true) on the y-axis to create the exponential tick units). I set my range's bounds from 4.5e-8 to 1.7 and can see the points plotted clearly, but there are no tick labels visible for the y-axis!
I was having this issue earlier while zooming into the charts, but I have fixed the zoom & autozoom by overriding those methods.
My current LogarithmicAxis works well for most of my x, y datasets, but in a few cases the y-axis is plotted without showing any tick labels on it, despite my creating them and setting their visibility to true.
If anyone has any suggestions on how to fix this and ensure that the tick labels are visible no matter what the y-axis values may be, please let me know soon, as I need to get this done ASAP.
Thanks.
I am using WPF 3D, but I think this question should apply to any 3D texture mapping.
Suppose I have a model of a cow, and I want to draw a circular spot on the cow (and I want to do this dynamically -- suppose I don't know the location of the spot until run-time). I could do this by coloring the vertexes (vertexes are assigned a color based on their distance from the center of the spot), but if the model is fairly low-poly, that will give a pretty jagged-edged circle.
I could do it using a pixel shader, where the shader colors each pixel based on its distance from the center of the spot. But suppose I don't have access to pixel shaders (since I don't in WPF).
So, it seems that what I want to do is dynamically create a texture with the circle pattern on it, and texture the cow with it.
The question is: As I'm drawing that texture, how can I know what 3d coordinate in model space a given xy coordinate on the texture image corresponds to?
That is, suppose I have already textured my model with a plain white texture -- I've set up texture coordinates, done texture mapping, but don't have the texture image yet. So I have this 1000x1000 (or whatever) pixel image that gets draped nicely over the cow according to some nice texture coordinates that have been set up on the model beforehand. I understand that when the 3D hardware goes to draw a given triangle, it uses the texture coordinates of the vertexes of the triangle to find the corresponding triangular region of the image, and then interpolates across the surface of the triangle to fill displayed model pixels with colors from that triangular region of the image.
How do I go the other way? How do I say, for this given xy point on my texture image, and given the texture coordinates that have already been set up on the model, what's the 3d coordinate in model space that this image pixel is going to correspond to once texture mapping happens?
If I had such a function, I could color my texture map image such that all the points (in 3d space) within a certain distance of the circle center point on the cow would get one color, and all points outside that distance would get another color, and I'd end up with a nice, crisp circular spot on the cow, even with a relatively low-poly model. Does that sound right?
I do understand that given the texture coordinates for the vertexes of each triangle, I can step through the triangles in my model, find the corresponding triangle on the texture image, and do my own interpolation, across the texture pixels in that triangle, by interpolating across the 3d plane determined by the vertex points. And that doesn't sound too hard. But I'm just trying to understand if there is some standard 3d concept/function where I can just call a ready-made function to give me the model space coordinates for a given texture xy.
I did end up getting this working. I walk every point on the texture (1024 x 1024 points). Using the model's texture coordinates, I determine which polygon face, if any, the given u,v point is inside of. If it's inside of a face, I get the model coordinates for each point on that face. I then do a barycentric interpolation as described here: http://paulbourke.net/texture_colour/interpolation/
That is, for each u,v point on the texture, I use an inside-polygon check to determine which quad it's in on the 2D texture sheet and then I use an interpolation on that same 2D geometry as described in the link above, but instead of interpolating colors or normals I'm interpolating 3D coordinates.
I can then use the 3D coordinate to color the point on the texture (e.g., to color a circular spot on the cow based on how far in model space the given texture point is from the spot center point). And then I can apply the texture to the model, and it works.
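Here is a TypeScript sketch of the per-texel step (barycentric interpolation of model-space positions over one UV triangle; a quad can be split into two triangles; names are mine):

    type Vec2 = [number, number];
    type Vec3 = [number, number, number];

    // 2D cross product, used both for the inside test and the weights.
    const cross2 = (ax: number, ay: number, bx: number, by: number) => ax * by - ay * bx;

    // Given one triangle's texture coordinates and model-space positions,
    // return the model-space position for texel (u, v), or null if outside.
    function modelPosAt(
      uvA: Vec2, uvB: Vec2, uvC: Vec2, // triangle corners in texture space
      pA: Vec3, pB: Vec3, pC: Vec3,    // same corners in model space
      u: number, v: number,            // the texel being filled
    ): Vec3 | null {
      const det = cross2(uvB[0] - uvA[0], uvB[1] - uvA[1], uvC[0] - uvA[0], uvC[1] - uvA[1]);
      if (det === 0) return null;                   // degenerate UV triangle
      const s = cross2(u - uvA[0], v - uvA[1], uvC[0] - uvA[0], uvC[1] - uvA[1]) / det;
      const t = cross2(uvB[0] - uvA[0], uvB[1] - uvA[1], u - uvA[0], v - uvA[1]) / det;
      if (s < 0 || t < 0 || s + t > 1) return null; // texel outside this triangle
      const w = 1 - s - t;                          // barycentric weights (w, s, t)
      return [
        w * pA[0] + s * pB[0] + t * pC[0],
        w * pA[1] + s * pB[1] + t * pC[1],
        w * pA[2] + s * pB[2] + t * pC[2],
      ];
    }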
Again, it seems like this must be a standard procedure with a name...
One issue is that the result is very sensitive to the quality of the texturing as set up by the modeler. For instance, if a relatively large quad on the cow corresponds to a small quad on the texture image, there just aren't enough pixels to work with to get a smooth curve within that model quad once the texture is applied. You can of course use a higher-res texture, such as 2048x2048, but then your loop time is 4x.
It's actually a rasterization process, if I didn't misunderstand your question. In lightmapping, one may also need to find the corresponding positions and normals in world space for each texel in lightmap space and then bake irradiance (which seems similar to your goal).
You can use a standard graphics API to do this task instead of writing your own implementation. Let:
Size of texture -> Size of G-buffers
UVs of each mesh triangle -> Vertex positions vec3(u, v, 0) of the input stage
Indices of each mesh triangle -> Indices of the input stage
Positions (and normals, etc.) of each mesh triangle -> Attributes of the input stage
After the rasterizer stage of the graphics pipeline, all fragments that lie within the UV triangle are generated, and the attributes that have been supplied are interpolated automatically. You can do whatever you want now in the pixel shader!
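For example, with WebGL 2 the shader pair for such a pass might look like the following sketch (TypeScript strings holding GLSL; context setup, buffer binding, and readback are omitted, and this is an illustration rather than the answerer's code):

    // GLSL ES 3.00 shader pair for the "bake to UV space" pass described above.
    const vertexSrc = `#version 300 es
    in vec2 aUV;        // the mesh's texture coordinates, used as positions
    in vec3 aModelPos;  // the mesh's model-space positions, fed as an attribute
    out vec3 vModelPos;
    void main() {
      vModelPos = aModelPos;
      // Map UV space [0,1] onto clip space [-1,1]; the rasterizer then emits
      // one fragment per covered texel of the render target.
      gl_Position = vec4(aUV * 2.0 - 1.0, 0.0, 1.0);
    }`;

    const fragmentSrc = `#version 300 es
    precision highp float;
    in vec3 vModelPos;  // automatically interpolated across the UV triangle
    out vec4 outColor;
    void main() {
      // Store the model-space position in the G-buffer; reading the target
      // back yields the model coordinate for every texel of the texture.
      outColor = vec4(vModelPos, 1.0);
    }`;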