Create GitHub punch-card-like plots with JFreeChart

I am looking for suggestions on how to create plots similar to GitHub punch cards with JFreeChart, i.e. the hour-of-day vs. day-of-week grid of circles that GitHub shows for a repository.
I guess it's some variant of a heat map, or a two-dimensional histogram.

OK, so I found XYBubbleRenderer, which looks like a good starting point. The steps (a code sketch follows the list):
create a MatrixSeries with rows = 7, columns = 24
fill in the frequencies accordingly. I found it useful to normalise the values first to 0...1, then take the square root (smaller values get slightly more visible circles), then multiply by 0.5 (otherwise the circles are too large)
create a MatrixSeriesCollection from that
use ChartFactory.createBubbleChart
the circle outline can only be removed via plot.getRenderer().setSeriesOutlinePaint(0, new Color(0, 0, 0, 0))
ensure integer tick units on both axes
x-axis range -0.5 to 23.5, y-axis range -0.5 to 6.5 (or 0.5 to 7.5 if you use Calendar.DAY_OF_WEEK)
custom NumberTickUnit for the y-axis to use day labels instead of numbers
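A rough sketch of those steps (JFreeChart 1.0.x; the class name, chart titles and day labels here are only illustrative, counts[7][24] is assumed to hold your frequencies, and a SymbolAxis would be an alternative to overriding NumberTickUnit.valueToString):

import java.awt.Color;
import org.jfree.chart.ChartFactory;
import org.jfree.chart.JFreeChart;
import org.jfree.chart.axis.NumberAxis;
import org.jfree.chart.axis.NumberTickUnit;
import org.jfree.chart.plot.PlotOrientation;
import org.jfree.chart.plot.XYPlot;
import org.jfree.data.xy.MatrixSeries;
import org.jfree.data.xy.MatrixSeriesCollection;

public class PunchCardChart {
    // adjust the labels/order to however your counts are indexed
    static final String[] DAYS = {"Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"};

    // counts[day][hour], 7 x 24
    static JFreeChart create(double[][] counts) {
        MatrixSeries series = new MatrixSeries("commits", 7, 24);
        double max = 1e-9;
        for (double[] row : counts)
            for (double c : row) max = Math.max(max, c);
        for (int day = 0; day < 7; day++)
            for (int hour = 0; hour < 24; hour++)
                // normalise to 0..1, take the square root, scale by 0.5 (see the steps above)
                series.update(day, hour, 0.5 * Math.sqrt(counts[day][hour] / max));

        JFreeChart chart = ChartFactory.createBubbleChart("Punch card", "hour", "day",
                new MatrixSeriesCollection(series), PlotOrientation.VERTICAL, false, true, false);
        XYPlot plot = chart.getXYPlot();
        plot.getRenderer().setSeriesOutlinePaint(0, new Color(0, 0, 0, 0));  // invisible outline

        NumberAxis xAxis = (NumberAxis) plot.getDomainAxis();
        xAxis.setStandardTickUnits(NumberAxis.createIntegerTickUnits());
        xAxis.setRange(-0.5, 23.5);

        NumberAxis yAxis = (NumberAxis) plot.getRangeAxis();
        yAxis.setRange(-0.5, 6.5);
        yAxis.setTickUnit(new NumberTickUnit(1) {            // day labels instead of numbers
            @Override
            public String valueToString(double value) {
                int i = (int) Math.round(value);
                return (i >= 0 && i < 7) ? DAYS[i] : "";
            }
        });
        return chart;
    }
}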
The result:

In addition to XYBubbleRenderer, suggested here, also consider a suitable implementation of TableCellRenderer and Icon, illustrated here.

Related

Uniform random sampling of CIELUV for RGB colors

Selecting a random color on a computer is a touch harder than I thought it would be.
The naive way of uniform random sampling of 0..255 for R,G,B will tend to draw lots of similar greens. It would make sense to sample from a perceptually uniform space like CIELUV.
A simple way to do this is to sample L, u, v on a regular mesh and ensure the color solid is contained within the bounds (I've seen different bounds for this). If the sample falls outside the embedded RGB solid (tested by mapping it to XYZ and then to RGB), reject it and sample again. You can settle for a kludgy-but-guaranteed-to-terminate "bailout" selection (like the naive procedure) if you reject more than some arbitrary threshold number of times.
The test for whether a sample lies within RGB needs to handle the special case of black (some implementations are silent about the division by zero), I believe. If L=0 and either u!=0 or v!=0, the sample needs to be rejected, or else you end up oversampling the L=0 plane in Luv space.
Does this procedure have an obvious flaw? It seems to work but I did notice that I was rolling black more often than I thought made sense until I thought about what was happening in that case. Can anyone point me to the right bounds on the CIELUV grid to ensure that I am enclosing the RGB solid?
A useful reference for those who don't know it:
https://www.easyrgb.com/en/math.php
The key problem with this is that you need bounds to reject samples that fall outside of RGB. I was able to find this worked out here (nice demo on the page; the API provides convenient functions):
https://www.hsluv.org/
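For what it's worth, here is a minimal sketch of the rejection loop described above. It is illustrative only: it assumes a D65 white point, the approximate sRGB u*/v* extremes of roughly -84..176 and -132.5..107.5, an sRGB matrix rounded to four decimals, and an arbitrary bailout limit.

import java.util.Random;

public class RandomLuvColor {
    // D65 reference white in u'v' coordinates
    static final double UN = 0.19784, VN = 0.46832;

    // Returns {L, u, v} of a sample that maps into the sRGB cube,
    // or black after too many rejections (the "bailout").
    static double[] sampleLuv(Random rng, int maxTries) {
        for (int t = 0; t < maxTries; t++) {
            double L = rng.nextDouble() * 100.0;
            double u = -84.0  + rng.nextDouble() * 260.0;   // approx. sRGB u* extent
            double v = -132.5 + rng.nextDouble() * 240.0;   // approx. sRGB v* extent
            if (L == 0.0) {
                // special case: at L = 0 only u = v = 0 is a real color (black)
                if (u != 0.0 || v != 0.0) continue;
                return new double[] {0.0, 0.0, 0.0};
            }
            if (inSRGB(L, u, v)) return new double[] {L, u, v};
        }
        return new double[] {0.0, 0.0, 0.0};                // kludgy but terminating bailout
    }

    // Luv -> XYZ -> linear sRGB, then test that all channels lie in [0, 1].
    static boolean inSRGB(double L, double u, double v) {
        double up = u / (13.0 * L) + UN;
        double vp = v / (13.0 * L) + VN;
        double Y  = (L > 8.0) ? Math.pow((L + 16.0) / 116.0, 3.0) : L / 903.3;
        double X  = Y * 9.0 * up / (4.0 * vp);
        double Z  = Y * (12.0 - 3.0 * up - 20.0 * vp) / (4.0 * vp);
        double r =  3.2406 * X - 1.5372 * Y - 0.4986 * Z;   // linear sRGB, D65
        double g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z;
        double b =  0.0557 * X - 0.2040 * Y + 1.0570 * Z;
        double eps = 1e-3;                                  // tolerance for matrix rounding
        return r > -eps && r < 1 + eps && g > -eps && g < 1 + eps && b > -eps && b < 1 + eps;
    }
}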
A few things I noticed with uniform sampling of CIELUV in RGB:
most colors are green and purple (this is true independent of RGB bounds)
you have a hard time sampling what we think of as yellow (yellows occupy a very small volume of high-lightness, high-chroma space)
I implemented various strategies that focus on sampling hues (which is really what we want when we think of "sampling colors") by weighting according to the maximum chroma available at that lightness. This makes colors like chromatic light yellows easier to catch and avoids oversampling greens and purples. You can see these methods in action here (select "randomize colors"):
https://www.mysticsymbolic.art/
Source for color randomizers here:
https://github.com/mittimithai/mystic-symbolic/blob/chromacorners/lib/random-colors.ts
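A rough sketch of the chroma-weighting idea, as I read it (this is not the linked implementation): pick a lightness, then accept a candidate hue with probability proportional to the maximum sRGB-legal chroma at that lightness, found here by a crude numeric search rather than by hsluv's exact boundary-line math.

import java.util.Random;

public class ChromaWeightedHues {
    static final double UN = 0.19784, VN = 0.46832;         // D65 white in u'v'

    // Returns {L, C, h}: hues with more available chroma at this lightness are picked more often.
    static double[] sampleLCh(Random rng) {
        double L = 5.0 + rng.nextDouble() * 90.0;           // skip the near-black/near-white extremes
        while (true) {
            double h = rng.nextDouble() * 360.0;
            double cMax = maxChroma(L, h);
            if (rng.nextDouble() * 200.0 <= cMax) {         // rejection step = chroma weighting
                return new double[] {L, rng.nextDouble() * cMax, h};
            }
        }
    }

    // Largest sRGB-legal chroma at this lightness/hue, by bisection along the hue ray.
    static double maxChroma(double L, double hDeg) {
        double lo = 0.0, hi = 200.0;                        // 200 exceeds any sRGB chroma in Luv
        for (int i = 0; i < 32; i++) {
            double mid = 0.5 * (lo + hi);
            double u = mid * Math.cos(Math.toRadians(hDeg));
            double v = mid * Math.sin(Math.toRadians(hDeg));
            if (inSRGB(L, u, v)) lo = mid; else hi = mid;
        }
        return lo;
    }

    // Same Luv -> XYZ -> linear sRGB gamut test as in the sketch further up.
    static boolean inSRGB(double L, double u, double v) {
        double up = u / (13.0 * L) + UN, vp = v / (13.0 * L) + VN;
        double Y  = (L > 8.0) ? Math.pow((L + 16.0) / 116.0, 3.0) : L / 903.3;
        double X  = Y * 9.0 * up / (4.0 * vp);
        double Z  = Y * (12.0 - 3.0 * up - 20.0 * vp) / (4.0 * vp);
        double r =  3.2406 * X - 1.5372 * Y - 0.4986 * Z;
        double g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z;
        double b =  0.0557 * X - 0.2040 * Y + 1.0570 * Z;
        double eps = 1e-3;
        return r > -eps && r < 1 + eps && g > -eps && g < 1 + eps && b > -eps && b < 1 + eps;
    }
}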
Okay, while you don't show the code you are using to generate the random numbers and then apply them to the CIELUV color space, I'm going to guess that you are creating a random number 0.0-100.0 from a random number generator, and then just assigning it to L*.
That will most likely give you a lot of black or very dark results.
Let Me Explain
The L* of L*u*v* is not linear with respect to light, whereas the Y of CIEXYZ is. L* is perceptual lightness, so a power curve is applied to Y to make it linear to perception, but then non-linear with respect to light.
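For reference, the exact CIE relation between relative luminance Y and L* is the piecewise curve sketched below; the 0.42 exponent suggested under TRY THIS approximates the inverse of this curve.

// CIE 1976: lightness L* from relative luminance Y in [0, 1] (Yn = 1)
static double lstarFromY(double Y) {
    return (Y > 216.0 / 24389.0)            // ~0.008856
            ? 116.0 * Math.cbrt(Y) - 16.0
            : (24389.0 / 27.0) * Y;         // ~903.3 * Y, linear segment near black
}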
TRY THIS
To get L* with a random value 0—100:
Generate a random number between 0.0 and 1.0
Then apply an exponent of 0.42
Then multiply by 100 to get L*
Lstar = Math.pow(Math.random(), 0.42) * 100;
This takes your random number, which represents light, and applies a power curve that emulates human lightness perception.
UV Color
As for the u and v values, you can probably just leave them as linear random numbers. Constrain u to about -84 to +176, and v to about -132.5 to +107.5:
Urnd = (Math.random() - 0.3231) * 260;   // spans roughly -84 to +176
Vrnd = (Math.random() - 0.5521) * 240;   // spans roughly -132.5 to +107.5
Polar Color
It might be interesting to convert u, v to LChLUV or LshLUV.
For hue, it's probably as simple as H = Math.random() * 360
For chroma constrained 0—178: C = Math.random() * 178
The next question is: should you use chroma, or saturation? CIELUV can give you either, but for directly generating random colors, chroma seems to work a bit better.
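A tiny sketch of that polar variant (illustrative; the u, v conversion at the end is only needed if the rest of your pipeline expects rectangular Luv):

// random LCh(uv): perceptual lightness, uniform hue, chroma constrained to ~0..178
double Lstar = Math.pow(Math.random(), 0.42) * 100.0;
double Hdeg  = Math.random() * 360.0;
double C     = Math.random() * 178.0;
// back to rectangular u*, v* if needed
double u = C * Math.cos(Math.toRadians(Hdeg));
double v = C * Math.sin(Math.toRadians(Hdeg));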
And of course these simple examples do not prevent over-runs, so the resulting color values need to be tested to see whether they are legal sRGB or not. There are a few things that can be done to constrain the generated values to legal colors, but the object here was to get you to a better distribution without excess black/dark results.
Please let me know of any questions.

LiveCharts - How to prevent the Y-Axis from showing double values?

I have a column diagram. This diagram may have Y values from 0 up to very large numbers.
My problem:
When the Y values are small (from my observation, smaller than 7), the chart shows double values (for example: 0, 0.01, 0.02, ..., 0.1), which is not correct in my case.
What I want:
force the Y Axis to use integers.
What I cannot do:
I cannot define a separator for the Y axis and set its Step to 1, because if I do, I'll have Step = 1 even when the values are very large, which is not desirable.
Is there any workaround for it?
By default, the library decides the step (when you don't force it) with the CalculateSeparator() method (for more info see this). Since the library should also allow you to plot decimal values, it cannot be forced to display only integers.
A simple workaround I can think of is to force the Axis.MaxValue property.
In your case, when your values are less than 7, I would force Axis.MaxValue to 10 (or any other value that works for you); then, when the data in your chart is greater than 7, you can set Axis.MaxValue back to double.NaN and the library will calculate this limit by itself.
I hope it helps you.

Cannot implement correct billboard behaviour in WPF

Good eve all!
I'm making a simple model editor in WPF for creating cube-headed freaks with pixelated skins, and I want to add billboarding to it.
So, this is what I do:
Vector3D unitZ = new Vector3D(0, 0, 1);
Vector3D direction = -this.camera.LookDirection;
double yaw = Vector3D.AngleBetween(unitZ, new Vector3D(0, 0, direction.Z));
and then apply the rotation to the plane.
It works, but only halfway, because the AngleBetween function always returns positive values.
So when I rotate the parent shape to -45 degrees, AngleBetween returns 45 and it cancels out the parent rotation. But when I rotate the parent by 45 degrees, AngleBetween again returns 45, and as a result I get a 90-degree rotation.
Any solutions?
And one more thing: please, do not offer any frameworks or toolkits! Thanks.
Well, that's how the AngleBetween method works for 3D vectors in WPF. It returns a value in the range [0..180] degrees.
As far as your problem goes, I am not 100% sure I understand you. The angle between [0, 0, 1] and [0, 0, arbitrary number here] can only give you three different values:
impossible to calculate
180
0
-180, if a signed result were possible.
So I don't really see how you can ever get 45.
Either way, it's possible that you can use Vector.AngleBetween, which should preserve the sign. That will work if your rotation is only on one axis, e.g. one of the components stays the same.
If that does not suit your needs, you should just write what you need: a signed angle between two 3D vectors with the same origin, within the same plane.
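For reference, a minimal sketch of that computation in plain Java (no WPF types; up is assumed to be the unit axis you billboard around, and a and b are assumed to lie roughly in the plane perpendicular to it):

// Signed angle (degrees) from a to b, measured around the unit axis 'up'.
// Positive = counter-clockwise when looking down 'up' (right-hand rule).
static double signedAngleDeg(double[] a, double[] b, double[] up) {
    double[] cross = {
            a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]
    };
    double dot  = a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    double sign = cross[0] * up[0] + cross[1] * up[1] + cross[2] * up[2];
    return Math.toDegrees(Math.atan2(sign, dot));
}

// Usage idea for yaw billboarding, with d = -camera.LookDirection projected onto the XZ plane:
// double yaw = signedAngleDeg(new double[]{0, 0, 1},
//                             new double[]{d.X, 0, d.Z},
//                             new double[]{0, 1, 0});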

Making a dynamic gradient with HSL or RGB

I have a standard 50-state map built with d3 in which I'm dynamically coloring states according to various datasets. Whatever the dataset, the values are normalized on a scale of 0 to 1, where 1 corresponds to the state with the highest value. I'm looking for a way to calculate the shade of the state using the value of the normalized data point.
In the past, I've chosen a base color that I like -- say, #900 -- and set the fill of each state to that color and the opacity to the normalized value. This works okay save for two problems:
when the canvas has a background color, it requires drawing a blank white state beneath every shaded state; and
fading out colors this way can look pasty
But I really like being able to set the color dynamically rather than dealing with bins for the data and preset arrays of RGB values for the gradient. So I'm wondering if there's a better way. I can take care of conversion if an alternate color system would work better.
d3 has a baked-in HSL converter, so I tried this:
// 0 <= val <= 1
function colorize(val) {
    // nudge in the extremes
    val = 0.2 + 0.6 * val;
    return d3.hsl(0, val, 1 - val);
}
It works okay -- This is a map of fishing jobs, which are most prevalent in Maine and Oregon -- but I suspect there's a better way. Ideas?
I like what you did actually, but if you wish to do something different, you can always do a D3 scale. For example:
var scale = d3.scale.linear()
    .domain([rangeMin, rangeMid, rangeMax])
    .range(["#Color1", "#Color2", "#Color3"]);
And then set each state by
return scale(dataValue);
You can set your rangeMin and rangeMax variables to the minimum and maximum values of your data. The middle value, rangeMid, that I added is optional; I would suggest using it if you would like some variety in your color. I have used this scale feature to make a word-frequency heatmap that came out pretty nice. I hope that I was able to help in some way!
Note: I used this with CSS hex values, but I believe RGB and HSL would also work.

How to identify optimal parameters for cvCanny for polygon approximation

This is my source image (ignore the points, they were added manually later):
My goal is to get a rough polygon approximation of the two hands. Something like this:
I have a general idea on how to do this; I want to use cvCanny to find edges, cvFindContours to find contours, and then cvApproxPoly.
The problem I'm facing is that I have no idea how to properly use cvCanny; in particular, what should I use for the last three parameters (threshold1, threshold2, apertureSize)? I tried doing:
cvCanny(source, cannyProcessedImage, 20, 40, 3);
but the result is not ideal. The left hand looks relatively fine but for the right hand it detected very little:
In general it's not as reliable as I'd like. Is there a way to guess the "best" parameters for Canny, or at least a detailed explanation (understandable by a beginner) of what they do so I can make educated guesses? Or perhaps there's a better way to do this altogether?
It seems you have to lower your thresholds.
The Canny algorithm works with a hysteresis threshold: it keeps an edge if at least one pixel is as strong as the upper threshold, and then takes all connected edge pixels that are above the lower threshold.
Papers recommend keeping the two thresholds at a ratio of 2:1 or 3:1 (for example 10 and 30, or 20 and 60, etc.). For some applications, a manually determined, hardcoded threshold is enough. That may be your case, too. I suspect that if you lower your thresholds you will get good results, because the images are not that complicated.
A number of methods to automatically determine the best canny thresholds have been proposed. Most of them rely on gradient magnitudes to estimate the best thresholds.
Steps:
Extract the gradients (Sobel is a good option)
You can convert the result to uchar. Gradients can theoretically have values greater than 255, but that's OK; OpenCV's Sobel can return uchars (larger values are saturated).
Make a histogram of the resulting image.
Take the upper threshold at the 95th percentile of your histogram, and the lower one as high/3.
You should probably adjust the percentile depending on your application, but the results will be much more robust than hardcoded high and low values (a sketch of this idea follows below).
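Something along those lines, sketched here with the OpenCV Java bindings (the 95th percentile comes from the steps above; the file names, the CV_16S intermediate depth and the 0.5/0.5 gradient blend are illustrative choices, and the same calls exist in the C++ API):

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class AutoCanny {
    static Mat autoCanny(Mat bgr, double percentile) {
        Mat gray = new Mat();
        Imgproc.cvtColor(bgr, gray, Imgproc.COLOR_BGR2GRAY);

        // 1. gradients (Sobel), combined into a rough 8-bit magnitude image
        Mat gx = new Mat(), gy = new Mat(), mag = new Mat();
        Imgproc.Sobel(gray, gx, CvType.CV_16S, 1, 0);
        Imgproc.Sobel(gray, gy, CvType.CV_16S, 0, 1);
        Core.convertScaleAbs(gx, gx);
        Core.convertScaleAbs(gy, gy);
        Core.addWeighted(gx, 0.5, gy, 0.5, 0, mag);

        // 2. histogram of the gradient magnitudes
        byte[] px = new byte[(int) mag.total()];
        mag.get(0, 0, px);
        int[] hist = new int[256];
        for (byte p : px) hist[p & 0xFF]++;

        // 3. upper threshold at the given percentile, lower at high/3
        long target = (long) (percentile * px.length), seen = 0;
        int high = 255;
        for (int v = 0; v < 256; v++) {
            seen += hist[v];
            if (seen >= target) { high = v; break; }
        }
        double low = high / 3.0;

        Mat edges = new Mat();
        Imgproc.Canny(gray, edges, low, high);
        return edges;
    }

    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat img = Imgcodecs.imread("hands.png");       // illustrative file name
        Imgcodecs.imwrite("edges.png", autoCanny(img, 0.95));
    }
}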
Note: An excellent threshold-detection algorithm is implemented in Matlab. It is based on the idea above, but is a bit more sophisticated.
Note 2: This method will work if the contours and illumination do not vary a lot between image areas. If the contours are crisper in one part of the image, you need locally adaptive thresholds, and that's another story. But looking at your pics, that should not be the case.
Maybe one of the easiest solutions is to apply Otsu thresholding to the grayscale image, find contours in the binary image, and then approximate them. Here's the code:
#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;

int main()
{
    Mat img = imread("test.png"), gray;
    vector<Vec4i> hierarchy;
    vector<vector<Point2i> > contours;

    cvtColor(img, gray, CV_BGR2GRAY);
    threshold(gray, gray, 0, 255, THRESH_OTSU);           // Otsu picks the threshold automatically
    findContours(gray, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
    for (size_t i = 0; i < contours.size(); i++)
    {
        approxPolyDP(contours[i], contours[i], 5, false); // rough polygon approximation
        drawContours(img, contours, (int)i, Scalar(0, 0, 255));
    }
    imshow("result", img);
    waitKey();
    return 0;
}
And this is the result:
