[giraffe][rickshaw] How to dynamically scale the Y axis on positive and negative sides - rickshaw

I am using Giraffe (https://github.com/kenhub/giraffe) to draw some graphs based on Graphite metrics.
The values for one of the graphs can go positive as well as negative. I do not want to set an explicit scale for the graph using "min" and "max". Is there a way that the graph can dynamically scale and display the negative values as well?
Currently, the graph scales dynamically on the positive y axis, but the values on the negative y axis are not displayed.
Thanks!

I'm the creator of Giraffe.
The issue was resolved in the latest version. Thanks for reporting it.
In older versions, you can set min: 'auto' in the dashboards.js file.

Related

How can you exclude a large initial value from a running delta calculation?

I'm trying to use a running delta calculation to graph how much additional storage is used per hour, from a field that contains how much storage is used. Let's say I have a field like disk_space_used_mb. If I have the values 50000, 50100, 50300, the running delta would be 50000, 100, 200, but I don't really care about the first value, and it throws off my graph. I can of course set the max value of the y axis manually, but that isn't dynamic.
How can I prevent this first large value from throwing off my graph? Is there a way to force it to 0?
Sadly, this is currently not possible, and it is a very common problem when plotting running deltas.
As a workaround, if your initial value is static, you can create a new calculated field where you subtract the initial value from all rows (so the first value will always be zero). But obviously this is not an elegant solution, and your chart's Y-axis values will differ from the real values.
If the initial value can be changed by the user (i.e., it is dynamic), you're really out of luck. The only solution I can imagine is to search for an alternative visualization that supports this feature, or to develop your own visualization.
The second option would probably solve your problem, but developing community visualizations is far from an easy task.
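If you can preprocess the data outside the charting tool, forcing the first delta to zero is a one-liner. A minimal sketch in pandas, assuming the field name from the question:
import pandas as pd
used = pd.Series([50000, 50100, 50300], name="disk_space_used_mb")
delta = used.diff().fillna(0)  # running delta with the first value forced to 0
print(delta.tolist())          # [0.0, 100.0, 200.0]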

Google Earth Engine: How can I compare two maps with different resolutions?

I have two maps: Hansen 30 m forest cover and another product at 500 m resolution. I want to look at the correlation between these two maps at 500 m resolution.
Does anyone know how to aggregate one map (30 m) to the resolution of the other (500 m) in Google Earth Engine, so they will overlay perfectly?
Thanks in advance!
In Google Earth Engine you don't have to worry about it: the scaling happens automagically and they will overlay perfectly. Of course the lack of control can be an issue for certain applications, but to my knowledge there is not much you can do about it.
Now, to the point: scaling Hansen's map (Global Forest Change - GFC) to a lower-resolution map.
GEE loads the data internally at native resolution and then generates down-sampled versions at multiples of two, until the entire image fits in a single tile. Then GEE fetches the down-sampled pixels from the nearest higher level of the pyramid and re-samples them to the requested scale. For bands that cannot be interpolated, like GFC's "lossyear" (a pixel value indicating the year loss occurred), GEE selects the first value (of the four "child" values) at each pyramid level.
In other words, if we consider the example of a treecover2000 band with the following 2x2 block of pixel values
10 20
30 40
the down-sampling will result in the value 25 (the mean). Categorical values like 'lossyear' scale differently:
1 2
3 4
Here the result will simply be 1 (the first child value).
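If you want explicit control over the aggregation instead of relying on the pyramid, the GEE Python API exposes reduceResolution. A hedged sketch; the Hansen asset version string and the 500 m asset ID are placeholders to substitute:
import ee
ee.Initialize()
gfc = ee.Image('UMD/hansen/global_forest_change_2015')  # substitute the current version
coarse = ee.Image('YOUR/500M/PRODUCT')                  # placeholder: your 500 m image
# Mean of all ~30 m treecover2000 pixels falling in each 500 m cell,
# reprojected onto the coarse image's grid so the two overlay exactly.
treecover_500 = (gfc.select('treecover2000')
                 .reduceResolution(reducer=ee.Reducer.mean(), maxPixels=1024)
                 .reproject(crs=coarse.projection()))
From there, stacking treecover_500 with the coarse band and reducing with ee.Reducer.pearsonsCorrelation() over your region should give the correlation the question asks about.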

Lock scale on multiple axes in SSRS

I have a chart in SSRS (Reporting Services) that uses both the primary and secondary y axis to plot 4 different years of data. I am also using a series group for multiple stacked values. The problem I am having is that the two axes are not using the same scale, so the numbers look like they don't match the table that is also on this report. How can I lock the two axes so they use the same scale? I would like to still use auto axis if possible, so I don't have to calculate the max and min myself.
Right-click an axis and select "Axis Properties". In the Axis Options tab you can set the minimum and maximum values of the axis; you want these to be set to explicit values (not Auto), and both axes to hold the same values. That is to say, with a minimum of 0 and a maximum of 10, the axis will start at 0 and end at 10; if you have a value of 11, it will just hit the top of the chart. You will also want to set the interval: an interval of 1 makes the axis read 0, 1, 2, 3, etc., while an interval of 2 makes it go 0, 2, 4, 6, etc. Again, you want the same values set on both axes.

Antipole Clustering

I made a photo mosaic script (PHP). This script takes one picture and changes it into a photo built up of little pictures. From a distance it looks like the real picture; when you move closer you see that it is all little pictures. I take a square of a fixed number of pixels and determine the average color of that square. Then I compare this with my database, which contains the average colors of a couple thousand pictures. I determine the color distance to all available images. But running this script fully takes a couple of minutes.
The bottleneck is matching the best picture with a part of the main picture. I have been searching online for how to reduce this and came across "Antipole Clustering." Of course I tried to find some information on how to use this method myself, but I can't seem to figure out what to do.
There are two steps: 1. database acquisition and 2. photomosaic creation.
Let's start with step one; once that is all clear, maybe I can work out step 2 myself.
Step 1:
partition each image of the database into 9 equal rectangles arranged in a 3x3 grid
compute the RGB mean values for each rectangle
construct a vector x composed of 27 components (three RGB components for each rectangle)
x is the feature vector of the image in the data structure
Well, points 1 and 2 are easy, but what should I do at point 3? How do I compose a vector x out of the 27 components (9 sets of R, G, and B means)?
And once I succeed in composing the vector, what is the next step I should take with it?
Peter
Here is how I think the feature vector is computed:
You have 3 x 3 = 9 rectangles.
Each pixel is essentially 3 numbers, 1 for each of the Red, Green, and Blue color channels.
For each rectangle you compute the mean for the red, green, and blue colors for all the pixels in that rectangle. This gives you 3 numbers for each rectangle.
In total, you have 9 (rectangles) x 3 (mean for R, G, B) = 27 numbers.
Simply concatenate these 27 numbers into a single 27 by 1 (often written as 27 x 1) vector, that is, 27 numbers grouped together. This vector of 27 numbers is the feature vector x that represents the color statistics of your photo. In code, if you are using C++, this will probably be an array of 27 numbers, or perhaps an instance of the (aptly named) vector class. You can think of this feature vector as a form of "summary" of what the color in the photo is like. Roughly, it looks like this: [R1, G1, B1, R2, G2, B2, ..., R9, G9, B9], where R1 is the mean/average of the red pixels in the first rectangle, and so on.
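A minimal sketch of that computation, assuming Pillow and NumPy rather than C++ (the function name is my own):
import numpy as np
from PIL import Image
def feature_vector(path):
    img = np.asarray(Image.open(path).convert('RGB'), dtype=np.float64)
    h, w, _ = img.shape
    parts = []
    for i in range(3):        # 3 rows of rectangles
        for j in range(3):    # 3 columns of rectangles
            rect = img[i * h // 3:(i + 1) * h // 3, j * w // 3:(j + 1) * w // 3]
            parts.extend(rect.reshape(-1, 3).mean(axis=0))  # [R, G, B] means
    return np.array(parts)    # shape (27,): [R1, G1, B1, ..., R9, G9, B9]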
I believe step 2 involves some form of comparing these feature vectors, so that those with similar feature vectors (and hence similar color) are placed together. The comparison will likely use the Euclidean distance, or some other metric, to measure how similar the feature vectors (and hence the photos' colors) are to each other.
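As a sketch of that comparison (names are my own), the best match is simply the database vector with the smallest Euclidean distance to the query square's vector:
import numpy as np
def best_match(query_vec, db_vecs):
    dists = np.linalg.norm(db_vecs - query_vec, axis=1)  # distance to every image
    return int(np.argmin(dists))                         # index of the closest one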
Lastly, as Anony-Mousse suggested, converting your pixels from RGB to HSB/HSV color would be preferable. If you use OpenCV or have access to it, this is a simple one-liner. Otherwise the HSV Wikipedia article and similar references will give you the math formulas to perform the conversion.
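For reference, the OpenCV one-liner mentioned (assuming opencv-python and an 8-bit RGB array):
import cv2
import numpy as np
rgb_image = np.zeros((64, 64, 3), dtype=np.uint8)  # stand-in for your thumbnail
hsv = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2HSV)   # the RGB -> HSV conversion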
Hope this helps.
Instead of using RGB, you might want to use HSB space. It gives better results for a wide variety of use cases. Put more weight on hue to get better color matches for photos, or on brightness when composing high-contrast images (logos, etc.).
I have never heard of antipole clustering. But the obvious next step would be to put all the images you have into a large index, say an R-Tree, and maybe bulk-load it via STR. Then you can quickly find matches.
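This answer suggests an R-Tree; as a stand-in sketch of the same "index once, query fast" pattern, SciPy's k-d tree works on the 27-component vectors (random data here as a placeholder):
import numpy as np
from scipy.spatial import cKDTree
db_vecs = np.random.rand(5000, 27)           # stand-in for the database vectors
tree = cKDTree(db_vecs)                      # build the index once
dist, idx = tree.query(np.random.rand(27))   # nearest database image for a query
(Tree indexes lose efficiency as dimensionality grows, which is part of why specialized structures like the antipole tree exist.)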
Maybe it means vector quantization (VQ). In VQ the image isn't subdivided into rectangles but into density areas; you can then take the mean point of each cluster. First you need to take all the colors and pixels separately and transfer them into a vector with XY coordinates. Then you can use a density clustering like Voronoi cells and get the mean point. You can compare this point with other pictures in the database. Read here about VQ: http://www.gamasutra.com/view/feature/3090/image_compression_with_vector_.php.
How to compute a vector from adjacent pixels:
d(x) = I(x+1,y) - I(x,y)
d(y) = I(x,y+1) - I(x,y)
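In NumPy, those adjacent-pixel differences are (random grayscale data as a stand-in):
import numpy as np
I = np.random.rand(64, 64)    # stand-in grayscale image
dx = I[:, 1:] - I[:, :-1]     # d(x) = I(x+1, y) - I(x, y)
dy = I[1:, :] - I[:-1, :]     # d(y) = I(x, y+1) - I(x, y)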
Here's another link: http://www.leptonica.com/color-quantization.html.
Update: once you have computed the mean color of your thumbnails, you can proceed to sort all the mean colors in an RGB map, using the formula I gave you to compute the vector x. Now that you have a vector for each of your thumbnails, you can use the antipole tree to search for a thumbnail. This is possible because the antipole tree is something like a kd-tree that subdivides the space. Read here about the antipole tree: http://matt.eifelle.com/2012/01/17/qtmosaic-0-2-faster-mosaics/. Maybe you can ask the author and download the source code?

Maps: Does calculating distance between 2 points factor in altitude?

Does Postgres' spatial plugin, or any spatial package for that matter, factor in altitude when calculating the distance between 2 points?
I know the spatial packages factor in the approximate curvature of the earth, but if one location is at the top of a mountain and the other location is close to the sea, it seems like the calculated distance between those two points would vary greatly if the difference in altitude was not taken into account.
Also keep in mind that if 2 points are at the same sea-level altitude but a mountain exists between them, the distance package should account for this.
Those factors are not being counted at all. Why? The software only knows about the two features (the two points you are measuring the distance between), plus the sphere/spheroid and a datum/projection factor.
For that to happen, you probably need to use a densified linestring, in which you connect your points with n vertices, each of them Z-aware.
Imagine this (WKT): LINESTRING Z (0 1 2, 0 2 3, 0 3 4, 0 10 15, 0 11 -1).
Asking the software to calculate the distance between each pair of consecutive vertices and summing them up will account for the variations in terrain. Without something like that, it is impossible to capture irregularities in the terrain.
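As a sketch of that summation (plain 3D Euclidean segments, assuming projected coordinates; the vertices are the ones from the WKT above):
from math import dist  # Python 3.8+
line = [(0, 1, 2), (0, 2, 3), (0, 3, 4), (0, 10, 15), (0, 11, -1)]
terrain_length = sum(dist(a, b) for a, b in zip(line, line[1:]))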
No GIS software can tell, by itself, what those irregularities in terrain are, and therefore it cannot take them into account.
You can create such linestrings (automatically) with software like ArcGIS (and others), using a line (between two points) and a surface file, such as the ones provided freely by NASA (the SRTM project). These files come in a raster format, and each pixel has X, Y, and Z values, in meters. Traversing the line you want, coupled with that terrain profile, you can achieve the calculation you are after. If you need extra-precise calculations, you need a precise surface and precise Z values at each vertex of this profile line.
That cleared up?
If the distance formula you're using does not take the altitude of the two points as parameters (in addition to the latitudes and longitudes of the two points), then it does not factor altitude into the distance calculation. In any event, altitude difference does not have a very significant effect on calculated distance.
As usual with GPS, the difference in distance calculations that altitude would make is probably smaller than the error in most commercial GPS devices anyway, so in most applications altitude can be safely dispensed with (altitude measurements themselves are pretty inaccurate with commercial GPS devices, although survey data on altitudes is quite accurate).
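A back-of-the-envelope sketch of why: fold the altitude difference into a 2D great-circle distance with Pythagoras (haversine formula; the coordinates are made up):
from math import radians, sin, cos, asin, sqrt
def haversine_m(lat1, lon1, lat2, lon2, r=6371000.0):
    """Great-circle distance in meters, ignoring altitude."""
    p1, p2 = radians(lat1), radians(lat2)
    dlat, dlon = p2 - p1, radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(p1) * cos(p2) * sin(dlon / 2) ** 2
    return 2 * r * asin(sqrt(a))
d2d = haversine_m(47.0, 8.0, 47.1, 8.1)      # roughly 13.5 km on the surface
d3d = sqrt(d2d ** 2 + (2000.0 - 10.0) ** 2)  # sea level vs. a 2000 m summit
# d3d exceeds d2d by only about 1% even with ~2 km of altitude difference.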
PostgreSQL does not factor in altitude when calculating distances. It is all done on a planar surface.
Most database spatial packages will not take this into account; although, if your point is 3D, i.e., has a Z coordinate, that might happen.
I don't have PostgreSQL on this machine, but try this:
SELECT ST_Distance(ST_MakePoint(0,0,10), ST_MakePoint(0,0,0));
It's fairly easy to tell whether it is taking your Z value into account, since the return should then be > 0. If that turns out to be true, just create Z-aware features and you will be successful.
What SQL Server 2008, for example, takes into account when calculating distances is the position of a Geography feature on a sphere. Geometry features in SQL Server will always use planar calculations.
EDIT: I checked this in the PostGIS manual.
For Z-aware points you must use the ST_MakePoint function. It takes up to 4 arguments (X, Y, Z, and M); ST_Point only takes two (X, Y).
ST_Distance documentation (2D calculations):
http://postgis.refractions.net/documentation/manual-1.4/ST_Distance.html
ST_Distance_Sphere documentation (takes into account a fixed sphere for calculations, i.e., not planar):
http://postgis.refractions.net/documentation/manual-1.4/ST_Distance_Sphere.html
ST_Distance_Spheroid documentation (takes into account a chosen spheroid for your calculations):
http://postgis.refractions.net/documentation/manual-1.4/ST_Distance_Spheroid.html
ST_Point documentation:
http://postgis.refractions.net/documentation/manual-1.4/ST_Point.html
