R - Contour map - arrays

I have plotted a contour map but I need to make some improvements. This is the structure of the data used:
str(lon_sst)
# num [1:360(1d)] -179.5 -178.5 -177.5 -176.5 -175.5 ...
str(lat_sst)
# num [1:180(1d)] -89.5 -88.5 -87.5 -86.5 -85.5 -84.5 -83.5 -82.5 -81.5 -80.5 ...
dim(cor_Houlgrave_SF_SST_JJA_try)
# [1] 360 180
require(maps)
maps::map(database="world", fill=TRUE, col="light blue")
maps::map.axes()
contour(x=lon_sst, y=lat_sst, z=cor_Houlgrave_SF_SST_JJA_try[c(181:360, 1:180),],
zlim=c(-1,1), add=TRUE)
par(ask=TRUE)
filled.contour(x = lon_sst, y=lat_sst,
z=cor_Houlgrave_SF_SST_JJA_try[c(181:360, 1:180),],
zlim=c(-1,1), color.palette=heat.colors)
Because most of the correlations are close to 0, it is very hard to see the big ones.
1. Can I make them easier to see, or can I change the resolution so I can zoom in? At the moment the contours are so tightly spaced that I can't tell what the contour levels are.
2. Where can I see the increment? I set my range to (-1, 1), but I don't know how to set the interval manually.
3. Can someone tell me how to plot a specific region of the map, for example longitude from 100 to 160 and latitude from -50 to -80? I have tried replacing lon_sst and lat_sst, but I get a dimension error. Thanks.

To answer 1 and 3, which appear to be the same request, try:
maps::map(database="world", fill=TRUE, col="light blue",
ylim=c(-80, -50), xlim=c(100,160) )
To address 2: you have a much smaller range than [-1, 1]. The labels on those contour lines are numbers like 0.06, -0.02 and 0.02. The contour function accepts either an 'nlevels' or a 'levels' argument. Once you have a blown-up section you can use that to adjust the z-resolution of the contours.
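For example (a sketch reusing the object names from the question), explicit levels concentrated around zero make the small correlations stand out:
contour(x=lon_sst, y=lat_sst, z=cor_Houlgrave_SF_SST_JJA_try[c(181:360, 1:180),],
        levels=seq(-0.2, 0.2, by=0.05),  # one contour line every 0.05
        add=TRUE)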

contourplot in the lattice package can also produce these types of contour plots, and makes it easy to combine contour lines and fill colours. This may or may not suit your needs, but by filling contour intervals you can do away with the text labels, which can get a little crowded if you want high-resolution contours.
I don't have your sea surface temperature data, so the following figure uses dummy data, but you should get something similar. See ?contourplot and ?panel.levelplot for possible arguments.
For your desired small scale plot, overlaying the world map plot is probably inappropriate, especially considering that the area of interest is in the ocean.
library(lattice)
contourplot(cor_Houlgrave_SF_SST_JJA_try, region=TRUE, at=seq(-1, 1, 0.25),
labels=FALSE, row.values=lon_sst, column.values=lat_sst,
xlim=c(100, 160), ylim=c(-80, -50), xlab='longitude', ylab='latitude')
Here, the at argument controls the values at which contour lines will be calculated and plotted (and hence the number of breaks in the colour ramp). In my example, contour lines are drawn at -0.75, -0.5, -0.25, 0, 0.25, 0.5, 0.75 and 1 (with -1 being the background). Changing to at=seq(-1, 1, 0.5), for example, would produce contour lines at -0.5, 0, 0.5 and 1.

Related

CNN with RGB input and BW binary output

I am a beginner in deep learning and I am working with Keras built on top of TensorFlow. I am trying to use RGB images (540 x 360 resolution) to predict bounding boxes.
My labels are binary (black/white) 2-dimensional NumPy arrays of dimensions (540, 360), where all pixels are 0 except for the box edges, which are 1.
Like this:
[[0 0 0 0 0 0 ... 0]
[0 1 1 1 1 0 ... 0]
[0 1 0 0 1 0 ... 0]
[0 1 0 0 1 0 ... 0]
[0 1 1 1 1 0 ... 0]
[0 0 0 0 0 0 ... 0]]
There can be more than one bounding box in every picture. A typical image could look like this:
So my input has dimensions (None, 540, 360, 3) and my output has dimensions (None, 540, 360), but if I add an inner axis I can change the shape to (None, 540, 360, 1).
How would I define a CNN model so that it fits these criteria? How can I design a CNN with these inputs and outputs?
You have to differentiate between object detection and object segmentation. While both can be used for similar problems, the underlying CNN architectures look very different.
Object detection models use a CNN classification/regression architecture, where the output refers to the coordinates of the bounding boxes. It's common practice to use 4 values per bounding box: vertical center, horizontal center, width and height. Search for Faster R-CNN, SSD or YOLO to find popular object detection models for Keras. In your case you would need to define a function that converts your current labels to the 4 coordinates I mentioned.
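For instance, a rough sketch of such a conversion (a hypothetical helper, assuming SciPy is available and that the box outlines in a label do not touch each other):
from scipy import ndimage

def mask_to_boxes(mask):
    """Convert a (540, 360) binary edge mask into a list of (cy, cx, h, w) boxes."""
    labeled, num_boxes = ndimage.label(mask)   # one connected component per box outline
    boxes = []
    for sl in ndimage.find_objects(labeled):   # bounding slices of each component
        y0, y1 = sl[0].start, sl[0].stop
        x0, x1 = sl[1].start, sl[1].stop
        boxes.append(((y0 + y1) / 2.0, (x0 + x1) / 2.0, y1 - y0, x1 - x0))
    return boxes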
Object segmentation models commonly use an architecture referred to as an encoder-decoder network, where the original image is scaled down and compressed in the first half and then brought back to its original resolution to predict a full image. Search for SegNet, U-Net or Tiramisu to find popular object segmentation models for Keras. My own implementation of U-Net can be found here. In your case you would need to define a custom function that fills all the 0s inside your bounding boxes with 1s. Understand that this solution will not predict bounding boxes as such, but segmentation maps showing regions of interest.
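And a sketch of that fill step for the segmentation route (again a hypothetical helper, assuming SciPy):
from scipy import ndimage

def edges_to_segmentation(mask):
    """Turn a binary box-outline mask into a filled-region mask (0s inside boxes become 1s)."""
    return ndimage.binary_fill_holes(mask).astype(mask.dtype)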
Which is right for you depends on what precisely you want to achieve. For getting actual bounding boxes you want to perform object detection. However, if you're interested in highlighting regions of interest that go beyond rectangular windows, segmentation may be a better fit. In theory, you can use your rectangle labels for segmentation, where the network will learn to create better masks than the inaccurate ground-truth segmentation, provided you have enough data.
This is a simple example of how to write intermediate layers to achieve the output. You can use this as a starter code.
# Assumes the usual Keras imports; bce_dice_loss and dice_coeff are custom
# loss/metric functions that must be defined elsewhere.
from keras.models import Model
from keras.layers import (Input, Conv2D, BatchNormalization, Activation,
                          MaxPooling2D, UpSampling2D, concatenate)
from keras.optimizers import RMSprop

def model_360x540(input_shape=(360, 540, 3), num_classes=1):
    inputs = Input(shape=input_shape)
    # 360x540x3
    downblock0 = Conv2D(32, (3, 3), padding='same')(inputs)
    # 360x540x32
    downblock0 = BatchNormalization()(downblock0)
    downblock0 = Activation('relu')(downblock0)
    downblock0_pool = MaxPooling2D((2, 2), strides=(2, 2))(downblock0)
    # 180x270x32
    centerblock0 = Conv2D(1024, (3, 3), padding='same')(downblock0_pool)
    # 180x270x1024
    centerblock0 = BatchNormalization()(centerblock0)
    centerblock0 = Activation('relu')(centerblock0)
    upblock0 = UpSampling2D((2, 2))(centerblock0)
    # 360x540x1024
    upblock0 = concatenate([downblock0, upblock0], axis=3)
    upblock0 = Activation('relu')(upblock0)
    upblock0 = Conv2D(32, (3, 3), padding='same')(upblock0)
    # 360x540x32
    upblock0 = BatchNormalization()(upblock0)
    upblock0 = Activation('relu')(upblock0)
    classify = Conv2D(num_classes, (1, 1), activation='sigmoid')(upblock0)
    # 360x540x1
    model = Model(inputs=inputs, outputs=classify)
    model.compile(optimizer=RMSprop(lr=0.001), loss=bce_dice_loss,
                  metrics=[dice_coeff])
    return model
The downblock represents the block of layers which perform downsampling (MaxPooling2D).
The centerblock has no sampling layer.
The upblock represents the block of layers which perform upsampling (UpSampling2D).
So here you can see how (360, 540, 3) is transformed to (360, 540, 1).
Basically, you can add such blocks of layers to create your model.
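As a usage sketch (assuming training arrays X of shape (N, 360, 540, 3) and Y of shape (N, 360, 540, 1) are already prepared, and that bce_dice_loss and dice_coeff are defined):
model = model_360x540()
model.summary()                 # check the (360, 540, 3) -> (360, 540, 1) shapes
model.fit(X, Y, batch_size=8, epochs=10, validation_split=0.1)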
Also check out Holistically-Nested Edge Detection, which may help you further with the edge detection task.
Hope this helps!
I have not worked with Keras, but I will outline a more general approach that can be used with any framework.
Here is the full procedure.
Data preparation: I know your labels are the edges of the boxes, which will also work, but I recommend that instead of edges you prepare the dataset by marking the complete box, as in the sample (I have marked two boxes). Your dataset then has three classes (box, edge of box, and background). Create two lists: images and labels.
Get a pre-trained model (ResNet-51 recommended), its solver, and the training prototxt from here; remove the fc1000 layer and add deconvolution/upsampling layers to match your input size. Use padding in the first layer to make it square, and crop in the deconvolution layer to match the input and output dimensions.
Transfer the weights from the previously trained (original) network and train your network.
Test on your dataset and create bounding boxes from the detected blobs.

OpenGL model, view, projection matrices

I am trying to understand cameras in OpenGL that use matrices.
I've written a simple shader that looks like this:
#version 330 core
layout (location = 0) in vec3 a_pos;
layout (location = 1) in vec4 a_col;
uniform mat4 u_mvp_mat;
uniform mat4 u_mod_mat;
uniform mat4 u_view_mat;
uniform mat4 u_proj_mat;
out vec4 f_color;
void main()
{
vec4 v = u_mvp_mat * vec4(0.0, 0.0, 1.0, 1.0);
gl_Position = u_mvp_mat * vec4(a_pos, 1.0);
//gl_Position = u_proj_mat * u_view_mat * u_mod_mat * vec4(a_pos, 1.0);
f_color = a_col;
}
It's a bit verbose, but that's because I am testing both approaches: passing in the model, view and projection matrices and doing the multiplication on the GPU, or doing the multiplication on the CPU, passing in the MVP matrix, and just doing the mvp * position multiplication in the shader.
I understand that the latter can offer a performance increase, but since I'm only drawing one quad I don't see any performance issues at this point.
Right now I use this code to get the locations from my shader and create the model, view and projection matrices.
pos_loc = get_attrib_location(ce_get_default_shader(), "a_pos");
col_loc = get_attrib_location(ce_get_default_shader(), "a_col");
mvp_matrix_loc = get_uniform_location(ce_get_default_shader(), "u_mvp_mat");
model_mat_loc = get_uniform_location(ce_get_default_shader(), "u_mod_mat");
view_mat_loc = get_uniform_location(ce_get_default_shader(), "u_view_mat");
proj_matrix_loc =
get_uniform_location(ce_get_default_shader(), "u_proj_mat");
float h_w = (float)ce_get_width() * 0.5f; //width = 320
float h_h = (float)ce_get_height() * 0.5f; //height = 480
model_mat = mat4_identity();
view_mat = mat4_identity();
proj_mat = mat4_identity();
point3* eye = point3_new(0, 0, 0);
point3* center = point3_new(0, 0, -1);
vec3* up = vec3_new(0, 1, 0);
mat4_look_at(view_mat, eye, center, up);
mat4_translate(view_mat, h_w, h_h, -20);
mat4_ortho(proj_mat, 0, ce_get_width(), 0, ce_get_height(), 1, 100);
mat4_scale(model_mat, 30, 30, 1);
mvp_mat = mat4_identity();
After this I set up my VAO and VBOs, then get ready to render.
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
glUseProgram(ce_get_default_shader()->shader_program);
glBindVertexArray(vao);
mvp_mat = mat4_multi(mvp_mat, view_mat, model_mat);
mvp_mat = mat4_multi(mvp_mat, proj_mat, mvp_mat);
glUniformMatrix4fv(mvp_matrix_loc, 1, GL_FALSE, mat4_get_data(mvp_mat));
glUniformMatrix4fv(model_mat_loc, 1, GL_FALSE, mat4_get_data(model_mat));
glUniformMatrix4fv(view_mat_loc, 1, GL_FALSE, mat4_get_data(view_mat));
glUniformMatrix4fv(proj_matrix_loc, 1, GL_FALSE, mat4_get_data(proj_mat));
glDrawElements(GL_TRIANGLES, quad->vertex_count, GL_UNSIGNED_SHORT, 0);
glBindVertexArray(0);
Assuming that all the matrix math is correct, I would like to abstract the view and projection matrices into a camera struct and the model matrix into a sprite struct, so that I can avoid all this matrix math and make things easier to use.
The matrix multiplication order is:
Projection * View * Model * Vector
so the camera would hold the projection and view matrices while the sprite holds the model matrix.
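For example, a minimal sketch of such structs, reusing the mat4 helpers from the code above (the exact types and layout are assumptions, not taken from the question):
typedef struct {
    mat4* view_mat;  /* world -> eye */
    mat4* proj_mat;  /* eye -> clip */
} camera;

typedef struct {
    mat4* model_mat; /* local -> world (translate/rotate/scale) */
} sprite;

/* rebuild proj * view * model right before uploading the uniform */
void compute_mvp(mat4* mvp, camera* cam, sprite* spr)
{
    mat4_multi(mvp, cam->view_mat, spr->model_mat); /* mvp = view * model */
    mat4_multi(mvp, cam->proj_mat, mvp);            /* mvp = proj * view * model */
}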
Do all your camera transformations and sprite transformations, then right before you send the data to the GPU do the matrix multiplications.
If I remember correctly, matrix multiplication isn't commutative, so view * projection * model would produce the wrong matrix.
pseudo code
glClearxxx(....);
glUseProgram(..);
glBindVertexArray(..);
mvp_mat = mat4_identity();
proj_mat = camera_get_proj_mat();
view_mat = camera_get_view_mat();
mod_mat = sprite_get_transform_mat();
mat4_multi(mvp_mat, view_mat, mod_mat); //mvp holds view * model
mat4_multi(mvp_mat, proj_mat, mvp_mat); //mvp holds proj * view * model
glUniformMatrix4fv(mvp_matrix_loc, 1, GL_FALSE, mat4_get_data(mvp_mat));
glDrawElements(...);
glBindVertexArray(0);
Is that a performant way to go about doing this that is scalable?
Is that a performant way to go about doing this that is scalable?
Yes, unless you have a very exotic use case of some sort which is very unlike the norm.
The last thing you should typically ever worry about is the performance of retrieving modelview and projection matrices from a camera.
That's because those matrices typically only need to be fetched once per frame per viewport. There are millions of iterations' worth of other work that can occur in a frame while rasterizing primitives, and pulling matrices out of a camera is just a simple constant-time operation.
So typically you want to just make it as convenient as you like. In my case, I go all the way through an abstract interface of function pointers in a central SDK, at which point the functions then compute the proj/mv/ti_mv matrix on the fly out of user-defined properties associated with the camera. In spite of this, it never shows up as a hotspot -- it doesn't even show up in the profiler at all.
There are far more expensive things to worry about. Scalability implies scale; the complexity of retrieving matrices from a camera doesn't scale. The number of triangles, quads, lines or other primitives you want to render can scale, and the number of fragments processed in a fragment shader can scale. Cameras typically don't scale except with respect to the number of viewports, and no one should ever need a million viewports.
I haven't checked it in detail, but what you're doing generally looks OK.
I would like to abstract view and projection matrix out into a camera struct
That's a most appropriate idea; I can hardly imagine a serious GL application without such an abstraction.
Is that a performant way to go about doing this that is scalable?
General constraints on scalability are:
diffuse and specular BRDFs (which also require, by the way, a light uniform, a normal attribute, and the calculation of a normal matrix if the scaling of the model is non-uniform), which need per-pixel illumination for quality rendering
the same with multiple lights (e.g. the sun and a close spotlight)
shadow maps (one for each light source?)
transparency
reflections (mirrors, glass, water)
textures
As you may take it from the list, you will not get very far with just an MVP uniform and a vertex coordinate attribute.
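As an illustration only (names invented, not taken from the question's shader), even basic per-pixel diffuse lighting already needs more inputs than an MVP and a position:
#version 330 core
layout (location = 0) in vec3 a_pos;
layout (location = 1) in vec3 a_normal;   // extra attribute
uniform mat4 u_mvp_mat;
uniform mat4 u_mod_mat;
uniform mat3 u_normal_mat;                // inverse transpose of the model matrix
out vec3 f_world_pos;
out vec3 f_world_normal;
void main()
{
    f_world_pos = vec3(u_mod_mat * vec4(a_pos, 1.0));
    f_world_normal = u_normal_mat * a_normal;
    gl_Position = u_mvp_mat * vec4(a_pos, 1.0);
}
// ...plus at least a light position/colour uniform and the BRDF evaluation
// in the fragment shader.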
But the mere number of uniforms is by far not the most crucial point for performance; judging from your code I'm positive that you will not recompile your shaders unnecessarily, will update your uniforms only when needed, will use Uniform Buffer Objects, etc.
The issue is the data that is plugged into those uniforms and VBOs. Or not.
Consider humanoid mesh "Alice" running (that's a mesh morph + translation) across a city square on a windy (water will have ripples) evening (more than one relevant light source), passing a fountain.
Let's say we compute it all, by all means, on the CPU, old-school, and only plug ready-to-render data into the shaders:
Alice's mesh is morphed, thus her VBOs need an update
Alice's mesh will move; thus all affected shadow maps will need an update (OK, granted, they are generated by shadow/illumination loops on the GPU, but if you do it the wrong way you will shove a lot of data around)
Alice's reflection in the fountain will come and go
Alice's hair will be swirled; the CPU may have quite a busy time, to say the least
(in fact the latter is so difficult that you will hardly ever see a halfway-realistic real-time animation of long, open hair, but amazingly (no, not really) many ponytails and short haircuts)
And we've not yet talked about Alice's attire; let's just hope she's wearing a t-shirt and jeans (not a wide shirt and a skirt, which would require fall-of-the-folds and collision calculations).
As you may have guessed, that old-school approach doesn't take us far, and thus a balance has to be found between CPU and GPU operations.
In addition, one should think about parallelizing calculations at an early stage. It is advantageous to have the data as flat as possible, in chunks as large as reasonable, so one just passes a pointer and size to a GL call and bids that data farewell without any copying, re-arranging, looping or further ado.
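For instance, a sketch of that "pointer and size" hand-over (the buffer and vertex names are illustrative):
/* one flat, tightly packed vertex array, uploaded in a single call */
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER,
             vertex_count * sizeof(vertex),  /* size in bytes */
             vertices,                       /* pointer to the flat data */
             GL_DYNAMIC_DRAW);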
That's my 2 cents of wisdom for today about GL performance and scalability.

Openlayers 3 Circle radius in meters

How to get Circle radius in meters
Maybe this is an existing question, but I am not getting the proper result. I am trying to create a polygon in PostGIS with the same radius and center that I get from an OpenLayers circle.
To get the radius in meters I followed this.
Running example link.
var radiusInMeters = circleRadius * ol.proj.METERS_PER_UNIT['m'];
After getting the center and radius (in meters), I am trying to generate a polygon (WKT) with PostGIS (a server-side job) and draw that feature on the map like this.
select st_astext(st_buffer('POINT(79.25887485937808 17.036647682474722 0)'::geography, 365.70644956827164));
But the two do not cover the same area. Can anybody please let me know what I am doing wrong?
Basically, my input/output to/from the circle will be in meters only.
ol.geom.Circle might not represent a circle
OpenLayers Circle geometries are defined on the projected plane. This means that they are always circular on the map, but the area covered might not represent an actual circle on earth. The actual shape and size of the area covered by the circle will depend on the projection used.
This could be visualized by Tissot's indicatrix, which shows how circular areas on the globe are transformed when projected onto a plane. Using the projection EPSG:3857, this would look like:
The image is from OpenLayers 3's Tissot example and displays areas that all have a radius of 800,000 meters. If these circles were drawn as ol.geom.Circle with a radius of 800000 (using EPSG:3857), they would all be the same size on the map, but the ones closer to the poles would represent a much smaller area of the globe.
This is true for most things with OpenLayers geometries. The radius, length or area of a geometry are all reported in the projected plane.
So if you have an ol.geom.Circle, getting the actual surface radius depends on the projection and the feature's location. For some projections (such as EPSG:4326) there is no accurate answer, since the geometry might not even represent a circular area.
However, assuming you are using EPSG:3857 and not drawing extremely big circles or very close to the poles, the Circle will be a good representation of a circular area.
ol.proj.METERS_PER_UNIT
ol.proj.METERS_PER_UNIT is just a conversion table between meters and some other units. ol.proj.METERS_PER_UNIT['m'] will always return 1, since the unit 'm' is meters. EPSG:3857 uses meters as units, but as noted they are distorted towards the poles.
Solution (use after reading and understanding the above)
To get the actual on-the-ground radius of an ol.geom.Circle, you must find the distance between the center of the circle and a point on its edge. This can be done using ol.Sphere:
var center = geometry.getCenter()
var radius = geometry.getRadius()
var edgeCoordinate = [center[0] + radius, center[1]];
var wgs84Sphere = new ol.Sphere(6378137);
var groundRadius = wgs84Sphere.haversineDistance(
ol.proj.transform(center, 'EPSG:3857', 'EPSG:4326'),
ol.proj.transform(edgeCoordinate, 'EPSG:3857', 'EPSG:4326')
);
More options
If you wish to add a geometry representing a circular area on the globe, you should consider using the method used in the Tissot example above. That is, defining a regular polygon with enough points to appear smooth. That would make it transferable between projections, and appears to be what you are doing server side. OpenLayers 3 enables this by ol.geom.Polygon.circular:
var circularPolygon = ol.geom.Polygon.circular(wgs84Sphere, center, radius, 64);
There is also ol.geom.Polygon.fromCircle, which takes an ol.geom.Circle and transforms it into a Polygon representing the same area.
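For example (a minimal sketch; circleGeometry is assumed to be an existing ol.geom.Circle in the view projection):
var polygonFromCircle = ol.geom.Polygon.fromCircle(circleGeometry, 64); // 64 vertices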
My answer is a complement to the great answer by Alvin.
Imagine you want to draw a circle of a given radius (in meters) around a point feature. In my particular case, a 200m circle around a moving vehicle.
If this circle has a small diameter (less than a few kilometers), you can ignore the earth's roundness. Then you can use the "Circle" marker in the style function of your point feature.
Here is my style function:
private pointStyle(feature: Feature, resolution: number): Array<Style> {
const viewProjection = map.getView().getProjection();
const coordsInViewProjection = (<Point>(feature.getGeometry())).getCoordinates();
const longLat = toLonLat(coordsInViewProjection, viewProjection);
const latitude_rad = longLat[1] * Math.PI / 180.;
const circle = new Style({
image: new CircleStyle({
stroke: new Stroke({color: '#7c8692'}),
radius: this._circleRadius_m / (resolution / viewProjection.getMetersPerUnit() * Math.cos(latitude_rad)),
}),
});
return [circle];
}
The trick is to scale the radius by the latitude cosine. This will "locally" disable the distortion effect we can observe in the Tissot Example.

x3d drawing a pyramid

Hello fellow programmers. I have to say I just started drawing figures in X3D and I really need to construct a pyramid for a project of mine. Yet nothing I search for seems to help me, as I cannot understand the logic behind how the figures are drawn just from looking at other people's code.
I managed to draw a cone using some keywords I found, like "bottomRadius", "height", etc.
But I have no idea how I could convert something like this into a pyramid. What keywords should I be aware of that could help me draw a triangular base instead of a circle, as the cone does with the keyword bottomRadius?
Use IndexedFaceSet's coord to define points in space that you can connect (create triangles) using the coordIndex.
e.g.:
Shape {
  geometry IndexedFaceSet {
    coord Coordinate {
      point [
        1 0 0,
        0 1 0,
        0 0 1,
        0 0 0,
      ]
    }
    coordIndex [
      0,1,2,-1 #face1
      0,1,3,-1 #face2
      0,2,3,-1 #face3
      1,2,3,-1 #face4
    ]
    color Color {
      color [ 1 0 0, 0 1 0, 0 0 1, 1 0 1, ]
    }
    colorPerVertex TRUE
  }
}
There is no fundamental pyramid shape. The only fundamental shapes are the box, cone, cylinder, and sphere. You will need to use one of the detailed geometry nodes: IndexedFaceSet or TriangleSet. These can be coded by hand, where you determine the coordinates of all of the vertices. You can also use a modeling tool (Blender is open source) to construct the geometry and export it as X3D.
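For instance, a hand-coded sketch of a square-based pyramid (the coordinates are chosen arbitrarily; solid FALSE simply renders both sides of each face so the winding order does not matter):
Shape {
  geometry IndexedFaceSet {
    solid FALSE
    coord Coordinate {
      point [
        -1 0 -1,  # four base corners
         1 0 -1,
         1 0  1,
        -1 0  1,
         0 2  0,  # apex
      ]
    }
    coordIndex [
      0,1,2,3,-1 #square base
      0,1,4,-1   #side 1
      1,2,4,-1   #side 2
      2,3,4,-1   #side 3
      3,0,4,-1   #side 4
    ]
  }
}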

Logarithmic scale with jFreeChart BoxAndWhiskerChart

I've got a box and whisker chart being populated like so:
JFreeChart chart = ChartFactory.createBoxAndWhiskerChart(
"Average Fitness",
"Generation",
"Fitness",
aveDataSet,
true);
ChartFrame frame = new ChartFrame("Average Fitness", chart);
frame.pack();
frame.setVisible(true);
But the scale of the data makes the chart hard to read. At the left of the x-axis the values are in the neighborhood of 250,000,000, but by about the halfway point the values are below 10 (and still converging toward 0). This long-tail convergence is impossible to see with the linear y-axis, but I can't figure out whether I can replace it with a log scale.
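One way this might be done (a sketch, assuming JFreeChart's org.jfree.chart.axis.LogarithmicAxis and that the factory call above produced a CategoryPlot; note that a pure log axis cannot display zero, so values converging toward 0 may need special handling):
CategoryPlot plot = chart.getCategoryPlot();
LogarithmicAxis logAxis = new LogarithmicAxis("Fitness");
logAxis.setAllowNegativesFlag(true); // lets the axis cope with values <= 0
plot.setRangeAxis(logAxis);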
