Calculation of needed zoom in PostGIS based on extent

Is there some way in PostGIS to calculate the zoom I would need in my web application (Leaflet.js) to visualize the full extent of a geo table? I know I can get it in Leaflet.js by passing its bounds, but I need to calculate it in PostGIS.
I can get the centroid and the extent of my geo_table like this:
SELECT ST_AsText(ST_Centroid(ST_Extent(the_geom))),
       ST_AsText(ST_Extent(the_geom))
FROM my_geo_table;
thanks

You can't in general, because it depends on the map's size in pixels. In other words, it is possible if the size is fixed (with a little math!). Refer to the Leaflet source code.
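For what it's worth, here is a minimal Python sketch of that math, assuming a fixed viewport in pixels and Leaflet's default 256 px tiles; the extent is given in lon/lat degrees, as in the ST_Extent output above:

import math

TILE_SIZE = 256  # Leaflet's default tile size in pixels

def mercator_y(lat_deg):
    # Fraction of the Web Mercator world height, measured from the top.
    s = math.sin(math.radians(lat_deg))
    return 0.5 - math.log((1 + s) / (1 - s)) / (4 * math.pi)

def zoom_for_extent(min_lon, min_lat, max_lon, max_lat, map_w_px, map_h_px):
    # Largest integer zoom at which the whole extent fits in the viewport.
    frac_x = (max_lon - min_lon) / 360.0                # share of the world's width
    frac_y = mercator_y(min_lat) - mercator_y(max_lat)  # share of the world's height
    zoom_x = math.log2(map_w_px / (TILE_SIZE * frac_x))
    zoom_y = math.log2(map_h_px / (TILE_SIZE * frac_y))
    return math.floor(min(zoom_x, zoom_y))

# e.g. an extent from (-10, 35) to (30, 60) shown on a 1024x768 map:
print(zoom_for_extent(-10, 35, 30, 60, 1024, 768))  # -> 4

You could port the same formulas into a SQL function if the computation really has to live in PostGIS.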

Related

How to train a custom object detector from scratch in TensorFlow.js?

I followed multiple examples to train a custom object detector in TensorFlow.js. The main problem I am facing is that everywhere a pretrained model is used.
Pretrained models are fine for general use cases, but they fail in custom scenarios. For example, take this example from the official TensorFlow.js examples: it uses MobileNet, and MobileNet has a 224x224 image size restriction, which defeats the whole purpose, because my images are big and not of the same aspect ratio, so resizing is not an option.
I have tried multiple examples; all follow the same path one way or another.
What I want:
Any example by which I can train a custom object detector from scratch in TensorFlow.js.
Although the answer sounds simple, trust me, I have been searching for this for multiple days. Any help will be greatly appreciated. Thanks
Currently it is not yet possible to use the TensorFlow Object Detection API in Node.js. But the image size should not be a restriction: instead of resizing, you can crop your image and keep only the part that contains the object to be detected.
One approach would be to partition the image into 224x224 tiles and run the detection on all partitions, but what if the object lies between two partitions?
The image does not need to be partitioned for this. When labelling the image, you will need to know the x, y coordinates (from the top left) and the w, h of the detected box. You only need to crop a part of the image that contains the box. Cropping at the coordinates x - (224-w)/2, y - (224-h)/2 can be a good start (see the sketch after this list). There are two issues with these coordinates:
the detected boxes will always be in the center, so the training will be biased. To prevent it, a random factor can be used: x - (224-w)/r, y - (224-h)/r, where r can be taken randomly from [1, 10] for instance
if the detected boxes are bigger than 224x224, you might first choose to resize the image (keeping its aspect ratio) before cropping. In this case the box size (w, h) will need to be readjusted according to the scale used for the resizing
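As an illustration only (the question is about TensorFlow.js, but the coordinate math is language-independent), here is a Python sketch of such a crop; the helper name is made up, and the uniform random offset plays the role of the r factor above:

import random

CROP = 224  # the model's expected input size

def crop_window(x, y, w, h, img_w, img_h):
    # Choose a CROP x CROP window containing the labelled box whose
    # top-left corner is (x, y) and size is (w, h). The random offset
    # keeps the box from always landing dead-center (the bias above).
    left = x - random.randint(0, max(CROP - w, 0))
    top = y - random.randint(0, max(CROP - h, 0))
    # Clamp so the window stays inside the image.
    left = min(max(left, 0), img_w - CROP)
    top = min(max(top, 0), img_h - CROP)
    # Re-express the box relative to the window for the new label.
    new_box = (x - left, y - top, w, h)
    return (left, top, left + CROP, top + CROP), new_box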

How to mark some grid points on a netCDF map?

I can make 2D netCDF maps of some quantity. I open them in Panoply and there is a color map of that quantity. But I cannot visualize some boolean value.
Can I somehow mark particular grid points with some symbol on the map (it can be a diamond, square, triangle... whatever)? Is there a way to do it in Fortran 90? I also accept Python-related help.
Again: I mean there would be a color map (from real values), which I can do, and at the same time some values would have e.g. a triangle on them.
If I understand the question correctly, then you can easily do that with Python and some plotting library (e.g. Matplotlib). With Fortran it is extremely tricky, as to my knowledge it does not natively support plotting.
Basically, with Python you just have to (see the sketch after this list):
read the wanted variables (the coordinates and the field itself)
make the map of the field, i.e. make the plot
find the locations you want to highlight and just add those locations to the plot
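For instance, a minimal Matplotlib sketch along those lines; the file name and the variable names (lon, lat, temperature, flag) are placeholders for whatever your file actually contains:

import matplotlib.pyplot as plt
import numpy as np
from netCDF4 import Dataset

nc = Dataset("data.nc")  # placeholder file name
lon = nc.variables["lon"][:]                  # 1-D longitude coordinate
lat = nc.variables["lat"][:]                  # 1-D latitude coordinate
field = nc.variables["temperature"][:]        # 2-D real-valued field
flag = nc.variables["flag"][:].astype(bool)   # 2-D boolean mask

# The color map of the real-valued field.
plt.pcolormesh(lon, lat, field)
plt.colorbar(label="temperature")

# Overlay a triangle on every grid point where the boolean is true.
lon2d, lat2d = np.meshgrid(lon, lat)
plt.scatter(lon2d[flag], lat2d[flag], marker="^", color="black", s=20)

plt.show()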

How to get my MapContainer bounding box in Codename One

My Codename One app features a MapContainer. I need to show points of interest (POIs) on it whose coordinates reside on the server. There can be hundreds (maybe thousands in the future) of such POIs on the server. That's why I would like to download from the server only the POIs that can be shown on the map. Consequently, I need to get the map boundaries to pass them to the server.
I read this for Android and this other SO question for iOS, and the key seems to be getting the map Projection and the map bounding box. However, neither the getProjection() method nor the getBoundingBox() method seems to be exposed.
A solution could be to combine the coordinates from getCameraLocation(), which is the map center, with getZoom() to infer those boundaries. But the result may vary depending on the device (the shown area can be larger).
How can I get the map boundaries in Codename One?
Any help appreciated,
Cheers,
The problem is in the javadocs for getCoordAtPosition(). This will be corrected. getCoordAtPosition() expects absolute coordinates, not relative.
E.g.
// Relative to the map component: top-right = NE, bottom-left = SW
Coord NE = currentMap.getCoordAtPosition(currentMap.getWidth(), 0);
Coord SW = currentMap.getCoordAtPosition(0, currentMap.getHeight());
Should be
// Absolute screen coordinates:
Coord NE = currentMap.getCoordAtPosition(currentMap.getAbsoluteX() + currentMap.getWidth(), currentMap.getAbsoluteY());
Coord SW = currentMap.getCoordAtPosition(currentMap.getAbsoluteX(), currentMap.getAbsoluteY() + currentMap.getHeight());
I tried this out on the coordinates that you provided and it returns valid results.
EDIT March 21, 2017: It turns out that some of the platforms expected relative coordinates, and others expected absolute coordinates. I have had to standardize it, and I have chosen to use relative coordinates across all platforms to be consistent with the Javadocs. So your first attempt:
Coord NE = currentMap.getCoordAtPosition(currentMap.getWidth(), 0);
Coord SW = currentMap.getCoordAtPosition(0, currentMap.getHeight());
Will now work in the latest version of the library.
I have also added another method, getBoundingBox(), that will get the bounding box for you without worrying about relative/absolute coordinates.
This is probably something that can be exposed easily by forking the project and providing a pull request. We're currently working on updating the map component so this is a good time to make changes and add features.

Leaflet JS: Custom 2D projection that uses meters instead of lat/long

I am working on a custom game map. This map is basically a raster image, overlaid with some paths and markers. I want to use Leaflet to display the map.
What I am struggling with, is that Leaflet uses Latitude and Longitude to calculate positions, while it uses meters for distances (path lengths, radii of circles, etc).
This is very understandable when dealing with a spherical world like our Earth, but it complicates a custom, flat map a lot.
I would like to be able to specify the positions in the same unit as the distances.
Now, by default Leaflet uses a spherical Mercator projection. According to the docs, it is possible to define your own projections and coordinate reference systems, but I have been unable to do this thus far.
How would this be possible? Or is there a simpler way?
You should take a look at the simple coordinate reference system (L.CRS.Simple) included with Leaflet:
A simple CRS that maps longitude and latitude into x and y directly. May be used for maps of flat surfaces (e.g. game maps).
You can define the CRS of your L.Map instance upon initialization like so:
new L.Map('myDiv', {
    crs: L.CRS.Simple
});
Some further elaboration: as @ghybs pointed out in the comment below and in the comment to your question, the default spherical Mercator projection (L.CRS.EPSG3857) already works in meters. When you calculate the distance between two coordinates, Leaflet returns meters. Example:
var startCoordinate = new L.LatLng(0, -1);
var endCoordinate = new L.LatLng(0, 1);
var distance = startCoordinate.distanceTo(endCoordinate);
console.log(distance);
The above will print 222638.98158654713 to your console, which is the distance between those two coordinates in meters. The problem is that with a spherical projection, the distance between two coordinates becomes smaller the further you get from the equator, which is problematic when creating a flat game world. That's why you should use L.CRS.Simple instead; it does not have that problem.
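To see that shrinking effect numerically, here is a small standalone Python check (not Leaflet code), using the haversine formula with the 6378137 m earth radius that reproduces the 222638.98 figure above:

import math

R = 6378137  # earth radius in meters

def distance_m(lat1, lon1, lat2, lon2):
    # Haversine great-circle distance in meters.
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat, dlon = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

print(distance_m(0, -1, 0, 1))    # ~222639 m at the equator
print(distance_m(60, -1, 60, 1))  # ~111317 m at 60 degrees north, about half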

Information Modeling

The sensor module in my project consists of a rotating camera that collects noisy information about moving objects in the surrounding environment.
The information consists of the distance, angle, and relative change of the moving objects.
The limited view range of the camera makes it essential to rotate the camera periodically to update the environment information.
I was looking for algorithms / ways to model this information, in order to be able to guess / predict / learn the motion properties of these objects.
My currently proposed idea is to store the last n snapshots of each object in a queue and take a weighted average of the positions and velocities of each moving object, but I think it is a poor method.
Can you suggest some topics that suit this case?
Thanks
Kalman filters ({extended, unscented, ...} variants), and particle filters, but only after reading about Kalman filters.
Kalman filters learn and predict the correct data from noisy data under a Gaussian assumption, so they may be of use to you. If you need non-Gaussian methods, look at particle filters.
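To make that concrete, here is a minimal 1-D constant-velocity Kalman filter sketch in Python (NumPy only); the motion model, the noise covariances, and the simulated measurements are all illustrative assumptions, not values from the question:

import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity state transition
H = np.array([[1.0, 0.0]])             # we only observe position
Q = 0.01 * np.eye(2)                   # process noise covariance (assumed)
R = np.array([[4.0]])                  # measurement noise covariance (assumed)

def kalman_step(x, P, z):
    # Predict the next state, then correct it with the noisy measurement z.
    x = F @ x
    P = F @ P @ F.T + Q
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

x = np.array([[0.0], [0.0]])           # initial position/velocity estimate
P = np.eye(2)                          # initial estimate covariance
rng = np.random.default_rng(0)
for t in range(1, 21):                 # true object moves at 1 unit per step
    z = np.array([[t + rng.normal(0.0, 2.0)]])
    x, P = kalman_step(x, P, z)
print(x.ravel())                       # position near 20, velocity near 1.0

The queue-of-snapshots idea from the question maps naturally onto the measurement history; the filter replaces the hand-tuned weighted average with weights derived from the noise covariances.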
