Azure Maps - Polygon bounding box

I am using Azure Maps and the JavaScript atlas library:
https://learn.microsoft.com/en-us/javascript/api/azure-maps-control/atlas?view=azure-maps-typescript-latest
The code below returns undefined when I access the bbox property of the Polygon class:
var hull = atlas.math.getConvexHull(positions);
var boundingBox = hull.bbox //returns undefined.
var polygon = new atlas.data.Polygon(positions);
var bBox = polygon.bbox //returns undefined even here.
The code that does work is:
var boundingBox = atlas.data.BoundingBox.fromPositions(positions); //Works fine.
I need to calculate the centroid from the convex hull using:
var centroid = atlas.data.BoundingBox.getCenter(hull.bbox)
Can anyone please help me?
Thanks.

The bbox property of a feature is only defined if it was defined or calculated directly. Often this is populated in GeoJSON files and thus would be populated when the file is read in. By default the map does not populate this field if it isn't already populated, as that would mean a lot of unnecessary calculations in the majority of apps.
For your scenario you would do this:
var hull = atlas.math.getConvexHull(positions);
var boundingBox = atlas.data.BoundingBox.fromData(hull);
var centroid = atlas.data.BoundingBox.getCenter(boundingBox);
Here is a similar sample: https://azuremapscodesamples.azurewebsites.net/index.html?sample=Polygon%20labels%20-%20calculated
If you are looking to place a label on the center of the polygon, you might also want to consider this approach: https://azuremapscodesamples.azurewebsites.net/index.html?sample=Polygon%20labels%20-%20symbol%20layer
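For completeness, here is a minimal sketch of rendering a label at that centroid yourself. This assumes an existing map instance named map, and the property name label is arbitrary:
var datasource = new atlas.source.DataSource();
map.sources.add(datasource);

// Add the centroid as a point feature that carries the label text.
datasource.add(new atlas.data.Feature(new atlas.data.Point(centroid), { label: 'My polygon' }));

// Render text only (no icon) at that point.
map.layers.add(new atlas.layer.SymbolLayer(datasource, null, {
    iconOptions: { image: 'none' },
    textOptions: { textField: ['get', 'label'] }
}));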

Related

ML.NET prediction has a HUGE difference compared with Custom Vision

I've trained a model (object detection) using Azure Custom Vision and exported the model as ONNX,
then imported the model into my WPF (.NET Core) project.
I use ML.NET to get a prediction from my model, and I found the result is HUGELY different compared with the prediction I saw on Custom Vision.
I've tried different orders of extraction (ABGR, ARGB, etc.), but the results are very disappointing. Can anyone give me some advice? There is not much documentation online about using Custom Vision's ONNX model with WPF to do object detection.
Here's a snippet:
// Model creation and pipeline definition for images needs to run just once, so calling it from the constructor:
var pipeline = mlContext.Transforms
    .ResizeImages(
        resizing: ImageResizingEstimator.ResizingKind.Fill,
        outputColumnName: MLObjectDetectionSettings.InputTensorName,
        imageWidth: MLObjectDetectionSettings.ImageWidth,
        imageHeight: MLObjectDetectionSettings.ImageHeight,
        inputColumnName: nameof(MLObjectDetectionInputData.Image))
    .Append(mlContext.Transforms.ExtractPixels(
        colorsToExtract: ImagePixelExtractingEstimator.ColorBits.Rgb,
        orderOfExtraction: ImagePixelExtractingEstimator.ColorsOrder.ABGR,
        outputColumnName: MLObjectDetectionSettings.InputTensorName))
    .Append(mlContext.Transforms.ApplyOnnxModel(
        modelFile: modelPath,
        outputColumnName: MLObjectDetectionSettings.OutputTensorName,
        inputColumnName: MLObjectDetectionSettings.InputTensorName));
//Create empty DataView. We just need the schema to call fit()
var emptyData = new List<MLObjectDetectionInputData>();
var dataView = mlContext.Data.LoadFromEnumerable(emptyData);
//Generate a model.
var model = pipeline.Fit(dataView);
Then I use the model to create the prediction engine:
//Create prediction engine.
var predictionEngine = _mlObjectDetectionContext.Model.CreatePredictionEngine<MLObjectDetectionInputData, MLObjectDetectionPrediction>(_mlObjectDetectionModel);
//Load tag labels.
var labels = File.ReadAllLines(LABELS_OBJECT_DETECTION_FILE_PATH);
//Create input data.
var imageInput = new MLObjectDetectionInputData { Image = this.originalImage };
//Predict.
var prediction = predictionEngine.Predict(imageInput);
Can you check that the image input (imageInput) is resized to the same size as the model requires when you prepare the pipeline, for both Resize parameters:
imageWidth: MLObjectDetectionSettings.ImageWidth,
imageHeight: MLObjectDetectionSettings.ImageHeight.
Also, the ExtractPixels parameters, especially ColorBits and ColorsOrder, should follow the model's requirements.
Hope this helps,
Arif
Maybe it's because the aspect ratio is not preserved during the resize.
Try with an image whose size is exactly:
MLObjectDetectionSettings.ImageWidth * MLObjectDetectionSettings.ImageHeight
and you will see much better results.
I think Azure does preliminary processing on the image, maybe padding (also during training?) or cropping. (Note that the pipeline above uses ImageResizingEstimator.ResizingKind.Fill, which stretches the image; ResizingKind.IsoPad would pad it instead.)
Maybe during the processing it also uses a moving window (the size that the model expects) and then does some aggregation.

Latitude out of bounds

I get the following exception when using Entity Framework to get data from the bounds of a Google map:
FormatException: 24201: Latitude values must be between -90 and 90 degrees.
POLYGON((81.9882716924738 140.187434563007,21.5587046599696 140.187434563007,21.5587046599696 -40.1641279369925,81.9882716924738 -40.1641279369925,81.9882716924738 140.187434563007))
I can see others have the same problem, but I haven't found anything that solves this. I would expect the first coordinate of a point to be the latitude and the second the longitude? And none of them is above 90, so why do I get this error? I tried swapping lat and lng, but got the same problem.
This is the failing line:
var poly = FindByBoundingBox(northEastLat, northEastLng, southWestLat, southWestLng);
DbGeography polygon = DbGeography.FromText(poly, 4326);
var parksWithinPolygon = dbCtx.SiteList.Where(p =>
p.PolygonCenter.Intersects(polygon)).Select(p=>p.SiteName).ToList();
As Damien states, the first problem is that SQL Server expects longitude first and then latitude. Fixing that throws another error, pointing to another problem:
"This operation cannot be completed because the instance is not valid".
My best bet is that it's the way/order in which I build the polygon. Has anyone succeeded in mapping Google bounds to a polygon in SQL Server? In short, I am trying to get any data (the data has a point column) within the Google map bounds.
The function to calculate polygon is listed below:
public string FindByBoundingBox(double northEastLat, double northEastLng, double southWestLat, double southWestLng)
{
    // Create polygon of bounding box
    System.Globalization.CultureInfo customCulture = (System.Globalization.CultureInfo)System.Threading.Thread.CurrentThread.CurrentCulture.Clone();
    customCulture.NumberFormat.NumberDecimalSeparator = ".";
    System.Threading.Thread.CurrentThread.CurrentCulture = customCulture;
    var bboxWKT = string.Format("POLYGON(({1} {0},{1} {2},{3} {2},{3} {0},{1} {0}))", northEastLat, northEastLng, southWestLat, southWestLng);
    return bboxWKT;
}
OK, I figured it out. As Damien stated, the order of coordinates is the opposite of Google's: in SQL Server, longitude needs to come first. The next thing is the "left-hand rule": you need to create your polygon starting from the lower-left corner and then go counter-clockwise.
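With the variable names from the question, the corrected ring (longitude first, starting at the south-west corner, going counter-clockwise, and closed by repeating the first point) would look like this:
POLYGON((southWestLng southWestLat, northEastLng southWestLat, northEastLng northEastLat, southWestLng northEastLat, southWestLng southWestLat))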

How to create array of coordinates?

I am working on geolocation search in a Node.js backend, with MongoDB as the database. In order to do that, I am getting values from the front end. I am using the map's getBounds to get viewport data. Below are some variables that hold latitude and longitude.
var topLeftLong = parseFloat(req.body.topLeftLong);
var topLeftLat = parseFloat(req.body.topLeftLat);
var topRightLong = parseFloat(req.body.topRightLong);
var topRightLat = parseFloat(req.body.topRightLat);
var bottomRightLong = parseFloat(req.body.bottomRightLong);
var bottomRightLat = parseFloat(req.body.bottomRightLat);
var bottomLeftLong = parseFloat(req.body.bottomLeftLong);
var bottomLeftLat = parseFloat(req.body.bottomLeftLat);
Though I am getting this data, I want to put the latitude and longitude values in an array. But whenever I try to push data into the array, I get the error below.
It seems like a wrong data type is to blame. If you applied a cast in your code, e.g. (int)stringVar, the error could be due to that. Data types are important; mismatched data types will cause errors.
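That said, for the array itself, here is a minimal sketch using the parsed variables above. It assumes a places collection with a 2dsphere index on a location field (both names are placeholders). Note that GeoJSON, and therefore MongoDB, expects [longitude, latitude] order and a ring closed by repeating the first point:
var polygonCoordinates = [[
    [topLeftLong, topLeftLat],
    [topRightLong, topRightLat],
    [bottomRightLong, bottomRightLat],
    [bottomLeftLong, bottomLeftLat],
    [topLeftLong, topLeftLat] // close the ring by repeating the first point
]];

db.collection('places').find({
    location: {
        $geoWithin: {
            $geometry: { type: 'Polygon', coordinates: polygonCoordinates }
        }
    }
}).toArray(function(err, docs) {
    // handle results
});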

How can I bind the radius of my Geofire query in AngularFire?

I have a model in angularJS which is bound to firebase $scope.items=$firebase(blah) and I use ng-repeat to iterate through the items.
Every item in firebase has a corresponding geofire location by the key of the item.
How can I update my controller to only include items by a custom radius around the user? I don't want to filter by distance in angular, just ask firebase to only retrieve closer items (say 0.3km around a location). I looked around geoqueries but they have a different purpose and I don't know how to bind them to the model anyway. The user may change the radius and the items list should be updated accordingly, so they need to be bound somehow.
Any suggestion is welcome, but an example would be greatly appreciated as I don't have fluency in this trio of angular/firebase/geofire yet :P
It's difficult to figure out what you need to do without seeing your code. But in general you'll need to query a Firebase ref that contains the Geohash as either the name of the child or the priority.
A good example of such a data structure can be found here: https://publicdata-transit.firebaseio.com/_geofire/i
i
  9mgzcy8ewt:lametro:8637: true
  9mgzgvu3hf:lametro:11027: true
  9mgzuq55cc:lametro:11003: true
  9mue7smpb9:nctd:51117: true
  ...
l
  ...
  lametro:11027
    0: 33.737797
    1: -118.294708
  actransit:1006
  actransit:1011
  actransit:1012
  ...
The actual transit vehicles are under the l node. Each of them has an array containing the location of that vehicle as a latitude and longitude pair.
The i node is an index that maps each vehicle to a Geohash. You can see that the name of each node is built up as <geohash>:<metroarea>:<vehicleid>.
Since the Geohash is at the start of the name, we can filter on Geohash with a Query:
var ref = new Firebase("https://publicdata-transit.firebaseio.com/_geofire");
var query = ref.child('i').startAt(null, '9mgzgvu3ha').endAt(null, '9mgzgvu3hz');
query.once('child_added', function(snapshot) { console.log(snapshot.name()); });
With this query Firebase will give us all nodes whose name falls within the range. If all is well, this will output the name of one node:
9mgzgvu3hf:lametro:11027
Once you have that node, you can parse its name to extract the vehicleid and then look up the actual location of the vehicle under l.
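As a minimal sketch of that last step, reusing the snapshot from the callback above (the layout is the one shown in the tree earlier):
var parts = snapshot.name().split(':');      // ["9mgzgvu3hf", "lametro", "11027"]
var vehicleKey = parts[1] + ':' + parts[2];  // "lametro:11027"
ref.child('l').child(vehicleKey).once('value', function(locSnapshot) {
    var location = locSnapshot.val();        // [latitude, longitude]
    console.log(vehicleKey + ' is at ' + location[0] + ', ' + location[1]);
});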
Calculating Geohashes based on a location and a range
In the snippet above, I hardcoded the geohash values to use. Normally you'll want to get all nodes in a certain range around a center. Instead of calculating these yourself, I recommend using the geohashQueries function from GeoFire for that:
var whitehouse = [38.8977, -77.0366];
var rangeInKm = 0.3;
var hashes = geohashQueries(whitehouse, rangeInKm * 1000);
console.log(JSON.stringify(hashes));
This outputs a number of Geohash ranges:
[["dqcjqch","dqcjqc~"],["dqcjr10","dqcjr1h"],["dqcjqbh","dqcjqb~"],["dqcjr00","dqcjr0h"]]
You can pass each of these Geohash ranges into a Firebase query:
hashes.forEach(function(hash) {
    var query = geoFireRef.child('i').startAt(null, hash[0]).endAt(null, hash[1]);
    query.once('child_added', function(snapshot) { log(snapshot.name()); });
});
I hope this helps you set things up.
Here is a Fiddle that I created a while ago to experiment with this stuff: http://jsfiddle.net/aF9mN/.
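To tie this back to the binding part of the question: a rough sketch, assuming $scope.center holds the user's location and that geoFireRef and geohashQueries are set up as above, would be to re-run the queries whenever the radius changes:
$scope.$watch('radiusInKm', function(radiusInKm) {
    $scope.items = [];
    geohashQueries($scope.center, radiusInKm * 1000).forEach(function(hash) {
        var query = geoFireRef.child('i').startAt(null, hash[0]).endAt(null, hash[1]);
        query.on('child_added', function(snapshot) {
            // Push matches into the bound model inside a digest so ng-repeat updates.
            $scope.$apply(function() { $scope.items.push(snapshot.name()); });
        });
    });
});
In a real app you would also detach the previous listeners with query.off() before re-running the queries; that is omitted here for brevity.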

How to find the closest point in the DB using objectify-appengine

I'm using objectify-appengine in my app. In the DB I store the latitude & longitude of places.
At some point I'd like to find the closest place (from the DB) to a specific point.
As far as I understand, I can't perform regular SQL-like queries.
So my question is: how can this be done in the best way?
You should take a look at GeoModel, which enables Geospatial Queries with Google App Engine.
Update:
Let's assume that you have, in your Objectify-annotated model class, a GeoPt property called coordinates.
You need to have in your project two libraries:
GeoLocation.java
Java GeoModel
In the code where you want to perform a geo query, you have the following:
import com.beoui.geocell.GeocellManager;
import com.beoui.geocell.model.BoundingBox;
import you.package.path.GeoLocation;
// other imports
// in your method
GeoLocation specificPointLocation = GeoLocation.fromDegrees(specificPoint.latitude, specificPoint.longitude);
GeoLocation[] bc = specificPointLocation.boundingCoordinates(radius);

// Transform this to a bounding box
BoundingBox bb = new BoundingBox((float) bc[0].getLatitudeInDegrees(),
        (float) bc[1].getLongitudeInDegrees(),
        (float) bc[1].getLatitudeInDegrees(),
        (float) bc[0].getLongitudeInDegrees());

// Calculate the geocells list to be used in the queries (optimized
// list of cells that cover the given bounding box)
List<String> cells = GeocellManager.bestBboxSearchCells(bb, null);

// Calculate the geocells of your model class instance
List<String> modelCells = GeocellManager.generateGeoCell(myInstance.getCoordinates());

// Matching
for (String c : cells) {
    if (modelCells.contains(c)) {
        // success, do something with it
        break;
    }
}
