I am new to Azure Maps and have limited knowledge of JavaScript. I am looking for help in getting a real-time alert, based on some flag, during fleet movement based on the coordinates.
I have tried to design it following sample code from multiple sources.
My requirement is:
A popup should appear only on arrival of the fleet (truck).
Thanks
To verify: you want to open a popup when the truck gets to the end of the line. I'm assuming you have a constant flow of data updating the truck position and that you can easily grab the truck's coordinate. As such, you only need a function to determine whether the truck's coordinate is at the end of the route line.

You will likely need to account for a margin of error (e.g. within 15 meters of the end of the line), since a coordinate with enough decimal places can represent a single molecule, and GPS devices typically have an accuracy of +/- 15 meters. With this in mind, all you need to do is calculate the distance from the truck's coordinate to the last coordinate of the route line. For example:
var lastRouteCoord = [-110, 45];
var truckCoord = [-110.0001, 45.0001];
var minDistance = 15;
//Get the distance between the coordinates (by default this function returns a distance in meters).
var distance = atlas.math.getDistanceTo(lastRouteCoord, truckCoord);
if(distance <= minDistance){
//Open popup.
//Examples: https://azuremapscodesamples.azurewebsites.net/index.html#Popups
}
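Following the popup examples linked above, opening the popup inside that check could look like the following sketch (this assumes you already have a map instance in scope; the content markup is just an illustration):
var popup = new atlas.Popup({
    content: '<div style="padding:10px">Truck has arrived</div>',
    position: truckCoord
});
//Open the popup on the (assumed) map instance when the truck is within range.
popup.open(map);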
I want to do text detection in an image using only tensorflow.js or opencv.js. I have already built an EAST model in Keras and converted it to a TensorFlow.js model.
Can anyone help me with this? Any resource would be great.
Thanks.
So, initially you need to download the EAST frozen model and then convert it to a TensorFlow.js model by using the below command:
tensorflowjs_converter --input_format=tf_frozen_model --output_node_names='feature_fusion/Conv_7/Sigmoid,feature_fusion/concat_3' /path_to_model /path_to_where_you_want_save_converted_model
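Loading the converted model in the browser might look like this sketch (the path is a placeholder; tf.loadGraphModel is the TF.js 1.x call for frozen-graph conversions):
let model;
(async () => {
    //Load the converted EAST model from wherever you saved it.
    model = await tf.loadGraphModel('path_to_converted_model/model.json');
})();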
Next, after taking an input image and loading the model, the below code will detect whether text is present or not:
$("#predict-button").click(async function () {
let image = $("#selected-image").get(0);
let tensor = tf.browser.fromPixels(image)
.resizeNearestNeighbor([640, 320])
.expandDims(0);
tensor = tf.cast(tensor, 'float32')
const [output1, output2] = await model.predict(tensor);
const data1 = await output1.data();
const data2 = await output2.data();
The EAST model gives two outputs, i.e. scores and geometry. Here data1 gives the geometry (which I ignored, because my end goal was to detect whether text is present, not to localize it) and data2 gives the scores.
Next, I put a threshold of 0.5 on the scores to differentiate between text being present or not: if the probability is greater than 0.5 then text is present, and if it is less than 0.5 then there is no text in the image.
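That check could be as simple as the sketch below, assuming data2 holds the score map as a flat typed array (textPresent is just an illustrative name):
//Scan the score map for the highest probability and compare it to the threshold.
let maxScore = 0;
for (let i = 0; i < data2.length; i++) {
    if (data2[i] > maxScore) maxScore = data2[i];
}
const textPresent = maxScore > 0.5;
console.log(textPresent ? 'Text detected' : 'No text detected');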
Note: for now, I have skipped the preprocessing step (except resizing) where the mean RGB value is subtracted from each pixel of the input image.
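If you want to add that step later, it could look like the sketch below; the per-channel means are the values commonly used with EAST, so treat them as an assumption for your own model:
//Subtract the per-channel mean RGB values; broadcasts over [1, height, width, 3].
const meanRGB = tf.tensor1d([123.68, 116.78, 103.94]);
tensor = tensor.sub(meanRGB);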
I'm creating a hyper-local delivery service app. I can only accept an order if there is a store within a 5 km radius of the user. I have stored the store locations in GeoJSON format. Is there a function in h3-js that takes a radius, an array of stores, and an H3 index, and then gives back the list of stores that are within 5 km of the given H3 index? Or how can I implement this using h3-js?
There are a few different parts here:
Pick a resolution: Pick an H3 resolution for the lookup. A finer resolution means more accuracy but more memory usage. Res 8 is roughly a few city blocks in size.
Indexing Data: To use H3 for the radius lookup, you need to index the stores by H3 index. If you want this to be efficient, you'd be better off indexing all the stores ahead of time. How you do this is up to you; one easy way in JS might be to create a map of id arrays:
const res = 8; // the H3 resolution chosen above
const lookupMap = stores.features.reduce((map, feature) => {
const [lon, lat] = feature.geometry.coordinates;
const h3Index = h3.geoToH3(lat, lon, res);
if (!map[h3Index]) map[h3Index] = [];
map[h3Index].push(feature.id);
return map;
}, {});
Perform the lookup: To search, index your search location and get all the H3 indexes within some radius. You can use the h3.edgeLength function to get the approximate radius of a cell at your current resolution.
// Helper: approximate ring count, using the cell's edge length as its radius
const kmToRadius = (km, res) => Math.floor(km / h3.edgeLength(res, h3.UNITS.km));
const origin = h3.geoToH3(searchLocation.lat, searchLocation.lon, res);
const radius = kmToRadius(searchRadiusKm, res);
// Find all the H3 indexes to search
const lookupIndexes = h3.kRing(origin, radius);
// Find all points of interest in those indexes
const results = lookupIndexes.reduce(
(output, h3Index) => [...output, ...(lookupMap[h3Index] || [])],
[]);
See a working example on Observable
Caveats: This is not a true radius search; the k-ring is a roughly hexagonal area centered on the origin. This is good enough for many use cases, and much faster than a traditional Haversine radius search, especially if you have many rows to search over. But if you care about the exact distance, H3 might not be appropriate (or, in some cases, H3 might be fine, but you might want the indexes inside a "true" circle; one option here is to convert your circle to a close-to-circular polygon, then get the indexes via h3.polyfill, as sketched below).
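A minimal sketch of that polyfill option (assuming h3-js v3; circleCells is just an illustrative name, and the polygon is built with a rough equirectangular approximation, which is fine for small radii away from the poles):
//Approximate a circular search area with a polygon, then collect the cells inside it.
function circleCells(lat, lon, radiusKm, res, steps = 32) {
  const coords = [];
  for (let i = 0; i < steps; i++) {
    const angle = (2 * Math.PI * i) / steps;
    //Rough degree offsets per km of latitude/longitude at this latitude.
    const dLat = (radiusKm / 110.574) * Math.cos(angle);
    const dLon = (radiusKm / (111.32 * Math.cos(lat * Math.PI / 180))) * Math.sin(angle);
    coords.push([lat + dLat, lon + dLon]);
  }
  return h3.polyfill([coords], res);
}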
I am trying to implement mini-batch training in my neural network, instead of the "online" stochastic method of updating the weights after every training sample.
I have developed a somewhat novice neural network in C whereby I can adjust the number of neurons in each layer, the activation functions, etc. This is to help me understand neural networks. I have trained the network on the MNIST data set, but it takes around 200 epochs to get down to an error rate of 20% on the training set, which seems very poor to me. I am currently using online stochastic gradient descent to train the network.

What I would like to try is to use mini-batches instead. I understand the concept that I must accumulate and average the error from each training sample before I propagate the error back. My problem comes in when I want to calculate the changes I must make to the weights. To explain this better, consider a very simple perceptron model: one input, one hidden unit, one output. To calculate the change I need to make to the weight between the input and the hidden unit, I will use the following equation:
∂C/∂w1 = ∂C/∂O * ∂O/∂h * ∂h/∂w1
If you do the partial derivatives you get:
∂C/∂w1 = (Output - Expected Answer)(w2)(input)
Now this formula says that you need to multiply the backpropagated error by the input. For online stochastic training that makes sense, because you use one input per weight update. For mini-batch training you use many inputs, so which input does the error get multiplied by?
I hope you can assist me with this.
void propogateBack(void){
//calculate ∂C/∂G
for (count=0;count<network.outputs;count++){
network.g_error[count] = derive_cost((training.answer[training_current])-(network.g[count]));
}
//calculate ∂G/∂O
for (count=0;count<network.outputs;count++){
network.o_error[count] = derive_activation(network.g[count])*(network.g_error[count]);
}
//calculate ∂O/∂S3
for (count=0;count<network.h3_neurons;count++){
network.s3_error[count] = 0;
for (count2=0;count2<network.outputs;count2++){
network.s3_error[count] += (network.w4[count2][count])*(network.o_error[count2]);
}
}
//calculate ∂S3/∂H3
for (count=0;count<network.h3_neurons;count++){
network.h3_error[count] = (derive_activation(network.s3[count]))*(network.s3_error[count]);
}
//calculate ∂H3/∂S2
for (count=0;count<network.h2_neurons;count++){
network.s2_error[count] = 0;
for (count2=0;count2<network.h3_neurons;count2++){
network.s2_error[count] += (network.w3[count2][count])*(network.h3_error[count2]);
}
}
//calculate ∂S2/∂H2
for (count=0;count<network.h2_neurons;count++){
network.h2_error[count] = (derive_activation(network.s2[count]))*(network.s2_error[count]);
}
//calculate ∂H2/∂S1
for (count=0;count<network.h1_neurons;count++){
network.s1_error[count] = 0;
for (count2=0;count2<network.h2_neurons;count2++){
network.s1_error[count] += (network.w2[count2][count])*(network.h2_error[count2]);
}
}
//calculate ∂S1/∂H1
for (count=0;count<network.h1_neurons;count++){
network.h1_error[count] = (derive_activation(network.s1[count]))*(network.s1_error[count]);
}
}
void updateWeights(void){
//////////////////w1
for(count=0;count<network.h1_neurons;count++){
for(count2=0;count2<network.inputs;count2++){
network.w1[count][count2] -= learning_rate*(network.h1_error[count]*network.input[count2]);
}
}
//////////////////w2
for(count=0;count<network.h2_neurons;count++){
for(count2=0;count2<network.h1_neurons;count2++){
network.w2[count][count2] -= learning_rate*(network.h2_error[count]*network.s1[count2]);
}
}
//////////////////w3
for(count=0;count<network.h3_neurons;count++){
for(count2=0;count2<network.h2_neurons;count2++){
network.w3[count][count2] -= learning_rate*(network.h3_error[count]*network.s2[count2]);
}
}
//////////////////w4
for(count=0;count<network.outputs;count++){
for(count2=0;count2<network.h3_neurons;count2++){
network.w4[count][count2] -= learning_rate*(network.o_error[count]*network.s3[count2]);
}
}
}
The code I have attached is how I do the online stochastic updates. As you can see in the updateWeights() function, the weight updates are based on the input values (dependent on the sample fed in) and on the hidden unit values (also dependent on the input sample fed in). So when I have the mini-batch average gradient that I am propagating back, how will I update the weights? Which input values do I use?
OK, so I figured it out. When using mini-batches you should not accumulate and average out the error at the output of the network. Each training example's error gets propagated back as it normally would, except that instead of updating the weights, you accumulate the changes you would have made to each weight. When you have looped through the mini-batch, you then average the accumulations and change the weights accordingly.
I was under the impression that when using mini-batches you do not have to propagate any error back until you have looped through the mini-batch. I was wrong: you still need to do that, and the only difference is that you only update the weights once you have looped through your mini-batch size.
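In outline, it looks like the sketch below (written in JavaScript for brevity rather than C; forwardBackward() is a hypothetical function returning one sample's per-weight gradients as a flat array):
function trainMiniBatch(samples, weights, learningRate) {
    const grad = new Float64Array(weights.length);
    for (const sample of samples) {
        //Backpropagate each sample as usual, but only accumulate the gradients
        //instead of updating the weights (forwardBackward is hypothetical).
        const g = forwardBackward(sample);
        for (let i = 0; i < grad.length; i++) grad[i] += g[i];
    }
    //Average over the batch and apply a single weight update.
    for (let i = 0; i < weights.length; i++) {
        weights[i] -= learningRate * (grad[i] / samples.length);
    }
}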
For mini-batch training you use many inputs, so which input does the error get multiplied by?
"Many inputs" this is a proportion of the dataset size N, which typically segments your data into sizes which are not too large to fit into memory. DL needs Big Data and the full batch cannot fit into most computer systems to process in one go and therefore the mini-batch is necessary.
The error which gets backpropagated is the sum or average error calculated over the data samples in your current mini-batch $X^{(t)}$, which is of size $M$ where $M < N$: $J^{(t)} = \frac{1}{M} \sum_{m=1}^{M} (f(x_m^{(t)}) - y_m^{(t)})^2$. This is the mean of the squared distances to the targets across the samples in batch $t$. This is the forward step, and then the backward propagation of this error is made using the chain rule through the 'neurons' of the network, using this single value of the error for the whole batch. The update of the parameters is based on this value for this mini-batch.
There are variations in how this scheme is implemented, but to connect it with your idea of using "many inputs" in the calculation of the parameter update: by using multiple input samples from the batch, we are averaging over multiple gradients, which smooths the gradient in comparison to stochastic gradient descent.
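Written as an update rule (standard notation, added for clarity; $\eta$ is the learning rate): $w \leftarrow w - \eta \frac{1}{M} \sum_{m=1}^{M} \nabla_w (f(x_m^{(t)}) - y_m^{(t)})^2$. Averaging the per-sample gradients in this way is exactly what smooths the update relative to single-sample SGD.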
I am working on an app where I have a set of pre-defined coordinates stored in an array like the one below:
var SpeedcameraLocationDictionary: [[String : AnyObject]] = [
["camindex": "1a", "Latitude":xxxx, "Longitude":xxxx,"Distance":34, "legalspeed":110],
["camindex": "1b", "Latitude":xxxx, "Longitude":xxxx,"Distance":34, "legalspeed":110],
["camindex": "2a", "Latitude":xxxx, "Longitude":xxxx,"Distance":26, "legalspeed":110],
["camindex": "2b", "Latitude":xxxx, "Longitude":xxxx,"Distance":26, "legalspeed":110]]
What I want to do is: when the user's location matches any lat & long in the array (+ or - 5 meters), start doing some calculations. I have variables set up to hold the lat & long of the user, and the relevant CLLocationManager. I'm planning on checking the user's location every 30 secs, so I have a timer set up to trigger at 30-second intervals.
I believe I need to set up a geofence around the relevant point, but I'm not really sure how to do the check and the setup.
I imagine it would be along the lines of the following:
// when the 30-second timer fires:
let region = CLCircularRegion(center: CLLocationCoordinate2D(latitude: latitudeFromArray, longitude: longitudeFromArray), radius: 5, identifier: "camera")
if region.contains(CLLocationCoordinate2D(latitude: userLatitude, longitude: userLongitude)) {
    // do calculations...
}
Could somebody either point me in the direction of a tutorial that could help me achieve this goal, or provide some sample code? I have been following this tutorial but am struggling to adapt it to my needs.
Thanks
How to get Circle radius in meters
Maybe this is an existing question, but I am not getting the proper result. I am trying to create a polygon in PostGIS with the same radius & center obtained from an OpenLayers circle.
To get the radius in meters I followed this:
var radiusInMeters = circleRadius * ol.proj.METERS_PER_UNIT['m'];
After getting the center and radius (in meters), I am trying to generate a polygon (WKT) with PostGIS (a server-side job) and drawing that feature on the map like this:
select st_astext(st_buffer('POINT(79.25887485937808 17.036647682474722 0)'::geography, 365.70644956827164));
But the two are not covering the same area. Can anybody please let me know where I am going wrong? Basically, my input/output to/from the circle will be in meters only.
ol.geom.Circle might not represent a circle
OpenLayers Circle geometries are defined on the projected plane. This means that they are always circular on the map, but the area covered might not represent an actual circle on earth. The actual shape and size of the area covered by the circle will depend on the projection used.
This can be visualized by Tissot's indicatrix, which shows how circular areas on the globe are transformed when projected onto a plane. OpenLayers 3's Tissot example illustrates this for the projection EPSG:3857, displaying areas that all have a radius of 800,000 meters. If these circles were drawn as ol.geom.Circle with a radius of 800000 (using EPSG:3857), they would all be the same size on the map, but the ones closer to the poles would represent a much smaller area of the globe.
This is true for most things with OpenLayers geometries. The radius, length or area of a geometry are all reported in the projected plane.
So if you have an ol.geom.Circle, getting the actual surface radius would depend on the projection and features location. For some projections (such as EPSG:4326), there would not be an accurate answer since the geometry might not even represent a circular area.
However, assuming you are using EPSG:3857 and not drawing extremely big circles or very close to the poles, the Circle will be a good representation of a circular area.
ol.proj.METERS_PER_UNIT
ol.proj.METERS_PER_UNIT is just a conversion table between meters and some other units. ol.proj.METERS_PER_UNIT['m'] will always return 1, since the unit 'm' is meters. EPSG:3857 uses meters as units, but as noted they are distorted towards the poles.
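That is why the line from the question is a no-op; illustratively:
//ol.proj.METERS_PER_UNIT['m'] === 1, so this multiplication changes nothing:
var radiusInMeters = circleRadius * ol.proj.METERS_PER_UNIT['m']; // === circleRadius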
Solution (use after reading and understanding the above)
To get the actual on-the-ground radius of an ol.geom.Circle, you must find the distance between the center of the circle and a point on its edge. This can be done using ol.Sphere:
var center = geometry.getCenter();
var radius = geometry.getRadius();
var edgeCoordinate = [center[0] + radius, center[1]];
var wgs84Sphere = new ol.Sphere(6378137);
var groundRadius = wgs84Sphere.haversineDistance(
ol.proj.transform(center, 'EPSG:3857', 'EPSG:4326'),
ol.proj.transform(edgeCoordinate, 'EPSG:3857', 'EPSG:4326')
);
More options
If you wish to add a geometry representing a circular area on the globe, you should consider using the method from the Tissot example above: defining a regular polygon with enough points to appear smooth. That makes it transferable between projections, and it appears to be what you are doing server-side. OpenLayers 3 enables this via ol.geom.Polygon.circular:
var circularPolygon = ol.geom.Polygon.circular(wgs84Sphere, center, radius, 64);
There is also ol.geom.Polygon.fromCircle, which takes an ol.geom.Circle and transforms it into a Polygon representing the same area.
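For that conversion, a one-line sketch (circleGeometry stands in for your existing ol.geom.Circle; 64 is the number of polygon sides):
//Convert a projected-plane circle into a polygon covering the same map area.
var circlePolygon = ol.geom.Polygon.fromCircle(circleGeometry, 64);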
My answer complements the great answer by Alvin.

Imagine you want to draw a circle of a given radius (in meters) around a point feature; in my particular case, a 200 m circle around a moving vehicle. If this circle has a small diameter (less than a few kilometers), you can ignore the earth's curvature. Then you can use the "Circle" marker in the style function of your point feature.
Here is my style function:
private pointStyle(feature: Feature, resolution: number): Array<Style> {
const viewProjection = map.getView().getProjection();
const coordsInViewProjection = (<Point>(feature.getGeometry())).getCoordinates();
const longLat = toLonLat(coordsInViewProjection, viewProjection);
const latitude_rad = longLat[1] * Math.PI / 180.;
const circle = new Style({
image: new CircleStyle({
stroke: new Stroke({color: '#7c8692'}),
radius: this._circleRadius_m / (resolution / viewProjection.getMetersPerUnit() * Math.cos(latitude_rad)),
}),
});
return [circle];
}
The trick is to scale the radius by the cosine of the latitude. This "locally" cancels the distortion effect we can observe in the Tissot example.
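A quick sanity check of the cosine factor with made-up numbers: at latitude 60°, cos ≈ 0.5, so the same 200 m ground radius needs roughly twice as many pixels as at the equator:
//Illustrative only: pixel radius for a 200 m ground circle in EPSG:3857.
const groundRadiusM = 200;
const resolution = 10; //nominal map units (meters) per pixel
const latRad = 60 * Math.PI / 180;
const pixelRadius = groundRadiusM / (resolution * Math.cos(latRad));
console.log(pixelRadius); //≈ 40 px, versus 20 px at the equator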