I am new to Azure maps and reading through the documentation.
This blurb describes Points, Features and Shapes.
But it doesn't really help me understand why I would use one over the other. Can someone help me understand the differences and/or point me to some articles that shed light on the subject?
Please give the explanation below a try and see if it helps. It describes the geometry types available in the GeoJSON standard and the difference between a point, a feature, and a shape.
Azure Maps, like many other map libraries, uses the GeoJSON format to encode geographic data structures.
This format includes Geometry, Feature and FeatureCollection objects.
Geometry:
GeoJSON supports different geometry types:
Point
MultiPoint
LineString
MultiLineString
Polygon
MultiPolygon
GeometryCollection
These geometry types, except for the GeometryCollection, are represented in a Geometry object with the following properties:
type: A GeoJSON type descriptor
coordinates: A collection of coordinates
Example: Point Geometry object
{
"type": "Point",
"coordinates": [0, 0]
}
The GeometryCollection is also a Geometry object, but with the following properties:
type: A GeoJSON type descriptor with the value "GeometryCollection"
geometries: A collection of Geometry objects
Example: GeometryCollection Geometry object
{
"type": "GeometryCollection",
"geometries": [
{
"type": "Point",
"coordinates": [0, 0]
},
// N number of Geometry objects
]
}
Feature:
A Feature object wraps a Geometry object and carries additional properties:
type: A GeoJSON type descriptor with the value "Feature"
geometry: The Geometry object
properties: Any number of additional properties
Example: Point Feature object
{
"type": "Feature",
"geometry": {
"type": "Point",
"coordinates": [0, 0]
},
"properties": {
"name": "Null Island"
// N number of additional properties
}
}
Feature collection:
Sets of Feature objects are contained in FeatureCollection objects:
type: A GeoJSON type descriptor with the value "FeatureCollection"
features: A collection of Feature objects
Example: FeatureCollection with a Point Feature object
{
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"geometry": {
"type": "Point",
"coordinates": [0, 0]
},
"properties": {
"name": "Null Island"
// N number of additional properties
}
}
// N number of Feature objects
]
}
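Because these GeoJSON objects are plain data structures, you can assemble them with ordinary JavaScript objects before handing them to a map library. A minimal sketch, with no Azure Maps APIs involved (the helper name makePointFeature is made up for illustration):

```javascript
// Build a GeoJSON Point Feature for a given coordinate pair and name.
function makePointFeature(coordinates, name) {
    return {
        type: "Feature",
        geometry: { type: "Point", coordinates: coordinates },
        properties: { name: name }
    };
}

// Collect Feature objects into a FeatureCollection.
var collection = {
    type: "FeatureCollection",
    features: [
        makePointFeature([0, 0], "Null Island"),
        makePointFeature([4.9, 52.37], "Amsterdam")
    ]
};

console.log(collection.features.length);             // 2
console.log(collection.features[0].properties.name); // Null Island
```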
Shape:
Since GeoJSON objects are only geographic data structures and have no functionality on their own, Azure Maps provides the Shape helper class to make it easy to update and maintain them.
The Shape class wraps a Geometry or a Feature.
Geometry: Base class that constructs a GeoJSON Geometry object.
Feature: Class that constructs a GeoJSON Feature object.
Examples.
Creating a Shape by passing in a Geometry and an object containing properties.
var shape1 = new atlas.Shape(
    new atlas.data.Point([0, 0]),
    null, // optional id
    {
        myProperty: 1
        // N number of additional properties
    }
);
Creating a Shape using a Feature.
var shape2 = new atlas.Shape(
    new atlas.data.Feature(new atlas.data.Point([0, 0]), {
        myProperty: 1
        // N number of additional properties
    })
);
Related
I need to loop through this optional array (it's only this section of the JSON I have trouble with).
As you can see from the code:
The optional bullseye object has an array rings. Each ring has an array expansionCriteria, and may or may not have actions.
How do I iterate over the rings and get every type and threshold in expansionCriteria? I also need to access all skillsToRemove under actions, when available.
I am rather new to Logic Apps, so any help is appreciated.
"bullseye": {
"rings": [
{
"expansionCriteria": [
{
"type": "TIMEOUT_SECONDS",
"threshold": 180
}
],
"actions": {
"skillsToRemove": [
{
"name": "Claims Foundation",
"id": "60bd469a-ebab-4958-9ca9-3559636dd67d",
"selfUri": "/api/v2/routing/skills/60bd469a-ebab-4958-9ca9-3559636dd67d"
},
{
"name": "Claims Advanced",
"id": "bdc0d667-8389-4d1d-96e2-341e383476fc",
"selfUri": "/api/v2/routing/skills/bdc0d667-8389-4d1d-96e2-341e383476fc"
},
{
"name": "Claims Intermediate",
"id": "c790eac3-d894-4c00-b2d5-90cd8a69436c",
"selfUri": "/api/v2/routing/skills/c790eac3-d894-4c00-b2d5-90cd8a69436c"
}
]
}
},
{
"expansionCriteria": [
{
"type": "TIMEOUT_SECONDS",
"threshold": 5
}
]
}
]
}
Please let me know if you need more info.
To generate the schema, you can remove the object name at the top of the JSON: "bullseye":
Thanks to pramodvalavala-msft for posting an answer in Microsoft Q&A on a similar thread:
"As you are working with a JSON Object instead of an Array, unfortunately there is no built-in function to loop over the keys. There is a feature request to add a method to extract keys from an object for scenarios like this, which you could upvote to help it gain more traction.
You can use the inline code action to extract the keys from your object as an array (using Object.keys()). Then you can loop over this array with a foreach loop to extract the object you need from the main object, which you could then use to create records in Dynamics."
For more information you can refer to the links below:
- How to loop and extract items from a nested JSON array in Logic Apps
- Nested ForEach Loop in Workflow
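Outside of Logic Apps, the same traversal is easy to sketch in plain JavaScript, which also shows which parts need a guard because they are optional (the sample data below is abbreviated from the question's JSON):

```javascript
// Abbreviated sample of the question's bullseye JSON.
var bullseye = {
    rings: [
        {
            expansionCriteria: [{ type: "TIMEOUT_SECONDS", threshold: 180 }],
            actions: {
                skillsToRemove: [{ name: "Claims Foundation" }]
            }
        },
        {
            expansionCriteria: [{ type: "TIMEOUT_SECONDS", threshold: 5 }]
        }
    ]
};

var criteria = []; // every { type, threshold } pair
var skills = [];   // names of all skills to remove, where present

bullseye.rings.forEach(function (ring) {
    (ring.expansionCriteria || []).forEach(function (c) {
        criteria.push({ type: c.type, threshold: c.threshold });
    });
    // "actions" is optional, so guard before reading skillsToRemove.
    if (ring.actions && ring.actions.skillsToRemove) {
        ring.actions.skillsToRemove.forEach(function (s) {
            skills.push(s.name);
        });
    }
});

console.log(criteria.length); // 2
console.log(skills);          // [ 'Claims Foundation' ]
```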
Here is an example of a Solr heatmap response:
{
"responseHeader": {
"params": {
"q": "*:*",
"facet.heatmap": "location_p",
"facet.heatmap.geom": "[\"0.6247379779815674 51.52351760864258\" TO \"5.051644802093506 51.570556640625\"]",
"facet.heatmap.distErrPct": "0.28",
"facet": "true",
"wt": "json"
}
},
"response": {
"numFound": 5876,
"start": 0,
"docs": [
// docs...
]
},
"facet_counts": {
"facet_queries": {},
"facet_fields": {},
"facet_ranges": {},
"facet_intervals": {},
"facet_heatmaps": {
"location_p": [
"gridLevel",
4,
"columns",
14,
"rows",
1,
"minX",
0.3515625,
"maxX",
5.2734375,
"minY",
51.50390625,
"maxY",
51.6796875,
"counts_ints2D",
[
// heatmap...
]
]
}
}
}
The bounds in facet_heatmaps (minX, maxY, ...) are not equal to the bounds passed in params. Is there a way to force Solr to build the heatmap with the specified bounds?
No, there is no way to force a Solr heatmap facet response to match the exact bounds that are passed in. The response is computed from the facet counts of the underlying prefix-tree spatial grid implementation.
Try experimenting with different distErrPct values, or switching the prefixTree from geohash to quad, to obtain a finer-grained response.
From the Solr documentation:
You’ll experiment with different distErrPct values (probably 0.10 - 0.20) with various input geometries till the default size is what you’re looking for. The specific details of how it’s computed isn’t important. For high-detail grids used in point-plotting (loosely one cell per pixel), set distErr to be the number of decimal-degrees of several pixels or so of the map being displayed. Also, you probably don’t want to use a geohash based grid because the cell orientation between grid levels flip-flops between being square and rectangle. Quad is consistent and has more levels, albeit at the expense of a larger index.
https://lucene.apache.org/solr/guide/6_6/spatial-search.html#SpatialSearch-HeatmapFaceting
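For example, lowering distErrPct in the request asks Solr for a finer grid. The parameter names are from the Solr spatial-search documentation; the field name location_p and the bounding box come from the response above:

```
q=*:*
&facet=true
&facet.heatmap=location_p
&facet.heatmap.geom=["0.62 51.52" TO "5.05 51.57"]
&facet.heatmap.distErrPct=0.10
```

Note that the returned grid cells will still snap to the prefix-tree's own cell boundaries; distErrPct only controls how fine that grid is.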
Is it possible to add a custom property to "Actor" via the Tin Can API, so that it is saved
in the LRS?
Detail:
I am using Learning Locker as the LRS system and the Tin Can API module for Drupal. As is well known, there are three main objects inside the statement record that is saved in the LRS: [Actor - Verb - Object].
The Actor has two properties: [name - mbox].
I tried to modify the Tincan module to add a custom property, [country], but the Learning Locker LRS API refused it.
So is there a way to add additional properties that I can filter on later, like [age - gender - country], or is the standard API strict about the defined attributes?
{
"version": "1.0.0",
"actor": {
"objectType": "Agent",
"name": "Creative User",
"mbox": "mailto:register@example.com"
},
"verb": {
"id": "http://adlnet.gov/expapi/verbs/action_custom_verb",
"display": {
"en-US": "action_custom_verb"
}
},
"object": {
"objectType": "Activity",
"id": "http://localhost",
"definition": {
"name": {
"en-US": "master"
}
}
},
"authority": {
"objectType": "Agent",
"name": "drupaladmin",
"mbox": "mailto:hello@learninglocker.net"
},
"stored": "2017-02-06T16:58:23.625600+00:00",
"timestamp": "2017-02-06T16:58:23.625600+00:00",
"id": "9c1d552b-c825-4403-9c89-a9381b8d5320"
}
The standard API is strict with respect to the addition of properties. And the Agent/Group objects (which are what actor can contain) do not include the ability to expand their scope.
Additional data points can be added in special properties called extensions that are available in a couple of places in the statement's objects. In this case you could use extensions in the context property's value to include your additional information about the actor. You could do this as single discrete pieces of information where each has its own extensions key, or you could use a single key that uses an object as its value and include individual pieces of information in properties of that object. For more information about extensions see: http://tincanapi.com/deep-dive-extensions/
Note that extensions keys are not filterable via the /statements stream resource, so any querying based off of their key or value will have to be done through other means than the specification's API.
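Applied to the statement above, that could look like the fragment below. The extension IRIs are made up for illustration; any IRI you control works, and the rest of the statement stays as it was:

```json
{
    "actor": {
        "objectType": "Agent",
        "name": "Creative User",
        "mbox": "mailto:register@example.com"
    },
    "context": {
        "extensions": {
            "http://example.com/xapi/extensions/country": "NL",
            "http://example.com/xapi/extensions/age": 30,
            "http://example.com/xapi/extensions/gender": "other"
        }
    }
}
```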
I've been playing with the Entity system in Draft.js. One limitation I see is that entities have to correspond with a range of text in the content they are inserted into. I was hoping I could make a zero-length entity which would have a display based on the data in the entity rather than the text-content in the block. Is this possible?
This is possible when you have a whole block. As you can see in the code example, this serialised blockMap contains a block with no text, but its character list has one entry with an entity attached to it. There is also some discussion going on about adding metadata to a block; see https://github.com/facebook/draft-js/issues/129
"blockMap": {
"80sam": {
"key": "80sam",
"type": "sticker",
"text": "",
"characterList": [
{
"style": [],
"entity": "1"
}
],
"depth": 0
}
},
Not to confuse anybody, I'll start with validating arrays...
Regarding arrays, JSON Schema can check whether elements of an (((...)sub)sub)array conform to a structure:
"type": "array",
"items": {
...
}
When validating objects, I know I can pass certain keys with their corresponding value types, such as:
"type": "object",
"properties": {
// key-value pairs, might also define subschemas
}
But what if I've got an object whose values I want to validate, without constraining its keys?
My real-case example is that I'm configuring buttons: there might be edit, delete, add buttons and so on. They all have a specific, rigid structure, which I do have a JSON Schema for. But I don't want to limit myself to ['edit', 'delete', 'add'] only; there might be publish or print in the future. But I know they will all conform to my subschema.
Each button is:
BUTTON = {
"routing": "...",
"params": { ... },
"className": "...",
"i18nLabel": "..."
}
And I've got an object (not an array) of buttons:
{
"edit": BUTTON,
"delete": BUTTON,
...
}
How can I write such a JSON schema? Is there any way of combining object with items? (I know properties goes with objects and items goes with arrays.)
You can use additionalProperties for this. If you set additionalProperties to a schema instead of a boolean, then any properties that aren't explicitly declared using the properties or patternProperties keywords must match the given schema.
{
"type": "object",
"additionalProperties": {
... BUTTON SCHEMA ...
}
}
http://json-schema.org/latest/json-schema-validation.html#anchor64
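Putting it together with the BUTTON structure from the question, the whole schema could look like the sketch below. The types in the button subschema are guesses from the question's example, and required is an assumption; adjust both to taste:

```json
{
    "type": "object",
    "additionalProperties": {
        "type": "object",
        "properties": {
            "routing": { "type": "string" },
            "params": { "type": "object" },
            "className": { "type": "string" },
            "i18nLabel": { "type": "string" }
        },
        "required": ["routing", "i18nLabel"]
    }
}
```

Any key (edit, delete, publish, ...) is then allowed, but every value must validate against the button subschema.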