Azure Logic Apps: Nullable JSON values not available as dynamic content - azure-logic-apps

I'm building a logic app that pulls some JSON data from a REST API, parses it with the Parse JSON block, and pushes it to Azure Log Analytics. The main problem I'm hitting is that an important JSON field can be either an object or null. Per this post, I changed the relevant part of my JSON schema to something like this:
"entity": {"type": ["object", "null"] }
While this works, I'm now no longer able to access entity later in the logic app as dynamic content. I can access all the other fields parsed by the Parse JSON block downstream in the logic app (the ones that don't have a nullable type). If I remove the "null" option and just have the type set to object, I can access entity in dynamic content once again. Does anyone know why this might be happening and/or how to access the entity field downstream?

Testing confirms that if we use "entity": {"type": ["object", "null"] }, we really cannot directly select entity in dynamic content.
But we can use the following expression to get the entity:
body('Parse_JSON')?['entity']
In testing, this expression returns the entity without any problem.
For a better understanding, let me cite a few more examples:
1. If your JSON is like this:
{
  "entity": {
    "testKey": "testValue"
  }
}
Your expression is like this:
body('Parse_JSON')?['entity']
2. If your JSON is like this:
{
  "test": {
    "entity": {
      "testKey": "testValue"
    }
  }
}
Your expression should look like this:
body('Parse_JSON')?['test']?['entity']
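The ? operator is also what keeps downstream actions from failing when entity is null. As a sketch (reusing testKey from example 1), you can chain it to read a key inside the nullable object, or wrap the expression in coalesce to substitute an empty object when entity is null:
body('Parse_JSON')?['entity']?['testKey']
coalesce(body('Parse_JSON')?['entity'], json('{}'))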

Related

Using search.in with all

The following statement finds all profiles that have Facebook or Twitter, and it works:
$filter=SocialAccounts/any(x: search.in(x, 'Facebook,Twitter'))
But I can't find any samples for finding all profiles that have both Facebook and Twitter. I tried:
$filter=SocialAccounts/all(x: search.in(x, 'Facebook,Twitter'))
But this is not a valid query.
Azure Search does not support the type of ‘all’ filter that you’re looking for. Using search.in with ‘all’ would be equivalent to using OR, but Azure Search can only handle AND in the body of an ‘all’ lambda (which is equivalent to OR in the body of an ‘any’ lambda).
You might try a workaround like this:
$filter=tags/any(t: t eq 'Facebook') and tags/any(t: t eq 'Twitter')
However, this isn't actually equivalent to using all with search.in. The query as expressed using all is matching documents where every social account is strictly either Facebook or Twitter. If any other social account is present, the document won’t match. The workaround doesn’t have this property. A document must have at least Facebook and Twitter in order to match, but not exclusively those. This is certainly a valid scenario; it just isn't the same as using all with search.in, which was the original question.
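To make the difference concrete, consider two hypothetical documents (not from the original question):
{ "tags": ["Facebook", "Twitter"] }
{ "tags": ["Facebook", "Twitter", "LinkedIn"] }
The all/search.in query would match only the first document, since every tag must be in the list; the workaround above matches both, because each document has at least Facebook and Twitter.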
No matter how you try to rewrite the query, you won’t be able to express an equivalent to the all query. This is a limitation due to the way Azure Search stores collections of strings and other primitive types in the inverted index.
Please vote on UserVoice to help prioritize:
https://feedback.azure.com/forums/263029-azure-search/suggestions/37166749-efficient-way-to-express-a-true-all
A possible workaround is to use the new Complex Types feature, which does allow more expressive filters inside lambda expressions. For example, if you model tags as objects with a single value property instead of as a collection of strings, you should be able to execute a filter like this:
$filter=tags/all(t: search.in(t/value, 'Facebook,Twitter'))
In the REST API, you'd define tags like this:
{
  "name": "myindex",
  "fields": [
    ...
    {
      "name": "tags",
      "type": "Collection(Edm.ComplexType)",
      "fields": [
        { "name": "value", "type": "Edm.String", "filterable": true }
      ]
    }
  ]
}
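With that definition, a hypothetical document would carry its tags as objects with a single value property rather than as bare strings:
{ "tags": [ { "value": "Facebook" }, { "value": "Twitter" } ] }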
Note that this feature is in preview at the time of this writing, but will be generally available (and publicly documented) soon.

Read JSON from REST API as is with Azure Data Factory

I'm trying to get Azure Data Factory to read my REST API and put it in SQL Server. The source is a REST API and the sink is a SQL Server table.
I tried to do something like:
"translator": {
"type": "TabularTranslator",
"schemaMapping": {
"$": "json"
},
"collectionReference": "$.tickets"
}
The source looks like:
{ "tickets": [ {... }, {...} ] }
Because of the poor mapping capabilities, I'm choosing this path; I'll then split the data with a query. Preferably, I'd like to store each object inside tickets as a row containing the JSON of that object.
In short, how can I get the JSON output from the RestSource to a SqlSink single column text/nvarchar(max) column?
I managed to solve the same issue by modifying the mapping manually.
ADF tries to parse the JSON anyway, but in Advanced mode you can edit the JSON paths. For example, this is the original schema parsed automatically by ADF:
https://imgur.com/Y7QhcDI
Once opened in Advanced mode, it shows the full paths, with element indexes added, something similar to $tickets[0][] etc.
Delete all the other columns and keep only the highest-level one, $tickets (in my case it was $value: https://i.stack.imgur.com/WnAzC.jpg). As a result, the entire JSON will be written into the destination column.
If there are pagination rules in place, each page will be written as a single row.
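For reference, a minimal sketch of what the edited mapping might look like in the copy activity's translator (the sink column name TicketsJson is an assumption; use whatever your nvarchar(max) column is called):
"translator": {
  "type": "TabularTranslator",
  "mappings": [
    {
      "source": { "path": "$['tickets']" },
      "sink": { "name": "TicketsJson" }
    }
  ]
}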

How do I import my JSON array into Firebase as a FirebaseArray?

I have a large JSON file which contains an array. I am using Firebase for my app's backend and I want to use FirebaseArray to store the data.
It is simple to create a FirebaseArray from my Angular app and add data to it, but the nature of my app is that I have fetched data which I need to first import into Firebase somehow.
On the Firebase website, the only option for importing is from a JSON file. When I import my JSON file, the result is an object with numerical keys, which I realize is like an array, but it has a major issue.
{
  "posts": {
    "0": {
      "id": "iyo0iw",
      "title": "pro patria mori"
    },
    "1": {
      "id": "k120iw",
      "title": "an english title"
    },
    "2": {
      "id": "p6124w",
      "title": "enim pablo espa"
    }
  }
}
Users are able to change the position of items, and the position of an item is also how items are uniquely identified. With multiple users this means the following problem can occur.
Sarah: Change post[0] title to "Hello everyone"
Trevor: Swap post[1] position with post[2]
Sarah: Change post[1] title to "This is post at index 1 right?"
If these actions happen in a short space of time, Firebase doesn't know for sure what Sarah saw as post[1] when she changed the title, and can't know for sure which post object to update.
What I want is a way to import my JSON file and have the arrays become FirebaseArrays, not objects with numerical keys, which are like arrays and share the issue described above.
What you imported into your database is, in fact, an array. Firebase Realtime Database only really represents data as a nested hierarchy of key/value pairs. An array is just a set of key/value pairs where the keys are all numbers, typically starting at 0. That's exactly the structure you're showing in your question.
To generate the sort of data that would be created by writing to the database using an AngularFire FirebaseArray, you would need to pre-process your JSON.
Firebase push IDs are generated on the client and you can generate one by calling push without arguments.
You could convert an array to an object with Firebase push ID keys like this:
let arr = ["alice", "bob", "mallory"];
// Build an object keyed by client-generated push IDs instead of array indexes.
let obj = arr.reduce((acc, val) => {
  // push() with no arguments generates a key locally; nothing is written yet.
  let key = firebase.database().ref().push().key;
  acc[key] = val;
  return acc;
}, {});
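Applied to the posts JSON from the question, a minimal pre-processing sketch might look like this (the data.json file name and the posts path are assumptions based on the structure shown above; assumes a Node script with the namespaced Firebase SDK already initialized):
// Re-key the "posts" entries under push IDs, then write them back to the database.
const data = require('./data.json'); // { "posts": { "0": {...}, "1": {...}, ... } }
const posts = Object.values(data.posts).reduce((acc, post) => {
  const key = firebase.database().ref().push().key;
  acc[key] = post;
  return acc;
}, {});
firebase.database().ref('posts').set(posts);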

URL with reference to object from HATEOAS REST response in AngularJS

I am using the @RepositoryRestResource annotation to expose Spring Data JPA as a RESTful service. It works great. However, I am struggling with referencing a specific entity within the Angular app.
As is known, Spring Data REST doesn't serialise the entity's @Id, but the HAL response contains links to entities (_links.self, _embedded.projects[]._links.self), as in the following example:
{
  "_links": {
    "self": {
      "href": "http://localhost:8080/api/projects{?page,size,sort}",
      "templated": true
    }
  },
  "_embedded": {
    "projects": [
      {
        "name": "Sample Project",
        "description": "lorem ipsum",
        "_links": {
          "self": {
            "href": "http://localhost:8080/api/projects/1f888ada-2c90-48bc-abbe-762d27842124"
          }
        }
      },
      ...
My Angular application needs to put some kind of reference to a specific project entity in the URL, like http://localhost/angular-app/#/projects/{id}. I don't think using the href is a good idea. The UUID (@Id) seems better but is not explicitly listed as a field. This is the point where I got stuck. After reading tons of articles I came up with two ideas, but I don't consider either of them perfect:
Idea 1:
Explicitly enable serialisation of the @Id field and just use it to reference the object.
Caveat: exposes database-specific innards to the front end.
Idea 2:
Keep the @Id field internal and create an extra "business identifier" field which can be used to identify a specific object.
Caveat: an extra field in the table (wasting space).
I would appreciate your comments on this. Maybe I am just being unnecessarily reserved about implementing either of the presented ideas, or maybe there is a better one.
To give you another option, there is a special wrapper for Angular+Spring Data Rest that could probably help you out:
https://github.com/guylabs/angular-spring-data-rest
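If you do end up keying off the self link after all, a minimal client-side sketch (a hypothetical helper, not part of the library above) is to take the trailing segment of the href as the route parameter:
// Extract the last path segment of a HAL resource's self link, e.g. the UUID above.
function idFromSelfLink(resource) {
  var href = resource._links.self.href;
  return href.substring(href.lastIndexOf('/') + 1);
}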

Angular alphabetizes GET response

I am currently trying to make an interactive table in Angular that reflects table information from a SQL database.
The stack I am using is MSSQL, Express.js, and AngularJS. When I log the response in Node, the data is in the desired order. However, when I log the data from .success(function(data)), the fields are alphabetized and the rows are put in random order.
I am sending a JSON object (an array of rows, e.g. {"b":"blah","a":"aye"}). However, the row is received in Angular as {"a":"aye","b":"blah"}.
Desired effect: use the column and row ordering from the SQL query in the client view, and remove whatever "magic" Angular is using to reorder the information.
In JavaScript, the properties of an object do not have a guaranteed order. You need to send a JSON array instead:
["blah", "aye"]
If you need the column names as well you can send down an array of objects:
[{ "col":"b", "value":"blah" }, { "col":"a", "value":"aye" }]
Or alternatively, an object of arrays:
{ "col": ["b", "a"], "value": ["blah", "aye"] }
Edit: After some more thought, your ideal JSON structure would probably look like this:
{
  "col": ["b", "a"],
  "row": [
    ["blah", "aye"],
    ["second", "row"],
    ["and", "so on"]
  ]
}
Now instead of getting "blah" by accessing table[0]['b'] like you would've before, you'll need to do something like table.row[0][table.col.indexOf('b')].
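A small sketch of that lookup wrapped in a helper (the name cell is just illustrative), so callers keep the server's column order without repeating the indexOf dance:
// Read one cell from the { col, row } payload by row index and column name.
function cell(table, rowIndex, colName) {
  return table.row[rowIndex][table.col.indexOf(colName)];
}
cell(table, 0, 'b'); // "blah"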
