How to use Secure String Parameters in Logic Apps

I don't understand the purpose of the greyed-out Actual Value field in the Logic Apps Designer. I guess it might be relevant when dealing with a Secure String, especially since this message comes up when entering a value in Default Value for a parameter of this type:
It is not recommended to set a default value for type 'SecureString'
because it will be stored as plain text.
I imagine there might be a way to pass the actual ApiKey value via a different mechanism at runtime, which would supersede my default string "secret", but I have no idea how to do this. For instance, if my Logic App is on an HTTP Trigger, would passing a header or query parameter called "ApiKey" with the actual key work?
Where is this process documented for all Trigger types?
And still, what is the purpose of the Actual Value field in Designer?
Edit following @hury-shen's comment dated 2020-10-29 below:
This is what I've tried:
However, this (still) returns:
Operation failed: The request content is not valid and could not be
deserialized: 'Could not find member 'reference' on object of type
'FlowTemplateParameter'. Path
'properties.parameters.ApiKey.reference', line 1, position 3710.'.
when I try to switch back to Designer mode.

When we define parameters in a logic app, the value will be visible in the code view of the logic app even if we choose the "Secure String" type. If you want to use a username, password or secret as a parameter, you can store it in Azure Key Vault and then reference the secret in the deployment template's parameter file (used at deployment time, not in the workflow definition itself) like this:
{
    "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
    "contentVersion": "1.0.0.0",
    // Template parameter values
    "parameters": {
        "<parameter-name-1>": {
            "value": "<parameter-value>"
        },
        "<parameter-name-2>": {
            "value": "<parameter-value>"
        },
        "<secured-parameter-name>": {
            "reference": {
                "keyVault": {
                    "id": "/subscriptions/<Azure-subscription-ID>/resourceGroups/<Azure-resource-group-name>/providers/Microsoft.KeyVault/vaults/<key-vault-name>"
                },
                "secretName": "<secret-name>"
            }
        },
        <other-parameter-values>
    }
}
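For completeness, here is a minimal sketch of the matching declaration inside the workflow definition itself, where the secured parameter is consumed with the expression @parameters('ApiKey'). The action name and URI below are made up for illustration:

"definition": {
    "parameters": {
        "ApiKey": {
            "type": "SecureString"
        }
    },
    "actions": {
        "Call_backend": {
            "type": "Http",
            "inputs": {
                "method": "GET",
                "uri": "https://example.com/api",
                "headers": {
                    "ApiKey": "@parameters('ApiKey')"
                }
            },
            "runAfter": {}
        }
    }
}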

Related

How to index blob content with existing "content" field that is Collection(Edm.String)?

I can successfully index documents (PDFs, etc.) from blob storage with Azure Search, and by default the extracted text goes into a field called content.
But what I want to achieve is:
index the blob file content to a field called fileContent (Edm.String)
have a field for other uses called content (Collection(Edm.String))
And I cannot make this work without an error. I've tried everything with some success, but from what I can tell it's not possible to redirect the data to a field other than content while also having a content field defined as Collection(Edm.String).
Here's what I've tried:
Have output field mappings set up so that the content goes into a field called "fileContent". For example:
"outputFieldMappings": [
{
"sourceFieldName": "/document/content",
"targetFieldName": "fileContent"
}
]
This works fine and the content of the file goes into the fileContent field defined as Edm.String. However, if I add a custom field called content to my index defined as Collection(Edm.String), I get an exception during the indexing operation:
The data field 'content' in the document with key '1234' has an invalid value of type 'Edm.String' (String maps to Edm.String). The expected type was 'Collection(Edm.String)'.
Why does it care what my data type for content is when I'm mapping this to a different field?
I have verified that if I make the content field just Edm.String, I don't get an error, but now I have duplicate entries in the index since both content and fileContent contain the same information.
According to the documentation it's possible to change the field from content to something else (but then it doesn't tell you how):
A content field is common to blob content. It contains the text extracted from blobs. Your definition of this field might look similar to the one above. You aren't required to use this name, but doing so lets you take advantage of implicit field mappings. The blob indexer can send blob contents to a content Edm.String field in the index, with no field mappings required.
I've also tried using normal (non-output) fieldMappings to redirect the input content field to fileContent, but I end up with the same error if content is also defined as Collection(Edm.String):
{
    "sourceFieldName": "content",
    "targetFieldName": "fileContent",
    "mappingFunction": null
}
I've also tried redirecting this content through a skillset, but even though I can capture that output in a custom field, as soon as I add the content field (Collection(Edm.String)) everything explodes.
Any pointers are much appreciated.
Update: It turns out that the above (non-output) fieldMapping does work as long as the fileContent type is just Edm.String. However, if you want to add a skillset to process this data, that data needs to be redirected to yet another field. It will not allow you to redirect that back to fileContent, and you end up with an error like: "Target Parameter name: Enrichment target name 'fileContent' collides with existing '/document/fileContent'". So it seems you end up being required to store the raw blob document data in one field, and if you want to process it, that requires another field, which is quite annoying.
The indexer will try to index as much content as possible by matching index field names; that's why it attempts to put the blob content string into the content collection field of the index (and fails).
To get around this you need to add a (non-output) field mapping from content to another name that's not an index field name, such as blobContent, to prevent the indexer from being too eager (the corresponding field mapping is sketched after the skill example below). Then in the skillset you can use blobContent by either
replacing all occurrences of /document/content with /document/blobContent, or
setting a value for /document/content which is only accessible within the skillset (and output field mappings), with a conditional skill to minimize other changes to your skillset
{
    "#odata.type": "#Microsoft.Skills.Util.ConditionalSkill",
    "context": "/document",
    "inputs": [
        { "name": "condition", "source": "= true" },
        { "name": "whenTrue", "source": "/document/blobContent" },
        { "name": "whenFalse", "source": "= null" }
    ],
    "outputs": [ { "name": "output", "targetName": "content" } ]
}
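For reference, the indexer-level field mapping mentioned above might look like this (blobContent is just an example name, using the same mapping syntax shown in the question):

"fieldMappings": [
    {
        "sourceFieldName": "content",
        "targetFieldName": "blobContent"
    }
]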

Azure logic apps: Nullable JSON values not available as dynamic content

I'm building a logic app that pulls some JSON data from a REST API, parses it with the Parse JSON block, and pushes it to Azure Log Analytics. The main problem I'm hitting is that an important JSON field can be either an object or null. Per this post, I changed the relevant part of my JSON schema to something like this:
"entity": {"type": ["object", "null"] }
While this works, I'm now no longer able to access entity later in the logic app as dynamic content. I can access all other fields parsed by the Parse JSON block downstream in the logic app (those that don't have a nullable type). If I remove the "null" option and just have the type set to object, I can access entity in dynamic content once again. Does anyone know why this might be happening and/or how to access the entity field downstream?
Through testing, if we use "entity": {"type": ["object", "null"] }, we really cannot directly select entity in dynamic content.
But we can use the following expression to get the entity:
body('Parse_JSON')?['entity']
The test results show no problems.
For a better understanding, let me give a few more examples:
1. If your json is like this:
{
    "entity": {
        "testKey": "testValue"
    }
}
Your expression is like this:
body('Parse_JSON')?['entity']
2. If your json is like this:
{
    "test": {
        "entity": {
            "testKey": "testValue"
        }
    }
}
Your expression should look like this:
body('Parse_JSON')?['test']?['entity']
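As a minimal sketch (the Compose action and its name are my own illustration, not from the original answer), in code view you could feed this expression into a Compose action and use its output downstream:

"Compose_entity": {
    "type": "Compose",
    "inputs": "@body('Parse_JSON')?['test']?['entity']",
    "runAfter": {
        "Parse_JSON": [ "Succeeded" ]
    }
}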

How to handle null or Non required column in json in Azure Logic App

Sample API URL: https://jsonplaceholder.typicode.com/posts
title is not a mandatory field in the received JSON. It may or may not be part of each record.
When this field is missing from a record, @{items('For_each')['title']} throws an exception.
I want the value of myVariable to be set to 'N/A' in that case. How do I do this?
I make the assumption that this is an array and that you have a set schema in the HTTP trigger. If the schema is set, make sure you remove Title as a required field.
Using these assumptions, you should be able to do the following with Coalesce():
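A sketch of the expression (reconstructing what the missing screenshot most likely showed):

coalesce(items('For_each')?['title'], 'N/A')

Here the ? operator returns null instead of failing when the property is absent, and coalesce() then substitutes 'N/A'.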
If Title is now not present in the body of the HTTP request, Title will be equal to 'N/A'.
Using Postman to test; note that the result is backward, as it is the first object sent in my array.
Because your URL's data always has the title, I tested with a "When a HTTP request is received" trigger to get the JSON data.
In this situation the data could only be like this:
{
    "userId": 1,
    "id": 2,
    "body": "xxxxx"
}
not like the below one:
{
    "userId": 1,
    "id": 2,
    "title": "xxxxx",
    "body": "xxxxxxxxx"
}
I tested with the first one, and it did show the error message: property 'title' doesn't exist. So here is my solution: after the Set variable action, add an action to set the variable you want, and configure that action to run after Set variable has failed, like in the picture below.
After this configuration, if the title doesn't exist, the variable will be set to N/A as you want.
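A sketch of what that fallback action might look like in code view (the action names here are hypothetical; the key part is the runAfter condition):

"Set_variable_fallback": {
    "type": "SetVariable",
    "inputs": {
        "name": "myVariable",
        "value": "N/A"
    },
    "runAfter": {
        "Set_variable": [ "Failed" ]
    }
}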
Hope this helps. If this is not what you want or you have other questions, please let me know.

Using search.in with all

The following statement finds all profiles that have Facebook or Twitter, and it works:
$filter=SocialAccounts/any(x: search.in(x, 'Facebook,Twitter'))
But I can't find any samples for finding all profiles that have both Facebook and Twitter. I tried:
$filter=SocialAccounts/all(x: search.in(x, 'Facebook,Twitter'))
But this is not a valid query.
Azure Search does not support the type of ‘all’ filter that you’re looking for. Using search.in with ‘all’ would be equivalent to using OR, but Azure Search can only handle AND in the body of an ‘all’ lambda (which is equivalent to OR in the body of an ‘any’ lambda).
You might try a workaround like this:
$filter=tags/any(t: t eq 'Facebook') and tags/any(t: t eq 'Twitter')
However, this isn't actually equivalent to using all with search.in. The query as expressed using all is matching documents where every social account is strictly either Facebook or Twitter. If any other social account is present, the document won’t match. The workaround doesn’t have this property. A document must have at least Facebook and Twitter in order to match, but not exclusively those. This is certainly a valid scenario; it just isn't the same as using all with search.in, which was the original question.
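To make the difference concrete with hypothetical documents: a profile with ['Facebook', 'Twitter'] matches under both interpretations; a profile with ['Facebook', 'Twitter', 'LinkedIn'] matches the workaround but not the strict all with search.in semantics; and a profile with only ['Facebook'] matches neither.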
No matter how you try to rewrite the query, you won’t be able to express an equivalent to the all query. This is a limitation due to the way Azure Search stores collections of strings and other primitive types in the inverted index.
Please vote on user voice to help prioritize:
https://feedback.azure.com/forums/263029-azure-search/suggestions/37166749-efficient-way-to-express-a-true-all
A possible workaround is to use the new Complex Types feature, which does allow more expressive filters inside lambda expressions. For example, if you model tags as objects with a single value property instead of as a collection of strings, you should be able to execute a filter like this:
$filter=tags/all(t: search.in(t/value, 'Facebook,Twitter'))
In the REST API, you'd define tags like this:
{
    "name": "myindex",
    "fields": [
        ...
        {
            "name": "tags",
            "type": "Collection(Edm.ComplexType)",
            "fields": [
                { "name": "value", "type": "Edm.String", "filterable": true }
            ]
        }
    ]
}
Note that this feature is in preview at the time of this writing, but will be generally available (and publicly documented) soon.

URL with reference to object from HATEOAS REST response in AngularJS

I am using the @RepositoryRestResource annotation to expose Spring Data JPA repositories as a RESTful service. It works great. However, I am struggling with referencing a specific entity within my Angular app.
As is known, Spring Data REST doesn't serialise the @Id of the entity, but the HAL response contains links to entities (_links.self, _embedded.projects[]._links.self), like in the following example:
{
    "_links": {
        "self": {
            "href": "http://localhost:8080/api/projects{?page,size,sort}",
            "templated": true
        }
    },
    "_embedded": {
        "projects": [
            {
                "name": "Sample Project",
                "description": "lorem ipsum",
                "_links": {
                    "self": {
                        "href": "http://localhost:8080/api/projects/1f888ada-2c90-48bc-abbe-762d27842124"
                    }
                }
            },
            ...
My Angular application needs to put some kind of reference to a specific project entity in the URL, like http://localhost/angular-app/#/projects/{id}. I don't think using the href is a good idea. The UUID (@Id) seems better but is not explicitly listed as a field. This is the point where I got stuck. After reading tons of articles I came up with two ideas, but I don't consider either of them perfect:
Idea 1:
Explicitly enable serialisation of the @Id field and just use it to reference the object.
Caveat: exposes database-specific innards to the front end.
Idea 2:
Keep the @Id field internal and create an extra "business identifier" field which can be used to identify a specific object.
Caveat: Extra field in table (wasting space).
I would appreciate your comments on this. Maybe I am just unnecessarily reserved about implementing either of the presented ideas; maybe there is a better one.
To give you another option, there is a special wrapper for Angular+Spring Data Rest that could probably help you out:
https://github.com/guylabs/angular-spring-data-rest
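If you do go with Idea 1, here is a minimal sketch of enabling id serialisation in Spring Data REST (the entity name Project is assumed, and the exact configurer method signature varies between Spring Data REST versions):

import org.springframework.context.annotation.Configuration;
import org.springframework.data.rest.core.config.RepositoryRestConfiguration;
import org.springframework.data.rest.webmvc.config.RepositoryRestConfigurer;

@Configuration
public class RestConfig implements RepositoryRestConfigurer {

    @Override
    public void configureRepositoryRestConfiguration(RepositoryRestConfiguration config) {
        // Expose the @Id field for the Project entity in REST responses
        config.exposeIdsFor(Project.class);
    }
}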
