I have created a view for a "details" page, and have set up a visual query to get the required item from a query string variable.
Testing the query works, and a single item is returned in the test:
{
"Default": [
{
"Title": "The Person",
"JobTitle": "CEO",
"Organization": "The Company",
etc (exactly what is expected)
}
]
}
This is piped to the Default input of the 2sxc Target, and running the test shows that 1 item is sent to the Target.
Now, when I actually execute the module, I get "No demo item exists for the selected template.", which suggests the data is not actually reaching the module.
I have selected the query as the data source for the view.
How can I debug this?
You're running into a different problem: your view has a content-type defined, but no demo item. In that scenario, 2sxc shows the message you're seeing. So either set the view to "no content-type" or give it a demo item; then the template will run.
Related
I can successfully index documents (PDFs, etc.) from blob storage with Azure Search, and the extracted text goes into a field called content by default.
But what I want to achieve is:
index the blob file content to a field called fileContent (Edm.String)
have a field for other uses called content (Collection(Edm.String))
And I cannot make this work without an error. I've tried several approaches with partial success, but from what I can tell it's not possible to redirect the data to a field other than content while also having a content field defined as Collection(Edm.String).
Here's what I've tried:
Have output field mappings set up so that the content goes into a field called "fileContent". For example:
"outputFieldMappings": [
{
"sourceFieldName": "/document/content",
"targetFieldName": "fileContent"
}
]
This works fine and the content of the file goes into the fileContent field defined as Edm.String. However, if I add a custom field called content to my index, defined as Collection(Edm.String), I get an exception during the indexing operation:
The data field 'content' in the document with key '1234' has an invalid value of type 'Edm.String' (String maps to Edm.String). The expected type was 'Collection(Edm.String)'.
Why does it care what my data type for content is when I'm mapping this to a different field?
I have verified that if I make the content field just Edm.String I don't get an error but now I have duplicate entries in the index since both content and fileContent contain the same information.
According to the documentation it's possible to change the field from content to something else (but then it doesn't tell you how):
A content field is common to blob content. It contains the text extracted from blobs. Your definition of this field might look similar to the one above. You aren't required to use this name, but doing so lets you take advantage of implicit field mappings. The blob indexer can send blob contents to a content Edm.String field in the index, with no field mappings required.
I've also tried using normal (non-output) fieldMappings to redirect the input content field to fileContent, but I end up with the same error if content is also defined as Collection(Edm.String):
{
"sourceFieldName": "content",
"targetFieldName": "fileContent",
"mappingFunction": null
}
I've also tried redirecting this content through a skillset but even though I can capture that output in a custom field, as soon as I add the content (Collection(Edm.String)) everything explodes.
Any pointers are much appreciated.
Update: Turns out that the above (non-output) fieldMapping does work so long as the fileContent type is just Edm.String. However, if you want to add a skillset to process this data, that data needs to be redirected to yet another field. It will not allow you to redirect that back to fileContent, and you end up with an error like: "Target
Parameter name: Enrichment target name 'fileContent' collides with existing '/document/fileContent'". So it seems you end up being required to store the raw blob document data in one field, and if you want to process it, that requires yet another field, which is quite annoying.
The indexer will try to index as much content as possible by matching index field names, that's why it attempts to put the blob content string into the index field content collection (and fails).
To get around this you need to add a (non-output) field mapping from content to another name that is not an index field name, such as blobContent, to prevent the indexer from being too eager. Then in the skillset you can use blobContent by either
replacing all occurrences of /document/content with /document/blobContent, or
setting a value for /document/content which is only accessible within the skillset (and output field mappings), with a conditional skill to minimize other changes to your skillset
{
"@odata.type": "#Microsoft.Skills.Util.ConditionalSkill",
"context": "/document",
"inputs": [
{ "name": "condition", "source": "= true" },
{ "name": "whenTrue", "source": "/document/blobContent" },
{ "name": "whenFalse", "source": "= null" }
],
"outputs": [ { "name": "output", "targetName": "content" } ]
}
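For reference, the (non-output) field mapping described above might look like this in the indexer definition; blobContent is just a name chosen to avoid colliding with any index field:

```json
"fieldMappings": [
  {
    "sourceFieldName": "content",
    "targetFieldName": "blobContent"
  }
]
```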
Is there a way to filter the columns I'm getting from CDS in a Logic App, something similar to SELECT Column1, Column2 FROM table instead of SELECT * FROM table.
I have tried $select=Column1,Column2 in Filter Query without any success.
===== UPDATE =====
I'm dealing with large amounts of data and I'm trying to avoid getting throttled (Logic App content throughput limit per 5 minutes: 600 MB). Filtering out all the unnecessary fields means the Logic App will get only kilobytes of data from CDS instead of hundreds of MB.
For your requirement, I think it is difficult to implement with just the common actions in a logic app. But if we use a liquid template, it will be easy; please refer to the steps provided below:
1. You need to create an integration account in your azure portal (I think "Free" pricing tier is enough)
2. In my logic app, I use the "List records" action to get data from CDS. My data looks like this:
{
"@odata.context": "https://logic-apis-eastasia.azure-apim.net/apim/commondataservice/xxxxxxxxxx/$metadata#datasets('orgd46a4820.crm')/tables('activitypointers')/items",
"value": [
{
"@odata.id": "xxxxxxxxxx",
"@odata.etag": "",
"ItemInternalId": "xxxxxxxxxx",
"statecode": 1,
"_statecode_label": "Completed",
.......
},
{
"@odata.id": "xxxxxxxxxx",
"@odata.etag": "",
"ItemInternalId": "xxxxxxxxxx",
"statecode": 1,
"_statecode_label": "Completed",
.......
},
.....
]
}
For example, suppose we just want two columns (ItemInternalId and statecode) of the data, as in your requirement. To get them, we need to write a liquid template. The code is shown below:
{
"value": [
{% for item in content %}
{
"ItemInternalId": "{{item.ItemInternalId}}",
"statecode": "{{item.statecode}}"
}{% unless forloop.last %},{% endunless %}
{% endfor %}
]
}
Please save the liquid template locally with .liquid as its file extension (such as map.liquid).
3. Go to your integration account and click "Maps".
Then click "Add" and upload the map.liquid to integration account.
4. Go to your logic app, click the "Workflow settings" tab and choose the integration account created in step 1.
5. In your logic app designer, add a "Transform JSON to JSON" action. Use the value from "List records" in the "Content" box and choose the map we uploaded to the integration account.
6. Run the logic app; the result now contains only the two selected columns.
===== UPDATE =====
After the "List records" action, we can use a "Parse JSON" action to parse the value from "List records", then use a "Select" action to pick out the columns you want.
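Conceptually, the "Parse JSON" plus "Select" combination does the following; this is a Python sketch using the field names from the sample output above (the id values are placeholders):

```python
# What the "Parse JSON" + "Select" actions achieve, sketched in Python:
# keep only two columns from each record returned by "List records".
records = {
    "value": [
        {"ItemInternalId": "id-1", "statecode": 1, "_statecode_label": "Completed"},
        {"ItemInternalId": "id-2", "statecode": 1, "_statecode_label": "Completed"},
    ]
}

# The "Select" action maps each item to a new object with only the chosen keys.
selected = [
    {"ItemInternalId": r["ItemInternalId"], "statecode": r["statecode"]}
    for r in records["value"]
]
print(selected)
```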
In an Azure Logic App, how can I get the name of the Resource Group containing the current logic app?
I want to include some tracking details in the JSON output that I am sending to another system.
I can get the run Identifier ( using #{workflow()['run']['name']} ),
and the current logic app name ( using #{workflow()['name']} )
However, I can't work out how to get the name of the resource group to which the logic app is deployed.
As a last resort, I will use the resource group name used by the deployment template, but that will be wrong if the logic app is moved later.
I could also use tags, but again that could get out of step if the logic app is moved.
Thanks
A simple formula may be:
split(workflow().id, '/')[4]
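This works because workflow().id is the full ARM resource ID of the logic app. A Python sketch (with a placeholder subscription and names) shows why index 4 is the resource group:

```python
# workflow().id in a Logic App is the ARM resource ID, shaped like:
resource_id = ("/subscriptions/00000000-0000-0000-0000-000000000000"
               "/resourceGroups/myResourceGroup"
               "/providers/Microsoft.Logic/workflows/myLogicApp")

# Splitting on "/" gives ["", "subscriptions", "<sub-id>",
# "resourceGroups", "<rg-name>", ...], so index 4 is the group name.
parts = resource_id.split("/")
print(parts[4])  # myResourceGroup
```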
If you're deploying the Logic Apps using ARM templates (e.g. edit in Visual Studio, check into Azure DevOps git repo and deploy using release pipeline), you can create an ARM parameter:
"resGroup_ARM": {
"type": "string",
"defaultValue": "[resourceGroup().name]",
"metadata": {
"description": "Resource group name"
}
}
Then, you can create a workflow parameter:
"resGroup_LA": {
"type": "string",
"defaultValue": "ResGroup LA default"
}
... and give it a value in the parameters initialisation section:
"resGroup_LA": {
"value": "[parameters('resGroup_ARM')]"
}
You can get all the other properties of resourceGroup() in a similar manner, see: https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/template-functions-resource?tabs=json#resourcegroup
First we can create an "Initialize variable" action to capture all of the data in workflow(). The data in workflow() looks like this:
{
"id": "/subscriptions/*****/resourceGroups/huryTest/providers/Microsoft.Logic/workflows/hurylogicblob",
"name": "hurylogicblob",
"type": "Microsoft.Logic/workflows",
"location": "eastus",
"tags": {},
"run": {
"id": "/subscriptions/*****/resourceGroups/huryTest/providers/Microsoft.Logic/workflows/hurylogicblob/runs/*****",
"name": "*****",
"type": "Microsoft.Logic/workflows/runs"
}
}
It contains the resource group name, so we just need to take the "id" property and extract the right substring from it. The length of "resourceGroups/" is 15, which is why the expression below uses add(..., 15) and sub(..., 15).
You can use the expression as below:
substring(workflow()['id'],add(indexOf(workflow()['id'],'resourceGroups/'),15),sub(sub(indexOf(workflow()['id'],'/providers'),indexOf(workflow()['id'],'resourceGroups/')),15))
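To see what this expression computes, here is the same arithmetic in Python, using the sample id from above (subscription elided):

```python
# Python equivalent of the substring()/indexOf() expression above.
workflow_id = ("/subscriptions/*****/resourceGroups/huryTest"
               "/providers/Microsoft.Logic/workflows/hurylogicblob")

marker = "resourceGroups/"                         # 15 characters long
start = workflow_id.index(marker) + len(marker)    # add(indexOf(...), 15)
end = workflow_id.index("/providers")              # indexOf(..., '/providers')
print(workflow_id[start:end])  # huryTest
```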
At last, this returns the resource group name of the logic app.
I recorded a test with Selenium IDE but when I try to run the test I get an error [error] Element id=jsonform-0-elt-businessActor not found
I also noticed this particular field's id is slightly different. The rest of the fields have this format: id=jsonform-0-elt-0.nameOfJsonAttribute
Could there be any reason why the businessActor ID is not working and is captured differently?
JsonSchema used to render the form:
{
"type":"object",
"id": "001",
"title": "testSchema",
"properties":{
"businessActor": {
"type":"string",
"title": "Name",
"description": "example of a description."
}
}
}
Note: I'm using jsonForm to render the form based on the JSON schema. Form ids are generated dynamically by jsonForm. I'm also using Angular.js (though Angular is not playing a role in this particular issue, I think).
As @MarkRowlands suggested, it sounds like your page is dynamic.
Try this out as your target...
css=[id^='jsonform'][id$='businessActor']
^= means 'starts with' in css. $= means 'ends with' in css.
Change that selector to match whatever you would like to select.
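To illustrate the matching semantics (a plain Python sketch, not Selenium itself), the selector accepts any id that starts with jsonform and ends with businessActor:

```python
# Semantics of css=[id^='jsonform'][id$='businessActor'], sketched in Python:
# ^= is "starts with", $= is "ends with"; both must hold for a match.
def matches(element_id: str) -> bool:
    return (element_id.startswith("jsonform")
            and element_id.endswith("businessActor"))

print(matches("jsonform-0-elt-businessActor"))        # True
print(matches("jsonform-0-elt-0.businessActor"))      # True  (dynamic prefix ok)
print(matches("jsonform-0-elt-0.anotherAttribute"))   # False
```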
The playground has an example card that includes a "creator" field with the name and an image representing "Google Glass". The JSON used to create this card is
{
"text": "Hello Explorers,\n\nWelcome to Glass!\n\n+Project Glass\n",
"creator": {
"displayName": "Project Glass",
"imageUrls": [
"https://lh3.googleusercontent.com/-quy9Ox8dQJI/T3xUHhub6PI/AAAAAAAAHAQ/YvjqA3Pw1sM/glass_photos.jpg?sz=360"
]
},
"notification": {
"level": "DEFAULT"
}
}
When this is sent to Glass, however, the imageUrl isn't displayed. The documentation at https://developers.google.com/glass/v1/reference/timeline/insert simply says that "creator" is a "nested object", but with no clear indication what this nested object should be. The example seems to indicate that this should be a Contact (see https://developers.google.com/glass/v1/reference/contacts), and the object returned by the insert seems to be of type "mirror#contact", confirming this.
Does the contact used in a creator need to be pre-created via the contacts API call first? Is there something else necessary to get the creator to display or work correctly?
The creator is currently displayed only if the REPLY menu item is provided along with the timeline item.
This seems like a bug; please file it on our issue tracker.