Where to put knowledge base deployment details in a QnA bot with SDK4? (Azure Portal)

I'm following the instructions for migrating my knowledge base from https://learn.microsoft.com/en-us/azure/cognitive-services/qnamaker/tutorials/migrate-knowledge-base.
Step 9 says I have to provide the endpoint (shown in the image below that step) to my bot. I have created a Web App Bot in the Azure Portal.
For SDK3, I am able to set this endpoint information on my Web App Bot and get the KB to function. However, for SDK4 I can't do the same.
How do I migrate my knowledge base to an SDK4 Web App Bot (QnA Maker)?

There is a good sample of a QnA Maker bot with SDK v4 available in the official samples:
C#: https://github.com/Microsoft/BotBuilder-Samples/tree/master/samples/csharp_dotnetcore/11.qnamaker
JS: https://github.com/Microsoft/BotBuilder-Samples/blob/master/samples/javascript_nodejs/11.qnamaker
In these samples you can see that the endpoint (hostname) information is located in the .bot file, named qnamaker.bot here, which looks like the following:
{
  "name": "qnamaker",
  "description": "",
  "services": [
    {
      "type": "endpoint",
      "name": "development",
      "endpoint": "http://localhost:3978/api/messages",
      "appId": "",
      "appPassword": "",
      "id": "25"
    },
    {
      "type": "qna",
      "name": "qnamakerService",
      "kbId": "",
      "subscriptionKey": "",
      "endpointKey": "",
      "hostname": "",
      "id": "227"
    }
  ],
  "padlock": "",
  "version": "2.0"
}
These values are then read in the code, as in the sketch below.
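For illustration, here is a minimal sketch of how these values might be wired into a QnAMaker instance, following the JS sample's approach (usage of the botframework-config and botbuilder-ai packages is assumed from that sample; 'qnamakerService' matches the "name" of the "qna" entry above):

const { BotConfiguration } = require('botframework-config');
const { QnAMaker } = require('botbuilder-ai');

// Load the .bot file and look up the "qna" service entry by its name.
const botConfig = BotConfiguration.loadSync('./qnamaker.bot');
const qnaConfig = botConfig.findServiceByNameOrId('qnamakerService');

// Hand the knowledge base id, endpoint key, and hostname to the QnAMaker client.
const qnaMaker = new QnAMaker({
  knowledgeBaseId: qnaConfig.kbId,
  endpointKey: qnaConfig.endpointKey,
  host: qnaConfig.hostname
});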

Related

Locale ignored in APLA Alexa Developer Console

I'm new to developing skills with Alexa. I've followed the "Build Multi-turn Skills with Alexa Conversations" tutorial up to module 3.
Because I want to develop a skill only for German users, I've changed the language settings of my skill in the Alexa developer console to support only German.
I changed the APLA code from the tutorial in the "edit audio response" tool to this:
{
  "type": "APLA",
  "version": "0.8",
  "mainTemplate": {
    "parameters": [
      "payload"
    ],
    "item": {
      "type": "Selector",
      "strategy": "randomItem",
      "items": [
        {
          "type": "Speech",
          "contentType": "text",
          "when": "${environment.alexaLocale == 'de-DE'}",
          "content": "Willkommen bei meiner App"
        },
        {
          "type": "Speech",
          "contentType": "text",
          "when": "${environment.alexaLocale == 'de-DE'}",
          "content": "Willkommen."
        },
        {
          "type": "Speech",
          "contentType": "text",
          "when": "${environment.alexaLocale == 'en-US'}",
          "content": "Welcome."
        }
      ]
    }
  }
}
At the bottom of the console I see that my locale is set to German, but when I preview the APLA above, the audio player always says "Welcome." in the English voice; the other two options are never triggered. What am I missing here?
The audio response tool doesn't take into account the language of the website.
There is no way to test the environment.alexaLocale condition in this tool.
To test it, update the code of your skill and test it either on the Test tab of your skill in the developer console or directly on a real device. I just tested with your code and it works perfectly, just not in the audio tool.
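For instance, here is a minimal sketch (assuming the ask-sdk-core response builder; the file name and token are illustrative) of returning the APLA document from a skill handler, so the locale condition is evaluated against the real request locale:

const Alexa = require('ask-sdk-core');
// The APLA JSON from above, saved alongside the handler code.
const welcomeDocument = require('./welcome-apla.json');

const LaunchRequestHandler = {
  canHandle(handlerInput) {
    return Alexa.getRequestType(handlerInput.requestEnvelope) === 'LaunchRequest';
  },
  handle(handlerInput) {
    return handlerInput.responseBuilder
      .addDirective({
        type: 'Alexa.Presentation.APLA.RenderDocument',
        token: 'welcomeToken',
        document: welcomeDocument
      })
      .getResponse();
  }
};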

How does Google Smart Home determine channelNumber for action.devices.commands.selectChannel?

Created a Google Smart Home Action.
Implemented a device with:
a. deviceType = action.devices.types.SETTOP
b. deviceTrait = action.devices.traits.Channel
The device is successfully discovered and added to the Google Home app's Home Graph.
The user sends the command: "Ok Google, change to ESPN"
I receive the following JSON at the fulfillment URL:
{
  "requestId": "[RequestId GUID]",
  "inputs": [
    {
      "intent": "action.devices.EXECUTE",
      "payload": {
        "commands": [
          {
            "devices": [
              {
                "id": "[SettopBox device Id]"
              }
            ],
            "execution": [
              {
                "command": "action.devices.commands.selectChannel",
                "params": {
                  "channelCode": "espn",
                  "channelName": "ESPN",
                  "channelNumber": "206"
                }
              }
            ]
          }
        ]
      }
    }
  ]
}
Questions:
How does Google Smart Home determine the "channelNumber" value for "ESPN"? The user's command was "Ok Google, change to ESPN", which does not contain any information about the channel number.
If a provider was set automatically, is there a setting in Google Home or Google Assistant to change this provider?
The number of a channel for the Channel trait is provided in the SYNC response along with any relevant labels:
{
  "availableChannels": [
    {
      "key": "ktvu2",
      "names": [
        "Fox",
        "KTVU"
      ],
      "number": "2"
    },
    {
      "key": "abc1",
      "names": [
        "ABC",
        "ABC East"
      ],
      "number": "4-11"
    }
  ]
}
As shown in the snippet, the channel number comes from the service. It is up to the developer of the integration how these numbers are determined, whether from a cable provider lineup or over-the-air assignments. The field is optional, so a service without channel numbers can still work when the user says the channel's name.
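To illustrate, here is a minimal sketch of how a fulfillment might resolve the selectChannel params against the channels it advertised in SYNC (the setTopBox.tuneTo call is hypothetical):

// availableChannels is the same list the service returned in its SYNC response.
function handleSelectChannel(params, availableChannels) {
  // channelCode echoes the "key" of the matched channel; fall back to the number.
  const channel = availableChannels.find(c => c.key === params.channelCode)
    || availableChannels.find(c => c.number === params.channelNumber);
  if (!channel) {
    throw new Error('channelNotFound');
  }
  return setTopBox.tuneTo(channel.number); // hypothetical device call
}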

Azure Indoor Maps not rendering

I am trying to create a floor plan upload for Azure Indoor Maps. The package is uploaded using Postman and I got the tilesetId, but when I provide the tilesetId in the Azure Indoor Maps sample, it does not render in the HTML file. When I use the sample zip file provided by Azure, it works fine.
I am following the article shown in Azure Indoor Maps.
[AutoCAD settings screenshot]
Below is the manifest file:
{
  "version": "1.1",
  "directoryInfo": {
    "name": "Digital Twins Testing Building",
    "streetAddress": "Contoso Way",
    "unit": "1",
    "locality": "Eastside",
    "postalCode": "00000",
    "adminDivisions": [
      "Contoso City",
      "Contoso State",
      "United States"
    ],
    "hoursOfOperation": "Mo-Fr 08:00-17:00 open",
    "phone": "1 (425) 555-1234",
    "website": "www.contoso.com",
    "nonPublic": false,
    "anchorLatitude": 33.44277,
    "anchorLongitude": -112.072754,
    "anchorHeightAboveSeaLevel": 1000,
    "defaultLevelVerticalExtent": 2
  },
  "buildingLevels": {
    "levels": [
      {
        "levelName": "Ground Level",
        "ordinal": 0,
        "verticalExtent": 5,
        "filename": "./GroundLevelFloorPlan.dwg"
      }
    ]
  },
  "georeference": {
    "lat": 33.44277,
    "lon": -112.072754,
    "angle": 0
  },
  "dwgLayers": {
    "exterior": [
      "exterior"
    ],
    "unit": [
      "unit"
    ]
  }
}
From the manifest, I see you expressed loading only the exterior and unit layers, and didn't pass the label layer, which is what brings in the labels and lets you add more properties to the units (or zones). If you don't see the map, I would suggest checking the conversion results (see here), which is always good practice. Another good way to troubleshoot is to review the content of the dataset via the WFS API, for example the units via https://atlas.microsoft.com/wfs/datasets//collections/unit/items?api-version=1.0&subscription-key={{subscriptionKey}} (your dataset id goes between the double slashes).
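For example, the dwgLayers section could also map a label layer; a sketch (the layer name "unitLabel" here is an assumption; use whatever DWG layer holds your unit labels):

"dwgLayers": {
  "exterior": [
    "exterior"
  ],
  "unit": [
    "unit"
  ],
  "unitLabel": [
    "unitLabel"
  ]
}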

Alexa skill Rest API

Can we use a REST API instead of Lambda? The reason I'm asking is that we have the request, we know what Alexa accepts as a response, and we know that it is a POST, so we could connect all of this into a REST API. The whole project is based on JAX-RS, so we want to have it all in one place, without using Lambda or anything. Not that Lambda isn't great.
The request that Alexa passes to Lambda is:
{
  "session": {
    "sessionId": "SessionId.a82f0b92-3650-4d45-8f12-e030ffc10894",
    "application": {
      "applicationId": "amzn1.echo-sdk-ams.app.8f35038e-13ac-4327-8e4f-e5df52dc1432"
    },
    "attributes": {},
    "user": {
      "userId": "amzn1.ask.account.AFP3ZWPOS2BGJR7OWJZ3DHPKMOMNWY4AY66FUR7ILBWANIHQN73QGGUEQZ7YXOLC7NYVD3JPUAHAGUS4ZFXJ6ZMS4EHO2CJFPWFLWLYZLDP7S227ADI54A2ZMLZLDO5CXSIB47ELNY54S2M7FDNJFHTSU67B7HB3UZUN6OUUR5BYS3UBRSIPBG4IWRLHUN36NXDYBWUM3NMQZRA"
    },
    "new": true
  },
  "request": {
    "type": "IntentRequest",
    "requestId": "EdwRequestId.bfdb3c27-028b-4224-977a-558129808e9a",
    "timestamp": "2016-07-11T17:52:55Z",
    "intent": {
      "name": "HelloWorldIntent",
      "slots": {}
    },
    "locale": "en-US"
  },
  "version": "1.0"
}
Response:
{
  "version": "1.0",
  "response": {
    "outputSpeech": {
      "type": "PlainText",
      "text": "Hello World!"
    },
    "card": {
      "content": "Hello World!",
      "title": "Greeter",
      "type": "Simple"
    },
    "shouldEndSession": true
  },
  "sessionAttributes": {}
}
Sure you can. In fact, when you are creating your skill in the Alexa Developer Portal, you have that option. The caveat is that you will need to manage your own TLS certificate and will have to make sure that the latency/responsiveness is decent based on the location of your users.
If you would like to explore this further, you can use Amazon's Java code examples. They can be found at: https://github.com/amzn/alexa-skills-kit-java.
You can definitely set up a RESTful service API for use with Alexa.
And, if you set it up in Azure, you don't even need to create your own certificate.
You can use a REST API as the endpoint for Alexa skills. The APIs will be invoked in the following manner:
[Configured_URL]/alexa/[intent]
where [Configured_URL] is the URL endpoint configured on the Amazon site for invocation, and [intent] is the name of the intent.
You should host your service accordingly:
https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/developing-an-alexa-skill-as-a-web-service
https://iwritecrappycode.wordpress.com/2016/04/01/create-an-alexa-skill-in-node-js-and-hosting-it-on-heroku/
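To make that concrete, here is a minimal sketch of such a web service in Node.js/Express, since the second link above uses Node.js (route path and port are illustrative; a real endpoint must also serve HTTPS and verify Amazon's request-signature headers), answering with the response shape shown earlier:

const express = require('express');

const app = express();
app.use(express.json());

// Alexa POSTs the request envelope shown above to the configured URL.
app.post('/alexa', (req, res) => {
  const request = req.body.request;
  const isHello = request.type === 'IntentRequest' && request.intent.name === 'HelloWorldIntent';
  res.json({
    version: '1.0',
    response: {
      outputSpeech: {
        type: 'PlainText',
        text: isHello ? 'Hello World!' : 'Sorry, I did not understand.'
      },
      shouldEndSession: true
    },
    sessionAttributes: {}
  });
});

app.listen(3000);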

Is there an open-source version of Facebook's Linter?

When you post a link to Facebook, it grabs the article title, description and relevant images. Most major sites have the required OG tags, making it easy to grab this info, but FB is also able to handle websites that don't have them (you can try it here).
Clearly they've got a system in place for grabbing this info in the absence of OG tags. Does anyone know if there's an open-source version?
I'm thinking it would need (in order of preference for each section; a sketch follows the list):
Title:
Check for og:title tag.
Check for regular meta "title" tag.
Check for h1 tag.
Description:
Check for og:description tag.
Check for regular meta "description" tag.
Check for div or p tags with sufficient content to indicate a body paragraph.
Images:
Check for og:image tags
Check for images over a certain size (say 100x100) and give priority to those that come first.
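For concreteness, a minimal sketch of that fallback order (the cheerio package is an assumption; the image-size check is omitted since it requires fetching the actual image bytes):

const cheerio = require('cheerio');

// Apply the preference order described above to raw HTML.
function extractMeta(html) {
  const $ = cheerio.load(html);
  const title = $('meta[property="og:title"]').attr('content')
    || $('meta[name="title"]').attr('content')
    || $('h1').first().text().trim();
  const description = $('meta[property="og:description"]').attr('content')
    || $('meta[name="description"]').attr('content')
    || $('p').filter((i, el) => $(el).text().length > 80).first().text().trim();
  const image = $('meta[property="og:image"]').attr('content')
    || $('img').first().attr('src');
  return { title, description, image };
}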
Thanks a lot!
https://github.com/Anonyfox/node-htmlcarve
The htmlcarve module for Node.js does most of what you're after; here's the output generated for an example page:
const htmlcarve = require('htmlcarve');

htmlcarve.fromUrl('https://scotch.io/tutorials/using-mongoosejs-in-node-js-and-mongodb-applications', function(error, data) {
  console.log(JSON.stringify(data, null, 2));
});
This produces:
{
  "source": {
    "html_meta": {
      "title": "Easily Develop Node.js and MongoDB Apps with Mongoose ❥ Scotch",
      "summary": "",
      "image": "/wp-content/themes/thirty/img/scotch-logo.png",
      "language": "en-US",
      "feed": "https://scotch.io/feed",
      "favicon": "https://scotch.io/wp-content/themes/thirty/img/icons/favicon-57.png",
      "author": "Chris Sevilleja"
    },
    "open_graph": {
      "title": "Easily Develop Node.js and MongoDB Apps with Mongoose",
      "summary": "",
      "image": "https://scotch.io/wp-content/uploads/2014/11/mongoosejs-node-mongodb-applications.png"
    },
    "twitter_card": {
      "title": "Easily Develop Node.js and MongoDB Apps with Mongoose",
      "summary": "",
      "author": "sevilayha"
    }
  },
  "result": {
    "title": "Easily Develop Node.js and MongoDB Apps with Mongoose",
    "summary": "",
    "image": "https://scotch.io/wp-content/uploads/2014/11/mongoosejs-node-mongodb-applications.png",
    "author": "sevilayha",
    "language": "en-US",
    "feed": "https://scotch.io/feed",
    "favicon": "https://scotch.io/wp-content/themes/thirty/img/icons/favicon-57.png"
  },
  "links": {
    "deep": "https://scotch.io/tutorials/using-mongoosejs-in-node-js-and-mongodb-applications",
    "shallow": "https://scotch.io/tutorials/using-mongoosejs-in-node-js-and-mongodb-applications",
    "base": "https://scotch.io"
  }
}
If you've got Node.js installed, you can install it with
npm i -g htmlcarve
and run it from the command line directly.
