Alexa Skill renders display card in Alexa companion app

How do I add an image to this card, or remove it entirely?

How to add an image to an Alexa Card?
From the official documentation (https://developer.amazon.com/docs/custom-skills/include-a-card-in-your-skills-response.html#creating-a-basic-home-card-to-display-text):
A home card can include a single image. In this case, you provide the
title, text, and two URLs (a small version and large version) for the
image to display.
Note that the total number of characters (title, content, and both
URLs combined) for the card cannot exceed 8000. Each URL cannot exceed
2000 characters.
To create a card with an image, include the card property in your JSON
response:
- Set the type to Standard.
- Set the title and text properties to the text to display. Note that this type of card uses a text property, not a content property like Simple. Use either "\r\n" or "\n" within the text to insert line breaks.
- Include an image object with smallImageUrl and largeImageUrl properties. Set smallImageUrl and largeImageUrl to the URLs of a small and large version of the image to display.
See below for details about the image format, size, and hosting requirements.
{
  "version": "1.0",
  "response": {
    "outputSpeech": {
      "type": "PlainText",
      "text": "Your Car-Fu car is on the way!"
    },
    "card": {
      "type": "Standard",
      "title": "Ordering a Car",
      "text": "Your ride is on the way to 123 Main Street!\nEstimated cost for this ride: $25",
      "image": {
        "smallImageUrl": "https://carfu.com/resources/card-images/race-car-small.png",
        "largeImageUrl": "https://carfu.com/resources/card-images/race-car-large.png"
      }
    }
  }
}
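As a sketch, the payload above could be assembled and validated in Python. The helper name and the limit checks are my own, based on the character limits quoted from the documentation (8000 characters combined, 2000 per URL):

```python
def build_standard_card(title, text, small_image_url, large_image_url):
    """Assemble a Standard card object, enforcing the documented limits:
    title + text + both URLs <= 8000 characters, each URL <= 2000."""
    for url in (small_image_url, large_image_url):
        if len(url) > 2000:
            raise ValueError("each image URL is limited to 2000 characters")
    if len(title) + len(text) + len(small_image_url) + len(large_image_url) > 8000:
        raise ValueError("title, text, and URLs combined are limited to 8000 characters")
    return {
        "type": "Standard",
        "title": title,
        "text": text,
        "image": {
            "smallImageUrl": small_image_url,
            "largeImageUrl": large_image_url,
        },
    }

card = build_standard_card(
    "Ordering a Car",
    "Your ride is on the way to 123 Main Street!\nEstimated cost for this ride: $25",
    "https://carfu.com/resources/card-images/race-car-small.png",
    "https://carfu.com/resources/card-images/race-car-large.png",
)
```

The returned dictionary is what goes under the card property of the response shown above.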
When using the Java library:
- Create a StandardCard object.
- Call the object's setTitle() and setText() methods to set the title and content.
- Create an Image object and assign the URLs with its setSmallImageUrl() and setLargeImageUrl() methods.
- Pass the Image object to the StandardCard object with the setImage() method.
- Pass the StandardCard object to either SpeechletResponse.newTellResponse() or SpeechletResponse.newAskResponse() to get a SpeechletResponse that includes the card.

Related

Hyperlink Markdown dynamically in discord

I'm trying to make an embed message where the title of the video is the link. To achieve that I'm using square brackets and parentheses around the objects, but they are being shown as strings in the message. Any ideas on how this could be done?
for (let i in listaResultados) {
  embed.addField(
    `${parseInt(i) + 1}: [${listaResultados[i].tituloVideo}](${listaResultados[i].link})`,
    listaResultados[i].descricao
  );
}
Embed field names do not support markdown, including masked links.
There are a couple of solutions.
One is to use the embed's title and url properties. However, this limits you to one embed per video, i.e. at most 10 videos at a time, since 10 embeds is the per-message limit.
{
  "embed": {
    "title": "Music Video Title",
    "description": "Description goes here",
    "url": "https://video-url.com"
  }
}
The other way is what the other user mentioned: use the description field.
{
  "embed": {
    "title": "Music Videos List",
    "description": "1. [Video 1](https://google.com) \n 2. [Video 2](https://google.com/) \n 3. [Video 3](https://google.com/)"
  }
}
You can use this website for quick reference and experimentation with the embed code.

Microsoft Graph returns 400 when querying default photo

I'm querying the user's photo metadata and then using the ID from the metadata to query the actual photo:
https://graph.microsoft.com/v1.0/users/<user id>/photo returns
{
  "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#users('<user id>')/photo/$entity",
  "@odata.mediaContentType": "image/jpeg",
  "@odata.mediaEtag": "W/\"...\"",
  "id": "default",
  "height": 64,
  "width": 64
}
Note that its ID is default. When I then try to get the content using https://graph.microsoft.com/v1.0/users/<user id>/photos/default/$value, graph returns HTTP 400 with
{
  "error": {
    "code": "ErrorInvalidImageId",
    "message": "The image ID is not valid",
    "innerError": {
      "date": "2021-12-02T14:59:49",
      "request-id": "...",
      "client-request-id": "..."
    }
  }
}
If I replace default to 64x64 (the photo's dimensions from the metadata) in the above URL, everything's fine, i.e. https://graph.microsoft.com/v1.0/users/<user id>/photos/64x64/$value returns the photo.
For many users, the ID returned from https://graph.microsoft.com/v1.0/users/<user id>/photo looks like 64x64, and IDs of this form always work.
Why does Graph return default for some users, and how should I handle it? Is it safe to construct an ID from the width and height whenever I get default?
The official documentation points to the reason for this behavior.
Calling /users/<user id>/photo returns a profilePhoto object containing id, height, and width, where the id is normally made up of the height and width.
The /photos/{id}/$value endpoint is designed for querying a photo of a specific size, so it works when the id encodes the size, but not when the id is some other value such as default.
Since the API expects a photo size as the parameter, you can always build it from the height and width in the metadata; then you never need to check whether the returned id is valid for querying.
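As a sketch of that approach, here is a small Python helper (the function and constant names are mine) that always derives the content URL from the metadata's width and height, sidestepping the default id entirely:

```python
GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def photo_content_url(user_id, metadata):
    """Build the photo-content URL from profilePhoto metadata.

    Uses the WxH size from the metadata rather than trusting
    metadata['id'], which may be the literal string 'default'.
    """
    size = "{}x{}".format(metadata["width"], metadata["height"])
    return "{}/users/{}/photos/{}/$value".format(GRAPH_BASE, user_id, size)

# With the metadata from the question, this yields the working 64x64 URL:
url = photo_content_url("1234", {"id": "default", "width": 64, "height": 64})
```

The resulting URL would then be fetched with the usual authorized Graph request.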

How to index blob content with existing "content" field that is Collection(Edm.String)?

I can successfully index documents like PDFs, etc... from blob storage with Azure Search and it will go into a field by default called content.
But what I want to achieve is:
index the blob file content to a field called fileContent (Edm.String)
have a field for other uses called content (Collection(Edm.String))
And I cannot make this work without an error. I've tried several approaches with partial success, but from what I can tell it's not possible to redirect the data to a field other than content while also having a content field defined as Collection(Edm.String).
Here's what I've tried:
Have output field mappings setup so that the content goes into a field called "fileContent". For example:
"outputFieldMappings": [
  {
    "sourceFieldName": "/document/content",
    "targetFieldName": "fileContent"
  }
]
This works fine and the content of the file goes into the fileContent field defined as Edm.String. However, if I add a custom field called content in my index defined as Collection(Edm.String), I get an exception during the indexing operation:
The data field 'content' in the document with key '1234' has an invalid value of type 'Edm.String' (String maps to Edm.String). The expected type was 'Collection(Edm.String)'.
Why does it care what my data type for content is when I'm mapping this to a different field?
I have verified that if I make the content field just Edm.String I don't get an error but now I have duplicate entries in the index since both content and fileContent contain the same information.
According to the documentation it's possible to change the field from content to something else (but then it doesn't tell you how):
A content field is common to blob content. It contains the text extracted from blobs. Your definition of this field might look similar to the one above. You aren't required to use this name, but doing so lets you take advantage of implicit field mappings. The blob indexer can send blob contents to a content Edm.String field in the index, with no field mappings required.
I've also tried using normal (non-output) fieldMappings to redirect the input content field to fileContent, but I end up with the same error if content is also defined as Collection(Edm.String):
{
  "sourceFieldName": "content",
  "targetFieldName": "fileContent",
  "mappingFunction": null
}
I've also tried redirecting this content through a skillset but even though I can capture that output in a custom field, as soon as I add the content (Collection(Edm.String)) everything explodes.
Any pointers are much appreciated.
Update: It turns out that the above (non-output) fieldMapping does work, as long as the fileContent type is just Edm.String. However, if you want to add a skillset to process this data, its output needs to be redirected to yet another field; it will not let you redirect that back to fileContent, and you end up with an error like: "Enrichment target name 'fileContent' collides with existing '/document/fileContent'". So you end up being required to store the raw blob document data in one field and, if you want to process it, add another field for the result, which is quite annoying.
The indexer tries to index as much content as possible by matching index field names; that's why it attempts to put the blob content string into the content collection field of the index (and fails).
To get around this, add a (non-output) field mapping from content to another name that isn't an index field name, such as blobContent, to prevent the indexer from being too eager. Then in the skillset you can use blobContent by either:
- replacing all occurrences of /document/content with /document/blobContent, or
- setting a value for /document/content which is only accessible within the skillset (and output field mappings), with a conditional skill, to minimize other changes to your skillset:
{
  "@odata.type": "#Microsoft.Skills.Util.ConditionalSkill",
  "context": "/document",
  "inputs": [
    { "name": "condition", "source": "= true" },
    { "name": "whenTrue", "source": "/document/blobContent" },
    { "name": "whenFalse", "source": "= null" }
  ],
  "outputs": [
    { "name": "output", "targetName": "content" }
  ]
}
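For reference, the (non-output) field mapping described above might look like this in the indexer definition (the blobContent name is just an example, as in the text):

```json
"fieldMappings": [
  {
    "sourceFieldName": "content",
    "targetFieldName": "blobContent"
  }
]
```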

Does the Gmail REST API have access to label colors?

Is it possible to get label colors through the new Gmail REST API? Many of our users color code their emails and it would be great to be able to carry that color coding over to our applications.
It doesn't. As per the docs, a label consists of:
{
  "id": string,
  "name": string,
  "messageListVisibility": string,
  "labelListVisibility": string,
  "type": string
}
see: https://developers.google.com/gmail/api/v1/reference/users/labels
That does seem like a useful enhancement though.
Update: the Labels resource now supports a color object. In Python:
def MakeLabel(label_name, mlv='show', llv='labelShow'):
    """Create Label object.

    Args:
      label_name: The name of the Label.
      mlv: Message list visibility, show/hide.
      llv: Label list visibility, labelShow/labelHide.

    Returns:
      Created Label.
    """
    bg_red_color = {'backgroundColor': '#cc3a21', 'textColor': '#000000'}
    label = {
        'color': bg_red_color,
        'messageListVisibility': mlv,
        'name': label_name,
        'labelListVisibility': llv}
    return label
If you need more colors, see https://developers.google.com/gmail/api/v1/reference/users/labels/create
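A minimal sketch of what such a label body looks like and how it would be submitted with the Google API Python client. The label name is arbitrary, and the service object (an authorized googleapiclient client) is assumed, which is why the API call is left commented out:

```python
# A label body with the color object, as accepted by users.labels.create:
label_body = {
    'name': 'Urgent',  # hypothetical label name
    'messageListVisibility': 'show',
    'labelListVisibility': 'labelShow',
    'color': {'backgroundColor': '#cc3a21', 'textColor': '#000000'},
}

# With an authorized service object, the label would be created like:
# created = service.users().labels().create(userId='me', body=label_body).execute()
```

Note that the Gmail API only accepts colors from its documented palette; arbitrary hex values are rejected.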

Use of "creator" property in timeline insert doesn't seem to work

The playground has an example card that includes a "creator" field with the name and an image representing "Google Glass". The JSON used to create this card is
{
  "text": "Hello Explorers,\n\nWelcome to Glass!\n\n+Project Glass\n",
  "creator": {
    "displayName": "Project Glass",
    "imageUrls": [
      "https://lh3.googleusercontent.com/-quy9Ox8dQJI/T3xUHhub6PI/AAAAAAAAHAQ/YvjqA3Pw1sM/glass_photos.jpg?sz=360"
    ]
  },
  "notification": {
    "level": "DEFAULT"
  }
}
When this is sent to Glass, however, the image isn't displayed. The documentation at https://developers.google.com/glass/v1/reference/timeline/insert simply says that "creator" is a "nested object", with no clear indication of what this nested object should be. The example seems to indicate that it should be a Contact (see https://developers.google.com/glass/v1/reference/contacts), and the object returned by the insert is of type "mirror#contact", confirming this.
Does the contact used in a creator need to be pre-created via the contacts API call first? Is there something else necessary to get the creator to display or work correctly?
The creator is currently displayed only if the REPLY menu item is provided along with the timeline item.
This seems like a bug; please file it on our issue tracker.
