Is it possible to get sentiment for emojis using IBM Watson NLU? - ibm-watson

I am using IBM Watson to get the sentiment of social media texts, but many of these texts are just emojis. Currently I am not able to get any sentiment for emojis; I get an "unsupported text language" error. Is there any way to get sentiment for emojis using Watson NLU?
{
"language": "unknown",
"error": "unsupported text language: unknown",
"code": 400
}

Provided you are sending them in as UTF-8 characters in a UTF-8 string, it really depends on whether the NLU corpus has been trained to recognise them. I don't know for certain whether it has, but I tried testing the following strings for document sentiment:
"I am feeling 😭 today" - Sentiment: -0.823 : Negative
"I am feeling 😊 today" - Sentiment: +0.939 : Positive
You need some text for the service to recognise the language and hence which corpus to use.
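For reference, here is a minimal sketch of that in Node.js with the ibm-watson SDK (the credentials and version string are placeholders, not from the original question). Passing language explicitly may also avoid the language-detection error for emoji-only strings, though whether emoji-only text gets a useful score still depends on the corpus.
// Minimal sketch: analyze sentiment of text containing an emoji.
// Passing language explicitly may avoid the "unsupported text language"
// error when there is too little text for automatic detection.
const NaturalLanguageUnderstandingV1 = require('ibm-watson/natural-language-understanding/v1');
const { IamAuthenticator } = require('ibm-watson/auth');

const nlu = new NaturalLanguageUnderstandingV1({
  version: '2021-08-01',                              // placeholder API version
  authenticator: new IamAuthenticator({ apikey: '<your-api-key>' }),
  serviceUrl: '<your-service-url>',
});

nlu.analyze({
  text: 'I am feeling 😊 today',
  language: 'en',                                     // skip automatic language detection
  features: { sentiment: {} },
})
  .then((res) => console.log(res.result.sentiment.document))
  .catch((err) => console.error(err));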

Related

Why are my sentences returning my intent even though they are not in the utterance list of my intent?

We are developing a skill and my invocation name is "call onstar".
I have an intent "CallOnStarIntent".
I have the following utterances:
"switch to onstar",
"access onstar emergency",
"access onstar advisor",
"access onstar",
"connect to onstar emergency",
"connect to onstar advisor",
"connect to onstar",
"i want to use onstar",
"open onstar",
"call onstar emergency",
"call onstar advisor",
"call onstar",
"use onstar",
"start onstar",
"onstar information",
"onstar services",
"onstar please",
"onstar emergency",
"onstar advisor"
These are the listed utterances and they work fine. When I try an utterance like "call square" I get AMAZON.FallbackIntent as expected. But when I try utterances like "ping onstar", "play onstar", or any utterance that contains the word onstar, it returns CallOnStarIntent.
Does anyone know why this is happening?
Thanks in advance.
Your utterances are processed by a machine learning algorithm that builds a model which will also match similar utterances, so this is normal (your extra utterances seem to be similar enough for the model to determine there's a match). However, there are things you can do to make the model more precise at matching:
You can extend the sample utterances of AMAZON.FallbackIntent to include the ones where you don't want a match (e.g. "ping onstar")
You can try to change the sensitivity tuning of the AMAZON.FallbackIntent to HIGH so matching out-of-domain utterances becomes more aggressive
From the Alexa developer docs:
"You can extend AMAZON.FallbackIntent with more utterances. Add utterances when you identify a small number of utterances that invoke custom intents, but should invoke AMAZON.FallbackIntent instead. For large number of utterances that route incorrectly, consider adjusting AMAZON.FallbackIntent sensitivity instead."
To adjust the AMAZON.FallbackIntent sensitivity to HIGH you can use either the ASK CLI or JSON Editor to update the interactionModel.languageModel.modelConfiguration.fallbackIntentSensitivity.level setting in the JSON for your interaction model. Set fallbackIntentSensitivity.level to HIGH, MEDIUM, or LOW.
{
  "interactionModel": {
    "languageModel": {
      "invocationName": "...",
      "intents": [],
      "types": [],
      "modelConfiguration": {
        "fallbackIntentSensitivity": {
          "level": "HIGH"
        }
      }
    },
    "dialog": {},
    "prompts": []
  }
}
The list of utterances for an intent is not to be seen as a closed set of values like an enumeration in programming languages.
They are only samples used to train your Alexa skill. It's described in the documentation page about best practices for sample utterances:
"Alexa also attempts to generalize based on the samples you provide to interpret spoken phrases that differ in minor ways from the samples specified."

Google Smart Home Toggles Trait mysterious utterances

I'm struggling to complete the development of a SmartHome action for our security panels, involving different trait implementations (including ArmDisarm, Power, Thermostats, etc.).
One specific problem is related to Toggles Trait.
I need to accept commands to enable or disable intrusion sensor bypass/exclusion.
I've added to the SYNC response the following block, for instance, for a window sensor in the kitchen:
{
  'id': '...some device id...',
  'name': {'name': 'Window Sensor'},
  'roomHint': 'Kitchen',
  'type': 'action.devices.types.SENSOR',
  'traits': ['action.devices.traits.Toggles'],
  'willReportState': true,
  'attributes': {
    'commandOnlyToggles': false,
    'queryOnlyToggles': false,
    'availableToggles': [
      {
        'name': 'bypass',
        'name_values': [
          { 'name_synonym': ['bypass', 'bypassed', 'exclusion'], 'lang': 'en' },
          { 'name_synonym': ['escluso', 'bypass', 'esclusa', 'esclusione'], 'lang': 'it' }
        ]
      }
    ]
  }
}
I was able to trigger the EXECUTE intent by saying
"Turn on bypass on Window Sensor" (although very unnatural).
I was able to trigger the QUERY intent by saying
"Is bypass on Window Sensor?" (even more unnatural).
These two utterances were found somewhere in a remote corner of a blog.
My problem is with Italian language (and also other western EU languages such as French/Spanish/German).
The EXECUTE Intent seems to be triggered by this utterance (I bet no Italian guy will ever say anything like that):
"Attiva escluso su Sensore Finestra"
(in this example the name provided in the SYNC request was translated from "Window Sensor" to "Sensore Finestra" when running in the context of an Italian linked account).
However, I was not able to find the utterance for the QUERY request. I've tried everything that could make some sense, but the QUERY intent never gets triggered and the assistant redirects me to a simple web search.
Why is there such a mystery over utterances? The sample English utterances in the Assistant docs are very limited, and most of the time it's difficult to guess their counterparts in specific languages; furthermore, no one from AOG has ever been able to give me any information on this topic.
It's been more than a year now for me, trying to create a reference guide for utterances to be included in our device user manual, but still with no luck.
Can any one of you point me to some reference?
Or is there anything wrong with my SYNC data?
You can file a bug on the public tracker and include the QUERYs you have attempted. Since the execution intents seem to work, it may just be a bug in the backend grammar that isn't triggering.
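Not from the original answer, but while waiting on that it may also be worth double-checking that your QUERY responses report the toggle state in the shape the Toggles trait expects, since a malformed state can keep the assistant from answering. A rough sketch of the per-device QUERY payload for the sensor above:
{
  "devices": {
    "...some device id...": {
      "status": "SUCCESS",
      "online": true,
      "currentToggleSettings": {
        "bypass": true
      }
    }
  }
}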

Form Recognizer invalid model status [closed]

We tried Form Recognizer custom training with these steps (API 2.0):
https://pnagarjuna.wordpress.com/2020/01/07/azure-form-recognizer-service-custom-model-training-steps/
Training the model succeeds (201), but when we check the custom model status we get this error:
{ "modelInfo": { "modelId": "f17bd306-3c6a-4067-8ef1-5f2e6ced79e1", "status": "invalid", "createdDateTime": "2020-02-05T17:24:30Z", "lastUpdatedDateTime": "2020-02-05T17:24:31Z" }, "trainResult": { "trainingDocuments": [], "errors": [{ "code": "2014", "message": "No valid blobs found in the specified Azure blob container. Please conform to the document format/size/page/dimensions requirements." }] }}
We also checked
https://learn.microsoft.com/en-us/azure/cognitive-services/form-recognizer/overview#custom-model
and everything is okay.
How can we go further?
Thank you!
Gabor
Could you check whether the prefix value in your post-train request is consistent with the path in your Azure blob container? If you put the sample files under the root path of your blob container, then give an empty string for the prefix. As the train and get-trained-model requests are asynchronous in Form Recognizer v2.0, some errors related to the post request arguments can only be fetched via the get-trained-model request.
#Nini,
Could you provide an example for prefix value?
I face the same issue as the author does.
I use the 2.0 API version.
I generated a SAS for the whole container, then I used the following request to train a custom model:
{
"source": "https://{resourcename}.blob.core.windows.net/{containername}?sp=rl&st=2020-02-13T11:19:53Z&se=2021-02-14T11:19:00Z&sv=2019-02-02&sr=c&sig={signature}",
"sourceFilter": {
"prefix": "/USMF/VendorInvoices/Vendor - 1001/",
"includeSubFolders": false
},
"useLabelFile": false
}
target folder URI:
https://{resourcename}.blob.core.windows.net/{container name}/USMF/VendorInvoices/Vendor - 1001/
Response body:
{
"modelInfo": {
"modelId": "4e23f488-d8db-4c98-8018-4cd337d9a655",
"status": "invalid",
"createdDateTime": "2020-02-13T12:07:52Z",
"lastUpdatedDateTime": "2020-02-13T12:07:52Z"
},
"keys": {
"clusters": {}
},
"trainResult": {
"trainingDocuments": [],
"errors": [{
"code": "2014",
"message": "No valid blobs found in the specified Azure blob container. Please conform to the document format/size/page/dimensions requirements."
}]
}
}
If I keep the training data set under the root, and therefore the prefix value is an empty string, then everything is OK.
Thank you for reporting this.
Any chance you can switch from a policy-defined SAS token (one with sig={signature}) to a SAS token with explicit permissions (one with sp={permissionenum})?
Could you explain your thought in detail?
Here is what I did.
I generated the SAS token without applying any access policy. The SAS is generated for the whole container; I just chose the Read and List permissions from the list and an expiration date.
What puzzles me is that if I keep the training data set under the root folder then everything is OK, but when I put the files under a folder structure the Form Recognizer service can't find those files.
The question has been resolved.
It's definitely not a service issue.
First of all, my prefix shouldn't contain a '/' symbol at the beginning.
Another important point is that the prefix is case sensitive.
In my case I uploaded files with the "USMF/VendorInvoices/Vendor - 1001/" prefix but requested model training with "usmf/VendorInvoices/Vendor - 1001/", which led to the error message: "No valid blobs found in the specified Azure blob container. Please conform to the document format/size/page/dimensions requirements."
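For anyone hitting the same error, the corrected training request from this thread would therefore look roughly like this (the SAS query string is a placeholder; the prefix has no leading '/' and matches the blob path's casing exactly):
{
  "source": "https://{resourcename}.blob.core.windows.net/{containername}?{sas-token-with-read-list-permissions}",
  "sourceFilter": {
    "prefix": "USMF/VendorInvoices/Vendor - 1001/",
    "includeSubFolders": false
  },
  "useLabelFile": false
}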

Discord.js bot Custom presence with image

I'm currently trying to use setPresence for my bot. I can get the name to show, but no extra details on the second line and no images.
bot.user.setPresence({
game: {
name: "Ready to brawl!",
application_id: 'the id',
details: "These sharks are ready for a fight.",
type: 0
},
assets: {
large_image: "large",
large_text: "Do not jump into that tank...",
small_image: "small",
small_text: "c!help"
},
status: "dnd"
});
So what shows up is:
My bot is on DND, shows "Playing Ready to brawl!", but nothing else. The details part doesn't show up and there's no large or small image.
I've used a custom presence application before, so I assumed you needed your own Discord application on the developer site, and I made one. Its name is exactly the same as the "name" value, and I put its ID in the application_id part. (I'm not sure if it's bad to share the ID, so I excluded it.)
Before this, I tried using large_image and small_image directly with image links, but neither that nor the application approach worked.
So if I'm seriously fucking something up here, help would be appreciated
This is normal: you actually can't use Rich Presence with a bot. Maybe someday Discord will allow this.
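As a rough sketch (assuming the same discord.js v11-style API used in the question), the parts that do work for a bot are the activity name, type and status; the details and image assets are ignored for bot accounts:
// Bots can only set a plain activity and status; Rich Presence fields
// (details, assets/images) are ignored for bot accounts.
bot.user.setPresence({
  game: {
    name: "Ready to brawl!",
    type: 0            // 0 = "Playing"
  },
  status: "dnd"
})
  .then(() => console.log("Presence updated"))
  .catch(console.error);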

"locale" information missing in Alexa HouseholdList event request / How to get multi language support in event handler?

I have successfully integrated Alexa.HouseholdListEvents into my node.js AWS Lambda based skill. Now I am trying to use language translation just as I do for the usual intents / requests.
Unfortunately, in the HouseholdListEvent the "request" does not contain locale information, and instead of a translated string I just get the identifier repeated when using t(). See the example below. I cannot get the locale information from the received event and would have to fall back to English, which is blocking me from starting the skill certification process.
If you need further information - feel free to ask. I am more than happy to provide more details if needed.
Any advice? Help is appreciated!
Why do I have no locale information as part of the event?
Why is t() not working as expected (just like for normal intents)?
How could I translate in the event handler based on the origin locale?
My event request:
"request": {
"type": "AlexaHouseholdListEvent.ItemsCreated",
"requestId": "4a3d1715-e9b3-4980-a6eb-e4047ac40907",
"timestamp": "2018-03-12T11:20:13Z",
"eventCreationTime": "2018-03-12T11:20:13Z",
"eventPublishingTime": "2018-03-12T11:20:13Z",
"body": {
"listId": "YW16bjEuYWNjb3VudC5BRVlQT1hTQ0MyNlRQUU5RUzZITExKN0xNUUlBLVNIT1BQSU5HX0lURU0= ",
"listItemIds": [
"fbcd3b22-7954-4c9a-826a-8a7322ffe57c"
]
}
},
My translation usage:
this.t('MY_STRING_IDENTIFIER')
My result (in the ItemsCreated event handler):
MY_STRING_IDENTIFIER
Expected result (as for other requests):
"This is my translated text"
