Is it possible to make Watson Assistant search data on IBM Cloudant? - ibm-watson

I am developing a chat bot using IBM Watson Assistant. Because the project is still in its early stage, I am still using the free plan. Everything was working relatively well until a couple of weeks ago, when I hit a brick wall.
I need my Assistant to communicate with IBM's Cloudant database, but it just won't work. I set up the webhook as instructed and gave full admin permission to my Assistant, but every time I try to make it call the database, an error occurs. The error code is 405, which I understood to be a language-related error, but both my database and my assistant were created with the same language (in this case, Brazilian Portuguese).
Unfortunately, Watson has no detailed log to analyse, so error code 405 is all I got.
I have been looking for answers ever since, but haven't found anything yet.
So, I have to ask: is it possible to make Watson Assistant connect with Cloudant?
Edit
I am adding screenshots:
1) This is Cloudant's overview page. Here, I copied the external endpoint.
2) Then, I opened my assistant, called "Teste_BD", and pasted the endpoint into the URL field in order to set it up as a webhook.
3) In this screen, I gave full admin permissions to my Test_BD Assistant.
4) Here is where I created a node to test. The idea is as simple as it gets: the node is entered when the "Test" intent is recognized, as soon as I type "hi". It is supposed to search for any of the keys set and save the response in the "$result" variable...
5) ... then, it is supposed to print the result in a sentence. In this case, it is meant to print the "id" number if it is found, or print anything else the variable might have stored in the "anything_else" condition.
6) And that's when the error is triggered. As I said, there is no log to consult, despite the error message clearly saying so...
7) ... the best I could get is this.
8) Also, as you can see, the system just assigns the value "null" to the variable.
9) At first, I thought the Assistant was just not recognizing the webhook, so I altered it to some nonsense just to see what would happen.
10) It triggered another error message saying the URL was not valid, so, at least, I got the confirmation that my Assistant was recognizing the Cloudant URL as valid.

You would use webhooks for something like this. If you can share the full error message coming back from the Cloudant API, that might help. Also, any screenshots of how your webhook is set up could be helpful as well.
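Note that the dialog webhook sends a POST request, so whatever URL you configure has to be an endpoint that actually accepts POST; pointing it at the Cloudant instance's root URL (the external endpoint from the overview page) typically gets a 405 Method Not Allowed back, which would match what you are seeing and has nothing to do with language. Below is a rough sketch of the kind of request the webhook would ultimately need to reach - the host, database name, and credentials are hypothetical placeholders - using Cloudant's query endpoint:

```typescript
// Sketch only: query a Cloudant database through an endpoint that accepts POST.
// The host, database name, and IAM API key below are placeholders.
const CLOUDANT_URL = "https://<account>.cloudantnosqldb.appdomain.cloud";
const DB_NAME = "teste_bd";          // hypothetical database name
const IAM_API_KEY = "<IAM API key>"; // from the Cloudant service credentials

async function findDocuments(): Promise<void> {
  // Exchange the IAM API key for a bearer token.
  const tokenResp = await fetch("https://iam.cloud.ibm.com/identity/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "urn:ibm:params:oauth:grant-type:apikey",
      apikey: IAM_API_KEY,
    }),
  });
  const { access_token } = await tokenResp.json();

  // POST a Cloudant Query selector to /{db}/_find. POSTing to the instance
  // root URL instead is what usually produces a 405 Method Not Allowed.
  const resp = await fetch(`${CLOUDANT_URL}/${DB_NAME}/_find`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${access_token}`,
    },
    body: JSON.stringify({ selector: { name: "Test" }, limit: 5 }),
  });
  console.log(resp.status, await resp.json());
}

findDocuments();
```

In many setups, a small intermediary (for example an IBM Cloud Functions action) sits between the Assistant webhook and Cloudant to build a request like this, since the webhook itself only forwards the dialog payload.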

Related

User NOT prompted for API KEY with getAuthType() = KEY - GOOGLE DATA STUDIO COMMUNITY CONNECTOR

I've been up and down the Google Data Studio Community Connector docs, as well as many open source examples. I've watched the videos numerous times, and read thoroughly.
https://developers.google.com/datastudio/connector/auth?hl=en
My prototype works with .setAuthType(AuthTypes.NONE) - no authorization needed.
However, when I set the authorization scheme to API KEY
.setAuthType(cc.AuthType.KEY)
(which, by the way - requires the checkForValidKey function - which I have added)...
-- my understanding is that the user will be prompted for a key on the first screen of the connector configuration AUTOMATICALLY... However, this is not happening.
The call defined in checkForValidKey IS happening. When I trap it, it shows "null" for the TOKEN value (which is the key)...
What am I missing? Do I need to trigger the interface somehow? Been at this for too many hours. Any help would be greatly appreciated.
The user will be prompted for credentials if your isAuthValid() returns false and getAuthType returns a value other than NONE. See the Authorization guide for examples.
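As a minimal sketch of that wiring (assuming a checkForValidKey helper like the one you already added; the declare lines only stand in for the Apps Script globals, since connectors are plain Apps Script JavaScript):

```typescript
declare const DataStudioApp: any;      // Apps Script global
declare const PropertiesService: any;  // Apps Script global
declare function checkForValidKey(key: string | null): boolean; // your existing validator

const cc = DataStudioApp.createCommunityConnector();

function getAuthType() {
  return cc.newAuthTypeResponse().setAuthType(cc.AuthType.KEY).build();
}

// Data Studio shows the key prompt only when this returns false.
function isAuthValid(): boolean {
  const key = PropertiesService.getUserProperties().getProperty("dscc.key");
  return checkForValidKey(key);
}

// Called with the key the user typed into the prompt; store it for later requests.
function setCredentials(request: { key: string }) {
  if (!checkForValidKey(request.key)) {
    return { errorCode: "INVALID_CREDENTIALS" };
  }
  PropertiesService.getUserProperties().setProperty("dscc.key", request.key);
  return { errorCode: "NONE" };
}

// Lets the user clear the stored key and re-authorize.
function resetAuth() {
  PropertiesService.getUserProperties().deleteProperty("dscc.key");
}
```

If isAuthValid() ends up returning true (for example because it doesn't actually check the stored key), the prompt never appears, which matches the symptom described above.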

Azure AD B2C Custom Policy Localized REST API Conflict Response

This is sort of an extension of this question here. I have a policy that calls a REST API. The API returns an error message and this message needs to be localized.
One way is of course to get the API to return a localized message, but is there a way for the CustomPolicy itself to localize the error code? According to the CustomPolicy Docs, a REST API can send an error code along with the Conflict response. Our thinking was to use this error code as a key and select a localized message (from the messageValue enum mentioned in the answer in the link).
However, we can't seem to capture/handle the error data returned by the API. The Policy seems to handle error codes by itself and we would like to know if it is possible to inject localized exception/error messages from the policy itself.
Thanks in advance!
Edit: A little more information about the setup. We have a TechnicalProfile that has a DisplayWidget and a ValidationTechnicalProfile. The DisplayWidget is used for entering & verifying the user's phone/email and the ValidationTechnicalProfile makes the final call to the RestAPI with all the user's information to register him/her. This RestAPI call output is what we want to localize.
The suggestion in the linked SO question, from what I understand, is that we integrate another DisplayClaim (that references an enum) in the DisplayWidget, and depending on the ErrorCode returned by the call, change it to display the appropriate code. However, as per my understanding, this would also require editing the API to return only 200 along with a code. This code would indicate the true nature of the result - success or a code for one of the enums to be displayed.
Our aim therefore is to check if there is a way to follow the Policy's flow (disrupt the SignUp/SignIn process) but at the same time localize the API's displayed response.
We managed to find a workaround to this, so I'm posting this here for anyone else who might be interested in this.
Our restriction for localizations was the fact that we used Phrase to manage our translations and wanted the CustomPolicy-specific translations all in one place. Our CD workflow was as follows:
PolicyCommit -> Build Variable Replacement through PS -> Release Variable Replacement and localized strings replacement through PS & Policy Uploads
Setting aside having the policy localize the API's response itself, we had the following options to achieve this:
1. Sending the language to the API and having the API return the appropriate error message in the appropriate language. We were reluctant to follow this for a multitude of reasons, but mostly because we would also have to handle different regions, etc. in the API - something the policy does by itself.
2. Having the policy localize the error messages up front and send them to the API. We actually had only one API that we called, and also only two error messages that were used. Hence we created an enum with the two error messages that would be localized. We then used a chain of InputClaimsTransformations that did the following:
1. CreateStringClaim (create a ClaimType for each of the error codes, holding the index of the error code in the enum)
2. GetMappedValueFromLocalizedCollection (have the localized enum choose and hold the value of the required error code)
3. AddItemToStringCollection (add the localized error from the enum to a StringCollection)
Repeat steps 1 through 3 for all of the errors, then:
4. GenerateJson (add the error-code StringCollection to the JSON payload to be sent to the API)
This way, the policy performed the localization for all the errors, and we sent them along with the request to the API. When an error occurred, the API picked one of the error messages from the policy and sent it back. Because of our CD structure and Phrase integration, this was much easier for us than hosting the translations in a file in the cloud for the API to access.
Hope this helps someone; I can also add code in case someone needs it :)

Aws Iot Rule republish to a dynamic topic

I subscribe to this AWS IoT topic:
12345678/state
I am trying to write a rule that republishes this topic's payload to
12345678/shadow/update
I have written my rule following these steps:
My query string is
SELECT * FROM '+/state'
My action is to republish everything, unchanged, to another topic, like the one below:
$$aws/things/${topic(1)}/shadow/update
When I put some static text such as "test" in place of the topic(1) function, it works. However, I couldn't get the topic name dynamically. There is no suitable documentation explaining how to achieve this.
What is the right way to get the topic name, which in my case is "12345678"?
Actually, there was no problem getting the topic name using the topic(1) function, like this:
$$aws/things/${topic(1)}/shadow/update
The problem was a policy permission. After adding the necessary publish permissions to my policy, I started getting payloads.
For anyone else who can't figure out why
${topic(1)}
works for Arda, here is why:
https://docs.aws.amazon.com/iot/latest/developerguide/iot-substitution-templates.html
Turns out you can do "substitution templates" in the republish topic string.
Your next issue will be making sure the role assigned to the rule has a policy attached to it that allows it to publish to the IoT Core topic (it doesn't get a policy that permits this automatically, for some reason).
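For example, the role attached to the rule needs a policy statement roughly like the following (region, account ID, and topic path are placeholders for whatever your republish target is):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iot:Publish",
      "Resource": "arn:aws:iot:us-east-1:123456789012:topic/$aws/things/*/shadow/update"
    }
  ]
}
```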

Getting http 500 backend error when posting to Gmail API

I am using the Gmail API to put messages into a Google Apps email account. I use
the OAuth 2.0 authentication protocol with a service account. This is more or
less working fine. One of our customers has asked us to put messages
directly into a Google Vault. I don't see a Vault API, but I did find this
information related to the "insert" method (which is what we use to add
messages to a normal account):
parameter "deleted" (boolean): Mark the email as permanently deleted
(not TRASH) and only visible in Google Apps Vault to a Vault administrator.
Only used for Google Apps for Work accounts.
When I do this, some messages are accepted, but frequently I get HTTP error
500 in response to the POST. The error text says "Backend Error". I thought
the pattern was that the first time the message was posted, it would work,
but the second time would generate the error. Therefore I was thinking it
was a duplicate check issue. However I now see some examples of messages
that fail immediately. The POST URL looks like this:
https://www.googleapis.com/upload/gmail/v1/users/user#domain.com/messages?uploadType=multipart&internalDateSource=dateHeader&deleted=true&access_token=ABC...
As I mentioned, the same message to the same URL (without deleted=true) will
always work. Any ideas what is causing the error?
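For concreteness, a stripped-down sketch of the request (hypothetical token and message body, and using the simple media upload rather than multipart) looks like this:

```typescript
// Sketch of the failing call: insert a raw RFC 822 message with deleted=true.
// The access token, user ID, and message body below are placeholders.
const ACCESS_TOKEN = "<OAuth 2.0 access token from the service account>";
const USER_ID = "user@domain.com"; // mailbox being written to

const rawMessage =
  "From: sender@example.com\r\n" +
  "To: user@domain.com\r\n" +
  "Subject: archived message\r\n" +
  "Date: Mon, 01 Jan 2018 10:00:00 -0000\r\n" +
  "\r\n" +
  "Message body";

async function insertAsDeleted(): Promise<void> {
  const url =
    `https://www.googleapis.com/upload/gmail/v1/users/${encodeURIComponent(USER_ID)}/messages` +
    `?uploadType=media&internalDateSource=dateHeader&deleted=true`;
  const resp = await fetch(url, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${ACCESS_TOKEN}`,
      "Content-Type": "message/rfc822",
    },
    body: rawMessage,
  });
  // Intermittently this comes back as 500 "Backend Error".
  console.log(resp.status, await resp.text());
}

insertAsDeleted();
```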
Was just fighting this issue myself. Apparently the error has something to do with whether the message is compatible with the Google Vault retention policies:
If I turn on a default policy of "Retain everything", then I've been able to get the messages to import correctly. HTH!
I'm using the import API method, and the backendError seems to be related to filters/policies. For example, we asked Google to reject messages with xls attachments and macros, and we get the error on mail with that kind of attachment.

Problems using Twitter4j on GAE throws 401 just after deploy

Well, I'm having a weird error here:
I'm developing a GAE app to read some Twitter data, and after reading a lot of docs, I have it working on my test server (running on my PC), but after deploying and testing on the real one (my appspot domain), it shows this message:
401: Authentication credentials (https://dev.twitter.com/pages/auth) were missing or incorrect. Ensure that you have set valid consumer key/secret, access token/secret, and the system clock is in sync.
message - Could not authenticate you
code - 32
I've tried recreating my OAuthAppToken and OAuthAppTokenSecret keys, changing the permissions to "Write, Read and Direct Messages", and even assigning a callback URL, but nothing seems to work...
I've tried using twitter4j.properties OR using setOAuthConsumer(TW_CONSUMER_KEY, TW_CONSUMER_SECRET) OR a ConfigurationBuilder with the correct constants, and I'm experiencing the same issue.
I'm working with AppEngine 1.8.3 and Twitter4j 3.0.4
I've been writing to the log, and the Twitter object seems to be created correctly... I don't understand why it works on my PC but not on the real app.
In some other post, someone says it could be because the system clock needs to be in sync... but they don't explain where to change that property...
Does someone have a clue?
OK, the problem was me (and Twitter... well... I really think it was Twitter's problem for being so opaque in its API messages)...
On the test server I was looking for an existing account, and in the cloud I was looking for a nonexistent one. So, it was my mistake. But seriously, what about Twitter saying "Access Forbidden"? That doesn't make any sense...
