I subscribe to the AWS IoT topic:
12345678/state
I am trying to write a rule that republishes this topic's payload to
12345678/shadow/update
I have written my rule as follows.
My query string is:
SELECT * FROM '+/state'
My action republishes everything, unchanged, to another topic, like this:
$$aws/things/${topic(1)}/shadow/update
When I use static text such as "test" instead of the topic(1) function, it works. However, I couldn't get the topic name dynamically, and there is no suitable documentation explaining how to achieve this.
What is the right way to get the topic name, which in my case is "12345678"?
Actually, there was no problem getting the topic name with the topic(1) function, like this:
$$aws/things/${topic(1)}/shadow/update
The problem was a policy permission. After adding the necessary publish permissions to my policy, I started getting payloads.
For anyone else who can't figure out why
${topic(1)}
works for Arda, here is why:
https://docs.aws.amazon.com/iot/latest/developerguide/iot-substitution-templates.html
Turns out you can do "substitution templates" in the republish topic string.
Your next issue will be making sure the role assigned to the rule has a policy attached that allows it to publish to the IoT Core topic (the role doesn't automatically get a policy permitting this, for some reason). A sketch of such a policy is below.
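A minimal IAM policy sketch for that role, assuming the shadow update topic from the question (region and account ID are placeholders):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iot:Publish",
      "Resource": "arn:aws:iot:us-east-1:123456789012:topic/$aws/things/*/shadow/update"
    }
  ]
}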
I've been up and down the Google Data Studio Community Connector docs, as well as many open source examples. I've watched the videos numerous times, and read thoroughly.
https://developers.google.com/datastudio/connector/auth?hl=en
My prototype works with .setAuthType(AuthTypes.NONE) - no authorization needed.
However, when I set the authorization scheme to API KEY
.setAuthType(cc.AuthType.KEY)
(which, by the way - requires the checkForValidKey function - which I have added)...
-- my understanding is that the user will be prompted for a key on the first screen of the connector configuration AUTOMATICALLY... However, this is not happening.
The call defined in checkForValidKey IS happening. When I trap it, it shows "null" for the TOKEN value (which is the key)...
What am I missing? Do I need to trigger the interface somehow? Been at this for too many hours. Any help would be greatly appreciated.
The user will be prompted for credentials if your isAuthValid() returns false and getAuthType returns a value other than NONE. See the Authorization guide for examples.
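A minimal sketch of the KEY-auth pieces, following the Authorization guide (checkForValidKey is assumed to be your own validator, and 'dscc.key' is just the property name used in the guide's examples):
var cc = DataStudioApp.createCommunityConnector();

function getAuthType() {
  // Tell Data Studio this connector expects a key.
  return cc.newAuthTypeResponse()
      .setAuthType(cc.AuthType.KEY)
      .build();
}

function isAuthValid() {
  // Data Studio only shows the key prompt when this returns false.
  var key = PropertiesService.getUserProperties().getProperty('dscc.key');
  return checkForValidKey(key);
}

function setCredentials(request) {
  // Called after the user enters a key in the prompt.
  var key = request.key;
  if (!checkForValidKey(key)) {
    return {errorCode: 'INVALID_CREDENTIALS'};
  }
  PropertiesService.getUserProperties().setProperty('dscc.key', key);
  return {errorCode: 'NONE'};
}

function resetAuth() {
  // Clears the stored key so the user is prompted again.
  PropertiesService.getUserProperties().deleteProperty('dscc.key');
}
If isAuthValid() returns true on the first run (for example because it treats a null key as valid), the prompt never appears, which matches the behaviour described above.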
This is sort of an extension of this question here. I have a policy that calls a REST API. The API returns an error message and this message needs to be localized.
One way is, of course, to get the API to return a localized message, but is there a way for the custom policy itself to localize the error code? According to the CustomPolicy docs, a REST API can send an error code along with the Conflict (409) response. Our thinking was to use this error code as a key and select a localized message (from the messageValue enum mentioned in the answer in the link).
However, we can't seem to capture/handle the error data returned by the API. The Policy seems to handle error codes by itself and we would like to know if it is possible to inject localized exception/error messages from the policy itself.
Thanks in advance!
Edit: A little more information about the setup. We have a TechnicalProfile that has a DisplayWidget and a ValidationTechnicalProfile. The DisplayWidget is used for entering & verifying the user's phone/email and the ValidationTechnicalProfile makes the final call to the RestAPI with all the user's information to register him/her. This RestAPI call output is what we want to localize.
The suggestion in the linked SO question, from what I understand, is that we integrate another DisplayClaim (that references an enum) in the DisplayWidget, and depending on the ErrorCode returned by the call, change it to display the appropriate code. However, as per my understanding, this would also require editing the API to return only 200 along with a code. This code would indicate the true nature of the result - success or a code for one of the enums to be displayed.
Our aim therefore is to check if there is a way to follow the Policy's flow (disrupt the SignUp/SignIn process) but at the same time localize the API's displayed response.
We managed to find a workaround to this, so I'm posting this here for anyone else who might be interested in this.
Our restriction for localization was the fact that we used Phrase to manage our translations and wanted the custom-policy-specific translations all in one place. Our CD workflow was as follows:
PolicyCommit -> Build Variable Replacement through PS -> Release Variable Replacement and localized strings replacement through PS & Policy Uploads
Since the policy couldn't localize the API's response itself, we had the following options to achieve this:
Sending the language to the API and having the API return the appropriate error message in the appropriate language. We were reluctant to follow this for a multitude of reasons, but mostly because we would also have to handle different regions, etc. in the API - something the policy does by itself.
What we ended up doing instead: we had only one API that we called, and only two error messages in use, so we created an enum with the two error messages to be localized. We then used a chain of InputClaimsTransformations that did the following, repeating steps 1 through 3 for each error:
1. CreateStringClaim (Create ClaimTypes for each of the error codes, holding the index of the error code in the enum)
2. GetMappedValueFromLocalizedCollection (Make the localized enum choose and hold the value of the required error code)
3. AddItemToStringCollection (Add the localized error from the enum to a StringCollection)
4. GenerateJson (Add the error codes StringCollection to the JSON payload to be sent to the API)
This way, the policy performed the localization for all the errors, and we sent them along with the request to the API. When an error occurred, the API picked one of the error messages from the policy and sent it back. Because of our CD structure and Phrase integration, this was much easier for us than keeping the translations in a file hosted in the cloud for the API to access.
Hope this helps someone; I can also add code in case someone needs it :)
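As a rough sketch of steps 2 and 3 of that chain (the claim type names and transformation IDs here are illustrative, not the exact ones from our policy; localizedError1 is assumed to be a claim type whose Restriction enumeration is localized via LocalizedCollections):
<!-- Step 2: pick the localized text for one error code from the localized enum claim -->
<ClaimsTransformation Id="GetLocalizedError1" TransformationMethod="GetMappedValueFromLocalizedCollection">
  <InputClaims>
    <InputClaim ClaimTypeReferenceId="errorCode1" TransformationClaimType="mapFromClaim" />
  </InputClaims>
  <OutputClaims>
    <OutputClaim ClaimTypeReferenceId="localizedError1" TransformationClaimType="restrictionValueClaim" />
  </OutputClaims>
</ClaimsTransformation>
<!-- Step 3: append the localized error to the collection later added to the JSON payload -->
<ClaimsTransformation Id="AddLocalizedError1ToCollection" TransformationMethod="AddItemToStringCollection">
  <InputClaims>
    <InputClaim ClaimTypeReferenceId="localizedError1" TransformationClaimType="item" />
    <InputClaim ClaimTypeReferenceId="errorMessageCollection" TransformationClaimType="collection" />
  </InputClaims>
  <OutputClaims>
    <OutputClaim ClaimTypeReferenceId="errorMessageCollection" TransformationClaimType="collection" />
  </OutputClaims>
</ClaimsTransformation>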
I am developing a chat bot using IBM Watson Assistant. Because the project is still in its early stage, I am still using the free plan. Everything was working relatively well until a couple weeks ago when I hit a brick wall.
I need my Assistant to communicate with IBM's Cloudant database, but it just won't work. I set up the webhook as instructed and gave full admin permissions to my Assistant, but every time I try to make it call the database, an error occurs. The error code is 405, which I understood to be related to language, but both my database and my Assistant were created with the same language (in this case, Brazilian Portuguese).
Unfortunately, Watson has no detailed log to analyse, so error code 405 is all I got.
I have been looking for answers ever since but haven't found anything yet.
So, I have to ask: is it possible to make Watson Assistant connect with Cloudant?
Edit
I am adding screenshots:
1) This is Cloudant's overview page. Here, I copied the external endpoint.
2) Then, I opened my assistant, called "Teste_BD", and pasted the endpoint into the URL field in order to set it up as a webhook.
3) In this screen, I gave full admin permissions to my Test_BD Assistant.
4) Here is where I created a node to test. The idea is as simple as it gets: the node is entered when the "Test" intent is recognized, as soon as I type "hi". It is supposed to search for any of the keys I set and save the result in the "$result" variable...
5) ... then, it is supposed to print the result in a sentence. In this case, it is meant to print the "id" number if it is found or, in the "anything_else" condition, print anything else the variable might have stored.
6) And that's when the error is triggered. As I said there is no log to consult, despite the error message clearly saying so...
7) ... the best I could get, is this.
8) Also, as you can see, the system just associates the value "null" with the variable.
9) At first, I thought the Assistant was just not recognizing the webhook, so I altered it to some nonsense just to see what would happen.
10) It triggered another error message saying the URL was not valid, so, at least, I got the confirmation that my Assistant was recognizing the Cloudant URL as valid.
You would use webhooks for something like this. If you can share the full error message coming back from the Cloudant API, that might help. Any screenshots of how your webhook is set up could be helpful as well.
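For what it's worth, a 405 from Cloudant usually means the HTTP method isn't allowed on that URL, and Watson Assistant webhooks send POST requests. One quick sanity check is to call a Cloudant endpoint that accepts POST directly, for example _find (the account, database, credentials, and selector below are placeholders; legacy credentials shown, use an IAM token if your instance is IAM-only):
curl -X POST "https://YOUR-ACCOUNT.cloudantnosqldb.appdomain.cloud/YOUR-DB/_find" \
  -u "YOUR-USERNAME:YOUR-PASSWORD" \
  -H "Content-Type: application/json" \
  -d '{"selector": {"id": "12345"}}'
If this works but the webhook URL points at the bare external endpoint from the overview page, the webhook would be POSTing to a URL that doesn't accept POST, which could explain the 405.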
I have used Google Cloud Functions for quite a long time, with no real authentication problem for now.
Today I met this error while deploying a new function:
ERROR: (gcloud.functions.deploy) ResponseError: status=[400], code=[Bad Request], message=[Default service account 'PROJECT-ID@appspot.gserviceaccount.com' doesn't exist. Please recreate this account (for example by disabling and enabling the Cloud Functions API), or specify a different account.]
I tried several things :
disable/enable GCF API : no service account recovered
gcloud beta app repair reference here
No default service account recovered
the undelete API POST call
If I understand the current GCP features correctly, using the last option is my best bet, but somehow I keep getting a 400 error.
I found my unique ID in my activity log, from the creation of the default service account.
I really can't see where the problem is in the undelete API call and would be really thankful if you could help with it.
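For reference, the undelete call has roughly this shape (UNIQUE_ID being the numeric ID from the activity log):
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://iam.googleapis.com/v1/projects/-/serviceAccounts/UNIQUE_ID:undelete"
(The gcloud equivalent would be: gcloud beta iam service-accounts undelete UNIQUE_ID)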
Thanks to @Maxim, I now know that my problem comes from the fact that the deletion of this service account happened more than 30 days ago, which means it has already been purged from the system and is no longer recoverable.
In case you meet this same kind of problem, please try out this link :
https://cloud.google.com/iam/docs/creating-managing-service-accounts#undeleting_a_service_account
I see three alternative ways to proceed from here:
Create a new project from scratch to work from.
File a support case via the support center.
Open a private issue by providing your project number in the following component.
I believe it's best to reach out to GCP Support for help at this stage, and I recommend you do so, seeing as you've attempted most if not all ways of service account recovery without success.
On a last note, as for the latter option, the contents of the private issue will only be visible to you, and to the GCP Support staff (us). If you choose this option, please let me know when it's opened, and I'll start working on it as soon as possible.
I am having trouble using Microsoft Face API. Below is my sample request:
curl -v -X POST "https://westus.api.cognitive.microsoft.com/face/v1.0/detect?returnFaceId=true&returnFaceLandmarks=false&returnFaceAttributes=age,gender" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: 1xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxd" --data-ascii "{\"url\":\"http://www.mrbeantvseries.co.uk/bean3.jpg\"}"
I used the subscription ID from my Cognitive Services account, and I got the response below:
{
"error": {
"code": "Unspecified",
"message": "Access denied due to invalid subscription key. Make sure you are subscribed to an API you are trying to call and provide the right key."
}
}
Not sure if I've missed out anything there. Can someone help me on this? Very much appreciated.
I ran into the same problem. I read the API documentation and it states the following.
You must use the same region in your REST API call as you used to obtain your subscription keys.
First, you must find the location of your subscription.
To find your subscription region, go to Cognitive Services -> Properties; under the label Location, you will find your subscription region.
See below.
Second, you must find the correct endpoint to make the call to.
For example, if I want to make a call to the Face API and my location is East US, I will use either key 1 or key 2 and the following endpoint:
East US - https://eastus.api.cognitive.microsoft.com/face/v1.0/detect
You will now be able to access the API.
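So, assuming an East US resource, the request from the question would become something like this (the key value is a placeholder for key 1 or key 2 of that resource):
curl -v -X POST "https://eastus.api.cognitive.microsoft.com/face/v1.0/detect?returnFaceId=true&returnFaceLandmarks=false&returnFaceAttributes=age,gender" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: YOUR-FACE-API-KEY" --data-ascii "{\"url\":\"http://www.mrbeantvseries.co.uk/bean3.jpg\"}"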
It appears that you may have entered your Azure subscription ID instead of the API key.
In the Azure portal, you can find the API key under 'Keys', shown below:
It will be a 32-digit hexadecimal number, no hyphens.
I faced the same issue; it seems there is some problem with newly generated keys. To fix this, you can also pass your endpoint when you create the IFaceServiceClient object. You can see the code below.
private readonly IFaceServiceClient faceServiceClient = new FaceServiceClient("your key", "Your endpoint");
CesarB is correct. You must create a Cognitive Services resource in Azure first and then get the subscription key from it.
The region is not always 'westus'; it depends on which region you selected when you created the resource. You can also check it in the endpoint shown on the resource's Overview page.
I ran into a similar problem. I figure it might be helpful to some people, so I am posting it here. (By the way, Azure support pointed me to this post.)
I was trying to run through the sample file for Azure's ImageSearch. I was referring to these pages:
https://learn.microsoft.com/en-us/azure/cognitive-services/bing-image-search/quickstarts/csharp
https://learn.microsoft.com/en-us/azure/cognitive-services/bing-image-search/quickstarts/client-libraries?tabs=visualstudio&pivots=programming-language-csharp
https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/BingSearchv7/BingImageSearch/quickstart/bing-image-search-quickstart-csharp.cs
I was receiving a mixture of 404 Not Found and 401 Unauthorized errors when sending requests to the Bing Search resource using Microsoft.Azure.CognitiveServices.Search.ImageSearch. I figured it must be something wrong with either my credentials or my endpoints.
After struggling with it for hours, reading through posts, and talking to an Azure support member, I finally found the problems:
The base URI endpoint I was assigned on the Azure Keys & Endpoints page is incomplete. (https://api.bing.microsoft.com/)
The base URI endpoint on the sample tutorial pages was outdated because of the 2020-10-30 transition from Cognitive Services to Bing Search Services. (https://api.cognitive.microsoft.com/bing/v7.0/images/search)
As of 2021.09.22, the correct global base Uri Endpoint for Bing Image Search is:
https://api.bing.microsoft.com/v7.0/images/search
Hope this is helpful to someone and saves mankind some time.
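A quick way to sanity-check the key against the new endpoint (the query and count are just examples):
curl -H "Ocp-Apim-Subscription-Key: YOUR-BING-SEARCH-KEY" "https://api.bing.microsoft.com/v7.0/images/search?q=kittens&count=5"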
Endpoint
https://westeurope.api.cognitive.microsoft.com/face/v1.0
The endpoint and the subscription key must be consistent - both must come from the same resource. Look at the resource's Overview page in the Azure portal for this info.