I am getting this message when trying to use AWS IoT with Node-RED on a Raspberry Pi, using node-red-contrib-aws-iot.
I've tried putting the certs in various directories (~/, ~/.node-red/, ~/.node-red/certs/, etc.) but always get the same error.
Does anyone know the correct place for the certs, and the correct corresponding key path for the aws-iot-broker node's Security tab settings?
Thank you in advance
You need to include the client ID in the certificate file names; only then will the node be able to connect.
If you get this error, be aware that there has been a change in recent versions: the file-naming convention for the certificates changed, so you might have to rename your certificate files.
See https://github.com/cuongquay/node-red-contrib-aws-iot/commit/bba0df4b0fa8e13a60f98c0ba05e57f423ad9bf8
Until February it was:
clientId + '-private.pem.key'
clientId + '-certificate.pem.crt'
Now it must be:
clientId + '.private.key'
clientId + '.cert.pem'
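If you already have certificates downloaded under the old names, the quickest fix is to rename them. Here is a minimal Python sketch of that renaming, assuming a hypothetical clientId of "myclient" and certs stored in ~/.node-red/certs/ (both are placeholders for your own setup):

import os

# Placeholder values -- substitute your own client ID and cert directory.
client_id = "myclient"
cert_dir = os.path.expanduser("~/.node-red/certs")

# Old filename pattern -> new filename pattern (per the commit linked above).
renames = {
    client_id + "-private.pem.key": client_id + ".private.key",
    client_id + "-certificate.pem.crt": client_id + ".cert.pem",
}

for old, new in renames.items():
    old_path = os.path.join(cert_dir, old)
    if os.path.exists(old_path):
        os.rename(old_path, os.path.join(cert_dir, new))
        print("renamed", old, "->", new)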
We are trying to connect to a Snowflake instance using the snowflake-sqlalchemy library (latest version).
We get the following error:
[2020-09-28 14:47:47,558] {{connection.py:409}} WARNING - Certificate did not match expected hostname: xxxxxxx.europe-west4.snowflakecomputing.com. Certificate: {'subject': ((('commonName', '*.us-west-2.snowflakecomputing.com'),),), 'subjectAltName': [('DNS', '*.us-west-2.snowflakecomputing.com'), ('DNS', '*.snowflakecomputing.com'), ('DNS', '*.global.snowflakecomputing.com'), ('DNS', '*.prod1.us-west-2.aws.snowflakecomputing.com'), ('DNS', '*.prod2.us-west-2.aws.snowflakecomputing.com'), ('DNS', '*.us-west-2.aws.snowflakecomputing.com')]}
It seems like the certificate for the Snowflake instance does not match the host.
Is there any way to resolve this issue?
This is on a trial account, if that matters.
As noted by Suzy Lockwood, the domain being generated is wrong. The reason it ends up pointing to *.us-west-2.snowflakecomputing.com is that the account identifier, lacking the gcp or azure segment, gets redirected to us-west-2, where (of course) the certificate does not match what was expected.
The solution (for me) turned out to be that the region needs the .azure suffix, not just the region name. I'd given it that information under 'account'; I'm not sure whether the presence of the region parameter got in the way, or whether both are needed. But it is working now, and I'm loath to touch it more today. :)
I noticed europe-west4. Is that a GCP account? If so, I think your URL/hostname is supposed to look like this, but you can double-check in the UI:
XXXXX.europe-west4.GCP.snowflakecomputing.com
The airflow snowflake objects are built for AWS and are not compatible with GCP, so I will need to find GCP versions or create GCP-compatible versions.
I think this is how you would solve the issue: the account name should also contain gcp., as shown above. A connection sketch follows the config below.
{
"account":"xxxxx.us-central1.gcp",
"warehouse":"COMPUTE_WH",
"region":"us-central1",
"database":"CITIBIKE",
"schema":"PUBLIC"
}
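For the snowflake-sqlalchemy case in the question, the cloud suffix belongs inside the account identifier. A minimal sketch, assuming a hypothetical account xxxxx in europe-west4 on GCP (all credentials are placeholders; an Azure-hosted account would use .azure instead of .gcp):

from sqlalchemy import create_engine, text
from snowflake.sqlalchemy import URL

# The account identifier carries both the region and the cloud suffix.
engine = create_engine(URL(
    account="xxxxx.europe-west4.gcp",
    user="MY_USER",
    password="MY_PASSWORD",
    database="CITIBIKE",
    schema="PUBLIC",
    warehouse="COMPUTE_WH",
))

with engine.connect() as conn:
    # If the suffix is right, this connects without the hostname mismatch.
    print(conn.execute(text("select current_region()")).fetchone())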
I'm creating a VS Code extension with a webview that contains a React application. In the React code, I'm making a GET request to a REST API, but it keeps failing due to the following error:
Failed to load resource: net::ERR_CERT_AUTHORITY_INVALID
Any ideas on why this may be happening or a workaround? Maybe this is a restriction of webviews?
If I make the call in the extension code, it works fine.
I upgraded my browser to the latest version and it worked for me.
See the link below for how to update your browser version:
https://www.computerhope.com/issues/ch001388.htm
Assuming that you get this error about the certificate of the remote side (the one serving the REST API), you get it because of one of the following:
the authority that signed the certificate is not recognized on the client side (i.e., the authority's root certificate is not installed on your machine)
the certificate has expired
your machine has the wrong date
You can correct the above or, as a workaround, you can (depending on your tools) explicitly ignore the untrusted remote certificate, as sketched below. This workaround should remain for test purposes only, as it is a security hole.
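As an illustration of that test-only workaround, here is a minimal Python sketch against a hypothetical endpoint https://localhost:8443/api (the URL and the my-ca.pem bundle are placeholders; most HTTP clients expose equivalent options under other names):

import requests
import urllib3

# Test only: disables certificate verification entirely.
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
resp = requests.get("https://localhost:8443/api", verify=False)
print(resp.status_code)

# Safer variant: keep verification, but trust your own CA bundle.
resp = requests.get("https://localhost:8443/api", verify="my-ca.pem")
print(resp.status_code)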
I just downloaded the latest version of Xcode (9.0 beta (9M136h)).
However, when I try to make a request to my server in the iOS 11 simulator (using NSURLConnection sendAsynchronousRequest), an error is received:
NSURLSession/NSURLConnection HTTP load failed (kCFStreamErrorDomainSSL, -9807)
NSURLConnection finished with error - code -1202
NSError object contains the message - @"NSLocalizedDescription" : @"The certificate for this server is invalid. You might be connecting to a server that is pretending to be “***” which could put your confidential information at risk."
The plist contains:
<key>NSAppTransportSecurity</key>
<dict>
<key>NSAllowsArbitraryLoads</key>
<true/>
</dict>
so it is not the problem in this case (I guess)
Needless to say, it works on iOS 10/9/8.
Any suggestions?
Thanks in advance!
You need to allow your application to make HTTP (no S) connections. By default, Apple only allows HTTPS:
go to your Info.plist
press the plus icon on any of the entries
search for "App Transport Security Settings"
click the little arrow to the left and find "Allow Arbitrary Loads"; by default it is set to NO, change it to YES
For all of you who get this error in iOS 11, please make sure you're working against a valid (secure) certificate on your server.
In our case, the certificate wasn't strict enough.
Once our server guy installed a new valid certificate, the problem was gone.
One way to check whether the certificate is secure is to paste the problematic link into a browser.
You might then see that the connection is reported as not secure.
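Besides the browser check, here is a minimal Python sketch that performs the same handshake test against your server (example.com is a placeholder hostname):

import socket
import ssl

host = "example.com"  # placeholder: your server's hostname
ctx = ssl.create_default_context()  # validates against the system trust store

try:
    with socket.create_connection((host, 443), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            print("handshake OK:", tls.version())
            print("subject:", tls.getpeercert()["subject"])
except ssl.SSLError as e:
    print("certificate problem:", e)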
Since you've got an invalid certificate error, I'll make the following suggestions based on my personal security practice.
If you're still within your servicing terms with your CA, ask them to issue a new valid certificate for you.
Check your Keychain settings and make sure no CA cert is missing.
Alternatively, you can issue your own self-signed certificate for testing purposes and add it to your local Keychain as a trust anchor. A search for "how to create self-signed x509 certificate" will return something you might find useful; one option is sketched below.
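For example, here is one way to create such a test certificate with the Python cryptography package (the common name localhost and the 30-day validity are arbitrary choices for local testing):

import datetime

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "localhost")])
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-signed: issuer == subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=30))
    .sign(key, hashes.SHA256())
)

with open("test-cert.pem", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
with open("test-key.pem", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))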
I am having trouble using Microsoft Face API. Below is my sample request:
curl -v -X POST "https://westus.api.cognitive.microsoft.com/face/v1.0/detect?returnFaceId=true&returnFaceLandmarks=false&returnFaceAttributes=age,gender" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: 1xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxd" --data-ascii "{\"url\":\"http://www.mrbeantvseries.co.uk/bean3.jpg\"}"
I used the subscription ID from my Cognitive Services account and got the response below:
{
"error": {
"code": "Unspecified",
"message": "Access denied due to invalid subscription key. Make sure you are subscribed to an API you are trying to call and provide the right key."
}
}
Not sure if I've missed anything there. Can someone help me with this? Very much appreciated.
I ran into the same problem. I read the API documentation and it states the following.
You must use the same region in your REST API call as you used to obtain your subscription keys.
First, you must find the location of your subscription: in the Azure portal, go to Cognitive Services -> Properties; under the label Location, you will find your subscription region.
Second, you must find the correct endpoint to make the call to.
For example, if I want to make a call to the Face API and my location is East US, I will use either key 1 or key 2 with the following endpoint:
East US - https://eastus.api.cognitive.microsoft.com/face/v1.0/detect
You will then be able to access the API.
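Putting that together, here is a minimal Python sketch of the detect call from the question with a region-matched endpoint (the region, the key, and the image URL are all placeholders):

import requests

# Placeholders: use the region shown under your resource's Location,
# and key 1 or key 2 from the portal.
endpoint = "https://eastus.api.cognitive.microsoft.com/face/v1.0/detect"
key = "YOUR_32_CHAR_SUBSCRIPTION_KEY"

resp = requests.post(
    endpoint,
    params={"returnFaceId": "true", "returnFaceAttributes": "age,gender"},
    headers={"Ocp-Apim-Subscription-Key": key},
    json={"url": "http://www.mrbeantvseries.co.uk/bean3.jpg"},
)
print(resp.status_code, resp.json())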
It appears that you've entered your Azure subscription ID instead of an API key.
In the Azure portal, you can find the API key under 'Keys'.
It will be a 32-digit hexadecimal number with no hyphens.
I faced the same issue; it seems there is a problem with newly generated keys. To fix this, you can pass your endpoint as well when you create the object for IFaceServiceClient, as in the code below.
private readonly IFaceServiceClient faceServiceClient = new FaceServiceClient("your key", "Your endpoint");
CesarB is correct. You must first create a Cognitive Services resource in Azure and then get the subscription key from it.
The region is not always 'westus'; it depends on the region you selected when you created the resource. You can also check it in the endpoint shown in the resource's Overview.
I ran into a similar problem. I figure it might be helpful to some people, so I am posting it here. (btw, Azure support pointed me to this post.)
I was trying to run through the sample file for ImageSearch of Azure, referring to these pages:
https://learn.microsoft.com/en-us/azure/cognitive-services/bing-image-search/quickstarts/csharp
https://learn.microsoft.com/en-us/azure/cognitive-services/bing-image-search/quickstarts/client-libraries?tabs=visualstudio&pivots=programming-language-csharp
https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/BingSearchv7/BingImageSearch/quickstart/bing-image-search-quickstart-csharp.cs
I was receiving a mixture of 404 Not Found and 401 Unauthorized errors when sending requests to the Bing Search resource using Microsoft.Azure.CognitiveServices.Search.ImageSearch. I figured it must be something wrong with either my credentials or my endpoints.
After struggling with it for hours, reading through posts and talking to an Azure support member, I finally found the problems:
The base URI endpoint I was assigned on the Azure Keys & Endpoints page is incomplete (https://api.bing.microsoft.com/).
The base URI endpoint on the sample tutorial pages was outdated because of the 2020-10-30 transition from Cognitive Services to Bing Search Services (https://api.cognitive.microsoft.com/bing/v7.0/images/search).
As of 2021-09-22, the correct global base URI endpoint for Bing Image Search is:
https://api.bing.microsoft.com/v7.0/images/search
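For reference, a minimal Python sketch against that endpoint (the key and the query are placeholders):

import requests

resp = requests.get(
    "https://api.bing.microsoft.com/v7.0/images/search",
    headers={"Ocp-Apim-Subscription-Key": "YOUR_BING_SEARCH_KEY"},
    params={"q": "cute kittens", "count": 5},
)
resp.raise_for_status()
for img in resp.json().get("value", []):
    print(img["contentUrl"])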
Hope this is helpful to someone and saves mankind some time.
The endpoint and the subscription key must be consistent, for example:
https://westeurope.api.cognitive.microsoft.com/face/v1.0
Look at the resource's Overview in the Azure portal for this info!
Hello everybody. After following all the steps in the Google Cloud Storage documentation, "Configuring a Bucket as a Static Website", my bucket works as a website, but there is a problem with the www prefix: when I visit http://www.pieropretti.net I can see the content of the public bucket, but if I visit it without the www prefix (from Chromium 42.0.2281.0 (64-bit), and the same from Firefox ESR 31.4.0 / Tor Browser 4.0.3) I get a "server not found" error. This is a screenshot of the domain name DNS configuration: http://tinyurl.com/ncoc9y5
"www.pieropretti.net" and "pieropretti.net" are different domain names, and thus also correspond with different buckets. If you want to serve content from "pieropretti.net", you will need to create a bucket named exactly "pieropretti.net" in the same way you created the "www.pieropretti.net" bucket.
From what I can see, the DNS setting for "pieropretti.net." doesn't seem to have a CNAME, despite what your panel is telling you. Perhaps it just hasn't propagated to me yet.
From the DNS records I notice that you are on OVH; if so, you can use OVH's redirection feature to redirect from the naked domain to www.
If you are with another provider, don't worry: they all offer a service like OVH's for redirecting the naked domain to www.
For a top-level (apex) domain name we can't add a CNAME, only an A record. So I went to my terminal and pinged c.storage.googleapis.com to get the IP address, then added that IP address to the A record. I'm not sure whether this is an appropriate way to do it, but it works for me.
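If you want to script that lookup instead of pinging, here is a minimal Python sketch (note that the resolved IPs can change over time, which is why hardcoding one in an A record is fragile):

import socket

# Resolve the Google Cloud Storage CNAME target to its current IPv4 addresses.
for info in socket.getaddrinfo("c.storage.googleapis.com", 80,
                               family=socket.AF_INET,
                               proto=socket.IPPROTO_TCP):
    print(info[4][0])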