How to list unattached AWS IoT certificates

Is there a way to list all unattached certificates? I want to delete them.
aws iot list-certificates only shows me the certificate status.
https://docs.aws.amazon.com/cli/latest/reference/iot/index.html#cli-aws-iot
I could not find anything helpful there.
An answer using the AWS CLI will do.
I am thinking of writing a Lambda function that will delete the unattached certs.

You have to do it in two steps (see the sketch below):
1. Get the ARNs of your certificates using list-certificates.
2. For each ARN, use list-principal-things, which lists any things associated with the given cert.
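A minimal shell sketch of that loop, assuming a configured AWS CLI (the delete commands are left commented out so you can review the list first):

# Print the ARN of every certificate that has no things attached.
for arn in $(aws iot list-certificates \
    --query 'certificates[].certificateArn' --output text); do
  things=$(aws iot list-principal-things --principal "$arn" \
    --query 'things' --output text)
  if [ -z "$things" ]; then
    echo "unattached: $arn"
    # To actually delete, the certificate must first be made INACTIVE:
    #   cert_id=${arn##*/}
    #   aws iot update-certificate --certificate-id "$cert_id" --new-status INACTIVE
    #   aws iot delete-certificate --certificate-id "$cert_id"
  fi
done

The same two calls (list_certificates, list_principal_things) exist in boto3 if you go the Lambda route.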

Related

Salesforce CI/CD Pipeline with GitHub Actions

Please help me with this use case. Thanks in advance for any help.
I am creating a Salesforce project in VS Code. I have cloned the repository and pushed it to GitHub. I have three branches in the repo, named Feature, Developer and Master. Feature is the base branch. Whenever I change or write code, on deployment it is pushed to Feature.
Now I want a Dev org attached to the Developer branch as well, so that whenever code is pushed from Feature to Developer after testing, or I pull the code into Developer from Feature, all the code gets deployed to the attached org.
And similarly on pushing the code from Developer to Master.
I wrote the workflow and nearly got it working. But on creating a pull request, the workflow ran, yet the build-and-deploy step failed: decryption was failing with an error along the lines of "can't read the directory". Lastly, when I removed the encrypt/decrypt keys, the authorization step stopped passing, showing the same endless error: OAuth client secret of personal connected app?: sh: 1: read: Illegal option -s.
So the YouTube video I followed confused me while encrypting the server key. He got a hash key and hash IV from somewhere and generated a new hash key and hash IV to produce some server.key.enc.
What tutorial are you following? I don't recognise the steps about decrypting keys; it might be some old or overcomplicated method.
There's a cool blog post at https://tigerfacesystems.com/blog/sfdx-continuous-integration and/or you can look at what SF themselves do in the LWC recipes repository (you know, the one most of the LWC documentation points to): https://github.com/trailheadapps/lwc-recipes/blob/main/.github/workflows/ci.yml
You'd have to log in with sfdx to all the target orgs you need, use the "sfdx force:org:display" and "Sfdx Auth Url" trick, save each org's value as a different GitHub secret, and create similar scripts.
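A rough sketch of that flow (the org alias MyDevOrg and secret name DEV_AUTH_URL are placeholders):

# Locally, once per target org: log in, then print the org details.
sfdx force:org:display -u MyDevOrg --verbose --json
# Copy the "sfdxAuthUrl" value (force://...) into a GitHub secret, e.g. DEV_AUTH_URL.

# In the GitHub Actions job: recreate the auth file, authenticate, deploy.
echo "$DEV_AUTH_URL" > ./authurl.txt
sfdx auth:sfdxurl:store -f ./authurl.txt -a MyDevOrg
sfdx force:source:deploy -p force-app -u MyDevOrg

No keys to encrypt or decrypt; the auth URL secret replaces the server.key dance from the tutorial.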

Connecting GCP Snowflake to Airflow certificate issue

We are trying to connect to the Snowflake instance using the snowflake-sqlalchemy library (latest version) and are getting the following error:
[2020-09-28 14:47:47,558] {{connection.py:409}} WARNING - Certificate did not match expected hostname: xxxxxxx.europe-west4.snowflakecomputing.com. Certificate: {'subject': ((('commonName', '*.us-west-2.snowflakecomputing.com'),),), 'subjectAltName': [('DNS', '*.us-west-2.snowflakecomputing.com'), ('DNS', '*.snowflakecomputing.com'), ('DNS', '*.global.snowflakecomputing.com'), ('DNS', '*.prod1.us-west-2.aws.snowflakecomputing.com'), ('DNS', '*.prod2.us-west-2.aws.snowflakecomputing.com'), ('DNS', '*.us-west-2.aws.snowflakecomputing.com')]}
It seems the certificate presented by the Snowflake instance does not match the host.
Is there any way to resolve this issue?
This is on a trial account, if that matters.
As noted by Suzy Lockwood, the domain being generated is wrong. The reason it ends up pointing to *.us-west-2.snowflakecomputing.com is that the hostname, lacking the gcp or azure segment, gets a redirect to us-west-2, where (of course) the certificate is wrong for what was expected.
The solution (for me) turned out to be that the account needs the .azure suffix, not just the region. I'd given it that information under 'account' - I'm not sure if the presence of the region parameter got in the way, or if both are needed. But it is working now, and I'm loath to touch it more today. :)
I noticed europe-west4. Is that a GCP account? If so, I think your URL/hostname is supposed to look like this, but you can double-check in the UI:
XXXXX.europe-west4.GCP.snowflakecomputing.com
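If you want to see which certificate a hostname actually serves, a quick hedged check (assumes OpenSSL 1.1.1+; substitute your account for the placeholder):

openssl s_client -connect XXXXX.europe-west4.gcp.snowflakecomputing.com:443 \
    -servername XXXXX.europe-west4.gcp.snowflakecomputing.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -ext subjectAltName

If the subjectAltName list only contains us-west-2 wildcards, the hostname you built is missing the cloud segment.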
The Airflow Snowflake objects are built for AWS and are not compatible with GCP, so I will need to find GCP versions or create GCP-compatible versions.
I think this is how you would solve the issue: the account name should also contain gcp, as shown in the example below.
{
  "account": "xxxxx.us-central1.gcp",
  "warehouse": "COMPUTE_WH",
  "region": "us-central1",
  "database": "CITIBIKE",
  "schema": "PUBLIC"
}

gcloud cli app engine domain mapping error

I am trying to get multiple microservices to run on a single App Engine in a single project. I am following the official documentation from GCP:
https://cloud.google.com/appengine/docs/standard/python3/mapping-custom-domains
When I try to create a wildcard mapping like this
gcloud app domain-mappings create '*.example.com'
so that the GCP backend can match requests of the form
[VERSION_ID].[SERVICE_ID].example.com
I get the following error:
ERROR: (gcloud.app.domain-mappings.create) INVALID_ARGUMENT: A managed certificate cannot be created on a wildcard domain mapping. Set `ssl_management_type` to `MANUAL` and retry the domain mapping creation. You can manually create an SSL certificate with `AuthorizedCertificates.CREATE` and map it to this domain mapping with `AuthorizedCertificates.UPDATE`.
Could anyone help with this?
It looks like by default the command attempts to configure a managed SSL certificate, which isn't compatible with wildcard domain mappings. From Wildcard mappings:
**Note**: Wildcard mappings are not supported for managed SSL certificates.
As the error message suggests, you can disable that with an option. From gcloud beta app domain-mappings create:
--certificate-management=CERTIFICATE_MANAGEMENT
Type of certificate management. 'automatic' will provision an SSL
certificate automatically while 'manual' requires the user to provide
a certificate id to provision. CERTIFICATE_MANAGEMENT must be one
of: automatic, manual.
So just try instead:
gcloud app domain-mappings create '*.example.com' --certificate-management=manual
I see a discrepancy: the error message mentions the ssl_management_type option while the doc page shows certificate-management. Try both if needed - it may be just an error, or a renamed option (which may or may not still be supported under the hood).
Of course, if you want SSL, you'd have to manage the SSL certificate(s) yourself (maybe using the --certificate-id option documented on the same page?). In that case, also check out the related question Google App Engine custom subdomain mapping for a specific version for potential implications of variable domain nesting.
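For completeness, a hedged sketch of the manual route (cert.pem, key.pem and CERTIFICATE_ID are placeholders; both commands are documented under gcloud app):

# Upload your own certificate and key; note the ID that gcloud prints.
gcloud app ssl-certificates create \
    --display-name='wildcard-example-com' \
    --certificate='cert.pem' --private-key='key.pem'
# Create the mapping with manual certificate management, referencing that ID.
gcloud app domain-mappings create '*.example.com' \
    --certificate-management=manual --certificate-id=CERTIFICATE_ID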

Access denied due to invalid subscription key (Face API)

I am having trouble using Microsoft Face API. Below is my sample request:
curl -v -X POST "https://westus.api.cognitive.microsoft.com/face/v1.0/detect?returnFaceId=true&returnFaceLandmarks=false&returnFaceAttributes=age,gender" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: 1xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxd" --data-ascii "{\"url\":\"http://www.mrbeantvseries.co.uk/bean3.jpg\"}"
I used the subscription ID from my Cognitive Services account and got the response below:
{
  "error": {
    "code": "Unspecified",
    "message": "Access denied due to invalid subscription key. Make sure you are subscribed to an API you are trying to call and provide the right key."
  }
}
Not sure if I've missed anything there. Can someone help me with this? Very much appreciated.
I ran into the same problem. I read the API documentation, and it states the following:
You must use the same region in your REST API call as you used to obtain your subscription keys.
First, you must find the location of your subscription. To do so, go to Cognitive Services -> Properties; under the label Location you will find your subscription region.
Second, you must find the correct endpoint to make the call to.
For example, if I want to make a call to the Face API and my location is East US, I will use either key 1 or key 2 with the following endpoint:
East US - https://eastus.api.cognitive.microsoft.com/face/v1.0/detect
You will then be able to access the API.
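For instance, the question's request re-pointed at an East US resource would look like this (hedged: the key placeholder must be a key from that eastus resource):

curl -v -X POST "https://eastus.api.cognitive.microsoft.com/face/v1.0/detect?returnFaceId=true&returnFaceLandmarks=false&returnFaceAttributes=age,gender" \
  -H "Content-Type: application/json" \
  -H "Ocp-Apim-Subscription-Key: <32-hex-char key from the eastus resource>" \
  --data-ascii "{\"url\":\"http://www.mrbeantvseries.co.uk/bean3.jpg\"}"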
It appears that you've entered your Azure subscription ID instead of the API key.
In the Azure portal, you can find the API key under 'Keys'. It will be a 32-digit hexadecimal number with no hyphens.
I had faced the same issue; it seems there is some problem with newly generated keys. To fix this, you can pass your endpoint as well when you create the IFaceServiceClient object. You can see the code below.
private readonly IFaceServiceClient faceServiceClient = new FaceServiceClient("your key", "Your endpoint");
CesarB is correct. You must first create a Cognitive Services resource in Azure and then get the subscription key from it.
The region is not always 'westus'; it depends on which region you selected when you created the resource. You can also check it in the endpoint shown on the resource's Overview page.
I ran into a similar problem. I figured it might be helpful to some people, so I am posting it here. (By the way, Azure support pointed me to this post.)
I was trying to run through the sample file for Image Search of Azure, referring to these pages:
https://learn.microsoft.com/en-us/azure/cognitive-services/bing-image-search/quickstarts/csharp
https://learn.microsoft.com/en-us/azure/cognitive-services/bing-image-search/quickstarts/client-libraries?tabs=visualstudio&pivots=programming-language-csharp
https://github.com/Azure-Samples/cognitive-services-dotnet-sdk-samples/blob/master/BingSearchv7/BingImageSearch/quickstart/bing-image-search-quickstart-csharp.cs
I was receiving a mixture of 404 Not Found and 401 Unauthorized errors when sending requests to the Bing Search resource using Microsoft.Azure.CognitiveServices.Search.ImageSearch. I figured it must be something wrong with either my credentials or my endpoints.
After struggling with it for hours, reading through posts and talking to an Azure support member, I finally found the problems:
The base URI endpoint I was assigned on the Azure Keys & Endpoints page is incomplete (https://api.bing.microsoft.com/).
The base URI endpoint on the sample tutorial pages was outdated because of the 2020.10.30 transition from Cognitive Services to Bing Search Services (https://api.cognitive.microsoft.com/bing/v7.0/images/search).
As of 2021.09.22, the correct global base Uri Endpoint for Bing Image Search is:
https://api.bing.microsoft.com/v7.0/images/search
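A quick hedged smoke test against that endpoint (the key and query are placeholders):

curl -s "https://api.bing.microsoft.com/v7.0/images/search?q=puppies" \
  -H "Ocp-Apim-Subscription-Key: <your Bing Search resource key>"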
Hope this is helpful to someone and saves mankind some time.
The endpoint, e.g.
https://westeurope.api.cognitive.microsoft.com/face/v1.0
and the subscription key must be consistent. Look at the resource's Overview page in the Azure portal for this info!

Did I mess up with multiple certificates?

I'm quite confused :S
Scenario:
I'm trying to add a certificate to my domain servername.com.
I created an Active Directory server (DC) with ad.servername.com, plus cms.servername.com for the Alfresco page.
As far as I know, because it's the Alfresco bundle, it already comes with Tomcat, Java and an Alfresco certificate configured (expires 2112).
Then I generated my own .cer with Microsoft Certificate Services for "servername.com" and imported it into my Java keystore (alfresco/java/jre/lib/security/cacerts); it expires in 2014.
Later I decided to try a free trial of the third-party CA startssl.com and generated a certificate for "servername.com", but they asked me to add an alternative name, so I put cms.servername.com; I also imported it into my alfresco/java keystore (expires 2013).
Now, whenever I open https://servername:8443/myApp or https://cms.servername:8443, the browser shows the Alfresco certificate. Did I mess up the certificates, or is Java just selecting the certificate with the older expiry date?
Hi, did you put the right certificates and keys etc. in the right directories?
E.g. alf_data/keystore, and in the Tomcat config the 8443 connector should point to the Alfresco keys.
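To see what each store actually contains, a hedged keytool check (path from the question; the default cacerts password is usually "changeit"):

# List the certificates in the JVM trust store the question imported into.
keytool -list -v -keystore alfresco/java/jre/lib/security/cacerts

Note that the certificate served on :8443 comes from the keystore named by the keystoreFile attribute of the 8443 connector in Tomcat's conf/server.xml, not from cacerts, which is why importing into cacerts does not change what the browser sees.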
