Authentication and authorization for Azure Blobs - azure-active-directory

In my Azure Blob storage account, I have a container with multiple folders, where each folder is named after a project and contains a metadata.json file along with subfolders at the first level.
Blob Container --> Folder A, FolderB
Folder A --> metadata.json, subfolder(s)
Folder B --> metadata.json, subfolder(s)
Now, I have created a PowerShell cmdlet to get the blob metadata file for a project.
So I call something like Get-Documents -Project "FolderA" and get all the information from metadata.json in Folder A. To do this, I currently keep my blob storage account, SAS token, and container name in a config file in the solution.
Going forward, I would like to make this more secure: when my customers call the Get-Documents command, they should be prompted to log in, and once their authentication is verified (i.e. they are in the organization's Active Directory and have permission to the blob folder), they should be able to get the metadata information.
How do I start with this? Can anyone guide me to the proper documentation or share a similar solution if one has been implemented?

You could access Azure Blob storage with Azure AD and get the blob with the REST API.
Code sample with PowerShell:
# login
Connect-AzAccount
# get accessToken
$resource = "https://storage.azure.com/"
$context = [Microsoft.Azure.Commands.Common.Authentication.Abstractions.AzureRmProfileProvider]::Instance.Profile.DefaultContext
$accessToken = [Microsoft.Azure.Commands.Common.Authentication.AzureSession]::Instance.AuthenticationFactory.Authenticate($context.Account, $context.Environment, $context.Tenant.Id.ToString(), $null, [Microsoft.Azure.Commands.Common.Authentication.ShowDialog]::Never, $null, $resource).AccessToken
#request REST API
$uri = "https://<myaccount>.blob.core.windows.net/<mycontainer>/<myblob>"
$xmsdate = get-date
$xmsdate = $xmsdate.ToUniversalTime()
$xmsdate = $xmsdate.toString('r')
$headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
$headers.Add("x-ms-version", "2019-12-12")
$headers.Add("x-ms-date", $xmsdate)
$headers.Add("Authorization", "Bearer " + $accessToken)
Invoke-RestMethod -Method 'Get' -Uri $uri -Headers $headers
For more details, see here. It will help you understand.
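If it helps, here is a rough, untested sketch of how the Get-Documents cmdlet from the question could wrap this pattern. The storage account and container names are placeholders, and it assumes the signed-in user has been granted a data-plane role such as Storage Blob Data Reader on the container:

function Get-Documents {
    param(
        [Parameter(Mandatory = $true)]
        [string]$Project
    )

    # Prompt the user to sign in only if there is no existing Az context
    if (-not (Get-AzContext)) {
        Connect-AzAccount | Out-Null
    }

    # Acquire a token for the storage resource, as in the sample above
    $resource = "https://storage.azure.com/"
    $context = [Microsoft.Azure.Commands.Common.Authentication.Abstractions.AzureRmProfileProvider]::Instance.Profile.DefaultContext
    $accessToken = [Microsoft.Azure.Commands.Common.Authentication.AzureSession]::Instance.AuthenticationFactory.Authenticate($context.Account, $context.Environment, $context.Tenant.Id.ToString(), $null, [Microsoft.Azure.Commands.Common.Authentication.ShowDialog]::Never, $null, $resource).AccessToken

    # <myaccount> and <mycontainer> are placeholders for your storage account and container
    $uri = "https://<myaccount>.blob.core.windows.net/<mycontainer>/$Project/metadata.json"
    $headers = @{
        "x-ms-version"  = "2019-12-12"
        "x-ms-date"     = (Get-Date).ToUniversalTime().ToString('r')
        "Authorization" = "Bearer " + $accessToken
    }
    Invoke-RestMethod -Method Get -Uri $uri -Headers $headers
}

# Example usage:
# Get-Documents -Project "FolderA"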

Related

Terraform Azuread provider Authorizer Error

My GitLab CI/CD pipeline is throwing a Terraform azuread provider authorizer error which has become a major blocker for me, and I simply can't find a way around it.
My Terraform configuration includes a data.tf file which has the following single line entry:
data "azuread_client_config" "current" {}
I also have a provider.tf file, the content of which includes the following azuread provider block:
provider "azuread" {
tenant_id = "#TenantID#"
use_cli = "false"
}
When I run the GitLab CI/CD pipeline, it throws the error below:
Error: no Authorizer could be configured, please check your configuration
with provider ["registry.terraform.io/hashicorp/azuread"],
on provider.tf line 29, in provider "azuread":
29: provider "azuread" {
If I exclude the data.tf file from my Terraform configuration, or comment out its single-line entry, the pipeline runs without throwing any errors. What am I doing wrong, or what do I need to do to get the pipeline to run successfully with the data.tf file included?
Data Source: azuread_client_config
Use this data source to access the configuration of the AzureAD provider.
# This is while Terraform is authenticating via the Azure CLI
data "azuread_client_config" "current" {}

output "object_id" {
  value = data.azuread_client_config.current.object_id
}
# Configure the Azure Active Directory Provider
provider "azuread" {
  # NOTE: Environment Variables can also be used for Service Principal authentication
  # client_id     = "..."
  # client_secret = "..."
  # tenant_id     = "..."
}
So I would suggest you remove the data "azuread_client_config" "current" {} line from the data.tf file if you are using the provider "azuread" {} block in provider.tf. Because you are already authenticating with a service principal, there is no point in using the azuread data source.
You can also refer to this documentation regarding the data sources and resources supported by the Azure Active Directory provider.
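As a rough sketch of the alternative mentioned in that NOTE (my assumptions, not taken from the linked documentation): if the pipeline authenticates with a service principal, the azuread provider can read the credentials from environment variables, so nothing sensitive needs to be hard-coded and the data source can still resolve. Here ARM_CLIENT_ID, ARM_CLIENT_SECRET, and ARM_TENANT_ID are assumed to be set as GitLab CI/CD variables:

# provider.tf - minimal sketch assuming service principal authentication via
# environment variables (ARM_CLIENT_ID, ARM_CLIENT_SECRET, ARM_TENANT_ID)
# supplied as GitLab CI/CD variables.
provider "azuread" {
  # tenant_id may also be set explicitly instead of via ARM_TENANT_ID
  # tenant_id = "#TenantID#"
}

# With valid credentials in the environment, this data source resolves normally
data "azuread_client_config" "current" {}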

Modern Approach to WPF Telerik Grid Manifest Azure DevOps

Is there a modern approach to excluding manifest certificate private keys from a repository deployment using Azure DevOps without losing related functionality?
I am migrating a code repository that contains a WPF Telerik grid from Team Foundation Server to Azure DevOps. I noticed sensitive information such as an X.509 certificate private key in a TemporaryKey.pfx file that seems to handle the Telerik grid manifest download in production.
I attempted to remove the manifests and ClickOnce signing outright, and related pages are now throwing errors like the following:
Application manifest has either a different computed hash than the one specified or no hash
Within the .csproj
I see 2 potential lines to remove from the .csproj but I do not want to introduce a security risk if this is a critical security component.
<GenerateManifests>true</GenerateManifests>
<SignManifests>false</SignManifests>
Using a key vault would be another alternative; however, I imagine this is circumventing a larger security issue.
Edit:
After some trial and error, I have included the corresponding .pfx as a secure file and added PowerShell scripts to install the .pfx on the local agent and sign the manifest as I would in the regular application (Sign the ClickOnce manifests). Now I am receiving:
Error MSB3482: An error occurred while signing: A certificate chain
could not be built to a trusted root authority.
My YAML looks like the following:
- task: DownloadSecureFile@1
  name: TemporaryKey
  displayName: 'Download TemporaryKey certificate'
  inputs:
    secureFile: 'TemporaryKey.pfx'

# Install TemporaryKey certificate for manifest
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: |
      Write-Host "Start adding the PFX file to the certificate store."
      $secName = "TemporaryKey.pfx"
      $tempDirectory = $env:AGENT_TEMPDIRECTORY
      $pfxFilePath = Join-Path $tempDirectory $secName
      Add-Type -AssemblyName System.Security
      $cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2
      $cert.Import($pfxFilePath, "$(Password)", [System.Security.Cryptography.X509Certificates.X509KeyStorageFlags]"PersistKeySet")
      $store = New-Object System.Security.Cryptography.X509Certificates.X509Store -ArgumentList "MY", "CurrentUser"
      $store.Open([System.Security.Cryptography.X509Certificates.OpenFlags]"ReadWrite")
      $store.Add($cert)
      $store.Close()
# Sign manifest using TemporaryKey
- task: PowerShell@2
  displayName: "Sign TemporaryKey PowerShell script"
  inputs:
    targetType: 'inline'
    script: |
      $magicToken = "#PerformScriptSigning"
      $encoding = "UTF8"
      $scriptFolder = "."
      # No files found here
      $scripts = Get-ChildItem -Path $scriptFolder -Filter "*.ps1" -Recurse -ErrorAction Stop
      foreach ($script in $scripts) {
          try {
              $content = Get-Content -Path $script.FullName -Encoding $encoding
              if ($content.Contains($magicToken)) {
                  $content = $content | Where-Object { $_ -notmatch $magicToken }
                  Set-Content -Value $content -Path $script.FullName -Encoding $encoding -Force
                  # load cert
                  $codeSigningCert = Get-ChildItem Cert:\CurrentUser\My -CodeSigningCert | Select-Object -First 1
                  Write-Output "Signing script `"$($script.Name)`" with certificate `"$($codeSigningCert.Thumbprint)`""
                  # sign script
                  $null = Set-AuthenticodeSignature -Certificate $codeSigningCert -FilePath $script.FullName -TimestampServer "http://timestamp.comodoca.com/rfc3161"
                  # copy to artifact staging location
                  $null = Copy-Item -Path $script.FullName -Destination $env:Build_ArtifactStagingDirectory
              }
          }
          catch {
              Write-Error $_
          }
      }
From my understanding, this process should find .ps1 files to sign for the project; however, no .ps1 files are found by the signing script. The install script can open the file and does successfully install it to the store. I wrote out the TemporaryKey.pfx cert before storing it to make sure it was opening without error.
I'm not clear how signing works in this case.
The "modern" or recommended approach would be to upload the .pfx file as secure file to Azure DevOps and then download it and sign your app with it during the build or release pipeline.
This article contains an example of a YAML pipeline that uses a secure .pfx to sign an MSIX packaged WPF app.
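As a rough sketch of that idea (my assumptions, not the article's exact YAML): after the DownloadSecureFile@1 and certificate-import steps you already have, the build step can ask MSBuild to sign the ClickOnce manifests with the imported certificate via the standard ClickOnce signing properties, for example:

# Sketch only. Assumes TemporaryKey.pfx has already been imported into
# CurrentUser\My (as in the question's install step) and that $(CertThumbprint)
# is a pipeline variable holding that certificate's thumbprint.
- task: VSBuild@1
  displayName: 'Build and sign ClickOnce manifests'
  inputs:
    solution: '**/*.sln'
    configuration: 'Release'
    msbuildArgs: '/p:SignManifests=true /p:ManifestCertificateThumbprint=$(CertThumbprint)'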

AngularJS page refresh problems

When working with Angular and its routes, if you reload the page at, let's say, localhost:9000/products, the response will be a 404.
I am using a Python server created with python -m SimpleHTTPServer <port number>. How do I solve this problem, since a .htaccess file does not work here?
.htaccess files are for the Apache HTTP server, not the Python server. The .htaccess file sets up redirects that the Apache server will observe, but if you use nginx or, in this case, Python's SimpleHTTPServer, you have to use redirects specific to that particular HTTP server. This may help:
https://gist.github.com/chrisbolin/2e90bc492270802d00a6
Copied here (not written by myself); apparently it is also from SO:
''' Taken from: http://stackoverflow.com/users/1074592/fakerainbrigand
    http://stackoverflow.com/questions/15401815/python-simplehttpserver '''
import SimpleHTTPServer, SocketServer
import urlparse, os

PORT = 3000
INDEXFILE = 'index.html'

class MyHandler(SimpleHTTPServer.SimpleHTTPRequestHandler):
    def do_GET(self):
        # Parse query data to find out what was requested
        parsedParams = urlparse.urlparse(self.path)

        # See if the file requested exists
        if os.access('.' + os.sep + parsedParams.path, os.R_OK):
            # File exists, serve it up
            SimpleHTTPServer.SimpleHTTPRequestHandler.do_GET(self)
        else:
            # send index.html, but don't redirect
            self.send_response(200)
            self.send_header('Content-Type', 'text/html')
            self.end_headers()
            with open(INDEXFILE, 'r') as fin:
                self.copyfile(fin, self.wfile)

Handler = MyHandler
httpd = SocketServer.TCPServer(("", PORT), Handler)

print "serving at port", PORT
httpd.serve_forever()
Also, personally I use Apache locally and just have Browsersync proxy to the Apache server, which handles the redirect if a file isn't found; from there the Angular page takes over and routing kicks in to restore the view or go to a page-not-found view.

Google App Engine App failed to access Google Cloud Storage bucket

Unable to access the default Google Cloud Storage bucket from an App Engine project. This project was created with an App Engine SDK version prior to 1.9.0. I've created the bucket manually; per the GCS documentation, the bucket should be accessible to App Engine projects by default, but it is not accessible in my case. This is the code snippet that tries to create a file:
...
GcsService gcsService = GcsServiceFactory.createGcsService();
GcsFilename file = new GcsFilename(getGcsDefaultBucketName(), fileName);
GcsFileOptions.Builder builder = new GcsFileOptions.Builder();
GcsFileOptions options = builder.mimeType(mimeType).build();
GcsOutputChannel channel = gcsService.createOrReplace(file, options); // error occurs on this line
...
Error found in Logs:
: com.google.appengine.tools.cloudstorage.NonRetriableException: java.lang.RuntimeException: Server replied with 403, verify ACLs are set correctly on the object and bucket: Request: POST https://storage.googleapis.com/1-ebilly.appspot.com/SERVICESTAGEREPORT-DEVICENAME-LYF2-CREATEDDATE-01012017-CREATEDDATE-19022017-.ZIP
: User-Agent: AppEngine-Java-GCS
: Content-Length: 0
: x-goog-resumable: start
: Content-Type: application/zip
:
: no content: Response: 403 with 212 bytes of content
: X-GUploader-UploadID: AEnB2Upq0Lhtfy5pbt06pVib8J0-L0XiGqW4JpB0G9PL87keY3WV7RCMVLCPeclD-D4UATEddvvwpAG2qeeIxUJx--brKxdQFw
: Content-Type: application/xml; charset=UTF-8
: Content-Length: 212
: Vary: Origin
: <?xml version='1.0' encoding='UTF-8'?><Error><Code>AccessDenied</Code><Message>Access denied.</Message><Details>Caller does not have storage.objects.create access to bucket myprojectID.appspot.com.</Details></Error>
:
: at com.google.appengine.tools.cloudstorage.RetryHelper.doRetry(RetryHelper.java:120)
: at com.google.appengine.tools.cloudstorage.RetryHelper.runWithRetries(RetryHelper.java:166)
: at com.google.appengine.tools.cloudstorage.RetryHelper.runWithRetries(RetryHelper.java:156)
: at com.google.appengine.tools.cloudstorage.GcsServiceImpl.createOrReplace(GcsServiceImpl.java:70)
PS: I've tried creating a new Google App Engine project and deploying the app in it. That project was automatically created with a default GCS bucket, and the same code works fine there without any error. My old project has lots of DB data which I want to retain, and I would like to continue using the same project without disposing of it.
Please help with your thoughts on making the GCS bucket accessible in the old project.
Resolved the issue by adding an IAM permission for the App Engine project. After reading the IAM "Access Control at the Project Level" document and comparing the old and new projects' permissions, I found that the App Engine project-level permission was missing in the old project. After adding that permission, the same code started to access the default bucket.
IAM Permissions before fix screenshot
IAM Permissions after fix screenshot
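For reference, a command-line sketch of the same kind of grant (the bucket and project names below are placeholders based on the error message, so adjust them to your project). Giving the App Engine default service account object-creation rights on the default bucket should also cover the storage.objects.create permission mentioned in the error:

# Placeholder names; the App Engine default service account is normally
# <project-id>@appspot.gserviceaccount.com and the default bucket is
# <project-id>.appspot.com.
gsutil iam ch \
  serviceAccount:myprojectID@appspot.gserviceaccount.com:roles/storage.objectAdmin \
  gs://myprojectID.appspot.com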
Try this:
GcsService gcsService = GcsServiceFactory.createGcsService(RetryParams.getDefaultInstance());
GcsFileOptions options = new GcsFileOptions.Builder().mimeType(mime).build();
GcsFilename gcsfilename = new GcsFilename(BUCKET_NAME, fileName);
GcsOutputChannel outputChannel = gcsService.createOrReplace(gcsfilename, options);

Authorization Header for WebHDFS with Azure Data Lake

I'm trying to use WebHDFS with Azure Data Lake. According to Microsoft's documentation, the steps one should follow are:
Create a new application in Azure AD with a key and delegated permissions to Azure Management Services
Using the client_id, tenant_id, and secret key, make a request to the OAuth2 endpoint:
curl -X POST https://login.microsoftonline.com/<TENANT-ID>/oauth2/token \
-F grant_type=client_credentials \
-F resource=https://management.core.windows.net/ \
-F client_id=<CLIENT-ID> \
-F client_secret=<AUTH-KEY>
Upon success, you then get back some JSON including an "access_token" property, whose content you should include with subsequent WebHDFS requests by adding the header
Authorization: Bearer <content of "access_token">
where <content of "access_token"> is the long string in the "access_token" property.
Once you have included that header, you should be able to make WebHDFS calls. For example, to list directories, you could do:
curl -i -X GET -H "Authorization: Bearer <REDACTED>" https://<yourstorename>.azuredatalakestore.net/webhdfs/v1/?op=LISTSTATUS
Having followed all those steps, I am getting an HTTP 401 error when running the above curl command to list directories:
WWW-Authenticate: Bearer authorization_uri="https://login.windows.net/<REDACTED>/", error="invalid_token", error_description="The access token is invalid."
with the body
{"error":{"code":"AuthenticationFailed","message":"Failed to validate the access token in the 'Authorization' header."}}
Does anyone know what might be the problem?
I pasted the token into jwt.io and it is valid (didn't check the signature). The content is something like this:
{
  typ: "JWT",
  alg: "RS256",
  x5t: "MnC_VZcATfM5pOYiJHMba9goEKY",
  kid: "MnC_VZcATfM5pOYiJHMba9goEKY"
}.
{
  aud: "https://management.core.windows.net",
  iss: "https://sts.windows.net/<TENANT-ID>/",
  iat: 1460908119,
  nbf: 1460908119,
  exp: 1460912019,
  appid: "<APP-ID>",
  appidacr: "1",
  idp: "https://sts.windows.net/<TENANT-ID>/",
  oid: "34xxxxxx-xxxx-xxxx-xxxx-5460xxxxxxd7",
  sub: "34xxxxxx-xxxx-xxxx-xxxx-5460xxxxxxd7",
  tid: "<TENANT-ID>",
  ver: "1.0"
}.
Please click the Data Explorer button, then highlight the root folder and click Access. Then grant your AAD app permissions to WebHDFS there. I believe what you have done so far is just granting that AAD app permission to manage your Azure Data Lake Store with the portal or Azure PowerShell; you haven't actually granted WebHDFS permissions yet. Further reading on security is here.
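If you prefer to script it, here is a rough sketch (my assumption, not part of the original answer) of granting the application's service principal read and execute on the root folder with the Azure PowerShell Data Lake Store cmdlets; the account name is a placeholder and the object ID is the "oid" claim from the decoded token above:

# Sketch only: <yourstorename> is the Data Lake Store account and the -Id value
# is the AAD application's service principal object ID.
Set-AzDataLakeStoreItemAclEntry -AccountName "<yourstorename>" -Path / `
    -AceType User -Id "34xxxxxx-xxxx-xxxx-xxxx-5460xxxxxxd7" -Permissions ReadExecute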
