Modern Approach to WPF Telerik Grid Manifest in Azure DevOps

Is there a modern approach to exclude manifest certificate private keys from a repository deployment using Azure DevOps without losing related functionality?
I am migrating a code repository that contains a WPF Telerik grid from Team Foundation Server to Azure DevOps. I noticed sensitive information such as an X.509 Certificate Private Key in a TemporaryKey.pfx file that seems to handle the Telerik grid manifest download in production.
I attempted to remove the manifests and ClickOnce signing outright, and now see related pages throwing errors like the following:
Application manifest has either a different computed hash than the one specified or no hash
Within the .csproj, I see two potential lines to remove, but I do not want to introduce a security risk if this is a critical security component:
<GenerateManifests>true</GenerateManifests>
<SignManifests>false</SignManifests>
Using a key vault would be another alternative; however, I imagine this would be circumventing a larger security issue.
Edit:
After some trial and error, I have included the corresponding .pfx as a secure file and added PowerShell scripts to install the .pfx on the local agent and sign the manifest as I would in the regular application (sign the ClickOnce manifests). Now I am receiving:
Error MSB3482: An error occurred while signing: A certificate chain could not be built to a trusted root authority.
My YAML looks like the following:
- task: DownloadSecureFile@1
  name: TemporaryKey
  displayName: 'Download TemporaryKey certificate'
  inputs:
    secureFile: 'TemporaryKey.pfx'

# Install TemporaryKey certificate for manifest
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: |
      Write-Host "Start adding the PFX file to the certificate store."
      $secName = "TemporaryKey.pfx"
      $tempDirectory = $env:AGENT_TEMPDIRECTORY
      $pfxFilePath = Join-Path $tempDirectory $secName
      Add-Type -AssemblyName System.Security
      $cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2
      $cert.Import($pfxFilePath, "$(Password)", [System.Security.Cryptography.X509Certificates.X509KeyStorageFlags]"PersistKeySet")
      $store = New-Object System.Security.Cryptography.X509Certificates.X509Store -ArgumentList "MY", "CurrentUser"
      $store.Open([System.Security.Cryptography.X509Certificates.OpenFlags]"ReadWrite")
      $store.Add($cert)
      $store.Close()

# Sign manifest using TemporaryKey
- task: PowerShell@2
  displayName: "Sign TemporaryKey PowerShell script"
  inputs:
    targetType: 'inline'
    script: |
      $magicToken = "#PerformScriptSigning"
      $encoding = "UTF8"
      $scriptFolder = "."
      # No files found here
      $scripts = Get-ChildItem -Path $scriptFolder -Filter "*.ps1" -Recurse -ErrorAction Stop
      foreach ($script in $scripts) {
          try {
              $content = Get-Content -Path $script.FullName -Encoding $encoding
              if ($content.Contains($magicToken)) {
                  $content = $content | Where-Object { $_ -notmatch $magicToken }
                  Set-Content -Value $content -Path $script.FullName -Encoding $encoding -Force
                  # Load the code-signing certificate from the store
                  $codeSigningCert = Get-ChildItem Cert:\CurrentUser\My -CodeSigningCert | Select-Object -First 1
                  Write-Output "Signing script `"$($script.Name)`" with certificate `"$($codeSigningCert.Thumbprint)`""
                  # Sign the script
                  $null = Set-AuthenticodeSignature -Certificate $codeSigningCert -FilePath $script.FullName -TimestampServer "http://timestamp.comodoca.com/rfc3161"
                  # Copy to the artifact staging location
                  $null = Copy-Item -Path $script.FullName -Destination $env:BUILD_ARTIFACTSTAGINGDIRECTORY
              }
          }
          catch {
              Write-Error $_
          }
      }
From my understanding, this process should find and sign the project's .ps1 files; however, the signing script finds no .ps1 files. The install script can open the file and does successfully install it to the store. I wrote out the TemporaryKey.pfx cert before storing it, to ensure it was opening without error.
I'm not clear how signing works in this case.

The "modern" or recommended approach would be to upload the .pfx file as secure file to Azure DevOps and then download it and sign your app with it during the build or release pipeline.
This article contains an example of a YAML pipeline that uses a secure .pfx to sign an MSIX packaged WPF app.
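As for the MSB3482 error from the edit: TemporaryKey.pfx is typically a self-signed development certificate, so no chain to a trusted root can be built unless the agent also trusts the certificate itself. A minimal sketch of an adjusted install step, assuming the same $pfxFilePath and $(Password) variables as in the install script above, that adds the certificate to the trusted root store as well as the personal store:
# A sketch, not a verified fix: add the self-signed cert to both the
# personal ("My") and trusted root ("Root") stores of the build user,
# so that signing can build a chain to a trusted root authority.
$flags = [System.Security.Cryptography.X509Certificates.X509KeyStorageFlags]::PersistKeySet
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2($pfxFilePath, "$(Password)", $flags)
foreach ($storeName in "My", "Root") {
    $store = New-Object System.Security.Cryptography.X509Certificates.X509Store($storeName, "CurrentUser")
    $store.Open([System.Security.Cryptography.X509Certificates.OpenFlags]::ReadWrite)
    $store.Add($cert)
    $store.Close()
}
Note that adding certificates to a root store should be done with care on shared agents; on Microsoft-hosted agents the VM is discarded after the run, which limits the blast radius.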

Related

configure automatic domain join and computer naming

Configure automatic domain join, automatic computer naming, and place new computer accounts in the appropriate organizational unit (OU).
Receive a Deploy request from the reference computer and deploy the install image using PXE and network installation.
This is what I have to do after I configured Active Directory, created the users, and prepared the WDS server in Windows Server 2016, but I have no idea what this is or how to do it. Any help would be appreciated.
• You can achieve three of these things, i.e., joining the computer to the domain, placing it in an OU, and renaming it, through a single script. The other two tasks you mention, i.e., receiving a deploy request from the reference computer and deploying the install image using PXE and network installation, should be done afterwards. For those, configure a Windows Deployment Services (WDS) server and add the image to be deployed to the respective networks, while also enabling PXE on it and in the DHCP server scope (see the sketch after this list).
• Please find below a script to join the computers to a particular OU and rename them. Create a '.csv' file containing the current names of the computers to be joined to the domain and renamed. First run 'Enable-PSRemoting -SkipNetworkProfileCheck -Force' locally on each computer; then run the script below from a domain controller, as it connects remotely through PowerShell to each computer and joins it to the domain:
Import-Module ActiveDirectory
$inputFilePath = "<Path of the csv file>"
$computers = Get-Content -Path $inputFilePath
$domain = "<domainname>"
# Get-Credential has no -Password parameter; prompt once for the domain admin password
$credentials = Get-Credential -UserName "<domain admin username>" -Message "Domain admin credentials"
foreach ($computer in $computers)
{
    $ScriptBlock = {
        Param($computer, $domain, $credentials)
        Add-Computer -DomainName $domain -ComputerName $computer -NewName <NewComputerName> -OUPath "OU=testOU,DC=domain,DC=Domain,DC=com" -Credential $credentials -Restart -Force
    }
    $session = New-PSSession -ComputerName $computer
    # Pass $domain and $credentials explicitly; local variables are not
    # visible inside the remote script block
    Invoke-Command -Session $session -ScriptBlock $ScriptBlock -ArgumentList $computer, $domain, $credentials
    Remove-PSSession -Session $session
}
• For deploying an image through WDS via PXE after domain joining, refer to the following link:
https://www.microsoftpressstore.com/articles/article.aspx?p=3089351&seqNum=4
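For the WDS piece itself, a minimal sketch of the server-side setup, assuming a domain-joined Windows Server; the feature name and wdsutil switches are standard, but the remote installation path is a placeholder:
# Install the Windows Deployment Services role with management tools
Install-WindowsFeature WDS -IncludeManagementTools
# Initialize WDS with a folder for the remote installation share
wdsutil /Initialize-Server /RemInst:"C:\RemoteInstall"
# Respond to all PXE clients (use /AnswerClients:Known to restrict to prestaged machines)
wdsutil /Set-Server /AnswerClients:All
Images (boot and install) are then added through the WDS console or wdsutil /Add-Image before clients can PXE-boot into the deployment.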

Authentication and authorization way for Azure Blobs

In my Azure Blob Storage account, I have a container with multiple folders, where each folder name is a project name and each folder has a metadata.json file along with subfolders at the first level.
Blob Container --> Folder A, Folder B
Folder A --> metadata.json, subfolder(s)
Folder B --> metadata.json, subfolder(s)
Now, I have a PowerShell cmdlet which I created to get the blob metadata file of a project.
So a call like Get-Documents -Project "FolderA" gets all the information from metadata.json in Folder A. To do this, I have my blob storage account, SAS token, and container stored in a config file in the solution.
Going forward, I would like to make this more secure: when customers call the Get-Documents command, they should be prompted to log in, and once their authentication is verified, i.e., they are in the organization's Active Directory and have permission to the blob folder, they should be able to get the metadata information.
How do I start with this? Can anyone guide me to proper documentation or share a similar solution if implemented?
You could access Azure Blob Storage with Azure AD and get the blob with the REST API.
Code sample with Powershell:
# Log in
Connect-AzAccount
# Get an access token for Azure Storage
$resource = "https://storage.azure.com/"
$context = [Microsoft.Azure.Commands.Common.Authentication.Abstractions.AzureRmProfileProvider]::Instance.Profile.DefaultContext
$accessToken = [Microsoft.Azure.Commands.Common.Authentication.AzureSession]::Instance.AuthenticationFactory.Authenticate($context.Account, $context.Environment, $context.Tenant.Id.ToString(), $null, [Microsoft.Azure.Commands.Common.Authentication.ShowDialog]::Never, $null, $resource).AccessToken
# Call the Blob service REST API
$uri = "https://<myaccount>.blob.core.windows.net/<mycontainer>/<myblob>"
$xmsdate = (Get-Date).ToUniversalTime().ToString('r')
$headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
$headers.Add("x-ms-version", "2019-12-12")
$headers.Add("x-ms-date", $xmsdate)
$headers.Add("Authorization", "Bearer " + $accessToken)
Invoke-RestMethod -Method 'Get' -Uri $uri -Headers $headers
For more details, see here.
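To tie this into the cmdlet, a minimal sketch of what Get-Documents could look like using the Az.Storage cmdlets instead of raw REST; the account and container names are placeholders, and the caller's Azure AD account needs an RBAC role granting blob data access (e.g. Storage Blob Data Reader) on the container or account:
function Get-Documents {
    param([Parameter(Mandatory)][string]$Project)
    # Prompt the user to log in if there is no Az session yet
    if (-not (Get-AzContext)) { Connect-AzAccount | Out-Null }
    # -UseConnectedAccount authorizes with the signed-in Azure AD identity
    # instead of an account key or SAS token
    $ctx = New-AzStorageContext -StorageAccountName "<myaccount>" -UseConnectedAccount
    # Download and return the project's metadata.json
    $tmp = Join-Path $env:TEMP "metadata.json"
    Get-AzStorageBlobContent -Container "<mycontainer>" -Blob "$Project/metadata.json" `
        -Destination $tmp -Context $ctx -Force | Out-Null
    Get-Content $tmp -Raw | ConvertFrom-Json
}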

Load data into SQL Server using Powershell without Credential Prompt

I am trying to load a CSV file into a remote SQL Server instance. This is the command I am running:
Write-SqlTableData -DatabaseName my_db_name -TableName my_table_name -ServerInstance my_server_instance -SchemaName dbo -InputData $data -force
I am trying to use the credential parameter, but it is giving me a prompt. I want to automate this process, so I need to bypass the prompt.
Write-SqlTableData -DatabaseName my_db_name -TableName my_table_name -ServerInstance my_server_instance -SchemaName dbo -Credential $cred -InputData $data -force
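One way to avoid the prompt is to construct the PSCredential object in the script rather than calling Get-Credential. A minimal sketch, assuming the password arrives through an environment variable such as SQL_PASSWORD (set by a scheduler or CI secret) and a hypothetical login my_sql_login:
# A sketch: build the credential non-interactively. SQL_PASSWORD and
# my_sql_login are placeholders; keep the secret outside the script
# (environment variable, CI secret, or encrypted file), never plain text.
$securePassword = ConvertTo-SecureString $env:SQL_PASSWORD -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential("my_sql_login", $securePassword)
Write-SqlTableData -DatabaseName my_db_name -TableName my_table_name `
    -ServerInstance my_server_instance -SchemaName dbo `
    -Credential $cred -InputData $data -Force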

bat file for setting IIS App pool

Can I set a website's App Pool in IIS using a bat file or any other script?
You could run msdeploy.exe, pass in parameters, and set the AppPool with something like this:
msdeploy.exe
-verb:sync -source:appHostConfig="Default Web Site"
-enableLink:AppPoolExtension
-dest:package=site.zip
-declareParam:name="Application Pool",
defaultValue="Default Web Site",
description="Application pool for this site",
kind=DeploymentObjectAttribute,
scope=appHostConfig,
match="application/#applicationPool"
Alternatively, you can use PowerShell:
Import-Module WebAdministration
Set-ItemProperty 'IIS:\Sites\Default Web Site' -Name applicationPool -Value NewAppPoolName
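Since the question asks about a .bat file specifically: appcmd.exe, which ships with IIS, can change the application pool directly from a batch script. A sketch, assuming the site is "Default Web Site" and the pool NewAppPoolName already exists:
%windir%\system32\inetsrv\appcmd.exe set app "Default Web Site/" /applicationPool:"NewAppPoolName"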

Connecting to remote SQL Server instance using SQL auth via SqlPowershell?

I'm trying to use SqlPowershell/SqlPS (Import-Module sqlps) to connect to a server on my local network. I followed the instructions here to create a function that predefines a server/database and the login and prompts each time for the password:
function sqldrive
{
    param( [string]$name, [string]$login = "<myLogin>", [string]$root = "SQLSERVER:\SQL\<serverName>\<databaseName>" )
    $pwd = Read-Host -AsSecureString -Prompt "Password"
    $cred = New-Object System.Management.Automation.PSCredential -ArgumentList $login, $pwd
    New-PSDrive $name -PSProvider SqlServer -Root $root -Credential $cred -Scope 1
}
However, after running sqldrive <someName> and entering my password, it fails to connect, giving two messages:
WARNING: Could not obtain SQL Server Service information. An attempt to connect to WMI on '<serverName>' failed with the following error: The RPC server is unavailable. (Exception from HRESULT: 0x800706BA)
and
New-PSDrive : SQL Server PowerShell provider error: Path SQLSERVER:\SQL\<serverName>\<databaseName> does not exist. Please specify a valid path.
At line:6 char:5
+ New-PSDrive $name -PSProvider SqlServer -Root $root -Credential $ ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidArgument: (SQLSERVER:\SQL\<serverName>\<databaseName>:String) [New-PSDrive], GenericProviderException
+ FullyQualifiedErrorId : PathDoesNotExist,Microsoft.PowerShell.Commands.NewPSDriveCommand
Can anyone spot where I'm going wrong? I can work with the same server just fine using those credentials via SSMS.
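One detail worth checking, as a hedged observation rather than a confirmed fix: in the SQL Server provider, the path segments after SQLSERVER:\SQL are the machine name and the instance name, not the database name, so a root of SQLSERVER:\SQL\<serverName>\<databaseName> would not exist even with valid credentials. A sketch of the path shape for a default instance, reusing the $cred built in the function above:
# Path shape of the SQL Server provider: machine, then instance
# (DEFAULT for an unnamed instance), then the object hierarchy.
$root = "SQLSERVER:\SQL\<serverName>\DEFAULT"
# Databases then live under the instance:
#   SQLSERVER:\SQL\<serverName>\DEFAULT\Databases\<databaseName>
New-PSDrive MyDb -PSProvider SqlServer -Root $root -Credential $cred -Scope 1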
