Updating a value in an array with PowerShell

I'm working on a deployment script for VMs on a VMWare platform but I'm stuck.
Basically, my script does this:
It receives information from Excel and first runs some checks. It does that for each row (each row represents one VM) before any VM is created.
When all VMs are validated, the first VM will be created, and then the next one, etc.
One of my functions will calculate the best available storage disk. It returns the first storage disk with the most available diskspace.
That function looks like this:
Function Get-AvailableStorage {
    $Threshold = "80" # GB
    $TotalFreeSpace = Get-Cluster -Name Management |
        Get-Datastore |
        where Name -notlike "*local*" |
        sort FreeSpaceGB -Descending
    if ($SqlServices -notlike "NONE") {
        $VMSize = 30 + $VMStorage + 10
    } else {
        $VMSize = 30 + $VMStorage
    }
    foreach ($StorageDisk in $TotalFreeSpace) {
        [math]::floor($StorageDisk.FreeSpaceGB) | Set-Variable RoundedSpace
        $FreeAfter = $RoundedSpace - $VMSize | sort -Descending
        if ($FreeAfter -lt $Threshold) {
            return $StoragePool = "VSAN"
        } else {
            return $StorageDisk.Name
        }
    }
}
The problem
When I have multiple VMs in my Excel sheet, the storage disk is always the same, because the available disk space is not being updated (since none of the VMs has been deployed yet).
I did some investigation on my own:
I have to figure out a way to update the FreeSpaceGB property, but that is a read-only property.
I then thought about pushing every item into another array that I created myself, but that doesn't work either. Still a read-only property.
Then I thought about using a PSObject with Add-Member, but I cannot get that working either (or I'm doing it wrong).
$ownarray = @()
$item = New-Object PSObject
$Global:TotalFreeSpaceManagement = Get-Cluster -Name Management |
    Get-Datastore |
    where Name -notlike "*local*" |
    sort FreeSpaceGB -Descending
foreach ($StorageDisk in $Global:TotalFreeSpaceManagement) {
    $item | Add-Member -Type NoteProperty -Name "$($StorageDisk.Name)" -Value "$($StorageDisk.FreeSpaceGB)"
    $ownarray += $item
}
UPDATE
I use the hashtable like @Ansgar suggested. When I try it manually it works perfectly, but in my script it doesn't.
When I have multiple VMs in an array, the previous datastore is being used and the space that is left is UPDATED.
Example:
VM1 is 120GB and uses VM-105. That disk has 299GB left.
VM2 is 130GB and uses VM-105. Then the disk has 289GB left.
Both VMs are getting suggested VM-105 based on the most free space.
VM-105 should have 299 - 130 = 169GB left if the script was working correctly, but somehow $FreeSpace is updated, or $max is overwritten, and I cannot figure out how this happens.

You need to keep track of the changes to the free space, while also maintaining the association between each disk and its calculated remaining free space. To do that, read your storage into a hashtable that maps disks to their available free space. Then use that hashtable for selecting a disk, and update the respective free-space value when placing a VM.
$threshold = 80 # GB

# initialize global hashtable mapping datastore name -> free space (GB)
$storage = @{}
Get-Cluster -Name Management | Get-Datastore | Where-Object {
    $_.Name -notlike '*local*'
} | ForEach-Object {
    $script:storage[$_.Name] = [Math]::Floor($_.FreeSpaceGB)
}

function Get-AvailableStorage([int]$VMSize) {
    # find the disk that currently has the most free space
    $disk = ''
    $max  = 0
    foreach ($key in $script:storage.Keys) {
        $freespace = $script:storage[$key]  # just to shorten the condition below
        if ($freespace -gt $max -and $threshold -lt ($freespace - $VMSize)) {
            $max  = $freespace
            $disk = $key
        }
    }
    # return storage and update global hashtable
    if (-not $disk) {
        return 'VSAN'   # fallback if no suitable disk was found
    } else {
        $script:storage[$disk] -= $VMSize
        return $disk
    }
}
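For illustration, here is a minimal sketch of how the function might be called while looping over the validated rows; $vmRows and its SizeGB property are assumptions for the example, not part of the original script:
# hypothetical rows already read from the Excel sheet; only the size in GB matters here
$vmRows = @(
    [pscustomobject]@{ Name = 'VM1'; SizeGB = 120 },
    [pscustomobject]@{ Name = 'VM2'; SizeGB = 130 }
)
foreach ($row in $vmRows) {
    $target = Get-AvailableStorage -VMSize $row.SizeGB
    # $storage[$target] has already been reduced by SizeGB inside the function,
    # so the next iteration sees the updated free space
    Write-Host "$($row.Name) -> $target"
}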

Related

Powershell match properties and then selectively combine objects to create a third

I have a solution for this, but I believe it is not the best method as it takes forever, so I am looking for a faster/better/smarter way.
I have multiple pscustomObject objects pulled from .csv files. Each object has at least one common property. One is relatively small (around 200-300 items/lines in the object) but the other is sizable (around 60,000-100,000 items). The contents of one may or may not match the contents of the other.
I need to find where the two objects match on a specific property and then combine the properties of each object into one object with all or most properties.
An example snippet of the code (not exact, but for this it should work; sample data is shown in the CSV listings further down):
Write-Verbose "Pulling basic Fruit data together"
$Purchase = import-csv "C:\Purchase.csv"
$Selling = import-csv "C:\Selling.csv"
Write-Verbose "Combining Fruit names and removing duplicates"
$Fruits = $Purchase.Fruit
$Fruits += $Selling.Fruit
$Fruits = $Fruits | Sort-Object -Unique
$compareData = #()
Foreach ($Fruit in $Fruits) {
$IndResults = #()
$IndResults = [pscustomobject]#{
#Adding Purchase and Selling data
Farmer = $Purchase.Where({$PSItem.Fruit -eq $Fruit}).Farmer
Region = $Purchase.Where({$PSItem.Fruit -eq $Fruit}).Region
Water = $Purchase.Where({$PSItem.Fruit -eq $Fruit}).Water
Market = $Selling.Where({$PSItem.Fruit -eq $Fruit}).Market
Cost = $Selling.Where({$PSItem.Fruit -eq $Fruit}).Cost
Tax = $Selling.Where({$PSItem.Fruit -eq $Fruit}).Tax
}
Write-Verbose "Loading Individual results into response"
$CompareData += $IndResults
}
Write-Output $CompareData
I believe the issue is in lines like these:
Farmer = $Purchase.Where({$PSItem.Fruit -eq $Fruit}).Farmer
If I understand this correctly, it is searching through the whole $Purchase collection each time it goes through this line. I am looking for a way to speed that whole process up instead of having it scan the entire object for each match attempt.
Using this Join-Object:
$Purchase | Join $Selling -On Fruit | Format-Table
Result (using Simon Catlin's data):
Fruit Farmer Region Water Market Cost Tax
----- ------ ------ ----- ------ ---- ---
Apple Adam Alabama 1 MarketA 10 0.1
Cherry Charlie Cincinnati 2 MarketC 20 0.2
Damson Daniel Derby 3 MarketD 30 0.3
Elderberry Emma Eastbourne 4 MarketE 40 0.4
Fig Freda Florida 5 MarketF 50 0.5
using Join-Object
http://ramblingcookiemonster.github.io/Join-Object/
Join-Object -Left $purchase -Right $selling -LeftJoinProperty fruit -RightJoinProperty fruit -Type OnlyIfInBoth | ft
I had this very problem when trying to consolidate employee data from our HR system against employee data in our AD forest. With many thousands of rows, the process was taking an age.
I eventually walked away from custom objects and reverted to old school hash tables.
The hash table entries themselves then held a sub-hash table with the data. In your instance, the outer hash would be keyed on $fruit, with the sub-hash containing the various attributes, e.g. farmer, region, etc.
Hash tables are lightning quick in comparison. It's a shame that PowerShell is slow in this regard.
Shout if you need more info.
26/01 Example code... assuming I'm correctly understanding the requirement:
PURCHASE.CSV:
Fruit,Farmer,Region,Water
Apple,Adam,Alabama,1
Cherry,Charlie,Cincinnati,2
Damson,Daniel,Derby,3
Elderberry,Emma,Eastbourne,4
Fig,Freda,Florida,5
SELLING.CSV
Fruit,Market,Cost,Tax
Apple,MarketA,10,0.1
Cherry,MarketC,20,0.2
Damson,MarketD,30,0.3
Elderberry,MarketE,40,0.4
Fig,MarketF,50,0.5
CODE
[String] $Local:strPurchaseFile = 'c:\temp\purchase.csv';
[String] $Local:strSellingFile  = 'c:\temp\selling.csv';
[HashTable] $Local:objFruitHash = @{};
[System.Array] $Local:objSelectStringHit = $null;
[String] $Local:strFruit = '';

if ( (Test-Path -LiteralPath $strPurchaseFile -PathType Leaf) -and (Test-Path -LiteralPath $strSellingFile -PathType Leaf) ) {
    #
    # Populate data from purchase file.
    #
    foreach ( $objSelectStringHit in (Select-String -LiteralPath $strPurchaseFile -Pattern '^([^,]+),([^,]+),([^,]+),([^,]+)$' | Select-Object -Skip 1) ) {
        $objFruitHash[ $objSelectStringHit.Matches[0].Groups[1].Value ] = @{
            'Farmer' = $objSelectStringHit.Matches[0].Groups[2].Value;
            'Region' = $objSelectStringHit.Matches[0].Groups[3].Value;
            'Water'  = $objSelectStringHit.Matches[0].Groups[4].Value;
        };
    } #foreach-purchase-row
    #
    # Populate data from selling file.
    #
    foreach ( $objSelectStringHit in (Select-String -LiteralPath $strSellingFile -Pattern '^([^,]+),([^,]+),([^,]+),([^,]+)$' | Select-Object -Skip 1) ) {
        $objFruitHash[ $objSelectStringHit.Matches[0].Groups[1].Value ] += @{
            'Market' = $objSelectStringHit.Matches[0].Groups[2].Value;
            'Cost'   = [Convert]::ToDecimal( $objSelectStringHit.Matches[0].Groups[3].Value );
            'Tax'    = [Convert]::ToDecimal( $objSelectStringHit.Matches[0].Groups[4].Value );
        };
    } #foreach-selling-row
    #
    # Output data. At this point, you could now build a PSCustomObject.
    #
    foreach ( $strFruit in ($objFruitHash.Keys | Sort-Object) ) {
        Write-Host -Object ( '{0,-15}{1,-15}{2,-15}{3,-10}{4,-10}{5,10:C}{6,10:P}' -f
            $strFruit,
            $objFruitHash[$strFruit]['Farmer'],
            $objFruitHash[$strFruit]['Region'],
            $objFruitHash[$strFruit]['Water'],
            $objFruitHash[$strFruit]['Market'],
            $objFruitHash[$strFruit]['Cost'],
            $objFruitHash[$strFruit]['Tax']
        );
    } #foreach
} else {
    Write-Error -Message 'File error.';
} #else-if
I needed to do this myself for something similar. I wanted to take two system array objects and compare them, pulling out the matches without having to manipulate the input data each time. Here's the method I used; although I appreciate it is inefficient, it was instantaneous for the 200 or so records I had to work with.
I tried to translate what I was doing (users and their old and new home directories) into farmers, fruit and markets etc so I hope it makes sense!
$Purchase = Import-Csv "C:\Purchase.csv"
$Selling  = Import-Csv "C:\Selling.csv"
$compareData = @()
$Total = 0
foreach ($iPurch in $Purchase) {
    foreach ($iSell in $Selling) {
        if ($iPurch.fruit -match $iSell.fruit) {
            Write-Host "Match found between $($iPurch.Fruit) and $($iSell.Fruit)"
            $hash = @{
                Fruit  = $iPurch.Fruit
                Farmer = $iPurch.Farmer
                Region = $iPurch.Region
                Water  = $iPurch.Water
                Market = $iSell.Market
                Cost   = $iSell.Cost
                Tax    = $iSell.Tax
            }
            $Build = New-Object PSObject -Property $hash
            $Total = $Total + 1
            $compareData += $Build
        }
    }
}
Write-Host "Processed $Total records"

Storing and reading objects into objects or an array

I need to create VEEAM Replication jobs. When creating the job I need to provide a list of SourceNetworks and matching TargetNetworks. I have a CSV file that has the matching list in text and then I run a cmdlet to retrieve the matching network object.
CSV:
SourcePortGroup, TargetPortGroup
VLAN 103,LAN0_DMZ
VLAN 120,LAN0_JDE
VLAN 121,LAN0_IT-BDC
I wrote a foreach in which I retrieve the network object using:
foreach ($item in $csvlist) {
    Get-VBRServer -Name $SourceESXi | Get-VBRViServerNetworkInfo | Where-Object {
        $_.NetworkName -eq $Mapping.SourcePortGroup
    }
    Get-VBRServer -Name $TargetESXi | Get-VBRViServerNetworkInfo | Where-Object {
        $_.NetworkName -eq $Mapping.TargetPortGroup
    }
}
This works when debugging; I get the correct result, which is an object. But now I need to store each of them in a new object or in an array, so that later on, when creating the job, I can easily use the source and target mapping.
I have no clue on what the best way is to store the results and then call them when needed.
Assuming that each Get-VBRServer statement produces only a single item per iteration you could for example use a hashtable for mapping source to target networks:
$networks = @{}
foreach ($item in $csvlist) {
    $key = '{0}/{1}' -f $item.SourcePortGroup, $item.TargetPortGroup
    $networks[$key] = @{
        'Source' = Get-VBRServer -Name $SourceESXi |
            Get-VBRViServerNetworkInfo |
            Where-Object { $_.NetworkName -eq $item.SourcePortGroup }
        'Target' = Get-VBRServer -Name $TargetESXi |
            Get-VBRViServerNetworkInfo |
            Where-Object { $_.NetworkName -eq $item.TargetPortGroup }
    }
}
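For illustration, a short sketch of how the stored mapping could be read back later when building the job; the key format follows the loop above, and the actual VEEAM job cmdlet is left out since that part is not shown in the question:
# look up one mapping again by its 'Source/Target' key
$pair = $networks['VLAN 103/LAN0_DMZ']
$pair.Source    # network object on the source host
$pair.Target    # matching network object on the target host
# or walk all stored mappings
foreach ($key in $networks.Keys) {
    '{0}: {1} -> {2}' -f $key, $networks[$key].Source.NetworkName, $networks[$key].Target.NetworkName
}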

What do the values of 'A' and 'R' for msiScriptName represent?

First-time stackoverflower.
I have a need to remove 'ghost' entries from the PackageRegistrations of my software deployment GPOs.
What I mean by that is that there are more entries in the ADSI object than there are MSI/MST files associated to the GPO. i.e. what this blogger also seems to be experiencing http://justanotheritblog.co.uk/2016/11/15/list-msi-paths-from-software-installation-policies/ (I just found this when looking into my issue).
When nosing around the properties in ADSI, I stumbled across 'msiScriptName', which seems to have a value of either 'A' or 'R'.
What I cannot seem to find, is any information as to what these values may represent.
Any ideas on what the 'A' and 'R' mean and/or how to correctly identify and/or remove the 'ghost' entries would be gratefully received.
The reason for this is that I have a whole bunch of software deployment GPOs that need their file paths updating, and rather than manually editing each one I wanted to use PowerShell to bulk-update them - we are moving to DFS from a fixed file server, so I need to update the msiFileList properties. This I can do, but I do not want to waste processing overhead on irrelevant objects.
The following is rough code suggesting how I am doing this
$MSIFiles = @()
# Get all the software deployment GPOs, indicated by a display name containing 'Install',
# and create an object for each MSI/MST associated with it.
$Packages = Get-GPO -All | Where-Object { $_.DisplayName -like "*Install*" } | Get-ADObjectGPOPackages -Domain 'skyriver.internal'
foreach ($p in $Packages)
{
    $msiCount = ($p.msiFileList | Measure-Object).Count
    $msiFileListNew = @()
    for ($i = 0; $i -lt $msiCount; $i++)
    {
        $msiFile = $p.msiFileList[$i] -replace 'hoth(01|01.skyriver.internal|02.skyriver.internal)','skyriver.internal\data'
        $msiFileListNew += $msiFile
    }
    $Properties = [ordered]@{
        'gpoDisplayName'    = $p.gpoDisplayName
        'PackageNumber'     = $p.PackageNumber
        'DisplayName'       = $p.DisplayName
        'CN'                = $p.CN
        'DistinguishedName' = $p.DistinguishedName
        'Identity'          = $p.Identity
        'msiFileList'       = $msiFileListNew
    }
    $obj = New-Object -TypeName psobject -Property $Properties
    $MSIFiles += $obj
}
# Now make the replacements.
foreach ($m in $MSIFiles)
{
    Set-ADObject -Identity $m.Identity -Server dagobah.skyriver.internal -Replace @{msiFileList = $m.msiFileList}
}
So far as I can tell, A is Advertised (i.e. available for install) and R is Remove. The ghost packages probably have an R as they are no longer valid and are therefore to be uninstalled (I'm not sure if this only applies if the "uninstall when it falls out of scope" option is enabled before deleting?).
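If that interpretation holds, one way to avoid wasting work on the ghost entries would be to filter on msiScriptName before building the replacement list. This is only a sketch and assumes the objects returned by Get-ADObjectGPOPackages expose that property, which I have not verified:
# keep only packages that are still advertised ('A'); skip removal/ghost entries ('R')
$Packages = Get-GPO -All |
    Where-Object { $_.DisplayName -like "*Install*" } |
    Get-ADObjectGPOPackages -Domain 'skyriver.internal' |
    Where-Object { $_.msiScriptName -eq 'A' }   # assumes msiScriptName is surfaced on these objects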

PowerShell: modify elements of array

My cmdlet get-objects returns an array of MyObject with public properties:
public class MyObject {
    public string testString = "test";
}
I want users without programming skills to be able to modify public properties (like testString in this example) from all objects of the array.
Then feed the modified array to my second cmdlet which saves the object to the database.
That means the syntax of the "editing code" must be as simple as possible.
It should look somewhat like this:
> get-objects | foreach{$_.testString = "newValue"} | set-objects
I know that this is not possible, because $_ just returns a copy of the element from the array.
So you'd need to access the elements by index in a loop and then modify the property. This gets complicated really quickly for people who are not familiar with programming.
Is there any "user-friendly" built-in way of doing this? It shouldn't be more "complex" than a simple foreach {property = value}
I know that this is not possible, because $_ just returns a copy of the element from the array (https://social.technet.microsoft.com/forums/scriptcenter/en-US/a0a92149-d257-4751-8c2c-4c1622e78aa2/powershell-modifying-array-elements)
I think you're misinterpreting the answer in that thread.
$_ is indeed a local copy of the value returned by whatever enumerator you're currently iterating over - but you can still return your modified copy of that value (as pointed out in the comments):
Get-Objects | ForEach-Object {
    # modify the current item
    $_.propertyname = "value"
    # drop the modified object back into the pipeline
    $_
} | Set-Objects
In (allegedly impossible) situations where you need to modify a stored array of objects, you can use the same technique to overwrite the array with the new values:
PS C:\> $myArray = 1,2,3,4,5
PS C:\> $myArray = $myArray |ForEach-Object {
>>> $_ *= 10
>>> $_
>>>}
>>>
PS C:\> $myArray
10
20
30
40
50
That means the syntax of the "editing code" must be as simple as possible.
Thankfully, PowerShell is very powerful in terms of introspection. You could implement a wrapper function that adds the $_; statement to the end of the loop body, in case the user forgets:
function Add-PsItem
{
    [CmdletBinding()]
    param(
        [Parameter(Mandatory,ValueFromPipeline,ValueFromRemainingArguments)]
        [psobject[]]$InputObject,

        [Parameter(Mandatory)]
        [scriptblock]$Process
    )

    begin {
        $InputArray = @()

        # fetch the last statement in the scriptblock
        $EndBlock = $Process.Ast.EndBlock
        $LastStatement = $EndBlock.Statements[-1].Extent.Text.Trim()

        # check if the last statement is `$_`
        if($LastStatement -ne '$_'){
            # if not, add it
            $Process = [scriptblock]::Create('{0};$_' -f $Process.ToString())
        }
    }

    process {
        # collect all the input
        $InputArray += $InputObject
    }

    end {
        # pipe input to foreach-object with the new scriptblock
        $InputArray | ForEach-Object -Process $Process
    }
}
Now the users can do:
Get-Objects | Add-PsItem {$_.testString = "newValue"} | Set-Objects
The ValueFromRemainingArguments attribute also lets users supply input as unbounded parameter values:
PS C:\> Add-PsItem { $_ *= 10 } 1 2 3
10
20
30
This might be helpful if the user is not used to working with the pipeline.
Here's a more general approach, arguably easier to understand, and less fragile:
# $dataSource would be get-object in the OP
# $dataUpdater is the script the user supplies to modify properties
# $dataSink would be set-object in the OP
function Update-Data {
    param(
        [scriptblock] $dataSource,
        [scriptblock] $dataUpdater,
        [scriptblock] $dataSink
    )
    & $dataSource |
        % {
            $updaterOutput = & $dataUpdater
            # This "if" allows $dataUpdater to create an entirely new object, or
            # modify the properties of an existing object
            if ($updaterOutput -eq $null) {
                $_
            } else {
                $updaterOutput
            }
        } |
        % $dataSink
}
Here are a couple of examples of use. The first example isn't applicable to the OP, but it's being used to create a data set that is applicable (a set of objects with properties).
# Use update-data to create a set of data with properties
#
$theDataSource = @()   # will be filled in by first update-data
update-data {
    # data source
    0..4
} {
    # data updater: creates a new object with properties
    New-Object psobject |
        # add-member uses a hash table created on the fly to add properties
        # to a psobject
        add-member -PassThru -NotePropertyMembers @{
            room = @('living','dining','kitchen','bed')[$_];
            size = @(320, 200, 250, 424)[$_] }
} {
    # data sink
    $global:theDataSource += $_
}
$theDataSource | ft -AutoSize

# Now use update-data to modify properties in the data set
# this $dataUpdater updates the 'size' property
#
$theDataSink = @()
update-data { $theDataSource } { $_.size *= 2 } { $global:theDataSink += $_ }
$theDataSink | ft -AutoSize
And then the output:
room size
---- ----
living 320
dining 200
kitchen 250
bed 424
room size
---- ----
living 640
dining 400
kitchen 500
bed 848
As described above update-data relies on a "streaming" data source and sink. There is no notion of whether the first or fifteenth element is being modified. Or if the data source uses a key (rather than an index) to access each element, the data sink wouldn't have access to the key. To handle this case a "context" (for example an index or a key) could be passed through the pipeline along with the data item. The $dataUpdater wouldn't (necessarily) need to see the context. Here's a revised version with this concept added:
# $dataSource and $dataSink scripts need to be changed to output/input an
# object that contains both the object to modify, as well as the context.
# To keep it simple, $dataSource will output an array with two elements:
# the value and the context. And $dataSink will accept an array (via $_)
# containing the value and the context.
function Update-Data {
    param(
        [scriptblock] $dataSource,
        [scriptblock] $dataUpdater,
        [scriptblock] $dataSink
    )
    & $dataSource |
        % {
            $saved_ = $_
            # Set $_ to the data object
            $_ = $_[0]
            $updaterOutput = & $dataUpdater
            if ($updaterOutput -eq $null) { $updaterOutput = $_ }
            $_ = $updaterOutput, $saved_[1]
            # pass the (value, context) pair down the pipeline as a single object
            ,$_
        } |
        % $dataSink
}
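A small usage sketch of the revised function under that convention (the room names and the index context are my own illustration, not from the original answer): the source emits (value, index) pairs, the updater only ever sees the value in $_, and the sink receives the pair back:
$results = @{}
Update-Data {
    # data source: emit (value, index) pairs
    $i = 0
    'living','dining','kitchen' | % { ,($_, $i); $i++ }
} {
    # data updater: only sees the value
    $_.ToUpper()
} {
    # data sink: $_[0] is the updated value, $_[1] is the original index
    $results[$_[1]] = $_[0]
}
$results    # 0 -> LIVING, 1 -> DINING, 2 -> KITCHEN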

Powershell - Searching and Comparing Arrays with Quest CMDlets

Trying to determine if there are any user folders on the network that don’t have an associated user account. All results return "Missing" when the majority should return "Found". Any ideas?
$Dir = "\\ServerName\Share\"
$FolderList = Get-ChildItem($Dir) | where {$_.psIsContainer -eq $true}
$UserList = get-qaduser -sizelimit 0 | select LogonName
foreach ($Folder in $FolderList)
{
if ($UserList -contains $Folder.name)
{
"Found: " + $Folder.name
}
Else
{
"Missing: " + $Folder.name
}
}
How about trying a slightly different approach that uses a hashtable (which offers exceptionally fast lookups of keys):
$users = @{}
Get-QADUser -SizeLimit 0 | Foreach {$users["$($_.LogonName)"] = $true}
$dir = "\\ServerName\Share\"
Get-ChildItem $dir | Where {$_.PSIsContainer -and !$users["$($_.Name)"]}
If the folder name doesn't exactly match the LogonName, then, as EBGreen notes, you will need to adjust the key ($users["$($_.LogonName)"]) or the folder name when you use it to index the hashtable (!$users["$($_.Name)"]).
-contains will match only if an item in the collection is identical to what you are testing, so be sure that $Folder.Name is exactly the same as LogonName. Usually it wouldn't be. Most companies would have the folder name be foo$ for a user named foo.
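One more detail worth spelling out (my addition, not part of the original answers): select LogonName returns objects with a LogonName property rather than plain strings, so $UserList -contains $Folder.name can never match. Expanding the property to plain strings makes the original comparison work, assuming folder names really do equal logon names:
# expand to plain strings so -contains compares string to string
$UserList = Get-QADUser -SizeLimit 0 | Select-Object -ExpandProperty LogonName
foreach ($Folder in $FolderList)
{
    if ($UserList -contains $Folder.Name) { "Found: " + $Folder.Name }
    else { "Missing: " + $Folder.Name }
}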
