I have an array of objects, a list of computers, so the array looks something like this:
+----------+-----------+---------------+
| size(MB) | Computer | Location |
+----------+-----------+---------------+
| 100 | compname1 | C:\Users\etc\ |
| Offline | compname2 | Offline |
| 30 | compname3 | C:\users\etc2 |
+----------+-----------+---------------+
I need to iterate through these and, for any that are offline, scan them again. The issue I have is: on each iteration of the loop, how do I find the element number of the current object in the array?
I had:
$element = [array]::IndexOf($complist.computer,$compname)
This returned a value, but it doesn't seem to be correct (the returned index does not match the correct array element).
The end game is that I need to run through the array for the computers marked as 'Offline', run the scan on that computer and then replace the element in the array with the new data.
Not tested:
$newarray =
foreach ($item in $array)
{
if (
( $item.location -eq 'offline' ) -and
( Test-Connection $Item.computer -Quiet )
)
{
Write-Verbose "Found $($item.Computer) online. Updating."
foreach ($userdir in (Get-ChildItem "\\$($item.Computer)\C$\users" -Directory))
{
$DirSize = (Get-ChildItem $userdir.FullName -Recurse | measure -Sum length).sum
[PSCustomObject]@{
Size = $DirSize
Computer = $item.Computer
Location = 'C:\users\' + $userdir.Name
}
}
}
Else { $item }
}
Since it's not a 1:1 replacement, you can't just do a replace - you're going to have to create a new array. This reads in the original array, and creates new entries for computers that were offline but are now online. Everything else just passes through.
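As an aside on the original index question: if a replacement ever is 1:1, a plain for loop gives you the element number directly and lets you assign back into the same slot. A minimal, untested sketch (Get-NewScanResult is a hypothetical stand-in for whatever rescan produces the new row):
for ($i = 0; $i -lt $complist.Count; $i++)
{
    if ($complist[$i].Location -eq 'Offline')
    {
        # $i is the element number of the current object
        $complist[$i] = Get-NewScanResult -ComputerName $complist[$i].Computer   # hypothetical rescan helper
    }
}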
Related
I'm trying to find the row with an attribute that is larger than the other row's attributes. Example:
$Array
Name Value
---- ----
test1 105
test2 101
test3 512 <--- Selects this row as it is the largest value
Here is my attempt to 'one-line' this, but it doesn't work.
$Array | % { If($_.value -gt $Array[0..($Array.Count)].value){write-host "$_.name is the largest row"}}
Currently it outputs nothing.
Desired Output:
"test1 is the largest row"
I'm having trouble visualizing how to do this efficiently without some serious spaghetti code.
You could take advantage of Sort-Object to rank them by the property "Value" like this
$array = @(
[PSCustomObject]@{Name='test1';Value=105}
[PSCustomObject]@{Name='test2';Value=101}
[PSCustomObject]@{Name='test3';Value=512}
)
$array | Sort-Object -Property value -Descending | Select-Object -First 1
Output
Name Value
---- -----
test3 512
To incorporate your Write-Host, you can just run the one you select through a ForEach-Object.
$array | Sort-Object -Property value -Descending |
Select-Object -First 1 | Foreach-Object {Write-host $_.name,"has the highest value"}
test3 has the highest value
Or capture to a variable
$Largest = $array | Sort-Object -Property value -Descending | Select-Object -First 1
Write-host $Largest.name,"has the highest value"
test3 has the highest value
PowerShell has many built in features to make tasks like this easier.
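Another built-in that fits here is Measure-Object, which computes the maximum of a property directly. It returns the value rather than the row, so a follow-up filter is needed to recover the name; a small sketch against the same $array:
$max = ($array | Measure-Object -Property Value -Maximum).Maximum
$array | Where-Object { $_.Value -eq $max } |
    ForEach-Object { Write-Host $_.Name,"has the highest value" }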
If this is really an array of PSCustomObjects you can do something like:
$Array =
@(
[PSCustomObject]@{ Name = 'test1'; Value = 105 }
[PSCustomObject]@{ Name = 'test2'; Value = 101 }
[PSCustomObject]@{ Name = 'test3'; Value = 512 }
)
$Largest = ($Array | Sort-Object Value)[-1].Name
Write-host $Largest,"has the highest value"
This sorts your array by the Value property, references the last element using the [-1] syntax, and then returns the Name property of that object.
Or if you're a purist you can assign the variable like:
$Largest = $Array | Sort-Object Value | Select-Object -Last 1 -ExpandProperty Name
If you want the whole object, just remove .Name and -ExpandProperty Name respectively (see the variants below).
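For reference, the whole-object variants of those two assignments would then be:
$Largest = ($Array | Sort-Object Value)[-1]
$Largest = $Array | Sort-Object Value | Select-Object -Last 1
Write-Host $Largest.Name,"has the highest value"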
Update:
As noted PowerShell has some great tools to help with common tasks like sorting & selecting data. However, that doesn't mean there's never a need for looping constructs. So, I wanted to make a couple of points about the OP's own answer.
First, if you do need to reference array elements by index use a traditional For loop, which might look something like:
For( $i = 0; $i -lt $Array.Count; ++$i )
{
If( $array[$i].Value -gt $LargestValue )
{
$LargestName = $array[$i].Name
$LargestValue = $array[$i].Value
}
}
$i is commonly used as an iteration variable, and within the script block is used as the array index.
Second, even the traditional loop is unnecessary in this case. You can stick with the ForEach loop and track the largest value as and when it's encountered. That might look something like:
ForEach( $Row in $array )
{
If( $Row.Value -gt $LargestValue )
{
$LargestName = $Row.Name
$LargestValue = $Row.Value
}
}
Strictly speaking you don't need to assign the variables beforehand, though it may be a good practice to precede either of these with:
$LargestName = ""
$LargestValue = 0
In these examples you'd have to follow with a slightly modified Write-Host command
Write-host $LargestName,"has the highest value"
Note: Borrowed some of the test code from Doug Maurer's Fine Answer. Considering our answers were similar, this was just to make my examples more clear to the question and easier to test.
Figured it out, hopefully this isn't awful:
$Count = 1
$CurrentLargest = 0
Foreach($Row in $Array) {
# Compare This iteration vs the next to find the largest
If($Row.value -gt $Array.Value[$Count]){$CurrentLargest = $Row}
Else {$CurrentLargest = $Array[$Count]}
# Replace the existing largest value with the new one if it is larger than it.
If($CurrentLargest.Value -gt $Largest.Value){ $Largest = $CurrentLargest }
$Count += 1
}
Write-host $Largest.name,"has the highest value"
Edit: it's awful; look at the other answers for a better way.
I have a 100-column table in SQL Server, and I want to make it so that not all of the columns need to be passed in the file to load. I have assigned column names in a table, which I then compare against the file's columns in a hash table to find the matching columns. Based on the matches, I then generate the code for the array I want to use to insert the data from the file. The problem is that it doesn't like calling the one variable to create the custom object.
I store the following below in an array (up to 100 of these; a few are shown below as a sample; notice, for example, that sqlcolumn2 is skipped).
sqlcolumn1 = if ([string]::IsNullOrEmpty($obj.P1) -eq $true) {$null} else {"$obj.P1"}
sqlcolumn3 = if ([string]::IsNullOrEmpty($obj.P2) -eq $true) {$null} else {"$obj.P2"}
sqlcolumn4 = if ([string]::IsNullOrEmpty($obj.P3) -eq $true) {$null} else {"$obj.P3"}
sqlcolumn5 = if ([string]::IsNullOrEmpty($obj.P4) -eq $true) {$null} else {"$obj.P4"}
Here is how the array is built:
foreach($line in $Final)
{
$DataRow = "$($line."TableColumnName") = if ([string]::IsNullOrEmpty(`$obj.$($line."PName")) -eq `$true) {`$null} else {`"`$obj.$($line."PName")`"}"
$DataArray += $DataRow
}
I then try to add it to a final array, which I want to loop through for each row of data, after which I would perform the insert from the array. Even though the "string" value in the array above would be correct if it were hand-coded, I can't get it to recognize the rows and run.
foreach ($obj in $data2)
{
$test = [PSCustomObject] @{
$DataArray = Invoke-Expression $DataArray
}
}
If I just type $DataArray, it doesn't like this, because it wants the = sign, which I already have built into the string.
Is what I am trying to do even possible?
I am attempting to template out the various ways we receive this data: some people send us 30 of the 100 columns, others more or less, and no one uses exactly the same columns. The goal is to cut down on individual scripts for everything.
Adding more code:
Function ArrayCompare() {
[CmdletBinding()]
PARAM(
[Parameter(Mandatory=$True)]$Array1,
[Parameter(Mandatory=$True)]$A1Match,
[Parameter(Mandatory=$True)]$Array2,
[Parameter(Mandatory=$True)]$A2Match)
$Hash = @{}
foreach ($Data In $Array1) {
$Hash[$Data.$A1Match] += ,$Data
}
foreach ($Data In $Array2) {
$Hash[$Data.$A2Match] += ,$Data
}
foreach ($KeyValue In $Hash.GetEnumerator()){
$Match1, $Match2 = $KeyValue.Value.Where( {$_.$A1Match}, 'Split')
[PSCustomObject]@{
MatchValue = $KeyValue.Key
A1Matches = $Match1.Count
A2Matches = $Match2.Count
TablePosition = [int]$Match2.TablePosition
TableColumnName = $Match2.TableColumnName
# PName is the generic ascending column name P(##) that the Import-Excel module assigns: ColumnA = P1, ColumnB = P2, etc., until no data is detected. This allows flexibility without having to know how many columns there are.
PName = $Match1.Name}
}
}
$Server = 'ServerName'
$Catalog = 'DBName'
$DestinationTable = 'ImportIntoTableName'
$FileIdentifierID = 10
$FileName = 'Test.xlsx'
$FilePath = 'C:\'
$FullFilePath = $FilePath + $FileName
$data = Import-Excel -Path $FullFilePath -NoHeader -StartRow 1 # Import-Excel module for working with xlsx Excel files
$data2 = Import-Excel -Path $FullFilePath -NoHeader -StartRow 2 # Import-Excel module for working with xlsx Excel files
$ExpectedHeaderArray = @()
$HeaderArray = @()
$DataArray = @()
$HeaderDetect = @()
$HeaderDetect = $data | Select-Object -First 1 # Header Row In File
$HeaderDetect |
ForEach-Object {
$ColumnValue = $_
$ColumnValue |
Get-Member -MemberType *Property |
Select-Object -ExpandProperty Name |
ForEach-Object {
$HeaderValues = [PSCustomObject]@{
Name = $_
Value = $ColumnValue.$_}
$HeaderArray += $HeaderValues
}
}
# Query below provides a list of all expected file headers and the table column name they map to
$Query = "SELECT TableColumnName, FileHeaderName, TablePosition FROM dbo.FileHeaders WHERE FileIdentifierID = $($FileIdentifierID)"
$ds = Invoke-Sqlcmd -ServerInstance $Server -Database $Catalog -Query $Query -OutputAs DataSet
$ExpectedHeaderArray = foreach($Row in $ds.Tables[0].Rows)
{
new-object psObject -Property @{
TableColumnName = "$($row.TableColumnName)"
FileHeaderName = "$($row.FileHeaderName)"
TablePosition = "$($row.TablePosition)"
}
}
#Use Function Above
#Bring it together so we know what P(##) goes with which header in file/mapped to table column name
$Result = ArrayCompare -Array1 $HeaderArray -A1Match Value -Array2 $ExpectedHeaderArray -A2Match FileHeaderName
$Final = $Result | sort TablePosition
foreach($Line in $Final)
{
$DataRow = "$($Line."TableColumnName") = if ([string]::IsNullOrEmpty(`$obj.$($Line."PName")) -eq `$true) {`$null} else {`"`$obj.$($Line."PName"))`"}"
$DataArray += $DataRow
}
# The output below is what the code inside the last array would be that I would use to import into excel.
# The goal is to be dynamic and match headers in the file to the stored header value and import into a table (mapped from header column to table column name)
# The reason for this is before I was here, there were many different "versions" of a layout that was given out. In the end, it is all one in the same
# but some send all 100 columns, some only send a handful, some send 80 etc. I am trying to have everything flow through here vs. 60+ pieces of code/stored procedures/ssis packs
Write-Output $DataArray
# Output Sample -- Note how in the sample, P2 and subsequent skip SQLColumn2 because P2 maps to the header value of position 3 in the sql table and each after is one off.
# In this example, SqlColumn2 would not be populated
# SqlColumn1 = if ([string]::IsNullOrEmpty($obj.P1) -eq $true) {$null} else {"$obj.P1"}
# SqlColumn3 = if ([string]::IsNullOrEmpty($obj.P2) -eq $true) {$null} else {"$obj.P2"}
# SqlColumn4 = if ([string]::IsNullOrEmpty($obj.P3) -eq $true) {$null} else {"$obj.P3"}
# SqlColumn5 = if ([string]::IsNullOrEmpty($obj.P4) -eq $true) {$null} else {"$obj.P4"}
# I know this doesn't work. This is where I'm stuck, how to build an array now off of this output from above
foreach ($obj in $data2)
{
$test = [PSCustomObject] @{
$DataArray = Invoke-Expression $DataArray}
}
I'm going to re-state your question first, just to make sure I understand it properly (it's possible I don't!)...
You've got an excel file that looks something like this:
+---+---------+---------+---------+
| | A | B | C |
+---+---------+---------+---------+
| 1 | HeaderA | HeaderB | HeaderC |
+---+---------+---------+---------+
| 2 | Value P | Value Q | Value R |
+---+---------+---------+---------+
| 3 | Value S | Value T | Value U |
+---+---------+---------+---------+
You've also got a database table which looks like this:
+---------+---------+---------+---------+
| ColumnW | ColumnX | ColumnY | ColumnZ |
+---------+---------+---------+---------+
| ....... | ....... | ....... | ....... |
+---------+---------+---------+---------+
and a column mapping table like this (note, ColumnX isn't mapped in this example):
+-----------------+----------------+---------------+
| TableColumnName | FileHeaderName | TablePosition |
+-----------------+----------------+---------------+
| ColumnW | HeaderA | 1 |
+-----------------+----------------+---------------+
| ColumnY | HeaderB | 2 |
+-----------------+----------------+---------------+
| ColumnZ | HeaderC | 3 |
+-----------------+----------------+---------------+
You want to insert the values from the spreadsheet into the database table, using the data in your mapping table so you get this:
+---------+---------+---------+---------+
| ColumnW | ColumnX | ColumnY | ColumnZ |
+---------+---------+---------+---------+
| Value P | null    | Value Q | Value R |
+---------+---------+---------+---------+
| Value S | null    | Value T | Value U |
+---------+---------+---------+---------+
So let's load the spreadsheet (letting the header row generate meaningful property names this time):
$data = Import-Excel -Path ".\MySpreadsheet.xlsx";
write-host ($data | ft | out-string);
# HeaderA HeaderB HeaderC
# ------- ------- -------
# Value P Value Q Value R
# Value S Value T Value U
and get your column mapping data (I'm programmatically creating an in-memory dataset, but you obviously read yours from your database instead):
$mappings = new-object System.Data.DataTable;
$null = $mappings.Columns.Add("TableColumnName", [string]);
$null = $mappings.Columns.Add("FileHeaderName", [string]);
$null = $mappings.Columns.Add("TablePosition", [int]);
@(
@{ "TableColumnName"="ColumnW"; "FileHeaderName"="HeaderA"; "TablePosition"=1 },
@{ "TableColumnName"="ColumnY"; "FileHeaderName"="HeaderB"; "TablePosition"=2 },
@{ "TableColumnName"="ColumnZ"; "FileHeaderName"="HeaderC"; "TablePosition"=3 }
) | % {
$row = $mappings.NewRow();
$row.TableColumnName = $_.TableColumnName;
$row.FileHeaderName = $_.FileHeaderName;
$row.TablePosition = $_.TablePosition;
$mappings.Rows.Add($row);
}
$ds = new-object System.Data.DataSet;
$ds.Tables.Add($mappings);
write-host ($ds.Tables[0] | ft | out-string)
# TableColumnName FileHeaderName TablePosition
# --------------- -------------- -------------
# ColumnW HeaderA 1
# ColumnY HeaderB 2
# ColumnZ HeaderC 3
Now we can build the "mapped" objects:
$values = @();
foreach( $row in $data )
{
$properties = [ordered] @{};
foreach( $mapping in $mappings )
{
$properties.Add($mapping.TableColumnName, $row."$($mapping.FileHeaderName)");
}
$values += new-object PSCustomObject -Property $properties;
}
write-host ($values | ft | out-string)
# ColumnW ColumnY ColumnZ
# ------- ------- -------
# Value P Value Q Value R
# Value S Value T Value U
The tricksy bit is $properties.Add($mapping.TableColumnName, $row."$($mapping.FileHeaderName)"); - basically, you can access object properties in PowerShell using a dotted string literal or variable (I'm not sure of the exact feature name) - e.g.
PS> $myValue = new-object PSCustomObject -Property @{ "aaa"="bbb"; "ccc"="ddd" }
PS> $myValue."aaa"
bbb
PS> $myProperty = "aaa"
PS> $myValue.$myProperty
"bbb"
so $row."$($mapping.FileHeaderName)" is an expression that evaluates to the value of the property of $row named in $mapping.FileHeaderName.
And then finally you can insert the objects into your database using your existing process...
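In case it's useful, here is one rough shape that final step could take, reusing Invoke-Sqlcmd and the $Server, $Catalog and $DestinationTable variables from the question's script. It's only a sketch: it builds plain INSERT statements, does not escape quotes in the values, and writes empty strings rather than NULLs, so treat it as illustrative rather than production-ready:
# Column list comes straight from the mapping table
$columnNames = foreach ($mapping in $mappings) { $mapping.TableColumnName }
$columnList  = $columnNames -join ', '
foreach ($value in $values)
{
    # Pull each mapped property off the object built above
    $rowValues = foreach ($mapping in $mappings) { "'" + $value.($mapping.TableColumnName) + "'" }
    $insert    = "INSERT INTO $DestinationTable ($columnList) VALUES ($($rowValues -join ', '))"
    Invoke-Sqlcmd -ServerInstance $Server -Database $Catalog -Query $insert
}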
Note that I couldn't quite work out what your ArrayCompare is actually doing so it's possible the above doesn't solve your problem 100%, but it's hopefully close enough that you can either work the difference out yourself, or leave a comment with where it differs from your desired solution.
Hope this helps.
I'm using Powershell to set up IIS bindings on a web server, and having a problem with the following code:
$serverIps = gwmi Win32_NetworkAdapterConfiguration |
    Where { $_.IPAddress } |
    Select -Expand IPAddress |
    Where { $_ -like '*.*.*.*' } |
    Sort
if ($serverIps.length -le 1) {
Write-Host "You need at least 2 IP addresses for this to work!"
exit
}
$primaryIp = $serverIps[0]
$secondaryIp = $serverIps[1]
If there are 2+ IPs on the server, fine: PowerShell returns an array, and I can query the array length and extract the first and second addresses just fine.
The problem is that if there's only one IP, PowerShell doesn't return a one-element array; it returns the IP address as a string, like "192.168.0.100". The string has a .length property, its value is greater than 1, so the test passes, and I end up with the first two characters of the string instead of the first two IP addresses in the collection.
How can I either force Powershell to return a one-element collection, or alternatively determine whether the returned "thing" is an object rather than a collection?
Define the variable as an array in one of two ways...
Wrap your piped commands in parentheses with an @ at the beginning:
$serverIps = @(gwmi Win32_NetworkAdapterConfiguration |
    Where { $_.IPAddress } |
    Select -Expand IPAddress |
    Where { $_ -like '*.*.*.*' } |
    Sort)
Specify the data type of the variable as an array:
[array]$serverIps = gwmi Win32_NetworkAdapterConfiguration |
    Where { $_.IPAddress } |
    Select -Expand IPAddress |
    Where { $_ -like '*.*.*.*' } |
    Sort
Or, check the data type of the variable...
IF ($ServerIps -isnot [array])
{ <error message> }
ELSE
{ <proceed> }
Force the result to an array so that you have a Count property. Single (scalar) objects do not have a Count property in older versions of PowerShell. Strings have a Length property, so you might get false results; use the Count property:
if (@($serverIps).Count -le 1)...
By the way, instead of using a wildcard that can also match strings, use the -as operator:
[array]$serverIps = gwmi Win32_NetworkAdapterConfiguration -filter "IPEnabled=TRUE" | Select-Object -ExpandProperty IPAddress | Where-Object {($_ -as [ipaddress]).AddressFamily -eq 'InterNetwork'}
You can either add a comma (,) before the returned list, like return ,$list, or cast it to [array] or [YourType[]] where you intend to use the list.
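To illustrate the unary comma: inside a function it wraps the collection, so the pipeline unrolls the wrapper instead of the collection itself, and even a single item comes back as a one-element array. A small sketch with a made-up function name:
function Get-ServerIps {
    # @() makes sure $ips is an array even when only one address matches
    $ips = @(Get-WmiObject Win32_NetworkAdapterConfiguration |
        Where-Object { $_.IPAddress } |
        Select-Object -ExpandProperty IPAddress |
        Where-Object { $_ -like '*.*.*.*' })
    # the leading comma wraps the array so the caller receives $ips itself, not its elements
    return ,$ips
}
$serverIps = Get-ServerIps
$serverIps.GetType().Name   # Object[] even with a single match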
If you declare the variable as an array ahead of time, you can add elements to it - even if it is just one...
This should work...
$serverIps = @()
gwmi Win32_NetworkAdapterConfiguration |
    Where { $_.IPAddress } |
    Select -Expand IPAddress |
    Where { $_ -like '*.*.*.*' } |
    Sort |
    ForEach-Object { $serverIps += $_ }
You can use Measure-Object to get the actual object count, without resorting to an object's Count property.
$serverIps = gwmi Win32_NetworkAdapterConfiguration |
    Where { $_.IPAddress } |
    Select -Expand IPAddress |
    Where { $_ -like '*.*.*.*' } |
    Sort
if (($serverIps | Measure).Count -le 1) {
Write-Host "You need at least 2 IP addresses for this to work!"
exit
}
Return it as a wrapped (reference) object, so it is never converted while being passed:
return @{ Value = @("single data") }
I had this problem passing an array to an Azure deployment template. If there was one object, PowerShell "converted" it to a string. In the example below, $a is returned from a function that gets VM objects according to the value of a tag. I pass $a to the New-AzureRmResourceGroupDeployment cmdlet by wrapping it in @(). Like so:
$TemplateParameterObject=@{
VMObject=@($a)
}
New-AzureRmResourceGroupDeployment -ResourceGroupName $RG -Name "TestVmByRole" -Mode Incremental -DeploymentDebugLogLevel All -TemplateFile $templatePath -TemplateParameterObject $TemplateParameterObject -verbose
VMObject is one of the template's parameters.
Might not be the most technical / robust way to do it, but it's enough for Azure.
Update
Well, the above did work. I've since tried all of the above and more, but the only way I have managed to pass $vmObject as an array with one element, compatible with the deployment template, is as follows (I suspect MS have been playing again; this was a reported and fixed bug in 2015):
[void][System.Reflection.Assembly]::LoadWithPartialName("System.Web.Extensions")
foreach($vmObject in $vmObjects)
{
#$vmTemplateObject = $vmObject
$asJson = (ConvertTo-Json -InputObject $vmObject -Depth 10 -Verbose) #-replace '\s',''
$DeserializedJson = (New-Object -TypeName System.Web.Script.Serialization.JavaScriptSerializer -Property @{MaxJsonLength=67108864}).DeserializeObject($asJson)
}
$vmObjects is the output of Get-AzureRmVM.
I pass $DeserializedJson to the deployment template's parameter (of type array).
For reference, the lovely error New-AzureRmResourceGroupDeployment throws is
"The template output '{output_name}' is not valid: The language expression property 'Microsoft.WindowsAzure.ResourceStack.Frontdoor.Expression.Expressions.JTokenExpression'
can't be evaluated.."
There is a way to deal with your situation. Leave most of your code as-is and just change the way you deal with the $serverIps object. This code can handle $null, only one item, or many items.
$serverIps = gwmi Win32_NetworkAdapterConfiguration |
    Where { $_.IPAddress } |
    Select -Expand IPAddress |
    Where { $_ -like '*.*.*.*' } |
    Sort
# Always use ".Count" instead of ".Length".
# This works on $null, only one item, or many items.
if ($serverIps.Count -le 1) {
Write-Host "You need at least 2 IP addresses for this to work!"
exit
}
# Always use foreach on an array-possible object, so that
# you don't have to deal with this issue anymore.
$serverIps | foreach {
# The $serverIps could be $null. Even $null can loop once.
# So we need to skip the $null condition.
if ($_ -ne $null) {
# Get the index of the array.
# The @($serverIps) makes sure it must be an array.
$idx = @($serverIps).IndexOf($_)
if ($idx -eq 0) { $primaryIp = $_ }
if ($idx -eq 1) { $secondaryIp = $_ }
}
}
In PowerShell Core, a .Count property exists on every object. In Windows PowerShell, "almost" every object has a .Count property.
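A quick way to see the difference between Length and Count on the original example (behaviour shown is Windows PowerShell 3.0+ and PowerShell Core):
("192.168.0.100").Length    # 13 - string length, which is what broke the original check
("192.168.0.100").Count     # 1  - the synthetic Count counts objects, not characters
($null).Count               # 0
(@("192.168.0.100")).Count  # 1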
The script below reads my Outlook emails, but how do I access the output? I'm new to PowerShell and I'm still getting used to certain things. I just want to get the body of 10 unread Outlook emails and store them in an array called $Body.
$olFolderInbox = 6
$outlook = new-object -com outlook.application;
$ns = $outlook.GetNameSpace("MAPI");
$inbox = $ns.GetDefaultFolder($olFolderInbox)
#checks 10 newest messages
$inbox.items | select -first 10 | foreach {
if($_.unread -eq $True) {
$mBody = $_.body
#Splits the line before any previous replies are loaded
$mBodySplit = $mBody -split "From:"
#Assigns only the first message in the chain
$mBodyLeft = $mbodySplit[0]
#build the string that will be spoken
$q = "From: " + $_.SenderName + ("`n") + " Message: " + $mBodyLeft
#create the COM object and invoke the Speak() method
(New-Object -ComObject SAPI.SPVoice).Speak($q) | Out-Null
}
}
This may not be a factor here, since you're looping through only ten elements, but using += to add elements to an array is very slow.
Another approach would be to output each element within the loop, and assign the results of the loop to $body. Here's a simplified example, assuming that you want $_.body:
$body = $inbox.items | select -first 10 | foreach {
if($_.unread -eq $True) {
$_.body
}
}
This works because anything that is output during the loop will be assigned to $body. And it can be much faster than using +=. You can verify this for yourself. Compare the two methods of creating an array with 10,000 elements:
Measure-Command {
$arr = @()
1..10000 | % {
$arr += $_
}
}
On my system, this takes just over 14 seconds.
Measure-Command {
$arr = 1..10000 | % {
$_
}
}
On my system, this takes 0.97 seconds, which makes it over 14 times faster. Again, probably not a factor if you are just looping through 10 items, but something to keep in mind if you ever need to create larger arrays.
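If you genuinely need to build a collection incrementally rather than capturing pipeline output, a generic List avoids the copy-on-every-add cost of += while keeping the loop shape. A third variant of the same test, for comparison:
Measure-Command {
    $list = New-Object -TypeName 'System.Collections.Generic.List[object]'
    1..10000 | ForEach-Object {
        # Add appends in place, so there is no array copy on every iteration
        $list.Add($_)
    }
}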
Define $body = @() before your loop.
Then just use += to add the elements.
Here's another way:
$body = $inbox.Items.Restrict('[Unread]=true') | Select-Object -First 10 -ExpandProperty Body
I have a CSV like below:
location,id
loc1,1234
loc1,1235
loc1,1236
Running $a = Import-CSV C:\File.csv | Group-Object "location" I get the following output:
Count Name Group
----- ---- -----
3     loc1 {@{location=loc1; id=1234}, @{location=loc1; id=1235}, @{location=loc1; id=1236}}
I would like to add all IDs to a single group (using Add-QADGroupMember), but I can't figure out how to get a group of IDs for $loc1. It seems to be grouping them correctly, but I can't seem to parse the output into a single group, e.g. $loc1 = 1234,1235,1236 that I can loop through.
Any suggestions would be appreciated!
Group-Object doesn't handle hashtables well, since the keys aren't real properties.
Assuming:
$csv = Import-CSV C:\File.csv
You should be able to do, for example:
$ids = $csv | %{ $_.id }
to get an array of the ID values. You'd probably want to pipe through Get-Unique for location.
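For the locations, that might look like the following (Get-Unique only removes adjacent duplicates, so sort first):
$locations = $csv | %{ $_.location } | Sort-Object | Get-Unique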
If you wanted to get the location for a single ID quickly:
$location = $csv | ?{ $_.id -eq 42 } | %{ $_.location }
If you wanted to get an array of all IDs for a single location quickly (I think this is what you want):
$loc1 = $csv | ?{ $_.location -eq 'loc1' } | %{ $_.id }
For reference, if you wanted to get a hashtable mapping each location to an array of IDs:
$groups = $csv | %{ $_.location } | &{
begin
{
$hash = @{}
}
process
{
$location = $_
$hash[$location] = $csv | ?{ $_.location -eq $location }
}
end
{
$hash
}
}
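With that hashtable in hand, pulling the IDs for one location might look like:
$groups['loc1'] | %{ $_.id }    # 1234, 1235, 1236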
A bit tricky, but this will do it:
Import-Csv C:\File.csv | Group-Object "location" | %{Set-Variable ($_.Name) ($_.Group | Select-Object -ExpandProperty id)}
After running that, $loc1, $loc2, etc. will be arrays of all the ids for each location.
And yet another option:
(Import-Csv c:\foo.csv | Group Location -AsHashTable).Loc1 | Foreach {$_.id}
And if you're on V3, you can do this:
(Import-Csv c:\foo.csv | Group Location -AsHashTable).Loc1.Id
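And if you need to walk every location rather than a single named one, the -AsHashTable form loops naturally. A sketch (the actual Add-QADGroupMember call is left as a placeholder, since the group name and member identity format depend on your directory):
$groups = Import-Csv c:\foo.csv | Group-Object Location -AsHashTable
foreach ($location in $groups.Keys) {
    $ids = $groups[$location] | ForEach-Object { $_.id }
    # Call Add-QADGroupMember here with $ids for this $location
    Write-Host "$location -> $($ids -join ',')"
}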