I was asked to fix up the output of a PowerShell script a colleague wrote today, and noticed some strange behavior when trying to pipe output from a foreach loop. When I run the loop without piping the iterable object $gpos into the foreach, like so:
# This is the foreach loop in question
foreach ( $gpo in $gpos ) {
[xml]$XML = Get-GPOReport -Name $gpo.DisplayName -ReportType Xml
$admins = $XML.DocumentElement.Computer.ExtensionData.Extension.RestrictedGroups.Member
# Not this one
$admins | foreach {
[PSCustomObject]@{
"GroupPolicy" = $gpo.DisplayName;
"Permisisons" = $_.name.'#text';
}
}
} | Export-CSV -Path \path\to\file.csv -NoTypeInformation
I get an error "An empty pipe element is not allowed".
However, if I pipe the $gpos object into the foreach loop like so:
$gpos | foreach {
$gpo = $_
# ...the rest of the code above
} | Export-CSV -Path \path\to\file.csv -NoTypeInformation
I am able to use the last pipe without issue. Why won't the pipe work when the statement starts with a foreach loop, as opposed to piping in the iterable object? I rarely use the first format myself, so I've not run into this issue with code I write. I can't think of a functional reason why both formats shouldn't work, because if the piped input is null, an appropriate exception is thrown in that case.
Why won't the pipe work when the statement starts with a foreach loop as opposed to piping in the iterable object?
Because one syntax is the foreach statement, and the other is an alias for the ForEach-Object command. It's like the difference between Get-ChildItem and if {} else {}.
The PowerShell authors stupidly decided that overloading the term was a good idea. It's confused users of the language ever since.
Compare:
Get-Help about_Foreach -ShowWindow
Get-Help ForEach-Object -ShowWindow
The former even describes how PowerShell decides which is which:
When Foreach appears in a command pipeline, Windows PowerShell uses the foreach alias, which calls the ForEach-Object command. When you use the foreach alias in a command pipeline, you do not include the ($<item> in $<collection>) syntax as you do with the Foreach statement. This is because the prior command in the pipeline provides this information.
Bottom line is that the foreach statement will not send output down the pipeline. You can do this just fine:
$x = foreach ($i in 1..10) { $i }
But this will fail:
foreach ($i in 1..10) { $i } | Where-Object { $_ -eq 2 }
As Mathias R. Jessen notes in the comments, you can wrap the foreach statement in a subexpression to cause it to work with the pipeline:
$(foreach ($i in 1..10) { $i }) | Where-Object { $_ -eq 2 }
The ForEach-Object command always uses (and requires) the pipeline.
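For contrast, here is the same toy pipeline written with ForEach-Object, which works because the command participates in the pipeline (a trivial illustration, not from the original answer):
# ForEach-Object (aliased as foreach when used inside a pipeline) sends its output downstream
1..10 | ForEach-Object { $_ } | Where-Object { $_ -eq 2 }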
One is the language keyword foreach and the other is actually an alias to the cmdlet ForEach-Object.
A language keyword can't be part of a pipeline, which is why you get that exception. It's also why they mean different things in different contexts: the engine won't parse foreach as a keyword if it's already part of a pipeline.
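You can see the overloading for yourself; the command side of the name is just an alias (a quick check, not part of the original answer):
Get-Alias foreach             # foreach -> ForEach-Object
Get-Command ForEach-Object    # the cmdlet the alias resolves to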
Related
If I run this in PowerShell, I expect to see the output 0 (zero):
Set-StrictMode -Version Latest
$x = "[]" | ConvertFrom-Json | Where { $_.name -eq "Baz" }
Write-Host $x.Count
Instead, I get this error:
The property 'name' cannot be found on this object. Verify that the property exists and can be set.
At line:1 char:44
+ $x = "[]" | ConvertFrom-Json | Where { $_.name -eq "Baz" }
+ ~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (:) [], RuntimeException
+ FullyQualifiedErrorId : PropertyAssignmentException
If I put parentheses around "[]" | ConvertFrom-Json it becomes this:
$y = ("[]" | ConvertFrom-Json) | Where { $_.name -eq "Baz" }
Write-Host $y.Count
And then it "works".
What is wrong before introducing the parentheses?
To explain the quotes around "works": with Set-StrictMode -Version Latest in effect, PowerShell complains that I call .Count on a $null object. That is solved by wrapping in @():
$z = @(("[]" | ConvertFrom-Json) | Where { $_.name -eq "Baz" })
Write-Host $z.Count
I find this quite dissatisfying, but it's an aside to the actual question.
Why is PowerShell applying the predicate of a Where to an empty list?
Because ConvertFrom-Json tells the pipeline not to enumerate its output, so Where-Object receives the empty array as a single input object.
Therefore, PowerShell attempts to access the name property on the empty array itself, much as if we were to do:
$emptyArray = New-Object object[] 0
$emptyArray.name
When you enclose ConvertFrom-Json in parentheses, PowerShell interprets it as a separate pipeline that executes and ends before any output is sent to Where-Object, so Where-Object can no longer know that ConvertFrom-Json wanted its output treated as a single array.
We can recreate this behavior in PowerShell by explicitly calling Write-Output with the -NoEnumerate switch parameter set:
# create a function that outputs an empty array with -NoEnumerate
function Convert-Stuff
{
Write-Output @() -NoEnumerate
}
# Invoke with `Where-Object` as the downstream cmdlet in its pipeline
Convert-Stuff | Where-Object {
# this fails
$_.nonexistingproperty = 'fail'
}
# Invoke in separate pipeline, pass result to `Where-Object` subsequently
$stuff = Convert-Stuff
$stuff | Where-Object {
# nothing happens
$_.nonexistingproperty = 'meh'
}
Write-Output -NoEnumerate internally calls Cmdlet.WriteObject(arg, false), which in turn causes the runtime to not enumerate the arg value during parameter binding against the downstream cmdlet (in your case, Where-Object).
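A quick way to see what that means in practice is to count what actually travels down the pipeline (a small sketch, not from the original answer):
(Write-Output @() | Measure-Object).Count                # 0 - the empty array is enumerated; nothing is sent
(Write-Output @() -NoEnumerate | Measure-Object).Count   # 1 - the array itself is sent as a single object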
Why would this be desirable?
In the specific context of parsing JSON, this behavior might indeed be desirable:
$data = '[]', '[]', '[]', '[]' | ConvertFrom-Json
Should I not expect exactly 4 objects from ConvertFrom-Json now that I passed 4 valid JSON documents to it? :-)
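To make the version difference concrete (the counts below are what each behavior implies, not output from the original post):
$data = '[]', '[]', '[]', '[]' | ConvertFrom-Json
@($data).Count   # 4 in Windows PowerShell / PowerShell 6.x: one (empty) array object per document
                 # 0 in PowerShell 7+: each empty array is enumerated, so nothing is emitted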
With an empty array as direct pipeline input, nothing is sent through the pipeline, because the array is enumerated, and since there's nothing to enumerate - because an empty array has no elements - the Where (Where-Object) script block is never executed:
Set-StrictMode -Version Latest
# The empty array is enumerated, and since there's nothing to enumerate,
# the Where[-Object] script block is never invoked.
@() | Where { $_.name -eq "Baz" }
By contrast, in PowerShell versions up to v6.x "[]" | ConvertFrom-Json produces an empty array as a single output object rather than having its (nonexistent) elements enumerated, because ConvertFrom-Json in these versions doesn't enumerate the elements of arrays it outputs; it is the equivalent of:
Set-StrictMode -Version Latest
# Empty array is sent as a single object through the pipeline.
# The Where script block is invoked once and sees $_ as that empty array.
# Since strict mode is in effect and arrays have no .name property
# an error occurs.
Write-Output -NoEnumerate @() | Where { $_.name -eq "Baz" }
ConvertFrom-Json's behavior is surprising in the context of PowerShell - cmdlets generally enumerate multiple outputs - but is defensible in the context of JSON parsing; after all, information would be lost if ConvertFrom-Json enumerated the empty array, given that you wouldn't then be able to distinguish that from empty JSON input ("" | ConvertFrom-Json).
The consensus was that both use cases are legitimate and that users should have a choice between the two behaviors - enumeration or not - by way of a switch (see this GitHub issue for the associated discussion).
Therefore, starting with PowerShell [Core] 7.0:
Enumeration is now performed by default.
An opt-in to the old behavior is available via the new -NoEnumerate switch.
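For example, on PowerShell 7.0 or later (the -NoEnumerate switch does not exist in earlier versions), Measure-Object makes the two behaviors visible:
('[]' | ConvertFrom-Json | Measure-Object).Count                # 0 - enumerated by default; nothing to enumerate
('[]' | ConvertFrom-Json -NoEnumerate | Measure-Object).Count   # 1 - the empty array is passed through as one object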
In PowerShell 6.x and below, if enumeration is desired, the (admittedly obscure) workaround is to force it by enclosing the ConvertFrom-Json call in (...), the grouping operator, which converts the call into an expression - and expressions always enumerate a command's output when used in the pipeline:
# (...) around the ConvertFrom-Json call forces enumeration of its output.
# The empty array has nothing to enumerate, so the Where script block is never invoked.
("[]" | ConvertFrom-Json) | Where { $_.name -eq "Baz" }
As for what you tried, namely your attempt to access the .Count property and your use of @(...):
$y = ("[]" | ConvertFrom-Json) | Where { $_.name -eq "Baz" }
$y.Count # Fails with Set-StrictMode -Version 2 or higher
With the ConvertFrom-Json call wrapped in (...), your overall command returns "nothing": loosely speaking, $null, but, more accurately, an "array-valued null", which is the [System.Management.Automation.Internal.AutomationNull]::Value singleton that indicates the absence of output from a command. (In most contexts, the latter is treated the same as $null, though notably not when used as pipeline input.)
[System.Management.Automation.Internal.AutomationNull]::Value doesn't have a .Count property, which is why with Set-StrictMode -Version 2 or higher in effect, you'll get a The property 'count' cannot be found on this object. error.
By wrapping the entire pipeline in @(...), the array subexpression operator, you ensure treatment of the output as an array, which, with array-valued null output, creates an empty array - which does have a .Count property.
Note that you should be able to call .Count on $null and [System.Management.Automation.Internal.AutomationNull]::Value, given that PowerShell adds a .Count property to every object, if not already present - including to scalars, in a commendable effort to unify the handling of collections and scalars.
That is, with Set-StrictMode set to -Off (the default) or to -Version 1 the following does work and - sensibly - returns 0:
# With Set-StrictMode set to -Off (the default) or -Version 1:
# $null sensibly has a count of 0.
PS> $null.Count
0
# So does the "array-valued null", [System.Management.Automation.Internal.AutomationNull]::Value
# `. {}` is a simple way to produce it.
PS> (. {}).Count # `. {}` outputs the "array-valued null"
0
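The same unified handling applies to ordinary scalars (again with strict mode off or at -Version 1):
PS> (42).Count      # a scalar acts like a 1-element collection
1
PS> ('hello').Count
1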
That the above currently doesn't work with Set-StrictMode -Version 2 or higher (as of PowerShell [Core] 7.0) should be considered a bug, as reported in this GitHub issue (by Jeffrey Snover, no less).
In a Linux shell it's quite convenient to append content to a file using pipes and stream redirection.
But in PowerShell, the pipeline operates at the object level, not the file level. How can I, for example, insert a line "helloworld" into a list of files, so that it becomes their first line?
It is not that trivial, but you can do this:
Get-ChildItem | foreach {
    $a = Get-Content $_
    Set-Content $_ -Value "hi", $a
}
Though to be honest, I do think it matches your definition of a pipeline.
I have an array that contains multiple file paths, with multiple file extensions:
$Array = @("C:\aaa\aaa\abc.txt", "C:\aaa\aaa\bbb.txt", "C:\aaa\aaa\abc.c", "C:\aaa\aaa\abc.h", ...etc)
Now, I wanted to remove all file paths that have a .txt extension and did the following:
$Array | Foreach {$_ | Where {$_ -notlike "*.txt"}}
It does remove the .txt file paths.
Since I'm still new to PowerShell, I would like to know if this is the right way to do it or if there is a better solution (e.g. one that doesn't use the Foreach statement).
You do not actually need a Foreach-Object here. Just pipe the array to the Where-Object directly:
PS > $Array = @("C:\aaa\aaa\abc.txt", "C:\aaa\aaa\bbb.txt", "C:\aaa\aaa\abc.c", "C:\aaa\aaa\abc.h")
PS > $Array | Where {$_ -notlike "*.txt"}
C:\aaa\aaa\abc.c
C:\aaa\aaa\abc.h
PS >
Of course, in this case, you could just use -notlike on the array itself:
PS > $Array -notlike "*.txt"
C:\aaa\aaa\abc.c
C:\aaa\aaa\abc.h
PS >
This is because all of PowerShell's comparison operators work with both scalars as well as collections. From the documentation:
When the input to an operator is a scalar value, comparison operators
return a Boolean value. When the input is a collection of values, the
comparison operators return any matching values. If there are no matches
in a collection, comparison operators do not return anything.
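A quick illustration of that scalar-versus-collection distinction (not part of the original answer):
PS > "abc.txt" -notlike "*.txt"              # scalar input -> Boolean
False
PS > @("abc.txt", "abc.c") -notlike "*.txt"  # collection input -> the matching values
abc.c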
I'm trying to write a PowerShell script to build a list of files, from several directories. After all directories have been added to the main list, I'd like to do the same processing on all files.
This is what I have:
$items = New-Object Collections.Generic.List[IO.FileInfo]
$loc1 = @(Get-ChildItem -Path "\\server\C$\Program Files (x86)\Data1\" -Recurse)
$loc2 = @(Get-ChildItem -Path "\\server\C$\Web\DataStorage\" -Recurse)
$items.Add($loc1) # This line fails (the next also fails)
$items.Add($loc2)
# Processing code is here
which fails with this error:
Cannot convert argument "0", with value: "System.Object[]", for "Add" to type
"System.IO.FileInfo": "Cannot convert the "System.Object[]" value of type
"System.Object[]" to type "System.IO.FileInfo"."
I am mostly interested in what is the correct approach for this type of situation. I realize that my code is a very C way of doing it -- if there is a more PowerShell way to accomplish the same task, I'm all for it. The key is that the number of $locN variables may change over time, so adding and removing one or two should be easy in the resulting code.
Not sure you need a generic list here. You can just use a PowerShell array e.g.:
$items = @(Get-ChildItem '\\server\C$\Program Files (x86)\Data1\' -r)
$items += @(Get-ChildItem '\\server\C$\Web\DataStorage\' -r)
PowerShell arrays can be concatenated using +=.
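For instance (a trivial illustration, not from the original answer):
$a = @(1, 2)
$a += @(3, 4)   # builds a new 4-element array and reassigns $a
$a.Count        # 4
Note that += copies the array each time, which is fine for a handful of locations; it is also why a generic List is sometimes preferred for very large collections.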
From get-help get-childitem:
-Path
Specifies a path to one or more locations. Wildcards are permitted. The default location is the current directory (.).
$items = get-childitem '\\server\C$\Program Files (x86)\Data1\','\\server\C$\Web\DataStorage\' -Recurse
Here is some perhaps even more PowerShell-ish way that does not need part concatenation or explicit adding items to the result at all:
# Collect the results by two or more calls of Get-ChildItem
# and perhaps do some other job (but avoid unwanted output!)
$result = .{
# Output items
Get-ChildItem C:\TEMP\_100715_103408 -Recurse
# Some other job
$x = 1 + 1
# Output some more items
Get-ChildItem C:\TEMP\_100715_110341 -Recurse
#...
}
# Process the result items
$result
But the code inside the script block should be written slightly more carefully to avoid unwanted output mixed together with file system items.
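For example, any intermediate statement that happens to produce output would end up in $result; one way to guard against that is to discard such output explicitly (a small sketch; the New-Item call is just a hypothetical side task):
$result = .{
    Get-ChildItem C:\TEMP\_100715_103408 -Recurse
    # Discard output from intermediate commands so it does not pollute $result
    $null = New-Item C:\TEMP\scratch -ItemType Directory -Force   # hypothetical side job
    Get-ChildItem C:\TEMP\_100715_110341 -Recurse
}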
EDIT: Alternatively, and perhaps more effectively, instead of .{ ... } we can
use @( ... ) or $( ... ) where ... stands for the code containing several
calls of Get-ChildItem.
Keith's answer is the PowerShell way: just use @(...)+@(...).
If you actually do want a typesafe List[IO.FileInfo], then you need to use AddRange, and cast the object array to a FileInfo array -- you also need to make sure you don't get any DirectoryInfo objects, or else you need to use IO.FileSystemInfo as your list type:
So, avoid directories:
$items = New-Object Collections.Generic.List[IO.FileInfo]
$items.AddRange( ([IO.FileInfo[]](ls '\\server\C$\Program Files (x86)\Data1\' -r | Where { -not $_.PSIsContainer } )) )
$items.AddRange( ([IO.FileInfo[]](ls '\\server\C$\Web\DataStorage\' -r | Where { -not $_.PSIsContainer } )) )
Or use FileSystemInfo (the common base class of FileInfo and DirectoryInfo):
$items = New-Object Collections.Generic.List[IO.FileSystemInfo]
$items.AddRange( ([IO.FileSystemInfo[]](ls '\\server\C$\Program Files (x86)\Data1\' -r)) )
$items.AddRange( ([IO.FileSystemInfo[]](ls '\\server\C$\Web\DataStorage\' -r)) )
-Filter is more performant than -Include, so if you don't have a lot of different extensions, simply concatenating two filtered lists might be faster.
$files = Get-ChildItem -Path "H:\stash\" -Filter *.rdlc -Recurse
$files += Get-ChildItem -Path "H:\stash\" -Filter *.rdl -Recurse
I compared the output with a timer like this:
$stopwatch = [System.Diagnostics.Stopwatch]::StartNew()
# Do Stuff Here
$stopwatch.Stop()
Write-Host "$([Math]::Round($stopwatch.Elapsed.TotalSeconds)) seconds ellapsed"
I am a PowerShell newbie. I took a sample script and substituted Get-Content for Get-Item in the first line.
The modified script looks like this:
$file = get-content "c:\temp\test.txt"
if ($file.IsReadOnly -eq $true)
{
$file.IsReadOnly = $false
}
So, in essence, I am trying to act on the items contained in test.txt, stored as UNC paths:
\\testserver\testshare\doc1.doc
\\testserver2\testshare2\doc2.doc
When running the script, no errors are reported and no action is performed, even on the first entry.
Short answer:
sp (gc test.txt) IsReadOnly $false
Long answer below
Well, some things are wrong with this.
$file is actually a string[], containing the lines of your file. So the IsReadOnly property applies to the string[] and not to the actual files represented by those strings, which happen to be file names.
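You can confirm this yourself (a minimal check, assuming c:\temp\test.txt still contains the two UNC paths):
$file = Get-Content "c:\temp\test.txt"
$file.GetType().FullName     # System.Object[] - an array of strings, not a FileInfo
$file[0].GetType().FullName  # System.String
$file.IsReadOnly             # False - the array's own property, nothing to do with the files
Since that IsReadOnly is never $true, the if block in your script simply never runs, which is why nothing appears to happen.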
So, if I'm understanding you correctly, you are trying to read a file containing other file names, one per line, and clear the read-only attribute on those files.
Starting with Get-Content isn't wrong here. We definitely are going to need it:
$filenames = Get-Content test.txt
Now we have a list of file names. To access the files' attributes we either need to convert those file names into actual FileInfo objects and operate on those, or we pass the file names to the -Path parameter of Set-ItemProperty.
I will take the first approach and then get to the other one. So we have a bunch of file names and want FileInfo objects from them. This can be done with a foreach loop (since we need to do this for every file in the list):
$files = (foreach ($name in $filenames) { Get-Item $name })
You can then loop over those FileInfo objects and set the IsReadOnly property on each of them:
foreach ($file in $files) {
$file.IsReadOnly = $false
}
This was the long and cumbersome variant, but one that probably suits people with no prior PowerShell experience best. You can reduce the need for having multiple collections of things lying around by using the pipeline. The pipeline transports objects from one cmdlet to another, and those objects still have types.
So by writing
Get-Content test.txt | Get-Item | ForEach-Object { $_.IsReadOnly = $false }
we're achieving exactly the same result. We read the contents of the file, getting a bunch of strings. Those are passed to Get-Item which happens to know what to do with pipeline input: It treats those objects as file paths; exactly what we need here. Get-Item then sends FileInfo objects further down the pipeline, at which point we are looping over them and setting the read-only property to false.
Now, that was shorter and, with a little practice, maybe even easier. But it's still far from ideal. As I said before, we can use Set-ItemProperty to set the read-only property on the files. And we can take advantage of the fact that Set-ItemProperty can take an array of strings as input for its -Path parameter.
$files = Get-Content test.txt
Set-ItemProperty -Path $files -Name IsReadOnly -Value $false
We are using a temporary variable here, since Set-ItemProperty won't accept incoming strings as values for -Path directly. But we can inline this temporary variable:
Set-ItemProperty -Path (Get-Content test.txt) -Name IsReadOnly -Value $false
The parentheses around the Get-Content call are needed to tell PowerShell that this is a single argument and should be evaluated first.
We can then take advantage of the fact that each of those parameters is used in the position where Set-ItemProperty expects it to be, so we can leave out the parameter names and stick just to the values:
Set-ItemProperty (Get-Content test.txt) IsReadOnly $false
And then we can shorten the cmdlet names to their default aliases:
sp (gc test.txt) IsReadOnly $false
We could actually write $false as 0 to save even more space, since 0 is converted to $false when used as a boolean value. But I think it suffices with shortening here.
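For completeness, that extra-short form would be (it works, though it is arguably harder to read):
sp (gc test.txt) IsReadOnly 0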
Johannes has the scoop on the theory behind the problem you are running into. I just wanted to point out that if you happen to be using the PowerShell Community Extensions you can perform this by using the Set-Writable and Set-ReadOnly commands that are pipeline aware e.g.:
Get-Content "c:\temp\test.txt" | Set-Writable
or the short, aliased form:
gc "c:\temp\test.txt" | swr
The alias for Set-ReadOnly is sro. I use these commands weekly if not daily.