Preserve $null value when creating new PowerShell session with arguments - arrays

I have PowerShell scripts that need to start other PowerShell scripts in a new session. The first script passes a set of arguments to the second script as an array. Everything works fine when all of the arguments have values, but when I try passing a $null, the parameter is stripped and the list of arguments gets messed up.
To better understand the issue, you can do the following (this is just an example):
Define C:\Test.ps1 as:
param($a,$b,$c)
" a $a " | Out-File C:\temp.txt
" b $b " | Out-File C:\temp.txt -append
" c $c " | Out-File C:\temp.txt -append
Run in any PowerShell console:
PowerShell.exe -WindowStyle Hidden -NonInteractive -file C:\Test.ps1 @(1,2,3) # works as expected; temp.txt contains:
a 1
b 2
c 3
PowerShell.exe -WindowStyle Hidden -NonInteractive -file C:\Test.ps1 @(1,$null,5) # strips $null and writes the following to temp.txt:
a 1
b 5
c
I need to preserve the $null when creating the new session, the desired temp.txt would contain
a 1
b
c 5
The problem seems to be that the $null gets stripped from the array directly, so the array is already @(1,5) by the time it is interpreted by the new session.
I've tried declaring an empty array and adding elements to it one by one; I also tried replacing the array with a System.Collections.Generic.List[System.Object] and using its Add method, but the $null still gets stripped.
My last ideas are to test for $null in the calling script and assign a default value to the argument, then test for the default value in the second script and reassign $null; or to create a hash with all the arguments, pass it as a single argument, and split it apart in the called script. I don't like these ideas, as they feel like overkill for the task at hand.
Any help in understanding the basic problem - why $null gets stripped from the array and how to preserve it, or how to alter the creation of the new session to pick up the $null - is greatly appreciated.

When I've needed to serialize data between scripts, or preserve objects across script executions, I tend to use Export-Clixml and Import-Clixml, sometimes combined with splatting. The Clixml cmdlets preserve objects in their entirety, exactly as they existed.
For example, in your sending script:
$a = 1;
$b = $null;
$c = 3;
@($a, $b, $c) | Export-Clixml 'C:\params.xml'
And then to retrieve the data:
$params = Import-Clixml 'C:\params.xml';
$x = $params[0];
$y = $params[1];
$z = $params[2];
Write-Output "$x,$y,$z";
You should get 1,,3.
Often you can even use hash tables to help organize:
@{'a'=$a; 'b'=$b; 'c'=$c} | Export-Clixml 'C:\params.xml'
And then:
$params = Import-Clixml 'C:\params.xml';
$x = $params.a;
$y = $params.b;
$z = $params.c;
But hashtables are a bit funky sometimes. Be sure to test.
As for what's going on: PowerShell skips null values when binding positional parameters from an array the way you're doing it. The null value is in the array (@(1, $null, 3)[1] -eq $null is True); it's PowerShell's parameter binding that skips it.
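A minimal sketch you can paste into any console to convince yourself the null survives in the array itself:

```powershell
# The null really is the second element; it is positional parameter
# binding in the new session that skips it, not the array.
$argumentList = @(1, $null, 5)
$argumentList.Count          # 3 -- the null still counts as an element
$null -eq $argumentList[1]   # True -- the element itself is $null
```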

If you specify the param names, then PowerShell knows which parameters you're giving it.
PowerShell.exe -WindowStyle Hidden -NonInteractive -file C:\Test.ps1 -a 1 -c 5
gives you
a 1
b
c 5
More on PowerShell parameters here.
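If you want to keep building the call dynamically, one possible sketch (the hashtable and variable names here are made up for illustration) is to emit -name/value pairs only for non-null values and splat the result; named binding then keeps the remaining parameters lined up:

```powershell
# Hypothetical illustration: build named arguments, skipping nulls,
# then splat the resulting array onto the external call.
$values = [ordered]@{ a = 1; b = $null; c = 5 }
$argList = foreach ($entry in $values.GetEnumerator()) {
    if ($null -ne $entry.Value) {
        "-$($entry.Key)"   # parameter name, e.g. -a
        $entry.Value       # its value
    }
}
PowerShell.exe -WindowStyle Hidden -NonInteractive -File C:\Test.ps1 @argList
```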

Why does an array behave differently when directly assigned or retrieved from get-content

Here's something I don't understand.
When I define a variable:
$v = [byte]2, [byte]3
and check its type:
$v.getType().name
I get
Object[]
I then format $v:
'{0} {1}' -f $v
which prints
2 3
Now, if I get a file's first two bytes:
$f = (get-content 'xyz.txt' -encoding byte -readCount 2 -totalCount 2)
and check its type:
$f.getType().name
I get the same type as before: Object[].
However, unlike with $v, I cannot format $f:
'{0} {1}' -f $f
I get the error message Error formatting a string: Index (zero based) must be greater than or equal to zero and less than the size of the argument list., although the length of the array is 2:
$f.length
returns
2
I don't understand why this is and would appreciate an explanation.
This behavior should be considered a bug in the -f operator; it is still present as of v7.1 and has been reported in GitHub issue #14355. It does not affect other operators with array operands, such as -split or -in.
The workaround is to cast $f to [array] or, if creating a copy of the array is acceptable, to use @($f):
'abc' > xyz.txt
$f = get-content 'xyz.txt' -encoding byte -readCount 2 -totalCount 2
'{0} {1}' -f ([array] $f)
Note: Using @(...), the array-subexpression operator - i.e. @($f) - as Mathias R. Jessen notes, is the even simpler option, but do note that @() involves cloning (creating a shallow copy of) the array, whereas the [array] cast in this case does not.
The alternative is to apply the [array] cast as a type constraint (by placing it to the left of the $f = ... assignment):
'abc' > xyz.txt
[array] $f = (get-content 'xyz.txt' -encoding byte -readCount 2 -totalCount 2)
'{0} {1}' -f $f
Note:
In PowerShell [Core] v6+, you must use -AsByteStream in lieu of -Encoding Byte.
The problem can also be avoided if -ReadCount 2 is omitted, but note that doing so decreases the performance of the command, because the bytes are then emitted one by one. That is, with -ReadCount 2 -TotalCount 2, a single object is emitted that is a 2-byte array as a whole, whereas just -TotalCount 2 emits the individual bytes one by one to the pipeline, in which case it is the PowerShell engine itself that collects those bytes in an [object[]] array for the assignment.
Note that applying @() directly to the command - @(get-content ...) - would not work in this case, because @(), due to the parameter combination -ReadCount 2 -TotalCount 2, receives a single output object that happens to be an array as a whole and therefore wraps that single object in another array. This results in a single-element array whose element is the original 2-element array of bytes; for more information about how @(...) works, see this answer.
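The double-wrapping described above is easy to see for yourself (same file as in the earlier example; in PowerShell v6+ substitute -AsByteStream for -Encoding Byte):

```powershell
'abc' > xyz.txt
$f = @(Get-Content xyz.txt -Encoding Byte -ReadCount 2 -TotalCount 2)
$f.Count      # 1 -- @() wrapped the single output object...
$f[0].Count   # 2 -- ...which is itself the 2-byte array
```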
Background information:
The problem is an invisible [psobject] wrapper around each array returned by Get-Content -ReadCount (just one in this case), which unexpectedly causes the $f array passed to -f not to be recognized as such.
Note that PowerShell's other array-based operators, such as -in and -replace, are not affected.
The wrapper can be bypassed in two ways: accessing $f.psobject.BaseObject, or casting to [array], as shown at the top.
Note:
Output objects produced by cmdlets - as opposed to output produced by PowerShell code - generally have invisible [psobject] wrappers; mostly they are benign, because PowerShell usually cares only about the .NET object being wrapped, not about the wrapper, but on occasion problems arise, such as in this case - see GitHub issue #5579 for a discussion of the problem and other contexts in which it manifests.
To test whether a given object has a [psobject] wrapper, use -is [psobject]; e.g.:
$var = 1
$var -is [psobject] # -> $false
$var = Write-Output 1
$var -is [psobject] # -> $true, due to use of a cmdlet.
# You can also test command output directly.
(Write-Output 1) -is [psobject] # -> $true

PowerShell returning file size on second item of array; first and third are fine

I have what may be an odd issue. I've got a PowerShell script that's supposed to watch a directory for files, then move and rename them. It checks the output directory to see whether a file with that name already exists in the form "Trip ID X Receipts Batch Y.pdf" (the original output from the web form will always have Y=1). If one does, Y is replaced with the highest existing Y among other files with Trip ID X; if there isn't one already, Y stays 1. It does this successfully except on the second match, where instead of 2, Y will equal a number that varies depending on the file - it seems to be the file size in bytes plus 1. Example results of the script (from copy/pasting the same source file into the watched directory):
Trip ID 7 Receipts Batch 1.pdf
Trip ID 7 Receipts Batch 126973.pdf
Trip ID 7 Receipts Batch 3.pdf
Trip ID 7 Receipts Batch 4.pdf
The relevant portion of my code is here:
$REFile = "Trip ID " + $TripID + " Receipts Batch "
$TripIDCheck = "Trip ID " + $TripID
$TripFileCount = Get-ChildItem $destination |Where-Object {$_.Name -match $TripIDCheck}
$BatchCount = $TripFileCount.GetUpperBound(0) + 1
$destinationRegEx = $destination + $REFile + $BatchCount + ".pdf"
Move-Item -Path $path -Destination $destinationRegEx -Force
For counting the number of items in the array, I've used what you see above as well as $TripFileCount.Length and $TripFileCount.Count. They all behave the same, seemingly taking the file, examining its size, setting the Y value for the second item to that, but then treating the third, fourth, etc. items as expected. For the life of me, I can't figure out what's going on. Have any of you ever seen something like this?
Edit: Trying to force $TripFileCount as an array with
$TripFileCount = @(Get-ChildItem $destination |Where-Object {$_.Name -match $TripIDCheck})
doesn't work either. It still does this.
As TessellatingHeckler states, your symptom indeed suggests that you're not accounting for the fact that cmdlets such as Get-ChildItem do not always return an array, but may return a single, scalar item (or no items at all).
Therefore, you cannot blindly invoke methods / properties such as .GetUpperBound() or .Length on such a cmdlet's output. There are two workarounds:
Use the array subexpression operator @(...) to ensure that the enclosed command's output is treated as an array, even if only a single object is returned or none at all.
In PSv3+, use the .Count property even on a single object or $null to implicitly treat them as if they were an array.
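The PSv3+ behavior is easy to verify interactively (the paths here are placeholders for illustration):

```powershell
# .Count works even when the expression yields a scalar, or nothing at all:
(Get-Item C:\Windows\notepad.exe).Count                           # 1 -- a scalar, not an array
(Get-ChildItem C:\NoSuchDir -ErrorAction SilentlyContinue).Count  # 0 -- no output
($null).Count                                                     # 0
```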
The following streamlined solution uses the .Count property on the output from the Get-ChildItem call, which works as intended in all 3 scenarios: Get-ChildItem matches nothing, 1 file, or multiple files.
$prefix = "Trip ID $TripID Receipts Batch "
$suffix = '.pdf'
$pattern = "$prefix*$suffix"
$count = (Get-ChildItem $destination -Filter $pattern).Count
Move-Item -Path $path -Destination (Join-Path $destination "$prefix$($count+1)$suffix")
Note:
If you're using PowerShell v2, then prepend @ to (...). @(...), the array subexpression operator, ensures that the output from the enclosed command is always treated as an array, even if it comprises just 1 object or none at all.
In PowerShell v3 and above, this behavior is conveniently implicit, although there are caveats - see this answer of mine.

Pass String Array Variable from Command Line (.BAT file) to PowerShell Script

I am trying to pass an array %variable% from the command line to PowerShell and then perform operations with this array variable within PowerShell, but I am having trouble passing the variable to PowerShell correctly. The current .BAT script to call the PowerShell script is below:
SET STRING_ARRAY="test1" "test2" "test3" "test4"
Powershell.exe -executionpolicy remotesigned -File "FILEPATH\Build_DB.ps1" %STRING_ARRAY%
Then the PowerShell script below tests for a successful handover of the array variable:
$string_array=@($args[0])
Write-Host $string_array.length
for ($i=0; $i -lt $string_array.length; $i++) {
Write-Host $string_array[$i]
}
However, all that is returned from PowerShell is a length of 1. What am I doing wrong here?
Alright, never mind - I ended up coming up with a solution that works in my case, so I am posting it here in case it is of benefit to anyone else. If someone has a better solution, please let me know.
Change the PowerShell script as follows:
#The problem is that each item in the %STRING_ARRAY% variable is passed as
#an individual argument to PowerShell. To get around this we can just
#store all optional arguments passed to PowerShell as follows.
$string_array=@($args)
#Now (if desired) we can also remove any optional arguments we don't want
#in our new array using the following command.
$string_array = $string_array[2..($string_array.Length)]
Write-Host $string_array.length
for ($i=0; $i -lt $string_array.length; $i++) {
Write-Host $string_array[$i]
}

Combine the results of two distinct Get-ChildItem calls into single variable to do the same processing on them

I'm trying to write a PowerShell script to build a list of files, from several directories. After all directories have been added to the main list, I'd like to do the same processing on all files.
This is what I have:
$items = New-Object Collections.Generic.List[IO.FileInfo]
$loc1 = @(Get-ChildItem -Path "\\server\C$\Program Files (x86)\Data1\" -Recurse)
$loc2 = @(Get-ChildItem -Path "\\server\C$\Web\DataStorage\" -Recurse)
$items.Add($loc1) # This line fails (the next also fails)
$items.Add($loc2)
# Processing code is here
which fails with this error:
Cannot convert argument "0", with value: "System.Object[]", for "Add" to type "System.IO.FileInfo": "Cannot convert the "System.Object[]" value of type "System.Object[]" to type "System.IO.FileInfo"."
I am mostly interested in what the correct approach is for this type of situation. I realize that my code is a very C way of doing it - if there is a more PowerShell way to accomplish the same task, I'm all for it. The key is that the number of $loc variables may change over time, so adding and removing one or two should be easy in the resulting code.
Not sure you need a generic list here. You can just use a PowerShell array e.g.:
$items = @(Get-ChildItem '\\server\C$\Program Files (x86)\Data1\' -r)
$items += @(Get-ChildItem '\\server\C$\Web\DataStorage\' -r)
PowerShell arrays can be concatenated using +=.
From get-help get-childitem:
-Path
Specifies a path to one or more locations. Wildcards are permitted. The default location is the current directory (.).
$items = get-childitem '\\server\C$\Program Files (x86)\Data1\','\\server\C$\Web\DataStorage\' -Recurse
Here is a perhaps even more PowerShell-ish way that needs no concatenation or explicit adding of items to the result at all:
# Collect the results by two or more calls of Get-ChildItem
# and perhaps do some other job (but avoid unwanted output!)
$result = .{
# Output items
Get-ChildItem C:\TEMP\_100715_103408 -Recurse
# Some other job
$x = 1 + 1
# Output some more items
Get-ChildItem C:\TEMP\_100715_110341 -Recurse
#...
}
# Process the result items
$result
But the code inside the script block should be written slightly more carefully, to avoid unwanted output getting mixed in with the file system items.
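For instance, two common ways to discard incidental output so it does not leak into the result (a sketch based on the example above; the paths are the same placeholders):

```powershell
$result = .{
    # Output items
    Get-ChildItem C:\TEMP\_100715_103408 -Recurse
    # Discard unwanted output from intermediate expressions:
    $null = 1 + 1        # assign to $null to suppress the value...
    [void](Get-Date)     # ...or cast the expression to [void]
    # Output some more items
    Get-ChildItem C:\TEMP\_100715_110341 -Recurse
}
```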
EDIT: Alternatively, and perhaps more effectively, instead of .{ ... } we can use @( ... ) or $( ... ), where ... stands for the code containing the several calls of Get-ChildItem.
Keith's answer is the PowerShell way: just use @(...)+@(...).
If you actually do want a typesafe List[IO.FileInfo], then you need to use AddRange, and cast the object array to a FileInfo array -- you also need to make sure you don't get any DirectoryInfo objects, or else you need to use IO.FileSystemInfo as your list type:
So, avoid directories:
$items = New-Object Collections.Generic.List[IO.FileInfo]
$items.AddRange( ([IO.FileSystemInfo[]](ls '\\server\C$\Program Files (x86)\Data1\' -r | Where { -not $_.PSIsContainer } )) )
$items.AddRange( ([IO.FileSystemInfo[]](ls '\\server\C$\Web\DataStorage\' -r | Where { -not $_.PSIsContainer } )) )
Or use FileSystemInfo (the common base class of FileInfo and DirectoryInfo):
$items = New-Object Collections.Generic.List[IO.FileSystemInfo]
$items.AddRange( ([IO.FileSystemInfo[]](ls '\\server\C$\Program Files (x86)\Data1\' -r)) )
$items.AddRange( ([IO.FileSystemInfo[]](ls '\\server\C$\Web\DataStorage\' -r)) )
-Filter is more performant than -Include, so if you don't have a lot of different extensions, simply concatenating two filtered lists might be faster.
$files = Get-ChildItem -Path "H:\stash\" -Filter *.rdlc -Recurse
$files += Get-ChildItem -Path "H:\stash\" -Filter *.rdl -Recurse
I compared the output with a timer like this:
$stopwatch = [System.Diagnostics.Stopwatch]::StartNew()
# Do Stuff Here
$stopwatch.Stop()
Write-Host "$([Math]::Round($stopwatch.Elapsed.TotalSeconds)) seconds elapsed"

How to change read attribute for a list of files?

I am a PowerShell newbie. I took a sample script and substituted get-content for get-item in the first line.
The modified script looks like below:
$file = get-content "c:\temp\test.txt"
if ($file.IsReadOnly -eq $true)
{
$file.IsReadOnly = $false
}
So, in essence, I am trying to act on the items contained in test.txt, stored as UNC paths:
\\testserver\testshare\doc1.doc
\\testserver2\testshare2\doc2.doc
When running the script, no errors are reported and no action is performed, even on the first entry.
Short answer:
sp (gc test.txt) IsReadOnly $false
Long answer below
Well, some things are wrong with this.
$file is actually a string[] containing the lines of your file. So the IsReadOnly property applies to the string[], and not to the actual files represented by those strings, which happen to be file names.
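You can see this for yourself (assuming the two-line test.txt described above exists):

```powershell
$file = Get-Content "c:\temp\test.txt"
$file.GetType().Name      # Object[] -- an array of lines, not a FileInfo
$file[0].GetType().Name   # String -- so IsReadOnly says nothing about the files
```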
So, if I'm understanding you correctly, you are trying to read a file containing other file names, one on each line, and clear the read-only attribute on those files.
Starting with Get-Content isn't wrong here. We definitely are going to need it:
$filenames = Get-Content test.txt
Now we have a list of file names. To access the files' attributes, we either need to convert those file names into actual FileInfo objects and operate on those, or we pass the file names to the -Path argument of Set-ItemProperty.
I will take the first approach first and then get to the other one. So we have a bunch of file names and want FileInfo objects from them. This can be done with a foreach loop (since we need to do this for every file in the list):
$files = foreach ($name in $filenames) { Get-Item $name }
You can then loop over the file objects and set the IsReadOnly property on each of them:
foreach ($file in $files) {
$file.IsReadOnly = $false
}
This was the long and cumbersome variant, but probably the one that suits people with no prior experience of PowerShell best. You can reduce the need to have multiple collections of things lying around by using the pipeline. The pipeline transports objects from one cmdlet to another, and those objects retain their types.
So by writing
Get-Content test.txt | Get-Item | ForEach-Object { $_.IsReadOnly = $false }
we're achieving exactly the same result. We read the contents of the file, getting a bunch of strings. Those are passed to Get-Item which happens to know what to do with pipeline input: It treats those objects as file paths; exactly what we need here. Get-Item then sends FileInfo objects further down the pipeline, at which point we are looping over them and setting the read-only property to false.
Now, that was shorter and, with a little practice, maybe even easier. But it's still far from ideal. As I said before, we can use Set-ItemProperty to set the read-only property on the files, and we can take advantage of the fact that Set-ItemProperty can take an array of strings as input for its -Path parameter.
$files = Get-Content test.txt
Set-ItemProperty -Path $files -Name IsReadOnly -Value $false
We are using a temporary variable here, since Set-ItemProperty won't accept incoming strings as values for -Path directly. But we can inline this temporary variable:
Set-ItemProperty -Path (Get-Content test.txt) -Name IsReadOnly -Value $false
The parentheses around the Get-Content call are needed to tell PowerShell that this is a single argument and should be evaluated first.
We can then take advantage of the fact that each of those parameters is used in the position where Set-ItemProperty expects it to be, so we can leave out the parameter names and stick just to the values:
Set-ItemProperty (Get-Content test.txt) IsReadOnly $false
And then we can shorten the cmdlet names to their default aliases:
sp (gc test.txt) IsReadOnly $false
We could actually write $false as 0 to save even more space, since 0 is converted to $false when used as a boolean value. But I think the shortening suffices here.
Johannes has the scoop on the theory behind the problem you are running into. I just wanted to point out that if you happen to be using the PowerShell Community Extensions, you can do this with the Set-Writable and Set-ReadOnly commands, which are pipeline-aware, e.g.:
Get-Content "c:\temp\test.txt" | Set-Writable
or the short, aliased form:
gc "c:\temp\test.txt" | swr
The alias for Set-ReadOnly is sro. I use these commands weekly if not daily.
