We have an application that keeps a server database - a list of server names and other related information. Sometimes we need to export the information in the XML format to process it by a Powershell script. Server names in the XML file can be in simple ("ServerXX") or FQDN ("ServerXX.abc.com") formats. The script searches for a server name that is always in the simple format, and the search results should contain all simple and full server names that match the searched name.
The main search operator (slightly simplified) looks like this:
$FoundServer = ($ServerList | Where {$_.Name -like $ServerName+"*"})
$ServerList here is the array of strings (server names). Looks simple and works as expected. Usually.
The strange thing is, sometimes the script can't find some FQDNs. For example, if the FQDN in the file is "ServerXX.abc.com" and we're searching for "ServerXX", the FQDN is not found, while the search for other names works as expected. When debugging the script, it can be seen that the expression inside {} is literally "ServerXX.abc.com" -like "ServerXX*". It MUST be true. But the resulting search result is empty. An even more interesting thing: if the search name is specified as "ServerXX.", "ServerXX.a" or with other letters from the FQDN, the script finds it. If the same server name is specified in the file without the domain name (in the simple form), the script also finds it.
Even more enigmatic: we have two instances of the installed application, one for production and one for testing. The test one contains a much smaller server database. If I add the "invisible" server name from the prod instance to the test one and export the database, the script finds this name without any problems.
If I replace -like with -match, the issue disappears. So it's not an issue of the XML file generator (it's another PS script that generates a PSCustomObject and exports it via Export-CliXml). It's also not an issue of some invisible or non-ANSI symbols in the server name. I also examined the content of the XML file manually. It's huge (several tens of megabytes) and complex, so it's pretty difficult to analyze but I didn't find any visible issue. The XML structure looks correct.
I don't understand this seemingly random behavior. Can it be related somehow to the XML file size? A lack of memory in PS, or something like that? We use Powershell v4.
Note that this answer is not a solution, because (as of this writing) there's not enough information to diagnose your problem; however, your use of the -like and -match operators deserves some scrutiny.
$_.Name -match $ServerName+"*" (more succinctly: $_.Name -match "$ServerName*") is not the same as $_.Name -like "$ServerName*":
-match uses regular expressions (regexes), which (also) match part of the input, unless explicitly formulated to match at the start (^) and/or the end ($) of the input.
-like uses wildcard expressions, which must match the input as a whole.
While regexes and wildcards are distantly related, their syntax - and capabilities - are different; regexes are far more powerful; in the case at hand (note that matching is case-insensitive by default):
... -like 'ServerXX*' matches a string that starts with ServerXX and is followed by zero or more arbitrary characters (*).
Inputs 'ServerXX', 'ServerXX.foo.bar' and 'ServerXXY' would all return $true.
... -match 'ServerXX*' matches a string that contains substring ServerX (just one X!) anywhere in the input, optionally followed by further X characters, because the quantifier * applies to the preceding character/subexpression (here: the second X) and means zero or more occurrences.
While inputs 'ServerXX' and 'ServerXX.foo.bar' would return $true, so would 'ServerX' and 'fooServerXX' - which is undesired in this case.
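The difference is easy to verify interactively; a quick sanity check at the prompt:

```powershell
# Wildcard (-like): the pattern must match the WHOLE input.
'ServerXX.abc.com' -like  'ServerXX*'    # $true  - prefix plus anything
'fooServerXX'      -like  'ServerXX*'    # $false - doesn't start with ServerXX

# Regex (-match): matches a SUBSTRING; * quantifies the preceding 'X'.
'ServerX'          -match 'ServerXX*'    # $true  - 'ServerX' plus zero further Xs
'fooServerXX'      -match 'ServerXX*'    # $true  - substring match anywhere

# Anchored, escaped regex - the intended FQDN semantics:
'ServerXX.abc.com' -match '^ServerXX\.'  # $true
'ServerXXY'        -match '^ServerXX\.'  # $false
```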
If your inputs are FQDNs, use either of the following expressions, which are equivalent:
... -like 'ServerXX.*'
... -match '^ServerXX\.'
If the server name is supplied via variable, e.g. $ServerName, use "...", an expandable string, in the simplest case:
... -like "$ServerName.*"
... -match "^$ServerName\."
This is fine in the case of server names, as they're not permitted to contain characters that could mistakenly be interpreted as regex / wildcard metacharacters (characters with special meaning, such as *).
Generally, the safest approach is to explicitly escape a variable value to ensure its literal use, though note that needing to do so is much more likely in a regex than in a wildcard expression, because regexes have many more metacharacters:
... -like ('{0}.*' -f [System.Management.Automation.WildcardPattern]::Escape($ServerName))
... -match ('^{0}\.' -f [regex]::Escape($ServerName))
Using a single-quoted template string with -f, the format operator ({0} represents the 1st RHS operand), makes it obvious which parts are used literally, and which parts are spliced in as an escaped variable value.
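To make the escaping concrete, here is a sketch with a deliberately pathological, hypothetical server name that contains metacharacters:

```powershell
# Hypothetical name containing characters that are both wildcard and regex metacharacters:
$ServerName = 'Server[1]'

# Unescaped, '[1]' is parsed as a character set and the match fails:
'Server[1].abc.com' -like 'Server[1].*'   # $false - '[1]' matches only the single character '1'

# Escaped, both operators treat the name literally:
'Server[1].abc.com' -like  ('{0}.*' -f [System.Management.Automation.WildcardPattern]::Escape($ServerName))  # $true
'Server[1].abc.com' -match ('^{0}\.' -f [regex]::Escape($ServerName))                                        # $true
```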
Using BorderAround emits "True" to the console.
$range = $sum_wksht.Range('B{0}:G{0}' -f ($crow))
$range.BorderAround(1, -4138)
This can be overcome by using one of the following.
$wasted = $range.BorderAround(1, -4138)
[void]$range.BorderAround(1, -4138)
Why is this needed? Am I not creating the range correctly? Is there a better workaround?
Why is this needed?
It is needed because the BorderAround method has a return value and, in PowerShell, any data output (returned) by a command or expression is implicitly written to the (success) output stream, which by default goes to the host - typically the console window (terminal) in which the PowerShell session runs.
That is, the data shows in the console/terminal, unless it is:
captured ($var = ...)
sent through the pipeline for further processing (... | ...; the last pipeline segment's command may or may not produce output itself)
redirected (... >)
or any combination thereof.
That is:
$range.BorderAround(1, -4138)
is (more efficient) shorthand for:
Write-Output $range.BorderAround(1, -4138)
(Explicit use of Write-Output is rarely needed.)
Since you don't want that output, you must suppress it, for which you have several options:
$null = ...
[void] (...)
... > $null
... | Out-Null
$null = ... may be the best overall choice, because:
It conveys the intent to suppress up front
While [void] (...) does that too, it often requires you to enclose an expression in (...) for syntactic reasons; e.g., [void] 1 + 2 doesn't work as intended, only [void] (1 + 2); similarly, a command must always be enclosed in (...): [void] New-Item test.txt doesn't work, only [void] (New-Item test.txt) does.
It performs well with both command output (e.g., $null = Get-AdUser ...) and expression output (e.g., $null = $range.BorderAround(1, -4138)).
Conversely, avoid ... | Out-Null, because it is generally much slower (except in the edge case of a side effect-free expression's output in PowerShell (Core) 6+)[1].
However, if you need to silence all output streams - not just the success output, but also errors, verbose output, ... - you must use *> $null.
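The options side by side, using a [System.Collections.ArrayList] whose .Add() method is a classic source of accidental output:

```powershell
$list = New-Object System.Collections.ArrayList

$null = $list.Add('one')       # up-front intent; performs well - generally preferred
[void] $list.Add('two')        # fine for expressions; commands need enclosing (...)
$list.Add('three') > $null     # redirection
$list.Add('four')  | Out-Null  # works, but generally the slowest option

$list.Count                    # 4 - all items were added; only the output was suppressed
```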
Why does PowerShell produce output implicitly?
As a shell, PowerShell's output behavior is based on streams, as in traditional shells such as cmd.exe or Bash. (While traditional shells have 2 output streams - stdout and stderr - PowerShell has 6, so as to provide more sophisticated functionality - see about_Redirection.)
A cmdlet, script, or function can write to the output streams as often as it wants, and such output is usually instantly available for display but notably also to potential consumers, which enables the streaming, one-by-one processing that the pipeline provides.
This contrasts with traditional programming languages, whose output behavior is based on return values, typically provided via the return keyword, which conflates output data (the return value) with flow control (exit the scope and return to the caller).
A frequent pitfall is to expect PowerShell's return statement to act the same, but it doesn't: return <val> is just syntactic sugar for <val>; return, i.e., implicit output of <val> followed by an unconditional return of control to the caller; notably, the use of return does not preclude generation of output from earlier statements in the same scope.
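The return pitfall in a minimal example:

```powershell
function Get-Number {
  42          # implicit output - emitted BEFORE return is reached
  return 43   # syntactic sugar for: 43; return
}

Get-Number          # outputs 42 AND 43
(Get-Number).Count  # 2 - the caller receives a 2-element array
```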
Unlike traditional shells, PowerShell doesn't require an explicit write-to-the-output stream command in order to produce output:
While PowerShell does have a counterpart to echo, namely Write-Output, its use is rarely needed.
Among the rare cases where Write-Output is useful is preventing enumeration of a collection on output with -NoEnumerate, or to use common parameter -OutVariable to both output data and capture it in a variable (which is generally only needed for expressions, because cmdlets and advanced functions / scripts themselves support -OutVariable).
The implicit output behavior:
is generally a blessing:
for interactive experimentation - just type any statement - notably including expressions such as [IO.Path]::GetExtension('foo.txt') and [math]::Pow(2, 32) - and see its output (akin to the behavior of a REPL).
for writing concise code that doesn't need to spell out implied behavior (see example below).
can occasionally be a pitfall:
for users accustomed to the semantics of traditional programming languages.
due to the potential for accidental pollution of the output stream from statements that one doesn't expect to produce output, such as in your case; a more typical example is the .Add() method of the [System.Collections.ArrayList] class unexpectedly producing output.
Example:
# Define a function that takes an array of integers and
# outputs their hex representation (e.g., '0xa' for decimal 10).
function Get-HexNumber {
  param([int[]] $numbers)
  foreach ($i in $numbers) {
    # Format the integer at hand
    # *and implicitly output it*.
    '0x{0}' -f $i.ToString('x')
  }
}
# Call the function with integers 0 to 16 and loop over the
# results, sleeping 1 second between numbers.
Get-HexNumber (0..16) | ForEach-Object { "[$_]"; Start-Sleep 1 }
The above yields the following:
[0x0]
# 1-second pause
[0x1]
# 1-second pause
[0x2]
...
[0x10]
This demonstrates the streaming aspect of the behavior: Get-HexNumber's output is available to the ForEach-Object cmdlet call as it is being produced, not after Get-HexNumber has exited.
[1] In PowerShell (Core) 6+, Out-Null has an optimization if the only preceding pipeline segment is a side effect-free expression rather than a method or command call; e.g., 1..1e6 | Out-Null executes in almost no time, because the expression is seemingly not even executed. However, such a scenario is atypical, and the functionally equivalent Write-Output (1..1e6) | Out-Null takes a long time to run, much longer than $null = Write-Output (1..1e6).
I have an array that already contains values with an MB suffix - this is how MS DPM returns data written to a tape. I would like to sum them together. Is there an easy one-liner to accommodate this?
MB is a recognized numeric suffix in PowerShell's native grammar, so you can parse and evaluate your size strings with Invoke-Expression:
PS ~> Invoke-Expression '2401927.56MB'
2517924115906.56
You'll want to do some basic input validation to make sure each value is actually a numeric sequence, and remove the thousands separators:
$Tapes.DataWrittenDisplayString | ForEach-Object {
    # remove commas and whitespace
    $dataWritten = $_ -replace '[,\s]'
    # ensure it's actually a number in the expected format
    if ($dataWritten -match '^\d+(?:\.\d+)?[kmgtp]b$') {
        # let PowerShell do the rest
        $dataWritten | Invoke-Expression
    }
}
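If you'd rather avoid Invoke-Expression (it executes arbitrary input as code), a multiplier-table sketch achieves the same result; $Tapes and its DataWrittenDisplayString property are assumed from the question:

```powershell
# Hashtable keys are case-insensitive by default, so a 'MB' suffix finds the 'mb' key.
$multipliers = @{ kb = 1KB; mb = 1MB; gb = 1GB; tb = 1TB; pb = 1PB }

$Tapes.DataWrittenDisplayString | ForEach-Object {
    # remove commas and whitespace
    $dataWritten = $_ -replace '[,\s]'
    # capture the numeric part and the unit suffix separately
    if ($dataWritten -match '^(\d+(?:\.\d+)?)([kmgtp]b)$') {
        [double] $Matches[1] * $multipliers[$Matches[2]]
    }
}
```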
Consider the code (the variable $i is there because it was in a loop, adding several conditions to the pattern, e.g. *.a and *.b, ... but to illustrate this problem only one wildcard pattern is enough):
#!/bin/bash
i="a"
PATTERN="-name bar -or -name *.$i"
find . \( $PATTERN \)
If run on a folder containing files bar and foo.a, it works, outputting:
./foo.a
./bar
But if you now add a new file to the folder, namely zoo.a, then it no longer works:
find: paths must precede expression: zoo.a
Presumably, because the wildcard in *.$i gets expanded by the shell to foo.a zoo.a, which leads to an invalid find command pattern. So one attempt at a fix is to put quotes around the wildcard pattern. Except it does not work:
with single quotes -- PATTERN="-name bar -or -name '*.$i'" the find command outputs only bar. Escaping the single quotes (\') yields the same result.
idem with double quotes: PATTERN="-name bar -or -name \"*.$i\"" -- only bar is returned.
in the find command, if $PATTERN is replaced with "$PATTERN", out comes an error (for single quotes same error, but with single quotes around the wildcard pattern):
find: unknown predicate `-name bar -or -name "*.a"'
Of course, replacing $PATTERN with '$PATTERN' also does not work... (no expansion whatsoever takes place).
The only way I could get it to work was to use... eval!
FINDSTR="find . \( $PATTERN \)"
eval $FINDSTR
This works properly:
./zoo.a
./foo.a
./bar
Now after a lot of googling, I saw it mentioned several times that to do this kind of thing, one should use arrays. But this doesn't work:
i="a"
PATTERN=( -name bar -or -name '*.$i' )
find . \( "${PATTERN[@]}" \)
# result: ./bar
In the find line the array has to be enclosed in double quotes, because we want it to be expanded. But single quotes around the wildcard expression don't work, and neither does no quoting at all:
i="a"
PATTERN=( -name bar -or -name *.$i )
find . \( "${PATTERN[@]}" \)
# result: find: paths must precede expression: zoo.a
BUT DOUBLE QUOTES DO WORK!!
i="a"
PATTERN=( -name bar -or -name "*.$i" )
find . \( "${PATTERN[@]}" \)
# result:
# ./zoo.a
# ./foo.a
# ./bar
So I guess my question are actually two questions:
a) in this last example using arrays, why are double quotes required around the *.$i?
b) using an array in this way is supposed to expand «to all elements individually quoted». How would one do this with a variable (cf. my first attempt)? After getting this to work, I went back and tried using a variable again, with backslashed single quotes, or \\', but nothing worked (I just got bar). What would I have to do to emulate "by hand", as it were, the quoting done when using arrays?
Thank you in advance for your help.
Required reading:
BashFAQ — I'm trying to put a command in a variable, but the complex cases always fail!
a) in this last example using arrays, why are double quotes required around the *.$i?
You need to use some form of quoting to prevent the shell from performing glob expansion on *. Variables are not expanded in single quotes so '*.$i' doesn't work. It does inhibit glob expansion but it also stops variable expansion. "*.$i" inhibits glob expansion but allows variable expansion, which is perfect.
To really delve into the details, there are two things you need to do here:
Escape or quote * to prevent glob expansion.
Treat $i as a variable expansion, but quote it to prevent word splitting and glob expansion.
Any form of quoting will do for item 1: \*, "*", '*', and $'*' are all acceptable ways to ensure it's treated as a literal asterisk.
For item 2, double quoting is the only answer. A bare $i is subject to word splitting and globbing -- if you have i='foo bar' or i='foo*' the whitespace and globs will cause problems. \$i and '$i' both treat the dollar sign literally, so they're out.
"$i" is the only quoting that does everything right. It's why common shell advice is to always double quote variable expansions.
The end result is, any of the following would work:
"*.$i"
\*."$i"
'*'."$i"
"*"."$i"
'*.'"$i"
Clearly, the first is the simplest.
b) using an array in this way is supposed to expand «to all elements individually quoted». How would one do this with a variable (cf. my first attempt)? After getting this to work, I went back and tried using a variable again, with backslashed single quotes, or \\', but nothing worked (I just got bar). What would I have to do to emulate "by hand", as it were, the quoting done when using arrays?
You'd have to cobble together something with eval, but that's dangerous. Fundamentally, arrays are more powerful than simple string variables. There's no magic combination of quotes and backslashes that will let you do what an array can do. Arrays are the right tool for the job.
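Since the real use case built the pattern in a loop, here is a sketch of that loop done with an array (the file names are invented for the demo, created in a throwaway directory):

```shell
#!/bin/bash
# Set up a disposable directory with demo files.
dir=$(mktemp -d)
touch "$dir/bar" "$dir/foo.a" "$dir/zoo.a" "$dir/baz.b"

# Build the predicate list element by element - no eval needed.
pattern=( -name bar )
for i in a b; do
    pattern+=( -or -name "*.$i" )
done

# "${pattern[@]}" expands to all elements, individually quoted.
found=$(find "$dir" \( "${pattern[@]}" \))
printf '%s\n' "$found"
rm -rf "$dir"
```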
Could you explain in a little more detail, why ... PATTERN="-name bar -or -name \"*.$i\"" does not work? The quoted double quotes should, when the find command is actually run, expand the $i but not the glob.
Sure. Let's say we write:
i=a
PATTERN="-name bar -or -name \"*.$i\""
find . \( $PATTERN \)
After the first two lines run, what is the value of $PATTERN? Let's check:
$ i=a
$ PATTERN="-name bar -or -name \"*.$i\""
$ printf '%s\n' "$PATTERN"
-name bar -or -name "*.a"
You'll notice that $i has already been replaced with a, and the backslashes have been removed.
Now let's see how exactly the find command is parsed. In the last line, $PATTERN is unquoted because we want all the words to be split apart, right? If you write a bare variable name, Bash ends up performing an implied split+glob operation: it performs word splitting and glob expansion. What does that mean, exactly?
Let's take a look at how Bash performs command-line expansion. In the Bash man page under the "Expansion" section we can see the order of operations:
Brace expansion
Tilde expansion, parameter and variable expansion, arithmetic expansion, command substitution, and process substitution
Word splitting
Pathname (AKA glob) expansion
Quote removal
Let's run through these operations by hand and see how find . \( $PATTERN \) is parsed. The end result will be a list of strings, so I'll use a JSON-like syntax to show each stage. We'll start with a list containing a single string:
['find . \( $PATTERN \)']
As a preliminary step, the command-line as a whole is subject to word splitting.
['find', '.', '\(', '$PATTERN', '\)']
Brace expansion -- No change.
Variable expansion
['find', '.', '\(', '-name bar -or -name "*.a"', '\)']
$PATTERN is replaced. For the moment it is all a single string, whitespace and all.
Word splitting
['find', '.', '\(', '-name', 'bar', '-or', '-name', '"*.a"', '\)']
The shell scans the results of variable expansion that did not occur within double quotes for word splitting. $PATTERN was unquoted, so it's expanded. Now it is a bunch of individual words. So far so good.
Glob expansion
['find', '.', '\(', '-name', 'bar', '-or', '-name', '"*.a"', '\)']
Bash scans the results of word splitting for globs. Not the entire command-line, just the tokens -name, bar, -or, -name, and "*.a".
It looks like nothing happened, yes? Not so fast! Looks can be deceiving. Bash actually performed glob expansion. It just happened that the glob didn't match anything. But it could...†
Quote removal
['find', '.', '(', '-name', 'bar', '-or', '-name', '"*.a"', ')']
The backslashes are gone. But the double quotes are still there.
After the preceding expansions, all unquoted occurrences of the characters \, ', and " that did not result from one of the above expansions are removed.
And that's the end result. The double quotes are still there, so instead of searching for files named *.a it searches for ones named "*.a" with literal double quotes characters in their name. That search is bound to fail.
Adding a pair of escaped quotes \" didn't at all do what we wanted. The quotes didn't disappear like they were supposed to and broke the search. Not only that, but they also didn't inhibit globbing like they should have.
TL;DR — Quotes inside a variable aren't parsed the same way as quotes outside a variable.
† The first four tokens have no special characters. But the last one, "*.a", does. That asterisk is a wildcard. If you read the "pathname expansion" section of the man page carefully you'll see that there's no mention of quotes being ignored. The double quotes do not protect the asterisk.
Hang on! What? I thought quotes inhibit glob expansion!
They do—normally. If you write quotes out by hand they do indeed stop glob expansion. But if you put them inside an unquoted variable, they don't.
$ touch 'foobar' '"foobar"'
$ ls
foobar "foobar"
$ ls foo*
foobar
$ ls "foo*"
ls: foo*: No such file or directory
$ var="\"foo*\""
$ echo "$var"
"foo*"
$ ls $var
"foobar"
Read that over carefully. If we create a file named "foobar"—that is, it has literal double quotes in its filename—then ls $var prints "foobar". The glob is expanded and matches the (admittedly contrived) filename!
Why didn't the quotes help? Well, the explanation is subtle, and tricky. The man page says:
After word splitting ... bash scans each word for the characters *, ?, and [.
Any time Bash performs word splitting it also expands globs. Remember how I said unquoted variables are subject to an implied split+glob operator? This is what I meant. Splitting and globbing go hand in hand.
If you write ls "foo*" the quotes prevent foo* from being subject to splitting and globbing. However if you write ls $var then $var is expanded, split, and globbed. It wasn't surrounded by double quotes. It doesn't matter that it contains double quotes. By the time those double quotes show up it's too late. Word splitting has already been performed, and so globbing is done as well.
I have what may be an odd issue. I've got a Powershell script that's supposed to watch a directory for files, then move and rename them. It checks the output directory to see if a file with that name already exists with the form "Trip ID X Receipts Batch Y.pdf" (original output from the web form will always be that Y=1) and if it does replace Y with whatever the highest existing number of Y for other files with Trip ID X is. If there isn't one already, it'll just stay that Y=1. It does this successfully except on the second match, where instead of 2 Y will equal a number that varies depending on the file. This seems to be the file size in bytes plus 1. Example results of the script (from copy/pasting the same source file into the watched directory):
Trip ID 7 Receipts Batch 1.pdf
Trip ID 7 Receipts Batch 126973.pdf
Trip ID 7 Receipts Batch 3.pdf
Trip ID 7 Receipts Batch 4.pdf
The relevant portion of my code is here:
$REFile = "Trip ID " + $TripID + " Receipts Batch "
$TripIDCheck = "Trip ID " + $TripID
$TripFileCount = Get-ChildItem $destination |Where-Object {$_.Name -match $TripIDCheck}
$BatchCount = $TripFileCount.GetUpperBound(0) + 1
$destinationRegEx = $destination + $REFile + $BatchCount + ".pdf"
Move-Item -Path $path -Destination $destinationRegEx -Force
For counting the number of items in the array, I've used what you see above as well as $TripFileCount.Length, and $TripFileCount.Count. They all behave the same, seemingly taking the file, examining its size, setting the Y value for the second item to that, but then treating the third, fourth, etc. items as expected. For the life of me, I can't figure out what's going on. Have any of you ever seen something like this?
Edit: Trying to force $TripFileCount as an array with
$TripFileCount = @(Get-ChildItem $destination |Where-Object {$_.Name -match $TripIDCheck})
doesn't work either. It still does this.
As TessellatingHeckler states, your symptom indeed suggests that you're not accounting for the fact that cmdlets such as Get-ChildItem do not always return an array, but may return a single, scalar item (or no items at all).
Therefore, you cannot blindly invoke methods / properties such as .GetUpperBound() or .Length on such a cmdlet's output. There are two workarounds:
Use the array subexpression operator @(...) to ensure that the enclosed command's output is treated as an array, even if only a single object is returned, or none at all.
In PSv3+, use the .Count property even on a single object or $null to implicitly treat them as if they were an array.
The following streamlined solution uses the .Count property on the output from the Get-ChildItem call, which works as intended in all 3 scenarios: Get-ChildItem matches nothing, 1 file, or multiple files.
$prefix = "Trip ID $TripID Receipts Batch "
$suffix = '.pdf'
$pattern = "$prefix*$suffix"
$count = (Get-ChildItem $destination -Filter $pattern).Count
Move-Item -Path $path -Destination (Join-Path $destination "$prefix$($count+1)$suffix")
Note:
If you're using PowerShell v2, then prepend @ to (...). @(...), the array subexpression operator, ensures that the output from the enclosed command is always treated as an array, even if it comprises just 1 object or none at all.
In PowerShell v3 and above, this behavior is conveniently implicit, although there are caveats - see this answer of mine.
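The scalar-vs-array behavior is easy to reproduce without Get-ChildItem:

```powershell
$one  = Write-Output 'abc'        # a single [string] - NOT a 1-element array
$many = Write-Output 'abc','def'  # a 2-element array

$one.Length    # 3 - [string]'s own .Length counts characters, not items: the trap
$one.Count     # 1 - PSv3+ treats the scalar as if it were a 1-element collection
@($one).Count  # 1 - @(...) forces an array; works in v2 as well
$many.Count    # 2
```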
I found some strange behavior in PowerShell surrounding arrays and double quotes. If I create and print the first element in an array, such as:
$test = @('testing')
echo $test[0]
Output:
testing
Everything works fine. But if I put double quotes around it:
echo "$test[0]"
Output:
testing[0]
Only the $test variable was evaluated and the array marker [0] was treated literally as a string. The easy fix is to just avoid interpolating array variables in double quotes, or assign them to another variable first. But is this behavior by design?
So when you are using interpolation, by default it interpolates just the next variable in toto. So when you do this:
"$test[0]"
It sees $test as the next variable and expands it - a (single-element) array is stringified as its space-separated elements, yielding testing - and then treats the [0] that follows as a literal part of the string. The solution is to explicitly tell PowerShell where the bit to interpolate starts and where it stops:
"$($test[0])"
Note that this behavior is one of my main reasons for using formatted strings instead of relying on interpolation:
"{0}" -f $test[0]
EBGreen's helpful answer contains effective solutions, but only a cursory explanation of PowerShell's string expansion (string interpolation):
Only variables by themselves can be embedded directly inside double-quoted strings ("...") (by contrast, single-quoted strings ('...'), as in many other languages, are for literal contents).
This applies to both regular variables and variables referencing a specific namespace; e.g.:
"var contains: $var", "Path: $env:PATH"
If the first character after the variable name can be mistaken for part of the name - which notably includes : - use {...} around the variable name to disambiguate; e.g.:
"${var}", "${env:PATH}"
To use a $ as a literal, you must escape it with `, PowerShell's escape character; e.g.:
"Variable `$var"
Any character after the variable name - including [ and . - is treated as a literal part of the string; so, in order to index into an embedded variable ($var[0]) or to access a property ($var.Count), you need $(...), the subexpression operator (in fact, $(...) allows you to embed entire statements); e.g.:
"1st element: $($var[0])"
"Element count: $($var.Count)"
"Today's date: $((Get-Date -DisplayHint Date | Out-String).Trim())"
Stringification (to-string conversion) is applied to any variable value / evaluation result that isn't already a string:
Caveat: Where culture-specific formatting can be applied, PowerShell chooses the invariant culture, which largely coincides with the US-English date and number formatting; that is, dates and numbers will be represented in US-like format (e.g., month-first date format and . as the decimal mark).
In essence, the .ToString() method is called on any resulting non-string object or collection (strictly speaking, it is .psobject.ToString(), which overrides .ToString() in some cases, notably for arrays / collections and PS custom objects)
Note that this is not the same representation you get when you output a variable or expression directly, and many types have no meaningful default string representations - they just return their full type name.
However, you can embed $(... | Out-String) in order to explicitly apply PowerShell's default output formatting.
For a more comprehensive discussion of stringification, see this answer.
As stated, using -f, the string-formatting operator (<format-string> -f <arg>[, ...]) is an alternative to string interpolation that separates the literal parts of a string from the variable parts:
'1st element: {0}; count: {1:x}' -f $var[0], $var.Count
Note the use of '...' on the LHS, because the format string (the template) is itself a literal. Using '...' in this case is a good habit to form, both to signal the intent of using literal contents and for the ability to embed $ characters without escaping.
In addition to simple positional placeholders ({0} for the 1st argument, {1} for the 2nd, ...), you may optionally exercise more formatting control over the to-string conversion; in the example above, x requests a hex representation of the number.
For available formats, see the documentation of the .NET framework's String.Format method, which the -f operator is based on.
Pitfall: -f has high precedence, so be sure to enclose RHS expressions other than simple index or property access in (...); e.g., '{0:N2}' -f 1/3 won't work as intended, only '{0:N2}' -f (1/3) will.
Caveats: There are important differences between string interpolation and -f:
Unlike expansion inside "...", the -f operator is culture-sensitive:
Therefore, the following two seemingly equivalent statements do not
yield the same result:
PS> [cultureinfo]::CurrentCulture='fr'; $n=1.2; "expanded: $n"; '-f: {0}' -f $n
expanded: 1.2
-f: 1,2
Note how only the -f-formatted command respected the French (fr) decimal mark (,).
Again, see the previously linked answer for a comprehensive look at when PowerShell is and isn't culture-sensitive.
Unlike expansion inside "...", -f stringifies arrays as <type-name>[]:
PS> $arr = 1, 2, 3; "`$arr: $arr"; '$arr: {0}' -f (, $arr)
$arr: 1 2 3
$arr: System.Object[]
Note how "..." interpolation created a space-separated list of the stringification of all array elements, whereas -f-formatting only printed the array's type name.
(As discussed, $arr inside "..." is equivalent to:
(1, 2, 3).psobject.ToString() and it is the generally invisible helper type [psobject] that provides the friendly representation.)
Also note how (, ...) was used to wrap array $arr in a helper array that ensures that -f sees the expression as a single operand; by default, the array's elements would be treated as individual operands.
In such cases you have to do:
echo "$($test[0])"
Another alternative is to use string formatting (note the (...) around the -f expression, so that it is evaluated before echo sees it):
echo ("this is {0}" -f $test[0])
Note that this will be the case when you are accessing properties in strings as well. Like "$a.Foo" - should be written as "$($a.Foo)"