I've created a PowerShell script that runs perfectly from the Management Shell. I'm trying to get it set up as a scheduled task on Windows Server 2008 R2, and am unsure how to pass the values for my string array parameter.
Here is the relevant portion of my script:
[CmdletBinding()]
param(
    [Parameter(Mandatory=$true)]
    [String]
    $BaseDirectory,
    [String]
    $BackupMethod = "Full",
    [Int]
    $RemoveOlderThanDays = 0,
    [String]
    $LogDirectory,
    [Int]
    $LogKeepDays = 7,
    [String[]]
    $AdditionalDirectories
)
if ($AdditionalDirectories -and $AdditionalDirectories.Count -gt 0) {
    Write-Host " Additional Directories to be included:" -ForegroundColor Green
    $AdditionalDirectories | ForEach-Object {
        Write-Host " $_" -ForegroundColor Green
    }
}
The parameter giving me trouble is the last one, $AdditionalDirectories.
From the Shell:
If I run the script from the Management Shell like this, it works perfectly:
.\FarmBackup.ps1 \\SomeServer\Backups Full 0 D:\Logs\Backups 0 "D:\Documents\PowerShell Scripts","D:\SomeFolder"
Result:
Additional Directories to be included:
D:\Documents\PowerShell Scripts
D:\SomeFolder
From Scheduled Task:
Action: Start a program
Program/script: PowerShell.exe
Arguments: -File "D:\Documents\PowerShell Scripts\FarmBackup.ps1" \\SomeServer\Backups Full 0 D:\Logs\Backups 0 "D:\Documents\PowerShell Scripts","D:\SomeFolder"
Result: (From Log File)
Additional Directories to be included:
D:\Documents\PowerShell Scripts,D:\SomeFolder
I've tried a couple of different methods for those parameters, but I can't seem to get them to be seen as two separate strings in the string array. I'm hardcoding them for now, but it seems like there must be a way to make this work, since it's perfectly valid when run from the shell.
Try using the -Command switch instead of the -File switch, and then use the invocation operator '&'. Here is a link to an example of doing this with scheduled tasks:
http://blogs.technet.com/b/heyscriptingguy/archive/2011/01/12/schedule-powershell-scripts-that-require-input-values.aspx
Something like:
-Command "& 'D:\Documents\PowerShell Scripts\FarmBackup.ps1' '\\SomeServer\Backups' 'Full' 0 'D:\Logs\Backups' 0 'D:\Documents\PowerShell Scripts','D:\SomeFolder'"
I tested this solution by creating a script with the contents:
param([string[]] $x)
Write-Host $x.Count
Then called it in the following two ways:
powershell -File ".\TestScript.ps1" "what1,what2"
with result : 1
and
powershell -Command "& .\TestScript.ps1 what1,what2"
with result: 2
Another option, when the arguments get too complex and you're tired of fiddling with quotes, backticks, etc., is to use the underused -EncodedCommand parameter on PowerShell.exe, e.g.:
C:\PS> $cmd = "c:\temp\foo.ps1 'D:\Documents\PowerShell Scripts','D:\SomeFolder'"
C:\PS> $cmd
c:\temp\foo.ps1 'D:\Documents\PowerShell Scripts','D:\SomeFolder'
C:\PS> $bytes = [Text.Encoding]::Unicode.GetBytes($cmd)
C:\PS> $encodedCmd = [Convert]::ToBase64String($bytes)
C:\PS> $encodedCmd
YwA6AFwAdABlAG0AcABcAGYAbwBvAC4AcABzADEAIAAnAEQAOgBcAEQAbwBjAHUAbQBlAG4AdABzAFwAUABvAHcAZQByAFMAaABlAGwAbAAgAFMAYwByAGkAcAB0AHMAJwAsACcARAA6AFwAUwBvAG0AZQBGAG8AbABkAGUAcgAnAA==
C:\PS> powershell.exe -encodedCommand YwA6AFwAdABlAG0AcABcAGYAbwBvAC4AcABzADEAIAAnAEQAOgBcAEQAbwBjAHUAbQBlAG4AdABzAFwAUABvAHcAZQByAFMAaABlAGwAbAAgAFMAYwByAGkAcAB0AHMAJwAsACcARAA6AFwAUwBvAG0AZQBGAG8AbABkAGUAcgAnAA==
param1[0] is D:\Documents\PowerShell Scripts
param1[1] is D:\SomeFolder
Admittedly, not something that would be exactly readable or understandable by someone else. :-) You'd have to document the command in the description of the scheduled task.
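If you ever need to see what an encoded command actually runs (for instance, to paste into that task description), it decodes straight back; a quick sketch using the same $encodedCmd:
C:\PS> [Text.Encoding]::Unicode.GetString([Convert]::FromBase64String($encodedCmd))
c:\temp\foo.ps1 'D:\Documents\PowerShell Scripts','D:\SomeFolder'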
I have been using PowerShell with SharePoint and have created several scheduled tasks to trigger scripts at various intervals.
This is how I do it, and it has worked for me every time.
Syntax: '[Path to your script]' -Param1Name 'Value1' -Param2Name 'Value2' -Param3Name 'Value3'
Here is a real example :
D:\Scripts\Global.ps1 -DataLocation 'D:\Scripts' -DeploymentParameters 'Deploymentparameters' -Deployments 'Deployments' -GlobalParameters 'GlobalParameters' -SiteUrl 'https://my.sp.company.com'
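Applied to the script in the question, and combined with the -Command approach from the earlier answer (since -File still hands the comma-separated value over as a single string), the scheduled task's Arguments field would look something like this sketch:
-Command "& 'D:\Documents\PowerShell Scripts\FarmBackup.ps1' -BaseDirectory '\\SomeServer\Backups' -BackupMethod 'Full' -RemoveOlderThanDays 0 -LogDirectory 'D:\Logs\Backups' -LogKeepDays 0 -AdditionalDirectories 'D:\Documents\PowerShell Scripts','D:\SomeFolder'"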
I am trying to pass an array %variable% from the command line to PowerShell and then perform operations on this array variable within PowerShell, but I am having trouble passing the variable to PowerShell correctly. The current .BAT script that calls the PowerShell script is below:
SET STRING_ARRAY="test1" "test2" "test3" "test4"
Powershell.exe -executionpolicy remotesigned -File "FILEPATH\Build_DB.ps1" %STRING_ARRAY%
The PowerShell script below, which tests for a successful handover of the array variable, is as follows:
$string_array = @($args[0])
Write-Host $string_array.Length
for ($i = 0; $i -lt $string_array.Length; $i++) {
    Write-Host $string_array[$i]
}
However, all that is returned from PowerShell is a length of 1. What am I doing wrong here?
Alright, never mind. I ended up coming up with a solution that works in my case, so I am posting it here in case it is of benefit to anyone else. If someone has a better solution, please let me know.
Change the PowerShell script as follows:
# The problem is that each item in the %STRING_ARRAY% variable is passed as
# an individual argument to PowerShell. To get around this we can just
# store all optional arguments passed to PowerShell as follows.
$string_array = @($args)
# Now (if desired) we can also remove any optional arguments we don't want
# in our new array using the following command.
$string_array = $string_array[2..($string_array.Length)]
Write-Host $string_array.Length
for ($i = 0; $i -lt $string_array.Length; $i++) {
    Write-Host $string_array[$i]
}
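An alternative, sketched below and untested, is to switch to -Command so that PowerShell itself parses the array literal; the commas then build a single [string[]] argument instead of four separate positional ones (FILEPATH is the same placeholder as above):
SET STRING_ARRAY='test1','test2','test3','test4'
Powershell.exe -executionpolicy remotesigned -Command "& 'FILEPATH\Build_DB.ps1' %STRING_ARRAY%"
With that, $args[0] already arrives as a four-element array, so the original $string_array = @($args[0]) line reports a length of 4.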
I have PowerShell scripts that need to start other PowerShell scripts in a new session. The first script passes a set of arguments to the second script as an array. Everything works fine when all of the arguments have values, but when I try passing a $null, the parameter is stripped and the list of arguments gets messed up.
To better understand the issue, you can do the following (this is just an example):
Define C:\Test.ps1 as:
param($a,$b,$c)
" a $a " | Out-File C:\temp.txt
" b $b " | Out-File C:\temp.txt -append
" c $c " | Out-File C:\temp.txt -append
Run this in any PowerShell console:
PowerShell.exe -WindowStyle Hidden -NonInteractive -file C:\Test.ps1 @(1,2,3) # works as expected; temp.txt contains
a 1
b 2
c 3
PowerShell.exe -WindowStyle Hidden -NonInteractive -file C:\Test.ps1 @(1,$null,5) # strips the $null and writes the following to temp.txt
a 1
b 5
c
I need to preserve the $null when creating the new session, the desired temp.txt would contain
a 1
b
c 5
The problem seems to be that the $null gets stripped from the array directly, and it is already @(1,5) by the time it is interpreted by the new session.
I've tried declaring an empty array and adding elements to it one by one, and also tried replacing the array with a System.Collections.Generic.List[System.Object] and using the Add method, but the $null still gets stripped.
My last ideas are to test for a $null in the calling script and assign a default value to the argument, then test for that default value in the second script and reassign the $null; or to create a hash with all the arguments, pass it as a single argument, and process and split it in the called script. I really don't like these ideas, as they feel like overkill for the task at hand.
Any help in understanding the basic problem, why $null gets stripped from the array and how to preserve it, or how to alter the creation of the new session to pickup the $null is greatly appreciated.
When I've needed to serialize data between scripts, or preserve objects across script executions, I tend to use Export-Clixml and Import-Clixml, sometimes combined with splatting. The Clixml cmdlets will preserve objects in their entirety, exactly as they existed previously.
For example, in your sending script:
$a = 1;
$b = $null;
$c = 3;
@($a, $b, $c) | Export-Clixml 'C:\params.xml'
And then to retrieve the data:
$params = Import-Clixml 'C:\params.xml';
$x = $params[0];
$y = $params[1];
$z = $params[2];
Write-Output "$x,$y,$z";
You should get 1,,3.
Often you can even use hash tables to help organize:
@{'a'=$a; 'b'=$b; 'c'=$c} | Export-Clixml 'C:\params.xml'
And then to retrieve it:
$params = Import-Clixml 'C:\params.xml';
$x = $params.a;
$y = $params.b;
$z = $params.c;
But hashtables are a bit funky sometimes. Be sure to test.
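Here is a sketch of the splatting I mentioned, in the receiving session (it assumes the parameter names in the exported hashtable match Test.ps1's param($a,$b,$c)):
$params = Import-Clixml 'C:\params.xml';
& C:\Test.ps1 @params;   # $b is bound to $null instead of being dropped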
As for what's going on, it looks like PowerShell skips null values when assigning parameters from an array like you're doing. The null value is in the array (@(1, $null, 3)[1] -eq $null is True); it's just PowerShell skipping it.
If you specify the param names, then PowerShell knows which parameters you're giving it.
PowerShell.exe -WindowStyle Hidden -NonInteractive -file C:\Test.ps1 -a 1 -c 5
gives you
a 1
b
c 5
More on PowerShell parameters here.
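One way to apply that from the calling script, if the values are computed at runtime, is to build the named-argument list yourself and simply skip the nulls. This is just a sketch I haven't run against your exact setup:
$values = @{ a = 1; b = $null; c = 5 }
$argList = foreach ($kv in $values.GetEnumerator()) {
    # order doesn't matter here because the parameters are named
    if ($null -ne $kv.Value) { "-$($kv.Key)"; $kv.Value }
}
PowerShell.exe -WindowStyle Hidden -NonInteractive -file C:\Test.ps1 @argList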
So, I am fairly new to PowerShell and need to create a script to rename the computers in our office. That portion of the script works. The part I am having trouble with is the output.
I have this set up in Task Scheduler, but when it runs I cannot see whether the rename was successful. Below is my script, and below that is what goes into the text file.
start-transcript -path C:\Users\abhagwandin.SENECA\Desktop\RenameResults.txt
$CSV = Import-Csv "C:\Users\abhagwandin.SENECA\Desktop\Computer Desktop Names Test.csv" -Header OldName, NewName
Foreach ($name in $CSV)
{
    Write-Output $name
    netdom renamecomputer $name.OldName /newname: $name.NewName /userd: admin /passwordd: pass /usero: admin /passwordo: pass /reboot /force
}
stop-transcript
-------------------------------------------------------------------------------
**********************
Windows PowerShell Transcript Start
Start time: 20150520154216
Username :
Machine : (Microsoft Windows NT 6.1.7601 Service Pack 1)
**********************
Transcript started, output file is C:\Users\abhagwandin.SENECA\Desktop\RenameResults.txt
OldName NewName
------- -------
JFLAHNYCD1 JFLAHERTY
**********************
Windows PowerShell Transcript End
End time: 20150520154218
**********************
You know that renaming a computer through a command-line tool in PowerShell, instead of using the built-in cmdlets, can give problems with the output if you don't parse that output (and preferably create a new object for it)?
Why don't you use Rename-Computer or the Rename() method of the Win32_ComputerSystem WMI class? Both can be used remotely, so you don't even have to schedule tasks that way.
Just create an input file with the current names and the desired names, and use a loop to process them.
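For example, a rough sketch of that loop with Rename-Computer (which needs PowerShell 3.0 or later; the CSV path and layout are assumed to match the question, and -PassThru is there so the transcript records whether each rename succeeded):
$cred = Get-Credential
Import-Csv "C:\Users\abhagwandin.SENECA\Desktop\Computer Desktop Names Test.csv" -Header OldName, NewName |
    ForEach-Object {
        Rename-Computer -ComputerName $_.OldName -NewName $_.NewName -DomainCredential $cred -Force -Restart -PassThru
    }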
I have a problem that is troubling my mind.
I want to automatically execute PowerShell scripts with named arguments from within another PowerShell script (which will act as a script daemon).
For example:
One of the scripts that gets called has these parameters:
param(
    [int]$version,
    [string]$user,
    [string]$pass,
    [string]$domain
)
The PowerShell script daemon now loads the file and arguments like this:
$argumentsFromScript = [System.IO.File]::ReadAllText("C:\params.txt")
$job = Start-Job { & "ps1file" $argumentsFromScript }
The params.txt contains the data like this:
-versionInfo 2012 -user admin -pass admin -domain Workgrup
But when I try to execute this code, the whole $argumentsFromScript variable is obviously seen as parameter 1 ($version), and I end up with an error that "-versionInfo 2012 -user admin -pass admin -domain Workgrup" cannot be converted to Int32...
Do you have any idea how I can accomplish this task?
The PowerShell daemon does not know anything about the parameters. It just needs to execute scripts with the given named parameters. The params.txt is just an example; any other file (CSV, PS1, XML, etc.) would be fine. I just want the named parameters passed to the script automatically.
Thank you in advance for any help or advice.
Try this:
@'
param ([string]$logname,[int]$newest)
get-eventlog -LogName $logname -Newest $newest
'@ | sc c:\testfiles\testscript.ps1
'-logname:application -newest:10' | sc c:\testfiles\params.txt
$script = 'c:\testfiles\testscript.ps1'
$arguments = 'c:\testfiles\params.txt'
$sb = [scriptblock]::Create("$script $(get-content $arguments)")
Start-Job -ScriptBlock $sb
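If you want to see the called script's output, hold onto the job object and receive it once it finishes, e.g. (a usage sketch):
$job = Start-Job -ScriptBlock $sb
Wait-Job $job | Receive-Job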
I guess you want this:
$ps1 = (Resolve-Path .\YourScript.ps1).ProviderPath
$parms = (Resolve-Path .\YourNamedParameters.txt).ProviderPath
$job = sajb -ScriptBlock {
    param($ps1,$parms)
    iex "$ps1 $parms"
} -ArgumentList @(
    $ps1,
    [string](gc $parms)
)
# if you wanna see the outcome
rcjb $job -Wait
UPDATE: Modified the script to work within the bounds of PowerShell v1 as required by SQLPS.
Changed:
IF($property.Value -match $regex){
    $currentBadLine = (ConvertTo-Csv $_ -NoTypeInformation -Delimiter $delimiter);
    $badLines += $currentBadLine[1,2]
};
To:
IF($property.Value -match $regex){
    $badLines += $_ | Select-Object | ft -autoSize;
};
This prints a new header for each bad line, but it's not the end of the world and not worth the effort to prevent.
I have a PowerShell script that pre-processes CSV files before they have a chance to screw up my data imports.
On two servers in a row now I have confirmed that the PowerShell major version is at least 2, and that the following code snippet runs fine in PowerShell ISE. The purpose of the code is to read in each line of the CSV and then loop through the columns looking for the regex pattern in the $regex variable. When it finds one, I want it to keep track of the error before fixing it, so I can write an error log later and then output a cleaned-up file ready for import.
%{
    Foreach($Property in $_.PSObject.Properties){
        IF($property.Value -match $regex){
            $currentBadLine = (ConvertTo-Csv $_ -NoTypeInformation -Delimiter $delimiter);
            $badLines += $currentBadLine[1,2]
        };
        $property.Value = $property.Value -replace $regex;
    }
    $_
}
But once I put that code into an agent job, the agent complains:
'The term 'ConvertTo-Csv' is not recognized as the name of a cmdlet,
function, script file, or operable program. Check the spelling of the
name, or if a path was included, verify that the path is correct and
try again. '
The question then is this: is the agent's PowerShell subsystem using a different version of PowerShell than the rest of the system? If so, how do I find out which version the subsystem is using and, if possible, upgrade it so I can fix this?
The server is running:
Windows Server 2008 R2 Enterprise
PS Major version 2, Minor 0 Build -1 revision -1
SQL Server 2008 R2 SP1 10.5.2500.0 64 bit
Thanks!
Yes, proper PowerShell support isn't really implemented until SQL Server 2012 (and even that is a bit flaky as to which cmdlets it supports).
In 2008 and R2, the agent's PowerShell implementation is actually a minishell, created by the now (thankfully) deprecated make-shell.exe utility, which only allows v1 cmdlets to run and disallows the use of Add-PSSnapin, so you can't add any more cmdlets.
To get proper PowerShell support, you either need to shell out and call an external script, or run the job as a Windows scheduled task rather than an agent job.
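For the shell-out option, the job step can be changed from the PowerShell subsystem to an Operating system (CmdExec) step that calls the real powershell.exe, something like this (the script path here is just a placeholder):
powershell.exe -ExecutionPolicy Bypass -NoProfile -File "D:\Scripts\PreProcess-Csv.ps1"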
The following article explains a lot about why PowerShell support in 2008 R2 doesn't work like you think it should:
The Truth about SQLPS and PowerShell V2
One work-around: Export-CSV to a file, then Get-Content from the file.
$rows = ,(New-Object PSObject -prop @{a=7; b='stuff';});
$rows += ,(New-Object PSObject -prop @{a=77; b='more';});
#To run from SQL Job, change from this:
$csvrows = $rows | ConvertTo-CSV -NoType | % {$_.Replace('"','')};
write-output $csvrows;
#to this:
$rows | Export-CSV -NoType "C:\Temp\T.csv"
$csvrows = (Get-Content "C:\Temp\T.csv") | % {$_.Replace('"','')};
write-output $csvrows;