Running Powershell Script from SQL Server Agent

How different does the scripting need to be when you execute a PowerShell script from SQL Server Agent? I have been seeing very strange behavior:
Any object call fails.
Can we not use PowerShell functions in these scripts? Parameters come through empty when a function is called with an object parameter.
Some commands just don't print messages, even when I print a variable directly or use Write-Output.
I just want to understand why the behavior is so different. I have a large script that automates a big manual task, and it runs with no errors at all when executed manually.
Please help me with this.
Object: $agobj = $PrimarySQLConnObj.AvailabilityGroups[$AGName]
Error from Agent History:
The corresponding line is ' Add-SqlAvailabilityDatabase -InputObject $agobj -Database "$db" -ErrorAction Stop '. Correct the script and reschedule the job. The error information returned by PowerShell is: 'Cannot bind parameter 'InputObject'. Cannot convert the "[AG]" value of type "Microsoft.SqlServer.Management.Smo.AvailabilityGroup" to type "Microsoft.SqlServer.Management.Smo.AvailabilityGroup"
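For what it's worth, an error that cannot convert a type to the very same type name usually means two different copies of the SMO assemblies are loaded in the session (for example, the old SQLPS module alongside the newer SqlServer module), so the cmdlet's expected type and the object's actual type come from different assemblies. A small diagnostic sketch, assuming the $agobj from above:
# Show which assembly the object's type was loaded from
$agobj.GetType().Assembly.FullName
# Show which SQL modules are available; mixing SQLPS and SqlServer is a common cause of the mismatch
Get-Module -Name SqlServer, SQLPS -ListAvailable | Select-Object Name, Version, Path
# Importing only one module explicitly at the top of the job step helps keep a single copy loaded
Import-Module SqlServer -Force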

Related

SqlPackage seems to escape right square bracket ( ] ) in variable value passed to .dacpac

I'm passing a variable to my .dacpac but the text received is not what I passed. Example command:
sqlpackage /v:TextTest="abc]123" /Action:Publish /SourceFile:"my.dacpac" /TargetDatabaseName:MyDb /TargetServerName:"."
My variable $(TextTest) comes out as "abc]]123" instead of the original "abc]123".
Is there anything I can do to prevent SqlPackage from corrupting my input variables before they are passed to the .dacpac scripts?
Unfortunately, I don't think there is a good answer. This appears to be a very old bug. I'm seeing references to this issue going back 10 years.
Example A: https://web.archive.org/web/20220831180208/https://social.msdn.microsoft.com/forums/azure/en-US/f1d153c2-8f42-4148-b313-3449075c612f/sql-server-database-project-sqlcmd-variables-with-closing-square-brackets
They mention a "workaround" in the post, but they link to a Microsoft Connect issue which no longer exists and is not available on archive.org.
My best guess is that the "workaround" is to generate the deploy script rather than publishing, and then manually modify the variable value in the script...which is not really a workaround if you are working on a build/release pipeline or any sort of automation.
I tested whether using Microsoft.SqlServer.Dac.DacServices.Publish() directly (via the dbatools PowerShell module) would make any difference, but unfortunately the problem exists there as well.
I also tested it against every keyboard-accessible symbol, and that is the only character it seems to have a problem with.
Another option, though still not great, is to generate the deployment script, then execute it using SQLCMD.EXE.
So for example this would work:
sqlpackage /Action:Script `
/DeployScriptPath:script.sql `
/SourceFile:foobar.dacpac `
/TargetConnectionString:'Server=localhost;Database=foobar;Uid=sa;Password=yourStrong(!)Password' `
/p:CommentOutSetVarDeclarations=True
SQLCMD -S 'localhost' -d 'foobar' -U 'sa' -P 'yourStrong(!)Password' `
-i .\script.sql `
-v TextTest = "abc]123" `
-v DatabaseName = "foobar"
/p:CommentOutSetVarDeclarations=True - This setting is key; otherwise the values you pass to SQLCMD will be overridden by the :setvar declarations in the file. Just make sure you specify ALL of the variables, not just the one you need, so open the file, look at what is commented out, and make sure you are supplying everything that is needed.
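If you would rather not eyeball the generated file by hand, a small PowerShell sketch along these lines (assuming the script.sql path from the example above) lists the variable names declared in the commented-out block:
# List the variable names from the (commented-out) :setvar declarations in the generated script
Select-String -Path .\script.sql -Pattern ':setvar\s+(\w+)' |
    ForEach-Object { $_.Matches[0].Groups[1].Value }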
It's not a great option...but it's at least scriptable and doesn't require manual intervention.

How to run SQL Server Agent Powershell script with output and exit code

I'm trying to run a PowerShell script as a SQL Server 2016 Agent job.
As my Powershell script runs, I'm generating several lines of output using "Write-Output". I'd like to save this output to the job history, and I only want the job to continue to the next step if the step running the PowerShell script completes with an exit code of 0.
I'm using the "PowerShell" step type in my agent job. The command text looks like this:
# Does some stuff that eventually sets the $resultState variable...
Write-Output ("Job complete with result '" + $resultState + "'")
if ($resultState -eq "SUCCESS") {
    [Environment]::Exit(0);
}
else {
    [Environment]::Exit(1);
}
Under the "Advanced" settings, "Include step output in history" is checked. If I remove the final "if" statement from the PowerShell script, then I can see the output in the history, but the job step is always successful and moves on to the next step. If I include the if/else statements, the job step fails if $resultState does not equal "SUCCESS" (which is what I want), but I don't see my output anymore in the history for the job step.
Any suggestions?
I worked around this by saving all of my output lines to a single variable, and using Write-Error with -ErrorAction Stop if my result wasn't what I wanted. This isn't quite what I was trying to do at first, because this doesn't use the exit codes, but SQL Agent will correctly detect if the job step succeeded or not, and my output can still show up in the job history because it will be included in the error message.
Updated code:
# Does some stuff that sets the $resultState and saves output lines to $output...
$output += "`r`nJob complete with result '$resultState'"
if ($resultState -eq "SUCCESS") {
    Write-Output ($output)
}
else {
    Write-Error ("`r`n" + $output) -ErrorAction Stop
}
I struggled with this. The logged output from SQL Agent PowerShell job steps is pretty useless. I found it better to use a CmdExec step instead that calls Powershell.exe.
In a SQL Server 2019 CmdExec job step you can just call your PowerShell script like this:
Powershell "F:\Temp\TestScript.ps1"
... and you'll get all the output (and you can log it to a file if you like). And if there's an error, the job stops properly.
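For instance, if you do want the file logging mentioned above, the CmdExec command line can redirect standard output and errors to a file (the paths here are just the ones from the example):
Powershell "F:\Temp\TestScript.ps1" >> "F:\Temp\TestScript.log" 2>&1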
In some earlier versions of SQL Server, PowerShell errors would get logged but the job would continue to run (see https://stackoverflow.com/a/53732722/22194 ), so you need to wrap your script in a try/catch to bubble the error up in a way SQL Agent can deal with:
Powershell.exe -command "try { & 'F:\Temp\TestScript.ps1'} catch { throw $_ }"
Note that if your script path has spaces in it you might hit a different set of problems; see https://stackoverflow.com/a/45762288/22194

Deploying SQL Changes Containing $(ESCAPE_SQUOTE())

I have a Database project in Visual Studio that I am attempting to deploy automatically to a test environment nightly. To accomplish this I am using TFS which leverages a PowerShell script to run "SqlPackage.exe" to deploy any changes that have occurred during the day.
Some of my procs contain logic that runs inside a script that is part of an agent job step and contains the following code (in dynamic SQL):
$(ESCAPE_SQUOTE(JOBID))
When deploying changes that affect this proc, I get the following issue:
SQL Execution error: A fatal error occurred. Incorrect syntax was
encountered while $(ESCAPE_SQUOTE( was being parsed.
This is a known issue; it appears that this simply is not supported. It seems to be caused by the "SQLCmd" command misinterpreting the $( characters as the start of a variable:
"override the value of a SQL command (sqlcmd) variable used during a
publish action."
So how do I get around this? It seems to be a major limitation of "sqlcmd" that you can't disable variable substitution; I don't see a parameter that supports that...
Update 1
It seems as though you can disable variable substitution in "sqlcmd" by feeding it the "-x" argument (Source, Documentation):
-x (disable variable substitution)
Still don't see a way to do this from "SqlPackage.exe" though.
It seems that sqlcmd looks for the $( as a token, so separating those two characters is good enough. You can do this with a dynamic query that does nothing more than break the query into two strings:
DECLARE @query nvarchar(256) = N'... $' + N'(ESCAPE_SQUOTE(JOBID)) ...';
EXEC sp_executesql @query;
One way to get around this is to refactor the "$(ESCAPE_SQUOTE(JOBID))" string into a scalar function, then set up a PowerShell script to directly invoke the "Sqlcmd" command with the "-x" parameter to "Create/Alter" said function before running "SqlPackage.exe".
Looks something like this in PowerShell:
# The backtick before $( keeps PowerShell from expanding the agent token; $database still expands as intended
$sql = @"
USE $database
GO
CREATE FUNCTION [dbo].[GetJobID] ()
RETURNS NVARCHAR(MAX)
AS
BEGIN
    RETURN '`$(ESCAPE_SQUOTE(JOBID))'
END
GO
"@;
Sqlcmd -S $servername -U $username -P $password -Q $sql -x;
This is a pretty poor workaround, but it does accomplish the task. Really hoping for something better.
I propose another workaround.
My job has a step that runs: DTEXEC.exe /SERVER "$(ESCAPE_NONE(SRVR))"
I just have to add a SQLCMD variable before:
:setvar SRVR "$(ESCAPE_NONE(SRVR))"
This way the token is treated as the SQLCMD variable $(SRVR) and is replaced by the requested value.

SSDT Post-Deployment script (data dump with JQuery) - disable variable substitution

In my SSDT project I have a post deployment script where I include a script file.
:r .\Data\Data.Content.sql
The file Data.Content.sql is a dump of the database (insert statements) and it contains content like 'var $sameHeightDivs = $(''.product-tile-region'');'. The database contains jQuery scripts. So I receive the following errors:
SQL72008: Variable document is not defined.
or
72006: Fatal scripting error: Incorrect syntax was encountered while parsing '$(''
I found that you can disable variable substitution with the -x argument.
But is there a way to define this somewhere? (post-deployment script? project setting?)
Or is there another way to solve this problem?
FYI: to create the dump I use Microsoft.SqlServer.Management.Smo.Scripter.
Kind regards,
bob
I posted the same question in the SQL Server Data Tools forum, where someone came up with a workaround.
After generating the script I do a search and replace for the $ char.
function SearchAndReplace($file) {
    (Get-Content $file) |
        Foreach-Object { $_ -replace "\$\(", "' + CHAR(36) + '(" } |
        Set-Content $file
}
I included the '(' to be sure to limit the scope (to jQuery selectors).
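A usage sketch, assuming the function above is run against the included file from the question (path taken from the :r line):
# Rewrite the dump in place so sqlcmd no longer sees literal "$(" sequences
SearchAndReplace ".\Data\Data.Content.sql"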

osql vs Invoke-Sqlcmd-- redirecting output of the latter

We're moving from a batch file that calls osql to a Powershell script which uses the Invoke-Sqlcmd cmdlet.
Does anyone know the equivalent, in the latter case, of redirecting output with the -o flag in osql? We have some post-processing steps that look at the osql output file and act accordingly (reporting an error if those logs are greater than X bytes). I would very much like Invoke-Sqlcmd to produce the same output information as osql, given the same SQL commands going in.
Right now in my script I'm planning to call Invoke-Sqlcmd <...> | Out-file -filepath myLog.log. Anyone know if this is ok or makes sense?
From the documentation for the cmdlet itself:
Invoke-Sqlcmd -InputFile "C:\MyFolder\TestSQLCmd.sql" | Out-File -filePath "C:\MyFolder\TestSQLCmd.rpt"
The above is an example of calling Invoke-Sqlcmd, specifying an input file and piping the output to a file. This is similar to specifying sqlcmd with the -i and -o options.
http://technet.microsoft.com/en-us/library/cc281720.aspx
I think you'll find, as I have, that it's difficult to reproduce the same behavior with Invoke-Sqlcmd.
osql and sqlcmd.exe will send T-SQL PRINT output, RAISERROR messages, and errors to the output file.
Using Powershell you can redirect standard error to standard output with the standard error redirection technique (2>&1):
Invoke-Sqlcmd <...> 2>&1 | Out-file -filepath myLog.log
However, this still won't catch everything. For example, RAISERROR and PRINT statements only produce output from Invoke-Sqlcmd when you use the -Verbose parameter, as documented in help Invoke-Sqlcmd. In PowerShell V2 you can't redirect the verbose stream, although you can in PowerShell V3 using 4>.
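On PowerShell V3 or later, a sketch that captures the verbose stream (where the PRINT and RAISERROR messages land) together with errors might look like this, with the input file and log file names as placeholders:
# -Verbose surfaces PRINT/RAISERROR messages; 4>&1 and 2>&1 merge them into the stream being logged
Invoke-Sqlcmd -InputFile "C:\MyFolder\TestSQLCmd.sql" -ServerInstance "localhost" -Verbose 4>&1 2>&1 |
    Out-File -FilePath "C:\MyFolder\myLog.log"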
For these reasons and others (like trying to recreate all the many different options in sqlcmd) I switched back to using sqlcmd.exe for scheduled jobs in my environment. Since osql.exe is deprecated, I would suggest switching to sqlcmd.exe, which supports the same options as osql.
You can still call osql from PowerShell. I would continue to do just that. Invoke-SqlCmd returns objects representing each of the rows in your result set. If you aren't going to do anything with those objects, there's no reason to upgrade to Invoke-SqlCmd.
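As a minimal sketch of that approach, assuming placeholder server and file names, you could keep osql's -o behavior and still react to failures from PowerShell:
# -b makes osql return a non-zero exit code on SQL errors, so the wrapper can react
& osql -S "localhost" -E -i ".\MyQuery.sql" -o ".\myLog.log" -b
if ($LASTEXITCODE -ne 0) { throw "osql reported an error; check .\myLog.log" }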
