exit script after writing error to log - sql-server

I'm writing a script to deploy a bunch of SQL files to SQL Server.
The requirements are:
Everything should be written to a log file.
The script should stop when it encounters an error (the logic being that we fix the problem and then re-run the script).
I have a function which runs all the SQL files in a folder.
function writeLog($comment,
    $logfile="C:\Users\ndefontenay\Documents\Releases\logs\release-update.log"){
    $date = Get-Date -Format "MM/dd/yyyy HH:mm:ss"
    $fullComment = "$date - $comment"
    $fullComment | Out-File -Append $logfile
}
function runQueriesInFolder($folder, $server=".", $database){
    Write-Host $folder
    $files = Get-ChildItem -Recurse -Path $folder -Filter "*.sql" | Sort-Object -Descending
    foreach ($file in $files){
        Invoke-Sqlcmd -InputFile $file.FullName -Database $database -OutputSqlErrors $True -ErrorAction Stop -ServerInstance $server -QueryTimeout 65536
    }
}
That's great: this exits and writes the errors to the console. But I need the errors in the log file as well, so I use try..catch.
function runQueriesInFolder($folder, $server=".", $database){
    Write-Host $folder
    $files = Get-ChildItem -Recurse -Path $folder -Filter "*.sql" | Sort-Object -Descending
    try{
        foreach ($file in $files){
            Invoke-Sqlcmd -InputFile $file.FullName -Database $database -OutputSqlErrors $True -ErrorAction Stop -ServerInstance $server -QueryTimeout 65536
        }
    }
    catch{
        WriteLog("ERROR - $($_)")
    }
}
Now my errors are written to the log, but the script doesn't exit anymore. So I tried the finally statement. It doesn't seem to work any better.
function runQueriesInFolder($folder, $server=".", $database){
    Write-Host $folder
    $files = Get-ChildItem -Recurse -Path $folder -Filter "*.sql" | Sort-Object -Descending
    try{
        foreach ($file in $files){
            Invoke-Sqlcmd -InputFile $file.FullName -Database $database -OutputSqlErrors $True -ErrorAction Stop -ServerInstance $server -QueryTimeout 65536
        }
    }
    catch{
        WriteLog("ERROR - $($_)")
    }
    finally{
        exit 1
    }
}
The log should have ended at the first error (Invalid object name), but instead it shows:
07/23/2015 10:57:45 - ERROR - Invalid object name 'History.DOS.Test'.
07/23/2015 10:57:45 - ERROR - Cannot open database "blah" requested by the login. The login failed.
07/23/2015 10:57:45 - ERROR - The 'Query' and the 'InputFile' options are mutually exclusive.
07/23/2015 10:57:45 - Release update completed
So the question is: How do I get my application to log the stopping error into a file, and exit the script completely?

"Now my errors are written in the logs but the script doesn't exit anymore."
That's because your catch block is handling the error and doesn't take any action.
If you want to rethrow the error, then just do this:
catch{
    WriteLog("ERROR - $($_)")
    throw
}
Now your error will be logged, and the catch block will rethrow it so that it bubbles up.
As far as "bubbling" the error up goes, the calling code should handle it. The calling code might want to handle the exception in some specific way, or it could just exit the script.
# calling code
try {
    runQueriesInFolder ...
}
catch {
    # hit an error, don't want to keep script running
    exit
}
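Putting the pieces together, here is a minimal end-to-end sketch of that pattern (the folder path and database values are illustrative, not from the original post):
function writeLog($comment,
    $logfile="C:\Users\ndefontenay\Documents\Releases\logs\release-update.log"){
    $date = Get-Date -Format "MM/dd/yyyy HH:mm:ss"
    "$date - $comment" | Out-File -Append $logfile
}

function runQueriesInFolder($folder, $server=".", $database){
    $files = Get-ChildItem -Recurse -Path $folder -Filter "*.sql" | Sort-Object -Descending
    foreach ($file in $files){
        # -ErrorAction Stop turns SQL errors into terminating errors the caller can catch;
        # no try/catch here, so the first error bubbles up and stops the run
        Invoke-Sqlcmd -InputFile $file.FullName -Database $database -OutputSqlErrors $True -ErrorAction Stop -ServerInstance $server -QueryTimeout 65536
    }
}

# calling code: log the first error, then stop the whole script
try {
    runQueriesInFolder -folder "C:\Releases\sql" -database "blah"   # illustrative values
    writeLog "Release update completed"
}
catch {
    writeLog "ERROR - $($_)"
    exit 1
}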

Related

Powershell script to find file age on multiple shares

I'm working on a PowerShell script to read file attributes, filtered by CreationTime, on multiple shares. The script works sporadically. It works great when I use a single path, but I get mixed results when I add the folder paths to an array. The most disturbing result is when it successfully finds and reads all paths and then includes everything under C:\Windows\System32. The same anomaly occurs when the shares are empty.
So what I want to accomplish is:
Read list of Shares
Read each share content filtered by 'CreationTime' and 'Archive' attributes.
Save results to a csv file.
If the file is not empty, write the results to the event log.
Here is my testing code:
$Date = (Get-Date).AddHours(-3)
$FolderList = "C:\Software\Scripts\FolderList.txt"
$Folders = Get-Content $Folderpath
$Filepath = "C:\Software\Scripts"
$filename = "$Filepath\" + $timer + "OldFiles.csv"
foreach ($Folder in $Folders)
{
    Get-ChildItem -Path $Folder -Recurse -Force |
        Where-Object { $_.CreationTime -lt $Date -and $_.Attributes -eq "Archive" } |
        Select-Object Attributes, CreationTime, FullName |
        Export-Csv -Path $filename -NoTypeInformation
}
if ((Get-ChildItem $filename).Length -eq 0)
{
    exit
}
else
{
    # Write to OpsMgr Log
    $Message = Get-Content $filename
    Write-EventLog -LogName "Operations Manager" -Source "Health Service Script" -EventID 402 -EntryType Information -Message "Old files found. $Message"
}
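Two things in the posted code stand out, whatever the root cause of the System32 anomaly: Get-Content $Folderpath reads a variable that is never assigned (the list lives in $FolderList), and Export-Csv inside the loop overwrites the file on every iteration, so only the last folder's results survive. A corrected sketch (keeping the question's variable names; $timer is assumed to be set elsewhere, and -band is used because -eq "Archive" only matches files whose sole attribute is Archive):
$Date = (Get-Date).AddHours(-3)
$FolderList = "C:\Software\Scripts\FolderList.txt"
$Folders = Get-Content $FolderList   # read the list file, not an unset variable
$Filepath = "C:\Software\Scripts"
$filename = "$Filepath\" + $timer + "OldFiles.csv"
# collect the results from every folder first, then write the CSV once
$oldFiles = foreach ($Folder in $Folders)
{
    Get-ChildItem -Path $Folder -Recurse -Force |
        Where-Object { $_.CreationTime -lt $Date -and ($_.Attributes -band [IO.FileAttributes]::Archive) } |
        Select-Object Attributes, CreationTime, FullName
}
$oldFiles | Export-Csv -Path $filename -NoTypeInformation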

Move-Item causes "File Already Exists" error, despite folder having been deleted

I have some code which deletes a folder, then copies files from a temporary directory to where that folder had been.
Remove-Item -Path '.\index.html' -Force
Remove-Item -Path '.\generated' -Force -Recurse # folder containing generated files
#Start-Sleep -Seconds 10 # uncommenting this line fixes the issue

# $tempDir contains index.html and a subfolder, "generated", which contains additional files;
# i.e. we're replacing the content we just deleted with new versions.
Get-ChildItem -Path $tempDir | %{
    Move-Item -Path $_.FullName -Destination $RelativePath -Force
}
I get an intermittent error, Move-Item : Cannot create a file when that file already exists., on the Move-Item line for the generated path.
I've been able to prevent this by adding a hacky Start-Sleep -Seconds 10 after the second Remove-Item statement, though that's not a great solution.
I assume the issue is that the Remove-Item statement completes and the code moves on to the next line before the OS has caught up with the actual file deletion, though that seems odd/worrying. NB: There are ~2,500 files in the generated folder (all between 1-100 KB).
There are no other processes accessing the folders (i.e. I've even closed my explorer windows & tested with this directory being excluded from my AV).
I've considered other options:
using Copy-Item instead of Move-Item. I don't like this as it creates new files when none are needed (a copy is slower than a move)... It's faster than my current sleep hack, but still not ideal.
deleting the files & not the folder, then iterating through the subfolders & copying files to the new locations. This would work, but is a lot more code for something that should be simple; so I don't want to pursue that option.
Robocopy would do the trick; but I'd prefer a pure PowerShell solution. This is the option I'll eventually pick if there is no clean solution though.
Question
Has anyone seen this before?
Is it a bug, or have I missed something?
Is anyone aware of a fix / good workaround?
Update
Running the remove in a separate job (i.e. using the code below) did not resolve the issue.
Start-Job -ScriptBlock {
    Remove-Item -Path '.\index.html' -Force
    Remove-Item -Path '.\generated' -Force -Recurse # folder containing generated files
} | Wait-Job | Out-Null

# $tempDir contains index.html and a subfolder, "generated", which contains additional files;
# i.e. we're replacing the content we just deleted with new versions.
Get-ChildItem -Path $tempDir | %{
    Move-Item -Path $_.FullName -Destination $RelativePath -Force
}
Update #2
Adding this works; i.e. rather than waiting a fixed time, we wait for the path to be removed, checking every second. If it's not removed after 30 seconds we assume it never will be, so we carry on regardless (which will cause Move-Item to throw an error that gets handled elsewhere).
# ... remove-item code ...
Start-Job -ScriptBlock {
    param($Path)
    while (Test-Path $Path) { Start-Sleep -Seconds 1 }
} -ArgumentList '.\generated' | Wait-Job -Timeout 30 | Out-Null
# ... move-item code ...
In the end I settled for this solution; not perfect, but it works.
Remove-Item -Path '.\index.html' -Force
Remove-Item -Path '.\generated' -Force -Recurse # folder containing generated files
# wait until the .\generated directory is fully removed, or until ~30 seconds have elapsed
1..30 | %{
    if (-not (Test-Path -Path '.\generated' -PathType Container)) { break }
    Start-Sleep -Seconds 1
}
Get-ChildItem -Path $tempDir | %{
    Move-Item -Path $_.FullName -Destination $RelativePath -Force
}
This does the same as the job in update #2 of the question, only without the overhead of a job; it just loops until the item is removed.
Here's the above logic wrapped as a reusable cmdlet:
function Wait-Item {
    [CmdletBinding()]
    param (
        [Parameter(Mandatory, ValueFromPipeline, HelpMessage = 'The path of the item you wish to wait for')]
        [string]$Path
        ,
        [Parameter(HelpMessage = 'How many seconds to wait for the item before giving up')]
        [ValidateRange(1, [int]::MaxValue)]
        [int]$TimeoutSeconds = 30
        ,
        [Parameter(HelpMessage = 'By default the function waits for an item to appear. Adding this switch causes us to wait for the item to be removed.')]
        [switch]$Remove
    )
    process {
        [bool]$timedOut = $true
        1..$TimeoutSeconds | %{
            # note: 'return' inside ForEach-Object only ends the current iteration
            if ((Test-Path -Path $Path) -ne ($Remove.IsPresent)) { $timedOut = $false; return }
            Start-Sleep -Seconds 1
        }
        if ($timedOut) {
            Write-Error "Wait-Item timed out after $TimeoutSeconds seconds waiting for item '$Path'"
        }
    }
}
#example usage:
Wait-Item -Path '.\generated' -TimeoutSeconds 30 -Remove
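Since -Path is declared with ValueFromPipeline, the path can also be piped in:
'.\generated' | Wait-Item -TimeoutSeconds 30 -Remove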

Run multiple *.sql query files log to file

My goal is to have a PowerShell script run several .sql query files against a specific SQL Server instance and then log the output to a log file.
I can't get the logging to work: my log file is always empty, and I'm at a loss for what I am missing.
Contents of C:\Temp:
Build1.SQL
Build2.SQL
Build3.sql
Build4.sql
Build5.SQL
Build6.SQL
$PatchPostConvSQLScripts = Get-ChildItem -Path C:\Temp -Filter *.sql -Name
$Queries = $PatchPostConvSQLScripts
foreach ($query in $Queries){
    Write-Host "Starting: $query"
    Invoke-Sqlcmd -ServerInstance $DBServer -InputFile $query |
        Out-File "C:\TEMP\scriptResults.log"
    Write-Host "Completed: $query"
}
Once I get it logging to a file, I'll need to add a newline each time with `r`n, but baby steps right now.
Is there a better way to do this that I just don't know?
The main reason you get nothing in the log file is that Out-File rewrites the whole file on each run. Try using -Verbose as mentioned in the answer by TechSpud to collect PRINT/server statements, or write the output to a temp file and Add-Content it to the main log file:
$DBServer = "MYPC\SQLEXPRESS"
$sqlPath = "C:\TEMP\"
$log = "scriptResults.log"
$tempOut = "temp.log"
$files = Get-ChildItem -Path $sqlPath -Filter *.sql -Name
foreach ($file in $files){
    Write-Host "Starting: $file"
    Invoke-Sqlcmd -ServerInstance $DBServer -InputFile $sqlPath$file | Out-File $sqlPath$tempOut
    Get-Content $sqlPath$tempOut | Add-Content $sqlPath$log
    Write-Host "Completed: $file"
}
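If the temp file feels like overhead, appending directly should have the same effect (a sketch, assuming the result sets are all you need to capture):
Invoke-Sqlcmd -ServerInstance $DBServer -InputFile $sqlPath$file |
    Out-File -Append $sqlPath$log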
Firstly, as @Ben Thul mentioned in his comment, check that your SQL files actually output something (a result set, or messages) by running them in Management Studio.
Then, you'll need to use the -Verbose flag, as this command will tell you.
Get-Help Invoke-Sqlcmd -Full
Invoke-Sqlcmd does not return SQL Server message output, such as the
output of PRINT statements, unless you use the PowerShell -Verbose parameter.
$Queries = Get-ChildItem -Path C:\Temp -Filter *.sql -Name
Clear-Content -Path "C:\TEMP\scriptResults.log" -Force
foreach ($query in $Queries){
    Write-Host "Starting: $query"
    Invoke-Sqlcmd -ServerInstance $DBServer -InputFile $query -Verbose |
        Add-Content -Path "C:\TEMP\scriptResults.log"
    Write-Host "Completed: $query"
}
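One caveat worth noting: -Verbose writes PRINT output to the verbose stream, which the pipeline into Add-Content does not capture. To land those messages in the file as well, merge the verbose stream (stream 4) into the success stream first, e.g.:
Invoke-Sqlcmd -ServerInstance $DBServer -InputFile $query -Verbose 4>&1 |
    Add-Content -Path "C:\TEMP\scriptResults.log"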

execute .sql script using powershell and store the output in .sql file

I'm trying to run a .sql script file from PowerShell and save the result into a .sql file. Overview: a SQL database restore requires a backup of users and permissions pre-restore, and once the restore is complete we need to execute that output (the users/permissions backup we took pre-restore) against the database.
Here's my script; when I execute it I see an empty file.
Add-PSSnapin SqlServerProviderSnapin100;
$server = 'DBA_Test';
$database = 'Test';
$mydata = invoke-sqlcmd -inputfile "C:\users\security.sql" -serverinstance $server -database $database | Format-Table -HideTableHeaders -AutoSize
$mydata | out-file C:\users\output.sql;
Remove-PSSnapin SqlServerCmdletSnapin100;
Can someone help me with this?
Thanks in advance
invoke-sqlcmd -inputfile "C:\users\security.sql" -serverinstance $server -database $database | Format-Table -HideTableHeaders -AutoSize >> C:\users\output.sql
or
Invoke-sqlcmd -inputfile "C:\users\security.sql" -serverinstance $server -database $database | Format-Table -HideTableHeaders -AutoSize | Out-File -FilePath C:\users\output.sql -Append
should do the trick.
Your problem is that you're only capturing one output stream. Your code would work as expected if your query were just running SELECT 'Hello World!'.
In order to get all output streams (verbose, error, and output) into a single file, you can do the following:
invoke-sqlcmd -inputfile "C:\users\security.sql" -serverinstance $server -database $database -verbose *>&1 | out-file C:\users\output.sql
The -verbose flag turns on a lot of the messages you'd expect to see. The * indicates you want all output streams (you can look up the stream definitions if you'd like; the verbose stream itself is 4, so 4>&1 would redirect just that one stream). Then you are simply redirecting the combined output to out-file.
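One more caveat worth checking, since the generated .sql will be executed later: in Windows PowerShell, Out-File wraps formatted output at the console width, which can truncate long statements. Widening the output avoids that (a sketch; the 4096 value is arbitrary):
invoke-sqlcmd -inputfile "C:\users\security.sql" -serverinstance $server -database $database -verbose *>&1 |
    out-file C:\users\output.sql -Width 4096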

PowerShell script using Invoke-Sqlcmd, needed for a SQL job; the SQL Server version of PowerShell is somewhat crippled, is there a workaround?

Full question: I have a PowerShell script using Invoke-Sqlcmd via snap-ins, and I need it to run in a SQL job. The SQL Server version of PowerShell is somewhat crippled; does anyone know a workaround?
From what I have gathered, SQL Server Management Studio's version of PowerShell is underpowered and does not allow the use of snap-ins, so it does not recognize the cmdlets I used in the script. I have tried running it in the job as a command-line prompt rather than a PowerShell script, which causes the code to work somewhat; however, when I check the history on the job it says that invoke-sqlcmd is still not a recognized cmdlet. I speculate that because I am running the code on a remote server with credentials different from my standard ones, my profile with the snap-ins preloaded isn't being loaded, though this is somewhat doubtful.
Also, as I am a PowerShell rookie, any advice on better coding practices/streamlining my code would be much appreciated!
Code is as follows:
# define parameters
param
(
$file = "\\server\folder\file.ps1"
)
"invoke-sqlcmd -query """ | out-file "\\server\folder\file.ps1"
# retrieve set of table objects
$path = invoke-sqlcmd -query "select TableName from table WITH (NoLock)" -database db -server server
[reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")
$so = New-Object Microsoft.SqlServer.Management.Smo.ScriptingOptions
$so.DriPrimaryKey = $false
$so.Nocollation = $true
$so.IncludeIfNotExists = $true
$so.NoIdentities = $true
$so.AnsiPadding = $false
# script each table
foreach ($table in $path)
{
    #$holder = $table
    $table = get-item sqlserver:\sql\server\default\databases\database\tables\dbo.$($table.TableName)
    $table.script($so) | out-file -append $file
}
(get-content "\\server\folder\file.ps1") -notmatch "ANSI_NULLS" | out-file "\\server\folder\file.ps1"
(get-content "\\server\folder\file.ps1") -notmatch " AS "| out-file "\\server\folder\file.ps1"
(get-content "\\server\folder\file.ps1") -notmatch "Quoted_" | out-file "\\server\folder\file.ps1"
(get-content "\\server\folder\file.ps1") -replace "\) ON \[PRIMARY\].*", ")" | out-file "\\server\folder\file.ps1"
(get-content "\\server\folder\file.ps1") -replace "\[text\]", "[nvarchar](max)" | out-file "\\server\folder\file.ps1"
(get-content "\\server\folder\file.ps1") -replace " SPARSE ", "" | out-file "\\server\folder\file.ps1"
(get-content "\\server\folder\file.ps1") -replace "COLUMN_SET FOR ALL_SPARSE_COLUMNS", "" | out-file "\\server\folder\file.ps1"
""" -database database -server server" | out-file "\\server\folder\file.ps1" -append
So I figured out the answer to my own question. Using these sites: http://www.mssqltips.com/tip.asp?tip=1684 and
http://www.mssqltips.com/tip.asp?tip=1199
I figured out that the author was able to do this using a SQL Server Agent proxy, so I followed the yellow brick road. Basically I set up a proxy to my account and was able to use the external PowerShell through that feature. A note: you need to create a credential under the Security tab in Object Explorer before you can select one when creating the proxy. I ended up creating a proxy named powershell, using the PowerShell subsystem, and used my login info to create the credential. Voila!
You have to add the snap-ins each time. In your editor you likely already have them loaded from another script/tab/session. In SQL Server you will need to add something like this to the beginning of the script:
if ( (Get-PSSnapin -Name SqlServerProviderSnapin100 -ErrorAction SilentlyContinue) -eq $null )
{
    Add-PSSnapin SqlServerProviderSnapin100
}
if ( (Get-PSSnapin -Name SqlServerCmdletSnapin100 -ErrorAction SilentlyContinue) -eq $null )
{
    Add-PSSnapin SqlServerCmdletSnapin100
}
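These snap-ins belong to the SQL Server 2008-era tooling; on newer installations the same cmdlets ship as a module instead, so a version-tolerant sketch (which module is available depends on what's installed) would be:
# prefer the newer SqlServer module, fall back to the legacy SQLPS module
if (Get-Module -ListAvailable -Name SqlServer) {
    Import-Module SqlServer
}
elseif (Get-Module -ListAvailable -Name SQLPS) {
    Import-Module SQLPS -DisableNameChecking
}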
I'm not sure which error you are trying to work around - can you post it?
Have you tried this from a PowerShell prompt?
Add-PSSnapin SqlServerCmdletSnapin100
