I have a PowerShell script:
Invoke-Sqlcmd -ServerInstance "PRODUCTION" -Database "DATABASE" -InputFile "E:\DW_Exports\cdd.sql" | Export-Csv "E:\DW_Exports\Pearsonvue\CDD.csv" -NoTypeInformation
When I run this manually in the ISE, it works fine, no problems.
However, when I set it up as a SQL Agent job it just returns a blank file. No errors reported, it says it was successful, but all I end up with is a blank file.
I've tried the process with very simple queries (just changing the input file the PowerShell points to), and it works fine. So we can rule out SQL Server Agent access issues to the file location or running PowerShell. It just doesn't work for this specific query.
What's also odd is that sometimes after I run the job, if I try to run the PowerShell script manually it says I don't have access to the file location; once I delete the blank file, it works fine again.
Any ideas?
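In case it helps, one thing worth trying is making the Agent step surface whatever it is swallowing. This is only a sketch (same server, database and paths as above, adjust as needed): it moves off the SQLSERVER:\ provider that the Agent PowerShell subsystem starts in, forces Invoke-Sqlcmd errors to be terminating, and fails the step if nothing comes back instead of writing an empty file.

# Sketch: make failures visible in the job history instead of silently writing an empty CSV.
Set-Location E:\DW_Exports    # the Agent PowerShell subsystem starts in SQLSERVER:\

$rows = Invoke-Sqlcmd -ServerInstance "PRODUCTION" -Database "DATABASE" `
    -InputFile "E:\DW_Exports\cdd.sql" -ErrorAction Stop

if (-not $rows) { throw "cdd.sql returned no rows" }    # fail the step rather than write an empty file

$rows | Export-Csv "E:\DW_Exports\Pearsonvue\CDD.csv" -NoTypeInformation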
The process: an Azure agent that runs on a Windows 10 Pro 32-bit machine with SQL Server 2014 Express installed.
The pipeline is built and runs successfully with PowerShell scripts as follows:
Create blank database
Create tables needed
A C# application, executed via a PowerShell script, runs and populates the tables
Cross-reference tables to update the data needed
Build an SSIS package
After the result from the SSIS package is success, perform a backup
Command:
Backup-SqlDatabase -ServerInstance "$env:ComputerName" -Database "RealDB" -BackupAction Database -BackupFile $Path -BlockSize 4096
This all works, with one exception: the backup I get is missing the data from the SSIS package run. If I log into the machine and restore the backup from $Path, that data is missing.
When I query the database after this process, the data is there.
There is only one database, so it's not backing up a different one.
I can run this same command in PowerShell on the machine, and my backup has the data that the backup produced by the agent's PowerShell command does not.
Also, interestingly enough, if I remove -BlockSize 4096 it works as I expect and the backup has the data in it. I am considering abandoning the PowerShell approach because of this, but thought I would ask whether anyone else has experienced it.
Any help or thoughts are appreciated.
Thank you
Thank you @user19702. I was so consumed looking at the backup command with the added -BlockSize that I completely ignored the fact that my data had increased (thanks, random process I had never heard of before today), and even though the PowerShell is written to start the backup AFTER the SSIS package, that process was not actually finished. To find it, I opened Task Manager on the machine while the build was running and watched the process stay in memory for a few seconds after the backup started. I added a PowerShell command to make it wait a few seconds before starting the backup, and it's working.
In case anyone is wondering, this is the command:
$result = $package.Execute("false", $null)
Write-Host "Package ID Result: " $result
Start-Sleep -Seconds 10
Backup-SqlDatabase -ServerInstance "$env:ComputerName" -Database "RealDB" -BackupAction Database -BlockSize 4096 -BackupFile $Path
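If the fixed sleep ever turns out to be flaky, a variation is to wait for the worker process to exit instead of guessing at a duration. Only a sketch: "ISServerExec" is a placeholder for whatever process you watched in Task Manager.

# Sketch: wait for the SSIS worker process to exit instead of sleeping a fixed number of seconds.
while (Get-Process -Name "ISServerExec" -ErrorAction SilentlyContinue) {
    Start-Sleep -Seconds 2
}
# ...then run Backup-SqlDatabase exactly as above.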
Thank You!!
I have a fairly straightforward PowerShell script that is run as part of a Bamboo deployment, and it includes a call to Invoke-DbaQuery to run SQL scripts against a database.
After an incident where a bad script left our deployment hanging for hours, I am attempting to implement a timeout, which I would expect to cause the Invoke-DbaQuery call to fail, and thus the script to fail, and likewise the Bamboo deployment, after a set amount of time.
However, PowerShell seems to be ignoring the -QueryTimeout parameter of Invoke-DbaQuery. Running the following commands, which call an intentionally long-running script with an arbitrarily short timeout, the query continues to execute hours after the timeout.
To eliminate variables, I am testing this directly in PowerShell on the server hosting the SQL instance.
$path = "c:\somescript.sql"
$server = "localhost"
Invoke-DbaQuery -file $path -SqlInstance $server -Database MyDatabase -QueryTimeout 30
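For anyone else stuck on this, one way to guarantee the deployment fails after a set time, whether or not -QueryTimeout is honoured, is to run the call in a background job and enforce the timeout yourself. This is just a sketch using the same $path/$server variables and an arbitrary 30-second limit; note the query itself may keep running on the server, but the Bamboo step stops waiting.

# Sketch: enforce a hard client-side timeout around Invoke-DbaQuery with a background job.
$job = Start-Job -ScriptBlock {
    param($path, $server)
    Invoke-DbaQuery -File $path -SqlInstance $server -Database MyDatabase
} -ArgumentList $path, $server

if (-not (Wait-Job $job -Timeout 30)) {
    Stop-Job $job
    Remove-Job $job -Force
    throw "Query exceeded the 30 second timeout"    # fails the script, and with it the deployment
}

Receive-Job $job
Remove-Job $job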
I am trying to create a PowerShell script to automate a very simple process; however, I cannot get much (if anything) to work. The documentation is either not what I need, outdated, or conflicting.
I've had a few variations of this:
$SQLConnection = New-Object System.Data.SQLClient.SQLConnection
$SQLConnection.ConnectionString = "Data Source=.\SQL2016;Initial Catalog=TEST;Trusted_Connection=true;"
$SQLConnection.Open()
$Cmd = new-object system.Data.SqlClient.SqlCommand($SQLConnection)
Invoke-Sqlcmd -InputFile "C:\dev\test\script.sql" | Out-File -filePath "C:\dev\test\output.sql"
$SQLConnection.Close()
I've not managed to connect to the database.
The idea being, script.sql spits out a bunch of SQL (this works fine) which we will put into source control. Once in source control, a Jenkins job will do something with it.
I'm trying to keep this as basic as possible; no flexibility is needed other than different connection strings. I want to avoid using PSSQL if possible: a user throws in their connection string (the database will be the name), runs the script, job done.
Can anyone point me in the right direction?
System.Data.SQLClient is only necessary if you don't have the SqlServer module installed or are doing something unusual. It's a much more verbose method.
You just need:
Import-Module SqlServer;
Invoke-Sqlcmd -ServerInstance '.\SQL2016' -Database 'TEST' -InputFile 'C:\dev\test\script.sql' |
Out-File -FilePath "C:\dev\test\output.sql"
However... your output file isn't really an .sql file unless the queries in script.sql are actually returning strings that should be executed as SQL. It should probably be a .txt file.
And depending on what exactly you're generating, you might want to consider Export-Csv -Path "C:\dev\test\output.csv" -NoTypeInformation instead of Out-File. I can't tell if you're trying to export data or just logging information.
Additionally, you'll need to make sure that you're not using the batch separator (GO) in script.sql, or relying on any "SQLCMD Mode" (as it's called in SQL Server Management Studio) or other sqlcmd.exe specific syntax. The Invoke-Sqlcmd doc outlines the differences between the two. If you don't know what that is, you're probably safe.
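Putting it together with the "user throws in their connection details" requirement, a minimal sketch (the parameter names and defaults are just placeholders) might look like:

# Sketch: let the caller supply the instance and database; everything else stays fixed.
param(
    [string]$ServerInstance = '.\SQL2016',
    [string]$Database       = 'TEST'
)

Import-Module SqlServer

Invoke-Sqlcmd -ServerInstance $ServerInstance -Database $Database -InputFile 'C:\dev\test\script.sql' |
    Out-File -FilePath 'C:\dev\test\output.sql'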
I'm trying to write a PowerShell script which will execute T-SQL queries against a single remote server using Invoke-Sqlcmd. Those T-SQL queries are simple ones like backing up/restoring a database, creating a user, etc.
Below is an extract of it:
# 5 # Create clientdb database on secondary server by restoring the full backup for primary
Try {
    Invoke-Sqlcmd -ServerInstance 'REMOTESQLSRV' `
        -Username 'ts_sql' -Password 'somepassword' `
        -InputFile "$LScltid\__01_On_Secondary_CreateDB2_srv2.sql" `
        -ErrorAction Stop
    Write-Host " clt_$id is now restored to secondary server " `
        -ForegroundColor White -BackgroundColor Green
} Catch {
    Write-Host " Restore operation for clt_$id did not succeed. Check the error logs " -ForegroundColor Black -BackgroundColor Red
}
My script always breaks here. For some reason I cannot put my finger on, Invoke-Sqlcmd does not use the variable "$LScltid" to resolve the path where it will find the .sql script.
Every time I run it, the current location changes to the SQLSERVER:\ provider or some other path, causing the script to fail at this step.
Am I doing this the right way? If so, how should I adapt the command to perform as I expect it to?
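For what it's worth, a way to take the current location out of the equation entirely (just a sketch, reusing the same variable and file names as above) is to build and verify the full path before handing it to Invoke-Sqlcmd:

# Sketch: resolve the script path up front so the current provider/location doesn't matter.
$sqlFile = Join-Path $LScltid '__01_On_Secondary_CreateDB2_srv2.sql'
if (-not (Test-Path -LiteralPath $sqlFile)) {
    throw "Script not found: $sqlFile"
}

Invoke-Sqlcmd -ServerInstance 'REMOTESQLSRV' -Username 'ts_sql' -Password 'somepassword' `
    -InputFile $sqlFile -ErrorAction Stop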
UPDATE
Forgot to mention: if I run the script with the variable values hard-coded, I'm able to get the result I need (in this case restoring a database from a device).
Thanks for your feedback.
Odd thing: the command is now working. I don't really know what I did wrong previously, but the exact same command now works.
Just a heads up for those facing the same issue:
If you have to use Invoke-Sqlcmd in your scripts, beware of the provider change, especially if the commands coming after Invoke-Sqlcmd are regular ones (get, set, copy, new, etc.).
In my case, somewhere in my script between two Invoke-Sqlcmd commands I had to copy files from the local machine to a remote server. Every time, the command failed because the provider had changed. As a workaround I ran Set-Location before the Copy-Item command, and that manoeuvre seemed to do the trick (I don't know if it's recommended, though).
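Another option along the same lines (just a sketch) is to bracket each Invoke-Sqlcmd call with Push-Location/Pop-Location, so whatever the cmdlet does to the provider, you always land back where you started:

# Sketch: preserve the current location across the provider switch caused by Invoke-Sqlcmd.
Push-Location
try {
    Invoke-Sqlcmd -ServerInstance 'REMOTESQLSRV' -Username 'ts_sql' -Password 'somepassword' `
        -InputFile "$LScltid\__01_On_Secondary_CreateDB2_srv2.sql" -ErrorAction Stop
}
finally {
    Pop-Location    # back on the original filesystem path, so Copy-Item and friends behave normally
}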
Thanks Stackoverflow Team
I am currently trying to run a PowerShell script through SQL Server Agent, which completes its task without error if I run it through the PowerShell ISE on the desktop. The simple script is below (it's only being used for testing):
$test = "G:\test.txt"
if (Test-Path $test)
{
    Remove-Item $test
}
When I run this through SQL Server Agent, it produces a successful output - no errors whatsoever - but the job history log shows that it's being run as a different user, for instance domain\localmachine, whereas when I run the script through the PowerShell ISE, it shows domain\you.
As a note, I can't confirm this manually: I tried to run the script below both locally and through a SQL Server Agent job to compare the output, but the job failed (which is why I suspect it's a user issue). So I'm trusting SQL Server Agent that domain\localmachine is running the job (and that this is why it won't delete the file).
([Environment]::UserDomainName + "\" + [Environment]::UserName) | out-file pssaved.txt
"$env:userdomain\$env:username" | out-file -append pssaved.txt
[Security.Principal.WindowsIdentity]::GetCurrent().Name | out-file -append pssaved.txt
## Locally this produces domain\you
## On SQL Server Agent, I receive the error: The error information returned by PowerShell is: 'SQL Server PowerShell provider error: Path SQLSERVER:\pssaved.txt does not exist. Please specify a valid path.'
Is there a way, through SQL Server Agent, to run a job as my domain user, for instance domain\you instead of domain\localmachine (at the very least, this would eliminate that possibility of error)?
You can use a SQL Server Agent proxy for this, so the job step runs under your domain account instead of the Agent service account.
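Roughly, the setup is: create a credential holding the domain account's password, create an Agent proxy on top of it, and grant that proxy to the PowerShell subsystem; the proxy then appears in the job step's "Run as" drop-down. A sketch only - the credential, proxy, server, and account names are placeholders, and the T-SQL is simply wrapped in Invoke-Sqlcmd here:

# Sketch: create a credential for domain\you and expose it as a proxy for the PowerShell subsystem.
$setup = @"
CREATE CREDENTIAL [PSProxyCred] WITH IDENTITY = N'domain\you', SECRET = N'<password>';

EXEC msdb.dbo.sp_add_proxy
    @proxy_name = N'PowerShellProxy',
    @credential_name = N'PSProxyCred',
    @enabled = 1;

EXEC msdb.dbo.sp_grant_proxy_to_subsystem
    @proxy_name = N'PowerShellProxy',
    @subsystem_name = N'PowerShell';
"@

Invoke-Sqlcmd -ServerInstance 'YOURSERVER' -Query $setup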