I am working on updating an agent job based on certain strings found in its job steps. I can see SMO finding the string and displaying the replacement object in memory, but when I try to alter the output of
$AgentJob = Get-SqlAgentjob -ServerInstance $InstanceName | where Name -Like "somestring*"
it doesn't actually update the agent job steps.
Foreach ($steps in $AgentJob.jobsteps)
{
    $steps.Command -Replace("CurrentString1","$NewString2")
    $steps.Alter()
    $steps.Command -Replace("CurrentString2","$NewString2")
    $steps.Alter()
    $steps.Command -Replace("CurrentString3","$NewString3")
    $steps.Alter()
}
You're not actually updating the command text, you're just outputting it.
Try
$steps.Command = $steps.Command -Replace 'CurrentStringX', $NewStringX
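Put together, the loop would look something like this (a sketch only; I'm assuming the intended mapping is CurrentString1/2/3 to $NewString1/2/3):
Foreach ($steps in $AgentJob.jobsteps)
{
    # Assign the result of -Replace back to the Command property, then persist it with Alter()
    $steps.Command = $steps.Command -Replace 'CurrentString1', $NewString1
    $steps.Command = $steps.Command -Replace 'CurrentString2', $NewString2
    $steps.Command = $steps.Command -Replace 'CurrentString3', $NewString3
    $steps.Alter()
}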
Let's just say I have a table A with some data in it in SSMS. There are sub-objects such as columns, constraints, triggers, indexes, statistics, etc.
I want to create a similar table with the same properties as table A. I know I can go to Script Table As -> Create To -> New Query Window to duplicate the table structure.
However, after doing that, I realized the statistics on my new table are empty even though there are statistics on table A. Did I miss something?
You can script the statistics blob only with the following bit of PowerShell (which I yoinked from an old blog post of mine):
# Load the sqlps module; pushd/popd preserve the current location, which importing sqlps changes
pushd;
import-module sqlps -disablenamechecking;
popd;

# Scripting options: include the optimizer data (the statistics histogram) in the output
$opts = new-object Microsoft.SqlServer.Management.SMO.ScriptingOptions;
$opts.OptimizerData = $true;

# Connect to the local default instance and pick the database
$server = new-object Microsoft.SqlServer.Management.SMO.Server ".";
$database = $server.Databases["AdventureWorks2008R2"];

# Script every statistic on every table
foreach ($table in $database.Tables) {
    foreach ($stat in $table.Statistics) {
        $stat.Script($opts);
    }
}
The above will script out all statistics (including the histogram data) for all tables in the AdventureWorks2008R2 database. You should be able to tailor it to your needs.
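For example, to limit it to a single table (the dbo.TableA name below is just a placeholder), you could index into the Tables collection instead of looping over all of them:
# Script statistics for one table only; the table and schema names are placeholders
$table = $database.Tables["TableA", "dbo"];
foreach ($stat in $table.Statistics) {
    $stat.Script($opts);
}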
What I'm trying to achieve is finding out how long the retention period of a database backup is, using the DATEDIFF function.
But in order to use DATEDIFF I need something to compare, namely the data from the result, because I don't know of it being available anywhere else.
Why from a result?
I found out that this command gives me all the info I need to accomplish my task (BackupFinishDate, ExpirationDate):
RESTORE HEADERONLY FROM DISK = 'X:\Backups\Backuptest.bak'
I'm pretty sure I'm not allowed to create temp tables in production servers, so if this is one option, I'm afraid I can't use that.
PS! If there's a better way to find out the retention days of a backup, I'd happily use that. If it's possible in PowerShell, that would be even better.
Well, it seems I answered my own question with the PowerShell hint I gave myself :P
Solution was:
$bkp_start = Invoke-Sqlcmd -ServerInstance myServer -Query "RESTORE HEADERONLY FROM DISK = 'X:\Backups\Backuptest.bak'" | Select-Object -ExpandProperty BackupFinishDate
$bkp_end = Invoke-Sqlcmd -ServerInstance myServer -Query "RESTORE HEADERONLY FROM DISK = 'X:\Backups\Backuptest.bak'" | Select-Object -ExpandProperty ExpirationDate
$RetentionInDays = New-TimeSpan -Start $bkp_start -End $bkp_end | Select-Object -ExpandProperty Days
Write-Output "Retention period : $RetentionInDays"
I love PowerShell.. looks a bit clunky, but it works and I don't know a better way at this time.
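If you want to skip the second round trip, a variant that reads the backup header only once should also work (an untested sketch, assuming the file contains a single backup set):
$header = Invoke-Sqlcmd -ServerInstance myServer -Query "RESTORE HEADERONLY FROM DISK = 'X:\Backups\Backuptest.bak'"
$RetentionInDays = (New-TimeSpan -Start $header.BackupFinishDate -End $header.ExpirationDate).Days
Write-Output "Retention period : $RetentionInDays"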
I'm trying to get all users that have been disabled in my domain and put them into a SQL table. I'm trying to use SSIS to do that. I can grab the right output and put it into a CSV file using this code:
Search-ADAccount -AccountDisabled -UsersOnly |
Select Name |
Export-CSV -Path C:\Users\hou\Downloads\Test.csv
But since I'm going to run the package on different servers, I can't rely on a fixed location to store the file and load it into the SQL table. So I either need to use a variable in the Execute Process Task (where I run the PowerShell script) to hold the CSV file path, or get SSIS to store the output directly in the SQL table.
But I don't know how to do either of those. How can I do that?
The location should be defined in the script, e.g.:
$Path = Get-Location
"$Path\Test.csv"
## Option 1: just the Name
$Path = Get-Location ; Get-ChildItem C:\temp | Select-Object Name | Export-CSV -Path "$Path\Test.csv"
## Option 2: Name, with another property
$Path = Get-Location ; Get-ChildItem C:\temp | Select-Object Name,Mode | Export-CSV -Path "$Path\Test.csv"
For a one-liner script, use ";" to separate the commands.
I would suggest loading the data into SQL using PowerShell. There is a free PowerShell module from Microsoft called "sqlserver" that allows PowerShell to talk directly to SQL. You may already have it installed.
## Check if installed:
Get-InstalledModule -Name SqlServer
## If installed you can Update (you may need to run as local admin):
Update-Module -Name SqlServer
## If not installed (only need admin if you want all users to have access to this module):
Install-Module SqlServer
Once the module is installed there is a cmdlet called "Write-SqlTableData" to bulk copy data into SQL. The assumptions are: (1) the table already exists in SQL; (2) all the columns in the PowerShell Select match the order and data types of the columns in the SQL table (in this case, a SQL table with a single [Name] column); (3) you are using your AD credentials to access SQL; if not, you will have to add credentials to the cmdlet.
The actual PowerShell code (update the variables in the quotes):
## Input Variables
$SqlServerName = ""
$SqlDatabaseName = ""
$SqlTableSchemaName = ""
$SqlTableName = ""
## Insert into SQL
Search-ADAccount -AccountDisabled -UsersOnly | Select-Object Name | Write-SqlTableData -ServerInstance $SqlServerName -DatabaseName $SqlDatabaseName -SchemaName $SqlTableSchemaName -TableName $SqlTableName -Force
As a side note, if you plan on doing a lot of PowerShell/SQL work, you may want to also install the "WFTools" module as it also has many additional SQL cmdlets.
Hope this helps.
I am writing a Powershell script that does several things with a local SQL Server database.
One thing I am doing is running several SQL jobs, one after another. I run them like this:
sqlcmd -S .\ -Q "EXECUTE msdb.dbo.sp_start_job @job_name = 'Rebuild Content Asset Relationship Data'"
Is there a way to get Powershell to delay running the next job until the first one is completed?
Thanks
To get access to SQL Agent Jobs from PowerShell you can use SMO:
EDIT: Thinking about efficiency, if you are going to add this function to your script I would take the SMO loading out and place it near the top of your script (prior to this function). It will probably slow your script down if the assembly is reloaded every time you call the function.
Function Get-SQLJobStatus
{
    param ([string]$server, [string]$JobName)

    # Load the SMO assembly
    [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null

    # Create object to connect to the SQL instance
    $srv = New-Object "Microsoft.SqlServer.Management.Smo.Server" $server

    if ($JobName) # used to allow more than one job name to be passed to the function
    {
        foreach ($j in $JobName)
        {
            $srv.JobServer.Jobs | where {$_.Name -match $j} | Select Name, CurrentRunStatus
        }
    }
    else # display all jobs for the instance
    {
        $srv.JobServer.Jobs | Select Name, CurrentRunStatus
    }
} #end of Get-SQLJobStatus
Example of ways you could use this function:
#will display all jobs on the instance
Get-SQLJobStatus MyServer
#pipe in more than one job to get status
"myJob","myJob2" | foreach {Get-SQLJobStatus -Server MyServer -JobName $_}
#get status of one job
Get-SQLJobStatus -Server MyServer -JobName "MyJob"
You could utilize this function in your script and just repeatedly call it in a while loop or something until your job status shows "Idle". At least in my head that is what I think could work :)
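For example, a rough polling loop (just a sketch; it assumes a single matching job called "MyJob" and a 10-second poll interval) could look like this:
do {
    Start-Sleep -Seconds 10
    # Re-query the job until it reports Idle again
    $status = Get-SQLJobStatus -Server "MyServer" -JobName "MyJob"
} while ($status.CurrentRunStatus -ne "Idle")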
Sure, execute a Start-Sleep -seconds <nn> between invocations of sqlcmd.exe.
My suggestion would be to wrap your job in a new sproc that starts the job then waits for it to finish by continually polling its status. From the attached article, you can do something like this:
DECLARE @JobStatus INT
SET @JobStatus = 0

EXEC MSDB.dbo.sp_start_job @Job_Name = 'JobName'

SELECT @JobStatus = current_execution_status FROM OPENROWSET('SQLNCLI', 'Server=localhost;Trusted_Connection=yes;',
    'EXEC MSDB.dbo.sp_help_job @job_name = ''JobName'', @job_aspect = ''JOB''')

WHILE @JobStatus <> 4
BEGIN
    SELECT @JobStatus = current_execution_status FROM OPENROWSET('SQLNCLI', 'Server=localhost;Trusted_Connection=yes;',
        'EXEC MSDB.dbo.sp_help_job @job_name = ''JobName'', @job_aspect = ''JOB''')
END
Then, rather than calling sp_start_job from the command line, call your sproc from the command line and PowerShell will be blocked until that sproc finishes. Hope this helps!
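From PowerShell, the call would then be along these lines (usp_StartJobAndWait is only an illustrative name for the wrapper sproc above):
sqlcmd -S .\ -Q "EXECUTE msdb.dbo.usp_StartJobAndWait"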
Your job is executed by SQL Server Agent. Either you call the corresponding stored proc (or SQL statements) directly from your PowerShell script, or you implement something along the lines of what is explained here.
How can I save the results from a Powershell command to a MS SQL DB?
For example:
I run this at a Powershell prompt: get-psdrive and it returns some results in a column view.
How can I take each element of the result and log it into a separate DB row/column?
I recommend saving the results of your command to a variable. Such as:
$drives = Get-PSDrive
The variable can be indexed like this:
First Element:
$drives[0]
Last Element:
$drives[-1]
You can iterate through each element with foreach:
foreach ($drive in $drives) {
# current drive is $drive
}
Or the ForEach-Object cmdlet:
$drives | ForEach-Object {
# current drive is $_
}
Now that you have the data to populate your table with, you are ready to connect to the database and perform the record inserts.
You can make use of the PowerShell SQL Server cmdlets or you can connect using .NET objects. Which version of SQL Server you have will drive your choice of which to use: SQL Server 2008 has PowerShell cmdlets, 2005 does not. There is a wealth of information about the SQL Server 2008 PowerShell integration here. For SQL Server 2005 you have some different options; the answer to this question provides a list of PowerShell options to use with SQL Server 2005.
More Info:
When PowerShell displays object information it uses a type system to selectively determine which properties of the object to display on the screen. Not all of the object's properties are displayed. PowerShell uses XML files, stored in the PowerShell directory, to determine which properties to display:
dir $PSHOME/*format* | select Name
The objects returned from Get-PsDrive are of type System.Management.Automation.PSDriveInfo. The file PowerShellCore.format.ps1xml tells the formatting engine which properties to display in the PowerShell window. It may well be that these are the exact properties you are looking for; however, many objects have additional properties that are not displayed. For example, an object of type System.IO.DirectoryInfo will not have all its properties displayed by default. You can view the rest of the object's properties using the Get-Member cmdlet, for example:
Get-Item $env:windir | Get-Member
This will show all of the object's methods and properties. You can also view all of the object's properties using the Select-Object cmdlet with a wildcard for the Property parameter:
Get-Item $env:windir | Select-Object -Property *
To access an object's property values, use the following syntax:
$objectVariable.ObjectProperty
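For instance, with the $drives variable captured earlier:
# Root of the first drive returned by Get-PSDrive
$drives[0].Root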
Now that you know how to view an object's properties and access their values, you'll need to use this to construct an INSERT SQL statement. Here is an example using the Invoke-Sqlcmd cmdlet provided with SQL Server 2008.
Invoke-Sqlcmd -ServerInstance $env:COMPUTERNAME -Database Test -Query "Insert MyTable values ('a', 'b')"
Here's an example looping through the objects returned from Get-PsDrive, assuming you have a table called MyTable with at least two columns that accept textual data (a sample definition follows the loop):
Get-PsDrive | ForEach-Object {
$providerName = $_.Name
$providerRoot = $_.Root
Invoke-Sqlcmd -ServerInstance $env:COMPUTERNAME -Database Test -Query "Insert MyTable values ('$providerName', '$providerRoot')"
}
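For reference, the MyTable assumed above could be as simple as this (an illustrative definition only; the column sizes are arbitrary):
Invoke-Sqlcmd -ServerInstance $env:COMPUTERNAME -Database Test -Query "CREATE TABLE dbo.MyTable (Name nvarchar(128), Root nvarchar(260))"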