SQL Trigger Fires Every 24 Hrs - sql-server

I want to make an MSSQL trigger that fires every day when the date changes.

For SQL Server Express editions, create an MS Windows job that starts sqlcmd (see https://technet.microsoft.com/en-us/library/ms165702(v=sql.105).aspx), which in turn runs an SQL script. Note that when sqlcmd is run from the command line, it uses the OLE DB provider.
How to create a sqlcmd job by using Windows Task Scheduler: https://support.microsoft.com/en-us/kb/2019698 . That article deals with a DB backup task; replace the SQL script at step A with the one you need and adjust the following steps accordingly.
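For illustration only (the instance name, file paths, database, and table below are placeholder assumptions, not from the question), the Task Scheduler action could run a command such as
sqlcmd -S .\SQLEXPRESS -E -i "C:\Scripts\daily_job.sql" -o "C:\Scripts\daily_job.log"
where daily_job.sql contains whatever should happen once the date changes, e.g.:
USE MyDB;
-- hypothetical example: record that the daily run happened
INSERT INTO dbo.DailyLog (RunDate) VALUES (SYSDATETIME());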

You have to schedule a job in SQL Server Agent that fires at the defined time, and put your query in the job (a scripted T-SQL equivalent is sketched after these steps):
1. Expand the SQL Server Agent node, right-click the Jobs node, and select 'New Job'.
2. In the 'New Job' window enter the name of the job and a description on the 'General' tab.
3. Select 'Steps' on the left-hand side of the window and click 'New' at the bottom.
4. In the 'Steps' window enter a step name and select the database you want the query to run against.
5. Paste the T-SQL command you want to run into the Command window and click 'OK'.
6. Click on the 'Schedule' menu on the left of the New Job window and enter the schedule information (e.g. daily and a time).
7. Click 'OK' - and that should be it.
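If you prefer to script the job instead of clicking through SSMS, a minimal T-SQL sketch using the msdb procedures would look roughly like this (the job name, command, and database are placeholders; adjust the schedule to your needs):
USE msdb;
EXEC dbo.sp_add_job @job_name = N'DailyMidnightJob';
EXEC dbo.sp_add_jobstep
     @job_name = N'DailyMidnightJob',
     @step_name = N'Run daily query',
     @subsystem = N'TSQL',
     @database_name = N'MyDB',
     @command = N'EXEC dbo.MyNightlyProc;';
EXEC dbo.sp_add_schedule
     @schedule_name = N'Daily at 00:01',
     @freq_type = 4,              -- daily
     @freq_interval = 1,          -- every day
     @active_start_time = 100;    -- HHMMSS, i.e. 00:01:00
EXEC dbo.sp_attach_schedule
     @job_name = N'DailyMidnightJob',
     @schedule_name = N'Daily at 00:01';
EXEC dbo.sp_add_jobserver @job_name = N'DailyMidnightJob';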

For that purpose you can use PowerShell and Task Scheduler. All actions below must be done on the machine where SQL Server is running.
First, create a .sql file with the batch to run. I call it my_batch.sql, e.g. with this inside:
USE [MyDB]
INSERT INTO [dbo].[test]
([id]
,[somevalue]
,[New Column]
,[NewColumn])
VALUES
(NEWID()
,'testing'
,'test'
,'just a test')
Do not use GO in this script! (GO is a batch separator understood by sqlcmd and SSMS, not by the SqlClient provider used below.)
Then create a .ps1 script to run that batch file (my_batch.ps1):
$conn=new-object System.Data.SqlClient.SQLConnection
$ConnectionString = "Server=(local)\SQLEXPRESS;Database=MyDB;Integrated Security=True;Connect Timeout=0"
$conn.ConnectionString=$ConnectionString
$conn.Open()
$fileToGetContent = 'D:\my_batch.sql'
$commandText = Get-Content -Path $fileToGetContent
$command = $conn.CreateCommand()
$command.CommandText = $commandText
$command.ExecuteNonQuery()
$conn.Close()
Then create a scheduled task. You can create it manually (here is a good sample) or via PowerShell (I prefer this way):
#Create a new trigger that fires daily at 00:01
$STTrigger = New-ScheduledTaskTrigger -Daily -At 00:01
#Name for the scheduled task
$STName = "Run SQL batch"
#Action the task will run
$STAction = New-ScheduledTaskAction -Execute "PowerShell.exe" -Argument "D:\my_batch.ps1"
#Configure when to stop the task and how long it can run for. In this example it does not stop on idle and uses the maximum possible duration by setting a timelimit of 0
$STSettings = New-ScheduledTaskSettingsSet -DontStopOnIdleEnd -ExecutionTimeLimit ([TimeSpan]::Zero)
#Configure the principal to use for the scheduled task and the level to run as
$STPrincipal = New-ScheduledTaskPrincipal -User "DOMAIN\user" -RunLevel "Highest"
#Register the new scheduled task
Register-ScheduledTask $STName -Action $STAction -Trigger $STTrigger -Principal $STPrincipal -Settings $STSettings

Related

Snowflake Task does not start running

The task should run at 2:27 am UTC, but it did not execute.
GRANT EXECUTE TASK ON ACCOUNT TO ROLE SYSADMIN;
CREATE or replace TASK TASK_DELETE3
WAREHOUSE = TEST
SCHEDULE = 'USING CRON 27 2 * * * UTC' as
CREATE OR REPLACE TABLE TEST2."PUBLIC"."DELETE"
CLONE TEST1."PUBLIC"."DELETE";
ALTER TASK TASK_DELETE3 RESUME;
The task shows [state] = started. Does anyone know why?
If the status shows that the task is started, that means it is enabled and will run at the scheduled times.
You can check the task history to see the previous runs and next run of the task using the following query:
select *
from table(information_schema.task_history(
task_name=>'TASK_DELETE3'));
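You can also check the task's current state and owning role directly (what task_history shows can likewise depend on the privileges of the role you are using):
SHOW TASKS LIKE 'TASK_DELETE3';
-- the "state" column shows whether the task is started or suspended,
-- and "owner" shows the role that owns it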
I was using a different role when I was checking the table in the database. The task successfully completed at the scheduled time.

How to use copy Storage Integration in a Snowflake task statement?

I'm testing Snowflake. To do this I created a Snowflake instance on GCP.
One of the tests is to try the daily load of data from a STORAGE INTEGRATION.
To do that I generated the STORAGE INTEGRATION and the stage.
I tested the copy:
copy into DEMO_DB.PUBLIC.DATA_BY_REGION from @sg_gcs_covid pattern='.*data_by_region.*'
and it all goes fine.
Now it's time to test the daily scheduling with the task statement.
I created this task:
CREATE TASK schedule_regioni
WAREHOUSE = COMPUTE_WH
SCHEDULE = 'USING CRON 42 18 9 9 * Europe/Rome'
COMMENT = 'Test Schedule'
AS
copy into DEMO_DB.PUBLIC.DATA_BY_REGION from @sg_gcs_covid pattern='.*data_by_region.*';
And I enabled it:
alter task schedule_regioni resume;
I got no errors, but the task doesn't load any data.
To resolve the issue I had to put the copy in a stored procedure and call the stored procedure instead of running the copy directly:
DROP TASK schedule_regioni;
CREATE TASK schedule_regioni
WAREHOUSE = COMPUTE_WH
SCHEDULE = 'USING CRON 42 18 9 9 * Europe/Rome'
COMMENT = 'Test Schedule'
AS
call sp_upload_c19_regioni();
The question is: is this the desired behavior, or an issue (as I suspect)?
Can someone give me more information about this?
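(The body of sp_upload_c19_regioni is not shown in the question; purely for reference, a minimal Snowflake Scripting sketch that just wraps the same COPY, assuming it does nothing else, could look like this:)
CREATE OR REPLACE PROCEDURE sp_upload_c19_regioni()
RETURNS VARCHAR
LANGUAGE SQL
AS
$$
BEGIN
  COPY INTO DEMO_DB.PUBLIC.DATA_BY_REGION
    FROM @sg_gcs_covid
    PATTERN = '.*data_by_region.*';
  RETURN 'copy completed';
END;
$$;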
I've just tried it (but with the storage integration and stage on AWS S3) and it works fine when using the COPY command directly in the SQL part of the task, without calling a stored procedure.
To start investigating the issue, I would check the following (for debugging, I would create the task with a schedule of every few minutes):
Check task_history and verify the executions:
select *
from table(information_schema.task_history(
scheduled_time_range_start=>dateadd('hour',-1,current_timestamp()),
result_limit => 100,
task_name=>'YOUR_TASK_NAME'));
If the previous step is successful, check copy_history and verify that the input file name, target table, and number of records/errors are the expected ones:
SELECT *
FROM TABLE (information_schema.copy_history(TABLE_NAME => 'YOUR_TABLE_NAME',
start_time=> dateadd(hours, -1, current_timestamp())))
ORDER BY 3 DESC;
Check whether the results are the same as those you get when the task with the stored-procedure call is executed.
Please also confirm that you are loading new files that have not yet been loaded into your table with the COPY command (otherwise you need to specify the FORCE = TRUE parameter in the COPY command, or remove the load metadata by truncating your target table, to reload the same files).
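For instance, forcing a re-load of the same files (table and stage names as in the question) would look like:
copy into DEMO_DB.PUBLIC.DATA_BY_REGION
  from @sg_gcs_covid
  pattern = '.*data_by_region.*'
  force = TRUE;   -- reloads files even if they are already recorded in the load metadata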

How do I get the result output from an SQL BACKUP command into a Delphi program?

The output of SQL commands that is visible to users who run them interactively in SQL Server Management Studio is different from the output you get back from executing an ADO command or ADO query object.
USE [DBNAME]
BACKUP DATABASE [DBNAME] TO
DISK = 'C:\SqlBackup\Backup.mdf'
The successful completion output is like this:
Processed 465200 pages for database 'DBNAME', file 'filename' on file 2.
Processed 2 pages for database 'DBNAME', file 'filename_log' on file 2.
BACKUP DATABASE successfully processed 465202 pages in 90.595 seconds (40.116 MB/sec).
When I execute either a TADOCommand or TADOQuery with the CommandText or SQL set as above, I do not get any such output. How do I read this "secondary output" from the execution of an SQL command? I'm hoping that perhaps via some raw ADO operations I might be able to execute a command and get back the information above for success, as well as any errors in performing an SQL backup.
Update: The answer below works better for me than my naive attempt (using plain Delphi TADOCommand and TADOConnection classes), which did not work:
1. Create TADOCommand and TADOConnection.
2. Execute the command.
3. Get the info messages back.
The problem I experienced in my own coding attempts is that my first command was "use dbname", and the only recordset I traversed in my code was the result of that "use dbname" command, not the second command I was executing. The accepted answer below traverses all recordsets that come back from executing the ADO command, and thus it works much better. Since I'm doing all this in a background thread, I actually think it's better to create the raw COM objects anyway and avoid any VCL entanglement in my thread. The code below could be a nice component; if anybody is interested, let me know and I might make an open-source "SQL Backup for Delphi" component.
Here is an example. I've tested it with Delphi 7 and MSSQL 2000. It adds all messages from the server to Memo1:
29 percent backed up.
58 percent backed up.
82 percent backed up.
98 percent backed up.
Processed 408 pages for database 'NorthWind', file 'Northwind' on file 1.
100 percent backed up.
Processed 1 pages for database 'NorthWind', file 'Northwind_log' on file 1.
BACKUP DATABASE successfully processed 409 pages in 0.124 seconds (26.962 MB/sec).
Also, if it takes a long time, consider implementing the WHILE loop outside of the main thread.
uses AdoInt, ComObj;
.....
procedure TForm1.Button1Click(Sender: TObject);
var
  cmd  : _Command;
  Conn : _Connection;
  RA   : OleVariant;
  rs   : _RecordSet;
  n    : Integer;
begin
  Memo1.Clear;
  Conn := CreateComObject(CLASS_Connection) as _Connection;
  Conn.ConnectionString := 'Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=NorthWind;Data Source=SQL_Server';
  Conn.Open(Conn.ConnectionString, '', '', Integer(adConnectUnspecified));
  cmd := CreateComObject(CLASS_Command) as _Command;
  cmd.CommandType := adCmdText;
  cmd.Set_ActiveConnection(Conn);
  cmd.CommandText := 'BACKUP DATABASE [NorthWind] TO DISK = N''c:\sql_backup\NorthWind'' WITH INIT , NOUNLOAD , NAME = N''NortWind backup'', NOSKIP , STATS = 10, NOFORMAT;';
  rs := cmd.Execute(RA, 0, Integer(adCmdText));
  // Walk every recordset returned by the command; the informational messages
  // (progress, pages processed) arrive through the connection's Errors
  // collection as each recordset is consumed.
  while (rs <> nil) do
  begin
    for n := 0 to (Conn.Errors.Count - 1) do begin
      Memo1.Lines.Add(Conn.Errors.Item[n].Description);
    end;
    rs := rs.NextRecordset(RA);
  end;
  cmd.Set_ActiveConnection(nil);
  Conn.Close;
  cmd := nil;
  Conn := nil;
end;
I found this thread (in Russian) about a stored procedure and adapted it for the BACKUP command.

PowerShell run SQL job, delay, then run the next one

I am writing a PowerShell script that does several things with a local SQL Server database.
One thing I am doing is running several SQL jobs, one after another. I run them like this:
sqlcmd -S .\ -Q "EXECUTE msdb.dbo.sp_start_job @job_name = 'Rebuild Content Asset Relationship Data'"
Is there a way to get PowerShell to delay running the next job until the first one has completed?
Thanks
To get access to SQL Agent Jobs from PowerShell you can use SMO:
EDIT: Thinking about efficiency, if you are going to add this function to your script I would take the SMO loading out and place it near the top of your script (prior to this function). Your script will probably slow down if the assembly is reloaded every time you call the function.
Function Get-SQLJobStatus
{
    param ([string]$server, [string]$JobName)

    # Load SMO assembly, and if we're running SQL 2008 DLLs load the SMOExtended and SQLWMIManagement libraries
    [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null

    # Create object to connect to SQL Instance
    $srv = New-Object "Microsoft.SqlServer.Management.Smo.Server" $server

    # used to allow piping of more than one job name to function
    if ($JobName)
    {
        foreach ($j in $JobName)
        {
            $srv.JobServer.Jobs | where {$_.Name -match $j} | Select Name, CurrentRunStatus
        }
    }
    else # display all jobs for the instance
    {
        $srv.JobServer.Jobs | Select Name, CurrentRunStatus
    }
} # end of Get-SQLJobStatus
Example of ways you could use this function:
#will display all jobs on the instance
Get-SQLJobStatus MyServer
#pipe in more than one job to get status
"myJob","myJob2" | foreach {Get-SQLJobStatus -Server MyServer -JobName $_}
#get status of one job
Get-SQLJobStatus -Server MyServer -JobName "MyJob"
You could utilize this function in your script and just repeatedly call it in a while loop or something until your job status shows "Idle". At least in my head that is what I think could work :)
Sure, execute a Start-Sleep -seconds <nn> between invocations of sqlcmd.exe.
My suggestion would be to wrap your job in a new sproc that starts the job then waits for it to finish by continually polling its status. From the attached article, you can do something like this:
DECLARE @JobStatus INT
SET @JobStatus = 0

EXEC MSDB.dbo.sp_start_job @Job_Name = 'JobName'

SELECT @JobStatus = current_execution_status
FROM OPENROWSET('SQLNCLI', 'Server=localhost;Trusted_Connection=yes;',
                'EXEC MSDB.dbo.sp_help_job @job_name = ''JobName'', @job_aspect = ''JOB''')

WHILE @JobStatus <> 4
BEGIN
    SELECT @JobStatus = current_execution_status
    FROM OPENROWSET('SQLNCLI', 'Server=localhost;Trusted_Connection=yes;',
                    'EXEC MSDB.dbo.sp_help_job @job_name = ''JobName'', @job_aspect = ''JOB''')
END
Then, rather than calling sp_start_job from the command line, call your sproc from the command line and PowerShell will be blocked until that sproc finishes. Hope this helps!
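To illustrate (the procedure name is a placeholder, and the job name is hard-coded as in the snippet above because OPENROWSET requires literal strings), the wrapper sproc could look roughly like this, with a short WAITFOR between polls:
CREATE PROCEDURE dbo.usp_StartJobAndWait
AS
BEGIN
    DECLARE @JobStatus INT = 0;

    EXEC MSDB.dbo.sp_start_job @job_name = 'JobName';

    WHILE @JobStatus <> 4   -- 4 = idle
    BEGIN
        WAITFOR DELAY '00:00:05';   -- avoid hammering msdb in a tight loop
        SELECT @JobStatus = current_execution_status
        FROM OPENROWSET('SQLNCLI', 'Server=localhost;Trusted_Connection=yes;',
                        'EXEC MSDB.dbo.sp_help_job @job_name = ''JobName'', @job_aspect = ''JOB''');
    END
END
From PowerShell you would then run: sqlcmd -S .\ -Q "EXEC dbo.usp_StartJobAndWait"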
Your job is executed by SQL Server Agent. Either you call the corresponding stored proc (or SQL statements) directly from your PowerShell script, or you implement something along the lines of what is explained here.

Waiting for DB restore to finish using sqlalchemy on SQL Server 2008

I'm trying to automate my db restores during development, using TSQL on SQL Server 2008, using sqlalchemy with pyodbc as a transport.
The command I'm executing is:
"""CREATE DATABASE dbname
restore database dbname FROM DISK='C:\Backups\dbname.bak' WITH REPLACE,MOVE 'dbname_data' TO 'C:\Databases\dbname_data.mdf',MOVE 'dbname_log' TO 'C:\Databases\dbname_log.ldf'"""
Unfortunately, in SQL Server Management Studio, after the code has run, I see that the DB remains in the "Restoring..." state.
If I restore through Management Studio, it works. If I use subprocess to call "sqlcmd", it works. pymssql has problems with authentication and doesn't even get that far.
What might be going wrong?
The BACKUP and RESTORE statements run asynchronously from the client's point of view, so your code moves on to the next statement before they have finished.
Using a while statement as described at http://ryepup.unwashedmeme.com/blog/2010/08/26/making-sql-server-backups-using-python-and-pyodbc/ solved this for me:
# setup your DB connection, cursor, etc
cur.execute('BACKUP DATABASE ? TO DISK=?',
['test', r'd:\temp\test.bak'])
while cur.nextset():
pass
I was unable to reproduce the problem restoring directly from pyodbc (without sqlalchemy) by doing the following:
connection = pyodbc.connect(connection_string) # ensure autocommit is set to `True` in connection string
cursor = connection.cursor()
affected = cursor.execute("""CREATE DATABASE test
RESTORE DATABASE test FROM DISK = 'D:\\test.bak' WITH REPLACE, MOVE 'test_data' TO 'D:\\test_data.mdf', MOVE 'test_log' to 'D:\\test_log.ldf' """)
while cursor.nextset():
pass
Some questions that need clarification:
What is the code in use to do the restore using sqlalchemy?
What version of the SQL Server ODBC driver is in use?
Are there any messages in the SQL Server log related to the restore?
Thanks to geographika for the Cursor.nextset() example!
For SQLAlchemy users, and thanks to geographika for the answer: I ended up using the “raw” DBAPI connection from the connection pool.
It is exactly like geographika's solution but with a few additional pieces:
import logging

import sqlalchemy as sa

# logger used in the except clause below
logger = logging.getLogger(__name__)
driver = 'SQL+Server'
name = 'servername'
sql_engine_str = 'mssql+pyodbc://'\
+ name\
+ '/'\
+ 'master'\
+ '?driver='\
+ driver
engine = sa.create_engine(sql_engine_str, connect_args={'autocommit': True})
connection = engine.raw_connection()
try:
cursor = connection.cursor()
sql_cmd = """
RESTORE DATABASE [test]
FROM DISK = N'...\\test.bak'
WITH FILE = 1,
MOVE N'test'
TO N'...\\test_Primary.mdf',
MOVE N'test_log'
TO N'...\\test_log.ldf',
RECOVERY,
NOUNLOAD,
STATS = 5,
REPLACE
"""
cursor.execute(sql_cmd)
while cursor.nextset():
pass
except Exception as e:
logger.error(str(e), exc_info=True)
Five things fixed my problem with identical symptoms.
1. Found that my test.bak file contained the wrong mdf and ldf files:
>>> cursor.execute(r"RESTORE FILELISTONLY FROM DISK = 'test.bak'").fetchall()
[(u'WRONGNAME', u'C:\\Program Files\\Microsoft SQL ...),
(u'WRONGNAME_log', u'C:\\Program Files\\Microsoft SQL ...)]
2. Created a new .bak file and made sure to set the copy-only backup option (see the sketch after this list).
3. Set the autocommit option for my connection:
connection = pyodbc.connect(connection_string, autocommit=True)
4. Used the connection.cursor only for a single RESTORE command and nothing else.
5. Corrected the test_data MOVE to test in my RESTORE command (courtesy of @beargle):
affected = cursor.execute("""RESTORE DATABASE test FROM DISK = 'test.bak' WITH REPLACE, MOVE 'test' TO 'C:\\test.mdf', MOVE 'test_log' to 'C:\\test_log.ldf' """)
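For step 2, the copy-only backup can be produced with T-SQL along these lines (the database name and path are placeholders):
BACKUP DATABASE [test]
TO DISK = N'D:\test.bak'
WITH COPY_ONLY, INIT, STATS = 10;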
