I need your help with this: the SSIS Server Maintenance Job is failing.
This job cleans up system tables using the default stored procedures.
The code being run is built into SQL Server and works out of the box.
From what I can gather, it cleans up the log history of which packages have been run.
DECLARE @role int
SET @role = (SELECT [role] FROM [sys].[dm_hadr_availability_replica_states] hars
    INNER JOIN [sys].[availability_databases_cluster] adc ON hars.[group_id] = adc.[group_id]
    WHERE hars.[is_local] = 1 AND adc.[database_name] = 'SSISDB')
IF DB_ID('SSISDB') IS NOT NULL AND (@role IS NULL OR @role = 1)
    EXEC [SSISDB].[internal].[cleanup_server_retention_window]
It fails with this error:
Message:
Executed as user: ##MS_SSISServerCleanupJobLogin##. The DELETE
statement conflicted with the REFERENCE constraint
"FK_EventMessagecontext_Operations". The conflict occurred in database
"SSISDB", table "internal.event_message_context", column
'operation_id'. [SQLSTATE 23000] (Error 547).
There are suggestions online for dealing with this error, but I wasn't sure how best to apply them to a default procedure.
It worries me to amend something that Microsoft has built into the tool.
Yes, that's the default cleanup job that is created when SSIS is installed on SQL Server. Something may have happened to SSISDB; you should check whether SSIS packages are still running normally.
SSISDB is full of triggers, and the default cleanup job fires them when clearing old data. Sometimes it has trouble doing its job, especially when there is a lot of data to delete. At that point, you can help it by removing the data manually and more efficiently: https://www.timmitchell.net/post/2018/12/30/clean-up-the-ssis-catalog/
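For scale, the manual route can be a batched delete that honors the foreign key named in the error. This is a hedged sketch only: the table and column names are taken from the error message above, and the 30-day retention window is an assumption you should replace with your catalog's actual setting.

```sql
-- Sketch only: remove child rows in small batches before the cleanup job
-- touches internal.operations, so FK_EventMessageContext_Operations is satisfied.
DECLARE @cutoff datetime = DATEADD(DAY, -30, GETDATE());  -- assumed retention window

WHILE 1 = 1
BEGIN
    DELETE TOP (10000) emc
    FROM [SSISDB].[internal].[event_message_context] AS emc
    INNER JOIN [SSISDB].[internal].[operations] AS o
        ON emc.[operation_id] = o.[operation_id]
    WHERE o.[created_time] < @cutoff;

    IF @@ROWCOUNT = 0 BREAK;  -- nothing left in the window
END
```

Keeping the batches small limits transaction log growth and lock duration while the backlog is drained; once the child tables are clear, the built-in cleanup procedure should succeed again.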
I was running a SQL migration that first checked some conditions to determine whether it should execute, and I was surprised to discover that SQL Server failed the script even when it was not executed. What was even more surprising: only broken INSERTs into existing tables caused the failure, while broken SELECTs and INSERTs into non-existing tables did not cause any failures.
Here is a minimalistic example.
The test table creation script:
CREATE TABLE [dbo].[TableWithManyColumnsTheFirstIsGuid](
[Id] [uniqueidentifier] NOT NULL,
[Value] [int] NOT NULL
)
The test script:
IF 1 = 2 -- to never execute
BEGIN
INSERT INTO NoSuchTable VALUES ('failure');
SELECT * FROM NoSuchTable;
INSERT INTO TableWithManyColumnsTheFirstIsGuid VALUES ('failure');
END
You will receive the error "Column name or number of supplied values does not match table definition.", which is correct for the third command, because there is no way for SQL Server to insert a single string value into a table with a uniqueidentifier column and an int column.
Why does it fail at all, if that faulty INSERT statement would never be executed because of the IF condition?
If the answer is "because SQL Server validates commands in scripts even when not executing them", then why do the other two broken commands not cause it to fail?
This behavior can bite you hard when you have a migration that removes a column and a disabled old migration that inserts data into that column. The disabled migration will cause your entire migration script to fail. Of course, you can clean up your migrations, but that's manual work and not automation-friendly when using migration scripts generated by Entity Framework.
Is this behavior by design? Is it specified in ANSI SQL or is it Microsoft-specific?
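The asymmetry described above comes from deferred name resolution: at compile time, SQL Server validates statements against objects that already exist (so the INSERT into the existing table is column-checked and fails), while references to objects that do not exist are deferred until run time. If a guarded migration needs to survive compilation, one hedged workaround is to route the statement through dynamic SQL, which is not parsed against the table until it actually executes:

```sql
IF 1 = 2 -- still never executes
BEGIN
    -- Dynamic SQL is compiled only when EXEC runs, so this batch parses
    -- successfully even though the INSERT itself would be invalid.
    EXEC (N'INSERT INTO TableWithManyColumnsTheFirstIsGuid VALUES (''failure'');');
END
```

The trade-off is that dynamic SQL gives up compile-time checking entirely, so a genuinely wrong statement will only surface as a run-time error when the branch is taken.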
In my SQL Server 2012 database, I have a linked server reference to a second SQL Server database that I need to pull records from and update accordingly.
I have the following update statement that I am trying to run:
UPDATE
Linked_Tbl
SET
Transferred = 1
FROM
MyLinkedServer.dbo.MyTable Linked_Tbl
JOIN
MyTable Local_Tbl ON Local_Tbl.LinkedId = Linked_Tbl.Id
JOIN
MyOtherTable Local_Tbl2 ON Local_Tbl.LocalId = Local_Tbl2.LocalId
I had to stop it after an hour because it was still executing.
I've read online and found solutions stating that the best solution is to create a stored procedure on the Linked Server itself to execute the update statement rather than run it over the wire.
The problems I have are:
I don't have the ability to create any procedures on the other server.
Even if I could create that procedure, I would need to pass all the Ids to it for the update, and I'm not sure how to do that efficiently with thousands of Ids (though this is the smaller issue, since I can't create the procedure in the first place).
I'm hoping there are other solutions people may have managed to come up with given that it's often the case you don't have permissions to make changes to a different server.
Any ideas??
I am not sure whether it will give better performance, but you can try the following. (Note that OPENQUERY, not OPENDATASOURCE, is the construct that sends a pass-through query to an existing linked server; OPENDATASOURCE expects a provider connection string instead of a linked server name.)
UPDATE
Linked_Tbl
SET
Transferred = 1
FROM OPENQUERY(MyLinkedServer, 'select Id, LocalId, Transferred from remotedb.dbo.MyTable') AS Linked_Tbl
JOIN MyTable Local_Tbl
ON Local_Tbl.LinkedId = Linked_Tbl.Id
JOIN MyOtherTable Local_Tbl2
ON Local_Tbl.LocalId = Local_Tbl2.LocalId
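Another option worth trying, assuming "RPC Out" is enabled on the linked server: build the list of qualifying Ids locally and send one batched UPDATE to run entirely on the remote side, instead of letting the optimizer pull remote rows across the wire row by row. This is a sketch only; the object names are taken from the question, and FOR XML PATH is used for concatenation because STRING_AGG is not available on SQL Server 2012.

```sql
DECLARE @ids varchar(max), @sql nvarchar(max);

-- Comma-separated list of the linked Ids that qualify locally.
SELECT @ids = STUFF((
    SELECT ',' + CAST(Local_Tbl.LinkedId AS varchar(20))
    FROM MyTable Local_Tbl
    JOIN MyOtherTable Local_Tbl2 ON Local_Tbl.LocalId = Local_Tbl2.LocalId
    FOR XML PATH('')), 1, 1, '');

SET @sql = N'UPDATE dbo.MyTable SET Transferred = 1 WHERE Id IN (' + @ids + N');';

EXEC (@sql) AT MyLinkedServer;  -- the statement executes on the remote server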
Now that SQL Server 2016 allows SSISDB to be fully highly available, I have a question regarding the job setup.
When I create a SQL Agent job that executes an SSIS package deployed in SSISDB, should the server in the job step be the listener name or the physical host name?
I am asking because if I use the physical host name and create the job on both replicas, the secondary's jobs will always fail because the DB is in read-only mode. I haven't tried the listener name yet, because I wanted to get opinions first.
The server name should be the listener name. If you follow this approach, it is enough to deploy the job on one instance.
You can also use the physical host names and deploy the job on all instances, provided you add the piece of code below as the first step: a helper function, fn_hadr_group_is_primary.
USE master;
GO
IF OBJECT_ID('dbo.fn_hadr_group_is_primary', 'FN') IS NOT NULL
    DROP FUNCTION dbo.fn_hadr_group_is_primary;
GO
CREATE FUNCTION dbo.fn_hadr_group_is_primary (@AGName sysname)
RETURNS bit
AS
BEGIN;
    DECLARE @PrimaryReplica sysname;

    SELECT @PrimaryReplica = hags.primary_replica
    FROM sys.dm_hadr_availability_group_states hags
    INNER JOIN sys.availability_groups ag ON ag.group_id = hags.group_id
    WHERE ag.name = @AGName;

    IF UPPER(@PrimaryReplica) = UPPER(@@SERVERNAME)
        RETURN 1; -- primary

    RETURN 0; -- not primary
END;
This post also deals with some of the common issues that need to be taken care of:
https://blogs.msdn.microsoft.com/mattm/2012/09/19/ssis-with-alwayson/
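With that function in place, the first job step on each replica could look like the sketch below. The availability group name MyAG is a placeholder, and the step's On-failure action should be set to "Quit the job reporting success" so secondaries skip quietly instead of logging failures.

```sql
-- Skip the rest of the job when this replica is not the current primary.
IF master.dbo.fn_hadr_group_is_primary(N'MyAG') = 0
    RAISERROR('Not the primary replica - skipping package execution.', 16, 1);
```

After a failover, the job on the new primary passes this check automatically, so no manual intervention is needed.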
I need to execute SSRS reports from SSIS on a periodic schedule.
I saw a solution here:
https://www.mssqltips.com/sqlservertip/3475/execute-a-sql-server-reporting-services-report-from-integration-services-package/
But is there any other option in SSIS that doesn't use a Script Task? I don't quite understand the script, and I'm concerned it could become a support issue for me.
Database: SQL Server 2008 R2 Standard Edition
Any ideas? Thanks very much...
You can have SSIS control the running of an SSRS report via SQL Agent.
This assumes the SSIS job will have updated a control record or written some other identifiable record to a database.
1. Create a subscription for the report.
2. Run this SQL to get the GUID of the report subscription's schedule, which is also the name of the SQL Agent job:
SELECT c.Name AS ReportName
    , rs.ScheduleID AS JOB_NAME
    , s.[Description]
    , s.LastStatus
    , s.LastRunTime
FROM ReportServer..[Catalog] c
JOIN ReportServer..Subscriptions s ON c.ItemID = s.Report_OID
JOIN ReportServer..ReportSchedule rs ON c.ItemID = rs.ReportID
    AND rs.SubscriptionID = s.SubscriptionID
3. Create a SQL Agent job.
a. Step 1. A SQL statement that looks for a flagged record in a table, with the step's Advanced setting set to "On failure, quit the job reporting success":
IF NOT EXISTS (SELECT TOP 1 * FROM mytable WHERE mykey = 'x'
    AND mycondition = 'y') RAISERROR ('No Records Found', 16, 1)
b. Step 2:
USE msdb
EXEC sp_start_job @job_name = '1X2C91X5-8B86-4CDA-9G1B-112C4F6E450A'
Replace the GUID with the one returned by your GUID query.
One thing to note though ... once the report subscription has been fired, SQL Agent considers that step complete, even though the report has not necessarily finished running. I once had a clean-up job after the Exec step which deleted some of my data before the report reached it!
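One hedged way to guard against that race, assuming you can query the ReportServer database, is to record the subscription's LastRunTime before firing it and then poll until it advances. The GUID below is a placeholder for your SubscriptionID.

```sql
DECLARE @sub uniqueidentifier = '00000000-0000-0000-0000-000000000000';
DECLARE @before datetime, @after datetime;

SELECT @before = LastRunTime
FROM ReportServer.dbo.Subscriptions WHERE SubscriptionID = @sub;

-- (fire the subscription here, e.g. via the SQL Agent job started above)

WHILE 1 = 1
BEGIN
    WAITFOR DELAY '00:00:10';  -- poll every 10 seconds
    SELECT @after = LastRunTime
    FROM ReportServer.dbo.Subscriptions WHERE SubscriptionID = @sub;
    IF @after > @before BREAK;  -- the subscription has actually run
END
```

A production version would also want a timeout and a check of LastStatus, but this shows the idea: subsequent steps only start once the report run has been recorded.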
You can create a subscription for the report that is never scheduled to run.
If you have the Subscription ID, you can fire the report subscription using a simple SQL Task in SSIS.
You can get the Subscription ID from the Report Server database. It is in the Subscriptions table.
Use this query to help locate the subscription:
SELECT Catalog.Path
,Catalog.Name
,SubscriptionID
,Subscriptions.Description
FROM Catalog
INNER JOIN Subscriptions
ON Catalog.ItemID = Subscriptions.Report_OID
In SSIS, you can use this statement inside of a SQL Task to fire the subscription (AddEvent takes the Subscription ID as a string parameter):
EXEC ReportServer.dbo.AddEvent @EventType = 'TimedSubscription', @EventData = '[Your Subscription ID]'
Hope this helps.
I'm trying to rename a table using the following syntax:
sp_rename [oldname],[newname]
but any time I run this, I get the following (using Aqua Data Studio):
Command was executed successfully
Warnings: --->
W (1): The SQL Server is terminating this process.
<---
[Executed: 16/08/10 11:11:10 AM] [Execution: 359ms]
Then the connection is dropped (I can't do anything else in the current query analyser; each window has its own spid).
Do I need to be using master when I run these commands, or am I doing something else wrong?
You shouldn't be getting the behaviour you're seeing.
It should either raise an error (e.g. if you don't have permission) or work successfully.
I suspect something is going wrong under the covers.
Have you checked the errorlog for the ASE server? Typically these sorts of problems (connections being forcibly closed) will be accompanied by an entry in the errorlog with a little bit more information.
The error log will be on the host that runs the ASE server, and will probably be in the same location that ASE is installed into. Something like
/opt/sybase/ASE-12_5/install/errorlog_MYSERVER
Try to avoid using sp_rename: some references in system tables keep the old name, which may cause problems later if you forget about the change.
I suggest:
select * into table_backup from [tableRecent]
go
select * into [tableNew] from table_backup
go
drop table [tableRecent] -- skip this drop if you want to keep a fallback copy
go
drop table table_backup -- skip this drop if you want to keep a fallback copy
go
To do this, your database needs the "select into/bulkcopy/pllsort" option enabled.
If your data is huge, check the free space in that database first.
And enjoy :)