SSIS logging showing random behaviour while logging events to the dbo.sysssislog table - sql-server

I was planning to use SSIS logging to get task-level details (run duration, any error message thrown, the user who triggered the job) for my package.
SSIS was creating the dbo.sysssislog table under System Tables and it was working just fine. Suddenly it stopped creating the table under System Tables and started creating it under User Tables. It is also no longer logging some events that were logged previously when the table was created under System Tables, such as PackageStart and the User:PackageStart/User:PackageEnd events for some tasks.
Can anyone please guide me on what's going wrong here?

Whether the table shows under System Tables or User Tables is fairly meaningless, but if you want the table to show as before, mark it as a Microsoft-shipped object:
EXECUTE sys.sp_MS_marksystemobject 'sysssislog'
The way database logging works in the package deployment model is that SSIS will attempt to log to dbo.sysdtslog90/dbo.sysssislog (depending on your version), but if that table doesn't exist, it will create it for you. There is a copy of that table in the msdb catalog which is marked as a system object. When SSIS creates its own copy, it just has the DDL somewhere in the bowels of the code that does logging. You'll notice it also creates a stored procedure, sp_ssis_addlogentry, to assist in the logging.
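If you want to confirm how the objects are currently flagged, a quick check against the catalog views (assuming SQL Server 2005 or later) looks like this:
-- Check whether the log table and proc are marked as Microsoft-shipped
SELECT name, type_desc, is_ms_shipped
FROM sys.objects
WHERE name IN ('sysssislog', 'sp_ssis_addlogentry');
A value of 1 in is_ms_shipped is what makes SSMS group the table under System Tables.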
As for your observation of inconsistent logging behaviour, all I can say is I've never seen that. The only reason it won't log an event is if the event doesn't occur - either a precursor condition didn't happen or the package errored out. If you can provide a reproducible scenario where it does and then doesn't log events, I'll be happy to tell you why it does/doesn't do it.

Related

SSIS package error: single UPDATE in an Execute SQL Task

I am troubleshooting an error in a package.
Update MYTABLE for MYCOLUMN (REF to task name):Error: Executing the query "..." failed with the following error: "Invalid column name 'MYCOLUMN'.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
I have verified that the table and column exist, and the length of the field is far more than it needs: the value is 14 characters where the column is declared as varchar(250).
I have verified the script works on the server in SSMS outside of the context of the package.
I have verified that the connection and database in the package are as I expect.
Is there a way to verify this on the server? I did try to look at the Connection Managers tab on the package configuration itself, i.e. in the Integration Services Catalogs -> SSISDB -> solutionfolder -> .. -> package.dtsx -> Configure context menu, but it is empty.
Any ideas on how to troubleshoot?
Just to add more context: the package contains 27 other tasks. Nine tasks in a row are linked to this task, but all are set to "on completion", and all seem to be doing work independent of one another. One task is a loop and the rest are single independent tasks. So I don't know at this stage whether it is perhaps a cascading connection issue; I am just reading what the log says.
I kicked off the package at 9:54 am; the timestamp on the error log says 11:45 am, so this error was reported nearly two hours into the run.
I would suggest the following to troubleshoot the issue.
First, keep just this task and disable all the others, so that you can focus on this issue specifically. That will tell you whether the connection is working without issues.
Second, edit the task and check whether the parameters are set properly. Different providers have different ways of setting parameters, so check again that the parameters are correct (see the Execute SQL Task documentation).
One more thing: maybe you are pointing the package at a different connection than the one you used in SSMS. In that case it works in SSMS, while the connection being used in the package does not yet have the schema changes applied. A quick check for that is sketched below.
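For instance (MYTABLE and MYCOLUMN being the placeholder names from the error message), run this against the database the package's connection manager actually points to:
-- Confirm the column exists in the database the package connects to
SELECT TABLE_CATALOG, TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'MYTABLE'      -- table name from the error
  AND COLUMN_NAME = 'MYCOLUMN';   -- column name from the error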
I finally figured it out before I read the previously offered suggestions, so I will give some credit if I can! FYI: we have a lot of dev servers. I clicked the overview hyperlink in the All Executions log and it named another server. I also found the connection on the job calling the package, not on the package itself, so I have learnt something today. Anyhow, the job said one server but the overview said another, so again I was back to square one scratching my head.
Then I decided to open the connection manager on the job and select the field. Rather than cancelling without making a change, I clicked OK without thinking about it, and noticed the field changed to bold face. So I am assuming that if you make a manual change to anything on the server in SSMS, it shows up in bold, which is kind of useful. I can only assume this is an SSMS, SSIS, or VS deployment bug: the deployment does not overwrite the previous connection, although the SSMS interface says otherwise. Perhaps somebody can shed some light. Having not checked the server before I made a change and deployed it, I have no idea whether the previous settings were changed manually by someone or whether the connection in the package was changed and deployed. Anyhow, checking the job history shows it had been failing for a while, so it wasn't me; whoever made that earlier change didn't figure it out either, or didn't bother, or did not know how, or didn't observe it. Anyhow, it is pointing to the correct server now!!!
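As a side note, when you are unsure which server an execution actually ran against, the SSISDB catalog can tell you; a minimal query (assuming the project deployment model) might look like this:
-- List recent executions with the server they ran on
SELECT e.execution_id, e.package_name, e.server_name, e.start_time, e.status
FROM SSISDB.catalog.executions AS e
ORDER BY e.start_time DESC;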

SQL Service Broker Error - The conversation handle is not found

I have used SQL Service Broker with SqlTableDependency, and started SqlTableDependency on a table to receive notifications when the table's data changes. I have granted all the database permissions listed in the SqlTableDependency documentation. After some time, perhaps while idle, it shows the status "Waiting for notification".
When I make a change in the table (inserting a new record), the status does not change from "Waiting for notification", and it gives the error: "The conversation handle "A705917C-4762-E711-9447-000C29C3FCF0" is not found."
Can anyone help me fix this issue?
First, please read this comment:
There is one very common scenario that results in much more time: debugging. When you develop applications, you often spend several minutes inside the debugger before you move on. So please be careful when you debug an application that the value assigned to the watchDogTimeOut parameter is long enough; otherwise you will incur the destruction of database objects in the middle of your debug activity.
Reference
On the other hand, if you are using SqlDependency and get an error like this:
The conversation handle "206A971D-6F25-DA11-B22F-0003FF6FCCCA" is not found. Invalid object name 'SqlQueryNotificationService-41136655-4314-4536-a477-37156eb628db'.
then try enabling TRUSTWORTHY:
ALTER DATABASE [DbName] SET TRUSTWORTHY ON
The TRUSTWORTHY database property is used to indicate whether the instance of SQL Server trusts the database and the contents within it. By default, this setting is OFF, but it can be set to ON by using the ALTER DATABASE statement. (more information)
Thanks to Scott Hanselman for his answer.
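You can also check whether the setting is already on; a small sketch (DbName is a placeholder):
-- Check the current TRUSTWORTHY setting for the database
SELECT name, is_trustworthy_on
FROM sys.databases
WHERE name = 'DbName';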

Why did my SSIS package complete with an error while there seems to be no errors?

I have a package with one data flow task. In the data flow task, it copies data from one database to an archive database.
I linked two precedence constraints to it. If the task succeeds, the package should go on and start a certain SQL Server Agent job (which deletes records from the original database). If the task fails, it should run a Script Task reporting that it failed.
When I run this, the data flow task is successful (every record gets copied). The data flow task gets a green tick. The "execute SQL Server Agent Job Task" also gets a green tick. Yet after completing the package it says
"Package execution completed with error. Click here to switch to design mode, or select Stop Debugging from the Debug menu."
I included a screenshot of it:
The output basically only says:
SSIS package "c:\Users\Kim\Documents\Visual Studio 2012\Projects\POC\POC\Archive.dtsx" finished: Failure.
So:
Where can I find the error? There is no indication at all of what went wrong. Both tasks show green ticks, and the migration of the data did indeed go well. The SQL Server Agent job, however, didn't do its job: the records are still there. So I have a feeling the error has to do with the job. I should mention that it is the Change Data Capture cleanup job, which was automatically created when I turned on CDC for this table. I set it up this way because I only want the job to run when the data flow task is successful (instead of running on its default schedule).
If it failed, why didn't it follow the precedence constraint for failure (showing the script)?
SQL Server agent is turned on by the way.
Can someone please help me? I googled "Package execution completed with error" and got literally only 68 results, none of which helped.
Kim
I recreated the entire package and it completed with success. I'm still wondering what the difference is from my original package, but I'm guessing it might have something to do with non-corresponding metadata. When making the original package I had copied a few tasks, then made new tasks and deleted the copied ones (because it was easier to look between them than to switch between SSIS projects). I deleted all the old copied tasks, but possibly something went wrong there and something is still linked to the old metadata. All the tasks perform fine, though.
Mike and Ennor, thanks for looking into the problem. If anyone has any clue what it could be, please reply anyway, because recreating the package was not a satisfying solution.
Kim
Did you look at the event handlers? Also, are there any failed configurations or logging providers in the background? These could be possible culprits. Are there any disabled tasks that use an old connection string which no longer exists in the package?
I got the same issue.
Try going through each event handler tab and data flow tab for every executable while in execution/run mode. You should be able to see red cross marks where the error is.
Run the workflow, then click the "Progress" button to see the red X's.
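If the Agent job itself is the suspect, its recent outcomes are recorded in msdb; something along these lines would show them (the job name is a placeholder - CDC cleanup jobs are typically named cdc.<database>_cleanup):
-- Review recent runs of the Agent job (run_status: 0 = failed, 1 = succeeded)
SELECT j.name, h.run_date, h.run_time, h.run_status, h.message
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobhistory AS h ON h.job_id = j.job_id
WHERE j.name = N'cdc.MyDb_cleanup'   -- placeholder job name
ORDER BY h.run_date DESC, h.run_time DESC;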

Duplicate log entries due to nested containers

Is there any simple way of solving the problem where you have a component and get the same log message more than once because the component is nested inside other components (a container and a data flow, for example)?
I found this, where Microsoft says it won't do anything about it, but it's from 2007.
I don't really want to do the suggested workaround; it seems like a lot of work.
For logging, if you choose to log to SQL Server, SSIS will create a stored proc and a table:
(SQL 2005)
Proc Name: dbo.sp_dts_addlogentry
Table Name: dbo.sysdtslog90 (user table)
(SQL 2008)
Proc Name: dbo.sp_ssis_addlogentry
Table Name: dbo.sysssislog (system table)
You are free to modify the proc to filter out log entries (SQL Server will recreate it if it is missing, but will not overwrite a self-written version of the same name).
The system-generated version is simply an INSERT of the parameters that SSIS sends to the proc.
You can add logic to follow the executionid chain up to find parent objects and suppress logging for entries that have already been logged (you will want to handle the OnError event name in your code for this). I usually pass the OnPreExecute, OnPostExecute, OnError, and OnTaskFailed events into my logging proc. By default, the proc also receives PackageStart and PackageEnd events.
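As a rough sketch of that kind of filtering (assuming the standard SQL 2008 proc signature; the suppression rule here - skip an OnError message already logged for this executionid - is just one possible policy):
-- Sketch: replace the system-generated proc with a filtering version.
-- Assumes the SQL 2008 objects; adjust for sysdtslog90 on SQL 2005.
ALTER PROCEDURE dbo.sp_ssis_addlogentry
    @event sysname,
    @computer nvarchar(128),
    @operator nvarchar(128),
    @source nvarchar(1024),
    @sourceid uniqueidentifier,
    @executionid uniqueidentifier,
    @starttime datetime,
    @endtime datetime,
    @datacode int,
    @databytes image,
    @message nvarchar(2048)
AS
BEGIN
    -- Nested containers re-raise the same error as it bubbles up, so
    -- skip OnError rows whose message was already logged for this execution.
    IF @event = 'OnError'
       AND EXISTS (SELECT 1 FROM dbo.sysssislog
                   WHERE executionid = @executionid
                     AND event = 'OnError'
                     AND message = @message)
        RETURN;

    INSERT INTO dbo.sysssislog (event, computer, operator, source, sourceid,
        executionid, starttime, endtime, datacode, databytes, message)
    VALUES (@event, @computer, @operator, @source, @sourceid,
        @executionid, @starttime, @endtime, @datacode, @databytes, @message);
END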
That being said, I let all of these log fully to the table in SQL Server and then use summary and detail reports to check the logging and see errors. My report filters so that it shows only one error for each occurrence, rather than filtering the input into the log table. I also log all of my SSIS packages to a single database, used for configuration and logging, that is referenced by every SSIS package.

Why are database rows getting deleted automatically? - sql 2008

Every day, some of my database rows are getting deleted automatically.
Even the log files are getting deleted, so I am unable to check who deleted those rows.
I don't understand what to do.
If the SQL Server is pre-production, you could just yank all delete rights on the target table and wait to see who complains. If deletes are not allowed on this table anyway, even in production, then it would be a good idea to restrict that functionality going forward.
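A minimal sketch of yanking those rights (app_users is a hypothetical role name):
-- Deny DELETE on the table; 'app_users' is a placeholder role name
DENY DELETE ON OBJECT::dbo.MyTable TO app_users;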
Beyond that, try adding a delete trigger to the table to do auditing. You can get the source IP address, logged in user info, etc. You can even rollback the delete if needed.
Here's a good article on using triggers for auditing.
http://weblogs.asp.net/jgalloway/archive/2008/01/27/adding-simple-trigger-based-auditing-to-your-sql-server-database.aspx
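As a sketch of such an audit trigger (the audit table and the Id column are hypothetical; adapt them to your schema):
-- Hypothetical audit table capturing who deleted what, and from where
CREATE TABLE dbo.MyTable_DeleteAudit
(
    AuditId   int IDENTITY(1,1) PRIMARY KEY,
    DeletedId int,
    DeletedAt datetime DEFAULT GETDATE(),
    DeletedBy nvarchar(128) DEFAULT SUSER_SNAME(),
    HostName  nvarchar(128) DEFAULT HOST_NAME()
);
GO
CREATE TRIGGER dbo.MyTable_Audit_Delete
ON dbo.MyTable
AFTER DELETE
AS
BEGIN
    -- 'deleted' is the pseudo-table holding the removed rows
    INSERT INTO dbo.MyTable_DeleteAudit (DeletedId)
    SELECT Id FROM deleted;
END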
Edit:
If you want to stop all deletes on a table, you can use the following trigger.
CREATE TRIGGER dbo.MyTable_Delete_Instead_Of_Trigger
ON dbo.MyTable
INSTEAD OF DELETE
AS
BEGIN
    RAISERROR('Deletes are not allowed.', 16, 1);
END
Run SQL Profiler against the database, capturing all RPC:Completed and SQL:BatchCompleted events, and review the trace to find whatever is performing the deletes.
