Error: Executing the query "EXEC dbo.executeJob #parentExecutionInstanceGUI..." failed with the following error: "Object cannot be cast to Empty.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
Does anyone have ideas on why I am getting this error? It doesn't happen with every run. It happened on the first step of an 8-step SSA job that moves data between the data warehouse, the data mart, and several databases. That first step executes a package that ultimately executes two stored procedures that insert data into two tables.
I didn't write/create any of this - I have been tasked with figuring out why this error occurred and from there possibly figuring out how to keep it from happening again.
Related
I am running a daily job in SQL Server Agent that runs a package to export a certain table to an Excel file. After completion it sends me an email.
It has happened a couple of times that the table was empty and the job exported 0 rows.
I did not receive a job-failed notification, because the job didn't fail.
I would like to receive a failure notification when 0 rows are exported.
How would you achieve this?
Thanks
There are a number of ways to force a package to return a failure (which will then cause the calling Agent job to report a failure).
One simple way would be to put a Row Count transformation between the source and destination in the data flow task that populates the spreadsheet, and assign the count to a variable, say @RowsExported.
Back on the Control Flow tab, if there's more to the package, put a condition on the precedence constraint leading to the rest of the work, @RowsExported > 0, so the rest of the package will only continue if rows were sent. Whether or not there's more to the package, add a new precedence constraint coming off the data flow with the condition @RowsExported == 0. Send that constraint to an Execute SQL task that just contains the script SELECT 1/0;.
Now, if zero rows are exported, the package will throw a division by zero error, and the calling job will fail.
There are other ways, of course, but this one can be implemented very quickly.
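If you'd rather see a readable message than a division-by-zero error in the job history, the Execute SQL task on the zero-row path can raise an explicit error instead. A minimal sketch (the message text is just an example):

```sql
-- Script for the Execute SQL Task reached when @RowsExported == 0.
-- SELECT 1/0; works too, but an explicit error reads better in the
-- Agent job history. Severity 16 is enough to fail the task.
RAISERROR('Export produced 0 rows; failing the package.', 16, 1);
```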
1st)
I have a sequence container.
It has 4 different Execute SQL tasks and 4 different DFTs (data flow tasks) that insert data into different tables.
I want to implement a transaction, with or without the MSDTC service, so that on package failure all the data is rolled back when any of the DFTs or Execute SQL tasks fails.
How do I implement it? When I try to implement it with the MSDTC service I get an "OLEDB Connection" error, and without MSDTC the data stays inserted; only the last Execute SQL task gets rolled back. How can this be implemented in SSIS 2017?
2nd)
When I tried without MSDTC, by setting the connection's RetainSameConnection property to TRUE and adding two more Execute SQL tasks for BEGIN TRANSACTION and COMMIT, I hit an issue with the event handler: I was not able to log errors into a separate table. Either the rollback works or the event handler does, not both.
As soon as the error occurred, control went to the event handler, and then everything was rolled back, including the work done by the task in the event handler.
3rd)
The sequence container is used for parallel execution of tasks, so when one of the 4 tasks failed, only that particular task was rolled back; the other SQL tasks went on inserting data into their tables.
Thanks in advance!! ;-)
One option I've used (without MSDTC) is to configure your OLE DB connection with RetainSameConnection=True
(via the Properties window).
Then begin a transaction before your sequence container and commit it afterwards, with all tasks sharing the same OLE DB connection.
It works quite well and is pretty easy to implement.
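As a sketch, the three Execute SQL Task statements look like this (all three tasks must use the same RetainSameConnection=True connection; the transaction name is arbitrary):

```sql
-- Execute SQL Task placed before the sequence container:
BEGIN TRANSACTION T1;

-- Execute SQL Task after the container, on the Success precedence constraint:
COMMIT TRANSACTION T1;

-- Execute SQL Task after the container, on the Failure precedence constraint:
ROLLBACK TRANSACTION T1;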
According to my scenario:
I used a sequence container (which contains the different DFTs and tasks) and added 3 more Execute SQL tasks:
1st: BEGIN TRANSACTION T1 (before the sequence container).
2nd: COMMIT TRANSACTION T1 (after the sequence container).
3rd: ROLLBACK TRANSACTION T1 (after the sequence container), on a Failure precedence constraint, i.e. the Execute SQL task containing the rollback runs only when the sequence container fails.
Note: I tried to roll back this way, but only the nearest Execute SQL task was rolled back; the rest of the data stayed inserted. So what's the solution? In that same Execute SQL task I truncate the tables where rows were being inserted. So, when the sequence container fails, the Execute SQL task removes all the data from the affected tables (ROLLBACK TRANSACTION T1, then TRUNCATE TABLE Table_name1, Table_name2, and Table_name3).
IMPORTANT: to make the above work, make sure RetainSameConnection is set to True in the connection manager properties (it is False by default).
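The failure-path script described above, written out (the table names are the placeholders from the example):

```sql
-- Execute SQL Task on the Failure precedence constraint.
-- The ROLLBACK undoes work done on this connection's transaction;
-- the TRUNCATEs clean up any rows that landed outside it.
ROLLBACK TRANSACTION T1;

TRUNCATE TABLE Table_name1;
TRUNCATE TABLE Table_name2;
TRUNCATE TABLE Table_name3;
```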
Now, to log errors into user-defined tables, we use an event handler. The scenario was: when the sequence container failed, everything was rolled back, including the insert done by the Execute SQL task in the event handler. So what's the solution?
When you are not using the SSIS transaction properties, every task has TransactionOption set to Supported by default. The Execute SQL task in the event handler has the same setting, so it joins the same transaction. To make the event handler work properly, give its Execute SQL task a different connection and set its TransactionOption to NotSupported. It will then stay outside the transaction, and when an error occurs it will log the errors into the table.
Note: we are using the sequence container for parallel execution of tasks. What if the error occurs in one of the tasks inside the container but does not cause the container itself to fail? In that case, connect the tasks serially. Yes, that defeats the purpose of the sequence container, but I found my solution only works that way.
Hope it helps all! ;-)
I am running an SSIS package that contains many (7) reads from a single flat file uploaded from an external source. There is consistently a deadlock in every environment (Test, Pre-Production, and Production) on one of the data flows, which uses a Slowly Changing Dimension to update an existing SQL table with both new and changed rows.
I have three outputs coming off the SCD:
-The Inferred Member Updates output goes directly to an OLE DB Command that performs an update.
-The Historical Attribute output goes to a Derived Column transformation that sets a delete date, then to an OLE DB Command that performs an update, and then to a Union All where it joins the New output.
-The New output goes into the Union All along with the Historical output, then to a Derived Column transformation that adds an update/create date, and finally inserts the rows into the same SQL table as the Inferred Member Updates OLE DB Command.
The only error I am getting in my log looks like this:
"Transaction (Process ID 170) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction."
I could add a (NOLOCK) hint to the OLE DB commands, but I have read that this isn't the way to go.
I am using SQL Server 2012 Data Tools to investigate and edit the Package, but I am unsure where to go from here to find the issue.
I want to get out there that I am a novice in terms of SSIS programming... with that out of the way, any help would be greatly appreciated, even if it is just pointing me to a place I haven't looked for help.
Adding an index on the column used in the WHERE condition may resolve your issue. With the index in place, the transactions execute faster and touch fewer rows, which reduces the chance of deadlock.
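A minimal sketch of that suggestion, assuming the SCD's updates filter on a business-key column (the table and column names here are hypothetical):

```sql
-- Hypothetical example: suppose the deadlocking UPDATEs filter on BusinessKey.
-- A narrow nonclustered index lets each UPDATE seek instead of scan,
-- so concurrent sessions lock fewer rows and are less likely to deadlock.
CREATE NONCLUSTERED INDEX IX_DimCustomer_BusinessKey
    ON dbo.DimCustomer (BusinessKey);
```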
I have a stored proc that is called by a .net application and passes an xml parameter - this is then shredded and forms the WHERE section of the query.
So in my query, I look for records with a documentType matching that contained in the XML. The table contains many more records with a documentType of C than P.
The query will run fine for a number of weeks, regardless of whether the XML contains P or C for documentType. Then it stops working for documentType C.
I can run both queries from SSMS with no errors (using Profiler to capture the exact call that was made). Profiler shows that, when run from the application, the documentType C query starts a statement and then finishes before the statement ends, without completing the remaining steps of the query.
I ran another profiler session to capture all errors and warnings. All I can see is error 3621 - The statement has been terminated.
There are no other errors relating to this spid, the only other things to be picked up were warnings changing database context.
I've checked the SQL logs and extended events and can find nothing. I don't think the query relates to the data content as it runs in SSMS without problems - I've also checked the range values for other fields in the WHERE clause and nothing unusual or untoward there. I also know that if I drop and recreate the procedure (i.e. exact same code) the problem will be fixed.
Does anyone know how I can trace the error that is causing the 3621 failure? Profiling does not pick it up.
In some situations, SQL Server raises two error messages: one is the actual error message saying exactly what is happening, and the other is 3621, which says The statement has been terminated.
Sometimes the first message gets lost, especially when you are calling an SQL query or object from a script.
I suggest going through each of your SQL statements and running them individually.
Another guess is that you have a timeout error on the client side. If you capture the Attention event in your SQL Server trace, you can follow the timeout error messages.
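One way to catch the real error that precedes 3621 is an Extended Events session on error_reported, as an alternative to Profiler. A minimal sketch (the session name and filter are just examples):

```sql
-- Capture every error the server raises, with the statement text,
-- so the message that precedes 3621 is not lost.
CREATE EVENT SESSION CaptureErrors ON SERVER
ADD EVENT sqlserver.error_reported (
    ACTION (sqlserver.sql_text, sqlserver.session_id)
    WHERE severity > 10              -- skip informational messages
)
ADD TARGET package0.event_file (SET filename = N'CaptureErrors');

ALTER EVENT SESSION CaptureErrors ON SERVER STATE = START;
```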
I've gotten this error message in the Job History of a merge replication job:
Executed as user: NT AUTHORITY\SYSTEM. String or binary data would be
truncated. [SQLSTATE 22001] (Error 8152). The step failed.
I know what the message means, but I have no idea what caused it, because the database model is the same!
Any suggestions as to what can cause this particular error?
After hours working with Profiler, I found that a very long stored procedure caused the issue when it was added to the updates sent to subscribers. The column that holds the ALTER PROCEDURE text is 4000 characters long, but the stored procedure was much bigger (because of embedded documentation). The same problem was raised here.
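A way to hunt for the culprit up front is to list procedures whose definitions exceed the 4000-character limit mentioned above; a sketch against the published database's metadata:

```sql
-- List stored procedures whose full definition exceeds 4000 characters,
-- i.e. candidates for the replication truncation error described above.
SELECT  o.name,
        LEN(m.definition) AS definition_length
FROM    sys.sql_modules AS m
JOIN    sys.objects     AS o ON o.object_id = m.object_id
WHERE   o.type = 'P'                 -- stored procedures only
  AND   LEN(m.definition) > 4000
ORDER BY definition_length DESC;
```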