Debezium transaction metadata - no `END` event received with SQL Server

I'm trying to get transaction metadata working with Debezium 1.4.1.Final against a SQL Server database.
It seems to be working to some extent: the dbservername.transaction topic has been created, and when I run a stored proc that contains a transaction, the "status":"BEGIN" event is received, along with the CDC events on the table topic.
However, no "status":"END" event is received... until I run the stored proc again.
It may very well be that I'm not closing the transaction in the stored proc correctly (I'm not an MSSQL expert by any means)...
This is the structure I'm using:
CREATE PROCEDURE schema.myproc
AS
BEGIN
    BEGIN TRANSACTION;
    ...
    COMMIT;
END
GO
Any ideas what I need to do to get the END event at the end of the proc?

Unfortunately, there's no reliable way to emit the END event before the connector has received another transaction's BEGIN event, so END events are delayed when you don't have a continuous transaction load on your database.
You may consider running some dummy transactions in a loop to trigger this. We're planning to support this out of the box via Debezium's heartbeat feature (see DBZ-3263); any help with that will surely be welcomed.
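A minimal sketch of such a dummy transaction, assuming a small CDC-enabled helper table; the table name dbo.debezium_heartbeat and the SQL Agent scheduling are illustrative, not part of Debezium:

-- One-time setup: a tiny table, enabled for CDC like the monitored tables
CREATE TABLE dbo.debezium_heartbeat (id INT PRIMARY KEY, last_beat DATETIME2 NOT NULL);
INSERT INTO dbo.debezium_heartbeat (id, last_beat) VALUES (1, SYSUTCDATETIME());
GO

-- Run periodically (e.g. from a SQL Agent job): each run emits a fresh BEGIN,
-- which lets the connector emit the pending END of the previous transaction
BEGIN TRANSACTION;
UPDATE dbo.debezium_heartbeat SET last_beat = SYSUTCDATETIME() WHERE id = 1;
COMMIT;

Until DBZ-3263 lands, scheduling this at your desired END-event latency is the usual workaround.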

Related

Is there any way to rollback transactions in SSIS for SQL Server 2012?

Cannot successfully execute an SSIS package with BEGIN TRAN functionality.
I'm at a loss with an SSIS package I inherited. It contains:
1 Script Task
3 Execute SQL Tasks
5 Data Flow Tasks (each contains a number of merges, lookups, data inserts and other transformations)
1 File System Task
All of these are encapsulated in a Foreach Loop container. I've been tasked with modifying the package so that if any of the steps within the control/data flow fails, the entire thing is rolled back. I've tried two different approaches to accomplish this:
I. Using Distributed Transactions
I ensured that:
MSDTC was running on the target server and the executing client
msdtc.exe was added as an exception to both the server and client firewalls
Inbound and outbound rules were set on both server and client to allow DTC connections
ForeachLoop container TransactionLevel: Required
All other tasks TransactionLevel: Supported
My OLEDB connection has RetainSameConnection set to TRUE, and I'm using SQL Server Authentication with Save Password checked
When I execute the package, it fails right after the script task (first step).
II. Using Execute SQL Tasks
After spending an entire week trying to figure out a workaround, I decided to try to accomplish my goal using three Execute SQL Tasks instead:
BEGIN TRAN before the ForeachLoop Container
COMMIT TRAN after the ForeachLoop Container with a Success Constraint
ROLLBACK TRAN after the ForeachLoop Container with a Failure constraint
In this case, the ForeachLoop container and all other tasks have the TransactionLevel property set to Supported. The problem now is that the package executes up to the fourth data flow task and hangs there forever. After logging into SQL Server and checking the running sessions, I noticed sys.sp_describe_first_result_set;1 as the head-blocker session.
Doing some research, I found it could be related to a few TRUNCATE statements in some of my data flow tasks, which could cause a schema lock. I went ahead and changed the ValidateExternalMetadata property to False for all tasks within my data flows and changed my TRUNCATE statements to DELETE statements instead. I re-ran the package and it still hangs in the same spot with the same head blocker.
As an alternative, I tried creating a second OLEDB connection to the same database, assigned that new connection to my BEGIN, ROLLBACK and COMMIT SQL Tasks with RetainSameConnection set to TRUE, and changed RetainSameConnection to FALSE (I tried TRUE as well) on the original OLEDB connection (the one used by the data flow tasks). This worked in the sense that the package appeared to execute (it ran, and the COMMIT TRAN task completed fine). I then ran it again with a forced error to make it fail; the ROLLBACK TRAN task executed successfully, yet when I queried the affected tables, the transaction hadn't rolled back: all new records were inserted and old ones were updated (the BEGIN TRAN was clearly started on a different connection and hence didn't affect the package's workflow). I'm not sure what else to try at this point. Any help would be truly appreciated; I'm about to go nuts with this!
P.S. Additionally, all objects have DelayValidation set to True, and the SQL Server version is 2012.

SQL Server: How to make a transaction commit fail?

To test error handling in an application, I'm looking for a way to let a transaction commit result in an error.
The application is written in C and uses ODBC to talk to a SQL Server 2017 data source. The application starts a database transaction and executes some arbitrary SQL (which I can change for the sake of the test). Then it commits the transaction (using ODBC's SQLEndTran()). I want to build a test that verifies the error handling of the commit.
Is there an easy and reliable way to make the commit fail, e.g. by executing some specific SQL script before the commit, or by changing the database or data source settings?
EDIT / clarification: What I need to fail is the transaction commit itself (specifically, the SQLEndTran() call must complete with an error). SQL executed before that shall complete successfully.
If you are able to time it correctly in a testing framework, you can do a few things:
1. Kill the session from a separate connection in the testing framework.
2. Change the firewall configuration to emulate a network error.
3. Switch the database to single-user mode or stop the SQL Server service.
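For example, option 1 can be scripted from the test harness (a sketch; the program_name filter and the session id are illustrative):

-- From a second connection, find the application's session...
SELECT session_id FROM sys.dm_exec_sessions
WHERE program_name = 'MyAppUnderTest';  -- illustrative filter
-- ...and kill it just before the app calls SQLEndTran()
KILL 57;  -- replace 57 with the session_id returned above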
The easiest way is to force a divide-by-zero error:
DECLARE @SomeVal int = 0;
SET @SomeVal = 2 / @SomeVal;
--EDIT--
Since I guess you want the commit itself to fail, you could simply issue a rollback right before the commit. Then the exception would be thrown on the commit statement.
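A sketch of that approach: make the last statement of the test's "arbitrary SQL" a rollback, so the COMMIT later issued through SQLEndTran() finds no open transaction and fails:

-- executed inside the ODBC transaction, as the final test statement
ROLLBACK TRANSACTION;
-- the subsequent SQLEndTran(..., SQL_COMMIT) should now return an error,
-- since the transaction has already been rolled back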

What happens in the case of a Postgres Transaction Rollback command not reaching the database?

Section 3.4 of the Postgres documentation covers transactions.
I thought a transaction worked according to the following rules:
The client sends a BEGIN statement to the database server on a connection. Call this connection "connection_one".
The client sends whatever queries they want to the database server. All of these queries are sent via "connection_one".
If at any time the connection (in this example, "connection_one") is lost before a COMMIT statement reaches the database server, the database server rolls back to before the BEGIN statement.
If a COMMIT statement is issued and received by the database server, then the changes are saved and the transaction block has completed.
It looks like the above is not the case, though. My confusion is that it looks like I have to actually issue a ROLLBACK command and have it reach the database server in order for partial changes not to be saved. Is this really the case, or am I missing something? If it is the case, is there any way I can get the above behavior, or is there some reason I would not want it? My concern is what happens if the connection is lost before I am able to ROLLBACK.
Thanks.

Sending and Receiving SQL Server Service Broker Messages within Nested Transactions

I'd like to use SQL Server 2008 Service Broker to log the progress of a long-running (up to about 30 minutes) transaction that is dynamically created by a stored procedure. I have two goals:
1) To get real-time logging of the dynamically-created statements that make up the transaction so that the progress of the transaction can be monitored remotely,
2) To be able to review the steps that made up the transaction up to a point where a failure may have occurred requiring a rollback.
I cannot simply PRINT (or RAISERROR(msg,0,0)) to the console, because I want to log the progress messages to a table (and have that log remain even if the stored procedure rolls back).
But my understanding is that messages cannot be received from the queue until the sending thread commits the outer transaction. Is this true? If so, what options do I have?
It is true that you cannot read messages from the service queue until the transaction is committed.
You could try some other methods:
Use a SQL CLR procedure to send a .NET Remoting message to a .NET app that receives the messages and then logs them.
Use a SQL CLR procedure to write to a text or other log file on disk.
Some other method...
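As an aside, one such "other method" within plain T-SQL: a table variable is not affected by a rollback, which is sometimes used for exactly this kind of post-mortem progress log (a sketch; dbo.progress_log is an illustrative table, and this covers the review-after-failure goal but not real-time monitoring):

DECLARE @log TABLE (msg NVARCHAR(400), logged_at DATETIME2 DEFAULT SYSUTCDATETIME());

BEGIN TRANSACTION;
    INSERT INTO @log (msg) VALUES (N'step 1 done');
    -- ... long-running, dynamically created statements ...
ROLLBACK;  -- @log keeps its rows despite the rollback

-- persist the buffered progress messages after the rollback
INSERT INTO dbo.progress_log (msg, logged_at)
SELECT msg, logged_at FROM @log;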
Regards
AJ

Send message from SQL Server trigger

I need to signal a running application (a Windows service) when certain things happen in SQL Server 2005. Is there a way to send a message from a trigger to an external application on the same system?
You can use a SQL Service Broker queue to do what you want.
The trigger can create a conversation and send a message on the queue.
When it starts, the external process should connect to the database and issue a WAITFOR (RECEIVE) statement on this queue. It will receive the message when the trigger sends it.
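A hedged sketch of both sides, assuming the message type, contract, services, and queue have already been created (all names below are illustrative):

-- In the trigger: open a conversation and send the notification
DECLARE @h UNIQUEIDENTIFIER;
BEGIN DIALOG CONVERSATION @h
    FROM SERVICE [NotifySenderService]
    TO SERVICE 'NotifyReceiverService'
    ON CONTRACT [NotifyContract]
    WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @h
    MESSAGE TYPE [NotifyMessage] (N'something happened');

-- In the Windows service: block until a message arrives
WAITFOR (
    RECEIVE TOP (1) CAST(message_body AS NVARCHAR(MAX)) AS payload
    FROM dbo.NotifyQueue
), TIMEOUT 60000;  -- return after 60 seconds so the loop can check for shutdown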
Not sure DBAs would approve of this, but there is a way to run commands using xp_cmdshell:
"Executes a given command string as an operating-system command shell and returns any output as rows of text. Grants nonadministrative users permissions to execute xp_cmdshell."
Example from MS's site:
CREATE PROC shutdown10
AS
EXEC xp_cmdshell 'net send /domain:SQL_USERS ''SQL Server shutting down in 10 minutes. No more connections allowed.''', no_output
EXEC xp_cmdshell 'net pause sqlserver'
WAITFOR DELAY '00:05:00'
EXEC xp_cmdshell 'net send /domain:SQL_USERS ''SQL Server shutting down in 5 minutes.''', no_output
WAITFOR DELAY '00:04:00'
EXEC xp_cmdshell 'net send /domain:SQL_USERS ''SQL Server shutting down in 1 minute. Log off now.''', no_output
WAITFOR DELAY '00:01:00'
EXEC xp_cmdshell 'net stop sqlserver', no_output
Either:
Use RAISERROR (severity 10) to fire a SQL agent alert and job.
Load a separate table that is polled periodically by a separate mail-handling process (as HLGEM suggested).
Use a stored procedure to send the message and write to the table.
Each solution decouples the transactional trigger from a potentially long messaging call.
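For the first option, the trigger-side call might look like this (a sketch; the message text is illustrative, and the SQL Agent alert reacting to it has to be configured separately):

RAISERROR (N'order-change event for the mail job', 10, 1) WITH LOG;
-- WITH LOG writes the message to the SQL Server error log,
-- which is what a SQL Agent alert can be configured to react to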
You can send an email from a trigger, but it isn't a recommended practice, because if the email system is down, no data changes can be made to the table.
Personally, if you can live with less than real time, I would write information about the event you are interested in to another table (so the real data change can go through smoothly even if email is down for some reason). Then I would have a job that checks that table every 5-10 minutes for any new entries and emails those out.
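A minimal sketch of that staging-table approach (the source table dbo.orders and all other names are illustrative):

CREATE TABLE dbo.change_events (
    id         INT IDENTITY PRIMARY KEY,
    logged_at  DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
    key_value  INT NOT NULL,
    processed  BIT NOT NULL DEFAULT 0
);
GO
CREATE TRIGGER trg_orders_notify ON dbo.orders
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- one event row per affected record, so multi-row operations are safe;
    -- the polling job emails and marks these rows later
    INSERT INTO dbo.change_events (key_value)
    SELECT order_id FROM inserted;
END
GO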
You can use a Database Mail (dbmail) message. It should not slow the trigger down if the mail server is down, because the message is queued and then sent by an external (to SQL Server) process.
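A sketch of such a call, assuming a Database Mail profile has been set up (the profile name and recipient are illustrative):

EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'DefaultProfile',      -- illustrative profile
    @recipients   = 'ops@example.com',
    @subject      = 'Table change detected',
    @body         = 'A watched table was modified.';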
The table idea sounds good if the application can access SQL Server.
You could also give access to that same table via SQL Server 2005 Native XML Web Services, which exposes procs over XML.
http://msdn.microsoft.com/en-us/library/ms345123(SQL.90).aspx
Depending on what sort of message you want to send, you could use a CLR stored procedure to connect to a socket on the running process and write the message to that. If you have no control over the process (i.e. can't modify it) you could build a bridge or use a library that can issue a message in a suitable format.
For reliable delivery, you could do something that uses MSMQ to deliver the message.
A reminder that triggers can be problematic for things like this, because they are embedded in set operations. And being associated with tables, they aren't very sensitive to the context in which they are fired. The problem comes when they fire on an operation that involves multiple rows, because it's hard to avoid invoking as many instances of your action as there are records in the operation. Several hundred emails are not unlikely, for instance.
Hopefully the "things that happen" can be detected in closer association with the context in which they happen (which can also be interesting to try to backtrack to from a trigger).
