I tried using the Debezium (1.6) signal table to take a snapshot of newly added tables in the SQL Server connector. When I inserted a signal of event type execute-snapshot, it took a snapshot initially. When I sent the signal a second time, it stopped taking the snapshot and started inserting snapshot-window-open and snapshot-window-close events into the signal table. It inserts these records continuously without stopping, which results in high disk usage. Can someone please help me with this?
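For reference, the signal I'm inserting looks roughly like this (the signal table and collection names here are placeholders for my actual ones):

-- Illustrative execute-snapshot signal insert; table and collection names are assumptions.
INSERT INTO dbo.debezium_signal (id, type, data)
VALUES ('adhoc-snapshot-1', 'execute-snapshot',
        '{"data-collections": ["dbo.my_new_table"], "type": "incremental"}');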
I created a stored procedure that sends an email and accidentally called the stored procedure within itself, creating an endless loop. Within a few seconds of executing the stored procedure I realized what I had done and fixed the loop, but it had already created 517 processes. I killed all the SPIDs, but they are stuck in a KILLED/ROLLBACK state.
This code shows me the processes:
-- Show the stuck requests along with the SQL text each one is running
SELECT handle.session_id, handle.percent_complete, *
FROM sys.dm_exec_requests handle
OUTER APPLY sys.fn_get_sql(handle.sql_handle) spname
WHERE CAST(handle.start_time AS DATE) = '2022-01-10';
spname.text shows 'xp_sysmail_format_query' for all the SPIDs. It's been two days, and all 517 processes are still stuck in this rollback state with 0% progress. We are still able to use all our business applications and execute queries, with the exception of EXEC msdb.dbo.sp_send_dbmail, which gets stuck executing even for a test email and has to be cancelled. This is not good, because any auto-generated email warnings will not be sent, and all other SQL email functions are blocked. I'm not sure what other jobs are being blocked at this time.
This is a huge problem and I cannot find a solution. I've read every post I can find about this and tried everything I can think of except restarting the SQL Server instance. Some posts state that restarting SQL Server can fix this; others say not to restart, or that the tasks will just resume in the KILLED/ROLLBACK state after the restart. I tried killing the SPIDs again with STATUSONLY, but that just informs me that they are in a rollback state, 0% complete.
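For reference, this is the form of command I used (the SPID shown is illustrative):

-- Re-issuing KILL ... WITH STATUSONLY only reports rollback progress;
-- for these sessions it keeps returning 0% complete.
KILL 137 WITH STATUSONLY;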
Should I restart the server, and will that fix anything? Is there another solution, short of restoring the DB from a backup that is more than two days old and losing all the work the entire business has done in the last couple of days?
Any assistance will be greatly appreciated.
As robust as SQL Server is, sometimes (fortunately rarely) it leaves us no choice, when a killed process's rollback does not complete in a timely fashion, but to adopt the IT mantra of turning it off and on again.
This can be more prevalent when a transaction enlists external methods or functions; email is notorious for this in particular.
As unwelcome as it is, a restart is often the least expensive option in terms of time, and it should be considered early in the diagnosis process once the low-hanging-fruit options have been exhausted.
I have created a Flow to write the GUID (the unique identifier of CDS entity records) from CDS into a SQL Server table whenever a new record is created in CDS. The Flow works fine if I create records one by one. But if I import multiple records (around 3,000) from SQL to CDS using Dataflows, I get the deadlock error below in Flows.
"Transaction (Process ID 74) was deadlocked on lock resources with
another process and has been chosen as the deadlock victim. Rerun the
transaction."
The dataflow refreshes the data on a schedule, so I cannot resubmit the failed runs every time.
How do I get rid of this deadlock issue? Or is there another, more efficient approach to updating the SQL table?
I tried options like the degree of parallelism (10 records) and the retry policy, but to no avail. If I reduce the concurrency to 1 record, it runs slowly, taking more than an hour to update 1,000 records.
If your query is a deadlock victim, you can create an extended event session to capture details about the event. Then, with the deadlock graph in hand, you will find the real cause of your issue.
The graph will show you exactly which resource lock is causing it and the statements involved.
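A minimal sketch of such a session (the session and file names are placeholders; note that the built-in system_health session also captures xml_deadlock_report events):

-- Capture deadlock graphs to an event file; names are illustrative.
CREATE EVENT SESSION deadlock_capture ON SERVER
ADD EVENT sqlserver.xml_deadlock_report
ADD TARGET package0.event_file (SET filename = N'deadlock_capture.xel');

ALTER EVENT SESSION deadlock_capture ON SERVER STATE = START;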
You can also try changing the isolation level of the transactions in your connection with
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
To learn more, see: https://learn.microsoft.com/en-us/sql/connect/jdbc/understanding-isolation-levels?view=sql-server-ver15
I have a table in SQL Server 2012 that is being used by a service that continually updates records within it. It's sort of a queue: the service processes the records, and then I periodically run another stored procedure to pull the processed ones out into another table. Records in this table start out in one status, and as they're processed they are put into another status.
When I run the stored procedure to pull the completed records out, I run into deadlocks if it happens to coincide with the service updating the table, which happens about every 2 minutes. I thought about just using a NOLOCK hint to eliminate that, but after reading a bit on this SO thread, I'm thinking I should avoid NOLOCK whenever possible.
GOAL:
Allow the service to continue running as usual, but also allow another stored procedure to periodically go in and remove records that are completed. If there is a lock on a given row, I'd like to just leave that row alone and pick it up the next time I run the stored procedure. During processing there is no requirement that the stored procedure get all the rows. That only matters once all the records have been processed, at which point I need to ensure that I get all of them, all while the service is still running on other, unrelated records, and without causing any deadlocks. Hopefully this makes sense; the sketch below illustrates the behavior I'm after.
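For concreteness, the "leave locked rows for next time" behavior sounds like what the READPAST hint provides; a minimal sketch, assuming a WorkQueue table with a Status column (all names here are hypothetical):

-- Hypothetical sketch: move completed rows to another table,
-- skipping any rows currently locked by the service.
BEGIN TRAN;

DELETE q
OUTPUT deleted.* INTO dbo.CompletedRecords   -- assumed archive table
FROM dbo.WorkQueue q WITH (ROWLOCK, READPAST)
WHERE q.Status = 'Processed';                -- assumed 'done' status

COMMIT TRAN;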
This article seems to suggest REPEATABLE READ.
Am I on the right track or is there a better method?
Background Information
I found a bug in some code we have where a function connects to a PostgreSQL database but then returns before closing the database connection.
This issue was only caught because, at one point, we had a huge number of concurrent connections that exceeded the max_connections value, and I found a bunch of "idle" records in the pg_stat_activity view.
Question
I only see these idle connections if I create load on the database by looping in my script and calling this function a bunch of times.
Meaning, if I use the buggy code that doesn't close the DB connection and connect just once, I don't see any "idle" records in pg_stat_activity. Mind you, it takes me a second or two to switch between the window that is running the script and the one that has the psql client running.
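For concreteness, this is the kind of query I mean when I say I'm checking pg_stat_activity (a minimal version):

-- List sessions sitting idle, oldest state change first.
SELECT pid, usename, state, state_change, query
FROM pg_stat_activity
WHERE state = 'idle'
ORDER BY state_change;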
So here are my questions.
What's the best way to track idle connections? Am I using the right approach?
After a PostgreSQL session has returned the data I requested to the client, how long does it wait before killing the idle session? Or are these idle records only cleaned up when my script has finished running through all its logic?
I've tried using TCP keepalives with very low values, just in case that's relevant, and I get the same results.
If my question is not clear enough, please let me know and I will revise.
Thanks
I need to sync DB tables (upload to the remote DB first, then download to the mobile device) between a mobile device and a remote DB; the device may insert/update/delete rows in multiple tables.
The remote DB performs other operations based on the uploaded sync data. When the sync goes on to download data to the mobile device, the remote DB is still performing those previous tasks, and the sync fails. It is something like a race condition, where both the sync and the DB operations want access to the remote database. How do I solve this issue? Is it possible to sync a DB and operate on the same DB at the same time?
I am using a SQL Server 2008 DB and MobiLink sync.
Edit:
The operations, in sequence:
1. An iPhone is loaded with an application that uses MobiLink to sync data.
2. A sync is an UPLOAD (from the device to the remote DB) followed by a DOWNLOAD (from the remote DB to the device).
3. The remote DB is the consolidated DB; the device DB is an UltraLite DB.
4. The remote DB has triggers that fire when certain tables are updated.
5. An UPLOAD from the device fires those triggers when the upload finishes.
6. The moment the UPLOAD finishes, the DOWNLOAD to the device starts.
7. At exactly the same moment, the DB triggers fire.
8. A deadlock now occurs between the sync DOWNLOAD and the trigger operations (which include UPDATE queries).
9. The sync fails with an error saying some tables cannot be accessed.
I did a lot of workarounds and Googling, and came up with a simple(?!) solution for the problem (though the exact problem cannot be solved at this point; I tried my best):
Keep track of all clients who do a sync (a kind of user-details log).
Create a SQL job that contains all the operations to be performed when a user syncs (see the sketch after this list).
Announce a daily "maintenance period" during which the SQL job runs against the saved user/client sync details.
Keeping track of client details every time is costly, but very much needed!
The remote consolidated DB is "completely updated" only after the maintenance period.
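A rough sketch of scheduling such a job with SQL Server Agent, assuming the deferred trigger work has been moved into a stored procedure (dbo.ProcessSyncQueue and the other names here are hypothetical):

-- Hypothetical sketch: a nightly SQL Server Agent job for the maintenance period.
EXEC msdb.dbo.sp_add_job @job_name = N'SyncMaintenance';
EXEC msdb.dbo.sp_add_jobstep @job_name = N'SyncMaintenance',
     @step_name = N'Run deferred sync processing',
     @subsystem = N'TSQL',
     @command = N'EXEC dbo.ProcessSyncQueue;',   -- assumed SP holding the deferred work
     @database_name = N'ConsolidatedDB';         -- assumed DB name
EXEC msdb.dbo.sp_add_jobschedule @job_name = N'SyncMaintenance',
     @name = N'Nightly',
     @freq_type = 4,                 -- daily
     @freq_interval = 1,             -- every day
     @active_start_time = 20000;     -- 02:00:00 (HHMMSS)
EXEC msdb.dbo.sp_add_jobserver @job_name = N'SyncMaintenance';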
Any approach better than this would be appreciated! All suggestions are welcome!
My understanding of your system is the following:
The mobile application sends an UPDATE statement to the SQL Server DB.
There is an ON UPDATE trigger that updates around 30 tables (= at least 30 UPDATE statements in the trigger + 1 main UPDATE statement).
The UPDATE is executed in a single transaction. This transaction ends when the trigger completes all its updates.
The mobile application does not wait for the UPDATE to finish and sends multiple SELECT statements to get data from the database.
These SELECT statements query the same tables the trigger is updating.
Blocking and deadlocks occur on some query for some user because the trigger has not completed its updates before the selects arrive and keeps locks on the tables.
When optimizing, we try to make our processes easier on the computer: achieve the same result in fewer iterations, and use fewer resources, or resources that are more available/less overloaded.
My suggestions for your design:
Use parameterized SPs. Every time SQL Server receives any statement, it creates an execution plan. For 1 UPDATE statement with a trigger, the DB needs at least 31 execution plans. This happens on a busy production environment for every connection, every time the app updates the DB. It is a big waste.
How would SPs help reduce blocking?
Now you have 1 transaction for 31 queries, where locks are taken on all the tables involved and held until the transaction commits. With SPs you'll have 31 small transactions, and only 1-2 tables will be locked at a time.
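A minimal sketch of what one of those small parameterized SPs could look like (the table and parameter names are made up for illustration):

-- Hypothetical sketch: one short parameterized SP per table update,
-- so each call is its own small transaction instead of one trigger
-- holding locks on ~30 tables at once.
CREATE PROCEDURE dbo.UpdateCustomerStatus    -- illustrative name
    @CustomerId INT,
    @Status     TINYINT
AS
BEGIN
    SET NOCOUNT ON;

    UPDATE dbo.Customer                      -- illustrative table
    SET Status = @Status
    WHERE CustomerId = @CustomerId;
END;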
Another question I would like to address: how do you do asynchronous updates to your database?
There is a feature in SQL Server called Service Broker. It lets you process a message queue (rows in a queue table) automatically: it monitors the queue, takes messages from it, performs the processing you specify, and deletes the processed messages from the queue.
For example, you save the parameters for your SPs as messages, and Service Broker executes the SPs with those parameters.
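A bare-bones sketch of that setup, assuming the parameterized SPs above; all object names are hypothetical and error handling is omitted:

-- Hypothetical sketch: a Service Broker queue whose activation
-- procedure runs the update for each incoming message.
CREATE MESSAGE TYPE UpdateRequest VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT UpdateContract (UpdateRequest SENT BY INITIATOR);
CREATE QUEUE dbo.UpdateQueue;
CREATE SERVICE UpdateService ON QUEUE dbo.UpdateQueue (UpdateContract);
GO

-- Activation procedure: Service Broker calls this as messages arrive.
CREATE PROCEDURE dbo.ProcessUpdateQueue
AS
BEGIN
    DECLARE @handle UNIQUEIDENTIFIER, @body XML;

    RECEIVE TOP (1)
        @handle = conversation_handle,
        @body   = CAST(message_body AS XML)
    FROM dbo.UpdateQueue;

    IF @handle IS NOT NULL
    BEGIN
        -- Pull the SP parameters out of @body and run the update, e.g.:
        -- EXEC dbo.UpdateCustomerStatus @CustomerId = ..., @Status = ...;
        END CONVERSATION @handle;
    END
END;
GO

ALTER QUEUE dbo.UpdateQueue WITH ACTIVATION (
    STATUS = ON,
    PROCEDURE_NAME = dbo.ProcessUpdateQueue,
    MAX_QUEUE_READERS = 1,
    EXECUTE AS OWNER);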