Find most recent SQL Server database activity - sql-server

Data from another system is replicated into a SQL Server 2005 database in real-time (during the day, it's hundreds of transactions/second) using Goldengate. I'd like to be able to tell if there's been a transaction recently, which will tell me if replication is currently happening. Even in the off-hours, I can expect a transaction every few minutes, though I won't know which of the 400 tables it will go into.
Here's my current process (sketched below):
1) An IUD trigger sits on the most popular replicated table.
2) The trigger updates a date in a "Sync Notification" table every time there's any activity on that table.
3) A SQL Agent job runs every few minutes and compares this date with GETDATE(). If it's been too long, it emails me.
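A rough sketch of that setup; the object names, the 15-minute threshold, and the Database Mail call are placeholders rather than my real ones:

CREATE TRIGGER trg_SyncActivity ON dbo.MostActiveReplicatedTable
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Record that replication just touched this table
    UPDATE dbo.SyncNotification SET LastActivity = GETDATE();
END;
GO

-- In the SQL Agent job:
IF (SELECT DATEDIFF(MINUTE, MAX(LastActivity), GETDATE()) FROM dbo.SyncNotification) > 15
    EXEC msdb.dbo.sp_send_dbmail
        @recipients = 'dba@example.com',
        @subject = 'Replication appears to have stalled';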
This works for the most part, but I get false positives when there's activity in other tables but not in the monitored one, which can happen overnight.
Any other suggestions short of adding this same trigger to every table in the database? If I do add the triggers, how do I prevent deadlocks and contention on the "Sync Notification" table? Since I don't care about the most recent date being exact during high-contention periods, is there a way I can have SQL try to update the date but just skip it if some other process has locked it?
The only "application-level" choice I have is to TELNET to the Goldengate monitor and ask for the replica lag, then screen scrape the results. I'm open to that, but I'd like to do something SQL-side if it's more feasible.

Is this for an automated job or something you want to look at every now and then? If the latter, then you could use a transaction log examination tool (Red Gate SQL Log Rescue, ApexSQL Log, probably others).
Another option open to you is to look at sysindexes (SQL Server 2000: dbo.sysindexes; 2005: sys.sysindexes). The column rowmodctr (to quote MSDN) "Counts the total number of inserted, deleted, or updated rows since the last time statistics were updated for the table". It may not return everything you need to know but, provided you've got covering indexes, it would give an indication of how many changes there have been and where, if sampled on a regular basis.
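A sketch of how you might sample it (rowmodctr resets whenever statistics are updated, so compare successive samples rather than relying on absolute values):

SELECT o.name AS table_name,
       SUM(i.rowmodctr) AS mods_since_stats_update
FROM sys.sysindexes AS i
JOIN sys.objects AS o ON o.object_id = i.id
WHERE o.type = 'U'
  AND i.indid IN (0, 1)   -- heap or clustered index only, to avoid double counting
GROUP BY o.name
ORDER BY mods_since_stats_update DESC;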

You can check SELECT * FROM ::fn_dblog(@StartLSN, NULL) and see if any LOP_MODIFY_ROW operations have occurred since the last check (i.e. since the last LSN you checked).
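Something along these lines, as a sketch (fn_dblog is undocumented, so treat its output as subject to change; run it in the database you're watching):

SELECT TOP (10) [Current LSN], Operation, AllocUnitName, [Transaction ID]
FROM fn_dblog(NULL, NULL)   -- NULL, NULL scans the whole active portion of the log
WHERE Operation IN (N'LOP_INSERT_ROWS', N'LOP_MODIFY_ROW', N'LOP_DELETE_ROWS')
ORDER BY [Current LSN] DESC;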


Disable transactions on SQL Server

I need some light here. I am working with SQL Server 2008.
I have a database for my application. Each table has a trigger that stores all changes in another database (on the same server), in one single table, 'tbSysMasterLog'. Yes, the log of the application is stored in another database.
The problem is that before any insert/update/delete command on the application database, a transaction is started, and therefore the table in the log database is locked until the transaction is committed or rolled back. So anyone else who tries to write to any other table of the application will be blocked.
So... is there any way to disable transactions on a particular database or on a particular table?
You cannot turn off the log. Everything gets logged. You can set the recovery model to "Simple", which will limit the amount of log data kept once the records are committed.
" the table of the log database is locked": why that?
Normally you log changes by inserting records. The insert of records should not lock the complete table, normally there should not be any contention in insertion.
If you do more than inserts, perhaps you should consider changing that. Perhaps you should look at the indices defined on log, perhaps you can avoid some of them.
It sounds from the question that you have a create transaction at the start of your triggers, and that you are logging to the other database prior to the commit transaction.
Normally you do not need explicit transactions in SQL Server.
If you do need explicit transactions, you could put the data to be logged into variables, commit the transaction, and then insert it into your log table.
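A rough sketch of that pattern, assuming the logging moves out of the trigger and into the calling code; the application table, the log database name, and the log table's columns are illustrative (only tbSysMasterLog comes from the question):

DECLARE @id int = 42, @newValue nvarchar(100) = N'new value';  -- illustrative inputs
DECLARE @audit TABLE (Id int, OldValue nvarchar(100), NewValue nvarchar(100));

BEGIN TRANSACTION;
    UPDATE dbo.SomeAppTable
    SET Value = @newValue
    OUTPUT inserted.Id, deleted.Value, inserted.Value
        INTO @audit (Id, OldValue, NewValue)
    WHERE Id = @id;
COMMIT TRANSACTION;

-- The log insert now happens outside the transaction, so it no longer holds
-- locks on tbSysMasterLog for the lifetime of the application transaction.
INSERT INTO LogDb.dbo.tbSysMasterLog (SourceId, OldValue, NewValue, LoggedAt)
SELECT Id, OldValue, NewValue, GETDATE()
FROM @audit;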
Normally inserts are fast and can happen in parallel without blocking. Certain things, like identity columns, do require ordering, but identity is a very lightweight structure; it can be avoided by generating GUIDs so that inserts are non-blocking. For something like your log table, though, a primary-key identity column would give you a clear sequence, which is probably helpful in working out the order of events.
Obviously if you log after the transaction, this may not be in the same order as the transactions occurred due to the different times that transactions take to commit.
We normally log into individual tables with a similar name to the master table e.g. FooHistory or AuditFoo
There are other options. A very lightweight method is to use a trace; this is what is used for performance tuning and will give you a copy of every statement run on the database (including triggers), and you can log this to a different database server. It is a good idea to log to a different server if you are tracing a heavily used server, since the volume of data is massive if you are tracing, say, 1,000 simultaneous sessions.
https://learn.microsoft.com/en-us/sql/tools/sql-server-profiler/save-trace-results-to-a-table-sql-server-profiler?view=sql-server-ver15
You can also trace to a file and then load it into a table (better performance), and script up starting, stopping, and loading traces.
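For example, loading a trace file into a table can be as simple as this (the file path and target table name are illustrative):

SELECT *
INTO dbo.TraceResults
FROM sys.fn_trace_gettable(N'D:\Traces\AuditTrace.trc', DEFAULT);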
The load on the server that is getting the trace log is minimal and I have never had a locking problem on the server receiving the trace, so I am pretty sure that you are doing something to cause the locks.

How many SQL jobs can SQL Server handle?

I am creating a database for a medical system, and I have reached the point where I am building a notification feature. I will use a SQL job for it: the job's responsibility is to check some tables; the entities it finds that need to be notified of a change in certain data will have their ids put into an entity called Notification, and a trigger will then be fired for the app to check that table and send the notification.
What I want to ask is: how many SQL jobs can a SQL Server instance handle?
Does the number of SQL jobs running in the background affect the performance of my application or of the database in one way or another?
NOTE: the SQL job will run every 10 seconds
I couldn't find any useful information online.
thanks in advance.
This question really doesn't have enough background to get a definitive answer. What are the considerations?
Do the queries in your ten-second job actually complete in ten seconds, even when your DBMS is under its peak transactional workload? Obviously, if the job routinely doesn't complete in ten seconds, you'll get jobs piling up.
Do the queries in your job lock up tables and/or indexes so the transactional load can't run efficiently? (You should use SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; as much as you can so database reads won't lock things unnecessarily.)
Do the queries in your job do a lot of rows' worth of inserts and updates, and so swamp the SQL Server transaction logs?
How big is your server? (CPU cores? RAM? IO capacity?) How big is your database?
If your project succeeds and you get many users, will your answers to the above questions remain the same? (Hint: no.)
You should spend some time on the execution plans for the queries in your job, and try to make them as efficient as possible. Add the necessary indexes. If necessary refactor the queries to make them more efficient. SSMS will show you the execution plans and suggest appropriate indexes.
If your job is doing things like deleting expired rows, you may want to build the expiration into your data model. For example, suppose your job does
DELETE FROM readings WHERE expiration_date <= GETDATE()
and your application does this, relying on your job to avoid getting expired readings.
SELECT something FROM readings
You can refactor your application query to say
SELECT something FROM readings WHERE expiration_date > GETDATE()
and then run your job overnight, at a quiet time, rather than every ten seconds.
A ten-second job is not the greatest idea in the world. If you can rework your application so it will function correctly with a ten-second, ten-minute, or twelve-hour job, you'll have a more resilient production system. At any rate if something goes wrong with the job when your system is very busy you'll have more than ten seconds to fix it.

How to change the retention period for CDC or put a condition on it (SQL Server 2012)

I'm in a SQL Server 2012 environment, and I have CDC enabled on a table. To sum, the table is used to populate a data warehouse (basically, the content of the table is duplicated elsewhere).
However, I just noticed that we are losing a lot of data because CDC was enabled with the default retention of 3 days. I know how to write a query that enables CDC with a NEW/DIFFERENT retention period, but that's not the task: I must be able to change the current CDC setting. So, is there a way to ALTER the retention period?
Also, is there a way to prevent CDC from deleting expired records at all? Meaning, is there a way to implement a flag that prevents CDC from deleting any record UNLESS that record has already been transferred/moved?
Thank you very much!
sys.sp_cdc_change_job is what you're looking for to change the retention.
As to your second question, there is no way for CDC to know what rows you've processed and not. So, you have to set your retention period such that you have enough time to process the records that have accumulated since the ETL job last ran. The typical workflow that uses CDC runs on a regular periodic basis (e.g. daily, weekly). So I smell something odd when you say that you're losing data (unless the current retention period is set lower than the time between ETL runs).
Specifically, the following command can be used (pursuant to the answer above):
DECLARE @RetentionTimeInMin int;
SET @RetentionTimeInMin = 10080; -- desired retention, in minutes (this value is just an example)
EXECUTE sys.sp_cdc_change_job
    @job_type = N'cleanup',
    @retention = @RetentionTimeInMin;
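You can verify the change afterwards; the CDC job settings are stored in msdb:

-- Run from the CDC-enabled database (or filter on the appropriate database_id)
SELECT job_type, retention, threshold
FROM msdb.dbo.cdc_jobs
WHERE database_id = DB_ID();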

SQL Server, find all rows that have been updated by a statement

Is there a way of finding all the rows that have been updated by a single statement? SQL Server itself must be tracking this, as it could roll back the update if required. I'm interested in finding all the changed rows because I'm getting a performance hit using update triggers.
I have some large (2M-10M row) tables in SQL Server, and I'm adding audit triggers to track when records are updated and by what; the trouble is that this is killing performance. Most of the updates against the table touch 20,000+ rows, and they're now taking 5-10 times longer than previously.
I've thought of some options
1) Ditch triggers entirely and add the audit fields to every update statement, but that relies on everyone's code being changed.
2) Use before/after checksum values on the fields and then use them to update the changed rows a second time, still a performance hit.
Has anyone else solved this problem?
An UPDATE trigger already has the records affected by an update statement in the inserted and deleted pseudo-tables. You can select their primary key columns into a preliminary audit table that serves as a queue, and move the more complicated calculations into a separate job.
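A minimal sketch of that queue-style trigger (the audit queue table and the key column Id are placeholders):

CREATE TRIGGER trg_BigTable_AuditQueue ON dbo.BigTable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Capture only the keys; heavier audit work happens later in a separate job
    INSERT INTO dbo.AuditQueue (Id, ChangedAt)
    SELECT i.Id, GETDATE()
    FROM inserted AS i;
END;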
Another option is the OUTPUT clause for the UPDATE statement, which was introduced in SQL Server 2005. (updated after comment by Philip Kelley)
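For example, something like this (the audit table and column names here are just placeholders, not part of the original question):

UPDATE dbo.BigTable
SET Amount = Amount * 1.1
OUTPUT inserted.Id, deleted.Amount, inserted.Amount, GETDATE()
    INTO dbo.BigTableAudit (Id, OldAmount, NewAmount, ChangedAt)
WHERE Region = 'West';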
SQL Server knows how to roll back because it has the transaction log. It is not something that you can find in the data tables.
You can try adding a timestamp (rowversion) column to your rows, then saving the "current" timestamp value and updating the rows; the changed rows should be all the rows with a timestamp greater than your saved "current" value. This will help you to find the changed rows, but not to find what has changed them.
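Something like this, assuming the table has a rowversion column named RowVer (both names are placeholders):

DECLARE @before binary(8);
SET @before = @@DBTS;        -- highest rowversion value used so far in this database
-- ... run the update statement you want to track ...
SELECT Id
FROM dbo.BigTable
WHERE RowVer > @before;      -- rows modified since the snapshot was taken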
You can use Change Tracking or Change Data Capture. These are technologies built into the Engine for tracking changes and are leveraging the Replication infrastructure (log reader or table triggers). Both are only available in SQL Server 2008 or 2008 R2 and CDC requires Enterprise Edition licensing.
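For illustration, a minimal Change Tracking setup might look like this (database and table names are placeholders, and this assumes Id is the primary key):

ALTER DATABASE MyAppDb
SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

ALTER TABLE dbo.BigTable ENABLE CHANGE_TRACKING;

-- Later: which rows changed since the version your process recorded last time?
DECLARE @last_sync_version bigint;
SET @last_sync_version = 0;   -- assumption: persisted somewhere between runs
SELECT ct.Id, ct.SYS_CHANGE_OPERATION
FROM CHANGETABLE(CHANGES dbo.BigTable, @last_sync_version) AS ct;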
Anything else you'd try to do would ultimately boil down to either one of:
reading the log for changes (which is only doable by Replication, including Change Data Capture, otherwise the Engine will recycle the log before you can read it)
track changes in triggers (which is what Change Tracking would use)
track changes in application
There just isn't any free lunch. If audit is a requirement, then the overhead of auditing has to be taken into consideration and capacity planning must be done accordingly. All data-audit solutions induce significant overhead, so an increase in operating cost by a factor of 2x, 4x, or even 10x is not unheard of.

How do I ensure SQL Server replication is running?

I have two SQL Server 2005 instances that are geographically separated. Important databases are replicated from the primary location to the secondary using transactional replication.
I'm looking for a way that I can monitor this replication and be alerted immediately if it fails.
We've had occasions in the past where the network connection between the two instances has gone down for a period of time. Because replication couldn't occur and we didn't know, the transaction log blew out and filled the disk causing an outage on the primary database as well.
My google searching some time ago led to us monitoring the MSrepl_errors table and alerting when there were any entries but this simply doesn't work. The last time replication failed (last night hence the question), errors only hit that table when it was restarted.
Does anyone else monitor replication and how do you do it?
Just a little bit of extra information:
It seems that last night the problem was that the Log Reader Agent died and didn't start up again. I believe this agent is responsible for reading the transaction log and putting records in the distribution database so they can be replicated on the secondary site.
As this agent runs inside SQL Server, we can't simply make sure a process is running in Windows.
We have emails sent to us for Merge Replication failures. I have not used Transactional Replication but I imagine you can set up similar alerts.
The easiest way is to set it up through Replication Monitor.
Go to Replication Monitor and select a particular publication. Then select the Warnings and Agents tab and then configure the particular alert you want to use. In our case it is Replication: Agent Failure.
For this alert, we have the Response set up to Execute a Job that sends an email. The job can also do some work to include details of what failed, etc.
This works well enough for alerting us to the problem so that we can fix it right away.
You could run a regular check that data changes are taking place, though this could be complex depending on your application.
If you have some form of audit trail table that is very regularly updated (e.g. our main product has a base audit table that lists all actions that result in data being updated or deleted) then you could query that table on both servers and make sure the results you get back are the same. Something like:
SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*))
FROM audit_base
WHERE action_timestamp BETWEEN <time1> AND <time2>
where <time1> and <time2> are round values to allow for different delays in contacting the databases. For instance, if you are checking at ten past the hour you might check items from the start of the last hour to the start of this hour. You now have two small values that you can transmit somewhere and compare. If they are different then something has most likely gone wrong in the replication process - have whatever process does the check/comparison send you a mail and an SMS so you know to check and fix any problem that needs attention.
By using SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) the amount of data returned for each table is very, very small, so the bandwidth used by the checks will be insignificant. You just need to make sure that your checks are not too expensive in the load they apply to the servers, and that you don't check data that might be part of open replication transactions and so might be expected to differ at that moment (hence checking the audit trail a few minutes back in time instead of right now in my example); otherwise you'll get too many false alarms.
Depending on your database structure the above might be impractical. For tables that see updates or deletes within the timeframe of your check (i.e. are not insert-only like the audit trail above), working out what can safely be compared while avoiding false alarms is likely to be both complex and expensive, if not actually impossible to do reliably.
You could manufacture a rolling insert-only table if you do not already have one, by having a small table (containing just an indexed timestamp column) to which you add one row regularly - this data serves no purpose other than to exist so you can check that updates to the table are getting replicated. You can delete data older than your checking window, so the table shouldn't grow large. Testing only one table does not prove that all the other tables are replicating, but finding an error in this one table would be a good "canary" check (if this table isn't updating in the replica, then the others probably aren't either).
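A minimal sketch of such a canary table and its checks (all names and thresholds are illustrative, and the table would of course need to be included in the publication):

-- On the publisher:
CREATE TABLE dbo.ReplicationCanary (inserted_at datetime NOT NULL);
CREATE INDEX IX_ReplicationCanary_inserted_at ON dbo.ReplicationCanary (inserted_at);

-- A publisher-side job inserts a heartbeat row every minute or so...
INSERT INTO dbo.ReplicationCanary (inserted_at) VALUES (GETDATE());
-- ...and trims old rows so the table stays tiny:
DELETE FROM dbo.ReplicationCanary WHERE inserted_at < DATEADD(DAY, -1, GETDATE());

-- A check run against the subscriber alerts if the newest replicated row is stale:
IF DATEDIFF(MINUTE, (SELECT MAX(inserted_at) FROM dbo.ReplicationCanary), GETDATE()) > 10
    RAISERROR('Replication canary has not been updated recently.', 16, 1);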
This sort of check has the advantage of being independent of the replication process - you are not waiting for the replication process to record exceptions in logs, you are instead proactively testing some of the actual data.
