SQL Server Subscription Initialization Restarts Endlessly, Never Finishes - sql-server

I'm trying to set up transactional pull replication on two SQL Server 2005 instances, with a third instance acting as the distributor. When the subscription is initialized, the bulk inserts run properly and the message that the snapshot was successfully loaded appears. The primary key indexes are then created as usual.
At this point the job starts over, dropping all the tables and bulk inserting again. It loops endlessly and never finishes, until the snapshot expires and a new one has to be generated. I need help diagnosing this problem; I have checked all the error logs I know of and didn't see anything of relevance.

Check to see if there are any tables with corrupted primary keys in the publication. I have seen instances where that causes SQL Server transactional replication to behave in bizarre ways.
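One way to look for that kind of corruption is a consistency check, as a sketch (the database and table names here are hypothetical, and CHECKDB can be I/O heavy, so run it in a maintenance window):

-- Check the published database for corruption:
DBCC CHECKDB (N'MyPublishedDb') WITH NO_INFOMSGS, ALL_ERRORMSGS;

-- Or check a single suspect published table:
DBCC CHECKTABLE (N'dbo.MyPublishedTable') WITH NO_INFOMSGS;

Any errors reported against the published tables would be a good reason to rebuild the affected indexes before re-initializing the subscription.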

Related

Why is a global temporary table deleted on my SQL Server?

We use global temporary tables in our program, which is written in C++ and uses ATL as the DB interface. Sessions are established with the SQL OLE DB provider.
These global temporary tables are held for a long time, possibly for the entire lifetime of a session. We explicitly delete each temporary table when the specific action/activity ends, so we always clean up the tables.
Now we see, for users on a slow or unstable VPN connection, that the global temporary table gets deleted. A query that should read some content returns an error:
##tblTemp... is not a valid object name
To me this is an indicator that SQL Server terminated the session.
But how can that be? Our program has internal functions that access the server at least every 5 minutes (even if the user is inactive). Usually SQL Server is accessed much more frequently, although the program may be minimized in the background.
Which timeout causes SQL Server to terminate a session and delete the temporary tables?
I see the Remote Query Timeout in the server settings, but that seems wrong to me, because we have no open query here; the queries against the table are also very simple: insert a record, delete a record.
Questions:
Where do I find the settings for this session timeout?
Is there a way for the client to find out that the session was terminated? What is strange to me is that the SQL query itself was transferred to SQL Server and only failed there because the temporary table no longer existed; we got a different error on the client.
Is there a way to log this on the server?
EDIT:
Here are more details on how we work with these tables.
The tables are created in my main thread. This thread has a SQL session that is created at program start and ends when the program ends.
Other threads use the temporary tables; we pass the table names to them.
The fact that the creating SQL session doesn't show an error when executing a statement that uses the temporary table tells me that the session is still alive. But my problem is that the object seems to have been deleted.
Again: we only have this problem on machines with a slow/bad VPN connection to the server!
To quote the manual:
Global temporary tables are automatically dropped when the session that created the table ends and all other tasks have stopped referencing them. The association between a task and a table is maintained only for the life of a single Transact-SQL statement. This means that a global temporary table is dropped at the completion of the last Transact-SQL statement that was actively referencing the table when the creating session ended.
source
So it's not about the server being accessed every few minutes, but about the specific object being referenced.
If the session that created the global temporary table ends (e.g. due to a timeout) and nothing else is actively referencing the table, it is dropped!
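If you cannot guarantee the creating session stays alive, one defensive option is to check for the table before using it and recreate it if it has gone. A minimal sketch (##tblTemp is the name from the question; the column list is purely illustrative):

-- Check whether the global temp table still exists in tempdb:
IF OBJECT_ID('tempdb..##tblTemp') IS NULL
BEGIN
    -- The creating session has probably ended; recreate the table
    -- (hypothetical columns, replace with your real definition):
    CREATE TABLE ##tblTemp (Id INT NOT NULL, Payload NVARCHAR(100) NULL);
END;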

Disable transactions on SQL Server

I need some light here. I am working with SQL Server 2008.
I have a database for my application. Each table has a trigger that stores all changes in another database (on the same server), in one single table, 'tbSysMasterLog'. Yes, the application's log is stored in another database.
The problem is that before any insert/update/delete command on the application database, a transaction is started, and therefore the table in the log database is locked until the transaction is committed or rolled back. So anyone else who tries to write to any other table of the application is blocked.
So... is there any possible way to disable transactions on a particular database or on a particular table?
You cannot turn off the log; everything gets logged. You can set the recovery model to Simple, which will limit the amount of log data kept after the records are committed.
" the table of the log database is locked": why that?
Normally you log changes by inserting records. The insert of records should not lock the complete table, normally there should not be any contention in insertion.
If you do more than inserts, perhaps you should consider changing that. Perhaps you should look at the indices defined on log, perhaps you can avoid some of them.
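For comparison, a minimal insert-only audit trigger looks something like this sketch (only tbSysMasterLog comes from the question; the trigger, table, and column names are hypothetical):

CREATE TRIGGER trFoo_Audit ON dbo.Foo
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Nothing to do if the statement touched no rows:
    IF NOT EXISTS (SELECT 1 FROM inserted) AND NOT EXISTS (SELECT 1 FROM deleted)
        RETURN;
    -- A single insert into the log table; no explicit transaction is needed,
    -- the trigger already runs inside the statement's transaction:
    INSERT INTO LogDb.dbo.tbSysMasterLog (TableName, ChangeType, ChangedAt)
    SELECT 'dbo.Foo',
           CASE WHEN EXISTS (SELECT 1 FROM inserted) AND EXISTS (SELECT 1 FROM deleted) THEN 'U'
                WHEN EXISTS (SELECT 1 FROM inserted) THEN 'I'
                ELSE 'D'
           END,
           GETDATE();
END;

If your triggers do nothing more than an insert like this, they should not block writers on other tables.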
It sounds from the question like you have a BEGIN TRANSACTION at the start of your triggers, and that you are logging to the other database prior to the COMMIT TRANSACTION.
Normally you do not need explicit transactions in SQL Server.
If you do need explicit transactions, you could put the data to be logged into variables, commit the transaction, and then insert it into your log table.
Normally inserts are fast and can happen in parallel without locking. Certain things, like identity columns, require ordering, but they are very lightweight structures and can be avoided by generating GUIDs so that inserts are non-blocking. For something like your log table, though, a primary key identity column would give you a clear sequence, which is probably helpful in working out the order of events.
Obviously, if you log after the transaction, the log entries may not be in the same order in which the transactions occurred, because transactions take different amounts of time to commit.
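To make the "capture, commit, then log" suggestion concrete, here is a sketch of that pattern (table and column names other than tbSysMasterLog are hypothetical):

DECLARE @LogData TABLE (TableName SYSNAME, ChangeType CHAR(1), ChangedAt DATETIME);

BEGIN TRANSACTION;
    UPDATE dbo.Foo SET Bar = 1 WHERE Id = 42;                  -- the real work
    INSERT INTO @LogData VALUES ('dbo.Foo', 'U', GETDATE());   -- capture what to log
COMMIT TRANSACTION;

-- Log outside the transaction, so tbSysMasterLog is never held locked by it:
INSERT INTO LogDb.dbo.tbSysMasterLog (TableName, ChangeType, ChangedAt)
SELECT TableName, ChangeType, ChangedAt FROM @LogData;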
We normally log into individual tables with a similar name to the master table e.g. FooHistory or AuditFoo
There are other options. A very lightweight method is to use a trace; this is what is used for performance tuning, and it will give you a copy of every statement run on the database (including triggers), which you can log to a different database server. It is a good idea to log to a different server if you are tracing a heavily used server, since the volume of data is massive if you trace, say, 1,000 simultaneous sessions.
https://learn.microsoft.com/en-us/sql/tools/sql-server-profiler/save-trace-results-to-a-table-sql-server-profiler?view=sql-server-ver15
You can also trace to a file and then load it into a table (better performance), and script the starting, stopping, and loading of traces.
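Loading a completed trace file back into a table can be as simple as this sketch (the file path and target table name are hypothetical):

-- Import a trace file into a table for querying:
SELECT *
INTO dbo.ImportedTrace
FROM fn_trace_gettable(N'D:\Traces\app_trace.trc', DEFAULT);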
The load on the server receiving the trace log is minimal, and I have never had a locking problem on the server receiving the trace, so I am pretty sure you are doing something that causes the locks.

How do I ensure SQL Server replication is running?

I have two SQL Server 2005 instances that are geographically separated. Important databases are replicated from the primary location to the secondary using transactional replication.
I'm looking for a way that I can monitor this replication and be alerted immediately if it fails.
We've had occasions in the past where the network connection between the two instances went down for a period of time. Because replication couldn't occur and we didn't know about it, the transaction log blew out and filled the disk, causing an outage on the primary database as well.
My Google searching some time ago led us to monitor the MSrepl_errors table and alert when there were any entries, but this simply doesn't work. The last time replication failed (last night, hence the question), errors only hit that table when it was restarted.
Does anyone else monitor replication and how do you do it?
Just a little bit of extra information:
It seems that last night the problem was that the Log Reader Agent died and didn't start up again. I believe this agent is responsible for reading the transaction log and putting records in the distribution database so they can be replicated on the secondary site.
As this agent runs inside SQL Server, we can't simply make sure a process is running in Windows.
We have emails sent to us for Merge Replication failures. I have not used Transactional Replication but I imagine you can set up similar alerts.
The easiest way is to set it up through Replication Monitor.
Go to Replication Monitor and select a particular publication. Then select the Warnings and Agents tab and then configure the particular alert you want to use. In our case it is Replication: Agent Failure.
For this alert, we have the Response set up to Execute a Job that sends an email. The job can also do some work to include details of what failed, etc.
This works well enough for alerting us to the problem so that we can fix it right away.
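If you prefer scripting it rather than clicking through Replication Monitor, a rough T-SQL equivalent is below. This is only a sketch: the job name is hypothetical, a predefined "Replication: agent failure" alert may already exist on your server, and message 14151 is, as far as I recall, the error raised when a replication agent fails.

USE msdb;
EXEC dbo.sp_add_alert
    @name = N'Replication: agent failure (custom)',
    @message_id = 14151,                              -- replication agent failure message
    @enabled = 1,
    @job_name = N'Notify replication failure';        -- existing job that sends the email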
You could run a regular check that data changes are taking place, though this could be complex depending on your application.
If you have some form of audit trail table that is very regularly updated (e.g. our main product has a base audit table that lists all actions that result in data being updated or deleted), then you could query that table on both servers and make sure the results you get back are the same. Something like:
SELECT CHECKSUM_AGG(CHECKSUM(*))
FROM audit_base
WHERE action_timestamp BETWEEN <time1> AND <time2>
where <time1> and <time2> are rounded values to allow for different delays in contacting the databases. For instance, if you are checking at ten past the hour, you might check items from the start of the previous hour to the start of this hour. You now have two small values that you can transmit somewhere and compare. If they are different, then something has most likely gone wrong in the replication process; have whatever process does the check/comparison send you a mail and an SMS so you know to check and fix any problem that needs attention.
By using SELECT CHECKSUM_AGG(CHECKSUM(*)) the amount of data for each table is very small, so the bandwidth used by the checks will be insignificant. You just need to make sure your checks are not too expensive in the load they apply to the servers, and that you don't check data that might be part of open replication transactions and so might be expected to differ at that moment (hence checking the audit trail a few minutes back in time instead of right now in my example); otherwise you'll get too many false alarms.
Depending on your database structure, the above might be impractical. For tables that are not insert-only within the timeframe of your check (i.e. that also see updates or deletes, unlike an audit trail as above), working out what can safely be compared while avoiding false alarms is likely to be both complex and expensive, if not actually impossible to do reliably.
You could manufacture a rolling insert-only table if you do not already have one, by having a small table (containing just an indexed timestamp column) to which you add one row regularly; this data serves no purpose other than to exist so you can check that updates to the table are getting replicated. You can delete data older than your checking window, so the table shouldn't grow large. Testing only one table does not prove that all the other tables are replicating, but finding an error in this one table would be a good "canary" check (if this table isn't updating in the replica, then the others probably aren't either). A sketch of such a table is shown below.
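Here is a sketch of that canary table and the checks around it (all object names are hypothetical):

-- On the publisher: a tiny table whose only job is to receive regular rows.
CREATE TABLE dbo.ReplicationHeartbeat (
    beat_at DATETIME NOT NULL CONSTRAINT DF_ReplicationHeartbeat DEFAULT (GETUTCDATE())
);
CREATE INDEX IX_ReplicationHeartbeat_beat_at ON dbo.ReplicationHeartbeat (beat_at);

-- Run on a schedule (e.g. a SQL Agent job every minute) on the publisher:
INSERT INTO dbo.ReplicationHeartbeat DEFAULT VALUES;
DELETE FROM dbo.ReplicationHeartbeat WHERE beat_at < DATEADD(HOUR, -2, GETUTCDATE());

-- On the subscriber: alert if no recent heartbeat row has arrived.
IF NOT EXISTS (SELECT 1 FROM dbo.ReplicationHeartbeat
               WHERE beat_at > DATEADD(MINUTE, -10, GETUTCDATE()))
    RAISERROR('No replicated heartbeat rows in the last 10 minutes.', 16, 1);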
This sort of check has the advantage of being independent of the replication process - you are not waiting for the replication process to record exceptions in logs, you are instead proactively testing some of the actual data.

SQL Server Merge Replication Problems

I have merge replication set up on a CRM system. Sales reps' data merges when they connect to the network (I think when SQL Server detects that the notebooks are connected), and then they take the laptops away and merge again when they come back (there are about 6 laptops in total merging via 1 server).
The system seems fine when initially set up, but it almost grinds to a halt after about a month, with the merge job taking nearly 2 hours to run per user, even though the server is not struggling in any way.
If I delete the whole publication and recreate all the subscriptions, it seems to work fine until about another month passes, and then I am back to the same problem.
The database is poorly designed, with a lack of primary keys/indexes etc., but the largest table only has about 3000 rows in it.
Does anyone know why this might be happening, and whether there is a risk of losing data when deleting and recreating the publication?
The problem was the metadata created by SQL Server replication. There is an overnight job that empties and refills a 3000-row table, which causes replication to replicate all of these rows each day.
The subscriptions were set to never expire, which means the old metadata was never being deleted by SQL Server.
I have set the subscription period to 7 days now, in the hope that the metadata will be cleaned up after this period. I did some testing and proved that changes were not lost if a subscription expired, but any updates on the server took priority over the client.
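For reference, the retention period can also be changed on an existing merge publication with something like this sketch (the publication name is hypothetical, and depending on the change SQL Server may require @force_reactivate_subscription = 1):

-- Set the merge publication retention to 7 days so expired metadata gets cleaned up:
EXEC sp_changemergepublication
    @publication = N'CRMPublication',
    @property = N'retention',
    @value = N'7';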
I encountered "Waiting 60 second(s) before polling for further changes" recently on 2008 R2.
Replication Monitor showed the replication as "In progress", but only step 1 (Initialization) and step 2 (Schema changes and bulk inserts) were performed.
I was very puzzled as to why the other steps were not executed.
The reason was simple: it seems that merge replication requires the TCP/IP (and/or, I'm not sure, Named Pipes) protocol to be enabled.
No errors were reported.
A similar problem (some sort of connection problem) probably became apparent in Ryan Stephens' case.
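A quick way to check which protocol a given connection is actually using is a sketch like this:

-- Shows whether the current connection came in over TCP, Named Pipes or Shared Memory:
SELECT session_id, net_transport, client_net_address
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;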

SQL Server transactional replication for very large tables

I have set up transactional replication between two SQL Servers on different ends of a relatively slow VPN connection. The setup is your standard "load snapshot immediately" kind of thing, where the first thing it does after initializing the subscription is to drop and recreate all tables on the subscriber side and then start a BCP of all the data. The problem is that there are a few tables with several million rows in them, and the process either a) takes a REALLY long time or b) just flat out fails. The messages I keep getting when I look in Replication Monitor are:
The process is running and is waiting for a response from the server.
Query timeout expired
Initializing
It then tries to restart the bulk loading process (skipping any BCP files that it has already loaded).
I am currently stuck where it just keeps doing this over and over again. It's been running for a couple days now.
My questions are:
Is there something I could do to improve this situation given that the network connection is so slow? Maybe some setting or something? I don't mind waiting a long time as long as the process doesn't keep timing out.
Is there a better way to do this? Perhaps make a backup, zip it, copy it over and then restore it? If so, how would the replication process know where to pick up when it starts applying the transactions, since updates will be occurring between the time I make the backup and the time it is restored and running on the other side?
Yes.
You can apply the initial snapshot manually.
It's been a while for me, but the link (into BOL) has alternatives to setting up the subscriber.
Edit: From BOL How-tos, Initialize a Transactional Subscriber from a Backup
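The core of that approach, as a sketch (the publication, server, database, and backup path here are hypothetical, and the publication must allow initialization from backup):

-- Run at the publisher, on the publication database, after restoring the backup at the subscriber:
EXEC sp_addsubscription
    @publication = N'MyPublication',
    @subscriber = N'SubscriberServer',
    @destination_db = N'MyDatabase',
    @sync_type = N'initialize with backup',
    @backupdevicetype = N'disk',
    @backupdevicename = N'\\SubscriberServer\Backups\MyDatabase.bak';

Because the publisher reads the log sequence number from the backup, the Distribution Agent knows to apply only the transactions that occurred after the backup was taken, which answers the "where does it pick up" part of the question.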
In SQL 2005, you have a compressed snapshot option that allows you to reduce the total size of the snapshot. When applied over a network, snapshot items travel compressed to the subscriber, where they are then expanded.
I think you can easily estimate the potential speed gain by comparing the sizes of the standard and compressed snapshots.
By the way, there is a (quite) similar question here for merge replication, but I think that at the snapshot level there is no difference.
