Whenever I try to back up a database, it goes until 90% and gets stuck there until I manually kill the msftesql process (it doesn't stop if I try to stop it normally).
That suggests there is a conflict between the full-text indexing and the backup process.
So, have you seen anything like this? If not, how would you go about debugging this problem?
The first and obvious debug point is to disable full-text indexing and try backing up the database again. If it does back up, then you know that FTS is the problem. If it doesn't, then you have another issue to find.
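If you go that route, you don't have to drop the catalogs entirely; you can just pause population for the duration of the backup. A minimal sketch, assuming SQL Server 2005 syntax; the database, table, and path names here are placeholders:

-- halt any in-flight full-text crawl before backing up
ALTER FULLTEXT INDEX ON dbo.MyTable STOP POPULATION;
BACKUP DATABASE MyDatabase TO DISK = 'C:\Backups\MyDatabase.bak' WITH INIT;
-- kick off a repopulation afterwards
ALTER FULLTEXT INDEX ON dbo.MyTable START FULL POPULATION;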
I would also check both the SQL Logs and the Event Viewer to see if any useful information is there.
Finally, if you have actual, physical access to the server during the backup, listen and see if the disk is making any funny noises during the backup process to indicate a disk failure of some sort.
I can say that I've never had FTS stop a backup from happening, but that doesn't mean it couldn't happen.
What time do you have the job that refreshes the full-text indexes running? Perhaps it is trying to repopulate those indexes at the same time the backup is running.
I have the same problem.
Activity Monitor shows that the backup job has a wait type of MSSEARCH.
The index is populated manually; when run, it now hangs for days on end until I forcibly stop it or the service is restarted. It used to take minutes to populate.
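In case it helps anyone else hitting this, the wait can also be confirmed from T-SQL rather than Activity Monitor; this is a generic SQL 2005+ DMV query, nothing specific to this setup:

SELECT session_id, command, wait_type, wait_time, percent_complete
FROM sys.dm_exec_requests
WHERE command LIKE 'BACKUP%';  -- shows what the backup is waiting on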
Related
I am shipping transaction logs to another database that we will be using for certain reports that don't need real-time data. This works fine until we start directing traffic to it; then it lasts for a day or so and just gets stuck in a Restoring state.
I am shipping transaction logs every 15 minutes from the production server, and on the standby server I have it set to Standby mode with 'Disconnect users in the database when restoring backups' set to true (if I don't do this, the restore can be put off by a day or more; I am OK with killing the active sessions). Also on the standby server I have LSRestore set to run every 10 minutes.
The problem is that I don't know what is causing the database to hang, nor do I know where I can even look to get some diagnostics that may tell me something.
Does anyone know where I can look?
It sounds like the secondary cannot keep up with processing the restores of the t-log backups. I would try decreasing the interval of the backups, copies, and restores and see if you can find an optimal interval at which the secondary database does not stay in the restoring state. I see you already have it down to every 15 minutes. Also, check this article and use the T-SQL in it to dive down deep and see where the delays are occurring in more detail: https://www.sqlshack.com/monitor-transaction-log-shipping-using-t-sql-and-ssms/. I hope this helps.
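As a rough sketch of the kind of diagnostics that article digs into, the log shipping monitor tables in msdb on the secondary record the copy/restore latency (assuming a standard log shipping configuration):

SELECT secondary_database,
       last_copied_date,
       last_restored_date,
       last_restored_latency  -- minutes between the log backup and its restore
FROM msdb.dbo.log_shipping_monitor_secondary;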
I am looking into using log shipping in a SQL Server 2005 environment. The idea was to set up frequent log shipping to a secondary server. The intent: Use the secondary server to serve report queries, thereby offloading the primary db server.
I came across this on a sqlservercentral forum thread:
When you create the log shipping you have 2 choices. You can configure the restore log operation to be done with the norecovery or the standby option. If you use the norecovery option, you cannot issue SELECT statements against the database. If instead of norecovery you use the standby option, you can run SELECT queries on the database.
Bear in mind that with the standby option, when log file restores occur, users will be kicked out without warning by the restore process. Actually, when you configure log shipping with the standby option, you can also select between 2 choices: kill all processes in the secondary database and perform the log restore, or don't perform the log restore if the database is being used. Of course, if you select the second option, the restore operation might never run if someone opens a connection to the database and doesn't close it, so it is better to use the first option.
So my questions are:
Is the above true? Can you really not use log shipping in the way I intend?
If it is true, could someone explain why you cannot execute SELECT statements against a database while the transaction log is being restored?
EDIT:
The first question is a duplicate of this Server Fault question. But I would still like the second question answered: why is it not possible to execute SELECT statements while the transaction log is being restored?
could someone explain why you cannot execute SELECT statements against a database while the transaction log is being restored?
The short answer is that the RESTORE statement takes an exclusive lock on the database being restored.
For writes, I hope there is no need for me to explain why they are incompatible with a restore. Why does it not allow reads either? First of all, there is no way to know whether a session that has a lock on a database is going to do a read or a write. But even if that were possible, a restore (log or backup) is an operation that updates the data pages in the database directly. Since these updates go straight to the physical location (the page) and do not follow the logical hierarchy (metadata-partition-page-row), they would not honor possible intent locks held by other data readers, and thus could change structures as they are being read. A SELECT table scan following the page next/prev pointers would be thrown into disarray, resulting in a corrupted read.
Well yes and no.
You can do exactly what you wish to do, in that you may offload reporting workloads to a secondary server by configuring Log Shipping to a read only copy of a database. I have set this type of architecture up on a number of occasions previously and it works very well indeed.
The caveat is that in order to perform a restore of a transaction log backup file, there must be no other connections to the database in question. Hence the two choices: when the restore process runs, it will either fail, thereby prioritising user connections, or it will succeed by disconnecting all user connections in order to perform the restore.
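To make the two modes concrete, here is roughly what the restore job runs in each case; the file names and undo file path are placeholders:

-- NORECOVERY: the database stays inaccessible between restores
RESTORE LOG MyDatabase FROM DISK = 'C:\LogShip\MyDatabase_1230.trn' WITH NORECOVERY;
-- STANDBY: the database is readable between restores, but the restore
-- itself still needs exclusive access, hence the disconnect-users choice
RESTORE LOG MyDatabase FROM DISK = 'C:\LogShip\MyDatabase_1230.trn'
    WITH STANDBY = 'C:\LogShip\MyDatabase_undo.tuf';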
Depending on your restore frequency, this is not necessarily a problem. You simply educate your users to the fact that, say, every hour at ten past the hour there is a possibility that their report may fail. If this happens, they simply re-run the report.
EDIT: You may also want to evaluate alternative architecture solutions to your business need, for example Transactional Replication, or Database Mirroring with a Database Snapshot.
If you have the Enterprise edition, you can use database mirroring plus a database snapshot to create a read-only copy of the database, available for reporting, etc. Mirroring uses "continuous" log shipping under the hood. It is frequently used in the scenario you have described.
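A sketch of the snapshot side, assuming the mirror already exists; the logical file name and paths are placeholders, there must be one (NAME, FILENAME) pair per data file, and the snapshot has to be dropped and recreated to refresh the data:

-- on the mirror server: create a point-in-time, read-only copy for reporting
CREATE DATABASE MyDatabase_Report ON
    (NAME = MyDatabase_Data, FILENAME = 'C:\Snapshots\MyDatabase_Report.ss')
AS SNAPSHOT OF MyDatabase;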
Yes it's true.
I think the following happens:
While the transaction log is being restored, the database is locked, as large portions of it are being updated.
This is for performance reasons more than anything else.
I can see two options:
Use database mirroring.
Schedule the log shipping to only occur when the reporting system is not in use.
There is slight confusion here: the NORECOVERY flag on the restore means your database is not going to be brought out of a recovery state and into an online state. That is why the SELECT statements will not work; the database is offline. The NORECOVERY flag is there to allow you to restore multiple log files in a row (in a DR-type scenario) without bringing the database back online.
If you did not want to log ship or live with those disadvantages, you could swap to one-way transactional replication, but the overhead and setup will be more complex overall.
Would peer-to-peer replication work? Then you could run queries on one instance and so save the load on the original instance.
I have been told that SQL Profiler makes changes to the MSDB when it is run. Is this true and if so what changes does it make?
MORE INFO
The reason I ask is that we have a DBA who wants us to raise a change request when we run Profiler on a live server. Her argument is that it makes changes to the DBs, which should be change-controlled.
Starting a trace adds a row to msdb.sys.traces; stopping the trace removes the row. However, msdb.sys.traces is a view over an internal table-valued function and is not backed by any physical storage. To prove this, set msdb to read_only, start a trace, observe the new row in msdb.sys.traces, stop the trace, and remember to set msdb back to read_write. Since a trace can be started from Profiler even when msdb is read-only, it is clear that normally no write into msdb occurs.
Now, before you go and grin at your DBA: she is actually right. Profiler traces can put significant stress on a live system because the traced events must block until they can generate the trace record. Live, busy systems may experience blocking on resources of type SQLTRACE_BUFFER_FLUSH, SQLTRACE_LOCK, TRACEWRITE, and others. Live traces (Profiler) are usually worse; file traces (sp_trace_create) are better, but can still cause issues. So starting a new trace is definitely something the DBA should be informed about, and it should be considered very carefully.
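For reference, a minimal file trace of the kind sp_trace_create implies; the event/column IDs are from the documented trace catalog (event 10 = RPC:Completed, column 1 = TextData) and the output path is a placeholder:

DECLARE @TraceID int, @maxfilesize bigint, @on bit;
SELECT @maxfilesize = 50, @on = 1;             -- roll over at 50 MB
EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\mytrace', @maxfilesize;
EXEC sp_trace_setevent @TraceID, 10, 1, @on;   -- capture TextData for RPC:Completed
EXEC sp_trace_setstatus @TraceID, 1;           -- 1 = start
SELECT * FROM sys.traces WHERE id = @TraceID;  -- confirm it is running
-- sp_trace_setstatus @TraceID, 0 stops it; status 2 closes and deletes it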
The only changes I know of happen when you schedule a trace to gather periodic information; a job is added.
That's not the case as far as I'm aware (other than the trivial change noted by others).
What changes are you referring to?
Nothing I have ever read, heard, or seen says that SQL Profiler or anything it does or uses has any impact on the MSDB database. (SQL Profiler is, essentially, a GUI wrapped around the trace routines.) It is of course possible to configure a specific setup/implementation to do, well, anything, and perhaps that's what someone is thinking of.
This sounds like a kind of "urban legend". I recommend that you challenge it -- get the people who claim it to be true to provide proof.
I have set up transactional replication between two SQL Servers on different ends of a relatively slow VPN connection. The setup is your standard "load snapshot immediately" kind of thing where the first thing it does after initializing the subscription is to drop and recreate all tables on the subscriber side and then start doing a BCP of all the data. The problem is that there are a few tables with several million rows in them, and the process either a) takes a REALLY long time or b) just flat out fails. The messages I keep getting when I look in Replication Monitor are:
The process is running and is waiting for a response from the server.
Query timeout expired
Initializing
It then tries to restart the bulk loading process (skipping any BCP files that it has already loaded).
I am currently stuck where it just keeps doing this over and over again. It's been running for a couple days now.
My questions are:
Is there something I could do to improve this situation given that the network connection is so slow? Maybe some setting or something? I don't mind waiting a long time as long as the process doesn't keep timing out.
Is there a better way to do this? Perhaps make a backup, zip it, copy it over, and then restore? If so, how would the replication process know where to pick up when it starts applying the transactions, since updates will be occurring between the time I make the backup and the time it is restored and running on the other side?
Yes.
You can apply the initial snapshot manually.
It's been a while for me, but the link (into BOL) has alternatives to setting up the subscriber.
Edit: From BOL How-tos, Initialize a Transactional Subscriber from a Backup
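The BOL approach boils down to restoring your own backup on the subscriber and then creating the subscription with the initialize-with-backup sync type, so the Distribution Agent only delivers commands logged after the backup was taken. A rough sketch with placeholder names (note the publication must have allow_initialize_from_backup enabled):

-- run on the publisher, after restoring the backup on the subscriber
EXEC sp_addsubscription
    @publication = N'MyPublication',
    @subscriber = N'ReportServer',
    @destination_db = N'MyDatabase',
    @sync_type = N'initialize with backup',
    @backupdevicetype = N'disk',
    @backupdevicename = N'\\FileShare\MyDatabase.bak';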
In SQL 2005, there is a "compress snapshot" option that lets you reduce the total size of the snapshot. When applied over a network, snapshot items travel to the subscriber compressed and are expanded there.
I think you can easily gauge the potential speed gain by comparing the sizes of the standard and compressed snapshots.
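If the option isn't already on, it can be toggled per publication with sp_changepublication; the publication name here is a placeholder:

EXEC sp_changepublication
    @publication = N'MyPublication',
    @property = N'compress_snapshot',
    @value = N'true';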
By the way, there is a (quite) similar question here for merge replication, but I think that at the snapshot level there is no difference.
So our SQL Server 2000 is giving me the error, "The log file for database is full. Back up the transaction log for the database to free up some log space."
How do I go about fixing this without deleting the log like some other sites have mentioned?
Additional info: AutoGrowth is enabled, growing by 10% and restricted to 40 MB.
To just empty it:
backup log <dbname> with truncate_only
To save it somewhere:
backup log <dbname> to disk='c:\somefile.bak'
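Note that the backup only frees space inside the log file; to hand it back to the OS you still shrink the file afterwards. A sketch, where the logical log file name is a placeholder (on SQL 2000 you can look it up with SELECT name FROM sysfiles):

DBCC SHRINKFILE (MyDatabase_Log, 10)  -- target size in MB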
If you don't really need transactional history, try setting the database recovery model to Simple.
Scott, as you guessed: truncating the log is a bad move if you care about your data.
The following, free, videos will help you see exactly what's going on and will show you how to fix the problem without truncating the logs. (These videos also explain why that's such a dangerous hack and why you are right to look for another solution.)
SQL Server Backups Demystified
SQL Server Logging Essentials
Understanding Backup Options
Together these videos will help you understand exactly what's going on and will show you whether you want to switch to SIMPLE recovery, or look into actually changing your backup routines. There are also some additional 'how-to' videos that will show you exactly how to set up your backups to ensure availability while managing log file sizing and growth.
Either back up your database logs regularly (if you need to recover up to the minute, or want to do other fun stuff like log shipping in the future), or set the database to Simple mode and shrink the data file.
DO NOT copy, rename, or delete the .ldf file. This will break your database, and after you recover from it you may have data in an inconsistent state, making it invalid.
I don't think renaming or moving the log file will work while the database is online.
The easiest thing to do, IMO, is to open the properties for the database and switch it to the Simple recovery model, then shrink the database, and then go back and set the DB to the Full recovery model (or whatever model you need).
Changing the recovery model forces SQL Server to set a checkpoint in the database, after which shrinking the database will free up the excess space.
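The same steps in T-SQL, if you prefer a script to the properties dialog; the database and logical file names are placeholders:

ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;  -- checkpoint, log becomes truncatable
DBCC SHRINKFILE (MyDatabase_Log, 10);           -- release the excess space
ALTER DATABASE MyDatabase SET RECOVERY FULL;    -- back to whatever model you need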
My friend, who faced this error in the past, recommends:
Try
Backing up the DB. The maintenance plan includes truncation of these files.
Also try changing the 'recovery mode' for the DB to Simple (instead of Full for instance)
Cause:
The transaction log swells up due to events being logged (maybe you have a number of transactions failing and being rolled back, or a sudden peak in transactions on the server).
You may want to check related SO question:
How do you clear the transaction log in a SQL Server 2005 database?
Well you could take a copy of the transaction log, then truncate the log file, which is what the error message suggests.
If disk space is full and you can't copy the log to another machine over the network, then connect a drive via USB and copy it off that way.
You have the answer in your question: back up the log, and the space inside it will be freed so the file can be shrunk.
Create a maintenance plan to regularly back up the database, and don't forget to select "Backup the transaction log". That way you'll keep the log small.
If it's a non-production environment, use
dump tran <db_name> with no_log;
Once this has completed, shrink the log file to free up disk space. Finally, switch the database recovery model to Simple.
As soon as you take a full backup of a database that is not using the Simple recovery model, SQL Server keeps a complete record of all transactions performed on the database. It does this so that, in the event of a catastrophic failure where you lose the data file, you can restore to the point of failure: back up the tail of the log, restore an old data backup, and then restore the log to replay the lost transactions.
To prevent this building up, you must back up the transaction log regularly. Alternatively, you can break the chain at the current point using the TRUNCATE_ONLY or NO_LOG options of BACKUP LOG.
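For example, a scheduled job running something like the following keeps the chain intact while freeing log space for reuse (the path is a placeholder):

BACKUP LOG MyDatabase TO DISK = 'C:\Backups\MyDatabase_log.trn';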
If you don't need this feature, set the recovery model to Simple.
It is very important for a DBA to check the log file frequently, because if you don't pay attention to it, some day it will give you this error.
To avoid that, take backups periodically so that the log file does not fill up.
Other than this, the suggestions given above are quite right.
Rename it, e.g.:
old-log-16-09-08.log
Then SQL Server can use a new, empty one.