Log suspend reason unknown - Sybase

I am not a DBA but a programmer. Recently we have been getting a LOG SUSPEND issue daily on our production server. I am unable to pin down the scenario, as it is not reproducible on my local system.
A file that fails with log suspend when uploaded to production works fine when uploaded on my local system. Also, when the same file is uploaded again after some time, it works fine in production too.
I'm really confused as to why this is happening.

Log Suspend indicates that the transaction log is filling up and may not be properly sized for the transaction rate you are supporting. Have the DBA/System Administrator add additional log device space to the database that is having issues. If possible, you may also want to break up any large transactions, to lower the chance of filling the log.
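For illustration, the commands involved might look like the following sketch (the database and device names are placeholders, not taken from the question; the size unit is MB on classic ASE versions):
-- extend the log onto an existing log device (names and size are illustrative)
alter database proddb log on logdev1 = 500
go
-- emergency relief only, and only if up-to-the-second recovery is not required,
-- because it breaks the dump transaction chain:
dump transaction proddb with truncate_only
go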
As for a cause, it's very dependent on how the system is set up. First, check the database settings.
sp_helpdb will print out the list of databases on the server, as well as any options that may be set for each database.
If you don't see trunc log on chkpt, then the database is set up for maximum recoverability: the log space will only free up after the transaction log is dumped (backed up). This allows for up-to-the-second recovery in the event of a failure, at the expense of using more log space.
If you DO see trunc log on chkpt, then the database will automatically truncate the log after a checkpoint occurs in the database. Checkpoints are issued by the database itself as part of routine processing, but the command can also be issued manually. If this option is set and the database still goes into log suspend, then you may have a transaction that did not properly close (whether by committing or rolling back). You can check the master..syslogshold table to find long-running transactions, as sketched below.
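A minimal check along those lines (the syslogshold columns shown are the commonly useful ones):
-- list databases and their options; look for 'trunc log on chkpt'
sp_helpdb
go
-- oldest open transactions currently holding up log truncation
select dbid, spid, starttime, name
from master..syslogshold
order by starttime
go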
A third possibility is that, if the system is using SAP/Sybase Replication Server, there is actually a secondary truncation point used as part of the replication process. The server will not truncate the transaction log until a transaction has been read by the RepAgent process, so this too can cause a system to go into log suspend.
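If replication is in play, the state of that secondary truncation point can be inspected; I believe the dbcc command below is the usual tool, but verify it against your ASE version's documentation:
use proddb
go
-- show the secondary (replication) truncation point state for this database
dbcc gettrunc
go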

Related

Monitoring SQL Log File

After deploying a project on the client's machine, the SQL db log file has grown to 450 GB, although the db size is less than 100 MB. The recovery model is set to Simple, and the transactions are sent from a Windows service that issues insert and update transactions every 30 seconds.
My question is: how can I find the reason for the db log file growth?
I would like to know how to monitor the log file to find the exact transaction that causes the problem.
Should I debug the front end? Or is there a way to expose the transactions that cause the db log file growth?
Thank you.
Note that the simple recovery model does not allow for log backups, since it keeps the least amount of information and relies on CHECKPOINT; so if this is a critical database, consider protecting the client by using the FULL recovery model. Yes, you have to use more space, but disk space is cheap, and you gain point-in-time recovery and greater control over managing your log files. Trying to be concise:
A) Your database in Simple mode will only truncate transactions in your transaction log when a CHECKPOINT occurs.
B) Unfortunately, large or long-running uncommitted transactions, including BACKUP, creation of a SNAPSHOT, and LOG SCANs, among other things, will stop your database from creating those checkpoints, and your database will be left unprotected until those transactions are completed.
Your current system relies on having the right version of your .bak file which, depending on its age, may mean hours of potential loss.
In other words, the log is that ridiculous size because your database is not able to create a CHECKPOINT to truncate these transactions often enough.
A little note on log files:
Foremost, log files are not automatically truncated every time a transaction is committed (otherwise, you would only have the last committed transaction to go back to). Taking frequent log backups will ensure pertinent changes are kept (point in time), and DBCC SHRINKFILE will squeeze the log file down to the smallest size available/the size specified.
Use DBCC SQLPERF(LOGSPACE) to see how much of your log file is in use and how large it is. Once you perform a log backup, the log file will be truncated back to the remaining uncommitted/active transactions. (Do not confuse this with shrinking the file's size.)
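As a minimal sketch of that sequence, assuming the database has been switched to FULL recovery (the database name and backup path are placeholders):
-- log size and percentage in use for every database on the instance
DBCC SQLPERF(LOGSPACE);
-- a log backup marks the inactive portion of the log as reusable
BACKUP LOG ClientDb TO DISK = N'D:\Backups\ClientDb_log.trn';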
Some suggestions on researching your transactions:
You can use the system tables and DMVs to see the most expensive cached plans, the most frequent plans, and the currently active plans.
You can search the log file itself using an undocumented function, fn_dblog (see the sketch after this list).
Pinal Dave has great info on this topic that you can read at this webpage and link:
Beginning Reading Transaction Log
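For instance, a minimal fn_dblog query (the function is undocumented, so its columns may change between versions, and it can be expensive on a large log):
-- peek at recent log records to see which transactions generate volume
SELECT TOP (100)
       [Current LSN], Operation, Context, [Transaction ID], [Transaction Name]
FROM fn_dblog(NULL, NULL);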
A log file is text, and depending on your log levels and how many errors and messages you receive, these files can grow very quickly.
You need to rotate your logs with something like logrotate, although from your question it sounds like you're using Windows, so I'm not sure what the solution for that would be.
The basics of log rotation are taking daily/weekly versions of the logs, compressing them with gzip or similar, and discarding the uncompressed version.
As it is text with a lot of repetition, this will make the files very small in comparison and should solve your storage issues.
Log file space won't be reused if there is an open transaction. You can verify the reason log space is not being reused using the DMV below:
SELECT log_reuse_wait_desc, database_id FROM sys.databases;
In your case, your database is set to Simple and the database is 100 MB, but the log has grown to 450 GB, which is very large.
My theory is that there may be some open transactions which prevented log space reuse; the log file won't shrink back on its own once it has grown.
As of now, you can run the above DMV and see what is preventing log space reuse at this point; you can't go back in time to know what prevented it earlier.
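Building on that, a sketch of the follow-up checks and the eventual shrink (the database and logical log file names are hypothetical):
-- what is blocking log reuse right now?
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'ClientDb';
-- any open transactions holding the log?
DBCC OPENTRAN('ClientDb');
-- once the cause is resolved, reclaim the space (target size in MB is illustrative)
USE ClientDb;
DBCC SHRINKFILE (N'ClientDb_log', 1024);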

Table loading on Simple model still writes to log

I have a database on SQL Server 2012 Enterprise with the recovery model set to 'Simple'.
When data gets pushed into it and I check Resource Monitor on the server, I see that MyDB_dat.mdf gets written to at 20 MB/sec, and MyDB_log.ldf at 30 MB/sec.
Both files are on separate disks.
I drop all indexes except the clustered ones.
How can I prevent this I/O on the log file? The database is completely redundant, so I couldn't care less about the log.
You can't. In simple recovery mode you can still do BEGIN TRAN then COMMIT/ROLLBACK, and, more significantly, each statement is transactional, so everything has to be written to the log. The thing about simple recovery mode is that the log space is re-used as soon as the transaction (or statement) is complete; there's no waiting until a log backup has been done.
In simple mode, the log truncates when a checkpoint occurs. There is no way to write (or update) in SQL Server without writing to the log file. The number and types of indexes only affect how fast, potentially, SQL Server finds the relevant rows. You need a commit followed by a checkpoint (which happens automatically, or by having a script issue a CHECKPOINT command) for the log to truncate.
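A tiny illustration of that commit-then-checkpoint sequence (the table is hypothetical):
BEGIN TRAN;
UPDATE dbo.StagingTable SET Processed = 1;  -- logged even in SIMPLE mode
COMMIT;
CHECKPOINT;  -- after this, the log space used above becomes reusable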

Question about database transaction log

I read the following statement:
SQL Server doesn’t write data immediately to disk. It is kept in a buffer cache until this cache is full or until SQL Server issues a checkpoint, and then the data is written out. If a power failure occurs while the cache is still filling up, then that data is lost. Once the power comes back, though, SQL Server would start from its last checkpoint state, and any updates after the last checkpoint that were logged as successful transactions will be performed from the transaction log.
And a couple of questions arise:
What if the power failure happens after SQL Server issues a checkpoint and before the buffer cache is actually written to disk? Isn't the content of the buffer cache permanently missing?
The transaction log is also stored as a disk file, which is no different from the actual database file. So how can we guarantee the integrity of the log file?
So, is it true that no real transaction ever exists? It's only a matter of probability.
The statement is correct in that data can be written to cache, but it misses the vital point that SQL Server uses a technique called Write-Ahead Logging (WAL). The writes to the log are not cached, and a transaction is only considered complete once its transaction records have been written to the log.
http://msdn.microsoft.com/en-us/library/ms186259.aspx
In the event of a failure, the log is replayed as you mention, but the fact that the data pages were still in memory and not written to disk does not matter, since the log of their modification is stored and can be retrieved.
It is not true that there is no real transaction, but if you are operating in simple logging mode then the ability to replay is not there.
As for the integrity of the log file, the same applies as for the data file: a proper backup schedule and a proper restore-testing schedule. Do not just back up data/logs and assume they work.
What if the power failure happens after SQL Server issues a checkpoint and before the buffer cache is actually written to disk? Isn't the content in the buffer cache permanently missing?
The checkpoint start and end are different records on the transaction log.
The checkpoint is marked as succeeded only after the end-of-checkpoint record has been written into the log and the LSN of the oldest live transaction (including the checkpoint itself) is written into the database.
If the checkpoint fails to complete, recovery starts from the previous checkpoint's LSN, taking the data from the transaction log as necessary.
The transaction log is also stored as a disk file, which is no different from the actual database file. So how can we guarantee the integrity of the log file?
We can't. It's just that the data is stored in two places rather than one.
If someone steals your server with both data and log files on it, your transactions are lost.

Is it possible to have a secondary server available read-only in a log shipping scenario?

I am looking into using log shipping in a SQL Server 2005 environment. The idea is to set up frequent log shipping to a secondary server. The intent: use the secondary server to serve report queries, thereby offloading the primary db server.
I came across this on a sqlservercentral forum thread:
When you create the log shipping you have two choices. You can configure the restore log operation to be done with the NORECOVERY option or with the STANDBY option. If you use the NORECOVERY option, you cannot issue SELECT statements against the database. If instead of NORECOVERY you use the STANDBY option, you can run SELECT queries on the database.
Bear in mind that with the STANDBY option, when log file restores occur, users will be kicked out without warning by the restore process. Actually, when you configure log shipping with the STANDBY option, you can also select between two choices: kill all processes in the secondary database and perform the log restore, or don't perform the log restore if the database is being used. Of course, if you select the second option, the restore operation might never run if someone opens a connection to the database and doesn't close it, so it is better to use the first option.
So my questions are:
Is the above true? Can you really not use log shipping in the way I intend?
If it is true, could someone explain why you cannot execute SELECT statements against a database while its transaction log is being restored?
EDIT:
The first question is a duplicate of this ServerFault question. But I would still like the second question answered: why is it not possible to execute SELECT statements while the transaction log is being restored?
Could someone explain why you cannot execute SELECT statements against a database while the transaction log is being restored?
The short answer is that the RESTORE statement takes an exclusive lock on the database being restored.
For writes, I hope there is no need for me to explain why they are incompatible with a restore. Why does it not allow reads either? First of all, there is no way to know whether a session that holds a lock on a database is going to do a read or a write. But even if that were possible, a restore (log or backup) is an operation that updates the data pages in the database directly. Since these updates go straight to the physical location (the page) and do not follow the logical hierarchy (metadata-partition-page-row), they would not honor possible intent locks taken by other data readers, and thus could change structures while they are being read. A SELECT table scan following the page next-prev pointers would be thrown into disarray, resulting in a corrupted read.
Well yes and no.
You can do exactly what you wish to do, in that you may offload reporting workloads to a secondary server by configuring log shipping to a read-only copy of a database. I have set this type of architecture up on a number of occasions and it works very well indeed.
The caveat is that in order to restore a transaction log backup file there must be no other connections to the database in question. Hence the two choices: when the restore process runs, it will either fail, thereby prioritising user connections, or it will succeed by disconnecting all user connections in order to perform the restore.
Depending on your restore frequency this is not necessarily a problem. You simply educate your users to the fact that, say, every hour at 10 past the hour, there is a possibility that their report may fail. If this happens, they simply re-run the report.
EDIT: You may also want to evaluate alternative architecture solutions to your business need, for example Transactional Replication, or Database Mirroring with a Database Snapshot.
If you have the Enterprise edition, you can use database mirroring plus a database snapshot to create a read-only copy of the database, available for reporting, etc. Mirroring uses "continuous" log shipping "under the hood". It is frequently used in the scenario you have described.
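A sketch of the snapshot half of that approach (the logical file name and paths are hypothetical, and the snapshot must be dropped and recreated whenever the reporting data should be refreshed):
-- on the mirror, expose a static, point-in-time, read-only view of the database
CREATE DATABASE ReportDb_Snapshot
ON (NAME = ReportDb_Data, FILENAME = N'D:\Snapshots\ReportDb.ss')
AS SNAPSHOT OF ReportDb;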
Yes it's true.
I think the following happens:
While the transaction log is being restored, the database is locked, as large portions of it are being updated.
This is for performance reasons more than anything else.
I can see two options:
Use database mirroring.
Schedule the log shipping to only occur when the reporting system is not in use.
Slight confusion in that the NORECOVERY flag on the restore means your database is not going to be brought out of a recovery state and into an online state; that is why the SELECT statements will not work: the database is offline. The NORECOVERY flag is there to allow you to restore multiple log files in a row (in a DR-type scenario) without bringing the database back online.
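To make the two modes concrete, a sketch of each restore (the database name and paths are placeholders):
-- NORECOVERY: the db stays in the RESTORING state; no SELECTs are possible,
-- but further log restores can follow
RESTORE LOG ReportDb
FROM DISK = N'\\share\logship\ReportDb_0800.trn'
WITH NORECOVERY;
-- STANDBY: the db is read-only between restores; uncommitted work is undone
-- into the undo file so the next restore can reapply it
RESTORE LOG ReportDb
FROM DISK = N'\\share\logship\ReportDb_0900.trn'
WITH STANDBY = N'D:\MSSQL\ReportDb_undo.tuf';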
If you do not want to log ship and live with those disadvantages, you could swap to one-way transactional replication, but the overhead and set-up will be more complex overall.
Would peer-to-peer replication work? Then you can run queries on one instance and so save the load on the original instance.

Does the Full Recovery Model Generate Additional Transaction Logs?

I read some Books Online material about recovery/backup. One stupid question: if I use a full database backup with the full recovery model, will the backup operation itself generate any additional transaction log on the source database server? Will a full recovery operation generate additional transaction log on the destination database?
A more useful view of this might be to say that Full Recovery prevents the contents of the transaction log from being overwritten unless some other action has marked them as available to be overwritten.
SQL Server will log most transactions (e.g. bulk load and a few others aside) and, when running in simple recovery mode, it effectively discards the newly created log contents at the end of the transaction that created them. When running in Full Recovery mode, the contents of the transaction log are retained until marked as available to be overwritten. To mark them as available, one normally performs a transaction log backup.
If there is no space in the transaction log and no log contents are marked as available to be overwritten, then SQL Server will attempt to increase the size of the log.
In practical terms, Full Recovery requires you to manage your transaction logs, generally by performing a transaction log backup every so often (every hour is probably a good rule of thumb if you have no SLA to work to or other driver to determine how often to do this).
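A minimal example of such a scheduled log backup (the database name and path are placeholders):
-- run on a schedule (e.g. hourly); each log backup frees the inactive
-- portion of the log for reuse
BACKUP LOG ProdDb
TO DISK = N'E:\LogBackups\ProdDb_log.trn';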
I'm not sure I completely understand your question, but here goes. Keeping your DB in Full Recovery mode can make your transaction logs grow very large. The trade-off is that you gain point-in-time recovery.
The reason the transaction logs are larger than normal is that ALL transactions are fully logged. This can include bulk-logged operations, index creation, etc.
If drive space is not a concern (and with drives being so inexpensive, it shouldn't be), this is the recommended approach.
