I'm running an integration flow whose processing actions are on hold due to the following error:
com.sybase.jdbc4.jdbc.SybSQLWarning: The transaction log in database <database_name> is almost full. Your transaction is being suspended until space is made available in the log.
How can I erase the log or increase its size?
Thank you
From my understanding this message is related to your SybSQL database; it is not related to HCI. So you should clear the database log.
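If losing point-in-time recoverability is acceptable, clearing the log in Sybase ASE looks roughly like this (a sketch, assuming you have dump permissions; <database_name> is the database from the error, and logdev1 is a placeholder device):

```sql
-- Truncate the inactive portion of the transaction log without backing it up
dump transaction <database_name> with truncate_only

-- Alternatively, increase the log by extending it onto a log device
-- (200 here means 200 MB; the device name is illustrative)
-- alter database <database_name> log on logdev1 = 200
```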
On the HCI side you cannot delete any log or influence log sizes. I had a quite similar request a while ago and clarified it with SAP Support: it is not possible to delete any log entries manually. In the meantime I also found that log messages are deleted automatically after 6 months.
I have a problem using the AWS Database Migration Service to implement transactional replication from SQL Server as a source database engine; any help is highly appreciated.
The 'safeguardPolicy' connection attribute defaults to 'RELY_ON_SQL_SERVER_REPLICATION_AGENT'. The tool will start mimicking a transaction in the database to prevent the log from being reused, so that it can read as many changes as possible from the active log.
But what is the intended behavior of these safeguard transactions? Will those sessions be stopped at some point? What is the mechanism to start such a transaction, run it for some time, and stop it?
The production databases I manage are in the Full recovery model, with log backups every half hour. The log grows to an enormous size because truncation cannot succeed while the safeguard transactions initiated by the DMS tool are open.
For now, the only solution to a full transaction log caused by this LOG_SCAN behavior of DMS is to stop the DMS tasks and run a manual truncation of the log to release the unused space. But that is not a real solution if we need to stop the replication every time the problem occurs, knowing that it will occur often.
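For what it's worth, this is how I confirm what is pinning the log at a given moment (the database name is a placeholder):

```sql
-- Shows why the transaction log cannot currently be truncated;
-- LOG_SCAN or REPLICATION here points at the DMS safeguard behavior
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyDatabase';
```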
Please share some internals about the tool if possible.
Thanks
I am not a DBA but a programmer. Recently we have been getting a LOG SUSPEND issue daily on our production system. I am unable to pin down the scenario, as it is not reproducible on my local system.
A file uploaded on production fails with log suspend, while the same file uploaded locally works fine. Also, when the same file is uploaded again after some time, it works fine in production too.
I'm really confused as to why this is happening.
Log Suspend indicates that the transaction log is filling up and may not be properly sized for the transaction rate you are supporting. Have the DBA/system administrator add additional log device space to the database that is having issues. If possible, you may also want to break up any large transactions to lower the chance of filling the log.
As for a cause, it's very dependent on how the system is set up. First check the database settings.
sp_helpdb will print out the list of databases on the server, as well as any options that may be set for each database.
If you don't see trunc log on chkpt, then the database is set up for maximum recoverability: log space will only free up after the transaction log is dumped (backed up). This allows for up-to-the-second recovery in the event of a failure, at the expense of using more log space.
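In that case, freeing log space without losing recoverability looks roughly like this in Sybase ASE (the database name and dump device are placeholders):

```sql
-- Back up the transaction log to a dump device; this truncates the
-- inactive portion of the log and frees space in it
dump transaction my_db to "/backups/my_db_tlog.dmp"
```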
If you DO see trunc log on chkpt, then the database will automatically truncate the log after a checkpoint occurs. Checkpoints are issued by the database itself as part of routine processing, but the command can also be issued manually. If this option is set and the database still goes into log suspend, then you may have a transaction that did not properly close (whether by committing or rolling back). You can check the master..syslogshold table to find long-running transactions, as sketched below.
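A sketch of that check, joined to sysprocesses to see which session owns the oldest open transaction:

```sql
-- Oldest active transactions holding up log truncation, with owning process info
-- (rows with spid 0 indicate the replication secondary truncation point)
select h.spid, h.name, h.starttime, p.cmd, p.status
from master..syslogshold h, master..sysprocesses p
where h.spid = p.spid
```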
A third possibility is that if the system is using SAP/Sybase Replication Server, there is actually a secondary truncation point used as part of the replication processes. The system will not truncate the transaction log until after a transaction has been read by the RepAgent process, so this too can cause a system to go into log suspend.
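If Replication Server is involved, you can inspect the secondary truncation point; treat this as a hedged sketch, and be very careful with settrunc on a live replication setup:

```sql
-- Report the state of the secondary truncation point in the current database
dbcc gettrunc

-- Only if replication has been decommissioned: release the secondary
-- truncation point so the log can truncate again
-- dbcc settrunc('ltm', 'ignore')
```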
I have some very strange problems. I have an application running on Windows 2003 terminal server from multiple clients. The application uses SQL Server 2008 Express as its database.
Yesterday, I connected to the app, closed some sessions on the server that were not responding, and to my surprise, I saw that some data was missing from the database. After a further search I found that all the database changes made over the last week were lost.
It's like the database rolled back all the changes and returned to its state of one week ago! I can confirm that all the changes were lost. In fact, I had inserted a record into a table with IDENTITY_INSERT ON (to manually insert an ID into an identity column), and that record is missing, so there is no way this is a program failure.
Does anyone have any idea of what could have happened here?
EDIT
I have a suspect: could a transaction initiated by a session stay in an uncommitted state for one week, holding back all the database changes, and then roll back all the changes made when I closed the session?
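If it helps, one way to check for such a lingering open transaction (while the sessions are still connected) would be:

```sql
-- Reports the oldest active transaction in the current database (if any),
-- including its start time and owning session (SPID)
DBCC OPENTRAN;
```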
EDIT II
I found this in the log:
SQL Server never rolls back a database to a previous state (like this). The database was restored, or the entire disk/VM was rolled back, or DML was executed to create the impression that a rollback happened (but really didn't). Maybe someone executed a sync tool in the wrong direction.
The question does not contain enough information to find the problem. But it certainly isn't SQL Server rolling back a database.
You can try examining the log using fn_dblog.
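fn_dblog is undocumented, so treat this as a sketch; it lists transaction begin/commit/rollback records from the active portion of the log:

```sql
-- Inspect transaction begin/commit/rollback records in the active log;
-- the two NULL arguments mean "no LSN bounds" (scan everything available)
SELECT [Current LSN], Operation, [Transaction ID], [Transaction Name], [Begin Time]
FROM fn_dblog(NULL, NULL)
WHERE Operation IN (N'LOP_BEGIN_XACT', N'LOP_COMMIT_XACT', N'LOP_ABORT_XACT');
```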
From the log it looks like the server has only just started up after a reboot or service restart.
If a database is not cleanly shut down then it can be left with partially applied transactions. If this happens, the database is recovered on startup.
Any transactions that are incomplete are rolled back. Committed transactions that were not yet applied are rolled forwards. How long this recovery takes depends on the size of the transactions in the log that have not yet been applied to the database.
The transactions may not show up in the log after they have been rolled back following a crash. This depends on their location in the log and the database's recovery model.
If the transaction is at the end of the log it is likely the log will just be rolled back and the transaction removed.
If the transaction is in the middle of the log you might see a LOP_ABORT_XACT in the log.
When using simple recovery there is a good chance the log will be cleared after recovery, since in simple recovery log records are only kept until their transactions complete and a checkpoint occurs.
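To check which recovery model a database is using:

```sql
-- Recovery model per database (SIMPLE, FULL, or BULK_LOGGED)
SELECT name, recovery_model_desc
FROM sys.databases;
```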
See "Are log records removed from ldf file for rollbacks?" for more details.
I've inherited an application that uses Atomikos for transaction handling in Spring on top of an Oracle database. In production deployments transaction logging has always been enabled by setting com.atomikos.icatch.enable_logging=true but the truth is I can't find any info on what exactly these logs are used for.
The atomikos site states "this should never be disabled on production or data integrity cannot be guaranteed" and I found a comment in a jta.properties on that site that said there is a "risk of losing data after restart or crash" if it is disabled.
We don't enable this in our development environments and are able to use the application normally. I thought the logs might be used in case the application crashes, but if so I'm not sure how they'd be used. Maybe automatically during the next startup, or manually in some way? In terms of data integrity, I know Oracle has its own data recovery, but maybe these transaction logs hold data that Oracle hasn't seen yet, e.g. if Spring were to crash.
http://fogbugz.atomikos.com/default.asp?community.6.1950.6 seems to indicate that the transaction logs are used for recovery only and can be disabled if you don't need them for recovery.
These logs maintain the latest transaction state, which may not yet be known to your database. Without this set, recovery after a crash/restart will probably be incorrect.
HTH
Guy
Before I answer your question, you need to read the beginning of this post, How would you tune Distributed (XA) transactions for performance?, to get the terminology.
Atomikos acts as the transaction coordinator: it coordinates across the participants, which are the different databases, orchestrating the transaction process among them. It is essentially the same work a policeman does in the middle of a crossroad.
Atomikos writes its log file in order to know exactly where it is in a distributed transaction. In case of failure it can trace the progress of its uncommitted transactions and resume from the point where it was interrupted. As such, the transaction log is very important for the transaction recovery process.
After reading an article about the subject from O'Reilly, I wanted to ask Stack Overflow for their thoughts on the matter.
Write locally to disk, then batch insert to the database periodically (e.g. at log rollover time). Do that in a separate, low-priority process. More efficient and more robust...
(Make sure that your database log table contains a column for "which machine the log event came from" by the way - very handy!)
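A hypothetical sketch of such a table (SQL Server flavor; all names are made up):

```sql
-- Hypothetical central log table; host_name records which machine the
-- log event came from, as suggested above
CREATE TABLE logs (
    id         BIGINT IDENTITY(1,1) PRIMARY KEY,
    logged_at  DATETIME2      NOT NULL DEFAULT SYSUTCDATETIME(),
    host_name  VARCHAR(64)    NOT NULL,  -- originating machine
    log_name   VARCHAR(32)    NOT NULL,
    log_level  VARCHAR(16)    NOT NULL,
    contextId  VARCHAR(64)    NULL,      -- correlates entries from one request
    message    NVARCHAR(2000) NULL
);
```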
I'd say no, given that a fairly large percentage of server errors involve problems communicating with databases. If the database were on another machine, network connectivity would be another source of errors that couldn't be logged.
If I were to log server errors to a database, it would be critical to have a backup logger that wrote locally (to an event log or file or something) in case the database was unreachable.
Log to DB if you can and it doesn't slow down your DB :)
It's way, way faster to find anything in the DB than in log files, especially if you think ahead about what you will need. Logging to the DB lets you query the log table like this:
```sql
select * from logs
where log_name = 'wcf' and log_level = 'error'
```
Then, after you find an error, you can see the whole path that led to it:
```sql
select * from logs
where contextId = 'what you get from previous select'
order by timestamp
```
How will you get this info if you log it in text files?
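If you do log to the DB, indexes matching those two access paths keep the lookups fast; a sketch, assuming a logs table with the columns used above:

```sql
-- Supports the "find errors by log and level" query
create index ix_logs_name_level on logs (log_name, log_level);
-- Supports the "replay one request's path in order" query
create index ix_logs_context_time on logs (contextId, timestamp);
```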
Edit:
As JonSkeet suggested, this answer would be better if I stated that one should consider making logging to the DB asynchronous. So I state it :) I just didn't need it. For an example of how to do it, you can check "Ultra Fast ASP.NET" by Richard Kiessig.
If the database is production database, this is a horrible idea.
You will have issues with backups, replication, and recovery: more storage for the DB itself, for replicas (if any), and for backups; more time to set up and restore replication, more time to verify backups, and more time to recover the DB from backups.
It probably isn't a bad idea if you want the logs in a database, but I would say not to follow the article's advice if you have a lot of log entries. The main issue is that I've seen file systems struggle to keep up with the logs coming from a busy site, let alone a database. If you really want to do this, I would look at loading the log files into the database after they are first written to disk.
Think about a properly set-up database that utilizes RAM for reads and writes. This is much faster than writing to disk and avoids the disk I/O bottleneck you see when serving large numbers of clients, where threads start blocking because the OS makes them wait on currently executing threads that are using all available I/O handles.
I don't have any benchmarks to prove this, although my latest application uses database logging. It will have a failsafe as mentioned in one of these responses: if the database connection cannot be created, create a local database (H2, maybe?) and write to that. Then a periodic check of database connectivity can re-establish the connection, dump the local database, and push its contents to the remote database.
This could be done during off hours if you do not have a high-availability site.
Sometime in the future I hope to develop benchmarks to demonstrate my point.
Good Luck!