Can anyone explain when SQL Server issues a checkpoint?
from: http://msdn.microsoft.com/en-us/library/ms188748.asp
Events That Cause Checkpoints
Before a database backup, the Database Engine automatically performs a checkpoint so that all changes to the database pages are contained in the backup. In addition, checkpoints occur automatically when either of the following conditions occur:
The active portion of the log exceeds the size that the server could recover in the amount of time specified in the recovery interval server configuration option.
The log becomes 70 percent full, and the database is in log-truncate mode.
A database is in log-truncate mode when both of these conditions are TRUE: the database is using the Simple recovery model, and, after execution of the last BACKUP DATABASE statement that referenced the database, one of the following events occurs:
A minimally logged operation is performed in the database, such as a minimally logged bulk copy operation or a minimally logged WRITETEXT statement.
An ALTER DATABASE statement is executed that adds or deletes a file in the database.
Also, stopping a server issues a checkpoint in each database on the server. The following methods of stopping SQL Server perform checkpoints for each database:
Using SQL Server Configuration Manager.
Using SQL Server Management Studio.
Using the SHUTDOWN statement.
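You can also force a checkpoint manually, and the recovery interval option mentioned above is set with sp_configure. A minimal sketch (the database name MyDB is hypothetical, and the interval value is only an example):
USE MyDB;
CHECKPOINT;   -- force a checkpoint in the current database

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'recovery interval (min)', 1;   -- target recovery time in minutes
RECONFIGURE;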
Related
Every day, a job connects to a SQL Server 2005 instance, performs a DUMP TRANSACTION WITH TRUNCATE_ONLY (or NO_LOG), and breaks my transaction log backup sequence.
I checked the Agent and the task scheduler; neither of them is hosting such a job.
It occurs every day at 2:38 PM. I know this from the server log, which shows this kind of error:
BACKUP LOG WITH TRUNCATE_ONLY or WITH NO_LOG is deprecated. The simple recovery model should be used to automatically truncate the
transaction log.
After digging in Profiler I could not see any column showing the IP address of the sessions. Alternatively, I could continuously poll data from this query:
select *
from sys.dm_exec_connections A
join sys.sysprocesses B
    on A.session_id = B.spid
But I wonder whether I can catch the job this way, since the transaction is very short-lived.
On another note, it would be nice if I could stall the backup itself by locking the transaction log file, so I would have time to see which process is stuck trying to dump the transaction log.
Any ideas?
Configure Auditing or a Trace to track the additional information.
You can then look back over the output logs and see where it came from.
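For example, on SQL Server 2005 the default trace already records backup and restore events, so you can often see who issued the BACKUP LOG without setting anything up. A minimal sketch, assuming the default trace is enabled (event class 115 is the Audit Backup/Restore event):
DECLARE @path NVARCHAR(260);
SELECT @path = path FROM sys.traces WHERE is_default = 1;

SELECT StartTime, HostName, ApplicationName, LoginName, DatabaseName, TextData
FROM sys.fn_trace_gettable(@path, DEFAULT)
WHERE EventClass = 115   -- Audit Backup/Restore event
ORDER BY StartTime DESC;
The HostName, ApplicationName and LoginName columns are usually enough to track the statement back to the machine and process issuing it.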
I am working with SQL Server 2016 and TFS 2018.
In my TFS I have a collection: DefaultCollection.
This collection has a log file (DefaultCollection_Log) which is growing fast.
I'd like to set a time period (let's say 2 weeks, for example) for log retention. That is, every day (for example) SQL Server should delete data in my DefaultCollection_Log file older than 2 weeks.
How can I accomplish that?
Transaction log backups need to be configured. Once they are in place, virtual log files will be reused, so the transaction log file will not grow unless there are long-running transactions.
Note that you have to set up the backup routines using the TFS Administration Console; regular backups using T-SQL scripts are not sufficient, because TFS requires a multi-database restore.
The process, in a few steps:
Create scheduled backups.
Provide a path where the backups are to be stored.
Include transaction log backups in the configuration.
SQL Server Transaction Log Architecture and Management Guide:
Log truncation occurs automatically after the following events, except when delayed for some reason:
Under the simple recovery model, after a checkpoint.
Under the full recovery model or bulk-logged recovery model, after a log backup, if a checkpoint has occurred since the previous backup.
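For illustration only, this is roughly what a transaction log backup looks like in T-SQL (the database name and backup path below are made up; for TFS you should still schedule this through the TFS Administration Console as noted above):
BACKUP LOG [Tfs_DefaultCollection]
TO DISK = N'D:\Backups\Tfs_DefaultCollection.trn'
WITH INIT;
Once log backups run on a schedule, the truncation described in the quote keeps the existing virtual log files reusable and the physical log file stops growing.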
Hypothetical question:
If a maintenance plan is scheduled to run a full backup of several databases while they're online, and during this time other jobs are scheduled to run (stored procedures, SSIS packages etc), what happens to these jobs during the backup?
I'm guessing either:
The jobs are paused until the backup is completed, and then they run in the same order they were scheduled.
Or
SQL Server works out which tables will be affected by each scheduled job and backs them up after the job completes?!
Or
SQL Server creates a "snapshot" of all the tables before the back up starts, any changes to them (including changes made by the jobs run during the backup) are added to the transaction log, which should be backed up separately.
...are any of my ideas correct?!
Idea #3 is the closest to what happens. The key is that when the backup operation completes, the backup file will be in a state that allows for the restore of the database to a consistent state.
From the documentation:
Performing a backup operation has minimal effect on transactions that
are running; therefore, backup operations can be run during regular
operations. During a backup operation, SQL Server copies the data
directly from the database files to the backup devices. The data is
not changed, and transactions that are running during the backup are
never delayed. Therefore, you can perform a SQL Server backup with
minimal effect on production workloads.
...
SQL Server uses an online backup process to allow for a database
backup while the database is still being used. During a backup, most
operations are possible; for example, INSERT, UPDATE, or DELETE
statements are allowed during a backup operation.
Oracle has SQL commands that one can issue so that a transaction does not get logged. Is there something similar for SQL Server 2008?
My scenario: We need Tx logs on servers (Dev, QA, Prod), but maybe we can do without them on developer machines.
You can't do without transaction logs in SQL Server, under any circumstances. The engine simply won't function.
You CAN set your recovery model to SIMPLE on your dev machines - that will prevent transaction log bloating when tran log backups aren't done.
ALTER DATABASE MyDB SET RECOVERY SIMPLE;
There is a third recovery model not mentioned above. The recovery model ultimately determines how large the LDF files become and how often they are written to. In cases where you are going to be doing any type of bulk insert, you should set the DB to BULK_LOGGED. This makes bulk inserts move speedily along, and it can be changed on the fly.
To do so,
USE master ;
ALTER DATABASE model SET RECOVERY BULK_LOGGED ;
To change it back:
USE master ;
ALTER DATABASE model SET RECOVERY FULL ;
In the spirit of adding to the conversation about why someone would not want an LDF, I add this: We do multi-dimensional modelling. Essentially we use the DB as a large store of variables that are processed in bulk using external programs. We do not EVER require rollbacks. If we could get a performance boost by turning off ALL logging, we'd take it in a heartbeat.
SQL Server requires a transaction log in order to function.
That said there are two modes of operation for the transaction log:
Simple
Full
In Full mode the transaction log keeps growing until you back up the transaction log. In Simple mode, space in the transaction log is 'recycled' at every checkpoint.
Very few people have a need to run their databases in the Full recovery model. The only point in using the Full model is if you want to back up the database multiple times per day, and backing up the whole database takes too long - so you just back up the transaction log.
The transaction log keeps growing all day, and you keep backing up just the log. That night you do your full backup, and the log backups are what allow SQL Server to truncate the transaction log and begin reusing the space allocated in the log file.
If you only ever do full database backups, you don't want the Full recovery model.
What's your problem with Tx logs? They grow? Then just set the 'truncate log on checkpoint' option.
From Microsoft documentation:
In SQL Server 2000 or in SQL Server 2005, the "Simple" recovery model is equivalent to "truncate log on checkpoint" in earlier versions of SQL Server. If the transaction log is truncated every time a checkpoint is performed on the server, this prevents you from using the log for database recovery. You can only use full database backups to restore your data. Backups of the transaction log are disabled when the "Simple" recovery model is used.
If this is only for dev machines in order to save space then just go with simple recovery mode and you’ll be doing fine.
On production machines though I’d strongly recommend that you keep the databases in full recovery mode. This will ensure you can do point in time recovery if needed.
Also, having databases in full recovery mode can help you to undo accidental updates and deletes by reading the transaction log. See below for more details.
How can I rollback an UPDATE query in SQL server 2005?
Read the log file (*.LDF) in sql server 2008
If space is an issue on production machines then just create frequent transaction log backups.
When restoring a SQL Server Database, I notice that there are 3 different Recovery States to choose from:
Restore with Recovery
Restore with No Recovery
Restore with Standby
I've always left it at its default value, but what do they all mean?
(Preferably in layman's terms)
GateKiller,
In simple terms (and not a copy-paste out of the SQLBOL) so you can understand the concepts:
RESTORE WITH RECOVERY uses the backup media file (e.g. fulldata.bak) to restore the database back to the time that backup file was created. This is great if you want to go back in time to restore the database to an earlier state - like when developing a system.
If you want to restore the database TO THE VERY LATEST DATA (i.e. if you're doing a system Disaster Recovery and you cannot lose any data), then you want to restore that backup AND THEN all the transaction logs created since that backup. This is when you use RESTORE WITH NORECOVERY. It will allow you to restore the later transaction logs right up to the point of failure (as long as you have them).
RESTORE WITH STANDBY is the ability to restore the database up to a partial date (like NORECOVERY above) but to allow the database still to be used READ ONLY. New transaction logs can still be applied to the database to keep it up to date (a standby server). Use this when it would take too long to restore a full database in order to return the system to operation (i.e. if you have a multi-TB database that would take 16 hours to restore, but which could receive transaction log updates every 15 minutes).
This is a bit like a mirror server - but without having every single transaction sent to the backup server in real time.
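A minimal sketch of that sequence (the database name, file names and paths below are hypothetical): restore the full backup with NORECOVERY, apply the intermediate log backups with NORECOVERY, then recover on the last one:
RESTORE DATABASE MyDB
    FROM DISK = N'D:\Backups\fulldata.bak'
    WITH NORECOVERY;

RESTORE LOG MyDB
    FROM DISK = N'D:\Backups\log_0100.trn'
    WITH NORECOVERY;

RESTORE LOG MyDB
    FROM DISK = N'D:\Backups\log_0115.trn'
    WITH RECOVERY;   -- roll back uncommitted work and bring the database online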
You can set a Microsoft SQL Server database to be in NORECOVERY, RECOVERY or STANDBY mode.
RECOVERY is the normal and usual status of the database where users can connect and access the database (given that they have the proper permissions set up).
NORECOVERY allows the Database Administrator to restore additional backup files, such as Differential or Transaction log backups. While the database is in this state, users are not able to connect to or access the database.
STANDBY is pretty much the same as the NORECOVERY status, except that it allows users to connect to or access the database with READ ONLY access. So the users are able to run only SELECT statements against the database. This is used quite often in Log Shipping for reporting purposes. The only drawback is that while there are users in the database running queries, SQL Server or a DBA is not able to restore additional backup files. Therefore, if you have many users accessing the database all the time, the log shipping restores could fall behind.
From Books Online; I think it is pretty clear once you read it:
NORECOVERY
Instructs the restore operation to not roll back any uncommitted transactions. Either the NORECOVERY or STANDBY option must be specified if another transaction log has to be applied. If neither NORECOVERY, RECOVERY, or STANDBY is specified, RECOVERY is the default.
SQL Server requires that the WITH NORECOVERY option be used on all but the final RESTORE statement when restoring a database backup and multiple transaction logs, or when multiple RESTORE statements are needed (for example, a full database backup followed by a differential database backup).
Note When specifying the NORECOVERY option, the database is not usable in this intermediate, nonrecovered state.
When used with a file or filegroup restore operation, NORECOVERY forces the database to remain in the restoring state after the restore operation. This is useful in either of these situations:
A restore script is being run and the log is always being applied.
A sequence of file restores is used and the database is not intended to be usable between two of the restore operations.
RECOVERY
Instructs the restore operation to roll back any uncommitted transactions. After the recovery process, the database is ready for use.
If subsequent RESTORE operations (RESTORE LOG, or RESTORE DATABASE from differential) are planned, NORECOVERY or STANDBY should be specified instead.
If neither NORECOVERY, RECOVERY, or STANDBY is specified, RECOVERY is the default. When restoring backup sets from an earlier version of SQL Server, a database upgrade may be required. This upgrade is performed automatically when WITH RECOVERY is specified. For more information, see Transaction Log Backups .
STANDBY = undo_file_name
Specifies the undo file name so the recovery effects can be undone. The size required for the undo file depends on the volume of undo actions resulting from uncommitted transactions. If neither NORECOVERY, RECOVERY, or STANDBY is specified, RECOVERY is the default.
STANDBY allows a database to be brought up for read-only access between transaction log restores and can be used with either warm standby server situations or special recovery situations in which it is useful to inspect the database between log restores.
If the specified undo file name does not exist, SQL Server creates it. If the file does exist, SQL Server overwrites it.
The same undo file can be used for consecutive restores of the same database. For more information, see Using Standby Servers.
Important If free disk space is exhausted on the drive containing the specified undo file name, the restore operation stops.
STANDBY is not allowed when a database upgrade is necessary.
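A minimal sketch of a standby restore (the database name, log backup and undo file paths are hypothetical); the undo file holds the uncommitted work that is rolled back so the database can be read, and that work is reapplied before the next log backup is restored:
RESTORE LOG MyDB
    FROM DISK = N'D:\Backups\log_0130.trn'
    WITH STANDBY = N'D:\Backups\MyDB_undo.dat';   -- database stays readable between restores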