I have a very large amount of data in my database log. After deleting all the records in the database tables, the database size is not reduced, especially the log file (.ldf):
Type    Size            Space used
ROWS    56274.125000    55306.625000
LOG     179705.437500   179567.046875
How do I reduce the size?
Is the database in the Full Recovery Model? If so, you will need to implement transaction log backups before you can shrink the log file size.
If you do not need transaction log backups (that is, if your full backup is taken often enough that the business is okay with losing the data since the previous backup), you can switch the database to the Simple recovery model.
You can find out which recovery model a database is in by right-clicking the database, selecting "Properties", and then checking the "Options" tab.
You can change the recovery model at that location, too. However, before you do, I highly recommend you read up on the different recovery models and the implications of changing them.
When you are ready to change the recovery model, you might want to read this first.
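If you prefer T-SQL over the GUI, a minimal sketch (the database name MyDb is a placeholder):

-- Check which recovery model the database is in
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'MyDb';

-- Switch to SIMPLE, but only if you can accept losing everything
-- since the last full/differential backup
ALTER DATABASE MyDb SET RECOVERY SIMPLE;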
I am using the SQL script from Select SQL Server database size to monitor how much my database file usage has increased. I am particularly inspecting the "data_used_size" and "log_used_size" fields. But I notice that while the data file usage consistently increases after a set of activities, log file usage sometimes increases and sometimes decreases, so it always stays around a certain level. Why is this?
This is documented in the MSDN article about the transaction log. The inactive part of the log is truncated (marked for reuse):
If the DB is using the simple recovery model, after a checkpoint.
If the full or bulk-logged recovery model is used, after a log backup.
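If the log is not being reused when you expect, you can also ask SQL Server directly what it is waiting on before it can truncate each database's log:

SELECT name, log_reuse_wait_desc
FROM sys.databases;
-- LOG_BACKUP here means the log is waiting for a transaction log backup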
We have a large database in MS SQL in which one of the tables is partitioned by a date column. The Primary key index is also partitioned using the same partition function. The database is kept in Simple Recovery model, since data is added to it in batches every 3 months.
DBCC CHECKFILEGROUP found consistency errors, so we needed to bring back just one filegroup from a complete backup.
RESTORE did not allow me to restore a filegroup in simple mode, so I changed to the full recovery model and then ran the following, with no errors:
RESTORE DATABASE aricases FILEGROUP = '2003'
FROM DISK = N'backupfile-name.bak'
WITH RECOVERY;
I expected the WITH RECOVERY clause to bring this back to working order, but the process ended with a note saying:
The roll forward start point is now at log sequence number (LSN) 511972000001350200037. Additional roll forward past LSN 549061000001370900001 is required to complete the restore sequence.
When I query the database table that includes this filegroup, I get a message saying that the primary key cannot be accessed because one of the partitions for the table cannot be accessed, because it is offline, restoring, or defunct.
Why didn't the WITH RECOVERY clause leave this filegroup fully restored? Now what?
The entire database is very large (1.5 TB). I can't back up the log file, because I'd first need to create a full backup in the full recovery model. The filegroup itself is only 300 GB.
I can do the restore again, but I would like to know the correct way of performing this.
Is there a way of staying in the full recovery model and performing a piecemeal filegroup restore from a complete database backup?
I found the answer. The bottom line is that the Simple Recovery Model is very limited. You must restore ALL read/write filegroups together from the same backup. Individual read-only filegroups CAN be restored separately, as long as they became read-only (no more changes) BEFORE the last backup of the read/write filegroups.
Bottom line: only the Full or Bulk-Logged models let you restore single read/write filegroups.
The Bulk-Logged model is what a data warehouse with batch loading should be using, not the Simple model. My error in design.
See the Microsoft documentation:
http://msdn.microsoft.com/en-us/library/ms191253.aspx
Then look at piecemeal restores for the Simple model, which are very limited:
http://msdn.microsoft.com/en-us/library/ms190984%28v=sql.100%29.aspx
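For what it's worth, under the full recovery model the working sequence would look roughly like this (the log backup file names are illustrative, and every log backup taken since the full backup has to be applied in order):

RESTORE DATABASE aricases
FILEGROUP = '2003'
FROM DISK = N'backupfile-name.bak'
WITH NORECOVERY;
-- apply each subsequent log backup, oldest first, still WITH NORECOVERY
RESTORE LOG aricases
FROM DISK = N'logbackup1.trn'
WITH NORECOVERY;
-- the last restore runs WITH RECOVERY and brings the filegroup online
RESTORE LOG aricases
FROM DISK = N'logbackup2.trn'
WITH RECOVERY;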
Most of the time, users hit the database to read news. Very few queries are executed inside transactions; 95% of the database hits are read-only.
My database log file is growing by 1 GB per day. Even if I shrink the database, the log file size does not decrease. What could be the reason for the log file growing more and more? How can I control this? As far as I know, the log file should not grow when we only read data from tables.
Any suggestions on how to deal with the growing log file? How can it be kept at a manageable or reasonable size? Does this affect performance in any way?
There are a couple of things to consider: what type of backups you do, and what type of backups you need. Once you have the answer to that question, you can either switch the recovery model to simple, or leave it full, but then you need to take transaction log backups on a daily basis (or at whatever interval keeps you happy with the log size).
To set your database to the simple recovery model (but only if you take full backups of your database!):
1. Right-click on your database.
2. Choose Properties.
3. Choose Options.
4. Set the recovery model to Simple.
This will work and is best if your backup schedule is a full backup every day, because in that scenario, under the full recovery model, your log would never be truncated and would skyrocket (just like in your case).
If you use the Grandfather-Father-Son backup technique, meaning a monthly full backup, a weekly full backup, and then daily incremental (differential or log) backups, then you need the full recovery model. If 1 GB of log per day is still too much, you can take a log backup every hour or every 15 minutes. That should fix the problem of the log growing more and more.
If you run a full backup every day, you can switch to the simple recovery model and you should be fine without risking your data (if you can live with possibly losing one day of data). If you plan to use log backups, leave it in the full recovery model.
Full backups will not help; you must regularly back up the transaction log (in addition to the regular full and differential database backups) for it to be emptied. If you are not backing up the log and you are not in simple recovery mode, then your transaction log contains every transaction since the database was set up. If you have enough activity to grow by a gig a day, you may also have large imports or updates affecting many records at once. It is possible you need to be in simple recovery mode, where the log is truncated automatically at checkpoints rather than retained for log backups. Do NOT do that, however, if you have a mix of data from imports and users and need to recover to a point in time. In that case you need to back up the transaction log frequently, both to keep the size manageable and to be able to restore to a point in time. We back up our transaction log every 15 minutes.
Read about transaction log backups in BOL to see how to fix the mess you have right now. Then get your backups set up and running properly. You need to read and understand this material thoroughly before attempting a fix. Right now, you would probably be in a world of hurt if your server failed and you had to recover the database. Transaction log backups are critical to being able to recover properly from a failure.
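As a rough sketch, the job you schedule (for example through SQL Server Agent) only needs a statement like this; the database name and path are placeholders, and each backup should get its own file so the restore chain stays intact:

BACKUP LOG MyDb
TO DISK = N'D:\Backups\MyDb_log_20100101_0915.trn';
-- run every 15 minutes; after a failure, restore the chain in order
-- on top of the last full (and differential) backup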
Do you back up your database frequently? You need to perform full and/or transaction log backups before SQL Server will mark log space as reusable and allow the log file to be shrunk.
The major reason for a large log file is bulk transactions in the DB. To reduce the log file size, the best option is to take transaction log backups at regular intervals.
We have a database that holds configuration data. When the applications run against it, they basically do lots of calculations and then write some data to file. Unlike normal installations, we do not really need the transaction log; what I mean is that we could take a backup of the database and use it without applying transaction logs to bring it up to date.
Since the transaction logs are not that important to us, what would be the best backup strategy? Currently the transaction log is enormous (10 GB, whereas the database is about 50 MB; this accumulated over a few months).
Could we just do an initial backup of the database and then back up the transaction log every few days, overwriting the current one? Or could we just delete the transaction log altogether and have a new one started?
JD.
Ensure the database is running in the Simple Recovery Model.
Doing so negates the need for you to perform transaction log backups.
This recovery model automatically ensures that the inactive portions of the transaction log can become immediately available for reuse.
No longer concerned with transaction log management, you can focus your attention on your backup strategy.
A weekly FULL database backup, perhaps with daily differential backups, may suit your requirements.
Recovery Model Overview
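A minimal sketch of that schedule in T-SQL, assuming a database named ConfigDb and a local backup folder:

-- Weekly full backup (e.g. Sunday night)
BACKUP DATABASE ConfigDb
TO DISK = N'D:\Backups\ConfigDb_full.bak'
WITH INIT;

-- Daily differential: only what changed since the last full backup
BACKUP DATABASE ConfigDb
TO DISK = N'D:\Backups\ConfigDb_diff.bak'
WITH DIFFERENTIAL, INIT;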
As I understand it, you do not write any data to your database. For this reason, the best backup strategy for you will be:
1. Change the recovery model to simple and shrink the transaction log using DBCC SHRINKFILE.
2. Make one full backup of your database.
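In T-SQL those two steps could look like this, again assuming a database named ConfigDb; check sys.database_files for the real logical log file name:

ALTER DATABASE ConfigDb SET RECOVERY SIMPLE;
DBCC SHRINKFILE(ConfigDb_log, 100);  -- target size in MB

BACKUP DATABASE ConfigDb
TO DISK = N'D:\Backups\ConfigDb_full.bak';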
This question might be kind of elementary, but here goes:
I have a SQL Server database with a 4 GB log file. The DB is 16 GB and is backed up nightly.
Can I truncate the log regularly because the entire DB+Log is backed up each night?
You can add something like this to your maintenance schedule to run every night before the backup. It will back up the log and then try to shrink the log file to 1 MB:
-- Back up the log so the inactive portion can be truncated
BACKUP LOG DBNAME
TO DISK = 'D:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Backup\DBNAME.log';
-- Then shrink the physical log file down to a target of 1 MB
DBCC SHRINKFILE('DBNAME_Log', 1);
Are you sure the log is backed up nightly and not just the database?
If so, then what does this database do? Are you deleting and refreshing whole tables? If so, your log might be the right size for the amount of transactions you have. You want the log to be large enough to handle your normal transaction load without having to grow. A small log can be a detriment to performance.
If this database is not transactional in nature (i.e., the tables are populated by full refreshes rather than one record at a time), then change the recovery model to simple. Do not do that, though, if you have transactional tables that you will need to be able to recover from the log rather than simply re-importing the data.
If you can run log backups during the day (depending on load, etc. this may or may not be possible for you) you can keep the log file under control by doing so. This will prevent the log file itself from growing quite so large, and also provide the side benefit of giving you the ability to restore closer to the point of failure in the event of a problem.
You'll still need to shrink the log file once using DBCC SHRINKFILE, but if it's backed up regularly after that point, it should stabilize at a smaller size.