The problem is that I need to explain the different sizes of the backups being made of a database in a plant. Sometimes the difference between the sizes is negative, even though no data is being deleted from the system.
Date      Backup file                Size (KB)    Diff (KB)
6/1/10    backup201006010100.bak       3355914
7/1/10    backup201007010100.bak       4333367       977453
7/2/10    backup201007020100.bak       4355963        22596
7/3/10    backup201007030100.bak       4380896        24933
7/4/10    backup201007040100.bak       4380404         -492
8/1/10    backup201008010100.bak       4507775      1151861
8/2/10    backup201008020100.bak       4507777            2
8/3/10    backup201008030100.bak       4532492        24715
8/4/10    backup201008040100.bak       4584028        51536
On 7/3/10 and 8/1/10 there was no production. On other days production is mostly consistent, so the database is expected to show a pretty much linear increase in size. How can the size difference be negative?
In the maintenance plan the tasks are: Backup Database Task (Type: Full, Append Existing) -> Shrink Database (Leave 10% free space).
The last step of the backup process is to append data from the log that reflects any changes made to the database during the backup; this could account for the difference you are seeing.
SQL Server has a two-step process for storing your data. First, your data goes into the log file, and it is not only the data you inserted, but also the whole list of operations SQL Server performed on it. So, if something goes wrong, SQL Server can 'replay' your transactions.
At some point a CHECKPOINT happens, and the data gets written into the data file. Log files have a tendency to grow and shrink.
During a BACKUP, SQL Server will write the data and log files EXACTLY as they look at the point of the backup. That's why you can see that difference in size.
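If you want to correlate those negative differences with what actually went into each backup, msdb keeps a history of every backup it takes. A minimal sketch, assuming the plant database is called YourPlantDb (the name is a placeholder):

SELECT  bs.database_name,
        bs.backup_start_date,
        bs.backup_size / 1024 AS backup_size_kb   -- backup_size is stored in bytes
FROM    msdb.dbo.backupset AS bs
WHERE   bs.database_name = N'YourPlantDb'         -- placeholder name
  AND   bs.type = 'D'                             -- 'D' = full database backup
ORDER BY bs.backup_start_date;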
Apologies if this question has already been asked.
I'm not very experienced with SQL Server.
On our SQL Server, there is a log file of more than 1 TB.
The database is in full recovery.
I took an initial full backup and set up a regular transaction log backup job to stop the log file from growing too much.
So my question is: can I truncate my log file after taking a log backup?
If there was an abnormal event like a long-running transaction or a huge data import, you can restore the previous size with the code below:
DBCC SHRINKFILE(2, TRUNCATEONLY);  -- release unused space at the end of the log file
ALTER DATABASE [StackOverflow] MODIFY FILE (NAME = N'StackOverflow_Log', SIZE = 256MB);  -- set the log back to a sensible fixed size
The second argument to SHRINKFILE is the file_id, which you can look up with:
SELECT *
FROM sys.database_files;
Also, sometimes having a huge log file might be perfectly normal. It basically depends on the activity on your database, so 256 MB might be too much or too little for your case. It is better to set a size that is enough to handle your normal workload without growing.
You should also check how often you are backing up the log file - every 10 minutes or every hour.
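For reference, the log backup itself is a one-liner. A small sketch, with the database name and backup path as placeholders:

BACKUP LOG [StackOverflow] TO DISK = N'D:\Backups\StackOverflow_log.trn';  -- placeholder name and path
DBCC SQLPERF(LOGSPACE);  -- afterwards, check how much of the log file is actually in use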
While deleting a large number of records, I get this error:
The transaction log for database 'databasename' is full
I found this answer very helpful; it recommends:
1. Right-click your database in SQL Server Management Studio and check the Options page.
2. Switch the Recovery Model from Full to Simple.
3. Right-click the database again. Select Tasks > Shrink > Files and shrink the log file to a proper size (I generally stick to 20-25% of the size of the data files).
4. Switch back to the Full recovery model.
5. Take a full database backup straight away.
Question: in step 3, when I go to Shrink > Files and choose Log from the file type drop-down menu, it tells me that 99% of the allocated space is free.
Out of ~4500MB of allocated space, there is ~4400MB free (the data file size is ~3000MB).
Does that mean I'm good to go, and there is no need to shrink?
I don't understand this. Why would that be the case, given the warning I received initially?
I'm not one for hyperbole, but there are literally billions of articles written about SQL Server transaction logs.
Reader's Digest version: if you delete 1,000,000 rows at a time, the log is going to get large because it is writing those 1,000,000 deletes in case it has to roll back the transaction. The space needed to hold those records does not get released until the transaction commits. If your log is not big enough to hold 1,000,000 deletes, it will fill up, throw the error you saw, and roll back the whole transaction. Then all that space will most likely get released. Now you have a big log with lots of free space.
You probably hit a limit on your log file at 4.5 GB and it won't get any bigger. To avoid filling your log in the future, chunk your transactions down into smaller batches, like deleting 1,000 records at a time (a sketch follows below). A shrink operation will reduce the physical size of the file, for example from 4.5 GB down to 1 GB.
https://learn.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-shrinkfile-transact-sql?view=sql-server-2017
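A minimal sketch of that chunked-delete approach; the table name and filter are hypothetical placeholders:

-- Delete in batches of 1,000 so each transaction stays short and the
-- log space can be reused between log backups / checkpoints.
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (1000)
    FROM dbo.BigTable                  -- hypothetical table
    WHERE CreatedDate < '2010-01-01';  -- hypothetical filter
    SET @rows = @@ROWCOUNT;            -- 0 when nothing is left to delete
END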
I have a table called test and I insert 40,000 records into it. I split my database file into two file groups like this:
The size of both files, based on the round-robin algorithm, increased to 160 MB as you can see.
After this I delete the data in my table, but the size of both files (file groups) remains at 160 MB. Why?
This is because SQL Server assumes that if your database got that large once, it is likely it will need to do so again. To save the overhead of requesting space from the operating system each time it wants more disk space, SQL Server simply holds on to what it has and fills it back up as required, unless you manually issue a DBCC SHRINKDATABASE command.
Due to this, using shrink is generally considered a bad idea, unless you are very confident your database will not need that space at some point in the future.
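If you want to confirm that the space is still allocated but unused, and reclaim it anyway, a rough sketch (the database name is a placeholder, and shrinking stays discouraged for the reason above):

USE [YourDatabase];                        -- placeholder name
EXEC sp_spaceused;                         -- shows reserved vs. unallocated space
DBCC SHRINKDATABASE ([YourDatabase], 10);  -- shrink, leaving 10% free space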
I found a link explaining the main factors of the transaction log very well, but there is one statement I don't understand completely:
The FULL recovery model means that every part of every operation is
logged, which is called being fully logged. Once a full database
backup has been taken in the FULL recovery model, the transaction log
will not automatically truncate until a log backup is taken. If you do
not want to make use of log backups and the ability to recover a
database to a specific point in time, do not use the FULL recovery
model. However, if you wish to use database mirroring, then you have
no choice, as it only supports the FULL recovery model.
My questions are:
Will the transaction log get truncated if I have a database in the FULL recovery model but have taken neither a full backup nor a log backup? Will the free space be overwritten after the next checkpoint? And when will those checkpoints be reached? Do I need to set a size limit for the transaction log to force the truncation, or not?
Many thanks in advance
When your database is in the full recovery model, only a log backup frees the space in the log file.
This space won't be given back to the file system, but it will be internally marked as free, so that new transactions can use it.
Will the free space overwriten after next checkpoint? And when will those checkpoints be reached? Do I need to set a size limit for the transaction logs to force the truncation or not?
You need not do anything else; just ensure log backups are taken at a frequency that matches your requirements.
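If you want to verify that the log backups are actually allowing the log to be reused, you can check the log reuse wait reason; a small sketch with a placeholder database name:

SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'YourDatabase';   -- LOG_BACKUP here means a log backup is still needed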
I have a simple SSIS package that transfers data from a source to a destination on another server.
If a record is new, it inserts it; otherwise it checks the HashByteValue column and, if it is different, updates the record.
The table contains approximately 1.5 million rows, and the update touches around 50 columns.
When I start debugging the package, nothing happens for around 2 minutes; I can't even see the green check mark. After that I can see data starting to flow through, but sometimes it stops, then flows again, then stops again, and so on.
The whole package looks like this:
But if I do just the INSERT part (without the update), it works perfectly: 1 minute and all 1.5 million records are in the destination table.
So why does adding another LOOKUP transformation, which updates records, slow down performance so significantly?
Is it something to do with memory? I am using FULL CACHE option in both lookups.
What would be the way to increase performance?
Could the reason be the Auto Growth file size setting:
Besides changing the AutoGrowth size to 100MB, your database log file is 29GB. That most likely means you are not doing transaction log backups.
If you're not, and you only do full backups nightly or periodically, change the recovery model of your database from Full to Simple.
Database Properties > Options > Recovery Model
Then Shrink your Log file down to 100MB using:
DBCC SHRINKFILE(Catalytic_Log, 100)
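To apply the 100MB AutoGrowth setting in T-SQL rather than through the UI, a sketch (the database name Catalytic is only assumed from the log file name):

ALTER DATABASE [Catalytic] MODIFY FILE (NAME = N'Catalytic_Log', FILEGROWTH = 100MB);  -- assumed database name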
I don't think that your problem is in the lookup. The OLE DB Command is really slow in SSIS, and I don't think it is meant for a massive update of rows. Look at this answer on MSDN: https://social.msdn.microsoft.com/Forums/sqlserver/en-US/4f1a62e2-50c7-4d22-9ce9-a9b3d12fd7ce/improve-data-load-perfomance-in-oledb-command?forum=sqlintegrationservices
To verify that the problem is not the lookup, try disabling the "OLE DB Command", rerun the process, and see how long it takes.
In my personal experience it is always better to create a Stored procedure to do the whole "dataflow" when you have to update or insert based on certain conditions. To do that you would need a Staging table and a Destination table (where you are going to load the transformed data).
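A rough sketch of that staging-table pattern in T-SQL; all table and column names here are hypothetical, with SSIS only bulk-loading into the staging table:

-- Update rows whose hash changed
UPDATE d
SET    d.Col1          = s.Col1,
       d.HashByteValue = s.HashByteValue
FROM   dbo.Destination AS d
JOIN   dbo.Staging     AS s ON s.BusinessKey = d.BusinessKey
WHERE  d.HashByteValue <> s.HashByteValue;

-- Insert rows that do not exist in the destination yet
INSERT INTO dbo.Destination (BusinessKey, Col1, HashByteValue)
SELECT s.BusinessKey, s.Col1, s.HashByteValue
FROM   dbo.Staging AS s
WHERE  NOT EXISTS (SELECT 1 FROM dbo.Destination AS d
                   WHERE d.BusinessKey = s.BusinessKey);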
Hope it helps.