One of my databases has multiple log files:
log.ldf - 40 GB (on D: drive)
log2.ldf - 70 GB (on S: drive)
log3.ldf - 100 GB (on L: drive)
Which log file will SQL Server pick first? Does SQL Server follow any order when picking a log file? Can we control this?
I believe you can't control which file the log records are written to.
You should concentrate not on the BIGGEST file, but on the FASTEST.
General advice would be to have ONLY two files:
- First file, as big as possible, on the FASTEST drive (on SSD). Set MAXSIZE to the file size, so it won't grow anymore.
- Second file, as small as possible, on a big drive where it can grow in case the first file is full.
Your task would then be to monitor the second file's size; if it starts to grow, take log backups more often and shrink that file back.
If you want to see how your log files are used, you can use the following DBCC command:
DBCC LOGINFO ();
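To monitor the per-file sizes mentioned above, a query against sys.database_files also works. This is a sketch (assumes SQL Server 2005 or later, run in the database in question):

```sql
-- Sketch: size and growth settings for the current database's log files.
-- size is reported in 8 KB pages; max_size of -1 means unlimited growth.
SELECT name,
       physical_name,
       size * 8 / 1024 AS size_mb,
       max_size,
       growth
FROM sys.database_files
WHERE type_desc = 'LOG';
```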
I have two hard disks: C (40GB capacity left) and D (1TB capacity left).
My sqlite folder (SQLite3 Windows download files from tutorial) is in disk D.
I created a database called myDatabase.db in the sqlite folder and have created a table in it and populated the table from a CSV file. This was done successfully as I ran a few queries and they worked.
The size of the database is quite large (50 GB) and I want to create an index on my table. I run the CREATE INDEX command and it starts - it creates a myDatabase.db-journal file in the folder next to the .db file.
However, from the "This PC" view of the hard drives I can see that disk C is getting drained (from 40 GB, going to 39, 38, etc.), while myDatabase.db on drive D is not getting bigger.
I don't want SQLite to use C when it doesn't make sense for it to do so, as sqlite and the .db file are on disk D.
Any suggestions why this is happening?
Thanks in advance for your time.
I want to set up CHECKDB using DatabaseIntegrityCheck.sql from Ola Hallengren. I have passed LogToTable = 'Y'. But will it log to disk as well, in text files? I did not find any parameter for that.
P.S. I know that jobs from MaintenanceSolution.sql do log to files in disk.
Script reference : DatabaseIntegrityCheck.sql
The procedure does not, by itself, log to disk. There isn't really any clean way to write to disk from inside T-SQL. Hence the use of an output file in the job step (like what the job-creation section of MaintenanceSolution does).
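As a sketch of what the MaintenanceSolution jobs do, you can set an output file on the Agent job step yourself using the documented SQL Server Agent tokens; the exact file-name pattern below is an approximation of what the script generates and may differ between versions:

```
$(ESCAPE_SQUOTE(SQLLOGDIR))\DatabaseIntegrityCheck_$(ESCAPE_SQUOTE(JOBID))_$(ESCAPE_SQUOTE(STEPID))_$(ESCAPE_SQUOTE(STRTDT))_$(ESCAPE_SQUOTE(STRTTM)).txt
```

This goes in the "Output file" field of the job step (Advanced page), so the step's messages are written to a text file in the SQL Server error log directory.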
Is there any way to get the size of the DB without restoring a backup file?
For example: I have a backup file of 10 GB, and I want to know the size of the DB after the backup file is restored. Most of the time the DB size is much larger than its backup file because of free space in the DB. So is there any way to know the DB size from the backup file alone, without restoring it?
Yes, you can use RESTORE FILELISTONLY to get the size, like below:
RESTORE FILELISTONLY FROM DISK = N'D:\backup_filename.bak'
It doesn't actually restore anything; rather, it returns a result set containing a list of the database and log files contained in the backup set. The result includes a Size column, which gives each file's size in bytes:
Size | numeric(20,0) | Current size in bytes.
How can I limit the total size of log files that are managed by syslog? The oldest archived log files should probably be removed when this size limit (quota) is exceeded.
Some of the log files are custom files specified with LOG_LOCALn, but I guess this doesn't matter for the quota issue.
Thanks!
The Linux utility logrotate renames and reuses system log files on a periodic basis so that they don't occupy excessive disk space. Linux stores the relevant configuration in the file /etc/logrotate.conf.
There are a number of directives that help manage log size. Please read the manual ("man logrotate") before changing anything. On my machine this file looks as follows:
# see "man logrotate" for details
# rotate log files weekly
weekly
# keep 4 weeks worth of backlogs
rotate 4
# create new (empty) log files after rotating old ones
create
# uncomment this if you want your log files compressed
#compress
# packages drop log rotation information into this directory
include /etc/logrotate.d
# no packages own wtmp, or btmp -- we'll rotate them here
/var/log/wtmp {
missingok
monthly
create 0664 root utmp
rotate 1
}
/var/log/btmp {
missingok
monthly
create 0660 root utmp
rotate 1
}
# system-specific logs may be configured here
As we can see, log files are rotated on a weekly basis; this may be changed to a daily basis. Compression is not enabled on my machine; it may be enabled if you want to make the log files smaller.
There is an excellent article which you may want to refer to for a complete understanding of this topic.
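To put an actual size quota on the custom LOG_LOCALn files, size-based rotation can be combined with a rotation count; rotate then bounds how many old archives are kept, and the oldest is removed when the count is exceeded. This fragment is a sketch - the paths and the reload command in postrotate are assumptions for your setup:

```
# Sketch: rotate each file once it exceeds ~100 MB and keep 5 compressed
# archives, so each log is bounded at roughly 600 MB on disk.
/var/log/local0.log /var/log/local1.log {
    maxsize 100M     # rotate as soon as the file grows past 100 MB
    rotate 5         # keep at most 5 old archives; older ones are deleted
    compress
    missingok
    notifempty
    postrotate
        /usr/bin/killall -HUP syslogd   # assumed reload command for your syslog daemon
    endscript
}
```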
I am running a script every day (only 2 days so far) to back up my database:
sqlcmd -E -S server-hl7\timeclockplus -i timeclockplus.sql
move "C:\Program Files\Microsoft SQL Server\MSSQL.2\MSSQL\Backup\*.*" w:\
Why is it that backups from two different dates have the EXACT SAME size in bytes? I know for a fact that the database was definitely changed!
The database files (*.mdf, *.ldf) are allocated in chunks - e.g., they use a given number of megabytes until that space is filled up, and then another chunk (several more megabytes) is allocated.
It would be really bad for performance to allocate every single byte you ever add to the database.
Due to this chunk-based allocation, it's absolutely normal for the files to keep a given size for a certain period of time - even while your database is being used and data is being added and deleted.
A SQL Server backup contains only pages of data, and a page is 8 KB. If your day-to-day changes do not add or remove pages (e.g., by inserting or deleting enough rows), then the number of pages to back up stays constant.
Try a CRC check on the backup files to see what actually changes...