Currently the transaction log for my production SQL Server 2008 R2 database is growing out of control:
DATA file: D:\Data...\MyDB.mdf = 278859 MB on disk
LOG file: L:\Logs...\MyDB_1.ldf = 394542 MB on disk
The server mentioned above has daily full backups scheduled at 1 AM and transaction log backups every 15 minutes.
The database is in the full recovery model and is replicated to a subscriber; replication is pushed from the node above (the publisher). That same database's log file on the subscriber is under ~100 GB on disk.
What I did to try and fix:
Run a full backup of the db (takes 1h:47m)
Run the translog backup job which runs every 15 min. (takes 1m:20s)
Run another full backup of the db
None of the above worked, so I then attempted to shrink the log file using DBCC SHRINKFILE, which doesn't work either; the size never changes.
Can anyone please tell me what is wrong or what I need to do as a SQL Server DBA to resolve the above issue?
Possible things that may stop you from shrinking the transaction log file:
A long-running transaction is occurring in your database
Your replication Distribution Agent runs quite frequently
Looking at the size of your transaction log file, it was most likely caused by the second possibility.
Your replication Distribution Agent runs quite frequently
The SQL Server Log Reader Agent marks sections of the transaction log as still in use, which prevents them from being truncated (truncation is what SQL Server normally does after the transaction log is backed up). If this happens frequently and for long enough, it can stop your scheduled log backups from truncating the log, and therefore stop the file from being shrunk.
Look at this MSDN explanation of transactional replication and how to modify the Log Reader Agent.
There is also a thread on the MSDN forum describing a similar problem; it includes a DBCC query (DBCC OPENTRAN) that helps you identify a running transaction that may be pinning the transaction log.
A long-running transaction is occurring in your database
You can check whether any long-running transaction is happening by using DBCC OPENTRAN, see what process is running, and then decide what to do with it. As soon as the long-running transaction has finished, you should be able to shrink the log file.
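For example, a minimal check (the database name is hypothetical; run it against the affected database):
DBCC OPENTRAN ('MyDB')  -- reports the oldest active transaction and any transactions pending replication
SELECT name, log_reuse_wait_desc FROM sys.databases  -- cross-check what is pinning each database's log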
After running sp_who2, I noticed a long-running transaction on the log that was growing uncontrollably. I used KILL on that SPID, and now I'm proceeding to shrink the log file.
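A sketch of that sequence, with a hypothetical SPID:
EXEC sp_who2  -- identify the long-running SPID holding the transaction open
KILL 53       -- hypothetical SPID; its open transaction is rolled back
-- once the rollback completes (and the next log backup runs), the shrink can succeed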
You could create a blank database with the same tables and migrate your old database's data into it with a migration script, for example:
-- Run this script in the new, blank database.
INSERT INTO customers (cust_id, Name, Address)
SELECT cust_id, Name, Address
FROM olddb.dbo.customers
You can manually shrink your log file:
1. Right-click your database > Tasks > Shrink > Files > File type = Log
2. Then click OK
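The T-SQL equivalent of that dialog (the logical file name below is hypothetical; look yours up first):
USE MyDB
SELECT name FROM sys.database_files WHERE type_desc = 'LOG'  -- find the log file's logical name
DBCC SHRINKFILE (MyDB_1, 1024)  -- shrink it to a target size in MB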
We have a TFS 2017 environment whose size has been growing every week for a long time now. In this environment I have multiple collections, and the transaction log file is very big (over 155 GB).
My question is: is it safe to shrink the log file for a given TFS collection (without losing data or getting errors in the administration console)?
Thanks
Yes. You can.
The log file is there so you can restore to a point in time between full backups. If you have backups enabled on your server and regularly take a full backup, then you can truncate the logs after each full backup.
If you want to just "delete the logs", temporarily turn off TFS by running this on every application-tier server you have: TFSServiceControl quiesce. Then truncate the logs, and turn TFS back on using TFSServiceControl unquiesce.
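One way to do the truncate-and-shrink step while TFS is quiesced is to briefly switch the collection database to the SIMPLE recovery model; a sketch, assuming a collection database named Tfs_DefaultCollection with default file names (both hypothetical):
ALTER DATABASE [Tfs_DefaultCollection] SET RECOVERY SIMPLE  -- the log is truncated at the next checkpoint
USE [Tfs_DefaultCollection]
DBCC SHRINKFILE (N'Tfs_DefaultCollection_log', 1024)        -- target size in MB
ALTER DATABASE [Tfs_DefaultCollection] SET RECOVERY FULL    -- take a fresh full backup afterwards to restart the log chain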
You may also want to verify your backup strategy. If all is well, you shouldn't see an ever-growing log file.
See:
https://learn.microsoft.com/en-us/azure/devops/server/command-line/tfsservicecontrol-cmd?view=azure-devops-2019
I got stuck on one of our team's issues where the database drive was completely full because of a log file of around 150 GB, and there was no hope of freeing any space on the server. So they detached the database and then deleted the log file. But then they were not able to attach the MDF file. I then tried to rebuild the log file, but that was not successful either, because the database had not been shut down cleanly. Has anyone gone through this problem and successfully recovered the database?
sp_attach_single_file_db followed by a DBCC CHECKDB should do the trick. Any uncommitted transactions that might still be in that log file will be lost.
If you have an offline database in your metadata, delete that one first with DROP DATABASE but make sure you have a backup of your MDF file.
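A sketch of that approach; the database name and file path are hypothetical:
EXEC sp_attach_single_file_db @dbname = N'MyDB', @physname = N'D:\Data\MyDB.mdf'
-- Non-deprecated equivalent, which also rebuilds a missing log:
-- CREATE DATABASE MyDB ON (FILENAME = N'D:\Data\MyDB.mdf') FOR ATTACH_REBUILD_LOG
DBCC CHECKDB ('MyDB')  -- verify integrity; uncommitted transactions from the lost log are gone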
Suddenly I found my SQL Server database in suspect/offline mode, so I am not able to do any operation in my db. For this reason I restarted my server (Windows Server 2003).
But when the server came back up, I found that some of my data had been lost. I have no backup of my db.
Is there any way to get back the data that I have lost?
The error log:
Could not redo log record (5108:10151:5), for transaction ID
(0:1552370), on page (1:3679), database '??'
The database may go into suspect/offline mode if the data file and the log file have been moved, accidentally or intentionally: after a restart the database cannot find its data files, so it goes suspect or offline. This can be resolved by moving the data file and the log file back to the original paths configured for the database. After that, the database can be brought back online with no loss using a recovery-only restore ('RESTORE ... WITH RECOVERY'). The original paths for the data file and the log file can be found in the error log of the server that hosts the database.
Try this solution; I hope it helps you as it did me.
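Once the files are back in their original locations, the recovery-only restore is a single statement (the database name is hypothetical):
-- No FROM clause: nothing is copied; recovery simply runs and the database
-- is brought out of the restoring/suspect state.
RESTORE DATABASE MyDB WITH RECOVERY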
In another case, the database may go into suspect/offline mode because the server was shut down and restarted in the middle of a transaction; after the restart, those transactions may be neither committed nor rolled back to a consistent state, leaving the database inconsistent and therefore suspect or offline. The solution for this is:
ALTER DATABASE <database name> SET EMERGENCY
ALTER DATABASE <database name> SET SINGLE_USER
DBCC CHECKDB ('<database name>', REPAIR_ALLOW_DATA_LOSS)
As the option name itself states, REPAIR_ALLOW_DATA_LOSS may discard data from the transaction log, so we may face some data loss; it is therefore not recommended for frequent or unapproved use. (Note that a CHECKDB repair requires the database to be in single-user mode, hence the extra statement above.)
In this case, I suggest you check your log file (LDF). In SQL Server, this log file records all the INSERT, UPDATE, and DELETE operations performed on a database.
If you have an LDF file, you can work with the restore and recovery process. I used this process for one of my existing clients and it worked:
https://learn.microsoft.com/en-us/sql/relational-databases/backup-restore/restore-and-recovery-overview-sql-server?view=sql-server-ver16
If you do not have a log file, you can use Stellar Repair for SQL; this process has helped me recover my data many times.
Thanks
Hi, I have a production database and its replicated report database. How do I shrink the transaction log files in the production database, as the log file size keeps increasing? I have tried the DBCC SHRINKFILE and SHRINKDATABASE commands, but they do not work for me. I can't detach, shrink, and re-attach because the db is in replication. Please help me with this issue.
First check what is causing your database to not shrink by running:
SELECT name, log_reuse_wait_desc FROM sys.databases
If you are blocked by a transaction, find which one with:
DBCC OPENTRAN
Kill the transaction and shrink your db.
If the cause of the blocking is 'REPLICATION' and you are sure that your replicas are in sync, you might need to reset the status of replicated transactions. To see what the database still thinks needs to be replicated, use:
DBCC LOGINFO
You can reset this by first turning the Log Reader Agent off (I usually just turn the whole SQL Server Agent off), and then running this query against the database for which you want to fix the replication issue:
EXEC sp_repldone @xactid = NULL, @xact_segno = NULL, @numtrans = 0, @time = 0, @reset = 1
Close the connection where you executed that query and restart SQL Server Agent (or just the Log Reader Agent). You should be all set to shrink your db now.
The database won't let you remove transaction data that isn't backed up. First you have to back up the transaction log, then you can shrink it.
Do you have a regular backup schedule in place?
If not, I suggest you read this excellent article: 8 Steps to better Transaction Log throughput
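A minimal sketch of that order of operations, with hypothetical database, path, and logical file names:
BACKUP LOG MyDB TO DISK = N'L:\Backups\MyDB_log.trn'  -- back up the log so its inactive portion can be truncated
USE MyDB
DBCC SHRINKFILE (MyDB_1, 1024)                        -- then the shrink can actually release the space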
Shrink the log file with DBCC SHRINKFILE, then truncate the log using
BACKUP LOG databaseName WITH TRUNCATE_ONLY
and then shrink the log file again. (Note that TRUNCATE_ONLY was removed in SQL Server 2008; on later versions, temporarily switching the database to the SIMPLE recovery model achieves the same truncation.)
I used Red Gate's SQL Backup tool to take care of the backups. Then I just used the management console to issue a shrink command on the log file (telling it to rearrange the pages before releasing unused space).
Works like a charm.
I'm trying to restore our live DB onto our dev box. To do this, I went onto production, Tasks -> Back Up Database. It created a 4 GB file, which I zipped down to 2.2 GB and downloaded to my dev server.
On my dev server, I created a new DB (called 'xxxxx') and then Tasks -> Restore Database from file. I gave it the .bak file name, chose overwrite all, and go.
When it gets to 40% it fails. Here's the screenshot: http://img19.imageshack.us/img19/1/restorefailurejl5.png
Now, I can manually zip up the .mdf and .log files, download them, and then attach them to my dev server's SQL instance. I actually did that last night to get this working, so that worked.
But I'm not sure why the backup/restore method didn't work. I've downloaded the .bak file a few times (in case the download was corrupt), and I've also tried re-zipping and re-backing-up the live server a few times, but after about the fifth download of a 2 GB file, I'm starting to get grumpy :)
I've tried doing a DBCC CHECKDB('live db name', REPAIR_REBUILD) and that worked fine, and THEN backup, download, restore, fail.
My live DB is SQL 2008 x64 and my dev box is x86 (32-bit), so I'm not sure if that's an issue. Both servers are version 10.0 RTM.
I don't want to have to stop the db to copy the .mdf/.log files (because you can't access them while the db is running, I believe), which is why I prefer the backup/restore method.
Any suggestions?
UPDATE 1
Before I posted this question, I did do a DBCC CHECKDB ('xxxxx', REPAIR_REBUILD). I noted this in my initial post, which I've now highlighted for future reference.
When I get a chance to stop the DB, I'll post the end results here (and save the text output in case someone asks for other info).
UPDATE 2
I tried to restore the backup to a dummy live db I created. It failed (same error message). I'm running a DBCC CHECKDB('LiveDB') right now. Just before I did this, I stopped the SQL Server service and manually copied off the .mdf and .log files as a backup.
UPDATE 3
This is the result from the DBCC CHECKDB ('LiveDb'):
DBCC results for 'Addresses'. There are 1689363 rows in 101624 pages for object 'Addresses'.
DBCC results for 'Thumbnails'. There are 1197 rows in 30 pages for object 'Thumbnails'.
...
CHECKDB found 0 allocation errors and 0 consistency errors in database 'LiveDb'.
DBCC execution completed. If DBCC printed error messages, contact your administrator.
Hmm :( Any further ideas? Should I run DBCC again with REPAIR_REBUILD or some other magic argument?
Final Update
OK, this is sooo weird. After running DBCC CHECKDB and then doing another full backup (using the same SQL script that we have been using for eons), the restore now works fine! I just don't get it :(
Looks like I can close the job now.
This is IMPORTANT: you may potentially have an issue with the live database.
Have you performed DBCC CHECKDB on your production server recently? If not, do so at the next possible maintenance window.
You should ideally be performing CHECKDB before you take your full backups (although this is not always practical) in order to validate that the backups you are generating are of a database that has been confirmed to be without issue.
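A minimal sketch of that routine, with hypothetical database and path names:
DBCC CHECKDB ('LiveDb') WITH NO_INFOMSGS, ALL_ERRORMSGS  -- validate integrity first; report only errors
BACKUP DATABASE LiveDb TO DISK = N'F:\SQL_BACKUP\LiveDb.bak' WITH CHECKSUM  -- then back up, verifying page checksums as it is written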
Please feel free to pose further questions. You can contact me directly if you require additional assistance.
Cheers, John
We do the same thing every morning.
RESTORE DATABASE Accounts FROM DISK = '\\server\F$\SQL_BACKUP\DB.bak' WITH REPLACE
This works fine for us. I'm not sure if x64 -> x86 will have an impact; I wasn't aware that data was stored differently between those kinds of platforms. We basically dump our live database every night; a process on our SQL server then picks the dumps up, and another process sends them to be backed up to disk.
The command above will create you a new database and import the data, it's worth a shot.
Sounds like your production database has some corruption. Run DBCC CHECKDB on your production database and see what is returned.
Does a restore on the live machine (to a different db) fail? If so, try running DBCC CHECKDB.