Log file for ReportServerTempDB growing unexpectedly - sql-server

The transaction log file for the ReportServerTempDB database (the database installed with Reporting Services) has grown to over 100 GB, and I'm not sure why.
Here are the file sizes:
D:\SQLDatabases\ReportServer.mdf - 0.7GB
G:\SQLDatabases\ReportServer.ldf - 1.8GB
E:\SQLDatabases\ReportServerTempDB.mdf - 5GB
G:\SQLDatabases\ReportServerTempDB.ldf - 107.6GB
The recovery model for all of these databases is SIMPLE.
We are using SQL Server 2008 R2 Standard Edition.
EDIT: Something that is unique to the reporting services databases:
The collation for these databases is Latin1_General_CI_AS_KS_WS. But for all other database it is Latin1_General_CI_AS.
I don't want to just shrink the log files and carry on, because they might simply grow again, and I can't see why they should be so large.
Does anyone know what could cause the log file (and the data file) for the ReportServerTempDB database to grow so much, and what should I do about it?
Could this indicate a problem with our Report Server?
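A quick way to see what is preventing log truncation, assuming you can query the instance directly, is to check log_reuse_wait_desc in sys.databases:

-- Show the recovery model and what, if anything, is blocking log reuse
SELECT name,
       recovery_model_desc,
       log_reuse_wait_desc
FROM   sys.databases
WHERE  name IN ('ReportServer', 'ReportServerTempDB');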

Are you sure that your ReportServerTempDB is set to the SIMPLE recovery model as well?
At the very least you can shrink the database to get your disk space back using DBCC SHRINKDATABASE; check this link for more details: http://msdn.microsoft.com/en-us//library/ms190488.aspx
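A minimal sketch of that shrink, assuming the default logical log file name ReportServerTempDB_log (check sys.database_files if in doubt):

USE ReportServerTempDB;
GO
-- Confirm the logical file names and current sizes (size is in 8 KB pages)
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM   sys.database_files;
GO
-- Shrink the log file back to roughly 1 GB; adjust the target (in MB) as needed
DBCC SHRINKFILE (ReportServerTempDB_log, 1024);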

We had renamed SSRS, but the cleanup / archive procs were still trying to clean up the old database names. When we changed that, our problem stopped.

Related

TFS with Git getting full - Error TF30042: tbl_Content is full

We are running our project in TFS using Git. Recently it started giving this error:
TF30042: The database is full. Contact your Team Foundation Server administrator. Server: ATSS-P-AAI\SqlExpress01, Error: 1105, Message: 'Could not allocate space for object 'dbo.tbl_Content'.'PK_tbl_Content' in database 'Tfs_DefaultCollection' because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.
I have checked and found that tbl_Content itself occupies around 9.5 GB, while the total database size is 10 GB. One of my teammates had mistakenly checked in a repository with huge binary files before this happened. He has since deleted the repository, but tbl_Content still seems to occupy the same space.
I have tried setting autogrowth as well, but nothing seems to work. We are now unable to use it at all.
Any suggestions would be appreciated.
Restricted File Growth in the autogrowth settings will not work in your situation, since the 10 GB limitation comes from SQL Server Express, as Daniel mentioned.
SQL Server Express: Limitations of the free version of SQL Server
The most important limitation is that SQL Server Express does not support databases larger than 10 GB. This will prevent you from growing your database to be large.
What you can do at present:
Clean the drive to free up space: delete transaction logs, look for extraneous test case attachments, build drops checked into source control, that sort of thing.
Restore a prior database backup
Use SQL Server Standard instead
This is because you're using SQL Express. SQL Express is limited to databases of up to 10 GB.
The easy answer here is that you should upgrade your SQL edition. It may be possible to remove data from the database, but doing so without explicit instructions from Microsoft is not recommended.
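To confirm where the space is actually going before deciding, sp_spaceused reports both the overall database size and the size of the table named in the error (the 10 GB Express cap applies to the data file):

USE Tfs_DefaultCollection;
GO
-- Overall database size versus the 10 GB Express limit
EXEC sp_spaceused;
GO
-- Space used by the table named in the error message
EXEC sp_spaceused 'dbo.tbl_Content';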

Restore SQL Server DB without transaction log

Given a SQL Server 2008 .bak file, is there a way to restore the data file only from the .bak file, without the transaction log?
The reason I'm asking is that the transaction log file size of this database is huge - exceeding the disc space I have readily available.
I have no interest in the transaction log, and no interest in any uncompleted transactions. Normally I would simply shrink the log to zero, once I've restored the database. But that doesn't help when I have insufficient disc space to create the log in the first place.
What I need is a way to tell SQL Server to restore only the data from the .bak file, not the transaction log. Is there any way to do that?
Note that I have no control over the generation of the .bak file - it's from an external source. Shrinking the transaction log before generating the .bak file is not an option.
The transaction log is an integral part of the backup. You can't tell SQL Server to ignore it, because there is no way to, say, restore and shrink the transaction log file at the same time. However, you can take a look at this DBA post for a way to hack around the process, although it is not recommended at all.
Alternatively, you could try some third-party restore tools, particularly a virtual restore process, which can save a lot of space and time. Check out ApexSQL Restore, RedGate Virtual Restore, or Idera Virtual Database.
Disclaimer: I work for ApexSQL as support engineer
No, the transaction log is required.
Option 1:
One option may be to restore it to a machine that you DO have enough space on. Then, on the restored copy, change the recovery model to bulk-logged or simple, shrink the log, take another backup of this new copy, and use that to restore to the target machine with a now much smaller transaction log.
Option 2:
Alternatively, perhaps the contact at the external source could shrink the transaction log before sending it to you (this may not work if the log is large due to a lot of big transactions).
Docs on the command to shrink the log file are available here.
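A rough sketch of option 1 on the intermediate machine (the database name, logical log file name, and backup path are placeholders):

-- After restoring the backup on a machine with enough disk space:
ALTER DATABASE BigDb SET RECOVERY SIMPLE;
GO
-- A checkpoint lets the now-inactive log records be reused, then the log file can be shrunk
CHECKPOINT;
DBCC SHRINKFILE (BigDb_log, 256);   -- target size in MB
GO
-- Take a fresh backup; this copy will restore with a much smaller log
BACKUP DATABASE BigDb TO DISK = N'D:\Backups\BigDb_small.bak';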
This is really a question for the ServerFault or DBA sites, but the short answer is no, you can only restore the full .bak file (leaving aside 'exotic' scenarios such as filegroup or piecemeal restores). You don't say what "huge" means, but disk space is cheap; if adding more really isn't an option then you need to find an alternative way of getting the data from your external source.
This may not work since you have no control over the generation of the .bak file, but if you could convince your source to detach the database and then send you a copy of the .mdf file directly, you could then attach the .mdf and your server would automatically create a new empty transaction log file.
See sp_detach_db and sp_attach_db (or CREATE DATABASE database_name FOR ATTACH depending on your sql server version).
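As a minimal sketch of that route (names and paths are placeholders), FOR ATTACH_REBUILD_LOG on SQL Server 2005 and later builds a new, empty log when the .ldf is not supplied:

-- On the source server: cleanly detach so the .mdf is consistent
EXEC sp_detach_db @dbname = N'ExternalDb';

-- On your server: attach the copied .mdf and let SQL Server create a fresh log
CREATE DATABASE ExternalDb
ON (FILENAME = N'D:\SQLData\ExternalDb.mdf')
FOR ATTACH_REBUILD_LOG;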
I know this is an old thread now, but I stumbled across it while I was having transaction log corruption issues. Here is how I got around it without any data loss (I did have downtime though!).
Here is what I did:
Stop the SQL Server instance service.
Make a copy of the affected database's .mdf and .ldf files (if you have an .ndf file, copy that as well!), just to be safe; you can always put these back if it doesn't work for you.
Restart the service.
Log into SQL Server Management Studio and change the database's recovery model to simple, then take a full backup.
Change the recovery model back again, take another full backup, then take a transaction log backup.
Detach the database.
Right-click on Databases and click Restore, select the database name from the drop-down list, select the later full backup (not the one taken in simple mode), and also select the transaction log backup.
Click Restore and it should put everything back without any corruption in the log files.
This worked for me with no errors, my backups all worked correctly afterwards, and there were no more transaction log errors.
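For reference, the same backup sequence expressed as T-SQL rather than through Management Studio (the database name and paths are placeholders, and the detach/copy steps are omitted):

-- 1. Switch to simple recovery and take a full backup
ALTER DATABASE MyDb SET RECOVERY SIMPLE;
BACKUP DATABASE MyDb TO DISK = N'D:\Backups\MyDb_simple.bak';

-- 2. Switch back to full recovery, then take a fresh full backup and a log backup
ALTER DATABASE MyDb SET RECOVERY FULL;
BACKUP DATABASE MyDb TO DISK = N'D:\Backups\MyDb_full.bak';
BACKUP LOG MyDb TO DISK = N'D:\Backups\MyDb_log.trn';

-- 3. Restore the later full backup, then the log backup
RESTORE DATABASE MyDb FROM DISK = N'D:\Backups\MyDb_full.bak'
    WITH NORECOVERY, REPLACE;
RESTORE LOG MyDb FROM DISK = N'D:\Backups\MyDb_log.trn'
    WITH RECOVERY;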

How would a SQL Server 2005 database lose a few days' data?

I really need some help here.
I'm the owner of a SQL Server database application that lost three days' data! I can't understand how or why.
So here is the set-up.
SQL Server 2005 32bit standard edition database on Windows 2000 server. (Database B)
Database is in simple recovery mode
The database is connected as a subscriber to another database (SQL Server 2005 64-bit Enterprise Edition on Win2k3 Enterprise) using SQL Server continuous merge replication (Database A).
Database B was rebooted on night X as part of a scheduled reboot. When it came back up it was used as normal for a couple of days, and data was created in it perfectly fine.
But then yesterday, day X + 4, it lost a lot of data.
Database B is on a server with another instance of SQL Server and they both started to run out of memory (conflicting with each other).
Here is the sequence of events from the event log when I think this happened.
AppDomain 2 (DatabaseB.dbo[runtime].1) is marked for unload due to memory pressure.
AppDomain 2 (DatabaseB.dbo[runtime].1) unloaded.
BACKUP LOG WITH TRUNCATE_ONLY or WITH NO_LOG is deprecated.
The simple recovery model should be used to automatically truncate the transaction log. (on DatabaseB)
AppDomain 3 (DatabaseB.dbo[runtime].2) created.
I know the data is missing because of my audit logs and that a user had taken a screen shot of some of the data before it was deleted.
So here is my dilemma... how could this have happened?
How can several days' data go missing from Database B? (It is subsequently missing from the publication database as well!)
Did the log truncation while the AppDomain was down cause the data to be flushed from the log?
Any and all theories considered. If anyone needs more data I can add it.
Help!
This isn't the answer you want to hear, but in a nutshell, SQL Server doesn't "lose" data. Someone deleted it. If you had the database in full recovery mode, you could use a product like Quest LiteSpeed to read the logs and identify exactly how it was deleted, but in simple mode...sorry, sir, but you're out of luck.
Merge replication is implemented with triggers, so it doesn't need full recovery. Is it possible that someone disabled all triggers in the database? It's easy to do (DISABLE TRIGGER), and this would at least account for the subscriber losing data.
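A quick diagnostic sketch for that theory is to look for disabled DML triggers on the subscriber:

-- List any disabled triggers in the current database
SELECT  t.name                   AS trigger_name,
        OBJECT_NAME(t.parent_id) AS parent_table,
        t.is_disabled
FROM    sys.triggers AS t
WHERE   t.is_disabled = 1;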
Those AppDomain lines in the log don't mean that much; it's SQL CLR telling you it's unloading assemblies to free up some memory, and then reloading them later on.
Truncating the log removes inactive portions that have already been committed to disk; with the recovery model set to simple, there's no point in truncating the log manually, as the message suggests.
None of this explains why data went missing on both the servers though. There has to be something else that caused this.
How did you verify that for the four days when everything was 'created perfectly fine' it actually was? Do you have backups from those days? Can you see records with timestamps from those days?
Is it possible there's a ghost in the machine that did a restore without telling you?

SQL Server online backup with MozyPro

Is anyone using MozyPro to back up SQL Server databases?
I'm concerned about the way it does the backup: it just copies the data files as they are, rather than using the BACKUP DATABASE command.
Is it safe?
MozyPro uses the Volume Shadow Copy Service (VSS) to create backups for SQL Server. SQL Server 2005 has been engineered so that VSS backups are consistent, so this is definitely a valid way to back up SQL Server databases.
Here is a white paper on how the SQL Server 2005 SQL Writer works with VSS.
Microsoft® SQL Server™ 2005 provides support for creating snapshots from SQL Server data using Volume Shadow Copy Service (VSS). This is accomplished by providing a VSS compliant writer (the SQL writer) so that a third-party backup application can use the VSS framework to back up database files. This paper describes the SQL writer component and its role in the VSS snapshot creation and restore process for SQL Server databases. It also captures details on how to configure and use the SQL writer to work with backup applications in the context of the VSS framework.
Here is the MozyPro manual (PDF), which describes how to restore SQL Server backups that were made using VSS.
That being said, if you don't trust this method, there is nothing stopping you from setting up a backup job and just having Mozy back up your *.bak files.
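For example, a nightly native backup that Mozy then picks up might look something like this (the database name and path are placeholders):

-- Nightly native backup; point Mozy at the backup folder instead of the live .mdf/.ldf files
BACKUP DATABASE MyDb
TO DISK = N'D:\Backups\MyDb.bak'
WITH INIT,        -- overwrite the previous backup file
     CHECKSUM,    -- verify page checksums while writing the backup
     STATS = 10;  -- progress messages every 10 percent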
Judging by the hell I am currently going through with Mozy... NO NO NO!
The backups work, in theory; it's the restore part that doesn't. Mozy's extreme incremental backup system results in restores that can take weeks, apparently. I'm still waiting despite talking to their top-level tech support; over 10 days have passed.
https://github.com/candera/hobocopy
WHY DOES HOBOCOPY USE THE VOLUME SHADOW SERVICE?
Because HoboCopy copies from a VSS snapshot, it is able to copy even files that are locked by some other program. Further, certain programs (such as SQL Server 2005) are VSS-aware, and will write their state to disk in a consistent state before the snapshot is taken, allowing a sort of "live backup". Files locked by VSS-unaware programs will still be copied in a "crash consistent" state (i.e. whatever happens to be on the disk). This is generally a lot better than not being able to copy the file at all.

Is there a way to compact a SQL2000/2005 MDF file?

I deleted millions of rows of old data from a production SQL database recently, and it didn't seem to shrink the size of the .MDF file much. We have a finite amount of disk space.
I am wondering if there is anything else I can do to "tighten" the file (like something analogous to Access' Compact and Repair function)?
Use the Shrink File option in Sql Server Management Studio
Right-click on Database > Tasks > Shrink > Database (or Files)
DBCC SHRINKDATABASE etc. - read up on transaction logs and backups in the Books Online
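A minimal sketch of that route (MY_DATABASE and the logical data file name are placeholders; the data file will only shrink if the deleted rows actually freed whole pages):

USE MY_DATABASE;
GO
-- Shrink the whole database, leaving 10 percent free space
DBCC SHRINKDATABASE (MY_DATABASE, 10);
GO
-- Or target just the data file (get the logical name from sys.database_files)
DBCC SHRINKFILE (MY_DATABASE_Data, 2048);   -- target size in MB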
If large log files are the problem, this may help:
backup log MY_DATABASE WITH TRUNCATE_ONLY;
Then right click on MY_DATABASE and choose All Tasks->Shrink Database as teller suggested.
This worked for me and shrank my log files by a factor of a thousand.
Using SQL Server Management Studio:
Right-click on the database in question.
Choose Properties, then the Options tab.
Change the recovery model from Full to Simple.
If you need it in full mode, switch it back after it shrinks.
That's it!
