We recently ran out of space on an Azure database. After deleting a lot of unused tables (none of which had indexes, or even keys), Transact-SQL queries take almost exactly twice as long. SSIS imports seem to be completely unaffected.
After the obvious step of shrinking the log file, I'm now totally baffled.
Any advice would be greatly appreciated.
After you delete a lot of unused tables, SQL Server does not release the space automatically.
For a test, you can migrate your data to another database, run the same query, and compare how long it takes.
Since you have already shrunk the log file, I suggest you also try shrinking the database.
Log in to your SQL database with SSMS, then:
right-click your database -> Tasks -> Shrink -> Database.
Another option is to enable Auto Shrink on the database:
right-click your database -> Properties, and set Auto Shrink -> True.
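If you prefer T-SQL over the SSMS dialogs, here is a minimal sketch (YourDatabase is a placeholder name, and support may vary by Azure service tier):

    -- Shrink the whole database (data and log files)
    DBCC SHRINKDATABASE (YourDatabase);

    -- Or enable Auto Shrink so SQL Server periodically shrinks the files on its own
    ALTER DATABASE YourDatabase SET AUTO_SHRINK ON;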
Hope this helps.
I have an application that is in production with its own database for more than 10 years.
I'm currently developing a new application (kind of a reporting application) that only needs read access to the database.
In order not to be too tightly coupled to the existing database, and to be able to use a newer DAL (Entity Framework 6 Code First), I decided to start from a new, empty database and only added the tables and columns I need (with different names than the production ones).
Now I need some way to update the new database from the production database regularly (ideally almost immediately).
I hesitated to ask this question on http://dba.stackexchange.com but I'm not necessarily limited to only using SQL Server for the job (I can develop and run some custom application if needed).
I have already done some research and found these partial solutions:
Using transactional replication to create a smaller database (with only the tables/columns I need). But as far as I can see, the fact that I have different table names / column names will be problematic. So I could use it to create a smaller database that is automatically replicated by SQL Server, but I would still need to replicate that database to my new one (although it might keep my production database from being stressed too much?)
Using triggers to insert/update/delete the rows
Creating some custom job (either a SQL Agent job or a Windows service that runs every X minutes) that updates the necessary tables (I have a LastEditDate column that is updated by a trigger on my tables, so I can know whether a row has been updated since my last replication; see the sketch after this list)
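For illustration only, here is a sketch of what such a job could run. All database, table, and column names are hypothetical, and MERGE assumes SQL Server 2008 or later:

    -- Copy rows changed since the last run, using the trigger-maintained LastEditDate column
    DECLARE @LastSync datetime = (SELECT LastSyncDate FROM Reporting.dbo.SyncLog WHERE TableName = 'Orders');

    MERGE Reporting.dbo.Orders AS target
    USING (SELECT OrderId, CustomerId, Amount, LastEditDate
           FROM   Source.dbo.Orders
           WHERE  LastEditDate > @LastSync) AS source
    ON target.OrderId = source.OrderId
    WHEN MATCHED THEN
        UPDATE SET CustomerId   = source.CustomerId,
                   Amount       = source.Amount,
                   LastEditDate = source.LastEditDate
    WHEN NOT MATCHED THEN
        INSERT (OrderId, CustomerId, Amount, LastEditDate)
        VALUES (source.OrderId, source.CustomerId, source.Amount, source.LastEditDate);

    -- Remember where this run stopped
    UPDATE Reporting.dbo.SyncLog SET LastSyncDate = GETDATE() WHERE TableName = 'Orders';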
Do you have some advice, or maybe some other solutions that I didn't foresee?
Thanks
I think that transactional replication is better than using triggers.
Too many resources would be used on the source server/database, because a trigger fires for each DML transaction.
Transactional replication can be scheduled as a SQL Agent job and run a few times a day/night, or as part of a nightly scheduled job. It really depends on how busy the source database is...
There is one more thing that you could try: database mirroring. It depends on your SQL Server version.
If it were me, I'd use transactional replication, but keep the table/column names the same. If you have some real reason why you need them to change (I honestly can't think of any good ones and a lot of bad ones), wrap each table in a view. At least that way, the view is the documentation of where the data is coming from.
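For example (all names here are hypothetical), a view over the replicated table can document the renaming:

    -- The new application queries dbo.Customer; the view shows exactly where each renamed column comes from
    CREATE VIEW dbo.Customer
    AS
    SELECT CustomerID   AS Id,
           CustomerName AS FullName,
           LastEditDate AS ModifiedOn
    FROM   dbo.tblCustomer;   -- the replicated copy keeps the production table name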
I'm gonna throw this out there and say that I'd use transaction log shipping. You can even set the secondary DBs to read-only. There would be some setup for the full recovery model and transaction log backups, but that way you can just automatically restore the transaction logs to the secondary database, stay hands-off with it, and the secondary database would be as current as your last transaction log backup.
Depending on how current the data needs to be, if you only need it done daily you can set up something that will take your daily backups and then just restore them to the secondary.
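A rough sketch of what each restore on the read-only secondary might look like (the database name and paths are placeholders):

    -- STANDBY leaves the secondary readable between restores; NORECOVERY would keep it restoring-only
    RESTORE LOG ReportingCopy
    FROM DISK = N'\\backupshare\Prod_log_0200.trn'
    WITH STANDBY = N'D:\SQLDatabases\ReportingCopy_undo.dat';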
In the end, we went for the trigger solution. We don't have that many changes a day (maybe 500, 1,000 tops), and it didn't put too much pressure on the current database. Thanks for your advice.
The transaction log file for the ReportServerTempDB database (a database installed with Reporting Services) has grown to over 100 GB, and I'm not sure why.
Here are the file sizes:
D:\SQLDatabases\ReportServer.mdf - 0.7GB
G:\SQLDatabases\ReportServer.ldf - 1.8GB
E:\SQLDatabases\ReportServerTempDB.mdf - 5GB
G:\SQLDatabases\ReportServerTempDB.ldf - 107.6GB
The recovery model for all these databases is SIMPLE.
We are using SQL Server 2008 R2 Standard Edition.
EDIT: Something that is unique to the reporting services databases:
The collation for these databases is Latin1_General_CI_AS_KS_WS, but for all other databases it is Latin1_General_CI_AS.
I don't want to just shrink the log files and carry on, because they might just grow again. And I can't see why they should be so large.
Does anyone know what could cause the log file (and the data file) for the ReportServerTempDB database to grow so much?
And what should I do about it?
Could this indicate a problem with our Report Server?
Are you sure that your ReportServerTempDB is on the simple recovery model as well?
At least you can shrink the database to get your disk space back using DBCC SHRINKDATABASE; check this link for more details: http://msdn.microsoft.com/en-us//library/ms190488.aspx
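If you want to see why the log keeps growing before shrinking it, here is a quick sketch (the logical file name ReportServerTempDB_log is an assumption; verify it with sys.database_files):

    -- What is preventing log truncation, and which recovery model is actually in use?
    SELECT name, recovery_model_desc, log_reuse_wait_desc
    FROM   sys.databases
    WHERE  name = 'ReportServerTempDB';

    -- Confirm the logical file names and sizes, then shrink the log file to roughly 1 GB
    USE ReportServerTempDB;
    SELECT name, size * 8 / 1024 AS size_mb FROM sys.database_files;
    DBCC SHRINKFILE (ReportServerTempDB_log, 1024);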
We had renamed SSRS, but the cleanup / archive procs were still trying to clean up the old database names. When we changed that, our problem stopped.
We have some large schema changes coming down the pipe and are in need of some tips on writing upgrade scripts manually. We're using SQL Server 2000 and do not have access to automated tools, nor are they an option at this point in time. The only database tool we have is SQL Server Management Studio.
You can import the database to a local machine that has a newer version of SQL Server, then use the 'Generate Scripts' feature to script out a lot of the database objects.
Make sure to set the option in the Advanced settings to script for SQL Server 2000.
If you are having problems with the generated script, you can try breaking it up into chunks and running it in small batches. That way, if any particular generated script fails, you can just write that SQL manually to get it to run.
While not quite what you had in mind, you can use schema comparison tools like SQL Compare and then just script the changes to a SQL file, which you can edit by hand before running it. I guess that would be as close to writing it manually without actually writing it manually.
If you really need to write it all manually, I would suggest getting some IntelliSense-type tools to speed things up.
Your upgrade strategy is probably going to be somewhat customized for your deployment scenario, but here are a few points that might help.
You're going to want to test early and often (not that you wouldn't do this anyway), so be sure to have a testing DB in your initial schema, with a backup so you can revert back to "start" and test your upgrade any number of times.
Backups & restores can be time-consuming, so it might be helpful to have a DB with no data rows (schema-only) to test your upgrade script. Remember to get a "start" backup so you can go back there on-demand.
Consider stringing a series of scripts together - you can use one per build, or feature, or whatever. This way, once you've got part of the script working, you can leave it alone.
Big data migration can get tricky. If you're doing data transformations, copying or moving rows to new tables, etc., be sure to check row counts before the move and account for all rows afterwards.
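As a rough illustration (the table and column names are made up), accounting for the rows could look something like this:

    -- Count before the move, copy, then verify that every row is accounted for
    DECLARE @before int, @moved int

    SELECT @before = COUNT(*) FROM dbo.OldOrders

    INSERT INTO dbo.NewOrders (OrderId, CustomerId, Amount)
    SELECT OrderId, CustomerId, Amount FROM dbo.OldOrders

    SET @moved = @@ROWCOUNT

    IF @moved <> @before
        RAISERROR ('Row count mismatch: expected %d, moved %d', 16, 1, @before, @moved)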
Plan for failure. If something goes wrong, have a plan to fix it -- whether that's rolling everything back to a backup taken at the beginning of the deployment, or whatever. Just be sure you've got a plan and you understand where your go / no-go points are.
Good luck!
In a couple of my tables in my SQL Server 2005 database, all of my data has been erased. Is there any way to get a log in SQL Server of all the statements that have run in the past day? I am trying to find out whether someone did this by accident, there is a vulnerability in my web app, or the actual DB has been compromised.
You're looking for the transaction log. Depending on how, and if, it is set up, you'll be able to see what was run. There is some info on it at http://www.databasedesign-resource.com/sql-server-transaction-log.html. Given that, I'm sure you can also Google some better resources.
You could also try running the command DBCC LOG(database,3). It will output the data that is in the transaction log.
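For example (the database name is a placeholder, and DBCC LOG is undocumented, so its output format isn't guaranteed):

    -- Dump transaction log records with extra detail (level 3)
    DBCC LOG ('YourDatabase', 3)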
See the following; there are a couple of programs that will allow you to read the log.
https://web.archive.org/web/20080215075500/http://sqlserver2000.databases.aspfaq.com/how-do-i-recover-data-from-sql-server-s-log-files.html
The one from Red Gate is called SQL Rescue and looks pretty good.
You could try a log rescue tool like Log Rescue
I would also sort out some auditing of your own.
Log Rescue doesn't support SQL 2005 so you could also try Apex SQL Log
There are applications you can buy that can convert a transaction log backup into the actual statements that were run. You may be able to find trial versions of some of these; unfortunately, I cannot recommend any specific one though.
Something else to keep in mind: if a hacker gained enough access to clean out some tables, there's a good chance they gained enough access to have their way with your log files as well.
Make a transaction log backup in SQL Server, then download a trial version of TOAD for SQL Server; you can import your transaction log backup there.
And if you want, you can also create INSERT scripts for the DELETED records. But I don't know if there are any restrictions in the TOAD trial version.
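For the first step, a minimal log backup sketch (the database name and path are placeholders; this assumes the database uses the FULL recovery model):

    -- Back up the transaction log so it can be loaded into a log-reader tool
    BACKUP LOG YourDatabase
    TO DISK = N'D:\Backups\YourDatabase_log.trn'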
I've created a couple of maintenance plans on our SQL Server 2008 databases that perform backups (full and differential) overnight, but they keep failing with a message saying the database is currently in use.
We typically have little to no traffic during the times the maintenance plans are scheduled to run so I'm not sure why I'm getting this error. Is there a command I can add to the maintenance plan or a configuration change I can make to the plan(s) to allow the plans to execute?
Thanks.
I can speak for 2000/2005 and say that backups should not be affected by a database being in use. Are there other steps in the maintenance plan that could be causing this? Have you set up the reporting/logging options of the maintenance plan to write to a log file? That might give a little more info.
Seems that one of my databases was getting "stuck" in a restore state. Removing that database from the plan seemed to take care of it. Now on to figuring out what was causing that database to get "stuck"...
If there is little to no traffic during the times the maintenance plans run, you could put the database into read-only mode (or take it offline); that pauses all transactions while you take your backup.
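A minimal sketch of the read-only variant (YourDatabase and the backup path are placeholders):

    ALTER DATABASE YourDatabase SET READ_ONLY WITH ROLLBACK IMMEDIATE   -- pause writes
    BACKUP DATABASE YourDatabase TO DISK = N'D:\Backups\YourDatabase.bak'
    ALTER DATABASE YourDatabase SET READ_WRITE                          -- resume normal use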