We have a Microsoft SQL Server 2012 database used for storing statistics data from a queueing system. The queueing system sends statistics to the DB every 10 minutes, and the database size is growing by around 1 GB per week.
We do not need data older than 1 year.
We have created a SQL script to delete the old data, but after the script is executed the DB size is bigger than before.
use statdb
-- Execute as stat user
-- The following three settings are used to control what will be used
-- Amount of days of stat data which should be kept
DECLARE @numberOfDaysToKeep int = 365
-- If set to 1, the aggregated data will also be removed, otherwise only the events will be removed
DECLARE @DeleteAggregateData int = 1
-- If set to 1, the hardware monitoring data will also be removed.
DECLARE @DeleteAgentDcData int = 1
-- Do not change anything below this line
DECLARE @DeleteBeforeDate int = (SELECT id FROM stat.dim_date WHERE full_date > DATEADD(day, -1 - @numberOfDaysToKeep, GETDATE()) AND full_date < DATEADD(day, 0 - @numberOfDaysToKeep, GETDATE()))
-- Remove CFM events
DELETE FROM stat.fact_visit_events WHERE date_key < @DeleteBeforeDate
DELETE FROM stat.fact_sp_events WHERE date_key < @DeleteBeforeDate
DELETE FROM stat.fact_staff_events WHERE date_key < @DeleteBeforeDate
-- ...continue to delete from other tables
We would like to keep the DB at a constant size. Does MS SQL Server reuse the free space (after the delete), or will the DB size grow at the same speed as before the delete?
Or do I need to run SHRINK after the script? (Based on other discussions, it is not recommended.)
You do not say whether your database is in FULL or SIMPLE recovery mode. Either way, you will have to shrink your log file as well as the data file. In SSMS you can see how much free space is available in the data file and in the log file (right-click the database and choose Tasks > Shrink > Files).
There is an option to shrink the data file and the log file using T-SQL.
See DBCC SHRINKFILE (Transact-SQL) for more details.
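For example, here is a minimal sketch of checking free space and then shrinking both files; the logical file names statdb and statdb_log are assumptions, so replace them with the names returned by sys.database_files:
USE statdb
GO
-- Free space currently available inside each file
SELECT name,
       size / 128.0 AS SizeMB,
       size / 128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128.0 AS FreeMB
FROM sys.database_files
GO
-- Shrink the data file and the log file to a target size (in MB)
DBCC SHRINKFILE (N'statdb', 10240)
DBCC SHRINKFILE (N'statdb_log', 1024)
GO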
I used the following query to view the database log file size.
declare @templTable as table
( DatabaseName nvarchar(50),
LogSizeMB nvarchar(50),
LogSpaceUsedPercent nvarchar(50),
LogStatus bit
)
INSERT INTO @templTable
EXEC('DBCC SQLPERF(LOGSPACE)')
SELECT * FROM @templTable ORDER BY convert(float, LogSizeMB) desc
DatabaseName    LogSizeMB    LogSpaceUsedPercent
================================================
MainDB          6579.93      65.8095
I also used the following code to view the amount of space used by the main database file.
with CteDbSizes
as
(
select database_id, type, size * 8.0 / 1024 size , f.physical_name
from sys.master_files f
)
select
dbFileSizes.[name] AS DatabaseName,
(select sum(size) from CteDbSizes where type = 1 and CteDbSizes.database_id = dbFileSizes.database_id) LogFileSizeMB,
(select sum(size) from CteDbSizes where type = 0 and CteDbSizes.database_id = dbFileSizes.database_id) DataFileSizeMB
--, (select physical_name from CteDbSizes where type = 0 and CteDbSizes.database_id = dbFileSizes.database_id) as PathPfFile
from sys.databases dbFileSizes ORDER BY DataFileSizeMB DESC
DatabaseName LogFileSizeMB DataFileSizeMB
===============================================
MainDB 6579.937500 7668.250000
But whatever I did, the database log space never dropped below 6 GB. Do you think there is a reason the database log has not changed for more than a month? Is there a solution to reduce this amount or not? I also used different methods and queries to reduce the size of the log file, and I got good results on other databases, like the following:
USE [master]
GO
ALTER DATABASE [MainDB] SET RECOVERY SIMPLE WITH NO_WAIT
GO
USE [MainDB]
GO
DBCC SHRINKDATABASE(N'MainDB')
GO
DBCC SHRINKFILE (N'MainDB_log' , EMPTYFILE)
GO
ALTER DATABASE [MainDB] SET RECOVERY FULL WITH NO_WAIT
GO
But in this particular database, the database log is still not less than 6 GB. Please help. Thanks.
As I mentioned in the main post, after a while the size of our database log file reached about 25 GB and we could no longer even shrink the database files. After some searching, I came to the conclusion that I should back up the transaction log and then shrink the log file. For this purpose, I defined a job that takes a transaction log backup roughly every 30 minutes; the size of these backup files usually does not exceed 150 MB. Then, after each log backup, I run the log shrink command once. With this method, the size of the log file is greatly reduced, and we now have about 500 MB of log file. Of course, due to the large number of transactions on the database, the mentioned job must always be active; if the job is not active, the log will grow again.
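A minimal sketch of such a job step is shown below; the backup path and the logical log file name (MainDB_log) are assumptions and must be adjusted to your environment:
-- Back up the transaction log, then shrink the log file (run as a job step, e.g. every 30 minutes)
DECLARE @backupFile nvarchar(260)
SET @backupFile = N'D:\LogBackups\MainDB_log_'
    + CONVERT(nvarchar(8), GETDATE(), 112)
    + N'_' + REPLACE(CONVERT(nvarchar(8), GETDATE(), 108), ':', '')
    + N'.trn'
BACKUP LOG [MainDB] TO DISK = @backupFile
-- Once the inactive portion of the log has been backed up, it can be released
DBCC SHRINKFILE (N'MainDB_log', 500)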
I have a database in SQL Server. Basically, the table consists of a number of XML documents that represent the same table's data at a given time (like a backup history). What is the best method to cut off all the old backups (older than 3 months), remove them from the DB, and save them archived?
There is no such export out of the box in SQL Server.
Assuming that your table can be pretty big (since it looks like you save an image of the table every minute) and that you want to do it all from inside SQL Server, I'll suggest doing the cleanup in chunks.
The usual way to delete in chunks in SQL Server is to use DELETE in combination with the OUTPUT clause.
The easiest way to archive and then remove the rows would be to have OUTPUT write them to a table in another database created for that sole purpose.
So your steps would be:
Create a new database (ArchiveDatabase)
Create an archive table in ArchiveDatabase (ArchiveTable) with the same structure as the table you want to purge (a setup sketch follows this list)
In a while loop, perform the DELETE ... OUTPUT
Backup the ArchiveDatabase
TRUNCATE the ArchiveTable table in ArchiveDatabase
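For steps 1 and 2, a minimal one-time setup sketch, assuming the source table is the MyDataTable used in the loop below:
-- One-time setup for the archive database and table (names as in the steps above)
CREATE DATABASE ArchiveDatabase
GO
-- Copy only the structure of the table to be purged (no rows)
SELECT *
INTO ArchiveDatabase.dbo.ArchiveTable
FROM MyDataTable
WHERE 1 = 0
GO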
The DELETE/OUTPUT loop will look something like this:
declare @RowsToDelete int = 1000
declare @DeletedRowsCNT int = 1000
while @DeletedRowsCNT = @RowsToDelete
begin
-- delete one chunk and copy the deleted rows into the archive table
delete top (@RowsToDelete)
from MyDataTable
output deleted.* into ArchiveDatabase.dbo.ArchiveTable
where dt < dateadd(month, -3, getdate())
set @DeletedRowsCNT = @@ROWCOUNT
end
It's been several years since I've worked with SQL and C# .NET so be gentle.
I'm jumping in to assist on a project that a coworker has been building. Something though seems quite out of whack.
I'm trying to provide straight reports on a particular Table in the database. It has 9 columns and approximately 1.6M rows last time I checked. This is big, but it's hardly large enough to create problems. However, when I run a simple Query using MS SQL Server Management Studio, it takes 11 seconds.
SELECT *
FROM [4.0Analytics].[dbo].[Measurement]
where VIN = 'JTHBJ46G482271076';
I tried creating an index for VIN but it times out.
"An exception occurred while executing a Transact-SQL statement or batch."
"Could not allocate space for object 'X' in dabase 'Your Datase' because the 'PRIMARY' filegroup is full"
It seems, however, that it should be taking a lot less time in the first place even non-indexed, so I'd like to find out what might be wrong there and then move on to the index time-out next. Unless 11 seconds is normal for a simple non-indexed query?
As David Gugg has mentioned, you do not have enough space left in your database.
Check whether you have enough space left on the disk where your primary data file is located. If you have enough space on the disk, use the following command and then try to create the index:
USE [master]
GO
ALTER DATABASE [4.0Analytics]
MODIFY FILE ( NAME = N'Primary_File_Name'
, MAXSIZE = UNLIMITED
, FILEGROWTH = 10%
)
GO
-- This will allow your database to grow automatically if it runs out of space
-- provided you have space left on the disk
-- Now try to create the Index and it should let you create it.
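Once the file can grow again, the index creation should go through; a minimal sketch using the table and column from the question (the index name is just an assumption):
USE [4.0Analytics]
GO
-- Nonclustered index to support lookups by VIN
CREATE NONCLUSTERED INDEX IX_Measurement_VIN
    ON [dbo].[Measurement] (VIN);
GO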
SELECT * is taking too long? No wonder: no matter how many indexes you put on the table, doing a SELECT * will typically result in a clustered index scan if you have a primary key defined on the table, otherwise a table scan.
Try `Select <Column Names>` --<-- Only the columns you actually need
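For example (the column names here are only hypothetical; pick the ones your report actually needs):
SELECT [MeasurementDate], [MeasurementValue]   -- hypothetical column names
FROM [4.0Analytics].[dbo].[Measurement]
WHERE VIN = 'JTHBJ46G482271076';
Combined with a nonclustered index on VIN that includes those columns, a narrow query like this can be answered without scanning the whole table.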
I would not recommend setting the data file autogrowth to a percentage [%]; it is better (best practice) to set it to grow by a fixed number of MB, for example:
USE [master]
GO
ALTER DATABASE [YourDataBaseName] MODIFY
FILE ( NAME = N'YourDataBaseFileName',
FILEGROWTH = 10240KB ,
MAXSIZE = UNLIMITED)
GO
The error you got during the index creation occurred because the file could not grow (the MAXSIZE parameter was set to a limited value).
To check this you can:
a. Object Explorer >>> Databases >>> right-click the requested database >>> Properties >>> "Files" page.
b. T-SQL:
select
e.name as [FileName],
e.growth,
e.max_size,
e.is_percent_growth
from sys.master_files e
where DB_NAME(e.database_id) = 'YourDatabaseName'
GO
I have 200K+ rows of data in an xls file, and per the requirement I need to update two database tables using the xls data.
I know how to copy data from xls into a SQL Server table; however, I am struggling with the approach for updating the database tables.
I could not think of any other approach than writing a cursor, and I don't want to go with the cursor approach, as updating
200K+ rows using a cursor may eat up the transaction log and will take a lot of time to finish.
Can someone help me with what else could be done to accomplish this?
Use the following techniques.
1 - Import the data into a staging table. Using the import/export wizard is one way to do the task; the target table should be in a throwaway or staging database (a load sketch follows this list).
http://technet.microsoft.com/en-us/library/ms141209.aspx
2 - Make sure that the data types of the EXCEL data and the TABLE data are the same.
3 - Make sure the existing target [TRG_TABLE] table has a primary key. Make sure the EXCEL data loaded into the [SRC_TABLE] table has the same key. You can add a non-clustered index to speed up the JOIN in the UPDATE statement.
4 - Add a [FLAG] column as INT NULL to [TRG_TABLE] with an ALTER TABLE command.
5 - Make sure a full backup is taken before and after the large UPDATE. You can also use a DATABASE SNAPSHOT. The key point is to have a rollback plan in place if needed.
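For step 1, one way to load the spreadsheet into the staging table is OPENROWSET with the ACE OLE DB provider; the database name, file path, and sheet name below are assumptions, and the import/export wizard works just as well:
-- Requires the Microsoft ACE OLE DB provider and 'Ad Hoc Distributed Queries' enabled
SELECT *
INTO [STAGE_DB].[dbo].[SRC_TABLE]
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                'Excel 12.0;Database=C:\Data\Updates.xlsx;HDR=YES',
                'SELECT * FROM [Sheet1$]');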
-- Select correct db
USE [TRG_DB]
GO
-- Set to simple mode
ALTER DATABASE [TRG_DB] SET RECOVERY SIMPLE;
GO
-- Update in batches
DECLARE @VAR_ROWS INT = 1;
WHILE (@VAR_ROWS > 0)
BEGIN
-- Update fields and flag on join
UPDATE TOP (10000) T
SET
T.FLD1 = S.FLD1,
-- ... Etc
T.FLAG = 1
FROM [TRG_TABLE] T JOIN [SRC_TABLE] S ON T.ID = S.ID
WHERE T.[FLAG] IS NULL
-- How many rows were updated
SET @VAR_ROWS = @@ROWCOUNT;
-- WAL -> flush log entries to data file
CHECKPOINT;
END
-- Set back to full mode
ALTER DATABASE [TRG_DB] SET RECOVERY FULL;
GO
In summary, I gave you all the tools to do the job. Just modify them for your particular situation.
PS: Here is working code from my blog on large deletes. Same logic applies.
http://craftydba.com/?p=3079
PPS: I did not check the sample code for syntax. That is left up to you.
How do I get the current size of the transaction log? How do I get the size limit?
I'd like to monitor this so I can determine how often I need to backup the transaction log.
I usually have a problem with the transaction log when I perform large operations.
On SQL Server 2005 and later, try this:
SELECT (size * 8)/1024.0 AS size_in_mb,
CASE WHEN max_size = -1 THEN 9999999 -- Unlimited growth, so handle this how you want
ELSE (max_size * 8) / 1024.0
END AS max_size_in_mb
FROM <YourDB>.sys.database_files
WHERE data_space_id = 0 -- Log file
Change YourDB to your database name
For an overview of the log size and log space used of all databases, try DBCC SQLPERF:
DBCC SQLPERF (LOGSPACE)
This should work in SQL 2000/2005/2008
If you want to monitor it in real time, try Performance Monitor (perfmon) while you are doing those large operations.
Perfmon can be used in many different scenarios.
Find out more from Technet.