Reducing SpaceUsedMB in SQL Server Database

I have a database that is 120 GB in size. One of the tables that uses up a large amount of space has thousands of records created daily. One of its columns is an nvarchar(max), which commonly holds around 2,000 characters of data that only needs to be kept for a week.
If I update that column to be blank after a week for those records, it does not seem to reduce the database size. E.g.:
UPDATE tblSample
SET largefieldname = ''
WHERE DateAdded < DATEADD(D, -7, GETDATE())
So if I insert 100 MB of data into that column and then blank it after a week using the statement above, the database does not get any smaller.
How can I get the database to reduce in size after such a task? I don't want to delete the entire row; I just want to remove the unnecessary disk usage from that one column.

You need to consider a redesign for the temporary data.
Put the temporary data in its own table and put a key to it in the main table.
When you are done with the data, delete the key value and truncate the table. This will not return the space to the operating system, but it will make the space much easier to manage. The only ways to actually reclaim usable disk space are to shrink the database or to use partitions.
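A rough sketch of that layout, using hypothetical names (the key column is deliberately not an enforced foreign key, because TRUNCATE TABLE is not allowed on a table referenced by one):
CREATE TABLE dbo.tblSampleLargeText (
    LargeTextId int IDENTITY(1,1) PRIMARY KEY,
    LargeText   nvarchar(max) NOT NULL
);
ALTER TABLE dbo.tblSample ADD LargeTextId int NULL;

-- weekly cleanup: clear the keys, then truncate to deallocate all the text pages
-- (note that TRUNCATE removes every row, so this suits a rotation where all
--  retained text is past its cutoff; otherwise DELETE the expired rows instead)
UPDATE dbo.tblSample
SET LargeTextId = NULL
WHERE DateAdded < DATEADD(DAY, -7, GETDATE());

TRUNCATE TABLE dbo.tblSampleLargeText;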

One way to release the space is to copy the table into a temp table, truncate the original table, and reinsert the rows from the temp table.
Try this code:
create table dbo.x5 (name varchar(1000), i1 int IDENTITY(1,1))
set nocount on
-- highlight the next two lines and run them together (GO 10000 repeats the insert)
insert into dbo.x5 values (replicate('abcd',250))
go 10000
exec sp_spaceused 'dbo.x5' -- 11488 KB
update dbo.x5 set name=''
-- copy out, truncate, and reinsert to release the space
select * into #a1 from dbo.x5
truncate table dbo.x5
insert into dbo.x5 (name) select name from #a1
exec sp_spaceused 'dbo.x5' -- 136 KB
On a large table with several indexes it may pay to drop the indexes, reinsert the rows, and recreate the indexes. Your results will vary, so test the code on your own instance.
If you have a clustered index on the table, rebuilding that index will also free up the space. I don't know whether a rebuild on a heap (a table without a clustered index) has the same effect.
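For reference, a hedged sketch of the rebuild approach, assuming a clustered index named PK_tblSample (hypothetical name). Off-row nvarchar(max) data may additionally need LOB compaction, which ALTER INDEX ... REORGANIZE offers:
-- rebuild the clustered index to compact the in-row data
ALTER INDEX PK_tblSample ON dbo.tblSample REBUILD;

-- compact the LOB (nvarchar(max)) pages as well
ALTER INDEX PK_tblSample ON dbo.tblSample REORGANIZE WITH (LOB_COMPACTION = ON);

-- on SQL Server 2008 and later, a heap can be rebuilt directly
ALTER TABLE dbo.tblSample REBUILD;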

You can shrink the database file like this:
USE [YourDatabase]
GO
DBCC SHRINKFILE (N'YourDatabase') -- pass the logical name of the data file (see sys.database_files)
GO
This is the command SQL Server provides to return that space to the OS. Keep in mind, though, that shrinking is generally not recommended: it fragments your indexes and adds significant CPU load while it runs. Please take a look at the following article: http://www.sqlskills.com/blogs/paul/why-you-should-not-shrink-your-data-files/

Related

Column with Non Clustered Index takes longer to Execute

I'm trying to understand how to properly use nonclustered indexes. Here is what I found with some test data.
CREATE TABLE TestTable
(
RowID int Not Null IDENTITY (1,1),
Continent nvarchar(100),
Location nvarchar(100)
CONSTRAINT PK_TestTable_RowID
PRIMARY KEY CLUSTERED (RowID)
)
ALTER TABLE TestTable
DROP CONSTRAINT PK_TestTable_RowID
GO
INSERT INTO TestTable
SELECT Continent, Location
FROM StgCovid19
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
SELECT *
FROM TestTable
WHERE Continent = 'Asia' --551ms
CREATE NONCLUSTERED INDEX NCIContinent
ON TestTable(Continent)
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
SELECT *
FROM TestTable
WHERE Continent = 'Asia' --1083ms
DROP INDEX NCIContinent
ON TestTable
CREATE NONCLUSTERED INDEX NCIContinent
ON TestTable(Continent)
INCLUDE (Location)
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
SELECT *
FROM TestTable
WHERE Continent = 'Asia' --530ms
As you can see, if I only add the nonclustered index on the Continent column, it performs a seek but takes about double the time to execute the select.
When I add INCLUDE (Location), it takes less time than with no index at all.
Can anyone tell me what is going on?
The strategy for accessing the data depends on the table structure, but mainly on the data distribution. That is why statistics about data distribution are stored for indexes and tables:
In indexes, to describe the distribution (histogram) of the key values
In tables, to describe the distribution (histogram) of the column values
An execution plan is a tree of chained steps, each an algorithm specialized in one action (join, sort, data access...), that together form the program that retrieves the data requested by your query.
The optimizer's role is to determine, among many possible execution plans, which one is likely to use the fewest resources (memory, data volume, CPU...). The plan chosen is not necessarily the quickest one, but the one with the lowest estimated resource cost, and that estimate is made by the optimizer from the statistics.
Your test is not meaningful because we do not know the data distribution, and DBCC DROPCLEANBUFFERS has a heavy side effect that does not reflect real-world database workloads. In the real world, something like 98% of the data users touch is already in cache!
Also, measuring the execution time of a query has two problems:
1) the metric is not stable and depends on whatever else the machine is doing, which can be significant even when you think it is idle. Usually we repeat the test at least 10 times, discard the slowest and the fastest runs, and average the remaining 8 results
2) elapsed time is not the only interesting figure; much of it can be the time spent sending the result set to the client application. To eliminate that time, SSMS has an option to execute the query without displaying the result set
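As a reference point (not a substitute for the averaging above, just a common way to look past client rendering), the SET STATISTICS options report server-side CPU time, elapsed time and logical reads; SSMS also has a "Discard results after execution" query option:
SET STATISTICS TIME ON;
SET STATISTICS IO ON;

SELECT *
FROM TestTable
WHERE Continent = 'Asia';

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;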

Why does altering a column size take a very long time?

I have a table with 45M rows (45 GB data space and 2GB Index space). I added a new column and it finished instantly.
alter table T add C char(25)
Then I found the size was too small, so I ran the following query.
alter table T alter column C varchar(2500)
It has been running for an hour and is still going. sp_whoisactive shows (at the moment, still running):
reads: 48,000,000
writes: 5,000,000
physical reads: 3,900,000
Shouldn't it be really fast?
I tested this case. You can do it faster using the steps below:
Create the same table structure with a different name (call it Tbl2)
Alter the column on Tbl2
insert data from Tbl1 into Tbl2
Drop Tbl1 (the old table)
Rename Tbl2 (the new one) to Tbl1
This will give you much better performance.
The reason is that altering a column on a table that already contains data requires a lot of data movement and data-page realignment.
With this approach you simply insert the data, without any page reorganization. A sketch of the swap is below.
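A minimal sketch of the swap, reusing the hypothetical names Tbl1/Tbl2 from the steps above. Script out and recreate indexes, constraints and permissions on Tbl2 before the rename; if Tbl1 has an IDENTITY column, list the columns explicitly and wrap the insert in SET IDENTITY_INSERT:
-- 1) Tbl2 was created with the same structure as Tbl1, but with C already varchar(2500)
-- 2) copy the rows; TABLOCK on the target can allow minimal logging under SIMPLE/BULK_LOGGED recovery
INSERT INTO dbo.Tbl2 WITH (TABLOCK)
SELECT * FROM dbo.Tbl1;

-- 3) swap the names
DROP TABLE dbo.Tbl1;
EXEC sp_rename 'dbo.Tbl2', 'Tbl1';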

Oracle database truncate table (or something similar, but not delete) partially based on condition

I am not sure whether this question is an obvious one. I need to delete a large amount of data, and DELETE is expensive. I want something like truncating the table, but only partially, so that the space is released and the high-water mark is reset.
Is there any feature that would let me truncate a table based on a condition, for selected rows only?
It depends on how your table is organised.
1) If your (large) table is partitioned on a matching condition (e.g. you want to delete the previous month's data and your table is partitioned by month), you can truncate just that partition instead of the entire table (see the sketch after the code below).
2) The other option, provided you have some downtime, would be to insert the data that you want to keep into a temporary table, truncate the original table and then load the data back.
-- <table1> is a staging table with the same structure as <my_table>
insert into <table1>
select * from <my_table>
where <condition>;
commit;

truncate table <my_table>;

insert into <my_table>
select * from <table1>;
commit;

-- since the amount of data might change considerably,
-- you might want to collect statistics again
exec dbms_stats.gather_table_stats
  (ownname => 'SCHEMA_NAME',
   tabname => 'MY_TABLE');
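And a one-line sketch of option 1, assuming the table is partitioned so that the rows to remove live in a partition named p_prev_month (hypothetical name):
-- removes every row in that partition and resets its high-water mark
alter table my_table truncate partition p_prev_month;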

Using a trigger to simulate a second identity column in SQL Server 2005

I have various reasons for needing to implement, in addition to the identity column PK, a second, concurrency safe, auto-incrementing column in a SQL Server 2005 database. Being able to have more than one identity column would be ideal, but I'm looking at using a trigger to simulate this as close as possible to the metal.
I believe I have to use a serializable isolation level transaction in the trigger. Do I go about this the same way I would use such a transaction in a normal SQL query?
It is a non-negotiable requirement that the business meaning of the second incrementing column remain separated from the behind the scenes meaning of the first, PK, incrementing column.
To put things as simply as I can, if I create JobCards '0001', '0002', and '0003', then delete JobCards '0002' and '0003', the next Jobcard I create must have ID '0002', not '0004'.
Just an idea: if you have 2 "identity" columns, then surely they would be 'in sync' - if not exactly the same value, they would differ by a constant value. If so, then why not add the "second identity" as a COMPUTED column that offsets the primary identity? Or is my logic flawed here?
Edit: As per Martin's comment, note that your calculation might need to be N * id + C, where N is the increment and C the offset / delta - excuse my rusty maths.
For example:
ALTER TABLE MyTable ADD OtherIdentity AS Id * 2 + 1;
Edit
Note that for SQL Server 2012 and later, you can now use an independent sequence to create two or more independently incrementing columns in the same table.
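A short sketch of that approach, with hypothetical names (note that, like IDENTITY, a sequence does not reuse the gaps left by deletes, so it does not satisfy the edited requirement mentioned below):
CREATE SEQUENCE dbo.JobCardNumber AS int START WITH 1 INCREMENT BY 1;

CREATE TABLE dbo.JobCard (
    JobCardId  int IDENTITY(1,1) PRIMARY KEY,       -- first auto-incrementing column
    CardNumber int NOT NULL
        CONSTRAINT DF_JobCard_CardNumber
        DEFAULT (NEXT VALUE FOR dbo.JobCardNumber)  -- second, independent one
);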
Note: OP has edited the original requirement to include reclaiming sequences (noting that identity columns in SQL do not reclaim used ID's once deleted).
I would disallow all the deletes from this table altogether. Instead of deleting, I would mark rows as available or inactive. Instead of inserting, I would first search if there are inactive rows, and reuse the one with the smallest ID if they exist. I would insert only if there are no available rows already in the table.
Of course, I would serialize all inserts and deletes with sp_getapplock.
You can use a trigger to disallow all deletes; it is simpler than filling gaps. A sketch of the serialized reuse-or-insert logic is below.
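A rough sketch of that allocation, assuming a hypothetical IsActive flag on the table; sp_getapplock serializes concurrent callers for the duration of the transaction:
BEGIN TRAN;
EXEC sp_getapplock @Resource = 'JobCardAllocation', @LockMode = 'Exclusive';

DECLARE @ReuseId int =
    (SELECT MIN(JobCardId) FROM dbo.JobCard WHERE IsActive = 0);

IF @ReuseId IS NOT NULL
    UPDATE dbo.JobCard
    SET IsActive = 1            -- plus the other columns of the "new" job card
    WHERE JobCardId = @ReuseId;
ELSE
    INSERT INTO dbo.JobCard (IsActive /* , other columns */)
    VALUES (1);

COMMIT TRAN;   -- the applock is released with the transaction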
A solution to this issue from "Inside Microsoft SQL Server 2008: T-SQL Querying" is to create another table with a single row that holds the current max value.
CREATE TABLE dbo.Sequence(
    val int
)
INSERT INTO dbo.Sequence VALUES(0) -- seed the single row that the procedure updates
Then to allocate a range of sufficient size for your insert
CREATE PROC dbo.GetSequence
    @val AS int OUTPUT,
    @n AS int = 1
AS
UPDATE dbo.Sequence
SET @val = val = val + @n;

SET @val = @val - @n + 1;
This will block other concurrent attempts to increment the sequence until the first transaction commits.
For a non blocking solution that doesn't handle multi row inserts see my answer here.
This is probably a terrible idea, but it works in at least a limited use scenario.
Just use a regular identity and reseed on deletes.
create table reseedtest (
    a int identity(1,1) not null,
    name varchar(100)
)
insert reseedtest values('erik'),('john'),('selina')
select * from reseedtest
go
CREATE TRIGGER TR_reseedtest_D ON reseedtest FOR DELETE
AS
BEGIN TRAN
    DECLARE @a int
    SET @a = (SELECT TOP 1 a FROM reseedtest WITH (TABLOCKX, HOLDLOCK))
    --anyone know another way to lock a table besides doing something to it?
    DBCC CHECKIDENT(reseedtest, reseed, 0)
    DBCC CHECKIDENT(reseedtest, reseed)
COMMIT TRAN
GO
delete reseedtest where a >= 2
insert reseedtest values('katarina'),('david')
select * from reseedtest
drop table reseedtest
This won't work if you are deleting from the "middle of the stack" as it were, but it works fine for deletes from the incrementing end.
Reseeding once to 0 then again is just a trick to avoid having to calculate the correct reseed value.
If you never delete from the table, you could create a view with a numbering column that uses ROW_NUMBER(). A sketch is below.
Also, a SQL Server identity can get out of sync with a user-generated one, depending on rollbacks (identity values consumed by a rolled-back insert are not reclaimed).
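A small sketch of that view, with hypothetical names (note that ROW_NUMBER() is not allowed in an indexed view, so the column is computed at query time rather than truly materialized):
CREATE VIEW dbo.JobCardNumbered
AS
SELECT JobCardId,
       ROW_NUMBER() OVER (ORDER BY JobCardId) AS CardNumber
FROM dbo.JobCard;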

How do you add a NOT NULL Column to a large table in SQL Server?

To add a NOT NULL Column to a table with many records, a DEFAULT constraint needs to be applied. This constraint causes the entire ALTER TABLE command to take a long time to run if the table is very large. This is because:
Assumptions:
The DEFAULT constraint modifies existing records. This means that the db needs to increase the size of each record, which causes it to shift records on full data-pages to other data-pages and that takes time.
The DEFAULT update executes as an atomic transaction. This means that the transaction log will need to be grown so that a roll-back can be executed if necessary.
The transaction log keeps track of the entire record. Therefore, even though only a single field is modified, the space needed by the log will be based on the size of the entire record multiplied by the number of existing records. This means that adding a column to a table with small records will be faster than adding a column to a table with large records, even if the total number of records is the same for both tables.
Possible solutions:
Suck it up and wait for the process to complete. Just make sure to set the timeout period to be very long. The problem with this is that it may take hours or days to do depending on the # of records.
Add the column but allow NULL. Afterward, run an UPDATE query to set the DEFAULT value for existing rows. Do not update everything in one statement; update batches of records at a time or you'll end up with the same problem as solution #1. The problem with this approach is that you end up with a column that allows NULL when you know that is an unnecessary option. I believe there are best-practice documents out there saying that you should not have columns that allow NULL unless it's necessary.
Create a new table with the same schema. Add the column to that schema. Transfer the data over from the original table. Drop the original table and rename the new table. I'm not certain how this is any better than #1.
Questions:
Are my assumptions correct?
Are these my only solutions? If so, which one is the best? If not, what else could I do?
I ran into this problem at work as well, and my solution is along the lines of #2.
Here are my steps (I am using SQL Server 2005):
1) Add the column to the table with a default value:
ALTER TABLE MyTable ADD MyColumn varchar(40) DEFAULT('')
2) Add a CHECK constraint requiring the column to be non-null, with the NOCHECK option so it is not validated against existing values:
ALTER TABLE MyTable WITH NOCHECK
ADD CONSTRAINT MyColumn_NOTNULL CHECK (MyColumn IS NOT NULL)
3) Update the values incrementally in the table:
GO
UPDATE TOP(3000) MyTable SET MyColumn = '' WHERE MyColumn IS NULL
GO 1000
The update statement updates at most 3,000 records at a time, which keeps each chunk of work small. I have to use "MyColumn IS NULL" because my table does not have a sequential primary key.
GO 1000 will execute the previous statement 1000 times, which covers 3 million records; if you need more, just increase this number. Once no rows are left to update, the remaining iterations simply affect 0 rows.
Here's what I would try:
Do a full backup of the database.
Add the new column, allowing nulls - don't set a default.
Set SIMPLE recovery, which lets the transaction log space be reused after each checkpoint instead of accumulating for the whole operation.
The SQL is: ALTER DATABASE XXX SET RECOVERY SIMPLE
Run the update in batches as you discussed above, committing after each one.
Reset the new column to no longer allow nulls.
Go back to the normal FULL recovery.
The SQL is: ALTER DATABASE XXX SET RECOVERY FULL
Backup the database again.
The use of the SIMPLE recovery model doesn't stop logging, but it significantly reduces its impact, because the server can reuse the log space once each committed batch has been checkpointed. A consolidated sketch of these steps follows.
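Pulled together, the steps above might look roughly like this (database and object names are hypothetical; take the full backups outside this script):
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;

-- add the column as nullable, with no default, so the ALTER is instant
ALTER TABLE dbo.MyTable ADD MyNewColumn int NULL;

-- backfill in small batches; each iteration commits on its own
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    UPDATE TOP (10000) dbo.MyTable
    SET MyNewColumn = 0
    WHERE MyNewColumn IS NULL;
    SET @rows = @@ROWCOUNT;
END

-- every row now has a value, so the column can stop allowing NULLs
ALTER TABLE dbo.MyTable ALTER COLUMN MyNewColumn int NOT NULL;

ALTER DATABASE MyDatabase SET RECOVERY FULL;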
You could:
Start a transaction.
Grab a write lock on your original table so no one writes to it.
Create a shadow table with the new schema.
Transfer all the data from the original table.
Execute sp_rename to rename the old table out of the way.
Execute sp_rename to rename the new table into its place.
Finally, you commit the transaction.
The advantage of this approach is that your readers will be able to access the table during the long process and that you can perform any kind of schema change in the background.
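A rough sketch of those steps, with hypothetical names (the shared table lock held for the transaction blocks writers while still allowing readers; recreate constraints, defaults and indexes on the new table as part of the swap):
BEGIN TRAN;

-- hold a shared table lock so no one can write to the original while we copy
SELECT TOP (1) 1 FROM dbo.MyTable WITH (TABLOCK, HOLDLOCK);

-- shadow table with the new schema, populated from the original
-- (the ISNULL wrapper makes SELECT INTO create the new column as NOT NULL)
SELECT *, ISNULL(CAST(0 AS int), 0) AS MyNewColumn
INTO dbo.MyTable_New
FROM dbo.MyTable;

EXEC sp_rename 'dbo.MyTable', 'MyTable_Old';
EXEC sp_rename 'dbo.MyTable_New', 'MyTable';

COMMIT TRAN;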
Just to update this with the latest information.
In SQL Server 2012 this can now be carried out as an online operation in the following circumstances
Enterprise Edition only
The default must be a runtime constant
For the second requirement, examples are a literal constant or a function such as GETDATE() that evaluates to the same value for all rows. A default of NEWID() would not qualify and would still end up updating all rows there and then.
For defaults that qualify, SQL Server evaluates them and stores the result as the default value in the column metadata, so this is independent of the default constraint that is created (which can even be dropped if no longer required). This is viewable in sys.system_internals_partition_columns. The value doesn't get written out to the rows until the next time they happen to get updated.
More details about this here: online non-null with values column add in sql server 2012
Admittedly this is an old question. My colleague recently told me that he was able to do it with a single ALTER TABLE statement on a table with 13.6M rows. It finished within a second on SQL Server 2012. I was able to confirm the same on a table with 8M rows. Did something change in a later version of SQL Server?
Alter table mytable add mycolumn char(1) not null default('N');
I think this depends on the SQL flavor you are using, but what if you took option 2, and at the very end altered the column to NOT NULL with the default value?
Would that be fast, since it sees that all the values are already not null?
If you want the column in the same table, you'll just have to do it. Now, option 3 is potentially the best for this because you can still have the database "live" while this operation is going on. If you use option 1, the table is locked while the operation happens and then you're really stuck.
If you don't really care whether the column is in the same table, then I suppose a segmented approach is the next best. Though I really try to avoid that (to the point that I don't do it), because then, as Charles Bretana says, you have to find all the places that update or insert into that table and modify those. Ugh!
I had a similar problem, and went for your option #2.
It takes 20 minutes this way, as opposed to 32 hours the other way!!! Huge difference, thanks for the tip.
I wrote a full blog entry about it, but here's the important sql:
Alter table MyTable
Add MyNewColumn char(10) null default '?';
go
update MyTable set MyNewColumn='?' where MyPrimaryKey between 0 and 1000000
go
update MyTable set MyNewColumn='?' where MyPrimaryKey between 1000000 and 2000000
go
update MyTable set MyNewColumn='?' where MyPrimaryKey between 2000000 and 3000000
go
..etc..
Alter table MyTable
Alter column MyNewColumn char(10) not null;
And the blog entry if you're interested:
http://splinter.com.au/adding-a-column-to-a-massive-sql-server-table
I had a similar problem and I went with modified #3 approach. In my case the database was in SIMPLE recovery mode and the table to which column was supposed to be added was not referenced by any FK constraints.
Instead of creating a new table with the same schema and copying the contents of the original table, I used the SELECT ... INTO syntax.
According to Microsoft (http://technet.microsoft.com/en-us/library/ms188029(v=sql.105).aspx)
The amount of logging for SELECT...INTO depends on the recovery model in effect for the database. Under the simple recovery model or bulk-logged recovery model, bulk operations are minimally logged. With minimal logging, using the SELECT...INTO statement can be more efficient than creating a table and then populating the table with an INSERT statement. For more information, see Operations That Can Be Minimally Logged.
The sequence of steps:
1. Move data from the old table to the new one while adding the new column with its default:
SELECT table.*, CAST('default' AS nvarchar(256)) AS new_column
INTO table_copy
FROM table
2. Drop the old table:
DROP TABLE table
3. Rename the newly created table:
EXEC sp_rename 'table_copy', 'table'
4. Create the necessary constraints and indexes on the new table.
In my case the table had more than 100 million rows and this approach completed faster than approach #2 and log space growth was minimal.
1) Add the column to the table with a default value:
ALTER TABLE MyTable ADD MyColumn int default 0
2) Update the values incrementally in the table (same effect as accepted answer). Adjust the number of records being updated to your environment, to avoid blocking other users/processes.
declare @rowcount int = 1
while (@rowcount > 0)
begin
    UPDATE TOP(10000) MyTable SET MyColumn = 0 WHERE MyColumn IS NULL
    set @rowcount = @@ROWCOUNT
end
3) Alter the column definition to require not null. Run the following at a moment when the table is not in use (or schedule a few minutes of downtime). I have successfully used this for tables with millions of records.
ALTER TABLE MyTable ALTER COLUMN MyColumn int NOT NULL
I would use a CURSOR instead of a single UPDATE. The cursor updates the matching records one by one; it takes time, but it does not lock the table.
If you want to throttle the updates, use WAITFOR DELAY.
Also, I am not sure that a DEFAULT constraint by itself changes existing rows; it is probably the NOT NULL constraint used together with DEFAULT that causes the behaviour the author describes. If it does change them, add the NOT NULL constraint at the end.
So the pseudocode will look like this:
-- MyTable / key_column are placeholder names
-- add the column without the NOT NULL constraint -- we will add that at the end
ALTER TABLE MyTable ADD new_column INT NULL DEFAULT 0

DECLARE fillNullColumn CURSOR LOCAL FAST_FORWARD FOR
    SELECT key_column
    FROM MyTable WITH (NOLOCK)
    WHERE new_column IS NULL

OPEN fillNullColumn

DECLARE @key INT
FETCH NEXT FROM fillNullColumn INTO @key

WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE MyTable WITH (ROWLOCK)
    SET new_column = 0               -- default value
    WHERE key_column = @key

    WAITFOR DELAY '00:00:05'         -- wait 5 seconds; keep in mind this updates only 12 rows per minute
    FETCH NEXT FROM fillNullColumn INTO @key
END

CLOSE fillNullColumn
DEALLOCATE fillNullColumn

ALTER TABLE MyTable ALTER COLUMN new_column INT NOT NULL
There may still be syntax errors, but I hope this helps to solve your problem.
Good luck!
Vertically segment the table. That means you will have two tables, with the same primary key and exactly the same number of records: one will be the table you already have, and the other will hold just the key and the new non-null column (with its default value).
Modify all insert, update, and delete code so that the two tables stay in sync. If you want, you can create a view that joins the two tables into a single logical combination that looks like one table to client SELECT statements. A sketch is below.
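A small sketch of the segmented layout, with hypothetical names; the view hides the split from read-only clients:
CREATE TABLE dbo.MyTable_Extra (
    Id          int NOT NULL PRIMARY KEY,   -- same key values as dbo.MyTable
    MyNewColumn int NOT NULL CONSTRAINT DF_MyTable_Extra_MyNewColumn DEFAULT (0)
);
GO
CREATE VIEW dbo.MyTable_Combined
AS
SELECT t.*, e.MyNewColumn
FROM dbo.MyTable AS t
JOIN dbo.MyTable_Extra AS e
    ON e.Id = t.Id;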
