T-SQL: updating a large table from tempdb causes enormous database growth

I have a custom import tool which bulk-inserts the data into a table in tempdb (421776 rows). After that, the tool inserts unknown rows into the target table and updates existing rows based on a hash key (a combination of 2 columns). The target table has nearly the same row count. The update query looks something like this (the real query updates about 20 more columns):
update tgt set
    tgt.code = temp.code,
    tgt.name = temp.name
from theDB.dbo.targetTable as tgt
inner join [tempDB].[dbo].[targettable] as temp
    on tgt.hash = temp.hash COLLATE SQL_Latin1_General_CP1_CI_AS
I know the nvarchar comparison with a COLLATE is a bit bad, but it is not easy to avoid. Still, the hash column has its own unique index. Locally it works well, but on this particular server tempdb keeps growing to 21 GB. Reindexing and shrinking don't help at all.
Just a side note for others who face tempdb problems. A good read is http://bradmcgehee.com/wp-content/uploads/presentations/Optimizing_tempdb_Performance_chicago.pdf

It looks like you're using tempdb explicitly, with data you've put there yourself. Is there a reason to use tempdb as if it were your own database?
The reason tempdb is growing is that you're explicitly putting data there. 420k rows doesn't sound heavy, but it's best to keep the data within your own user database.
Suggest changing your business logic to move away from [tempDB].[dbo].[targettable] to something in your own user database.
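As a rough sketch (the table and column names here are made up), a persistent staging table in the user database could replace the tempdb table:

-- Hypothetical staging table living in the user database instead of tempdb
CREATE TABLE dbo.targetTable_staging (
    code nvarchar(50),
    name nvarchar(100),
    hash nvarchar(64)
);
-- Bulk insert into this table, run the update/insert against targetTable,
-- then TRUNCATE TABLE dbo.targetTable_staging before the next import.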

You can temporarily change the recovery model from Full or Bulk-Logged down to Simple. Operations are still logged (so they can be rolled back), but under Simple the log is truncated at each checkpoint, which keeps it from growing indefinitely.
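For example, assuming the database is called theDB:

ALTER DATABASE theDB SET RECOVERY SIMPLE;
-- ... run the bulk update/insert ...
ALTER DATABASE theDB SET RECOVERY FULL;  -- restore the original model afterwards

Take a full (or differential) backup after switching back, since changing the recovery model breaks the log backup chain.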

Is this a Cartesian product when there's no explicit join?

Related

What effects does changing the column order of a table in a database have on server memory?

I just started my new job, and after looking at the DB I was shocked. It's a huge mess.
The first thing I wanted to do is get some consistency in the order of table columns. We publish new database versions via a .dacpac. My co-worker told me that changing the order of columns would force MSSQL to create a temporary table which stores all the data. MSSQL then creates a new table and inserts all the data into that table.
So let's say my server only has 2GB of RAM and 500MB of storage left on the hard drive, and the whole database weighs 20GB. Is it possible that changing the order of columns will cause (memory-related) trouble? Is my co-worker's statement correct?
I couldn't find any good source for my question.
Thanks!
You should NOT go table by table.
You should leave your tables as they are; if you don't like the column order of some table, just create a view that presents the columns in the order you want (see the sketch below).
Not only will changing the order of columns cause your tables to be recreated, all the indexes will be recreated too, and you'll run into problems with FK constraints.
And after all that, you'll gain absolutely nothing and only do damage. You'll waste server resources, make your tables temporarily inaccessible, and the columns will not be stored as you defined them anyway; internally they are stored in "var-fix" format (divided into fixed-length and variable-length).
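A minimal sketch of the view approach (table and column names are hypothetical):

-- Presents the columns of dbo.MyTable in the desired order without touching the table
CREATE VIEW dbo.vMyTableOrdered AS
SELECT id, name, created_at, notes
FROM dbo.MyTable;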

SQL inserts slow - "pre-populate with blank rows?"

I have an .exe that compares a vbTab-delimited .txt file with an SQL table.
Updates to the table's existing records go very fast. Inserts into the table for new records are quite slow.
As I'm new to SQL, I'm wondering if my idea is crazy talk:
I thought that maybe a solution would be to "pre-populate" the database with 10,000 empty rows (minus the primary key) and somehow have this speed up the process?
Any suggestions would be greatly appreciated.
There is no straightforward answer to your question, as many things are unknown to us (DB configuration, hardware, existing data, etc.). But you can try the things below.
Try using the DB's import functionality.
Instead of fetching records from the DB with an iterator, comparing them with a record from the file, and then doing an insert or modification, you can directly import those records into the DB using an upsert (update if present, insert if not) strategy; see the MERGE sketch after this list. Believe me, this works a lot faster than the row-by-row approach.
If you have indexes on that table, drop them before the import or insert, do the operation, and then re-apply the indexes afterwards. Indexes slow down the performance of inserts.
If the import strategy is not good for you (e.g. if you process those records first, before insertion), then consider a stored procedure that performs the modifications and inserts the new rows, again after dropping the indexes.
During this activity, check the DB configuration as well. Use proper tuning for buffers, paging, and locking.
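If you are on SQL Server 2008 or later, a MERGE statement gives you the upsert in one pass. This is only a sketch, with all table and column names made up:

-- Upsert: update matching rows, insert the rest
MERGE dbo.TargetTable AS tgt
USING dbo.StagingTable AS src
    ON tgt.KeyColumn = src.KeyColumn
WHEN MATCHED THEN
    UPDATE SET tgt.Value = src.Value
WHEN NOT MATCHED BY TARGET THEN
    INSERT (KeyColumn, Value) VALUES (src.KeyColumn, src.Value);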
Hope this helps :)
To answer your question we may need more information.
How many rows does your table have?
I guess it may be a lack of indexes.

Is deleting all records in a table a bad practice in SQL Server?

I am moving a system from a VB/Access app to SQL Server. One common pattern in the Access database is using tables to hold data while it is being calculated, and then running a report off that data.
eg.
delete from treporttable
insert into treporttable (.... this thing and that thing)
update treporttable set x = x * price where (...etc)
and then the report runs from treporttable.
I have heard that SQL Server does not like it when all records are deleted from a table, as it creates huge logs. I tried temporary SQL tables, but they don't persist long enough for the report, which runs in a different process, to read them.
There are a number of places where this is done to different report tables in the application. The reports can be run many times a day and have a large number of records created in the report tables.
Can anyone tell me if there is a best practise for this or if my information about the logs is incorrect and this code will be fine in SQL server.
If you do not need to log the deletion activity you can use the truncate table command.
From Books Online:
TRUNCATE TABLE is functionally identical to a DELETE statement with no WHERE clause: both remove all rows in the table. But TRUNCATE TABLE is faster and uses fewer system and transaction log resources than DELETE.
http://msdn.microsoft.com/en-us/library/aa260621(SQL.80).aspx
delete from sometable
is going to allow you to roll back the change, so if your table is very large, this can use a lot of transaction log space and time.
However, if you have no fear of failure, then:
truncate table sometable
will perform nearly instantly, with minimal log usage. (But see the next answer: TRUNCATE can in fact be rolled back too.)
To Nathan Feger:
You can rollback from TRUNCATE. See for yourself:
CREATE TABLE dbo.Test(i INT);
GO
INSERT dbo.Test(i) SELECT 1;
GO
BEGIN TRAN
TRUNCATE TABLE dbo.Test;
SELECT i FROM dbo.Test;
ROLLBACK
GO
SELECT i FROM dbo.Test;
GO
i
(0 row(s) affected)
i
1
(1 row(s) affected)
You could also DROP the table, and recreate it...if there are no relationships.
Note that both [DROP TABLE] and [TRUNCATE] are transactionally safe in SQL Server; each can be rolled back inside an explicit transaction, as demonstrated above.
So it depends on your schema which direction you want to go!!
Also, use SQL Profiler to analyze your execution times. Test it out and see which is best!!
The answer depends on the recovery model of your database. If you are in full recovery mode, then you have transaction logs that could become very large when you delete a lot of data. However, if you're backing up transaction logs on a regular basis to free the space, this might not be a concern for you.
Generally speaking, if the transaction logging doesn't matter to you at all, you should TRUNCATE the table instead. Be mindful, though, of any key seeds, because TRUNCATE will reseed the table.
EDIT: Note that even if the recovery model is set to Simple, your transaction log will still grow during a mass delete; it is simply truncated afterward (without releasing the file space). The point is that DELETE logs every deleted row, even if only temporarily.
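If the identity seed matters, you can reseed explicitly after a TRUNCATE; the table name and seed value here are just placeholders:

-- Continue identity values from 12345 instead of the original seed
DBCC CHECKIDENT ('dbo.treporttable', RESEED, 12345);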
Consider using temporary tables. Their names start with # and they are dropped automatically when the session that created them ends. Example:
create table #myreport (
    id int identity(1,1),
    col1 int,
    ...
)
Temporary tables are made to be thrown away, and that happens very efficiently.
Another option is using TRUNCATE TABLE instead of DELETE. The truncate will not grow the log file.
I think your example has a possible concurrency issue. What if multiple processes are using the table at the same time? Adding a JOB_ID column or something like that would allow you to clear the relevant entries in this table without clobbering the data being used by another process.
Actually, tables such as treporttable do not need to be recoverable to a point in time. As such, they can live in a separate database with the Simple recovery model, which eases the burden of logging.
There are a number of ways to handle this. First, you can move the creation of the data into the running of the report itself. I feel this is the best way to handle it: you can use temp tables to stage your data, and no one will have concurrency issues if multiple people run the report at the same time. Depending on how many reports we are talking about, it could take some time to do this, so you may need another short-term solution as well.
Second, you could move all your reporting tables to a different DB that is set to Simple recovery, and truncate them before running the queries that populate them. This is closest to your current process, but multiple users trying to run the same report could still be an issue.
Third, you could set up a job to populate the tables (still in a separate DB set to Simple recovery) once a day, truncating at that time. Then anyone running a report that day will see the same data and there will be no concurrency issues; however, the data will not be up to the minute. You could also set up a reporting data warehouse, but that is probably overkill in your case.

How to reduce size of SQL Server table that grew from a datatype change

I have a table on SQL Server 2005 that was about 4gb in size.
(about 17 million records)
I changed one of the fields from datatype char(30) to char(60) (there are 25 fields in total, most of which are char(10), so the char space adds up to about 300 characters).
This caused the table to double in size (over 9gb)
I then changed the char(60) to varchar(60) and then ran a function to cut extra whitespace out of the data (so as to reduce the average length of the data in the field to about 15)
This did not reduce the table size. Shrinking the database did not help either.
Short of actually recreating the table structure and copying the data over (that's 17 million records!) is there a less drastic way of getting the size back down again?
You have not cleaned or compacted any data, even with a "shrink database".
DBCC CLEANTABLE
Reclaims space from dropped variable-length columns in tables or indexed views.
However, if there is a clustered index, a simple index rebuild should also do it:
ALTER INDEX ALL ON dbo.Mytable REBUILD
A worked example is available from Tony Rogerson.
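For example (database and table names are placeholders):

-- Reclaim space left behind by dropped variable-length columns
DBCC CLEANTABLE ('MyDatabase', 'dbo.Mytable');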
Well, it's clear you're not getting any space back! :-)
When you changed your text fields to CHAR(60), they were all padded to capacity with spaces, so ALL your fields are now really 60 characters long.
Changing that back to VARCHAR(60) won't help by itself - the values are still all 60 chars long....
What you really need to do is run a TRIM function over all those fields to cut them back to their trimmed length, and then shrink the database.
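Something along these lines, with the column names made up:

-- Cut the trailing padding out of each affected column
UPDATE dbo.YourTable
SET field1 = RTRIM(field1),
    field2 = RTRIM(field2);  -- repeat for every converted column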
After you've done that, you need to REBUILD your clustered index in order to reclaim some of that wasted space. The clustered index is really where your data lives - you can rebuild it like this:
ALTER INDEX IndexName ON YourTable REBUILD
By default, your primary key is your clustered index (unless you've specified otherwise).
Marc
I know I'm not answering your question as you are asking, but have you considered archiving some of the data to a history table, and work with fewer rows?
Most of the time you might think at first glance that you need all that data all the time, but when you actually sit down and examine it, there are cases where that's not true. At least, I've experienced that situation before.
I had a similar problem here: SQL Server, Converting NTEXT to NVARCHAR(MAX), which was related to changing ntext to nvarchar(max).
I had to do an UPDATE MyTable SET MyValue = MyValue in order to get it to resize everything nicely.
This obviously takes quite a long time with a lot of records. There were a number of suggestions on how to do it better. The key one was to add a temporary flag column indicating whether a row had been done, and then update a few thousand rows at a time in a loop until it was all done (a sketch follows). This meant I had some control over how much it was doing at once.
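A rough sketch of that loop; the table, flag column, and batch size are all hypothetical:

-- Rewrite rows in batches of 5000 until none are left unconverted
WHILE 1 = 1
BEGIN
    UPDATE TOP (5000) dbo.MyTable
    SET MyValue = MyValue,
        Converted = 1          -- temporary flag column added for the migration
    WHERE Converted = 0;

    IF @@ROWCOUNT = 0 BREAK;   -- stop once every row has been touched
END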
On another note though, if you really want to shrink the database as much as possible, it can help if you turn the recovery model down to simple, shrink the transaction logs, reorganise all the data in the pages, then set it back to full recovery model. Be careful though, shrinking of databases is generally not advisable, and if you reduce the recovery model of a live database you are asking for something to go wrong.
Alternatively, you could do a full table rebuild to ensure there's no extra data hanging around anywhere:
CREATE TABLE tmp_table(<column definitions>);
GO
INSERT INTO tmp_table(<columns>) SELECT <columns> FROM <table>;
GO
DROP TABLE <table>;
GO
EXEC sp_rename N'tmp_table', N'<table>';
GO
Of course, things get more complicated with identity, indexes, etc etc...

Inserts taking longer on SQL Server 2005 table

I have a table with about 45 columns, and as more data goes in, the longer the inserts take. I have increased the size of the data and log files and reduced the fill factor on all the indexes on that table, but insert times are still getting slower and slower. Any ideas would be GREATLY appreciated.
For inserts, you want to DECREASE the fillfactor on the indexes on the table in order to reduce page splitting.
It is somewhat expected that it will take longer to insert as more data goes in, because your indexes just plain get bigger.
Try putting in data in batches instead of row-by-row. SQL Server is more efficient that way.
Make sure you don't have too many indexes on your tables.
Consider using SQL Server 2005's INCLUDE clause on your indexes if you are adding columns to your indexes only so that they cover your queries.
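For example (index and column names are hypothetical):

-- Covers queries on KeyColumn that also read ColA and ColB,
-- without widening the index key itself
CREATE NONCLUSTERED INDEX IX_MyTable_KeyColumn
ON dbo.MyTable (KeyColumn)
INCLUDE (ColA, ColB);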
How big is the table?
What is the context? Is this a batch of many new records?
Can you post the schema including index definition?
Can you SET STATISTICS IO ON, SET STATISTICS TIME ON, and post the display for one iteration?
Is there anything pathological about the data, or the context? Is this on a server or a laptop (testing)?
Why don't you drop the indexes before inserting and recreate them on the table afterwards, so you don't need to run update statistics yourself?
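A sketch of that approach, with index and table names made up:

DROP INDEX IX_MyTable_Col ON dbo.MyTable;
-- ... perform the bulk inserts ...
CREATE NONCLUSTERED INDEX IX_MyTable_Col ON dbo.MyTable (Col);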
You could also ensure that the indexes on that table are defragmented.
