Best way to rename a SQL table? - sql-server

I want to replace a SalesResults table with a new version containing latest calculated results.
I guess the following would only take a few milliseconds for SQL Server to do but is it safe for any users accessing the SalesResults table at that time?
If not, should I enclose the following in BEGIN TRANSACTION ... COMMIT in order to make it safe?
DROP TABLE dbo.SalesResults;
EXEC sp_rename 'SalesResultsNew', 'SalesResults'

I would do something like this during off hours or a maintenance period, just to be safe:
Begin Transaction
Drop Table dbo.SalesResults
Exec sp_rename 'SalesResultsNew', 'SalesResults'
Commit Transaction
This assumes that the SalesResultsNew table already exists.
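If you want the whole swap to stand or fall together, a variant of the same idea with SET XACT_ABORT ON makes most runtime errors roll back the open transaction instead of leaving it half done (a minimal sketch, assuming both tables are in the dbo schema):
Set Xact_Abort On;  -- most runtime errors now roll back the open transaction

Begin Transaction
    Drop Table dbo.SalesResults;
    Exec sp_rename 'dbo.SalesResultsNew', 'SalesResults';
Commit Transaction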

If you're doing this on a consistent basis (and it sounds like you are), I'd use a synonym instead. So your actual tables would be called something like dbo.SalesResults_20170108 and you'd do something like:
create synonym dbo.SalesResults for dbo.SalesResults_20170108;
Each day, you'd move the synonym to point to the new SalesResults table when it's ready.
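Since a synonym can't be altered, the daily swap is just a drop-and-recreate, which is a quick metadata-only change; a minimal sketch, assuming the next day's table is dbo.SalesResults_20170109:
begin transaction
    drop synonym dbo.SalesResults;
    create synonym dbo.SalesResults for dbo.SalesResults_20170109;
commit transaction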

Related

Does T-SQL have a mechanism for reading rows as of beginning of transaction?

I've been unable to find docs that address this, and I've tried both the serializable and snapshot isolation levels to no benefit.
I am interested in querying a table during a transaction where I've modified the table in the transaction, but my query is not aware of those data modifications.
I'm sure I could carefully sequence things and potentially use temp tables to accomplish my functionality, but if there's a query hint or isolation level that simplifies my code I'd like to use it!
In a regular table, you don't have access to the previous versions of the rows that you have modified in the current transaction.
If the table is indeed a regular table, you can add the output clause to your update statement to capture the previous version:
declare @t table(...);
update ...
set ...
output deleted.column1, ... into @t;
If the table is a temporal table, though, you can access the previous version (system-versioned period columns are stored in UTC, hence sysutcdatetime() rather than getdate()):
declare @before_update datetime2 = sysutcdatetime();
update ... ;
select ... from table ... for system_time as of @before_update;
Note this may not be quite what you want, given the concurrent nature of SQL Server. It might return data that is slightly too old if another transaction sneaks in between your sysutcdatetime() call and the update.
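For completeness, here is a small self-contained version of the OUTPUT approach; the dbo.Sales table and its columns are invented for illustration:
declare @t table (Id int, OldAmount money);

update dbo.Sales
set Amount = Amount * 1.1
output deleted.Id, deleted.Amount into @t (Id, OldAmount);

-- @t now holds the pre-update values, even inside the same transaction
select Id, OldAmount from @t;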

How to create a rollback copy of a table in case I do a wrong insert or update

Hi, I'd like to know: is there any way to create a rollback copy of a table in SQL Server? In case I run a wrong insert or update statement, I'd like to recover my data as it was before those statements.
SELECT *
INTO myBackupTableName
FROM Yourtable
Creates a backup of the table.
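If a bad statement does slip through, you can copy the old values back from that table; a rough sketch, assuming a key column named Id and a damaged column named SomeColumn (both names are made up here):
-- put the original values back for the rows you changed
UPDATE t
SET t.SomeColumn = b.SomeColumn
FROM Yourtable t
JOIN myBackupTableName b ON b.Id = t.Id;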
Assuming that we are discussing a production environment and workload: The more I think about this question/requirement the more strongly I believe rollback is the best answer.
Summarizing suggestions already made:
Using SELECT INTO to create a backup table will give you a copy of the table, but if you revert to it you will potentially lose data written by other users or batches in the meantime.
Using the OUTPUT ... INTO clause on an insert or update statement will get you the changed rows, but not the original values unless you also capture the deleted pseudo-table.
Another answer would be to create an audit table and use a trigger to populate it. Your audit table would need to include enough details regarding the batch to identify it for rollback. This could end up being quite a rabbit hole. Once you have the trigger on the base table and the audit table you will then need to create the code to use this audit table to revert your batch. The trigger could become a performance issue and if the database has enough changes from enough other users then you still would not be able to revert your batch without possibly losing other users' work.
You can wrap your update AND your validation code inside the same proc and if the validation fails only your changes are rolled back.
https://learn.microsoft.com/en-us/sql/t-sql/language-elements/begin-transaction-transact-sql
https://learn.microsoft.com/en-us/sql/t-sql/language-elements/rollback-transaction-transact-sql
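A minimal sketch of that wrap-it-in-one-proc pattern (the table, the update, and the validation rule are all invented for illustration):
CREATE PROCEDURE dbo.UpdatePricesSafely
AS
BEGIN
    SET XACT_ABORT ON;
    BEGIN TRANSACTION;

    UPDATE dbo.Products
    SET Price = Price * 1.05;

    -- validation: roll back only this batch if anything looks wrong
    IF EXISTS (SELECT 1 FROM dbo.Products WHERE Price IS NULL OR Price < 0)
    BEGIN
        ROLLBACK TRANSACTION;
        RETURN;
    END;

    COMMIT TRANSACTION;
END;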

Drop or not drop temporary tables in stored procedures

I have seen this question quite a few times, but I couldn't find an answer that satisfied me. Basically what people and books say is "Although temporary tables are deleted when they go out of scope, you should explicitly delete them when they are no longer needed to reduce resource requirements on the server".
It is quite clear to me that when you are working in Management Studio and creating temp tables, then until you close your window or disconnect you will use some resources for that table, so it is logical that it is better to drop them.
But when you work with a procedure, if you would like to clean up tables you will most probably do that at the very end of it (I am not talking about the situation where you drop the table as soon as you no longer need it in the procedure). So the workflow is something like this:
When you drop in SP:
Start of SP execution
Doing some stuff
Drop tables
End of execution
And, as far as I understand, this is how it works when you do not drop:
Start of SP execution
Doing some stuff
End of execution
Drop tables
What's the difference here? I can only imagine that some resources are needed to identify the temporary tables. Any other thoughts?
UPDATE:
I ran simple test with 2 SP:
create procedure test as
begin
    create table #temp (a int);
    insert into #temp values (1);
    drop table #temp;
end
and another one without the drop statement. I enabled user statistics and ran the tests:
declare #i int = 0;
while #i < 10000
begin
exec test;
SET #i= #i + 1;
end
This is what I got (trials 1-3 drop the table in the SP, trials 4-6 do not):
As the picture shows, all stats are the same or slightly lower when I do not drop the temporary table.
UPDATE2:
I ran this test a second time, now with 100k calls, and also added SET NOCOUNT ON. These are the results:
The second run confirmed that if you do not drop the table in the SP, you actually save some user time, as the cleanup is done by an internal process outside of the user time.
You can read more about it in Paul White's article: Temporary Tables in Stored Procedures
CREATE and DROP, Don’t
I will talk about this in much more detail in my next post, but the
key point is that CREATE TABLE and DROP TABLE do not create and drop
temporary tables in a stored procedure, if the temporary object can be
cached. The temporary object is renamed to an internal form when DROP
TABLE is executed, and renamed back to the same user-visible name when
CREATE TABLE is encountered on the next execution. In addition, any
statistics that were auto-created on the temporary table are also
cached. This means that statistics from a previous execution remain
when the procedure is next called.
Technically, a locally scoped temp table (one with a single # before the name) automatically drops out of scope when the procedure that created it finishes or when your SPID closes. There are some very odd cases where you get a temp table definition cached somewhere and then no real way to remove it. Usually that happens when you have a nested stored procedure call that contains a temp table by the same name.
It's good habit to get into dropping your tables when you're done with them but unless something unexpected happens, they should be de-scoped anyway once the proc finishes.
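If you do keep the explicit cleanup, it is worth guarding it so a re-run in the same scope doesn't error out; on SQL Server 2016+ DROP TABLE IF EXISTS does this in one line, and older versions need the OBJECT_ID check:
-- SQL Server 2016 and later
drop table if exists #temp;

-- earlier versions
if object_id('tempdb..#temp') is not null
    drop table #temp;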

Bulk copy of data from one column to another in SQL Server

I want to copy the value of one column to another column in SQL Server. This operation needs to be carried out across the whole DB which has 200M rows. My query syntax is:
UPDATE [Net].[dbo].[LINK]
SET [LINK_ERRORS] = [OLD_LINK_ERRORS]
However, very soon I exhaust the transaction log and the query aborts. What's the best way to initiate this in batches?
Thanks,
Updating 200M rows in a single statement is not a good idea.
You could either select all of the data into a new table, populating LINK_ERRORS in the SELECT (note that LINK already has a LINK_ERRORS column, so list the remaining columns explicitly rather than using *, otherwise SELECT INTO fails on the duplicate column name),
select <all columns except LINK_ERRORS>, OLD_LINK_ERRORS as LINK_ERRORS into LINK_tmp from LINK
GO
exec sp_rename 'LINK', 'LINK_bkp'
GO
exec sp_rename 'LINK_tmp', 'LINK'
GO
drop table LINK_bkp
or, if the next thing you're going to do is null out the original OLD_LINK_ERRORS column anyway, you could drop the old LINK_ERRORS column and simply rename instead of copying anything:
ALTER TABLE LINK DROP COLUMN LINK_ERRORS
GO
sp_rename 'LINK.OLD_LINK_ERRORS', 'LINK_ERRORS', 'COLUMN'
GO
ALTER TABLE LINK ADD OLD_LINK_ERRORS <data type>
GO
Multiple batched updates might work:
update dbo.LINK
set LINK_ERRORS=OLD_LINK_ERRORS
where ID between 1 and 1000000
update dbo.LINK
set LINK_ERRORS=OLD_LINK_ERRORS
where ID between 1000001 and 2000000
etc...
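Rather than hard-coding every range, you can drive the same idea with a loop; a sketch that assumes the same ID column as the ranges above, with each batch committing as its own transaction so the log stays small (helped further by the SIMPLE recovery model or frequent log backups):
declare @batchsize int = 1000000;
declare @id bigint = 0;
declare @maxid bigint = (select max(ID) from [Net].[dbo].[LINK]);

while @id < @maxid
begin
    update [Net].[dbo].[LINK]
    set [LINK_ERRORS] = [OLD_LINK_ERRORS]
    where ID > @id and ID <= @id + @batchsize;

    set @id = @id + @batchsize;
end;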
I would consider doing this in SSIS where you can easily control the batch (transaction) size and take advantage of bulk operations SSIS provides. Of course, this may not work if you need a programmatic solution. This would be a very trivial SSIS operation.

How is a T-SQL transaction not thread-safe?

The following (sanitized) code sometimes produces these errors:
Cannot drop the table 'database.dbo.Table', because it does not exist or you do not have permission.
There is already an object named 'Table' in the database.
begin transaction
if exists (select 1 from database.Sys.Tables where name ='Table')
begin drop table database.dbo.Table end
Select top 3000 *
into database.dbo.Table
from OtherTable
commit
select * from database.dbo.Table
The code can be run multiple times simultaneously. Anyone know why it breaks?
Can I ask why you're doing this in the first place? You should really consider using temporary tables or coming up with another solution.
I'm not positive that DDL statements behave the same way in transactions as DML statements do, and I have seen a blog post describing weird behavior when creating stored procedures inside a transaction.
Aside from that, you might want to verify your transaction isolation level and set it to SERIALIZABLE.
Edit
Based on a quick test: I ran the same SQL in two different connections, and when I created the table but didn't commit the transaction, the second transaction blocked. So it looks like this should work. I would still caution against this type of design.
In what part of the code are you preventing multiple accesses to this resource?
begin transaction
if exists (select 1 from database.Sys.Tables where name ='Table')
begin drop table database.dbo.Table end
Select top 3000 *
into database.dbo.Table
from OtherTable
commit
Begin transaction isn't doing it. It's only setting up for a commit/rollback scenario on any rows added to tables.
The (if exists, drop) is a race condition, along with the re-creation of the table with (select..into). Multiple people dropping into that code all at once will most certainly cause all kinds of errors. Some creating tables that others have just destroyed, others dropping tables that don't exist anymore, and others dropping tables that some are busy inserting into. UGH!
Consider the temp table suggestions of others, or using an application lock to block others from entering this code at all if the critical resource is busy. Transactions on drop/create are not what you want.
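One way to do that blocking is an application lock via sp_getapplock, which serializes entry into the critical section regardless of what the DDL itself locks; a rough sketch (the lock name is arbitrary):
begin transaction

exec sp_getapplock @Resource = 'rebuild_dbo_Table',
                   @LockMode = 'Exclusive',
                   @LockOwner = 'Transaction',
                   @LockTimeout = 30000;  -- wait up to 30 seconds for the lock

if exists (select 1 from database.Sys.Tables where name = 'Table')
    drop table database.dbo.Table;

select top 3000 *
into database.dbo.Table
from OtherTable;

commit  -- the applock is released with the transaction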
If you are just using this table during this process, I would suggest using a temp table or, depending on how much data, a ram table. I use ram tables frequently to avoid any transaction costs and save on disk activity.
