I am working on archiving data in my database because it has grown very heavy and is slowing down the server.
I have a script running automatically every day to move that day's data to a file.
All my existing SELECTs must change now: instead of always reading from the regular table, they could read from either the archived file or the regular table. I don't want a variable holding the table name, because then all my existing stored procedures would have to be rewritten as dynamic SQL.
So I was going to have a temp table that gets filled either with my archived data or with my current data from the current table. And then I would select from the temp table.
The problem is that when the data is current, I don't want to have to select from that table just to put it into a temp table. It's a heavy table and I can only select from it with WHERE clauses. Since my stored procedure uses the table multiple times with different WHERE clauses, I would have to dump the whole table into my temp table, and that is affecting the wait time for the customer.
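Roughly, the pattern I have in mind looks like this (table and column names here are made up for illustration):

DECLARE @UseArchive BIT = 0, @Date DATE = '20140101';

CREATE TABLE #Orders (OrderId INT, OrderDate DATE, Amount MONEY);

IF @UseArchive = 1
    INSERT INTO #Orders (OrderId, OrderDate, Amount)
    SELECT OrderId, OrderDate, Amount
    FROM dbo.OrdersArchive      -- data loaded back from the archive file
    WHERE OrderDate = @Date;
ELSE
    INSERT INTO #Orders (OrderId, OrderDate, Amount)
    SELECT OrderId, OrderDate, Amount
    FROM dbo.Orders             -- the heavy live table: this is the copy I want to avoid
    WHERE OrderDate = @Date;

-- every later statement in the proc then reads the temp table
SELECT OrderId, Amount FROM #Orders;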
I thought maybe to have the temp table just point to the real table instead of selecting from it.
Is this possible in SQL Server 2014?
If not, any ideas?
Related
I upload an Excel file using BCP (I truncate the current table in the DB every day and BCP in from the Excel file to repopulate the table). It is important for me to keep a log of all the changes made to the rows, whether row additions or changes to columns of existing rows.
I have read a few articles online saying that we can create a log table and a trigger (I have no idea how to do that): a log table with columns like
Date | Field | Old Value | New Value.
Firstly, how do I do this?
Secondly, what's a smarter way to avoid logging the truncation of the table and log only the actual changes? I'm thinking of creating a temp table (tbl_Excefile_Temp) into which I will import the file, and then UPDATE the current table (tbl_Excefile) from tbl_Excefile_Temp. This way all the changes made to the current table will get logged automatically in the log table.
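Roughly what I'm imagining for the staging step (Id and SomeColumn are placeholder names for my real key and data columns):

-- import the file here instead of into the live table
TRUNCATE TABLE dbo.tbl_Excefile_Temp;
-- (BCP into dbo.tbl_Excefile_Temp)

-- then apply only the real differences to the live table,
-- so a trigger on tbl_Excefile would see just the actual changes
UPDATE cur
SET    cur.SomeColumn = stg.SomeColumn
FROM   dbo.tbl_Excefile      AS cur
JOIN   dbo.tbl_Excefile_Temp AS stg ON stg.Id = cur.Id
WHERE  ISNULL(cur.SomeColumn, N'') <> ISNULL(stg.SomeColumn, N'');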
I know it's a big use case; could you please guide me?
If you are using SQL Server 2016 or higher, I would advise you to look into temporal tables. If you stop truncating and use a MERGE statement instead, you get a very easy way of keeping a log: whenever you make a change, SQL Server writes the old values away to a history table and records the datetimes between which the old row was valid.
With temporal tables you can query your table as it was at a specific datetime. In regular use there is no difference from a non-temporal table.
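A minimal sketch, assuming a single key column Id (the other names are placeholders too):

CREATE TABLE dbo.tbl_Excefile
(
    Id         INT           NOT NULL PRIMARY KEY,
    SomeColumn NVARCHAR(100) NULL,
    ValidFrom  DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo    DATETIME2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.tbl_Excefile_History));

-- the table as it looked at a given moment; old versions come from the history table
SELECT * FROM dbo.tbl_Excefile
FOR SYSTEM_TIME AS OF '2019-06-01T00:00:00';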
I'm working on updating a legacy stored procedure (which calls several other child stored procedures). Within a transaction, it manipulates data in about a dozen tables and performs lots of calculations in the process, sometimes triggering lock escalation up to a table lock. This process can take 20 minutes or more to complete in some cases. Obviously, locking tables for that long is a big no-no. So I'm working on a two-stage plan: reduce the blocking caused by this sproc in phase 1, then completely rewrite it in phase 2 to be more efficient and not take an inordinate amount of time.
In order to reduce the blocking, wherever there is manipulation of the database tables, I plan to move that manipulation into a temporary table. By doing all of the work in temporary tables and then updating the real tables with the final results at the very end of the process, I should be able to significantly reduce the time spent blocking other users. (That's the "quick fix" for phase 1.)
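The shape of the quick fix, with illustrative names only (the real tables and calculations are far more involved):

-- stage the heavy calculations outside the transaction...
SELECT OrderId, Amount * 1.1 AS NewAmount
INTO   #Work
FROM   dbo.Orders
WHERE  OrderDate >= '20230101';

-- ...(more long-running work against #Work here)...

-- ...then hold locks only for the short, final write
BEGIN TRANSACTION;

UPDATE o
SET    o.Amount = w.NewAmount
FROM   dbo.Orders AS o
JOIN   #Work      AS w ON w.OrderId = o.OrderId;

COMMIT TRANSACTION;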
Here's my issue: some of these temp tables might have 100,000 rows or more in them while I use them for various calculations, so I would like to create indexes on the temp tables to keep performance up. And since these are temp tables that are created within a stored procedure, they need to have unique names to avoid errors if multiple users execute the sproc at the same time. I know that I can declare the temp tables manually using CREATE TABLE statements, and if I do that I can specify an index without a name and let SQL Server create the name for me. What I'm hoping to do is use SELECT * INTO to generate the temp table and find another way to get SQL Server to auto-generate the index names.

I'm sure you're asking "Why?" My company has several changes in store for the system that I'm working with. If I can use the SELECT INTO method, then if a column gets added or resized or whatever, the developers won't need to know that they have to go back into these stored procedures and change their temp table definitions to match. Using SELECT INTO will automatically keep the temp tables matching the layout of the "real" tables.
So, does anyone know of a way to get SQL Server to auto-generate the name for an index on a temp table (aside from doing it as part of the CREATE TABLE syntax)?
Thank you!
And since these are temp tables that are created within a stored procedure, they need to have unique names to avoid errors if multiple users execute the sproc at the same time.
No, they don't. Each session gets its own copy of the temp tables, and they are automatically cleaned up.
And indexes don't have global name scope, so each temp table can have the same index names, e.g.:
create procedure TempTest
as
begin
    -- SELECT INTO creates #t with the same layout as sys.objects
    select * into #t from sys.objects

    -- index names are scoped to the table, so every session can use "foo"
    create index foo on #t(name)

    -- pause so that concurrent executions overlap
    waitfor delay '00:00:10'

    select * from #t
end
And you can run
exec temptest
go 10
from multiple sessions.
I have to get data from many tables and combine them into a single one.
The final table will have about 120 million rows.
I'm planning to insert the rows in the exact order needed by the big table's indexes.
My question is, in terms of performance:
Is it better to create the indexes on the new table from the start, or to do the inserts first and create the indexes at the end of the import?
Also, when building the indexes at the end, would it make a difference if the rows were already sorted according to the index specifications?
I can't test both cases and get an objective comparison, since the database is on the main server, which is used by many other databases and applications whose load can be heavy or light at different moments in time. I can't restore the database to my local server either, since I don't have full access to the main server yet.
I suggest that you copy the data in first and then create your indexes. If you insert records into a table that already has indexes, SQL Server has to update the indexes for every insert; but when you create the indexes after inserting all the records, SQL Server doesn't need to maintain them during the load and builds each index just once at the end.
You can use SSIS to copy the data from the source tables to the destination. SSIS uses bulk insert and has good performance. Also, if you have any triggers on the destination database, I suggest disabling them before you start the conversion.
As for the sort order: when you create a clustered index on your table, the rows are stored in the order defined by that index, regardless of the order they were inserted in.
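For example, a sketch of the "load first, index later" approach (table and column names are hypothetical):

-- TABLOCK allows minimally logged inserts under the simple or bulk-logged recovery model
INSERT INTO dbo.BigTable WITH (TABLOCK) (Col1, Col2)
SELECT Col1, Col2
FROM   dbo.SourceTable;

-- build the clustered index once, after the load, so the sort happens a single time
CREATE CLUSTERED INDEX IX_BigTable_Col1
    ON dbo.BigTable (Col1)
    WITH (SORT_IN_TEMPDB = ON);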
I am not sure if this is possible. I have an original data set that is approximately 1.5MM records. I want to do a number of things to this dataset in preparation for using it in a report with parameters. I am using SSRS and SQL Server 2008 R2.
What I was thinking of doing is creating a temp table #XYZ that would have a subset of the original 1.5MM records and would have the additional fields I need for reporting.
I can do all of that in a stored procedure. Can I use that temp table without copying it to a permanent table in the DB?
Just so you understand, two people may want to query the data at approximately the same time and I do not want to have conflicts with dropping or updating tables.
A temporary table is unique to a connection/session and gets dropped when the proc ends. If you run the same proc from two different windows in SSMS, each connection gets its own temporary table, so you won't have a problem... unless you use a global temporary table with two pound signs, ##XYZ.
I have two identical databases, let's call them A and B. What I want to do is have two copy options:
1- Copy the whole database and overwrite everything from A to B with TSQL
2- With TSQL, loop over each table row in A and check its last modified date field; if it is greater than the last modified date of the matching row in B, copy over and overwrite the whole row (see the sketch below).
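Something like this is what I imagine for option 2, assuming both databases are on the same server and every table has a key column Id plus a LastModified column (names are placeholders):

MERGE B.dbo.MyTable AS target
USING A.dbo.MyTable AS source
      ON target.Id = source.Id
WHEN MATCHED AND source.LastModified > target.LastModified THEN
    UPDATE SET target.Col1         = source.Col1,
               target.LastModified = source.LastModified
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Id, Col1, LastModified)
    VALUES (source.Id, source.Col1, source.LastModified);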
Please let me know if something is not clear; any help is very much appreciated, and thanks in advance.
Using Visual Studio 2010 Premium or Ultimate you can do a schema compare and create the new database on the server you need to. Getting the data over there is another matter entirely and is something you can perform with SSIS or linked server queries, especially considering you're simply pumping data from one side to another.
See this post for schema compare tools:
How can I find the differences between two databases?
Something we've recently completed uses CHECKSUM and table switching to bring in new data, keep the old, and compare both sets.
You would basically set up three tables for each table doing the switch: one for production, a _staging table, and an _old table that's an exact schema match. Three tables are needed because when you perform the switch (which is instantaneous), you need to switch into an empty table. You would pump the data into the staging table, then execute a switch to change the pointer for the table definition from production to _old and from _staging to production. And if you have a checksum calculated on each row, you can compare and see what's new or different.
It's worked out well with our use of it, but it's a fair amount of prep work to make it happen, which could be prohibitive depending on your environment.
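The comparison step itself can be as simple as this sketch (hypothetical table and column names, assuming a key column Id):

SELECT s.Id
FROM      dbo.MyTable_staging AS s
LEFT JOIN dbo.MyTable_old     AS o ON o.Id = s.Id
WHERE  o.Id IS NULL                                            -- brand-new row
   OR  CHECKSUM(s.Col1, s.Col2) <> CHECKSUM(o.Col1, o.Col2);   -- changed row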
http://msdn.microsoft.com/en-us/library/ms190273.aspx
http://technet.microsoft.com/en-us/library/ms191160.aspx
http://msdn.microsoft.com/en-us/library/ms189788.aspx
Here's a sample query that we ran on GoDaddy.com's sql web editor:
CREATE TABLE TestTable (Val1 INT NOT NULL)
CREATE TABLE TestTable2 (Val1 INT NOT NULL)  -- exact schema match, required for SWITCH

INSERT INTO TestTable (Val1) VALUES (4)
SELECT * FROM TestTable                      -- returns the row

ALTER TABLE TestTable SWITCH TO TestTable2   -- instantaneous metadata change; TestTable2 must be empty
SELECT * FROM TestTable2                     -- the row now lives here