I have a SQL Server database of about 150 GB that stores data for analysis. Each day new data comes in and we need to delete old data (based on date). Recently the daily data volume has grown a lot; it will soon be about 8-9 GB per day.
Currently we delete in small batches, which takes a very long time to finish. Is there a general guide to making this faster? I tried dropping/disabling the indexes before the delete and rebuilding them afterwards, but it did not help much.
Or will this depend entirely on the actual data?
Thanks
Given the amount of data, I would use a partitioned table, with one partition for each day.
Swapping partitions in and out is going to be the fastest way to delete all data for one day.
EDIT: since truncating a partition is not as trivial as it should be in SQL Server, I figured I'd provide more details, in case you're not familiar with partitions.
In the next release of SQL Server, you should be able to just TRUNCATE PARTITION or something like that. In the meantime you have to proceed as follows:
The quickest way to delete a day of data in your database is to have the table partitioned by day and then:
Switch out the partition that you want to delete to another table: ALTER TABLE partitioned SWITCH PARTITION n TO otherTableToDelete.
TRUNCATE TABLE otherTableToDelete.
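To make that concrete, here is a minimal sketch. The partition function name pf_Daily, the table names and the date are placeholders for your own; the staging table must have the same schema and indexes as the partitioned table and sit on the same filegroup as the partition being removed.

-- Find which partition holds the day you want to drop (pf_Daily is an assumed name).
DECLARE @p int = $PARTITION.pf_Daily('20230101');

-- Metadata-only move of that partition into the staging table, then an instant truncate.
ALTER TABLE dbo.partitioned SWITCH PARTITION @p TO dbo.otherTableToDelete;
TRUNCATE TABLE dbo.otherTableToDelete;

On SQL Server 2016 and later you can skip the staging table entirely with TRUNCATE TABLE dbo.partitioned WITH (PARTITIONS (n)), which is the feature hinted at above.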
Related
I have a single MSSQL 2017 Standard table, let's call it myTable, with data going back to 2015, containing 206.4 million rows. Once INSERTed by the application, these rows are never modified or deleted. The table is actively collecting data, 24/7.
My goal is to reduce the data in this table to only the most recent full 6 months plus the current month, in monthly partitions for easy monthly pruning. myTable.dateCreated would determine which partition the data ultimately resides in.
(Unrelated, but mentioning in case it ends up being relevant: I have an existing application that replicates all data that gets stored in myTable out to a data warehouse for long term storage every 15 minutes; the main application is able to query myTable for recent data and the data warehouse for older data as needed.)
Because I want to prune the oldest month of data out of myTable each time a new month starts, partitioning myTable by month makes the most sense: I can simply SWITCH the oldest partition to a staging table, then truncate that staging table, without causing downtime or a performance hit on the main table.
I've come up with the following plan, and my questions are simple: Is this the best way to approach this task, and will it keep downtime/performance degradation to a minimum?
Create a new table, myTable_pending, with the same exact table structure as myTable, EXCEPT that it will have a total of 7 monthly partitions (6 months retention plus current month) configured;
In one complete step: rename myTable to myTable_transfer, and rename myTable_pending to myTable. This should have the net effect of allowing incoming data to continue being stored, but now it will be in a partition for the month of 2023-01;
Step 3 is where I need advice... which of the following might be best to get the remaining 6mos + current data back into the now-partitioned myTable, or are there additional options I should consider?
OPTION 1: Run a Bulk Insert of just the most recent 6 months of data from myTable_transfer back into myTable, causing the data to end up in the correct partitions in the process (with the understanding that this may still take some time, but not as long as a bunch of INSERTs that would end up chewing on the transaction log);
OPTION 2: Run a DELETE against myTable_transfer to get rid of all data except the most recent full 6 months + current, then set up and apply partitioning on THIS table, which would cause SQL Server to reorganize the data into those partitions without affecting access or performance on myTable. After that I could just SWITCH the partitions from myTable_transfer into myTable for immediate access. (Related issue: since myTable is still collecting current data, and myTable_transfer will contain data from the current month as well, can the current-month partitions be merged?)
OPTION 3: Any other way to do this, so that myTable ends up with 6 months worth of data, properly partitioned, without significant downtime?
We ended up revising our solution: since the original table was replicated to a data warehouse anyway, we simply renamed it and created a new, partitioned table to start collecting data from the rename point onward. This gave us the least downtime, the fastest schema change, and the partitioning we needed to maintain the table efficiently going forward.
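For anyone attempting something similar, the swap looked roughly like the sketch below. The partition function/scheme names, boundary dates and column list are illustrative placeholders rather than our exact schema.

-- Monthly partitions: RANGE RIGHT with one boundary per month (placeholder dates).
CREATE PARTITION FUNCTION pf_Monthly (datetime2(3))
AS RANGE RIGHT FOR VALUES ('2022-08-01', '2022-09-01', '2022-10-01',
                           '2022-11-01', '2022-12-01', '2023-01-01');

CREATE PARTITION SCHEME ps_Monthly
AS PARTITION pf_Monthly ALL TO ([PRIMARY]);

-- Same structure as myTable; the partitioning column must be part of the clustered key.
CREATE TABLE dbo.myTable_pending
(
    id          bigint IDENTITY(1,1) NOT NULL,
    dateCreated datetime2(3)         NOT NULL,
    -- ...remaining columns identical to myTable...
    CONSTRAINT PK_myTable_pending PRIMARY KEY CLUSTERED (id, dateCreated)
) ON ps_Monthly (dateCreated);

-- Swap the names so writers only ever see a table called myTable.
BEGIN TRAN;
    EXEC sp_rename 'dbo.myTable', 'myTable_transfer';
    EXEC sp_rename 'dbo.myTable_pending', 'myTable';
COMMIT;

The rename pair only needs a brief schema-modification lock, so as long as nothing holds long-running locks on myTable the swap is effectively instant.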
I have two tables (T_1 and T_2) with the same fields. What I need: after every hour, T_2 should only contain the data that was inserted into T_1 within that hour (the previous hour's data is erased). I am using SQL Server. Please help me.
Why would you set up two tables to do this?
Your use-case seems like a canonical case for table partitioning. This is a way of storing data in separate "units" (files). You seem to want T_1 to have its data split by hour.
Then you can directly access the data for a particular hour. This will be as efficient from an access perspective as copying the data into a separate table.
If you really wanted to, you could copy the most recent partition to another table every hour -- swapping in the new data for the older data. But that seems unnecessary in practice.
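If you do keep T_2, a rough sketch of the hourly refresh is below, assuming both tables share the same columns and T_1 has an InsertedAt datetime2 column recording the insert time (all names here are guesses at your schema). It empties T_2 and copies in the previous full hour.

DECLARE @hourStart datetime2 = DATEADD(HOUR, DATEDIFF(HOUR, 0, GETDATE()) - 1, 0);
DECLARE @hourEnd   datetime2 = DATEADD(HOUR, 1, @hourStart);

-- Empty T_2, then copy the previous hour's rows from T_1.
TRUNCATE TABLE dbo.T_2;

INSERT INTO dbo.T_2
SELECT *                              -- use an explicit column list if T_2 has an IDENTITY column
FROM dbo.T_1
WHERE InsertedAt >= @hourStart
  AND InsertedAt < @hourEnd;

If T_1 is partitioned (or at least indexed) on InsertedAt, this copy only touches one hour's worth of data; schedule it with a SQL Server Agent job.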
BUSINESS SCENARIO, SEEKING A WAY TO PROGRAM THIS:
Every night, I have to update table ABC in the data warehouse database from the production database. The table is millions of rows, so I want to do this efficiently.
The table doesn't have any sort of timestamp marker (LastUpdated Date\Time).
The database was created by our vendor whose software we run, and they are giving us visibility into our data. We may not have much leverage in terms of asking for new columns to house information such as LastUpdate DateTime stamp.
Is there a way, absent such information, to identify the rows that have been changed or added?
For example, is there such a thing as a queryable physical row number associated with each table record that might help us work towards a solution? If that could be queried, and it increases sequentially, then maybe there is a way to find the inserted rows.
Updated rows, I am not so sure.
Just entertaining ideas at this point in time to see if there is an efficient solution for this scenario.
Ideally, the solution will be geared towards a stored procedure we can have run every night as a job.
Thank you.
I saw this comment but I am not so sure that the solution is efficient:
Find changed rows (composite key with nulls)
Please check the MERGE statement. You can create a SQL Server Agent job that executes a MERGE script to check for changes and apply them, if any.
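A hedged sketch of what that nightly MERGE might look like; here dbo.ABC is the warehouse copy, ProductionDb.dbo.ABC the source, Id the key and Col1/Col2 the compared columns, all of which are placeholders for your real names.

MERGE dbo.ABC AS tgt
USING ProductionDb.dbo.ABC AS src
    ON tgt.Id = src.Id
-- Add IS NULL handling to the comparison if these columns are nullable.
WHEN MATCHED AND (tgt.Col1 <> src.Col1 OR tgt.Col2 <> src.Col2) THEN
    UPDATE SET Col1 = src.Col1,
               Col2 = src.Col2
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Id, Col1, Col2)
    VALUES (src.Id, src.Col1, src.Col2);

Wrap it in a stored procedure and schedule it as a nightly job. Note that without any change marker it still has to read and compare every row on both sides, which is why the concern about efficiency above is fair.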
I've tried to search for some ideas but can't find anything that's very suitable for my scenario.
I have a table which I write and update data to from multiple sites, maybe a row per second during specific hours of the day, with around 50k records added daily on average. Separate to this, I have dashboards where people can query this data, but some of the queries may be quite complex and take a number of seconds to complete.
I can't afford my write/updates to slow down
Although the dashboards don't need to be real time, it would be a bonus
I'm hosting on Azure SQL DB (S2 tier). What options are available?
My current idea is to use an 'active' table for writes/updates and flush the data to the full table every x minutes. My only concern is that I have a seeded bigint as the PK on the main table, and because I also save other data linked to this ID, I'd have problems linking to it until I commit to the main table. An option would be to reseed the active table and use IDENTITY_INSERT on the main table so I can populate the IDs myself, but I'm not 100% happy with this.
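If I do go that route, the flush I have in mind is roughly the following (ActiveTable, MainTable and the column names are placeholders; it assumes the ID is generated by the IDENTITY on the active table and the main table simply accepts it):

BEGIN TRAN;

DECLARE @maxId bigint = (SELECT MAX(Id) FROM dbo.ActiveTable);

-- Keep the ids generated in the active table so anything already linked to them stays valid.
SET IDENTITY_INSERT dbo.MainTable ON;

INSERT INTO dbo.MainTable (Id, Payload, CreatedAt)
SELECT Id, Payload, CreatedAt
FROM dbo.ActiveTable
WHERE Id <= @maxId;

SET IDENTITY_INSERT dbo.MainTable OFF;

DELETE FROM dbo.ActiveTable
WHERE Id <= @maxId;

COMMIT;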
Just looking for suggestions until I go ahead with my current idea! Thanks
I am looking for a much better way to update tables using SSIS. Specifically, I want to optimize the updates on around 10 tables that use the same logic.
The logic is:
Select the source data from staging, then insert it into a physical temp table in the DW (i.e. TMP_Tbl).
Update all rows in MyTbl that match TMP_Tbl on the customerId column.
Insert all rows from TMP_Tbl whose customerId does not yet exist in MyTbl (a rough T-SQL sketch of steps 2 and 3 follows).
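Roughly, steps 2 and 3 are the following two statements; SomeColumn stands in for the real set of columns being carried over.

-- Step 2: update existing customers in place.
UPDATE tgt
SET tgt.SomeColumn = src.SomeColumn          -- repeat for each column you carry over
FROM dbo.MyTbl AS tgt
INNER JOIN dbo.TMP_Tbl AS src
        ON src.customerId = tgt.customerId;

-- Step 3: insert customers that do not exist yet.
INSERT INTO dbo.MyTbl (customerId, SomeColumn)
SELECT src.customerId, src.SomeColumn
FROM dbo.TMP_Tbl AS src
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.MyTbl AS tgt
                  WHERE tgt.customerId = src.customerId);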
Using the above steps, populating TMP_Tbl takes some time. Hence I planned to change the logic to delete-insert, but according to this:
In SQL, is UPDATE always faster than DELETE+INSERT? that would be a recipe for pain.
Given:
no indexes/keys are used on the tables
some tables contain 5M rows, some contain 2k rows
each table update takes up to 2-3 minutes, for a total of about 15 to 20 minutes
these updates run simultaneously in separate sequence containers
Does anyone know the best approach here? It seems like the physical temp table needs to be removed; is this normal?
With SSIS you usually BULK INSERT, not INSERT. So if you do not mind a DELETE, reinserting the rows should in general outperform an UPDATE.
Considering this, the faster approach would be:
[Execute SQL Task] Delete all records which you need to update. (Depending on your DB design and queries, some index may help here).
[Data Flow Task] Fast load (using an OLE DB Destination with Data access mode: Table or view - fast load) both updated and new records from the source into MyTbl. No need for temp tables here.
If you cannot/don't want to DELETE records - your current approach is OK too.
You just need to fix the performance of that UPDATE query (adding an index should help). 2-3 minutes per updated record would be way too long.
If it is 2-3 minutes for updating millions of records, though, then it's acceptable.
Adding the correct non-clustered index to a table should not result in "much more time on the updates".
There will be a slight overhead, but if it helps your UPDATE seek instead of scanning a big table, it is usually well worth it.
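For example, assuming customerId is the column the UPDATE matches on (as in the question), the index meant here is simply something like:

CREATE NONCLUSTERED INDEX IX_MyTbl_customerId
ON dbo.MyTbl (customerId);

Check the actual execution plan of the UPDATE afterwards to confirm it now seeks on this index instead of scanning MyTbl.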