SSIS locking table while updating it - sql-server

I have an SSIS package which, when it runs, updates a table. It uses a staging table and then a slowly changing dimension to load the data into the warehouse. We have set it up as a SQL Agent job and it runs every two hours.
The isolation level of the package is serializable. The database isolation level is read committed.
The issue is that when this job runs, it blocks the table, so clients cannot run any reports against it; their reports come back blank.
So what would be the best option to avoid this? Clients need to see the data, but we also need to update the table every two hours.
Using Microsoft SQL Server 2012 (SP3-GDR) (KB4019092) - 11.0.6251.0 (X64)
Thanks.

You're getting "lock escalation". It's a feature, not a bug. 8-)
SQL Server combines large numbers of smaller locks into a table lock to improve performance.
If INSERT performance isn't an issue, you can do your data load in smaller chunks inside of transactions and commit after each chunk.
https://support.microsoft.com/en-us/help/323630/how-to-resolve-blocking-problems-that-are-caused-by-lock-escalation-in
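For example, here is a minimal sketch of that chunked pattern, assuming the incoming rows sit in a staging table called dbo.Staging and the warehouse table is dbo.DimCustomer (the table and column names are placeholders):

    -- Move the staged rows across in small, separately committed batches so that
    -- no single transaction holds enough locks to trigger escalation to a table lock.
    DECLARE @BatchSize int = 4000;   -- stays well under the ~5,000-lock threshold
    DECLARE @Rows int = 1;

    WHILE @Rows > 0
    BEGIN
        BEGIN TRANSACTION;

        -- Copy one batch of rows that haven't been loaded yet.
        INSERT INTO dbo.DimCustomer (CustomerKey, CustomerName)
        SELECT TOP (@BatchSize) s.CustomerKey, s.CustomerName
        FROM dbo.Staging AS s
        WHERE NOT EXISTS (SELECT 1
                          FROM dbo.DimCustomer AS d
                          WHERE d.CustomerKey = s.CustomerKey)
        ORDER BY s.CustomerKey;

        SET @Rows = @@ROWCOUNT;

        -- Committing here releases this batch's locks before the next batch starts,
        -- giving the report queries a chance to get in between batches.
        COMMIT TRANSACTION;
    END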

Another option is to give your clients/reports access to a clone of your warehouse table.
Do your ETL into a table that no one else can read from, and when it is finished, switch the table with the clone.
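A rough sketch of that swap, assuming the live table and its clone have identical schemas (the table names are placeholders; ALTER TABLE ... SWITCH or repointing a synonym are common alternatives):

    -- dbo.Warehouse       : the table the reports currently query
    -- dbo.Warehouse_Stage : an identically structured copy the ETL just loaded
    BEGIN TRANSACTION;

    EXEC sp_rename 'dbo.Warehouse',       'Warehouse_Old';
    EXEC sp_rename 'dbo.Warehouse_Stage', 'Warehouse';
    EXEC sp_rename 'dbo.Warehouse_Old',   'Warehouse_Stage';

    COMMIT TRANSACTION;

    -- The previous data now lives in dbo.Warehouse_Stage, ready to be truncated
    -- and reloaded on the next run; readers only ever see a fully loaded table.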

Related

SQL Server suspended

I have two applications that do something to the same SQL Server table. One application uses C# SqlBulkCopy to import about two hundred thousand records into the SQL Server table, and the other application queries data from the same SQL Server table.
I see this message - please check the screenshot. The table has one hundred million rows. How can I fix this?
If a single transaction modifies a table and takes out more than roughly 5,000 row locks on it, SQL Server will escalate the locking from row-level locks to an exclusive table lock.
So if your application #1 is bulk-loading 200,000 rows into the table, that table will be exclusively locked for the duration of the loading process.
Therefore, your application #2 - or any other client - won't be able to query that table until the loading process is done.
This is normal, documented, expected behavior on the part of SQL Server.
Either make sure you load your data in batches of fewer than 5,000 rows at a time during business hours, or do the bulk loading after hours, when no one is negatively impacted by an exclusive table lock.
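If you want to confirm that escalation is what's blocking the readers, the index operational stats DMV keeps a count of escalations since the last restart; a sketch, with a placeholder table name (on the loading side, SqlBulkCopy's BatchSize property is the usual way to keep each batch small):

    -- How often has SQL Server attempted / performed lock escalation on this table?
    SELECT OBJECT_NAME(ios.object_id)             AS table_name,
           ios.index_id,
           ios.index_lock_promotion_attempt_count AS escalation_attempts,
           ios.index_lock_promotion_count         AS escalations
    FROM sys.dm_db_index_operational_stats(DB_ID(), OBJECT_ID('dbo.BigTable'), NULL, NULL) AS ios;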

SQL Server: Archiving old data

I have a database that is getting pretty big, but the client is only interested in the last 2 years' data. However, they would like to keep the older data "just in case".
Now we would like to archive the data to a different server over a WAN.
My plan is to create a stored proc to:
Copy all data from lookup tables, tables containing master data and foreign key tables over to the archive server.
Copy data from transactional tables over to the archive DB.
Delete transactional data from master db that's older than 2 years.
Although the approach will theoretically meet our needs, the 2 main problems are:
Performance: I'm copying the data over via SQL linked servers. Some of the big tables are really slow, because the proc needs to compare which records already exist and update them, while the records that don't exist need to be created. It looks like it will run for 3-4 hours.
Sequencing: We need to copy the tables in the correct sequence to prevent foreign key violations, and tables that have a relationship to themselves (e.g. a Customers table with a ParentCustomer field) need to be transferred without the ParentCustomer and then have the ParentCustomer updated afterwards to prevent FK violations. This makes it difficult to auto-generate my INSERT and UPDATE statements (I would like to auto-generate my statements as far as possible).
I just feel there might be a better way of archiving data that I don't yet know about. SSIS might be an option, but I'm not sure whether it will solve my existing challenges. I don't know much about SSIS, so I might need to find some material to study if that's the way to go.
I believe you need a batch process that will run as a scheduled task; perhaps every night. There are two options, which you have already discussed:
1) SQL Agent Job, which executes a Stored Procedure. The stored procedure will use Linked Servers.
2) SQL Agent Job, which will execute an SSIS package.
I believe you could benefit from a combination of both approaches, which would avoid Linked Servers. Here are the steps:
1) An SQL Agent Job executes an SSIS package, which transfers the data to be archived from the live database to the archive database. This should be done in a specific sequence to avoid foreign key violations.
2) Once the SSIS package has finished the transfer, it executes a stored procedure on the live database that deletes the information that is over two years old. The stored procedure does not require any linked servers (a rough sketch is shown below).
You will have to use transactions to make sure duplicate data is not archived. For example, if the SSIS package fails then the transaction should be rolled back and the Stored Procedure should not be executed.
You can use table partitions to create separate partitions for relevant date ranges.
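As an illustration of step 2, the purge procedure called by the SSIS package could look something like this sketch, where the procedure, table, and column names are all hypothetical and the deletes run in batches to keep the log and lock footprint small:

    CREATE PROCEDURE dbo.PurgeArchivedTransactions
    AS
    BEGIN
        SET NOCOUNT ON;

        DECLARE @Cutoff datetime;
        DECLARE @Rows int;

        SET @Cutoff = DATEADD(YEAR, -2, GETDATE());
        SET @Rows = 1;

        -- Delete only rows older than two years, in batches, and only after the
        -- archive copy has been confirmed by the calling package.
        WHILE @Rows > 0
        BEGIN
            BEGIN TRANSACTION;

            DELETE TOP (5000)
            FROM dbo.Transactions
            WHERE TransactionDate < @Cutoff;

            SET @Rows = @@ROWCOUNT;

            COMMIT TRANSACTION;
        END
    END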

Populating SQL Server databases and creating indexes - which is the most efficient way?

We've got a project site where we have to replicate a legacy database system into SQL Server 2008 on a nightly basis.
We are using the SQL DataWizard tool from Maestro to do the job, and because we cannot get an accurate delta every night, it was decided that we would dump the previous SQL Server database and take a fresh snapshot every night. Several million rows in about 10 different tables. The snapshot takes about 2 hours to run.
Now, we also need to create some custom indexes on the snapshot copy of the data, so that certain BI tools can query the data quickly.
My question is: is it more efficient to create the tables AND the indexes before the snapshot copy is run, or do we just create the table structures first, run the snapshot copy, and then create the indexes after the tables are populated?
Is there a performance difference between SQL Server building the indexes WHILE rows are being added versus adding all the rows first and then creating the indexes on the final data set?
Just trying to work out which way will result in less database server CPU overhead.
When you perform snapshot replication, the first task is to bulk copy the data. After the data has been copied, the primary and secondary indexes are added; the indexes don't exist until that second step is complete. So no, there is nothing extra to be gained by applying the indexes yourself after the snapshot - loading first and indexing afterwards is already the order in which it happens.
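If you do end up building the custom BI indexes yourself, the same order applies: bare tables, bulk load, then indexes. A sketch with made-up table, column, and index names:

    CREATE TABLE dbo.LegacyOrders
    (
        OrderID    int           NOT NULL,
        CustomerID int           NOT NULL,
        OrderDate  datetime      NOT NULL,
        Amount     decimal(18,2) NOT NULL
    );

    -- ... the nightly snapshot copy bulk-loads dbo.LegacyOrders here ...

    -- Only once the table is fully populated are the indexes the BI tools need built.
    ALTER TABLE dbo.LegacyOrders
        ADD CONSTRAINT PK_LegacyOrders PRIMARY KEY CLUSTERED (OrderID);

    CREATE NONCLUSTERED INDEX IX_LegacyOrders_Customer_Date
        ON dbo.LegacyOrders (CustomerID, OrderDate);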

SQL Server replication for 70 databases with transformation in a small time window

We have 70+ SQL Server 2008 databases that need to be copied from an OLTP environment to a separate reporting server. Once the DBs are copied, we will do some partial data transformation: denormalization, row-level security, etc.
SSRS Reports will be written based on these static denormalized tables and views.
We have a small nightly window for copying and transforming all 70 databases (3 hours).
Currently databases average about 10GB.
Options:
1. Transactional replication:
We would need to create 100+ static denormalized tables on each reporting database.
Doing this for all 70 databases almost reaches our nightly time limit.
As the databases grow we will exceed the time limit. We thought of mixing denormalized tables with views to speed up the transformation, but then some of the data would be dynamic and some static, which is not a solution we can use.
Also with 70 databases using transactional replication we are concerned about bandwidth usage.
2. Snapshot replication:
Copy the entire database each night.
This means we could have a mixture of denormalized tables and views so the data transformation process is quicker.
But the snapshot is a full data copy, so as the DB grows, we will exceed our time limit for completing copy and transformation.
3. Log shipping:
In our nightly window, we could use the log shipping to update the reporting databases, then truncate and repopulate the denormalized tables and use some views.
However, I understand that with log shipping, extra tables and views cannot be added to the subscribing database.
4. Mirroring:
Mirroring is being deprecated, and in any case the mirrored DB cannot be reported against until failover.
5. SQL Server 2012 AlwaysOn.
We don't have SQL Server 2012 yet. Can this be configured to update once a day instead of in real time?
And can extra tables and views be created on the subscribing database (our reporting databases)?
6. Merge replication:
This is meant to be for combining multiple data sources into one database.
But it looks like it allows for a scheduled update (once per day) and only updates the subscriber DB with the latest changes rather than doing an entire snapshot.
It requires adding a rowversion column to every table, but we could handle this. Also, with this solution, could additional tables be created on the subscriber database without the update getting out of sync?
The final option is to use SSIS to select only the data we need from the OLTP databases. I think this option creates more risk, as we would have to handle inserts/updates/deletes to our denormalized tables, rather than just dropping and recreating the denormalized tables daily.
Any help on our options would be greatly appreciated.
If I've made any incorrect assumptions, please say.
If it were me, I'd go with transactional replication that runs continuously and have views (possibly indexed) at the subscriber. This has the advantage of not having to wait for the data to come over since it's always coming over.
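To make the "views (possibly indexed)" part concrete, an indexed view at the subscriber might look something like this sketch (schema binding and two-part names are required for indexed views; the object names are placeholders):

    CREATE VIEW dbo.vw_OrderReport
    WITH SCHEMABINDING
    AS
    SELECT o.OrderID,
           o.OrderDate,
           c.CustomerName,
           o.Amount
    FROM dbo.Orders AS o
    INNER JOIN dbo.Customers AS c
        ON c.CustomerID = o.CustomerID;
    GO

    -- The unique clustered index is what materializes the view, so reports read
    -- pre-joined rows instead of paying for the join at query time.
    CREATE UNIQUE CLUSTERED INDEX IX_vw_OrderReport
        ON dbo.vw_OrderReport (OrderID);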

Recording all Sql Server Inserts and Updates

How can I record all the Inserts and Updates being performed on a database (MS SQL Server 2005 and above)?
Basically I want a table in which I can record all the inserts and updates issued against my database.
Triggers will be tough to manage because there are 100s of tables and growing.
Thanks
Bullish
We have hundreds of tables and growing, and we use triggers. In newer versions of SQL Server you can use Change Data Capture or Change Tracking, but we have not found them adequate for auditing.
What we have is two separate audit tables for each table: one records the details of the statement (one row even if you updated a million records) and one records the actual old and new values. Each pair of audit tables has the same structure and is created by running a dynamic SQL proc that looks for unaudited tables and creates the audit triggers. This proc is run every time we deploy.
Then you should also take the time to write a proc to pull the data back out of the audit tables if you want to restore the old values. This can be tricky to write on the fly with this structure, so it is best to have it handy before you have the CEO breathing down your neck while you restore the 50,000 user records that were accidentally deleted.
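A stripped-down sketch of one generated trigger, for a hypothetical dbo.Customers table with equally hypothetical audit tables, might look like this (the real thing is emitted by the dynamic SQL proc):

    CREATE TRIGGER trg_Customers_Audit
    ON dbo.Customers
    AFTER INSERT, UPDATE, DELETE
    AS
    BEGIN
        SET NOCOUNT ON;

        DECLARE @AuditID int;

        -- One header row per statement, however many rows it touched.
        -- Assumes dbo.Customers_AuditHeader has an identity key column.
        INSERT INTO dbo.Customers_AuditHeader (AuditDate, LoginName)
        VALUES (GETDATE(), SUSER_SNAME());

        SET @AuditID = SCOPE_IDENTITY();

        -- One detail row per affected row, with the old and new values side by side.
        INSERT INTO dbo.Customers_AuditDetail (AuditID, CustomerID, OldName, NewName)
        SELECT @AuditID,
               COALESCE(i.CustomerID, d.CustomerID),
               d.CustomerName,   -- NULL for an INSERT
               i.CustomerName    -- NULL for a DELETE
        FROM inserted AS i
        FULL OUTER JOIN deleted AS d
            ON d.CustomerID = i.CustomerID;
    END
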
As of SQL Server 2008 and above, you have Change Data Capture.
Triggers, although unwieldy and a maintenance nightmare, will do the job on versions prior to 2008.
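For reference, enabling Change Data Capture on a table is just a couple of system procedure calls (the table name is a placeholder; CDC needs SQL Agent running, and Enterprise edition prior to SQL Server 2016 SP1):

    EXEC sys.sp_cdc_enable_db;

    EXEC sys.sp_cdc_enable_table
         @source_schema = N'dbo',
         @source_name   = N'Customers',
         @role_name     = NULL;   -- NULL = no gating role; tighten as needed

    -- Inserts, updates and deletes then land in cdc.dbo_Customers_CT,
    -- populated by the CDC capture job.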
