SQL Server suspended

I have two applications that work against the same SQL Server table. One uses C# SqlBulkCopy to import about two hundred thousand records into the table, and the other queries data from that same table.
I keep seeing this message - please check the screenshot. The table has one hundred million rows. How can I fix this?

If a single statement modifies a table and acquires more than roughly 5,000 row locks, SQL Server will escalate from row-level locking to an exclusive table lock.
So if your application #1 is bulk-loading 200,000 rows into the table, that table will be exclusively locked for the duration of the loading process.
Therefore, your application #2 - or any other client - won't be able to query that table until the loading process is done.
This is normal, documented, expected behavior on the part of SQL Server.
Either make sure you load your data in batches of fewer than 5,000 rows at a time during business hours, or else do the bulk loading after hours, when no one is negatively impacted by an exclusive table lock.
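One way to stay under that threshold, sketched very roughly: SqlBulkCopy exposes a BatchSize property (and a SqlBulkCopyOptions.UseInternalTransaction option) you can use to send the 200,000 rows in chunks of fewer than 5,000; if the load can instead run server-side from a staging table, the same idea in T-SQL might look like this, where dbo.BigTable, dbo.BigTable_Staging and the contiguous integer Id column are hypothetical stand-ins for your own schema:

    -- Rough sketch only; table and column names are placeholders.
    DECLARE @BatchSize int = 4000;      -- stay well under the ~5,000-lock escalation threshold
    DECLARE @LastId    int = 0;
    DECLARE @MaxId     int = (SELECT MAX(Id) FROM dbo.BigTable_Staging);

    WHILE @LastId < @MaxId
    BEGIN
        -- Each INSERT commits on its own (autocommit), so its row locks are
        -- released before the next chunk starts.
        INSERT INTO dbo.BigTable (Id, Col1, Col2)
        SELECT Id, Col1, Col2
        FROM dbo.BigTable_Staging
        WHERE Id > @LastId
          AND Id <= @LastId + @BatchSize;

        SET @LastId = @LastId + @BatchSize;
    END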

Related

SQL Server: Local Query Time vs. Network Query Time... and Locks

Querying from a view into a temp table can insert 800K records in < 30 seconds. However, querying from the view to my app across the network takes 6 minutes. Does the server build the dataset and then send it, releasing any locks acquired after the dataset is built? Or are the locks held for that entire 6 minutes?
Does the server build the dataset and then send it, releasing any locks acquired after the dataset is built?
If you're using READ COMMITTED SNAPSHOT or are running under SNAPSHOT isolation, then the query takes no shared row or page locks in the first place.
Beyond that, it depends on whether it's a streaming query plan or not. With a streaming plan, SQL Server may still be reading from the tables - as slowly as the client consumes the rows - while the results are sent across the network.
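For reference, a minimal sketch of turning on row versioning; the database name MyDb and the view dbo.MyBigView are hypothetical placeholders:

    -- Option 1: make the default READ COMMITTED level use row versioning.
    ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

    -- Option 2: opt in per session with SNAPSHOT isolation.
    ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
    SELECT *
    FROM dbo.MyBigView;   -- reads a consistent row version, no shared row/page locks taken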

postgresql db table locking or row locking multi query execution

I would like to understand how PostgreSQL handles multi-query execution. For example, I have a database with lots of INSERT queries running (about 20-40 per minute) and lots of SELECT queries (about 200-300 per minute, simple selects by primary key).
These queries run against the same table, and I'm curious how PostgreSQL handles that. Is it that when an INSERT runs the table is locked and the SELECT queries have to wait, or is it row locking, so that while an INSERT is in progress the SELECT queries can continue and simply ignore the locked rows?
In MySQL there is the MyISAM engine that does table locking and InnoDB that does row locking, I guess...
Postgres implements multi-version concurrency control (MVCC), which means that readers never block writers and writers never block readers.
For normal DML statements Postgres never takes a table lock that would conflict with a SELECT, so the SELECT queries are never blocked by any of the INSERT statements you are running concurrently.
The Postgres Wiki contains links to more detailed descriptions of how exactly MVCC is implemented and works in Postgres.
Essentially every modern DBMS uses some kind of MVCC these days. Oracle, Firebird and DB2 have "always" been using it, SQL Server introduced it with SQL Server 2005 (although it's still not the default behaviour), and in MySQL the InnoDB engine uses it.
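You can see this for yourself with a small experiment; "accounts" is a hypothetical table, and the two halves are meant to be run in two separate psql sessions:

    -- Session 1: open a transaction and insert without committing yet.
    BEGIN;
    INSERT INTO accounts (id, balance) VALUES (42, 100);
    -- ...leave the transaction open...

    -- Session 2: not blocked by session 1; it simply does not see the uncommitted row.
    SELECT count(*) FROM accounts WHERE id = 42;   -- returns 0, immediately

    -- Session 1: make the change visible.
    COMMIT;

    -- Session 2: the committed row is now visible.
    SELECT count(*) FROM accounts WHERE id = 42;   -- returns 1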

SSIS locking table while updating it

I have an SSIS package which, when it runs, updates a table. It uses a staging table and then a slowly changing dimension to load the data into the warehouse. We have set it up as a SQL Agent job and it runs every two hours.
The isolation level of the package is Serializable. The database isolation level is Read Committed.
The issue is that when this job runs it blocks that table, so clients cannot run any reports against it; their reports just come back blank.
So what would be the best option for me to avoid this? Clients need to see the data, but we also need to update the table every two hours.
Using Microsoft SQL Server 2012 (SP3-GDR) (KB4019092) - 11.0.6251.0 (X64)
Thanks.
You're getting "lock escalation". It's a feature, not a bug. 8-)
SQL Server combines large numbers of smaller locks into a table lock to improve performance.
If INSERT performance isn't an issue, you can do your data load in smaller chunks inside transactions and commit after each chunk.
https://support.microsoft.com/en-us/help/323630/how-to-resolve-blocking-problems-that-are-caused-by-lock-escalation-in
Another option is to give your clients/reports access to a clone of your warehouse table.
Do your ETL into a table that no one else can read from, and when it is finished, switch the table with the clone.
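One common way to do that swap in T-SQL is three sp_rename calls in a short transaction (ALTER TABLE ... SWITCH is another option); dbo.FactSales (the table the reports query) and dbo.FactSales_Load (the table the ETL writes to) are hypothetical names:

    -- Step 1: let SSIS load/refresh dbo.FactSales_Load. Reports keep hitting
    --         dbo.FactSales and are never blocked by the load.

    -- Step 2: swap the two tables in one short transaction.
    BEGIN TRANSACTION;
        EXEC sp_rename 'dbo.FactSales',      'FactSales_Old';
        EXEC sp_rename 'dbo.FactSales_Load', 'FactSales';
        EXEC sp_rename 'dbo.FactSales_Old',  'FactSales_Load';
    COMMIT TRANSACTION;
    -- Reports now see the freshly loaded data; the previous copy becomes the
    -- target for the next load.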

How to keep indexes performant on Sql Server table frequently updated (delete/insert) by batch process

We have a table in our SQL Server database which holds 'today's' data and is updated by many scheduled jobs around the clock. Each job deletes the rows it previously inserted and inserts new rows. The table data is also made available via a web site which runs many queries against it. The problem is that the indexes are constantly fragmented, and although the table only has 1.5 million rows, queries are generally very slow and the website times out frequently.
So I would like to know if anyone else has experienced a similar scenario and, if so, how you dealt with it.
You need to reorganize (ReOrg) the indexes on a daily basis; in SSMS this can be set up as a scheduled defrag/maintenance job.
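A minimal T-SQL sketch of such a daily job, with dbo.TodaysData standing in for the real table name: the DMV query shows how fragmented each index is, and REORGANIZE is an online operation, so it can run while the website keeps querying.

    -- Check current fragmentation per index (dbo.TodaysData is a placeholder name).
    SELECT OBJECT_NAME(ips.object_id)        AS table_name,
           i.name                            AS index_name,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.TodaysData'),
                                        NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id
     AND i.index_id  = ips.index_id;

    -- Reorganize all indexes on the table; this runs online.
    ALTER INDEX ALL ON dbo.TodaysData REORGANIZE;

    -- Refresh statistics afterwards so the optimizer sees the current data.
    UPDATE STATISTICS dbo.TodaysData;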

PostgreSQL table lock

I have a PostgreSQL 9.2 database behind a pgBouncer connection pool on a Debian server.
In that database, regular users run queries against a few tables, and I have a cron process fetching data and inserting it into one table (using a PL/pgSQL function which does some validation before the insert).
The problem I have is that when the cron processes generate a heavy load (many inserts), the table gets locked and queries against it do not respond (or take a very long time to respond).
Is there any way to set priorities by stored procedure, by database user (the cron job and the user queries use different database users), or by statement type (SELECT having higher priority than INSERT)?
If there is no way to define priorities in PostgreSQL, is there any workaround?
The inserts can wait, but the user queries should not...
The cron process creates and drops a pgBouncer connection per insert. If I use the same connection, the problem is worse (the queries take even longer).
Thanks in advance,
Claudio
