DB2 temp table session concurrent user issue - database

We are working on a .NET application with DB2 as the database. I am using a temp table in my stored procedure, and sometimes it throws the error "table is in use".
Declare Global Temporary Table TRNDETAILS (
    USERID INT,
    Name VARCHAR(25)
) WITH REPLACE;
As per the document below, temp tables are specific to the session. Then why is it showing "table is in use", and how can I resolve it?
https://www.ibm.com/support/knowledgecenter/en/SS6NHC/com.ibm.swg.im.dashdb.sql.ref.doc/doc/r0003272.html

SQL0913N is either a lock-timeout or a deadlock.
This might not be on your session's temp table, unless your .NET app is multithreading SQL on a single connection.
Check DSPRCDLCK, WRKOBJLCK, and related tools. You need to track down the conflicting SQL statement(s) and take action depending on the cause. Sometimes this involves changing the isolation level in your application.
Examine the Db2 for i diagnostics to get more information, i.e. whether it is a lock-timeout or a deadlock, which connections are involved, and which objects are involved.

QTEMP is unique for every job/connection...
I assume your app only creates the temporary table in one place and that creating the temporary table is one of the first things it does.
I also suspect that you're using connection pooling within .NET.
Thus the connection isn't actually being closed; it's left open in the connection pool.
Somewhere in your app you're not properly disposing of a result set and/or committing changes to the temp table, which leaves rows inside it locked when the connection is returned to the pool.
You probably should drop the temporary table before your app closes the connection and returns the connection to the pool.
That should prevent the error, but it'd be a good idea to track down the bug that's leaving the rows locked in the first place.
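For example, a minimal cleanup sketch before handing the connection back, assuming the TRNDETAILS table from the question (whether the COMMIT is needed depends on your commitment control settings):
COMMIT;                          -- end the unit of work so any row locks on the temp table are released
DROP TABLE SESSION.TRNDETAILS;   -- explicitly drop the declared temporary table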

Related

Ghost data rows added into Firebird database table?

I faced a strange case today when receiving a customer database for investigation.
System settings:
Firebird server v 2.5.9.26074
Firebird client v 2.6.5
Database file is accessed directly by the application, i.e., it is NOT registered via aliases.conf.
When I first looked into the database, everything seemed to be pretty consistent. However, during the first startup two rows are added to a certain table without any detected SQL execution. I have confirmed with a debugger that the application is not adding these rows. I also used the Audit and Trace interface (fbtracemgr) and saw in the log file that no such rows are added to the database.
There is one hint that something is wrong in the original database. The table that contains the problem uses an INSERT trigger to set the table row's ID column value from a generator. Now the generator value seems to be one too high in the original database. This leads me to think that the "ghost data" has already been entered into the file in some sort of cache, as the generator has already been incremented by one.
The result is that after these two ghost rows are added, the next real addition to the table leads to an exception:
FirebirdSql.Data.FirebirdClient.FbException (0x80004005): violation of
PRIMARY or UNIQUE KEY constraint "INTEG_275" on table "DATALOG" --->
violation of PRIMARY or UNIQUE KEY constraint "INTEG_275" on table
"DATALOG"
as there already exists a row with the same ID that the generator suggests.
Is there a persistent "unsaved data cache" that could contain row data entered during previous application runs? What could lead to this situation? A power failure during database writing or backup?
Any thoughts?
Firebird server v 2.5.9.26074
There is no such version released.
Firebird-2.5.8.27089
http://www.firebirdsql.org/en/firebird-2-5/
Basically you seem to be using some destabilized internal build from the FB developers, which can have any number of strange adverse effects.
So I would advise using a standard released version, or if using snapshot builds is required for some untold reason, asking the developers on the firebird-support mailing list - http://www.firebirdsql.org/en/support/
Though don't hold your breath for much of support over exotic Firebird builds.
UPD. Thanks to Mark, here it is: https://www.firebirdsql.org/en/firebird-2-5-0/
2.5.0 was the first release after a significant reworking of the engine - not the most stable, obviously. For example, there was an issue with indices right in the next version, 2.5.1.
If the behavior is repeated on a standard 2.5.8 Firebird, then I would suggest exporting the whole database (at least all the metadata, but maybe the data as well) into a long text file, an SQL script, and then searching for the said table name in it. For example, there might be on-database-connect triggers adding some data. Or stored procedures. Or views made on triggers. Or something else entirely. For example - though it is malpractice - even a UDF may make its own database connection and do things, though this should show up in FBTrace.
However, during the first startup there are two rows added in certain table
Startup of what?
Will those rows still be added if you use standard tools like iSQL/FlameRobin/IBExpert/etc. just to connect to and then disconnect from the database?
as there already exist row with equal ID that the generator suggests
A generator cannot suggest things like that. It can only suggest that such a number was once reserved for possibly being added to one table or another. It does not mean the row was actually inserted, that it was inserted into that particular table, or that it was not deleted later.
You may try to search with index usage prohibited, in case the index is corrupted (the id+0 expression prevents the engine from using the index on id), with something like
select id+0, count(*) from tableName group by 1
Also http://www.firebirdfaq.org/faq324/
when receiving customer database for investigation
BTW, how exactly did they create the copy of the database that they gave you?
Did they make a backup (FBK)? If not, did they stop the Firebird server before making the copy?

Explicitly drop temp table or let SQL Server handle it

What is the best practice for handling the dropping of a temp table? I have read that you should explicitly handle the drop, and also that SQL Server should handle the drop... which is the correct method? I was always under the impression that you should do your own clean-up of the temp tables you create in a sproc, etc., but then I found other bits that suggest otherwise.
Any insight would be greatly appreciated. I am just concerned I am not following best practice with the temp tables I create.
Thanks,
S
My view is, first see if you really need a temp table - or can you make do with a Common Table Expression (CTE)? Second, I would always drop my temp tables. Sometimes you need a temp table that outlives a single batch (e.g. a global ##temp table), so if you run the query a second time and you have explicit code to create the temp table, you'll get an error saying the table already exists. Cleaning up after yourself is ALWAYS a good software practice. A minimal sketch of that habit follows (the #Staging name and columns are just illustrative):
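IF OBJECT_ID('tempdb..#Staging') IS NOT NULL
    DROP TABLE #Staging;

CREATE TABLE #Staging (
    MyID INT,
    MyText NVARCHAR(256)
);

-- ... populate and use #Staging here ...

DROP TABLE #Staging;   -- explicit clean-up rather than waiting for the session to end
On SQL Server 2016 and later the existence check can be shortened to DROP TABLE IF EXISTS #Staging.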
EDIT: 03-Nov-2021
Another alternative is a TABLE variable, which will fall out of scope once the query completes:
DECLARE @MyTable AS TABLE (
    MyID INT,
    MyText NVARCHAR(256)
);

INSERT INTO @MyTable
VALUES
    (1, 'One'),
    (2, 'Two'),
    (3, 'Three');

SELECT *
FROM @MyTable;
CREATE TABLE (Transact-SQL)
Temporary tables are automatically dropped when they go out of scope, unless explicitly dropped by using DROP TABLE:
A local temporary table created in a stored procedure is dropped automatically when the stored procedure is finished. The table can be referenced by any nested stored procedures executed by the stored procedure that created the table. The table cannot be referenced by the process that called the stored procedure that created the table.
All other local temporary tables are dropped automatically at the end of the current session.
Global temporary tables are automatically dropped when the session that created the table ends and all other tasks have stopped referencing them. The association between a task and a table is maintained only for the life of a single Transact-SQL statement. This means that a global temporary table is dropped at the completion of the last Transact-SQL statement that was actively referencing the table when the creating session ended.
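For instance, a toy script illustrating the stored-procedure rule quoted above (all names here are made up):
CREATE PROCEDURE dbo.DemoTempScope
AS
BEGIN
    CREATE TABLE #ProcScoped (ID INT);
    INSERT INTO #ProcScoped VALUES (1);
    SELECT * FROM #ProcScoped;   -- visible here, and in any procedures this one calls
END;
GO

EXEC dbo.DemoTempScope;
SELECT * FROM #ProcScoped;       -- fails: the table was dropped when the procedure finished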
I used to fall into the crowd of letting the objects get cleaned up by background server processes; however, recently having issues with extreme TempDB log file growth has changed my opinion. I'm not sure if this has always been the case with every version of SQL Server, but since moving to SQL 2016 and putting the drives on a PureStorage SSD array, things run a bit differently. Processes are typically CPU bound rather than I/O bound, and explicitly dropping the temp objects results in no issues with log growth.
While I haven't dug in too deeply as to why, I suspect it's not unlike garbage collection in the .NET world, where it's synchronous when called explicitly and asynchronous when left to the system. This would matter because the explicit drop would release the storage in the log file and make it available at the next log backup, whereas this appears not to be the case when not explicitly dropping the object. On most systems this is likely not a big issue, but on a system supporting a high-volume ERP and web storefront with many concurrent transactions and heavy TempDB use, it has had a big impact.
As for why to create the TempDB objects in the first place: with the amount of data in most of the queries, it would spill over into TempDB storage anyway, so it's usually more efficient to create the object with the necessary indexes rather than let the system handle it automatically.
In a multi-threaded scenario where each thread creates its own set of tables and the number of threads is throttled, not dropping your own tables means that the governor will consider your thread done and spawn more threads. However, the temp tables are still around (and thus so are the connections to the server), so you'll exceed the limits of your governor. If you manually drop the temp tables, the thread doesn't finish until they've been dropped and no new threads are spawned, maintaining the governor's ability to keep from overwhelming the SQL engine.
In my view, there is no need to drop temp tables explicitly. SQL Server will handle dropping temp tables stored in tempdb in case of a shortage of space to process a query.

Why are rollbacks needed?

Why are rollbacks so important?
Is it to prevent data (like data in a SQL DB) from being in an inconsistent state?
If so, how come the data "store" (the SQL DB or whatever) made it possible for the data to end up in a corrupt state in the first place?
Are there data storage mechanisms that don't have a need for "rollbacks"?
Rollbacks are important in case any kind of error appears during database operation. They can really save the day when the database server crashes or a critical exception is thrown in an application that modifies the contents of the DB. When a significant DB operation is performed (i.e. updates, inserts, etc.) and the process is broken in the middle, it would be very hard to trace which operations were successful, and using the DB afterward would be very complicated.
The "store" itself does not generally have a built-in mechanism for consistency control - this is exactly why we use rollbacks and transactions. This can be perceived as a sort of 'live backup' mechanism.
There are cases when you need to insert/update data in many related tables - if you didn't have transactional logic, then any error somewhere in the middle of the process could leave the data inconsistent.
Simple example: say you need to insert both order header data into the orders table and order lines into the lines table. You insert the order header, read the identity, and start inserting order lines - but this second insert fails for whatever reason. The only reliable way to recover from this situation is to roll back the first insert - either explicitly (when your connection to the DB is alive) or implicitly (when the link has gone down).
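A rough sketch of that pattern in T-SQL (the Orders/OrderLines tables and the variables are just placeholders for the order header and lines described above):
DECLARE @CustomerID INT = 42, @ProductID INT = 7, @Qty INT = 1;

BEGIN TRY
    BEGIN TRANSACTION;

    INSERT INTO Orders (CustomerID) VALUES (@CustomerID);
    DECLARE @OrderID INT = SCOPE_IDENTITY();   -- assumes an identity column on Orders

    INSERT INTO OrderLines (OrderID, ProductID, Qty)
    VALUES (@OrderID, @ProductID, @Qty);

    COMMIT TRANSACTION;                        -- both inserts succeed together
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;                  -- or both are undone, including the header row
    THROW;
END CATCH;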

Are all deadlocks caused by a bad query

"Transaction (Process ID 63) was deadlocked on lock | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly."
Could this deadlock be caused by something that the stored proc uses, like SQL Mail? Or is it always caused by something like two applications accessing the same table at the same time?
Two processes accessing the same table at the same time happens all the time in an application. Generally that won't cause a deadlock. A deadlock typically happens when you have, say, process 'A' attempting to update Table 1, then Table 2, and then Table 3, while process 'B' attempts to update Table 3, then Table 2, and then Table 1. Process 'A' will have a resource locked that process 'B' needs, and process 'B' has a resource locked that process 'A' needs. SQL Server detects this as a deadlock and rolls one of the processes back as a failed transaction.
The bottom line is that you have two processes attempting to update the same tables at the same time, but not in the same order. This will often lead to deadlocks.
One easy way to handle this in your application is to handle the failed transaction and simply re-execute the transaction. It will almost always execute successfully. A better solution is to make sure your processes are updating tables in the same order, as much as possible.
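As a rough sketch of the retry idea in T-SQL (error 1205 is SQL Server's deadlock-victim error; the UPDATE here is just a stand-in for your real work):
DECLARE @Retries INT = 3;

WHILE @Retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;

        -- ... the statements that occasionally get picked as the deadlock victim ...
        UPDATE Table1 SET SomeCol = SomeCol WHERE ID = 1;   -- placeholder work

        COMMIT TRANSACTION;
        BREAK;                                  -- success, stop retrying
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;

        IF ERROR_NUMBER() = 1205 AND @Retries > 1
            SET @Retries -= 1;                  -- deadlock victim: try again
        ELSE
            THROW;                              -- any other error, or out of retries: re-raise
    END CATCH;
END;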
Missing Indexes is another common cause of deadlocks. If a select query can get the info it needs from an index instead of the base table, then it won't be blocked by any updates/inserts on the table itself.
To find out for sure, use the SQL profiler to trace for "Deadlock Graph" events, which will show you the detail of the deadlock itself.
Based on this, I don't think SQL Mail itself would directly be the culprit. I say "directly" because I don't know what you're doing with it. However, I assume SQL Mail is probably slow compared to the rest of your SQL ops, so if you're doing a lot with that, it could indirectly create a bottleneck that leads to a deadlock if you're holding onto tables while sending off the SQL Mail.
It's hard to recommend a specific strategy without more specifics about what you're doing. The short of it is that you should consider whether there's a way to break your dependence on holding onto the table while you're doing this, such as using NOLOCK, using a temp table or a non-temp "holding" table, or just refactoring the SP that is doing the call.

database record locking

I have a server application, and a database. Multiple instances of the server can run at the same time, but all data comes from the same database (on some servers it is postgresql, in other cases ms sql server).
In my application, there is a process that is performed which can take hours. I need to ensure that this process is only executed one at a time. If one server is processing, no other server instance can process until the first one has completed.
The process depends on one table (let's call it 'ProcessTable'). What I do is, before any server starts the hour-long process, I set a boolean flag in the ProcessTable which indicates that this record is 'locked' and is being processed (not all records in this table are processed / locked, so I need to specifically mark each record which is needed by the process). So when the next server instance comes along while the previous instance is still processing, it sees the boolean flags and throws an exception.
The problem is that two server instances might be activated at nearly the same time, and when both check the ProcessTable there may not be any flags set yet, because both servers are still in the process of setting the flags; since the transaction hasn't committed for either process, neither process will see the locking done by the other. The locking mechanism itself may take a few seconds, so there is a window of opportunity where two servers might still be able to process at the same time.
It appears that what I need is a single record in my 'Settings' table which should store a boolean flag called 'LockInProgress'. So before even a server can lock the needed records in the ProcessTable, it first must make sure that it has full rights to do the locking by checking the 'LockInProgress' column in the Settings table.
So my question is, how do I prevent two servers from both modifying that LockInProgress column in the settings table, at the same time... or am I going about this in the wrong manner?
Please note that I need to support both postgresql and ms sql server as some servers use one database, and some servers use the other.
Thanks in advance...
How about obtaining a lock on the record first and then updating the record to show "locked"? This would prevent the 2nd instance from getting the lock successfully, so its update of the record fails.
The point is to make sure the lock and the update happen as one atomic step.
Make a stored procedure that hands out the lock, and run it under 'serializable' isolation. This will guarantee that one and only one process can get at the resource at any given time.
Note that this means that the second process trying to get at the lock will block until the first process releases it. Also, if you have to get multiple locks in this manner, make sure that the design of the process guarantees that the locks will be acquired and released in the same order. This will avoid deadlock situations where two processes hold resources while waiting for each other to release locks.
Unless you can't deal with your other processes blocking, this would probably be easier to implement and more robust than attempting to implement 'test and set' semantics.
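One possible shape for such a procedure in T-SQL (the Settings table and LockInProgress flag follow the question; the procedure name and @Acquired output are made up, and the syntax would need adjusting for PostgreSQL):
CREATE PROCEDURE dbo.AcquireProcessLock
    @Acquired BIT OUTPUT
AS
BEGIN
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    BEGIN TRANSACTION;

    -- Only one caller at a time can flip the flag from 0 to 1
    UPDATE Settings
    SET    LockInProgress = 1
    WHERE  LockInProgress = 0;

    SET @Acquired = CASE WHEN @@ROWCOUNT = 1 THEN 1 ELSE 0 END;

    COMMIT TRANSACTION;
END;
A caller that gets @Acquired = 0 knows another server instance already holds the lock and can back off or raise an exception, as described in the question.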
I've been thinking about this, and I think this is the simplest way of doing things; I just execute a command like this:
update settings set settingsValue = '333' where settingsKey = 'ProcessLock' and settingsValue = '0'
'333' would be a unique value which each server process gets (based on date/time, server name, + random value etc).
If no other process has locked the table, then settingsValue will be equal to '0', and the statement will update it.
If another process has already locked the table, then the statement becomes a no-op and nothing gets modified.
I then immediately commit the transaction.
Finally, I requery the table for the settingsValue, and if it is the correct value, then our lock succeeded and we continue on, otherwise an exception is thrown, etc. When we're done with the lock, we reset the value back down to 0.
Since I'm using the SERIALIZABLE transaction isolation level, I can't see this causing any issues... please correct me if I'm wrong.
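A sketch of the full sequence described above, assuming explicit transactions (names as in the statement above, with '333' standing in for the per-server token):
-- Try to take the lock; this only changes the row if nobody holds it
UPDATE settings
SET    settingsValue = '333'
WHERE  settingsKey = 'ProcessLock'
  AND  settingsValue = '0';
COMMIT;

-- Verify we actually got it
SELECT settingsValue
FROM   settings
WHERE  settingsKey = 'ProcessLock';   -- '333' means this server owns the lock

-- ... run the hour-long process ...

-- Release the lock when finished
UPDATE settings
SET    settingsValue = '0'
WHERE  settingsKey = 'ProcessLock'
  AND  settingsValue = '333';
COMMIT;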
