DB2 Logfile Limitation, SQLCODE: -964

I tried a huge INSERT query in DB2:
INSERT INTO MY_TABLE_COPY ( SELECT * FROM MY_TABLE);
Before that, I set the following:
UPDATE DATABASE CONFIGURATION FOR MY_DB USING LOGFILSIZ 70000;
UPDATE DATABASE CONFIGURATION FOR MY_DB USING LOGPRIMARY 50;
UPDATE DATABASE CONFIGURATION FOR MY_DB USING LOGSECOND 2;
db2stop force;
db2start;
and I got this error:
DB21034E The command was processed as an SQL statement because it was not a
valid Command Line Processor command. During SQL processing it returned:
SQL0964C The transaction log for the database is full. SQLSTATE=57011
SQL0964C The transaction log for the database is full.
Explanation:
All space in the transaction log is being used.
If a circular log with secondary log files is being used, an
attempt has been made to allocate and use them. When the file
system has no more space, secondary logs cannot be used.
If an archive log is used, then the file system has not provided
space to contain a new log file.
The statement cannot be processed.
User Response:
Execute a COMMIT or ROLLBACK on receipt of this message (SQLCODE)
or retry the operation.
If the database is being updated by concurrent applications,
retry the operation. Log space may be freed up when another
application finishes a transaction.
Issue more frequent commit operations. If your transactions are
not committed, log space may be freed up when the transactions
are committed. When designing an application, consider when to
commit the update transactions to prevent a log full condition.
If deadlocks are occurring, check for them more frequently.
This can be done by decreasing the database configuration
parameter DLCHKTIME. This will cause deadlocks to be detected
and resolved sooner (by ROLLBACK) which will then free log
space.
If the condition occurs often, increase the database
configuration parameter to allow a larger log file. A larger log
file requires more space but reduces the need for applications to
retry the operation.
If installing the sample database, drop it and install the
sample database again.
sqlcode : -964
sqlstate : 57011
Any suggestions?

I used the maximum values for LOGFILSIZ, LOGPRIMARY, and LOGSECOND.
The max value for LOGFILSIZ may differ between Windows, Linux, etc., but you can try a very big number and the DB will tell you what the maximum is. In my case it was 262144.
Also, LOGPRIMARY + LOGSECOND <= 256. I tried 128 for each and it worked for my huge query.
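For reference, this is roughly the sequence that worked for me (MY_DB is a placeholder, and the maxima may differ on your platform, so treat the numbers as a sketch rather than a recipe):
UPDATE DATABASE CONFIGURATION FOR MY_DB USING LOGFILSIZ 262144;
UPDATE DATABASE CONFIGURATION FOR MY_DB USING LOGPRIMARY 128;
UPDATE DATABASE CONFIGURATION FOR MY_DB USING LOGSECOND 128;
db2stop force;
db2start;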

Instead of trial and error with the DB CFG parameters, you can put the INSERT statements in a stored procedure that commits at intervals; a minimal sketch follows the link below.
Refer to the following post for details; it might help:
https://prasadspande.wordpress.com/2014/06/06/plsql-ways-updatingdeleting-the-bulk-data-from-the-table/
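A minimal sketch of the idea in DB2 SQL PL, assuming the source table has a primary key column ID (the procedure name, chunk size, and key column are placeholders, not anything taken from the linked post):
-- Run from the CLP with an alternate statement terminator, e.g. db2 -td@ -vf copy_chunks.sql
CREATE OR REPLACE PROCEDURE COPY_IN_CHUNKS ()
LANGUAGE SQL
BEGIN
  DECLARE v_rows INTEGER DEFAULT 1;
  WHILE v_rows > 0 DO
    -- copy at most 10000 rows that are not yet present in the target table
    INSERT INTO MY_TABLE_COPY
      SELECT * FROM (
        SELECT * FROM MY_TABLE T
        WHERE NOT EXISTS (SELECT 1 FROM MY_TABLE_COPY C WHERE C.ID = T.ID)
        FETCH FIRST 10000 ROWS ONLY
      ) AS X;
    GET DIAGNOSTICS v_rows = ROW_COUNT;
    COMMIT;  -- keep each unit of work small so log space can be reused
  END WHILE;
END@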
Thanks

Related

SQL Server: Truncation of transaction logs in the FULL recovery model

I found a link that explains the main factors of the transaction log very well, but there is one statement I don't completely understand:
The FULL recovery model means that every part of every operation is
logged, which is called being fully logged. Once a full database
backup has been taken in the FULL recovery model, the transaction log
will not automatically truncate until a log backup is taken. If you do
not want to make use of log backups and the ability to recover a
database to a specific point in time, do not use the FULL recovery
model. However, if you wish to use database mirroring, then you have
no choice, as it only supports the FULL recovery model.
My questions are:
Will the transaction logs get truncated if I have a database in the FULL recovery model but have taken neither a full backup nor a log backup? Will the free space be overwritten after the next checkpoint? And when will those checkpoints be reached? Do I need to set a size limit for the transaction logs to force the truncation or not?
Many thanks in advance
When your database is in the FULL recovery model, only a log backup frees space in the log file.
This space won't be released to the file system, but it will be internally marked as free so that new transactions can reuse it.
Will the free space be overwritten after the next checkpoint? And when will those checkpoints be reached? Do I need to set a size limit for the transaction logs to force the truncation or not?
You don't need to do anything; just ensure log backups are taken at a frequency that matches your requirements.
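As a hedged sketch of that cycle (MyDB and the backup paths are placeholders):
-- A full backup starts the log chain; after that, periodic log backups free space inside the log
BACKUP DATABASE MyDB TO DISK = N'D:\Backup\MyDB_full.bak';
BACKUP LOG MyDB TO DISK = N'D:\Backup\MyDB_log.trn';
-- Check what, if anything, is still preventing log truncation
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyDB';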

SQL server, pyodbc and deadlock errors

I have some code that writes Scrapy-scraped data to a SQL Server DB. The data items consist of some basic hotel data (name, address, rating, ...) and a list of rooms with associated data (price, occupancy, etc.). There can be multiple Celery threads and multiple servers running this code and simultaneously writing different items to the DB. I am encountering deadlock errors like:
[Failure instance: Traceback: <class 'pyodbc.ProgrammingError'>:
('42000', '[42000] [FreeTDS][SQL Server]Transaction (Process ID 62)
was deadlocked on lock resources with another process and has been
chosen as the deadlock victim. Rerun the transaction. (1205) (SQLParamData)')
The code that actually does the insert/update schematically looks like this:
1) Check if hotel exists in hotels table, if it does update it, else insert it new.
Get the hotel id either way. This is done by `curs.execute(...)`
2) Python loop over the hotel rooms scraped. For each room check if room exists
in the rooms table (which is foreign keyed to the hotels table).
If not, then insert it using the hotel id to reference the hotels table row.
Else update it. These upserts are done using `curs.execute(...)`.
It is a bit more complicated than this in practice, but this illustrates that the Python code is using multiple curs.executes before and during the loop.
If, instead of upserting the data in the above manner, I generate one big SQL command that does the same thing (checks for the hotel, upserts it, records the id in a temporary variable, checks each room for existence and upserts it against the hotel id variable, etc.) and then do only a single curs.execute(...) in the Python code, I no longer see deadlock errors.
However, I don't really understand why this makes a difference, and I'm also not entirely sure it is safe to run big SQL blocks with multiple SELECTs, INSERTs and UPDATEs in a single pyodbc curs.execute. As I understand it, pyodbc is supposed to handle only single statements; however, it does seem to work, and I see my tables populated with no deadlock errors.
Nevertheless, it seems impossible to get any output if I do a big command like this. I tried declaring a variable @output_string and recording various things in it (did we have to insert or update the hotel, for example) before finally doing SELECT @output_string AS outputstring, but doing a fetch after the execute in pyodbc always fails with
<class 'pyodbc.ProgrammingError'>: No results. Previous SQL was not a query.
Experiments within the shell suggest pyodbc ignores everything after the first statement:
In [11]: curs.execute("SELECT 'HELLO'; SELECT 'BYE';")
Out[11]: <pyodbc.Cursor at 0x7fc52c044a50>
In [12]: curs.fetchall()
Out[12]: [('HELLO', )]
So if the first statement is not a query you get that error:
In [13]: curs.execute("PRINT 'HELLO'; SELECT 'BYE';")
Out[13]: <pyodbc.Cursor at 0x7fc52c044a50>
In [14]: curs.fetchall()
---------------------------------------------------------------------------
ProgrammingError Traceback (most recent call last)
<ipython-input-14-ad813e4432e9> in <module>()
----> 1 curs.fetchall()
ProgrammingError: No results. Previous SQL was not a query.
Nevertheless, except for the inability to fetch my @output_string, my real "big query", consisting of multiple SELECTs, UPDATEs and INSERTs, actually works and populates multiple tables in the DB.
Nevertheless, if I try something like
curs.execute('INSERT INTO testX (entid, thecol) VALUES (4, 5); INSERT INTO testX (entid, thecol) VALUES (5, 6); SELECT * FROM testX;')
I see that both rows were inserted into the table testX, even though a subsequent curs.fetchall() fails with the "Previous SQL was not a query." error, so it seems that pyodbc's execute does execute everything, not just the first statement.
If I can trust this, then my main problem is how to get some output for logging.
EDIT: Setting autocommit=True in the dbargs seems to prevent the deadlock errors, even with the multiple curs.executes. But why does this fix it?
Setting autocommit=True in the dbargs seems to prevent the deadlock errors, even with the multiple curs.executes. But why does this fix it?
When establishing a connection, pyodbc defaults to autocommit=False in accordance with the Python DB-API spec. Therefore when the first SQL statement is executed, ODBC begins a database transaction that remains in effect until the Python code does a .commit() or a .rollback() on the connection.
The default transaction isolation level in SQL Server is "Read Committed". Unless the database is configured to support SNAPSHOT isolation by default, a write operation within a transaction under Read Committed isolation will place transaction-scoped locks on the rows that were updated. Under conditions of high concurrency, deadlocks can occur if multiple processes generate conflicting locks. If those processes use long-lived transactions that generate a large number of such locks then the chances of a deadlock are greater.
Setting autocommit=True will avoid the deadlocks because each individual SQL statement will be automatically committed, thus ending the transaction (which was automatically started when that statement began executing) and releasing any locks on the updated rows.
So, to help avoid deadlocks, you can consider a few different strategies:
continue to use autocommit=True, or
have your Python code explicitly .commit() more often, or
use SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED to "loosen up" the transaction isolation level and avoid the persistent locks created by write operations, or
configure the database to use SNAPSHOT isolation which will avoid lock contention but will make SQL Server work harder.
You will need to do some homework to determine the best strategy for your particular usage case.
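For the last two strategies, the statements involved look roughly like this (a sketch only; MyDB is a placeholder, and snapshot isolation adds row-versioning load on tempdb):
-- Loosen the isolation level for the scraping session only
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
-- Or allow SNAPSHOT isolation at the database level, then request it per session
ALTER DATABASE MyDB SET ALLOW_SNAPSHOT_ISOLATION ON;
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;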

Why can't I shrink transaction log?

I’m converting some historic databases to read-only and trying to clean them up. I’d like to shrink the transaction logs to 1MB. I realize it’s normally considered bad practice to shrink transaction logs, but I think this is probably the exception to the rule.
The databases are set to SIMPLE recovery on SQL Server 2012 Standard, so I would have expected that after issuing a CHECKPOINT statement the contents of the transaction log could be shrunk, but that's not working.
I have tried:
Manually issuing CHECKPOINT commands.
Detaching/attaching files.
Backing up / restoring database.
Switching from Simple, to Full, back to Simple recovery.
Shaking my fist at it in a threatening manner.
After each of those attempts I tried running:
DBCC SHRINKFILE (N'MYDATABASE_2010_log' , 0)
DBCC SHRINKFILE (N'MYDATABASE_2010_log' , 0, TRUNCATEONLY)
DBCC SHRINKFILE (N'MYDATABASE_2010_log' , 1)
I’ve seen this error message a couple times:
Cannot shrink log file 2 (MYDATABASE_2010_log) because total number of
logical log files cannot be fewer than 2.
At one point I tried creating a dummy table and adding records to it in an attempt to get the transaction log to roll over and move to the end of the file, but that was just a wild guess.
Here are the results of DBCC SQLPERF(LOGSPACE)
Database Name Log Size (MB) Log Space Used (%) Status
MyDatabase_2010 10044.13 16.71015 0
Here are the results of DBCC LOGINFO:
RecoveryUnitId FileId FileSize StartOffset FSeqNo Status Parity CreateLSN
0 2 5266014208 8192 15656 0 64 0
0 2 5266014208 5266022400 15673 2 128 0
Does anyone have any idea what I'm doing wrong?
If you are unable to truncate and shrink the log file, the first thing you should do is check whether there is a real reason that prevents the log from being truncated. Execute this query:
SELECT name,
       log_reuse_wait,
       log_reuse_wait_desc
FROM sys.databases AS D
You can filter by the database name.
If the value of log_reuse_wait is 0, the database log can be truncated. If the value is other than 0, then something is preventing the truncation. See the description of the log reuse wait values in the docs for sys.databases, or even better here: Factors That Can Delay Log Truncation. If the value is 1 you can wait for the checkpoint, or run it by hand: CHECKPOINT.
Once you have checked that there is no reason preventing the log file truncation, you can do the usual sequence of a backup (log, full or differential) and DBCC SHRINKDATABASE or DBCC SHRINKFILE. The file may or may not shrink at this point.
If at this point the file has not shrunk, don't worry: the reason is the physical structure of the log file, and it can be solved.
The log file works as a circular buffer and can only be truncated by removing the end of the file. If the used part of the circular buffer is at the end of the file, then it cannot be truncated. You simply have to wait until the used part of the transaction log advances and moves from the end of the file to the beginning. Once this happens, you can run one of the shrink commands and your file will shrink without a glitch. This is very well explained on this page: How to shrink the SQL Server log.
If you want to force the active part of the log to move from the end to the start of the file (a sketch of the sequence follows this list):
do some quite heavy operation on the DB inside a transaction and roll it back, to move the transaction log pointer further
repeat the backup, to truncate the log
shrink the file. If the active part of the log moved far enough, the file will shrink
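A hedged sketch of that sequence, using the log file name from the question (the dummy update is a placeholder for any sufficiently heavy operation; under the SIMPLE recovery model a CHECKPOINT plays the role of the log backup):
-- 1) Generate some log activity inside a transaction and roll it back
BEGIN TRAN;
UPDATE dbo.SomeBigTable SET SomeColumn = SomeColumn;  -- placeholder heavy, harmless operation
ROLLBACK;
-- 2) Truncate the now-inactive part of the log (BACKUP LOG ... under FULL recovery)
CHECKPOINT;
-- 3) Retry the shrink
DBCC SHRINKFILE (N'MYDATABASE_2010_log', 1);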
Allowing for the usual caveats about backing up beforehand, I found the answer at SQLServerCentral:
DETACH the database, RENAME the log file, ATTACH the database using:
CREATE DATABASE xxxx ON (FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\DATA\xxxx.MDF') FOR ATTACH_REBUILD_LOG;
That's a new error to me. However, the course seems clear; expand the log file by a trivial amount to get some new VLFs created and then induce some churn in your db so that the current large VLF isn't the active one.
The first two VLFs are 5GB in size each. You somehow need to get rid of them. I can think of no sequence of shrinks and growths that would do that. I have never heard of the possibility of splitting or shrinking a VLF.
Just create a new log file; a database can have multiple. Then delete the old log file, as sketched below.
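A minimal sketch of that approach, with placeholder file names and paths (a log file can only be removed once it no longer contains any active portion of the log):
-- Add a second, small log file (on the same or another drive)
ALTER DATABASE MyDatabase_2010
ADD LOG FILE (NAME = N'MyDatabase_2010_log2',
              FILENAME = N'D:\Data\MyDatabase_2010_log2.ldf',
              SIZE = 100MB);
-- Later, once the active log has moved to the new file and the old one is empty:
ALTER DATABASE MyDatabase_2010 REMOVE FILE [MYDATABASE_2010_log];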

The transaction log for database 'Name' is full. To find out why space in the log cannot be reused, see the log_reuse_wait_desc column in sys.databases

I am getting the following error while trying to insert 8,355,447 records with a single INSERT query. I use SQL Server 2008 R2.
INSERT INTO table
select * from [DbName].table
Please help me solve this. Thanks.
Check the disk space on the SQL Server as typically this occurs when the transaction log cannot expand due to a lack of free disk space.
If you are struggling for disk space, you can shrink the transaction logs of your application databases and also don't forget to shrink the transaction log of the TEMPDB database.
There may be more than one option available:
Is it necessary to run this insert as a single transaction? If that is not mandatory, you can insert, say, 'n' rows as a single transaction, then the next 'n', and so on (a sketch of this appears at the end of this answer).
If you can spare some space on another drive, create an additional log file on that drive.
Check whether you can clear some space on the drive under consideration by moving other files elsewhere.
If none of the previous options are open, shrink the SQL transaction log files with the TRUNCATEONLY option (this releases the free space available at the end of the log file to the OS).
DBCC SQLPERF('LOGSPACE') can be used to find the log files with the most free space in them.
USE those databases and apply a shrink. The command is:
DBCC SHRINKFILE (<logical_log_file_name>, TRUNCATEONLY)
DBCC SHRINKFILE details are available here: DBCC SHRINKFILE.
If you are not getting space even after that, you may have to do a more rigorous shrink by re-allocating pages within the database (by specifying a target size); details of this can be taken from the link provided.
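A hedged sketch of the batched-insert idea from the first option, assuming the source and target tables share a key column Id (the table names, column name, and batch size are placeholders):
-- Copy the rows in batches; with autocommit each INSERT is its own transaction,
-- so log space can be reused between batches (under SIMPLE recovery; under FULL
-- recovery you still need log backups in between).
WHILE 1 = 1
BEGIN
    INSERT INTO dbo.TargetTable
    SELECT TOP (50000) s.*
    FROM [DbName].dbo.SourceTable AS s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.TargetTable AS t WHERE t.Id = s.Id);

    IF @@ROWCOUNT = 0 BREAK;  -- nothing left to copy
END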
Well, clean up the transaction log. The error is extremely clear if anyone cares to read it.

SQL Server 2005 - manage concurrency on tables

In an ASP.NET application I've got this process:
Start a connection
Start a transaction
Insert into a table "LoadData" a lot of values with the SqlBulkCopy class with a column that contains a specific LoadId.
Call a stored procedure that:
reads the table "LoadData" for the specific LoadId;
for each line does a lot of calculations, which involves reading dozens of tables, and writes the results into a temporary (#temp) table (a process that lasts several minutes);
deletes the lines in "LoadData" for the specific LoadId;
and once everything is done, writes the result into the result table.
Commit transaction or rollback if something fails.
My problem is that if I have two users who start the process, the second one has to wait until the first has finished (because the insert seems to put an exclusive lock on the table), and my application sometimes times out (and the users are not happy to wait :) ).
I'm looking for a way to let the users run everything in parallel, as there is no interaction between them except for the last step: writing the result. I think what is blocking me is the inserts/deletes in the "LoadData" table.
I checked the other transaction isolation levels but it seems that nothing could help me.
What would be perfect would be to be able to remove the exclusive lock on the "LoadData" table when the insert is finished (is it possible to force SQL Server to lock only rows and not the table?), but without ending the transaction.
Any suggestion?
Look up READ COMMITTED SNAPSHOT isolation (the READ_COMMITTED_SNAPSHOT database option) in Books Online.
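A hedged sketch of enabling it (the database name is a placeholder; the ALTER needs exclusive access to the database, hence WITH ROLLBACK IMMEDIATE here):
-- With this option on, readers under the default READ COMMITTED level read row
-- versions instead of waiting on writers' locks, so one user's reads are no longer
-- blocked by another user's inserts/deletes in "LoadData".
ALTER DATABASE MyAppDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;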
Transactions should cover small and fast-executing pieces of SQL / code. They have a tendency to be implemented differently on different platforms. They will lock tables and then expand the lock as the modifications grow, thus locking out the other users from querying or updating the same row / page / table.
Why not forget the transaction and handle processing errors in another way? Is your data integrity truly being secured by the transaction, or can you do without it?
If you're sure that there is no issue with concurrent operations except the last part, why not start the transaction just before those last statements (whichever they are that DO require isolation), and commit immediately after they succeed? Then all the upfront read operations will not block each other.
