InnoDB deadlock when a transaction is waiting for a lock it has already acquired

Transaction (2) holds the following lock:
RECORD LOCKS space id 11404 page no 1144152 n bits 72 index PRIMARY of table [tableName] /* Partition [tableName]_p59 */ trx id 28648068046 lock_mode X
Record lock, heap no 1 PHYSICAL RECORD: n_fields 1; compact format; info bits 0
0: len 8; hex 73757072656d756d; asc supremum;;
I don't know exactly how it acquired it.
Transaction (1) wants the same lock:
RECORD LOCKS space id 11404 page no 1144152 n bits 72 index PRIMARY of table [tableName] /* Partition [tableName]_p59 */ trx id 28648068030 lock_mode X insert intention waiting
Record lock, heap no 1 PHYSICAL RECORD: n_fields 1; compact format; info bits 0
0: len 8; hex 73757072656d756d; asc supremum;;
A batch insert is happening here in Transaction (1).
Transaction (2) then also wants the same lock again:
RECORD LOCKS space id 11404 page no 1144152 n bits 72 index PRIMARY of table [tableName] /* Partition [tableName]_p59 */ trx id 28648068046 lock_mode X insert intention waiting
Record lock, heap no 1 PHYSICAL RECORD: n_fields 1; compact format; info bits 0
0: len 8; hex 73757072656d756d; asc supremum;;
A batch insert is happening here in Transaction (2).
The only difference is that the waiting locks are insert intention locks, and I do not know exactly how those work. I have a composite primary key consisting of an id and a date.
What can cause this type of deadlock?
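For what it's worth, here is a minimal sketch of one common pattern that produces exactly this deadlock (the table name and values are hypothetical). Both transactions first take a next-key lock covering the supremum record, for example through a locking range read or a duplicate-key check in a batch insert, and then each tries to insert into the gap the other has locked:
-- Sessions 1 and 2, interleaved; t has PRIMARY KEY (id, dt)
-- Assume no rows currently have id > 1000, so both SELECTs lock only the gap before the supremum.
-- Both sessions:
START TRANSACTION;
SELECT * FROM t WHERE id > 1000 FOR UPDATE; -- each takes a next-key lock up to the supremum (gap locks do not conflict with each other)
-- Session 1:
INSERT INTO t (id, dt) VALUES (1001, NOW()); -- needs an insert intention lock; blocked by session 2's gap lock
-- Session 2:
INSERT INTO t (id, dt) VALUES (1002, NOW()); -- blocked by session 1's gap lock: deadlock
The insert intention lock is what an INSERT takes on the gap before inserting; it is compatible with other insert intention locks but must wait for any other transaction's gap or next-key lock on the same gap, which is why two batch inserts that both scanned the same range can deadlock each other.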

Related

SQL Server - poor performance during Insert transaction

I have a stored procedure which executes a query and returns the row into variables, like below:
SELECT @item_id = I.ID, @label_id = SL.label_id
FROM tb_A I
LEFT JOIN tb_B SL ON I.ID = SL.item_id
WHERE I.NUMBER = @VAR
I have an IF to check whether @label_id is null. If it is null, execution goes to an INSERT statement; otherwise it goes to an UPDATE statement. Let's focus on the INSERT, where I know I'm having problems. The INSERT part looks like this:
IF @label_id IS NULL
BEGIN
INSERT INTO tb_B (item_id, label_qrcode, label_barcode, data_leitura, data_inclusao)
VALUES (@item_id, @label_qrcode, @label_barcode, @data_leitura, GETDATE())
END
So, tb_B has a PK on the ID column and an FK on the item_id column which references column ID in table tb_A.
I ran SQL Server Profiler and saw that this stored procedure sometimes takes around 2300 ms, while the normal average is 16 ms.
I also looked at the execution plan, and the biggest cost is in the "Clustered Index Insert" operator.
(Screenshots of the estimated execution plan, the actual execution plan, and the operator details are omitted here.)
More details about the tables:
tb_A Storage:
Index space: 6.853,188 MB
Row count: 45988842
Data space: 5.444,297 MB
tb_B Storage:
Index space: 1.681,688 MB
Row count: 15552847
Data space: 1.663,281 MB
Statistics for INDEX 'PK_tb_B'.
Name     Updated             Rows      Rows Sampled  Steps  Density  Average Key Length  String Index  Unfiltered Rows
-------  ------------------  --------  ------------  -----  -------  ------------------  ------------  ---------------
PK_tb_B  Sep 23 2018 2:30AM  15369616  15369616      5      1        4                   NO            15369616

All Density   Average Length  Columns
------------  --------------  -------
6.506343E-08  4               id

Histogram Steps
RANGE_HI_KEY  RANGE_ROWS  EQ_ROWS  DISTINCT_RANGE_ROWS  AVG_RANGE_ROWS
------------  ----------  -------  -------------------  --------------
1             0           1        0                    1
8192841       8192198     1        8192198              1
8270245       65535       1        65535                1
15383143      7111878     1        7111878              1
15383144      0           1        0                    1
Statistics for INDEX 'IDX_tb_B_ITEM_ID'.
Name              Updated             Rows      Rows Sampled  Steps  Density  Average Key Length  String Index  Unfiltered Rows
----------------  ------------------  --------  ------------  -----  -------  ------------------  ------------  ---------------
IDX_tb_B_ITEM_ID  Sep 23 2018 2:30AM  15369616  15369616      12     1        7.999424            NO            15369616

All Density   Average Length  Columns
------------  --------------  -----------
6.50728E-08   3.999424        item_id
6.506343E-08  7.999424        item_id, id

Histogram Steps
RANGE_HI_KEY  RANGE_ROWS  EQ_ROWS  DISTINCT_RANGE_ROWS  AVG_RANGE_ROWS
------------  ----------  -------  -------------------  --------------
0             2214        0        1
16549857      0           1        0                    1
29907650      65734       1        65734                1
32097131      131071      1        131071               1
32296132      196607      1        196607               1
32406913      98303       1        98303                1
40163331      7700479     1        7700479              1
40237216      65535       1        65535                1
47234636      6946815     1        6946815              1
47387143      131071      1        131071               1
47439431      31776       1        31776                1
47439440      0           1        0                    1
(Screenshots of index fragmentation for PK_tb_B and IDX_tb_B_ITEM_ID are omitted here.)
Are there any best practices I can apply to make this execution duration stable?
Hope you can help me!
Thanks in advance...
The problem is probably the data type of the clustered index key. A clustered index stores the table's data in the order of the key values, and by default your primary key is created with a clustered index. This is often the best place to have it,
but not always. If you have, for example, a clustered index over an NVARCHAR column, every INSERT needs to find the right place within the existing order for the new record. For example, if your table has one million rows ordered alphabetically and your new record's key starts with A, it has to be slotted in near the beginning, which forces page splits to make room; a key starting with Z causes less disruption, but that doesn't mean it's fine either. If you don't have a column that lets you insert new records sequentially, you can create an identity column for this, or use another column that is logically sequential for any transaction entered regardless of the source system, for example a datetime column that records the time at which the insert occurs.
If you want more info, please check the Microsoft documentation on clustered index design.
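To illustrate that advice, here is a sketch of the table clustered on an ever-increasing IDENTITY key, so inserts always append at the end of the index; the column types and lengths are assumptions, since the question does not show tb_B's full definition:
-- Hypothetical re-definition of tb_B with an append-only clustered key
CREATE TABLE tb_B_example
(
    id INT IDENTITY(1,1) NOT NULL,
    item_id INT NOT NULL,                              -- FK to tb_A.ID
    label_qrcode VARCHAR(100) NULL,                    -- assumed type/length
    label_barcode VARCHAR(100) NULL,                   -- assumed type/length
    data_leitura DATETIME NULL,
    data_inclusao DATETIME NOT NULL DEFAULT GETDATE(),
    CONSTRAINT PK_tb_B_example PRIMARY KEY CLUSTERED (id)
);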

Space consumed by a particular column's data and impact on deleting that column

I am using an Oracle 12c database in my project, and I have a column "Name" of type VARCHAR2(128 CHAR) NOT NULL. I have approximately 25,328,687 rows in my table.
Now I don't need the "Name" column any more, so I want to drop it. When I calculated the total size of the data in this column (using lengthb and vsize) across all rows, it came to approximately 1.07 GB.
Since the max size of the data in this column is specified, shouldn't every row be allocated 128 bytes for this column (ignoring Unicode for simplicity), so that the total space consumed by the column is 128 * number of rows = 3,242,071,936 bytes, or about 3.24 GB?
Oracle VARCHAR2 allocates storage dynamically (the definition says it is a variable-length string data type); the CHAR data type, in contrast, is a fixed-length string data type. Compare:
create table x (a char(5), b varchar2(5));
insert into x values ('RAM', 'RAM');
insert into x values ('RAMA', 'RAMA');
insert into x values ('RAMAN', 'RAMAN');
SELECT * FROM X WHERE length(a) = 3; -- returns 0 records
SELECT * FROM X WHERE length(b) = 3; -- returns 1 record (RAM)
SELECT length(a) len_a, length(b) len_b from x ;
o/p will be like below
len_a | len_b
-------------
5 | 3
5 | 4
5 | 5
Oracle allocates VARCHAR2 storage dynamically.
So a string of 4 characters will take 5 bytes: one byte for the length and 4 bytes for the 4 characters, assuming a single-byte character set.
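You can see the difference directly with VSIZE, which reports the bytes actually stored for a column value (a quick check against the table above):
SELECT vsize(a) AS bytes_a, vsize(b) AS bytes_b FROM x;
-- bytes_a is always 5, because CHAR(5) is blank-padded to full length;
-- bytes_b is 3, 4, and 5 for the three rows, matching the actual string lengths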
As the other answers say, the storage that a VARCHAR2 column uses is VARying. To get an estimate of the actual amount, you can use
1) The data dictionary
SELECT column_name, avg_col_len, last_analyzed
FROM ALL_TAB_COL_STATISTICS
WHERE owner = 'MY_SCHEMA'
AND table_name = 'MY_TABLE'
AND column_name = 'MY_COLUMN';
The result avg_col_len is the average column length in bytes. Multiply it by your number of rows (25,328,687) and you get an estimate of roughly how many bytes this column uses. (If last_analyzed is NULL or very old compared to the last big data change, you'll have to refresh the optimizer stats with DBMS_STATS.GATHER_TABLE_STATS('MY_SCHEMA','MY_TABLE') first.)
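Both steps can be combined into a single query (a sketch that assumes the optimizer statistics, including num_rows, are current):
SELECT s.avg_col_len * t.num_rows AS estimated_bytes
FROM all_tab_col_statistics s
JOIN all_tables t ON t.owner = s.owner AND t.table_name = s.table_name
WHERE s.owner = 'MY_SCHEMA'
AND s.table_name = 'MY_TABLE'
AND s.column_name = 'MY_COLUMN';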
2) Count yourself in sample
SELECT sum(s), count(*), avg(s), stddev(s)
FROM (
SELECT vsize(my_column) as s
FROM my_schema.my_table SAMPLE (0.1)
);
This calculates the storage size of a 0.1 percent sample of your table.
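To project that onto the full table, scale the sample back up; with a 0.1 percent sample the factor is 1000 (a rough estimate, only as reliable as the sample itself):
SELECT sum(vsize(my_column)) * 1000 AS estimated_total_bytes
FROM my_schema.my_table SAMPLE (0.1);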
3) To know for sure, I'd do a test with a subset of the data
CREATE TABLE my_test TABLESPACE my_scratch_tablespace NOLOGGING AS
SELECT * FROM my_schema.my_table SAMPLE (0.1);
-- get the size of the test table in megabytes
SELECT round(bytes/1024/1024) as mb
FROM dba_segments WHERE owner='MY_SCHEMA' AND segment_name='MY_TEST';
-- now drop the column
ALTER TABLE my_test DROP (my_column);
-- and measure again
SELECT round(bytes/1024/1024) as mb
FROM dba_segments WHERE owner='MY_SCHEMA' AND segment_name='MY_TEST';
-- check how much space is freed up after reorganizing the table
ALTER TABLE my_test MOVE;
SELECT round(bytes/1024/1024) as mb
FROM dba_segments WHERE owner='MY_SCHEMA' AND segment_name='MY_TEST';
You could improve the test by using the same PCTFREE and COMPRESSION levels on your test table.

How to use Oracle DB sequences without losing the next sequence number in case of roll-back

Question
How to use Oracle DB sequences without losing the next sequence number in case of roll-back?
Facts collected
1 - In Oracle, we can create a sequence and use two main calls: NEXTVAL, to get the next sequence value, and CURRVAL, to get the current sequence value.
2 - When we call NEXTVAL, we always get the next number, and we lose it if there is a rollback. In other words, an Oracle sequence does not care whether you roll back or commit; whenever you call it, it hands out a new number, as the sketch after this list demonstrates.
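A quick demonstration of fact 2 (the sequence name is hypothetical):
CREATE SEQUENCE serial START WITH 100;
SELECT serial.NEXTVAL FROM dual; -- returns 100
ROLLBACK;
SELECT serial.NEXTVAL FROM dual; -- returns 101; the rollback did not give 100 back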
Possible answers I found so far
1 - I was thinking of creating a simple table with one column of type NUMBER to serve this purpose.
Simply pick the value and use it. If the operation succeeds, I will increment the column value; otherwise, I will keep it as it is for the next application call.
2 - Another way I found here (How do I reset a sequence in Oracle?) is to use (ALTER SEQUENCE) like the following if I want to go one step back.
That is, if the sequence is at 101, I can set it to 100 via
ALTER SEQUENCE serial INCREMENT BY -1;
SELECT serial.NEXTVAL FROM dual;
ALTER SEQUENCE serial INCREMENT BY 1;
Conclusion
Are any of the suggested solutions good? Is there any better approach?
From my point of view, you should use a sequence and stop worrying about gaps.
From your point of view, I'd say that altering the sequence is worse than having a table. Note that access to that table must be serialized, otherwise you'll get duplicate values when two (or more) sessions access it simultaneously.
Here's some sample code; have a look and use/adjust it if you want.
SQL> create table broj (redni_br number not null);
Table created.
SQL>
SQL> create or replace function f_get_broj
2 return number
3 is
4 pragma autonomous_transaction;
5 l_redni_br broj.redni_br%type;
6 begin
7 select b.redni_br + 1
8 into l_redni_br
9 from broj b
10 for update of b.redni_br;
11
12 update broj b
13 set b.redni_br = l_redni_br;
14
15 commit;
16 return (l_redni_br);
17 exception
18 when no_data_found
19 then
20 lock table broj in exclusive mode;
21
22 insert into broj (redni_br)
23 values (1);
24
25 commit;
26 return (1);
27 end f_get_broj;
28 /
Function created.
SQL> select f_get_broj from dual;
F_GET_BROJ
----------
1
SQL> select f_get_broj from dual;
F_GET_BROJ
----------
2
SQL>
You can create a sequence table.
CREATE TABLE SEQUENCE_TABLE
(SEQUENCE_ID NUMBER,
SEQUENCE_NAME VARCHAR2(30 BYTE),
LAST_SEQ_NO NUMBER);
And in your PL/SQL block, you can get the sequence value using the lines of code below:
declare
CURSOR c1 IS
SELECT last_seq_no
FROM sequence_table
WHERE sequence_id = 21
FOR UPDATE NOWAIT;
v_last_seq_no NUMBER;
v_new_seq_no NUMBER;
resource_busy EXCEPTION;
PRAGMA EXCEPTION_INIT(resource_busy, -54);
BEGIN
LOOP
BEGIN
OPEN c1;
FETCH c1 INTO v_last_seq_no;
CLOSE c1;
v_new_seq_no := v_last_seq_no+1;
EXIT;
EXCEPTION
WHEN resource_busy THEN
NULL;
--or something you want to happen
END;
END LOOP;
--after this line, update the sequence table; be sure to commit or roll back at the end of the PL/SQL block
UPDATE sequence_table SET last_seq_no = v_new_seq_no WHERE sequence_id = 21;
END;
/
ROLLBACK;
--or
COMMIT;
Try running the PL/SQL code above in two Oracle sessions to see what happens. When session 1 runs the block, the record queried by the cursor is locked. If another session then runs the same code, its FOR UPDATE NOWAIT raises the resource-busy error, and the loop keeps retrying until session 1 commits or rolls back. This way, two sessions can never end up with the same sequence number, and you keep the choice of not updating the sequence table if you issue a rollback for some reason.

Netezza: ERROR: 65536 : Record size limit exceeded

Can someone please explain the behavior below?
KAP.ADMIN(ADMIN)=> create table char1 ( a char(64000),b char(1516));
CREATE TABLE
KAP.ADMIN(ADMIN)=> create table char2 ( a char(64000),b char(1517));
ERROR: 65536 : Record size limit exceeded
KAP.ADMIN(ADMIN)=> insert into char1 select * from char1;
ERROR: 65540 : Record size limit exceeded
Why does this error occur during the insert, if create table did not throw any error for the same table, as shown above?
KAP.ADMIN(ADMIN)=> \d char1
Table "CHAR1"
Attribute | Type | Modifier | Default Value
-----------+------------------+----------+---------------
A | CHARACTER(64000) | |
B | CHARACTER(1516) | |
Distributed on hash: "A"
./nz_ddl_table KAP char1
Creating table: "CHAR1"
CREATE TABLE CHAR1
(
A character(64000),
B character(1516)
)
DISTRIBUTE ON (A)
;
/*
Number of columns 2
(Variable) Data Size 4 - 65520
Row Overhead 28
====================== =============
Total Row Size (bytes) 32 - 65548
*/
I would like to know how the row size is calculated in the case above.
I checked the Netezza database user guide, but I was not able to follow the calculation for this example.
I think this link does a good job of explaining the overhead of Netezza / PDA data types:
For every row of every table, there is a 24-byte fixed overhead of the rowid, createxid, and deletexid. If you have any nullable columns, a null vector is required and it is N/8 bytes where N is the number of columns in the record.
The system rounds up the size of
this header to a multiple of 4 bytes.
In addition, the system adds a record header of 4 bytes if any of the following is true:
Column of type VARCHAR
Column of type CHAR where the length is greater than 16 (stored internally as VARCHAR)
Column of type NCHAR
Column of type NVARCHAR
Using UTF-8 encoding, each Unicode code point can require 1 - 4 bytes of storage. A 10-character string requires 10 bytes of storage if it is ASCII and up to 20 bytes if it is Latin, or as many as 40 bytes if it is Kanji.
The only time a record does not contain a header is if all the columns are defined as NOT NULL, there are no character data types larger than 16 bytes, and no variable character data types.
https://www.ibm.com/support/knowledgecenter/SSULQD_7.2.1/com.ibm.nz.dbu.doc/c_dbuser_data_types_calculate_row_size.html
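Applying those rules to char1 offers a plausible reading of the nz_ddl_table numbers above (a sketch, not an authoritative calculation): the fixed per-row overhead is 24 bytes, and the null vector for 2 nullable columns is 2/8 of a byte, which rounds the header up to a multiple of 4, giving the 28 bytes of "Row Overhead" shown. Both CHAR columns are longer than 16 bytes, so they are stored internally as VARCHAR and the 4-byte record header applies, for a maximum data size of 4 + 64000 + 1516 = 65520 bytes, again matching the output. The maximum total row size is then 28 + 65520 = 65548 bytes, which exceeds the 65,535-byte row limit; that would explain why an INSERT of maximum-length values can fail even though CREATE TABLE succeeded.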
First create a temp table based on one row of data.
create temp table tmptable as
select *
from Table
limit 1
Then check the used bytes of the temp table. That should be the size per row.
select used_bytes
from _v_sys_object_storage_size a inner join
_v_table b
on a.tblid = b.objid
and b.tablename = 'tmptable'
Netezza has some limitations:
1) Maximum number of characters in a char/varchar field: 64,000
2) Maximum row size: 65,535 bytes
A record length beyond 65 KB is simply impossible in Netezza.
Although a Netezza box offers huge space, it is a much better idea to size columns from accurate space forecasting than to pick lengths at random. For your requirement, do all the attributes really need char(64000), or can they be compacted after analyzing the real data? If further compacting is possible, revisit the attribute lengths.
Also, in such situations, avoid insert into char1 select * ... statements, because they let the system choose its preferred data types, which tend toward the larger sizes and may not be necessary.

Where's the rest of the space used in this table?

I'm using SQL Server 2005.
I have a table whose row size should be 124 bytes. It's all ints or floats, no NULL columns (so everything is fixed width).
There is only one index, clustered. The fill factor is 0.
Here's the table def:
create table OHLC_Bar_Trl
(
obt_obh_id int NOT NULL REFERENCES OHLC_Bar_Hdr (obh_id),
obt_bar_start_ms int NOT NULL,
obt_bar_end_ms int NOT NULL,
obt_last_price float NOT NULL,
obt_last_ms int NOT NULL,
obt_bid_price float NOT NULL,
obt_bid_size int NOT NULL,
obt_bid_ms int NOT NULL,
obt_bid_pexch_price float NOT NULL,
obt_ask_price float NOT NULL,
obt_ask_size int NOT NULL,
obt_ask_ms int NOT NULL,
obt_ask_pexch_price float NOT NULL,
obt_open_price float NOT NULL,
obt_open_ms INT NOT NULL,
obt_high_price float NOT NULL,
obt_high_ms INT NOT NULL,
obt_low_price float NOT NULL,
obt_low_ms INT NOT NULL,
obt_volume float NOT NULL,
obt_vwap float NOT NULL
)
go
create unique clustered index idx on OHLC_Bar_Trl (obt_obh_id,obt_bar_end_ms)
After inserting a ton of data, sp_spaceused returns the following
name          rows       reserved     data         index_size  unused
------------  ---------  -----------  -----------  ----------  -------
OHLC_Bar_Trl  117076054  29807664 KB  29711624 KB  92344 KB    3696 KB
which shows a row size of approximately (29807664*1024)/117076054 = 260 bytes/row.
Where's the rest of the space?
Is there some DBCC command I need to run to tighten up this table (I could not insert the rows in correct index order, so maybe it's just internal fragmentation)?
You can use sys.dm_db_index_physical_stats to get pretty detailed information on how data is stored in a given table. It's not the clearest thing to use; here's the template I built up over time for my first pass at troubleshooting:
-- SQL 2005 - fragmentation & air bubbles
SELECT
ob.name [Table], ind.name [Index], ind.type_desc IndexType
,xx.partition_number PartitionNo
,xx.alloc_unit_type_desc AllocationTyp
,xx.index_level
,xx.page_count Pages
,xx.page_count / 128 Pages_MB
,xx.avg_fragmentation_in_percent AvgPctFrag
,xx.fragment_count
,xx.avg_fragment_size_in_pages AvgFragSize
,xx.record_count [Rows]
,xx.forwarded_record_count [ForwardedRows]
,xx.min_record_size_in_bytes MinRowBytes
,xx.avg_record_size_in_bytes AvgRowBytes
,xx.max_record_size_in_bytes MaxRowBytes
,case xx.page_count
when 0 then 0.0
else 1.0 * xx.record_count / xx.page_count -- multiply by 1.0 to avoid integer division
end AvgRowsPerPage
,xx.avg_page_space_used_in_percent AvgPctUsed
,xx.ghost_record_count
,xx.version_ghost_record_count
from sys.dm_db_index_physical_stats
(
db_id('MyDatabase')
,object_id('MyTable')
,null
,null
,'Detailed'
) xx
inner join sys.objects ob
on ob.object_id = xx.object_id
inner join sys.indexes ind
on ind.object_id = xx.object_id
and ind.index_id = xx.index_id
Use this to check if SQL thinks the row is as long as you think it is, or if there's extra space being used/wasted somewhere.
To update "space used" statistics, use the 2nd parameter #updateusage of sp_spaceused:
EXEC sp_spaceused 'OHLC_Bar_Trl', 'true'
However, I'd also run ALTER INDEX ALL ON OHLC_Bar_Trl WITH REBUILD first to defrag the data.
For your table, yes, 124 bytes does appear to be the correct row size, and since your clustered index is unique, you shouldn't be wasting space on a uniqueifier. So let's consider how it fits together:
Page size = 8 KB (8192 bytes)
Header = 96 bytes
Available for data = 8096 bytes
Row size (fixed data) = 124 bytes
Row header = 4 bytes
Null bitmap = 5 bytes (for 21 columns)
Variable data size = 2 bytes (for 0 variable columns)
Slot array entry = 2 bytes per row
Total = 137 bytes per row on the page
Rows per page = 8096 / 137 = 59
Total rows = 117076054
Total pages = 117076054 / 59 = 1984340
Minimum size = 1984340 * 8 KB = 15874720 KB
(Note: calculations are derived from Estimating the Size of a Clustered Index)
So you can see from this that the absolute minimum you'd be able to achieve (dividing that minimum total size by the row count) is approximately 139 bytes per row.
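Measured against that floor, the observed 260 bytes per row means your pages are averaging only about 137/260 ≈ 53% full, which is the kind of page density you'd expect after heavy page splitting.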
Of course, you say that you're seeing these statistics immediately after inserting a bunch of data - data for which the clustering key is not an auto-incrementing (IDENTITY or NEWSEQUENTIALID) column and which therefore may not have been inserted in a truly sequential fashion. If that's the case, you are probably suffering from a huge number of page splits and need to defragment the clustered index:
ALTER INDEX idx
ON OHLC_Bar_Trl
REORGANIZE -- or REBUILD
Note: ALTER INDEX is available from SQL Server 2005 onward, so it should work here. The older, deprecated syntax is:
DBCC INDEXDEFRAG('MyDB', 'OHLC_Bar_Trl', indexnum)
You may also need to shrink the database to reclaim all of the lost space (although most people will recommend against shrinking the data, unless you have a very good reason to do so).
