I have a table with several columns, one of which is a CLOB containing large XML project file data. That one column accounts for 99% of the size of the table, which has grown to several GB. Our DBA needs to migrate the database and wants to reduce the size as much as possible beforehand. We can't lose any rows from that table, but we would be safe clearing out the data in that particular CLOB column. Would updating the table to remove that data reduce the overall size (I assume that if it did, it would be in conjunction with some administrative re-indexing action or something)?
If you don't need any CLOB data, drop that column:
SQL> create table test
2 (id number,
3 cclob clob);
Table created.
SQL> insert into test (id, cclob) values (1, 'Littlefoot');
1 row created.
SQL> alter table test drop column cclob;
Table altered.
SQL>
Alternatively, create a new table with just the primary key column and the CLOB column:
SQL> desc test
 Name                                      Null?    Type
 ----------------------------------------- -------- ------------------
 DEPTNO                                              NUMBER(2)
 DNAME                                               VARCHAR2(14)
 LOC                                                 VARCHAR2(13)
 CCLOB                                               CLOB
SQL> create table new_test as select deptno, cclob from test;
Table created.
SQL> alter table test drop column cclob;
Table altered.
SQL>
Now you can move the new, lightweight table to another server. If you need some of the CLOB data, you can update the table on the new server like this:
SQL> update test_on_new_server a set
2 a.cclob = (select b.cclob
3 from test_from_old_server b
4 where b.deptno = a.deptno
5 )
6 where a.loc = 'NEW YORK';
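If you'd rather keep the column and just empty it (which is what the question asks), note that an UPDATE to NULL by itself does not return space to the tablespace; the LOB segment keeps its allocated extents. A minimal sketch, assuming a basicfile LOB and hypothetical table/column names (project_files / project_xml):

update project_files set project_xml = null;  -- clears the data; segment stays allocated
commit;
alter table project_files modify lob (project_xml) (shrink space);  -- returns the space

The SHRINK SPACE step is what actually reduces the size; whether it is available depends on your Oracle version and LOB storage options.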
Related
Scenario: adding a column to a table, using UPDATE to populate it, and then dropping the other column does not free space.
Note: my warehouse configuration is XL and it auto-terminates after 5 minutes.
Tables:
"database"."schema"."table1"
-- ID varchar(32), eg: "ajs6djnd79dhashlj172883gdb4av3"
-- ........
"database"."schema"."id_dim"
-- ID varchar(32) eg: "ajs6djnd79dhashlj172883gdb4av3"
-- ID_NUM NUMBER(12, 0) AUTOINCREMENT START 1 INCREMENT 1 eg: 1
ALTER TABLE "database"."schema"."table1" ADD ID_NUM NUMBER(12, 0);
UPDATE "database"."schema"."table1" e1
SET e1.ID_NUM = d2.ID_NUM
FROM "database"."schema"."id_dim" d2
WHERE e1.id = d2.id;
ALTER TABLE "database"."schema"."table1" DROP ID;
ALTER TABLE "database"."schema"."table1" RENAME COLUMN ID_NUM TO ID;
Q: I am still seeing that, after the UPDATE operation and the column drop, storage consumption is higher than the previous table size; the Snowflake docs say that new micro-partitions are written after a DML operation.
Exactly, you are right: A new micro-partition is written after your DML operation.
But this does not mean the old micro-partitions are dropped: here Time Travel comes into play, and the older versions are still stored.
https://docs.snowflake.com/en/user-guide/data-time-travel.html
How long is the old data stored? That depends on your table type as well as the value of the DATA_RETENTION_TIME_IN_DAYS parameter for the object: https://docs.snowflake.com/en/sql-reference/parameters.html#data-retention-time-in-days
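If you don't need Time Travel on this table, lowering the retention period lets Snowflake purge the old micro-partitions. A minimal sketch against the table from the scenario above (a value of 0 disables Time Travel for the table, so use it deliberately):

ALTER TABLE "database"."schema"."table1" SET DATA_RETENTION_TIME_IN_DAYS = 0;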
I have a table with a computed column that uses a scalar function, and I'm interested in using a temporal table in SQL Server 2019.
I'm unable to change the column - how can I get the benefits of that feature?
Some suggestions:
Do not include computed column in history table at all
Create a fixed column in history table with the same name of the computed column
Thanks
You can create the history table with the same column list as the transaction table, but instead of the computed column you create a regular column with the same data type.
Then copy the data to the history table with an INSERT INTO ... SELECT * FROM statement.
Please check the below example out:
Create table with computed column:
create table sales(productid int, quantity int, price int, totalprice as quantity * price);
Insert data into table with computed column:
insert into sales (productid,quantity,price)values(1,100,10);
insert into sales (productid,quantity,price)values(2,10,12);
insert into sales (productid,quantity,price)values(1,50,100);
Output:
productid   quantity    price       totalprice
----------- ----------- ----------- -----------
1           100         10          1000
2           10          12          120
1           50          100         5000
Create history table without any computed column.
create table saleshistory(productid int, quantity int, price int, totalprice int);
Insert data into the history table (which has no computed column) from the transaction table (which has one):
insert into saleshistory select * from sales;
Select data from history table:
select * from saleshistory;
Output:
productid   quantity    price       totalprice
----------- ----------- ----------- -----------
1           100         10          1000
2           10          12          120
1           50          100         5000
Presumably you don't yet have system versioning enabled on the table that includes the computed column.
The standard approach is to simply roll your own history table instead of letting SQL Server create it for you.
It is a requirement of temporal tables that the history table schema match the primary table - you cannot choose to omit columns.
So when you set up system versioning, script out your primary table, replace the computed column with a regular column of the matching data type, and then enable versioning on the table like this:
alter table dbo.MyTable add
ValidFrom datetime2 generated always as row start hidden constraint DF_MyTableSysStart default sysutcdatetime(),
ValidTo datetime2 generated always as row end hidden constraint DF_MyTableSysEnd default convert(datetime2, '9999-12-31 23:59:59.9999999'),
period for system_time (ValidFrom,ValidTo);
alter table MyTable set (system_versioning = on (history_table = history.MyTable));
This example assumes you have already created the history table, with the same name as the primary table, in the history schema.
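A minimal sketch of what that hand-rolled history table might look like, reusing the sales example from above (the schema and table names are illustrative; the computed column totalprice becomes a plain int, and the period columns are plain datetime2 with no constraints):

create table history.sales
( productid int,
  quantity int,
  price int,
  totalprice int,              -- regular column replacing the computed column
  ValidFrom datetime2 not null,
  ValidTo datetime2 not null );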
I have a table with 3 foreign keys and 1 primary key, holding 6054 records.
One record has been lost from that table, and I am trying to insert it again.
The missing record's id is 2352 and the last record's id is 9560, so if I insert the record it gets id 9561, the next value after the last id. If I try to delete the other records, the foreign keys prevent the delete, and if I try to update the id 9561, that is not allowed either.
You can use the SET IDENTITY_INSERT construct to explicitly insert the PK value in a table with auto-numbering, like so:
set identity_insert your_table on
insert into your_table (PK_COL_IDENTITY, ...) values (2352, ...)
set identity_insert your_table off
As far as I know, if your ID is auto-incremented then you cannot update that ID (key). One way out in your case is TRUNCATE: truncating the table resets the identity so a new sequence is generated.
You can create a temporary table, migrate the data into it, truncate the parent table, and then migrate the data back from the temporary table to the parent table, as sketched below.
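A minimal sketch of that approach in SQL Server syntax (table and column names are illustrative, and note that TRUNCATE fails while foreign keys reference the table, so those constraints would need to be dropped or disabled first):

select * into #backup from parent_table;      -- copy the data out
truncate table parent_table;                  -- resets the identity seed
set identity_insert parent_table on;
insert into parent_table (id, col1, col2)
select id, col1, col2 from #backup;           -- copy the data back, keeping the original ids
set identity_insert parent_table off;
drop table #backup;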
Hope it will help you.
I have a table with three columns A, B, C, which already contains rows. Column A is the primary key.
Now, per a new requirement, I need to add new columns D, E and F.
I also need to remove the existing primary key from column A and add a new primary key on column D.
Columns E and F are nullable.
Please help me create the alter table statements.
What you require is a multi-step process: adding the columns, dropping the existing primary key constraint and finally adding a new one.
The most difficult thing here is adding column D. Because you want it to be the new primary key, it will have to be NOT NULL. If your table has existing data, you will need to handle this error:
SQL> alter table your_table
2 add ( d number not null
3 , e date
4 , f number )
5 /
alter table your_table
*
ERROR at line 1:
ORA-01758: table must be empty to add mandatory (NOT NULL) column
SQL>
So, step 1 is to add the new columns with D optional; then populate it with whatever key values you need:
SQL> alter table your_table
2 add ( d number
3 , e date
4 , f number )
5 /
Table altered.
SQL> update your_table
2 set d = rownum
3 /
1 row updated.
SQL>
Now we can make column D mandatory:
SQL> alter table your_table
2 modify d not null
3 /
Table altered.
SQL>
Finally, we can change the primary key column from A to D:
SQL> alter table your_table
2 drop primary key
3 /
Table altered.
SQL> alter table your_table
2 add constraint yt_pk primary key (d)
3 /
Table altered.
SQL>
For some alterations we want to add a column with a default value. In this scenario it is possible to do so in one step:
alter table your_table
add new_col varchar2(1) default 'N' not null;
In later versions of Oracle this is actually an extremely efficient way of populating the new column with the same value, considerably faster than the multi-step approach outlined above.
In case it's not clear, the above syntax is Oracle's. I expect SQL Server's will be something similar.
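For reference, a hedged sketch of the equivalent in SQL Server syntax (the constraint name is illustrative):

alter table your_table
  add new_col varchar(1) not null
  constraint df_new_col default 'N';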
As far as I know (this page) Oracle automatically creates an index for each UNIQUE or PRIMARY KEY declaration. Is this a complete list of cases when indexes are created automatically in Oracle?
I'll try to consolidate the given answers and make this a community wiki.
So indexes are automatically created by Oracle for such cases:
APC: For primary key and unique key unless such indexes already exist.
APC: For LOB storage and XMLType.
Gary: For tables containing a nested table.
Jim Hudson: For materialized views.
Firstly, Oracle does not always create an index when we create a primary or unique key. If there is already an index on that column it will use it instead...
SQL> create table t23 (id number not null)
2 /
Table created.
SQL> create index my_manual_idx on t23 ( id )
2 /
Index created.
SQL> select index_name from user_indexes
2 where table_name = 'T23'
3 /
INDEX_NAME
------------------------------
MY_MANUAL_IDX
SQL>
... note that MY_MANUAL_IDX is not a unique index; it doesn't matter ...
SQL> alter table t23
2 add constraint t23_pk primary key (id) using index
3 /
Table altered.
SQL> select index_name from user_indexes
2 where table_name = 'T23'
3 /
INDEX_NAME
------------------------------
MY_MANUAL_IDX
SQL> drop index my_manual_idx
2 /
drop index my_manual_idx
*
ERROR at line 1:
ORA-02429: cannot drop index used for enforcement of unique/primary key
SQL>
There is another case when Oracle will automatically create an index: LOB storage....
SQL> alter table t23
2 add txt clob
3 lob (txt) store as basicfile t23_txt (tablespace users)
4 /
Table altered.
SQL> select index_name from user_indexes
2 where table_name = 'T23'
3 /
INDEX_NAME
------------------------------
MY_MANUAL_IDX
SYS_IL0000556081C00002$$
SQL>
edit
The database treats XMLType the same as other LOBs...
SQL> alter table t23
2 add xmldoc xmltype
3 /
Table altered.
SQL> select index_name from user_indexes
2 where table_name = 'T23'
3 /
INDEX_NAME
------------------------------
MY_MANUAL_IDX
SYS_IL0000556081C00002$$
SYS_IL0000556081C00004$$
SQL>
No, we're getting closer but that's not quite a complete list yet.
There will also be an index automatically created when you create a materialized view, since Oracle needs to be able to quickly identify the rows when doing a fast refresh. For rowid-based materialized views it uses I_SNAP$_tablename; for primary-key materialized views it uses the original PK name, modified as necessary to make it unique.
create materialized view testmv
refresh force with rowid
as select * from dual;
select index_name from user_indexes where table_name = 'TESTMV';
Index Name
--------------
I_SNAP$_TESTMV
And another one: if you create a table with a nested table, you get an index created automatically. Object-based storage in general can do this, as hidden tables can be created.
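A minimal sketch of that case (the type, table and storage-table names are illustrative):

create type num_list as table of number;
/
create table t_nt
( id number,
  vals num_list )
nested table vals store as vals_tab;

Per the point above, querying user_indexes should then show a system-generated index associated with the vals_tab storage table.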
I think schema-based XMLTypes will also do it.
So, pulling the thread together: Oracle automatically creates indexes for UNIQUE and PRIMARY KEY declarations (unless a suitable index already exists), for LOB and XMLType storage, for nested tables, and for materialized views.