Will Oracle ASM add data file to a tablespace automatically? - oracle11gr2

If a tablespace has one data file (with a 32 GB limit), and that datafile has autoextended to its maximum, will ASM automatically handle adding another datafile, or should I manually use the statement below to add a datafile to the tablespace?
alter tablespace TS_MASTER add datafile '+DATA' size 1g autoextend on;

Yes, you need to add the datafile to the tablespace manually; ASM will not add a datafile automatically. If you want a datafile to be added automatically when the tablespace reaches a particular threshold, you need to create a Scheduler job, a DBMS_JOB, or a shell script that monitors the tablespace's free space and adds a datafile when that threshold is reached.
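A minimal sketch of such a job, assuming a 90% usage threshold (the procedure name, job name, threshold, and 15-minute interval are invented for illustration, not from the original post):

create or replace procedure ts_master_space_check as
  v_pct number;
begin
  -- used_percent is relative to the tablespace's maximum possible size
  select used_percent into v_pct
    from dba_tablespace_usage_metrics
   where tablespace_name = 'TS_MASTER';
  if v_pct > 90 then
    execute immediate
      'alter tablespace TS_MASTER add datafile ''+DATA'' size 1g autoextend on';
  end if;
end;
/

begin
  dbms_scheduler.create_job(
    job_name        => 'TS_USAGE_CHECK',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'TS_MASTER_SPACE_CHECK',
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=15',  -- check every 15 minutes
    enabled         => TRUE);
end;
/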

Related

Oracle 12c - How to recover an overwritten tablespace file?

I altered the tablespace to add a new file, but I accidentally used the same name as an existing file. In other words, I overwrote an existing one.
My questions are:
How do I know which rows of my tables are missing from the tablespace that was overwritten?
How do I recover that overwritten tablespace? Consider that it happened 2 days ago.
If it helps, the query that I used:
ALTER TABLESPACE [TABLESPACE NAME] ADD DATAFILE '[EXISTING DBF FILE]' SIZE 2000M AUTOEXTEND ON NEXT 10M MAXSIZE 20000M;
The database will not let you add a file that is already part of the database:
SQL> create tablespace demo datafile 'X:\ORADATA\DB18\PDB1\DEMO.DBF' size 10m;
Tablespace created.
SQL> alter tablespace demo add datafile 'X:\ORADATA\DB18\PDB1\DEMO.DBF' size 10m;
alter tablespace demo add datafile 'X:\ORADATA\DB18\PDB1\DEMO.DBF' size 10m
*
ERROR at line 1:
ORA-01537: cannot add file 'X:\ORADATA\DB18\PDB1\DEMO.DBF' - file already part of database
So something else must have happened for this to occur.
But that aside, if some external operation has munched that datafile, then the only recourse is to recover that file from backup, and recover forward. How you proceed from here depends on what backups you have, what tool you are using for backup etc.
But if you are using RMAN, then the standard docs have a suite of scenarios that walk you through what is needed.
https://docs.oracle.com/en/database/oracle/oracle-database/19/bradv/index.html
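If you are on RMAN with a usable backup, the core of a single-datafile restore looks like this (a sketch only; the file path is illustrative, and the exact procedure depends on your backup strategy):

RMAN> alter database datafile '/oradata/pdb1/demo.dbf' offline;
RMAN> restore datafile '/oradata/pdb1/demo.dbf';
RMAN> recover datafile '/oradata/pdb1/demo.dbf';
RMAN> alter database datafile '/oradata/pdb1/demo.dbf' online;

Note that RMAN accepts ALTER DATABASE directly from 12c on; in older releases, wrap each command as SQL 'alter database ...'.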

Tablespace is not freed after dropping tables (Oracle 11g)

I have an Oracle 11g database with block size = 8192. So, if I'm correct, the maximum datafile size will be 32 GB.
I have a huge table containing around 10 million records, and data in this table is purged often. For purging we chose CTAS (create table as select) as the better option, since we delete the greater portion of the data.
As we drop the old table after the CTAS, the old tables are not releasing their space for new tables. I understand that a tablespace has an AUTOEXTEND option but no autoshrink; still, the space occupied by the dropped tables should be available for new tables, which is not happening in this case.
I'm getting an Exception saying
ORA-01652: unable to extend temp segment by 8192 in tablespace
FYI, the only operations happening are the CTAS and dropping the old table; nothing else. The first time this works fine, but when the same operation is done a second time, the exception arises.
I tried adding an additional datafile to the tablespace, but after a few more purge operations on the table this also fills to 32 GB, and the issue continues.
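For reference, the purge cycle described above looks roughly like this (the table names and flag column are invented for illustration). One thing worth checking on 10g and later: a plain DROP TABLE keeps the segment in the recycle bin as a BIN$ object, and while Oracle should reclaim that space under pressure, DROP TABLE ... PURGE removes any dependency on that reclamation:

create table master_new nologging as
  select * from master where purge_flag = 'N';  -- keep only the rows to retain
drop table master purge;                        -- PURGE bypasses the recycle bin
rename master_new to master;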

When and from where is a temporary tablespace called or used in the database?

In my project we have a temporary tablespace, say X_TEMP. Assume the code below is the tablespace definition; I found it in the tablespace section.
CREATE TEMPORARY TABLESPACE X_TEMP
TEMPFILE '/oradata/mytemp_01.tmp' SIZE 800M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;
Now I want to check how it gets called, and where X_TEMP is used in my procedures, functions, or anywhere else in the code.
Any idea how I can find where we have used it?
The question doesn't make a whole lot of sense.
Unless you happened to create temporary tables in that tablespace (which would be unusual in Oracle, but which would let you tie some usage to particular pieces of code), your user's temporary tablespace (I assume that after creating this temporary tablespace you made it the temporary tablespace for some user) would be used whenever Oracle needed to page data to disk.
A query that needs to sort data, for example, might use temporary tablespace. Or it might not, execution to execution, depending on data volumes, how much PGA the session is able to get, the query plan used, etc.
Any query this user executes could use temporary tablespace at any time. Or none of its queries might use temporary tablespace because they can all be done in memory. Or they might not use temporary tablespace today and start using it tomorrow because someone else is running some code that limits how much PGA Oracle can give the user's sessions.
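Two dictionary queries can help tie usage to X_TEMP (a sketch; the V$ views require appropriate privileges):

-- which users are set up to spill to X_TEMP
select username from dba_users where temporary_tablespace = 'X_TEMP';

-- which sessions are using X_TEMP right now
select s.sid, s.serial#, s.username, u.segtype, u.blocks
  from v$tempseg_usage u
  join v$session s on s.saddr = u.session_addr
 where u.tablespace = 'X_TEMP';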

unable to extend temp segment on CTAS

Before I start: if someone knows a better way to do this, please share, as I am having massive problems with Data Pump; it hangs on the tablespace, and when I check the tablespace report I see nothing being filled.
I am trying to CTAS a few tables (create table as select from a@database_link) from production to PRE_PRED at the same time.
The table sizes are 29 GB, 29 GB, and 35 GB; the index sizes are 10 GB, 11 GB, and 13 GB.
The temp tablespace is 256 GB, and the tablespace the data is being copied to has 340 GB.
Pseudo code:
create table A
compress basic
nologging
nomonitoring
tablespace PRE_PRED.A
parallel (degree default instances default)
as select * from B@database_link;
I keep getting "unable to extend temp segment" in the PRE_PRED.A tablespace, whereas I can see there is more than enough space in both TEMP and the specified tablespace.
Any answers to these questions, please let me know... thanks.
The best way to do this is with Data Pump, which should not be difficult.
First, export the tables that you need to a file on the source database server:
expdp system dumpfile=MY_TABLES.dmp logfile=MY_TABLES.log exclude=statistics tables=owner.a,owner.b,owner.c
Now copy this file to the target database server and then import the tables, changing the owner and tablespace if needed (if you don't need that, remove the remap options):
impdp system dumpfile=MY_TABLES.dmp logfile=MY_TABLES_IMPORT.log tables=owner.a,owner.b,owner.c remap_schema=owner:newowner remap_tablespace=tablespace:newtbspce
This will be faster and put much less load on your network and databases.
You can also just grab the tables with impdp directly from the source database using a database link if you want (but I wouldn't use this myself unless the table was very small, and then CTAS would work anyway):
impdp system logfile=MY_TABLES_IMPORT.log tables=owner.a,owner.b,owner.c remap_schema=owner:newowner remap_tablespace=tablespace:newtbspce network_link=dblink
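As an aside on the original error: during a CTAS, Oracle builds the new table as a temporary segment in the target tablespace (not in TEMP) and only converts it to a permanent segment when the statement completes, which is why ORA-01652 can name PRE_PRED.A rather than TEMP. You can watch those in-flight segments while the CTAS runs with something like this (a sketch):

select tablespace_name, count(*) as segs,
       round(sum(bytes)/1024/1024/1024, 1) as gb
  from dba_segments
 where segment_type = 'TEMPORARY'
 group by tablespace_name;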

Oracle 10g temp tables

I'm trying to convert the permanent tables used in a stored procedure to global temporary tables. I've looked at the stats on these permanent tables; some have tens of millions of rows and are on the order of gigabytes in size (up to 10 GB).
So,
CREATE TABLE my_table (
column1 NUMBER,
column2 NUMBER,
etc...
)
TABLESPACE BIGTABLESPACE
NOLOGGING
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING;
should become
CREATE GLOBAL TEMPORARY TABLE my_table (
column1 NUMBER,
column2 NUMBER,
etc..
)
ON COMMIT PRESERVE ROWS;
I'm creating an equivalent global temporary table with rows that should be preserved until the end of the session for each existing permanent table. This global temp table will be used in the procedure instead of the permanent table.
(EXECUTE IMMEDIATE 'TRUNCATE ...' at the start, and INSERT /*+ APPEND */ INTO at some later point)
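Concretely, that pattern would look something like this inside the procedure (a sketch; source_table and its columns are invented for illustration):

begin
  execute immediate 'truncate table my_table';
  insert /*+ append */ into my_table (column1, column2)
  select col1, col2 from source_table;
  commit;  -- direct-path inserted rows cannot be queried in-session until commit
end;
/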
All of the permanent tables have been created in a big tablespace BIGTABLESPACE
The Oracle docs state that the global temporary table will be created in the user's temp tablespace (I assume this is TEMP). The problem with this is that the TEMP tablespace is small and the extents are not set to grow to the size I need them to grow during the procedure.
The TEMP tablespace was created during database creation:
create database "$oracle\_sid"
user sys identified by "$sys\_password"
user system identified by "$system\_password"
set default bigfile tablespace
controlfile reuse
maxdatafiles 256
maxinstances $maxinstances
maxlogfiles 16
maxlogmembers 3
maxloghistory 1600
noarchivelog
character set WE8MSWIN1252
national character set AL16UTF16
datafile
'$oracle\_home/oradata/$oracle\_sid/system01.dbf' size 512M
logfile
'$oracle\_home/oradata/$oracle\_sid/redo01.log' size 1G,
'$oracle\_home/oradata/$oracle\_sid/redo02.log' size 1G,
'$oracle\_home/oradata/$oracle\_sid/redo03.log' size 1G
sysaux datafile
'$oracle\_home/oradata/$oracle\_sid/sysaux01.dbf' size 256M
default temporary tablespace temp tempfile
'$oracle\_home/oradata/$oracle\_sid/temp01.dbf' size 5G
undo tablespace "UNDOTBS1" datafile
'$oracle\_home/oradata/$oracle\_sid/undotbs01.dbf' size 5G;
The permanent tables (that I'm planning to replace) were originally created in tablespace BIGTABLESPACE
-- 50G bigfile datafile size
create bigfile tablespace "BIGTABLESPACE"
datafile '$oracle_home/oradata/$oracle_sid/bts01.dbf' size 50G
extent management local
segment space management auto;
The permanent table indexes were originally created in tablespace BIGTABLESPACE
-- 20G bigfile datafile size
create bigfile tablespace "BIGINDXSPACE"
datafile '$oracle_home/oradata/$oracle_sid/btsindx01.dbf' size 20G
extent management local
segment space management auto;
Is replacing these permanent tables with global temporary tables feasible?
The procedure will run into problems extending the TEMP tablespace. Is there a way to create global temporary tables and their indexes in the BIGTABLESPACE and BIGINDXSPACE tablespaces?
If not, how can I make the TEMP tablespace behave like a bigfile tablespace and achieve index/table separation?
Can I create two TEMP bigfile tablespaces and create indexes into one and tables into another?
I want to use global temporary tables, but the volume of data I am handling in the procedure would seem to be above and beyond the intended design of global temporary tables.
Any suggestions?
There is no benefit to separating data and indexes into separate tablespaces other than potentially making DBAs more comfortable that similar objects are grouped together. There is a long-standing myth that separating indexes and data was beneficial for performance reasons-- that is not correct.
Temporary objects should (and must) be stored in a temporary tablespace. You could increase the size of your TEMP tablespace or create a separate temporary tablespace just for the user(s) that will own these objects if you wanted to segregate these large temporary tables into a separate tablespace. You can't (and wouldn't want to) store them in your permanent tablespaces.
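A sketch of that second option (the tablespace name, size, and user are illustrative):

create temporary tablespace bigtemp
  tempfile '$oracle_home/oradata/$oracle_sid/bigtemp01.dbf' size 50G;

alter user batch_user temporary tablespace bigtemp;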
Architecturally, though, I would be very curious about why temporary tables were necessary in your system. If you have sessions that are writing 10's of GB into temporary tables, then presumably reading those 10's of GB out again in order to write the data somewhere else, I would tend to suspect that there were more efficient solutions. It is very rare in Oracle to even need temporary tables-- it is far more common in other databases where readers can block writers to need to copy data out of tables before working on it. Oracle has no such limitations.
I don't think that there's anything in your description that makes GTT's unattractive. You obviously need very large temporary tablespaces but you're not consuming more space overall unless you've been making heavy use of table compression (unavailable in GTT's at least up to 10gR2). Look into the use of tablespace groups: http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/tspaces.htm#ADMIN01103
Using a tablespace group, rather than a single temporary tablespace, can alleviate problems caused where one tablespace is inadequate to hold the results of a sort, particularly on a table that has many partitions. A tablespace group enables parallel execution servers in a single parallel operation to use multiple temporary tablespaces.
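A tablespace group is just a shared name for one or more temporary tablespaces, and it can be assigned anywhere a single temporary tablespace can. A sketch, building on the illustrative names above:

alter tablespace bigtemp tablespace group temp_grp;

create temporary tablespace bigtemp2
  tempfile '$oracle_home/oradata/$oracle_sid/bigtemp02.dbf' size 50G
  tablespace group temp_grp;

alter user batch_user temporary tablespace temp_grp;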
Also, don't neglect the use of subquery factoring clauses. They can often replace the use of temporary tables. However they might still require just as much temporary storage space because a large result set from a SQFC can spill to disk to avoid the consumption of too much memory, so you still have to go ahead with the increase in TEMP space. They're very handy for not having to deploy a new database object every time you need a new temporary table.
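For instance, an intermediate result that might otherwise be staged in a GTT can often be factored instead (a sketch; the sales table and its columns are invented for illustration):

with staged as (
  select customer_id, sum(amount) as total
    from sales
   group by customer_id
)
select customer_id, total
  from staged
 where total > 10000;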
I looked at large-sized global temporary tables for a migration exercise. It worked, but for debugging and rejection handling I eventually went with plain tables.
If the GTTs don't work out, consider either Row-Level Security / VPD (or even views).
You can have a column derived from sys_context('USERENV','SESSIONID') and use that to ensure that the user can only see their own data.
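A sketch of that approach (the table, column, and view names are invented for illustration):

create table shared_work (
  session_id number default to_number(sys_context('USERENV','SESSIONID')),
  payload    varchar2(4000)
);

-- each session sees only the rows it inserted
create view my_work as
  select payload
    from shared_work
   where session_id = to_number(sys_context('USERENV','SESSIONID'));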
Still the thought of multiple sessions dealing with multi-gigabyte datasets concurrently is a bit scary.
PS. I believe that GTTs used through a procedure use the temp tablespace of the session user rather than the temp tablespace of the procedure owner. If you can run the sessions as separate Oracle users, then you have a chance of spreading your file I/O over different tablespaces.
