Import Oracle 11g schema dump in parts (metadata then data)

I've been trying to import an Oracle 11g schema dump using an Oracle Docker container (the dump file is about 750 MB).
First I had to create a tablespace with the same name in the destination database, assign the same username to it, and then grant that user CREATE SESSION, CREATE TABLE, CREATE ANY PROCEDURE, and CREATE VIEW:
create tablespace TABLESPACE datafile 'TABLESPACE.dbf' size 64M reuse autoextend on next 64M maxsize unlimited
  default storage (initial 10M next 1M pctincrease 10);
create user USERNAME identified by PASS default tablespace TABLESPACE quota unlimited on TABLESPACE;
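The grants mentioned above were along these lines (a sketch only; the exact privilege list may have differed):
grant create session, create table, create any procedure, create view to USERNAME;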
impdp command:
impdp user/pass@localhost DIRECTORY=DATA_PUMP_DIR DUMPFILE=dumpfile.dmp LOGFILE=log_file.log SCHEMAS=SCHEMA_NAME
The execution took about 4 hours, and then I was left with a long list of errors:
ORA-31693: Table data object "SCHEMA_NAME"."Table_name" failed to load/unload and is being skipped due to error:
ORA-00001: unique constraint (SCHEMA_NAME.FK_name) violated
Tables were created, but most of them were empty.
- I tried different import options, such as a full import (FULL=Y).
- I also tried importing in two stages (metadata then data):
impdp ... CONTENT=METADATA_ONLY then impdp ... CONTENT=DATA_ONLY
- I also tried importing while excluding constraints, ref_constraints, and triggers, then running a DATA_ONLY import (sketched after this list).
All of these runs ended with errors, and I can't handle each FK separately because there are over 200 foreign keys in this database.
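Roughly, the excluded-constraints variant looked like this (directory, dump file, and schema names are placeholders, so the exact parameters may have differed):
impdp user/pass@localhost DIRECTORY=DATA_PUMP_DIR DUMPFILE=dumpfile.dmp SCHEMAS=SCHEMA_NAME CONTENT=METADATA_ONLY EXCLUDE=CONSTRAINT EXCLUDE=REF_CONSTRAINT EXCLUDE=TRIGGER
impdp user/pass@localhost DIRECTORY=DATA_PUMP_DIR DUMPFILE=dumpfile.dmp SCHEMAS=SCHEMA_NAME CONTENT=DATA_ONLY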
So I was wondering: what is the proper way to import a dump without breaking the FK constraints, and is it okay to import metadata then data?
Also, how can I check whether the dump provided by the client was exported correctly or might have some flaws in it?
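One way to sanity-check a dump without importing it is to have Data Pump write the DDL it contains to a SQL file and review that output (using the same placeholder connection details as above):
impdp user/pass@localhost DIRECTORY=DATA_PUMP_DIR DUMPFILE=dumpfile.dmp SCHEMAS=SCHEMA_NAME SQLFILE=ddl_check.sql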

Related

Docker / Oracle Database / Volume Persistence / Create Tablespace

I am building a dev Docker environment and I have to set up an Oracle 19c database.
I have been successful... but not at 100%.
Everything runs correctly: I can create a tablespace, a user/schema, and a table, insert data, and access the data via Node.js, until I restart the container.
All the tutorials show mounting a volume pointing to /opt/oracle/oradata:
volumes:
- ./database/OracleDB/oradata:/opt/oracle/oradata
But the tablespaces are created by default in /opt/oracle/product/19c/dbhome_1/dbs.
I tried to add a volume pointing to that directory:
volumes:
- ./database/OracleDB/oradata:/opt/oracle/oradata
- ./database/OracleDB/dbs:/opt/oracle/product/19c/dbhome_1/dbs/
But I receive the following error: Error response from daemon: path /home/myusr/docker-base/database/OracleDB/dbs is mounted on / but it is not a shared mount.
Has anybody already faced this issue and found a solution?
Of course, I am continuing to look for a solution ;)
System Information
Windows 10 Professional with WSL2
Docker version 20.10.8, build 3967b7d
Oracle Database 19c
UPDATE 1
Based on Roberto's comments. Unfortunately, it is not working.
UPDATE 2
I tried the following:
CREATE TABLESPACE tbs1_test DATAFILE '/opt/oracle/oradata/tbs1_test' SIZE 100 M AUTOEXTEND ON NEXT 100 M MAXSIZE 10 G;
and it created the file in the desired location.
When you don't change the value of db_create_file_dest, Oracle uses its default destination for datafiles. In your case, when you executed your CREATE TABLESPACE command, the datafile was created in that default location. That is why it does not appear in your desired directory.
1. Connect as SYSDBA to the database.
2. Execute:
SQL> alter system set db_create_file_dest = '/opt/oracle/oradata/ORCLCDB' scope=both;
3. As you already have a volume on that directory, remove the other volume specification, as it is already shared under /.
4. Drop the tablespace and create it again (if it is empty):
SQL> DROP TABLESPACE tbs1_test including contents and datafiles;
SQL> CREATE TABLESPACE tbs1_test DATAFILE 'tbs1_test' SIZE 100 M AUTOEXTEND ON NEXT 100 M MAXSIZE 10 G;
5. Verify that the datafile is now in the right volume:
SQL> select file_id, file_name from dba_data_files where tablespace_name = 'TBS1_TEST' ;
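As an extra sanity check (not part of the original steps), you can confirm the new default datafile destination from SQL*Plus:
SQL> show parameter db_create_file_dest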
If you want to dig deeper into how to create specific volumes inside a Docker image, check this post on Stack Overflow; it is one of the best IMHO:
How to mount host volumes into docker containers in Dockerfile during build

In the tutorial "Tutorial: Bulk Loading from a local file system using copy" what is the difference between my_stage and my_table permissions?

I started to go through the first tutorial for how to load data into Snowflake from a local file.
This is what I have set up so far:
CREATE WAREHOUSE mywh;
CREATE DATABASE Mydb;
Use Database mydb;
CREATE ROLE ANALYST;
grant usage on database mydb to role sysadmin;
grant usage on database mydb to role analyst;
grant usage, create file format, create stage, create table on schema mydb.public to role analyst;
grant operate, usage on warehouse mywh to role analyst;
//tutorial 1 loading data
CREATE FILE FORMAT mycsvformat
TYPE = "CSV"
FIELD_DELIMITER= ','
SKIP_HEADER = 1;
CREATE FILE FORMAT myjsonformat
TYPE="JSON"
STRIP_OUTER_ARRAY = true;
//create stage
CREATE OR REPLACE STAGE my_stage
FILE_FORMAT = mycsvformat;
//Use SnowSQL for this and make sure that the role, db, and warehouse are selected: put file:///data/data.csv @my_stage;
// put file on stage
PUT file://contacts.csv @my
List @~;
list @%mytable;
Then in my active SnowSQL session, when I run:
Put file:///Users/<user>/Documents/data/data.csv @my_table;
I have confirmed I am in the correct role (ACCOUNTADMIN), but I get:
002003 (02000): SQL compilation error:
Stage 'MYDB.PUBLIC.MY_TABLE' does not exist or not authorized.
So then I try to create the table in Snowsql and am successful:
create or replace table my_table(id varchar, link varchar, stuff string);
I still run into this error after I run:
Put file:///Users/<>/Documents/data/data.csv @my_table;
002003 (02000): SQL compilation error:
Stage 'MYDB.PUBLIC.MY_TABLE' does not exist or not authorized.
What is the difference between putting a file to my_table and to my_stage in this scenario? Thanks for your help!
EDIT:
CREATE OR REPLACE TABLE myjsontable(json variant);
COPY INTO myjsontable
FROM @my_stage/random.json.gz
FILE_FORMAT = (TYPE= 'JSON')
ON_ERROR = 'skip_file';
CREATE OR REPLACE TABLE save_copy_errors AS SELECT * FROM TABLE(VALIDATE(myjsontable, JOB_ID=>'enterid'));
SELECT * FROM SAVE_COPY_ERRORS;
//error for random: Error parsing JSON: invalid character outside of a string: '\\'
//no error for generated
SELECT * FROM Myjsontable;
REMOVE @My_stage pattern = '.*.csv.gz';
REMOVE @My_stage pattern = '.*.json.gz';
//yay, you are done!
The PUT command copies the file from your local drive to the stage. You should do the PUT to the stage, not to the table.
put file:///Users/<>/Documents/data/data.csv @my_stage;
The copy command loads it from the stage.
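As a minimal sketch pairing the two steps, using the names from the question (the .csv.gz suffix assumes PUT's default auto-compression):
put file:///Users/<>/Documents/data/data.csv @my_stage;
copy into my_table from @my_stage/data.csv.gz file_format = (format_name = 'mycsvformat');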
But in the documentation it is mentioned that a stage gets created by default for every table:
Each table has a Snowflake stage allocated to it by default for storing files. This stage is a convenient option if your files need to be accessible to multiple users and only need to be copied into a single table.
Table stages have the following characteristics and limitations:
Table stages have the same name as the table; e.g. a table named mytable has a stage referenced as @%mytable
In this case, without creating a stage, it should load into the default Snowflake stage that was allocated.
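A short sketch of that distinction, assuming the my_table from the question exists: @%my_table is the table stage Snowflake creates automatically with the table, whereas @my_table would refer to a named stage, which was never created, hence the error.
put file:///Users/<user>/Documents/data/data.csv @%my_table;
copy into my_table from @%my_table file_format = (format_name = 'mycsvformat');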

COPY FROM file to Cassandra ignoring solr_query column

I can't import data into Cassandra because I am using DSE Solr now, and as I can see, it created a solr_query (virtual) column in my table.
So i tried COPY table FROM 'file' WITH SKIPCOLS = "solr_query";
but I am getting the same error:
Failed to import 10 rows: ParseError - Invalid row length 9 should be 10 - given up without retries.
So how can I import the data and ignore the solr_query column?
The COPY command accepts the list of columns to import. Try listing them explicitly, omitting the solr_query column, and it should be OK:
COPY table (colA, colB, colC,...) FROM 'file'
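A minimal illustration with hypothetical keyspace, table, and column names (your real statement would list all of the table's columns except solr_query):
COPY ks.mytable (id, name, value) FROM 'file.csv' WITH HEADER = true;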

Oracle Identity Federation - RCU OID Schema Creation Failure

I am trying to install OIF (Oracle Identity Federation) as per the OBE http://www.oracle.com/webfolder/technetwork/tutorials/obe/fmw/oif/11g/r1/oif_install/oif_install.htm
I have installed Oracle 11gR2 (11.2.0.3) with charset AL32UTF8, a db_block_size of 8K, and nls_length_semantics=CHAR, and created the database and listener needed.
Installed WebLogic 10.3.6.
Started the installation of OIM (Oracle Identity Management), choosing the install-and-configure option and the schema creation options.
The installation goes fine, but it fails during configuration. Below is the relevant part of the logs.
I have tried multiple times, only to fail again and again. If someone could kindly shed some light on what is going wrong here, I would appreciate it. Please let me know if you need more info on the setup...
_File : ...//oraInventory/logs/install2013-05-30_01-18-31AM.out_
ORA-01450: maximum key length (6398) exceeded
Percent Complete: 62
Repository Creation Utility: Create - Completion Summary
Database details:
Host Name : vccg-rh1.earth.com
Port : 1521
Service Name : OIAMDB
Connected As : sys
Prefix for (non-prefixable) Schema Owners : DEFAULT_PREFIX
RCU Logfile : /data/OIAM/installed_apps/fmw/Oracle_IDM1_IDP33/rcu/log/rcu.log
RCU Checkpoint Object : /data/OIAM/installed_apps/fmw/Oracle_IDM1_IDP33/rcu/log/RCUCheckpointObj
Component schemas created:
Component Status Logfile
Oracle Internet Directory Failed /data/OIAM/installed_apps/fmw/Oracle_IDM1_IDP33/rcu/log/oid.log
Repository Creation Utility - Create : Operation Completed
Repository Creation Utility - Dropping and Cleanup of the failed components
Repository Dropping and Cleanup of the failed components in progress.
Percent Complete: 93
Percent Complete: -117
Percent Complete: 100
RCUUtil createOIDRepository status = 2------------------------------------------------- java.lang.Exception: RCU OID Schema Creation Failed
at oracle.as.idm.install.config.IdMDirectoryServicesManager.doExecute(IdMDirectoryServicesManager.java:792)
at oracle.as.install.engine.modules.configuration.client.ConfigAction.execute(ConfigAction.java:375)
at oracle.as.install.engine.modules.configuration.action.TaskPerformer.run(TaskPerformer.java:88)
at oracle.as.install.engine.modules.configuration.action.TaskPerformer.startConfigAction(TaskPerformer.java:105)
at oracle.as.install.engine.modules.configuration.action.ActionRequest.perform(ActionRequest.java:15)
at oracle.as.install.engine.modules.configuration.action.RequestQueue.perform(RequestQueue.java:96)
at oracle.as.install.engine.modules.configuration.standard.StandardConfigActionManager.start(StandardConfigActionManager.java:186)
at oracle.as.install.engine.modules.configuration.boot.ConfigurationExtension.kickstart(ConfigurationExtension.java:81)
at oracle.as.install.engine.modules.configuration.ConfigurationModule.run(ConfigurationModule.java:86)
at java.lang.Thread.run(Thread.java:662)
_File : ...///fmw/Oracle_IDM1_IDP33/rcu/log/oid.log_
CREATE UNIQUE INDEX rp_dn on ct_dn (parentdn,rdn)
*
ERROR at line 1:
ORA-01450: maximum key length (6398) exceeded
Update:
I looked at the logs again and tracked down which SQL statements were leading to the above error:
CREATE BIGFILE TABLESPACE "OLTS_CT_STORE" EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO DATAFILE '/data/OIAM/installed_apps/db/oradata/OIAMDB/gcats1_oid.dbf' SIZE 32M AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED;
CREATE TABLE ct_dn (
EntryID NUMBER NOT NULL,
RDN varchar2(1024) NOT NULL,
ParentDN varchar2(1024) NOT NULL)
ENABLE ROW MOVEMENT
TABLESPACE OLTS_CT_STORE MONITORING;
CREATE UNIQUE INDEX rp_dn on ct_dn (parentdn,rdn)
TABLESPACE OLTS_CT_STORE
PARALLEL COMPUTE STATISTICS;
I ran these statements from SQL*Plus and was able to create the index without issues, and as per the tablespace creation statement, autoextend is on. But when RCU (the Repository Creation Utility) runs to create the needed schemas, it fails with the same error as earlier. Any pointers?
Setting NLS_LENGTH_SEMANTICS=BYTE worked. With CHAR semantics and the AL32UTF8 character set, the two VARCHAR2(1024) columns in that index can require up to 4 bytes per character, which pushes the composite key past the 6398-byte limit reported for the 8K block size; with BYTE semantics the key fits.
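A minimal sketch of how that change might be applied before re-running RCU (assuming the instance can be restarted; the parameter only affects objects created afterwards):
SQL> ALTER SYSTEM SET NLS_LENGTH_SEMANTICS=BYTE SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP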

Oracle external tables - Specifying dynamic filename

CREATE TABLE LOG_FILES (
LOG_DTM VARCHAR(18),
LOG_TXT VARCHAR(300)
)
ORGANIZATION EXTERNAL(
TYPE ORACLE_LOADER
DEFAULT DIRECTORY LOG_DIR
ACCESS PARAMETERS(
RECORDS DELIMITED BY NEWLINE
FIELDS(
LOG_DTM position(1:18),
LOG_TXT position(19:300)
)
)
LOCATION('logadm')
)
REJECT LIMIT UNLIMITED
/
LOG_DIR is an oracle directory that points to /u/logs/
The problem, though, is that the contents of /u/logs/ look like this:
logadm_12012012.log
logadm_13012012.log
logadm_14012012.log
logadm_15012012.log
Is there any way I can specify the location of the file dynamically? i.e. every time I run SELECT * FROM LOG_FILES it should use the log file of the day (e.g. logadm_DDMMYYYY.log).
I know I can use ALTER TABLE log_files LOCATION ('logadm_15012012.log'), but I would prefer not to have to issue the ALTER command.
Any other possibilities?
Thanks
It's a shame you're running 10g. On 11g we can associate a pre-processor script - a shell script - with an external table. In your case you could run a script which would figure out the latest file and then issue a copy command. Something like:
cp logadm_15012012.log logadm
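A rough sketch of what such a script could look like, assuming the DDMMYYYY naming shown above and that the script is registered with the 11g PREPROCESSOR clause (whose standard output is what the external table reads); illustrative only, not a tested solution:
#!/bin/sh
# Resolve today's log file and emit its contents on stdout for the external table to read.
cat /u/logs/logadm_$(date +%d%m%Y).log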
Adrian Billington has blogged about this feature here. Frankly his write-up is more helpful than the official docs.
But as you're on 10g all you can do is run the ALTER TABLE statement, or use a scheduled job (cron or whatever) to sync a new file with the generic name.
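For the 10g route, the scheduled sync could be as small as a cron entry along these lines (directory and timing are assumptions; percent signs must be escaped in crontab):
# refresh the generic file shortly after midnight
5 0 * * * cp /u/logs/logadm_$(date +\%d\%m\%Y).log /u/logs/logadm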
