Oracle Identity Federation - RCU OID Schema Creation Failure - database

I am trying to install OIF (Oracle Identity Federation) as per the OBE http://www.oracle.com/webfolder/technetwork/tutorials/obe/fmw/oif/11g/r1/oif_install/oif_install.htm
I have installed Oracle Database 11gR2 (11.2.0.3) with charset = AL32UTF8, db_block_size of 8K, and nls_length_semantics=CHAR, and created the database and listener needed.
Installed WebLogic 10.3.6.
Started the installation of OIM (Oracle Identity Management), choosing the install-and-configure option and the schema creation options.
Installation goes fine, but it fails during configuration. Below is the relevant part of the logs.
I have tried multiple times, only to fail again and again. If someone can kindly shed some light on what is going wrong here... Please let me know if you need more info on the setup.
_File : ...//oraInventory/logs/install2013-05-30_01-18-31AM.out_
ORA-01450: maximum key length (6398) exceeded
Percent Complete: 62
Repository Creation Utility: Create - Completion Summary
Database details:
Host Name : vccg-rh1.earth.com
Port : 1521
Service Name : OIAMDB
Connected As : sys
Prefix for (non-prefixable) Schema Owners : DEFAULT_PREFIX
RCU Logfile : /data/OIAM/installed_apps/fmw/Oracle_IDM1_IDP33/rcu/log/rcu.log
RCU Checkpoint Object : /data/OIAM/installed_apps/fmw/Oracle_IDM1_IDP33/rcu/log/RCUCheckpointObj
Component schemas created:
Component Status Logfile
Oracle Internet Directory Failed /data/OIAM/installed_apps/fmw/Oracle_IDM1_IDP33/rcu/log/oid.log
Repository Creation Utility - Create : Operation Completed
Repository Creation Utility - Dropping and Cleanup of the failed components
Repository Dropping and Cleanup of the failed components in progress.
Percent Complete: 93
Percent Complete: -117
Percent Complete: 100
RCUUtil createOIDRepository status = 2
-------------------------------------------------
java.lang.Exception: RCU OID Schema Creation Failed
at oracle.as.idm.install.config.IdMDirectoryServicesManager.doExecute(IdMDirectoryServicesManager.java:792)
at oracle.as.install.engine.modules.configuration.client.ConfigAction.execute(ConfigAction.java:375)
at oracle.as.install.engine.modules.configuration.action.TaskPerformer.run(TaskPerformer.java:88)
at oracle.as.install.engine.modules.configuration.action.TaskPerformer.startConfigAction(TaskPerformer.java:105)
at oracle.as.install.engine.modules.configuration.action.ActionRequest.perform(ActionRequest.java:15)
at oracle.as.install.engine.modules.configuration.action.RequestQueue.perform(RequestQueue.java:96)
at oracle.as.install.engine.modules.configuration.standard.StandardConfigActionManager.start(StandardConfigActionManager.java:186)
at oracle.as.install.engine.modules.configuration.boot.ConfigurationExtension.kickstart(ConfigurationExtension.java:81)
at oracle.as.install.engine.modules.configuration.ConfigurationModule.run(ConfigurationModule.java:86)
at java.lang.Thread.run(Thread.java:662)
_File : ...///fmw/Oracle_IDM1_IDP33/rcu/log/oid.log_
CREATE UNIQUE INDEX rp_dn on ct_dn (parentdn,rdn)
*
ERROR at line 1:
ORA-01450: maximum key length (6398) exceeded
Update :
I looked at the logs again and tracked which SQL statements were leading to the above error:
CREATE BIGFILE TABLESPACE "OLTS_CT_STORE" EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO DATAFILE '/data/OIAM/installed_apps/db/oradata/OIAMDB/gcats1_oid.dbf' SIZE 32M AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED;
CREATE TABLE ct_dn (
EntryID NUMBER NOT NULL,
RDN varchar2(1024) NOT NULL,
ParentDN varchar2(1024) NOT NULL)
ENABLE ROW MOVEMENT
TABLESPACE OLTS_CT_STORE MONITORING;
CREATE UNIQUE INDEX rp_dn on ct_dn (parentdn,rdn)
TABLESPACE OLTS_CT_STORE
PARALLEL COMPUTE STATISTICS;
I ran these statements from SQL*Plus myself and was able to create the index without issue, and per the tablespace creation statement, autoextend is on. But when RCU (the Repository Creation Utility) runs to create the needed schemas, it fails with the same error as earlier. Any pointers?

Setting NLS_LENGTH_SEMANTICS=BYTE worked
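For what it's worth, the numbers line up with that fix. A back-of-the-envelope sketch, using the figures from the question (AL32UTF8's 4-bytes-per-character worst case, the two varchar2(1024) index columns, and the 6398-byte key limit that corresponds to the 8K block size):

```shell
# ORA-01450 arithmetic: with NLS_LENGTH_SEMANTICS=CHAR and AL32UTF8,
# Oracle sizes each VARCHAR2 column for a 4-bytes-per-character worst
# case, so the two indexed columns together overflow the key limit.
col_chars=1024          # RDN and ParentDN are both varchar2(1024)
max_key=6398            # limit reported in the log (8K block size)

key_char_semantics=$(( 2 * col_chars * 4 ))   # CHAR semantics, AL32UTF8
key_byte_semantics=$(( 2 * col_chars * 1 ))   # BYTE semantics

echo "CHAR semantics worst case: ${key_char_semantics} bytes (limit ${max_key})"
echo "BYTE semantics worst case: ${key_byte_semantics} bytes (limit ${max_key})"
```

8192 bytes exceeds the limit, 2048 does not, which is why the manual SQL*Plus run (against a database created with different semantics in effect) could succeed while RCU failed.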

Related

Import Oracle 11g schema dump in parts (metadata, then data)

I've been trying to import an Oracle 11g schema dump using an Oracle Docker container (the dump file size is 750 MB).
First I had to create a tablespace with the same name in the client DB and assign the same username to it, then grant it create (session, table, any procedure, view):
create tablespace TABLESPACE datafile 'TABLESPACE.dbf' size 64M reuse autoextend ON next
64M maxsize unlimited default storage (initial 10M next 1M pctincrease 10);
create user USERNAME identified by PASS default tablespace TABLESPACE quota
unlimited on TABLESPACE;
The impdp command:
impdp user/pass@localhost DIRECTORY=DATA_PUMP_DIR DUMPFILE=dumpfile.dmp LOGFILE=log_file.log SCHEMAS=SCHEMA_NAME
The execution took about 4 hours, and then I was left with a large list of errors:
ORA-31693: Table data object "SCHEMA_NAME"."Table_name" failed to load/unload and is being skipped due to error:
ORA-00001: unique constraint (SCHEMA_NAME.FK_name) violated
and tables were created, but most were empty.
- I tried different import options such as a full import (full=y).
- Another way I tried was to import in 2 stages (metadata, then data):
impdp ... CONTENT=METADATA_ONLY then impdp ... CONTENT=DATA_ONLY
- I also tried importing while excluding constraints, ref_constraints and triggers, then executing a data_only import.
But all these executions were erroneous, and I can't deal with each FK separately because there are over 200 FKs in this database.
So I was wondering: what's the proper way to import a dump without breaking the FK constraints, and is it OK to import metadata and then data?
Also, how can I check whether the dump provided by the client was dumped correctly or might have some flaws in it?
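One common pattern, sketched below as command strings rather than live impdp runs (the connection string, dump file and schema names are the placeholders from the question): import the metadata without constraints or indexes, load the data into the still-empty tables, then bring the constraints and indexes in afterwards.

```shell
# Placeholder names from the question; printf shows the commands
# instead of executing them.
CONN="user/pass@localhost"
DP_ARGS="DIRECTORY=DATA_PUMP_DIR DUMPFILE=dumpfile.dmp SCHEMAS=SCHEMA_NAME"

# Stage 1: metadata only, without the constraints/indexes that would
# reject rows or slow down the bulk load.
stage1="impdp $CONN $DP_ARGS CONTENT=METADATA_ONLY EXCLUDE=CONSTRAINT,REF_CONSTRAINT,INDEX"

# Stage 2: data only, into the empty tables created in stage 1.
stage2="impdp $CONN $DP_ARGS CONTENT=DATA_ONLY"

# Stage 3: add the indexes and constraints once the data is loaded.
stage3="impdp $CONN $DP_ARGS CONTENT=METADATA_ONLY INCLUDE=CONSTRAINT,REF_CONSTRAINT,INDEX"

printf '%s\n' "$stage1" "$stage2" "$stage3"
```

If the ORA-00001 errors came from loading into tables that already contained rows, adding TABLE_EXISTS_ACTION=TRUNCATE to the data-only pass may also help. And `impdp ... SQLFILE=ddl.sql` writes the dump's DDL to a file without importing anything, which is a cheap way to sanity-check the client's dump.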

Docker / Oracle Database / Volume Persistence / Create Tablespace

I am building a Dev Docker environment and I have to set up an Oracle 19c database.
I have been successful... but not at 100%.
Everything runs correctly: I can create a tablespace, a user/schema, create a table, insert data, and access that data via NodeJs... until I restart the container.
All the tutorials show mounting a volume pointing to /opt/oracle/oradata:
volumes:
- ./database/OracleDB/oradata:/opt/oracle/oradata
But the tablespaces are created by default in /opt/oracle/product/19c/dbhome_1/dbs.
I tried to add a volume pointing to that directory
volumes:
- ./database/OracleDB/oradata:/opt/oracle/oradata
- ./database/OracleDB/dbs:/opt/oracle/product/19c/dbhome_1/dbs/
But I receive the following error: Error response from daemon: path /home/myusr/docker-base/database/OracleDB/dbs is mounted on / but it is not a shared mount.
Anybody has already faced this issue and found a solution?
Of course, I am continuing to search for a solution ;)
System Information
Windows 10 Professional with WSL2
Docker version 20.10.8, build 3967b7d
Oracle Database 19c
UPDATE 1
Based on Roberto's comments. Unfortunately, it is not working.
UPDATE 2
I tried the following
CREATE TABLESPACE tbs1_test DATAFILE '/opt/oracle/oradata/tbs1_test' SIZE 100M AUTOEXTEND ON NEXT 100M MAXSIZE 10G;
and it created the file in the desired location.
When you don't change the value of db_create_file_dest, Oracle will use its default destination for datafiles. In your case, when you executed your CREATE TABLESPACE command, the datafile was created in that default location. That is why it does not appear in your desired directory.
1. Connect as sysdba to the database.
2. Execute:
SQL> alter system set db_create_file_dest = '/opt/oracle/oradata/ORCLCDB' scope=both;
3. As you already have a volume for the directory above, remove the other volume specification, since that path is already shared under /.
4. Drop the tablespace and create it again (if it is empty):
SQL> DROP TABLESPACE tbs1_test including contents and datafiles;
SQL> CREATE TABLESPACE tbs1_test DATAFILE 'tbs1_test' SIZE 100M AUTOEXTEND ON NEXT 100M MAXSIZE 10G;
5. Verify that the datafile is now in the right volume:
SQL> select file_id, file_name from dba_data_files where tablespace_name = 'TBS1_TEST' ;
If you want to dig deeper into how to create specific volumes inside a Docker image, check this post on Stack Overflow; it is one of the best IMHO:
How to mount host volumes into docker containers in Dockerfile during build
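An alternative worth trying, hedged since I can't reproduce the WSL2 setup: use a Docker named volume for the dbs directory instead of a second bind mount. Named volumes are managed by Docker itself and typically avoid the "not a shared mount" error that host-path mounts can hit under WSL2. The volume name (`oracle_dbs`) and image name below are placeholders; the commands are printed rather than executed.

```shell
# A named volume sidesteps the WSL2 bind-mount sharing restriction that
# the second host-path mount runs into. Names here are placeholders.
create_cmd="docker volume create oracle_dbs"
run_cmd="docker run -d --name oracle19c \
  -v ./database/OracleDB/oradata:/opt/oracle/oradata \
  -v oracle_dbs:/opt/oracle/product/19c/dbhome_1/dbs \
  oracle/database:19.3.0-ee"

printf '%s\n%s\n' "$create_cmd" "$run_cmd"
```

In docker-compose, the same idea is a named entry under the top-level `volumes:` key, referenced from the service's volume list; the tablespace files then persist inside the Docker-managed volume across container restarts.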

NiFi connection to SqlServer for ExecuteSQL

I'm trying to import some data from different SqlServer databases using ExecuteSQL in NiFi, but it's returning an error. I've already imported a lot of other tables from MySQL databases without any problem, and I'm trying to use the same workflow structure for the SqlServer DBs.
The structure is as follows:
There's a .txt file with the list of tables to be imported.
This file is fetched, split and updated, so there's a FlowFile for each table of each DB that has to be imported.
These FlowFiles are passed to ExecuteSQL, which executes their contents.
For example:
file.txt
table1
table2
table3
is turned into 3 different FlowFiles:
FlowFile1
SELECT * FROM table1
FlowFile2
SELECT * FROM table2
FlowFile3
SELECT * FROM table3
which are passed to ExecuteSQL.
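The list-to-statement step can be sketched outside NiFi (a minimal stand-in for the fetch/split/update processors, not the NiFi flow itself):

```shell
# Turn each line of file.txt into the SQL statement one FlowFile
# would carry into ExecuteSQL.
printf 'table1\ntable2\ntable3\n' > file.txt

statements=$(while IFS= read -r table; do
  echo "SELECT * FROM ${table}"
done < file.txt)

echo "$statements"
```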
Here is the configuration of ExecuteSQL (identical for SqlServer tables and MySQL ones):
ExecuteSQL
As the only difference from the MySQL import is in the connectors, this is how a generic MySQL connector has been configured:
SETTINGS / PROPERTIES
Database Connection URL jdbc:mysql://00.00.00.00/DataBase?zeroDateTimeBehavior=convertToNull&autoReconnect=true
Database Driver Class Name com.mysql.jdbc.Driver
Database Driver Location(s) file:///path/mysql-connector-java-5.1.47-bin.jar
Database User user
Password Sensitive value set
Max Wait Time 500 millis
Max Total Connections 8
Validation query No value set
And this is how a SqlServer connector has been configured:
SETTINGS / PROPERTIES
Database Connection URL jdbc:jtds:sqlserver://00.00.00.00/DataBase;useNTLMv2=true;integratedSecurity=true;
Database Driver Class Name net.sourceforge.jtds.jdbc.Driver
Database Driver Location(s) /path/connectors/jtds-1.3.1.jar
Database User user
Password Sensitive value set
Max Wait Time -1
Max Total Connections 8
Validation query No value set
It has to be noticed that one (only one!) SqlServer connector works, and the ExecuteSQL processor imports the data without any problem. The even stranger thing is that the database reached via this connector is located in the same place as two others (the connection URL and user/psw are identical), but only the first one is working.
Note that I've also tried appending ?zeroDateTimeBehavior=convertToNull&autoReconnect=true to the SqlServer connections, supposing it was a problem with date types, but it didn't bring any positive change.
Here is the error that is being returned:
12:02:46 CEST ERROR f1553b83-a173-1c0f-93cb-1c32f0f46d1d
00.00.00.00:0000 ExecuteSQL[id=****] ExecuteSQL[id=****] failed to process session due to null; Processor Administratively Yielded for 1 sec: java.lang.AbstractMethodError
Error retrieved from logs:
ERROR [Timer-Driven Process Thread-49] o.a.nifi.processors.standard.ExecuteSQL ExecuteSQL[id=****] ExecuteSQL[id=****] failed to process session due to java.lang.AbstractMethodError; Processor Administratively Yielded for 1 sec: java.lang.AbstractMethodError
java.lang.AbstractMethodError: null
at net.sourceforge.jtds.jdbc.JtdsConnection.isValid(JtdsConnection.java:2833)
at org.apache.commons.dbcp2.DelegatingConnection.isValid(DelegatingConnection.java:874)
at org.apache.commons.dbcp2.PoolableConnection.validate(PoolableConnection.java:270)
at org.apache.commons.dbcp2.PoolableConnectionFactory.validateConnection(PoolableConnectionFactory.java:389)
at org.apache.commons.dbcp2.BasicDataSource.validateConnectionFactory(BasicDataSource.java:2398)
at org.apache.commons.dbcp2.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:2381)
at org.apache.commons.dbcp2.BasicDataSource.createDataSource(BasicDataSource.java:2110)
at org.apache.commons.dbcp2.BasicDataSource.getConnection(BasicDataSource.java:1563)
at org.apache.nifi.dbcp.DBCPConnectionPool.getConnection(DBCPConnectionPool.java:305)
at org.apache.nifi.dbcp.DBCPService.getConnection(DBCPService.java:49)
at sun.reflect.GeneratedMethodAccessor1696.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.nifi.controller.service.StandardControllerServiceInvocationHandler.invoke(StandardControllerServiceInvocationHandler.java:84)
at com.sun.proxy.$Proxy449.getConnection(Unknown Source)
at org.apache.nifi.processors.standard.AbstractExecuteSQL.onTrigger(AbstractExecuteSQL.java:195)
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1165)
at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:203)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

Open Oracle DB with broken redo log block header

We have a database for developers. Losing a handful of records would be uncritical; the only goal is to have the database available again.
The state of the database is as follows:
SQL> select instance_name, version, status from v$instance;
INSTANCE_NAME VERSION STATUS
---------------- ----------------- ------------
ora12 12.2.0.1.0 MOUNTED
If I try to open the database, it fails with:
SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-00354: corrupt redo log block header
ORA-00353: log corruption near block 14876, change 14597665, time 01/13/2018 17:17:33
ORA-00312: online log 1 thread 1: 'C:\ORACLE\DBADMIN\VIRTUAL\ORADATA\ORA12\REDO01.LOG'
As mentioned before: A small data loss is not relevant for this database. How can I open the database?
Edit because of the suggestion of kfinity:
I tried kfinity's suggestions, with the following outcome.
C:\Windows\System32>sqlplus / as sysdba
SQL*Plus: Release 12.2.0.1.0 Production on Thu Feb 1 15:54:20 2018
Copyright (c) 1982, 2016, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
SQL> select instance_name, version, status from v$instance;
INSTANCE_NAME VERSION STATUS
---------------- ----------------- ------------
ora12 12.2.0.1.0 MOUNTED
SQL> recover database until cancel;
ORA-00279: change 14597437 generated by . needed for thread 1
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
CANCEL
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get the
following error
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: 'C:\ORACLE\DBADMIN\VIRTUAL\ORADATA\ORA12\SYSTEM01.DBF'
ORA-01112: media recovery not started
SQL> ALTER Database open resetlogs;
ALTER Database open resetlogs
*
ERROR at line 1:
ORA-00603: ORACLE server session terminated by fatal error
ORA-01092: ORACLE instance terminated. Disconnection forced
ORA-00600: internal error code, arguments: [4194], [32], [21], [], [], [], [],
[], [], [], [], []
Process ID: 1480
Session ID: 250, serial number: 46338
Am I right, that the database is damaged beyond repair?
The fix to "ORA-00354: corrupt redo log block header" is to clear the log file with the problem and then immediately take a full backup, since you'll have a gap in your redo history and won't be able to recover:
alter database clear unarchived logfile 'C:\ORACLE\DBADMIN\VIRTUAL\ORADATA\ORA12\REDO01.LOG';
As this article points out, if your log files are multiplexed (which they should be - keeping multiple copies on separate disks helps avoid corruption issues like this), then you can simply replace the corrupted redo log file with a clean copy of the same file from one of the other locations.
Edit: if it won't let you clear a logfile because it's needed for recovery, then you need to do incomplete media recovery, where you basically only recover up until the corrupt redo logs (and throw the rest out). There are detailed guides on how to do this, but the basic idea is to do:
RECOVER DATABASE UNTIL CANCEL;
Which will let you apply the good redo logs to the current database state. Once you've gotten up to the corrupt one, you CANCEL and do:
ALTER DATABASE OPEN RESETLOGS;
Which discards the rest of the unapplied changes and resets the database to the last consistent state (SCN) you have.

How to change the timezone in an Oracle database specific to one SID, on a system with many DB SIDs configured

I have to change the timezone for one particular database (SID), on a DB server that has multiple databases (SIDs) configured and installed.
When I connected to the particular SID and ran the query below:
alter database set time_zone='-05:00'
I got the error below:
ERROR at line 1:
ORA-02231: missing or invalid option to ALTER DATABASE
But when I run
alter database set time_zone = 'EST';
the query does not give an error either, yet the timezone does not change.
Note: I have multiple databases configured on the same DB server. I need to change the timezone for one particular database (SID); I can't change it at the system (OS) level or globally at the DB server level.
I am not able to change the time zone. Can anyone help?
I did the following steps and it worked for me:
$ ps -ef | grep pmon
This will show a list like the one below:
ORADEV 7554 1 0 Oct28 ? 00:00:03 ora_pmon_MDEV230
ORADEV 20649 32630 0 03:39 pts/9 00:00:00 grep pmon
ORADEV 23386 1 0 Nov12 ? 00:00:00 ora_pmon_MQA230POC
Then I edited the oraenv file:
$ vi oraenv (this opens the file in the vi editor)
and added the following entry at the end of the file:
if [[ ${ORACLE_SID} = "MQA230POC" ]]; then
TZ=EST+05EDT
export TZ
echo "Time Zone set to EST"
else
TZ=PST+08EDT
export TZ
echo "Time Zone set to PST"
fi
The line if [[ ${ORACLE_SID} = "MQA230POC" ]]; then is the critical one for selecting the particular database.
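That branch can be checked on its own, outside oraenv (a sketch, rewritten with POSIX [ ] so it runs in plain sh; the SIDs are taken from the ps output above):

```shell
# Stand-alone check of the oraenv branch: which TZ gets exported
# depends on the ORACLE_SID the environment is being set for.
set_tz_for_sid() {
  ORACLE_SID="$1"
  if [ "${ORACLE_SID}" = "MQA230POC" ]; then
    TZ=EST+05EDT
    echo "Time Zone set to EST"
  else
    TZ=PST+08EDT
    echo "Time Zone set to PST"
  fi
  export TZ
}

set_tz_for_sid MQA230POC   # prints: Time Zone set to EST
set_tz_for_sid MDEV230     # prints: Time Zone set to PST
```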
Then run the following command to test it, and restart the database:
$ . oraenv
ORACLE_SID = [MQA230POC] ?
The Oracle base for ORACLE_HOME=/orasw/database12c/product/12.1.0.2/dbhome_1 is /orasw/database12c
Time Zone set to EST
$ sqlplus sys as sysdba
Enter password: XXXXX (provide the password)
It will print a message like the one below:
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
-- Run the commands below to restart the DB:
SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup;
ORACLE instance started.
It worked for me: I am able to set a different timezone for each database, which is what I was seeking. I hope it will help others.
