Open Oracle DB with broken redo log block header

We have a database for developers. Losing a handful of records is not critical; my only goal is to get the database available again.
The state of the database is as follows:
SQL> select instance_name, version, status from v$instance;
INSTANCE_NAME VERSION STATUS
---------------- ----------------- ------------
ora12 12.2.0.1.0 MOUNTED
If I try to open the database, it fails with:
SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-00354: corrupt redo log block header
ORA-00353: log corruption near block 14876 change 14597665 time 01/13/2018 17:17:33
ORA-00312: online log 1 thread 1: 'C:\ORACLE\DBADMIN\VIRTUAL\ORADATA\ORA12\REDO01.LOG'
As mentioned before: A small data loss is not relevant for this database. How can I open the database?
Edit, in response to kfinity's suggestion:
I tried kfinity's suggestions, with the following outcome.
C:\Windows\System32>sqlplus / as sysdba
SQL*Plus: Release 12.2.0.1.0 Production on Thu Feb 1 15:54:20 2018
Copyright (c) 1982, 2016, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
SQL> select instance_name, version, status from v$instance;
INSTANCE_NAME VERSION STATUS
---------------- ----------------- ------------
ora12 12.2.0.1.0 MOUNTED
SQL> recover database until cancel;
ORA-00279: change 14597437 generated at ... needed for thread 1
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
CANCEL
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: 'C:\ORACLE\DBADMIN\VIRTUAL\ORADATA\ORA12\SYSTEM01.DBF'
ORA-01112: media recovery not started
SQL> ALTER Database open resetlogs;
ALTER Database open resetlogs
*
ERROR at line 1:
ORA-00603: ORACLE server session terminated by fatal error
ORA-01092: ORACLE instance terminated. Disconnection forced
ORA-00600: internal error code, arguments: [4194], [32], [21], [], [], [], [],
[], [], [], [], []
Process ID: 1480
Session ID: 250 Serial number: 46338
Am I right that the database is damaged beyond repair?

The fix to "ORA-00354: corrupt redo log block header" is to clear the log file with the problem and then immediately take a full backup, since you'll have a gap in your redo history and won't be able to recover:
alter database clear unarchived logfile 'C:\ORACLE\DBADMIN\VIRTUAL\ORADATA\ORA12\REDO01.LOG';
As this article points out, if your log files are multiplexed (which they should be - keeping multiple copies on separate disks helps avoid corruption issues like this), then you can simply replace the corrupted redo log file with a clean copy of the same file from one of the other locations.
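To check whether a group is multiplexed and where its members live, you can query the standard v$log and v$logfile views; a small sketch (the view and column names are standard, the paths shown earlier come from the question):

-- list each redo log group, its status, and the files backing it
SELECT l.group#, l.status AS group_status, f.member, f.status AS member_status
FROM   v$log l
JOIN   v$logfile f ON f.group# = l.group#
ORDER  BY l.group#, f.member;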
Edit: if it won't let you clear a logfile because it's needed for recovery, then you need to do incomplete media recovery, where you basically only recover up until the corrupt redo logs (and throw the rest out). There are detailed guides on how to do this, but the basic idea is to do:
RECOVER DATABASE UNTIL CANCEL;
Which will let you apply the good redo logs to the current database state. Once you've gotten up to the corrupt one, you CANCEL and do:
ALTER DATABASE OPEN RESETLOGS;
Which discards the rest of the unapplied changes and resets the database to the last consistent state (SCN) you have.
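Putting the pieces together, a minimal SQL*Plus session for this kind of incomplete recovery would look roughly like the sketch below (standard commands; take the backup afterwards with whatever tool you normally use, e.g. RMAN):

SQL> STARTUP MOUNT;
SQL> RECOVER DATABASE UNTIL CANCEL;   -- apply the intact redo, answer CANCEL at the prompt for the corrupt log
SQL> ALTER DATABASE OPEN RESETLOGS;   -- discard unapplied redo and open at the last consistent SCN
-- then immediately take a full backup, e.g. RMAN> BACKUP DATABASE;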

Related

How do I export a schema with all data to my local instance on my PC (Oracle XE)

The Server is Oracle 11g 11.2.0.4.0
The local instance of my Oracle is "Oracle Database 18c Express Edition Release 18.0.0.0.0"
I want to export the schema called "AlphaTest" and all its associated triggers, tables, views, packages, etc.,
and import that to my local instance.
I don't have two databases available so I'll try to do it on my local 11gXE.
First, connect as a privileged user (SYS), check which directories I have, and grant the required privileges to the users I'll export from and import into:
SQL> connect sys as sysdba
Enter password:
Connected.
SQL> desc dba_directories
Name Null? Type
----------------------------------------- -------- ----------------------------
OWNER NOT NULL VARCHAR2(30)
DIRECTORY_NAME NOT NULL VARCHAR2(30)
DIRECTORY_PATH VARCHAR2(4000)
SQL> col directory_name format a15
SQL> col directory_path format a60
SQL> select directory_name, directory_path from dba_directories;
DIRECTORY_NAME DIRECTORY_PATH
--------------- ------------------------------------------------------------
TEST_DIR c:\
EXT_DIR c:\temp
ORACLECLRDIR C:\oraclexe\app\oracle\product\11.2.0\server\bin\clr
DATA_PUMP_DIR C:\oraclexe\app\oracle/admin/xe/dpdump/
XMLDIR C:\oraclexe\app\oracle\product\11.2.0\server\rdbms\xml
ORACLE_OCM_CONF C:\ADE\aime_xe28\oracle/ccr/state
IG_DIR
6 rows selected.
SQL> grant read, write on directory ext_dir to scott;
Grant succeeded.
SQL> grant read, write on directory ext_dir to mike;
Grant succeeded.
SQL>
If I didn't have any directory, I'd create it as
SQL> create directory brisime_dir as 'c:\temp';
Directory created.
SQL> grant read, write on directory brisime_Dir to scott;
Grant succeeded.
SQL>
and continue with ...
... Export:
SQL> exit
Disconnected from Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production
C:\Temp>expdp scott/tiger file=scott.dmp directory=ext_dir
Export: Release 11.2.0.2.0 - Production on Mon Feb 3 22:17:53 2020
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production
Legacy Mode Active due to the following parameters:
Legacy Mode Parameter: "file=scott.dmp" Location: Command Line, Replaced with: "dumpfile=scott.dmp"
Legacy Mode has set reuse_dumpfiles=true parameter.
Starting "SCOTT"."SYS_EXPORT_SCHEMA_01": scott/******** dumpfile=scott.dmp directory=ext_dir reuse_dumpfiles=true
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 320 KB
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/FUNCTION/FUNCTION
Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/VIEW/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "SCOTT"."ABC" 6.867 KB 6 rows
. . exported "SCOTT"."DEPT" 5.929 KB 4 rows
. . exported "SCOTT"."DUMMY" 5.007 KB 1 rows
. . exported "SCOTT"."EMP" 8.562 KB 14 rows
. . exported "SCOTT"."SALGRADE" 5.859 KB 5 rows
. . exported "SCOTT"."BONUS" 0 KB 0 rows
Master table "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SCOTT.SYS_EXPORT_SCHEMA_01 is:
C:\TEMP\SCOTT.DMP
Job "SCOTT"."SYS_EXPORT_SCHEMA_01" successfully completed at 22:18:02
C:\Temp>
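As the log shows, the legacy file= parameter was translated to dumpfile=. If you prefer the native Data Pump syntax, the equivalent command would be roughly (the logfile name here is just illustrative):

expdp scott/tiger schemas=scott directory=ext_dir dumpfile=scott.dmp logfile=scott_exp.log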
Import:
Use the SYSTEM account to perform it. REMAP_SCHEMA will create NEW_USER for you and do the import:
C:\Temp>impdp system/pwd file=scott.dmp directory=ext_dir remap_schema=scott:new_user
Import: Release 11.2.0.2.0 - Production on Mon Feb 3 22:25:38 2020
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production
Legacy Mode Active due to the following parameters:
Legacy Mode Parameter: "file=scott.dmp" Location: Command Line, Replaced with: "dumpfile=scott.dmp"
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01": system/******** dumpfile=scott.dmp directory=ext_dir remap_schema=scott:new_user
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "NEW_USER"."ABC" 6.867 KB 6 rows
. . imported "NEW_USER"."DEPT" 5.929 KB 4 rows
. . imported "NEW_USER"."DUMMY" 5.007 KB 1 rows
. . imported "NEW_USER"."EMP" 8.562 KB 14 rows
. . imported "NEW_USER"."SALGRADE" 5.859 KB 5 rows
. . imported "NEW_USER"."BONUS" 0 KB 0 rows
Processing object type SCHEMA_EXPORT/FUNCTION/FUNCTION
Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/VIEW/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "SYSTEM"."SYS_IMPORT_FULL_01" successfully completed at 22:25:41
C:\Temp>
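Again, the non-legacy form of the same import would be roughly (logfile name illustrative):

impdp system/pwd directory=ext_dir dumpfile=scott.dmp remap_schema=scott:new_user logfile=scott_imp.log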
Check ...
... whether everything is there:
C:\Temp>sqlplus sys as sysdba
SQL*Plus: Release 11.2.0.2.0 Production on Mon Feb 3 22:30:14 2020
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Enter password:
Connected to:
Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production
SQL> alter user new_user identified by new_pwd;
User altered.
SQL> connect new_user/new_pwd
Connected.
SQL> select * From tab;
TNAME TABTYPE CLUSTERID
------------------------------ ------- ----------
ABC TABLE
BONUS TABLE
DEPT TABLE
DUMMY TABLE
EMP TABLE
SALGRADE TABLE
V_EMP_DEPT VIEW
7 rows selected.
SQL>
Seems to be OK.
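For a slightly more thorough check than TAB, you could also compare object counts per type in both schemas (standard dictionary view):

SQL> select object_type, count(*) from user_objects group by object_type order by object_type;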

How to trace down more information about SQL Server session ID in the past?

I ran into a problem when one of my databases was in the "restoring" state.
After checking the error logs, I found out that someone had done something:
- Starting up database "mydb"
- The database "mydb" is marked RESTORING and is in a state that does not allow recovery to be run
- Starting up database "mydb"
- RESTORE DATABASE successfully processed 192392 pages in 178 seconds
All of these messages belong to source spid128.
But I couldn't trace down who did this.
I can check all of the current session IDs, but that's not what I want.
I'm looking for a way to check, let's say, information about that SPID from yesterday.
Is that possible?
The default trace captures backup and restore events so it will have details of the restore. However, since it's a rollover trace with a max of 5 files of 20MB each, older historical data might not be available depending on server activity.
Below is an example query to get backup/restore events from default trace files for the problem database:
SELECT
      te.name
    , tt.TextData
    , tt.StartTime
    , tt.HostName
    , tt.LoginName
    , tt.ApplicationName
FROM sys.traces AS t
CROSS APPLY fn_trace_gettable(
    REVERSE(N'crt.gol' + SUBSTRING(REVERSE(t.path), CHARINDEX(N'\', REVERSE(t.path)), 128)), default) AS tt
JOIN sys.trace_events AS te
    ON te.trace_event_id = tt.EventClass
JOIN sys.trace_subclass_values AS tesv
    ON tesv.trace_event_id = tt.EventClass
    AND tesv.subclass_value = tt.EventSubClass
WHERE
    t.is_default = 1 --default trace
    AND te.name = N'Audit Backup/Restore Event'
    AND tt.DatabaseName = N'mydb';
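If the query returns nothing at all, it's worth confirming the default trace is enabled on the server (it is by default; this just checks the configuration option):

SELECT name, value_in_use
FROM sys.configurations
WHERE name = N'default trace enabled';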

Cloning Standard:S0 database to a Basic edition (test/development)

Really simple question; I guess it is a bug, or something I got wrong.
I have a database in Azure on the Standard:S0 tier, currently 178 MB, and I want to make a copy (in a procedure run from master), but with the resulting database in the Basic pricing tier.
I thought of it as:
CREATE DATABASE MyDB_2 AS COPY OF MyDB ( EDITION = 'Basic')
With an unhappy result:
The database is created in pricing tier Standard:S0.
Then I tried:
CREATE DATABASE MyDB_2 AS COPY OF MyDB ( SERVICE_OBJECTIVE = 'Basic' )
or
CREATE DATABASE MyDB_2 AS COPY OF MyDB ( EDITION = 'Basic', SERVICE_OBJECTIVE = 'Basic' )
With an even unhappier result:
ERROR: Msg 40808, Level 16, State 1, The edition 'Standard' does not support the service objective 'Basic'.
I also tried:
CREATE DATABASE MyDB_2 AS COPY OF MyDB ( MAXSIZE = 500 MB, EDITION = 'Basic', SERVICE_OBJECTIVE = 'Basic' )
with an unhappier result:
ERROR: Msg 102, Level 15, State 1, Incorrect syntax near 'MAXSIZE'.
Am I doing something that isn't allowed?
If you try to copy your database via the portal, you'll notice that the Basic tier is not available, with the message 'A database can only be copied within the same tier as the original database.' The behavior is documented here: 'You can select the same server or a different server, its service tier and performance level, or a different performance level within the same service tier (edition). After the copy is complete, the copy becomes a fully functional, independent database. At this point, you can upgrade or downgrade it to any edition. The logins, users, and permissions can be managed independently.'
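So the workaround that follows from the documentation is to copy within the same tier and downgrade afterwards; a rough sketch (run from master, then downgrade once the copy has finished; the MAXSIZE value is one of the sizes the Basic edition allows):

-- 1) the copy stays in the source tier (Standard:S0)
CREATE DATABASE MyDB_2 AS COPY OF MyDB;

-- 2) from master, watch the copy progress and wait until it completes
SELECT percent_complete FROM sys.dm_database_copies;

-- 3) once complete, downgrade the copy
ALTER DATABASE MyDB_2 MODIFY (EDITION = 'Basic', SERVICE_OBJECTIVE = 'Basic', MAXSIZE = 500 MB);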

How to change the timezone in an Oracle database specific to one SID, on a system with many DB SIDs configured

I have to change the timezone for one particular database (SID). The DB server has multiple databases (SIDs) configured and installed.
When I connect to that particular SID and run the query below:
alter database set time_zone='-05:00'
I get the following error:
ERROR at line 1:
ORA-02231: missing or invalid option to ALTER DATABASE
But when I run
alter database set time_zone = 'EST';
that query does not give an error, but it does not solve my problem.
Note: I have multiple databases configured on the same DB server and I need to change the timezone for one particular database (SID) only. I can't change it at the system (OS) level or globally at the DB server level.
I am not able to change the time zone; can anyone help?
I did the following steps and it worked for me:
$ ps -ef|grep pmon
This will show a list like the one below:
ORADEV 7554 1 0 Oct28 ? 00:00:03 ora_pmon_MDEV230
ORADEV 20649 32630 0 03:39 pts/9 00:00:00 grep pmon
ORADEV 23386 1 0 Nov12 ? 00:00:00 ora_pmon_MQA230POC
Then I edited the oraenv file:
$ vi oraenv (this opens the file in the vi editor)
and added the following entry at the end of the file:
if [[ ${ORACLE_SID} = "MQA230POC" ]]; then
    TZ=EST+05EDT
    export TZ
    echo "Time Zone set to EST"
else
    TZ=PST+08PDT
    export TZ
    echo "Time Zone set to PST"
fi
The if [[ ${ORACLE_SID} = "MQA230POC" ]]; then line is critical: it selects the particular database.
Then run the following command to test it, and restart the database:
$ . oraenv
ORACLE_SID = [MQA230POC] ?
The Oracle base for ORACLE_HOME=/orasw/database12c/product/12.1.0.2/dbhome_1 is /orasw/database12c
Time Zone set to EST
$ sqlplus sys as sysdba
Enter password: XXXXX (provide the password)
It will give a message like below:
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
-- Run the commands below to restart the DB:
SQL> shut immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup;
ORACLE instance started.
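After the restart, you can sanity-check the effect from inside the instance; SYSTIMESTAMP reflects the OS time zone the instance was started with (standard functions, output will vary):

SQL> select systimestamp, dbtimezone, sessiontimezone from dual;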
It worked for me: I am able to set a different timezone for each database, which is what I was looking for. Hope it helps others.

Oracle Identity Federation - RCU OID Schema Creation Failure

I am trying to install OIF (Oracle Identity Federation) as per the OBE http://www.oracle.com/webfolder/technetwork/tutorials/obe/fmw/oif/11g/r1/oif_install/oif_install.htm
I have installed Oracle 11gR2 11.2.0.3 with charset = AL32UTF8, db_block_size of 8K and nls_length_semantics=CHAR, and created the database and listener needed.
Installed WebLogic 10.3.6.
Started the installation of OIM (Oracle Identity Management), choosing the install-and-configure option and the schema creation options.
The installation goes fine, but during configuration it fails. Below is the relevant part of the logs.
I have tried multiple times, only to fail again and again. Could someone kindly shed some light on what is going wrong here? Please let me know if you need more info on the setup...
File: ...//oraInventory/logs/install2013-05-30_01-18-31AM.out
ORA-01450: maximum key length (6398) exceeded
Percent Complete: 62
Repository Creation Utility: Create - Completion Summary
Database details:
Host Name : vccg-rh1.earth.com
Port : 1521
Service Name : OIAMDB
Connected As : sys
Prefix for (non-prefixable) Schema Owners : DEFAULT_PREFIX
RCU Logfile : /data/OIAM/installed_apps/fmw/Oracle_IDM1_IDP33/rcu/log/rcu.log
RCU Checkpoint Object : /data/OIAM/installed_apps/fmw/Oracle_IDM1_IDP33/rcu/log/RCUCheckpointObj
Component schemas created:
Component Status Logfile
Oracle Internet Directory Failed /data/OIAM/installed_apps/fmw/Oracle_IDM1_IDP33/rcu/log/oid.log
Repository Creation Utility - Create : Operation Completed
Repository Creation Utility - Dropping and Cleanup of the failed components
Repository Dropping and Cleanup of the failed components in progress.
Percent Complete: 93
Percent Complete: -117
Percent Complete: 100
RCUUtil createOIDRepository status = 2------------------------------------------------- java.lang.Exception: RCU OID Schema Creation Failed
at oracle.as.idm.install.config.IdMDirectoryServicesManager.doExecute(IdMDirectoryServicesManager.java:792)
at oracle.as.install.engine.modules.configuration.client.ConfigAction.execute(ConfigAction.java:375)
at oracle.as.install.engine.modules.configuration.action.TaskPerformer.run(TaskPerformer.java:88)
at oracle.as.install.engine.modules.configuration.action.TaskPerformer.startConfigAction(TaskPerformer.java:105)
at oracle.as.install.engine.modules.configuration.action.ActionRequest.perform(ActionRequest.java:15)
at oracle.as.install.engine.modules.configuration.action.RequestQueue.perform(RequestQueue.java:96)
at oracle.as.install.engine.modules.configuration.standard.StandardConfigActionManager.start(StandardConfigActionManager.java:186)
at oracle.as.install.engine.modules.configuration.boot.ConfigurationExtension.kickstart(ConfigurationExtension.java:81)
at oracle.as.install.engine.modules.configuration.ConfigurationModule.run(ConfigurationModule.java:86)
at java.lang.Thread.run(Thread.java:662)
File: ...///fmw/Oracle_IDM1_IDP33/rcu/log/oid.log
CREATE UNIQUE INDEX rp_dn on ct_dn (parentdn,rdn)
*
ERROR at line 1:
ORA-01450: maximum key length (6398) exceeded
Update:
I looked at the logs again and tracked down which SQL statements were leading to the above error:
CREATE BIGFILE TABLESPACE "OLTS_CT_STORE"
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE
  SEGMENT SPACE MANAGEMENT AUTO
  DATAFILE '/data/OIAM/installed_apps/db/oradata/OIAMDB/gcats1_oid.dbf' SIZE 32M
  AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED;
CREATE TABLE ct_dn (
  EntryID  NUMBER NOT NULL,
  RDN      varchar2(1024) NOT NULL,
  ParentDN varchar2(1024) NOT NULL)
ENABLE ROW MOVEMENT
TABLESPACE OLTS_CT_STORE MONITORING;
CREATE UNIQUE INDEX rp_dn ON ct_dn (parentdn, rdn)
  TABLESPACE OLTS_CT_STORE
  PARALLEL COMPUTE STATISTICS;
I ran these statements from SQL*Plus and was able to create the index without issues, and as per the tablespace creation statement, autoextend is on. But when RCU (the Repository Creation Utility) runs to create the needed schemas, it fails with the same error as before. Any pointers?
Setting NLS_LENGTH_SEMANTICS=BYTE worked. With CHAR semantics and the AL32UTF8 character set, each VARCHAR2(1024) column can take up to 4096 bytes, so the two-column index key can exceed the 6398-byte maximum key length for an 8K block size; with BYTE semantics the key fits.
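A minimal way to check and change the parameter before re-running RCU (it is a standard init parameter; note that already-created columns keep their semantics):

SQL> show parameter nls_length_semantics
SQL> alter system set nls_length_semantics = 'BYTE' scope=both;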
