The query looks like this:

select 1 from dual@link1
union
select 1 from dual@link2

and I get a read-only access error.

I have users in all three databases, and they are all read-only users. When I ran "select 1 from dual@link1" on its own I also got the read-only error, so I changed the query to:

set transaction read only;
select 1 from dual@link1;

and that solved it. Then I tried this query again:

select 1 from dual@link1
union
select 1 from dual@link2

and it errored again. I am not sure why one link works but two links don't. Does anyone know?
It is quite a surprising limitation of having the database in read-only mode with multiple dblinks involved.
Once you have read from the first dblink, you must close the current transaction (!) and only then read from the second dblink. This obviously prevents you from having a single SELECT that joins tables over both dblinks.
It is documented, but rather hard to find. In Starting Up and Shutting Down you can see:
When executing on a read-only database, you must commit or roll back any in-progress transaction that involves one database link before you use another database link. This is true even if you execute a generic SELECT statement on the first database link and the transaction is currently read-only.
To reiterate the concept: if you have access to My Oracle Support (Metalink), there is an Oracle note in response to a customer (Document 1296288.1) with an example clarifying the limitation.
SQL> select * from emp@link_emp_chicago;
select * from emp@link_emp_chicago
*
ERROR at line 1:
ORA-16000: database open for read-only access
Solution
SQL> select open_mode,database_role from v$database;
OPEN_MODE DATABASE_ROLE
---------- ----------------
READ ONLY PHYSICAL STANDBY
SQL> select owner,db_link from all_db_links;
OWNER
------------------------------
DB_LINK
------------------------------
PUBLIC
LINK_EMP_CHICAGO
SQL> select * from emp@link_emp_chicago;
select * from emp@link_emp_chicago
*
ERROR at line 1:
ORA-16000: database open for read-only access
SQL> set transaction read only;
Transaction set.
SQL> select * from emp@link_emp_chicago;
     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
      7369 SMITH      CLERK           7902 17-DEC-80        800                    20
      7499 ALLEN      SALESMAN        7698 20-FEB-81       1600        300         30
      7521 WARD       SALESMAN        7698 22-FEB-81       1250        500         30
Just to speculate, the reason may be related to the fact that, in the words of Tom Kyte
distributed stuff starts a transaction "just in case".
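In practice, the workaround on a read-only database is to end the transaction between the two dblink reads, exactly as the documentation quoted above requires. A minimal sketch, reusing the link names from the question:

```sql
set transaction read only;
select 1 from dual@link1;
commit;                        -- close the "just in case" transaction

set transaction read only;
select 1 from dual@link2;
commit;
```

Note that this only helps when the two result sets can be fetched separately; a single UNION or join across both links still fails with ORA-16000.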
Related
I am trying to achieve a task in SQL Server. I'm sharing a sample problem, as I couldn't share the entire task description.
Problem: we have a table called Person as follows:
Person_Id Person_Name Person_Age
--------- ----------- ----------
1 AAA 25
2 BBB 25
3 CCC 25
4 DDD 25
From that table, I want the record with Person_Id = 4 to be held inside a TRANSACTION:
Person_Id Person_Name Person_Age
--------- ----------- ----------
4 DDD 25
While the above transaction is running, other users want to access (INSERT, UPDATE, DELETE) all the records other than Person_Id = 4, i.e.:
Person_Id Person_Name Person_Age
--------- ----------- ----------
1 AAA 25
2 BBB 25
3 CCC 25
What I tried:
I tried NOLOCK and ROWLOCK, but I couldn't achieve this. Kindly help me achieve this scenario. I have also tried this link. As per that link, using
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
the SELECT query fetches the unmodified data. For example, suppose I UPDATE the record inside a TRANSACTION; the record gets updated, but the TRANSACTION is still busy executing other statements:
Person_Id Person_Name Person_Age
--------- ----------- ----------
4 DDD 25
Now, when other connections try to SELECT the records in the table, all the other records are returned along with the record Person_Id = 4 (with its old value).
SERIALIZABLE Specifies the following:
Statements cannot read data that has been modified but not yet
committed by other transactions.
As the above shows, even when I use SERIALIZABLE isolation, it still returns the OLD version of the Person_Id = 4 record, which I don't want in this case.
I want to get all the records other than the ones held in a TRANSACTION.
In other words, if a record is locked by a TRANSACTION, that record should not appear in any SELECT executed from other connections.
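The behaviour described (other readers seeing the last committed version of the locked row) is standard SQL Server locking/versioning. One technique often used to make locked rows disappear from other readers, which is not mentioned in the question and is offered here only as a hedged sketch, is the READPAST table hint:

```sql
-- Session 1: hold a lock on Person_Id = 4 inside an open transaction
BEGIN TRANSACTION;
UPDATE Person SET Person_Age = 26 WHERE Person_Id = 4;
-- ... transaction intentionally left open ...

-- Session 2: READPAST skips rows locked by other transactions, so while
-- session 1 is open, only Person_Id 1, 2 and 3 are returned
SELECT Person_Id, Person_Name, Person_Age
FROM   Person WITH (READPAST);
```

Whether READPAST is acceptable depends on the task, since it silently omits rows rather than waiting for them.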
I know that syscat.tables exists in DB2.
I also tried to find the count in user_tables, and I got this output:
db2 => select count(*) from user_tables
1
-----------
999
1 record(s) selected.
but I couldn't describe the table user_tables, while I can describe any other table.
Example:
db2 => describe table user_tables
Data type Column
Column name schema Data type name Length Scale Nulls
------------------------------- --------- ------------------- ---------- ----- ------
0 record(s) selected.
SQL0100W No row was found for FETCH, UPDATE or DELETE; or the result of a
query is an empty table. SQLSTATE=02000
Could you help me understand why this is happening?
DB2 has an Oracle compatibility mode which needs to be enabled for a database. As part of this, users can opt to have Oracle data-dictionary-compatible views created; one of those views is user_tables.
Could you try the following (not tested):
describe select * from user_tables
This should return the schema for the result table which is that view.
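For reference, the compatibility mode itself is controlled by a registry variable that must be set before the database is created. A sketch of the configuration (the variable name and the ORA value are the documented ones; the exact procedure for an existing system may differ):

```
db2set DB2_COMPATIBILITY_VECTOR=ORA
db2stop
db2start
-- databases created after this get the Oracle-compatible views,
-- including user_tables
```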
SELECT * FROM systables WHERE SYSTEM_TABLE_SCHEMA ='YOURSCHEMA'
I have to find the leader of a group and update each employee's leader. I am not sure how to proceed with this in DataStage.
I have an employee table as shown below
Emp_id  mgr_id  leader_id
------  ------  ---------
1       100     400
101     201     500
3       202     600
I get a file to update the employee table when an employee changes group. Change code = CHG means it is a job/group change.
I do an equi-join between the file and the employee table and can update the manager id. At the same time, I need to find the leader: get all the employees who report to that top-level leader and use it as the leader_id for every such employee.
File:
emp_id mgr_id chg_cd
1 102 CHG
101 301 CHG
File row 1: there is a change of manager for emp_id = 1; I need to update mgr_id and leader_id in the employee table.
File row 2: there is a change of manager for emp_id = 101; I need to update mgr_id and leader_id in the employee table.
Can you please suggest me on how to proceed with this in DataStage?
OK, this problem requires recursion, and DataStage has no way to do it (at least when the number of levels between managers and leaders is variable).
So load the data into a database table and use recursive SQL to query it; this will provide the solution you are asking for.
Example:
Extract all leaders with the business units they manage (including the different levels) with a recursive SQL statement, and use this data in a DataStage lookup to enrich the file data.
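To make that suggestion concrete, here is a minimal sketch of such a recursive query. The table and column names are assumed from the question, and the exact recursive-CTE syntax varies slightly between databases:

```sql
-- Walk up the mgr_id chain from every employee; a manager who has no
-- employee row of their own is treated as the top-level leader.
WITH chain (emp_id, mgr_id) AS (
    SELECT emp_id, mgr_id
    FROM   employee
    UNION ALL
    SELECT c.emp_id, e.mgr_id
    FROM   chain c
    JOIN   employee e ON e.emp_id = c.mgr_id
)
SELECT emp_id, mgr_id AS leader_id
FROM   chain
WHERE  mgr_id NOT IN (SELECT emp_id FROM employee);
```

The result (one leader per employee) can then feed a DataStage lookup stage to enrich the incoming file.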
I would like to understand the contents of the Oracle data dictionary view ALL_UPDATABLE_COLUMNS. The documentation says that
ALL_UPDATABLE_COLUMNS describes all columns in a join view that are updatable by the current user, subject to appropriate privileges.
I understand how some columns in join views cannot be updated, but to my surprise, selecting from this view I found that regular tables and their columns are also listed. Is there any scenario in which a particular column of a regular table is not updatable (assuming that I have update rights at the table level)?
There are cases where a column of a table is not updatable. For example, if I create a virtual column (though this is only available starting in 11.1), I cannot update the data in that column:
SQL> ed
Wrote file afiedt.buf
1 create table foo (
2 col1 number,
3 col2 number generated always as (round(col1,2)) virtual
4* )
SQL> /
Table created.
SQL> insert into foo( col1 ) values( 1.77665 );
1 row created.
SQL> select * from foo;
COL1 COL2
---------- ----------
1.77665 1.78
SQL> update foo set col2 = 2;
update foo set col2 = 2
*
ERROR at line 1:
ORA-54017: UPDATE operation disallowed on virtual columns
Interestingly, though, all_updatable_columns incorrectly indicates that I can update the virtual column
SQL> ed
Wrote file afiedt.buf
1 select column_name, updatable, insertable, deletable
2 from all_updatable_columns
3 where owner = 'SCOTT'
4* and table_name = 'FOO'
SQL> /
COLUMN_NAME UPD INS DEL
------------------------------ --- --- ---
COL1 YES YES YES
COL2 YES YES YES
If we restrict ourselves to Oracle 10g (per the tag), I don't believe there is a way to define a column in a table that cannot be updated. You could put the entire table in a read-only tablespace, which will prevent you from updating any column, but I wouldn't expect that to be reflected in all_updatable_columns.
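The read-only tablespace approach can be sketched like this (the tablespace name is illustrative, and FOO is assumed to be stored in it):

```sql
-- Any DML against tables stored in the tablespace now fails
ALTER TABLESPACE example_ts READ ONLY;

UPDATE foo SET col1 = 2;   -- raises an error while the tablespace is read only

-- Re-enable writes later
ALTER TABLESPACE example_ts READ WRITE;
```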
I have a rather large Oracle PL/SQL script I'm testing, and I want to know whether it is possible to see which records were updated/deleted/inserted since the last commit to the database. I need a faster way to check that all the database actions were done correctly. I have access to the command line as well as Oracle's SQL Developer tool.
In Oracle 10g (and starting with 9i, I think) you can use Flashback Query for this.
Normally, Flashback Query is used when you need to see data as it was some time ago, but in your case the trick is that Flashback Query sees only committed data.
So, here's a quick example:
SQL> create table t1 as select level lev from dual connect by level < 100;
Table created.
SQL> select count(*) from t1;
COUNT(*)
----------
99
SQL> select count(*) from t1 as of timestamp systimestamp;
COUNT(*)
----------
99
SQL> update t1 set lev = -lev;
99 rows updated.
SQL> select max(lev) from t1 as of timestamp systimestamp;
MAX(LEV)
----------
99
SQL> select max(lev) from t1;
MAX(LEV)
----------
-1
SQL> commit;
Commit complete.
SQL> select max(lev) from t1 as of timestamp systimestamp;
MAX(LEV)
----------
-1
SQL>
UPD: even better, you can use Flashback Version Query or Flashback Transaction Query with some tweaking to filter changes made by all sessions except your current session.
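A sketch of Flashback Version Query, reusing the T1 table from the example above (VERSIONS_XID, VERSIONS_OPERATION and VERSIONS_STARTTIME are standard pseudocolumns):

```sql
-- Every committed row version of T1 over the last 15 minutes, with the
-- transaction id and operation (I/U/D) that produced each version
SELECT versions_xid, versions_operation, lev
FROM   t1
       VERSIONS BETWEEN TIMESTAMP
           systimestamp - INTERVAL '15' MINUTE AND systimestamp
ORDER  BY versions_starttime;
```

Filtering on VERSIONS_XID then lets you separate your own session's transactions from everyone else's.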
If your environment allows it, you could add triggers to each and every table, creating some kind of audit log.
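A minimal sketch of such an audit trigger, again reusing the T1 table from the Flashback example (the audit table and trigger names are made up):

```sql
CREATE TABLE t1_audit (
    changed_at  TIMESTAMP,
    operation   VARCHAR2(6),
    old_lev     NUMBER,
    new_lev     NUMBER
);

CREATE OR REPLACE TRIGGER t1_audit_trg
AFTER INSERT OR UPDATE OR DELETE ON t1
FOR EACH ROW
BEGIN
    INSERT INTO t1_audit (changed_at, operation, old_lev, new_lev)
    VALUES (systimestamp,
            CASE WHEN INSERTING THEN 'INSERT'
                 WHEN UPDATING  THEN 'UPDATE'
                 ELSE                'DELETE' END,
            :old.lev, :new.lev);
END;
/
```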
There is an audit feature in Oracle which you might be able to use. Google thinks this article might be of some help: http://www.securityfocus.com/infocus/1689
I might be wrong, but I don't think there's an easy way to do this. The only thing that comes to mind is checking the redo log, but there's no interface for the user to check the operations. You can do it manually, but it's not that simple.
You need to get the SCN, the System Change Number. It is basically a counter that advances as transactions commit.
As always: ASK_TOM.
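A sketch of using the SCN together with a flashback query (the saved SCN value is illustrative, and T1 reuses the table from the example above):

```sql
-- Capture the current SCN before running the script
SELECT current_scn FROM v$database;     -- suppose it returns 1234567

-- After the script has committed, diff the table against that point in time
SELECT * FROM t1 AS OF SCN 1234567
MINUS
SELECT * FROM t1;                       -- rows removed or changed

SELECT * FROM t1
MINUS
SELECT * FROM t1 AS OF SCN 1234567;     -- rows added or changed
```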
You need to better define your issue.
Are you looking to write a PL/SQL script that captures only the data that has been updated since the last run? Does the record set you are looking at have a sequential unique id?
Can you afford to select possible duplicates and validate them against the final set, to see whether they are indeed duplicates and can be discarded?
Depending on these answers, the complexity of your solution will change.