Exception: ORA-08181: specified number is not a valid system change number

I have a table that throws an error for some rows when I try to convert the row's ORA_ROWSCN to a timestamp with SCN_TO_TIMESTAMP, as below:
Select SCN_TO_TIMESTAMP(429804070) from dual; --14-NOV-22 07.52.22.000000000 AM
Select SCN_TO_TIMESTAMP(432572474) from dual; --16-NOV-22 02.00.59.000000000 AM
Select SCN_TO_TIMESTAMP(423859441) from dual; --ORA-08181: specified number is not a valid system change number
Select SCN_TO_TIMESTAMP(423859575) from dual; --ORA-08181: specified number is not a valid system change number
Could someone please explain this and suggest a solution?

That's because you're out of range: the database retains the SCN-to-timestamp mapping only for a limited period, so sufficiently old (or not-yet-existing) SCNs can't be converted.
SQL> show user
USER is "SYS"
Let's check the range first:
SQL> select min(scn), max(scn) from smon_scn_time;
MIN(SCN) MAX(SCN)
---------- ----------
14831895 16817322
Apply scn_to_timestamp to MIN and MAX values - both of them are valid:
SQL> select SCN_TO_TIMESTAMP(14831895) min_scn,
2 SCN_TO_TIMESTAMP(16817322) max_scn
3 from dual;
MIN_SCN MAX_SCN
-------------------------------- --------------------------------
08-OCT-22 10.23.31.000000000 PM 19-NOV-22 10.11.36.000000000 PM
What if you try values that are out of that range (i.e. lower than MIN or higher than MAX)? None of them work, and that's your case:
SQL> select SCN_TO_TIMESTAMP(10831895) lower_than_min_scn from dual;
select SCN_TO_TIMESTAMP(10831895) lower_than_min_scn from dual
*
ERROR at line 1:
ORA-08181: specified number is not a valid system change number
ORA-06512: at "SYS.SCN_TO_TIMESTAMP", line 1
SQL> Select SCN_TO_TIMESTAMP(19817322) higher_than_max_scn from dual;
Select SCN_TO_TIMESTAMP(19817322) higher_than_max_scn from dual
*
ERROR at line 1:
ORA-08181: specified number is not a valid system change number
ORA-06512: at "SYS.SCN_TO_TIMESTAMP", line 1
SQL>
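
If you need the conversion to degrade gracefully instead of raising ORA-08181, one option is to wrap SCN_TO_TIMESTAMP in a small helper that catches the error and returns NULL for SCNs that are no longer (or not yet) mapped. This is only a sketch; safe_scn_to_timestamp is a hypothetical name, and returning NULL is an assumption about what you want for unmappable rows.

create or replace function safe_scn_to_timestamp (p_scn in number)
  return timestamp
is
  -- ORA-08181: specified number is not a valid system change number
  e_invalid_scn exception;
  pragma exception_init(e_invalid_scn, -8181);
begin
  return scn_to_timestamp(p_scn);
exception
  when e_invalid_scn then
    return null;   -- SCN falls outside the retained SCN-to-time mapping
end;
/

A query such as select safe_scn_to_timestamp(ora_rowscn) from your_table would then return NULL for rows whose SCN is outside the range kept in smon_scn_time, instead of failing the whole statement.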

Related

ORACLE PLSQL - Query data in a package with the result of a table column

I have a table with 500k records, and for each record in the table I would like to query an Oracle package and return the rows from that query. How can I do this with Oracle PL/SQL?
I tried to do it here:
declare
  cursor c_t is select COLUM_TABLE from SCHEMA.COMPANY;
  szSql varchar2(2048);
begin
  for rec in c_t loop
    szSql := 'SELECT * FROM SCHEMA.PKG_COMPANY.GET_DATA_COMPANY('||rec.COLUM_TABLE||')';
    dbms_output.put_line(szSql);
    execute immediate szSql;
  end loop;
end;
I would like to know how to return the data as a common query and if there is a more performant way to do it.
Could you help me with examples?
EDIT
When I call the package, I get the following return:
This data is the result of a complex query that the package makes
ID_COMPANY | REGION | LATITUDE | LONGITUDE | DENSITY | COUNTRY | ROLE
1 | WEST | -0110110 | -0110110 | 22 | EUA | SUBS
"how to return the data as a common query and if there is a more performant way to do it"
How about a function that returns ref cursor? You'd just pass table name to it and get the result:
SQL> create or replace function f_test (par_table_name in varchar2)
2 return sys_refcursor
3 is
4 l_rc sys_refcursor;
5 begin
6 open l_rc for 'select * from ' || dbms_assert.sql_object_name(par_table_name);
7 return l_rc;
8 end;
9 /
Function created.
Let's test it:
SQL> select f_test('dept') from dual;
F_TEST('DEPT')
--------------------
CURSOR STATEMENT : 1
CURSOR STATEMENT : 1
DEPTNO DNAME LOC
---------- -------------- -------------
10 ACCOUNTING NEW YORK
20 RESEARCH DALLAS
30 SALES CHICAGO
40 OPERATIONS BOSTON
Another table:
SQL> select f_test('invoice') from dual;
F_TEST('INVOICE')
--------------------
CURSOR STATEMENT : 1
CURSOR STATEMENT : 1
DATA_RUN_ FI INVOICE_ID INVOICE_
--------- -- ---------- --------
01-JUL-22 Q4 12345 Paid
01-JAN-22 Q1 12345 Not Paid
01-JUL-22 Q4 12678 Paid
01-JAN-22 Q1 12678 Not Paid
SQL>
As for your code: it is unclear what it does. There's some package with a function, but that's a black box for us since you didn't post it. Also, you're fetching values from the company table; what does it contain? There are too many unknowns to debug your code.
If SCHEMA.PKG_COMPANY.GET_DATA_COMPANY() is a function that returns a 'select' query like this:
select x,y,...,z from table where ....
then you can write the result into a target table:
cl scr
set SERVEROUTPUT ON
declare
  cursor c_t is select COLUM_TABLE from SCHEMA.COMPANY;
  szSql varchar2(3000);
begin
  for rec in c_t loop
    szSql := 'insert into tbl_target '||SCHEMA.PKG_COMPANY.GET_DATA_COMPANY(rec.COLUM_TABLE)||' ';
    dbms_output.put_line(szSql);
    execute immediate szSql;
    commit;
  end loop;
end;
In this manner you execute a statement like the one below and insert the result into tbl_target:
insert into tbl_target select x,y,...,z from table where ....
I cannot write the exact code because SCHEMA.PKG_COMPANY.GET_DATA_COMPANY() is not defined for me.

How to Find Free Space in Oracle table

In the script below I regularly check the table sizes in my Oracle database, but I would like to be able to check free space as well.
Is there any way to add how much free space there is?
select user_segments.SEGMENT_NAME AS Table_Name,
user_segments.BYTES/1024/1024 AS Table_Size_MB,
my_indexes.Indexes_Size_MB AS Indexes_Size_MB,
((user_segments.BYTES/1024/1024) + my_indexes.Indexes_Size_MB) AS Tot_Size_MB,
u_tables.Num_Rows AS NUM_ROWS
from USER_SEGMENTS
inner join (
select
TABLE_NAME AS INDX_TABLE_NAME,
SUM(BYTES)/1024/1024 AS Indexes_Size_MB
from (
select
user_indexes.TABLE_NAME,
user_segments.SEGMENT_NAME,
user_segments.BYTES
from user_segments
inner join user_indexes ON user_segments.SEGMENT_NAME = user_indexes.INDEX_NAME
) group by TABLE_NAME
) my_indexes on my_indexes.INDX_TABLE_NAME = user_segments.SEGMENT_NAME
inner join (
select
TABLE_NAME AS USR_TABLE_NAME,
Num_Rows
from user_tables
) u_tables on u_tables.USR_TABLE_NAME = my_indexes.INDX_TABLE_NAME
order by TOT_SIZE_MB desc;
Here's an example of what you could do.
Here's my table, starting "clean":
SQL> create table t as
2 select d.* from dba_objects d,
3 ( select 1 from dual connect by level <= 20 );
Table created.
SQL>
SQL>
SQL> select num_rows, avg_row_len, blocks, empty_blocks
2 from user_tables
3 where table_name = 'T';
NUM_ROWS AVG_ROW_LEN BLOCKS EMPTY_BLOCKS
---------- ----------- ---------- ------------
1745660 131 33746 0
1 row selected.
Now I'll see whether I can get close to that number using an estimate based on the stats I have:
SQL> select num_rows*avg_row_len/8192*100/(100-pct_free) est_blocks
2 from user_tables
3 where table_name = 'T';
EST_BLOCKS
----------
31016.9081
1 row selected.
I'm close, but a little bit off, which is to be expected because blocks have some overhead. I can find out what that overhead is:
SQL> select round(32300/29800,2) est_overhead from dual;
EST_OVERHEAD
------------
1.08
1 row selected.
So if I factor that 8% into my calculations (for a clean table), I can use the dictionary stats to get a good estimate of the number of blocks this table should need, given the number of rows and their size.
SQL> select num_rows*avg_row_len/8192*100/(100-pct_free)*1.08 est_blocks
2 from user_tables
3 where table_name = 'T';
EST_BLOCKS
----------
33498.2607
1 row selected.
Armed with this information, you can now easily compare the actual size of a table with what you would expect it to be based on the rows it contains.
SQL> delete from t
2 where mod(object_id,3) = 0;
582000 rows deleted.
SQL>
SQL> exec dbms_stats.gather_table_stats('','T')
PL/SQL procedure successfully completed.
My calculation suggests the table should need around 22329 blocks, but it's actually 33746:
SQL> select blocks, num_rows*avg_row_len/8192*100/(100-pct_free)*1.08 est_blocks
2 from user_tables
3 where table_name = 'T';
BLOCKS EST_BLOCKS
---------- ----------
33746 22329.999
1 row selected.
Let's see how good the estimate was. I'll reorganize the table to reclaim that space:
SQL> alter table t move;
Table altered.
SQL>
SQL>
SQL> exec dbms_stats.gather_table_stats('','T')
PL/SQL procedure successfully completed.
SQL>
SQL>
SQL> select num_rows, avg_row_len, blocks, empty_blocks
2 from user_tables
3 where table_name = 'T';
NUM_ROWS AVG_ROW_LEN BLOCKS EMPTY_BLOCKS
---------- ----------- ---------- ------------
1163660 131 22536 0
1 row selected.
SQL>
So you can use a similar approach (and 8% is probably a good enough fudge factor).
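
As a sketch of how you might fold that into your regular check (assuming the same 8 KB block size and the 8% fudge factor used above; adjust both for your database), something like this compares the allocated blocks with the estimate and reports the difference as roughly reclaimable megabytes per table:

select table_name,
       blocks actual_blocks,
       round(num_rows*avg_row_len/8192*100/(100-pct_free)*1.08) est_blocks,
       round((blocks - num_rows*avg_row_len/8192*100/(100-pct_free)*1.08)*8192/1024/1024, 1) est_free_mb
from   user_tables
where  blocks > 0
order  by est_free_mb desc;

As in the example above, this relies on up-to-date optimizer statistics (dbms_stats.gather_table_stats), and the result is an estimate, not an exact figure.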

Where am I missing a comma?

SQL>
SQL> insert into employees values('&id','&fname','&lname','&numbermo','&yyy','&jobid','&MonthlyS','&managerID','&Did');
Enter value for id: 10
Enter value for fname: ewssfws
Enter value for lname: weffs
Enter value for numbermo: 987654321
Enter value for yyy: To_Date('2020/10/10')
Enter value for jobid: J1
Enter value for monthlys: 25000
Enter value for managerid: 20
Enter value for did: A2
old 1: insert into employees values('&id','&fname','&lname','&numbermo','&yyy','&jobid','&MonthlyS','&managerID','&Did')
new 1: insert into employees values('10','ewssfws','weffs','987654321','To_Date('2020/10/10')','J1','25000','20','A2')
insert into employees values('10','ewssfws','weffs','987654321','To_Date('2020/10/10')','J1','25000','20','A2')
*
ERROR at line 1:
ORA-00917: missing comma
The value of yyy contains single quotes, which break the string. You need to escape them by doubling them:
To_Date(''2020/10/10'')
Note that these are two consecutive single-quote characters ('), not the double-quote character (").
As Oracle tells you, the error comes from the YYY parameter: you would have to use doubled single quotes.
However, applying TO_DATE to a string without a format mask is bad practice. If I were you, I'd insert a date literal instead, which always has the form date 'yyyy-mm-dd'. So:
SQL> create table test (id number, yyy date);
Table created.
SQL> insert into test (id, yyy) values (&id, &yyy);
Enter value for id: 1
Enter value for yyy: date '2020-08-28'
old 1: insert into test (id, yyy) values (&id, &yyy)
new 1: insert into test (id, yyy) values (1, date '2020-08-28')
1 row created.
SQL> select * from test;
ID YYY
---------- ----------------
1 28.08.2020 00:00
SQL>
I don't know what issue you might have with your particular Oracle tool, but I wanted to also point out that your current call to TO_DATE won't work, and will generate this error:
ORA-01861: literal does not match format string
Consider this version:
INSERT INTO employees
VALUES
('10', 'ewssfws', 'weffs', '987654321', date '2020-10-10', 'J1', '25000', '20', 'A2');
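
If you would rather keep TO_DATE than switch to a date literal, the safe variant is to pass an explicit format mask (a sketch, assuming the same column order as above):

INSERT INTO employees
VALUES
('10', 'ewssfws', 'weffs', '987654321', TO_DATE('2020/10/10', 'YYYY/MM/DD'), 'J1', '25000', '20', 'A2');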

Query using a statement within a VARCHAR2 column

Is there a way for a SELECT statement to include in its WHERE clause a condition that is stored within the table itself? For example, take the following table:
CREATE TABLE test_tab(
date_column DATE,
frequency NUMBER,
test_statement VARCHAR2(255)
)
/
If
MOD(SYSDATE - DATE, frequency) = 0
were contained within the column test_statement, is there a way to select the rows for which it evaluates to true? The test_statement will vary and will not be the same throughout the table. I am able to do this in PL/SQL, but I am looking to do it without PL/SQL.
This kind of dynamic SQL inside SQL can be created with DBMS_XMLGEN.getXML, although the resulting query looks a bit odd, so you might want to consider a different design.
First, I created a sample table and rows using your DDL. I'm not sure exactly what you're trying to do with the conditions, so I simplified them into two rows with simpler conditions. The first row matches the first condition, and neither row matches the second condition.
--Create sample table and row that matches the condition.
CREATE TABLE test_tab(
date_column DATE,
frequency NUMBER,
test_statement VARCHAR2(255)
)
/
insert into test_tab values(sysdate, 1, 'frequency = 1');
insert into test_tab values(sysdate, 2, '1=2');
commit;
Here's the large query; it returns only the first row, which matches the first condition.
--Find rows where ROWID is in a list of ROWIDs that match the condition.
select *
from test_tab
where rowid in
(
--Convert XMLType to relational data.
select the_rowid
from
(
--Convert CLOB to XMLType.
select xmltype(xml_results) xml_results
from
(
--Create a single XML file with the ROWIDs that match the condition.
select dbms_xmlgen.getxml('
select rowid
from test_tab where '||test_statement) xml_results
from test_tab
)
where xml_results is not null
)
cross join
xmltable
(
'/ROWSET/ROW'
passing xml_results
columns
the_rowid varchar2(128) path 'ROWID'
)
);
This calls for dynamic SQL, so yes, it is PL/SQL that handles it; I don't think the SQL layer alone is capable of doing it.
I don't know what you have tried so far, so here's just an idea: a function that returns a ref cursor might help, e.g.
SQL> create table test (date_column date, frequency number, test_statement varchar2(255));
Table created.
SQL> insert into test values (trunc(sysdate), 2, 'deptno = 30');
1 row created.
SQL> create or replace function f_test return sys_refcursor
2 is
3 l_str varchar2(200);
4 l_rc sys_refcursor;
5 begin
6 select test_statement
7 into l_str
8 from test
9 where date_column = trunc(sysdate);
10
11 open l_rc for 'select deptno, ename from emp where ' || l_str;
12 return l_rc;
13 end;
14 /
Function created.
Testing:
SQL> select f_test from dual;
F_TEST
--------------------
CURSOR STATEMENT : 1
CURSOR STATEMENT : 1
DEPTNO ENAME
---------- ----------
30 ALLEN
30 WARD
30 MARTIN
30 BLAKE
30 TURNER
30 JAMES
6 rows selected.
SQL>
A good thing about this approach is that you could save the whole statements (or just the predicates, as here) into that table and run any of them using the same function.
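
For example (a sketch, reusing the same test table and the scott demo schema): store another predicate under a different date, and the very same function will run it on that day:

SQL> insert into test values (trunc(sysdate) + 1, 3, 'deptno = 10');
1 row created.
SQL> -- tomorrow, select f_test from dual would effectively run:
SQL> -- select deptno, ename from emp where deptno = 10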
You can try this:
select * from test_tab where mod(sysdate - date_column, frequency) = 0;

Can't perform an aggregate function on an expression containing an aggregate or a subquery

My tables are created like this:
create table ##temp2(min_col1_value varchar (100))
create table ##temp1(max_col1_value varchar (100))
The table ##temp2 has values like this:
min_col1_value
-------------------
1
0
10
1
I'm trying to get the "frequency count of minimum-length values" and expect the result to be 3.
Another example, for the maximum (##temp1), is:
max_col1_value
-------------------
1000
1234
10
1111
123
2345
I'm trying to get the "frequency count of maximum-length values" and expect the result to be 4.
When I run these queries:
select count(min(len(convert(int,min_col1_value)))) from ##temp2 group
by min_col1_value
select count(max(len(convert(int,max_col1_value)))) from ##temp1 group by
max_col1_value
I get this error: "Cannot perform an aggregate function on an expression containing an aggregate or a subquery."
How can I get the desired result?
You can't nest aggregates in the same SELECT statement and, even if you could, your min(len()) would return a single value (1, since the minimum value length in ##temp2 is 1). Counting that would just give you 1, because there is only one value to count.
You want to count how many values have that minimum length, so you'll need something like:
SELECT count(*)
FROM ##temp2
WHERE len(min_col1_value) IN (SELECT min(len(min_col1_value)) FROM ##temp2)
That WHERE clause says: only count values in ##temp2 whose length equals the minimum length of all the values in ##temp2. This should return 3 based on your sample data.
The same logic can be applied to either table for min or max.
This should get you your desired results:
SELECT COUNT(*)
FROM ##temp2
WHERE LEN(min_col1_value) =
(
SELECT MIN(LEN(min_col1_value))
FROM ##temp2
)
SELECT COUNT(*)
FROM ##temp1
WHERE LEN(max_col1_value) =
(
SELECT MAX(LEN(max_col1_value))
FROM ##temp1
)
