How to get the most recent queries in Oracle DB

I have a web application and I suspect that someone has deleted some records manually. Upon enquiry nobody is admitting the mistake. How can I find out at what time those records were deleted? Is it possible to get the history of delete queries?

If you have access to the v$sql view, you can use the following query to find recent DELETE statements. The FIRST_LOAD_TIME column contains the time.
select *
from v$sql v
where upper(sql_text) like '%DELETE%';
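A hedged refinement of the same idea, narrowing to deletes against one table (the table name MYTABLE is a placeholder):
select sql_text, first_load_time, last_load_time, parsing_schema_name
from v$sql
where upper(sql_text) like 'DELETE%MYTABLE%';
Note that v$sql only shows statements still cached in the shared pool, so older deletes may already have been aged out.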

If flashback query is enabled for your database (try it with select * from table as of timestamp sysdate - 1), then it may be possible to determine the exact time the records were deleted. Use the as of timestamp clause and adjust the timestamp as necessary to narrow down the window between when the records still existed and when they no longer did.
For example
select *
from table
as of timestamp to_date('21102016 09:00:00', 'DDMMYYYY HH24:MI:SS')
where id = XXX; -- indicates record still exists
select *
from table
as of timestamp to_date('21102016 09:00:10', 'DDMMYYYY HH24:MI:SS')
where id = XXX; -- indicates record does not exist
-- conclusion: record was deleted in this 10 second window
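If the undo data for that period is still available, a flashback versions query can report the change directly instead of bisecting by hand. A minimal sketch, reusing the placeholder names from the example above:
select versions_starttime, versions_endtime, versions_operation
from table
versions between timestamp systimestamp - interval '1' day and systimestamp
where id = XXX;
-- versions_operation = 'D' marks the delete; versions_starttime is roughly when it happened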

Related

Read activity on a Postgres table

Is there any way to calculate the number of reads per second on a Postgres table?
What I actually need to know is whether the table is being read at all at the moment. (If not, I can safely drop it.)
Thank you
To figure out if the table is currently in use, run
SELECT pid
FROM pg_locks
WHERE relation = 'mytable'::regclass;
That will return the process IDs of all backends currently using it.
To measure whether a table is used at all, run this query:
SELECT seq_scan + idx_scan + n_tup_ins + n_tup_upd + n_tup_del
FROM pg_stat_user_tables
WHERE relname = 'mytable';
Then repeat the query in a day. If the numbers haven't changed, nobody has used the table.
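To make that comparison concrete, you can snapshot the counters into a helper table first (a minimal sketch; the table name usage_baseline is made up):
create table usage_baseline as
select now() as sampled_at,
       seq_scan + idx_scan + n_tup_ins + n_tup_upd + n_tup_del as activity
from pg_stat_user_tables
where relname = 'mytable';
-- a day later: if this returns the same number as usage_baseline.activity, nobody touched the table
select seq_scan + idx_scan + n_tup_ins + n_tup_upd + n_tup_del as activity
from pg_stat_user_tables
where relname = 'mytable';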
Audit SELECT activity
My suggestion is to wrap mytable in a view (called the_view_to_use_instead in the example) which invokes a logging function on every select, and then select from the view, i.e.
select <whatever you need> from the_view_to_use_instead ...
instead of
select <whatever you need> from mytable ...
So here it is
create table audit_log (table_name text, event_time timestamptz);
create function log_audit_event(tname text) returns void language sql as
$$
insert into audit_log values (tname, now());
$$;
create view the_view_to_use_instead as
select mytable.*
from mytable, log_audit_event('mytable') as ignored;
Every time someone queries the_view_to_use_instead an audit record with a timestamp appears in table audit_log. You can then examine it in order to find out whether and when mytable was selected from and make your decision. Function log_audit_event can be reused in other similar scenarios. The average number of selects per second over the last 24 hours would be
select count(*)::numeric/86400
from audit_log
where event_time > now() - interval '86400 seconds';
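To see at a glance whether and when mytable has been read through the view, a query against the audit table defined above is enough:
select count(*) as selects, min(event_time) as first_select, max(event_time) as last_select
from audit_log
where table_name = 'mytable';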

How to get created time of record in Oracle?

I have a problem with data in Oracle database:
I want to get the created time of some records in a table. I can get the ora_rowscn of a record, but I cannot convert this ora_rowscn to a timestamp with SELECT SCN_TO_TIMESTAMP(ora_rowscn), because SCN_TO_TIMESTAMP() may not be available for older data (my data was inserted about a month ago).
Does anyone have a solution so that I can get the created time?
Perhaps the best thing to do here, assuming you have a long term need for this requirement, would be to alter your current table and add a bona-fide timestamp column. You could give this timestamp column a default value of the current timestamp, e.g.
CREATE TABLE yourTable (
...
created_time TIMESTAMP(6) default CURRENT_TIMESTAMP
);
Then, when inserting a new record, omit any mention of the created_time column, letting Oracle fill it in with the current timestamp.
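Since the question is about an existing table, the same default can be added with an ALTER instead of a CREATE; note this only stamps rows inserted after the column is added (a minimal sketch, reusing the names from the example above):
ALTER TABLE yourTable ADD (created_time TIMESTAMP(6) DEFAULT CURRENT_TIMESTAMP);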

Enable SYSTEM_VERSIONING Error - Overlapping Dates in History Table

I recently migrated my SQL 2019 database from a VM into Azure SQL.
I used the MS Data Migration tool, but unfortunately, it wouldn't migrate data from Temporal Tables.
So, I just used the tool to create the table schemas and then used SSIS to move the data.
Since my existing history table had data in it, I wanted to keep the SysStartDate and SysEndDate fields. In order to do this, I had to disable SYSTEM_VERSIONING in my Azure SQL database as well as DROP the PERIOD on the table.
The data migration was a success so I re-created my PERIOD on the table but when I tried to enable SYSTEM_VERSIONING with a specified history table, I get the following error:
Msg 13573, Level 16, State 0, Line 34
Setting SYSTEM_VERSIONING to ON failed because history table 'xxxxxHistory' contains overlapping records.
I find this odd because the existing tables were originally joined as a temporal table so I don't understand why there would be a conflict now.
ALTER TABLE xxx.xxx
ADD PERIOD FOR SYSTEM_TIME(SysStartTime, SysEndTime)
ALTER TABLE xxx.xxx
SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE=xxx.xxxHistory))
I expect to get a working temporal table. Instead, I get the same Msg 13573 error shown above.
I ran the following query to identify the overlaps but I don't get any:
SELECT
xxxxKeyNumeric
,SysStartTime
,SysEndTime
FROM
xxxx.xxxxhistory o
WHERE EXISTS
(
SELECT
1
FROM
xxxx.xxxxhistory o2
WHERE
o2.xxxxKeyNumeric = o.xxxxKeyNumeric
AND o2.SysStartTime <= o.SysEndTime
AND o.SysStartTime <= o2.SysEndTime
AND o2.xxxxPK != o.xxxxPK
)
ORDER BY
o.xxxxKeyNumeric,
o.SysStartTime
I found this explanation for the error:
"There are multiple records for the same record with overlapping start and end dates. The end date for the last row in the history table should match the start date for the active record in the parent table" blog of a DBA
This happened to me after switching the historic table, touching a few rows, then trying to go back to the old historic table.
UPDATE: Happened again, and this time the table had millions of rows. I had to write a query, comparing the start date and end date of every row in the history table.
Possible causes:
For every PK, the start dates and end dates of the history rows must not overlap; the query below will find this specific issue.
The end date of the latest history row for a PK is later than the start date of that PK's active record in the main table; the query below can be modified to check this (a sketch follows after it).
Among rows with the same PK, two rows cover the same time interval. If they overlap by even a single millisecond and someone requests that exact millisecond, SQL Server cannot tell which of the two versions is the correct one.
For the first issue:
-- pairs each history row with the next more recent row for the same PK and flags overlapping intervals
select ant.*, post.*, DATEDIFF(day, ant.end_date, post.start_date)
from
(SELECT
 PK_column
 , start_date
 , end_date
 , ROW_NUMBER() OVER(PARTITION BY PK_column ORDER BY end_date desc, start_date desc) AS [current]
 , (ROW_NUMBER() OVER(PARTITION BY PK_column ORDER BY end_date desc, start_date desc)) - 1 AS [previous]
 FROM huge_table_HIST
) ant
inner join
(SELECT
 PK_column
 , start_date
 , end_date
 , ROW_NUMBER() OVER(PARTITION BY PK_column ORDER BY end_date desc, start_date desc) AS [current]
 FROM huge_table_HIST
) post
ON ant.PK_column = post.PK_column AND ant.[previous] = post.[current]
WHERE ant.end_date > post.start_date
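For the second cause (the latest history row overlapping the active record), a hedged sketch along the same lines, assuming the main table is called huge_table and uses the same column names:
select h.PK_column, h.end_date, m.start_date
from
(SELECT
 PK_column
 , end_date
 , ROW_NUMBER() OVER(PARTITION BY PK_column ORDER BY end_date desc) AS rn
 FROM huge_table_HIST
) h
inner join huge_table m ON m.PK_column = h.PK_column
WHERE h.rn = 1
AND h.end_date > m.start_date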
Surprisingly, it doesn't fail if:
you have multiple rows with exactly the same start date and end date for the same PK. SQL Server seems to consider them a single point in time rather than an interval; they will only appear if you request the exact millisecond at which they exist.
there are gaps between the end date of a history row and the start date of the next one. SQL Server considers that the PK simply didn't exist in that time interval.
Temporal tables depend on the temporal table's primary key values combined with SysStartTime to determine uniqueness in the history table.
This can very easily happen if you make changes to the primary key definition. Also, if your history table's fields corresponding to the temporal table's PK are not populated, or many or all of them are populated with a default value, overlaps are detected and you get that error.
Check that your PK is defined on the system versioned temporal table, then check that the corresponding values in your history table's primary key fields are correct (i.e. unique for any given PK & SysStartTime value.)
You may have to update the history table accordingly before applying the system versioning relationship again.
This error can also occur when there are multiple records per primary key sharing the same value in the GENERATED ALWAYS AS ROW START or GENERATED ALWAYS AS ROW END column.
The following queries will help identify those records.
select ID
from dbo.HistoryTable
group by ID, SysStartTime
having count(*) > 1
select ID
from dbo.HistoryTable
group by ID, SysEndTime
having count(*) > 1
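If the duplicates found by those queries are exact copies that can simply be discarded, a hedged cleanup sketch (assuming dbo.HistoryTable and ID as above, system versioning still OFF, and that dropping the extra copies is acceptable):
with numbered as (
    select *,
           row_number() over (partition by ID, SysStartTime, SysEndTime
                              order by (select null)) as rn
    from dbo.HistoryTable
)
delete from numbered
where rn > 1;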

How to get data from a distributed table

If I have a table whose structure was updated (e.g. system.query_log after the latest update), but the distributed "view" over it somehow still has the old structure, how can I query the new columns from the entire cluster?
What I meant:
If you have a distributed table, it can be done easily with:
select count(1) from distributed_query_log where event_date = '2019-01-24'
But select Settings.names, Settings.values from distributed_query_log where event_date = '2019-01-24' limit 1\G will fail, because the distributed table does not have those fields, while system.query_log does:
select Settings.names, Settings.values from system.query_log where event_date = '2019-01-24' limit 1\G
The cluster function was added in ClickHouse release 1.1.54362.
So you can do it with:
select Settings.names, Settings.values from cluster('CLUSTER_TITLE', 'system.query_log') where event_date = '2019-01-24' limit 1\G
where CLUSTER_TITLE is your cluster's name.
Thanks: Alexander Bocharov
In the general case: after changing the underlying table, you need to recreate (or alter) the Distributed table.
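If recreating is not convenient, the missing columns can instead be added to the Distributed table so that it matches the underlying system.query_log again. A hedged sketch, assuming the Distributed table is named distributed_query_log and that the new columns are the Array(String) pair from system.query_log:
ALTER TABLE distributed_query_log
    ADD COLUMN `Settings.names` Array(String),
    ADD COLUMN `Settings.values` Array(String);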

How to access a table ABC_XXX in Teradata when XXX changes periodically?

I have a table in Teradata, ABC_XXX, where XXX changes on a monthly basis.
For example: ABC_1902, ABC_1812, ABC_1904, etc.
I need to access this table in my application without modifying the code every month.
Is there any way to do this in Teradata, or is there an alternative solution?
Please help
Can you try using DBC.TABLES in a subquery like below:
with tbl as (
  select 'select * from ' || databasename || '.' || tablename as tb
  from dbc.tables
  where tablename like 'ABC_%'
)
select * from tbl;
If you can get the final query executed in your application, you will be able to query the required table without editing the query.
The above solution assumes that the previous month's table gets dropped whenever a new month's table is created.
However, if previous table is not being dropped, then you can try the below approach:
select 'select * from db.ABC_' ||to_char(current_date,'YYMM')
Output will be
select * from db.ABC_1902
Execute the output in your application and you will be able to query the dynamic table.
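As an alternative, hedged sketch: a monthly job could regenerate a fixed-name view that the application always queries (the view name ABC_CURRENT is made up), e.g. for February 2019:
replace view db.ABC_CURRENT as
select * from db.ABC_1902;
The application then only ever references db.ABC_CURRENT, and only the job needs to know the current month's suffix.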
