How to disable flashback query logging for a specific table (Oracle)?

We have a specific table that has a lot of activity and it creates a lot of change records. The consequence is that the flashback data only goes back a couple of days. That is OK for many cases but it would be beneficial to have access to more historical data.
We would like to either restrict logging on our one high-activity table or disable it completely. I imagine that we may be able to do this by tablespace; I just have not found much on how to make these changes.

You can disable flashback archiving with an ALTER TABLE clause:
alter table YOUR_TABLE_NAME no flashback archive;
It's also possible to limit the archive to a specified size. To do that, create a flashback archive designated for this table with the desired retention and, optionally, a size quota:
create flashback archive YOUR_TABLE_ARCHIVE tablespace SOME_TABLESPACE quota 512M retention 1 DAY;
Then assign the new archive to the table:
alter table YOUR_TABLE_NAME flashback archive YOUR_TABLE_ARCHIVE;
Check the Oracle documentation for additional requirements; e.g. you need the FLASHBACK ARCHIVE ADMINISTER privilege to execute the statements above.
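To verify which flashback archive a table is currently assigned to, a data dictionary query along these lines should work (a sketch assuming you can read the DBA_ views; there is also a USER_FLASHBACK_ARCHIVE_TABLES variant for your own schema):
-- List tables tracked by a flashback archive and the archive they use
SELECT owner_name, table_name, flashback_archive_name, archive_table_name
FROM dba_flashback_archive_tables
WHERE table_name = 'YOUR_TABLE_NAME';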

You can generate the statements for all tables under a given schema by executing the following query:
SELECT 'alter table ' || OWNER || '.' || TABLE_NAME || ' no flashback archive;'
FROM ALL_TABLES WHERE OWNER IN ('YOUR_SCHEMA');

Related

SQL Server SysTable with tstamp of last inserted row of each table

Is there any system table or dmv in SQL Server 2008 R2 that contains information regarding the last DML statement (except select) that was issued against any user table?
I see that in sys.tables there is a modify_date column but that's just for any table alteration (DDL statements).
I wouldn't want to create triggers on every table in the db nor a trigger on the database level for this scope.
The reason is that I would like to see when the last insert, update, or delete statement was made against each table, in order to see whether I can drop some of the tables that are no longer used. This is for a DWH database, where each table is supposed to receive at least one of these three operations at least once a week/month/quarter/year.
Option 1:
Enable Change Data Capture (CDC) for your database.
Refer to the link below for CDC:
http://technet.microsoft.com/en-us/library/cc627369%28v=sql.105%29.aspx
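As a rough sketch (the schema and table names are placeholders, and CDC requires Enterprise edition), CDC is enabled first at the database level and then per table:
-- Run inside the database you want to track
EXEC sys.sp_cdc_enable_db;

-- Enable capture for one table; @role_name = NULL skips gating access by a database role
EXEC sys.sp_cdc_enable_table
     @source_schema = N'dbo',
     @source_name   = N'YourTable',
     @role_name     = NULL;
The change tables can then be queried to see when each tracked table was last modified.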
Option 2:
Create a trigger on each table and log to a common table whenever an INSERT/UPDATE/DELETE happens (the old, traditional method).
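A minimal sketch of option 2, assuming a hypothetical dbo.YourTable and a shared dbo.TableDmlLog table (all names are illustrative):
CREATE TABLE dbo.TableDmlLog (
    TableName  sysname  NOT NULL,
    LastChange datetime NOT NULL
);
GO
CREATE TRIGGER trg_YourTable_dml ON dbo.YourTable
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Record only the fact that something changed and when
    INSERT INTO dbo.TableDmlLog (TableName, LastChange)
    VALUES ('dbo.YourTable', GETDATE());
END;
The last DML time per table is then simply MAX(LastChange) grouped by TableName.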

Find the tablespace of a table without using dba_tables

I need to know the tablespace of a particular table. I can't use the typical query SELECT owner, table_name, tablespace_name FROM dba_tables; because I don't have permissions. Is there another way to find a table's tablespace without using dba_tables?
What permissions do you have?
If you have the ability to query the table in question, for example, you can use all_tables which has the same columns that dba_tables does but only has data for tables that you have privileges on.
If you don't have privileges on the table are there other data dictionary tables that you do have access to (dba_segments, for example)?
If you don't have privileges on the table and you don't have privileges on any of the dba data dictionary views, why do you need to know the tablespace?
Use USER_TABLES if the table is in your working schema, and ALL_TABLES if you have permissions on the table but it is not in your working schema.
Otherwise change schema or get permission to access DBA_TABLES.
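For example (the schema and table names are placeholders):
-- Table in your own schema
SELECT table_name, tablespace_name
FROM user_tables
WHERE table_name = 'YOUR_TABLE';

-- Table you have privileges on but that lives in another schema
SELECT owner, table_name, tablespace_name
FROM all_tables
WHERE owner = 'SOME_SCHEMA' AND table_name = 'YOUR_TABLE';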

DB2 - How to ensure the tablespace is clean to drop

I have created a few tablespaces for testing in DB2, and I realized that if I don't specify which tablespace a table should be created in, DB2 will select one for me.
The question is: I want to delete the unused tablespaces, but I am afraid I will delete something I don't know about. I have checked the tables, indexes and sequences after dropping the unused tablespace, and the number of rows is the same. Is this check enough to conclude that the tablespace is safe to drop?
You can query the catalog in order to retrieve the tables and where they are stored.
select tabschema, tabname, tbspaceid, tbspace
from syscat.tables
where tabschema not like 'SYS%'
You can change the WHERE condition in order to filter on the tablespace you are going to drop.
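For example, to list only the objects stored in the tablespace you intend to drop (the tablespace name is a placeholder):
select tabschema, tabname, tbspace
from syscat.tables
where tbspace = 'YOUR_TABLESPACE';
Note that indexes and long/LOB data can live in separate tablespaces; syscat.tables also exposes index_tbspace and long_tbspace columns that are worth checking before the drop.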

Suggestions on Adding Transaction Auditing In a SQL Server 2008 Database For a Financial System

I'm working on an application that is used to store payment information. We currently have a Transaction Audit table that works thusly:
Any time a field changes in any table under audit, we write an audit row that contains the table name, the field name, the old value, the new value, and the timestamp. One insert takes place per field changed, per row being updated.
I've always avoided Triggers in SQL Server since they're hard to document and can make troubleshooting more difficult as well, but is this a good use case for a trigger?
Currently the application determines all audit rows that need to be added on its own and sends hundreds of thousands of audit row INSERT statements to the server at times. This is really slow and not really maintainable for us.
Take a look at Change Data Capture if you are running Enterprise edition. It provides the DML audit trail you're looking for without the overhead associated with triggers or custom user/timestamp logging.
I have worked on financial systems where each table under audit had its own audit table (e.g. for USERS there was USERS_AUDIT), with the same schema (minus primary key) plus:
A char(1) column to indicate the type of change ('I' = insert, 'U' = update, 'D' = delete)
A datetime column with a default value of GETDATE()
A varchar(255) column indicating the user who made the change (defaulting to USER_ID())
These tables were always inserted into (append-only) by triggers on the table under audit. This will result in fewer inserts for you and better performance, at the cost of having to administer many more audit tables.
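A minimal sketch of this pattern, assuming a hypothetical dbo.USERS table with columns (id, name); the table name, column names, and the use of SUSER_SNAME() for the user column are illustrative assumptions:
CREATE TABLE dbo.USERS_AUDIT (
    change_type char(1)       NOT NULL,                      -- 'I' = insert, 'U' = update, 'D' = delete
    changed_at  datetime      NOT NULL DEFAULT GETDATE(),
    changed_by  varchar(255)  NOT NULL DEFAULT SUSER_SNAME(),
    id          int           NOT NULL,
    name        nvarchar(100) NULL
);
GO
CREATE TRIGGER trg_USERS_audit ON dbo.USERS
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Updates appear in both inserted and deleted; log the new image as 'U'
    INSERT INTO dbo.USERS_AUDIT (change_type, id, name)
    SELECT CASE WHEN EXISTS (SELECT 1 FROM deleted) THEN 'U' ELSE 'I' END, i.id, i.name
    FROM inserted AS i;

    -- Pure deletes have rows only in deleted
    INSERT INTO dbo.USERS_AUDIT (change_type, id, name)
    SELECT 'D', d.id, d.name
    FROM deleted AS d
    WHERE NOT EXISTS (SELECT 1 FROM inserted);
END;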
I've implemented audit logic in stored procedures before, but the same idea applies to doing it in triggers.
Working Table: (id, field1, field2, field3, ... field-n)
History Table: (userID, Date/time, action (CUD), id, field1, field2, field3, ... field-n)
This also allows for easy querying to see how data historically changed.
Each time a row in a table is changed, a record is created in History table.
Some of our tables are very large (100+ fields), so 100+ inserts would be too heavy a load, and there would also be no meaningful way to quickly see what happened to the data.

Does ALTER TABLE ALTER COLUMN interrupt ongoing db access?

I need to change a column in a table so that it is no longer NVARCHAR(256) but NVARCHAR(MAX). I know the command to do this (ALTER TABLE ... ALTER COLUMN ... NVARCHAR(MAX)). My question is really about disruption. I have to do this in a production environment, and I was wondering whether, while I carry this out on the live environment, there may be some disruption to users. Will users who are using the database at the time be booted off? Is this operation likely to take too long?
Thanks,
Sachin
I've deleted my previous answer which claimed that this would be a metadata only change and am submitting a new one with an entirely different conclusion!
Whilst this is true for changes up to nvarchar(4000), for the case of changing to nvarchar(max) the operation seems extremely expensive. SQL Server will add a new variable-length column and copy the previously existing data, which will likely mean a time-consuming blocking operation resulting in many page splits and both internal and logical fragmentation.
This can be seen from the example below:
CREATE TABLE T
(
Foo int IDENTITY(1,1) primary key,
Bar NVARCHAR(256) NULL
)
INSERT INTO T (Bar)
SELECT TOP 4 REPLICATE(CHAR(64 + ROW_NUMBER() OVER (ORDER BY (SELECT 0))),50)
FROM sys.objects
ALTER TABLE T ALTER COLUMN Bar NVARCHAR(MAX) NULL
Looking at the page in SQL Server Internals Viewer afterwards shows that the white 41 00 ... bytes are wasted space from the previous version of the column.
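If you don't have the Internals Viewer to hand, the leftover bytes can also be inspected with DBCC PAGE; this is a rough sketch (the database name is a placeholder, and %%physloc%% / fn_PhysLocFormatter are undocumented but widely used):
-- Find the file:page:slot location of each row in T
SELECT sys.fn_PhysLocFormatter(%%physloc%%) AS row_location, Foo, Bar
FROM T;

DBCC TRACEON(3604);                      -- route DBCC PAGE output to the client
DBCC PAGE('YourDatabase', 1, 99999, 3);  -- replace 1 / 99999 with the file and page ids from row_location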
Any ongoing queries will not be affected; the database has to wait until it can take an exclusive table lock before the column can be altered.
While the change is being made, no queries can use the table, so if there are a lot of records in the table, the database will seem unresponsive to any queries that need to use it.
The advice has to be - make a backup and do it out of hours if you can.
That having been said, I would not expect your database to be disrupted by the change and it will not take very long to do it.
What about your client software? How will that be affected?
It should be fine unless you have a massive number of rows (millions). Yes, it will lock the table while it's updating, but pending requests will just wait on it.
