Can a database integrity check be run per table?

As the title says, PRAGMA integrity_check works on the entire database; can it, under any condition, work on a per-table basis?
(in reference to SQLite3)

Yes: PRAGMA integrity_check(<TABLENAME>) (supported since SQLite 3.33.0).
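A minimal sketch in Python using the stdlib sqlite3 module; the per-table form needs SQLite 3.33.0 or newer, so the example guards on the library version (the table name `t` is invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO t(name) VALUES ('a'), ('b')")

# Whole-database check: works on any SQLite version
whole_db = conn.execute("PRAGMA integrity_check").fetchone()[0]

# Per-table check: PRAGMA integrity_check(<tablename>) needs SQLite 3.33.0+
if sqlite3.sqlite_version_info >= (3, 33, 0):
    per_table = conn.execute("PRAGMA integrity_check(t)").fetchone()[0]
else:
    per_table = whole_db  # older library: fall back to the full check
```

Both checks return the single row "ok" when the database is healthy.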


Is TRUNCATE a DDL or DML command?

I always knew that TRUNCATE is a DDL command, but Microsoft's documents are confusing me.
This link says that TRUNCATE is a DDL command, and this one says that TRUNCATE is a DML command.
Also, is the classification of DDL and DML different in different databases, e.g. Oracle, MySQL, etc.?
Personally, I would say that TRUNCATE is a DML command; you're manipulating the data using it, not changing the definition.
There are a few bits in the docs that conflict, mainly because some are older than others. They can't even decide whether CASE is a statement or an expression.
Wikipedia says TRUNCATE is DML:
In SQL, the TRUNCATE TABLE statement is a Data Manipulation Language
(DML) operation that marks the extents of a table for deallocation
(empty for reuse). The result of this operation quickly removes all
data from a table, typically bypassing a number of integrity enforcing
mechanisms. It was officially introduced in the SQL:2008 standard.
https://en.wikipedia.org/wiki/Truncate_(SQL)
Well, TRUNCATE TABLE, as described in Microsoft documentation, is similar to DELETE (by their own admission). And DELETE is DML; therefore, TRUNCATE TABLE should be DML as well.
Perhaps whoever wrote the first article made a mistake putting it there, or perhaps they wanted to point out that it's a command to be used with the same caution as DDL.
I must admit that this is the first time I have seen that command, and I don't know if it's included in the SQL standard.
The fact is that TRUNCATE is a DDL command. The first link you provided is correct, the second one was fixed yesterday.
Wikipedia also defines it as a DDL command, but an incorrect edit made on 12 February 2018 (and properly reverted on 3 April 2018) made it say otherwise for a while.

How can I check if a column already exists, to avoid ALTER TABLE errors, in an SQL script file for SQLite

I am adding versioning to my database a bit later than I should, and as such I have some tables with inconsistent states. I have a table that a column was added to in Java, but not all tables are guaranteed to have that column at this point.
What I had been doing is on the first run of the program, checking if the column existed, and adding it if it did not exist.
The library (flyway.org) I am using to deal with versioning takes in a bunch of .sql files in order to set up the database. For many tables, this is simple, I just have an sql file that has "CREATE TABLE IF NOT EXISTS XXX," which means it is easily handled, those can still be run.
I am wondering if there is some way to handle these alter tables without SQLite generating an error that I haven't thought of, or if I haven't found out how to do it.
I've tried looking to see if there is a command to add a column if it doesn't exist, but there doesn't seem to be one. I've tried to find a way to handle errors in sqlite, for example running the alter table anyways, and just ignoring the error, but there doesn't seem to be a way of doing that (as far as I can tell). Does anyone have any suggestions? I want a solution 100% in a .sql script if possible.
There is no "IF NOT EXISTS" clause for ALTER TABLE in SQLite.
There is a way to interrogate the database for the columns a table contains, with PRAGMA table_info(table_name); but there is no pure-SQL way to take that information and apply it to an ALTER TABLE statement.
Maybe one day, but not today.
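Since pure SQL can't do it, the usual workaround is to do the PRAGMA table_info check in application code instead. A minimal Python sketch (the table and column names are invented for illustration):

```python
import sqlite3

def add_column_if_missing(conn, table, column, decl):
    # PRAGMA table_info yields one row per column; the column name is field 1
    existing = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
    if column not in existing:
        conn.execute(f"ALTER TABLE {table} ADD COLUMN {column} {decl}")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users(id INTEGER PRIMARY KEY)")
add_column_if_missing(conn, "users", "email", "TEXT")
add_column_if_missing(conn, "users", "email", "TEXT")  # no-op on second call
columns = [row[1] for row in conn.execute("PRAGMA table_info(users)")]
```

Because the function checks before altering, the script stays idempotent and never triggers a "duplicate column name" error.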

For Oracle Database, how to find when a row was inserted? (timestamp) [duplicate]

Can I find out when the last INSERT, UPDATE or DELETE statement was performed on a table in an Oracle database and if so, how?
A little background: The Oracle version is 10g. I have a batch application that runs regularly, reads data from a single Oracle table and writes it into a file. I would like to skip this if the data hasn't changed since the last time the job ran.
The application is written in C++ and communicates with Oracle via OCI. It logs into Oracle with a "normal" user, so I can't use any special admin stuff.
Edit: Okay, "Special Admin Stuff" wasn't exactly a good description. What I mean is: I can't do anything besides SELECTing from tables and calling stored procedures. Changing anything about the database itself (like adding triggers) is sadly not an option if I want to get it done before 2010.
I'm really late to this party but here's how I did it:
SELECT SCN_TO_TIMESTAMP(MAX(ora_rowscn)) from myTable;
It's close enough for my purposes.
Since you are on 10g, you could potentially use the ORA_ROWSCN pseudocolumn. That gives you an upper bound of the last SCN (system change number) that caused a change in the row. Since this is an increasing sequence, you could store off the maximum ORA_ROWSCN that you've seen and then look only for data with an SCN greater than that.
By default, ORA_ROWSCN is actually maintained at the block level, so a change to any row in a block will change the ORA_ROWSCN for all rows in the block. This is probably quite sufficient if the intention is to minimize the number of rows you process multiple times with no changes if we're talking about "normal" data access patterns. You can rebuild the table with ROWDEPENDENCIES which will cause the ORA_ROWSCN to be tracked at the row level, which gives you more granular information but requires a one-time effort to rebuild the table.
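The "remember the highest SCN you've seen, then fetch only newer rows" pattern looks roughly like this. It is sketched in Python against SQLite, with an ordinary integer column standing in for Oracle's ORA_ROWSCN pseudocolumn; the table, column names, and SCN values are all invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table(id INTEGER PRIMARY KEY, data TEXT, rowscn INTEGER)")
conn.executemany("INSERT INTO my_table VALUES (?, ?, ?)",
                 [(1, "a", 100), (2, "b", 105), (3, "c", 110)])

last_seen_scn = 105  # persisted by the batch job between runs

# Only rows changed after the stored SCN need to be exported again
new_rows = conn.execute(
    "SELECT id, data FROM my_table WHERE rowscn > ?", (last_seen_scn,)
).fetchall()

# Store this off for the next run
max_scn = conn.execute("SELECT MAX(rowscn) FROM my_table").fetchone()[0]
```

In real Oracle the WHERE clause would be `WHERE ORA_ROWSCN > :last_seen`, and because block-level tracking only gives an upper bound, the job may occasionally re-read unchanged rows, which is harmless for this use case.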
Another option would be to configure something like Change Data Capture (CDC) and to make your OCI application a subscriber to changes to the table, but that also requires a one-time effort to configure CDC.
Ask your DBA about auditing. They can start an audit with a simple command like:
AUDIT INSERT ON user.table
Then you can query the table USER_AUDIT_OBJECT to determine if there has been an insert on your table since the last export.
Google "Oracle auditing" for more info.
SELECT * FROM all_tab_modifications;
Could you run a checksum of some sort on the result and store that locally? Then when your application queries the database, you can compare its checksum and determine if you should import it?
It looks like you may be able to use the ORA_HASH function to accomplish this.
Update: Another good resource: 10g’s ORA_HASH function to determine if two Oracle tables’ data are equal
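The "checksum the result locally" idea can be sketched as follows, using a client-side hash rather than Oracle's ORA_HASH, with SQLite as a stand-in and invented table names:

```python
import hashlib
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE export_src(id INTEGER PRIMARY KEY, data TEXT)")
conn.executemany("INSERT INTO export_src VALUES (?, ?)", [(1, "a"), (2, "b")])

def table_checksum(conn):
    # Hash the full result set in a deterministic order
    h = hashlib.sha256()
    for row in conn.execute("SELECT id, data FROM export_src ORDER BY id"):
        h.update(repr(row).encode())
    return h.hexdigest()

before = table_checksum(conn)
unchanged = table_checksum(conn) == before  # no writes: checksum is stable
conn.execute("INSERT INTO export_src VALUES (3, 'c')")
changed = table_checksum(conn) != before    # a write changes the checksum
```

The batch job would store the checksum alongside the exported file and skip the export when it matches; note this still reads the whole table, so it saves the file write, not the scan.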
Oracle can watch tables for changes and, when a change occurs, execute a callback function in PL/SQL or OCI. The callback gets an object that is a collection of the tables which changed, each with a collection of the rowids which changed and the type of action (insert, update, delete).
So you don't even go to the table, you sit and wait to be called. You'll only go if there are changes to write.
It's called Database Change Notification. It's much simpler than CDC as Justin mentioned, but both require some fancy admin stuff. The good part is that neither of these require changes to the APPLICATION.
The caveat is that CDC is fine for high volume tables, DCN is not.
If auditing is enabled on the server, simply use
SELECT *
FROM ALL_TAB_MODIFICATIONS
WHERE TABLE_NAME IN ()
You would need to add a trigger on insert, update, delete that sets a value in another table to sysdate.
When you run application, it would read the value and save it somewhere so that the next time it is run it has a reference to compare.
Would you consider that "Special Admin Stuff"?
It would be better to describe what you're actually doing so you get clearer answers.
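The trigger-plus-marker-table idea can be sketched as follows. This uses SQLite trigger syntax as a stand-in (an Oracle trigger would be written in PL/SQL and stamp SYSDATE), and the table names are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders(id INTEGER PRIMARY KEY, item TEXT);
CREATE TABLE last_change(tbl TEXT PRIMARY KEY, changed_at TEXT);

-- One trigger per DML action stamps the marker table with the change time
CREATE TRIGGER orders_ins AFTER INSERT ON orders BEGIN
    INSERT OR REPLACE INTO last_change VALUES ('orders', datetime('now'));
END;
CREATE TRIGGER orders_upd AFTER UPDATE ON orders BEGIN
    INSERT OR REPLACE INTO last_change VALUES ('orders', datetime('now'));
END;
CREATE TRIGGER orders_del AFTER DELETE ON orders BEGIN
    INSERT OR REPLACE INTO last_change VALUES ('orders', datetime('now'));
END;
""")
conn.execute("INSERT INTO orders(item) VALUES ('widget')")
stamp = conn.execute(
    "SELECT changed_at FROM last_change WHERE tbl = 'orders'").fetchone()
```

The application then compares the stored timestamp against the one it saved on its previous run and skips the export when they match.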
How long does the batch process take to write the file? It may be easiest to let it go ahead and then compare the file against a copy of the file from the previous run to see if they are identical.
If anyone is still looking for an answer, they can use the Oracle Database Change Notification feature introduced in Oracle 10g. It requires the CHANGE NOTIFICATION system privilege. You can register listeners to trigger a notification back to the application.
Please use the statement below:
select * from all_objects ao where ao.OBJECT_TYPE = 'TABLE' and ao.OWNER = 'YOUR_SCHEMA_NAME'

Do relational databases execute insert statements in parallel or sequentially?

If two users were to execute INSERT INTO statements on the same target table at the same time, would these be executed in parallel or in sequence?
Will this behavior change based on whether the target table has a primary key or not?
Is this a defined rule for all relational databases or do different vendors implement this in different ways?
In general they will (should) be executed in parallel, even if a primary key is defined.
The behaviour depends heavily on the DBMS. MySQL with MyISAM, for example, will block any further access to a table while DML is being executed against it. The same is true for SQL Server in the default installation and for older DB2 versions.
In general, if the DBMS uses MVCC (Oracle, PostgreSQL, Firebird, MySQL/InnoDB, ...), you can expect inserts to run in parallel.
One thing that can block concurrent inserts is two transactions inserting the same primary key value. In that case the second transaction will need to wait for the first to either commit (then the second one will get a PK violation error) or roll back (then the second one will succeed).

Drop table which is currently in use [sybase]

Is there any way to force-drop a table in Sybase which is currently in use?
Or any way to get rid of the lock?
A force drop would be the better option, but does one exist?
Sybase is fully online and multi-user; there is no need for single-user mode.
If you have enough privilege you can perform various actions. None of these actions break data, database, or referential integrity that is already defined in DDL:
if the problem is that the table (not pages) is locked, and you want to eliminate the table lock that is preventing other users from accessing the table, kill the spid; sp_lock will identify the server process id.
if you actually want to drop the table, but it is locked, first kill the spid, then drop the table.
(There is a "force drop" command, but it is undocumented and unsupported; more important, it is for special cases and not necessary for yours.)
No, you can't, because if it did, Sybase would break the integrity of the database.
Imagine: one user is reading data from a table while, at the same time, another is destroying that same table!
If you want to force it, you have to put the database into "single-user" mode; after that, no one but you can connect to the database, and you can do what you want...
See http://www.tek-tips.com/viewthread.cfm?qid=220392&page=49
for switching to single-user mode.
