I am working on Microsoft SQL Server.
I have a database with 30 tables
Some tables have a column called LicenceID
I want to force delete all records in all the tables that have LicenceID = 38.
By "force delete" I mean I want to delete even if there are constraints.
Please can anyone help me
Thx
mike
The first step is to determine the table dependencies (whether a table depends on another table, or other tables depend on it). Based on that (and if you can't alter the constraints for some reason), temporarily disable its constraints (for each table):
ALTER TABLE <NAMETABLE> NOCHECK CONSTRAINT ALL
do the necessary deletes in each dependent table, and then re-enable its constraints (again, for each table):
ALTER TABLE <NAMETABLE> CHECK CONSTRAINT ALL
Try this inside a transaction with ROLLBACK first.
PS: There may be better solutions, but I hope this helps.
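Putting that together, a minimal sketch might look like this (the table names are placeholders for whichever of your tables actually contain LicenceID):

BEGIN TRANSACTION;

-- Temporarily disable constraint checking on each affected table
ALTER TABLE dbo.Licences NOCHECK CONSTRAINT ALL;
ALTER TABLE dbo.Orders   NOCHECK CONSTRAINT ALL;

-- Delete the records in question
DELETE FROM dbo.Licences WHERE LicenceID = 38;
DELETE FROM dbo.Orders   WHERE LicenceID = 38;

-- Re-enable constraint checking
-- (use WITH CHECK CHECK CONSTRAINT ALL instead if you also want existing rows re-validated)
ALTER TABLE dbo.Licences CHECK CONSTRAINT ALL;
ALTER TABLE dbo.Orders   CHECK CONSTRAINT ALL;

ROLLBACK TRANSACTION;  -- change to COMMIT TRANSACTION once you are happy with the result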
Your constraints are there for a reason. Spend the time, figure out the dependencies, and do the deletes in the right order so that you don't violate them.
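To figure out which tables actually have a LicenceID column, and how the tables reference each other, queries against the catalog views along these lines may help (just a sketch, not tailored to your schema):

-- Tables that contain a LicenceID column
SELECT t.name AS table_name
FROM sys.tables t
JOIN sys.columns c ON c.object_id = t.object_id
WHERE c.name = 'LicenceID';

-- Foreign key relationships, to work out a safe delete order
SELECT fk.name AS constraint_name,
       OBJECT_NAME(fk.parent_object_id)     AS referencing_table,
       OBJECT_NAME(fk.referenced_object_id) AS referenced_table
FROM sys.foreign_keys fk;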
Is it possible to change a global temporary table in Oracle from PRESERVE ROWS to DELETE ROWS?
I have tried the following command and I get a syntax error. If it is possible, what is the correct syntax?
ALTER TABLE BLOCKING_RESULTS ON COMMIT DELETE ROWS
SQL Error: ORA-01735: invalid ALTER TABLE option
01735. 00000 - "invalid ALTER TABLE option"
It is not possible. The valid syntax is documented, and it doesn't include the ability to change this. Not being able to change it isn't listed explicitly as one of the restrictions for GTTs, but that list only covers things that are allowed for other types of table.
You'll have to drop and recreate the table with the new on commit clause.
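For example (the column list below is just a placeholder; use your table's real definition):

DROP TABLE blocking_results;

CREATE GLOBAL TEMPORARY TABLE blocking_results (
  result_id   NUMBER,
  result_text VARCHAR2(4000)
) ON COMMIT DELETE ROWS;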
Tom Kyte made a succinct comment on this way back in 2003.
(I'd speculate that it might be related to the statement that table locks are not acquired on temporary tables; though how it lets you add columns without that being an issue is interesting. Altering the preservation while sessions have data in the GTT might have odd side-effects anyway...)
According to this example/article, in step 7:
- You cannot alter a temporary table to change its data duration.
- You must drop and create it again.
http://oracle-plsql-tech.blogspot.com.tr/2013/03/temporary-tables.html
This applies if your case is the same.
The syntax you have used is wrong. Try the PL/SQL ALTER TABLE syntax.
I've just started playing around with Replicating our system and am not sure how best to handle this issue.
I want to filter data, but it's not as easy as "where columnName = 'abc'". So I'm writing a big, complicated process that determines which records from each table are going to be replicated. I'm storing the PKs for each table in temp tables. I envisioned that the pre_snapshot_script would create and populate these tables and the post_snapshot_script would delete them. The filter statements for these tables then read something like "where PK in (select pk_id from temp table)".
So. Where can I put this data? Do I need to make persistent tables in my database in order to have them marked for replication? I assume any #temp or ##temp tables won't work.
I think your PK IN (SELECT PK FROM table) idea may be correct.
You're right that it would need to be a persistent table. Can you provide a little more detail on the scenario? Are you generating snapshots hourly/daily/weekly? Any transactional replication going on after? What sort of logic are you using?
These answers might help illuminate other solutions or help verify that your initial path was the correct one.
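A minimal sketch of the persistent-table approach (the table and column names here are made up for illustration):

-- Persistent filter table, populated by the pre_snapshot_script
CREATE TABLE dbo.SnapshotFilter_Orders (
    pk_id INT NOT NULL PRIMARY KEY
);

INSERT INTO dbo.SnapshotFilter_Orders (pk_id)
SELECT o.OrderID
FROM dbo.Orders AS o
WHERE o.IsActive = 1;  -- stand-in for your complicated filtering logic

-- The article's row filter would then be something like:
--   OrderID IN (SELECT pk_id FROM dbo.SnapshotFilter_Orders)

-- post_snapshot_script cleanup
-- TRUNCATE TABLE dbo.SnapshotFilter_Orders;  -- or DROP TABLE, as you prefer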
Normally I would do a DELETE FROM XXX, but on this table that's very slow; it normally has about 500k to 1m rows in it (one column is a varbinary(MAX), if that matters).
Basically I'm wondering if there is a quick way to empty the table of all content. It's actually quicker to drop and recreate it than to delete the content via the DELETE SQL statement.
The reason I don't want to recreate the table is that it's heavily used, and I assume dropping/recreating it will destroy indexes and stats gathered by SQL Server.
I'm also hoping there is a way to do this because there is a "clever" way to get the row count via sys.sysindexes, so I'm hoping there is an equally clever way to delete content.
Truncate table is faster than delete * from XXX. Delete is slow because it works one row at a time. There are a few situations where truncate doesn't work, which you can read about on MSDN.
As others have said, TRUNCATE TABLE is far quicker, but it does have some restrictions (taken from here):
You cannot use TRUNCATE TABLE on tables that:
- Are referenced by a FOREIGN KEY constraint. (You can truncate a table that has a foreign key that references itself.)
- Participate in an indexed view.
- Are published by using transactional replication or merge replication.
For tables with one or more of these characteristics, use the DELETE statement instead.
The biggest drawback is that if the table you are trying to empty has foreign keys pointing to it, then the truncate call will fail.
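If you control the schema, one workaround (the constraint, table, and column names below are hypothetical) is to drop the referencing foreign key, truncate, and then re-create the constraint:

ALTER TABLE dbo.ChildTable DROP CONSTRAINT FK_ChildTable_ParentTable;

TRUNCATE TABLE dbo.ParentTable;

ALTER TABLE dbo.ChildTable
    ADD CONSTRAINT FK_ChildTable_ParentTable
    FOREIGN KEY (ParentID) REFERENCES dbo.ParentTable (ParentID);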
You can rename the table in question, create a table with an identical schema, and then drop the original table at your leisure.
See the MySQL 5.1 Reference Manual for the RENAME TABLE and CREATE TABLE commands.
RENAME TABLE tbl TO tbl_old;
CREATE TABLE tbl LIKE tbl_old;
DROP TABLE tbl_old; -- at your leisure
This approach can help minimize application downtime.
I would suggest using TRUNCATE TABLE; it's quicker and uses fewer resources than DELETE FROM xxx.
Here are the related articles:
Truncate table in MS SQL Server
Truncate table in MySQL
I had to delete all the rows from a log table that contained about 5 million rows. My initial try was to issue the following command in query analyzer:
delete from client_log
which took a very long time.
Check out truncate table which is a lot faster.
I discovered TRUNCATE TABLE in the MSDN Transact-SQL reference. For all interested, here are the remarks:
TRUNCATE TABLE is functionally identical to DELETE statement with no WHERE clause: both remove all rows in the table. But TRUNCATE TABLE is faster and uses fewer system and transaction log resources than DELETE.
The DELETE statement removes rows one at a time and records an entry in the transaction log for each deleted row. TRUNCATE TABLE removes the data by deallocating the data pages used to store the table's data, and only the page deallocations are recorded in the transaction log.
TRUNCATE TABLE removes all rows from a table, but the table structure and its columns, constraints, indexes and so on remain. The counter used by an identity for new rows is reset to the seed for the column. If you want to retain the identity counter, use DELETE instead. If you want to remove table definition and its data, use the DROP TABLE statement.
You cannot use TRUNCATE TABLE on a table referenced by a FOREIGN KEY constraint; instead, use DELETE statement without a WHERE clause. Because TRUNCATE TABLE is not logged, it cannot activate a trigger.
TRUNCATE TABLE may not be used on tables participating in an indexed view.
There is a common myth that TRUNCATE somehow skips the transaction log.
This is a misunderstanding, and it is clearly mentioned in MSDN.
This myth is invoked in several comments here. Let's eradicate it together ;)
For reference TRUNCATE TABLE also works on MySQL
I use the following method to zero out tables, with the added bonus that it leaves me with an archive copy of the table.
CREATE TABLE `new_table` LIKE `table`;
RENAME TABLE `table` TO `old_table`, `new_table` TO `table`;
Forget TRUNCATE and DELETE. Keep your table definition (in case you want to recreate the table) and just use DROP TABLE.
truncate table client_log
is your best bet; truncate kills all content in the table and its indexes and resets any identity seeds you've got, too.
TRUNCATE TABLE is not platform-independent across SQL implementations. If you suspect that you might ever change database providers, be wary of using it.
On SQL Server you can use the Truncate Table command which is faster than a regular delete and also uses less resources. It will reset any identity fields back to the seed value as well.
The drawbacks of truncate are that it can't be used on tables that are referenced by foreign keys and it won't fire any triggers. Also, you won't be able to roll back the data if anything goes wrong.
Note that TRUNCATE will also reset any auto incrementing keys, if you are using those.
If you do not wish to lose your auto incrementing keys, you can speed up the delete by deleting in sets (e.g., DELETE FROM table WHERE id > 1 AND id < 10000). It will speed it up significantly and in some cases prevent data from being locked up.
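A rough sketch of a batched delete in T-SQL (the batch size is arbitrary; client_log is the table from the question):

-- Delete in batches so each transaction stays small and locks are held briefly
DECLARE @rows INT;
SET @rows = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (10000) FROM client_log;
    SET @rows = @@ROWCOUNT;
END;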
Yes, well, deleting 5 million rows is probably going to take a long time. The only potentially faster way I can think of would be to drop the table, and re-create it. That only works, of course, if you want to delete ALL data in the table.
The suggestion of "Drop and recreate the table" is probably not a good one because that goofs up your foreign keys.
You ARE using foreign keys, right?
If you cannot use TRUNCATE TABLE because of foreign keys and/or triggers, you can consider the following:
drop all indexes;
do the usual DELETE;
re-create all indexes.
This may speed up DELETE somewhat.
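A rough sketch of that approach (the index and column names are hypothetical):

-- Drop the nonclustered indexes first
DROP INDEX IX_client_log_logdate ON client_log;

-- Do the usual delete
DELETE FROM client_log;

-- Re-create the indexes afterwards
CREATE INDEX IX_client_log_logdate ON client_log (log_date);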
I am revising my earlier statement:
You should understand that by using TRUNCATE the data will be cleared but nothing will be logged to the transaction log. Writing to the log is why DELETE will take forever on 5 million rows. I use TRUNCATE often during development, but you should be wary about using it on a production database because you will not be able to roll back your changes. You should immediately make a full database backup after doing a TRUNCATE to establish a new basis for restoration.
The above statement was intended to prompt you to be sure that you understand there is difference between the two. Unfortunately, it is poorly written and makes unsupported statements as I have not actually done any testing myself between the two. It is based on statements that I have heard from others.
From MSDN:
The DELETE statement removes rows one at a time and records an entry in the transaction log for each deleted row. TRUNCATE TABLE removes the data by deallocating the data pages used to store the table's data, and only the page deallocations are recorded in the transaction log.
I just wanted to say that there is a fundamental difference between the two and because there is a difference, there will be applications where one or the other may be inappropriate.
DELETE FROM table_name;
Premature optimization may be dangerous. Optimizing may mean doing something weird, but if it works you may want to take advantage of it.
SELECT DbVendor_SuperFastDeleteAllFunction(tablename, BOZO_BIT) FROM dummy;
For speed I think it depends on...
The underlying database: Oracle, Microsoft, MySQL, PostgreSQL, others, custom...
The table, its content, and related tables:
- There may be deletion rules. Is there an existing procedure to delete all content in the table? Can this be optimized for the specific underlying database engine?
- How much do we care about breaking things / related data? Performing a DELETE may be the 'safest' way, assuming that other related tables do not depend on this table.
- Are there other tables and queries that relate to / depend on the data within this table? If we don't care much about this table being around, using DROP might be a fast method, again depending on the underlying database.
DROP TABLE table_name;
How many rows are being deleted? Is there other information that can be quickly gleaned that will optimize the deletion? For example, can we tell if the table is already empty? Can we tell if there are hundreds, thousands, millions, or billions of rows?