Column Indexing and DELETE query performance - sql-server

I have a table with a few columns like ID (PK), Name, created_time, etc., and periodically I DELETE rows from this table using this simple DELETE query:
DELETE FROM my_table WHERE created_time < 'some time';
I just want to know what the performance impact on INSERT, SELECT and DELETE will be if I do or do not create an INDEX on created_time.
This table may hold millions of rows, and a single DELETE query may remove hundreds of thousands of rows in one go.
Database : Oracle, JavaDB, DB2, SQL Server

If you create an INDEX on created_time:
1. INSERT will be slower, because the index has to be maintained for every new row.
2. SELECT and DELETE will be quicker when the WHERE condition filters on created_time (e.g. WHERE created_time < 'some time'), because the database can use an index range scan to locate just the matching rows instead of scanning the whole table.
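As a rough illustration, here is a minimal sketch, assuming SQL Server syntax and the table and column names from the question (the index name is an assumption):

-- non-unique index to support the periodic purge
CREATE NONCLUSTERED INDEX IX_my_table_created_time
ON my_table (created_time);

-- the purge can now seek on the index instead of scanning the whole table
DELETE FROM my_table WHERE created_time < 'some time';

Also note that when a single DELETE removes hundreds of thousands of rows, every index on the table (including this one) has to be maintained for each deleted row, so very large purges are often split into smaller batches.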

Related

Table is mutating, trigger may not see it (even on different tables)

I have two tables: Group and Schedule.
The Group table consists of the columns Group_Name and No_of_member.
The Schedule table consists of the columns Group_Name and Schedule_Date.
A group can have many schedules.
I'm trying to create a trigger which deletes all corresponding records from the Schedule table when a particular record is deleted from the Group table. I have written a trigger as:
CREATE OR REPLACE TRIGGER group_test
AFTER DELETE ON Group
FOR EACH ROW
BEGIN
    DELETE FROM Schedule WHERE Schedule.group_name = :old.group_name;
END;
But when I try to delete a record from the Group table with the command DELETE FROM Group WHERE group_name = 'AwesomeGroup';, Oracle shows:
Table is mutating, trigger/function may not see it.
at line 2, error during execution of trigger 'group_test'
This would make sense if I were trying to delete a record from the same table on which I've created the trigger (Group in this case), but the trigger is on the Group table and I'm deleting records from the Schedule table. So why does Oracle keep giving me this error?
You are getting this error because you have an ON DELETE CASCADE clause on table Schedule for the foreign key on column group_name.
So naturally, when you DELETE from table Group, Oracle automatically follows the foreign key and deletes the corresponding records in Schedule, which means that table is mutating at the very moment your trigger also tries to modify it.
This is what causes your trigger group_test to fail. In fact, the trigger is redundant with the ON DELETE CASCADE clause. I agree with @WernfriedDomscheit: you must choose one of the two solutions.
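For example, a sketch of the two options (the constraint name fk_schedule_group is an assumption, since the question does not show the DDL; note also that GROUP is a reserved word in Oracle, so the real table name is presumably different or quoted):

-- Option 1: keep the cascading foreign key and drop the redundant trigger
DROP TRIGGER group_test;

-- Option 2: keep the trigger and recreate the foreign key without the cascade
ALTER TABLE Schedule DROP CONSTRAINT fk_schedule_group;
ALTER TABLE Schedule
    ADD CONSTRAINT fk_schedule_group
    FOREIGN KEY (group_name) REFERENCES Group (group_name);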

SQL Server execution plan is suggesting to create an index containing all the columns in the table

I've got a key table with 2 columns: Key, Id.
In a stored procedure I've written, my code joins the Employee table to the Key column, then selects the Id - something like this:
SELECT E.EmployeeName, K.Id
FROM Employee E
JOIN KeyTable K ON E.Key = K.Key
The execution plan is suggesting to create the following index:
[schema].[Employee] ([Key]) INCLUDE ([Id])
My question is why? If all the information is in the table to begin with why create an index and duplicate that information?
Just because all of the information is "in the table", that doesn't mean that searching the entire table is the most efficient way of obtaining the results for this query.
Here, the server is saying that, if it had a way to quickly locate rows in this table given a Key value, the query could be processed more quickly (not that it's 100% reliable in its suggestions, so you should test before implementing).
This can be true if the table is a heap (no clustered index), or for a clustered table where the clustering key(s) don't match the desired access order for the query.
Also, if you think about it, every (non-clustered) index duplicates information. It's just that usually it's a subset of the information rather than the whole set.
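If you decide to follow the suggestion, the plan's recommendation translates to something like this (the index name is an assumption; [schema] stands for your actual schema):

CREATE NONCLUSTERED INDEX IX_Employee_Key
ON [schema].[Employee] ([Key])
INCLUDE ([Id]);

The INCLUDE clause copies Id into the index leaf level, so the query above can be answered from the index alone, without touching the rest of the table.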

Added an Index to a field and it's still running slow

We have 10M records in a table in a SQL Server 2012 database and we want to retrieve the top 2000 records based on a condition.
Here's the SQL statement:
SELECT TOP 2000 *
FROM Users
WHERE LastName = 'Stokes'
ORDER BY LastName
I have added a non-clustered index on the column LastName, and it takes 9 seconds to retrieve the 2000 records. I also tried creating an indexed view with an index on the same column, but to no avail; it takes about the same time. Is there anything else I can do to improve the performance?
Using SELECT * will cause key lookups for all the rows that match your criteria: for each matching value of the clustered key, the database has to travel through the clustered index to the leaf level to find the rest of the column values.
You can see that in the actual execution plan, and you can also check that the index you created is actually being used (an index seek on that index). If the key lookup is the reason for the slowness, the query will run fast if you select only the indexed column, i.e. just SELECT LastName FROM ....
If there are actually just a few columns you need from the table (or there aren't that many columns in the table), you can add those columns as included columns in your index, and that should speed it up. Always specify the fields you need in the SELECT instead of just using SELECT *.
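A minimal sketch of such a covering index (FirstName and Email are hypothetical columns used for illustration; substitute whatever your query actually needs):

CREATE NONCLUSTERED INDEX IX_Users_LastName
ON Users (LastName)
INCLUDE (FirstName, Email);

SELECT TOP 2000 LastName, FirstName, Email
FROM Users
WHERE LastName = 'Stokes';

With the needed columns included in the index, the query is satisfied entirely from the index and the per-row key lookups disappear.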

Permanently sorting a table in SQL Server based on pre-existing data

I have made a table in SQL Server based on pre-existing data:
SELECT pre_existing_data
INTO new_table
FROM existing_table
I am trying to get the output to permanently sort by a particular field once the table is created. I thought this would be as simple as adding an ORDER BY clause at the end of the chunk of code that makes the table, but the data still won't sort properly.
There is no way to permanently sort a table in SQL.
You can create an index on the table and queries which use the index (in an ORDER BY clause) will be returned quicker, but the order the data is stored on the disk is not controllable.
You can create an index-organized table by using a CLUSTERED INDEX, which stores the data on disk ordered by the clustering key. Then, if you ORDER BY the clustering key in your query, the data should come out very fast. Note that you still have to use ORDER BY in the query no matter what: without it, the order of the returned rows is never guaranteed.
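A minimal sketch, assuming a hypothetical sort_column as the field you want to order by:

-- turn the heap created by SELECT ... INTO into a clustered table
CREATE CLUSTERED INDEX IX_new_table_sort
ON new_table (sort_column);

-- still required, but now it is a cheap ordered scan of the clustered index
SELECT * FROM new_table ORDER BY sort_column;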
I made a new table in SQL Server on the pre-existing schema and inserted the data in the desired order:
INSERT INTO new_table
SELECT * FROM old_table
ORDER BY col ASC;  -- or DESC
After that, drop the old table and rename the new table to the old table's name (sp_rename is how SQL Server renames a table):
DROP TABLE old_table;
EXEC sp_rename 'new_table', 'old_table';
Try this trick to sort the data in your table permanently.

Oracle database truncate table (or something similar, but not delete) partially based on condition

I am not sure whether this question is an obvious one. I need to delete a large amount of data, and DELETE is expensive. I would like to truncate the table, but not fully, so that the space is released and the high-water mark is lowered.
Is there any feature which would allow me to truncate a table based on a condition for select rows?
It depends on how your table is organised.
1) If your (large) table is partitioned on a matching condition (e.g. you want to delete the previous month's data and the table is partitioned by month), you can truncate just that partition instead of the entire table; a sketch follows the code for option 2 below.
2) The other option, provided you have some downtime, is to insert the data that you want to keep into a temporary table, truncate the original table, and then load the data back:
-- create a holding table with the rows you want to keep
create table table1 as
select * from my_table
where <condition>;

truncate table my_table;

-- load the kept rows back
insert into my_table
select * from table1;
commit;

-- since the amount of data might change considerably,
-- you might want to collect statistics again
exec dbms_stats.gather_table_stats(ownname => 'SCHEMA_NAME', tabname => 'MY_TABLE');
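For the first option, a minimal sketch (the partition name p_prev_month is an assumption for illustration; use the partition that holds the rows you want to remove):

-- releases the space and resets the high-water mark for that partition only
alter table my_table truncate partition p_prev_month;
-- note: global indexes may need the UPDATE INDEXES clause or a rebuild afterwards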
