One of our tables was truncated by mistake, and a few records have since been inserted into it. Could you please suggest how to get the previous table back?
RESTORE YOUR BACKUP
One of the advantages of a truncate is that it leaves almost no footprint in the log. A delete works row by row, while a truncate is much faster because it deallocates the data pages all at once.
After the commit, there's nothing left to be rolled back.
EDIT
If there is an identity column in that table, it will also be reset, so any new inserts will "duplicate" the truncated ids.
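A quick way to see the identity reset for yourself (a minimal sketch; the table name is made up):

CREATE TABLE dbo.Demo (id INT IDENTITY(1,1), val INT);
INSERT INTO dbo.Demo (val) VALUES (10), (20);  -- gets ids 1 and 2
TRUNCATE TABLE dbo.Demo;
INSERT INTO dbo.Demo (val) VALUES (30);
SELECT id, val FROM dbo.Demo;  -- id is 1 again: the seed was reset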
Related
I was looking into the details of TRUNCATE vs. DELETE. Most of the differences are commonly observed in daily practice. However, the one I would like to know about is how to retrieve records removed by a truncate.
I read that the truncate command does not write a transaction log entry for each row like delete does; instead, it logs the deallocation of the pages.
So can I retrieve a truncated (and committed) record from a table?
e.g.
Truncate table Student_Record;
If I simply run this, can records from Student_Record be retrieved?
No, you can't. You will have to back up your data before doing the truncate.
According to the official docs:
TRUNCATE
Removes all rows from a table or specified partitions of a table, without logging the individual row deletions. TRUNCATE TABLE is similar to the DELETE statement with no WHERE clause; however, TRUNCATE TABLE is faster and uses fewer system and transaction log resources.
By not logging the individual rows you get a faster delete and save log space, but you are also unable to recover those rows.
The truncate can't even fire triggers:
TRUNCATE TABLE cannot activate a trigger because the operation does not log individual row deletions. For more information, see CREATE TRIGGER (Transact-SQL).
Be careful when using truncate: if you might want to retrieve deleted rows later, run a DELETE statement instead and look at the logs.
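If you know a truncate is coming and might need the rows afterwards, a cheap precaution is to snapshot the table first. A minimal sketch (the backup table name is made up; add SET IDENTITY_INSERT handling if the table has an identity column):

SELECT * INTO dbo.Student_Record_backup FROM dbo.Student_Record;
TRUNCATE TABLE dbo.Student_Record;
-- if the rows are ever needed again:
-- INSERT INTO dbo.Student_Record SELECT * FROM dbo.Student_Record_backup;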
EDIT
There are some solutions and tools that can read the data pages after the truncate has been executed. The data in those pages is only marked to be overwritten by new data. Tools like ApexSQL read them, letting you generate a script that recreates the truncated data. If this is done immediately after running the truncate, you may be able to recover the info.
However
If there are update, insert, or delete operations after you truncate, the data pages may be overwritten. If that happens, not even ApexSQL can help you.
Oracle 10g -- due to a compatibility issue with a 9i database, I'm pulling data through a 10g database (to be used by an 11g database) using INSERT INTO...SELECT statements via a scheduled job that runs every 15 minutes. I've noticed that TRUNCATE statements are much faster than DELETE statements, and I've read that a downside of DELETE is that it never lowers the table's high-water mark. My use of this data is purely read-only -- UPDATEs and INSERTs are never issued against the tables in question.
Given the above, I want to avoid the possible situation where my 'working' database (Oracle 11g) attempts to read from a table on my staging database (10g) that is empty for a period of time because the TRUNCATE happened straight away and the INSERT INTO...SELECT from the 9i database is taking a couple of minutes to complete.
So I'm wondering: is that how Oracle handles TRUNCATEs within a transaction, or is the whole operation performed and COMMITted, despite the fact that TRUNCATEs can't be rolled back? Put another way, from an external SELECT's point of view, if I wrap a TRUNCATE and an INSERT INTO...SELECT on a table in a transaction, will the table ever appear empty to an external SELECT reading from it?
Once a table has been truncated in a transaction, you cannot do anything else with that table in the same transaction; you have to commit (or rollback) the transaction before you can use that table again. Or, it may be that truncating a table effectively terminates the current transaction. Either way, if you use TRUNCATE, you have a window when the table is truncated (empty) but the INSERT operation has not completed. This is not what you wanted, but it is what Oracle provides.
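The latter interpretation is the accurate one: in Oracle, TRUNCATE is DDL, and DDL issues an implicit commit. A quick way to see it (table names are made up):

INSERT INTO t1 VALUES (1);   -- not yet committed
TRUNCATE TABLE t2;           -- DDL: implicitly commits the INSERT above
ROLLBACK;                    -- too late; the row in t1 is already committed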
You can do a partition exchange. Have two partitions in the staging table: p_OLD and p_NEW.
Before the insert, exchange "new" -> "old" and truncate the "new" partition. (At this point, a select from the table still sees the old data.)
Insert the data into the "new" partition, then truncate the "old" partition. (At this point you see the new data.)
With this approach your table is never empty to the onlooker.
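A sketch of that flow (all names are hypothetical). Oracle exchanges a partition with a standalone table, so moving rows from one partition to another takes an empty work table of identical structure:

ALTER TABLE staging EXCHANGE PARTITION p_new WITH TABLE wrk;  -- current rows -> wrk
ALTER TABLE staging EXCHANGE PARTITION p_old WITH TABLE wrk;  -- current rows -> p_old
-- both exchanges are dictionary-only operations and complete almost instantly;
-- readers still see the previous load, now sitting in p_old
INSERT INTO staging PARTITION (p_new)
SELECT col1, col2 FROM source_table;  -- the slow pull from the 9i database
COMMIT;
-- note: between the commit and the next line, readers briefly see both copies
ALTER TABLE staging TRUNCATE PARTITION p_old;  -- discard the stale copy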
Why do you need 3 Oracle environments?
A stored procedure written by a recent employee of our company went haywire, causing mass inserts into a debug table of his. The table is unindexed, is now at close to 1.7 billion rows, and is taking up so much space that the backup no longer fits on the backup drive (backups now reach close to 250 GB).
I haven't really seen anything like this, so I'm seeking advice from the MSSQL Gurus out here.
I know I could nibble away at the table, but since it's unindexed, DELETE FROM [TABLE] WHERE ID IN (SELECT TOP 10000 [ID] FROM [TABLE]) nearly locks up the server searching for the rows.
I also don't want my log file to get massive; it's currently sitting at 480 GB on a 1 TB drive. If I delete this table, will I be able to shrink it back down? (My recovery model is simple.)
We could index the ID field on the table, though we only have around 9 hours of downtime a day, and during business hours we can't be locking up the database.
Just looking for advice here, and a point in the right direction.
Thanks.
You may want to consider TRUNCATE
MSDN reference: http://technet.microsoft.com/en-us/library/aa260621(v=sql.80).aspx
Removes all rows from a table without logging the individual row deletes.
Syntax:
TRUNCATE TABLE [YOUR_TABLE]
As @Rahul suggests in the comments, you could also use DROP TABLE [YOUR_TABLE] if you no longer plan to use the table in question. The TRUNCATE option simply empties the table but leaves it in place in case you want to continue using it.
With regard to the space issue, both of these operations are comparatively quick, and the space will be reclaimed, though not instantly. When using TRUNCATE the data still has to be removed, but SQL Server simply deallocates the data pages used by the table and lets a background process perform the actual cleanup afterwards.
This post should provide some useful information.
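One way to watch the space come back is sp_spaceused (the table name here is just an example); the reserved and data figures drop as the background cleanup deallocates pages:

EXEC sp_spaceused 'dbo.DebugTable';
TRUNCATE TABLE dbo.DebugTable;
EXEC sp_spaceused 'dbo.DebugTable';  -- run again after a moment; reserved space falls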
One suggestion would be to take a backup of only that 1.7-billion-row table (probably to a tape drive or somewhere with enough space) and then drop the table with DROP TABLE table_name.
That way, if that debug table's data is ever needed in the future, you have a copy and can restore it from the backup.
I would remove the logging for this table and launch a delete stored procedure that would commit every 1000 rows.
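(SQL Server has no per-table switch to turn logging off, but the batch-commit part is sound: under the simple recovery model, each committed batch lets log space be reused.) A minimal sketch of such a loop, with a hypothetical table name:

WHILE 1 = 1
BEGIN
    DELETE TOP (1000) FROM dbo.DebugTable;  -- each statement commits on its own
    IF @@ROWCOUNT = 0 BREAK;
END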
I have a table in a SQL Server 2008 R2 instance that a scheduled process runs against nightly. The table can have upwards of 500K records in it at any one time. After processing this table I need to remove all rows from it, so I am wondering which of the following methods would produce the least overhead (i.e., excessive transaction log entries):
Truncate Table
Drop and recreate the table
Deleting the contents of the table is out due to the time it takes and the extra transaction log entries it creates.
The consensus seems to be truncation. Thanks, everyone!
TRUNCATE TABLE is your best bet. From MSDN:
Removes all rows from a table without logging the individual row deletes.
So that means it won't bloat your transaction log. Dropping and creating the table not only requires more complex SQL, but also additional permissions. Any settings attached to the table (triggers, GRANT or DENY permissions, etc.) would also have to be rebuilt.
Truncating the table does not leave row-by-row entries in the transaction log, so neither solution will clutter up your logs too much. If it were me, I'd truncate rather than drop and create each time.
I would go for TRUNCATE TABLE. With drop-and-create you can incur overhead when indexes, triggers, etc. are dropped, plus you will lose permissions, which will have to be re-created along with any other objects required for that table.
Also, the DROP TABLE entry on MSDN below mentions a little gotcha if you execute DROP and CREATE TABLE in the same batch:
DROP TABLE and CREATE TABLE should not be executed on the same table in the same batch. Otherwise an unexpected error may occur.
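If you did go the drop-and-create route anyway, the workaround for that gotcha is to put the two statements in separate batches (a sketch; the table definition is made up):

DROP TABLE dbo.NightlyWork;
GO
CREATE TABLE dbo.NightlyWork (id INT IDENTITY, payload VARCHAR(100));
GO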
Dropping the table will destroy any associated objects (indexes, triggers) and may invalidate procedures or views. I would go with truncate, since it won't blow up your log and causes none of the possible issues of a drop and create.
I am moving a system from a VB/Access app to SQL Server. One common pattern in the Access database is using tables to hold data while it is being calculated, then running a report from that data.
eg.
delete from treporttable
insert into treporttable (.... this thing and that thing)
update treporttable set x = x * price where (...etc)
and then the report runs from treporttable
I have heard that SQL Server does not like it when all records are deleted from a table, as it creates huge logs, etc. I tried SQL temp tables, but they don't persist long enough for the report, which runs in a different process, to read from them.
There are a number of places where this is done to different report tables in the application. The reports can be run many times a day and have a large number of records created in the report tables.
Can anyone tell me if there is a best practice for this, or whether my information about the logs is incorrect and this code will be fine in SQL Server?
If you do not need to log the deletion activity you can use the truncate table command.
From books online:
TRUNCATE TABLE is functionally identical to DELETE statement with no WHERE clause: both remove all rows in the table. But TRUNCATE TABLE is faster and uses fewer system and transaction log resources than DELETE.
http://msdn.microsoft.com/en-us/library/aa260621(SQL.80).aspx
delete from sometable
Is going to allow you to roll back the change. So if your table is very large, this can consume a lot of log space and time.
However, if you have no fear of failure then:
truncate table sometable
Will perform nearly instantly, and with minimal log overhead. There is no rollback though.
To Nathan Feger:
You can roll back a TRUNCATE. See for yourself:
CREATE TABLE dbo.Test(i INT);
GO
INSERT dbo.Test(i) SELECT 1;
GO
BEGIN TRAN
TRUNCATE TABLE dbo.Test;
SELECT i FROM dbo.Test;  -- inside the transaction: no rows
ROLLBACK
GO
SELECT i FROM dbo.Test;  -- after the rollback: the row is back
GO
i
(0 row(s) affected)
i
1
(1 row(s) affected)
You could also DROP the table, and recreate it...if there are no relationships.
The DROP TABLE statement is transactionally safe, whereas TRUNCATE is not.
So it depends on your schema which direction you want to go!
Also, use SQL Profiler to analyze your execution times. Test it out and see which is best!
The answer depends on the recovery model of your database. If you are in full recovery mode, then you have transaction logs that could become very large when you delete a lot of data. However, if you're backing up transaction logs on a regular basis to free the space, this might not be a concern for you.
Generally speaking, if the transaction logging doesn't matter to you at all, you should TRUNCATE the table instead. Be mindful, though, of any key seeds, because TRUNCATE will reseed the table.
EDIT: Note that even if the recovery model is set to simple, your transaction log will still grow during a mass delete; the space is simply marked reusable afterwards, without being released to the OS. The point is that DELETE logs every row, even if only temporarily.
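If you want to watch this happen, DBCC SQLPERF reports log size and usage per database; run it before, during, and after a mass delete:

DBCC SQLPERF(LOGSPACE);  -- one row per database: log size (MB) and log space used (%)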
Consider using temporary tables. Their names start with # and they are deleted when nobody refers to them. Example:
create table #myreport (
  id int identity,
  col1 int,
  ...
)
Temporary tables are made to be thrown away, and that happens very efficiently.
Another option is using TRUNCATE TABLE instead of DELETE. The truncate will not grow the log file.
I think your example has a possible concurrency issue. What if multiple processes are using the table at the same time? Adding a JOB_ID column (or something like that) would allow you to clear the relevant entries in this table without clobbering the data being used by another process.
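A sketch of that idea (the job_id column and source table are hypothetical): each run tags its own rows and clears only those, so concurrent runs don't step on each other.

DECLARE @job_id UNIQUEIDENTIFIER = NEWID();
INSERT INTO treporttable (job_id, thing1, thing2)
SELECT @job_id, thing1, thing2 FROM source_data;
-- ... run the report filtered on job_id = @job_id ...
DELETE FROM treporttable WHERE job_id = @job_id;  -- removes only this run's rows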
Tables such as treporttable do not actually need to be recoverable to a point in time. As such, they can live in a separate database that uses the simple recovery model, which eases the logging burden.
There are a number of ways to handle this. First, you can move the creation of the data into the running of the report itself. I feel this is the best way to handle it: you can then use temp tables to stage your data, and no one will have concurrency issues if multiple people run the report at the same time. Depending on how many reports we are talking about, this could take some time, so you may need another short-term solution as well.
Second, you could move all your reporting tables to a different database that is set to simple mode and truncate them before running the queries that populate them. This is closest to your current process, but multiple users trying to run the same report could still be an issue.
Third, you could set up a job to populate the tables (still in a separate database set to simple recovery) once a day, truncating at that time. Then anyone running a report that day sees the same data and there are no concurrency issues; however, the data will not be up to the minute. You could also set up a reporting data warehouse, but that is probably overkill in your case.
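A sketch of that third option (database, table, and column names are hypothetical): set the reporting database to simple recovery once, then let the daily job truncate and repopulate.

ALTER DATABASE ReportScratch SET RECOVERY SIMPLE;  -- one-time setup

-- daily job body:
TRUNCATE TABLE ReportScratch.dbo.treporttable;
INSERT INTO ReportScratch.dbo.treporttable (col1, col2)
SELECT col1, col2 FROM ProdDb.dbo.source_table;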