I have updated my database with this query:
UPDATE suppliers SET contactname = 'Dirk Lucky'
So all the rows in the suppliers table have the contact name: "Dirk Lucky."
How can I roll back this transaction? I want to restore the contactname column in the suppliers table.
Thanks,
Programmer
Sounds like your transaction has already been committed. You can't roll it back anymore.
Your only option is restoring a backup. You might be able to restore a backup as a new database, so you can copy only the contactnames and not lose any other changes.
If your database is in the FULL recovery model and you have log backups, you can do a point-in-time restore from the log. There's an excellent blog post about it here.
If you don't, I seriously hope that you have a backup.
For future reference: When I do updates, I use the following syntax:
SELECT *
--UPDATE a SET col = 'val'
FROM myTable a
WHERE id = 1234
That way, you can see what you're selecting to update first. Then, when you're finally ready to update, you just highlight from the UPDATE line down and run the query. I've caught myself many times with this trick. It also works with deletes, so that's a bonus.
Hopefully your database is using the Full Recovery Model. If so:
First you need to take a transaction log backup now.
You can then look to perform a database restore from your FULL backup + differentials.
Once done, you can then restore the transaction log to a specific point in time prior to your update statement.
See "How to Restore to a point in time"
As other posters have suggested, you could restore to another database, and apply an update back to the live database should you wish to minimise downtime.
Hope this makes sense but let me know if you need assistance.
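A rough sketch of that sequence, assuming the FULL recovery model (the file paths and the STOPAT time below are placeholders, not values from the question):

-- 1. Back up the tail of the log; NORECOVERY leaves the database in a restoring state
BACKUP LOG YourDB TO DISK = 'C:\Backups\YourDB_tail.trn' WITH NORECOVERY;

-- 2. Restore the full backup and the latest differential without recovering
RESTORE DATABASE YourDB FROM DISK = 'C:\Backups\YourDB_full.bak' WITH NORECOVERY;
RESTORE DATABASE YourDB FROM DISK = 'C:\Backups\YourDB_diff.bak' WITH NORECOVERY;

-- 3. Roll the log forward to just before the accidental UPDATE ran
RESTORE LOG YourDB FROM DISK = 'C:\Backups\YourDB_tail.trn'
    WITH STOPAT = '2020-06-01 10:14:00', RECOVERY;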
Related
I have a database that was storing customer information (dbo.custinfo). The telephone numbers in the table were removed in the past for security purposes by replacing with Null values. Now it has been decided that the phone numbers need to be put back in.
There have been other customers added to this database since then. What is the best way to restore this data while making sure the correct numbers correspond to the correct customers?
I have the backup of the db that was done before removing the phone numbers.
Thank you in advance
As Larnu's comment notes, a side-by-side restore of the database is required, and that part can be accomplished through the SSMS GUI.
However, there is no wizard for building an UPDATE statement that uses joins, so this is perhaps an advanced case for someone who is not specialized in SQL.
So, a generic snippet to get values back:
-- Take another backup first, so this change itself can be rolled back
BACKUP DATABASE yourDB TO DISK = 'C:\Somewhere\YourDB.bak' WITH COPY_ONLY, COMPRESSION, STATS = 1

-- UPDATE + JOIN: copy the phone numbers back from the restored copy
UPDATE cur_ci
SET cur_ci.phone = res_ci.phone
FROM [CURRENTDB].dbo.custinfo cur_ci
JOIN [RESTOREDB].dbo.custinfo res_ci ON cur_ci.customerID = res_ci.customerID
WHERE cur_ci.phone IS NULL -- only touch the rows that were blanked out
I was wondering if the following SQL code would be stored on the log, so we could look back at a future date to see what a user has typed in when querying the database?
BEGIN TRAN
SELECT *
FROM pictures p
INNER JOIN product pr
ON p.item_id = pr.item_id
ROLLBACK TRAN
I think that if the code is wrapped in a rollback clause, no record of what the user has typed in will be visible?
In short, no. Since no data changes are taking place, there's nothing to store in the log. In fact, the ROLLBACK doesn't matter: even if the transaction were COMMITted, there would still be no data changes taking place, and thus no logging.
DELETE, UPDATE and INSERT are recorded; SELECT is not. If you want to log those kinds of queries, you can use a trace, use SQL Audit, build your own logging solution, or use a third-party tool.
Here's some information on different techniques:
http://solutioncenter.apexsql.com/auditing-select-statements-on-sql-server/
Here's more info on the SQL Audit:
http://blogs.msdn.com/b/sreekarm/archive/2009/01/05/auditing-select-statements-in-sql-server-2008.aspx
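For the SQL Audit route, a minimal sketch (the audit names and file path are placeholders; database-level audit specifications require SQL Server 2008 or later, and Enterprise Edition on older versions):

USE master;
CREATE SERVER AUDIT SelectAudit TO FILE (FILEPATH = 'C:\AuditLogs\');
ALTER SERVER AUDIT SelectAudit WITH (STATE = ON);

USE YourDB;
CREATE DATABASE AUDIT SPECIFICATION SelectAuditSpec
    FOR SERVER AUDIT SelectAudit
    ADD (SELECT ON DATABASE::YourDB BY public)  -- record every SELECT in this database
    WITH (STATE = ON);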
Is this possible without restoring the whole database?
I have made changes which I would like to undo, but without putting the DB offline and doing a full restore.
No, SQL Server does not have Ctrl + Z.
You protect yourself from this scenario by wrapping all DML statements in a transaction. So you have query windows with this:
BEGIN TRANSACTION;
UPDATE ...
-- COMMIT TRANSACTION;
-- ROLLBACK TRANSACTION;
When you run the update, verify that you updated the right number of rows, the right rows, the right way, etc. And then highlight either the commit or the rollback, depending on whether you performed the update correctly.
On the flip side, be careful with this, as it can mess you up the other way - begin a transaction, forget to commit or rollback, then go out for lunch, leave for the day, go on vacation, etc.
Unfortunately, that will only help you going forward. In your current scenario, your easiest path is going to be to restore a copy of the database and harvest the data from that copy (you don't need to completely overwrite the current database to restore the data affected by this update).
The short answer is: No.
However, you don't have to take the DB offline to do a partial restore on a table or tables.
You can restore a backup to a separate database and then use TSQL queries to restore the rows that were negatively impacted by your update. This can take place while the main database is online.
More info on restoring a database to a new location:
http://technet.microsoft.com/en-us/library/ms186390.aspx
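A minimal sketch of such a side-by-side restore (all names and paths are placeholders; get the logical file names from RESTORE FILELISTONLY first):

-- Find the logical data/log file names inside the backup
RESTORE FILELISTONLY FROM DISK = 'C:\Backups\YourDB.bak';

-- Restore the backup as a new database alongside the live one
RESTORE DATABASE YourDB_Copy
FROM DISK = 'C:\Backups\YourDB.bak'
WITH MOVE 'YourDB' TO 'C:\Data\YourDB_Copy.mdf',
     MOVE 'YourDB_log' TO 'C:\Data\YourDB_Copy_log.ldf',
     RECOVERY;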
For future reference, as per my comment, it is good practice to use a TRANSACTION.
-- Execute a transaction statement before doing an update.
BEGIN TRANSACTION
... < your update code >
Then if the update is wrong or produces undesired results, you can ROLLBACK the TRANSACTION
-- Ooops I screwed up! Let's rollback!
--ROLLBACK TRANSACTION -- I keep this commented out and just select the command when needed. That helps avoid accidentally rolling back if you run the whole script with CTRL+E (or F5 in SSMS 2012)
... and it goes away :)
When all is well you just COMMIT the TRANSACTION.
-- COMMIT TRANSACTION -- commented out, see above
Or else you lock the database for all users!
So don't forget to commit!
Yes. Besides doing a full restore, there is a viable solution provided by a third-party tool, which reads information from the database transaction log, parses it, and then creates an undo T-SQL script to roll back user actions.
Check out the article How to recover SQL Server data from accidental updates without backups online for more information. The article focuses on the UPDATE operation, but with appropriate settings and filters you can roll back any other database change that's recorded in the transaction log.
Disclaimer: I work as a Product Support Engineer at ApexSQL
It is not possible unless you version your data appropriately or do a restore.
Possible, but it will require a lot of effort.
SQL Server keeps DELETED/UPDATED/INSERTED data in the transaction log in a non-readable format; to read it you need a capable log-reader tool such as Event Log Analyzer.
As a slightly modified version to the answers above, I sometimes like to use an automatically rolled back transaction in combination with the OUTPUT keyword and the INSERTED internal table to see what will actually update as a result set.
For instance,
BEGIN TRANSACTION;
UPDATE TableA
SET TableA.Column1 = @SomeValue
OUTPUT INSERTED.*
WHERE <condition>
ROLLBACK TRANSACTION;
If the result set looks good to me, then I'll change the last statement to COMMIT TRANSACTION;.
I am moving a system from a VB/Access app to SQL server. One common thing in the access database is the use of tables to hold data that is being calculated and then using that data for a report.
eg.
delete from treporttable
insert into treporttable (.... this thing and that thing)
update treporttable set x = x * price where (...etc)
and then report runs from treporttable
I have heard that SQL Server does not like it when all records are deleted from a table, as it creates huge logs etc. I tried SQL temp tables, but they don't persist long enough for the report, which runs in a different process, to report off of.
There are a number of places where this is done to different report tables in the application. The reports can be run many times a day and have a large number of records created in the report tables.
Can anyone tell me if there is a best practice for this, or if my information about the logs is incorrect and this code will be fine in SQL Server.
If you do not need to log the deletion activity, you can use the TRUNCATE TABLE command.
From Books Online:
TRUNCATE TABLE is functionally identical to DELETE statement with no WHERE clause: both remove all rows in the table. But TRUNCATE TABLE is faster and uses fewer system and transaction log resources than DELETE.
http://msdn.microsoft.com/en-us/library/aa260621(SQL.80).aspx
delete from sometable
is going to allow you to roll back the change. So if your table is very large, this can use a lot of transaction log space and time.
However, if you have no fear of failure then:
truncate table sometable
will perform nearly instantly, with minimal log requirements. There is no rollback though.
To Nathan Feger:
You can rollback from TRUNCATE. See for yourself:
CREATE TABLE dbo.Test(i INT);
GO
INSERT dbo.Test(i) SELECT 1;
GO
BEGIN TRAN
TRUNCATE TABLE dbo.Test;
SELECT i FROM dbo.Test;
ROLLBACK
GO
SELECT i FROM dbo.Test;
GO
Results:

i
-----------

(0 row(s) affected)

i
-----------
1

(1 row(s) affected)
You could also DROP the table, and recreate it...if there are no relationships.
Note that in SQL Server both the DROP TABLE and the TRUNCATE TABLE statements are transactionally safe and can be rolled back, as the example above demonstrates.
So it depends on your schema which direction you want to go!!
Also, use SQL Profiler to analyze your execution times. Test it out and see which is best!!
The answer depends on the recovery model of your database. If you are in full recovery mode, then you have transaction logs that could become very large when you delete a lot of data. However, if you're backing up transaction logs on a regular basis to free the space, this might not be a concern for you.
Generally speaking, if the transaction logging doesn't matter to you at all, you should TRUNCATE the table instead. Be mindful, though, of any key seeds, because TRUNCATE will reseed the table.
EDIT: Note that even if the recovery model is set to Simple, your transaction log will still grow during a mass delete; the space is simply marked for reuse afterward (the file is not shrunk). The point is that DELETE is fully logged while its transaction runs, even if the log records are only needed temporarily.
Consider using temporary tables. Their names start with # and they are dropped automatically when no session refers to them any more. Example:
create table #myreport (
    id int identity(1,1),
    col1 varchar(50)
    -- ...more columns
)
Temporary tables are made to be thrown away, and that happens very efficiently.
Another option is using TRUNCATE TABLE instead of DELETE. Truncation is minimally logged, so it will barely grow the log file.
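If TRUNCATE isn't an option (for example, the table is referenced by foreign keys), deleting in small batches is a common compromise: each batch is its own short transaction, so under the Simple recovery model the log space can be reused between batches. A minimal sketch, using the report table from the question:

WHILE 1 = 1
BEGIN
    DELETE TOP (5000) FROM treporttable; -- small batch = small transaction
    IF @@ROWCOUNT = 0 BREAK;             -- stop once the table is empty
END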
I think your example has a possible concurrency issue. What if multiple processes are using the table at the same time? Adding a JOB_ID column or something like that would allow you to clear the relevant entries in this table without clobbering the data being used by another process.
Actually tables such as treporttable do not need to be recovered to a point of time. As such, they can live in a separate database with simple recovery mode. That eases the burden of logging.
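For example (the database name here is a placeholder):

CREATE DATABASE ReportStaging;
ALTER DATABASE ReportStaging SET RECOVERY SIMPLE; -- minimal logging burden for throwaway report data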
There are a number of ways to handle this. First, you can move the creation of the data into the running of the report itself. This I feel is the best way to handle it; you can then use temp tables to temporarily stage your data, and no one will have concurrency issues if multiple people try to run the report at the same time. Depending on how many reports we are talking about, it could take some time to do this, so you may need another short-term solution as well.
Second, you could move all your reporting tables to a different db that is set to simple mode and truncate them before running your queries to populate. This is closest to your current process, but could be an issue if multiple users are trying to run the same report.
Third, you could set up a job to populate the tables (still in a separate db set to simple recovery) once a day (truncating at that time). Then anyone running a report that day will see the same data and there will be no concurrency issues. However, the data will not be up-to-the-minute. You also could set up a reporting data warehouse, but that is probably overkill in your case.
So for the second day in a row, someone has wiped out an entire table of data as opposed to the one row they were trying to delete because they didn't have the qualified where clause.
I've been all up and down the Management Studio options, but can't find a confirm option. I know other tools for other databases have one.
I'd suggest that you always write the SELECT statement with the WHERE clause first and execute it to see which rows your DELETE command will actually delete. Then just execute the DELETE with the same WHERE clause. The same applies for UPDATEs.
Under Tools>Options>Query Execution>SQL Server>ANSI, you can enable the Implicit Transactions option which means that you don't need to explicitly include the Begin Transaction command.
The obvious downside of this is that you might forget to add a Commit (or Rollback) at the end, or worse still, your colleagues will add Commit at the end of every script by default.
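As a quick sketch of how the option behaves once enabled (the table name is hypothetical):

SET IMPLICIT_TRANSACTIONS ON;

DELETE FROM dbo.SomeTable WHERE id = 1234; -- implicitly opens a transaction

ROLLBACK TRANSACTION; -- nothing was deleted
-- or: COMMIT TRANSACTION; to make the change permanent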
You can lead the horse to water...
You might suggest that they always take an ad-hoc backup before they do anything (depending on the size of your DB) just in case.
Try using a BEGIN TRANSACTION before you run your DELETE statement.
Then you can choose to COMMIT or ROLLBACK it.
In SSMS 2005, you can enable this option under Tools|Options|Query Execution|SQL Server|ANSI ... check SET IMPLICIT_TRANSACTIONS. That will require a commit to affect update/delete queries for future connections.
For the current query, go to Query|Query Options|Execution|ANSI and check the same box.
This page also has instructions for SSMS 2000, if that is what you're using.
As others have pointed out, this won't address the root cause: it's almost as easy to paste a COMMIT at the end of every new query you create as it is to fire off a query in the first place.
First, this is what audit tables are for. If you know who deleted all the records, you can either restrict their database privileges or deal with them from an employee-performance perspective. The last person who did this at my office is currently on probation; if she does it again, she will be let go. You have responsibilities if you have access to production data, and ensuring that you cause no harm is one of them. This is a personnel problem as much as a technical problem.

You will never find a way to prevent people from making dumb mistakes (the database has no way to know if you meant delete table a or delete table a where id = 100, and a confirm prompt will get clicked through automatically by most people). You can only try to reduce them by making sure the people who run this code are responsible, and by putting policies in place to help them remember what to do. Employees who have a pattern of behaving irresponsibly with your business data (particularly after they have been given a warning) should be fired.
Others have suggested the kinds of things we do to prevent this from happening. I always embed a select in a delete that I'm running from a query window to make sure it will delete only the records I intend. All our code on production that changes, inserts or deletes data must be enclosed in a transaction. If it is being run manually, you don't run the rollback or commit until you see the number of records affected.
Example of delete with embedded select
delete a
--select a.* from
from table1 a
join table2 b on a.id = b.id
where b.somefield = 'test'
But even these techniques can't prevent all human error. A developer who doesn't understand the data may run the select and still not understand that it is deleting too many records. Running in a transaction may mean you have other problems when people forget to commit or rollback and lock up the system. Or people may put it in a transaction and still hit commit without thinking, just as they would hit confirm on a message box if there were one.

The best prevention is to have a way to quickly recover from errors like these. Recovery from an audit log table tends to be faster than from backups. Plus you have the advantage of being able to tell who made the error and exactly which records were affected (maybe you didn't delete the whole table, but your where clause was wrong and you deleted a few wrong records).
For the most part, production data should not be changed on the fly. You should script the change and check it on dev first. Then on prod, all you have to do is run the script with no changes, rather than highlighting and running little pieces one at a time. In the real world this isn't always possible, as sometimes you are fixing something broken only on prod that needs to be fixed now (for instance, when none of your customers can log in because critical data got deleted). In a case like this, you may not have the luxury of reproducing the problem first on dev and then writing the fix. When you have these types of problems, you may need to fix directly on prod, and the fix should be done by DBAs, database analysts, configuration managers, or others who are normally responsible for production data, not by a developer. Developers in general should not have access to prod.
That is why I believe you should always:
1. Use stored procedures that are tested on a dev database before deploying to production
2. Select the data before deletion
3. Screen developers using an interview and performance evaluation process :)
4. Base performance evaluation on how many database tables they do/do not delete
5. Treat production data as if it were poisonous and be very afraid
So for the second day in a row, someone has wiped out an entire table of data as opposed to the one row they were trying to delete because they didn't have the qualified where clause
Probably the only solution will be to replace someone with someone else ;). Otherwise they will always find their workaround
Failing that, restrict that person's database access and provide them with a stored procedure that takes the parameter used in the WHERE clause, granting them rights to execute only that procedure.
Put on your best Trogdor and Burninate until they learn to put in the WHERE clause.
The best advice is to get the muckety-mucks that are mucking around in the database to use transactions when testing. It goes a long way towards preventing "whoops" moments. The caveat is that now you have to tell them to COMMIT or ROLLBACK because for sure they're going to lock up your DB at least once.
Lock it down:
REVOKE delete rights on all your tables.
Put in an audit trigger and audit table.
Create parameterized delete SPs and only give rights to execute on an as-needed basis, as in the sketch below.
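A sketch of that last point (the table, procedure, and user names are hypothetical):

CREATE PROCEDURE dbo.DeleteCustomer
    @CustomerID INT
AS
BEGIN
    -- Delete exactly one row, identified by the required parameter
    DELETE FROM dbo.Customers WHERE CustomerID = @CustomerID;
END
GO

REVOKE DELETE ON dbo.Customers FROM ReportingUser;     -- no raw deletes
GRANT EXECUTE ON dbo.DeleteCustomer TO ReportingUser;  -- only the controlled path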
Isn't there a way to give users the results they need without providing raw access to SQL? If you at least had a separate entry box for "WHERE", you could default it to "WHERE 1 = 0" or something.
I think there must be a way to back these out of the transaction journaling, too. But probably not without rolling everything back, and then selectively reapplying whatever came after the fatal mistake.
Another ugly option is to create a trigger to write all DELETEs (maybe over some minimum number of records) to a log table.
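A rough sketch of such a trigger (the table, column, and log-table names are hypothetical):

CREATE TRIGGER trg_Orders_AuditDelete ON dbo.Orders
AFTER DELETE
AS
BEGIN
    -- Only log mass deletes, e.g. more than 10 rows in one statement
    IF (SELECT COUNT(*) FROM deleted) > 10
        INSERT INTO dbo.Orders_DeleteLog (OrderID, DeletedAt, DeletedBy)
        SELECT OrderID, GETDATE(), SUSER_SNAME()
        FROM deleted;
END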