It is very easy to make mistakes with UPDATE and DELETE statements in SQL Server Management Studio. You can easily delete far more rows than you intended if there is a mistake in the WHERE condition or, even worse, wipe out the whole table if you mistakenly write an expression that always evaluates to TRUE.
Is there any way to disallow queries that affect a large number of rows from within SQL Server Management Studio? I know there is a feature like that in MySQL Workbench, but I couldn't find any in SQL Server Management Studio.
No.
It is your responsibility to ensure that:
Your data is properly backed up, so you can restore it after making inadvertent changes.
You are not writing a new query from scratch and executing it directly on a production database without testing it first.
You execute your query in a transaction, and review the changes before committing the transaction.
You know how to properly filter your query to avoid issuing a DELETE/UPDATE statement against your entire table. If in doubt, always issue a SELECT * or a SELECT COUNT(*) statement first, to see which records will be affected (see the sketch after this list).
You don't rely on some silly feature in the front-end that might save you at times, but that will completely screw you over at other times.
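To make the transaction-and-preview points above concrete, here is a minimal sketch of the habit; the table and filter are hypothetical, the pattern is what matters:

-- Preview: how many rows would the change touch?
SELECT COUNT(*) FROM dbo.Orders WHERE Status = 'Pending';   -- hypothetical table/filter

BEGIN TRANSACTION;

UPDATE dbo.Orders
SET Status = 'Cancelled'
WHERE Status = 'Pending';

-- Review the outcome before making it permanent
SELECT TOP (100) * FROM dbo.Orders WHERE Status = 'Cancelled';

-- COMMIT TRANSACTION;    -- only once you are happy with what you see
-- ROLLBACK TRANSACTION;  -- otherwise, undo everything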
A lot of good points have already been made. Just one tiny addition: I have created a solution that prevents accidental execution of DELETE or UPDATE statements that have no WHERE condition at all. It is implemented as the "Fatal actions guard" in my add-in, SSMSBoost.
(My comments were getting rather unwieldy)
One simple option, if you are uncertain, is to BEGIN TRAN, do the update, and if the rows-affected count is significantly different from what you expected, ROLLBACK; otherwise, do a few checks, e.g. SELECTs to ensure just the intended data was updated, and then COMMIT. The caveat is that this will lock rows until you commit or roll back, and may require escalation to a table lock if a large number of rows are updated, so you will need to have the checking scripts planned in advance.
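A rough sketch of that pattern, with a hypothetical table and an equally hypothetical expected row count:

BEGIN TRANSACTION;

UPDATE dbo.Customers              -- hypothetical table
SET Region = 'EMEA'
WHERE Country IN ('DE', 'FR');

-- @@ROWCOUNT reports the rows touched by the statement directly above
IF @@ROWCOUNT > 500               -- 500 = roughly what you expected
BEGIN
    ROLLBACK TRANSACTION;         -- way off: undo and investigate
END
ELSE
BEGIN
    -- run your checking SELECTs here, then
    COMMIT TRANSACTION;
END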
That said, in any half-serious system, no one, not even senior DBAs, should really be executing direct ad-hoc DML statements against a prod DB (and arguably the formal UAT DB too) - this is what tested applications are meant for (or tested, verified patch scripts executed only after change control processes have been followed).
In less formal dev environments, does it really matter if things get broken? In fact, if you are an advocate of Chaos Monkey, having juniors break your data might be a good thing in the long run - it will ensure that your processes for scripting, migration, static data deployment, and integrity checking are all in good order.
My suggestion is to disable auto-commit, so you can review your changes and commit them explicitly before ending the session.
For more details, see the MSDN link:
http://msdn.microsoft.com/en-us/library/ms187807.aspx
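The feature behind that suggestion is SET IMPLICIT_TRANSACTIONS: with it on, every DML statement implicitly opens a transaction that stays open until you explicitly COMMIT or ROLLBACK. A minimal sketch (table and filter hypothetical):

SET IMPLICIT_TRANSACTIONS ON;

DELETE FROM dbo.Orders WHERE OrderDate < '20000101';   -- implicitly starts a transaction

-- review the damage while it is still reversible
SELECT COUNT(*) FROM dbo.Orders;

COMMIT TRANSACTION;     -- or ROLLBACK TRANSACTION if it looks wrong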
Is there any way of backing out of an update on SQL Server? I mean without triggers and logs.
KR,
Çağın
If you happen to have change data capture or audit logging, you can easily recover from a bad change. Or, as suggested, you can restore yesterday's backup to another instance and then copy back as much of the data as possible. If you don't have any of these things, perhaps you need to set them up for future problems. Or maybe even hire a database professional so you don't get caught like this again. And of course, take all update, delete and insert rights away from application developers on production. Sometimes the best thing you can do is at least learn from your mistakes and make the system better for next time.
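If you do decide to set up change data capture for next time, enabling it is roughly the following (it needs SQL Server 2008 or later and, at the time, Enterprise/Developer edition; the table name is hypothetical):

-- run inside the database you want to track
EXEC sys.sp_cdc_enable_db;

-- then enable capture table by table
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'Customers',   -- hypothetical table
    @role_name     = NULL;           -- NULL = no gating role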
I guess it's rather a case of "the update was wrong" and now finding out there is no Undo like in Word.
Depends on what else has been done to the data. You could restore the affected columns by restoring yesterday's backup of the database to another database with a different name (don't overwrite your current database...), and set up an update query to restore just those columns. Referencing tables in other databases is basically a simple syntax like database.schema.table; you'd have to look up the details on MSDN, I haven't had the need to do that before.
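A rough sketch of that cross-database update, assuming the backup was restored as MyDb_Restore and that the damaged column is Price, keyed by ProductID (all names hypothetical):

UPDATE cur
SET cur.Price = old.Price
FROM MyDb.dbo.Products AS cur                  -- the live database
JOIN MyDb_Restore.dbo.Products AS old          -- the restored copy under a different name
    ON old.ProductID = cur.ProductID;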
Alternatively, use a frontend dbms, e.g. Access, link both old and new table and name them in a way easy to distinguish between them, and set up an update query in Access to restore the old values. Might be that you need to cache your old values in a local table in your dbms.
If your answer is, you don't have a backup, then you are really lost.
In a couple of the tables in my SQL Server 2005 database all of my data has been erased. Is there any way to get a log in SQL Server of all the statements that have run in the past day? I am trying to find out whether someone did this by accident, there is a vulnerability in my web app, or the actual DB has been compromised.
You're looking for the transaction log. Depending on how, and if, it is set up, you'll be able to see what was run. There's some info on it at http://www.databasedesign-resource.com/sql-server-transaction-log.html. Given that, I'm sure you can also Google some better resources.
You could also try running the command DBCC LOG(database,3). It will output the data that is in the transaction log.
See the following; there are a couple of programs which will allow you to read the log.
https://web.archive.org/web/20080215075500/http://sqlserver2000.databases.aspfaq.com/how-do-i-recover-data-from-sql-server-s-log-files.html
The one from Red Gate is called SQL Rescue and looks pretty good.
You could try a log rescue tool like Log Rescue
I would also sort out some auditing of your own.
Log Rescue doesn't support SQL 2005 so you could also try Apex SQL Log
There are applications you can buy that can convert a transaction log backup into the actual statements that were run. You may be able to find a trial version of some of these; unfortunately I cannot recommend any specific one though.
Something else to keep in mind: if a hacker gained enough access to clean out some tables, there's a good chance they gained enough access to have their way with your log files as well.
Make a transaction log backup in SQL Server, then download a trial version of TOAD for SQL Server; there you can import your transaction log backup.
And if you want, you can also create INSERT scripts for the DELETED records. But I don't know if there are any restrictions in the TOAD trial version.
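For reference, the log backup itself is a one-liner (database name and path hypothetical; the database needs to be in the FULL or BULK_LOGGED recovery model):

BACKUP LOG MyDatabase
TO DISK = 'C:\Backups\MyDatabase_log.trn';   -- hypothetical path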
You know the one I am talking about.
We have all been there at some point. You get that awful feeling of dread and the realisation of oh my god did that actually just happen.
Sure you can laugh about it now though, right, so go on and share your SQL Server mishaps with us.
Even better if you can detail how you resolved your issue so that we can learn from our mistakes together.
So in order to get the ball rolling, I will go first…
It was back in my early years as a junior SQL Server Guru. I was racing around Enterprise Manager, performing a few admin duties. You know how it is, checking a few logs, ensuring the backups ran ok, a little database housekeeping, pretty much going about business on autopilot and hitting the enter key on the usual prompts that pop up.
Oh wait, was that an “Are you sure you wish to delete this table” prompt? Too late!
Just to confirm for any aspiring DBAs out there, deleting a production table is a very very bad thing!
Needless to say a world record was promptly set for the fastest database restore to a new database, swiftly followed by a table migration, oh yeah. Everyone else was none the wiser of course but still a valuable lesson learnt. Concentrate!
I suppose everyone has missed the WHERE clause off a DELETE or UPDATE at some point...
Inserted 5 million test persons into a production database. The biggest mistake in my opinion was to let me have write access to the production db in the first place. :P Bad dba!
My biggest SQL Server mistake was assuming it was as capable as Oracle when it came to concurrency.
Let me explain.
When it comes to transactional isolation level in SQL Server you have two choices:
1. Dirty reads: transactions can see uncommitted data (from other transactions); or
2. Selects block on uncommitted updates.
I believe these come from ANSI SQL.
(2) is the default isolation level and (imho) the lesser of two evils. But it's a huge problem for any long-running process. I had to do a batch load of data and could only do it out of hours because it killed the website while it was running (it took 10-20 minutes as it was inserting half a million records).
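For reference, switching between those two choices is a one-line SET per session; a sketch with a hypothetical long-running report that can tolerate dirty reads:

-- Choice (1): dirty reads; the query no longer blocks on uncommitted updates
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT COUNT(*) FROM dbo.Orders;   -- hypothetical long-running report

-- Back to choice (2), the default
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;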
Oracle on the other hand has MVCC. This basically means every transaction will see a consistent view of the data. They won't see uncommitted data (unless you set the isolation level to do that). Nor do they block on uncommitted transactions (I was stunned at the idea an allegedly enterprise database would consider this acceptable on a concurrency basis).
Suffice it to say, it was a learning experience.
And ya know what? Even MySQL has MVCC.
I changed all of the prices to zero on a high-volume, production, eCommerce site. I had to take the site down and restore the DB from backup.. VERY UGLY.
Luckily, that was a LOOONG time ago.
forgetting to highlight the WHERE clause when updating or deleting
scripting procs with the "drop dependent objects" option checked, and running this on production
I was working on the payment system on a large online business. Multi-million Euro business.
Get a script from a colleague with a few small updates.
Run it on production.
Get an error report 30 minutes later from helpdesk, complaining about no purchases last 30 minutes.
Discover that all connections are waiting on a table lock to be released.
Discover that the script from my colleague started with an explicit BEGIN TRANSACTION and expected me to manually type COMMIT TRANSACTION at the end.
Explain to boss why 30 minutes of sales were lost.
Blame myself for not reading the script documentation properly.
Starting off a restore from last week onto a production instance instead of the development instance. Not a good morning.
I've seen plenty other people miss a WHERE clause.
Myself, I always type the WHERE clause first, and then go back to the start of the line and type in the rest of the query :)
Thankfully you only ever make one cock-up like that before you realise that using transactions really is very, very trivial. I've amended thousands of records by accident before; luckily rollback was there...
If you're querying the live environment without having thoroughly tested your scripts then I think embarrassing should really be foolhardy or perhaps unprofessional.
One of my favorites happened in an automated import when the client changed the data structure without telling us first. The Social Security number column and the amount of money we were to pay the person got switched. Luckily we found it before the system tried to pay someone his social security number. We now have checks in automated imports that look for funny data before running and stop it if the data seems odd.
Like zabzonk said, forgot the WHERE clause on an update or two in my day.
We had an old application that didn't handle syncing with our HR database for name updates very efficiently, mainly due to the way they keyed in changes to titles. Anyway, a certain woman got married, and I had to write a database change request to update her last name, I forgot the where clause and everyone in said application's name was now Allison Smith.
Columns are nullable, and parameter values fail to retrieve the correct information...
The biggest mistake was giving developers "write" access to the production DB.
Many DEV and TEST records were inserted / overwritten on production (and backed up, too) until it was wisely suggested (by me!) to only allow read access!
Sort of SQL-server related. I remember learning about how important it is to always dispose of a SqlDataReader. I had a system that worked fine in development, and happened to be running against the ERP database. In production, it brought down the database because I assumed it was enough to close SqlConnection, and had hundreds, if not thousands of open connections.
At the start of my co-op term I ended up expiring access for everyone who used this particular system (which was used by a lot of applications in my Province). In my defense, I was new to SQL Server Management Studio and didn't know that you could 'open' tables and edit specific entries instead of using a SQL statement.
I expired all the user access with a simple UPDATE statement (access to this application was given by a user account on the SQL box as well as a specific entry in an access table) but when I went to highlight that very statement and run it, I didn't include the WHERE clause.
A common mistake, I'm told. The quick fix was to unexpire everyone's accounts (including accounts that were supposed to be expired) until the database could be restored from backup. Now I either open tables and select specific entries with SQL, or I wrap absolutely everything inside a transaction followed by an immediate rollback.
Our IT Ops decided to upgrade to SQL 2005 from SQL 2000.
The next Monday, users were asking why their app didn't work. Errors like:
DTS Not found etc.
This led to a nice set of three Saturdays in the office rebuilding the packages in SSIS, with a good overtime package :)
Not exactly a "mistake", but back when I was first learning PHP and MySQL I would spend hours daily trying to figure out why my code was not working, not knowing that I had the wrong password/username/host/database credentials for my SQL database. You can't believe how much time I wasted on that, and to make it even worse this was not a one-time incident. But LOL, it's all good, it builds character.
I once, and only once, typed something similar to the following:
psql> UPDATE big_table SET foo=0; WHERE bar=123
I managed to fix the error rather quickly. Since that and another error my updates always start out as:
psql> UPDATE table SET WHERE foo='bar';
Much easier to avoid errors that way.
I worked with a junior developer once who got confused and called "Drop" instead of "Delete" on a table.
It was a long night working to get the backup restored...
Edit: I should have mentioned, this was in a production environment, and the table was full of data...
This was before the days when Google could help. I didn't encounter this problem with SQL Server, but with its ugly older cousin, Sybase.
I updated a table schema in a production environment. Not understanding at the time that stored procedures that use SELECT * must be recompiled to pick up new fields, I proceeded to spend the next eight hours trying to figure out why the stored procedure that performed a key piece of work kept failing. Only after a server reboot did I clue in.
Losing thousands of dollars and hundreds of (end user) man-hours at your flagship customer's site is quite an educational experience. Highly recommended!!
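For what it's worth, the step that would have saved those eight hours is roughly one line; sp_recompile marks the procedure so it gets a fresh compile, picking up the new schema the next time it runs (procedure name hypothetical):

EXEC sp_recompile N'dbo.usp_KeyPieceOfWork';   -- hypothetical proc; recompiled on next execution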
A healthy number of years ago I was working on a client's site that had a nice script to clear the dev environment of all orders, carts and customers, to ease up testing. So of course I put the damn script in the production server's Query Analyzer and ran it.
Took some 5-6 minutes to run, too; I was bitching about how slow the dev server was until the number of deleted rows came up. :)
Fortunately I had just run a full backup, since I was about to do an installation...
Beyond the typical WHERE clause error: I ran a DROP against an incorrect DB and thus had to run a restore. Now I triple-check my server name. Thankfully I had a good backup.
I set the maximum server memory to 0. I was thinking at the time that this would automatically tell SQL Server to use all available memory (it was early). No such luck. SQL Server decided to use only 16 MB and I had to connect in single-user mode to get the setting changed back.
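For anyone who repeats this, a rough sketch of putting the setting back from a query window; 2147483647 MB is the documented maximum, i.e. effectively "use whatever is available":

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 2147483647;   -- back to the unrestricted default
RECONFIGURE;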
Hit "Restore" instead of "Backup" in Management Studio.
And if not, is there a way to tell when a trigger was disabled/enabled?
FOLLOWUP:
It's a rather interesting diagnostic case. I was only involved from the periphery, and the guy doing the diagnostics isn't a database guy.
Anyways, he had a trigger that would move data from one table to another. He did a comparison and not all the data had made it to the second table. I said, I'm a critic of SQL Server but I trust that their triggers fire in the same transaction. He said but some of the data made it... if it was just disabled, nothing should make it. True. So I said maybe someone is enabling and disabling the triggers. Hence the question.
But what really happened is someone permanently disabled the trigger and copied the code into a sproc that was set to run at a certain time.
The correct forensic test would have been to look at the dependencies of the second table, see what else was using it. That would show the tumor sproc... (I've been watching lots of House reruns, can ya tell).
No auditing, though there is a company called Lumigent that offers a product "Audit DB" which will do DDL auditing (among other things) for SQL Server.
You can look in the sysobjects table for the crdate which will tell you when the object was created.
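A quick sketch of that check (object names hypothetical); on SQL Server 2005 and later, the sys.objects view also exposes a modify_date column, which can be just as telling:

SELECT name, crdate
FROM sysobjects
WHERE name IN ('trg_MoveData', 'usp_MoveData');   -- hypothetical trigger/sproc names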
Your problem looks quite similar to the one that Randy Volters wrote about in Simple-Talk
http://www.simple-talk.com/sql/database-administration/dml-trigger-status-alerts/
I suspect it will help