I have a table that gets updated from an outside source. It normally sits empty until they push data to me. With this data I am supposed to add, update or delete records in two other tables (linked by a primary/foreign key). Data is pushed to me one row at a time and occasionally in a large download twice a year. They want me to update my tables in real time. Should I use a trigger and have it read line by line, or merge the tables?
I'd have a scheduled job that runs a sproc to check for work to do in that table, and then process the rows in batches. Have a column on the import/staging table that you can update with a batch number or timestamp so that if something goes wrong (like they have pushed you some goofy data) you know where to restart from and can identify which row caused the problem.
If you use a trigger, not only might it slow down them feeding you a large batch of data, but you'll also possibly lose the ability to keep a record of where the process got to if it fails.
If it was always one row at a time then I think the trigger method would be an okay option.
Edit: Just to clarify the point about batch number/timestamp, this is so if you have new/unexpected data which crashes your import, you can alter the code and re-run the process as much as you like without having to ask for a fresh import.
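To make that concrete, here's a minimal T-SQL sketch of the batch idea. ImportStaging, TargetTable and all column names are hypothetical, and it assumes SQL Server 2008's MERGE is available (otherwise separate INSERT/UPDATE/DELETE statements do the same job):

-- Stamp unprocessed rows with a new batch number so a failed run can be re-run
DECLARE @BatchId INT;
SELECT @BatchId = ISNULL(MAX(BatchId), 0) + 1 FROM dbo.ImportStaging;

UPDATE dbo.ImportStaging
SET BatchId = @BatchId
WHERE BatchId IS NULL;

-- Apply the batch to the target table; MERGE handles adds/updates, and a
-- separate DELETE (or a flag column in the staging data) handles removals
MERGE dbo.TargetTable AS t
USING (SELECT KeyCol, Col1, Col2 FROM dbo.ImportStaging WHERE BatchId = @BatchId) AS s
    ON t.KeyCol = s.KeyCol
WHEN MATCHED THEN
    UPDATE SET t.Col1 = s.Col1, t.Col2 = s.Col2
WHEN NOT MATCHED THEN
    INSERT (KeyCol, Col1, Col2) VALUES (s.KeyCol, s.Col1, s.Col2);

If a batch blows up, you know exactly which BatchId to investigate and re-run.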
I'm working with cursors at the moment and it's getting messy for me; I hope you can clear up some questions for me.
I have checked the Oracle documentation about cursors but I cannot find out:
When a cursor is opened, is a local copy of the result created in memory?
Yes: Does it really make sense if I have a table with a lot of data? I think it would not be very efficient, would it?
No: Is the whole data locked to other processes?
    Yes: What if I'm doing a truly heavy process for each row? The data would be unavailable for so long...
    No: What would happen if another process modifies the data I'm currently using with the cursor, or if it adds new rows? Would it be updated for the cursor?
Thanks so much.
You may want to read the section on Data Concurrency and Consistency in the Concepts Guide.
The answers to your specific questions:
When a cursor is opened, is a local copy of the result created in memory?
No. However, via Oracle's "Multiversion Read Consistency" (see link above) the rows fetched by the cursor will all be consistent with the point in time at which the cursor was opened - i.e. each row, when fetched, will be a row that existed when the cursor was opened and still has the values it had then (even though another session may have updated or even deleted it in the meantime).
No: Is the whole data locked to other processes?
No
No: What would happen if another process modifies the data I'm currently using with the cursor or if it adds new rows? Would it be updated for the cursor?
Your cursor would not see those changes; it would continue to work with the rows as they existed when the cursor was opened.
The Concepts Guide explains this in detail, but the essence of the way it works is as follows:
Oracle maintains something called a System Change Number (SCN) that is continually incremented.
When your cursor opens it notes the current value of the SCN.
As the cursor fetches rows it looks at the SCN stamped on them. If this SCN is the same as or lower than the cursor's starting SCN then the data is up to date and is used. However, if the row's SCN is higher than the cursor's, this means that another session has changed the row (and committed the change). In this case Oracle looks in the rollback segments for the old version of the row and uses that instead. If the query runs for a long time it is possible that the old version has been overwritten in the rollback segments; in this case the query fails with an ORA-01555 error.
You can modify this default behaviour if needed. For example, if it is vital that no other session modifies the rows you are querying during the running of your cursor, then you can use the FOR UPDATE clause to lock the rows:
CURSOR c IS SELECT sal FROM emp FOR UPDATE OF sal;
Now any session that tries to modify a row used by your query while it is running is blocked until you commit or roll back.
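A rough PL/SQL sketch of how that is typically used together with WHERE CURRENT OF (the 10% raise is just an illustration):

DECLARE
  -- FOR UPDATE locks the selected rows when the cursor is opened
  CURSOR c IS SELECT empno, sal FROM emp FOR UPDATE OF sal;
BEGIN
  FOR r IN c LOOP
    UPDATE emp
       SET sal = r.sal * 1.10
     WHERE CURRENT OF c;
  END LOOP;
  COMMIT;   -- releases the row locks
END;
/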
So for the second day in a row, someone has wiped out an entire table of data as opposed to the one row they were trying to delete because they didn't have the qualified where clause.
I've been all up and down the mgmt studio options, but can't find a confirm option. I know other tools for other databases have it.
I'd suggest that you always write the SELECT statement with the WHERE clause first and execute it to actually see which rows your DELETE command will delete. Then just execute the DELETE with the same WHERE clause. The same applies to UPDATEs.
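For example (table and column names invented):

-- 1. See exactly which rows would be affected
SELECT * FROM dbo.Orders WHERE Status = 'Cancelled' AND OrderDate < '20070101';

-- 2. Only after reviewing them, reuse the identical WHERE clause
DELETE FROM dbo.Orders WHERE Status = 'Cancelled' AND OrderDate < '20070101';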
Under Tools>Options>Query Execution>SQL Server>ANSI, you can enable the Implicit Transactions option which means that you don't need to explicitly include the Begin Transaction command.
The obvious downside of this is that you might forget to add a Commit (or Rollback) at the end, or worse still, your colleagues will add Commit at the end of every script by default.
You can lead the horse to water...
You might suggest that they always take an ad-hoc backup before they do anything (depending on the size of your DB) just in case.
Try using a BEGIN TRANSACTION before you run your DELETE statement.
Then you can choose to COMMIT or ROLLBACK it.
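Roughly like this (hypothetical table; the row-count check is just a sanity test before you decide):

BEGIN TRANSACTION;

DELETE FROM dbo.Customers WHERE CustomerId = 42;

SELECT @@ROWCOUNT AS RowsDeleted;   -- if this isn't the number you expected, roll back

-- then run exactly one of these by hand:
-- COMMIT TRANSACTION;
-- ROLLBACK TRANSACTION;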
In SSMS 2005, you can enable this option under Tools|Options|Query Execution|SQL Server|ANSI ... check SET IMPLICIT_TRANSACTIONS. That will require a commit to affect update/delete queries for future connections.
For the current query, go to Query|Query Options|Execution|ANSI and check the same box.
This page also has instructions for SSMS 2000, if that is what you're using.
As others have pointed out, this won't address the root cause: it's almost as easy to paste a COMMIT at the end of every new query you create as it is to fire off a query in the first place.
First, this is what audit tables are for. If you know who deleted all the records you can either restrict their database privileges or deal with them from a performance perspective. The last person who did this at my office is currently on probation. If she does it again, she will be let go. You have responsibilities if you have access to production data, and ensuring that you cause no harm is one of them. This is a performance-management problem as much as a technical problem.
You will never find a way to prevent people from making dumb mistakes (the database has no way to know if you meant "delete table a" or "delete table a where id = 100", and a confirm will get hit automatically by most people). You can only try to reduce them by making sure the people who run this code are responsible and by putting policies in place to help them remember what to do. Employees who have a pattern of behaving irresponsibly with your business data (particularly after they have been given a warning) should be fired.
Others have suggested the kinds of things we do to prevent this from happening. I always embed a select in a delete that I'm running from a query window to make sure it will delete only the records I intend. All our code on production that changes, inserts or deletes data must be enclosed in a transaction. If it is being run manually, you don't run the rollback or commit until you see the number of records affected.
Example of delete with embedded select
delete a
--select a.* from
from table1 a
join table2 b on a.id = b.id
where b.somefield = 'test'
But even these techniques can't prevent all human error. A developer who doesn't understand the data may run the select and still not understand that it is deleting too many records. Running in a transaction may mean you have other problems when people forget to commit or rollback and lock up the system. Or people may put it in a transaction and still hit commit without thinking just as they would hit confirm on a message box if there was one. The best prevention is to have a way to quickly recover from errors like these. Recovery from an audit log table tends to be faster than from backups. Plus you have the advantage of being able to tell who made the error and exactly which records were affected (maybe you didn't delete the whole table but your where clause was wrong and you deleted a few wrong records.)
For the most part, production data should not be changed on the fly. You should script the change and check it on dev first. Then on prod, all you have to do is run the script with no changes rather than highlighting and running little pieces one at a time. Now, in the real world this isn't always possible, as sometimes you are fixing something broken only on prod that needs to be fixed now (for instance when none of your customers can log in because critical data got deleted). In a case like this, you may not have the luxury of reproducing the problem first on dev and then writing the fix. When you have these types of problems, you may need to fix directly on prod, and only DBAs, database analysts, configuration managers, or others who are normally responsible for production data should do the fix, not a developer. Developers in general should not have access to prod.
That is why I believe you should always:
1 Use stored procedures that are tested on a dev database before deploying to production
2 Select the data before deletion
3 Screen developers using an interview and performance evaluation process :)
4 Base performance evaluation on how many database tables they do/do not delete
5 Treat production data as if it were poisonous and be very afraid
So for the second day in a row, someone has wiped out an entire table of data as opposed to the one row they were trying to delete because they didn't have the qualified where clause
Probably the only solution will be to replace someone with someone else ;). Otherwise they will always find a workaround.
Failing that, restrict the database access for that person, provide them with a stored procedure that takes the parameter used in the WHERE clause, and grant them access to execute that stored procedure.
Put on your best Trogdor and Burninate until they learn to put in the WHERE clause.
The best advice is to get the muckety-mucks that are mucking around in the database to use transactions when testing. It goes a long way towards preventing "whoops" moments. The caveat is that now you have to tell them to COMMIT or ROLLBACK because for sure they're going to lock up your DB at least once.
Lock it down:
REVOKE delete rights on all your tables.
Put in an audit trigger and audit table.
Create parameterized delete stored procedures and only grant execute rights on an as-needed basis.
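A rough sketch of the first and last points, with invented object and role names:

-- take away ad-hoc deletes from the application/analyst role
REVOKE DELETE ON dbo.Customers FROM DataEditors;
GO

-- offer a narrow, parameterized alternative instead
CREATE PROCEDURE dbo.DeleteCustomer
    @CustomerId INT
AS
BEGIN
    DELETE FROM dbo.Customers WHERE CustomerId = @CustomerId;
END
GO

GRANT EXECUTE ON dbo.DeleteCustomer TO DataEditors;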
Isn't there a way to give users the results they need without providing raw access to SQL? If you at least had a separate entry box for "WHERE", you could default it to "WHERE 1 = 0" or something.
I think there must be a way to back these out of the transaction journaling, too. But probably not without rolling everything back, and then selectively reapplying whatever came after the fatal mistake.
Another ugly option is to create a trigger to write all DELETEs (maybe over some minimum number of records) to a log table.
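A minimal sketch of such a trigger, with hypothetical table and column names:

CREATE TABLE dbo.Customers_DeleteLog (
    CustomerId INT,
    Name       NVARCHAR(100),
    DeletedAt  DATETIME DEFAULT GETDATE(),
    DeletedBy  SYSNAME  DEFAULT SUSER_SNAME()
);
GO

CREATE TRIGGER trg_Customers_LogDeletes ON dbo.Customers
AFTER DELETE
AS
BEGIN
    -- "deleted" is the pseudo-table holding the rows being removed
    INSERT INTO dbo.Customers_DeleteLog (CustomerId, Name)
    SELECT CustomerId, Name FROM deleted;
END
GO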
For my customer I occasionally do work in their live database in order to fix a problem they have created for themselves, or in order to fix bad data that my product's bugs created. Much like Unix root access, it's just dangerous. What lessons should I learn ahead of time?
What is the #1 thing you do to be careful about operating on live data?
BEGIN TRANSACTION;
That way you can roll back after a mistake.
Three things I've learned the hard way over the years...
First, if you're doing updates or deletes on live data, first write a SELECT query with the WHERE clause you'll be using. Make sure it works. Make sure it's correct. Then prepend the UPDATE/DELETE statement to the known working WHERE clause.
You never want to have
DELETE FROM Customers
sitting in your query analyzer waiting for you to write the WHERE clause... accidentally hit "execute" and you've just killed your Customer table. Oops.
Also, depending on your platform, find out how to take a quick'n'dirty backup of a table. In SQL Server 2005,
SELECT *
INTO CustomerBackup200810032034
FROM Customer
will copy every row from the entire Customer table into a new table called CustomerBackup200810032034, which you can then delete once you've done your updates and made sure everything's OK. If the worst happens, it's a lot easier to restore missing data from this table than to try and restore last night's backup from disk or tape.
Finally, be wary of cascade deletes getting rid of stuff you didn't intend to delete - check your tables' relationships and key constraints before modifying anything.
Do a backup first: it should be the number 1 law of sysadmining anyways
EDIT: incorporating what others have said, make sure your UPDATES have appropriate WHERE clauses.
Ideally, changing a live database should never happen (beyond INSERTs and basic maintenance). Changing the live DB's structure is especially fraught with potential bad karma.
Make your changes to a copy, and when you're satisfied, then apply the fix to live.
Often before I do an UPDATE or DELETE, I write the equivalent SELECT.
NEVER do an update unless you are in a BEGIN TRAN t1--not in a dev database, not in production, not anywhere. NEVER run a COMMIT TRAN t1 outside a comment--always type
--COMMIT TRAN t1
and then select the statement in order to run it. (Obviously, this only applies to GUI query clients.) If you do these things, it will become second nature to do them and you'll hardly lose any time.
I actually have a "update" macro that types this. I always paste this in to set up my updates. You can make a similar one for deletes and inserts.
begin tran t1
update
set
where
rollback tran t1
--commit tran t1
Always make sure your UPDATEs and DELETEs have the proper WHERE clause.
To answer my own question:
When writing an update statement, write it out of order.
Write UPDATE [table-name]
Write WHERE [conditions]
Go back and write SET [columns-and-values]
Choosing the rows you want to update before you say what values you want to change is much safer than doing it in the other order. It makes it impossible for update person set email = 'bob@bob.com' to be sitting in your query window, ready to be run by a misplaced keystroke, ready to mess up every row in the table.
Edit: As others have said, write the WHERE clause for your deletes before you write DELETE.
As an example, I create SQL like this
--Update P Set
--Select ID, Name as OldName,
Name='Jones'
From Person P
Where ID = 1000
I highlight the text from the end up to the Select and run that SQL. Once I verify that it is pulling the record I want to update, I hit shift-up to highlight the Update statement and run that.
Note that I used an alias. I never update a table name explicitly. I always use an alias.
If I do this in conjunction with transactions and rollback/commits, I am really, really safe.
My #1 way to be careful with a live database? Don't touch it. :)
Backups can undo damage that you inflict on the database, but you're still likely to introduce negative side effects during that span of time.
No matter how solid you think the script you're working with is, run it through a test cycle. Even if a "test cycle" means running the script against your own instance of the database, make sure you do it. It's much better to introduce defects on your local box than a production environment.
Check, recheck, and check again any statement that is doing updates. Even if you think you're just doing a simple, single-column update, sooner or later you will not have had enough coffee and will forget a 'where' clause, nuking a whole table.
A couple other things I've found helpful:
if using MySQL, enable safe updates (see the one-liner after this list)
If you have a DBA, ask them to do it.
I've found these 3 things have kept me from doing any serious harm.
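For the MySQL point above, the setting is a session (or global) variable:

-- rejects UPDATE/DELETE statements whose WHERE clause uses no key column (and has no LIMIT)
SET SQL_SAFE_UPDATES = 1;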
Nobody wants backup but everyone cries for recovery
Create your DB with foreign key references, because you should:
make it as hard as possible for yourself to update/delete data and destroy the structural integrity, or anything else, along the way
If possible, run on a system where you have to commit the changes before you permanently store them (i.e. deactivate autocommit while repairing the db)
Try to identify your problem's classes so that you get an understanding of how to fix them without trouble
Get into a routine of restoring backups into a database; always have a second database on a test server at hand so you can just work on that
Because remember: if something fails totally, you need to be up and running again as fast as possible
Well, that's about all I can think of now. Take the bold passages and you see what's #1 for me. ;-)
Maybe consider not using any deletes or drops at all. Or maybe reduce the user permissions so that only a special DB user can delete/drop things.
If you're using Oracle or another database that supports it, verify your changes before doing a COMMIT.
Data should always be deployed to live via scripts, which can be rehearsed as many times as required to get them right on dev. When there's dependent data needed for the script to run correctly on dev, stage it appropriately - you cannot get away with skipping this step if you truly want to be careful.
Check twice, commit once!
Backup or dump the database before starting.
To add on to what @Wayne said, write your WHERE before the table name in a DELETE or UPDATE statement.
BACK UP YOUR DATA. Learned that one the hard way working with customer databases on a regular basis.
Always add a using clause.
My rule (as an app developer): Don't touch it! That's what the trained DBAs are for. Heck, I don't even want permission to touch it. :)
Different colors per environment: we've set up our PL/SQL Developer (IDE for Oracle) so that when you log on to the production DB all the windows are in bright red. Some have gone as far as assigning a different color for dev and test as well.
Make sure you specify a where clause when deleting records.
always test any queries beyond select on development data first to ensure it has the correct impact.
if possible, ask to pair with someone
always count to 3 before pressing Enter (if alone, as this will infuriate your pair partner!)
If I'm updating a database with a script, I always make sure I put a breakpoint or two at the start of my script, just in case I hit the run/execute by accident.
I'll add to recommendations of doing BEGIN TRAN before your UPDATE, just don't forget to actually do the COMMIT; you can do just as much damage if you leave your uncommitted transaction open. Don't get distracted by phones, co-workers, lunch etc when in the middle of updates or you'll find everyone else is locked up until you COMMIT or ROLLBACK.
I always comment out any destructive queries (insert, update, delete, drop, alter) when writing out adhoc queries in Query Analyzer. That way, the only way to run them, is to highlight them, without selecting the commented part, and press F5.
I also think it's a good idea, as already mentioned, to write your where statement first, with a select, and ensure that you are altering the right data.
Always back up before changing.
Always make mods (eg. ALTER TABLE) via a script.
Always modify data (eg. DELETE) via a stored procedure.
Create a read-only user (or get the DBA to do it) and only use that user to look at the DB. Add the appropriate permissions to the schema so that you can view the content of stored procedures/views/triggers/etc. but not have the ability to change them.
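In SQL Server that might look roughly like this (the login name is invented; replace the placeholder password):

CREATE LOGIN ReadOnlyDev WITH PASSWORD = '<strong password here>';
CREATE USER ReadOnlyDev FOR LOGIN ReadOnlyDev;
GO

-- read any table, change nothing
EXEC sp_addrolemember N'db_datareader', N'ReadOnlyDev';

-- see the text of procedures/views/triggers without being able to alter them
GRANT VIEW DEFINITION TO ReadOnlyDev;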
I have designed database tables (normalised, on an MS SQL Server) and created a standalone Windows front end for an application that will be used by a handful of users to add and edit information. We will add a web interface to allow searching across our production area at a later date.
I am concerned that if two users start editing the same record then the last to commit the update would be the 'winner' and important information may be lost. A number of solutions come to mind but I'm not sure if I am going to create a bigger headache.
Do nothing and hope that two users are never going to be editing the same record at the same time. - Might never happen, but what if it does?
The editing routine could store a copy of the original data as well as the updates and then compare them when the user has finished editing. If they differ, show the user and confirm the update. - Would require two copies of the data to be stored.
Add a last-updated DATETIME column and check that it matches when we update; if not, show the differences. - Requires a new column in each of the relevant tables.
Create an editing table that registers when users start editing a record, which will be checked to prevent other users from editing the same record. - Would require careful thought about program flow to prevent deadlocks and records staying locked if a user crashes out of the program.
Are there any better solutions or should I go for one of these?
If you expect infrequent collisions, Optimistic Concurrency is probably your best bet.
Scott Mitchell wrote a comprehensive tutorial on implementing that pattern:
Implementing Optimistic Concurrency
A classic approach is as follows:
add a boolean field, "locked", to each table.
set this to false by default.
when a user starts editing, you do this:
  lock the row (or the whole table if you can't lock the row)
  check the flag on the row you want to edit
  if the flag is true then
    inform the user that they cannot edit that row at the moment
  else
    set the flag to true
  release the lock
when saving the record, set the flag back to false
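In T-SQL the check-and-set can be collapsed into a single atomic UPDATE, which avoids the explicit lock/release dance; the table and column names here are invented:

DECLARE @DocumentId INT;
SET @DocumentId = 42;   -- the row the user wants to edit

-- claim the row: succeeds only if nobody else already holds it
UPDATE dbo.Documents
   SET locked = 1
 WHERE DocumentId = @DocumentId
   AND locked = 0;

IF @@ROWCOUNT = 0
    PRINT 'That record is being edited by someone else.';

-- when the user saves (or cancels), set the flag back:
-- UPDATE dbo.Documents SET locked = 0 WHERE DocumentId = @DocumentId;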
@Mark Harrison: SQL Server does not support that syntax (SELECT ... FOR UPDATE).
The SQL Server equivalent is the SELECT statement hint UPDLOCK.
See SQL Server Books Online for more information.
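For example, inside a transaction (reusing the emp table from the earlier Oracle example; the employee number is made up):

BEGIN TRANSACTION;

SELECT sal
FROM   emp WITH (UPDLOCK, ROWLOCK)
WHERE  empno = 7369;   -- this row stays locked against other writers until COMMIT/ROLLBACK

-- ... do the update ...
COMMIT TRANSACTION;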
- First, create a field (update_time) that stores when each record was last updated.
- When any user selects a record, save the select time.
- On update, compare the select time with the update_time field: if update_time > select_time, that means another user updated this record after it was selected.
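A T-SQL sketch of that check on save (names are invented; @SelectTime is the update_time value read when the record was loaded):

DECLARE @PersonId INT, @NewName NVARCHAR(100), @SelectTime DATETIME;
-- (these would normally be parameters of the save procedure)

UPDATE dbo.Person
   SET Name = @NewName,
       update_time = GETDATE()
 WHERE PersonId = @PersonId
   AND update_time = @SelectTime;   -- only succeeds if nobody updated the row since we read it

IF @@ROWCOUNT = 0
    RAISERROR('Update failed: the record was changed by another user.', 16, 1);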
SELECT FOR UPDATE and equivalents are fine provided you hold the lock for a microscopic amount of time, but for a macroscopic amount (e.g. the user has the data loaded and hasn't pressed 'save') you should use optimistic concurrency as above. (Which I always think is misnamed - it's more pessimistic than 'last writer wins', which is usually the only other alternative considered.)
Another option is to test that the values in the record that you are changing are the still the same as they were when you started:
SELECT
    customer_nm,
    customer_nm AS customer_nm_orig
FROM demo_customer
WHERE customer_id = @p_customer_id

(display the customer_nm field and the user changes it)

UPDATE demo_customer
SET customer_nm = @p_customer_nm_new
WHERE customer_id = @p_customer_id
  AND customer_nm = @p_customer_nm_old

IF @@ROWCOUNT = 0
    RAISERROR('Update failed: Data changed', 16, 1);
You don't have to add a new column to your table (and keep it up to date), but you do have to create more verbose SQL statements and pass new and old fields to the stored procedure.
It also has the advantage that you are not locking the records - because we all know that records will end up staying locked when they should not be...
The database will do this for you. Look at "select ... for update", which is designed just for this kind of thing. It will give you a write lock on the selected rows, which you can then commit or roll back.
For me, the best way is to have a lastupdate column (timestamp datatype).
When you select and when you update, just compare this value.
Another advantage of this solution is that you can use this column to track down when the data changed.
I think it is not a good idea to just create a column like isLock to check for updates.