SQL Server accidental update - sql-server

Hi, I have accidentally updated a row in SQL Server that I should not have. The query was:
UPDATE Documents
SET Name = 'Files'
WHERE Id = 950
Is there any way to recover the previous value?

Yes, it is possible, but only under certain circumstances.
If you had wrapped the UPDATE in a transaction, you could ROLLBACK. This would undo the UPDATE.
Assuming you didn't put it in a transaction, you need to restore the database to a previous point in time. This is only possible if you have some form of backup of the database. How to do this is shown on this MSDN page.
Note that both of these options will UNDO the update, not just tell you the previous values.
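To make both options concrete, here is a minimal T-SQL sketch. The backup file paths and the STOPAT time are illustrative assumptions, not details from the question:
-- Option 1: a transaction only helps if it was opened before the UPDATE ran
BEGIN TRANSACTION;

UPDATE Documents
SET Name = 'Files'
WHERE Id = 950;

-- Inspect the result, then either undo it...
ROLLBACK TRANSACTION;
-- ...or keep it:
-- COMMIT TRANSACTION;

-- Option 2: reset the whole database to a point in time before the UPDATE
-- (requires a full backup plus log backups; names and the STOPAT time are hypothetical)
RESTORE DATABASE MyDb
FROM DISK = N'C:\Backups\MyDb_Full.bak'
WITH REPLACE, NORECOVERY;

RESTORE LOG MyDb
FROM DISK = N'C:\Backups\MyDb_Log.trn'
WITH STOPAT = N'2024-01-01T10:00:00', RECOVERY;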

Related

Logic Apps - SQL Connector returning cached data?

I have a Logic App that uses the "SQL Server - When an item is modified (V2)" trigger, monitoring an Azure SQL DB for updated rows. When running this LA, I noticed that the modified row that came as output for this trigger did NOT contain the updated data.
I thought this might be by design (don't really see why, but ok...) so I added a "Get Row" action directly after the trigger, to go fetch the most recent data for the row that triggered the LA. But even this step still returned the old, not-updated data for that row.
However, when I resubmit the run some seconds later, the "Get Row" action does get the updated data from the database.
Is this normal behavior? Is the SQL DB row version already updated even though the data update isn't committed yet, triggering the Logic App but not returning the updated data yet?
Thanks for pointing out that I should add a timestamp to my table; after adding it I can find the table in the selection. I tested it on my side and the trigger works fine, it outputs the updated data. I provide my logic below for your reference:
My table shows as:
My logic app:
Please note that I disabled "Split On" in the "Settings" of the trigger.
After running the update SQL:
update Table3 set name = 'hury1' where id = 1;
update Table3 set name = 'jim1' where id = 2;
I got the result (the variable updateItems in the screenshot contains both updated items):
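For reference, a minimal sketch of the kind of table this worked against. The column types are assumptions since the original screenshot isn't shown; the point is that the "modified items" trigger relies on a rowversion (timestamp) column to detect changed rows:
CREATE TABLE dbo.Table3
(
    id   int          NOT NULL PRIMARY KEY,   -- the trigger also needs a primary key
    name nvarchar(50) NULL,
    ts   rowversion                            -- the "timestamp" column the trigger uses
);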

SQL Change tracking SYS_CHANGE_COLUMNS

We are running SQL 2008 R2 and have started exploring change tracking as our method for identifying changes to export to our data warehouse. We are only interested in specific columns.
We are identifying the changes on a replicated copy of the source database. If we query the change table on the source server, any specific column update is available and the SYS_CHANGE_COLUMNS is populated.
However on the replicated copy the changes are being tracked but the SYS_CHANGE_COLUMNS field is always NULL for an update change.
Track columns updated is set to true on the subscriber.
Is this due to the way replication works: it performs whole-row updates, and therefore you cannot get column-level changes on a subscriber?
Any help or alternative approaches would be much appreciated.
Thanks
I realize this is an old question, but since I've happened across it I figure I may as well provide an answer for others who come later.
SYS_CHANGE_COLUMNS is null when every column is "updated". "Updated" here doesn't necessarily mean the value changed, it just means the column was touched by the DML statement. So, "update t set c = c" would mean column c was "updated".
Inserts and deletes will therefore always have a SYS_CHANGE_COLUMNS value of null, since the whole row is affected by an insert or a delete. Most replication technologies, however, apply an update by setting every column to the value of the column on the replication source. Therefore, a replication "update" touches every column, and so the SYS_CHANGE_COLUMNS value will always be null.
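For anyone comparing behaviour on the two servers, a hedged sketch of how to check the column mask on a change-tracked table (table and column names are made up; change tracking must already be enabled at the database level):
-- Enable column-level tracking on a hypothetical table
ALTER TABLE dbo.SourceTable
ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON);

-- Read changes since version 0 and test whether a specific column ("Name") was touched
SELECT  CT.SYS_CHANGE_OPERATION,
        CT.SYS_CHANGE_COLUMNS,
        CHANGE_TRACKING_IS_COLUMN_IN_MASK(
            COLUMNPROPERTY(OBJECT_ID('dbo.SourceTable'), 'Name', 'ColumnId'),
            CT.SYS_CHANGE_COLUMNS) AS NameWasTouched
FROM    CHANGETABLE(CHANGES dbo.SourceTable, 0) AS CT;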

Find any of the field is updated or not in sql server

If we receive an update statement that does not check if the value has changed in the where clause, what are the different ways to ignore that update inside a trigger?
I know we can do a comparison of each individual field (handling the ISNULL side as well), but where it's a table that has 50+ fields, is there a faster/easier way to do it?
Note: I want to log each and every update event for the changed fields. For example, if I have 50 fields and one of them is updated (for a single row, not the entire table), then I want to save only that updated field's old value and new value in the logs.
Thanks in Advance, RAHUL
If this is more about logging changes to tables, a simpler solution may be to use Change Data Capture (CDC) tables.
Every time a change is made to a table, a row is written to your CDC table. Then you could write a query over the CDC table to bring you back just the data that has changed.
More information on CDC tables is available here:
http://msdn.microsoft.com/en-us/library/bb522489(v=sql.105).aspx
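A rough sketch of that approach, with hypothetical table names (the capture instance name below follows the default schema_table pattern):
-- Enable CDC on the database and on a hypothetical wide table
EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'MyWideTable',
    @role_name     = NULL;

-- Read all captured changes; with the 'all update old' option, updates return
-- two rows (operation 3 = values before the update, 4 = values after), and
-- __$update_mask indicates which columns were changed
DECLARE @from_lsn binary(10) = sys.fn_cdc_get_min_lsn(N'dbo_MyWideTable');
DECLARE @to_lsn   binary(10) = sys.fn_cdc_get_max_lsn();

SELECT  __$operation, __$update_mask, *
FROM    cdc.fn_cdc_get_all_changes_dbo_MyWideTable(@from_lsn, @to_lsn, N'all update old');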

SQL Server 2000: Is there a way to tell when a record was last modified?

The table doesn't have a last updated field and I need to know when existing data was updated. So adding a last updated field won't help (as far as I know).
SQL Server 2000 does not keep track of this information for you.
There may be creative / fuzzy ways to guess what this date was depending on your database model. But, if you are talking about 1 table with no relation to other data, then you are out of luck.
You can't check for changes without some sort of audit mechanism. You are looking to extract information that has not been collected. If you just need to know when a record was added or edited, adding a datetime field that gets updated via a trigger when the record is updated would be the simplest choice.
If you also need to track when a record has been deleted, then you'll want to use an audit table and populate it from triggers with a row when a record has been added, edited, or deleted.
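A minimal sketch of the simpler variant; the table, key, and column names here are assumptions for illustration only:
-- Add a "last updated" column and keep it current from an update trigger
ALTER TABLE dbo.MyTable ADD LastUpdated datetime NULL
GO

CREATE TRIGGER trg_MyTable_LastUpdated
ON dbo.MyTable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE t
    SET    LastUpdated = GETDATE()
    FROM   dbo.MyTable AS t
    INNER JOIN inserted AS i ON i.Id = t.Id   -- assumes an Id primary key
END
GO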
You might try a log viewer; this basically just lets you look at the transactions in the transaction log, so you should be able to find the statement that updated the row in question. I wouldn't recommend this as a production-level auditing strategy, but I've found it to be useful in a pinch.
Here's one I've used; it's free and (only) works w/ SQL Server 2000.
http://www.red-gate.com/products/SQL_Log_Rescue/index.htm
You can add a timestamp field to that table and update that timestamp value with an update trigger.
OmniAudit is a commercial package which implements auditing across an entire database.
A free method would be to write a trigger for each table which adds entries to an audit table when fired.

Editing database records by multiple users

I have designed database tables (normalised, on an MS SQL Server) and created a standalone Windows front end for an application that will be used by a handful of users to add and edit information. We will add a web interface to allow searching across our production area at a later date.
I am concerned that if two users start editing the same record then the last to commit the update would be the 'winner' and important information may be lost. A number of solutions come to mind but I'm not sure if I am going to create a bigger headache.
Do nothing and hope that two users are never going to be editing the same record at the same time. - Might never happen, but what if it does?
Editing routine could store a copy of the original data as well as the updates and then compare when the user has finished editing. If they differ, show the user and confirm the update. - Would require two copies of data to be stored.
Add a last updated DATETIME column and check it matches when we update; if not, then show differences. - Requires a new column in each of the relevant tables.
Create an editing table that registers when users start editing a record, which will be checked to prevent other users from editing the same record. - Would require careful thought of program flow to prevent deadlocks and records becoming locked if a user crashes out of the program.
Are there any better solutions or should I go for one of these?
If you expect infrequent collisions, Optimistic Concurrency is probably your best bet.
Scott Mitchell wrote a comprehensive tutorial on implementing that pattern:
Implementing Optimistic Concurrency
A classic approach is as follows:
- add a boolean field, "locked", to each table, and set it to false by default.
- when a user starts editing, you do this:
  - lock the row (or the whole table if you can't lock the row)
  - check the flag on the row you want to edit
  - if the flag is true, inform the user that they cannot edit that row at the moment
  - else, set the flag to true
  - release the lock
- when saving the record, set the flag back to false
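A rough T-SQL sketch of that flow; the table (dbo.Records), its Locked bit column, and the parameter value are all made up for illustration:
DECLARE @Id int = 950;

-- Try to claim the record for editing; UPDLOCK/ROWLOCK hold the row while we
-- check and set the flag inside one short transaction
BEGIN TRANSACTION;

IF EXISTS (SELECT 1 FROM dbo.Records WITH (UPDLOCK, ROWLOCK)
           WHERE Id = @Id AND Locked = 1)
BEGIN
    ROLLBACK TRANSACTION;
    RAISERROR('That record is being edited by another user.', 16, 1);
END
ELSE
BEGIN
    UPDATE dbo.Records SET Locked = 1 WHERE Id = @Id;
    COMMIT TRANSACTION;
END

-- ... the user edits the record in the application ...

-- When saving (or cancelling), clear the flag again
UPDATE dbo.Records SET Locked = 0 WHERE Id = @Id;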
@Mark Harrison: SQL Server does not support that syntax (SELECT ... FOR UPDATE).
The SQL Server equivalent is the SELECT statement hint UPDLOCK.
See SQL Server Books Online for more information.
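For example, a small sketch of the hint in use (table and values are illustrative):
-- Roughly the equivalent of SELECT ... FOR UPDATE: the UPDLOCK hint takes an
-- update lock on the selected row, held until the transaction ends
BEGIN TRANSACTION;

SELECT Name
FROM   dbo.Documents WITH (UPDLOCK, ROWLOCK)
WHERE  Id = 950;

UPDATE dbo.Documents SET Name = 'Files' WHERE Id = 950;

COMMIT TRANSACTION;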
First, create a field (update time) to store the last update time of the record.
When any user selects a record, save the select time.
Compare the select time and update time fields: if (update time) > (select time), that means another user updated this record after it was selected.
SELECT FOR UPDATE and equivalents are good providing you hold the lock for a microscopic amount of time, but for a macroscopic amount (e.g. the user has the data loaded and hasn't pressed 'save') you should use optimistic concurrency as above. (Which I always think is misnamed - it's more pessimistic than 'last writer wins', which is usually the only other alternative considered.)
Another option is to test that the values in the record that you are changing are the still the same as they were when you started:
SELECT
customer_nm,
customer_nm AS customer_nm_orig   -- keep a copy of the original value
FROM demo_customer
WHERE customer_id = @p_customer_id
(display the customer_nm field and the user changes it)
UPDATE demo_customer
SET customer_nm = @p_customer_nm_new
WHERE customer_id = @p_customer_id
AND customer_nm = @p_customer_nm_old   -- only update if the value is still unchanged
IF @@ROWCOUNT = 0
    RAISERROR( 'Update failed: Data changed', 16, 1 );
You don't have to add a new column to your table (and keep it up to date), but you do have to create more verbose SQL statements and pass new and old fields to the stored procedure.
It also has the advantage that you are not locking the records - because we all know that records will end up staying locked when they should not be...
The database will do this for you. Look at "select ... for update", which is designed just for this kind of thing. It will give you a write lock on the selected rows, which you can then commit or roll back.
For me, the best way is to have a column lastupdate (timestamp datatype).
When selecting and updating, just compare this value.
Another advantage of this solution is that you can use this column to track down the time the data was changed.
I think it is not good if you just create a column like isLock to check for updates.
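A small sketch of that idea (names are illustrative): a rowversion/timestamp column changes automatically on every update, so the value saved from the original SELECT can serve as the concurrency check.
DECLARE @p_customer_id int = 42;
DECLARE @p_customer_nm_new nvarchar(100) = N'New name';
DECLARE @p_lastupdate_read binary(8);   -- in real code, populated from the SELECT below when the row was loaded

-- Read the row together with its current row version
SELECT customer_nm, lastupdate
FROM   dbo.demo_customer
WHERE  customer_id = @p_customer_id;

-- When saving, only update if the row version is still the one we read
UPDATE dbo.demo_customer
SET    customer_nm = @p_customer_nm_new
WHERE  customer_id = @p_customer_id
  AND  lastupdate  = @p_lastupdate_read;

IF @@ROWCOUNT = 0
    RAISERROR('Update failed: the record was changed by another user.', 16, 1);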

Resources