I have a database where one of the columns in one table keeps going blank. There's nothing in our software that can clear that column, so we are quite perplexed as to how it keeps happening.
Any suggestions on how I can figure this out? I'm thinking of creating a trigger that runs every time this table gets updated, and ideally when that field becomes empty.
But what kind of info can I actually track that will help me figure this out? Can I store the SQL statement that gets run when that update occurs? Can I store the Windows process that is connected to the database?
Any other suggestions? Thanks
You could also throw an error from the trigger and have the client operation fail. If your client code is written to handle errors and log them, you can find out what causes the issue that way.
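For instance (table and column names here are hypothetical placeholders for your real ones), a trigger that refuses to let the column go blank might look like:

```sql
-- Hypothetical names: dbo.YourTable, ProblemColumn. Substitute your own.
CREATE TRIGGER trg_BlockBlankColumn ON dbo.YourTable
AFTER UPDATE
AS
BEGIN
    IF UPDATE(ProblemColumn)
       AND EXISTS (SELECT 1 FROM inserted WHERE LEN(ProblemColumn) = 0)
    BEGIN
        ROLLBACK TRANSACTION;
        RAISERROR('ProblemColumn was blanked; update rejected.', 16, 1);
    END
END
```

Severity 16 is enough for the error to reach the client, so whatever issued the rogue UPDATE should surface it in its own error logging.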
One thing you can try is using a trigger and testing for the specific column being updated with if update(column).
You can then capture some diagnostic data, such as the following, into a logging table:
select @@SPID, r.plan_handle, p.program_name, p.loginame, b.event_info
from sys.dm_exec_requests r
join sys.sysprocesses p on p.spid = r.session_id
cross apply sys.dm_exec_input_buffer(r.session_id, r.request_id) b
where r.session_id = @@SPID
Something like this may work, if you create the correct set of columns in some kind of logging table:
IF EXISTS (SELECT 1 FROM inserted WHERE LEN(ProblemColumn) = 0)
BEGIN
    INSERT INTO dbo.SomeLoggingTable (cols)
    SELECT GETDATE(), i.[key], buf.*
    FROM inserted AS i
    CROSS APPLY sys.dm_exec_input_buffer(@@SPID, NULL) AS buf
    WHERE LEN(i.ProblemColumn) = 0;
END
I am just wondering: can I find out whether somebody ran a query that updated a row in a specific table on a given date?
I tried this :
SELECT id, name
FROM sys.sysobjects
WHERE NAME = ''
SELECT TOP 1 *
FROM ::fn_dblog(NULL,NULL)
WHERE [Lock Information] LIKE '%TheOutoput%'
It does not show me anything. Any suggestions?
No, row-level history/change stamps are not built into SQL Server. You need to add that in the table design. If you want an automatic update-date column, it would typically be set by a trigger on the table.
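To illustrate that last point, a minimal update-stamp trigger might look like the following (table and column names here are made up; substitute your own):

```sql
-- Hypothetical schema: dbo.MyTable(Id int primary key, ..., LastUpdated datetime).
CREATE TRIGGER trg_MyTable_Stamp ON dbo.MyTable
AFTER INSERT, UPDATE
AS
BEGIN
    UPDATE t
    SET t.LastUpdated = GETDATE()
    FROM dbo.MyTable AS t
    JOIN inserted AS i ON i.Id = t.Id;
END
```

With the default RECURSIVE_TRIGGERS setting (OFF), the trigger's own UPDATE will not re-fire it.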
There is, however, a way if you really need to find out what happened in a forensics scenario, but it is only available if you have the right backup plan. What you can do then is use the database transaction log to find when the modification was done. Note that this is not something an application can or should do at runtime.
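For example, the undocumented fn_dblog function (which the asker is already poking at above) exposes the active portion of the transaction log. A rough first look might be the sketch below; bear in mind the output columns are undocumented and can vary between versions:

```sql
-- fn_dblog is undocumented and unsupported: forensics only, never application code.
SELECT TOP 100 [Current LSN], Operation, [Transaction ID], [Begin Time], AllocUnitName
FROM fn_dblog(NULL, NULL)
WHERE Operation IN ('LOP_MODIFY_ROW', 'LOP_MODIFY_COLUMNS');
```

AllocUnitName lets you match log records back to the table you care about; older history requires restoring the right log backups first.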
I'm new to SQL Server and am doing some cleanup of our transaction database. However, to accomplish the last step, I need to update a column in one table of one database with the value from another column in another table from another database.
I found a SQL update code snippet and re-wrote it for our own needs but would love someone to give it a once over before I hit the execute button since the update will literally affect hundreds of thousands of entries.
So here are the two databases:
Database 1: Movement
Table 1: ItemMovement
Column 1: LongDescription (datatype: text / up to 40 char)
Database 2: Item
Table 2: ItemRecord
Column 2: Description (datatype: text / up to 20 char)
Goal: set Column 1 from Database 1 to the value of Column 2 from Database 2.
Here is the code snippet:
update table1
set table1.longdescription = table2.description
from movement..itemmovement as table1
inner join item..itemrecord as table2 on table1.itemcode = table2.itemcode
where table1.longdescription <> table2.description
I added the last where clause to prevent SQL from updating rows where the value already matches the source table.
This should execute faster and update only the rows that have garbage. But as it stands, does this look like it will run? And lastly, is it a straightforward process in SQL Server 2005 Express to just back up the entire Movement database before I execute, and if it messes up, just restore it?
Alternatively, is it even necessary to alias the tables as table1 and table2? Is it valid to execute a SQL query like this:
update movement..itemmovement
set itemmovement.longdescription = itemrecord.description
from movement..itemmovement
inner join item..itemrecord on itemmovement.itemcode = itemrecord.itemcode
where itemmovement.longdescription <> itemrecord.description
Many thanks in advance!
You don't necessarily need to alias your tables, but I recommend you do: aliases are faster to type and reduce the chance of making a typo.
update m
set m.longdescription = i.description
from movement..itemmovement as m
inner join item..itemrecord as i on m.itemcode = i.itemcode
where m.longdescription <> i.description
In the above query I have shortened the alias using m for itemmovement and i for itemrecord.
When a large number of records are to be updated and there is any question about whether the update will succeed, always make a copy in a test database (residing on a test server) and try it out there. In this case, one of the safest bets would be to first create a new column and call it longdescription_test. You can create it with SQL Server Management Studio Express (SSMS) or using the command below:
use movement;
alter table itemmovement add longdescription_test varchar(100);
The statement alters itemmovement and adds a new column called longdescription_test with a datatype of varchar(100). If you create the new column through SSMS instead, SSMS will run the same alter table statement in the background.
You can then execute
update m
set m.longdescription_test = i.description
from movement..itemmovement as m
inner join item..itemrecord as i on m.itemcode = i.itemcode
where m.longdescription <> i.description
Check data in longdescription_test randomly. You can actually do a spot check faster by running:
select * from movement..itemmovement
where longdescription <> longdescription_test
and longdescription_test is not null
If information in longdescription_test looks good, you can change your update statement to set m.longdescription = i.description and run the query again.
It is easier to just create a copy of your itemmovement table before you do the update. To make a copy, you can just do:
use movement;
select * into itemmovement_backup from itemmovement;
If update does not succeed as desired, you can truncate itemmovement and copy data back from itemmovement_backup.
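Putting the data back from that copy is then straightforward (this sketch assumes no foreign keys reference itemmovement, which TRUNCATE requires):

```sql
USE movement;
TRUNCATE TABLE itemmovement;
-- If the table has an identity column, SET IDENTITY_INSERT itemmovement ON first
-- and list the columns explicitly.
INSERT INTO itemmovement
SELECT * FROM itemmovement_backup;
```

For the database-wide safety net the question asks about, a full BACKUP DATABASE of Movement before running the update serves the same purpose at coarser granularity.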
Zedfoxus provided a GREAT explanation of this and I appreciate it. It is an excellent reference for next time around. After reading over some syntax examples, I was confident enough to run the second SQL update query from my OP. Luckily, the data here is not exactly "live", so there was little risk of damaging anything, even during operating hours. The update executed perfectly, updating all 345,000 entries!
Hi everyone, I'm getting this error message when trying to create a trigger and it's got me a little stumped.
Here is my trigger code.
CREATE OR REPLACE TRIGGER CUSTOMER_AD
AFTER DELETE ON CUSTOMER
REFERENCING OLD AS OLD
FOR EACH ROW
DECLARE
nPlaced_order_count NUMBER;
BEGIN
SELECT COUNT(*)
INTO nPlaced_order_count
FROM PLACED_ORDERS p
WHERE p.FK1_CUSTOMER_ID = OLD.CUSTOMER_ID;
IF nPlaced_order_count > 0 THEN
INSERT into previous_customer
(customer_id,
first_name,
last_name,
address,
AUDIT_USER,
AUDIT_DATE)
VALUES
(:old.customer_id,
:old.first_name,
:old.last_name,
:old.address,
UPPER(v('APP_USER')),
SYSDATE);
END IF;
END CUSTOMER_AD;
And the error I'm getting 'Error at line 4: PL/SQL: SQL Statement ignored 0.10 seconds'
Anyone any guesses why?
thanks for the help
The error shown is only the highest-level message. Depending on where you're running it, you should be able to see the full error stack. The client determines exactly how to do that; SQL*Plus or SQL Developer would show you more than this anyway, but I don't really know about other clients. If you can't see the details in your client, then you can query for them with:
select * from user_errors where name = 'CUSTOMER_AD' and type = 'TRIGGER'
Assuming the tables all exist, it's probably this line:
WHERE p.FK1_CUSTOMER_ID = OLD.CUSTOMER_ID;
which should be:
WHERE p.FK1_CUSTOMER_ID = :OLD.CUSTOMER_ID;
When referencing the old (or new) value from the table, the name as specified in the referencing clause has to be preceded by a colon, so :OLD in this case, as you're already doing in the insert ... values() clause.
(From comments, my assumption turned out to be wrong - as well as the missing colon problem, the table name is really placed_order, without an s).
Seems like you copied code from both answers to your previous question without really understanding what they were doing. You might want to look at the trigger design guidelines (particularly the one about not duplicating database functionality) and the syntax for create trigger, which introduces the referencing clause.
In my database I have certain data that is important to the functioning of the app (constants, etc.), and I have test data that is generated by testing the site. As the test data is expendable, I delete it regularly. Unfortunately the two types of data occur in the same table, so I cannot do a plain delete from T; I have to do a delete from T where IsDev = 0.
How can I make sure that I do not accidentally delete the non-dev data by forgetting to put the filter in? If that happens, I have to restore from a production backup, which wastes my time. I would like some sort of foreign-key-like behavior that fails a delete when a certain condition is met. This would also be useful to ensure that my code does not do anything harmful due to a bug.
Well, you could use a trigger that throws an exception if any of the records in the deleted pseudo-table have IsDev = 1.
CREATE TRIGGER TR_DEL_protect_constants ON MyTable FOR DELETE AS
BEGIN
    IF EXISTS (SELECT 1 FROM deleted WHERE IsDev <> 0)
    BEGIN
        ROLLBACK;
        RAISERROR('Can''t delete constants', 16, 1);
        RETURN;
    END
END
I'm guessing a bit on the syntax, but you get the idea.
I would use a trigger.
Keep a backup of the rows you want to retain in a separate admin table.
Seems like you need a trigger on the delete operation that looks at the rows and rolls back the transaction if it sees a row that should never be deleted.
Also, you might want to read this article: Prevent accidental update or delete commands of all rows in a SQL Server table
Depending on how transparent you want to make this, you could use an INSTEAD OF trigger that will always remember the WHERE for you.
CREATE TRIGGER TR_IODEL_DevOnly ON YourTable
INSTEAD OF DELETE
AS
BEGIN
DELETE FROM t
FROM Deleted d
INNER JOIN YourTable t
ON d.PrimaryKey = t.PrimaryKey
WHERE t.IsDev = 0
END
I suggest that instead of writing the delete statement from scratch every time, just create a stored procedure to do the deletions and execute that.
create procedure ResetT as delete from T where IsDev = 0
You could add an extra column IS_TEST to your table, rename TABLE_NAME to TABLE_NAME_BAK, and create a view TABLE_NAME on TABLE_NAME_BAK so that only rows where IS_TEST is set are visible through it. Setting IS_TEST to zero for the data you wish to keep, and adding DEFAULT 1 to the IS_TEST column, completes the job. It is similar to the procedure required for implementing 'soft deletes'.
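A sketch of that arrangement (the names follow the description above and are illustrative; adapt them to your schema):

```sql
-- Add the flag; new rows default to 'test'.
ALTER TABLE TABLE_NAME ADD IS_TEST bit NOT NULL DEFAULT 1;
-- Rename the real table and put a view in its place.
EXEC sp_rename 'TABLE_NAME', 'TABLE_NAME_BAK';
GO
CREATE VIEW TABLE_NAME AS
SELECT * FROM TABLE_NAME_BAK WHERE IS_TEST = 1;
```

A DELETE issued against the view can only touch rows the view exposes, so rows with IS_TEST = 0 survive even an unfiltered delete from TABLE_NAME.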
I'm looking for a way to explain a deadlocking issue. I think I know what is causing it, but I'm not sure of the exact events.
We have a long-running view (several seconds). We are updating one of the tables that is used in that view. The updates can also take several seconds. The update statements that are running when the deadlock error is thrown join to the view. For example:
UPDATE t1 SET
Field1 = 'someValue'
FROM Table1 t1
JOIN TheView v ON v.TableId = t1.TableId
WHERE v.Condition = 'TheCondition'
The statement that appears to be getting shut down due to the deadlock is like:
SELECT * FROM TheView
Where the view is defined as:
CREATE VIEW TheView AS
SELECT *
FROM Table1 t1
JOIN Table2 t2 ON t2.foo = t1.foo
I'm pretty sure that the deadlock is occurring because both the view and the update statement depend on Table1. Is this scenario possible?
Thanks
Have you tried using SQL Profiler? Profiler will tell you exactly which statements are involved in a deadlock, including the resources each process has locked that the other process needs.
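If running Profiler isn't an option, trace flag 1222 (SQL Server 2005 and later) writes the full deadlock graph, including the statements involved and the locked resources, to the SQL Server error log:

```sql
-- Enabled globally until the next restart; use -T1222 as a startup
-- parameter to make it permanent.
DBCC TRACEON (1222, -1);
```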
Is it possible? Sure. You'll need to do some work to find out for sure. See: How to Track Down Deadlocks Using SQL Server 2005 Profiler
It is definitely possible. I posted several similar repro scripts here: Reproducing deadlocks involving only one table
One way around it is to use snapshot isolation.
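For example (the database name is a placeholder), switching the database to row versioning means readers like the view no longer take shared locks that can deadlock with the update:

```sql
ALTER DATABASE YourDb SET ALLOW_SNAPSHOT_ISOLATION ON;
-- Optional: make row versioning the default for READ COMMITTED readers.
-- This statement needs exclusive access to the database to complete.
ALTER DATABASE YourDb SET READ_COMMITTED_SNAPSHOT ON;
```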