Every day, some of my database rows are getting deleted automatically.
Even the log files are getting deleted, so I am unable to check who deleted the data.
I don't know what to do.
If the SQL Server is pre-production, you could just yank all delete rights on the target table and wait to see who complains. If deletes are not allowed on this table anyway, even in production, then it would be a good idea to restrict that functionality going forward.
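For example, a minimal sketch of yanking delete rights, assuming a hypothetical database role AppUsers and table dbo.MyTable:

-- Deny DELETE for the role the application connects with (names are placeholders).
DENY DELETE ON dbo.MyTable TO AppUsers;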
Beyond that, try adding a delete trigger to the table to do auditing. You can capture the source IP address, the logged-in user, and so on. You can even roll back the delete if needed.
Here's a good article on using triggers for auditing.
http://weblogs.asp.net/jgalloway/archive/2008/01/27/adding-simple-trigger-based-auditing-to-your-sql-server-database.aspx
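As a rough sketch of that kind of audit trigger (the audit table dbo.MyTable_DeleteAudit and the Id key column are assumptions; adjust them to your schema):

-- Hypothetical audit table recording who deleted which rows, and from where.
CREATE TABLE dbo.MyTable_DeleteAudit
(
    DeletedAt     datetime      NOT NULL DEFAULT GETDATE(),
    LoginName     sysname       NOT NULL,
    HostName      nvarchar(128) NULL,
    SourceAddress varchar(48)   NULL,
    DeletedId     int           NULL   -- key of the deleted row (assumed column)
);
GO

CREATE TRIGGER dbo.MyTable_Delete_Audit
ON dbo.MyTable
AFTER DELETE
AS
BEGIN
    INSERT INTO dbo.MyTable_DeleteAudit (LoginName, HostName, SourceAddress, DeletedId)
    SELECT SUSER_SNAME(),
           HOST_NAME(),
           (SELECT TOP (1) client_net_address
            FROM sys.dm_exec_connections
            WHERE session_id = @@SPID),
           d.Id
    FROM deleted d;
END;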
Edit:
If you want to stop all deletes on a table, you can use the following trigger.
CREATE TRIGGER dbo.MyTable_Delete_Instead_Of_Trigger
ON dbo.MyTable
INSTEAD OF DELETE
AS
BEGIN
    -- INSTEAD OF DELETE means the delete is never actually performed;
    -- the error just tells the caller why.
    RAISERROR('Deletes are not allowed.', 16, 1);
END;
Run SQL Profiler against the database, capturing all RPC:Completed and SQL:BatchCompleted events, and review the trace to find whatever is performing the deletes.
This is something critical for me. In the last few days, some data from one table has been deleted (Oracle 11g).
I have checked with the DBA, but nothing is logged in the database, and he said the data was deleted from the front end.
Can anyone help me find out what data was deleted and who did it, given that the application's logs have no information and AUDIT_TRAIL in the database is set to NONE?
You may try using LogMiner. This is a built-in Oracle utility that scans the redo/archived logs and displays all DML commands that were run.
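A minimal sketch of a LogMiner session, run as a suitably privileged user (the archived log path and the table name MY_TABLE are placeholders):

-- Point LogMiner at an archived log (path is a placeholder).
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/arch/arch_1_1234.arc', OPTIONS => DBMS_LOGMNR.NEW);

-- Use the online data dictionary to resolve object names.
EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

-- Show who deleted rows from the table in question.
SELECT username, os_username, session_info, timestamp, sql_redo
FROM   v$logmnr_contents
WHERE  operation = 'DELETE'
AND    seg_name  = 'MY_TABLE';

EXECUTE DBMS_LOGMNR.END_LOGMNR;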
I have a Microsoft SQL database that we have been using for several years. Starting this morning a single table in the database is throwing a time-out error whenever we attempt to insert or update any records.
I have tried to insert and update through:
Microsoft Access ODBC
a .Net Program via Entity Framework
a stored procedure run as an automated job each morning
a custom query written this morning to test the database and executed through SQL Server Management Studio
Opening the table directly via 'Edit Top 200 Rows' and typing in the appropriate values
We have restarted the service, then restarted the entire server and continue to get the same problems. The remainder of the database appears to be working fine. All data can be read even from the affected table, and other tables allow updates and inserts to be run just fine.
Looking through the data in the table, I have not found anything that appears out of the ordinary.
I am at a loss as to the next steps on finding the cause or solution.
It's not a space issue, is it? Try:
SELECT volume_mount_point AS Drive,
       CAST(SUM(available_bytes) * 100 / SUM(total_bytes) AS int) AS [Free%],
       AVG(available_bytes / 1024 / 1024 / 1024) AS FreeGB
FROM sys.master_files f
CROSS APPLY sys.dm_os_volume_stats(f.database_id, f.[file_id])
GROUP BY volume_mount_point
ORDER BY volume_mount_point;
I was planning to use SSIS logging to get task-level details (run duration, any error message thrown, the user who triggered the job) for my package.
SSIS was creating the dbo.sysssislog table under System Tables and it was working just fine. Suddenly it stopped creating the table under System Tables and started creating it under user tables. Also, it is now not logging some events which were logged previously when the table was created under System Tables, such as PackageStart and the User:PackageStart/User:PackageEnd events for some tasks.
Can anyone please guide me on what's going wrong here?
Whether the table shows under System Tables or user tables is fairly meaningless, but if you want it to show up as it did before, mark it as an MS-shipped (system) object:
EXECUTE sys.sp_MS_marksystemobject 'sysssislog'
The way database logging works in the package deployment model is that SSIS will attempt to log to dbo.sysdtslog90/dbo.sysssislog (depending on your version), but if that table doesn't exist, it will create it for you. There is a copy of that table in the msdb catalog which is marked as a system object. When SSIS creates its own copy, it just has the DDL somewhere in the bowels of the code that does logging. You'll notice it also creates a stored procedure, sp_ssis_addlogentry, to assist in the logging.
As for your observation for inconsistent logging behaviour, all I can say is I've never seen that. The only reason it won't log an event is if the event doesn't occur - either a precursor condition didn't happen or the package errors out. If you can provide a reproducible scenario where it does and then doesn't log events, I'll be happy to tell you why it does/doesn't do it.
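If it helps to verify what actually got logged, a quick look at the log table (assuming the default dbo.sysssislog name) will show which events are present:

-- List the logged events for comparison between runs.
SELECT event, source, starttime, endtime, message
FROM dbo.sysssislog
WHERE event IN ('PackageStart', 'PackageEnd', 'OnError',
                'User:PackageStart', 'User:PackageEnd')
ORDER BY starttime;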
PreSQL and PostSQL in Informatica are not getting executed.
ISSUE DESCRIPTION:
I have a table in Microsoft SQL Server. I am trying to update/insert into this table from an Informatica PowerCenter session by calling a stored procedure through a Stored Procedure transformation, but it's not happening. After further digging, I found that the reason is the triggers on the table we are trying to update/insert into. There are a couple of triggers defined on the table, including ON INSERT and ON UPDATE triggers. So I thought of disabling all the triggers on the table in the PreSQL and enabling them again in the PostSQL of the session I am running, but it's not working.
However, when I execute the trigger-disable statement directly on the database through the Microsoft SQL Server client and then run the session, the session updates/inserts the records.
Below are the PreSQL and PostSQL commands used by me:

PreSQL:
BEGIN TRANSACTION
ALTER TABLE schemaname.tablename DISABLE TRIGGER ALL
COMMIT;

PostSQL:
BEGIN TRANSACTION
ALTER TABLE schemaname.tablename ENABLE TRIGGER ALL
COMMIT;
Please let me know if I am going wrong anywhere or if there is any possible resolution for this.
Your SQL gets parsed by PowerCenter before going to the DB.
Check the server configuration - there should be some option to send unparsed SQL.
I have a SQL Server 2005 database that has been deleted, and I need to discover who deleted it. Is there a way of obtaining this user name?
Thanks, MagicAndi.
If there has been little or no activity since the deletion, then the out-of-the-box trace may be of help. Try running:
DECLARE @path varchar(256)

SELECT @path = path
FROM sys.traces
WHERE id = 1

SELECT *
FROM fn_trace_gettable(@path, 1)
[In addition to the out-of-the-box trace, there is also the less well-known 'black box' trace, which is useful for diagnosing intermittent server crashes. This post, SQL Server’s Built-in Traces, shows you how to configure it.]
I would first ask everyone who has admin access to the SQL Server if they deleted it.
The best way to retrieve the information is to restore the latest backup.
Now to discuss how to avoid such problems in the future.
First, make sure your backup process is running correctly and frequently. Take a transaction log backup every 15 minutes or half an hour if it is a highly transactional database; then the most you lose is half an hour's worth of work. Practice restoring the database until you can easily do it under stress.
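A minimal sketch of such a log backup (the database name and path are placeholders; schedule it via a SQL Server Agent job):

-- Requires the database to be in the FULL (or BULK_LOGGED) recovery model.
BACKUP LOG [MyDatabase]
TO DISK = N'D:\Backups\MyDatabase_log.trn';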
In SQL Server 2008 you can add DDL triggers (these are available in 2005 as well), which allow you to log who made changes to the structure. It might be worth your time to look into this.
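A minimal sketch of such a trigger at the server level, logging database drops (the audit table master.dbo.DDLAudit is an assumption):

-- Hypothetical audit table for DDL events.
CREATE TABLE master.dbo.DDLAudit
(
    EventTime datetime NOT NULL DEFAULT GETDATE(),
    LoginName sysname  NOT NULL,
    EventData xml      NULL
);
GO

CREATE TRIGGER trg_Audit_DropDatabase
ON ALL SERVER
FOR DROP_DATABASE
AS
BEGIN
    INSERT INTO master.dbo.DDLAudit (LoginName, EventData)
    VALUES (ORIGINAL_LOGIN(), EVENTDATA());
END;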
Do NOT allow more than two people admin access to your production database - a DBA and a backup person for when the DBA is out. These people should load all changes to the database structure and code, and all of the changes should be scripted out, code reviewed, and tested first on QA. No unscripted, "run by the seat of your pants" code should ever be run on prod.
Here is a bit more precise T-SQL:
SELECT DatabaseID, NTUserName, HostName, LoginName, StartTime
FROM sys.fn_trace_gettable(CONVERT(VARCHAR(150),
         (SELECT TOP 1 f.[value]
          FROM sys.fn_trace_getinfo(NULL) f
          WHERE f.property = 2)), DEFAULT) T
JOIN sys.trace_events TE ON T.EventClass = TE.trace_event_id
WHERE TE.trace_event_id = 47   -- 47 = Object:Deleted (event for deleting objects)
  AND T.DatabaseName = 'delete'
This can be used whether or not you know the database/object name.