Audit Log records not showing in CRM view - sql-server

On one of our test servers, the view in Settings -> Auditing -> Audit Log Management says there are no deletable audit logs available, even though a SELECT COUNT(*) FROM AuditBase in the database returns nearly 90 million records. The user looking at the view has System Administrator privileges. On our other two servers (all using the same version of CRM), the view displays records as expected.
What might we do to get the records to show, or alternatively to clean up the audit table without using the view?

Strange that the Audit Logs are not appearing in the view inside Dynamics, despite records existing in the database.
It might make sense to open a Microsoft ticket about it.
Or, since your system is on-prem, here's a discussion about manually deleting data from the AuditBase table, which is of course highly unsupported.
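If you do go that unsupported route, the usual pattern is a batched delete so the transaction log stays manageable. A rough sketch only: the cutoff date, batch size, and the assumption that CreatedOn is a usable cutoff column are all illustrative, and you should take a full backup first.

-- UNSUPPORTED: deletes audit rows directly from AuditBase, bypassing CRM entirely
WHILE 1 = 1
BEGIN
    DELETE TOP (50000)
    FROM dbo.AuditBase
    WHERE CreatedOn < '20150101';   -- illustrative cutoff

    IF @@ROWCOUNT = 0 BREAK;        -- stop once nothing older than the cutoff remains
END;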

Related

Detect Table Changes In A Database Without Modifications

I have a database ("DatabaseA") that I cannot modify in any way, but I need to detect the addition of rows to a table in it and then add a log record to a table in a separate database ("DatabaseB") along with some info about the user who added the row to DatabaseA. (So it needs to be event-driven, not merely a periodic scan of the DatabaseA table.)
I know that normally, I could add a trigger to DatabaseA and run, say, a stored procedure to add log records to the DatabaseB table. But how can I do this without modifying DatabaseA?
I have free rein to do whatever I like in DatabaseB.
EDIT in response to questions/comments ...
Databases A and B are MS SQL 2008/R2 databases (as tagged), users are interacting with the DB via a proprietary Windows desktop application (not my own) and each user has a SQL login associated with their application session.
Any ideas?
Ok, so I have not put together a proof of concept, but this might work.
You can configure an Extended Events session, written out to databaseB, that watches for all the procedures on databaseA that can insert into the table, or any SQL statements that run against the table on databaseA (using a LIKE '%your table name here%' filter).
This is a custom solution that writes the XE session to a table:
https://github.com/spaghettidba/XESmartTarget
You could probably mimic that functionality by writing the XE events table to a custom user table every minute or so using a SQL Server Agent job.
Your session would monitor databaseA and write the XE output to databaseB; you would then write a trigger that, upon each XE output write, compares the two tables and, if there are differences, writes the differences to your log table. This would be a nonstop running process, but it is still kind of a periodic scan in a way: the XE target only writes when the event happens, but you are still running a check every couple of seconds.
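A rough sketch of such a session (the session name, table name, and file path are all illustrative; the filter would match however your statements reference the table):

-- server-level XE session capturing completed statements that mention the watched table
CREATE EVENT SESSION WatchTableA ON SERVER
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (sqlserver.server_principal_name, sqlserver.client_hostname, sqlserver.sql_text)
    WHERE sqlserver.like_i_sql_unicode_string(statement, N'%YourTableName%')
)
ADD TARGET package0.event_file (SET filename = N'C:\XE\WatchTableA.xel');
GO
ALTER EVENT SESSION WatchTableA ON SERVER STATE = START;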
I recommend you look at a data integration tool that can mine the transaction log for Change Data Capture events. We have recently been using StreamSets Data Collector for Oracle CDC, but it also has SQL Server CDC. There are many other competing technologies, including Oracle GoldenGate and Informatica PowerExchange (not PowerCenter). We like StreamSets because it is open source and is designed to build real-time data pipelines between databases at the schema level. Until now we have used batch ETL tools like Informatica PowerCenter and Pentaho Data Integration. I can copy all the tables in a schema in near real time in one StreamSets pipeline, provided I have already deployed the DDL in the target. I use this approach between Oracle and Vertica. You can add additional columns to the target and populate them as part of the pipeline.
The only catch might be identifying which user made the change. I don't know whether that is in the SQL Server transaction log. Seems probable but I am not a SQL Server DBA.
I looked at both solutions provided at the time of writing this answer (see Dan Flippo and dfundaka) but found that the first - using Change Data Capture - required modification to the database, and the second - using Extended Events - wasn't really a complete answer, though it got me thinking of other options.
The option that seems cleanest, and doesn't require any database modification, is to use SQL Server Dynamic Management Views. Residing in the system database, these are views and functions for inspecting server process history - in this case INSERTs and UPDATEs - such as sys.dm_exec_sql_text and sys.dm_exec_query_stats, which contain records of recently executed statements (and are, in fact, what Extended Events seems to be based on).
Though it's quite an involved process initially to extract the required information, the queries can be tuned and generalized to a degree.
There are restrictions on transaction history retention, etc., but for the purposes of this particular exercise that wasn't an issue.
I'm not going to select this answer as the correct one yet, partly because it's a matter of preference as to how you approach the problem and also because I've yet to provide a complete solution. Hopefully I'll post back with that later. But if anyone cares to comment on this approach - good or bad - I'd be interested in your views.
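As a starting point, a sketch of the kind of query involved (the table name is illustrative; note that sys.dm_exec_query_stats only covers plans still in cache, which is the retention restriction mentioned above):

-- recent cached statements that referenced the table and look like INSERTs or UPDATEs
SELECT TOP (50)
       qs.last_execution_time,
       qs.execution_count,
       st.text AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.text LIKE '%TargetTable%'
  AND (st.text LIKE '%INSERT%' OR st.text LIKE '%UPDATE%')
ORDER BY qs.last_execution_time DESC;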

Test data inserted to Prod Database - how do I delete that record

I mistakenly inserted a few test records using the PROD GUI, which got written to the PROD database. Is there a way to find which tables and columns those records touched?
Thanks
I suppose you don't have a running trace, CDC, or another tracking mechanism enabled, so it seems like the following steps would be a reasonable solution:
1. Make sure that you can't find and drop that data from the application GUI.
2. Run a SQL Profiler trace using the Tuning template (it will give you enough information). Include the ApplicationName and HostName columns to identify your connection.
3. Insert one more test record using the UI (try to do the same operations as you did before).
4. Stop the trace and find the data you've inserted in it.
5. Identify other modifications which were done from your application using ApplicationName, HostName, and SPID.
6. Create a SQL script to delete those records.
7. Identify the records which you had inserted before (they were probably inserted into the same tables).
8. Write a query to delete them too.
9. Open a transaction.
10. Delete those records.
11. Check that you have deleted only the needed records.
12. Commit the transaction (a sketch of steps 9-12 follows this list).
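Steps 9-12 might look like this; the table name and key values are purely illustrative and would come from what the trace showed:

BEGIN TRANSACTION;

-- delete only the test records identified from the trace (illustrative table and keys)
DELETE FROM dbo.SomeTable
WHERE SomeTableId IN (1001, 1002);

-- @@ROWCOUNT reports how many rows the DELETE above removed; verify before committing
SELECT @@ROWCOUNT AS rows_deleted;

-- COMMIT TRANSACTION;   -- run only after the check above; otherwise ROLLBACK TRANSACTION;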
UPD: according to the comment on this answer (with which I completely agree), if you have a DEV or TEST environment on which you can do the same operation, do it there and find the modified records. After that, find the modified records in the same tables on PROD.
P.S. I cannot guarantee that by following these steps you will be able to clean up the data you've inserted, but you probably will. I also recommend creating a full backup before deleting data.
If you have proper transaction logging enabled and are using SQL Server 2008 or above, you can try using the Change Data Capture stored procedures (Transact-SQL) and check the changes that happened to the tables. Hope it helps.
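A rough sketch of enabling CDC on a table and then reading the captured changes (schema and table names are illustrative; changes are only captured from the point CDC is enabled onward):

-- enable CDC at the database level, then for the table of interest
EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
     @source_schema = N'dbo',
     @source_name   = N'Orders',
     @role_name     = NULL;

-- later, read everything captured for that table so far
DECLARE @from_lsn binary(10) = sys.fn_cdc_get_min_lsn(N'dbo_Orders'),
        @to_lsn   binary(10) = sys.fn_cdc_get_max_lsn();

SELECT *
FROM cdc.fn_cdc_get_all_changes_dbo_Orders(@from_lsn, @to_lsn, N'all');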
Well, you could track through the code to see what tables it touches. Run Profiler on dev to see what code it sends or what procs it calls when you enter a new record the same way that you did on prod.
If you have formal PK and FK relationships, you will likely find out by trial and error, because the database won't let you delete the parent records until all the children are deleted. Also test with some other record on the dev environment to figure out what tables might be involved. Or you could script the FKs to see what other tables are related to the parent table.
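Scripting those relationships can be as simple as querying the catalog views; a sketch (the parent table name is illustrative):

-- list the child tables whose foreign keys reference a given parent table
SELECT fk.name AS fk_name,
       OBJECT_NAME(fk.parent_object_id)     AS child_table,
       OBJECT_NAME(fk.referenced_object_id) AS parent_table
FROM sys.foreign_keys AS fk
WHERE fk.referenced_object_id = OBJECT_ID('dbo.Orders');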
If you have auditing (as every Enterprise solution should have, but I digress), you can often find out by looking in the audit tables for transactions at that time. Our audit tables have both the datetime of the transaction and the user which makes it easier to filter for these things.
Of course, if you know your data model, you should have a pretty good idea before you start. Or if you have a particular id that is in all the child tables and you do not have nice convenient FKs, then you could check out the system tables to find what tables have that column name. That assumes a fairly standard naming convention, though. If you call the same column different things in different tables, you might miss some.
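That column-name search is a one-liner against the metadata views; a sketch (the column name is illustrative):

-- find every table containing a column with a given name
SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_NAME = 'OrderId'
ORDER BY TABLE_SCHEMA, TABLE_NAME;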
If you are using an ORM, there should be some way to check what tables are in the object related to the particular task you did. So if you inserted a test order, for instance, check out what is contained in the order object.

Viewing database records realtime in WPF application

Disclaimer: I must use a Microsoft Access database and I cannot connect my app to a server to subscribe to any service.
I am using VB.net to create a WPF application. I am populating a listview based on records from an access database which I query one time when the application loads and I fill a dataset. I then use LINQ to dataset to display data to the user depending on filters and whatnot.
However, the Access table is modified many times throughout the day, which means the user will have "old data" as the day progresses if they do not reload the application. Is there a way to connect the Access database to the VB.net application such that it can raise an event when a record is added, removed, or modified in the database? I am fine with any code required IN the event handler; I just need to figure out a way to trigger a VB.net application event from the Access table.
Think of what I am trying to do as viewing real-time edits to a database table, but within the application.. any help is MUCH appreciated and let me know if you require any clarification - I just need a general direction and I am happy to research more.
My solution idea:
Create an audit table for MS Access changes.
Create a separate worker thread within the user's application to query the audit table for changes every 60 seconds.
If changes are found, modify the affected dataset records.
Raise an event on dataset record update to refresh any affected objects/properties.
There are a couple of ways to do what you want, but you are basically on the right track with your process.
As far as I know, there is no direct way to get events from the database drivers to let you know that something changed, so polling is the only solution.
If the MS Access database is an Access 2010 ACCDB database, and you are using the ACE drivers for it (if Access is not installed on the machine where the app is running), you can use the new data macro triggers to record changes to the tables in the database automatically to an audit table that records new inserts, updates, deletes, etc. as needed.
This approach is the best since these happen at the ACE database driver level, so they will be as efficient as possible and transparent.
If you are using older versions of Access, then you will have to implement the auditing yourself. Allen Browne has a good article on that. A bit of search will bring other solutions as well.
You can also just run some queries on the tables you need to monitor.
In any case, you will need to monitor your audit or data table as you mentioned.
You can monitor for changes much more frequently than every 60 seconds; depending on the load on the database, the number of clients, etc., you could easily check every few seconds.
I would recommend though that you:
Keep a permanent connection to the database while your app is running: open a dummy table for reading, and don't close it until you shut down your app. This has no performance cost to anyone, but it will ensure that the expensive lock file creation is done only once, and not for every query you run. This can have a huge performance impact. See this article for more information on why.
Make it easy for your audit table (or for your data table) to be monitored: include a timestamp column that records when a record was created and last modified. This makes checking for changes very quick and efficient: you just need to check if the most recent record modified date matches the last one you read.
With Access 2010, it's easy to add the trigger to do that. With older versions, you'll need to do that at the level of the form.
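Either way, the check the worker thread runs against that audit table can stay very small. A minimal sketch in Access SQL, returning only the rows added or changed since the last successful check (the table, column, and parameter names are illustrative):

PARAMETERS pLastCheck DateTime;
SELECT *
FROM tblAudit
WHERE ModifiedAt > pLastCheck
ORDER BY ModifiedAt;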
If you are using SQL Server:
Up to SQL Server 2005 you could use Notification Services.
Since SQL Server 2008 R2 it has been replaced by StreamInsight.
Other database management systems and alternatives:
Oracle
Handle changes in a middle tier and signal the client.
Or poll. This requires you to configure the interval so that you do not miss a change for too long.
In general:
When a server has to be able to send messages to clients, it needs to keep a channel/socket open to each client; this can become very expensive when there are a lot of clients. I would advise against a server push and instead try to do intelligent polling. Intelligent polling means an interval that is as large as possible and appropriate caching on the server to prevent hitting the database too many times for the same data.

Sybase trigger to find the deleted query

In my Sybase server, some rows of a table (TBL_RESOURCE) are being deleted from an unknown source at random intervals. I have tried a lot, but I am not able to locate from which source/file/process this data is being deleted. Is there any mechanism to locate this problem? I need to find out who is deleting these rows.
How can we find out who deleted them and from which file?
Can we use a trigger to find the source of deletion?
OK, so you do not have stored procs or transactions (which would allow the normal security: grant permissions to sprocs only; no direct updates to tables from users); therefore you have direct grants to users. Which means they can insert/update/delete from any client-side program, including Excel. Therefore it is quite possible that there is no code segment in the source code of the app that deletes from the table. Having rows deleted at random moments is the nature of an online database; protecting it from unauthorised deletes is the responsibility of the DBA.
I presume you have given permissions to specific people, not the whole world, and you are not sure exactly who is doing the nasty. The easiest is to simply ask the group.
The next easiest is to turn on auditing for that table, or for the group (or role) of users permitted. But if you have not set up auditing, that can pose an obstacle.
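If auditing is available, enabling it for deletes on just that one table is roughly a one-liner. This is only a sketch: it assumes the sybsecurity database and auditing are already installed, and the exact sp_audit arguments can vary by ASE version, so check the documentation for yours.

-- audit delete statements against the table, for all users
sp_audit "delete", "all", "TBL_RESOURCE", "on"
go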
Third, the trigger.
There are other methods, but they have a substantial overhead (22%), require substantial implementation labour, and you will have to wade through massive amounts of data.
If your environment is as insecure and unstable as it sounds, and the table is not supposed to be deleted from, simply revoke permissions on that (one) table, and wait until someone comes to you crying that their permissions have changed.
"This is assuming you don't have every single user logging in as DBA or some other [privileged] account."
Which of course is a very silly thing to do, asking for, pleading for disaster. As silly as granting delete on all tables to all users. I see where you are coming from.
Something like this would do the trick.
create trigger deltrig
on TBL_RESOURCE
for delete
as
BEGIN
    -- one row is written per deleted row: record who deleted it and when
    insert TBL_LOG (modifiedBy, modifiedDate)
    select user_name(), getdate()
    from deleted
END
(you have to create the logging table TBL_LOG obviously)
Yes, you can use triggers. See the Sybase docs on how to create delete triggers. In the trigger code, you can choose to log information (insert) like the current user, user id, etc. into a table for auditing.

Does Oracle have something like Change Data Capture in SQL Server 2008?

Change Data Capture is a new feature in SQL Server 2008. From MSDN:
Change data capture provides historical change information for a user table by capturing both the fact that DML changes were made and the actual data that was changed. Changes are captured by using an asynchronous process that reads the transaction log and has a low impact on the system.
This is highly sweet - no more adding CreatedDate and LastModifiedBy columns manually.
Does Oracle have anything like this?
Sure. Oracle actually has a number of technologies for this sort of thing depending on the business requirements.
Oracle has had something called Workspace Manager for a long time (8i days) that allows you to version-enable a table and track changes over time. This can be a bit heavyweight, though, because it is based on views with instead-of triggers.
Starting in 11.1 (as an extra-cost option to the Enterprise Edition), Oracle has an option called Total Recall that asynchronously mines the redo logs for data changes, which get logged to a separate table that can then be queried using flashback query syntax on the main table. Total Recall automatically partitions and compresses the historical data and automatically takes care of purging it after a specified retention period.
Oracle has a LogMiner technology that mines the redo logs and presents transactions to consumers. There are a number of technologies that are then built on top of LogMiner including Change Data Capture and Streams.
You can also use materialized views and materialized view logs if the goal is to replicate changes.
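If the replication route fits, the moving parts are just a materialized view log on the source table and a fast-refreshable materialized view on top of it; a sketch (table names are illustrative):

-- record changes on the source table so a dependent MV can be fast-refreshed
CREATE MATERIALIZED VIEW LOG ON hr.employees WITH PRIMARY KEY;

-- the replica; refresh it on demand (or on a schedule) to pull only the logged changes
CREATE MATERIALIZED VIEW employees_copy
  REFRESH FAST ON DEMAND
  AS SELECT * FROM hr.employees;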
Oracle has Change Data Notification where you register a query with the system and the resources accessed in that query are tagged to be watched. Changes to those resources are queued by the system allowing you to run procs against the data.
This is managed using the DBMS_CHANGE_NOTIFICATION package.
Here's an infodoc about it:
http://www.oracle-base.com/articles/10g/dbms_change_notification_10gR2.php
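Registering interest via that package follows roughly this pattern (a sketch adapted from the article above; it assumes a callback procedure named chnf_callback already exists with the signature the package expects, and the queried table is illustrative):

DECLARE
  reginfo  CQ_NOTIFICATION$_REG_INFO;
  regid    NUMBER;
  v_cursor SYS_REFCURSOR;
BEGIN
  -- register the callback; QOS_ROWIDS asks for rowid-level detail in notifications
  reginfo := CQ_NOTIFICATION$_REG_INFO('chnf_callback',
                                       DBMS_CHANGE_NOTIFICATION.QOS_ROWIDS, 0, 0, 0);
  regid   := DBMS_CHANGE_NOTIFICATION.NEW_REG_START(reginfo);

  -- any query executed before REG_END registers its underlying objects for notification
  OPEN v_cursor FOR SELECT employee_id FROM hr.employees;
  CLOSE v_cursor;

  DBMS_CHANGE_NOTIFICATION.REG_END;
END;
/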
If you are connecting to Oracle from a C# app, ODP.Net (Oracle's .Net client library) can interact with Change Data Notification to alert your C# app when Oracle changes are made - pretty cool. Goodbye to polling repeatedly for data changes if you ask me - just register the table, set up change data notification through ODP.Net and voilà, C# methods get called only when necessary. Woot!
"no more adding CreatedDate and LastModifiedBy columns manually" ... as long as you can afford to keep complete history of your database online in the redo logs and never want to move the data to a different database.
I would keep adding them and avoid relying on built-in database techniques like that. If you have a need to keep historical status of records then use an audit table or ship everything off to a data warehouse that handles slowly changing dimensions properly.
Having said that, I'll add that Oracle 10g+ can mine the log files simply by using flashback query syntax. Examples here: http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_10002.htm#i2112847
This technology is also used in Oracle's Datapump export utility to provide consistent data for multiple tables.
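For reference, the flashback query syntax mentioned above is very compact (the table, column, and interval here are illustrative, and it requires sufficient undo retention):

-- read the row as it existed 15 minutes ago
SELECT *
FROM hr.employees
AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '15' MINUTE)
WHERE employee_id = 100;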
I believe Oracle has provided auditing features since 8i; however, the tables used to capture the data are rather complex and there is a significant performance impact when this is turned on.
In Oracle 8i you could only enable this for an entire database and not a table at a time; however, 9i introduced Fine-Grained Auditing, which provides far more flexibility. This has been expanded upon in 10g/11g.
For more information see http://www.oracle.com/technology/deploy/security/database-security/fine-grained-auditing/index.html.
Also in 11g Oracle introduced Audit Vault, which provides secure storage for audit information; even DBAs cannot change this data (according to Oracle's documentation; I haven't used this feature yet). More info can be found at http://www.oracle.com/technology/deploy/security/database-security/fine-grained-auditing/index.html.
Oracle has a mechanism called Flashback Data Archive. From A Fresh Look at Auditing Row Changes:
Oracle Flashback Query retrieves data as it existed at some time in the past.
Flashback Data Archive provides the ability to track and store all transactional changes to a table over its lifetime. It is no longer necessary to build this intelligence into your application. A Flashback Data Archive is useful for compliance with record stage policies and audit reports.
CREATE TABLESPACE SPACE_FOR_ARCHIVE
  DATAFILE 'C:\ORACLE DB12\ARCH_SPACE.DBF' SIZE 50G;

CREATE FLASHBACK ARCHIVE longterm
  TABLESPACE space_for_archive
  RETENTION 1 YEAR;

ALTER TABLE EMPLOYEES FLASHBACK ARCHIVE LONGTERM;

select EMPLOYEE_ID, FIRST_NAME, JOB_ID, VACATION_BALANCE,
       VERSIONS_STARTTIME TS,
       nvl(VERSIONS_OPERATION,'I') OP
from   EMPLOYEES
       versions between timestamp timestamp '2016-01-11 08:20:00' and systimestamp
where  EMPLOYEE_ID = 100
order  by EMPLOYEE_ID, ts;
