Audit on Oracle schema for DML statements - database

I need to audit an Oracle user's inserts/updates/deletes when they come from programs other than the ones I have written.
I googled around a bit to find what I need. I know you can write your own DB triggers.
And I know there are two major mechanisms from Oracle (at least, that is what I found):
you can use fine-grained auditing, and you can use the audit trail.
I think in my case the audit trail comes close but just isn't what I am looking for, because I need to know from which program the connection to the DB is coming. For example, I need to register all connections doing inserts/updates/deletes, together with the statements they executed, that come from SQL Developer or Toad. All other connections may pass without audit.
On a daily basis I have lots of connections, so registering everything is too much overhead.
I hope one of you has a good idea on how to set this up.
Regards

You can use an Oracle product: Oracle Audit Vault and Database Firewall. Because you also want to know from which program the connection comes, you need the Database Firewall. It can monitor all traffic to the database, recording the IP address and the client program from which the connection was started. You can also specify whether you want to audit DML, DDL, or other statements. Data is stored locally in the product's own database, not in the secured target (your database). Have a look, it is just what you need: http://www.oracle.com/technetwork/products/audit-vault-and-database-firewall/overview/overview-1877404.html
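If the extra product is not an option, the fine-grained auditing mentioned in the question can be restricted to particular client programs through SYS_CONTEXT('USERENV', 'MODULE'). A minimal sketch, assuming a hypothetical APP_OWNER.ORDERS table; the module strings that SQL Developer and Toad report are assumptions here, so check them in V$SESSION first:

    BEGIN
      DBMS_FGA.ADD_POLICY(
        object_schema   => 'APP_OWNER',      -- hypothetical schema
        object_name     => 'ORDERS',         -- hypothetical table
        policy_name     => 'AUDIT_TOOL_DML',
        audit_condition => 'SYS_CONTEXT(''USERENV'',''MODULE'') IN (''SQL Developer'', ''TOAD.exe'')',
        statement_types => 'INSERT,UPDATE,DELETE',
        audit_trail     => DBMS_FGA.DB_EXTENDED);   -- also keep the SQL text
    END;
    /

Audited statements then show up in DBA_FGA_AUDIT_TRAIL. Whether the per-statement overhead is acceptable at your connection volume is something to test, which is one reason an off-host option like the Database Firewall is attractive.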

Related

Alarm DB Logger (Intouch) configuration with SQL Server Mirroring

I have an installation with two SCADA (InTouch) HMIs, and I want to save the data in a SQL Server database that will be on another computer. To be as sure as possible that I have an operating database, I'm going to set up SQL Server mirroring, so I will have two SQL Server databases with a distributor. About this I don't have any doubt. To make it easy to understand, I've made an image with the architecture of the system.
[Architecture diagram]
My doubt is: how do I configure the Alarm DB Logger to make it point, automatically, to the secondary database in case the principal database goes down because of some unknown failure?
PS: I don't know if it's even possible.
Configure the database for automatic failover; the connections are then handled automatically when a failover occurs. Read up on mirroring endpoints (a minimal endpoint sketch follows the links below).
The links below should have more than enough information.
https://learn.microsoft.com/en-us/sql/database-engine/database-mirroring/role-switching-during-a-database-mirroring-session-sql-server
https://learn.microsoft.com/en-us/sql/database-engine/database-mirroring/the-database-mirroring-endpoint-sql-server
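For orientation, this is roughly what creating a mirroring endpoint looks like on each partner; the endpoint name and port number here are only illustrative:

    CREATE ENDPOINT Mirroring
        STATE = STARTED
        AS TCP (LISTENER_PORT = 5022)
        FOR DATABASE_MIRRORING (ROLE = PARTNER);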
The AlarmDBLogger reads its configuration from the registry, so you could try the following:
Stop AlarmLogger
Change ServerName in registry [HKLM].[Software].[Wonderware].[AlarmLogger].[SQLServer]
Start AlarmLogger
But what about the two InTouch nodes? What if one of them fails? You would have to make sure one of them logs alarms, and that they don't log duplicates!
The standard controls and ActiveX components for alarms use a specific view in the alarm database. You cannot change that behaviour, but you can script a server change in InTouch or System Platform.
Keep in mind that redundancy needs to be tested, and should only be implemented if 100% uptime is necessary. In many cases you will be creating new problems to solve instead of solving an actual problem.

Log inserted/updated/deleted rows in all tables for a given database in SQL Server 2008

What's the best way to track/log inserted/updated/deleted rows in all tables for a given database in SQL Server 2008?
Or is there a better "Audit" feature in SQL Server 2008?
The short answer is that there is no single solution that fits all. It depends on the system and the requirements, but here are a couple of different approaches.
DML Triggers
Relatively easy to implement, because you only have to write one trigger that works well for one table and then apply it to the other tables (a rough trigger sketch follows the links below).
Downside is that it can get messy when you have a lot of tables and even more triggers. Managing 600 triggers for 200 tables (insert, update and delete trigger per table) is not an easy task.
Also, it might cause a performance impact.
Creating audit triggers in SQL Server
Log changes to database table with trigger
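As an illustration of the approach (the table and column names are made up), a per-table trigger typically copies the inserted and deleted pseudo-tables into an audit table:

    CREATE TRIGGER trg_Customer_Audit
    ON dbo.Customer
    AFTER INSERT, UPDATE, DELETE
    AS
    BEGIN
        SET NOCOUNT ON;
        -- new/changed values come from "inserted", removed values from "deleted"
        INSERT INTO dbo.Customer_Audit (CustomerId, Name, Operation, AuditDate)
        SELECT i.CustomerId, i.Name, 'INSERT/UPDATE', GETDATE()
        FROM inserted i
        UNION ALL
        SELECT d.CustomerId, d.Name, 'DELETE', GETDATE()
        FROM deleted d
        WHERE NOT EXISTS (SELECT 1 FROM inserted i WHERE i.CustomerId = d.CustomerId);
    END;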
Change Data Capture
Very easy to implement and natively supported, but only in Enterprise Edition, which can cost a lot of $ ;). Another disadvantage is that CDC is still not as evolved as it should be. For example, if you change your schema, history data is lost.
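For reference, enabling it is only a couple of calls (Enterprise Edition; the database and table names here are hypothetical):

    USE MyDatabase;
    EXEC sys.sp_cdc_enable_db;
    EXEC sys.sp_cdc_enable_table
        @source_schema = N'dbo',
        @source_name   = N'Customer',
        @role_name     = NULL;   -- NULL = no gating role for reading the change data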
Transaction log analysis
The biggest advantage of this is that all you need to do is put the database in full recovery mode, and all the info will be stored in the transaction log.
However, if you want to do this correctly you'll need a third-party log reader, because reading the log is not natively supported (a rough example follows the links below).
Read the log file (*.LDF) in SQL Server 2008
SQL Server Transaction Log Explorer/Analyzer
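For ad-hoc digging (not a supportable auditing mechanism), the undocumented fn_dblog function will list records from the active portion of the log; the output columns and operation names are version-dependent, so treat this as a sketch only:

    SELECT [Current LSN], Operation, [Transaction ID]
    FROM fn_dblog(NULL, NULL)   -- NULL, NULL = the whole active log
    WHERE Operation IN (N'LOP_INSERT_ROWS', N'LOP_MODIFY_ROW', N'LOP_DELETE_ROWS');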
If you want to implement this, I'd recommend you try out some of the third-party tools that exist out there. I worked with a couple of tools from ApexSQL, but there are also good tools from Idera and Netwrix.
ApexSQL Log – auditing by reading transaction log
ApexSQL Comply – uses traces in the background and then parses those traces and stores results in central database.
Disclaimer: I’m not affiliated with any of the companies mentioned above.
Change Data Capture is designed to do what you want, but it requires each table to be set up individually, so depending on the number of tables you have, there may be some logistics to it. It will also only keep the data in the capture tables for a couple of days by default, so you may need an SSIS package to pull it out and store it for longer periods.
I don't remember whether there is already a tool for this, but you could always use triggers (there you have access to the temporary tables with the changed rows, INSERTED and DELETED). Unfortunately, it could be quite a lot of work if you want to track all tables. I believe there should be a simpler solution, but as I said, I don't remember it.
EDIT.
Maybe Change Tracking could be helpful (a minimal sketch follows the links):
http://msdn.microsoft.com/en-us/library/cc280462.aspx
http://msdn.microsoft.com/en-us/library/cc280386.aspx
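Turning it on is short (the database and table names are hypothetical); note that change tracking records which rows changed, not the old and new values:

    ALTER DATABASE MyDatabase
        SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

    ALTER TABLE dbo.Customer
        ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON);

    -- later: rows changed since sync version 0
    SELECT ct.CustomerId, ct.SYS_CHANGE_OPERATION
    FROM CHANGETABLE(CHANGES dbo.Customer, 0) AS ct;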
This allows you to do audits at the database level; it may or may not be enough to meet the business requirements, since database records usually don't make much sense without the logic that glues them together. For instance, knowing that user x inserted a record into the "time_booked" table with foreign keys to the "projects", "users" and "time_status" tables may not mean much without the SQL query that joins those four tables together.
You may also need to have each database user connect with their own user ID - this is fine with integrated security and a client app, but probably won't work with a website using a connection pool.
The SQL Server logs cannot be analyzed just like that. There are some third-party tools available to read the logs, but as far as I know you can't query them for statistics and such. If you need this kind of info you'll have to create some sort of auditing yourself to capture these events in separate tables. You can use DDL triggers for that.
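A hedged sketch of such a DDL trigger, assuming a hypothetical dbo.DDL_Audit table with an XML column; keep in mind that a DDL trigger captures schema changes, not DML:

    CREATE TRIGGER trg_Audit_DDL
    ON DATABASE
    FOR DDL_DATABASE_LEVEL_EVENTS
    AS
    BEGIN
        -- EVENTDATA() returns an XML document describing the DDL statement
        INSERT INTO dbo.DDL_Audit (EventData, AuditDate)
        VALUES (EVENTDATA(), GETDATE());
    END;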

How to protect a database from the Server Administrator in Sql Server

We have a requirement from a client to protect the database our application uses, even from their local administrators (Auditors just gave them that requirement).
In their requirement, protecting the data means that the Sql Server admin cannot read, nor modify sensitive data stored in tables.
We could do that with encryption in SQL Server 2005, but that would interfere with our third-party ORM, and it has other cons, like indexing, etc.
In SQL Server 2008 we could use TDE, but I understand that this solution doesn't prevent a user with SQL Server admin rights from querying the database.
Is there any best practice or known solution to this problem?
This problem could be similar to the one of having an application hosted by a host provider, and you want to protect the data from the host admins.
We can use Sql Server 2005 or 2008.
This has been asked a lot in the last few weeks. The answers usually boil down to:
a) If you don't control the application, you are doomed to trust the DBA.
b) If you do control the application, you can encrypt everything with a key known only to the application and decrypt on the way out. It will hurt performance a bit (or a lot), though; that's why TDE exists. A variant of this, to prevent tampering rather than reading, is to store a cryptographic hash of the values in the column and check it on application access.
c) Do extensive auditing, so you can monitor what your admins are doing.
I might have salary information in my tables, and I don't want my trusted DBAs to see it.
Faced with the same problem, we have narrowed our options to:
1- Encrypt outside SQL Server, before inserts and updates, and decrypt after selects, i.e. using .NET encryption.
Downside: you lose some indexing and searching capabilities and cannot use LIKE or BETWEEN.
2- Use third-party tools (at the I/O level) that block CRUD against the database unless a password is provided, e.g. www.Blockkk.com.
Downside: you will need to trust a third-party tool installed on your server. It might not keep up with SQL Server patches, etc.
3- Use an auditing solution that keeps track of selects, inserts, deletes, etc. and notifies you (by email or event log) if violations occur. A sample violation could be a DBA running a SELECT on your Salaries table; then fire the DBA and change everyone's salary. (A rough sketch of such an audit specification follows below.)
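As an illustration of option 3 with the built-in auditing in SQL Server 2008 Enterprise (the file path and the dbo.Salaries table are made up):

    USE master;
    CREATE SERVER AUDIT Salary_Audit
        TO FILE (FILEPATH = 'D:\AuditLogs\');
    ALTER SERVER AUDIT Salary_Audit WITH (STATE = ON);

    USE MyDatabase;
    CREATE DATABASE AUDIT SPECIFICATION Salary_Access
        FOR SERVER AUDIT Salary_Audit
        ADD (SELECT, UPDATE ON dbo.Salaries BY public)
        WITH (STATE = ON);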
Auditors always ask for this, like they ask for other things that can never be done.
What you need to do is put it in risk-mitigation terms and show what controls you do have (tracking when users are elevated to administrators, what they did, and that they were de-elevated afterwards) instead of talking in absolutes.
I once had a boss ask for total system redundancy without defining what he meant or how much he was willing to pay and sacrifice.
I think the right solution would be to only allow trusted people to be DBAs.
It is implicit in being a DBA that you have full access, so in my opinion your auditor should demand that you have procedures for restricting who gets DBA access.
That way you work with the system through processes instead of working against the system (i.e. SQL Server).
Having a person you don't trust as DBA would be nuts...
If you don't want any people in the admin group on the server to be able to access the database, then remove the "BUILTIN\Administrators" user on the server.
However, make sure you have another user that is a sysadmin on the server!
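Roughly (the replacement login name is hypothetical, and make sure it works before dropping the group!):

    -- add a dedicated sysadmin first, then drop the Windows admin group login
    CREATE LOGIN [MYDOMAIN\DbaAccount] FROM WINDOWS;
    EXEC sp_addsrvrolemember 'MYDOMAIN\DbaAccount', 'sysadmin';
    DROP LOGIN [BUILTIN\Administrators];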
Another approach I have heard of a company implementing, though I haven't seen it myself:
There is a government body that issues a kind of timestamped certificate.
Each DB change is sent to an asynchronous queue, timestamped with this certificate, and stored off site. This way no one can delete anything without breaking the timestamp chain.
I don't know exactly how this works at a deeper level.

Database mirroring/Replication, SQL Server 2005

I have two database servers running SQL Server 2005 Enterprise, and I want to make one of them a mirror database server.
What I need is to create an exact copy of the database from the primary server on the mirror server, so that when the primary server goes down we can switch the database IP in the application to use the mirror server.
I have examined "mirror" feature on SQL Server 2005, and based on this article:
http://aspalliance.com/1388_Database_Mirroring_in_Microsoft_SQL_Server_2005.all
The mirror database cannot be accessed directly; however snapshots of the mirror database can be taken for read only purposes. (Prerequisites no. 4)
So how can it be useful when I can't access it while the primary server is down?
I've been thinking about creating a regular backup on the primary server and restoring it on the mirror server on an hourly basis, but that's quite inefficient (slow), especially if I want an exact copy (since hundreds of records are added every minute).
Any other suggestion?
EDIT:
Maybe what I meant was replication, not mirroring (thanks JP for commenting).
They are referring to the fact that you can't perform queries on the mirrored copy, but you can get around that limitation by creating a snapshot of the mirrored database. This is often done to create a read-only database copy for reporting uses. You would have full access to the mirror if the primary were to fail, but it will not fail over automatically.
Log shipping is another option, which allows you to query (read-only) the standby database without having to create a snapshot.
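For illustration, a snapshot of the mirror is created like any other database snapshot; the names and path here are made up, and the NAME value has to match the mirrored database's logical data file name:

    -- run on the mirror server
    CREATE DATABASE MyDatabase_Report
    ON (NAME = MyDatabase_Data, FILENAME = 'D:\Snapshots\MyDatabase_Report.ss')
    AS SNAPSHOT OF MyDatabase;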
If I understand your question correctly, you shouldn't have to do that. There are several forms of role switching you can use to have your mirror take over as primary. You don't change the IP address at the application level; the cluster itself has a virtual IP address that allows access to the data at any given time (given a reasonable amount of time for the switch-over to the mirror after a primary failure). The mirror stays in sync by itself. :) There are good articles here and here on clustering.
Edit: Okay, based on the comments, check out the various options for replication.
Your confusion is common - there's a lot of ways to do disaster recovery planning with SQL Server. I've recorded a 10-minute video tutorial of SQL Server disaster recovery options including log shipping, mirroring, replication and more. If you like that one, we've got a longer one at Quest called Disaster Recovery Techniques but that one requires registration.
Instead of investigating a specific technology here, what you might want to do is tell us what your needs are, and then we can help you find out what option is right for you. The videos will give you an idea of what kinds of information you need to know before selecting a particular solution.
When using only two SQL Servers, you need to do the failover manually. The 'backup' database will be usable after you do two things (see the sketch below):
Disable mirroring on it.
Restore the database with RECOVERY (but without a backup file; this will make the database usable).
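A sketch of those two steps in Transact-SQL (the database name is illustrative):

    -- on the mirror server, after the principal has failed
    ALTER DATABASE MyDatabase SET PARTNER OFF;   -- 1: remove the mirroring session
    RESTORE DATABASE MyDatabase WITH RECOVERY;   -- 2: bring the database online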
Mirroring in this manner therefore does make sense; however, it is hard to maintain:
Moving back from the backup database to the primary is a 'pain', as you have to set up the complete mirroring again using a backup of the redundant server. This is needed to get the primary back up to speed.
My recommendation would be to get a third SQL Server into the picture that can act as a witness. The witness monitors the status of the mirrored databases. Your bonus: you get automatic failover and avoid the fail-over (and post-fail-over) issues.
If I remember correctly, the witness server can run SQL Server Express, so there is no need for the Enterprise edition on all three - just the two where the actual mirroring takes place.
Let me know if you need Transact-SQL for the commands to fail over and 'anti-fail-over' (fail back) in a two-server scenario, and I can dig them up.

How to Audit Database Activity without Performance and Scalability Issues?

I need to audit all database activity, regardless of whether it came from the application or from someone issuing SQL via other means, so the auditing must be done at the database level. The database in question is Oracle. I looked at doing it via triggers and also via something called Fine-Grained Auditing that Oracle provides. In both cases, we turned on auditing on specific tables and specific columns. However, we found that performance really sucks when we use either of these methods.
Since auditing is an absolute must due to regulations placed around data privacy, I am wondering what is best way to do this without significant performance degradations. If someone has Oracle specific experience with this, it will be helpful but if not just general practices around database activity auditing will be okay as well.
I'm not sure if it's a mature enough approach for a production system, but I had quite a lot of success with monitoring database traffic using a network traffic sniffer.
Send the raw data between the application and database off to another machine and decode and analyse it there.
I used PostgreSQL, and decoding the traffic and turning it into a stream of database operations that could be logged was relatively straightforward. I imagine it'd work on any database where the packet format is documented though.
The main point was that it put no extra load on the database itself.
Also, it was passive monitoring: it recorded all activity, but couldn't block any operations, so it might not be quite what you're looking for.
There is no need to "roll your own". Just turn on auditing:
Set the database parameter AUDIT_TRAIL = DB.
Restart the instance (AUDIT_TRAIL is a static parameter).
Log in with SQL*Plus.
Enter the statement:
    audit all;
This turns on auditing for many critical DDL operations, but DML and some other DDL statements are still not audited.
To enable auditing on these other activities, try statements like these:
    audit alter table;                                            -- DDL audit
    audit select table, update table, insert table, delete table; -- DML audit
Note: All "as sysdba" activity is ALWAYS audited to the O/S. In Windows, this means the Windows event log. In UNIX, this is usually $ORACLE_HOME/rdbms/audit.
Check out the Oracle 10g R2 Audit Chapter of the Database SQL Reference.
The database audit trail can be viewed in the SYS.DBA_AUDIT_TRAIL view.
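For example, a quick look at recent audited activity could be (column list trimmed to the commonly useful ones):

    SELECT os_username, username, userhost, action_name, obj_name, timestamp
    FROM   dba_audit_trail
    ORDER  BY timestamp DESC;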
It should be pointed out that the internal Oracle auditing will be high-performance by definition. It is designed to be exactly that, and it is very hard to imagine anything else rivaling it for performance. Also, there is a high degree of "fine-grained" control of Oracle auditing. You can get it just as precise as you want it. Finally, the SYS.AUD$ table along with its indexes can be moved to a separate tablespace to prevent filling up the SYSTEM tablespace.
Kind regards,
Opus
If you want to record copies of changed records on a target system, you can do this with Golden Gate software and not incur much in the way of source-side resource drain. Also, you don't have to make any changes to the source database to implement this solution.
Golden Gate scrapes the redo logs for transactions referring to a list of tables you are interested in. These changes are written to a 'Trail File' and can be applied to a different schema on the same database, or shipped to a target system and applied there (ideal for reducing load on your source system).
Once you get the trail file to the target system, there are some configuration tweaks you can make to perform auditing, and if needed you can invoke two Golden Gate functions to get info about the transaction:
1) Set the INSERTALLRECORDS Replication parameter to insert a new record in the target table for every change operation made to the source table. Beware this can eat up a lot of space, but if you need comprehensive auditing this is probably expected.
2) If you don't already have a CHANGED_BY_USERID and CHANGED_DATE attached to your records, you can use the Golden Gate functions on the target side to get this info for the current transaction. Check out the following functions in the GG Reference Guide:
GGHEADER("USERID")
GGHEADER("TIMESTAMP")
So no, it's not free (it requires licensing through Oracle) and will take some effort to spin up, but probably a lot less effort/cost than implementing and maintaining a custom roll-your-own solution, and you have the added benefit of shipping the data to a remote system so you can guarantee minimal impact on your source database.
If you are using Oracle, there is a feature called CDC (Change Data Capture), which is a more performance-efficient solution for this kind of audit requirement.
