I have a set of WCF services that interact with a SQL Server database, and I want to add auditing to record when each service was called and what it did.
I believe there are two options open to me:
a. add triggers to the database tables that log updates, inserts, etc. to another database
b. add an interceptor to my WCF services that logs each call, with the data needed for auditing, to a MongoDB store
What is best practice in this area, and does anyone have suggestions as to an approach to follow?
An old question, but I've been working on a library that could probably help others.
With Audit.Wcf (an extension for Audit.NET), you can log the interaction with your WCF service, and you can configure it to store the audit logs in a MongoDB database by using the Audit.MongoDB extension.
[AuditBehavior] // <-- enable the audit
public class YourService : IServiceContract
{
    public Order GetOrder(int orderId)
    {
        ...
    }
}
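To route the audit events to MongoDB, you configure the data provider once at startup. A minimal sketch following the library's documented fluent API (the connection string, database, and collection names are placeholders; verify the exact syntax against the Audit.MongoDB docs for the version you install):

Audit.Core.Configuration.Setup()
    .UseMongoDB(config => config
        .ConnectionString("mongodb://localhost:27017") // placeholder
        .Database("Audit")
        .Collection("Event"));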
As a first step, I would try enabling WCF tracing. You'd be surprised how much detail it can provide, especially for diagnostics. It's also very simple and doesn't involve any recompilation:
<system.diagnostics>
  <trace autoflush="true" />
  <sources>
    <source name="System.ServiceModel"
            switchValue="Information, ActivityTracing"
            propagateActivity="true">
      <listeners>
        <add name="sdt"
             type="System.Diagnostics.XmlWriterTraceListener"
             initializeData="SdrConfigExample.e2e" />
      </listeners>
    </source>
  </sources>
</system.diagnostics>
And if you use Verbose logging, you'll get debug-level detail of almost every event going on. Then use SvcTraceViewer.exe to go back and audit those logs.
Beyond that, look into using the Trace.* methods within your service to provide any additional detail you may need (such as calls to and from the database). Depending on how you set up the service, you can probably also look into a profiler or debugger that plugs right into your database context and outputs when it makes calls to and from the database.
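For example, here is a minimal sketch of what that instrumentation could look like inside a service operation (the _repository field is a stand-in for your own data access):

using System.Diagnostics;

public Order GetOrder(int orderId)
{
    var timer = Stopwatch.StartNew();
    Trace.TraceInformation("GetOrder({0}) called", orderId);
    Order order = _repository.Load(orderId); // hypothetical database call
    Trace.TraceInformation("GetOrder({0}) completed in {1} ms", orderId, timer.ElapsedMilliseconds);
    return order;
}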
I've started using the logback DBAppender to centralize the logging of my Java microservices, as described here: https://logback.qos.ch/manual/appenders.html
Basically, when the applications start up, all of their logging events are inserted into three DB tables with just this snippet in the logback configuration file:
<appender name="DB" class="ch.qos.logback.classic.db.DBAppender">
  <connectionSource class="ch.qos.logback.core.db.DriverManagerConnectionSource">
    <driverClass>com.mysql.jdbc.Driver</driverClass>
    <url>jdbc:mysql://host_name:3306/database_name</url>
    <user>username</user>
    <password>password</password>
  </connectionSource>
</appender>
My question is: since there are multiple DBAppenders (one per microservice) inserting log events into the same three tables, could the DB tables get locked when they are accessed at the same time? Or is this situation handled by logback somehow?
Sharing access to the same DB tables looks to me like a microservice antipattern.
I've tried to search in the documentation but couldn't find any answer.
Thanks for sharing your knowledge with me
How would you recommend defining key/value expressions in Camel routes for things you want to save for auditing, and having them picked up and written to a database transparently?
I.e., the route contains an array or set of expressions for the values to save for auditing, but doesn't know how they actually get picked up and written to a DB.
This would be like Mule's auditing feature, where you can put <flow> elements in the Mule XML and define expressions to save to Mule's DB for tracking.
I have looked at Interceptor, Event Notifiers, Tracers, WireTaps, MDC Logging - I am sure the answer lies in one or a combination of these elements but it's not clear to me.
I'm using this example of Mule auditing XML from its documentation as a comparison:
<flow name="bizFlow">
  <tracking:custom-event event-name="Retrieved Employee" doc:name="Custom Business Event">
    <tracking:meta-data key="Employee ID" value="#[payload['ID']]"/>
    <tracking:meta-data key="Employee Email" value="#[payload['Email']]"/>
    <tracking:meta-data key="Employee Git ID" value="#[payload['GITHUB_ID']]"/>
  </tracking:custom-event>
</flow>
Thanks very much
For auditing I used wireTap to send the exchange to a special audit route, where I do whatever the auditing requires. In my case it goes to a JMS queue rather than a DB, but that doesn't matter.
There is only one restriction: whatever goes to auditing must not be changed by the main route after the wireTap (both run in parallel), so I cloned the auditing data into a dedicated exchange property before the wireTap for use in the audit route.
I have a highly standardized project following DDD (Domain-Driven Design), which means each layer has its own responsibilities and no layer knows about any layer other than itself and the Domain layer. The solution is split into separate projects per layer (WebApplication, Infra.Data, Domain, and so on).
My Infra.Data layer is responsible for connecting to the database, and I'm persisting using Entity Framework.
My problem is: in order to make it work with SQL Server databases, I need to add a reference to EntityFramework.SqlServer in my WebApplication layer, which breaks my separation-of-concerns concept.
This is despite the same reference already existing in my Infra.Data layer, which is the only place it should be.
If I remove the EntityFramework.SqlServer reference from the WebApplication layer, it stops working and throws an exception every time I try to persist data.
I need to know how to remove this reference to keep separation of concerns, because the way it is now, I'll have to change my WebApplication if I want to change my persistence. My Web layer is prohibited from having anything with the word "EntityFramework" in it. I want FULL separation of concerns, so I can change any layer without affecting the others.
If I register my <entityFramework> provider in my Web.config file, it only works if EntityFramework.SqlServer is in the project; without that reference in the WebApplication, it's missing namespaces and complains about it.
Note: My project also connects to MySQL databases successfully, and I don't need references to MySql.Data or any other MySQL library in my WebApplication layer, as expected.
Please help me, my DDD/Separation of Concerns OCD is cracking on it, thanks.
You can!
Just create this class in your Infra.Data project:
using System.Data.Entity.SqlServer;

internal static class ForceEFToCopyDllToOutput
{
    // Touching SqlProviderServices.Instance marks EntityFramework.SqlServer
    // as used, so the compiler copies the DLL to the output folder.
    private static SqlProviderServices instance = SqlProviderServices.Instance;
}
When you do this, you let the compiler know that the assembly is actually used and should be available in the bin folder.
Some consider this a hack but it's useful if you want to keep your layers free from infrastructure concerns.
You can read more about this here: DLL reference not copying into project bin
EDIT:
All you need now is to copy the connection string from your Infra.Data app.config to your WebApplication Web.config:
<connectionStrings>
  <add name="DatabaseConnectionString" providerName="System.Data.SqlClient" connectionString="..." />
</connectionStrings>
You will not be able to get rid of the Entity Framework configuration and the required DLL in your web application:
Let's say your infrastructure and domain layers depend on Entity Framework. This means these two libraries need physical access to the Entity Framework DLLs (i.e., they have the Entity Framework package installed) and need it configured.
When you run your web application, which depends on the infrastructure and domain libraries, all DLLs used by those underlying libraries need to be physically present and configured; otherwise you will get runtime issues (the program might compile, but you will get runtime errors).
Moral of the story: if application X (irrespective of the layer it belongs to) depends on libraries Y and Z, and Y and Z rely on some DLLs and require configuration, then for application X to work at runtime you need all DLLs needed by Y and Z to be available, and you must provide their configuration (web.config in your case).
You can obviously apply workarounds, such as copying the files directly and providing separate config files for each layer, but I strongly advise against that: it gets extremely messy and very hard to maintain in the long run.
Has anybody here actually implemented a logging strategy for an application running as an XBAP? Any suggestions (as code) on how to implement a simple strategy, based on your experience?
In desktop mode my app logs to a rolling log file using the integrated log4net implementation, but as an XBAP I can't log, because the file is stored in the browser cache (an app2.0 folder or something similar). So I check whether the app is browser-hosted and skip logging, since I don't even know whether the log ever gets written (it's the same codebase). What I'd like is a way to push the log to a service, such as a web service, or to post errors to some endpoint...
My XBAP runs in full-trust intranet mode.
I would log to isolated storage and provide a way for users to submit the log back to the server using either a simple PUT/POST with HttpWebRequest or, if you're feeling frisky, via a WCF service.
Keep in mind an XBAP only gets 512 KB of isolated storage, so you may actually want to push those event logs back to the server automatically. Also remember that an XBAP can only talk back to its origin server, so the service that accepts the log files must run under the same domain.
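For the upload itself, here is a minimal sketch of POSTing the isolated-storage log back to the origin server with HttpWebRequest (the endpoint URL is hypothetical and must live on the XBAP's origin domain):

using System.IO;
using System.IO.IsolatedStorage;
using System.Net;

var request = (HttpWebRequest)WebRequest.Create("https://yourserver/logs"); // hypothetical endpoint
request.Method = "POST";
request.ContentType = "text/plain";
using (var store = IsolatedStorageFile.GetUserStoreForApplication())
using (var logStream = new IsolatedStorageFileStream("Trace.log", FileMode.Open, FileAccess.Read, store))
using (var requestStream = request.GetRequestStream())
{
    // Copy the log file into the request body in chunks.
    byte[] buffer = new byte[4096];
    int read;
    while ((read = logStream.Read(buffer, 0, buffer.Length)) > 0)
        requestStream.Write(buffer, 0, read);
}
request.GetResponse().Close(); // a 2xx response means the server accepted the log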
Here's some quick sample code that shows how to set up a TextWriterTraceListener on top of an IsolatedStorageFileStream, at which point you can just use the standard Trace.Write[XXX] methods to do your logging.
using System.Diagnostics;
using System.IO;
using System.IO.IsolatedStorage;

IsolatedStorageFileStream traceFileStream = new IsolatedStorageFileStream("Trace.log", FileMode.OpenOrCreate, FileAccess.Write);
TraceListener traceListener = new TextWriterTraceListener(traceFileStream);
Trace.Listeners.Add(traceListener);
UPDATE
Here is a revised answer, given the additional details you've added to your question.
Since you mention you're using log4net in your desktop app, we can build on a dependency you're already comfortable with, as it's entirely possible to continue using log4net in the XBAP version as well. log4net does not come with an implementation that solves this problem out of the box, but it is possible to write an implementation of a log4net IAppender that communicates with WCF.
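To make that idea concrete, a custom appender boils down to subclassing log4net's AppenderSkeleton. A minimal sketch (the queuing and the actual WCF proxy call are elided; EnqueueForService is a hypothetical helper):

using log4net.Appender;
using log4net.Core;

public class WcfAppender : AppenderSkeleton
{
    protected override void Append(LoggingEvent loggingEvent)
    {
        // Render the event to a string, then queue it for batched delivery.
        string line = RenderLoggingEvent(loggingEvent);
        EnqueueForService(line);
    }

    private void EnqueueForService(string line)
    {
        // Hypothetical: add to an in-memory queue that a background thread
        // periodically flushes to the WCF logging service in batches.
    }
}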
I took a look at the implementation the other answerer linked to by Joachim Kerschbaumer (all credit due) and it looks like a solid implementation. My first concern was that, in a sample, someone might be logging back to the service on every event and perhaps synchronously, but the implementation actually has support for queuing up a certain number of events and sending them back to the server in batch form. Also, when it does send to the service, it does so using an async invocation of an Action delegate which means it will execute on a thread pool thread and not block the UI. Therefore I would say that implementation is quite solid.
Here are the steps I would take from here:
Download Joachim's WCF appender implementation
Add his projects to your solution.
Reference the WCFAppender project from your XBAP
Configure log4net to use the WCF appender. There are several settings for this logger, so I suggest checking out his sample app's config. The most important ones, however, are QueueSize and FlushLevel. You should set QueueSize high enough that, based on how much you actually log, you won't be chattering with the WCF service too much. If you're only logging warnings/errors, you can probably set it to something low; if you're logging informational messages, you'll want it a little higher. As for FlushLevel, you should probably just set it to ERROR, which guarantees that, no matter how big the queue is when an error occurs, everything is flushed the moment the error is logged.
The sample appears to use LINQ2SQL to log to a custom DB inside of the WCF service. You will need to replace this implementation to log to whatever data source best suits your needs.
Now, Joachim's sample is written in a way that's intended to be very easy for someone to download, run and understand very quickly. I would definitely change a couple things about it if I were putting it into a production solution:
Separate the WCF contracts into a separate library which you can share between the client and the server. This would allow you to stop using a Visual Studio service reference in the WCFAppender library and just reference the same contract library for the data types. Likewise, since the contracts would no longer be in the service itself, you would reference the contract library from the service.
I don't know that wsHttpBinding is really necessary here. It comes with a couple more knobs and switches than one probably needs for something as simple as this. I would probably go with the simpler basicHttpBinding and if you wanted to make sure the log data was encrypted over the wire I would just make sure to use HTTPS.
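If you set the binding up in code rather than in config, the switch is small. A minimal sketch (the service address is a placeholder and ILoggingService stands in for your own contract):

using System.ServiceModel;

[ServiceContract]
public interface ILoggingService
{
    [OperationContract]
    void Log(string[] lines);
}

// basicHttpBinding with transport security: the log data is encrypted over
// HTTPS without wsHttpBinding's extra configuration surface.
var binding = new BasicHttpBinding(BasicHttpSecurityMode.Transport);
var address = new EndpointAddress("https://yourserver/LoggingService.svc"); // placeholder
var factory = new ChannelFactory<ILoggingService>(binding, address);
ILoggingService channel = factory.CreateChannel();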
My approach has been to log to a remote service, keyed by a unique user ID or GUID. The overhead isn't very high with the usual async calls.
You can cache messages locally, too, either in RAM or in isolated storage -- perhaps as a backup in case the network isn't accessible.
Be sure to watch for duplicate events within a certain time window. You don't want to log 1,000 copies of the same Exception over a period of a few seconds.
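A minimal sketch of that kind of suppression (thread safety and cache eviction are omitted for brevity; the 30-second window is an arbitrary choice):

using System;
using System.Collections.Generic;

class DuplicateLogFilter
{
    private readonly Dictionary<string, DateTime> _lastSeen = new Dictionary<string, DateTime>();
    private readonly TimeSpan _window = TimeSpan.FromSeconds(30);

    public bool ShouldLog(string message)
    {
        DateTime last;
        if (_lastSeen.TryGetValue(message, out last) && DateTime.UtcNow - last < _window)
            return false; // identical message seen within the window; drop it
        _lastSeen[message] = DateTime.UtcNow;
        return true;
    }
}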
Also, I like to log more than just errors. You can also log performance data, such as how long certain functions take to execute (particularly out-of-process calls), or more detailed data in response to the user explicitly entering into a "debug and report" mode. Checking for calls that take longer than a certain threshold is also useful to help catch regressions and preempt user complaints.
If you are running your XBAP under partial trust, you are only allowed to write to isolated storage on the client machine. And it's just 512 KB, which you would probably want to use for something more valuable than logging, like storing the user's preferences.
You are not allowed to do any Remoting under partial trust either, so you can't use log4net's RemotingAppender.
Finally, under a partial-trust XBAP you have WebPermission to talk only to the server of your app's origin. I would recommend using a WCF service, as described in this article. We use a similar configuration in my current project and it works fine.
Then, basically, on the WCF server side you can log to any place appropriate: file, database, etc. You may also want to keep your log4net logging code and try one of the WCF log appenders available on the internet (this or this).
I ran across a new problem in the last week. Due to the nature of my project and the available budget, the small intranet web application I've been working on is both the testing and live server: it serves up the pages and also hosts the SQL Server. This will last at least until the project is out of the major development cycle. Now that the project has real users while I continue development, I duplicated the database so I'd have a safe copy to mess with that won't wreak havoc on live business data, along with a development copy of the website.
All was well until I discovered an anomaly on the test copy of the site: anything that uses a SQL data source was properly pulling its data from the test database, but anything that gets its data from a stored procedure triggered in the code-behind was pulling its data from the live database.
My confusion comes from the fact that all stored procedures and SQL data sources ultimately point back to the same connection-string setting in the web.config file to know where to connect. I just change the database name depending on whether I'm uploading the latest changes to the test or live site.
My question comes down to this: with one connection string in each site, why would my test site get its data from one database when accessed one way, and from the other database when accessed the other way?
Here's the connection string they all point back to; the names and passwords are changed for obvious reasons, but the structure is intact.
<add name="db_Connection" connectionString="Data Source=SERVERNAME;Initial Catalog=DATABASE_live;Persist Security Info=False;User ID=USERID;Password=password" providerName="System.Data.SqlClient"/>
I added a key to appSettings to reference the name of the database connection, so I could easily change it if need be without having to edit dozens of pages for the code-behind stored procedure calls.
<add key="defaultDB" value="db_Connection" />
Am I violating some rule I'm unaware of, or is there something else going on that I need to be aware of and change, so I can have a true test environment while I continue to develop an active site?
EDIT: This project is in ASP.NET 2.0 (VB); fixed the code display.
SOLUTION FOUND: I have tracked down the solution; thanks for the pointers, they got me looking elsewhere. When I copied the site to a different location for testing, I forgot to update my appSettings key for the site's location. This apparently caused the following part of the stored procedure call to grab data from the live site's web.config:
System.Web.Configuration.WebConfigurationManager.OpenWebConfiguration(pubvar_webConfig)
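For reference, resolving the connection string through ConfigurationManager avoids this class of bug entirely, because it always reads the running application's own web.config rather than an explicitly supplied path (a minimal C# illustration; the project itself is VB, and the VB equivalent is one-to-one):

using System.Configuration;

string connName = ConfigurationManager.AppSettings["defaultDB"];
string connStr = ConfigurationManager.ConnectionStrings[connName].ConnectionString;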
Change the username and password on the dev database. If your problem persists then you might have a connection string set somewhere else that you don't know about.
I would search all of the files in your solution to make sure you don't have one of the database names hard coded some place. Maybe in the designer files?
It may be worth running the two applications in different app pools via IIS (if you aren't already, of course!). This should eliminate any concurrency issues between the test and production sites at the application level.
IMHO, with a shared test/production environment, separate app pools are good practice at any time.