I am using LLBL Gen Pro v2.6 and am attempting to create a means of auditing changes made to the database. Now, I know that LLBL Gen has auditing built in via AuditorBase and dependency injection. The question I have is: I need to track not only what LLBL Gen exposes as auditable, but also the user who made the changes. From what I've seen, there isn't a built-in way of gathering this information. Has anyone used LLBL Gen's built-in auditing and found a way to do this?
Wayne E. Pfeffer
I have used LLBLGen's auditing classes. Determining the user is really something you will have to handle yourself; there are too many variables for LLBLGen to do it for you. How are your users handled? Is this a WinForms or ASP.NET application?
The best solution would be to store the UserId in a session variable or a static variable, depending on which is more appropriate for your application. In your implementation of the auditing class you can then pull the UserId from its storage place, as sketched below.
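A minimal sketch of the "storage place" idea (UserContext is a class you would define yourself, not an LLBLGen type):

public static class UserContext
{
    // WinForms: set this once at login. In ASP.NET, back the property with
    // HttpContext.Current.Session instead of a static field, since a static
    // would be shared across all users of the web application.
    public static int CurrentUserId { get; set; }
}

Your AuditorBase-derived auditor can then read UserContext.CurrentUserId whenever it builds up an audit record.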
Another potential solution would be to override the entity class or the DataAdapter classes and pass the UserId into your save methods. However, this would be a lot more work.
In my WinForms app, I accomplish this by creating a custom IPrincipal and sticking it on System.Threading.Thread.CurrentPrincipal when a user logs into the application. Then I can easily grab it from inside my LLBLGen auditing classes.
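Roughly like this (a sketch using the built-in GenericPrincipal/GenericIdentity; substitute your own IPrincipal implementation):

using System.Security.Principal;
using System.Threading;

static class PrincipalHelper
{
    // Call this when the user logs in; the role name is just a placeholder.
    public static void SignIn(string userName)
    {
        Thread.CurrentPrincipal =
            new GenericPrincipal(new GenericIdentity(userName), new[] { "User" });
    }

    // Call this from inside the auditing classes.
    public static string CurrentUserName()
    {
        return Thread.CurrentPrincipal.Identity.Name;
    }
}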
We're also about to implement auditing of changes to the db, and have the same issue of adding the user_id to the audit log. I can see you can do the pull approach, e.g. fetching the user_id from the web session (we're building a web application), but as I see it this would completely mess up the layering of the application!?
I.e. if the DAL pulls information from the presentation layer (the web session), I won't be able to use the DAL in other contexts?
Best regards,
--thomas
I'm wondering if there is a way we can add intents/entities/dialogs through the RESTful service, as this would allow my application to be extremely dynamic, adding its own entities/intents through interactions with the user. Can someone let me know if this is possible or not?
Kind regards, Harish
Yes, this is possible using the Workspace API. Example:
https://www.ibm.com/watson/developercloud/conversation/api/v1/#intents
From what you described, be aware that modifying intents and entities will trigger a retraining event, and changing a dialog tree live can possibly cause users' interactions to reset.
Also, the Workspace API is currently free to use, but it is rate limited. So it's good for updating a workspace, but not for use in a live interaction environment.
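To give an idea of the shape of a call, here is a rough C# sketch of creating an intent. The endpoint URL, version date, and basic-auth credentials are assumptions on my part; check the documentation linked above for the exact form.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class WatsonWorkspaceClient
{
    public static async Task CreateGreetingIntentAsync()
    {
        var client = new HttpClient();

        // Service credentials ({username}/{password} are placeholders).
        var token = Convert.ToBase64String(Encoding.ASCII.GetBytes("{username}:{password}"));
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", token);

        // Assumed endpoint shape; {workspace_id} and the version date are placeholders.
        var url = "https://gateway.watsonplatform.net/conversation/api/v1/"
                + "workspaces/{workspace_id}/intents?version=2017-05-26";
        var json = "{\"intent\": \"greetings\", \"examples\": [{\"text\": \"hello there\"}]}";

        var response = await client.PostAsync(url,
            new StringContent(json, Encoding.UTF8, "application/json"));
        Console.WriteLine(response.StatusCode); // expect a 2xx on success
    }
}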
For past projects (the last few have been web applications using ASP.NET MVC) we created a service that caches our reference tables (as required), used primarily for dropdown lists.
Now I'm working on a desktop application, an upgrade from VB6/Sybase to VB.NET/SQL Server.
I'm trying out WPF.
I started down the same path, building up my DAL: one entity for each reference table.
I'm at the stage now where I want to set up the business layer (some reference tables can be edited),
and I'm not sure if I should follow the same process, which is to use a ReferenceTableService to "manage" the reference tables (interacting with the DAL and controller).
This will be an application that sits on a share and is run by multiple users.
What's the best way to deal with the reference tables? Caching them doesn't seem to be an option. Should I simply load them as each user opens up a new form in the application? Perhaps using a "ReferenceTableService"?
In this case, the reference table service is a thin layer in the application, not a process running elsewhere.
I haven't done much WPF (it would be interesting to see what the WPF gurus think), but I think your existing approach is sound and I don't see why you should deviate from it.
Loading up on app start sounds reasonable; you just have to think about the expected lifetime of a user session vs the expected frequency of changes to the reference data.
Caching: if the data comes from a central service you could always introduce caching there.
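For what it's worth, the thin service could be as simple as something like this (IReferenceTableDal and ReferenceItem are placeholder names for whatever your DAL already gives you):

using System.Collections.Generic;

public class ReferenceItem
{
    public int Id { get; set; }
    public string Description { get; set; }
}

public interface IReferenceTableDal
{
    IList<ReferenceItem> Load(string tableName);
}

public class ReferenceTableService
{
    private readonly IReferenceTableDal _dal;
    private readonly Dictionary<string, IList<ReferenceItem>> _cache =
        new Dictionary<string, IList<ReferenceItem>>();

    public ReferenceTableService(IReferenceTableDal dal)
    {
        _dal = dal;
    }

    // Loads on first use, then serves from the in-memory cache.
    public IList<ReferenceItem> Get(string tableName)
    {
        IList<ReferenceItem> items;
        if (!_cache.TryGetValue(tableName, out items))
        {
            items = _dal.Load(tableName);
            _cache[tableName] = items;
        }
        return items;
    }

    // Call after an edit, or on a timer, depending on how often the data changes.
    public void Refresh(string tableName)
    {
        _cache.Remove(tableName);
    }
}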
I am building an application in Qt/QML.
I have a table view of the database (PostgreSQL).
Is there a way to dynamically refresh my table if there is any change in the database?
One not-so-efficient way to do it is to keep sending periodic SQL queries (polling).
Is there any automatic way to keep my view refreshed?
I am open to using any other database if required.
Qt seems to support the NOTIFY mechanism of PostgreSQL databases (see QSqlDriver::subscribeToNotification()). Googling for it, I found some bug reports, so I'm not sure how well implemented it is. Since I've never used it, I'll have to refer you to Google.
If you use QSqlTableModel (or an editable subclass of QSqlQueryModel) with QTableView, any edits made will immediately be visible.
I need to build an offline database application on WP7.
The app is simple: it's about taking orders from our clients, then transferring them to the main server (MS SQL).
I've spent days reading about the existing technologies, but I'm still confused. Which is right for this project?
Sync Framework.
It looks good, but as I understand it, it provides single tables with no references between them. I would have to build all the references on the client side. Sad.
Entity Framework on the server side.
And I have no clue what I can use on the client side. Is there a way to serialize an entity object to isolated storage, then restore it and continue working with it? Maybe I can use the Sync Framework, but the scheme would become strange then, kind of one-way.
Working with WCF and XML. This is the simplest for me: a lot of code and conversion, but in this case I understand the data flow. On the other hand, I already have an app with raw SQL queries; I want to be more advanced.
Using external databases (siaqodb, for example).
Which one? siaqodb supports a "sync provider", but it doesn't support references between objects, so I'd have to build them myself? Any gain? I don't know.
Is there another way to build such apps? Please point it out.
If this has to be done offline, then I would generally use something like:
storing the minimal amount of required data within isolated storage, using a WP7-specific database like Sterling (see the sketch below)
using either a new REST or a new RIA/WCF service, with objects/functions you define, in order to provide the required data synchronisation
I think this is your option 3?
I've never really liked automatic data synchronisation. I just find it easier to code the sync and deal with the error cases myself - this is especially the case if your wp7 client app uses quite a small footprint of data in relation to the larger main server db.
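For reference, the do-it-yourself storage side might look roughly like this (Order is a placeholder DTO; Sterling has its own API and saves you this boilerplate):

using System.IO;
using System.IO.IsolatedStorage;
using System.Runtime.Serialization;

public class Order
{
    public int Id { get; set; }
    public string CustomerName { get; set; }
}

public static class OrderStore
{
    // Serialize the object graph into an isolated storage file.
    public static void Save(Order order)
    {
        using (var store = IsolatedStorageFile.GetUserStoreForApplication())
        using (var stream = store.CreateFile("order.xml"))
        {
            new DataContractSerializer(typeof(Order)).WriteObject(stream, order);
        }
    }

    // Restore it later and continue working with it.
    public static Order Load()
    {
        using (var store = IsolatedStorageFile.GetUserStoreForApplication())
        using (var stream = store.OpenFile("order.xml", FileMode.Open))
        {
            return (Order)new DataContractSerializer(typeof(Order)).ReadObject(stream);
        }
    }
}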
I'm using NHibernate on a project and I need to do data auditing. I found this article on CodeProject which discusses the IInterceptor interface.
What is your preferred way of auditing data? Do you use database triggers? Do you use something similar to what's discussed in the article?
For NHibernate 2.0, you should also look at Event Listeners. These are the evolution of the IInterceptor interface and we use them successfully for auditing.
[EDIT]
Post-NH 2.0 release, please look at the event listeners as suggested below. My answer is outdated.
The IInterceptor is the recommended way to modify any data in NHibernate in a non-invasive fashion. It's also useful for decryption/encryption of data without your application code needing to know.
Triggers on the database move the responsibility of logging (an application concern) into the DBMS layer, which effectively ties your logging solution to your database platform. By encapsulating the auditing mechanics in the persistence layer you retain platform independence and code portability.
I use Interceptors in production code to provide auditing in a few large systems.
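A bare-bones version of such an interceptor looks something like this (a sketch; the Trace call stands in for whatever your real audit-log write is):

using NHibernate;
using NHibernate.Type;

public class AuditInterceptor : EmptyInterceptor
{
    public override bool OnFlushDirty(object entity, object id,
        object[] currentState, object[] previousState,
        string[] propertyNames, IType[] types)
    {
        for (int i = 0; i < propertyNames.Length; i++)
        {
            if (!Equals(currentState[i], previousState[i]))
            {
                // Replace with your own audit-log write.
                System.Diagnostics.Trace.WriteLine(string.Format(
                    "{0}.{1}: '{2}' -> '{3}'", entity.GetType().Name,
                    propertyNames[i], previousState[i], currentState[i]));
            }
        }
        return false; // we did not modify the state
    }
}

You then open sessions with sessionFactory.OpenSession(new AuditInterceptor()) so the interceptor sees every flush.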
I prefer the CodeProject approach you mentioned.
One problem with database triggers is that they leave you no choice but to use integrated security coupled with Active Directory for access to your SQL Server. The reason is that your connection should inherit the identity of the user who triggered it; if your application connects with a named "sa" account or some other shared account, the "user" field will only ever reflect "sa".
This can be worked around by creating a named SQL Server account for each and every user of the application, but that will be impractical for non-intranet, public-facing web applications, for example.
I do like the Interceptor approach mentioned, and use this on the project I'm currently working on.
However, one obvious disadvantage worth highlighting is that this approach will only audit data changes made via your application. Any direct data modifications, such as the ad-hoc SQL scripts you may need to execute from time to time (it always happens!), won't be audited unless you remember to perform the audit table insertions at the same time.
I understand this is an old question, but I would like to answer it in light of the new event system in NH 2.0. Event listeners are better suited to auditing-like functions than interceptors. Ayende wrote a great example on his blog last month. Here's the URL to his blog post:
ayende.com/Blog/archive/2009/04/29/nhibernate-ipreupdateeventlistener-amp-ipreinserteventlistener.aspx
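For anyone who can't reach the post, the pattern boils down to something like this (IAuditable and the UpdatedAt property are names you would define yourself):

using System;
using NHibernate.Event;
using NHibernate.Persister.Entity;

public interface IAuditable
{
    DateTime UpdatedAt { get; set; }
}

public class AuditEventListener : IPreUpdateEventListener
{
    public bool OnPreUpdate(PreUpdateEvent e)
    {
        var auditable = e.Entity as IAuditable;
        if (auditable == null)
            return false;

        var now = DateTime.UtcNow;
        Set(e.Persister, e.State, "UpdatedAt", now);
        auditable.UpdatedAt = now;
        return false; // false means: do not veto the update
    }

    private static void Set(IEntityPersister persister, object[] state,
        string propertyName, object value)
    {
        // NHibernate persists the State array, not the entity, at this point,
        // so the new value has to be written into both.
        var index = Array.IndexOf(persister.PropertyNames, propertyName);
        if (index >= 0)
            state[index] = value;
    }
}

Register the listener on the Configuration (e.g. Configuration.EventListeners.PreUpdateEventListeners) before building the session factory.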
As an entirely different approach, you could use the decorator pattern with your repositories.
Say I have
public interface IRepository<EntityType> where EntityType : IAuditably
{
    void Save(EntityType entity);
}
Then, we'd have our NHibernateRepository:
public class NHibernateRepository<EntityType> : IRepository<EntityType>
    where EntityType : IAuditably
{
    /*...*/
    public void Save(EntityType entity)
    {
        session.SaveOrUpdate(entity);
    }
}
Then we could have an Auditing Repository:
public class AuditingRepository<EntityType> : IRepository<EntityType>
    where EntityType : IAuditably
{
    /*...*/
    public void Save(EntityType entity)
    {
        entity.LastUser = security.CurrentUser;
        entity.LastUpdate = DateTime.UtcNow;
        innerRepository.Save(entity);
    }
}
Then, using an IoC framework (StructureMap, Castle Windsor, Ninject) you could wire it all up without the rest of your code ever knowing you had auditing going on.
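Composed by hand, without a container, it would look like this (Order is a placeholder entity, and I'm assuming constructors that take the session, the inner repository, and your security service):

IRepository<Order> repository =
    new AuditingRepository<Order>(
        new NHibernateRepository<Order>(session), security);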
Of course, how you audit the elements of cascaded collections is another issue entirely...