DotNetNuke event logging - Synchronous? Potential speed issues?

I'd like to leverage the DotNetNuke logging system (both Event Logging and Exception Logging).
However, I'm concerned about potential performance issues if we begin logging too many Events.
Does the logging system write events to the database asynchronously? If not, is there an efficient way to make this happen? I'd hate to make an end-user wait on several database writes just to view a page.
Thanks

Short answer: It depends on your DNN configuration. By default, logging is synchronous.
Detailed answer
Event Logging uses the LoggingProvider configured in web.config.
DotNetNuke ships with DBLoggingProvider and XMLLoggingProvider.
By default, DotNetNuke uses DBLoggingProvider, which writes into the EventLog table.
How this is done depends on the Host settings and the site's Event Viewer settings. If "Enable Event Log Buffer" is checked in the Host settings, logging should be asynchronous.
"Should be", because asynchronous logging uses the Scheduler; if the Scheduler is not enabled or is stopped, logging falls back to immediate, synchronous writes.
Immediate logging can also be enforced with the LogInfo.BypassBuffering property.
Event log settings determine what gets logged on a per-Log-Type basis. If you are using Event Logging in your modules, you have to pass the log type to the EventLogController.AddLog method. I usually use EventLogType.HOST_ALERT, since these entries are easily recognizable in the Event Log view and are logged by default (unlike ADMIN_ALERT).
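By way of illustration, here is a minimal sketch of both styles of call from module code (hedged: the exact AddLog overloads vary somewhat across DNN versions):

var controller = new EventLogController();

// Buffered (asynchronous, Scheduler-dependent) host alert
controller.AddLog("MyModule", "Something noteworthy happened",
                  PortalSettings.Current, -1, EventLogController.EventLogType.HOST_ALERT);

// Forcing an immediate (synchronous) write via BypassBuffering
var logInfo = new LogInfo
{
    LogTypeKey = EventLogController.EventLogType.HOST_ALERT.ToString(),
    BypassBuffering = true
};
logInfo.AddProperty("MyModule", "Written immediately");
controller.AddLog(logInfo);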
For more details, check the AddLog implementation in the DNN source code:
DBLoggingProvider.AddLog(ByVal LogInfo As LogInfo)
See Also:
DotNetNuke Host Settings Explained

Related

Durable Function "monitor" in portal and Orchestration replay events

I've set logReplayEvents to false (even though that's the default) just to be sure, but I'm still seeing multiple entries for my orchestration function for a single invocation in the Monitor section of the Azure portal.
Any idea how, if it's possible, I can change this so it just shows one?
I hope you have set LogReplayEvents to false under the 'durableTask' key in the host.json file (i.e., disabled emitting verbose orchestration replay events).
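For reference, on the Functions v1 runtime that setting sits directly under durableTask in host.json (on later runtimes the durableTask section moves under extensions):

{
  "durableTask": {
    "logReplayEvents": false
  }
}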
And yeah, by default all non-replay tracking events are emitted.
Perhaps I am missing something, but looking at the screenshot you provided, I cannot really tell that those logs are related to replay events. That's fine if you have checked each event and found that they are replay events; otherwise, can you go to your App Insights and run the query below to see whether any replay events are present?
traces
| extend isReplay = tobool(tolower(customDimensions["prop__isReplay"]))
| where isReplay == "True"
Hope this helps at least to some extent!! Cheers!! :)

Salesforce Schedulable not working

I have several HTTP callouts in a schedulable class set to run every hour or so. After I deployed the app on the AppExchange and had a Salesforce user download it to test, it seems the jobs are not executing.
I can see the jobs are being scheduled to run accordingly, but the database never seems to change. Is there any reason this could be happening, or is there a good chance the flaw lies in my code?
I was thinking it could be permissions, however I am not sure (it's the first app I am deploying).
Check if the organisation of your end user has added your endpoint to "remote site settings" in the setup. By endpoint I mean an address that's being called (or just the domain).
If the class is scheduled properly (which I believe would be a manual action, not just something that magically happens after installation... unless you've used a post-install script?) you could also examine Setup -> Apex Jobs and check if there are any errors. If I'm right, there will be an error about a callout not being allowed due to remote site settings. If not - there's still a chance you'll see something that will make you think. For example, a batch job that executed successfully but with 0 iterations -> problem?
Last but not least - you can always try the debug logs :) Enable them in Setup (or open the developer console), fire the scheduled class's execute() manually and observe the results. How to fire it manually? Something like this pasted into "execute anonymous":
MySchedulableClass sched = new MySchedulableClass();
sched.execute(null); // a null SchedulableContext is fine for a manual test run
Or - since you know what's inside the scheduled class - simply experiment.
Please note that if the updates you perform violate, for example, validation rules your client has, then yes - the database will be unchanged. But in that case you should still be able to see the failures in Setup -> Apex Jobs.

Redirect SQL Server's events from std Application log into a custom one

Had to sift through loads of info trying to reconstruct the sequence of events that led to a crash yesterday, and got really interested in finding a way to offload events from SQL Server into a custom event log. Google yields only a single promising result, with a link to a guide on creating custom event logs.
While I wouldn't go so far as to call SQL events pointless (though agreed, 17101 and 17103 spelling out "(c) 20?? Microsoft Corporation" and "All rights reserved." upon each restart are a definite waste!),
IMHO it would certainly be useful and beneficial to re-route SQL events into their own log!
Hell, even IE has got one, built in! Why can't SQL Server take that up as a better practice? Especially on Vista/Win7, which provide tons of individual logs for loads of other apps - quite useless IMHO (never had any need to dig in there), but forcing the UI to slow to a crawl each time you open it.
I successfully followed the guidelines, creating a 'SQLServer' custom log and adding the source definitions to it. Unfortunately, any attempt to re-route SQL events to it seems to bump into the issue of MSSQLSERVER (the log source matching the default name of the SQL service) being some kind of built-in source:
EventCreate /l "SQLServer" /t Information /so MSSQLSERVER /id 1 /d "Log created"
ERROR: Source parameter is used to identify custom application/scripts only (not built-in sources).
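(For context: event log sources live under HKLM\SYSTEM\CurrentControlSet\Services\EventLog\<LogName>\<SourceName> in the registry, so the flag mentioned below can be set from an elevated prompt with something like the following, hedged sketch:)
reg add "HKLM\SYSTEM\CurrentControlSet\Services\EventLog\SQLServer\MSSQLSERVER" /v CustomSource /t REG_DWORD /d 1 /f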
When I mark MSSQLSERVER under my log as CustomSource (DWORD=1), the error above disappears:
EventCreate /l "SQLServer" /t Information /so MSSQLSERVER /id 2 /d "New entry"
SUCCESS: A 'information' type event is created in the 'MSSQLSERVER' log/source.
and indeed an event with ID=2, desc='New entry' is added to the custom event log! However, in this configuration the real MSSQLSERVER service does not write events to either this new log or to the standard 'Application' log :(. Functionality is restored upon reverting the log definitions in the registry (no reboot needed!), so it is a reversible scenario.
Also, from the above it looks as if any source can only be associated with a single log. Logical enough. But what defines these built-in sources then, if I remove the explicit registry entries? Maybe I should have restarted the machine after making these changes (though that was not necessary to revert back)?
Has anybody explored this further and maybe had any success?
EDIT: So far, like I said, the only way to deal with this seems to be filtering MSSQLSERVER (or another SQL service name) events out of the view.
But the XML tab exposes what goes on under the hood, and it's quite ugly (as in extremely inefficient).
I want a better way to manage this event data, and I'm sure I'm not the only one.
So if any folks at Microsoft are reading this - take note!
What about SQL Server's own ERRORLOG file?
If you wish to create an Event Viewer filter to exclude a particular source, here is the XML (suppressing SQL Express events from the 'Application' log):
<QueryList>
  <Query Id="0" Path="Application">
    <Select Path="Application">*</Select>
    <Suppress Path="Application">*[System[Provider[@Name='MSSQL$SQLEXPRESS']]]</Suppress>
  </Query>
</QueryList>
See Event Selection on MSDN

Best practice for updating silverlight deployment that is actively being used

I am currently running an SL3 project where we are in a highly iterative development mode with about 25 active test customers. I am making small changes at a clip of about 4 new builds per day. It is important to know that this application is a mission-critical line-of-business tool for these 25 people: it is what they use all day to do their work, so they use it constantly and often launch their browser and the app in the morning and never close it until the end of the day.
The challenge is that when I make an update to the application I have no clean way to notify the users. In most cases this is OK, as it is rare that I introduce a data contract change or something that would be a classic 'breaking' change to the app/service. Users keep plugging along and will get the change the next time they refresh.
Right now we have resorted to emailing everyone and telling them to force refresh or close the browser and log back in.
Surely there is a better way...
Right now my train of thought is to have a method on the server that compares client xap versions and determines whether the client being used is the most up to date; if not, I will notify the user and make them update.
What have you done to solve this problem?
One way of doing it is to use a push mechanism (I used Kaazing WebSocket Gateway, but any would do). When a new version of the XAP is released, a message (either entered into the system manually by an admin or triggered automatically by a XAP file change event) would be sent to all the clients. In the simplest scenario a notification would be shown to the user (telling him that a new version has been released and the application needs to refresh) and then the app would refresh (by simply reloading the page), saving the user's state if necessary.
If I were to do this I would just keep it simple: a configuration value in web.config and a corresponding service method that simply returns that value (the value itself could be anything, but a counter is probably wise). Then have your Silverlight app poll that service method at regular intervals. Whenever the value changes (which you would do manually when you deploy a new version), just pop up a dialog telling the user to refresh the browser or log in/out. This way you don't have to force them to refresh every time. If you go with the idea of comparing xap file versions, they will always be required to refresh, even for non-breaking changes.
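A rough sketch of that polling loop in Silverlight client code (hedged: VersionServiceClient and GetDeployedVersion are hypothetical names for the generated WCF proxy, and currentVersion is whatever value the client captured at startup):

using System;
using System.Windows;
using System.Windows.Threading;

DispatcherTimer timer = new DispatcherTimer { Interval = TimeSpan.FromMinutes(5) };
timer.Tick += (s, e) =>
{
    var client = new VersionServiceClient();        // hypothetical WCF service reference
    client.GetDeployedVersionCompleted += (cs, ce) =>
    {
        if (ce.Error == null && ce.Result != currentVersion)
            MessageBox.Show("A new version has been deployed - please refresh your browser.");
    };
    client.GetDeployedVersionAsync();               // async call, so the UI never blocks
};
timer.Start();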
If you want to take it further you could come up with some sort of mechanism to distinguish between different severity levels, as sketched below. For instance, if the new config value contains the string "update_forced", you could force the users to reload the app by logging them out automatically (a little harsh, perhaps). If it contains the string "update_recommended", just show a little icon in the top right corner saying that there is a new version and that they should upgrade in their own time.
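Continuing the sketch above, the client-side handling of those markers could be as simple as this (versionValue is the string returned by the polled service method; the helper methods are hypothetical):

if (versionValue.Contains("update_forced"))
    LogOutAndReload();           // hypothetical: force the reload immediately
else if (versionValue.Contains("update_recommended"))
    ShowUpdateAvailableIcon();   // hypothetical: unobtrusive hint in the corner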
Granted, this was targeted at Silverlight 3, but with the PollingDuplex client and such in the newer versions of Silverlight, you could publish an "Update Now" bit to the clients and build a mechanism in the client to alert the user that there is an update out... that they should update shortly, etc. You may even be able, through serialization and such, to save the state they are in when they close the app, so it can be restored on reload.
We've done something similar with a LOB app that we built, so that as users change things, the rest of the user base sees those changes immediately. Next up will be putting in the flags to change authorization and upgrades "on the fly", if you will.

Logging when application is running as XBAP?

Has anybody here actually implemented a logging strategy for an application running as an XBAP? Any suggestions (as code) on how to implement a simple strategy, based on your experience?
In desktop mode my app logs to a rolling log file using the integrated log4net implementation, but as an XBAP I can't log that way because the file gets stored in the browser cache (an app2.0 folder or something similar), so when browser-hosted I detect that and don't log, since I don't even know if it ever gets written (same codebase). What I'd really like is a way to push this log to a service, like a web service, or to post errors to some endpoint...
My xbap is full trust intranet mode.
I would log to isolated storage and provide a way for users to submit the log back to the server using either a simple PUT/POST with HttpWebRequest or, if you're feeling frisky, via a WCF service.
Keep in mind an XBAP only gets 512 KB of isolated storage, so you may actually want to push those event logs back to the server automatically. Also remember that the XBAP can only speak back to its origin server, so the service that accepts the log files must run under the same domain.
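A rough sketch of that upload, hedged: the "/logs/upload" endpoint is hypothetical, and the POST is kept synchronous for brevity (in practice you would run it off the UI thread or use BeginGetResponse):

using System;
using System.IO;
using System.IO.IsolatedStorage;
using System.Net;
using System.Windows.Interop;

// POST the isolated-storage log back to the XBAP's origin server
Uri endpoint = new Uri(BrowserInteropHelper.Source, "/logs/upload"); // hypothetical path
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(endpoint);
request.Method = "POST";
using (var log = new IsolatedStorageFileStream("Trace.log", FileMode.Open, FileAccess.Read))
using (Stream body = request.GetRequestStream())
{
    byte[] buffer = new byte[4096];
    int read;
    while ((read = log.Read(buffer, 0, buffer.Length)) > 0)
        body.Write(buffer, 0, read);
}
request.GetResponse().Close(); // check the response / handle errors as needed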
Here's some quick sample code that shows how to set up a TextWriterTraceListener on top of an IsolatedStorageFileStream, at which point you can just use the standard Trace.Write[XXX] methods to do your logging.
using System.Diagnostics;
using System.IO;
using System.IO.IsolatedStorage;

IsolatedStorageFileStream traceFileStream = new IsolatedStorageFileStream("Trace.log", FileMode.OpenOrCreate, FileAccess.Write);
TraceListener traceListener = new TextWriterTraceListener(traceFileStream);
Trace.Listeners.Add(traceListener);
UPDATE
Here is a revised answer due to the revision you've made to your question with more details.
Since you mention you're using log4net in your desktop app, we can build on a dependency you are already comfortable with, as it is entirely possible to continue using log4net in the XBAP version as well. log4net does not come with an implementation that solves this problem out of the box, but it is possible to write an implementation of a log4net IAppender which communicates with WCF.
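To give a feel for the general shape of such an appender, here is a hedged sketch (not Joachim's actual implementation, which is discussed below; LogServiceClient and its Log operation are hypothetical contract names). log4net's BufferingAppenderSkeleton provides the batching behavior:

using System.Threading;
using log4net.Appender;
using log4net.Core;

public class WcfAppender : BufferingAppenderSkeleton
{
    // log4net calls this when the buffer fills or a flush-triggering event arrives
    protected override void SendBuffer(LoggingEvent[] events)
    {
        // Ship the batch from a worker thread so the UI thread is never blocked
        ThreadPool.QueueUserWorkItem(_ =>
        {
            var client = new LogServiceClient(); // hypothetical generated WCF proxy
            foreach (LoggingEvent e in events)
                client.Log(e.Level.Name, e.RenderedMessage, e.TimeStamp);
            client.Close();
        });
    }
}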
I took a look at the implementation by Joachim Kerschbaumer that the other answerer linked to (all credit due), and it looks solid. My first concern was that, in a sample, someone might be logging back to the service on every event, perhaps synchronously, but the implementation actually has support for queuing up a certain number of events and sending them back to the server in batch form. Also, when it does send to the service, it does so using an async invocation of an Action delegate, which means it will execute on a thread pool thread and not block the UI. Therefore I would say that implementation is quite solid.
Here are the steps I would take from here:
Download Joachim's WCF appender implementation
Add his projects to your solution.
Reference the WCFAppender project from your XBAP
Configure log4net to use the WCF appender. Now, there are several settings for this logger, so I suggest checking out his sample app's config. The most important ones, however, are QueueSize and FlushLevel. You should set QueueSize high enough that, based on how much you actually log, you won't be chattering with the WCF service too much: if you're just configuring warnings/errors you can probably set it to something low, while informational logging wants it a little higher. As for FlushLevel, you should probably just set it to ERROR, as this guarantees that no matter how big the queue is when an error occurs, everything is flushed the moment that error is logged. (A sketch of such a config follows this list.)
The sample appears to use LINQ2SQL to log to a custom DB inside of the WCF service. You will need to replace this implementation to log to whatever data source best suits your needs.
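By way of illustration, a hedged sketch of that configuration (the appender type and assembly names are hypothetical; use the ones from Joachim's project):

<log4net>
  <appender name="WcfAppender" type="WcfAppenderLib.WcfAppender, WcfAppenderLib">
    <queueSize value="50" />      <!-- batch size before sending to the service -->
    <flushLevel value="ERROR" />  <!-- flush immediately when an error is logged -->
  </appender>
  <root>
    <level value="INFO" />
    <appender-ref ref="WcfAppender" />
  </root>
</log4net>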
Now, Joachim's sample is written in a way that's intended to be very easy for someone to download, run and understand very quickly. I would definitely change a couple things about it if I were putting it into a production solution:
Separate the WCF contracts into a separate library which you can share between the client and the server. This would allow you to stop using a Visual Studio service reference in the WCFAppender library and just reference the same contract library for the data types. Likewise, since the contracts would no longer be in the service itself, you would reference the contract library from the service.
I don't know that wsHttpBinding is really necessary here. It comes with a couple more knobs and switches than one probably needs for something as simple as this. I would probably go with the simpler basicHttpBinding and if you wanted to make sure the log data was encrypted over the wire I would just make sure to use HTTPS.
My approach has been to log to a remote service, keyed by a unique user ID or GUID. The overhead isn't very high with the usual async calls.
You can cache messages locally, too, either in RAM or in isolated storage -- perhaps as a backup in case the network isn't accessible.
Be sure to watch for duplicate events within a certain time window. You don't want to log 1,000 copies of the same Exception over a period of a few seconds.
Also, I like to log more than just errors. You can also log performance data, such as how long certain functions take to execute (particularly out-of-process calls), or more detailed data in response to the user explicitly entering into a "debug and report" mode. Checking for calls that take longer than a certain threshold is also useful to help catch regressions and preempt user complaints.
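For instance, a tiny hedged sketch of that threshold-based timing (CallRemoteService and the 2-second threshold are placeholders):

using System.Diagnostics;

var sw = Stopwatch.StartNew();
CallRemoteService(); // hypothetical out-of-process call
sw.Stop();
if (sw.ElapsedMilliseconds > 2000) // only log calls that cross the threshold
    Trace.TraceWarning("CallRemoteService took {0} ms", sw.ElapsedMilliseconds);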
If you are running your XBAP under partial trust, you are only allowed to write to IsolatedStorage on the client machine. And it's just 512 KB, which you would probably want to use in a more valuable way (than for logging), like storing the user's preferences.
You are not allowed to do any Remoting under partial trust either, so you can't use log4net's RemotingAppender.
Finally, under partial trust an XBAP has WebPermission to talk only to the server of its app origin. I would recommend using a WCF service, as described in this article. We use a similar configuration in my current project and it works fine.
Then, basically, on the WCF server side you can do the logging to any appropriate place: file, database, etc. You may also want to keep your log4net logging code and try one of the WCF log appenders available on the internets (this or this).
