I've started using the logback DBAppender for centralizing the logging of my Java microservices, as described here: https://logback.qos.ch/manual/appenders.html
Basically, when the applications start up, all of their logging information is inserted into three DB tables with just this code in the logback configuration file:
<appender name="DB" class="ch.qos.logback.classic.db.DBAppender">
  <connectionSource class="ch.qos.logback.core.db.DriverManagerConnectionSource">
    <driverClass>com.mysql.jdbc.Driver</driverClass>
    <url>jdbc:mysql://host_name:3306/database_name</url>
    <user>username</user>
    <password>password</password>
  </connectionSource>
</appender>
My question is: since there are multiple DBAppenders (one for each microservice) inserting log events into the same three tables, could the DB tables get locked if they are accessed at the same time? Or is this situation handled by logback somehow?
Sharing access to the same DB tables looks to me like an antipattern for microservices.
I've searched the documentation but couldn't find an answer.
Thanks for sharing your knowledge with me
Related
How would you recommend defining key/value expressions in Camel routes for things you want to save for auditing, and having them picked up and written to a database transparently?
I.e. the route contains an array or set of expressions for things to save for auditing, but doesn't know how the data actually gets picked up and written to a DB.
This would be like Mule's auditing feature, where you can put <flow> elements in the Mule XML and define expressions to save to Mule's DB for tracking.
I have looked at Interceptors, Event Notifiers, Tracers, Wire Taps, and MDC logging - I am sure the answer lies in one or a combination of these elements, but it's not clear to me.
I'm using this example of Mule auditing XML from its documentation as a comparison:
<flow name="bizFlow">
  <tracking:custom-event event-name="Retrieved Employee" doc:name="Custom Business Event">
    <tracking:meta-data key="Employee ID" value="#[payload['ID']]"/>
    <tracking:meta-data key="Employee Email" value="#[payload['Email']]"/>
    <tracking:meta-data key="Employee Git ID" value="#[payload['GITHUB_ID']]"/>
  </tracking:custom-event>
</flow>
Thanks very much
For auditing I used a wireTap to send the exchange to a dedicated audit route, where I do whatever I need for auditing. In my case it goes to a JMS queue rather than a database, but that doesn't matter.
There is only one restriction: whatever is sent for auditing must not be changed by the main route after the wireTap (both routes run in parallel), so I copied the auditing data into a dedicated exchange property before the wireTap, to be used in the audit route. A rough sketch of this pattern is below.
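To illustrate, here is a minimal sketch in Camel's Java DSL. It is not the exact code I used: the endpoint URIs, property names, and the assumption that the message body is a Map with ID/Email keys are all made up for the example.

import org.apache.camel.builder.RouteBuilder;

public class AuditRouteBuilder extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:bizFlow")
            // Copy the values to audit into exchange properties *before* the wireTap,
            // so later changes made by the main route cannot affect them.
            .setProperty("auditEmployeeId", simple("${body[ID]}"))
            .setProperty("auditEmployeeEmail", simple("${body[Email]}"))
            // wireTap sends a copy of the exchange to the audit route (fire-and-forget).
            .wireTap("direct:audit")
            // ... normal business processing continues in parallel ...
            .to("direct:processEmployee");

        from("direct:audit")
            // The audit route only reads the copied properties; it could just as
            // well write to a database here instead of a JMS queue.
            .setBody(simple("Retrieved Employee ${exchangeProperty.auditEmployeeId} (${exchangeProperty.auditEmployeeEmail})"))
            .to("jms:queue:audit");
    }
}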
I created a Yesod project with yesod init and chose the SQLite database.
My question: how do we initialise the SQLite database with initial entries in the Yesod scaffolded site at launch? Where do we put our insert statements in this Yesod template site?
I'm talking about actually adding records to some existing tables.
The Persistent chapter in the Yesod book explains how to insert a row into the database in the Handler of a resource, but it doesn't explain how to add database entries when the Yesod site starts (when it is launched with yesod devel, for example).
Inserting them from a handler is probably not an efficient way to proceed, so any suggestions on how to accomplish this would be helpful.
Thank you.
I've never done this before since in my usual use case the DB is used for persistence across runs.
However, I think the most appropriate place to do this would be in the makeFoundation function inside the Application module.
That's where your DB resources (connection pool, etc.) are initialised.
There's a line that looks something like this:
-- Perform database migration using our application's logging settings.
runLoggingT
    (Database.Persist.runPool dbconf (runMigration migrateAll) p)
    (messageLoggerSource foundation logger)
That sets up your tables; it's probably just after this that you want to add your records.
I think the link in your question needs to be corrected, but the Persistent chapter of the Yesod book that you referred to does have valid examples of inserting records in an IO () action, so that should get you the rest of the way.
Hope this helped
I have a set of WCF services that interact with a SQL Server database, and I want to add some auditing to find out when a service was called and what it did.
I believe there are two options open to me:
a. add triggers to the database tables that log to another database on updates, inserts, etc.
b. add an interceptor to my WCF services which logs calls, with the data needed for auditing, to a MongoDB store
What is best practice in this area, and are there any suggestions as to which approach to follow?
An old question, but I've been working on a library that could probably help others.
With Audit.Wcf (an extension for Audit.NET), you can log the interaction with your WCF service, and you can configure it to store the audit logs in a MongoDB database by using the Audit.MongoDB extension.
[AuditBehavior] // <-- enable the audit
public class YourService : IServiceContract
{
    public Order GetOrder(int orderId)
    {
        ...
    }
}
As a first response, I would try enabling WCF tracing. You'd be surprised what kind of detail it can provide, especially for diagnostics. It's also very simple and doesn't involve any recompilation:
<system.diagnostics>
  <trace autoflush="true" />
  <sources>
    <source name="System.ServiceModel"
            switchValue="Information, ActivityTracing"
            propagateActivity="true">
      <listeners>
        <add name="sdt"
             type="System.Diagnostics.XmlWriterTraceListener"
             initializeData="SdrConfigExample.e2e" />
      </listeners>
    </source>
  </sources>
</system.diagnostics>
And if you use Verbose logging, you'll get debug-level detail for almost every event going on. Then use SvcTraceViewer to go back and audit these logs.
Beyond that, look into using the Trace.* methods within your service to provide any additional level of detail you may need (like calls to and from the database). Depending on how you set up the service, you can probably also look into a debugger that plugs right into your database context and outputs when it makes calls to and from the database.
I am new to this and have tried to find information on the web without any success. I need to create some log tables but have no idea what information these tables should contain or how to organize them.
For example:
LogErrorTable, LogChangesTable, etc.
Could anyone point me to some articles about this, or to a site with example solutions that you have used?
First of all, what logging library do you use? If you're on Java go for log4j; if you're on .NET go for log4net. Both of these frameworks provide DB appenders that log to the database out of the box.
In case you're not using a log library: use a log library :)
If you really want to do it on your own, I can recommend a layout I used in a project: log messages were stored in a logs table, and exceptions associated with an entry in the logs table were stored in a separate exceptions table (a rough sketch is below). But that highly depends on your platform.
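As a very rough illustration, here is a sketch of that two-table layout in plain JDBC. The table and column names (logs, exceptions, level, message, logged_at, log_id, exception_class, stack_trace) are invented for this example, not taken from any framework:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.sql.Timestamp;
import java.time.Instant;
import java.util.Arrays;

// Hypothetical writer for the logs/exceptions layout described above.
public class DbLogWriter {

    private final Connection connection;

    public DbLogWriter(Connection connection) {
        this.connection = connection;
    }

    // Inserts one row into logs and, if a throwable is given, a row into
    // exceptions that references the new log entry via its generated key.
    public void write(String level, String message, Throwable error) throws SQLException {
        long logId;
        try (PreparedStatement insertLog = connection.prepareStatement(
                "INSERT INTO logs (level, message, logged_at) VALUES (?, ?, ?)",
                Statement.RETURN_GENERATED_KEYS)) {
            insertLog.setString(1, level);
            insertLog.setString(2, message);
            insertLog.setTimestamp(3, Timestamp.from(Instant.now()));
            insertLog.executeUpdate();
            try (ResultSet keys = insertLog.getGeneratedKeys()) {
                keys.next();
                logId = keys.getLong(1);
            }
        }

        if (error != null) {
            try (PreparedStatement insertException = connection.prepareStatement(
                    "INSERT INTO exceptions (log_id, exception_class, stack_trace) VALUES (?, ?, ?)")) {
                insertException.setLong(1, logId);
                insertException.setString(2, error.getClass().getName());
                insertException.setString(3, Arrays.toString(error.getStackTrace()));
                insertException.executeUpdate();
            }
        }
    }
}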
You can find a lot of useful information on how to design your log tables in the log4net and log4j documentation. For example take a look at the log4net AdoNetAppender Class.
From time to time, the number of database connections from our Drupal 6.20 system to our MySQL database reaches 100-150, and after a while the website goes offline. The error message when trying to connect to MySQL manually is "blocked because of many connection errors. Unblock with 'mysqladmin flush-hosts'". Since the database is hosted on Amazon RDS I don't have permission to issue this command, but I can reboot the database, and once rebooted the website works normally again. Until next time.
Drupal reports multiple errors prior to going offline, of two types:
Duplicate entry '279890-0-all' for key 'PRIMARY' query: node_access_write_grants /* Guest : node_access_write_grants */ INSERT INTO node_access (nid, realm, gid, grant_view, grant_update, grant_delete) VALUES (279890, 'all', 0, 1, 0, 0) in /var/www/quadplex/drupal-6.20/modules/node/node.module on line 2267.
Lock wait timeout exceeded; try restarting transaction query: content_write_record /* Guest : content_write_record */ UPDATE content_field_rating SET vid = 503621, nid = 503621, field_rating_value = 1212 WHERE vid = 503621 in /var/www/quadplex/drupal-6.20/sites/all/modules/cck/content.module on line 1213.
The nids in these two queries are always the same and refer to two nodes that are frequently and automatically updated by a custom module. I can see a correlation between these errors and unusually high numbers of web requests in the Apache logs. I would understand the website becoming slower because of this. But:
Why do these errors occur, and how can they be solved? It seems to me it's down to several web requests trying to update the same node at the same time. But surely Drupal should deal with this by locking the tables, etc.? Or should I handle it in some special way?
Despite the higher web load, why does the database lock up completely and need to be rebooted? Wouldn't it be better if the website still had access to MySQL, so that once the load drops it could serve pages again? Is there some setting for this?
Thank you!
This can often be solved by checking one or all of these three things:
Are you out of disk space? From SSH, run df -h and make sure you still have free disk space.
Are the tables damaged? Repair the tables in phpMyAdmin, or see the CLI instructions here: http://dev.mysql.com/doc/refman/5.1/en/repair-table.html
Have you performance-tuned MySQL with an /etc/my.cnf? See this for more ideas: http://drupal.org/node/51263