Logging CakePHP 3 exceptions to database

I'm developing an API which throws various exceptions on bad requests, internal errors, etc.
I'd like to log these exceptions to a log table I've built. I could do this above each throw, but I assume Cake has a way of consolidating this into a custom exception handler. I just can't figure it out from the documentation: http://book.cakephp.org/3.0/en/development/errors.html
Can someone point me to a resource I've failed to find or throw me some sample code?
Edit: Preference is to log to file as well as database.

It's all there in the docs:
Logging Exceptions
Logging (read the whole chapter)
Creating Loggers (Log to DB, includes Example!)
Implementation of the logger from the book

GAE custom Go runtime - internal.flushLog error

I have recently switched to a custom Go runtime on GAE, and have noticed many errors like this in the logs:
internal.flushLog: Flush RPC: Call error 3: invalid security ticket: 6c8027dc99b3ed3e
internal.flushLog: Flush RPC: Canceled: (timeout)
The server is still running fine, but I have no idea what the error means or why it happens.
I'm using a custom Go runtime via a Dockerfile, and the App Engine release is 1.9.37.
Any help to clarify the error would be highly appreciated. Thanks.
This is a known issue with the Go runtime on App Engine Flexible. It tends to happen when a line is logged right before the end of a request/response.
What happens is that when the line is logged it is actually put in a list of log lines to be batched together and sent to the application server as an RPC at periodic intervals. The security ticket is canceled at the end of a request/response which sometimes can happen before the log lines have been flushed. It's harmless, except that you may lose a log line or two. :\
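For what it's worth, here is a rough sketch of the batching pattern being described, written in Python rather than Go and in no way the actual App Engine implementation: log lines sit in a buffer until a periodic flush, and if the request ends and its ticket is invalidated before that flush runs, the buffered lines are dropped.

    import threading
    import time

    class BatchedLogger:
        """Toy illustration of batched log flushing (not the real runtime code)."""

        def __init__(self, flush_interval=1.0):
            self._buffer = []              # pending log lines
            self._lock = threading.Lock()
            self._ticket_valid = True      # stands in for the request's security ticket
            self._flush_interval = flush_interval
            threading.Thread(target=self._flush_loop, daemon=True).start()

        def log(self, line):
            with self._lock:
                self._buffer.append(line)  # buffered, not sent immediately

        def end_request(self):
            # The request/response finishes and the ticket is invalidated; any
            # lines still sitting in the buffer can no longer be flushed.
            self._ticket_valid = False

        def _flush_loop(self):
            while True:
                time.sleep(self._flush_interval)
                with self._lock:
                    batch, self._buffer = self._buffer, []
                if batch and not self._ticket_valid:
                    print("flush failed: invalid security ticket (lines lost):", batch)
                elif batch:
                    print("flushed batch:", batch)

    if __name__ == "__main__":
        log = BatchedLogger()
        log.log("handling request")
        log.log("done")       # logged just before the response goes out...
        log.end_request()     # ...and the ticket is canceled before the next flush
        time.sleep(2)         # the periodic flush now fails; both lines are lost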
We're actively working on fixing it.

App Engine silently fails on some requests

Some requests silently fail in my python app, intermittently and unpredictably. The hallmarks of the failure are:
Request returns a 200, so the client doesn't know there's a problem.
Request does NOT successfully execute on the server.
No logging statements are recorded for the request.
Below is an example from my logs of a bunch of requests which are each supposed to write an entity to the datastore. You can see for the lower, successful request, a blue 'i' is present, indicating that info level logs were recorded. When I examine the datastore, an entity was successfully written for this request.
However, for the failed request, you can see there is just a white box, and there are no logging statements present at all. While the server returned a 200, no entity was written to the datastore for this request.
Has anyone encountered something like this before on App Engine? Any ideas on how to debug it? I've seen it in multiple different apps myself, but I've never been able to figure it out.
EDIT
To clarify, the main problem here is that code doesn't execute, as measured by the failure to write an entity. The spurious 200 and lack of logging is an associated symptom.
Originally from a comment, but it seems to be the resolution path for this issue:
Given that there are no log statements at all for the request, and you appear to unpack the arguments and log them as soon as you enter the handler, this starts to look like an infrastructure/platform issue.
In that case, it's best to open an issue on the public issue tracker with "Type-Production" as a tag, including your app's app ID, a timeframe, and as much information as possible about your app and the request handler involved; platform support will pick up the issue in the course of triage.
That said, it's worth examining the handler to make absolutely sure there's no way you could be exiting from it and returning a 200 without logging anything or seeing an exception. It all depends on what the code handling the request is capable of, what stack of libraries it's built upon, and so on.
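As a concrete illustration of logging on entry, here is a minimal sketch assuming a webapp2 handler on the Python runtime (the handler name and route are hypothetical): the very first statement logs the unpacked arguments, so a request that returns 200 with no such log line at all points at the platform rather than at the handler body.

    import logging
    import webapp2

    class WriteEntityHandler(webapp2.RequestHandler):  # hypothetical handler name
        def post(self):
            # Log immediately on entry: if a request returns 200 but this line
            # never shows up in the logs, the handler body most likely never ran.
            logging.info("WriteEntityHandler.post args=%r", dict(self.request.POST))
            try:
                # ... write the entity to the datastore here ...
                self.response.set_status(200)
            except Exception:
                # Any failure inside the handler should at least leave a trace.
                logging.exception("entity write failed")
                self.response.set_status(500)

    app = webapp2.WSGIApplication([("/write", WriteEntityHandler)])  # hypothetical route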

What can I do with generated error logs?

I'm currently working on a web application which generates daily error (and non error) logs.
The current system outputs a log per task to a text file, and outputs critical errors as well as "start" and "finish" type messages to an email account.
The current workflow is as follows: scour the email box for errors, then go and find the .txt file to look at the associated errors and find the cause.
There are around 30 txt files split across about 5 servers.
This system was set up before me, but I'm looking for any advice on how to deal with the situation.
I have control of the script that produces the error logs, so I can do pretty much anything, but I'm not sure where to start: I'd considered some kind of web-facing dashboard tool, or maybe outputting the files to RSS or something?
Are there any external or internal tools I should be using?
You could of course use SQL Server Reporting Services, or review this comparison table; there are some packages which may support SQL Server, but they may be overwhelming for your task.
It's not really clear what your problem is or what you want to do, but if I understand correctly, your biggest problem is that some messages are logged to a log file but others are sent by email. Therefore, there is no single location that has all error messages in it and that makes analysis and troubleshooting difficult.
The best solution would be to use a logging framework that supports multiple logging destinations (file, DB, email) and severities. That would allow you to specify a configuration like "all errors are logged to a text file and critical ones are also sent by email", so you can ensure that you have everything in one place for general analysis but critical errors are also handled with priority.
You didn't mention what programming language you use, but assuming it's .NET-based then log4net and Enterprise Library are two common frameworks and there are many questions about them here on SO. Googling should give you a good idea of the pros and cons for your situation. If you're using a different language then you can look for the equivalent package: log4j (Java), logging (Python) etc.
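To sketch what that kind of configuration looks like, here is a minimal example using Python's standard logging module (the logger name, file path, and mail settings are placeholders): everything at INFO and above goes to a rotating text file, and CRITICAL messages are additionally sent by email.

    import logging
    import logging.handlers

    logger = logging.getLogger("myapp")  # placeholder logger name
    logger.setLevel(logging.INFO)

    # Everything at INFO and above goes to a rotating text file.
    file_handler = logging.handlers.RotatingFileHandler(
        "/var/log/myapp/tasks.log",      # placeholder path
        maxBytes=5 * 1024 * 1024,
        backupCount=10,
    )
    file_handler.setLevel(logging.INFO)
    file_handler.setFormatter(
        logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s")
    )

    # Only CRITICAL messages are also sent by email.
    mail_handler = logging.handlers.SMTPHandler(
        mailhost="smtp.example.com",               # placeholder host
        fromaddr="alerts@example.com",
        toaddrs=["oncall@example.com"],
        subject="[myapp] critical error",
    )
    mail_handler.setLevel(logging.CRITICAL)

    logger.addHandler(file_handler)
    logger.addHandler(mail_handler)

    logger.error("task 42 failed to parse its input")          # file only
    logger.critical("database unreachable, aborting the run")  # file + email

The same idea carries over to log4net, log4j, or Enterprise Library: one logger, several appenders/handlers, each with its own severity threshold.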

App Engine backup never finishes only clue is failure in map reduce worker_callback

Over the last few weeks we have repeatedly failed to complete a backup of the datastore using the Datastore Admin tool. We thought the issues had to do with quota errors we were running into, so we switched our application from a free to a paid app, but we still have problems.
Each time we attempt to back up to the blobstore, the process never finishes. We see the backup in our Pending Backups list, but it never actually completes. We only have a total of 43MB of data right now, so we don't see it as a data transfer problem. Looking at our default task queue, we have two pending tasks: one is a call to /_ah/mapreduce/controller_callback and the other is a call to /_ah/mapreduce/worker_callback.
The worker_callback keeps racking up its retry count, and the only error clue we have is on the Previous Run tab, which shows the last HTTP response code as 500. There is no error message, and nothing shows up in our error logs; it just keeps trying over and over again.
We've been able to narrow the backup problems down to a specific entity kind in a particular namespace, but we can't figure out why that entity kind is failing while the others are not. The major difference is that this entity kind has a large number of embedded entities, but if App Engine is able to read/put those entities, we can't understand why it has problems backing them up. The namespace where the error occurs has the largest amount of data stored for that entity kind compared to the other namespaces we have set up.
We think that if we could see what error is occurring in the worker_callback, we might be able to figure out why the backup is failing, or what is wrong with our data that is preventing the backup. Is there something we need to set up or enable through settings/configuration files to get more detailed information on the backup? Or is there some other avenue we should explore to investigate and fix this problem?
I should mention we are using the Java SDK as well as Objectify V3 to work with the data store. We are also backing up data to the Blobstore.
Thank you.
Well, with the App Engine team's help, we figured out what the problem was and worked around the issue. I want to give details in case anyone else runs into this problem.
In issue 8363 the App Engine team indicated that, from their logs, they could see the MapReduce failed because of the large number of properties our entity kind had. The specific entity kind causing the failure had a large number of variable properties, which generated errors when MapReduce tried to write out a schema. They indicated that the fix on their end was to ignore such entities during the backup so that the backup could complete successfully.
What we did to work around the issue and make the backup work was to change how we told Objectify to store our data. The large number of properties was being created by our use of the @Embedded annotation on a HashMap member field. Since @Embedded breaks classes down into their individual components, it was generating a large number of properties. We switched the member field to @Serialized and then ran a conversion process to migrate it to the new serialized property. This made the backup/restore work again.
You can read more about the differences between @Embedded and @Serialized on Objectify's website.
snielson, would you mind opening an issue on our public issue tracker here? Remember to add your application ID so we can further debug this specific scenario.
Thanks!

Where should exceptions be caught and handled in a WPF application?

We have exception-catching code in most of our event handlers and so on; this leads to very complex logic, with flags that are set to indicate whether there has been an exception so that the next step isn't performed, etc.
At first sight I would move all exception reporting/logging to the AppDomain.UnhandledException event; however, from my experience with WinForms, this will lead to a lot of exceptions being lost.
Also, when there is an exception, we have to include details of the operation the user was trying to perform in the log message.
So what are people's experiences, good and bad, with exception logging/reporting/recovery in WPF applications?
(I would love to say that we had something like the Model-View-ViewModel (MVVM) pattern in use, but we don't, and we are a long way from being able to use any "clean" design like that.)
It's not specific to WPF, but the best place to handle exceptions is at the point where user interaction with the form is converted into a logic process. This is either in the code-behind or in a controller method.
Only at this level do you know what the user is trying to do and what reasonable steps to take when an exceptional situation is encountered.
Of course, if you don't know what exceptions may be thrown don't try to handle them. And don't bother handling exceptions that you can't do anything about.
You should never have to use flags to say exceptions have been handled - that smells like bad design.
Exceptions fall into two categories:
expected (e.g. validation failed, data could not be put into database)
unexpected
Your expected ones should be handled fairly quickly, and logged depending on the type of exception. For instance, if the user entered some data that was rejected by validation code in the business layer, I would catch the exception and notify the user, but not log it, because it was expected and I can deal with it. Others could be "expected" but you cannot deal with them, such as a WCF call that failed due to a timeout or an oversized data packet. This you should definitely log; you may even be able to recover from it, so once again it should be caught and dealt with. Note the lack of flags: an exception is either dealt with, or it continues to bubble up. If you need to take an action you can do so, and then rethrow the exception to let it bubble up further; look, still no flags :)
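Here is the same deal-with-it-or-rethrow pattern as a minimal sketch, written in Python rather than C# purely for brevity (ValidationError, BackendTimeout and the callables passed in are made up for the example): the expected, recoverable failure is handled and not logged, while the expected-but-unrecoverable one is logged and rethrown so it keeps bubbling up, with no "handled" flags anywhere.

    import logging

    logger = logging.getLogger(__name__)

    class ValidationError(Exception):
        """Expected: the user's input was rejected by the business layer."""

    class BackendTimeout(Exception):
        """Expected, but nothing at this level can fix it."""

    def save_order(order, validate, push_to_backend, notify_user):
        # Expected and recoverable: tell the user, don't log, no flags.
        try:
            validate(order)
        except ValidationError as exc:
            notify_user(str(exc))
            return False

        # Expected but not recoverable here: log it, then rethrow so it keeps
        # bubbling up. The exception is either dealt with or it propagates;
        # there is never a "has been handled" flag to check afterwards.
        try:
            push_to_backend(order)
        except BackendTimeout:
            logger.exception("backend call timed out for order %r", order)
            raise
        return True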
Another approach I have taken in the past, when throwing (custom) expected exceptions in an ASP.NET application, is to mark the exception as either capable of being handled locally or not. This meant that when the aspx page caught the error (in a generic error handler in a base page that all aspx pages inherited from), it knew whether it should just show it locally within the page (after doing a text lookup in a resource file), or whether it should redirect to an error page. This approach was especially useful when doing a mixture of standard postbacks and AJAX callbacks (though it may not be particularly useful for WPF apps).
For major unexpected errors, you can catch them at the Application level; here is a previous SO post about it. Two other related posts that might be useful are here, and here.
Another thing I should mention is to make sure your error logging is relatively bulletproof: there is nothing worse than your exception-logging process throwing an exception and losing all the valuable details of that tricky bug you are trying to track down for that irate user.
