I have an issue: I need to be able to see the PHP error when debug mode is set to false in a production environment.
I currently see the internal error message, but I would like to see the PHP error in case things break.
How can I do this? I don't want to have DebugKit activated either.
/**
* Debug Level:
*
* Production Mode:
* false: No error messages, errors, or warnings shown.
*
* Development Mode:
* true: Errors and warnings shown.
*/
'debug' => false,
In the Logs dir you can always view error.log.
I usually output error.log on a page that has restricted access, because I can't always get onto the filesystem, and it's just faster and easier for others as well.
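For example, a minimal sketch of such a page as a CakePHP 2 controller action (the controller name and the admin check are placeholders; adapt them to your own access control):

// app/Controller/LogsController.php -- hypothetical controller name
App::uses('AppController', 'Controller');

class LogsController extends AppController {

    public function view() {
        // Placeholder check: restrict access however your app handles authorization
        if (!$this->Auth->user('is_admin')) {
            throw new ForbiddenException();
        }
        // LOGS is a path constant CakePHP defines (app/tmp/logs/)
        $path = LOGS . 'error.log';
        $log = file_exists($path) ? file_get_contents($path) : 'No errors logged.';
        $this->set('log', $log);
        // In the view, print it with: <pre><?php echo h($log); ?></pre>
    }
}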
Restricted DebugKit access
Depending on the security requirements of your application, you can implement some form of check to turn on debug mode while in production. For instance, a custom key + IP restriction.
So you can check whether `$_GET['key']` is equal to your key AND the IP matches your machine. If so, turn debug on; otherwise, leave it off. This will make it much easier to debug your live application.
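A rough sketch of that check (the key, IP, and file location are placeholders; in CakePHP 3 debug is a boolean, while in 2.x you would write the level 2 instead):

// e.g. near the end of config/bootstrap.php -- key and IP are placeholders
use Cake\Core\Configure;

$debugKey  = 'replace-with-a-long-random-string';
$allowedIp = '203.0.113.42'; // your own machine's IP

if (isset($_GET['key'])
    && hash_equals($debugKey, $_GET['key']) // constant-time compare, PHP 5.6+
    && $_SERVER['REMOTE_ADDR'] === $allowedIp
) {
    Configure::write('debug', true); // use 2 instead of true on CakePHP 2.x
}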
You are opening yourself up a bit to potential concerns (though I'm not a good enough hacker to know any particular ones). But if you're doing banking-level software, or storing any data subject to PCI compliance, you should probably not do this. Otherwise, it's a good solution.
Or, you could simply turn on DebugKit if logged in as a user with a specific role.
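A quick sketch of that variant, assuming CakePHP 2 and an Auth-based role field (the 'role' field and 'admin' value are placeholders for your own user schema):

// Sketch for CakePHP 2 -- in app/Controller/AppController.php
class AppController extends Controller {

    public function beforeFilter() {
        parent::beforeFilter();
        if ($this->Auth->user('role') === 'admin') {
            // By default DebugKit's toolbar only renders when debug > 0
            Configure::write('debug', 2);
        }
    }
}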
CakePHP Logs
As others have mentioned, you can use the internal Cake logs by accessing them directly or, as Alex points out in another answer, by displaying the error log on a page with restricted access.
Third Party Logs
Services like Papertrail (papertrailapp.com) make sorting through your logs very nice and easy.
If you want, there is a plugin called "error email" for CakePHP that I use in my projects: you install it and define which kinds of errors it should send you by email. It's very good and works on CakePHP 3. You can find the project on GitHub; it's well documented.
Related
I'm building a vCloud client application via the REST APIs; however, the documentation is inconsistent and in some cases just wrong and misleading.
All I really need is a solid debug tool or even a log file. Any recommendations?
You already mentioned you have access to the message stream, which is one of the first steps. Typically, if I'm using Apache HttpClient/HttpComponents, I'll increase the log level so it logs the full HTTP requests.
My next step is usually to cheat and to log into vCD as a system administrator and see what's going on. When vCD was designed there was a very deliberate decision to not reveal infrastructure level problems to tenants of the cloud (normal org users or org admins), as that would break the cloud abstraction. Sadly, that means as an org-level user you're often going to get "contact your cloud admin" error responses. We are aware that this isn't ideal and try to find ways to make it better when we can (IIRC the new 5.5 release that was announced last month does have some improvements in that area).
The last step is usually to cheat even more and to look at the server side logs (vcloud-container-debug.log, specifically). That usually gives me a better clue as to what went wrong. Of course, you may be unlucky and not have access to the vCD cell machine.
My workaround in the latter two cases is to try the operations via the vCD UI and see (1) if they work as expected and (2) if they do, to check the system state via the API and see if I'm sending the wrong request payloads, etc. because the doc or schema reference may not have been clear enough.
In regards to the documentation, please use the feedback links found on individual doc pages to let us know! Our technical writer reviews all the feedback and tries to address it.
My final suggestion is that you might want to post API questions to the vCloud API community forum VMware has. There are a number of experts (both users and VMware employees) that monitor it and respond to questions.
I'm currently working on a web application which generates daily error (and non error) logs.
The current system outputs a log per task to a text file, and outputs critical errors as well as "start" and "finish" type messages to an email account.
The current workflow is as follows: scour the email box for errors, then go and find the .txt file to look at the associated errors and find the cause.
There are around 30 txt files split across about 5 servers.
This system was set up before me, but I'm looking for any advice on how to deal with the situation.
I have control of the script that forms the error logs, so I can do pretty much anything, but I'm not sure where to start: I'd considered some kind of web-facing dashboard tool, or maybe outputting the files to RSS or something?
Are there any external or internal tools I should be using?
Of course, you could use SQL Server Reporting Services or review this comparison table; there are some packages which may support SQL Server, but they may be overwhelming for your task.
It's not really clear what your problem is or what you want to do, but if I understand correctly, your biggest problem is that some messages are logged to a log file but others are sent by email. Therefore, there is no single location that has all error messages in it and that makes analysis and troubleshooting difficult.
The best solution would be to use a logging framework that supports multiple logging destinations (file, DB, email) and severities. That would allow you to specify a configuration like "all errors are logged to a text file and critical ones are also sent by email", so you can ensure that you have everything in one place for general analysis but critical errors are also handled with priority.
You didn't mention what programming language you use, but assuming it's .NET-based then log4net and Enterprise Library are two common frameworks and there are many questions about them here on SO. Googling should give you a good idea of the pros and cons for your situation. If you're using a different language then you can look for the equivalent package: log4j (Java), logging (Python) etc.
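To make the idea concrete, here is a minimal sketch of that configuration in PHP using the Monolog library (the paths and addresses are placeholders); log4net and log4j express the same pattern with appenders:

require 'vendor/autoload.php';

use Monolog\Logger;
use Monolog\Handler\StreamHandler;
use Monolog\Handler\NativeMailerHandler;

$log = new Logger('app');
// Everything at ERROR and above goes to the log file...
$log->pushHandler(new StreamHandler('/var/log/myapp/error.log', Logger::ERROR));
// ...and CRITICAL and above is additionally sent by email
$log->pushHandler(new NativeMailerHandler(
    'ops@example.com', 'Critical error in myapp', 'noreply@example.com', Logger::CRITICAL
));

$log->error('written to the file only');
$log->critical('written to the file AND emailed');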
I need to log Fatal-errors of my website.
I normally check error.log and debug.log files for CakePHP errors.
But I found out that PHP-related fatal errors aren't logged anywhere.
It is also discussed in this thread.
I checked php.ini. It has the following lines:
log_errors = On
;error_log = filename
I don't have rights to change php.ini. I can ask the admin to change this, but it seems like I'd need to ask him every time I need a change :) I also have concerns about performance: can logging errors decrease performance or not?
So I found out that I can put the following two lines inside my script to log errors, and change the folder or file name when I need to.
ini_set("log_errors", 1);
ini_set("error_log", "/path/to/php-error.log");
So I want to know where to put these lines in my code. Should I put them inside AppController::beforeFilter? Or is there a better place/solution in the CakePHP 2 configuration?
This is an old thread.
In the meantime, with Cake 2.x, all errors are logged in production mode, including fatal errors.
Trigger one and check out your /tmp/logs/error.log.
But you can easily find that out by looking at the core code:
https://github.com/cakephp/cakephp/blob/master/lib/Cake/Error/ErrorHandler.php#L189
There are framework-defined configuration settings; you can use the Error Handling configuration class.
Here is a link on changing fatal error behavior that will help you achieve the same:
register_shutdown_function();
http://php.net/manual/en/function.register-shutdown-function.php
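PHP's regular error handlers can't catch fatal errors, but a shutdown function still runs after one, so you can log it yourself. A minimal sketch (the log path is a placeholder):

// e.g. in app/Config/bootstrap.php -- the log path is a placeholder
register_shutdown_function(function () {
    $error = error_get_last();
    $fatal = array(E_ERROR, E_PARSE, E_CORE_ERROR, E_COMPILE_ERROR);
    if ($error !== null && in_array($error['type'], $fatal, true)) {
        $line = sprintf("[%s] FATAL: %s in %s on line %d\n",
            date('Y-m-d H:i:s'), $error['message'], $error['file'], $error['line']);
        error_log($line, 3, '/path/to/php-error.log'); // type 3 appends to the given file
    }
});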
We use custom settings in a Salesforce app. We access them like so:
MySettings__c settings = MySettings__c.getOrgDefaults();
This was working fine, but today the app completely crashed. By that I mean the page doesn't load at all; I just get a white screen telling me an internal error occurred. We traced it down to this line of code: when it is commented out, the page loads as well as it can without those settings (but at least it loads).
Running that single line of code in the System Log (using the Execute functionality) also causes a report of Internal System Error. The only thing the system log reports is "FATAL_ERROR Internal Salesforce.com Error." The Apex code modal reports "Internal System Error: 1018505045-332 (-920440070)"
The setting has values for the organization; we've also tried deleting the settings and recreating them, to no effect. So far Salesforce has been no help beyond telling us to ask on their website.
This is very frustrating as it was working fine on Friday and today it was broken before anyone touched anything.
What you have there is a platform error. Whenever you get those you should report them to SFDC support and they will be able to see further internal logging to sort it out.
I'm afraid there's nothing anyone out here can do to help.
Paul
Try setting the apiVersion of the affected code back to version 21.0. We had the same issue, and making this change has provided an effective workaround.
This was a bug in Salesforce's infrastructure, which has been reported resolved. If you're still seeing this error with API version 22.0, you should create a case with salesforce support.
Assumption: live/production web app suppresses errors being shown to end-users.
Suppose your tech support team wants to see live data but through the eyes of the development-side of the application (maybe you want to see what errors are occurring, or want to see when you've got an issue fixed using an end-user's data).
Right now we've got one database serving both the dev and live boxes (not my idea - I know it's gross).
Ideas?
Edit: Best/handy tools for implementing your suggestion?
We replicate the data back to a different database. Yes, there is a delay, but it keeps people's hands out of the production servers. This also allows us to "hide" information that tech support (and other people, for that matter) aren't supposed to see.
In addition to replicating data down, on production we check who's logged into the application, and if it's a member of the company, we send them to the real error page instead of the happy kitten playing with a ball of yarn apologizing.
Back up and restore from live to dev on a regular basis (once or twice a day). It doesn't need to be realtime (as you might be entering data from the dev side anyway, which could cause problems).
If you have PCI or HIPAA data, make sure you don't put that in your dev environment -- that might break laws.
I generally like to have a 3-tier system for web development:
Development
Testing
Live
Most of the time, testing is an exact copy of the live system, except that errors are turned on. When a new version is about to be moved live, testing is replaced with the new version BEFORE live is, to detect upgrade issues.
Development is completely separate from live, to allow for major changes to things like the database, or changes to the production environment.
I would first make sure errors are either emailed to someone with details of how the user got there, or at minimum logged, so you can watch the error log while you perform similar actions to see if you get the same messages in the log.
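For example, a rough PHP sketch of emailing errors with the request context attached (the address is a placeholder):

// Email non-fatal errors together with how the user got there
set_error_handler(function ($severity, $message, $file, $line) {
    $uri     = isset($_SERVER['REQUEST_URI']) ? $_SERVER['REQUEST_URI'] : 'n/a';
    $referer = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : 'n/a';
    $body = sprintf("%s in %s:%d\nURL: %s\nReferer: %s",
        $message, $file, $line, $uri, $referer);
    error_log($body, 1, 'dev-team@example.com'); // type 1 sends the message by email
    return false; // let PHP's default handler log it as usual too
});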
And yes, copying the database to the dev server/site is probably your only option. You don't want the development team making any changes to live data, and you'll probably also have changes that won't work with the production database at some point.
I wouldn't recommend doing a nightly copy, as a developer might be in the middle of some new feature where they have added data, and then it's erased that night. I usually copy the production database(s) to dev each time a major version is released. This also allows me to do speed testing with a lot of live data. On some systems I also change everyone's password to a default so I can log in easily as any user.
If your configuration permits it:
a. Add a logging function (if there isn't one already) to write messages of interest to a log file (a small sketch follows below).
b. Run the unix command
tail -f logfile.txt
which will stream the growing log file to your console.
http://www.monkey.org/cgi-bin/man2html?tail
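As mentioned in (a), a tiny sketch of such a logging function in PHP (the path and messages are placeholders):

// Minimal append-only log helper -- the path is a placeholder
function app_log($message, $file = '/var/log/myapp/logfile.txt') {
    $line = sprintf("[%s] %s\n", date('Y-m-d H:i:s'), $message);
    file_put_contents($file, $line, FILE_APPEND | LOCK_EX);
}

app_log('checkout started for user 42');
// then, from a shell: tail -f /var/log/myapp/logfile.txt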
If you have Windows, you might try this:
http://tailforwin32.sourceforge.net/