CakePHP debug() isn't working but Debugger::dump() is fine - cakephp

Ever since PHP4 and Cake 1.3 I have been using debug($data); to debug things such as model output in CakePHP.
However, since upgrading to PHP 5.4, I have noticed that debug($data) doesn't always seem to work. For example, today I did a straightforward $data = $this->Model->find('all'); and the output of debug($data); appears to be empty. There is no error, just the usual reference in the HTML output to the debug call and its line number, and then no debug output.
However, if I run Debugger::dump($data); on the exact same find, it works fine and I see the entire output.
It only seems to happen when $data holds a significant amount of data (say, 100+ records). But I worked with datasets this size before PHP 5.4 without any problem, there are no errors (inline or in the Apache/PHP logs) indicating memory issues, and I have debugging set to 3.
Does anyone have any idea why this is? I can obviously start using Debugger::dump($data); easily enough, but it's just a little extra to type each time, and I'd like to know why I can't simply use debug(); anymore.

This can happen with non-UTF-8 encoded data in your db records - if the rest of your application is UTF-8, that is.
debug() will then just output "nothing". var_dump(), print_r() and other PHP-internal functions should still print the output, though.
You can usually re-encode the values to UTF-8 using iconv() etc.
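For example, a minimal sketch of that re-encoding in a controller action, assuming the stored values are ISO-8859-1 (adjust the source charset to whatever your tables actually use):

$data = $this->Model->find('all');

// Walk every string in the result set and convert it to UTF-8;
// '//IGNORE' silently drops bytes that cannot be converted.
array_walk_recursive($data, function (&$value) {
    if (is_string($value)) {
        $value = iconv('ISO-8859-1', 'UTF-8//IGNORE', $value);
    }
});

debug($data); // should now print instead of outputting "nothing"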

Related

How to catch "Nrpe unable to read output" when it occurs?

I'm trying to catch the "nrpe unable to read output" output from a plugin and send an email when it occurs, and I'm a little bit stuck :) . The thing is, there are different return codes when this error occurs on different plugins:
Return code Service status
0 OK
1 WARNING
2 CRITICAL
3 UNKNOWN
Is there a way either to unify the return codes of all the plugins I use (so that it is always 2 [CRITICAL] when this problem occurs), or any other way to catch those alerts? I want to keep the return codes for different situations as they are (i.e. filesystem /home will be warning (return code 1) at 95% and critical (return code 2) at 98%).
Most folks would rather not have this error sending alert emails, because it does not represent an actual failed check. Basically it means nothing more than:
The command/plugin (local or remote) was run by NRPE, but
failed to return any usable status and/or text back to nrpe.
This most often means something went wrong with the command/plugin and it hasn't done the job it was expected to perform. You don't want alerts being thrown for checks when the check wasn't actually performed, as this would be very misleading. It's also important to note that the return code isn't even coming from the command/plugin.
In my experience, the number one cause of this error is a bad check. And as the docs for NRPE state, you should run the check (with all its options!) to make sure it runs correctly. Do yourself a favor and test both working AND not-working states. About 75% of the time, this has happened because the check only works correctly when it has OK results, and blows up when something not-OK must be reported.
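For example, something along these lines - the host, command name, paths and thresholds are placeholders for your own setup:

# From the Nagios server: run the exact check NRPE would run, then inspect the exit status.
/usr/local/nagios/libexec/check_nrpe -H db-node01 -c check_home_fs
echo "exit status: $?"

# On the monitored node: run the underlying plugin with the same options your
# nrpe.cfg command uses (here: warn at 95% used, critical at 98% used on /home).
/usr/local/nagios/libexec/check_disk -w 5% -c 2% -p /home
echo "exit status: $?"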
Another cause of these is network glitches. NRPE connects and runs the check, but the connection is closed before any response is seen. Once again, not a true check result.
For a production Nagios monitoring system, these should be very rare errors. If they are happening frequently, then you likely have other issues that need to be fixed.
And as far as I can tell, all built-in Nagios plugins use the exact same set of return codes. Are you certain this isn't a 'custom' check?
Ok, I think I've found the solution to my problem: I will check nagios.log on each node for those errors.

Comments not shown in WordPress - wp_list_comments(), comments_form()

In about 90% of cases - that is, when refreshing the same page - functions such as wp_list_comments(), comments_form(), has_comments() and the like aren't yielding the expected results.
So I refresh the same page and these functions return different results, for example 0, 5 or 21 comments, while no other user is using the database since it's a test system with XAMPP. Noticeably, only the values 0, 5 and 21 seem to pop up for the number of comments.
Looking further down the call stack, I noticed that sometimes $wpdb->has_comments() returns 0 although the post contains comments.
I suspect this may be related to the WordPress caching system in my version, 4.5, and the only issue I found on their bug tracker was about including wp_list_comments() twice in the same file, which is not the case here. Updating to the latest version also does not solve this.
I have noticed that the debug entry below was printed in the debug.log file, but only once, although the page has been reloaded hundreds of times:
WordPress database error You have an error in your SQL syntax; check the
manual that corresponds to your MariaDB server version for the right syntax to
use near 'WHERE AND comment_parent IN (61,62,66) ORDER BY comment_date_gmt
ASC, comment_' at line 1 for query WHERE AND comment_parent IN (61,62,66)
ORDER BY comment_date_gmt ASC, comment_ID ASC made by
require('C:\xampp\htdocs\boxify\chef\wordpress\wp-blog-header.php'),
require_once('C:\xampp\htdocs\boxify\chef\wordpress\wp-includes\template-loader.php'),
include('C:\xampp\htdocs\boxify\chef\wordpress\wp-content\themes\mytheme\single.php'),
get_template_part, locate_template, load_template, <...more files here...>,
comments_template, WP_Comment_Query->__construct, WP_Comment_Query->query,
WP_Comment_Query->get_comments, WP_Comment_Query->fill_descendants
The issue occurs on multiple self-hosted WordPress installations.
With other themes, like twentysixteen and others, I don't see this sort of behavior, so it's definitely something wrong on my side, and I suspect the caching configuration, which I didn't touch.
Also, I've checked for wp_reset_postdata() usage and there's none in my single template. add_theme_support() is used for comments and everything. Any hint or some direction in which I should dig further would be great!
Found out what the issue was.
Somewhere in the code, in the middle of THE MAIN LOOP, a file was included with get_template_part(), and in that file a function running a new WP_Query was missing wp_reset_postdata(). This caused the global $post to become corrupted, at least from my point of view.
Since that WP_Query was used to get a random post, this caused comments to be shown for that random post. Sometimes they existed, other times they didn't.
Also, further functions like get_next_post() and get_previous_post() that rely on $post were now returning results relative to the new random post instead of the original post, as you would expect given the corrupted global.
Only when I noticed these adjacent functions were returning invalid results did I understand where the issue was.
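For anyone hitting the same symptoms, the fix boils down to the pattern below. The function name and query args are made up for illustration; the important part is calling wp_reset_postdata() after the secondary loop:

function mytheme_random_post_teaser() {
    // Secondary query running from inside the main loop of single.php.
    $random = new WP_Query( array(
        'posts_per_page' => 1,
        'orderby'        => 'rand',
    ) );

    while ( $random->have_posts() ) {
        $random->the_post(); // overwrites the global $post
        the_title( '<h3>', '</h3>' );
    }

    // The call that was missing: restore the global $post so that
    // comments_template(), get_next_post(), get_previous_post() etc.
    // operate on the original post again.
    wp_reset_postdata();
}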

Reading FILEOBJECTs of Saved Note

I am involved in using the C API to interact with Lotus Notes and Lotus Domino. I'm running into issues when reading existing Notes out of an NSF. Specifically, reading TYPE_OBJECT fields and even more specifically, $FILE fields (though I'm sure all TYPE_OBJECT fields would fail if I had any others).
I'm using NSFItemInfo to get the summary data on the $FILE field (so I don't need the saved file, I need information about it such as its size, name, etc...).
If I create the Note in memory, Commit it, then read the $FILE field, everything works. If I change my unit test to read an existing Note (instead of creating it in memory), Lotus PANICS with an Invalid Handle Lookup message.
So I'm left feeling like there is something different about loading those fields when I create a Note from scratch vs. opening one already created. Even reading in an already created Note that my own code created gives me the same error, so I think I'm creating the Notes correctly.
I've explored NSFNoteOpenExt's flag options and have attempted to open the Note with every possible flag described in OPEN_xxx, and I always get the panics except when I open the Note with OPEN_ABSTRACT or OPEN_NOOBJECTS. The reason those don't error, though, is that they open the Note without the $FILE fields at all, so when I check whether the field exists I get false and the code that reads TYPE_OBJECT fields is never executed.
Any ideas what I'm missing?
I'd provide code, but I'm actually using .NET interop to accomplish all this, and the code is spread across multiple files, etc.... If you have any questions please ask and I'll provide as much detail as I can.
Craig
I figured out the issue. It comes from the fact that when using interop from C#, you can't call C macros. OSLockBlock is defined as a macro to another macro to a function. Essentially, it locks the BlockId.Pool pointer, then increments that pointer by BlockId.BlockHandle. I had been misinterpreting the macro logic as: first increment BlockId.Pool by BlockId.BlockHandle, then lock.
Essentially:
Lock(BlockId.Pool) + BlockId.BlockHandle vs. Lock(BlockId.Pool + BlockId.BlockHandle)
It's interesting that the latter would work when creating a new note with new attachments. I finally figured that out as well: BlockId.BlockHandle was always zero in that case, which is why it always worked.
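Spelled out in C terms, the two readings look roughly like this. It is an illustrative sketch only - the header names are from memory, bid is assumed to be the item BLOCKID obtained via NSFItemInfo, and the C API names the members pool/block where my wrapper says Pool/BlockHandle:

#include <global.h>   /* Notes C API basics (assumed header names) */
#include <pool.h>     /* BLOCKID */
#include <osmem.h>    /* OSLockObject */

void *lock_item_value(BLOCKID bid)
{
    /* What OSLockBlock actually does: lock the pool handle first,
       then advance the locked pointer by the block offset. */
    char *poolBase = (char *) OSLockObject(bid.pool);
    return poolBase + bid.block;

    /* The misreading - offset the handle, then lock:
       return OSLockObject(bid.pool + bid.block);
       It only appeared to work on a freshly created note because
       bid.block happened to be zero there. */
}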

Problem with performance counters on Vista

I'm running into a strange issue on Vista with the performance monitoring API. I'm currently using code that worked fine on XP/2k, based around PdhGetFormattedCounterValue(). I start out using PdhExpandWildCardPath to expand the counters (I'm interested in overall network statistics); the counters I'm looking at are:
\\Network Interface(*)\\Bytes Received/sec
\\Network Interface(*)\\Bytes Sent/sec
\\Processor(_Total)\\% Processor Time
On their first call they return PDH_INVALID_DATA, but I don't think this is a problem, since if I query again I start getting data without the error. The real problem is this: while the processor time counter works exactly as expected, neither of the network interface counters is returning anything - just 0 all the time. I verified using Perfmon that they are reporting data normally, so I'm at a loss as to what might be the issue. I caught this at MS:
http://support.microsoft.com/?scid=kb%3Ben-us%3B287159&x=11&y=9
But I'm not interested in multi-language for my task, so I don't think this is relevant. I will see if I can come up with some basic code showing exactly what I'm doing, but nothing is returning anything strange, and it worked on XP/2k, so I suspect something changed under the hood. Thanks!
It turns out the issue was that the network interface paths are both wildcards, whereas the Processor one (_Total) is already rolled up by performance monitoring. What I didn't realize was that PdhExpandWildCardPath doesn't return something directly usable by PdhAddCounter. By this I mean that if ExpandWildCard returns 3 expanded matches, they come back as null-separated strings - I understood this, but I had assumed that AddCounter would effectively create a counter containing all three. Nope; in reality I needed to break up each path, request it individually from AddCounter, and then roll up the results manually when I get them.
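In code, that works out to something like the rough sketch below - error handling is trimmed, the 64-counter cap and one-second sleep are arbitrary, and only the Bytes Received/sec path is shown:

#include <windows.h>
#include <pdh.h>
#include <stdio.h>
#include <stdlib.h>
#pragma comment(lib, "pdh.lib")

int main(void)
{
    PDH_HQUERY query;
    PDH_HCOUNTER counters[64];
    DWORD count = 0, len = 0, i;
    TCHAR *paths, *p;
    double total = 0.0;

    PdhOpenQuery(NULL, 0, &query);

    /* First call with a zero-length buffer just to learn the required size. */
    PdhExpandWildCardPath(NULL, TEXT("\\Network Interface(*)\\Bytes Received/sec"),
                          NULL, &len, 0);
    paths = (TCHAR *) malloc(len * sizeof(TCHAR));
    PdhExpandWildCardPath(NULL, TEXT("\\Network Interface(*)\\Bytes Received/sec"),
                          paths, &len, 0);

    /* The expansion is a MULTI_SZ: NUL-separated paths ending in a double NUL.
       Each expanded path has to be added as its own counter. */
    for (p = paths; *p != TEXT('\0') && count < 64; p += lstrlen(p) + 1)
        PdhAddCounter(query, p, 0, &counters[count++]);

    PdhCollectQueryData(query);
    Sleep(1000);                 /* rate counters need two samples */
    PdhCollectQueryData(query);

    /* Roll the per-interface instances up by hand. */
    for (i = 0; i < count; i++) {
        PDH_FMT_COUNTERVALUE value;
        if (PdhGetFormattedCounterValue(counters[i], PDH_FMT_DOUBLE,
                                        NULL, &value) == ERROR_SUCCESS)
            total += value.doubleValue;
    }
    printf("Bytes Received/sec (all interfaces): %.0f\n", total);

    free(paths);
    PdhCloseQuery(query);
    return 0;
}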
Hopefully this helps someone else to avoid the same mistake I made with less frustration. ;)

How to find and tail the Oracle alert log

When you take your first look at an Oracle database, one of the first questions is often "where's the alert log?". Grid Control can tell you, but it's often not available in the environment.
I posted some bash and Perl scripts to find and tail the alert log on my blog some time back, and I'm surprised to see that post still getting lots of hits.
The technique used is to look up background_dump_dest from v$parameter. But I have only tested this on Oracle Database 10g.
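For reference, the lookup the scripts perform boils down to this; the alert log itself is then the alert_<SID>.log file in that directory:

-- Ask the instance where background trace/alert files are written.
SELECT value
  FROM v$parameter
 WHERE name = 'background_dump_dest';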
Is there a better approach than this? And does anyone know if this still works in 11g?
I'm sure it will work in 11g; that parameter has been around for a long time.
Seems like the correct way to find it to me.
If the background_dump_dest parameter isn't set, the alert.log will be put in $ORACLE_HOME/RDBMS/trace
Once you've got the log open, I would consider using File::Tail or File::Tail::App to display it as it's being written, rather than sleeping and reading. File::Tail::App is particularly clever, because it will detect the file being rotated and switch, and will remember where you were up to between invocations of your program.
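A minimal File::Tail sketch of that suggestion - the path is illustrative; in practice you would build it from the background_dump_dest lookup:

#!/usr/bin/perl
use strict;
use warnings;
use File::Tail;

# Assumed location; normally derived from background_dump_dest.
my $alert = '/u01/app/oracle/admin/ORCL/bdump/alert_ORCL.log';

my $tail = File::Tail->new(
    name        => $alert,
    maxinterval => 5,    # poll at most every 5 seconds
    interval    => 1,
);

# read() blocks until a new line arrives, then returns it.
while (defined(my $line = $tail->read)) {
    print $line;
}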
I'd also consider locking your cache file before using it. The race condition may not bother you, but having multiple people try to start your program at once could result in nasty fights over who gets to write to the cache file.
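The locking itself is cheap to add - a rough sketch, with the cache file path made up for illustration:

use Fcntl qw(:flock);

# '+>>' opens read/append and creates the file if it does not exist yet.
open( my $cache, '+>>', "$ENV{HOME}/.oracle_alert_log_cache" )
    or die "cannot open cache file: $!";

# Block here until any other copy of the script releases its lock.
flock( $cache, LOCK_EX )
    or die "cannot lock cache file: $!";

# ... read or update the cached alert log location here ...

close($cache);   # closing the handle releases the lock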
However both of these are nit-picks. My brief glance over your code doesn't reveal any glaring mistakes.
