"Oops" message eminating from Apple code - osx-snow-leopard

I have a call, within an NSTextView subclass, that looks like this:
[[self textStorage] replaceCharactersInRange:fullRange withAttributedString:sa];
This call used to work fine and now (after Snow Leopard installation) generates a short message in the console: "Oops". It doesn't crash, it just generates this message and then fails to set the text correctly. The "Oops" message is coming from Apple code, not mine, which is absolutely infuriating.
Can anybody tell me what is going on? Why would the textStorage of an NSTextView EVER generate this message?
I don't know whether it is relevant that when the Oops message is generated, fullRange is equal to (0,0).
...LATER...
Well, I've managed to fix this. This is going to sound crazy. It turns out that the NSTextView I was working with was added to an NSStatusItem as part of an awakeFromNib routine. For whatever reason, Snow Leopard refuses to display the status item until awakeFromNib returns.
When I moved the code for display of the status item into applicationDidFinishLaunching, the problem went away.
I'm crazy, you say? I know, this sounds silly, but try it yourself using sleep(). Prep a statusItem and then sleep() in your awakeFromNib routine. The statusItem won't appear until sleep is over and awakeFromNib returns.
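For reference, a minimal sketch of that experiment (assumes an ordinary NSStatusItem ivar named statusItem, with the pre-ARC retain that was usual on Snow Leopard):

- (void)awakeFromNib {
    statusItem = [[[NSStatusBar systemStatusBar]
        statusItemWithLength:NSVariableStatusItemLength] retain];
    [statusItem setTitle:@"Test"];
    sleep(10); // the status item only appears after this returns
}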

Please file a bug via http://bugreport.apple.com (and append the bug number here).
I could see how Foundation might cause that to happen. That your range is (0,0) seems a bit odd. Are you trying to insert a string at the beginning of the text storage? If so, use -insertAttributedString:atIndex: instead.
If that fixes the Oops, please do still file a bug. Foundation should definitely be more helpful than "Oops"!
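For the insert-at-the-beginning case, that would look like this (sa as in the question; the text storage is an NSMutableAttributedString, so the method is available on it):

[[self textStorage] insertAttributedString:sa atIndex:0];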

Related

Implementing a simple program in Microsemi's SoftConsole

I would like to get into the field of FPGAs a bit. I currently have a PolarFire Everest Dev Board and would like to try something small on it for testing purposes. My current level is very low, i.e. complete beginner. My first working project was a counter that counts in binary to 15 and outputs the value via the LEDs of the board. Now I want to play with RISC-V. Unfortunately I can't find anything on the internet that meets my expectations, and almost nothing is beginner-friendly. My current goal is really just to implement something on the level of a Hello World program in C via SoftConsole. Unfortunately I have no idea how to go about it. Can anyone help me or recommend a good introduction on the internet? Most of the material is either unusable, requires licenses I can't get, or is simply no longer available (which happens to me quite often with PDFs from Microsemi).
Since I don't really know what I could do with it to start, I don't have any code yet that I would like to include. The plan is to create something where I also get feedback from the board that something has been done. Later, once I understand more, it should also manage SRAMs.
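For orientation, a Hello World on a Mi-V soft CPU usually amounts to printing over a UART. Below is a minimal sketch, assuming a Mi-V design with a CoreUARTapb peripheral and Microsemi's bare-metal CoreUARTapb driver; the base address and clock value are placeholders you would take from your own Libero design, and the exact driver names should be checked against the example projects shipped with SoftConsole:

#include "core_uart_apb.h"  /* Microsemi CoreUARTapb driver */

#define SYS_CLK_FREQ    50000000u    /* placeholder: your design's clock */
#define UART_BASE_ADDR  0x70001000u  /* placeholder: from your memory map */
#define BAUD_VALUE      ((SYS_CLK_FREQ / (16 * 115200)) - 1)

UART_instance_t g_uart;

int main(void)
{
    /* Bring up the UART, then print over the board's serial port. */
    UART_init(&g_uart, UART_BASE_ADDR, BAUD_VALUE, DATA_8_BITS | NO_PARITY);
    UART_polled_tx_string(&g_uart, (const uint8_t *)"Hello World from Mi-V!\r\n");
    for (;;) { }  /* nothing else to do */
}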

How can I debug the cause of a 0xc0000417 exit code

I get an exit error code 0xc0000417 (which translates to STATUS_INVALID_CRUNTIME_PARAMETER) from my executable (mixed Fortran/C) and am trying to find out what's causing it. It seems to occur when trying to write to a file, which I infer because the file is created but there's nothing in it. Yet I have the suspicion that's not the /real/ cause. When I disable writing of that file, which is done from C code, it crashes when writing a different file, this time from Fortran code.
The unfortunate thing is: this only happens after the program (a CPU heavy calculation) has finished after having run for ~2-3 days. When I tried to shorten the calculation time by various means to facilitate debugging, the problem did not occur anymore. It almost seemed like the long runtime was crucial for triggering the problem.
I tried running it in Visual Studio 2015, but VS does not break/stop (like it would if e.g. a segfault had happened) despite having turned on breaking at all C++ Exceptions and all Common Language Runtime Exceptions, as was suggested in some other thread.
What I would like VS to do is either break whenever that error code is 'produced' so I can examine the values of variables, or at least give me a stack trace.
I searched intensively but I could not find a satisfactory solution to my problem. In essence, my question is similar to how to debug "Invalid parameter passed to C runtime function"? but the problem does not occur with the linux version of my program, so I'm looking for directions on how to debug it on Windows, either with Visual Studio or some other tool.
Edit:
Sadly, I was not able to find any convenient means of breaking automatically when the error occurs. So I went the manual way: setting a breakpoint (in VS) near the supposed crash and stepping through the code.
It turned out that I got a NULL pointer from fopen:
myfile = fopen("somedir\\somefile.xml", "w");
despite the file being created. But when trying to write to that file (via the NULL handle!), a segfault occurred. Strangely, it seems I only get a NULL pointer from fopen when the process has a long lifetime. But that's off-topic for this question.
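In hindsight, a guard like this (a sketch; open_output and the "w" mode are illustrative) would have turned the segfault into a diagnosable error message:

#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Open the file for writing, but fail loudly instead of handing back a
   NULL handle to be written through later. */
static FILE *open_output(const char *path)
{
    FILE *f = fopen(path, "w");
    if (f == NULL) {
        fprintf(stderr, "fopen(\"%s\") failed: %s (errno=%d)\n",
                path, strerror(errno), errno);
    }
    return f;
}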
Edit 2:
Checking the global errno variable gave error code 22, which again translates to an invalid argument. However, the argument to fopen is not invalid, as I verified with the debugger and by the fact that the file is actually created correctly (with 0 bytes length). Now I think that error code 22 is simply misleading, because when I check (via a watch in VS) $err, hr I get:
0x000005aa ERROR_NO_SYSTEM_RESOURCES : Insufficient system resources exist to complete the requested service.
Just like the case mentioned here, I have plenty of HD space (1.4 GB) and plenty of free RAM (3.2 GB), and I fear it is something not directly caused by my program but due to broken Windows design of file handling (it does not happen under Linux).
Edit 3: OK, it seems it is not Windows itself that's the culprit but rather the Intel Fortran compiler I'm using. Every time my program executes a formatted write statement, a Mutant (Windows-speak for mutex) handle is leaked. Using WinDbg with !htrace -enable, then running a bit further, breaking, and issuing !htrace -diff gives loads of these backtraces:
0x00000000777ca25a: ntdll!NtCreateMutant+0x000000000000000a
0x000007fefd54a1b7: KERNELBASE!CreateMutexExW+0x0000000000000057
0x000007fefd551d60: KERNELBASE!CreateMutexExA+0x0000000000000050
0x000007fedfab24db: libifcoremd!for_lge_ssll+0x0000000000001dcb
0x000007fedfb03ed6: libifcoremd!for_write_int_fmt+0x0000000000000056
0x000000014085aa21: myprog!MY_ROUTINE+0x0000000000000121
During the program runtime these mutant handles seem to accumulate until they exhaust all handle resources (16711680 handles) so that there's nothing left for file handles.
Edit 4: It's a bug in the Intel Fortran runtime libraries that has been fixed in a later version (see here). Using the patched version of libifcoremd.dll fixes the problem, i.e. the handle count no longer increases during formatted writes.
It could be too many open files or leaked (not closed) handles. You can check that with e.g. Process Explorer (I think you can see the number of handles the process holds with it).
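You could also have the program log its own handle count as it runs; here's a minimal sketch using the Win32 call GetProcessHandleCount (the logging scheme itself is just an example):

#include <windows.h>
#include <stdio.h>

/* Print the number of handles this process currently holds; calling it
   once per iteration of the main loop will show whether the count grows. */
static void log_handle_count(void)
{
    DWORD count = 0;
    if (GetProcessHandleCount(GetCurrentProcess(), &count)) {
        fprintf(stderr, "open handles: %lu\n", (unsigned long)count);
    }
}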

Pixels are not being drawn to screen and unable to quit. Maybe a Mac OS X-related bug?

I'm following this tutorial on SDL. I tried translating the code to C, which can be viewed here. As you can guess, I want to use this code in an implementation of Conway's Game of Life. The code compiles and runs, and a white screen is drawn. But I am not getting any pixels drawn to the screen when I click around, and the program does not quit when I press the red OSX quit button or select quit from the menu.
If you look at the comments section of the tutorial, you can see that I asked the author of the tutorial the same question. He replied saying that the code works for him and that my problem may be due to a bug with OSX. I tried asking the forum that he recommended, but posting to the forum requires special user privileges that I haven't yet been granted. Sticking SDL_GetError() in various places in my program doesn't change the behavior of the program either.
A more widely known (but incomplete) tutorial is located here. I don't know if that helps.
Use SDL_PollEvent() instead of SDL_WaitEvent(). The reason for this is that WaitEvent waits until an event happens (pausing the program) while PollEvent does not. Second, events should be handled in a loop, i.e. while (SDL_PollEvent(&e)) {...}, not SDL_WaitEvent(&e); ...
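A minimal sketch of that loop (assuming SDL2; the same loop structure applies under SDL 1.2, and the cell toggling and drawing are left as comments):

#include <SDL.h>

int main(int argc, char *argv[])
{
    if (SDL_Init(SDL_INIT_VIDEO) != 0) return 1;
    SDL_Window *win = SDL_CreateWindow("Life", SDL_WINDOWPOS_CENTERED,
                                       SDL_WINDOWPOS_CENTERED, 640, 480, 0);
    int quit = 0;
    SDL_Event e;
    while (!quit) {
        /* Drain all pending events each frame without blocking. */
        while (SDL_PollEvent(&e)) {
            if (e.type == SDL_QUIT)               /* close button / Cmd-Q */
                quit = 1;
            else if (e.type == SDL_MOUSEBUTTONDOWN) {
                /* toggle the cell at (e.button.x, e.button.y) here */
            }
        }
        /* draw the current generation here */
    }
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}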
The best thing for you to do is follow the other tutorials that I linked to, since the tutorial you are using does not seem to be very well written. (You can just quickly read through the stuff you already know.)

How to view variables during program execution

I am writing a relatively simple C program in Visual C++ and have two global variables whose values I would like to know as the program runs. The values don't change once they are assigned, but my programming ability is not enough to quickly construct a text box that displays the values (I'm working in Win32), so I'm looking for a quick routine that can export the values to a text file so I can check they are what they ought to be. The values are doubles.
I was under the impression that this was the purpose of the debugger, but for me the debugger never runs: it always fails with 'file not found'.
Any ideas how I can easily check the value of a global variable (double) in a Win32 app?
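For what it's worth, the export-to-a-text-file routine I have in mind would presumably look something like this (a sketch; g_value1/g_value2 stand in for the real globals):

#include <stdio.h>

double g_value1, g_value2;  /* the two globals */

/* Call once after the values have been assigned. */
static void dump_globals(void)
{
    FILE *f = fopen("globals.txt", "w");
    if (f != NULL) {
        /* %.17g prints enough digits to reconstruct each double exactly. */
        fprintf(f, "g_value1 = %.17g\ng_value2 = %.17g\n", g_value1, g_value2);
        fclose(f);
    }
}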
Get the debugger working. You should maybe post another question with information about why it won't work - with as much info as possible.
Once you have done that, set a breakpoint, and under Visual C++ (I just tried with 2010), hover over the variable name.
You could also use the watch window to enter expressions and track their values.
If your debugger isn't working, try using printf statements wherever the program iterates.
Sometimes this can be a useful way of watching a variable without having to step through the code.
If however you wish to run through the program in debug mode set a breakpoint as suggested (in VS2010 you can right click on the line you want to set a breakpoint on).
Then you just need to go to Toolbars -> Debug Toolbar.
I usually like to put the printing inside #ifdef _DEBUG (or write an appropriate macro, or even extra code) and send to the output everything that can help me track what the program is doing. Since your variables never change, that is what I would do here.
However, flooding the console with lots of values is bad imo, and in such cases I would rely on assertions and the debugger - you should really see why it's not working.
I've done enough Python and Ruby to tell you that debugging a complex program when all you have is printf, although doable, is extremely frustrating and takes way longer than it should.
Finally, since you mention your data type is double (please make sure you have a good reason for not using floats instead): if you add assertions, remember that == is to be avoided unless you know 100% that == is what you really want (which is unlikely if your data comes from calculations).
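A minimal sketch of both ideas (DEBUG_PRINT and nearly_equal are made-up names, not a standard API):

#include <math.h>
#include <stdio.h>

/* Debug-only printing: compiles away entirely in release builds. */
#ifdef _DEBUG
#define DEBUG_PRINT(...) fprintf(stderr, __VA_ARGS__)
#else
#define DEBUG_PRINT(...) ((void)0)
#endif

/* Compare doubles with a relative tolerance instead of ==. */
static int nearly_equal(double a, double b, double eps)
{
    return fabs(a - b) <= eps * fmax(fabs(a), fabs(b));
}

Usage would be along the lines of DEBUG_PRINT("x = %.17g\n", x); and assert(nearly_equal(x, expected, 1e-9));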

Bug fixed with four nops in an if(0), world no longer makes sense

I was writing a function to figure out if a given system of linear inequalities has a solution, when all of a sudden it started giving the wrong answers after a seemingly innocuous change.
I undid some changes, re-did them, and then proceeded to fiddle for the next two hours, until I had reduced it to absurdity.
The following, inserted anywhere into the function body, but nowhere else in the program, fixes it:
if (0) {
    __asm__("nop\n");
    __asm__("nop\n");
    __asm__("nop\n");
    __asm__("nop\n");
}
It's for a school assignment, so I probably shouldn't post the function on the web, but this is so ridiculous that I don't think any context is going to help you. And all the function does is a bunch of math and looping. It doesn't even touch memory that isn't allocated on the stack.
Please help me make sense of the world! I'm loath to chalk it up to GCC, since the first rule of debugging is not to blame the compiler. But heck, I'm about to. I'm running Mac OS 10.5 on a G5 tower, and the compiler in question identifies itself as 'powerpc-apple-darwin9-gcc-4.0.1', but I'm thinking it could be an impostor...
UPDATE: Curiouser and curiouser... I diffed the .s files with nops and without. Not only are there too many differences to check, but with no nops the .s file is 196,620 bytes, and with them it's 156,719 bytes. (!)
UPDATE 2: Wow, should have posted the code! I came back to the code today, with fresh eyes, and immediately saw the error. See my sheepish self-answer below.
Most times when you modify the code inconsequentially and it fixes your problem, it's a memory corruption problem of some sort. We may need to see the actual code to do proper analysis, but that would be my first guess, based on the available information.
It's faulty pointer arithmetic, either directly (through a pointer) or indirectly (by going past the end of an array). Check all your arrays. Don't forget that if your array is
int a[4];
then a[4] doesn't exist.
What you're doing is overwriting something on the stack accidentally. The stack contains locals, parameters, and the return address from your function. You might be damaging the return address in a way that the extra nops cure.
For example, if you have some code that is adding something to the return address, inserting those extra 16 bytes of nops would cure the problem, because instead of returning past the next line of code, you return into the middle of the nops.
One way you might be adding something to the return address is by going past the end of a local array or a parameter, for example
int a[4];
a[4]++;
I came back to this after a few days busy with other things, and figured it out right away. Sorry I didn't post the code sooner, but it was hard coming up with minimal example that displayed the problem.
The root problem was that I left out the return statements in the recursive function. I had:
bool function() {
    /* lots of code */
    function();
}
When it should have been:
bool function() {
    /* lots of code */
    return function();
}
This worked because, through the magic of optimization, the right value happened to be in the right register at the right time, and made it to the right place.
The bug was originally introduced when I broke the first call out into its own special-cased function. At that point, the extra nops were the difference between this first case being inlined into the general recursive function or not.
Then, for reasons that I don't fully understand, inlining this first case led to the right value not being in the right place at the right time, and the function returning junk.
Does it happen in both debug and release builds (with symbols and without)? Does it behave the same way under a debugger? Is the code multithreaded? Are you compiling with optimizations? Can you try another machine?
Can you confirm that you are indeed getting different executables when you add the if(0) {nops}? I don't see nops on my system.
$ gcc --version
powerpc-apple-darwin9-gcc-4.0.1 (GCC) 4.0.1 (Apple Inc. build 5490)
$ cat nop.c
void foo()
{
if (0) {
__asm__("nop");
__asm__("nop");
__asm__("nop");
__asm__("nop");
}
}
$ gcc nop.c -S -O0 -o -
.
.
_foo:
stmw r30,-8(r1)
stwu r1,-48(r1)
mr r30,r1
lwz r1,0(r1)
lmw r30,-8(r1)
blr
$ gcc nop.c -S -O3 -o -
.
.
_foo:
blr
My guess is stack corruption -- though gcc should optimize anything inside an if(0) out, I would have thought.
You could try sticking a big array on the stack in your function and see if that also fixes it -- that would also implicate stack corruption.
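Something like this at the top of the suspect function would do (a sketch; the size is arbitrary, and volatile keeps the compiler from optimizing the array away):

/* Hypothetical canary buffer: shifts the stack frame layout and absorbs
   small overruns. If this also "fixes" the bug, suspect stack corruption. */
volatile char stack_padding[1024];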
Are you sure you're running what you think you're running? (dumb question, but it happens.)
Looks like you will need to put in some hard work and elbow grease.
Your problem sounds similar to something I debugged in the past, where my app was running normally... when out of nowhere it jumped to a different part of the app and the call stack got completely messed up (however, that was embedded programming)!
It sounds like you are spending your time "thinking" about "what should be happening" ... when you should be "looking" at "what is actually happening". A lot of the times the hardest bugs are things that you would never think "should happen".
I would approach the problem like so:
1. Break out your favorite debugger.
2. Step through your code, watching the call stack and local variables, and look for suspicious activity.
3. Make the system fail.
4. Focus in on where the system is failing.
5. Iterate on your code changes:
   - making code changes that will "make the system fail"
   - running/debugging and watching
If it runs fine you are looking/trying the wrong thing and you need to try something else. If you make it fail then you have made progress towards finding the bug.
If you don't know where or how the system fails you will not be able to solve the problem.
This will be a good opportunity to build your debugging skills. For more help on building them, check out the book "9 rules for debugging".
Concrete suggestions:
If you think it is the compiler, then run a different platform/OS/compiler.
Once you have ruled out the platform/OS/compiler, then try restructuring the code. Look for the "clever" code parts and see if they are actually doing what the code meant to do... maybe the clever solution wasn't actually clever and is doing something else.
I am the author of "Debugging" so kindly referenced above by Trevor Boyd Smith. He has it right -- the key rules here are #2 Make It Fail (which you seem to be doing okay), and #3 Quit Thinking and Look. The conjectures above are very good (demonstrating mastery of rule #1 -- Understand the System -- in this case the way code size can change a bug). But actually watching it fail with a debugger will show you what's actually happening without guesswork.
Break out that one function into a separate .c file (or .cpp or whatever). Compile just that one file with the nops and without them, to .s files and compare them.
Try an old version of gcc. Go back 5 or 10 years and see if things get stranger.
