I'm working with the "igraph" package and the "evalWithTimeout" function from "R.utils".
I'm trying to do maximal clique detection, which I know can get terrible (as bad as O(3^(n/3)) maximal cliques, where n is the number of nodes), so I wrapped the call in a timeout, but the timeout gets ignored.
Minimal code to reproduce the problem:
library(igraph)
library(R.utils)
g<-erdos.renyi.game(1e6,1e7,type="gnm")
o<-evalWithTimeout(maximal.cliques(g),timeout=1)
This should stop after one second; however, it doesn't. I wonder if this is due to the use of underlying C/Fortran code (which is what maximal.cliques uses). If so, how can I solve this?
This won't work with most C code, because R cannot interrupt C code unless the C code cooperates. evalWithTimeout calls setTimeLimit, and this is from the manual page for setTimeLimit:
Time limits are checked whenever a user interrupt could occur. This will happen frequently in R code and during Sys.sleep, but only at points in compiled C and Fortran code identified by the code author.
It is not trivial to make C code interruptible, because you need to deallocate all memory that was allocated up to the interruption point.
I suggest opening an issue at https://github.com/igraph/igraph/issues and requesting that maximal.cliques be made interruptible.
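For context, "cooperating" on the C side means calling back into R's interrupt check at safe points. A minimal sketch of the mechanism (not igraph's actual code) might look like this:

/* Sketch only: how C code called from R can cooperate with interrupts.
   R_CheckUserInterrupt() is part of R's C API (declared in R_ext/Utils.h);
   it jumps back to R when an interrupt is pending or a time limit set by
   setTimeLimit has expired, so any memory allocated here must be protected
   or freed before each check. */
#include <R_ext/Utils.h>

void slow_search(int n)
{
    for (int i = 0; i < n; i++) {
        if (i % 1024 == 0)          /* check periodically, not on every iteration */
            R_CheckUserInterrupt();
        /* ... one unit of the expensive clique-enumeration work ... */
    }
}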
Related
I get an exit error code 0xc0000417 (which translates to STATUS_INVALID_CRUNTIME_PARAMETER) in my executable (mixed Fortran/C) and am trying to find out what's causing it. It seems to occur when trying to write to a file, which I infer because the file is created but there's nothing in it. Yet I suspect that's not the real cause. When I disable writing of that file, which is done from C code, it crashes when writing a different file, this time from Fortran code.
The unfortunate thing is: this only happens after the program (a CPU-heavy calculation) has finished, after having run for ~2-3 days. When I tried to shorten the calculation time by various means to facilitate debugging, the problem no longer occurred. It almost seemed like the long runtime was crucial for triggering the problem.
I tried running it in Visual Studio 2015, but VS does not break/stop (like it would if, e.g., a segfault had happened) despite my having turned on breaking on all C++ Exceptions, as was suggested in another thread, and on all Common Language Runtime Exceptions.
What I would like VS to do is to break whenever that error code is 'produced', so that I can examine the values of variables, or at least get a stack trace.
I searched intensively but could not find a satisfactory solution to my problem. In essence, my question is similar to how to debug "Invalid parameter passed to C runtime function"?, but the problem does not occur with the Linux version of my program, so I'm looking for directions on how to debug it on Windows, either with Visual Studio or some other tool.
Edit:
Sadly, I was not able to find any convenient means of breaking automatically when the error occurs, so I went with the manual way of setting a breakpoint (in VS) near the supposed crash and stepping through the code.
It turned out that I got a NULL pointer from fopen:
myfile = fopen("somedir\\somefile.xml", "w"); /* fopen needs a mode; "w" assumed here */
despite the file being created. But when trying to write to that file (via the NULL handle!), a segfault occurred. Strangely, it seems I only get a NULL pointer from fopen when the process has had a long lifetime. But that's off-topic for this question.
Edit 2:
Checking the global errno variable gave error code 22, which again translates to an invalid argument. However, the argument to fopen is not invalid, as I verified with the debugger and by the fact that the file is actually created correctly (with 0 bytes length). Now I think that error code 22 is simply misleading, because when I check $err,hr (via a watch in VS) I get:
0x000005aa ERROR_NO_SYSTEM_RESOURCES : Insufficient system resources exist to complete the requested service.
Just like mentioned here, I have plenty of disk space (1.4 GB) and plenty of free RAM (3.2 GB), and I fear it is something not caused directly by my program but something due to Windows' broken design of file handling (it does not happen under Linux).
Edit 3: OK, it seems it is not Windows itself that's the culprit but rather the Intel Fortran compiler I'm using. Every time I do a formatted write statement in my program, a Mutant (Windows-speak for mutex) handle is leaked. Using WinDbg and !htrace -enable, then running a bit further, breaking, and issuing !htrace -diff gives loads of these backtraces:
0x00000000777ca25a: ntdll!NtCreateMutant+0x000000000000000a
0x000007fefd54a1b7: KERNELBASE!CreateMutexExW+0x0000000000000057
0x000007fefd551d60: KERNELBASE!CreateMutexExA+0x0000000000000050
0x000007fedfab24db: libifcoremd!for_lge_ssll+0x0000000000001dcb
0x000007fedfb03ed6: libifcoremd!for_write_int_fmt+0x0000000000000056
0x000000014085aa21: myprog!MY_ROUTINE+0x0000000000000121
During the program's runtime these Mutant handles seem to accumulate until they exhaust all handle resources (16,711,680 handles), so that there's nothing left for file handles.
Edit 4: It's a bug in the Intel Fortran runtime libraries that has been fixed in a later version (see here). Using the patched version of libifcoremd.dll fixes the problem, i.e. the handle count no longer increases during formatted writes.
It could be too many open files or leaked (not closed) handles. You can check that with e.g. Process Explorer (I think you could see the number of handles in the process with it).
I am trying to use MATLAB Coder to convert code from Matlab to a MEX file. If I have a code snippet of the following form:
x = zeros(a,1)
x(a+1) = 1
then in Matlab this will resize the array to accommodate the new element, while in the MEX file this will give an "Index exceeds matrix dimensions" error. I expect there are lots of places in the code where this is happening.
What I want to do is run the MATLAB version of the code (without using the coder) but have MATLAB generate an error or warning whenever it resizes an array because I assign to something outside the bounds. (I could just use the MEX file and see where the errors pop up, but this requires rebuilding the whole MEX file using MATLAB Coder every time I change the code at all, which takes a while.)
Is there a way to do this? Is there any kind of setting in MATLAB that will turn off the "automatically resize if you assign to an out-of-bounds index" behaviour, or give a warning when this happens?
EDIT: As of Matlab 2015b, Coder now has runtime error checking as an option (from the Matlab release notes):
In R2015b, generated standalone libraries and executables can detect and report run-time errors such as out-of-bounds array indexing. In previous releases, only generated MEX detected and reported run-time errors.
By default, run-time error detection is enabled for MEX. By default, run-time error detection is disabled for standalone libraries and executables.
To enable run-time error detection for standalone libraries and executables:
At the command line, use the code configuration property RuntimeChecks.
cfg = coder.config('lib'); % or 'dll' or 'exe'
cfg.RuntimeChecks = true;
codegen -config cfg myfunction
Using the MATLAB Coder app, in the project build settings, on the Debugging tab, select the Generate run-time error checks check box.
The generated libraries and executables use fprintf to write error messages to stderr and abort to terminate the application. If fprintf and abort are not available, you must provide them. Error messages are in English.
See Run-Time Error Detection and Reporting in Standalone C/C++ Code and Generate Standalone Code That Detects and Reports Run-Time Errors.
Original answer:
The answer in the comments regarding declaring a class subclassed from double, wherein the subsasgn method is overloaded to disallow growing on assignment, would be one good way to do it.
Another easy way to do it is to sprinkle assert commands throughout the code (in each loop iteration or at the bottom of a function) to assert that the size has not increased over the allocated size.
For instance, if you had the code in a format of:
x = zeros(a,1);
x(a+1) = 1;
% ... lots of other operations
if coder.target('MATLAB')
    assert(isequal(size(x), [a,1]), 'x has been indexed out of bounds')
end
this would let you have the assertion fail if any value were assigned that extended the array.
To make this a bit tidier, you might even make a function that bounds checked all the variables you care about, again wrapping the coder.target if statement around it. Then you could sprinkle this throughout your code.
It's not as elegant as overloading the double class, but on the other hand it doesn't add any overhead to the compiled code at all. It also won't give you errors exactly when the overrun happens, but it will give you confidence that the code is working well in a variety of situations.
Another way that you can be more confident of assignments is to do your own bounds checking on the assignment in situations where this may be appropriate. A common problem I've seen in assignment goes something like this. We have an allocated array and are copying in data from another array with a vector assignment. For instance, consider the following situation:
t = char(zeros(5,7)); % Allocate a 5 by 7 char array
tempstring = 'hello to anyone'; % Here's a string we want to put into it.
t(1, 1:numel(tempstring)) = tempstring; % A valid assignment in MATLAB
>> size(t)
ans =
5 15
Uh oh, exactly what you're concerned about in the question has happened: the t array has been automatically resized during the assignment, which works in MATLAB, but in Coder-generated code will cause a segfault or MEX error. An alternative is to use the power of the end function to keep the assignment tidy (but truncated). If we change the assignment to:
t(1, 1:min(end, numel(tempstring))) = tempstring(1:min(end, size(t, 2)));
The size of t will remain unchanged, but the assignment will be truncated. The use of end allows bounds checking during the assignment. For some situations this can be a nice way of handling the issue and will give you confidence that the bounds will never be exceeded, but obviously in some situations this is very undesirable (and won't give you error messages in MATLAB).
Another helpful tool MATLAB provides is in the editor itself. If you use the %#codegen tag in your code, it will signal the editor's syntax checker to highlight various code generation problems, including places where you are obviously increasing the size of an array by indexing. This can't catch every situation, but it's a good help.
One last note. As mentioned in the question, a MEX file generated by Coder will give you an "Index exceeds matrix dimensions" error right at the time of the assignment, and will gracefully exit and even tell you the original line of code where the error occurred. A C library generated out of Coder has no such nice behavior or bounds checking and will segfault out completely with no diagnostics. The intermediate answer is to do exactly what you're doing, which is to run the code as a MEX. That's not very helpful to your question (as you say, rebuilding the MEX can take time), but for those of us who code for the cold, cruel world of external C code, the intermediate test of being able to run the MEX to find these errors is a lifesaver.
The bottom line is this is a divergence in behavior between MATLAB and Coder generated C code, and it can be a source of significant problems. In my own code, I am very careful with array access and growth for exactly this reason. It's an area where I'd like to see improvement in the Coder tool itself, but there are ways to be very careful when writing MATLAB code targeted for Coder.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
What kinds of errors should I expect with a computer game written in C, and how should I handle them? By computer game I mean a program where there is no danger of any kind to human lives or "property".
I would like to add as little error-handling code as necessary, to keep everything as clear and simple as possible. For example, I do not want to do this, because this is much simpler and sufficient for a game.
Up to now I have thought about this:
Error: out-of-memory when calling malloc.
Handling: Print error message and call exit(EXIT_FAILURE); (like this)
Error: A programming error, i.e. something which would work if implemented correctly.
Handling: Use assert to detect (which aborts the program if failed).
Error: Reading a corrupted critical file (e.g. game resource).
Handling: Print error message and call exit(EXIT_FAILURE);
Error: Reading a corrupted non-critical file (e.g. load a saved game).
Handling: Show message to user and ask to load another file.
Do you think this is a reasonable approach? What other errors should I expect, and what is a reasonable minimal approach to handling them?
You can expect at least those errors to happen that are mentioned in the documentation of the libraries you use. For a C program, that typically means at least libc.
Check the ERRORS section of the man-pages for the functions you'd be using.
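For instance, most libc calls report their documented error causes through errno; a minimal sketch of acting on that (the file name is only illustrative) could be:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("savegame.dat", "r");   /* illustrative file name */
    if (f == NULL) {
        /* errno identifies which of the ERRORS listed in the man-page occurred */
        fprintf(stderr, "could not open save file: %s\n", strerror(errno));
        return EXIT_FAILURE;
    }
    /* ... read the file ... */
    fclose(f);
    return EXIT_SUCCESS;
}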
I'd also think this over:
I do not want to do this, because this is much simpler and sufficient for a game.
Imagine you'd fought your way through a dozen game levels and then suddenly the screen is gone with an odd OOM*1 error message. And ... you didn't save! DXXM!
*1 Out-Of-Memory
As I've already stated in the comment I think this is a very broad question. However, it's Xmas and I'll try and be helpful (lest I upset Santa).
The general best practices have been given in the answers posted by #alk and #user2485710. I will try and give a generic boiler-plate for error handling as I see it in C.
You can't guard against everything without writing perfect code. Perfect code is unreachable (kind of like infinity in calculus) though you can try and get close.
If you try to put too much error handling code in, you will be affecting performance. So, let me define what I will call a simple function and a user function.
A user function is a function that can return an error value, e.g. fopen.
A simple function is a function that cannot return an error value, e.g.:
long add(int a, int b)
{
    long rv = a; // #alk - This way it shouldn't overflow. :P
    return rv + b;
}
Here are a few rules to follow:
All calls to user functions must handle the returned errors.
All calls to simple functions are assumed safe, so no error handling is needed.
If a simple function's parameter is restricted (e.g. an int parameter that must be between 0 and 9), use an assert to ensure its validity (unless the value is the result of user input, in which case you should either handle it or propagate it, making this a user function).
If a user function's parameter is restricted and it doesn't cause an error, do the same as above. Otherwise, propagate it without additional asserts.
Just like your malloc example, you can wrap your user functions with code that gracefully exits your game, thereby turning them into simple functions.
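A sketch of such a wrapper for malloc (the name xmalloc is only a convention used here for illustration) could be:

#include <stdio.h>
#include <stdlib.h>

/* Wraps malloc so callers never see NULL: on failure we report the error
   and exit, which turns malloc into a "simple" function for the caller. */
void *xmalloc(size_t size)
{
    void *p = malloc(size);
    if (p == NULL) {
        fprintf(stderr, "out of memory (requested %zu bytes)\n", size);
        exit(EXIT_FAILURE);
    }
    return p;
}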
This won't remove all errors but should help reduce them whilst keeping performance in mind. Testing should reduce the remaining errors to a minimum.
Forgive me for not being more specific, however, the question seems to ask for a generic method of error handling in C.
In conclusion I would add that testing, whether unit testing or otherwise, is where you make sure that your code works. Error handling isn't something you can plan for in its entirety because some possible errors will only be evident once you start to code (like a game not allowing you to move because you managed to get yourself stuck inside a wall which should be impossible but was allowed because of some strange explosive mechanics). However, testing can and should be planned for because that will reveal where you should spend more time handling errors.
My suggestions are about:
turning on the compiler's flags for raising errors and warnings, making your compiler as pedantic as possible; -Wall, -Werror and -Wextra, for example, are a good start for both clang and gcc (an example invocation follows this list)
be sure that you know what undefined behaviour means and what scenarios can possibly trigger UB; the compiler doesn't always help, even with all the warnings turned on
make your program modular, especially when it comes to memory management and the use of malloc
be sure that your compiler and your standard library of choice both support the C standard that you pick
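As a concrete starting point for the flags and the standard selection (the source and output names here are just placeholders), an invocation could look like:
gcc -std=c11 -Wall -Wextra -Werror -pedantic -o mygame main.c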
I am writing a relatively simple C program in Visual C++ and have two global variables whose values I would like to know as the program runs. The values don't change once they are assigned, but my programming ability is not enough to quickly construct a text box that displays the values (I'm working in Win32), so I am looking for a quick routine that can perhaps export the values to a text file so I can look at them and check they are what they ought to be. The values are 'double'.
I was under the impression that this was the purpose of the debugger, but for me the debugger doesn't run, as 'file not found' is always the case.
Any ideas how I can easily check the value of a global variable (double) in a Win32 app?
Get the debugger working. You should maybe post another question with information about why it won't work - with as much info as possible.
Once you have done that, set a breakpoint, and under Visual C++ (I just tried with 2010), hover over the variable name.
You could also use the watch window to enter expressions and track their values.
If your debugger isn't working, try using printf statements wherever the program iterates.
Sometimes this can be a useful way of watching a variable without having to step into it.
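If you'd rather collect the values in a file (as the question asks) than watch the console, the same idea can be wrapped in a small helper; a rough sketch (the variable and file names are made up) is:

#include <stdio.h>

double g_first = 0.0, g_second = 0.0;   /* placeholder names for the two globals */

/* Append the current values to a log file so they can be inspected afterwards. */
void dump_globals(const char *where)
{
    FILE *log = fopen("debug_values.txt", "a");
    if (log != NULL) {
        fprintf(log, "%s: g_first=%.17g g_second=%.17g\n", where, g_first, g_second);
        fclose(log);
    }
}

/* Call dump_globals("after init"); right after the values have been assigned. */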
If, however, you wish to run through the program in debug mode, set a breakpoint as suggested (in VS2010 you can right-click on the line where you want to set a breakpoint).
Then you just need to go to Toolbars -> Debug Toolbar.
I usually like to use #ifdef _DEBUG (or write an appropriate macro or even extra code) to do the printing, and send to the output everything that can help me track what the program is doing. Since your variables never change, I would do so here.
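A minimal sketch of that kind of macro (the name DEBUG_PRINT is just illustrative) would be:

#include <stdio.h>

#ifdef _DEBUG
#define DEBUG_PRINT(...) fprintf(stderr, __VA_ARGS__)   /* active only in debug builds */
#else
#define DEBUG_PRINT(...) ((void)0)                      /* compiles away in release builds */
#endif

/* Usage: DEBUG_PRINT("x = %.17g\n", x); */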
However, flooding the console with lots of values is bad imo, and in such cases I would rely on assertions and the debugger - you should really see why it's not working.
I've done enough Python and Ruby to tell you that debugging a complex program when all you have is printf, although doable, is extremely frustrating and takes way longer than it should.
Finally, since you mention your data type is double (please make sure you have a good reason for not using floats instead): in case you add some assertions, remember that == is to be avoided unless you know 100% that == is what you really want (which is unlikely if your data comes from calculations).
It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form.
Closed 10 years ago.
I had my code working earlier in the day on the unix machine, but when compiled under windows it gave me completely strange and incorrect output.
Since our code is going to be marked based on compilation on unix, I thought, hey, that's good enough. But I just finished refactoring my code (basically just adding comments, getting rid of variables that were never used, and removing some functions I had written to test the program), and now suddenly my code seems to give the proper output on windows and the wrong output on unix.
Note that I have done nothing to modify the functionality of the code.
After spending so many hours this week banging my head against segfault errors, this last-minute bug is going to put it all to waste. What am I supposed to do when the bug appears seemingly at random?
Edit: The program is supposed to read a file similar to an HTML file and print out the tables. I'm loading the data of each individual cell onto a node in a linked list and then printing out the info based on an algorithm. The output is working fine on windows now but not on unix. I don't even know what part of the code I need to look at, since I have no idea what's causing this.
Based on the amount of information you supplied (next to none), the best guess is to look for uninitialized variables. That will produce different output on different platforms, and is a common beginner mistake in C.
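A contrived example of how this can produce different results on different platforms:

#include <stdio.h>

int main(void)
{
    int count;              /* never initialized: reading it is undefined behaviour */
    int total = count + 1;  /* may happen to "work" on one platform and print garbage on another */
    printf("%d\n", total);
    return 0;
}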
I suggest you use gdb to debug your code and check where the segmentation fault arises. That will give you a good hint of where to start looking, even though you don't remember having made any modification.
There is plenty of documentation on the web.
These are the basics:
shell> gdb myprogram
(gdb) run           # run the program until it crashes
(gdb) backtrace     # lists the stack frames at the point where the segmentation fault arises
(gdb) frame 2       # select any frame you want (e.g. 2)
(gdb) print number  # print variables to poke around
There are a lot of features for gdb. I think this will give you a hint quickly.
Don't forget to use a version control system next time. It's a safe and nice way of keeping your code organized and clean and, of course, of avoiding these terrible accidents.
(SVN or Git are cool enough.)
Step 1: Make a copy of everything
Copy the entire project somewhere. Make a note of what state the project was in when you made that copy, and the date and time. DO NOT edit that copy. You may even make the files unwritable if you want. You need to be able to see what you have changed as well as go back to it. Even though the program does not currently work on Unix, it does work under Windows, so you know that it has some merit and is close to being useful to turn in.
When I get upset at a program I am writing or at the compiler for not understanding it (this happens a lot less now than it did 10 years ago), I tend to lose track of everything I am changing, so changing it back becomes difficult. Using some type of version control (even just keeping extra copies around) will help you keep track of what you have changed, so when you make a mistake you can unmake it pretty easily. Differencing tools, like diff, are very helpful when you know how to use them. For right now you might want to try:
diff --minimal --side-by-side --ignore-all-space old_file.c new_file.c | less
Hopefully you are using a diff that supports those options, because I think they may be the most helpful for you in the short time that you have. If you find that you need to fit more on the screen and your terminal window is large, you can also add the --width= option and give it the number of characters in a line of your terminal.
Anyway, make and keep lots of copies of your code until you know that you won't need them anymore (and maybe even then).
If you have graphical access see if kdiff3 is available. It may be easier for you to use quickly. The 3 in its name refers to the ability to compare 3 versions of a file at one time (a common starting point and two edited versions of that file) and is useful, but you can learn about that later. It is perfectly able to compare just two versions of a file and produce decent output.
Step 2: Don't ignore warnings
I suggest that you compile it with the highest warning level possible for your compiler and DO NOT ignore any warnings. If you already have warnings without telling the compiler to issue more warnings, then examine those first. Warnings are there for a reason, and only occasionally should you ever encounter code that produces warnings that should just be ignored (and even then I usually add a comment about the expected type of warning and why it is not an error). With gcc you can add the -Wall option to the compile command to issue all warnings.
gcc -Wall my_program.c -o my_program
Some may not make sense to you, but you can at least look at the code and see what might be unclear about it in the vicinity of the warning line.
Step 3: Use simple lines of code
Something that will make warnings easier to understand is using lines of code that are very simple to understand. Trying to fit too much functionality into one line of code makes any warning or error message about that line much more difficult to understand.
Step 4: Use temporary variables
Temporary variables don't necessarily mean "my program uses more memory", but they do often mean the compiler gives more meaningful warnings, because the data types of the variables in expressions are much clearer.
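A small illustration (the names and numbers are made up) of how temporaries make a dense line easier for the compiler, and for you, to talk about:

#include <stdio.h>

int main(void)
{
    double values[3] = {1.5, 2.5, 3.5};
    double scale = 10.0, offset = 0.25;
    int count = 3, i = 1;

    /* Dense: a warning about a conversion on this line is hard to pin down. */
    int total_dense = (int)(scale * (values[i] + offset) / count);

    /* With temporaries: each step has a clear name and type, so warnings
       (and debugger watches) point at something readable. */
    double shifted = values[i] + offset;
    double scaled  = scale * shifted / count;
    int total      = (int)scaled;

    printf("%d %d\n", total_dense, total);
    return 0;
}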
Step 5: Use functions
This is just a continuation of the philosophy from 3 and 4. Functions make things easier to understand. They also make it so that often when you find an error and fix it you don't have to worry about having copies of the erroneous code elsewhere in the program that also needs to be fixed (though you should still search for similar code just to be sure).
Step 6: Use assert
There is a macro (like a function, but not quite) called assert that lives in #include <assert.h> and can help you find all kinds of errors by making your program fail earlier than it otherwise would. This sounds bad, but very often (especially with memory-related problems like segmentation faults (SIGSEGV)) programs are in a fatal state well before they die. Using assert helps you move their death to an earlier place so that you can see what their fatal mistake was, rather than just seeing the result of it.
assert takes as its parameter a boolean expression -- any comparison, integer, floating point number, or pointer will do. Anything that you could use as a condition in an if or while will do. If this expression is false (0 or NULL) then your program will die right there and on many systems it will give you a helpful error message about where the assertion that killed the program was located and maybe even what the assertion was. There is another helpful thing that this causes which I'll talk about in a little bit, but for now, to use assert you just do:
assert(x < y);
and if x is not less than y the program will abort (actually call the abort function).
This is helpful for things like:
#include <assert.h>
#include <string.h>

void clear_buffer(char * buffer, unsigned len)
{   /* len should be size_t but I don't want to explain that right now */
    assert(buffer);
    memset(buffer, 0, len);
}
Step 7: Use a debugger
If you have gdb on your Unix system then GREAT. If not, you probably have some other debugger that you can learn how to use. Many Unix C compilers take the -g option to mean "include debugging symbols", so add that to the other options you are passing to the compiler, recompile your program, and then do:
gdb ./myprogram
Which will print some stuff and then prompt you with:
(gdb)
Then you can set break points and all kinds of good stuff, but since you are in a hurry and getting crashes just do:
(gdb) r
Include any arguments after the r that you would be passing to your program when you normally ran it. gdb will then run your program until something odd happens. The something odd, in this case, should be a SIGSEGV (what UNIXes do to your program when it tries to access memory addresses that it shouldn't). gdb will then prompt you with (gdb) again. You can then do:
(gdb) bt
bt stands for back trace and gdb will print out the call stack, meaning all functions that were called to get to the current function. You should see main near the bottom. Look for the first function near the top that is a function you wrote. This is where you need to start trying to find errors. If the top function on the list is not one of yours then try issuing:
(gdb) up
Which will make it examine the previous function on the call stack. Once in one of your functions say:
(gdb) list
And it will show you some code around the area where things are wrong.
To exit gdb you do:
(gdb) quit
And answer Y if it asks you whether you really want to quit.
If you were to use assert and that killed your program then you would not end up with quite as much library stuff on top of the call stack to confuse you.
Sadly, steps 3, 4, and 5 mess up the ability to get good info from diff, so I suggest limiting this programming style to places where you are already having errors or warnings (at least for now).
I hope that this helps.
First of all, we would need your code to see what's going on. But if what you describe is true, then it is most likely that your code contains what's called undefined behavior. Undefined behavior can occur for many reasons, such as crossing array boundaries, incorrectly freeing pointers, etc. So, without code, nothing can be said.
Run it through valgrind.
I can guarantee you will find your error with valgrind.
If you've got access to a unix or linux machine, you should never release code that you haven't run through valgrind, even if the code works.
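For reference, a typical session (assuming the program is rebuilt with -g so valgrind can show file and line numbers; the program name is a placeholder) looks like:
gcc -g -o myprogram myprogram.c
valgrind --leak-check=full ./myprogram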
With the data you've provided, here is my solution.
Take a break and zone out of the problem domain for a while.
Use a debugger, step through the program, identify where it is segfaulting.
Print data at the point of the segfault and validate it.
That should solve the problem.
Compile your code with all warnings on.
Don't hide warnings with bogus casts, but take them seriously and resolve the real problems.
Use different compilers. On Linux, clang is a good alternative and gives far more indications than gcc.