Error management for a C computer game [closed]

What kind of errors should I expect with a computer game written in C, and how should I handle them? By "computer game" I mean a program where there is no danger of any kind to human lives or "property".
I would like to add as little error-handling code as necessary, to keep everything as clear and simple as possible. For example, I do not want to do this, because this is much simpler and sufficient for a game.
Up to now I have thought about this:
Error: out-of-memory when calling malloc.
Handling: Print error message and call exit(EXIT_FAILURE); (like this)
Error: A programming error, i.e. something which would work if implemented correctly.
Handling: Use assert to detect (which aborts the program if failed).
Error: Reading a corrupted critical file (e.g. game resource).
Handling: Print error message and call exit(EXIT_FAILURE);
Error: Reading a corrupted non-critical file (e.g. load a saved game).
Handling: Show message to user and ask to load another file.
Do you think this is a reasonable approach? What other errors should I expect, and what is a reasonable minimal approach to handling them?

You can expect at least the errors mentioned in the documentation of the libraries you use. For a C program, that typically means at least libc.
Check the ERRORS section of the man-pages for the functions you'd be using.
I'd also think this over:
I do not want to do this, because this is much simpler and sufficient for a game.
Imagine you have fought your way through a dozen game levels and then suddenly the screen is gone, with an odd OOM*1 error message. And ... you didn't save! DXXM!
*1 Out-Of-Memory

As I've already stated in the comment, I think this is a very broad question. However, it's Xmas and I'll try to be helpful (lest I upset Santa).
The general best practices have been given in the answers posted by #alk and #user2485710. I will try and give a generic boiler-plate for error handling as I see it in C.
You can't guard against everything without writing perfect code. Perfect code is unreachable (kind of like infinity in calculus) though you can try and get close.
If you try to put too much error handling code in, you will be affecting performance. So, let me define what I will call a simple function and a user function.
A user function is a function that can return an error value. e.g. fopen
A simple function is a function that can not return an error value. e.g.
long add(int a, int b)
{
    long rv = a; // #alk - This way it shouldn't overflow. :P
    return rv + b;
}
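A user function, by contrast, might look like the following minimal sketch, where the name read_config and its 0 / -1 return convention are purely illustrative, not something from the question:

#include <stdio.h>

/* A user function: it can fail, so it reports an error value that the
   caller must check (0 on success, -1 on failure in this sketch). */
int read_config(const char *path, long *out_value)
{
    FILE *f = fopen(path, "r");   /* fopen itself is a user function */
    if (f == NULL)
        return -1;                /* propagate the error to the caller */

    int ok = (fscanf(f, "%ld", out_value) == 1);
    fclose(f);
    return ok ? 0 : -1;
}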
Here are a couple of rules to follow:
All calls to user functions must handle the returned errors.
All calls to simple functions are assumed safe so no error handling is needed.
If a simple function's parameter is restricted (i.e. an int parameter that must be between 0 and 9), use an assert to ensure its validity (unless the value is the result of user input, in which case you should either handle it or propagate it, making this a user function).
If a user function's parameter is restricted and it doesn't cause an error, do the same as above. Otherwise, propagate it without additional asserts.
Just like in your malloc example, you can wrap your user functions with code that will gracefully exit your game, thereby turning them into simple functions.
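For instance, a minimal sketch of such a wrapper around malloc (the name xmalloc is a common convention, not something from the question):

#include <stdio.h>
#include <stdlib.h>

/* Turns malloc (a user function) into a simple function: it either
   returns usable memory or exits the game with an error message. */
void *xmalloc(size_t size)
{
    void *p = malloc(size);
    if (p == NULL) {
        fprintf(stderr, "out of memory (requested %zu bytes)\n", size);
        exit(EXIT_FAILURE);
    }
    return p;
}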
This won't remove all errors but should help reduce them whilst keeping performance in mind. Testing should reduce the remaining errors to a minimum.
Forgive me for not being more specific, however, the question seems to ask for a generic method of error handling in C.
In conclusion I would add that testing, whether unit testing or otherwise, is where you make sure that your code works. Error handling isn't something you can plan for in its entirety because some possible errors will only be evident once you start to code (like a game not allowing you to move because you managed to get yourself stuck inside a wall which should be impossible but was allowed because of some strange explosive mechanics). However, testing can and should be planned for because that will reveal where you should spend more time handling errors.

My suggestion is about:
turn on the compiler's flags for errors and warnings and make your compiler as pedantic as possible; -Wall, -Werror and -Wextra, for example, are a good start for both clang and gcc
be sure that you know what undefined behaviour means and which scenarios can possibly trigger UB; the compiler doesn't always help, even with all the warnings turned on
make your program modular, especially when it comes to memory management and the use of malloc
be sure that your compiler and your standard library of choice both support the C standard that you pick

Related

C function error: is it better to abort the function or simply exit the program?

The title says it all. Since C doesn't have exceptions, I'm not exactly sure how to handle errors. I've thought of the advantages and disadvantages of both:
ABORTING:
Basically what I mean by this is to return an error code (which would be declared in a .h file, maybe with its own perror()-like function) and abort the function. The obvious advantage is that it helps the user do error handling, but the disadvantages are:
If the function is not checked for an error every time it is executed, and an error does indeed occur, it could cause big problems as the program progresses, and the user will have a hard time finding where the issue is coming from.
Looking through the header file for the error codes can be tough and annoying.
Error codes may conflict with error codes in other libraries or the builtin C error codes.
EXIT THE PROGRAM:
Pretty self-explanatory: as soon as an error is found, print the error to stderr and exit. The advantage is that if the error message is detailed enough, the user will easily know what is wrong with their code and fix it. The main disadvantage is that the user will not be able to write any code that handles a possible error and would instead have to change the code itself (and it becomes a bigger problem when you need to ask for input or something similar, where there are millions of possible errors that could arise from incorrect input).
This largely depends on what your program is doing. Some programs, such as simple command line utilities, will just abort on invalid input, for example, without much affecting either the user experience or the system stability. The user will simply correct themselves and rerun. On the other hand, if it is a safety-critical system, such as military, medical or transportation equipment (read autopilot, pacemaker and such), aborting its program will cost human lives. Or, as suggested in the comments, a simple word processor: the user might be very unsatisfied if they lose all their work after making some stupid mistake which caused a program error.
So the general approach to writing robust software is to classify your errors as fatal and non-fatal. Non-fatal errors are the ones you can anticipate during a normal program run and can handle gracefully in a way which allows the program to continue. Fatal errors are the ones caused by some abnormal condition (hardware failure, missing components and such) which makes it impossible for the program to continue.
Depending on the system nature you might want to loosen or tighten the above classification.
Return a proper error code from that function. Otherwise it would be hard to use that function in a different context, like a unit test. It also wouldn't be possible for the calling program to clean up its resources or simply print an error message.
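As a sketch of that idea (the names lib_status and lib_set_volume are hypothetical): the function only reports the failure, and the caller decides whether to clean up, print, or abort.

#include <stdio.h>

typedef enum { LIB_OK = 0, LIB_ERR_RANGE } lib_status;

/* Hypothetical library function: it reports the error but decides nothing. */
lib_status lib_set_volume(int level)
{
    if (level < 0 || level > 100)
        return LIB_ERR_RANGE;
    /* ... apply the setting ... */
    return LIB_OK;
}

int main(void)
{
    /* The caller knows its own context, so it chooses the reaction. */
    if (lib_set_volume(150) != LIB_OK)
        fprintf(stderr, "invalid volume, keeping previous setting\n");
    return 0;
}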

evalWithTimeout ignored when calling C / Fortran routines?

I'm working with "igraph" package, and the "evalWithTimeout" function in "R.utils".
I'm trying to do maximal clique detection, which I know can get terrible (as terrible as O(3^n), with n being the number of nodes), so I encapsulated it in a timeout, but the timeout gets ignored.
Minimal code to reproduce the problem
library(igraph)
library(R.utils)
g<-erdos.renyi.game(1e6,1e7,type="gnm")
o<-evalWithTimeout(maximal.cliques(g),timeout=1)
This should stop after one second. However, it doesn't. I wonder if this is due to the use of underlying C / Fortran code (which is what maximal.cliques uses). If so, how can I solve this?
This won't work with most C code, because R cannot interrupt C code, unless the C code cooperates. evalWithTimeout calls setTimeLimit, and this is from the manual page from setTimeLimit:
Time limits are checked whenever a user interrupt could occur.
This will happen frequently in R code and during Sys.sleep, but
only at points in compiled C and Fortran code identified by the
code author.
It is not trivial to make C code interruptible, because you need to deallocate all allocated memory.
I suggest reporting a bug at https://github.com/igraph/igraph/issues and requesting that maximal.cliques be made interruptible.

Is it a bad practice to output error messages in a function with one input and one output [closed]

I was once told that functions with one input and one output (not exactly one) should not print messages when called. But I don't understand why. Is it for security or just for convention?
Let me give an example. How should I deal with an attempt to access data in a sequential list with an incorrect index?
// 1. Give out the error message inside the function directly.
DataType GetData(seqList *L, int index)
{
    if (index < 0 || index >= L->length) {
        printf("Error: Access beyond bounds of list.\n");
        exit(EXIT_FAILURE);
    }
    return L->data[index];
}

// 2. Return a value or use a global variable (like errno) that
//    indicates whether the function performs successfully.
StateType GetData(seqList *L, int index, int *data)
{
    if (index < 0 || index >= L->length) {
        return ERROR;
    }
    *data = L->data[index];
    return OK;
}
I think there are two things going on here:
Any visible and unexpected side-effect such as writing to streams is generally bad, and not just for functions with one input and one output. If I was using a list library, and it started silently writing error messages to the same output stream I was using for my regular output, I'd consider that a problem. However, if you are writing such a function for your own personal use, and you know ahead of time that the action you want taken is always to print a message and exit(), then it's fine. Just don't force this behavior on everyone else.
This is a specific case of the general problem of how to inform callers about errors. A lot of the time, a function cannot know the correct response to an error, because it doesn't have the context that the caller does. Take malloc(), for instance. The vast majority of the time, when malloc() fails, I just want to terminate, but once in a great while I might want to deliberately fill the memory by calling malloc() until it fails, and then proceed to do something else. In this case, I don't want the function to decide whether or not to terminate - I just want it to tell me it's failed, and then pass control back to me.
There are a number of different approaches to handling errors in library functions:
Terminate - fine if you're writing a program yourself, but bad for a general purpose library function. In general, for a library function, you'll want to let the caller decide what to do in the case of an error, so the function's role is limited to informing the caller of the error.
Return an error value - sometimes OK, but sometimes there is no feasible error value. atoi() is a good case in point - all the possible values it returns could be correct translations of the input string. It doesn't matter what you return on error, be it 0, -1 or anything else: there is no way to distinguish an error from a valid result, which is precisely why you get undefined behavior if atoi() encounters one (see the strtol sketch at the end of this answer for how the standard library works around this). It's also semantically questionable from a slightly purist point of view - a function which returns the square root of a number is one thing, but a function which sometimes returns the square root of a number and sometimes returns an error code instead is another thing. You can lose the self-documenting simplicity of a function when return values serve two completely separate purposes.
Leave the program in an error state, such as setting errno. You still have the fundamental problem that if there is no feasible return value, the function still can't tell you that an error has occurred. You could set errno to 0 in advance and check it afterwards every time, but this is a lot of work, and may just not be feasible when you start involving concurrency.
Call an error handling function - this basically just passes the buck, since the error function then also has to address the issues above, but at least you could provide your own. Also, as R. notes in the comments below, other than in very simple cases like "always terminate on any error", it can be asking too much of a single global error handling function to sensibly handle any error that might arise in a way that lets your program then resume normal execution. Having numerous error handling functions and passing the appropriate ones individually to each library function is technically possible, but hardly an optimal solution. Error handling functions used in this way can also be difficult or even impossible to use correctly in the presence of concurrency.
Pass in an argument that gets modified by the function if it encounters an error. Technically feasible, but it's not really desirable to add an additional parameter for this purpose to every library function ever written.
Throw an exception - your language has to support them to do this, and they come along with all kinds of associated difficulties including unclear structure and program flow, more complex code, and the like. Some people - I'm not one of them - consider exceptions to be the moral equivalent of longjmp().
All the possible ways have their drawbacks and advantages; as of yet, humanity has not discovered the perfect way of reporting errors from library functions.
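To illustrate the atoi() point above: strtol is the standard library's way of doing the same conversion while still letting the caller detect errors, roughly like this sketch:

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const char *input = "123abc";
    char *end = NULL;

    errno = 0;
    long value = strtol(input, &end, 10);

    if (end == input)
        fprintf(stderr, "no digits found\n");
    else if (errno == ERANGE)
        fprintf(stderr, "value out of range for long\n");
    else
        printf("parsed %ld, unparsed tail: \"%s\"\n", value, end);
    return 0;
}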
In general you should make sure you have a consistent and coherent error handling strategy, which means considering whether you want to pass an error up to a higher level or handle it at the level it initially occurs. This decision has nothing to do with how many inputs and outputs a function has.
In a deeply embedded system where a memory allocation failure occurs at a critical juncture, for example, there's no point passing that error back up (and indeed you may well not be able to) - all you can do might be to enter a tight loop and let the watchdog reset you. In this case there's no point reserving invalid return values to indicate an error, or indeed in even checking the return value at all if it doesn't make sense to do so. (Note I am not advocating lazily not bothering to check return values; that is a different matter entirely.)
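A sketch of that tight-loop fallback (the watchdog details are hypothetical and entirely platform specific):

/* Deeply embedded sketch: nothing sensible can be done, so stop feeding
   the (hypothetical) hardware watchdog and wait for it to reset the MCU. */
static void fatal_halt(void)
{
    /* disable interrupts and stop kicking the watchdog here (platform specific) */
    for (;;) {
        /* spin until the watchdog expires and resets the system */
    }
}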
On the other hand, in a lovely beautiful GUI app you probably want to fail as gracefully as possible and pass the error up to a level where it can be either worked around / retried / whatever is appropriate; or presented to the user as an error if nothing else can be done.
It is better to use perror() to display error messages than printf().
Syntax:
void perror(const char *s);
Also, error messages are supposed to be sent to the stderr stream rather than to stdout.
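For example, a minimal sketch of using perror() after a failed call:

#include <stdio.h>

int main(void)
{
    FILE *f = fopen("savegame.dat", "r");
    if (f == NULL) {
        /* writes e.g. "opening save file: No such file or directory" to stderr */
        perror("opening save file");
        return 1;
    }
    fclose(f);
    return 0;
}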
Yes, it's a bad practice; even worse is that you're sending the output to stdout rather than stderr. This could end up corrupting data by mixing error message in with output.
Personally, I believe very strongly that this kind of "error handling" is harmful. There is no way you can validate that the caller passed a valid value for L, so checking the validity of index is inconsistent. The documented interface contract for the function should simply be that L must be a valid pointer to an object of the correct type, and index a valid index (in whatever sense is meaningful to your code). If an invalid value for L or index is passed, this is a bug in your code, not a legitimate error that can occur at runtime. If you need help debugging it, the assert macro from assert.h is probably a good idea; it makes it easy to turn off the check when you no longer need it.
One possible exception to the above principle would be the case where the value of L is coming from other data structures in your program, but index is coming from some external input that's not under your control. You could then perform an external validation step before calling this function, but if you always need the validation, it makes sense to integrate it like you're doing. However, in that case you need to have a way to report the failure to the caller, rather than printing a useless and harmful message to stdout. So you need to either reserve one possible return value as an error sentinel, or have an extra argument that allows you to return both a result and an error status to the caller.
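A sketch of the assert-based contract described above, reusing the question's seqList and DataType types:

#include <assert.h>

DataType GetData(seqList *L, int index)
{
    /* Contract: L is a valid list and index is in range. A violation is a
       bug in the caller, caught while debugging; the checks compile away
       when NDEBUG is defined. */
    assert(L != NULL);
    assert(index >= 0 && index < L->length);
    return L->data[index];
}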
Return a reserved value that's invalid for a success condition. For example, NULL.
It is advisable not to print because:
It doesn't help the calling code reason about the error.
You're writing to a stream that might be used by higher level code for something else.
The error may be recoverable higher, so you might be just printing misleading error messages.
As others have said, consistency in how you deal with error conditions is also an important factor. But consider this:
If your code is used as a component of another application, one that does not follow your printing convention, then by printing you're not allowing the client code to remain faithful to its own strategy. Thus, using this strategy you impose your convention on all related code.
On the other hand, if you follow the "cleaner" solution of returning a reserved value and the client code wants to follow the printing convention, the client code can easily adapt to what you return and even print an error, by making simple wrappers around your functions. Thus, using this strategy you give the users of your code enough space to choose the strategy that best works for them and to be faithful to it.
It is always best if code deals with one thing only. It is easier to understand, it is easier to use, and is applicable in more instances.
Your GetData function that prints an error message isn't suitable for use in cases where there may not be a value, e.g. when the calling code wants to try to get a value and handle the error by using a default if it doesn't exist.
Since GetData doesn't know the context, it can't report a good error message. For example, higher up the call stack we can report "hey, you forgot to give this user an age", whereas inside GetData all it knows is that it couldn't get some value.
What about a multithreaded situation? GetData seems like it would be something that might get called from multiple threads. With a random bit of IO shoved in the middle it will cause contention over who has access to the console if all the threads need to write at the same time.

Taking an Image from a Webcam in Ubuntu Using C [closed]

I am trying to use my webcam (Creative Live! Cam Chat) to take an image in C/C++ and save it to a certain folder (running Ubuntu). Ideally I'm looking for something as simple as possible, even if it's not the most elegant solution.
So far I've found v4l2grab, which I find incredibly confusing to understand, and which also doesn't seem to work with the Creative webcam (it returns a black picture that is 5 KB in size), although it does seem to work with the webcam installed as part of my laptop.
Are there any simple C libraries or code that I could use to do this?
I don't know of a good library for the purpose (please add a comment and tell me if there is one :-)). Note: for some uses OpenCV, for example, is just fine, and if it is enough for you, definitely do use it. But if you want more, read on.
So you should just write your own code to use V4L2; it's not particularly hard. Here's one related question: How to use/learn Video4Linux2 (On Screen Display) Output APIs?
Some points to make learning easier:
After calling an IOCTL, always check the return status and print a possible error message. You will be getting lots of these while you work, so just be systematic about it. I suggest a function like check_error shown below, and calling it immediately after any ioctl call.
IMO a must: use an IDE/editor which can follow a symbol to the actual header file (for example in Qt Creator, which is a fine pure C application IDE despite the name, hit F2 on a symbol and it will go even to system headers to show you where it is defined). Use this liberally on V4L2-related symbols and defines, and read the comments in the header file; that's often the best documentation.
Use the query ioctls and write functions to dump the values they return in a nice format. For example, have a function void dump_querycap(const struct v4l2_capability *cap) {...}, and add a similar function for every struct you use in your code as you go.
Don't be lazy about setting values inside structs you pass to IOCTL. Always initialize structs to 0 with memset(&ioctl_struct_var, 0, sizeof(ioctl_struct_var)); after declaring them, and also if you reuse them (except when doing 'get-modify-set' operation on some settings, which is quite common with V4L2).
If possible, have two (or more) different webcams (different resolutions, different brands), and test with both (all). This is easiest if you take the video device as a command line parameter, so you can just call your program with a different argument for each cam you have.
Small steps. Often ioctls may not return what you expect, so there is no point writing code which uses the returned data before you have actually seen what the query returns for your cameras.
The check_error function mentioned above:
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

void check_error(int return_value_of_ioctl, const char *msg) {
    if (return_value_of_ioctl != -1) return; /* all ok */
    int eno = errno; /* just to avoid accidental clobbering of errno */
    fprintf(stderr, "error (%d) with %s: %s\n", eno, msg, strerror(eno));
    exit(1); /* optional, depending on how you want to work with your code */
}
Call that immediately after every ioctl, for example:
struct v4l2_capability cap;
memset(&cap, 0, sizeof(cap));
int r = ioctl(fd, VIDIOC_QUERYCAP, &cap);
check_error(r, "VIDIOC_QUERYCAP");
dump_querycap(&cap);
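A dump function along the lines suggested above might look like this sketch; the field names come from struct v4l2_capability in linux/videodev2.h:

#include <stdio.h>
#include <linux/videodev2.h>

void dump_querycap(const struct v4l2_capability *cap)
{
    printf("driver:       %s\n", (const char *)cap->driver);
    printf("card:         %s\n", (const char *)cap->card);
    printf("bus_info:     %s\n", (const char *)cap->bus_info);
    printf("version:      %u.%u.%u\n",
           (cap->version >> 16) & 0xff,
           (cap->version >> 8) & 0xff,
           cap->version & 0xff);
    printf("capabilities: 0x%08x\n", cap->capabilities);
}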
You can use OpenCV. Use cvCreateCameraCapture (you can call it with argument 0 to get the default cam) to create a capture object and then call cvQueryFrame on that object. Each call to cvQueryFrame will return a frame.
Have you had a look at OpenCV? It's quite handy for all sorts of image grabbing and processing. The process of taking a picture is well documented, but I suggest you look at something like this, if you do indeed decide to use it.
Take a look at uvccapture source code. It is very simple, yet standard C which uses only V4L2 interface. OpenCV would also work, but it is more complicated to setup and compile.

C strange bug... pulling my hair out [closed]

I had my code working earlier in the day on the unix machine, but when compiled under windows it gave me completely strange and incorrect output.
Since our code is going to be marked based on compilation on Unix, I thought hey, that's good enough. But I just finished refactoring my code (basically just adding comments, getting rid of variables which were never used in the program and removing some functions which I wrote to test the program), and now suddenly my code seems to be giving me the proper output on Windows and the wrong output on Unix.
Note that I have done nothing to modify the functionality of the code.
After spending so many hours working on this, banging my head against seg fault errors all through the week, this last-minute bug is going to put it all to waste. What am I supposed to do when the bug appears seemingly at random?
Edit: The program is supposed to read a file similar to an html file and print out the tables. I'm loading the data of each individual cell onto a node in a Linked List and then printing out the info based on an algorithm. The output is working fine on windows now but not on unix. I don't even know what part of the code I need to look since I have no idea what's causing this.
Based on the amount of information you supplied (next to none), the best guess is to look for uninitialized variables. That will produce different output on different platforms, and is a common beginner mistake in C.
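For example, something as small as this is undefined behaviour and can easily appear to work on one platform while misbehaving on another (a deliberately broken sketch):

#include <stdio.h>

int main(void)
{
    int count;                 /* never initialized */
    if (count > 0)             /* reads an indeterminate value: undefined behaviour */
        printf("count is positive\n");
    return 0;
}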
I suggest you use gdb to debug your code and check where the segmentation fault arises. That will give you a good hint of where to start looking, even though you don't remember making any modifications.
There is plenty of documentation on the web.
These are the basics:
shell> gdb myprogram
gdb> backtrace #lists the steps until the segmentation fault arises
gdb> select 2 #You can select any step you want (e.g. 2)
gdb> print number #print variables to hack around
There are a lot of features for gdb. I think this will give you a hint quickly.
Don't forget to use a version control system the next time. It's a safe and nice way of keeping your code organized and clean, and of course, of avoiding these terrible accidents.
(SVN or Git are cool enough.)
Step 1, make a copy of everything.
Copy the entire project somewhere. Make a note of what state the project was in when you made that copy and the date:time. DO NOT edit that copy. You may even make the files unwritable if you want. You need to be able to see what you have changed as well as go back to it. Even though the program does not currently work on Unix, it does work under Windows, so you know that it does have some merit and is close to being useful to turn in. When I get upset at a program I am writing or at the compiler for not understanding it (this happens a lot less now than it did 10 years ago) I tend to lose track of what all I am changing, so changing it back becomes difficult. Using some type of version control (even just keeping extra copies around) will help you to keep track of what you have changed, so when you make a mistake you can unmake that mistake pretty easily. Differencing tools, like diff, are very helpful when you know how to use them. For right now you might want to try:
diff --minimal --side-by-side --ignore-all-space old_file.c new_file.c | less
Hopefully you are using a diff that supports those options because I think that they may be the most helpful for you in the short time that you have. If you find that you need to fit more on the screen and your terminal window is large you can also add in the --width= command and give it a number of characters in a line on your terminal.
Anyway, make and keep lots of copies of your code until you know that you won't need them anymore (and maybe even then).
If you have graphical access see if kdiff3 is available. It may be easier for you to use quickly. The 3 in its name refers to the ability to compare 3 versions of a file at one time (a common starting point and two edited versions of that file) and is useful, but you can learn about that later. It is perfectly able to compare just two versions of a file and produce decent output.
Step 2 Don't ignore warnings
I suggest that you compile it with the highest warning level possible with your compiler and DO NOT ignore any warnings. If you already have warnings without telling the compiler to issue more warnings, then examine those first. Warnings are there for a reason, and only occasionally should you ever encounter code that produces warnings that should just be ignored (and even then I usually add a comment about the expected type of warning and why it is not an error). With gcc you can add the -Wall option to the compile command to issue all warnings.
gcc -Wall my_program.c -o my_program
Some may not make sense to you, but you can at least look at the code and see what might be unclear about it in the vicinity of the warning line.
Step 3 Use simple lines of code
Something that will make warnings easier to understand is using very simple to understand lines of code. Trying to fit too much functionality into one line of code makes it so that any warning or error message about that line of code is much more difficult to understand.
Step 4 Use temporary variables
Temporary variables don't necessarily mean "my program uses more memory" but they do often mean the compiler gives more meaningful warnings because the data-types of variables in expressions are much clearer.
Step 5 Use functions
This is just a continuation of the philosophy from 3 and 4. Functions make things easier to understand. They also make it so that often, when you find an error and fix it, you don't have to worry about having copies of the erroneous code elsewhere in the program that also need to be fixed (though you should still search for similar code just to be sure).
Step 6 assert
There is a macro (like a function, but not quite) called assert that lives in #include <assert.h> and can help you find all kinds of errors by making your program fail earlier than it otherwise would. This sounds bad, but very often (especially with memory related problems like segmentation faults (SIGSEGV) ) programs are in a fatal state well before they die. Using assert helps you to move their death to an earlier place so that you can see what their fatal mistake was, rather than just seeing the result of it.
assert takes as its parameter a boolean expression -- any comparison, integer, floating point number, or pointer will do. Anything that you could use as a condition in an if or while will do. If this expression is false (0 or NULL) then your program will die right there and on many systems it will give you a helpful error message about where the assertion that killed the program was located and maybe even what the assertion was. There is another helpful thing that this causes which I'll talk about in a little bit, but for now, to use assert you just do:
assert(x < y);
and if x is not less than y the program will abort (actually call the abort function).
This is helpful for things like:
void clear_buffer(char *buffer, unsigned len)
{   /* len should be size_t but I don't want to explain that right now */
    assert(buffer);          /* fail fast instead of letting memset crash on NULL */
    memset(buffer, 0, len);  /* memset needs <string.h>, assert needs <assert.h> */
}
Step 7, Use a debugger
If you have gdb on your Unix system then GREAT. If not, you probably have some other debugger than you can learn how to use. Many Unix C compilers take the -g option to mean "include debugging symbols", so add that to the other options you are passing to the compiler and recompile your program, and then do:
gdb ./myprogram
Which will print some stuff and then prompt you with:
(gdb)
Then you can set break points and all kinds of good stuff, but since you are in a hurry and getting crashes just do:
(gdb) r
Include any arguments after the r that you would be passing to your program when you normally ran it. gdb will then run your program until something odd happens. The something odd, in this case, should be a SIGSEGV (what UNIXes do to your program when it tries to access memory addresses that it shouldn't). gdb will then prompt you with (gdb) again. You can then do:
(gdb) bt
bt stands for back trace and gdb will print out the call stack, meaning all functions that were called to get to the current function. You should see main near the bottom. Look for the first function near the top that is a function you wrote. This is where you need to start trying to find errors. If the top function on the list is not one of yours then try issuing:
(gdb) up
Which will make it examine the previous function on the call stack. Once in one of your functions say:
(gdb) list
And it will show you some code around the area where things are wrong.
To exit gdb you do:
(gdb) quit
And answer Y if it asks whether you really want to quit.
If you were to use assert and that killed your program then you would not end up with quite as much library stuff on top of the call stack to confuse you.
Sadly, 3, 4, and 5 mess up the ability to get good info from diff, so I suggest limiting this programming style to places where you are already having errors or warnings (at least for now).
I hope that this helps
First of all, we would need your code to see what's going on. But if what you describe is true, then it is most likely that your code contains what's called undefined behavior. Undefined behavior can occur for too many reasons to list, such as crossing array boundaries, incorrectly freeing pointers, etc. So, without the code, nothing more can be said.
Run it through valgrind.
I can guarantee you will find your error with valgrind.
If you've got access to a unix or linux machine, you should never release code that you haven't run through valgrind, even if the code works.
With the data you've provided, here is my solution.
Take a break and zone out of the problem domain for a while.
Use a debugger, step through the program, identify where it is segfaulting.
Print data at the point of the segfault and validate it.
That should solve the problem.
Compile your code with all warnings on.
Don't hide warnings with bogus casts, but take them seriously and resolve the real problems.
Use different compilers. On Linux, clang is a good alternative and gives far more diagnostics than gcc.
