I'm getting intermittent assertion failures when doing constrained Delaunay triangulation with the GNU Triangulated Surface Library (GTS). I've seen each of the following at different times:
Gts:ERROR:cdt.c:974:remove_intersected_vertex: code should not be reached
Gts:ERROR:cdt.c:896:remove_intersected_edge: assertion failed: (next)
Gts:ERROR:cdt.c:887:remove_intersected_edge: assertion failed: (o2 == 0.)
I've looked at cdt.c but all I've been able to figure out is that they're coming from calls to gts_delaunay_add_constraint.
Could someone explain what might be wrong with the constraints that would cause these assertions to fail?
The assertion failures happen when I triangulate a set of random vertices. Unfortunately, it only happens for large numbers of vertices and constraints, so it's hard to find a pattern in the failing inputs. The code that's using GTS needs to not crash even for bad input, so it would be nice to prevent these assertion failures; otherwise I'll have to disable the assertions.
Edit: I tried removing all intersecting constraints (stored in edges):
int numPossEdges = gts_fifo_size(edges);
GtsEdge **possEdges = malloc(numPossEdges * sizeof(GtsEdge *));

/* Drain the FIFO into an array. */
for (int i = 0; i < numPossEdges; ++i)
    possEdges[i] = gts_fifo_pop(edges);

/* Drop any edge that properly intersects an earlier surviving edge. */
for (int i = 0; i < numPossEdges; ++i)
    for (int j = 0; j < i && possEdges[i] != NULL; ++j)
        if (possEdges[j] != NULL &&
            GTS_IN == gts_segments_are_intersecting(&(possEdges[i]->segment),
                                                    &(possEdges[j]->segment)))
            possEdges[i] = NULL;

/* Push the survivors back. */
for (int i = 0; i < numPossEdges; ++i)
    if (possEdges[i] != NULL)
        gts_fifo_push(edges, possEdges[i]);
free(possEdges);
Still getting the same assertion failures.
If you're creating vertices and constraints completely randomly, I imagine you might be supplying constraint edges that intersect each other, in which case I would certainly expect the triangulation routines to complain.
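One subtlety that may matter here (this is my reading of the GTS docs, not a confirmed diagnosis): gts_segments_are_intersecting() returns GTS_IN only for a proper crossing, and GTS_ON when an endpoint of one segment merely lies on the other. The filter in the edit above only discards GTS_IN pairs, so touching or collinear-overlapping constraints would survive it. A sketch of a stricter conflict test:

#include <gts.h>

/* Sketch: treat any pair that is not cleanly disjoint as a conflict,
 * unless the two constraints merely share a GtsVertex (which is fine
 * in a constrained triangulation). */
static gboolean constraints_conflict (GtsEdge *e1, GtsEdge *e2)
{
    GtsSegment *s1 = GTS_SEGMENT (e1);
    GtsSegment *s2 = GTS_SEGMENT (e2);

    if (gts_segments_touch (s1, s2))    /* shared endpoint vertex */
        return FALSE;

    /* GTS_IN (crossing) and GTS_ON (endpoint on the other segment)
     * both count as conflicts; only GTS_OUT is safe. */
    return gts_segments_are_intersecting (s1, s2) != GTS_OUT;
}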
The code that's using GTS needs to not crash even for bad input, so it would be nice to prevent these assertion failures; otherwise I'll have to disable the assertions.
I ended up writing a patch that causes GTS to (basically) throw an exception instead of halting when it hits an assertion failure. The patch is here.
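For anyone who can't use the linked patch: the general shape of such a change (my illustration, not the actual patch) is to give the caller a recovery point and have the failing assertion jump back to it instead of calling abort():

#include <setjmp.h>

static jmp_buf cdt_recovery;

/* A "soft" assertion: jump back to the caller instead of aborting.
 * Bailing out of library code this way can leave partially built
 * state behind, so the surface should be discarded afterwards. */
#define SOFT_ASSERT(expr) \
    do { if (!(expr)) longjmp (cdt_recovery, 1); } while (0)

/* Caller side: */
void add_constraint_safely (void)
{
    if (setjmp (cdt_recovery) == 0) {
        /* ... call gts_delaunay_add_constraint () here ... */
    } else {
        /* constraint rejected: clean up and carry on */
    }
}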
I have a 10k-line C program that sometimes crashes.
I cannot reproduce the crash; all I can do is wait for it to happen and then try to debug it.
This is what I get from drmingw:
EXCEPTION PID=18632 TID=10612 ExceptionCode=0xc0000005 dwFirstChance=0
mypgr.exe caused an Access Violation at location 0000000000402CB2 in
module mypgr.exe Reading from location 0000000000000000.
AddrPC Params
0000000000402CB2 0000000000A862D0 0000000000000003 00000000035BE380 mypgr.exe!manage_ids [C:/Users/mn/Documents/code blocks/zbr/common.c # 440]
if(unique_id[i]->idd[32] != 0)
    log_errors_to_file("xd2");
dbg_str_len = 0;
for(int xd = 0; xd < 32; xd++) { // ERROR IN THIS LOOP
    if(ant_tb[j][k]->idd[xd] == 0)
        break;
    dbg_str_len++;
}
if(dbg_str_len != 32)
    log_errors_to_file("xd3");
if(ant_tb[j][k]->idd[32] != 0)
    log_errors_to_file("xd4");
I know this code does not make much sense, but I tried many strange things while trying to reproduce the crash.
Anyway, my debugger shows that the line below is causing the problem:
for(int xd = 0;xd < 32;xd++){if(ant_tb[j][k]->idd[xd] == 0) break;dbg_str_len++;};
But why? Is it possible to crash the app just by reading?
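Yes. The report above says Reading from location 0000000000000000, i.e. a read through a null pointer, which raises an access violation with no write involved. A likely suspect is ant_tb[j][k] being NULL for some j and k. A minimal illustration (the struct layout is my guess; compile without optimisation):

#include <stdio.h>

typedef struct { char idd[33]; } entry;   /* hypothetical layout */

int main (void)
{
    entry *e = NULL;        /* stands in for ant_tb[j][k] == NULL */
    if (e->idd[0] == 0)     /* pure read at address 0 + offset -> crash */
        puts("never reached");
    return 0;
}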
Upon calling addrtab = g_hash_table_new_full(g_int_hash, g_int_equal, g_free, NULL); the execution aborts and the following is reported:
GLib-ERROR **: 15:23:58.535: ../../../glib/gmem.c:138: failed to allocate 11755944670652 bytes
What puzzles me is the amount of memory requested.
Now for a bit of detail: I'm writing in C using glib-2.66 on Ubuntu 20.04. Commenting out the line reported above makes the error go away entirely, and g_hash_table_new_full() is used elsewhere in the program without causing any error.
For completeness, here is the function that calls g_hash_table_new_full():
void addr_advertise (hash_node_t* alice, hash_node_t* bob) {
    GHashTable* addrtab = NULL;
    if ((alice->data->key != bob->data->key) &&
        !(alice->data->attackerid == 1 && bob->data->attackerid == 1)) {
        addrtab = g_hash_table_new_full(g_int_hash, g_int_equal, g_free, NULL);
        if (alice->data->attackerid == -1) {
            //populate_honest_addr_table(addrtab, alice->data->known_peers);
        }
        else {
            //add_malicious_to_table(addrtab);
        }
        execute_addr(simclock + FLIGHT_TIME, alice, bob, 5, simclock,
                     alice->data->key, addrtab);
    }
}
addr_advertise itself is called multiple times before the error occurs.
As reported in gmem.c, the error comes from a g_malloc0() (line 138) that requests too much memory. How is it possible that g_malloc asks for so many bytes? Why do earlier executions of the same function work perfectly, and most of all, what could be the origin of the error?
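For reference, g_hash_table_new_full() itself only allocates a small, fixed-size structure; nothing in a call like this should ever request terabytes, so a size like 11755944670652 usually means the allocator's bookkeeping was already corrupted by an earlier out-of-bounds write or double free elsewhere. For comparison, normal usage of this exact constructor looks something like the sketch below (names are mine; note that with g_free as the key-destroy function, every key must be an individually heap-allocated int):

#include <glib.h>

int main (void)
{
    GHashTable *addrtab = g_hash_table_new_full(g_int_hash, g_int_equal,
                                                g_free, NULL);

    int *key = g_new(int, 1);   /* g_int_hash dereferences the key, so it */
    *key = 42;                  /* must point at a live, heap-owned int   */
    g_hash_table_insert(addrtab, key, "some value");

    g_hash_table_destroy(addrtab);  /* g_free runs on every key */
    return 0;
}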
[image of the error]
Above is an error I have been getting, in relation to this line in my program:
storedData[k] = min(dist[index1][k],dist[index2][k]);
Now here is the surrounding code for this line:
for (int k = 0; k < arraySize; k++) {
    if (k != index1 && k != index2) {
        if (method == SINGLE_LINKAGE) {
            storedData[k] = min(dist[index1][k], dist[index2][k]);
        } else {
            storedData[k] = max(dist[index1][k], dist[index2][k]);
        }
    }
}
After playing around with it for quite a while, I realised that the issue is with the incrementing k variable in the for loop. More specifically, the tool is worried that when k is used as an index into dist, the value read will be uninitialised. In terms of functionality my program works fine and does everything I want it to do. What confuses me more is that I initialise this array elsewhere, in a helper function, for all indices from 0 to arraySize, which in my head means this should never be an issue. I'm not sure if maybe this is caused because the initialisation is done outside of the main function or something. Regardless, it keeps giving me grief and I would like to fix it.
You need to work back from the error to its origin. Even if you are initializing your arrays, it is possible that something is 'uninitializing' them again afterwards. Memcheck does not flag uninitialized data when it is merely copied, only when it affects the outcome.
So in pseudo-code you might have
Array arr;
Scalar uninit;          // never initialized
init_array(arr);
// do some stuff
arr[3] = uninit;        // no error here: copying is fine
for (i = 1 to arr.size)
    store[i] = max(arr[i], arr[i-1]);   // errors for i == 3 and 4
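If you want to see that behaviour first-hand, here is a runnable version of the sketch (my example, not the OP's code; compile with -g -O0 and run under Valgrind):

#include <stdio.h>

#define MAX(a, b) ((a) > (b) ? (a) : (b))

int main (void)
{
    int arr[5], store[5];
    for (int i = 0; i < 5; i++)
        arr[i] = i;                 /* fully initialized... */

    int uninit;                     /* ...but this never is */
    arr[3] = uninit;                /* Memcheck is silent: just a copy */

    for (int i = 1; i < 5; i++)
        store[i] = MAX(arr[i], arr[i - 1]);  /* flagged at i == 3 and 4 */

    printf("%d\n", store[4]);       /* using the value is what gets flagged */
    return 0;
}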
There are two things that you could try. Firstly, try some 'printf' debugging, something like
for (int k = 0; k < arraySize; k++) {
    if (k != index1 && k != index2) {
        fprintf(stderr, "DEBUG: k %d index1 %d index2 %d\n", k, index1, index2);
        // as before
Then run Valgrind without a log file. You should then be able to see which indices cause the error(s).
Next, try using Valgrind's gdbserver (vgdb). Run valgrind in one terminal with
valgrind --vgdb-error=0 prog args
and then run gdb in a second terminal to attach (see the text that is output in the 1st terminal for the commands to use).
You can then use gdb as usual (except no 'run') to control your guest app, with the additional ability to run valgrind monitor commands.
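For reference, the attach sequence in the second terminal is the standard vgdb one (the monitor command shown is one of Memcheck's; the address and length are placeholders):

gdb ./prog
(gdb) target remote | vgdb             # attach to the valgrind run
(gdb) continue                         # run until Memcheck reports an error
(gdb) monitor get_vbits <addr> <len>   # inspect definedness of memory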
I'm using the C interface for XPC services; incidentally, my XPC service runs very nicely aside from the following problem.
The other day I tried to send a "large" array via XPC, on the order of 200,000 entries. Usually my application deals with data on the order of a couple of thousand entries and has no problems with that. For other uses an array of this size may not be special.
Here is my C++ server code for generating the array:
xpc_connection_t remote = xpc_dictionary_get_remote_connection(event);
xpc_object_t reply = xpc_dictionary_create_reply(event);
xpc_object_t times = xpc_array_create(NULL, 0);
for (unsigned int s = 0; s < data.size(); s++)
{
    xpc_object_t index = xpc_uint64_create(data[s]);
    xpc_array_append_value(times, index);
    xpc_release(index);   /* the array retains its own reference */
}
xpc_dictionary_set_value(reply, "times", times);
xpc_connection_send_message(remote, reply);
xpc_release(times);
xpc_release(reply);
and here is the client code:
xpc_object_t times = xpc_dictionary_get_value(reply, "times");
size_t count = xpc_array_get_count(times);
for (size_t c = 0; c < count; c++)
{
    uint64_t my_time = xpc_array_get_uint64(times, c);
    local_times.push_back(my_time);
}
If I try to handle a large array I get a seg fault (SIGSEGV):
Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
0 libxpc.dylib 0x00007fff90e5cc02 xpc_array_get_count + 0
When you say "extremely big array", are you speaking of something that launchd might regard as a resource hog and kill?
XPC is only really meant for short, fast transactional runs rather than long-winded, service-based runs.
If you're going to make calls that make launchd wait, then I'd suggest you try https://developer.apple.com/library/mac/documentation/MacOSX/Conceptual/BPSystemStartup/Chapters/CreatingLaunchdJobs.html
When the service dies, are any specific events fired, other than SIGABRT etc.?
Do you get "xpc service was invalidated" (which usually means launchd killed it), or "xpc service exited prematurely" (which usually points to an error in the handler code)?
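Those two cases are easy to tell apart in code, and the crash above is consistent with the first: if the reply is an XPC error object, xpc_dictionary_get_value(reply, "times") returns NULL, and xpc_array_get_count(NULL) faults exactly like the trace shows. A defensive sketch (my illustration, not confirmed as the OP's bug):

#include <xpc/xpc.h>
#include <stdio.h>

/* Sketch: check what the reply actually is before indexing into it. */
static void handle_reply(xpc_object_t reply)
{
    xpc_type_t type = xpc_get_type(reply);
    if (type == XPC_TYPE_ERROR) {
        /* e.g. the connection was interrupted or invalidated */
        const char *desc =
            xpc_dictionary_get_string(reply, XPC_ERROR_KEY_DESCRIPTION);
        fprintf(stderr, "XPC error: %s\n", desc ? desc : "(unknown)");
        return;
    }
    if (type != XPC_TYPE_DICTIONARY)
        return;

    xpc_object_t times = xpc_dictionary_get_value(reply, "times");
    if (times == NULL || xpc_get_type(times) != XPC_TYPE_ARRAY)
        return;   /* key missing, or not an array */

    size_t count = xpc_array_get_count(times);
    for (size_t c = 0; c < count; c++) {
        uint64_t my_time = xpc_array_get_uint64(times, c);
        (void)my_time;   /* ... store as before ... */
    }
}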
My program crashes in this function at the 7th line, where I call malloc(). When I run in release mode I get the "Program.exe has stopped working" message, and when I run under the debugger it succeeds most of the time, but sometimes, especially on larger input, I get this message:
MONOM* polynomialsProduct(MONOM* poly1, int size1, MONOM* poly2, int size2, int* productSize)
{
    int i1, i2;
    int phSize = 1, logSize = 0;  /* physical vs. logical size */
    MONOM* product;

    product = (MONOM*)malloc(phSize * sizeof(MONOM));
    monomAllocationVerification(product);

    for (i1 = 0; i1 < size1; i1++)
    {
        for (i2 = 0; i2 < size2; i2++)
        {
            if (logSize == phSize)  /* grow the buffer when full */
            {
                phSize *= 2;
                product = (MONOM*)realloc(product, phSize * sizeof(MONOM));
                monomAllocationVerification(product);
            }
            product[logSize].coefficient = poly1[i1].coefficient * poly2[i2].coefficient;
            product[logSize].power = poly1[i1].power + poly2[i2].power;
            logSize++;
        }
    }

    mergeSort(product, logSize);
    *productSize = sumMonomsWithSamePower(product, logSize);
    return product;
}
I understand that I'm dealing with memory errors and problems, but is there any quick way to analyze my code and look for memory errors? I've looked over my code a dozen times searching for this kind of error and found nothing. (I didn't want to post the whole thing here since it's 420 lines long.)
First of all, if heap corruption is detected on the first malloc(), that means it happened earlier (not in this function, or on a previous pass through it). So the problem may well lie outside this code.
However, the code also looks suspicious to me.
monomAllocationVerification() has no size parameter, so presumably it checks a single monom; yet you call it only once after each realloc, on the pointer to the first element, despite having allocated space for quite a few monoms. Please clarify that decision.
It is also a bit unclear why sumMonomsWithSamePower() should both return a size and modify the array. Maybe just a quirk, but still suspicious.
UPDATE
The problem was in other functions: a few realloc calls with the wrong size.
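Since wrong realloc sizes were the culprit, it may help to note the usual safe pattern: compute the new size in bytes explicitly, and keep the old pointer until realloc succeeds. A generic sketch (my code, not the OP's):

#include <stdlib.h>

/* Grow *buf from *cap to new_cap elements of elem_size bytes each.
 * Returns 0 on success; on failure returns -1 and leaves *buf valid. */
static int grow_buffer(void **buf, size_t *cap, size_t new_cap, size_t elem_size)
{
    void *tmp = realloc(*buf, new_cap * elem_size);  /* size in BYTES */
    if (tmp == NULL)
        return -1;      /* *buf still owns the old block */
    *buf = tmp;
    *cap = new_cap;
    return 0;
}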
I would check the return value of malloc() and use perror() to describe what error has occurred. Also, here is the documentation for malloc() and perror().
if ((product = (MONOM*)malloc(phSize * sizeof(MONOM))) == NULL)
{
    /* perror() prints a system-specified string describing the error */
    perror("ERROR: Failed to malloc ");
    return NULL;   /* the function returns MONOM*, so return NULL, not 1 */
}
Also, do you know the size of MONOM? If not, add the following line to your code (sizeof yields a size_t, hence %zu):
printf("MONOM SIZE = %zu\n", sizeof(MONOM));