"Error -- memory violation : Exception ACCESS_VIOLATION received" on LR controller - c

Scenario: We are trying to download 2,500 PDFs from our website, and we need to measure the response time of this scenario when it runs alongside the application's other business flows. The custom code I wrote for dynamically selecting and downloading PDFs worked fine for 200-300 PDFs, both in VuGen and on the Controller. But when we ran the same script with 2,500 PDFs loaded in the DB, it still worked fine in VuGen but failed with an out-of-memory error on the Controller. I tried running the script alone on the Controller with 20 concurrent users and it still failed with the same error, which appeared as soon as the concurrent users started running on the server. Here is what I tried and what I observed:
1. I checked the load generator we are using; it showed no high CPU or memory usage at the time of the error.
2. I turned off logging completely and also disabled "Generate snapshot on error".
3. I increased the network buffer size from the default 12 KB to around 2 MB, since the server responds with PDFs of that size.
4. I also increased the JavaScript runtime memory setting, although I suspect the problem is in my code.
5. I have set web_set_max_html_param_len("100000");
Here is my code:
int download_size,i,m;
m=atoi(lr_eval_string("{DownloadableRecords_FundingNotices_count}"));
for (i = 1; i <= m; i++)
    lr_param_sprintf("r_buf", "%sselectedNotice=%s&", lr_eval_string("{r_buf}"), lr_paramarr_idx("DownloadableRecords_FundingNotices", i));
lr_save_string(lr_eval_string("{r_buf}"), "dpAllRecords");
I am not able to find the issue with my code, as it runs fine in VuGen. One thing I noticed: it creates a huge mdrv.log file to accommodate all 2,500 members in the "%sselectedNotice=%s&" format shown above.
I need help on this.
Since that did not work and I could not find the root cause, I tried modifying the code to hold the value in a string buffer instead of a parameter. This time the code did not work properly: I could not get a properly formatted value, and my web_custom_request failed as a result.
So here is the code with sprintf:
char *r_buf=(char *) malloc(55000);
int download_size,i,m;
m=atoi(lr_eval_string("{DownloadableRecords_FundingNotices_count}"));
for (i = 1; i <= m; i++)
    sprintf(r_buf, "%sselectedNotice=%s&", r_buf, lr_paramarr_idx("DownloadableRecords_FundingNotices", i));
lr_save_string(r_buf, "dpAllRecords");
I also tried using this:
lr_save_string(lr_eval_string("{r_buf}"), "dpAllRecords");
though that is meant for embedded parameters, but to no avail.

You could try something like the code below. It frees the allocated memory, something you do not do in your examples.
I changed:
- the way r_buf is allocated
- the way r_buf is populated (doing a sprintf() into the buffer and from the same buffer might not work as expected)
- the loop bound, which now uses lr_paramarr_len()
- the code now FREES THE ALLOCATED BUFFER!
- the loop now checks that the allocated buffer is big enough
Action() Code:
char *r_buf;
char buf[2048];
int download_size, i, m;

// Allocate memory (calloc zero-initializes the buffer)
if ( (r_buf = (char *)calloc(65535, sizeof(char))) == NULL )
{
    lr_error_message("Insufficient memory available");
    return -1;
}
memset( buf, 0, sizeof(buf) );

m = lr_paramarr_len("DownloadableRecords_FundingNotices");
for (i = 1; i <= m; i++) {
    sprintf( buf, "selectedNotice=%s&", lr_paramarr_idx("DownloadableRecords_FundingNotices", i) );
    // Check the buffer is big enough to hold the new data plus the terminating NUL
    if ( strlen(r_buf) + strlen(buf) >= 65535 ) {
        lr_error_message("Buffer exceeded");
        lr_abort();
    }
    // Concatenate to the final buffer
    strcat( r_buf, buf ); // Bugfix: this was "strcat( r_buf, "%s", buf );"
}
// Save the buffer to a parameter
lr_save_string(r_buf, "dpAllRecords");
// Free memory
free( r_buf );
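
If building the string turns out to be slow with 2,500 entries, a variation on the same idea is to track a write offset instead of calling strcat(), which rescans the whole buffer on every iteration. A minimal sketch, assuming the same parameter array and a 64 KB cap:
char *r_buf;
char item[2048];
int i, m, len, offset = 0;
int cap = 65536;

if ( (r_buf = (char *)calloc(cap, sizeof(char))) == NULL ) {
    lr_error_message("Insufficient memory available");
    return -1;
}
m = lr_paramarr_len("DownloadableRecords_FundingNotices");
for (i = 1; i <= m; i++) {
    sprintf(item, "selectedNotice=%s&", lr_paramarr_idx("DownloadableRecords_FundingNotices", i));
    len = strlen(item);
    // Stop before overflowing the buffer (leave room for the terminating NUL)
    if (offset + len >= cap) {
        lr_error_message("Buffer exceeded");
        free(r_buf);
        lr_abort();
    }
    // Copy at the current offset instead of rescanning the whole buffer with strcat()
    memcpy(r_buf + offset, item, len + 1);
    offset += len;
}
lr_save_string(r_buf, "dpAllRecords");
free(r_buf);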

Related

How do I read OpenVINO IR models from memory with the OpenVINO C API

I am having trouble reading OpenVINO IR networks (XML and bin) from memory using ie_core_read_network_from_memory() in the OpenVINO 2021.4 C API ie_c_api.h.
I suspect that I am creating the network weight blob wrong, but I cannot find any information on how to create weight blobs correctly for networks.
I have read the OpenVINO C API docs but cannot deduce from docs what I am doing wrong. The OpenVINO code repo contains some C code samples, but none of the samples seem to use ie_core_read_network_from_memory().
Below is a cut out of the code I am having trouble with.
// void* dmem->data - network memory buffer (float32)
// size_t dmem->size - size of network memory buffer (bytes)
ie_core_t* ov_core = NULL;
IEStatusCode status = ie_core_create("", &ov_core);
if (status != OK)
{
// error handling
}
const dimensions_t weights_tensor_dims =
{ 4, { 1, 1, 1, dmem->size/sizeof(float) } };
tensor_desc_t weights_tensor_desc = { OIHW, weights_tensor_dims, FP32 };
ie_blob_t* ov_model_weight_blob = NULL;
status = ie_blob_make_memory_from_preallocated(
&weights_tensor_desc, dmem->data, dmem->size, &ov_model_weight_blob);
if (status != OK)
{
// error handling
}
// char* model_xml_desc - the model's XML string
uint8_t* ov_model_xml_content = (uint8_t*)model_xml_desc;
ie_network_t* ov_network = NULL;
size_t xml_sz = strlen(ov_model_xml_content);
status = ie_core_read_network_from_memory(
ov_core, ov_model_xml_content, xml_sz, ov_model_weight_blob, &ov_network);
if (status != OK)
{
// Always get "GENERAL_ERROR (-1)"
}
The code works fine down to the ie_core_read_network_from_memory() call which results in "GENERAL_ERROR".
I have tried two models that were converted from Tensorflow. One is a simple [X] -> [Y] regression model (single input value, single output value). The other is also a regression model [X_1, X_2, ..., X_9] -> [Y] (nine input values, single output value). They work fine when reading them from file with ie_core_read_network(), but for my use case I must provide the network as a binary memory buffer and XML string.
I would appreciate any help, either by pointing out what I am getting wrong or directing me to some code samples that use ie_core_read_network_from_memory().
System information:
Windows 10
OpenVINO v2021.4.689
Microsoft Visual Studio 2019
UPDATE: An Intel employee reached out to me in another forum and pointed out that there is a unit test for ie_core_read_network_from_memory(). The unit test successfully reads a network from memory and made clear that I was in fact using a faulty tensor description to produce the weight blob, just as I suspected. Apparently the weight blob descriptor should be one dimensional, have memory layout ANY and datatype U8 even though the model weights are fp32.
From the unit test:
std::string bin_std = TestDataHelpers::generate_model_path("test_model", "test_model_fp32.bin");
const char* bin = bin_std.c_str();
//...
std::vector<uint8_t> weights_content(content_from_file(bin, true));
tensor_desc_t weights_desc { ANY, { 1, { weights_content.size() } }, U8 };
However, simply changing the tensor descriptor was not enough to get my code working, so I still need to properly translate the C++ code from the unit test into my C environment before the issue can be considered solved.
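For reference, a direct C translation of the unit test's descriptor (untested sketch; dmem->size here is the size of the weights buffer in bytes) would look something like this:
// Sketch only: describe the weights as a one-dimensional U8 blob with layout ANY,
// as the unit test does, even though the underlying weights are fp32
const dimensions_t weights_tensor_dims = { 1, { dmem->size } };
tensor_desc_t weights_tensor_desc = { ANY, weights_tensor_dims, U8 };
ie_blob_t* ov_model_weight_blob = NULL;
status = ie_blob_make_memory_from_preallocated(
    &weights_tensor_desc, dmem->data, dmem->size, &ov_model_weight_blob);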
Thanks
Refer to the tensor_desc struct and the standard layout formats.
Apart from that, it is recommended to use the Benchmark_app tool to test inference performance.

VS2010, scanf, strange behaviour

I'm converting some source from VC6 to VS2010. The code is written in C++/CLI and it is an MFC application. It includes a line:
BYTE mybyte;
sscanf(source, "%x", &mybyte);
Which is fine for VC6 (for more than 15 years) but causing problems in VS2010 so I created some test code.
void test_WORD_scanf()
{
char *source = "0xaa";
char *format = "%x";
int result = 0;
try
{
WORD pre = -1;
WORD target = -1;
WORD post = -1;
printf("Test (pre scan): stack: pre=%04x, target=%04x, post=%04x, sourse='%s', format='%s'\n", pre, target, post, source, format);
result = sscanf(source, format, &target);
printf("Test (post scan): stack: pre=%04x, target=%04x, post=%04x, sourse='%s', format='%s'\n", pre, target, post, source, format);
printf("result=%x", result);
// modification suggested by Werner Henze.
printf("&pre=%x sizeof(pre)=%x, &target=%x, sizeof(target)=%x, &post=%x, sizeof(post)=%d\n", &pre, sizeof(pre), &target, sizeof(target), &post, sizeof(post));
}
catch (...)
{
printf("Exception: Bad luck!\n");
}
}
Building this (in DEBUG mode) is no problem. Running it gives strange results that I cannot explain. First, I get the output from the two printf statements as expected. Then I get a run-time warning, which is the unexpected part for me.
Test (pre scan): stack: pre=ffff, target=ffff, post=ffff, source='0xaa', format='%x'
Test (post scan): stack: pre=ffff, target=00aa, post=ffff, source='0xaa', format='%x'
result=1
Run-Time Check Failure #2 - Stack around the variable 'target' was corrupted.
Using the debugger I found out that the run time check failure is triggered on returning from the function. Does anybody know where the run time check failure comes from? I used Google but can't find any suggestion for this.
In the actual code it is not a WORD that is used in sscanf but a BYTE (and I have a BYTE version of the test function). That caused actual stack corruption with the "%x" format (overwriting variable pre with 0), while using "%hx" (what I expected to be the correct format) still causes problems by overwriting the lower byte of variable pre.
Any suggestion is welcome.
Note: I edited the example code to include the return result from sscanf()
Kind regards,
Andre Steenveld.
sscanf with %x writes an int. If you provide the address of a BYTE or a WORD then you get a buffer overflow/stack overwrite. %hx will write a short int.
The solution is to have an int variable, let sscanf write to that and then set your WORD or BYTE variable to the read value.
int x;
sscanf("0xaa", "%x", &x); // source string first, then the format, and pass the address of x
BYTE b = (BYTE)x;
BTW, for your test and the message
Run-Time Check Failure #2 - Stack around the variable 'target' was corrupted.
you should also print out the addresses of the variables and you'll probably see that the compiler added some padding/security check space between the variables pre/target/post.
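For completeness, a minimal self-contained sketch of that approach for the BYTE case (assuming BYTE is the usual Windows typedef for unsigned char):
#include <stdio.h>

typedef unsigned char BYTE;    /* stand-in for the Windows BYTE typedef */

int main(void)
{
    const char *source = "0xaa";
    unsigned int x = 0;        /* %x expects a pointer to unsigned int */
    BYTE mybyte = 0;

    if (sscanf(source, "%x", &x) == 1)
        mybyte = (BYTE)x;      /* narrow the value only after the scan */

    printf("mybyte=%02x\n", mybyte);
    return 0;
}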

Memory leaks from splitting and duplicating strings

I am working on a fairly simple application written in C with GTK+ that is leaking memory badly. It has a few basic functions on timers that check the clock and poll an external networked device, parsing the returned string. The application runs on a small touch panel, and through top I can watch the available memory being eaten up as it runs.
I'm pretty new to C, so I'm not surprised that I'm doing something wrong; I just can't seem to figure out what. I've been trying to use Valgrind to narrow it down, but honestly the output is a little over my head (a 10k+ line log file generated from running the application for less than a minute). But in digging through that log I did find some functions repeatedly showing up with definitely lost blocks, all with a similar structure.
Example 1:
This is a short function that gets called when an option is selected. The last line with the g_strdup_printf is the one called out by Valgrind. select_next_show and select_show_five_displayed are both global variables.
static void show_box_five_clicked ()
{
g_timer_start(lock_timer);
gtk_image_set_from_file (GTK_IMAGE(select_show_1_cb_image), "./images/checkbox_clear.png");
gtk_image_set_from_file (GTK_IMAGE(select_show_2_cb_image), "./images/checkbox_clear.png");
gtk_image_set_from_file (GTK_IMAGE(select_show_3_cb_image), "./images/checkbox_clear.png");
gtk_image_set_from_file (GTK_IMAGE(select_show_4_cb_image), "./images/checkbox_clear.png");
gtk_image_set_from_file (GTK_IMAGE(select_show_5_cb_image), "./images/checkbox_checked.png");
select_next_show = g_strdup_printf("%i",select_show_five_displayed);
}
Example 2:
This is another function that gets called often and came up a lot in the Valgrind log. It takes the incoming response from the networked device, parses it into two strings, then returns one.
static gchar* parse_incoming_value(gchar* incoming_message)
{
gchar *ret;
GString *incoming = g_string_new(incoming_message);
gchar **messagePieces = g_strsplit((char *)incoming->str, "=", 2);
ret = g_strdup(messagePieces[1]);
g_strfreev(messagePieces);
g_string_free(incoming, TRUE);
return ret;
}
In all cases like these that are causing problems, I'm freeing everything I can without causing segmentation faults, but I must still be missing something or doing something wrong.
UPDATE:
To answer questions in comments, here is an example (trimmed down) of how I'm using the parse function and where the return is freed:
static void load_schedule ()
{
...other code...
gchar *holder;
gchar *holder2;
holder = read_a_line(schedListenSocket);
holder2 = parse_incoming_value(holder);
schedule_info->regShowNumber = holder2;
holder = read_a_line(schedListenSocket);
holder2 = parse_incoming_value(holder);
schedule_info->holidayShowNumber = holder2;
...other code....
g_free(holder);
g_free(holder2);
}
Any help is greatly appreciated!!
It looks like you free 'ret' only once, when calling g_free(holder2), but you have made multiple allocations for that one free: you call parse_incoming_value multiple times, each call allocating a new string, yet you free only once right at the end. If read_a_line() also returns newly allocated memory (the final g_free(holder) suggests it does), the same applies to holder: each earlier result is overwritten and never freed.
Because you copy the holder2 pointer into a schedule_info element each time, those elements are what actually hold the "leaked" memory at the end.
Instead of freeing holder2 at the end, do not free it anywhere and instead free all the elements of schedule_info when you are done with them; I presume that will show no leak.
e.g.
holder2 = <result of dynamic alloc>;
schedule_info->a = holder2;
...
holder2 = <result of dynamic alloc>;
schedule_info->b = holder2;
...
// instead of g_free(holder2) at the end, do this...
g_free(schedule_info->a);
g_free(schedule_info->b);
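
Concretely, a sketch of load_schedule along those lines (keeping the names from the snippet above, and assuming read_a_line() returns newly allocated memory, as the original g_free(holder) implies):
static void load_schedule ()
{
    /* ...other code... */
    gchar *holder;

    holder = read_a_line(schedListenSocket);
    schedule_info->regShowNumber = parse_incoming_value(holder);
    g_free(holder);                 /* the raw line is no longer needed once parsed */

    holder = read_a_line(schedListenSocket);
    schedule_info->holidayShowNumber = parse_incoming_value(holder);
    g_free(holder);

    /* ...other code...
       The strings now owned by schedule_info must be g_free'd when
       schedule_info itself is torn down, not here. */
}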

"Bad permissions for mapped region at address" Valgrind error for a hash table

I'm pretty new to C. When I run the following code for a hash table under Valgrind:
table *insertObject (table *h, int pref, char ch)
{
struct node x;
int i;
if (ch < 0)
{
ch=256-ch;
}
x.chr=ch;
x.pref=pref;
i = hash(pref, ch, h->size);
while (h->hash[i].pref!=0)
{
i++;
}
h->hash[i]=x;
h->size++;
return h;
}
I get the following error:
==9243==
==9243== Process terminating with default action of signal 11 (SIGSEGV)
==9243== Bad permissions for mapped region at address 0x6018A4
==9243== at 0x4009CD: insertObject (encode.c:119)
==9243== by 0x4008E3: main (encode.c:55)
Line 119 is the line
h->hash[i]=x;
The funny thing is, when I run the whole code through a debugger, it works fine 90% of the time. However, for some special cases, the code segfaults, and the debugger tells me this is also the culprit. What's wrong?
The error is due to an invalid memory access: basically, your application is trying to access a memory area that is not mapped into its address space.
Quite likely the value of i exceeds the bounds of the hash array, since the while loop increments i without any bounds check. I cannot be more precise because I do not know how the hash function works or what pref stands for.
However, you should verify the value of i with a debugger in the 10% of cases where the application does not work.
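If the table can fill up (or the hash clusters values), the probe loop also needs a bound. A sketch of bounded linear probing, assuming a hypothetical h->capacity field holding the number of allocated slots (with pref == 0 marking a free slot, as in your code):
/* Sketch only: h->capacity is a hypothetical field for the number of
   allocated slots; the original h->size appears to count insertions. */
int start = hash(pref, ch, h->capacity);
int i = start;
while (h->hash[i].pref != 0)
{
    i = (i + 1) % h->capacity;   /* wrap around instead of walking off the array */
    if (i == start)
    {
        /* every slot is occupied: grow the table or fail
           instead of writing out of bounds */
        return NULL;
    }
}
h->hash[i] = x;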
P.S. A program should work fine 100% of the time.

node.js file memory leak?

I'm seeing a memory leak with the following code:
while (true) {
console.log("Testing.");
}
I have tried defining the string and just using a constant, but it leaks memory, still:
var test = "Testing.";
while (true) {
console.log(test);
}
The same leak happens if I use a file instead of the standard log:
var test = "Testing.";
var fh = fs.createWriteStream("test.out", {flags: "a"});
while (true) {
fh.write(test);
}
I thought maybe it was because I wasn't properly closing the file, but I tried this and still saw the leak:
var test = "Testing";
while (true) {
var fh = fs.createWriteStream("test.out", {flags: "a"});
fh.end(test);
fh.destroy();
fh = null;
}
Does anyone have any hints as to how I'm supposed to write things without leaking memory?
This happens because you never give node a chance to handle "write successful" events, so they queue up endlessly. To give node a chance to handle them, you have to let the event loop do one iteration from time to time. This won't leak:
function newLine() {
console.log("Testing.");
process.nextTick(newLine);
}
newLine();
In real use cases this is not an issue, because you almost never have to write out such huge amounts of data at once that it matters. And if you do, cycle the event loop from time to time.
However, there's a second issue that occurs even with the nextTick trick: writing is async, and if the console/file/whatever is slower than node, node buffers data endlessly until the output is free again. To avoid this, listen for the drain event after writing some data; it tells you when the pipe is free again. See here: http://nodejs.org/docs/latest/api/streams.html#event_drain_
