node.js file memory leak?

I'm seeing a memory leak with the following code:
while (true) {
    console.log("Testing.");
}
I have tried defining the string and just using a constant, but it leaks memory, still:
var test = "Testing.";
while (true) {
    console.log(test);
}
The same leak happens if I use a file instead of the standard log:
var fs = require("fs");

var test = "Testing.";
var fh = fs.createWriteStream("test.out", {flags: "a"});
while (true) {
    fh.write(test);
}
I thought maybe it was because I wasn't properly closing the file, but I tried this and still saw the leak:
var fs = require("fs");

var test = "Testing";
while (true) {
    var fh = fs.createWriteStream("test.out", {flags: "a"});
    fh.end(test);
    fh.destroy();
    fh = null;
}
Does anyone have any hints as to how I'm supposed to write things without leaking memory?

This happens because you never give node a chance to handle "write successful" events, so they queue up endlessly. To give node a chance to handle them, you have to let the event loop do one iteration from time to time. This won't leak:
function newLine() {
    console.log("Testing.");
    process.nextTick(newLine);
}
newLine();
In real use cases this isn't an issue, because you almost never have to write out such huge amounts of data at once that it matters. And if you do, cycle the event loop from time to time.
However, there's a second issue that also occurs with the nextTick trick: writing is async, and if the console/file/whatever is slower than node, node buffers data endlessly until the output is free again. To avoid this, listen for the drain event after writing some stuff - it tells you when the pipe is free again. See here: http://nodejs.org/docs/latest/api/streams.html#event_drain_

Related

FreeRTOS: xEventGroupWaitBits() crashes inside a loop with scheduler running

We have several tasks running on an STM32 MCU. In the main.c file we call all the init functions for the various threads. Currently there is one renewing xTimer to trigger a periodic callback (which, at present, does nothing except print a message that it was called). Declarations as follows, outside any function:
TimerHandle_t xMotorTimer;
StaticTimer_t xMotorTimerBuffer;
EventGroupHandle_t MotorEventGroupHandle;
In the init function for the thread:
xMotorTimer = xTimerCreateStatic("MotorTimer",
                                 xTimerPeriod,
                                 uxAutoReload,
                                 ( void * ) 0,
                                 MotorTimerCallback,
                                 &xMotorTimerBuffer);
xTimerStart(xMotorTimer, 100);
One thread starts an infinite loop that pauses on an xEventGroupWaitBits() to determine whether to enter an inner loop, which is then governed by its own state:
DeclareTask(MotorThread)
{
    bool done = false;
    EventBits_t event;
    for (;;)
    {
        Packet * pkt = NULL;
        event = xEventGroupWaitBits( MotorEventGroupHandle,
                                     EVT_MOTOR_START | EVT_MOTOR_STOP, // EventBits_t uxBitsToWaitFor
                                     pdTRUE,                           // BaseType_t xClearOnExit
                                     pdFALSE,                          // BaseType_t xWaitForAllBits
                                     portMAX_DELAY                     // TickType_t xTicksToWait
                                   );
        if (event & EVT_MOTOR_STOP)
        {
            MotorStop(true);
        }
        if (event & EVT_MOTOR_START)
        {
            EnableMotor(MOTOR_ALL);
            done = false;
            while (!done && !abortTest)
            {
                xQueueReceive(motorQueue, &pkt, portMAX_DELAY);
                if (pkt == NULL)
                {
                    done = true;
                } else {
                    done = MotorExecCmd(pkt);
                    done = ( uxQueueMessagesWaiting(motorQueue) == ( UBaseType_t ) 0);
                    FreePacket(pkt);
                }
            }
        }
    }
}
xEventGroupWaitBits() fires successfully once, the inner loop enters, then exits when the program state meets the expected conditions. The outer loop repeats as it should, but when it arrives again at the xEventGroupWaitBits() call, it crashes almost instantly. In fact, it crashes a few lines down into the wait function, at a call to uxTaskResetEventItemValue(). I can't even step the debugger into the function, as if calling a bad address. But if I check the disassembly, the memory address for the BL instruction hasn't changed since the previous loop, and that address is valid. The expected function is actually there.
I can prevent this chain of events happening altogether by not calling that xTimerStart() and leaving everything else as-is. Everything runs just fine, so it's definitely not xEventGroupWaitBits() (or at least not just that). We tried switching to xEventGroupGetBits() and adding a short osDelay to the loop just as an experiment. That also froze the whole system.
So, the main question: are we doing something FreeRTOS is not meant to do here, using xEventGroupWaitBits() with xTimers running? Or is there supposed to be something between xEventGroupWaitBits() calls, possibly some kind of state reset that we've overlooked? Reviewing the docs, I can't see it, but I could have missed a detail.

Can't free() a char*

void* password_cracker_thread(void* args) {
    cracker_args* arg_struct = (cracker_args*) args;
    md5hash* new_hash = malloc(sizeof(md5hash));
    while (1)
    {
        char* password = fetch(arg_struct->in);
        if (password == NULL)
        {
            deposit(arg_struct->out, NULL);
            free(new_hash);
            pthread_exit(NULL);
        }
        compute_hash(password, new_hash);
        if (compare_hashes(new_hash, (md5hash**)arg_struct->hashes, arg_struct->num_hashes) != -1)
        {
            printf("VALID_PASS:%s \n", password);
            deposit(arg_struct->out, password);
        } else {
            free(password);
        }
    }
}
This is part of a program where you get char* passwords from a ring buffer, calculate their MD5 hashes, compare them, and push them into the next buffer if valid.
My problem is: why can't I free the ones I don't need?
The whole program stops if I try to, and if I don't, I get memory leaks.
"You", and by this I mean your program, can only free() storage that was got from malloc()-and-friends, and only in the granularity it was got, and only once per chunk of storage.
In the code you show here, you're attempting to free() something got from fetch(). Since we can't see the definition of that function and you have not provided any documentation of it, our best guess is that
fetch() gives you a pointer to something other than a whole chunk
got from malloc()-et-al; and/or
some other part of the program not
shown here free()s the relevant chunk itself.
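To make the first point concrete, here is a minimal sketch (the variable names are made up for illustration, not taken from the code above) of the two ways such a free() typically goes wrong: freeing a pointer that doesn't point to the start of a malloc()'d chunk, and freeing the same chunk twice.
#include <stdlib.h>
#include <string.h>

int main(void) {
    char *chunk = malloc(32);           /* whole chunk obtained from malloc() */
    if (chunk == NULL)
        return 1;

    strcpy(chunk, "  secret");
    char *trimmed = chunk + 2;          /* points into the chunk, not at its start */

    /* free(trimmed);   WRONG: not the pointer malloc() returned (undefined behavior) */
    free(chunk);                        /* correct: free the whole chunk, exactly once */
    /* free(chunk);     WRONG: double free (undefined behavior) */

    return 0;
}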

Memory leaks from splitting and duplicating strings

I am working on a fairly simple application written in C with GTK+ that is leaking memory badly. It has a few basic functions on timers that check the clock and poll an external networked device, parsing the string returned. The application runs on a small touch panel, and through top I can watch the available memory get eaten up as it runs.
I'm pretty new to C, so I'm not surprised that I'm doing something wrong; I just can't seem to figure out what. I've been trying to use Valgrind to narrow it down, but honestly the output is a little over my head (a 10k+ line log file generated from running the application for less than a minute). But in digging through that log I did find some functions repeatedly showing up with permanently lost blocks, all using a similar structure.
Example 1:
This is a short function that gets called when an option is selected. The last line with the g_strdup_printf is the one called out by Valgrind. select_next_show and select_show_five_displayed are both global variables.
static void show_box_five_clicked ()
{
    g_timer_start(lock_timer);
    gtk_image_set_from_file (GTK_IMAGE(select_show_1_cb_image), "./images/checkbox_clear.png");
    gtk_image_set_from_file (GTK_IMAGE(select_show_2_cb_image), "./images/checkbox_clear.png");
    gtk_image_set_from_file (GTK_IMAGE(select_show_3_cb_image), "./images/checkbox_clear.png");
    gtk_image_set_from_file (GTK_IMAGE(select_show_4_cb_image), "./images/checkbox_clear.png");
    gtk_image_set_from_file (GTK_IMAGE(select_show_5_cb_image), "./images/checkbox_checked.png");
    select_next_show = g_strdup_printf("%i", select_show_five_displayed);
}
Example 2:
This is another function that gets called often and came up a lot in the Valgrind log. It takes the incoming response from the networked device, parses it into two strings, then returns one.
static gchar* parse_incoming_value(gchar* incoming_message)
{
    gchar *ret;
    GString *incoming = g_string_new(incoming_message);
    gchar **messagePieces = g_strsplit((char *)incoming->str, "=", 2);
    ret = g_strdup(messagePieces[1]);
    g_strfreev(messagePieces);
    g_string_free(incoming, TRUE);
    return ret;
}
In all the cases like these which are causing problems I'm freeing everything I can without causing segmentation faults, but I must be missing something else or doing something wrong.
UPDATE:
To answer questions in comments, here is an example (trimmed down) of how I'm using the parse function and where the return is freed:
static void load_schedule ()
{
    ...other code...
    gchar *holder;
    gchar *holder2;

    holder = read_a_line(schedListenSocket);
    holder2 = parse_incoming_value(holder);
    schedule_info->regShowNumber = holder2;

    holder = read_a_line(schedListenSocket);
    holder2 = parse_incoming_value(holder);
    schedule_info->holidayShowNumber = holder2;

    ...other code....
    g_free(holder);
    g_free(holder2);
}
Any help is greatly appreciated!!
It looks like you free ret only once, when calling g_free(holder2), but you've done multiple allocations for that one free - you call parse_incoming_value multiple times, each time causing an allocation, and you only free once right at the end.
Since you copy the holder2 pointer into schedule_info elements each time, it's those elements that actually hold the "leaked" memory at the end.
If you don't free holder2 anywhere, but instead free all the elements of schedule_info at the end of the code, I presume that shows no leak?
e.g.
holder2 = <result of dynamic alloc>;
schedule_info->a = holder2;
...
holder2 = <result of dynamic alloc>;
schedule_info->b = holder2;
...
// instead of g_free(holder2) at the end, do this...
g_free(schedule_info->a);
g_free(schedule_info->b);
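Applying the same idea to the load_schedule excerpt, a minimal sketch (assuming read_a_line returns newly allocated memory and that schedule_info owns the parsed strings until it is torn down) would free each raw line right after parsing it:
static void load_schedule ()
{
    /* ...other code... */
    gchar *holder;

    /* first value: free the raw line as soon as it has been parsed;
       schedule_info keeps the parsed copy until it is destroyed     */
    holder = read_a_line(schedListenSocket);
    schedule_info->regShowNumber = parse_incoming_value(holder);
    g_free(holder);

    /* second value: same pattern, so reusing the holder pointer
       no longer leaks the first line                             */
    holder = read_a_line(schedListenSocket);
    schedule_info->holidayShowNumber = parse_incoming_value(holder);
    g_free(holder);

    /* ...other code... */
    /* g_free(schedule_info->regShowNumber) and friends belong wherever
       schedule_info itself is freed                                    */
}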

InternetReadFileEx gives 10035 and 1008 errors

I am trying to write an asynchronous WinInet application. I read the data in my callback function in the INTERNET_STATUS_REQUEST_COMPLETE case, and I handle ERROR_IO_PENDING errors as well. But after some data is read from the internet, the InternetReadFileEx function gives me error 10035 = WSAEWOULDBLOCK (a non-blocking socket operation could not be completed immediately). After that error I call InternetReadFileEx again, and this time it gives me error 1008 = ERROR_NO_TOKEN (an attempt was made to reference a token that does not exist). I think my design is not correct, and I receive these errors because of that.
Here is a snippet of my code:
case INTERNET_STATUS_REQUEST_COMPLETE:
{
    BOOL bAllDone = FALSE;
    DWORD lastError;
    do
    {
        // Create INTERNET_BUFFERS
        char m_pbReadBuffer[4096];
        INTERNET_BUFFERS BuffersIn;
        ZeroMemory(&BuffersIn, sizeof(INTERNET_BUFFERS));
        BuffersIn.dwStructSize = sizeof(INTERNET_BUFFERS);
        BuffersIn.lpvBuffer = m_pbReadBuffer;
        BuffersIn.dwBufferLength = 4096;
        InternetReadFileEx(ReqContext->File, &BuffersIn, IRF_ASYNC, 1);
        // HERE I GOT THOSE 10035 and 1008 ERRORS
        lastError = GetLastError();
        if (lastError == 997) // handling ERROR_IO_PENDING
            break; // break the while loop
        // append it to my ISTREAM
        (ReqContext->savedStream)->Write(BuffersIn.lpvBuffer, BuffersIn.dwBufferLength, NULL);
        if (BuffersIn.dwBufferLength == 0)
            bAllDone = TRUE;
    } while (bAllDone == FALSE);
    //delete[] m_pbReadBuffer;
    if (bAllDone == TRUE && lastError == 0)
    {
        // these are for passing the ISTREAM to the function which calls "InternetOpenUrl"
        LARGE_INTEGER loc;
        loc.HighPart = 0;
        loc.LowPart = 0;
        ReqContext->savedStream->Seek(loc, STREAM_SEEK_SET, NULL);
        ReqContext->savedCallback->OnUrlDownloaded(S_OK, ReqContext->savedStream); // Tell silverlight ISTREAM is ready
        ReqContext->savedStream->Release();
        ReqContext->savedCallback->Release();
        InternetCloseHandle(ReqContext->File);
        InternetSetStatusCallback(ReqContext->Connection, NULL);
        InternetCloseHandle(ReqContext->Connection);
        delete[] ReqContext;
    }
}
break;
Can anyone give me a hand to correct that?
Thanks everyone helping...
GetLastError() is only meaningful if InternetReadFileEx() (or any other API, for that matter) actually fails with an error. Otherwise, you will be processing an error from an earlier API call, giving your code a false illusion that an error happened when it really may not have. You MUST pay attention to API return values, but you are currently ignoring the return value of InternetReadFileEx().
Worse than that, though, you are using InternetReadFileEx() in async mode but with a receiving buffer that is local to the INTERNET_STATUS_REQUEST_COMPLETE callback handler. If InternetReadFileEx() fails with an ERROR_IO_PENDING error, the read is performed in the background and INTERNET_STATUS_REQUEST_COMPLETE will be triggered when the read is complete. However, when that error occurs, you are breaking your loop (even though the read is still in progress), and that buffer will go out of scope before the read is finished. While the read is still in progress, InternetReadFileEx() keeps writing into that stack buffer, but the stack space may get re-used for other things at the same time because your code moved on without waiting for the read to finish.
You need to re-think your approach. Either:
remove the IRF_ASYNC flag, since that is how the rest of your callback code is expecting InternetReadFileEx() to behave.
re-write the code to operate in async mode correctly. Dynamically allocate the receive buffer (or at least store it somewhere else that remains in scope during the async reading), don't call IStream::Write() unless you actually have data to write (only when InternetReadFileEx() returned TRUE right away, or you get an INTERNET_STATUS_REQUEST_COMPLETE event with a success code from an earlier InternetReadFileEx()/ERROR_IO_PENDING call), etc.
There are plenty of online examples and tutorials that show how to use InternetReadFileEx() in async mode. Search around.
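As a rough illustration of the second option (a sketch, not a drop-in fix - the DrainResponse helper and the persistent readBuffer field on the request context are assumptions, not taken from the code above), the read loop could check the return value and keep the buffer alive across callbacks like this:
#include <windows.h>
#include <wininet.h>

/* Hypothetical request context: only the fields used below are sketched. */
typedef struct REQUEST_CONTEXT {
    HINTERNET File;
    char      readBuffer[4096];   /* persistent: lives as long as the request */
} REQUEST_CONTEXT;

/* Returns TRUE when the whole response has been read. Returns FALSE when the
   read went asynchronous (wait for the next INTERNET_STATUS_REQUEST_COMPLETE
   callback and call this again) or when a real error occurred.              */
static BOOL DrainResponse(REQUEST_CONTEXT *ctx)
{
    for (;;)
    {
        INTERNET_BUFFERS BuffersIn;
        ZeroMemory(&BuffersIn, sizeof(BuffersIn));
        BuffersIn.dwStructSize   = sizeof(BuffersIn);
        BuffersIn.lpvBuffer      = ctx->readBuffer;
        BuffersIn.dwBufferLength = sizeof(ctx->readBuffer);

        /* Check the return value; only call GetLastError() when it fails. */
        if (!InternetReadFileEx(ctx->File, &BuffersIn, IRF_ASYNC, (DWORD_PTR)ctx))
        {
            if (GetLastError() == ERROR_IO_PENDING)
                return FALSE;   /* read continues in the background */
            /* a genuine failure: report it and clean up in the caller */
            return FALSE;
        }

        if (BuffersIn.dwBufferLength == 0)
            return TRUE;        /* end of response: safe to hand the stream over */

        /* the read completed synchronously, so the data in ctx->readBuffer is
           valid here; append it to the output stream (e.g. the saved IStream) */
    }
}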

how to deal with error return in c

How does one deal with the error return of a routine in C when function calls go deep?
Since C does not provide an exception-throwing mechanism, we have to check return values for each function. For example, the "a" routine may be called by "b", and "b" may be called by many other routines, so if "a" returns an error, we then have to check it in "b" and in all other routines calling "b".
That can make the code complicated if "a" is a very basic routine. Is there any solution to this problem?
Actually, what I want here is a quick return path when such an error happens, so we only need to deal with the error in one place.
You can use setjmp() and longjmp() to simulate exceptions in C.
http://en.wikipedia.org/wiki/Setjmp.h
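Roughly, a minimal sketch of that approach (the function names here are made up for illustration): a single jmp_buf acts as the "catch" point, and deep routines longjmp() back to it on failure, skipping all the intermediate return-value checks.
#include <setjmp.h>
#include <stdio.h>

/* One jmp_buf shared by the call chain acts as the "catch" point. */
static jmp_buf error_env;

static void a(int fail)
{
    if (fail)
        longjmp(error_env, 42);   /* "throw": unwinds straight back to setjmp() */
}

static void b(int fail) { a(fail); }   /* no error checks needed in between */

int main(void)
{
    int err = setjmp(error_env);  /* returns 0 initially, the longjmp value later */
    if (err != 0) {
        printf("caught error %d\n", err);
        return 1;
    }
    b(1);
    return 0;
}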
There are several strategies, but the one I find the most useful is that every function returns zero on success and nonzero for an error, where the specific value indicates the specific error.
This combined with early return logic actually makes the functions quite easy to read:
int
func (int param)
{
    int rc;

    rc = func2 (param);
    if (rc)
        return rc;

    rc = func3 (param);
    if (rc)
        return rc;

    // do something else

    return 0;
}
I'm afraid that's the way it is. Without exceptions, you have to check the return value of every function in the call chain.
In the general case, no. You'll want to make sure your function calls worked as expected. Return codes are your main mechanism for ensuring this (although setting a global error number or error flag may also be appropriate, depending on context - not that it simplifies things much).
Adopting one of the techniques others have suggested should allow you to make your error checking uniform and easier to read. This will go a long way towards keeping things maintainable.
For some basic functions though, the odds of failure may be low enough not to bother, eg.
int sum(int a, int b) {
    return a + b;
}
really doesn't need to be checked. But that system call to create a new window really should be.
The best way is to design functions, whenever possible, in ways that cannot fail. This is impossible if they do I/O or memory allocation or other things with side effects, so avoid those. For example, instead of having a function that allocates memory and copies a string, have a function that is given pre-allocated memory into which it copies the string. Or you might have only one place where I/O happens, while the rest of the program just manipulates data in memory.
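For instance, a small sketch contrasting the two shapes described (the names copy_string_alloc and copy_string_into are made up for illustration):
#include <stdlib.h>
#include <string.h>

/* Shape 1: allocates internally, so it can fail and must report it. */
char *copy_string_alloc(const char *src)
{
    char *dst = malloc(strlen(src) + 1);
    if (dst == NULL)
        return NULL;            /* every caller now has to check for this */
    strcpy(dst, src);
    return dst;
}

/* Shape 2: works on caller-provided storage, so (given a large-enough
   buffer) it cannot fail and needs no error return.                    */
void copy_string_into(char *dst, size_t dst_size, const char *src)
{
    strncpy(dst, src, dst_size - 1);
    dst[dst_size - 1] = '\0';   /* guarantee termination even on truncation */
}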
Alternatively, you may decide that certain kinds of errors warrant killing the process. For example, if you're out of memory, it is hard to recover from that, so you might as well crash. (But do that in a way that is user-friendly: checkpoint relevant data to disk continuously so the user may recover.) This way, functions can pretend they never fail.
The setjmp suggestion by Murali VP is also worth checking out.
You make a list of error codes (I use an enum for that) and use them "flat" across your whole app.
So if b calls a and gets one of the error codes, you can decide whether to go on or to return the original error code.
The user/programmer should have a list of all error codes...
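A minimal sketch of that layout (the names are illustrative, not from the question):
/* One flat list of error codes shared by the whole application. */
typedef enum {
    APP_OK = 0,
    APP_ERR_NO_MEMORY,
    APP_ERR_IO,
    APP_ERR_BAD_INPUT
} app_error;

/* Stand-in for the deep routine "a". */
static app_error a(void)
{
    return APP_ERR_IO;   /* pretend the I/O failed */
}

/* "b" can either handle the code or simply pass it up unchanged. */
static app_error b(void)
{
    app_error err = a();
    if (err != APP_OK)
        return err;      /* propagate the original error code */
    /* ... normal work ... */
    return APP_OK;
}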
You can use an ugly if pyramid like:
if (getting resource 1 succeeds) {
    if (getting resource 2 succeeds) {
        if (getting resource 3 succeeds) {
            do something;
            return success;
        }
        free resource 2;
    }
    free resource 1;
}
return failure;
or the equivalent with goto (which looks much nicer):
if (getting resource 1 failed) goto err1;
if (getting resource 2 failed) goto err2;
if (getting resource 3 failed) goto err3;

do something;
return success;

err3:
    free resource 2;
err2:
    free resource 1;
err1:
    return failure;
AFAIK C is a structured programming language.
If this is a problem, the same would apply to RTL functions like fopen, fscanf, etc.
So I guess it is better to propagate errors.
You could use a macro.
#define FAIL_FUNC( funcname, ... ) if ( !funcname( __VA_ARGS__ ) ) \
                                       return false;
This way you maintain the same system but without having to write the same code each time ...
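Hypothetical usage of the macro above (the callee names are made up; it assumes each one returns a boolean success flag):
#include <stdbool.h>

/* Stand-in callees for illustration only. */
static bool read_config(const char *path) { (void)path; return true; }
static bool open_device(int id)           { (void)id;   return true; }

static bool init_all(const char *path, int id)
{
    FAIL_FUNC(read_config, path);   /* expands to: if (!read_config(path)) return false; */
    FAIL_FUNC(open_device, id);
    return true;
}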
There's a way similar to what R.. GitHub STOP HELPING ICE suggests. It's possible to reduce the number of labels using the fact that free(NULL) does nothing.
// initialize all resources to be empty at the beginning
resource1 = NULL;
resource2 = NULL;
resource3 = NULL;
err = SUCCESS;

// allocate resources
// in case of error simply jump to the end
err1 = get_resource_1(&resource1);
if (err1) {
    err = FAIL1;
    goto end;
}
err2 = get_resource_2(&resource2);
if (err2) {
    err = FAIL2;
    goto end;
}
err3 = get_resource_3(&resource3);
if (err3) {
    err = FAIL3;
    goto end;
}

do_something();

// assignment to the output parameter must come at the end
// where it's known there were no errors
*out_resource2 = resource2;
// if some of the resources are needed outside of the function,
// don't forget to assign their local variables to NULL so that
// they don't get freed
resource2 = NULL;

end:
// execution comes here in any case
// all the resources that are still owned need to be freed here
free(resource3);
free(resource2);
free(resource1);

// in case of success err will be SUCCESS
// in case of error err will hold the corresponding error
return err;
In order to reduce error-handling boilerplate, it's possible to use a macro, as Goz suggested, or a function that converts between the external error type and the internal one. In that case there would be no need to manually assign err in each branch.
#define E1 convert_error_1
#define E2 convert_error_2
#define E3 convert_error_3

my_error convert_error_1(error1 err) {
    switch (err) {
    case ERROR1_INVALID_ARGUMENT:
        // it's our responsibility not to pass an invalid
        // argument to get_resource_1; this error means we did,
        // so it's a bug in our code and it's hard to handle
        // in a way other than aborting
        abort();
    case ERROR1_SOMETHING_SOMETHING:
        return MYERROR_SOMETHING_SOMETHING;
    ...
    }
}

...

// allocate resources
// in case of error simply jump to the end
err = E1(get_resource_1(&resource1));
if (err) goto end;
err = E2(get_resource_2(&resource2));
if (err) goto end;
err = E3(get_resource_3(&resource3));
if (err) goto end;

...
Decide what kind of errors are worth dealing with.
In some cases, printing an error message on stderr and then calling exit with a non-zero argument is the best way to go.
This is often done when protecting malloc. A wrapper xmalloc is written which calls malloc and in case of failure prints an error message and then exits. You can find a real example of this here: (https://github.com/sailfishos-mirror/readline/blob/master/xmalloc.c).
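A minimal sketch of such a wrapper (just the general shape, not the readline implementation linked above):
#include <stdio.h>
#include <stdlib.h>

/* Like malloc(), but never returns NULL: on failure it reports the
   error and terminates the process, so callers need no error checks. */
void *xmalloc(size_t size)
{
    void *p = malloc(size);
    if (p == NULL) {
        fprintf(stderr, "xmalloc: out of memory (requested %zu bytes)\n", size);
        exit(EXIT_FAILURE);
    }
    return p;
}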
