In our project we load Lua as a DLL in a Windows service (a 32-bit process on x64 Windows servers) and run scripts in parallel on different threads. It all works fine most of the time; however, when a script does memory-intensive work (such as loading a large file and iterating through it while doing other processing), the script throws a "not enough memory" error.
As per my understanding of the Windows memory model, a "not enough memory" error means the process has exceeded its memory quota (2 GB, or 4 GB if the large-address-aware linker option is set), as there is no thread-specific memory quota in Windows. But there are a few points I am not able to understand:
If multiple threads are running Lua scripts and all share the same process address space, why does "not enough memory" appear only on the thread doing the heavy memory operation? It could appear in any other script as well, but in practice it doesn't.
Could this issue be related to the Lua stack that is used to interact with C code?
Is there any internal memory limit that Lua maintains? (I couldn't find any reference to this, by the way.)
A few points about our environment:
It is a legacy product and uses Lua 5.1.
There is no custom allocation function, so the code uses realloc for memory allocation.
Any pointer in the direction of why this could happen would be helpful.
Thanks in advance!
Just in case someone else hits this: there is no memory limit in Lua itself. The issue was that the process's address space was filling up completely because we were loading huge maps into memory. We fixed it by moving the in-memory data to disk (a local db).
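For anyone who does want a hard per-script cap: Lua lets you supply your own allocator when creating a state. A minimal sketch, assuming you create each state with lua_newstate rather than luaL_newstate (the struct name and the accounting scheme here are my own invention; the function matches Lua 5.1's lua_Alloc signature, so it can be exercised standalone without the Lua headers):

```c
#include <stdlib.h>
#include <stddef.h>

/* Hypothetical per-state accounting; a pointer to this is passed as the
 * "ud" argument of lua_newstate(capped_alloc, &state). */
typedef struct {
    size_t used;   /* bytes currently allocated to this Lua state */
    size_t limit;  /* hard cap; Lua reports "not enough memory" past it */
} AllocState;

/* Same signature as Lua 5.1's lua_Alloc: ud, old pointer, old size, new size. */
void *capped_alloc(void *ud, void *ptr, size_t osize, size_t nsize)
{
    AllocState *s = (AllocState *)ud;

    if (nsize == 0) {                  /* Lua is freeing the block */
        free(ptr);
        s->used -= osize;
        return NULL;
    }
    if (s->used - osize + nsize > s->limit)
        return NULL;                   /* refuse: Lua raises LUA_ERRMEM */
    void *p = realloc(ptr, nsize);     /* covers both fresh alloc and resize */
    if (p != NULL)
        s->used = s->used - osize + nsize;
    return p;
}
```

With something like this, each script thread gets its own budget, so one runaway script fails cleanly instead of exhausting the whole 2 GB address space for everyone.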
I have a multi-threaded server application, written in C, that does a lot of network/TCP work. On one customer's system the server leaks memory at a rate of about 200 MB per day. The server is running Windows Server 2012 R2.
My internal memory-allocation functions don't record the leak.
Using the CRTDBG functions to analyze total memory use, the leak is also not recorded.
All the private DLLs we use log all memory allocations, and none of them show any memory leaks.
The only other DLLs in use are the standard default libraries.
It's a 32-bit build.
The usual suspects I've eliminated already:
1) Thread creation/destruction: if I re-use threads instead, the leak remains the same.
2) Memory fragmentation causing an apparent leak: I use a non-fragmenting malloc, and I've tried pre-allocating all the needed memory so no fragmentation can occur, and it still leaks at the same rate.
The leak shows up as the peak working set, or private working set, slowly increasing.
I don't see any handle leaks in Task Manager.
So I'm out of ideas. Please make crazy suggestions about what else might be using up this amount of memory while NOT being recorded by the CRTDBG memory-logging functions.
Thanks heaps! (pun intended) :-)
I am developing a shared memory suite of applications, in C, on Ubuntu 12.04 server.
This is my first attempt at dealing with shared memory, so I am not sure of all the pitfalls.
My suite starts with a daemon that creates a shared memory segment, and a few other apps that use it.
During development, of course, the segment can get created and then the program crashes without removing it. I can see it listed using ipcs -m.
Normally, this isn't much of an issue, as the next time I start the "controlling" program, it simply reuses the existing segment.
However, from time to time the size of the segment changes, and then the shmget() call fails.
My question...
Before I create and attach the segment, I would like to first determine whether it already exists, then, if needed, detach it and then, I think, call shmctl(..., IPC_RMID, ...).
Can I have some help figuring out the proper sequence of events to do this?
I think I should use shmctl() to see what's up (number of attachments, whether it exists, etc.), but I don't really know.
It may be that another app is still using the segment; of course that app will have to be stopped first, and any other possible exceptions handled.
I was thinking that, in the shared memory segment, I could store the system clock, and when an app detects that the clock in shared memory differs by more than a few seconds, it could detach or something.
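The sequence described above can be sketched roughly as follows (the helper name is mine, and the key/size handling is only one reasonable policy): probe for an existing segment with shmget(key, 0, 0), IPC_STAT it to check its size and attach count, remove it with IPC_RMID only if nothing is attached, then (re)create it:

```c
#include <errno.h>
#include <stddef.h>
#include <sys/ipc.h>
#include <sys/shm.h>

/* Hypothetical helper: reuse an existing segment if its size still matches,
 * otherwise remove it (only when no process is attached) and recreate it. */
int get_or_recreate(key_t key, size_t size)
{
    int id = shmget(key, 0, 0);        /* size 0 matches any existing segment */
    if (id != -1) {
        struct shmid_ds ds;
        if (shmctl(id, IPC_STAT, &ds) == -1)
            return -1;
        if (ds.shm_segsz == size)
            return id;                 /* same size: just reuse it */
        if (ds.shm_nattch > 0)
            return -1;                 /* another app is still attached: bail */
        if (shmctl(id, IPC_RMID, NULL) == -1)
            return -1;                 /* couldn't remove the stale segment */
    } else if (errno != ENOENT) {
        return -1;                     /* exists but inaccessible, etc. */
    }
    return shmget(key, size, IPC_CREAT | IPC_EXCL | 0666);
}
```

Note that shm_nattch can change between the IPC_STAT and the IPC_RMID (a classic TOCTOU race), so this is only safe if the "controlling" daemon is the sole creator of the segment.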
Thanks, Mark.
I have started to look at C programming, and whilst I am not a complete beginner (I have knowledge of Java and web development), there are a lot of things I do not understand.
My question is about when a program is first loaded into memory. I am having trouble understanding what actually happens here.
Is all of the program code loaded into memory when the program is launched or is only what is needed loaded in?
After this does this code\set of instructions get swapped in and out of the physical disk as the process gets CPU time or does loaded code stay in memory whilst the program is running?
If two processes can share the same set of instructions does this mean each process gets a separate code section in its virtual memory space?
I am sorry if my questions are basic or poorly worded but I only started looking at this last week and after a weekend of reading I have far more questions than answers!
Is all of the program code loaded into memory when the program is launched or is only what is needed loaded in?
Most modern OSes will load code "on demand": the starting point of the application (main) is loaded by the OS, and the OS kicks off there. When the application jumps to a piece of code that isn't in memory yet, the OS loads that bit.
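On Linux you can actually watch demand loading happen with mincore(), which reports which pages of a mapping are resident. A small sketch, using anonymous memory rather than program text (the mechanism, faulting pages in on first touch, is the same idea):

```c
#define _DEFAULT_SOURCE
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/* Map 4 fresh pages, record their residency before and after touching
 * only the first page. Each vec byte's low bit is 1 if resident. */
void demand_demo(unsigned char vec_before[4], unsigned char vec_after[4])
{
    long page = sysconf(_SC_PAGESIZE);
    size_t len = 4 * (size_t)page;
    unsigned char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                              MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    mincore(buf, len, vec_before);   /* nothing touched yet */
    buf[0] = 42;                     /* fault the first page in */
    mincore(buf, len, vec_after);    /* now page 0 is resident */

    munmap(buf, len);
}
```

Before the write, none of the pages are resident; after it, only the first one is. Program text behaves similarly: pages are faulted in from the executable file as execution reaches them.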
After this does this code\set of instructions get swapped in and out of the physical disk as the process gets CPU time or does loaded code stay in memory whilst the program is running?
If the OS decides that some memory is needed, it may well throw out some of the code and reload it when it is needed later [if it's ever needed again; if it was part of the initialization, it may never get hit again].
If two processes can share the same set of instructions does this mean each process gets a separate code section in its virtual memory space?
It is certainly possible to share code between multiple copies of the same application. Whether a particular OS does this depends on the OS. Linux certainly shares code pages from the same application between two (unrelated) processes [obviously, a forked process shares code by definition]. I believe Windows does as well.
Shared libraries (".so" and ".dll" files for Linux/Unix and Windows respectively) are also used to share code between processes: the same shared library is used by many different applications.
The data space is of course separate for each process, and shared libraries also get their own data section in each process that shares the library.
Is it necessary to establish the connection each time when uploading the file in multiple iterations, for maintaining the stack size?
I got a "calloc failed" error.
I am using FreeRTOS with multithreading.
According to Wikipedia, yes, TFTP does not allow keeping the connection alive for multiple files.
If you are working with a small embedded system, its filesystem might not be designed to handle many files (even small ones) and you would want to reorganize the data into fewer.
Not sure what this has to do with stack size or running out of heap space. The question is very vague, but you might want to account for scarce memory resources (with pencil and paper, even) to plan how the program will run, and avoid chasing these bugs every time a new feature is added.
So, I've recently noticed that our development server has a steady ~300 MB out of 4 GB of RAM left after the finished development of a certain project. Assuming this was due to memory leaks during the development phase, will that memory eventually free itself up, or will it require a server restart? Are there any tools that can be used to prevent this in the future (aside from the obvious "don't write code that produces memory leaks")? Sometimes leaks go unseen for a little while, and over time I guess they add up as you continue testing your app.
What operating system are you running? Most operating systems these days will clean up leaked memory for a process when the process exits. It is possible that the memory you are seeing in use is actually being used for the filesystem cache. This is nothing to worry about -- the OS will reclaim this memory if necessary.
From: http://learnlinux.tsf.org.za/courses/build/internals/ch05.html
The amount of free memory indicated by the free command includes the current size of the buffer cache in its calculation. This is misleading, as the amount of free memory indicated will often be very low, as the buffer cache soon fills most of user memory. Don't panic. Applications are probably not crowding your RAM; it is merely the buffer cache that is taking up all available space. The buffer cache counts as memory space available for application use (remembering that it will be shrunk as required), so subtract the size of the buffer cache to see the real amount of free memory available for application use.
It's best to fight them during development, because then it's easier to identify the revision that introduced the leak. As you probably see now, doing it after the fact is very, very hard. Expect a lot of reports when running the tools I recommend below:
http://valgrind.org/
http://www.ibm.com/software/awdtools/purify/
http://directory.fsf.org/project/ElectricFence/
I'd suggest you run these tools, suppress most warnings about leaks, and then fix the leaks one by one, removing the suppressions as you go.
And then, make sure you regularly run these tools and quickly fix any regressions!
Of course the obvious answer is "Don't write code that produces memory leaks", and it's a valid one, because leaks can be extremely hard to fix if you have reference-counting issues or complex code in which it's hard to track the lifetime of memory.
To address your current situation you might consider using a tool such as DevPartner for Windows, or Valgrind for Linux/Unix, both of which I've found to be very effective for tracking down memory leaks (as well as other issues such as performance bottlenecks).
Another thing you may wish to consider is to look at your use of pointers and slowly replace them with smart pointers if you can, which should help manage your pointer lifetimes.
And no, I doubt that memory is going to be recovered without restarting the process in which your code is running.
Run the program using the exceptional Valgrind on Linux x86 boxes.
A commercial equivalent, Purify, is available on Windows.
These runtime analyses of your program will report memory leaks and other errors such as buffer overflows and uninitialised variables.
Static code analysis (Lint and Coverity, for example) can also uncover memory leaks and more serious errors.
Let's be specific about what memory leaks cause and how they harm your program:
If you 'leak' memory during the operation of your program, there is a risk that your application will eventually exhaust RAM and swap, or the address space available to your program (which can be less than physical RAM), and cause the next allocation to fail. The vast majority of programs fail to catch this error, as error checking is harder than it seems. Most programs will either fail by dereferencing a null pointer or will exit.
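That error-checking point is worth spelling out. A minimal sketch of what "catching it" looks like (the wrapper name xmalloc is just a common convention, not from the question):

```c
#include <stdio.h>
#include <stdlib.h>

/* Wrapper that makes allocation failure visible instead of letting a
 * NULL pointer propagate to a later, harder-to-diagnose crash. */
void *xmalloc(size_t n)
{
    void *p = malloc(n);
    if (p == NULL)
        fprintf(stderr, "allocation of %zu bytes failed\n", n);
    return p;   /* the caller still decides: retry, degrade, or exit */
}
```

Funneling every allocation through one checked wrapper is also what makes leak accounting (like the custom trackers mentioned in the questions above) possible in the first place.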
If this is on Linux, check the output of 'free' and specifically the amount of 'cached' RAM. If your development work includes a lot of disk I/O, the OS will use RAM for caching files, and you'll see very little 'available' memory, but it's still there if it's needed. For all practical purposes, consider free+cached as available.
The 'free' output is distilled from /proc/meminfo, and you can get more detailed information on the running process in /proc/$pid/{maps,smaps}
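A process can also watch its own footprint the same way (Linux-only; a sketch that parses the VmRSS line of /proc/self/status):

```c
#include <stdio.h>
#include <string.h>

/* Return this process's resident set size in kB, or -1 on failure
 * (Linux-only: reads the VmRSS line of /proc/self/status). */
long vm_rss_kb(void)
{
    FILE *f = fopen("/proc/self/status", "r");
    char line[256];
    long kb = -1;

    if (f == NULL)
        return -1;
    while (fgets(line, sizeof line, f) != NULL) {
        if (strncmp(line, "VmRSS:", 6) == 0) {
            sscanf(line + 6, "%ld", &kb);  /* e.g. "VmRSS:   1234 kB" */
            break;
        }
    }
    fclose(f);
    return kb;
}
```

Logging this periodically from inside a long-running server gives a cheap trend line for exactly the kind of slow working-set growth described earlier.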
In theory when your process exits, any memory it had is released. Is your process exiting?
Don't assume anything, run a memory profiler over it and see what it's doing.
When I was at college we used the Borland C++ Builder 6 IDE.
It included CodeGuard, which checks for memory leaks and other memory-related issues.
I am not sure if this option is still available in newer versions, but it would be weird for a new version to have fewer features.
On Linux, as mentioned before, Valgrind is a good memory-leak debugger.