How to unload a ByteArray using ActionScript 3?

How do I forcefully unload a ByteArray from memory using ActionScript 3?
I have tried the following:
// First non-working solution
byteArray.length = 0;
byteArray = new ByteArray();
// Second non-working solution
for ( var i:int = 0; i < byteArray.length; i++ ) {
    byteArray[i] = null;
}

I don't think you have anything to worry about. If System.totalMemory goes down, you can relax. It may very well be the OS that isn't reclaiming the newly freed memory (in anticipation of the next time Flash Player asks for more memory).
Try doing something else that is very memory intensive and I'm sure that you'll notice that the memory allocated to Flash Player will decrease and be used for the other process instead.
As I understand it, memory management in modern OSes isn't intuitive if you look at the amounts allocated to each process, or even at the total amount allocated.
After I've used my Mac for five minutes, 95% of my 3 GB of RAM is in use, and it stays that way; it never goes down. That's just the way the OS handles memory.
As long as it isn't needed elsewhere, even processes that have quit can still have memory assigned to them (this can make them launch more quickly the next time, for example).

(I'm not positive about this, but...)
AS3 uses a non-deterministic garbage collector, which means that unreferenced memory is freed whenever the runtime feels like it (typically not until there's a reason to collect, since it's an expensive operation to execute). This is the same approach used by most modern garbage-collected languages (C# and Java, for example).
Assuming there are no other references to the memory pointed to by byteArray or to the items within the array itself, the memory will be freed at some point after you exit the scope where byteArray is declared.
You can force a garbage collection, though you really shouldn't. If you do, do it only for testing. If you do it in production, you'll hurt performance much more than help it.
To force a GC, try (yes, twice):
flash.system.System.gc();
flash.system.System.gc();
You can read more here.

Have a look at this article
http://www.gskinner.com/blog/archives/2006/06/as3_resource_ma.html
I am not an ActionScript programmer, but my feeling is that the garbage collector might not run when you want it to.
Hence:
http://www.craftymind.com/2008/04/09/kick-starting-the-garbage-collector-in-actionscript-3-with-air/
So I'd recommend trying out their collection code and seeing if it helps:
// Requires: import flash.events.Event; import flash.utils.setTimeout;
private var gcCount:int;

private function startGCCycle():void {
    gcCount = 0;
    addEventListener(Event.ENTER_FRAME, doGC);
}

private function doGC(evt:Event):void {
    // Run the collector on consecutive frames; a single call is often not enough.
    flash.system.System.gc();
    if (++gcCount > 1) {
        removeEventListener(Event.ENTER_FRAME, doGC);
        setTimeout(lastGC, 40);
    }
}

private function lastGC():void {
    flash.system.System.gc();
}

Unfortunately, when it comes to memory management in Flash/ActionScript there isn't a whole lot you can do. ActionScript was designed to be easy to use, so people wouldn't have to worry about memory management.
The following is a workaround: instead of creating a ByteArray variable, try this.
var byteObject:Object = new Object();
byteObject.byteArray = new ByteArray();
...
//Then when you are finished delete the variable from byteObject
delete byteObject.byteArray;
Because byteArray is a dynamic property of byteObject, deleting it frees the memory that was allocated for it.

I believe you have answered your own question.
System.totalMemory gives you the total amount of memory being "used", not the amount allocated. It may be accurate that your application is only using 20 MB, while it also has 5 MB that is free for future allocations.
I'm not sure whether the Adobe docs would shed light on the way that it manages memory.

So, if I load say 20MB from MySQL, in the Task Manager the RAM for the application goes up by about 25MB. Then when I close the connection and try to dispose the ByteArray, the RAM never frees up. However, if I use System.totalMemory, flash player shows that the memory is being released, which is not the case.
Is the flash player doing something like Java and reserving heap space and not releasing it until the app quits?
Well, yes and no. As you might have read in countless blog posts, the GC in AVM2 is optimistic and works in its own mysterious ways. So it does work a bit like Java and tries to reserve heap space. However, if you leave it long enough and start doing other operations that consume significant memory, it will free that previous space. You can see this by leaving the profiler running overnight with some tests running on top of your app.

So, if I load say 20MB from MySQL, in the Task Manager the RAM for the application goes up by about 25MB. Then when I close the connection and try to dispose the ByteArray, the RAM never frees up. However, if I use System.totalMemory, flash player shows that the memory is being released, which is not the case.
The player is "releasing" the memory. If you minimize the window and restore it, you should see that the memory is now much closer to what System.totalMemory shows.
You might also be interested in using Flex Builder's profiling tools, which can show you whether you really have memory leaks.

Use byteArray.clear().
As per the ActionScript 3.0 Language Reference:
Clears the contents of the byte array and resets the length and position properties to 0. Calling this method explicitly frees up the memory used by the ByteArray instance.

Related

Memory allocation issues with the LPC1788 microcontroller

I'm fairly new to programming microcontrollers; I've been working with the LPC1788 for a couple of weeks now.
One problem I've been having recently is that I'm running out of memory much sooner than I expect to. I've tested how much memory seems to be available by checking how large a block of contiguous memory I can malloc, and the result is 972 bytes. Allocation is done starting at address 0x10000000 (the start of the on-chip SRAM, which should be around 64 kB on this board).
The program I'm working on at the moment is meant to act as a simple debugger that utilises the LCD and allows messages to be printed to it. I have one string that is constantly "added to" by new messages, and then the whole message is printed on the LCD. When the message extends past the bottom of the screen, it deletes the oldest messages (the ones nearer the top) until it fits. However, I can only add about 7 additional messages before it refuses to allocate more memory. If needed, the main.c for the project is hosted at http://pastebin.com/bwUdpnD3
Earlier I also started work on a project that uses the ThreadX RTOS to create and execute several threads. When I tried including use of the LCD in that program, I found memory to be very limited there as well. The LCD seems to store all pixel data starting from the SDRAM base address, but I'm not sure if that's the same thing as the SRAM I'm using.
What I need is a way to allocate memory enough to allow several threads to function or large strings to be stored, while being able to utilise the LCD. One possibility might be to use buffers or other areas of memory, but I'm not quite sure how to do that. Any help would be appreciated.
tl;dr: Quickly running out of allocatable memory on SRAM when trying to print out large strings on the LCD.
EDIT 1: A memory leak was noticed with the variable currMessage. I think that's been fixed now:
strcpy(&trimMessage[1], &currMessage[trimIndex + 1]);
// Frees up the memory allocated to currMessage from the last iteration
// before assigning new memory.
free(currMessage);
currMessage = malloc((msgSize - trimIndex) * sizeof(char));
for (int i = 0; i < msgSize - trimIndex; i++)
{
    currMessage[i] = trimMessage[i];
}
EDIT 2: Implemented memory leak fixes. Program works a lot better now, and I feel pretty stupid.
You need to be careful when choosing to use dynamic memory allocation in an embedded environment, especially with constrained memory. You can very easily end up fragmenting the memory space such that the biggest hole left is 972 bytes.
If you must allocate from the heap, do it once, and then hang onto the memory, almost like a static buffer. If possible, use a static buffer and avoid the allocation altogether. If you must have dynamic allocation, keeping it to fixed-size blocks will help with the fragmentation.
Unfortunately, it does take a bit of engineering effort to overcome the fragmentation issue. It is worth the effort, though, and it makes the system much more robust.
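As a rough sketch of the static-buffer idea applied to this particular use case (a scrolling log of messages for the LCD), something along these lines avoids malloc() entirely. MAX_MESSAGES and MESSAGE_LEN are illustrative values, not taken from the original project:
#include <string.h>

#define MAX_MESSAGES 16
#define MESSAGE_LEN  64

static char     messages[MAX_MESSAGES][MESSAGE_LEN];
static unsigned head;   /* index of the oldest message */
static unsigned count;  /* number of messages currently stored */

void log_message(const char *text) {
    unsigned slot = (head + count) % MAX_MESSAGES;
    if (count == MAX_MESSAGES)
        head = (head + 1) % MAX_MESSAGES;   /* full: drop the oldest message */
    else
        count++;
    strncpy(messages[slot], text, MESSAGE_LEN - 1);
    messages[slot][MESSAGE_LEN - 1] = '\0';
}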
As for SRAM vs SDRAM, they aren't the same. I'm not familiar with ThreadX, or whether they have a Board Support Package (BSP) for your board, but in general, SDRAM has to be set up. That means the boot code has to initialize the memory controller, set up the timings, and then enable that space. Depending on your heap implementation, you need to add it dynamically or, more likely, you need to compile with your heap space pointing to where it will ultimately live (in SDRAM space). Then you have to make sure to come up and get the memory controller configured and activated before actually using the heap.
One other thing to watch out for: you may actually be running code from the SRAM space, and some of that space is also reserved for processor exception tables. That whole space may not be available, and it may be aliased at two different addresses (0x00000000 and 0x10000000, for instance). I know this is common in some other ARM9 processors. You can boot from flash, which gets mapped initially into the 0x00000000 space, and then you do a song and dance to copy the booter into SRAM and map SRAM into that space. At that point, you can boot into something like Linux, which expects to be able to update the tables that live down at address 0.
BTW, it does look like you have some memory leaks in the code you posted. Namely, currMessage is never freed... only overwritten with a new pointer. Those blocks are then lost forever.

Memory that is malloc'ed and not freed

I have read this:
Memory that is allocated by malloc() (for example) and that is not freed using the free() function is released when the program terminates, and this is done by the operating system.
So when does having or not having a garbage collector come into the picture?
Or is it that not all operating systems do this automatic release of memory on program termination?
That claim about malloc and free is correct for all modern computing operating systems. But the statement as a whole reflects a complete misunderstanding of the purpose of garbage collection.
The reason you call free is not to clean things up for after your program terminates. The reason you call free is to permit the memory to be re-used during the subsequent execution of a long-running program.
Consider a message server that handles a hundred messages per second. You call malloc when you receive a new message. And then you have to do lots of things with it. You may have to log it. You may have to send it to other clients. You may have to write it to a database. When you are done, if you don't free it, after a few days you'll have millions of messages stuck in memory. So you have to call free.
But when do you call free? When one client is done sending a message, another client might still be using it. And maybe the database still needs it.
The purpose of garbage collection is to ensure that an object's memory is released (so it can be reused to hold a new message during the application's lifetime) without burdening the application programmer with the duty (and the risks) of tracking exactly when the object is no longer required by any code that might be using it.
If an application doesn't run for very long or doesn't have any objects whose lifetimes are difficult to figure out, then garbage collection doesn't do very much good. And there are other techniques (such as reference-counted pointers) that can provide many of the same benefits as garbage collection. But there is a real problem that garbage collection does solve.
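As a rough illustration of the reference-counting alternative mentioned above, applied to the shared-message scenario (the names are made up for this sketch, and it is not thread-safe as written):
#include <stdlib.h>
#include <string.h>

typedef struct {
    int  refs;
    char text[256];
} message_t;

message_t *message_new(const char *text) {
    message_t *m = malloc(sizeof *m);
    if (m) {
        m->refs = 1;
        strncpy(m->text, text, sizeof m->text - 1);
        m->text[sizeof m->text - 1] = '\0';
    }
    return m;
}

/* Each consumer (logger, other clients, database writer) takes a reference. */
void message_retain(message_t *m) { m->refs++; }

/* Whichever consumer finishes last actually frees the message. */
void message_release(message_t *m) {
    if (m && --m->refs == 0)
        free(m);
}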
Most modern operating systems will indeed free everything you've allocated on program termination. However, a garbage collector will free unused memory before program termination. This allows your program to skip the frees but still keep allocating memory indefinitely, as long as it lets go of references to memory that isn't being used anymore, and as long as its total working-set size doesn't exceed physical memory limits.
All OSes do free memory when the program quits. Memory leaks are 'only' a problem because they waste memory on the machine or cause the program to crash. So you'd garbage collect to prevent those things from happening, but without worrying about actually freeing your own pointers when you're done with them.
They're two solutions to the same problem, really; it's just that the problem you care about is the one that happens during runtime.
Imagine a long running process like a web server that malloc()s a bunch of data structures for every connection it services. If it never free()s any of that memory, the process memory usage will continually grow, possibly consuming everything available on the system.
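A minimal sketch of that pattern (the connection handling is stubbed out; only the allocation pattern matters here):
#include <stdlib.h>

struct connection_state { char buffer[4096]; /* ... per-connection data ... */ };

void serve_forever(void) {
    for (;;) {
        struct connection_state *conn = malloc(sizeof *conn);
        if (!conn)
            continue;               /* or back off and report the error */
        /* ... accept and handle one connection using conn ... */
        free(conn);                 /* without this line, memory grows forever */
    }
}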

Why does Silverlight leak memory when using COM?

We discovered this issue when hosting a legacy COM component in our Out Of Browser Silverlight application, first thinking it was an issue with our COM component.
Even after narrowing it down to hosting the most basic COM component imaginable, the memory leak remained. The COM component used for testing is written in .NET, and simply sends events back to the Silverlight application every time a timer fires. Each event contains only a single string.
When running the Silverlight application, the process memory usage keeps growing. Profilers show no increase in managed memory, indicating that there's a leak in the Silverlight runtime / COM implementation.
Has anyone else seen this issue, and if so, have you been able to work around it?
Edit: Repro project now available at http://bitbucket.org/freed/silverlight-com-leak
Looking at your code, the string you pass back and forth is (11 characters + terminating zero) = 24 bytes in Unicode. In COM Automation, BSTRs are used, which adds a 4-byte length prefix, and you multiply that by 10000, which is 10000 * 28 = 280,000 bytes.
It means that every millisecond (the timer's value is 1) you allocate a lot of memory, and in .NET a 280,000-byte chunk will probably be allocated on the large object heap (> 85,000 bytes). Hitting the LOH hard most of the time leads to... problems with memory, as seen here for example: Large Object Heap Fragmentation
This is maybe something you should check. One simple thing to test is to reduce the size of your BigMessage. You can also dive in deep with WinDbg: http://blogs.msdn.com/b/tess/archive/2008/08/21/debugging-silverlight-applications-with-windbg-and-sos-dll.aspx and check what's really going on under the covers.
Make sure the COM component is freeing any strings it allocates.
Not familiar with Silverlight, but another possible cause of interop headaches is event handling: http://www.codeproject.com/KB/cs/LingeringCOMObjects.aspx
Could it be that there are native resources not garbage collected? Maybe this is one of the very few cases where calling GC.Collect may be of benefit. Interesting reading here
Just as a test, you could call GC.Collect a couple of times (or even three) and see what happens (I can't believe I'm actually suggesting this...).

What are out-of-memory handling strategies in C programming?

One strategy that I thought of myself is allocating 5 megabytes of memory (or whatever amount you feel is necessary) at program startup.
Then when at any point program's malloc() returns NULL, you free the 5 megabytes and call malloc() again, which will succeed and let the program continue running.
What do you think about this strategy?
And what other strategies do you know?
Thanks, Boda Cydo.
Handle malloc failures by exiting gracefully. With modern operating systems, pagefiles, etc you should never pre-emptively brace for memory failure, just exit gracefully. It is unlikely you will ever encounter out of memory errors unless you have an algorithmic problem.
Also, allocating 5MB for no reason at startup is insane.
For the last few years, the (embedded) software I have been working with generally does not permit the use of malloc(). The sole exception to this is that it is permissible during the initialization phase, but once it is decided that no more memory allocations are allowed, all future calls to malloc() fail. As memory may become fragmented due to malloc()/free() it becomes difficult at best in many cases to prove that future calls to malloc() will not fail.
Such a scenario might not apply to your case. However, knowing why malloc() is failing can be useful. The following technique that we use in our code since malloc() is not generally available might (or might not) be applicable to your scenario.
We tend to rely upon memory pools. The memory for each pool is allocated during the transient startup phase. Once we have the pools, we get an entry from the pool when we need it, and release it back to the pool when we are done. Each pool is configurable, and is usually reserved for a particular object type. We can track the usage of each over time. If we run out of pool entries, we can find out why. If we don't, we have the option of making our pool smaller and save some resources.
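A minimal sketch of one such fixed-size pool is below; the block count and block size are illustrative, and a real implementation would add the per-object-type configuration and usage tracking described above:
#include <stddef.h>
#include <stdint.h>

#define POOL_BLOCKS 32
#define BLOCK_SIZE  64

static uint8_t pool_storage[POOL_BLOCKS][BLOCK_SIZE];
static void   *pool_free_list[POOL_BLOCKS];
static size_t  pool_free_count;

/* Call once during the transient startup phase. */
void pool_init(void) {
    for (size_t i = 0; i < POOL_BLOCKS; i++)
        pool_free_list[i] = pool_storage[i];
    pool_free_count = POOL_BLOCKS;
}

/* Returns a fixed-size block, or NULL if the pool is exhausted. */
void *pool_alloc(void) {
    return pool_free_count ? pool_free_list[--pool_free_count] : NULL;
}

/* Releases a block obtained from pool_alloc() back to the pool. */
void pool_free(void *block) {
    if (block && pool_free_count < POOL_BLOCKS)
        pool_free_list[pool_free_count++] = block;
}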
Hope this helps.
As a method of testing that you handle out of memory situations gracefully, this can be a reasonably useful technique.
Under any other circumstance, it sounds useless at best. You're causing the out of memory situation to happen, then fixing the problem by freeing memory you didn't need to start with.
"try-again-later". Just because you're OOM now, doesn't mean you will be later when the system is less busy.
#include <stdlib.h>   /* malloc */
#include <unistd.h>   /* sleep */

/* Retry the allocation for up to ~100 seconds before giving up. */
void *smalloc(size_t size) {
    for (int i = 0; i < 100; i++) {
        void *p = malloc(size);
        if (p)
            return p;
        sleep(1);
    }
    return NULL;
}
You should, of course, think hard about where you employ such a strategy, as it is quite hideous, but it has saved some of our systems in various cases.
It really depends on the policy you'd like to implement, meaning, what the expected behavior of your program is when it runs out of memory.
A great solution would be to allocate memory only during initialization and never during runtime. In that case you'll never run out of memory once the program has managed to start.
Another approach would be to free resources when you hit a memory limit; that would be difficult to implement and test.
Keep in mind that when you get NULL from malloc it means both physical and virtual memory have no more free space, which means your program is swapping all the time, making it slow and the computer unresponsive.
You actually need to make sure (by estimated calculation, or by checking the amount of memory at runtime) that the expected amount of free memory on the computer is enough for your program.
Generally the purpose of freeing the memory is so that you have enough to report the error before you terminate the program.
If you are just going to keep running, there is no point in preallocating the emergency reserve.
Most modern OSes in their default configuration allow memory overcommit, so your program won't get NULL from malloc() at all, or at least not until it has somehow (by error, I guess) exhausted all available address space (not memory).
Then it writes to some perfectly legal memory location, gets a page fault, there is no memory page left in the backing store, and BANG (SIGBUS) - you're dead, and there is no good way out.
So just forget about it; you can't handle it.
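If you want to see how your particular system behaves, a small experiment along these lines can help. Whether malloc() returns NULL or the process dies later when the pages are touched depends entirely on the OS and its overcommit settings; the 8 GB figure is arbitrary and assumes a 64-bit build:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    size_t huge = (size_t)8 * 1024 * 1024 * 1024;   /* 8 GB, adjust to taste */
    char *p = malloc(huge);
    if (!p) {
        puts("malloc returned NULL");    /* may or may not happen */
        return 1;
    }
    puts("malloc succeeded; touching the pages now...");
    memset(p, 1, huge);   /* with overcommit, the process may be killed here */
    free(p);
    puts("survived");
    return 0;
}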
Yeah, this doesn't work in practice. First, for a technical reason: a typical low-fragmentation heap implementation doesn't make large free blocks available for small allocations.
But the real problem is that you don't know why you ran out of virtual memory space. And if you don't know why, then there's nothing you can do to prevent that extra memory from being consumed very rapidly and still crashing your program with OOM. Which is very likely to happen: if you've already consumed close to two gigabytes, that extra 5 MB is a drop of water on a hot plate.
Any kind of scheme that switches the app into an 'emergency mode' is very impractical. You'd have to abort running code so that you can stop, say, loading an enormous data file. That requires an exception, and then you're back to what you already had before: std::bad_alloc.
I want to second the sentiment that the 5mb pre-allocation approach is "insane", but for another reason: it's subject to race conditions. If the cause of memory exhaustion is within your program (virtual address space exhausted), another thread could claim the 5mb after you free it but before you get to use it. If the cause of memory exhaustion is lack of physical resources on the machine due to other processes using too much memory, those other processes could claim the 5mb after you free it (if the malloc implementation returns the space to the system).
Some applications, like a music or movie player, would be perfectly justified just exiting/crashing on allocation failures - they're managing little if any modifiable data. On the other hand, I believe any application that is being used to modify potentially-valuable data needs to have a way to (1) ensure that data already on disk is left in a consistent, non-corrupted state, and (2) write out a recovery journal of some sort so that, on subsequent invocations, the user can recover any data lost when the application was forced to close.
As we've seen in the first paragraph, due to race conditions your "malloc 5mb and free it" approach does not work. Ideally, the code to synchronize data and write recovery information would be completely allocation-free; if your program is well-designed, it's probably naturally allocation-free. One possible approach if you know you will need allocations at this stage is to implement your own allocator that works in a small static buffer/pool, and use it during allocation-failure shutdown.
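One way to realize that last suggestion is a tiny bump allocator over a static buffer, used only while writing out recovery data during the allocation-failure shutdown. This is only a sketch, and the 16 KB size is an arbitrary assumption:
#include <stddef.h>

static union {
    long double align;              /* force a conservatively large alignment */
    char bytes[16 * 1024];
} emergency_buf;
static size_t emergency_used;

void *emergency_alloc(size_t size) {
    size = (size + 15u) & ~(size_t)15u;     /* keep returned pointers aligned */
    if (size > sizeof(emergency_buf.bytes) - emergency_used)
        return NULL;
    void *p = emergency_buf.bytes + emergency_used;
    emergency_used += size;
    return p;   /* nothing is freed individually; the process is shutting down */
}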

malloc and obtaining recently freed memory

I am allocating the array and freeing it on every callback of an audio thread. The main user thread (a web browser) is constantly allocating and deallocating memory based on user input. I am sending the uninitialized float array to the audio card (there's an example on the page in my profile). The idea is to hear program state changes.
When I call malloc(sizeof(float)*256*13) or smaller, I get an array filled with a wide range of floats that have a seemingly random distribution. It is not right to call it random; presumably this comes from whatever the memory block previously held. This is the behavior I expected and want to exploit. However, when I do malloc(sizeof(float)*256*14) or larger, I get an array filled only with zeros. I would like to know why this cliff exists and if there's something I can do to get around it. I know it is undefined behavior per the standard, but I'm hoping someone who knows the implementation of malloc on some system might have an explanation.
Does this mean malloc is also memsetting the block to zero for larger sizes? This would be surprising since it wouldn't be efficient. Even if there are more chunks of memory zeroed out, I'd expect something to happen sometimes, since the arrays are constantly changing.
If possible I would like to be able to obtain chunks of memory that are reallocated over recently freed memory, so any alternatives would be welcomed.
I guess this is a strange question for some, because my goal is to explore undefined behavior and use bad programming practices deliberately, but this is the application I am interested in making, so please bear with the usage of uninitialized arrays. I know the behavior of such usage is undefined, so please don't tell me not to do it. I'm developing on Mac OS X 10.5.
Most likely, the larger allocations result in the heap manager directly requesting pages of virtual address space from the kernel. Freeing will return that address space back to the kernel. The kernel must zero all pages that are allocated for a process - this is to prevent data leaking from one process to another.
Smaller allocations are handled by the user-mode heap manager within the process by taking these larger page allocations from the kernel, carving them up into smaller blocks, and reusing blocks on subsequent allocations. These do not need to be zero-initialized, since the memory contents always comes from your own process.
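If you want to probe where that small/large threshold sits on your particular allocator, a quick experiment is something like the following. As noted, the result is entirely implementation-specific, and relying on the old contents is formally undefined behavior:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Returns 1 if a freshly reallocated block of this size still shows the old
   contents, 0 if it comes back zeroed or fresh, -1 if allocation failed. */
static int reuses_old_contents(size_t size) {
    unsigned char *p = malloc(size);
    if (!p)
        return -1;
    memset(p, 0xAB, size);            /* leave a recognizable pattern */
    free(p);
    unsigned char *q = malloc(size);  /* often hands back the same block */
    if (!q)
        return -1;
    int reused = (q[size / 2] == 0xAB);
    free(q);
    return reused;
}

int main(void) {
    for (size_t kb = 1; kb <= 1024; kb *= 2)
        printf("%6zu KB: %s\n", kb,
               reuses_old_contents(kb * 1024) == 1
                   ? "old contents visible" : "zeroed or fresh pages");
    return 0;
}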
What you'll probably find is that your previous, smaller requests could be filled from existing free blocks, possibly joined together. But when you request the bigger amount, the existing free memory probably can't cover it, which flips some built-in switch that makes the allocator request pages directly from the OS.
