How to avoid a java.lang.OutOfMemoryError in JMeter?

I have been trying to run 50 users in JMeter, but I see a java.lang.OutOfMemoryError in the command prompt once the number of users crosses 35. My machine has 16 GB of physical memory, and despite increasing the heap size to 8 GB (8192m) in the jmeter.bat file, I still see the same error. I have also disabled the listeners to reduce memory consumption, and I used the heap setting below to increase the size.
set HEAP=-Xms1g -Xmx1g -XX:MaxMetaspaceSize=8192m
I also tried a few alternative commands to increase the heap size:
set HEAP="-Xms2g -Xmx2g -X:MaxMetaspaceSize=512m" && jmeter.bat
HEAP="-Xms512m -Xmx4096m"
But then I see another error stating:
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
errorlevel=1
Can anybody help me resolve this issue so I can run a test with 50 users successfully?

Make sure that you have a 64-bit Java installation; you won't be able to allocate more than 2 gigabytes to a 32-bit JVM.
Make sure to use the proper syntax: you made a typo (-X: instead of -XX:), and it doesn't look like you know what you're really doing (setting the metaspace size too high will cause problems rather than fix them). A corrected line is shown after this list.
An OutOfMemoryError can have many faces; it's not necessarily connected with an insufficient heap. Read the stack trace and any resulting .hprof file more attentively.
Make sure to follow the recommendations from the 9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure article.
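For reference, a corrected version of the second setting from the question would look like this in jmeter.bat (note -XX: rather than -X:, and no quotes: cmd.exe keeps quotes as part of a value assigned with set, so they reach the JVM inside the arguments, which is the likely cause of the "Could not create the Java Virtual Machine" error):
set HEAP=-Xms2g -Xmx2g -XX:MaxMetaspaceSize=512m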

Related

Why does SqlConnection.Open() trigger a "Cannot enlarge memory arrays" error in a Unity WebGL application

I have a very simple Unity WebGL project that I am trying to connect to a SQL Server database from.
When I run the project in the Editor, it works fine.
When I run the WebGL build, as soon as I try to open the DB connection I get an "Out of memory" pop-up and this error in the console:
PrototypeProject.loader.js:1 Cannot enlarge memory arrays. Either (1)
compile with -s TOTAL_MEMORY=X with X higher than the current value
2144141312, (2) compile with -s ALLOW_MEMORY_GROWTH=1 which allows
increasing the size at runtime, or (3) if you want malloc to return
NULL (0) instead of this abort, compile with -s ABORTING_MALLOC=0
My understanding is that the advice included in the error is out of date, because allow-memory-growth defaults to enabled and there is no total-memory setting in recent versions of Unity. I can't see why (3) would be a sensible thing to do.
I know the problem is triggered (every time) by connection.Open, because the first of these debug lines is output but the second is not:
SqlConnection connection = new SqlConnection(connectionString);
Debug.Log("Calling connection.open");
connection.Open();
Console.WriteLine("Connection open");
One option would be to stop trying to connect to the database directly and call a web service which does the database work instead. Yes, I know that having a three-tiered architecture is a better design in any case - this approach was used in a (failed, clearly) effort to prototype something quickly.
However, I really want to understand what's going on here in case I hit similar issues in future. I know that SOMETHING has to be the tipping point that runs you out of memory, but just opening a connection doesn't intuitively seem like it should be a massive memory hog (maybe I'm wrong...), and I can't see any noticeable difference when using the Profiler in the editor.
Does anyone have any experience in troubleshooting SQL Server connections in particular, or memory issues in general, in WebGL that might be relevant to understanding and avoiding this behaviour?
The answer was much more rudimentary than I expected: it's just not possible to use SqlConnection in a WebGL build, because SqlConnection in turn tries to use Sockets, which WebGL builds do not have access to.
Exactly why this manifests as an "out of memory" error, or why the Unity Editor couldn't be a little bit more helpful and warn you that you are using classes that are not going to work when you start a WebGL build, I do not yet understand...
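For anyone hitting the same wall, a minimal sketch of the web-service workaround mentioned above (the endpoint URL is a hypothetical placeholder; a server-side service would own the SqlConnection and do the database work):
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class CustomerLookup : MonoBehaviour
{
    // Hypothetical endpoint; the server behind it talks to SQL Server.
    private const string Url = "https://example.com/api/customers/42";

    private IEnumerator Start()
    {
        // UnityWebRequest is backed by the browser's HTTP stack in WebGL,
        // so no sockets are needed on the client.
        using (UnityWebRequest request = UnityWebRequest.Get(Url))
        {
            yield return request.SendWebRequest();

            if (request.result == UnityWebRequest.Result.Success)
                Debug.Log("Response: " + request.downloadHandler.text);
            else
                Debug.LogError("Request failed: " + request.error);
        }
    }
}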

MiNiFi GetFile processor fails to get large files

I'm running Apache MiNiFi C++. The flow starts with a GetFile processor.
The input directory includes some large files, and when I run MiNiFi the files above ~1.5 GB fail and do not get queued.
The log file states:
[org::apache::nifi::minifi::processors::GetFile] [Warning] failed to stat large_file_path_here
The other smaller files are queued as expected.
Does anyone have a clue what can be wrong? Why can't the processor manage the larger files?
Thanks in advance.
What you found seems like a bug that is still present in the current MiNiFi implementation. The issue is that at the file sizes you mentioned, a narrowing exception happens when trying to determine the length of the file to be written into the content repository.
We will try to fix this issue ASAP.
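To illustrate the kind of narrowing failure the answer describes (a sketch in C#, not the actual MiNiFi C++ code; the exact types involved in the bug are not shown in the answer), a 64-bit file length forced into a 32-bit integer cannot represent multi-gigabyte sizes:
using System;

class NarrowingDemo
{
    static void Main()
    {
        // A file length well past what a signed 32-bit integer can hold.
        long fileLength = 2_500_000_000;

        try
        {
            // checked() surfaces the overflow instead of silently
            // wrapping to a negative value, as an unchecked cast would.
            int narrowed = checked((int)fileLength);
            Console.WriteLine(narrowed);
        }
        catch (OverflowException)
        {
            Console.WriteLine("Narrowing failed for length " + fileLength);
        }
    }
}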

Omnimark file processing fails

We have an OmniMark script that takes a 2 GB SGML file as input and outputs a file of around 2.2 GB. The script is called from a Unix shell script, and sometimes it runs successfully while other times it just aborts with no error. Any ideas or suggestions on how to debug this?
I have seen this type of issue before when running OmniMark v5.3, where the script bombs due to lack of server resources/memory.
If you've specified writing to a log file, e.g. using -log logfilename.txt, then you would see something like an error code #3000 "insufficient memory error".
http://developers.omnimark.com/docs/html/error/3000.htm
If there is no log file, then the initial step would be to run the script in a console session so that any abort message is visible.
Stilo have a page listing fixes in various versions of OmniMark
http://developers.omnimark.com/docs/html/concept/806.htm
This mentions a variety of memory-related issues in various versions of the software (e.g. use of certain translate rules) which may help some investigation.
Alternatively, you could add debug logging to the script, with a global switch to turn debugging on or off so you don't waste further I/O resources when you don't need to. The debug log file should be unbuffered. At certain breakpoints in the script, add a message; the more verbose the output, the easier it is to narrow down where and when the error occurs. Given the size of the file, though, I suspect it's an I/O or memory error.
It also depends on which version of OmniMark you're using.

Is there any way to replicate a memory can't be read error message in my C# application?

Let me state upfront that I truly appreciate any assistance on this issue.
I have a C# (2.0) application. It is a relatively simple application that executes stored procedures based on an XML file that is passed as a command line parameter.
We use it as a tool to call different stored procedures. This application does some logging and for the most part works very well.
The application reads the stored procedure name and parameters from an XML file. It sets up a connection string and SQL Command object (System.Data.SqlClient.SqlCommand).
Then it runs the stored procedure with the ExecuteReader method.
Unfortunately on a handful of occasions this application has generated the following error:
"Application popup: StoredProcLauncher.exe - Application Error : The instruction
at "0x7c82c912" referenced memory at "0x00000000". The memory could not be "read""
This error has appeared on multiple servers so it must be a code issue.
It seems that when our production server rolls a certain number it belches out this memory error.
The problem is I don’t see this issue on development. I can’t replicate it so I’m stuck.
Is there any way to simulate this error. Can I fill up the memory on my local PC somehow to attempt to replicate this error?
Does anyone know some common coding issues that might result in an error like this?
Does anyone have some rope I can borrow?
One way to do this is to wrap the offending code in a try/catch block and write the stack trace and error message to the Windows application event log, a text file, email, etc.
This will give you some line numbers and additional information.
Also note, you may need to deploy this in debug mode, or at least copy the .pdb file with the application exe/dll so it can get the debug symbols. I can't remember off the top of my head how that works, but I think when you deploy in release mode you may lose some valuable debug information.
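A minimal sketch of that suggestion (the event-log source name and the stored-procedure plumbing are placeholders, not the original application's code):
using System;
using System.Data;
using System.Data.SqlClient;
using System.Diagnostics;

class StoredProcRunner
{
    static void Run(string connectionString, string procName)
    {
        try
        {
            using (SqlConnection connection = new SqlConnection(connectionString))
            using (SqlCommand command = new SqlCommand(procName, connection))
            {
                command.CommandType = CommandType.StoredProcedure;
                connection.Open();
                using (SqlDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read()) { /* process rows */ }
                }
            }
        }
        catch (Exception ex)
        {
            // Leave a trail on production servers: message plus stack trace.
            EventLog.WriteEntry("StoredProcLauncher",
                ex.Message + Environment.NewLine + ex.StackTrace,
                EventLogEntryType.Error);
            throw;
        }
    }
}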
The instruction at "0x7c82c912" referenced memory at "0x00000000"
This is an access violation:
An access violation occurs in unmanaged or unsafe code when the code attempts to read or write to memory that has not been allocated, or to which it does not have access. This usually occurs because a pointer has a bad value.
Why does your program have unmanaged/unsafe code? For doing what you described it needs no native code.
Alas, the code crashes, and now is not the time to wonder how it ends up calling native code. To solve the issue you're going to have to capture a dump and analyze it. See Capturing Application Crash Dumps. There are tools that specialize in this, like Breakpad. There are also services that can help you collect and track crashes generated from your app, like crittercism.com or AirBrake. I even created one for myself and made it public: bugcollect.com.

Finding the true memory footprint of a Windows application

I've run into a few OutOfMemoryExceptions with my C#/WPF application and I'm running into some confusing data while attempting to profile the memory usage.
When the app is typically running, Windows Task Manager shows the memory usage as somewhere around 34 MB (bounces around slightly as objects are created and garbage collected). When I run memory profiling applications such as CLR Profiler and dotTrace Memory, they show the total memory usage at around 1.2 MB.
Why this huge discrepancy? What does Task Manager see that these profilers do not?
UPDATE: I added some diag code to my application to print out various memory information every so often via the Process class.
While running my app, I set up a rule in DebugDiag to perform a memory dump in the event of an exception. I forced an exception and the memory dump occurred. At this point, the memory usage of my app (as shown by task manager) jumped from 32 MB to 145 MB and remained there.
You can see this jump in the table below (WorkingSet64). I'm still trying to make sense of all the types of memory info provided by the Process class. How would an external application make the working set of my app grow like this?
Link to data table here.
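For reference, a minimal sketch of the kind of Process-based diagnostic code described in the update (the five-second interval is arbitrary):
using System;
using System.Diagnostics;
using System.Threading;

class MemoryDiag
{
    static void Main()
    {
        Process p = Process.GetCurrentProcess();
        while (true)
        {
            p.Refresh(); // re-read the OS counters before printing
            Console.WriteLine(
                "WorkingSet64={0:N0} PrivateMemorySize64={1:N0} " +
                "PagedMemorySize64={2:N0} VirtualMemorySize64={3:N0} ManagedHeap={4:N0}",
                p.WorkingSet64,
                p.PrivateMemorySize64,
                p.PagedMemorySize64,
                p.VirtualMemorySize64,
                GC.GetTotalMemory(false));
            Thread.Sleep(5000);
        }
    }
}
The gap between GC.GetTotalMemory (roughly what managed-memory profilers report) and WorkingSet64 (what Task Manager shows) accounts for much of the discrepancy asked about above.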
Using some of the diagnostic tools suggested here, plus the ANTS memory profiler (which is so money), I found the sources of the leak:
WPF Storyboard animations leak under .NET 3.5
The WPF BitmapEffect class can cause leaks. The alternative "Effect" class fixes the leak. Link, Link
XAML Merged ResourceDictionaries can cause leak. Link, Link
The "Working Set" memory footprint of an application (memory shown by task manager) is not a good indication of your process' footprint. Outside applications can influence this. Link
The memory profiling tools helped me find that the leaks were mostly in unmanaged code, which made it a real pain to track down. Dealing with these leaks, plus a better understanding of Windows memory (private vs working set) cleared things up.
Process Explorer and VMMap, both part of the Sysinternals Suite by Mark Russinovich.
