We just deployed Selenium server 2.0b3 (upgraded from 1.0.3). It
looks like there are some fairly serious memory leaks - an OutOfMemory
exception is thrown during runs longer than 30 minutes.
Is there any straightforward workaround for dealing with the memory leaks in the
2.0b3 Selenium server?
I was hoping to get the 2.0b3 source, apply the assorted patches
submitted thus far, and use that. However, when I pull this:
svn checkout http://selenium.googlecode.com/svn/tags/selenium-2.0-beta-3/
selenium-2.0-beta-3
and build with
./go clean release
the resulting binaries don't appear to contain
DefaultSelenium.class. I'm not sure what is going on here...
Alternatively, I thought maybe we would just start working with the
latest release candidate. However, it looks like
DefaultSelenium.class is not there either.
Do I need to upgrade the client code to use WebDriver? I thought
things were supposed to be backwards compatible.
Suggestions?
For backwards compatibility you should use WebDriverBackedSelenium, like this:
WebDriver driver = new FirefoxDriver(); // or any of the other driver types
Selenium selenium = new WebDriverBackedSelenium(driver, START_URL);
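For reference, a slightly fuller sketch (assuming the Selenium 2.0 Java client on the classpath; these are the package names from the 2.0 betas, so check them against your jars):

import com.thoughtworks.selenium.Selenium;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebDriverBackedSelenium;
import org.openqa.selenium.firefox.FirefoxDriver;

WebDriver driver = new FirefoxDriver();                 // or any of the other driver types
Selenium selenium = new WebDriverBackedSelenium(driver, "http://www.example.com/");
selenium.open("/");                                     // existing Selenium 1.x API calls work as before
// ... the rest of your existing Selenium 1.x test code ...
selenium.stop();                                        // shuts down the session when you're done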
Specifically, what kind of OutOfMemory exception is getting thrown? Heap? GC overhead limit? Other?
I was getting "GC overhead limit exceeded" and sometimes also "out of heap space" as the message within the exception (on both 1.0.3 and 2.0b3, using ruby selenium-client 1.2.18), and found your thread on the selenium-developers Google group.[1] Have you followed along with the responses there?
Turning logging off for selenium-server (both -log AND -browserSideLog) stopped the OOMEs for me. I can wait until the next selenium-server release to get Kristian's patches.[2]
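In other words, just start the server without those flags - something like this (jar name assumed, adjust to your download):

java -jar selenium-server-standalone-2.0b3.jar

instead of:

java -jar selenium-server-standalone-2.0b3.jar -log selenium.log -browserSideLog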
[1] http://groups.google.com/group/selenium-developers/browse_thread/thread/30d38475a16985a9/0db1af2456304f9f?hl=en&lnk=gst&q=outofmemory#0db1af2456304f9f
[2] http://code.google.com/p/selenium/source/detail?r=11872
Following https://docs.hiro.so/smart-contracts/devnet I can't get the command clarinet integrate to work. I have installed Docker on my Mac and am running version 0.28.0 of clarinet. I'm running the command within 'my-react-app/clarinet', where all the Clarity-related files live (contracts, settings, tests, and Clarinet.toml).
My guess is it could be an issue with Docker?
The issue was that I downloaded my Devnet.toml file from a repo that was configured incorrectly. The configuration I needed was:
[network]
name = "devnet"
I increased the CPU and Memory in Docker as well.
There is an issue when the command attempts to spin up the Stacks Explorer, but I was informed that there are several known issues with the Stacks Explorer from clarinet integrate at the moment.
Depending on how the last devnet was terminated, you could have some containers still running. This issue should be fixed in the next incoming release; meanwhile, you'd need to terminate these stale containers manually.
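For example, with plain Docker commands (the container IDs are whatever docker ps reports for the stale devnet containers):

docker ps                    # find the stale devnet containers
docker stop <container-id>   # stop each one
docker rm <container-id>     # then remove it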
Apart from Ludo's suggestions, I'd also look into your Docker resources. The default CPU/memory allocation should allow you to get started with Clarinet, but just in case, you could increase it to see if that makes a difference.
Alternatively, to tease things out, you could reuse one of the samples (e.g. hirosystems/stacks-billboard) instead of running your project. See if the sample comes up as expected; if it does, there could be something missing in your project.
I have a very simple Unity WebGL project that I am trying to connect to a SQL Server database from.
When I run the project in the Editor, it works fine.
When I run the WebGL build, as soon as I try to open the DB connection I get an "Out of memory" pop-up and this error in the console:
PrototypeProject.loader.js:1 Cannot enlarge memory arrays. Either (1)
compile with -s TOTAL_MEMORY=X with X higher than the current value
2144141312, (2) compile with -s ALLOW_MEMORY_GROWTH=1 which allows
increasing the size at runtime, or (3) if you want malloc to return
NULL (0) instead of this abort, compile with -s ABORTING_MALLOC=0
My understanding is that the advice included in the error is out of date, because "allow memory growth" defaults to enabled and there is no "total memory" setting in recent versions of Unity. I can't see why (3) would be a sensible thing to do anyway.
I know the problem is triggered (every time) by connection.Open, because the first of these debug lines is output but the second is not:
SqlConnection connection = new SqlConnection(connectionString);
Debug.Log("Calling connection.open");
connection.Open();            // the build dies here with the "Out of memory" pop-up
Debug.Log("Connection open"); // never reached in the WebGL build
One option would be to stop trying to connect to the database directly and instead call a web service which does the database work. Yes, I know that a three-tiered architecture is a better design in any case - this approach was used in a (failed, clearly) effort to prototype something quickly.
However, I really want to understand what's going on here in case I hit similar issues in future. I know that SOMETHING has to be the tipping point that runs you out of memory, but just opening a connection doesn't intuitively seem like it should be a massive memory hog (though maybe I'm wrong...), and I can't see any noticeable difference when using the Profiler in the editor.
Does anyone have any experience in troubleshooting SQL Server connections in particular, or memory issues in general, in WebGL that might be relevant to understanding and avoiding this behaviour?
The answer was much more rudimentary than I expected - it's just not possible to use SqlConnection in a WebGL build, because SqlConnection in turn tries to use Sockets, which WebGL builds do not have access to.
Exactly why this manifests as an "out of memory" error, or why the Unity Editor couldn't be a little bit more helpful and warn you that you are using classes that are not going to work when you start a WebGL build, I do not yet understand...
In using the .NET Selenium webdrivers, I have been stumbling over 2 main issues, each with a different specific webdriver.
These are the issues where the Chrome and Firefox webdrivers have been falling short for me (I am using RellYa's selenium jQuery extensions):
- Chrome: the webdriver randomly throws a "jQuery not found" exception. If I try a couple of times, I eventually succeed. This never happened with Firefox's webdriver.
- Firefox: the webdriver throws "Unable to bind to locking port 7054 within 45000 ms".
Research shows that the reason behind this is that I must have left another Firefox webdriver not closed or not quit. But this defeats the purpose of my using Selenium to automate web tasks in a multithreaded manner: after a couple of threads are opened, it seems to reach some limit and wait for one of the opened webdrivers to close.
Actually, the Firefox webdriver's documentation makes it clear that only one instance is supposed to be running. What is one supposed to do, then, with multithreading in mind?
Does anyone have working solutions for the problems singled out above, for each specific webdriver implementation?
No, you can run multiple instances of Firefox, Chrome, or whatever on your machine at any one time. If you research "Selenium Grid", you will see that it is designed to do exactly that.
So:
The "unable to bind" message on Firefox is not caused by another driver locking a port. Each driver instance starts on its own open port.
If you are not using Selenium Grid and are trying to handle the multithreading yourself, just be careful about how you open and close your browsers in the configuration (setup/teardown) phases of your test runner - see the sketch below.
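For example, a common pattern is to give each test thread its own driver instance and always quit it in teardown. A minimal sketch in Java (the same idea carries over to the .NET bindings; names here are illustrative):

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class DriverPerThread {
    // one browser per thread; each instance manages its own ports
    private static final ThreadLocal<WebDriver> DRIVER = new ThreadLocal<WebDriver>() {
        @Override
        protected WebDriver initialValue() {
            return new FirefoxDriver();
        }
    };

    public static WebDriver get() {
        return DRIVER.get();
    }

    public static void quitDriver() {
        DRIVER.get().quit();   // always quit, even when a test fails
        DRIVER.remove();
    }
}

Calling quitDriver() in every teardown ensures a crashed test never leaves a browser behind.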
As an educated guess: if you have instability, it's more likely because you are trying to control a newer browser with a too-old version of Selenium. We need more info on your question, such as an example project to look at.
I have two machines, one with all the stuff I need (Eclipse + TestNG + scripts) and the other one with just browsers installed.
I use Selenium Grid 2.35.0.
Everything seems to be fine except the problem that very often I get this error:
Error communicating with the remote browser. It may have died.
The scripts are not complicated at all, and I run them one by one, so the error just happens randomly. I don't think it's because of the browser.
Any idea/fix?
If you need more info I'm here.
The only time I get that error is when I manually close the browser myself. I would verify that the machine with the browsers is stable.
It could also be due to calling driver.quit() and then not instantiating another driver (I haven't ever done this, so I don't know what error that throws).
I notice this error as well, but ONLY when using Selenium Grid (using 2.35, though 2.38 exists now).
When I run locally I don't get "Error communicating with the remote browser", but typically it can happen when there is a bug in your setup and teardown code (maybe one of your classes creates an instance of your browser before your setup function gets called).
See How to close child browser window in Selenium WebDriver using Java
Ensure you call driver.close(); on every popup / new window / new tab you open during the test (after switching to it using driver.switchTo()),
and call driver.quit(); at the end of the session (generally in an @AfterClass-annotated method).
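A minimal sketch of that pattern in Java (handle names are illustrative):

String mainWindow = driver.getWindowHandle();          // remember the original window
for (String handle : driver.getWindowHandles()) {
    if (!handle.equals(mainWindow)) {
        driver.switchTo().window(handle);              // switch to the popup/tab
        driver.close();                                // close just that window
    }
}
driver.switchTo().window(mainWindow);                  // switch back before continuing

@AfterClass
public void tearDown() {
    driver.quit();                                     // ends the whole browser session
}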
I'm developing a GWT application, and I'm having issues with testing in development mode in Eclipse.
When I make changes to the client-side code, I refresh the browser page (F5) to reload the module. Every time I do this (whether the code has changed or not), the Development Mode tab in Eclipse shows a new bullet point with "Module xxxx has loaded". As well, according to Task Manager, every time I do this the javaw.exe host process grows by about 1 MB of memory. Eventually (10-20 refreshes later), the page fails to load and the Development Mode tab shows this error:
Out of memory; to increase the amount of memory, use the -Xmx flag at startup (java -Xmx128M ...)
I can fix this by stopping and restarting the server (not the little refresh button in the Development Mode tab, but the red stop button), but then the module has to be revalidated, which takes a while. It seems that Eclipse doesn't realize I've finished with the old module when I load a new one. I'm observing the same behavior with a brand-new GWT project, so I don't think it's my code. Is anyone aware of a way to remedy this?
EDIT: See both answers below for possible solutions.
The default settings GWT dev mode uses are the minimum, so you hit an out-of-memory error really quickly.
In particular, the PermGen space is too low, and if you refresh 20 times in a short period it will run out of memory.
You can start by using the following VM args:
-Xms512m -Xmx512m -XX:MaxPermSize=256M -XX:+UseParallelGC
But as enrybo pointed out, if your application grows it will require more memory:
-Xms512m -Xmx1g -XX:MaxPermSize=256M -XX:+UseParallelGC
There isn't really all that much you can do. As you mentioned, you can increase the memory, but eventually you'll run into the same problem even with more allocated memory.
I suggest you try running in Super Dev Mode, but in that case you'll need to update the SDK you're using to 2.5.1+. With Super Dev Mode your browser does not need a plugin, because it will actually be running true JavaScript. You even have the ability to debug in your browser while looking at your Java source (using source maps).