I'm getting random "The connection is closed" errors in Teiid 13.1.0 against SQL Server:
2021-01-08 10:20:23,949 DEBUG [org.teiid.COMMAND_LOG.SOURCE] (Worker513_QueryProcessorQueue9800) Cz9nti5G/vUr ERROR SRC COMMAND: endTime=2021-01-08 10:20:23.949 requestID=Cz9nti5G/vUr.0 sourceCommandID=0 executionID=9632 txID=null modelName=customer translatorName=sqlserver sessionID=Cz9nti5G/vUr principal=sforce-app-user
2021-01-08 10:20:23,949 WARN [org.teiid.CONNECTOR] (Worker513_QueryProcessorQueue9800) Cz9nti5G/vUr Connector worker process failed for atomic-request=Cz9nti5G/vUr.0.0.9632: org.teiid.translator.jdbc.JDBCExecutionException: 0 TEIID11008:TEIID11004 Error executing statement(s): [Prepared Values: ['(111)111-1111'] SQL: SELECT g_0.Id AS c_0, g_0.Email AS c_1, g_0.Phone AS c_2, g_0.parent AS c_3 FROM Customer g_0 WHERE g_0.Phone = ? ORDER BY c_0 OFFSET 0 ROWS FETCH NEXT 2001 ROWS ONLY]
at org.teiid.translator.jdbc.JDBCQueryExecution.execute(JDBCQueryExecution.java:127)
at org.teiid.dqp.internal.datamgr.ConnectorWorkItem.execute(ConnectorWorkItem.java:402)
at sun.reflect.GeneratedMethodAccessor101.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.teiid.dqp.internal.datamgr.ConnectorManager$1.invoke(ConnectorManager.java:228)
at com.sun.proxy.$Proxy44.execute(Unknown Source)
at org.teiid.dqp.internal.process.DataTierTupleSource.getResults(DataTierTupleSource.java:302)
at org.teiid.dqp.internal.process.DataTierTupleSource$1.call(DataTierTupleSource.java:108)
at org.teiid.dqp.internal.process.DataTierTupleSource$1.call(DataTierTupleSource.java:104)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.teiid.dqp.internal.process.FutureWork.run(FutureWork.java:59)
at org.teiid.dqp.internal.process.DQPWorkContext.runInContext(DQPWorkContext.java:281)
at org.teiid.dqp.internal.process.ThreadReuseExecutor$RunnableWrapper.run(ThreadReuseExecutor.java:124)
at org.teiid.dqp.internal.process.ThreadReuseExecutor$2.run(ThreadReuseExecutor.java:212)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: The connection is closed.
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDriverError(SQLServerException.java:234)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.checkClosed(SQLServerConnection.java:1130)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.prepareStatement(SQLServerConnection.java:3536)
at org.jboss.jca.adapters.jdbc.BaseWrapperManagedConnection.doPrepareStatement(BaseWrapperManagedConnection.java:758)
at org.jboss.jca.adapters.jdbc.BaseWrapperManagedConnection.prepareStatement(BaseWrapperManagedConnection.java:744)
at org.jboss.jca.adapters.jdbc.WrappedConnection$4.produce(WrappedConnection.java:478)
at org.jboss.jca.adapters.jdbc.WrappedConnection$4.produce(WrappedConnection.java:476)
at org.jboss.jca.adapters.jdbc.SecurityActions.executeInTccl(SecurityActions.java:97)
at org.jboss.jca.adapters.jdbc.WrappedConnection.prepareStatement(WrappedConnection.java:476)
at org.teiid.translator.jdbc.JDBCBaseExecution.getPreparedStatement(JDBCBaseExecution.java:198)
at org.teiid.translator.jdbc.JDBCQueryExecution.execute(JDBCQueryExecution.java:117)
... 17 more
2021-01-08 10:20:23,949 DEBUG [jboss.jdbc.spy] (default task-88) Cz9nti5G/vUr java:/datasources/DATASOURCE [Connection] close()
2021-01-08 10:20:23,949 DEBUG [org.jboss.jca.core.connectionmanager.pool.strategy.OnePool] (default task-88) Cz9nti5G/vUr DATASOURCE: returnConnection(714b1b5a, false) [1/20]
Originally I was seeing this when the SQL Server was restarted: Teiid was not validating the connections in the pool and I'd have to restart Teiid to get connections back. To fix this I added
<pool>
<flush-strategy>EntirePool</flush-strategy>
</pool>
which I tested and confirmed works. However, I am still getting "The connection is closed" errors at random times.
SQL Server marks connections as idle after 10 minutes. I do not have an <idle-timeout-minutes> setting on my data source.
My configuration is:
<datasource jta="true" jndi-name="java:/datasources/DATASOURCE" pool-name="DATASOURCE" enabled="true" spy="true" use-ccm="false" statistics-enabled="true">
<connection-url>jdbc:sqlserver://1.1.1.1:1433;DatabaseName=DATABASE</connection-url>
<driver-class>com.microsoft.sqlserver.jdbc.SQLServerDriver</driver-class>
<driver>mssql-jdbc-8.2.0.jre8.jar</driver>
<pool>
<flush-strategy>EntirePool</flush-strategy>
</pool>
<security>
<user-name>USERNAME</user-name>
<password>PASSWORD</password>
</security>
<validation>
<valid-connection-checker class-name="org.jboss.jca.adapters.jdbc.extensions.mssql.MSSQLValidConnectionChecker"/>
<background-validation>false</background-validation>
</validation>
</datasource>
Any idea why Teiid isn't validating and rebuilding the pool when this happens? If it can detect dead connections when I reboot the SQL Server, why can't it detect dead connections when this random, unknown event happens?
How can I investigate further? I have no visibility into why the connections die every few days, and I don't know whether CCM would help debug this or whether I should be monitoring with netstat.
Teiid does not maintain the connection pools; the WildFly server does. Teiid just requests a connection and uses whatever is returned, which could be a closed connection if the pool is not validated.
The validation settings above look correct. Alternatively, you can follow a similar validation technique to the one described in [1]:
<validation>
<check-valid-connection-sql>select 1</check-valid-connection-sql>
<validate-on-match>false</validate-on-match>
<background-validation>true</background-validation>
<background-validation-millis>10000</background-validation-millis>
</validation>
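Since SQL Server marks connections as idle after 10 minutes, it can also help to let the pool retire idle connections before the server drops them. A possible addition to the datasource (the 5-minute value is only an illustrative starting point, not a required setting):
<timeout>
    <idle-timeout-minutes>5</idle-timeout-minutes>
</timeout>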
[1] http://www.mastertheboss.com/jboss-server/jboss-datasource/how-to-automatically-reconnect-to-the-database-in-wildfly
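To investigate further, you can watch the pool statistics from the WildFly management CLI (your datasource already has statistics-enabled="true") and check whether the active/idle counts change when the errors start. A sketch, using the pool name from your configuration (the exact resource path can vary slightly between WildFly versions):
/subsystem=datasources/data-source=DATASOURCE/statistics=pool:read-resource(include-runtime=true)
Correlating those numbers with netstat output on the Teiid host can show whether the TCP connections to the SQL Server are being torn down by a firewall or by the server itself.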
Related
I am running a program that crawls the web and saves data into a Solr index. For mysterious reasons, the Solr server crashed, and now I am left with a corrupted index that has no segments file, so I risk losing all the data collected over 5 days.
The error message below appears when you try to search on this index. The index folder definitely has data, as it contains 182 files and is 2 GB in size.
I have tried to use CheckIndex but get the same error about no segments file.
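For reference, I ran CheckIndex roughly like this (the lucene-core jar path is from my install; adjust it to wherever your Solr ships the jar):
java -cp /path/to/solr/server/solr-webapp/webapp/WEB-INF/lib/lucene-core-*.jar org.apache.lucene.index.CheckIndex /home/zqz/Work/chase/aws/data/solr/chase/data/index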
java.util.concurrent.ExecutionException: org.apache.solr.common.SolrException: Unable to create core [chase]
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.apache.solr.core.CoreContainer.lambda$load$6(CoreContainer.java:586)
at com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.solr.common.SolrException: Unable to create core [chase]
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:935)
at org.apache.solr.core.CoreContainer.lambda$load$5(CoreContainer.java:558)
at com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
... 5 more
Caused by: org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:977)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:830)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:920)
... 7 more
Caused by: org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:2069)
at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:2189)
at org.apache.solr.core.SolrCore.initSearcher(SolrCore.java:1071)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:949)
... 9 more
Caused by: org.apache.lucene.index.IndexNotFoundException: no segments* file found in LockValidatingDirectoryWrapper(NRTCachingDirectory(MMapDirectory#/home/zqz/Work/chase/aws/data/solr/chase/data/index lockFactory=org.apache.lucene.store.NativeFSLockFactory#51b2fc7e; maxCacheMB=48.0 maxMergeSizeMB=4.0)): files: [_fh2.fdt, _fh2.fdx, _fh2.fnm, _fh2.nvd, _fh2.nvm, _fh2.si, _fh2_Lucene50_0.doc, _fh2_Lucene50_0.pos, _fh2_Lucene50_0.tim, _fh2_Lucene50_0.tip, write.lock]
at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:925)
at org.apache.solr.update.SolrIndexWriter.<init>(SolrIndexWriter.java:118)
at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:93)
at org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:248)
at org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrCoreState.java:122)
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:2030)
... 12 more
2017-06-20 14:38:52.428 INFO (qtp475266352-16) [ ] o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 2147483647 transient cores
2017-06-20 14:38:52.894 INFO (qtp475266352-13) [ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores params={indexInfo=false&wt=json&_=1497969532681} status=0 QTime=11
2017-06-20 14:38:52.962 INFO (qtp475266352-20) [ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/info/system params={wt=json&_=1497969532684} status=0 QTime=76
The error you mentioned is caused by a missing file:
segments* (e.g. segments_3 ...)
among the index files:
files: [_fh2.fdt, _fh2.fdx, _fh2.fnm, _fh2.nvd, _fh2.nvm, _fh2.si, _fh2_Lucene50_0.doc, _fh2_Lucene50_0.pos, _fh2_Lucene50_0.tim, _fh2_Lucene50_0.tip, write.lock]
That file records the last commit point (the last generation of segments to take into account), and it is apparently missing.
Check whether that file is there and is readable.
If it is not (for example because the index writer was not closed properly during the malfunction), do not despair.
Chances are the transaction log still contains the documents you indexed, so you can replay it and get them back (clean the index directory, start Solr, and it should take care of the replay).
Solr also provides backup functionality, so you may want to configure it for the future.
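As a rough sketch of the recovery path above (paths are taken from your error output; adapt as needed, and keep a copy of the broken index first):
mv /home/zqz/Work/chase/aws/data/solr/chase/data/index /home/zqz/Work/chase/aws/data/solr/chase/data/index.broken
# restart Solr: it creates a fresh index for the core and replays the transaction log from data/tlog
For future backups, the replication handler exposes a backup command, for example (host, port and location here are assumptions):
http://localhost:8983/solr/chase/replication?command=backup&location=/path/to/backups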
I am trying to import a large data set from MySQL using DIH.
The following is the dataSource, with batchSize="-1" for MySQL:
<dataSource batchSize="-1" driver="com.mysql.jdbc.Driver" ..... />
It fetches all 10 million records.
But at the end it says the full import failed.
I get the following exception in the log:
2017-03-14 07:27:04.429 ERROR (Thread-14) [ x:companyData] o.a.s.h.d.DataImporter Full Import failed:java.lang.RuntimeException: java.lang.RuntimeException: org.apache.solr.handler.dataimport.DataImportHandlerException: java.sql.SQLException: Operation not allowed after ResultSet closed
at org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:270)
at org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:416)
at org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:475)
at org.apache.solr.handler.dataimport.DataImporter.lambda$runAsync$0(DataImporter.java:458)
at org.apache.solr.handler.dataimport.DataImporter$$Lambda$85/252359661.run(Unknown Source)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: org.apache.solr.handler.dataimport.DataImportHandlerException: java.sql.SQLException: Operation not allowed after ResultSet closed
at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:416)
at org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:329)
at org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:232)
... 5 more
Any help regarding this would be appreciated.
The error you're facing does not concern Solr but the way you're accessing your database.
If you look at your exception: java.sql.SQLException: Operation not allowed after ResultSet closed.
I suggest changing the batchSize parameter to a different value, for example 1000.
The batchSize option is used to retrieve the rows of a database table in batches in order to reduce memory usage (it is often used to prevent running out of memory when running the data import handler). While a lower batch size might be slower, the option does not intend to affect the speed of the import process.
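For instance, the dataSource element could look like this (the URL and credentials are placeholders for your own values):
<dataSource type="JdbcDataSource" batchSize="1000" driver="com.mysql.jdbc.Driver" url="jdbc:mysql://localhost:3306/yourdb" user="..." password="..." />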
I am using Solr 4.7.1 and trying to do a full import. My data source is a table in MySQL with 10,000,000 rows and 20 columns.
Whenever I try to do a full import, Solr stops responding, but an import of 400,000 records or fewer works fine.
If I try to import more than that, Solr won't index the result: it either stops responding or shows "indexing failed". The error log says "Unable to execute query", but I don't understand why the query runs fine for a smaller number of records and fails when I import more.
My system configuration is as follows:
CPU: i7
RAM: 6 GB
OS: 64-bit Windows 7
I am not able to figure out what the problem is. I have tried increasing max_allowed_packet to 1000M and also the Java heap size.
Please help, thanks in advance.
This is the error:
Exception while processing: playername document : SolrInputDocument(fields: []):org.apache.solr.handler.dataimport.DataImportHandlerException: Unable to execute query: SELECT player_id,firstname,lastname,value1,value2,value3,value4,value5,value6, value7,value8,value9,value10, value11,value18,value19,value20, country_id, playername_modtime,player_flag from playername WHERE 'true' != 'false' OR playername.playername_modtime > '2014-05-23 10:38:56' Processing Document # 1
at org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:71)
at org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.(JdbcDataSource.java:281)
at org.apache.solr.handler.dataimport.JdbcDataSource.getData(JdbcDataSource.java:238)
at org.apache.solr.handler.dataimport.JdbcDataSource.getData(JdbcDataSource.java:42)
at org.apache.solr.handler.dataimport.SqlEntityProcessor.initQuery(SqlEntityProcessor.java:59)
at org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:73)
at org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:243)
at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:477)
at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:416)
at org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:331)
at org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:239)
at org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:411)
at org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:483)
at org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:464)
Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet successfully received from the server was 130,037 milliseconds ago. The last packet sent successfully to the server was 130,038 milliseconds ago.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
at java.lang.reflect.Constructor.newInstance(Unknown Source)
at com.mysql.jdbc.Util.handleNewInstance(Util.java:409)
at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1127)
at com.mysql.jdbc.MysqlIO.nextRowFast(MysqlIO.java:2288)
at com.mysql.jdbc.MysqlIO.nextRow(MysqlIO.java:2044)
at com.mysql.jdbc.MysqlIO.readSingleRowSet(MysqlIO.java:3549)
at com.mysql.jdbc.MysqlIO.getResultSet(MysqlIO.java:489)
at com.mysql.jdbc.MysqlIO.readResultsForQueryOrUpdate(MysqlIO.java:3240)
at com.mysql.jdbc.MysqlIO.readAllResults(MysqlIO.java:2411)
at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2834)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2832)
at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2781)
at com.mysql.jdbc.StatementImpl.execute(StatementImpl.java:908)
at com.mysql.jdbc.StatementImpl.execute(StatementImpl.java:788)
at org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.(JdbcDataSource.java:274)
... 12 more
Caused by: java.io.EOFException: Can not read response from server. Expected to read 6 bytes, read 4 bytes before connection was unexpectedly lost.
at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:3161)
at com.mysql.jdbc.MysqlIO.nextRowFast(MysqlIO.java:2269)
... 23 more
5/23/2014 8:32:18 PM ERROR DataImporter Full Import failed:java.lang.RuntimeException: java.lang.RuntimeException: org.apache.solr.handler.dataimport.DataImportHandlerException: Unable to execute query: SELECT player_id,firstname,lastname,value1,value2,value3,value4,value5,value6, value7,value8,value9,value10, value11,value18,value19,value20, country_id, playername_modtime,player_flag from playername WHERE 'true' != 'false' OR playername.playername_modtime > '2014-05-23 10:38:56' Processing Document # 1
Last Check: 5/23/2014 8:36:34 PM
I added batchSize="-1" to data-config.xml and it worked:
http://wiki.apache.org/solr/DataImportHandlerFaq
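For reference, the dataSource line in data-config.xml ends up roughly like this (the URL and credentials are placeholders):
<dataSource type="JdbcDataSource" batchSize="-1" driver="com.mysql.jdbc.Driver" url="jdbc:mysql://localhost:3306/yourdb" user="..." password="..." />
With the MySQL driver, batchSize="-1" makes DIH set the JDBC fetch size to Integer.MIN_VALUE, so rows are streamed from the server instead of being buffered in memory.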
I am using Solr 4.6.0 with Jetty on Windows 7 Enterprise, with a max heap of 2G. I can do a full-import of 200,000 records properly from the Solr Admin UI, but as soon as I increase it to 250,000 records, it starts giving me the error below:
webapp=/solr path=/dataimport params={optimize=false&clean=false&indent=true&commit=true&verbose=true&entity=files&command=full-import&debug=true&wt=json&rows=250000} {add=[8065121, 8065126, 8065128, 8065146, 8065963, 7838189, 7838186, 8065155, 8065174, 8065179, ... (250001 adds)],commit=} 0 2693420
org.apache.solr.common.SolrException; null:org.eclipse.jetty.io.EofException
at org.eclipse.jetty.http.HttpGenerator.flushBuffer(HttpGenerator.java:914)
at org.eclipse.jetty.http.AbstractGenerator.blockForOutput(AbstractGenerator.java:507)
at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:170)
at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:107)
at sun.nio.cs.StreamEncoder.writeBytes(Unknown Source)
at su
Caused by: java.net.SocketException: Software caused connection abort: socket write error at java.net.SocketOutputStream.socketWrite0(Native Method)
at j......
org.apache.solr.common.SolrException;null:org.eclipse.jetty.io.EofException at org.eclipse.jetty.http.HttpGenerator.flushBuffer(HttpGenerator.java:914)
org.eclipse.jetty.servlet.ServletHandler; /solr/dihdb/dataimport
java.lang.IllegalStateException: Committed
at org.eclipse.jetty.server.Response.resetBuffer(Response.java:1144)
I have changed example/etc/jetty.xml to set maxIdleTime=3500000, and example/etc/webdefault.xml to set session-timeout=720, as shown below.
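Roughly, the edits look like this (only these values were changed; the surrounding elements are the stock Solr example defaults):
In example/etc/jetty.xml:
<Set name="maxIdleTime">3500000</Set>
In example/etc/webdefault.xml:
<session-config>
    <session-timeout>720</session-timeout>
</session-config>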
I still keep getting the error above.
TIA,
Vijay
I changed the heap to -Xmx5120M and that seems to have fixed the issue for 500K and 1 million records. Lack of memory, in essence, was the cause of this misleading error.
I also tried 100000 and 1800000 for the DataImportHandler.
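For example, when starting the bundled Jetty example, the heap can be raised like this (5120M is what worked for me; tune it to your data volume):
java -Xmx5120m -jar start.jar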
I've set up a connection from localhost to the Dev database on Heroku (as described in: Errors in evolutions on Heroku), and I am receiving the following error after trying to apply evolutions a couple of times:
SQLException: Unable to open a test connection to the given database. JDBC url = [URL], username = null. Terminating connection pool.
Original Exception: org.postgresql.util.PSQLException: FATAL: too many connections for role "ntnkypawxazhwo"
at org.postgresql.core.v3.ConnectionFactoryImpl.readStartupMessages(ConnectionFactoryImpl.java:469)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:110)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:64)
at org.postgresql.jdbc2.AbstractJdbc2Connection.<init>(AbstractJdbc2Connection.java:123)
at org.postgresql.jdbc3.AbstractJdbc3Connection.<init>(AbstractJdbc3Connection.java:28)
at org.postgresql.jdbc3g.AbstractJdbc3gConnection.<init>(AbstractJdbc3gConnection.java:20)
at org.postgresql.jdbc4.AbstractJdbc4Connection.<init>(AbstractJdbc4Connection.java:30)
at org.postgresql.jdbc4.Jdbc4Connection.<init>(Jdbc4Connection.java:22)
at org.postgresql.Driver.makeConnection(Driver.java:391)
at org.postgresql.Driver.connect(Driver.java:265)
at play.utils.ProxyDriver.connect(ProxyDriver.scala:9)
at java.sql.DriverManager.getConnection(Unknown Source)
at java.sql.DriverManager.getConnection(Unknown Source)
at com.jolbox.bonecp.BoneCP.obtainRawInternalConnection(BoneCP.java:256)
at com.jolbox.bonecp.BoneCP.<init>(BoneCP.java:305)
at com.jolbox.bonecp.BoneCPDataSource.maybeInit(BoneCPDataSource.java:150)
at com.jolbox.bonecp.BoneCPDataSource.getConnection(BoneCPDataSource.java:112)
at play.api.db.DBApi$class.getConnection(DB.scala:64)
at play.api.db.BoneCPApi.getConnection(DB.scala:273)
at play.api.db.evolutions.Evolutions$.databaseEvolutions(Evolutions.scala:306)
at play.api.db.evolutions.Evolutions$.evolutionScript(Evolutions.scala:284)
at play.api.db.evolutions.OfflineEvolutions$.applyScript(Evolutions.scala:452)
at play.core.ReloadableApplication.handleWebCommand(ApplicationProvider.scala:175)
at play.core.server.Server$$anonfun$getHandlerFor$1.apply(Server.scala:86)
at play.core.server.Server$$anonfun$getHandlerFor$1.apply(Server.scala:86)
at scala.util.control.Exception$Catch$$anonfun$either$1.apply(Exception.scala:110)
at scala.util.control.Exception$Catch$$anonfun$either$1.apply(Exception.scala:110)
at scala.util.control.Exception$Catch.apply(Exception.scala:88)
at scala.util.control.Exception$Catch.either(Exception.scala:110)
at play.core.server.Server$class.getHandlerFor(Server.scala:86)
at play.core.server.NettyServer.getHandlerFor(NettyServer.scala:38)
at play.core.server.netty.PlayDefaultUpstreamHandler.messageReceived(PlayDefaultUpstreamHandler.scala:226)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:558)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:777)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.unfoldAndFireMessageReceived(ReplayingDecoder.java:522)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:501)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:438)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:558)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:553)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:343)
at org.jboss.netty.channel.socket.nio.NioWorker.processSelectedKeys(NioWorker.java:274)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:194)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:102)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Dev databases have a fixed number of available connections (20 or so). How can I make sure I am properly closing my connections?
You can use the JDBC settings of Play to reduce the number of connections. Try setting only 1 partition to start:
db.default.partitionCount=1
and keep tweaking to limit the time connections are held and the number of connections per partition.
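For example, in conf/application.conf (the keys follow Play's BoneCP settings; the numbers are just a starting point to stay well under the Dev database's ~20-connection cap, since the total is partitionCount × maxConnectionsPerPartition):
db.default.partitionCount=1
db.default.maxConnectionsPerPartition=5
db.default.minConnectionsPerPartition=1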