I created a multi-node Cassandra cluster with two nodes, node0 and node1 (Cassandra version 1.1.1). I then used cassandra-cli to connect to node0 and create a column family. Everything looks fine on node0, but node1 throws an exception like this:
ERROR 18:22:48,486 Exception in thread Thread[MigrationStage:1,5,main]
org.apache.cassandra.db.marshal.MarshalException: invalid UTF8 bytes 4fd5c745
at org.apache.cassandra.db.marshal.UTF8Type.getString(UTF8Type.java:56)
at org.apache.cassandra.cql3.ColumnIdentifier.<init>(ColumnIdentifier.java:47)
at org.apache.cassandra.cql3.CFDefinition.getKeyId(CFDefinition.java:125)
at org.apache.cassandra.cql3.CFDefinition.<init>(CFDefinition.java:59)
at org.apache.cassandra.config.CFMetaData.updateCfDef(CFMetaData.java:1303)
at org.apache.cassandra.config.CFMetaData.keyAlias(CFMetaData.java:224)
at org.apache.cassandra.config.CFMetaData.fromSchemaNoColumns(CFMetaData.java:1187)
at org.apache.cassandra.config.CFMetaData.fromSchema(CFMetaData.java:1215)
at org.apache.cassandra.config.KSMetaData.deserializeColumnFamilies(KSMetaData.java:291)
at org.apache.cassandra.db.DefsTable.mergeColumnFamilies(DefsTable.java:396)
at org.apache.cassandra.db.DefsTable.mergeSchema(DefsTable.java:271)
at org.apache.cassandra.db.DefsTable.mergeRemoteSchema(DefsTable.java:249)
at org.apache.cassandra.db.DefinitionsUpdateVerbHandler$1.runMayThrow(DefinitionsUpdateVerbHandler.java:48)
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Then I stored data into the column family on node0. I can read the data back on node0, but node1 says the column family is not found. After restarting node1, I can read the data on node1 just as on node0.
How can I fix this problem?
This is a bug fixed in the soon-to-be-released 1.1.2 version: https://issues.apache.org/jira/browse/CASSANDRA-4307. If you want to test the fix, you can pull the trunk branch from Apache's git repo.
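If you do want to try the fix from trunk before 1.1.2 ships, a rough sketch (the GitHub mirror URL here is an assumption; Cassandra 1.x builds with ant):
# clone a mirror of the Apache repo and build trunk
git clone https://github.com/apache/cassandra.git
cd cassandra
ant jar          # builds the server jar
bin/cassandra -f # run in the foreground to test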
I am using Flume 1.5.0 to migrate SQL Server data to Amazon S3.
I am migrating only incremental data to S3, so whenever a new record is inserted into SQL Server it should be replicated to S3.
I can write SQL Server data to S3 in the North Virginia region, but when I create a bucket in the Mumbai region and write data there, it throws the error below:
19/09/07 13:01:08 WARN hdfs.HDFSEventSink: HDFS IO error
java.io.IOException: s3n://BUCKET-NAME : 400 : Bad Request
at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.processException(Jets3tNativeFileSystemStore.java:453)
at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.processException(Jets3tNativeFileSystemStore.java:427)
at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.handleException(Jets3tNativeFileSystemStore.java:411)
at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.retrieveMetadata(Jets3tNativeFileSystemStore.java:181)
at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at org.apache.hadoop.fs.s3native.$Proxy8.retrieveMetadata(Unknown Source)
at org.apache.hadoop.fs.s3native.NativeS3FileSystem.getFileStatus(NativeS3FileSystem.java:476)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1424)
at org.apache.hadoop.fs.s3native.NativeS3FileSystem.create(NativeS3FileSystem.java:403)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:909)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:890)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:787)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:776)
at org.apache.flume.sink.hdfs.HDFSDataStream.doOpen(HDFSDataStream.java:86)
at org.apache.flume.sink.hdfs.HDFSDataStream.open(HDFSDataStream.java:113)
at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:273)
at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:262)
at org.apache.flume.sink.hdfs.BucketWriter$9$1.run(BucketWriter.java:718)
at org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:183)
at org.apache.flume.sink.hdfs.BucketWriter.access$1700(BucketWriter.java:59)
at org.apache.flume.sink.hdfs.BucketWriter$9.call(BucketWriter.java:715)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.jets3t.service.impl.rest.HttpException: 400 Bad Request
My Flume sink configuration properties are as follows:
# SINK
agent.sinks = s3hdfs
agent.sinks.s3hdfs.type = hdfs
agent.sinks.s3hdfs.hdfs.path = s3n://ACCESS_KEY:SECRET_KEY@BUCKET-NAME/test
agent.sinks.s3hdfs.hdfs.fileType = DataStream
agent.sinks.s3hdfs.hdfs.filePrefix = test
agent.sinks.s3hdfs.hdfs.writeFormat = Text
agent.sinks.s3hdfs.hdfs.rollCount = 0
# roll files at 64 MB (inline comments are not stripped from Flume property values)
agent.sinks.s3hdfs.hdfs.rollSize = 67108864
agent.sinks.s3hdfs.hdfs.batchSize = 100
agent.sinks.s3hdfs.hdfs.rollInterval = 1
agent.sinks.s3hdfs.channel = ch1
I tried s3a instead of s3n, but that did not work either.
Please suggest what changes I need to make so it can write to an S3 bucket in the Mumbai region. If I need to specify the region name in the Flume conf file, how would I do that?
Thanks in advance.
s3n is obsolete; s3a will work with it. Look at the Hadoop docs on fs.s3a.endpoint and the signing algorithm. These are supported in Hadoop 2.8+.
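For example (a sketch, not a verified configuration): Mumbai (ap-south-1) accepts only AWS Signature Version 4, which the old jets3t-based s3n connector cannot send, hence the 400 Bad Request. On Hadoop 2.8+, pointing s3a at the regional endpoint in core-site.xml should work; the property names are from the s3a documentation, and the endpoint value is the standard one for ap-south-1:
<property>
  <name>fs.s3a.endpoint</name>
  <value>s3.ap-south-1.amazonaws.com</value>
</property>
<property>
  <name>fs.s3a.access.key</name>
  <value>ACCESS_KEY</value>
</property>
<property>
  <name>fs.s3a.secret.key</name>
  <value>SECRET_KEY</value>
</property>
The Flume sink path then becomes an s3a URL, with the credentials moved out of it:
agent.sinks.s3hdfs.hdfs.path = s3a://BUCKET-NAME/test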
Hi, I am getting the following stack trace when I execute these lines of code:
transactionDF.write.format("jdbc")
.option("url",SqlServerUri)
.option("driver", driver)
.option("dbtable", fullQualifiedName)
.option("user", SqlServerUser).option("password",SqlServerPassword)
.mode(SaveMode.Append).save()
The following is the stacktrace:
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply_3$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
at org.apache.spark.sql.execution.LocalTableScanExec$$anonfun$1.apply(LocalTableScanExec.scala:41)
at org.apache.spark.sql.execution.LocalTableScanExec$$anonfun$1.apply(LocalTableScanExec.scala:41)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at org.apache.spark.sql.execution.LocalTableScanExec.<init>(LocalTableScanExec.scala:41)
at org.apache.spark.sql.execution.SparkStrategies$BasicOperators$.apply(SparkStrategies.scala:394)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:62)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:62)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:92)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:77)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:74)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1336)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:74)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:66)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:92)
at org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:84)
at org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:80)
at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:89)
at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:89)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$toString$3.apply(QueryExecution.scala:237)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$toString$3.apply(QueryExecution.scala:237)
at org.apache.spark.sql.execution.QueryExecution.stringOrError(QueryExecution.scala:112)
at org.apache.spark.sql.execution.QueryExecution.toString(QueryExecution.scala:237)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:54)
at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2788)
at org.apache.spark.sql.Dataset.foreachPartition(Dataset.scala:2319)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.saveTable(JdbcUtils.scala:670)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:77)
at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:518)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:215)
at com.test.spark.jobs.ingestion.test$.main(test.scala:193)
at com.test.spark.jobs.ingestion.test.main(test.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:743)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
I tried debugging it, and I believe query execution is throwing a NullPointerException.
I am not sure what it means. I am running this on my local machine, not on a cluster.
Any help will be appreciated.
I figured it out (at least I think this is the reason). For others facing a similar situation: while I was creating the table, I made every column nullable, so I assumed it would allow null insertion into the table. But the Avro schema I was building the DataFrame from had nullable = false. So the DataFrame was reading null into a column declared non-nullable, which raised the NPE. The error surfaced when I called dataframe.write (which made me think it was a JDBC error), but the actual NPE happened while creating the DataFrame.
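A minimal sketch of that mismatch (names and data are illustrative, not from the original job): with nullable = false, Spark's generated projection reads the column without a null check, so a null value only blows up when the plan executes:
import org.apache.spark.sql.{Row, SparkSession}
import org.apache.spark.sql.types._

object NullableMismatch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("nullable-mismatch").master("local[*]").getOrCreate()

    // One row whose only column is null.
    val nullValue: Integer = null
    val rows = java.util.Arrays.asList(Row(nullValue))

    // The schema claims the column can never be null, but the data disagrees.
    val strict = StructType(Seq(StructField("id", IntegerType, nullable = false)))
    val bad = spark.createDataFrame(rows, strict)
    // bad.show() // NullPointerException out of the generated projection, as in the trace

    // Declaring the column nullable matches the data and avoids the NPE.
    val lenient = StructType(Seq(StructField("id", IntegerType, nullable = true)))
    spark.createDataFrame(rows, lenient).show() // prints one null row

    spark.stop()
  }
}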
I am running a program that crawls the web and saves data into a Solr index. For mysterious reasons, the Solr server crashed, and now I am left with a corrupted index that has no segments files, risking the loss of all the data collected over 5 days.
The error message below appears when I try to search this index. The index folder definitely contains data: it has 182 files and is 2 GB in size.
I have tried to use CheckIndex, but it gives the same error about missing segments files.
java.util.concurrent.ExecutionException: org.apache.solr.common.SolrException: Unable to create core [chase]
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at org.apache.solr.core.CoreContainer.lambda$load$6(CoreContainer.java:586)
at com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.solr.common.SolrException: Unable to create core [chase]
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:935)
at org.apache.solr.core.CoreContainer.lambda$load$5(CoreContainer.java:558)
at com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
... 5 more
Caused by: org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:977)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:830)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:920)
... 7 more
Caused by: org.apache.solr.common.SolrException: Error opening new searcher
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:2069)
at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:2189)
at org.apache.solr.core.SolrCore.initSearcher(SolrCore.java:1071)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:949)
... 9 more
Caused by: org.apache.lucene.index.IndexNotFoundException: no segments* file found in LockValidatingDirectoryWrapper(NRTCachingDirectory(MMapDirectory#/home/zqz/Work/chase/aws/data/solr/chase/data/index lockFactory=org.apache.lucene.store.NativeFSLockFactory#51b2fc7e; maxCacheMB=48.0 maxMergeSizeMB=4.0)): files: [_fh2.fdt, _fh2.fdx, _fh2.fnm, _fh2.nvd, _fh2.nvm, _fh2.si, _fh2_Lucene50_0.doc, _fh2_Lucene50_0.pos, _fh2_Lucene50_0.tim, _fh2_Lucene50_0.tip, write.lock]
at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:925)
at org.apache.solr.update.SolrIndexWriter.<init>(SolrIndexWriter.java:118)
at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:93)
at org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:248)
at org.apache.solr.update.DefaultSolrCoreState.getIndexWriter(DefaultSolrCoreState.java:122)
at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:2030)
... 12 more
2017-06-20 14:38:52.428 INFO (qtp475266352-16) [ ] o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 2147483647 transient cores
2017-06-20 14:38:52.894 INFO (qtp475266352-13) [ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/cores params={indexInfo=false&wt=json&_=1497969532681} status=0 QTime=11
2017-06-20 14:38:52.962 INFO (qtp475266352-20) [ ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/info/system params={wt=json&_=1497969532684} status=0 QTime=76
The error you mention is caused by a missing segments* file (e.g. segments_3) among the index files:
files: [_fh2.fdt, _fh2.fdx, _fh2.fnm, _fh2.nvd, _fh2.nvm, _fh2.si, _fh2_Lucene50_0.doc, _fh2_Lucene50_0.pos, _fh2_Lucene50_0.tim, _fh2_Lucene50_0.tip, write.lock]
That file records the last commit point and the latest generation of segments to take into account, and apparently it is missing.
Check whether the file is there and readable.
If it is not (for example because the index writer was not closed properly due to the malfunction), do not despair.
Chances are the transaction log still contains the documents you indexed, so you can replay it and get them back: clean the index directory, start Solr, and it should take care of the rest.
Solr also offers backup functionality, which you may want to configure for the future.
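A rough sketch of that replay-from-tlog recovery (the index path comes from the stack trace above; the stop/start commands depend on how Solr is installed, and keeping a copy of the broken index first is essential):
bin/solr stop -all
cd /home/zqz/Work/chase/aws/data/solr/chase/data
mv index index.broken.bak   # keep the damaged segment files, just in case
mkdir index                 # fresh, empty index dir; leave tlog/ untouched
bin/solr start              # on startup Solr replays the transaction log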
I cannot generate entities from tables in Eclipse when a foreign key is involved. I am using SQL Server. Here is a schema of my tables:
There is a foreign key between Pointage.Id and Pause.IdPointage. The generation fails when I try to generate the two tables: Pointage is generated but Pause is not. In the last screen of the generation wizard, the Pause table looks empty (no columns). It also fails when I import just Pause, but it works fine when I delete the foreign key.
I can see this error in the workspace log:
org.apache.velocity.exception.MethodInvocationException: Invocation of method 'getImportStatements' in class org.eclipse.jpt.jpa.gen.internal.ORMGenTable threw exception java.lang.IllegalStateException: Pause.FK_Pause_Pointage - mismatched sizes: 0 vs. 1 @ main.java.vm[7,9]
at org.apache.velocity.runtime.parser.node.ASTIdentifier.execute(ASTIdentifier.java:205)
at org.apache.velocity.runtime.parser.node.ASTReference.execute(ASTReference.java:203)
at org.apache.velocity.runtime.parser.node.ASTReference.render(ASTReference.java:294)
at org.apache.velocity.runtime.parser.node.SimpleNode.render(SimpleNode.java:318)
at org.apache.velocity.Template.merge(Template.java:254)
at org.apache.velocity.app.VelocityEngine.mergeTemplate(VelocityEngine.java:508)
at org.apache.velocity.app.VelocityEngine.mergeTemplate(VelocityEngine.java:473)
at org.eclipse.jpt.jpa.gen.internal.PackageGenerator.generateJavaFile(PackageGenerator.java:333)
at org.eclipse.jpt.jpa.gen.internal.PackageGenerator.generateClass(PackageGenerator.java:310)
at org.eclipse.jpt.jpa.gen.internal.PackageGenerator.generateInternal(PackageGenerator.java:132)
at org.eclipse.jpt.jpa.gen.internal.PackageGenerator.doGenerate(PackageGenerator.java:106)
at org.eclipse.jpt.jpa.gen.internal.PackageGenerator.generate(PackageGenerator.java:82)
at org.eclipse.jpt.jpa.ui.internal.wizards.gen.GenerateEntitiesFromSchemaWizard$GenerateEntitiesJob.runInWorkspace(GenerateEntitiesFromSchemaWizard.java:285)
at org.eclipse.core.internal.resources.InternalWorkspaceJob.run(InternalWorkspaceJob.java:39)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)
Caused by: java.lang.IllegalStateException: Pause.FK_Pause_Pointage - mismatched sizes: 0 vs. 1
at org.eclipse.jpt.jpa.db.internal.DTPForeignKeyWrapper.buildColumnPairArray(DTPForeignKeyWrapper.java:116)
at org.eclipse.jpt.jpa.db.internal.DTPForeignKeyWrapper.getColumnPairArray(DTPForeignKeyWrapper.java:106)
at org.eclipse.jpt.jpa.db.internal.DTPForeignKeyWrapper.getLocalColumnPairs(DTPForeignKeyWrapper.java:101)
at org.eclipse.jpt.jpa.db.internal.DTPForeignKeyWrapper.getBaseColumns(DTPForeignKeyWrapper.java:146)
at org.eclipse.jpt.jpa.db.internal.DTPForeignKeyWrapper.baseColumnsContains(DTPForeignKeyWrapper.java:150)
at org.eclipse.jpt.jpa.db.internal.DTPTableWrapper.foreignKeyBaseColumnsContains(DTPTableWrapper.java:216)
at org.eclipse.jpt.jpa.db.internal.DTPColumnWrapper.isPartOfForeignKey(DTPColumnWrapper.java:58)
at org.eclipse.jpt.jpa.gen.internal.ORMGenColumn.isForeignKey(ORMGenColumn.java:266)
at org.eclipse.jpt.jpa.gen.internal.ORMGenTable.buildColumnTypesMap(ORMGenTable.java:206)
at org.eclipse.jpt.jpa.gen.internal.ORMGenTable.getImportStatements(ORMGenTable.java:138)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.apache.velocity.runtime.parser.node.PropertyExecutor.execute(PropertyExecutor.java:137)
at org.apache.velocity.util.introspection.UberspectImpl$VelGetterImpl.invoke(UberspectImpl.java:350)
at org.apache.velocity.runtime.parser.node.ASTIdentifier.execute(ASTIdentifier.java:180)
... 14 more
This is a bug in Eclipse Dali (https://bugs.eclipse.org/bugs/show_bug.cgi?id=281991), which is the result of a bug in Eclipse DTP (https://bugs.eclipse.org/bugs/show_bug.cgi?id=282206). There has been some recent development activity in DTP, after several years of nearly complete inactivity; so, maybe if you add your vote to the latter bug, it might get fixed. :-)
It was a permissions problem with the database. I had not restarted the database connection in Eclipse after granting more rights to the user. Once I did that, everything worked.
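For anyone hitting the same wall, a hedged illustration of the kind of SQL Server grants that expose table and foreign-key metadata to tooling (the user name is hypothetical, and the exact permissions needed depend on your setup):
-- VIEW DEFINITION lets tools read table and foreign-key metadata
GRANT VIEW DEFINITION ON SCHEMA::dbo TO pointage_user;
-- SELECT lets the generation wizard inspect the columns
GRANT SELECT ON SCHEMA::dbo TO pointage_user;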
I've set up a connection from localhost to the Dev database on Heroku (as described in: Errors in evolutions on Heroku) and I am receiving the following error after trying to apply evolutions a couple of times:
SQLException: Unable to open a test connection to the given database. JDBC url = [URL], username = null. Terminating connection pool.
Original Exception: org.postgresql.util.PSQLException: FATAL: too many connections for role "ntnkypawxazhwo"
at org.postgresql.core.v3.ConnectionFactoryImpl.readStartupMessages(ConnectionFactoryImpl.java:469)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:110)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:64)
at org.postgresql.jdbc2.AbstractJdbc2Connection.<init>(AbstractJdbc2Connection.java:123)
at org.postgresql.jdbc3.AbstractJdbc3Connection.<init>(AbstractJdbc3Connection.java:28)
at org.postgresql.jdbc3g.AbstractJdbc3gConnection.<init>(AbstractJdbc3gConnection.java:20)
at org.postgresql.jdbc4.AbstractJdbc4Connection.<init>(AbstractJdbc4Connection.java:30)
at org.postgresql.jdbc4.Jdbc4Connection.<init>(Jdbc4Connection.java:22)
at org.postgresql.Driver.makeConnection(Driver.java:391)
at org.postgresql.Driver.connect(Driver.java:265)
at play.utils.ProxyDriver.connect(ProxyDriver.scala:9)
at java.sql.DriverManager.getConnection(Unknown Source)
at java.sql.DriverManager.getConnection(Unknown Source)
at com.jolbox.bonecp.BoneCP.obtainRawInternalConnection(BoneCP.java:256)
at com.jolbox.bonecp.BoneCP.<init>(BoneCP.java:305)
at com.jolbox.bonecp.BoneCPDataSource.maybeInit(BoneCPDataSource.java:150)
at com.jolbox.bonecp.BoneCPDataSource.getConnection(BoneCPDataSource.java:112)
at play.api.db.DBApi$class.getConnection(DB.scala:64)
at play.api.db.BoneCPApi.getConnection(DB.scala:273)
at play.api.db.evolutions.Evolutions$.databaseEvolutions(Evolutions.scala:306)
at play.api.db.evolutions.Evolutions$.evolutionScript(Evolutions.scala:284)
at play.api.db.evolutions.OfflineEvolutions$.applyScript(Evolutions.scala:452)
at play.core.ReloadableApplication.handleWebCommand(ApplicationProvider.scala:175)
at play.core.server.Server$$anonfun$getHandlerFor$1.apply(Server.scala:86)
at play.core.server.Server$$anonfun$getHandlerFor$1.apply(Server.scala:86)
at scala.util.control.Exception$Catch$$anonfun$either$1.apply(Exception.scala:110)
at scala.util.control.Exception$Catch$$anonfun$either$1.apply(Exception.scala:110)
at scala.util.control.Exception$Catch.apply(Exception.scala:88)
at scala.util.control.Exception$Catch.either(Exception.scala:110)
at play.core.server.Server$class.getHandlerFor(Server.scala:86)
at play.core.server.NettyServer.getHandlerFor(NettyServer.scala:38)
at play.core.server.netty.PlayDefaultUpstreamHandler.messageReceived(PlayDefaultUpstreamHandler.scala:226)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:558)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:777)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.unfoldAndFireMessageReceived(ReplayingDecoder.java:522)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:501)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:438)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:558)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:553)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:343)
at org.jboss.netty.channel.socket.nio.NioWorker.processSelectedKeys(NioWorker.java:274)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:194)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:102)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Dev databases have a fixed number of available connections (20 or so). How can I make sure I am properly closing my connections?
You can use Play's JDBC settings to reduce the number of connections. Try setting just one partition to start:
db.default.partitionCount=1
and keep tweaking to limit the idle time and the number of connections per partition.
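A sketch of the relevant BoneCP-era pool settings in conf/application.conf (partitionCount is from above; the other keys are standard Play 2/BoneCP settings, and the values are assumptions sized for a roughly 20-connection dev database):
db.default.partitionCount=1
# at most 1 partition x 5 = 5 connections in the pool
db.default.maxConnectionsPerPartition=5
db.default.minConnectionsPerPartition=1
# release idle connections instead of holding them open
db.default.idleMaxAge=10 minutes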