I've set up a connection from localhost to the Dev database on Heroku (as described in: Errors in evolutions on Heroku) and I am receiving the following error after trying to apply evolutions a couple of times:
SQLException: Unable to open a test connection to the given database. JDBC url = [URL], username = null. Terminating connection pool.
Original Exception: org.postgresql.util.PSQLException: FATAL: too many connections for role "ntnkypawxazhwo"
at org.postgresql.core.v3.ConnectionFactoryImpl.readStartupMessages(ConnectionFactoryImpl.java:469)
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:110)
at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:64)
at org.postgresql.jdbc2.AbstractJdbc2Connection.<init>(AbstractJdbc2Connection.java:123)
at org.postgresql.jdbc3.AbstractJdbc3Connection.<init>(AbstractJdbc3Connection.java:28)
at org.postgresql.jdbc3g.AbstractJdbc3gConnection.<init>(AbstractJdbc3gConnection.java:20)
at org.postgresql.jdbc4.AbstractJdbc4Connection.<init>(AbstractJdbc4Connection.java:30)
at org.postgresql.jdbc4.Jdbc4Connection.<init>(Jdbc4Connection.java:22)
at org.postgresql.Driver.makeConnection(Driver.java:391)
at org.postgresql.Driver.connect(Driver.java:265)
at play.utils.ProxyDriver.connect(ProxyDriver.scala:9)
at java.sql.DriverManager.getConnection(Unknown Source)
at java.sql.DriverManager.getConnection(Unknown Source)
at com.jolbox.bonecp.BoneCP.obtainRawInternalConnection(BoneCP.java:256)
at com.jolbox.bonecp.BoneCP.<init>(BoneCP.java:305)
at com.jolbox.bonecp.BoneCPDataSource.maybeInit(BoneCPDataSource.java:150)
at com.jolbox.bonecp.BoneCPDataSource.getConnection(BoneCPDataSource.java:112)
at play.api.db.DBApi$class.getConnection(DB.scala:64)
at play.api.db.BoneCPApi.getConnection(DB.scala:273)
at play.api.db.evolutions.Evolutions$.databaseEvolutions(Evolutions.scala:306)
at play.api.db.evolutions.Evolutions$.evolutionScript(Evolutions.scala:284)
at play.api.db.evolutions.OfflineEvolutions$.applyScript(Evolutions.scala:452)
at play.core.ReloadableApplication.handleWebCommand(ApplicationProvider.scala:175)
at play.core.server.Server$$anonfun$getHandlerFor$1.apply(Server.scala:86)
at play.core.server.Server$$anonfun$getHandlerFor$1.apply(Server.scala:86)
at scala.util.control.Exception$Catch$$anonfun$either$1.apply(Exception.scala:110)
at scala.util.control.Exception$Catch$$anonfun$either$1.apply(Exception.scala:110)
at scala.util.control.Exception$Catch.apply(Exception.scala:88)
at scala.util.control.Exception$Catch.either(Exception.scala:110)
at play.core.server.Server$class.getHandlerFor(Server.scala:86)
at play.core.server.NettyServer.getHandlerFor(NettyServer.scala:38)
at play.core.server.netty.PlayDefaultUpstreamHandler.messageReceived(PlayDefaultUpstreamHandler.scala:226)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:558)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:777)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.unfoldAndFireMessageReceived(ReplayingDecoder.java:522)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:501)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:438)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:75)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:558)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:553)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:343)
at org.jboss.netty.channel.socket.nio.NioWorker.processSelectedKeys(NioWorker.java:274)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:194)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:102)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Dev databases have a fixed number of available connections (20 or so). How can I make sure I am properly closing my connections?
You can use Play's JDBC settings to reduce the number of connections. Try starting with a single partition:
db.default.partitionCount=1
and keep tweaking to limit the idle time and the number of connections per partition.
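For reference, here is a minimal sketch of the relevant application.conf settings (the exact numbers are assumptions to tune for your app; the total pool size is roughly partitionCount × maxConnectionsPerPartition, which you want to keep well under the dev database's ~20-connection limit):
# a sketch; tune these numbers for your application
db.default.partitionCount=1
db.default.maxConnectionsPerPartition=5
db.default.minConnectionsPerPartition=1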
Related
I'm getting random "connection is closed" errors in Teiid 13.1.0 connecting to SQL Server:
2021-01-08 10:20:23,949 DEBUG [org.teiid.COMMAND_LOG.SOURCE] (Worker513_QueryProcessorQueue9800) Cz9nti5G/vUr ERROR SRC COMMAND: endTime=2021-01-08 10:20:23.949 requestID=Cz9nti5G/vUr.0 sourceCommandID=0 executionID=9632 txID=null modelName=customer translatorName=sqlserver sessionID=Cz9nti5G/vUr principal=sforce-app-user
2021-01-08 10:20:23,949 WARN [org.teiid.CONNECTOR] (Worker513_QueryProcessorQueue9800) Cz9nti5G/vUr Connector worker process failed for atomic-request=Cz9nti5G/vUr.0.0.9632: org.teiid.translator.jdbc.JDBCExecutionException: 0 TEIID11008:TEIID11004 Error executing statement(s): [Prepared Values: ['(111)111-1111'] SQL: SELECT g_0.Id AS c_0, g_0.Email AS c_1, g_0.Phone AS c_2, g_0.parent AS c_3 FROM Customer g_0 WHERE g_0.Phone = ? ORDER BY c_0 OFFSET 0 ROWS FETCH NEXT 2001 ROWS ONLY]
at org.teiid.translator.jdbc.JDBCQueryExecution.execute(JDBCQueryExecution.java:127)
at org.teiid.dqp.internal.datamgr.ConnectorWorkItem.execute(ConnectorWorkItem.java:402)
at sun.reflect.GeneratedMethodAccessor101.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.teiid.dqp.internal.datamgr.ConnectorManager$1.invoke(ConnectorManager.java:228)
at com.sun.proxy.$Proxy44.execute(Unknown Source)
at org.teiid.dqp.internal.process.DataTierTupleSource.getResults(DataTierTupleSource.java:302)
at org.teiid.dqp.internal.process.DataTierTupleSource$1.call(DataTierTupleSource.java:108)
at org.teiid.dqp.internal.process.DataTierTupleSource$1.call(DataTierTupleSource.java:104)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.teiid.dqp.internal.process.FutureWork.run(FutureWork.java:59)
at org.teiid.dqp.internal.process.DQPWorkContext.runInContext(DQPWorkContext.java:281)
at org.teiid.dqp.internal.process.ThreadReuseExecutor$RunnableWrapper.run(ThreadReuseExecutor.java:124)
at org.teiid.dqp.internal.process.ThreadReuseExecutor$2.run(ThreadReuseExecutor.java:212)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: The connection is closed.
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDriverError(SQLServerException.java:234)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.checkClosed(SQLServerConnection.java:1130)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.prepareStatement(SQLServerConnection.java:3536)
at org.jboss.jca.adapters.jdbc.BaseWrapperManagedConnection.doPrepareStatement(BaseWrapperManagedConnection.java:758)
at org.jboss.jca.adapters.jdbc.BaseWrapperManagedConnection.prepareStatement(BaseWrapperManagedConnection.java:744)
at org.jboss.jca.adapters.jdbc.WrappedConnection$4.produce(WrappedConnection.java:478)
at org.jboss.jca.adapters.jdbc.WrappedConnection$4.produce(WrappedConnection.java:476)
at org.jboss.jca.adapters.jdbc.SecurityActions.executeInTccl(SecurityActions.java:97)
at org.jboss.jca.adapters.jdbc.WrappedConnection.prepareStatement(WrappedConnection.java:476)
at org.teiid.translator.jdbc.JDBCBaseExecution.getPreparedStatement(JDBCBaseExecution.java:198)
at org.teiid.translator.jdbc.JDBCQueryExecution.execute(JDBCQueryExecution.java:117)
... 17 more
2021-01-08 10:20:23,949 DEBUG [jboss.jdbc.spy] (default task-88) Cz9nti5G/vUr java:/datasources/DATASOURCE [Connection] close()
2021-01-08 10:20:23,949 DEBUG [org.jboss.jca.core.connectionmanager.pool.strategy.OnePool] (default task-88) Cz9nti5G/vUr DATASOURCE: returnConnection(714b1b5a, false) [1/20]
Originally I was seeing this when the SQL Server was restarted: Teiid was not validating the connections in the pool and I'd have to restart Teiid to get connections back. To fix this I added
<pool>
<flush-strategy>EntirePool</flush-strategy>
</pool>
which I tested and it worked. However, I am still getting "The connection is closed" errors at random times.
SQL Server marks the connections as idle after 10 minutes. I do not have an <idle-timeout-minutes> on my data source.
My configuration is:
<datasource jta="true" jndi-name="java:/datasources/DATASOURCE" pool-name="DATASOURCE" enabled="true" spy="true" use-ccm="false" statistics-enabled="true">
<connection-url>jdbc:sqlserver://1.1.1.1:1433;DatabaseName=DATABASE</connection-url>
<driver-class>com.microsoft.sqlserver.jdbc.SQLServerDriver</driver-class>
<driver>mssql-jdbc-8.2.0.jre8.jar</driver>
<pool>
<flush-strategy>EntirePool</flush-strategy>
</pool>
<security>
<user-name>USERNAME</user-name>
<password>PASSWORD</password>
</security>
<validation>
<valid-connection-checker class-name="org.jboss.jca.adapters.jdbc.extensions.mssql.MSSQLValidConnectionChecker"/>
<background-validation>false</background-validation>
</validation>
</datasource>
Any idea why Teiid isn't validating and rebuilding the pool when this happens? If it can detect dead connections when I reboot the SQL Server, why can it not detect the dead connections when this random unknown event happens?
How can I investigate further? I have no visibility into why the connections die randomly every few days, and I don't know whether CCM would help debug this or whether I should be monitoring with netstat.
Teiid does not maintain the connection pools; the WildFly server does. Teiid just requests a connection and uses whatever is returned, which could be a closed connection if the pool is not validated.
The validation settings above look correct. Alternatively, you can follow the validation technique described here [1]:
<validation>
<check-valid-connection-sql>select 1</check-valid-connection-sql>
<validate-on-match>false</validate-on-match>
<background-validation>true</background-validation>
<background-validation-millis>10000</background-validation-millis>
</validation>
[1] http://www.mastertheboss.com/jboss-server/jboss-datasource/how-to-automatically-reconnect-to-the-database-in-wildfly
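Also, since SQL Server idles connections out after 10 minutes, it may help (an assumption to verify in your environment) to add a <timeout> element so that WildFly retires connections before the server kills them, for example:
<timeout>
<idle-timeout-minutes>5</idle-timeout-minutes>
</timeout>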
I'm using PySpark with Spark 2.2 and Python 2.7 in a cluster of 20 nodes. I'm loading data into a dataframe from cloud blob storage (see the summarized code below) and then attempting to write it into my SQL Server database with df.write.jdbc(...). However, during the write I get the following error:
py4j.protocol.Py4JJavaError: An error occurred while calling o67.jdbc.
: com.microsoft.sqlserver.jdbc.SQLServerException: The statement failed. Column 'my_col' has a data type that cannot participate in a columnstore index.
The df's schema is as follows:
root
|-- my_col: string (nullable = true)
|-- my_other_col: string (nullable = true)
...
This post has me believing that df.write.jdbc(...) may be trying to create a columnstore index for all columns in the df when it writes. Unfortunately, I do not know how to stop Spark from doing this so I can mitigate the issue.
A summarized version of my code looks like this:
spark = (pyspark.sql.SparkSession.builder
         .appName('my-app').getOrCreate())
df = (spark.read.format('com.databricks.spark.avro')
      .options(inferSchema=True, header=True)
      .load(blob_storage_path)).repartition(num_partitions)
df.write.jdbc(url=jdbc_url, table=table, mode='overwrite',
              properties=jdbc_properties)
Here's the full stack trace:
py4j.protocol.Py4JJavaError: An error occurred while calling o68.jdbc.
: com.microsoft.sqlserver.jdbc.SQLServerException: The statement failed. Column 'my_col' has a data type that cannot participate in a columnstore index.
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:217)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1655)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.doExecuteStatement(SQLServerStatement.java:885)
at com.microsoft.sqlserver.jdbc.SQLServerStatement$StmtExecCmd.doExecute(SQLServerStatement.java:778)
at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7505)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:2445)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:191)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:166)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeUpdate(SQLServerStatement.java:703)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.createTable(JdbcUtils.scala:805)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:90)
at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:472)
at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:48)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:138)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:135)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:116)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:92)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:92)
at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:610)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:233)
at org.apache.spark.sql.DataFrameWriter.jdbc(DataFrameWriter.scala:461)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:280)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:214)
at java.lang.Thread.run(Thread.java:748)
Thanks in advance for any help you can provide!
What SQL Server version do you have? There are restrictions on the data types that can participate in a columnstore index in versions earlier than 2017; read the Restrictions section of this article: https://learn.microsoft.com/en-us/sql/t-sql/statements/create-columnstore-index-transact-sql What you can do, I guess, is drop the columnstore index or migrate to SQL Server 2017.
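If it is the table creation in overwrite mode that picks up a columnstore default (as happens on servers where new tables default to a clustered columnstore index), a possible workaround (a sketch, not a verified fix) is Spark's createTableOptions writer option, available since Spark 2.1, which appends text to the generated CREATE TABLE statement:
# A sketch: 'WITH (HEAP)' is Azure SQL DW syntax for a non-columnstore
# table; adjust the string to whatever your server accepts.
(df.write.format('jdbc')
 .option('url', jdbc_url)  # your JDBC URL and credentials
 .option('dbtable', table)
 .option('createTableOptions', 'WITH (HEAP)')
 .mode('overwrite')
 .save())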
Hi, I am getting the following stack trace when I execute these lines of code:
transactionDF.write.format("jdbc")
  .option("url", SqlServerUri)
  .option("driver", driver)
  .option("dbtable", fullQualifiedName)
  .option("user", SqlServerUser)
  .option("password", SqlServerPassword)
  .mode(SaveMode.Append)
  .save()
The following is the stack trace:
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply_3$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
at org.apache.spark.sql.execution.LocalTableScanExec$$anonfun$1.apply(LocalTableScanExec.scala:41)
at org.apache.spark.sql.execution.LocalTableScanExec$$anonfun$1.apply(LocalTableScanExec.scala:41)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at org.apache.spark.sql.execution.LocalTableScanExec.<init>(LocalTableScanExec.scala:41)
at org.apache.spark.sql.execution.SparkStrategies$BasicOperators$.apply(SparkStrategies.scala:394)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:62)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:62)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:92)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:77)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:74)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1336)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:74)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:66)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:92)
at org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:84)
at org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:80)
at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:89)
at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:89)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$toString$3.apply(QueryExecution.scala:237)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$toString$3.apply(QueryExecution.scala:237)
at org.apache.spark.sql.execution.QueryExecution.stringOrError(QueryExecution.scala:112)
at org.apache.spark.sql.execution.QueryExecution.toString(QueryExecution.scala:237)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:54)
at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2788)
at org.apache.spark.sql.Dataset.foreachPartition(Dataset.scala:2319)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.saveTable(JdbcUtils.scala:670)
at org.apache.spark.sql.execution.datasources.jdbc.JdbcRelationProvider.createRelation(JdbcRelationProvider.scala:77)
at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:518)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:215)
at com.test.spark.jobs.ingestion.test$.main(test.scala:193)
at com.test.spark.jobs.ingestion.test.main(test.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:743)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
I tried debugging it, and I believe the query execution is throwing a NullPointerException.
I am not sure what it means. I am running this on my local machine, not on a cluster.
Any help will be appreciated.
I figured it out (at least I think this is the reason). For others facing a similar situation: while I was creating the table, I made every column nullable, so I assumed it would allow null insertion into the table. But the Avro schema I was building the dataframe from had nullable = false. So the dataframe creation was reading nulls and hence raising an NPE. The error was raised when I did Dataframe.write (which made me think it was a JDBC error), but the actual NPE happened while creating the dataframe.
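For anyone who needs a concrete fix, here is a minimal sketch of the idea in PySpark (the same approach works in Scala): rebuild the schema with every field marked nullable before writing.
from pyspark.sql.types import StructType, StructField

# Re-declare every field as nullable, then re-apply the schema.
nullable_schema = StructType(
    [StructField(f.name, f.dataType, nullable=True) for f in df.schema.fields])
df = spark.createDataFrame(df.rdd, nullable_schema)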
I cannot generate entities from tables in Eclipse when a foreign key is involved. I am using SQL Server. Here is the schema of my tables:
There's a foreign key between Pointage.Id and Pause.IdPointage. The generation fails when I try to generate the two tables: Pointage is generated but not Pause. In the last screen of the generation wizard, the Pause table looks empty (no columns). It also fails when I import just Pause, but it works well when I delete the foreign key.
I can see this error in the workspace log:
org.apache.velocity.exception.MethodInvocationException: Invocation of method 'getImportStatements' in class org.eclipse.jpt.jpa.gen.internal.ORMGenTable threw exception java.lang.IllegalStateException: Pause.FK_Pause_Pointage - mismatched sizes: 0 vs. 1 # main.java.vm[7,9]
at org.apache.velocity.runtime.parser.node.ASTIdentifier.execute(ASTIdentifier.java:205)
at org.apache.velocity.runtime.parser.node.ASTReference.execute(ASTReference.java:203)
at org.apache.velocity.runtime.parser.node.ASTReference.render(ASTReference.java:294)
at org.apache.velocity.runtime.parser.node.SimpleNode.render(SimpleNode.java:318)
at org.apache.velocity.Template.merge(Template.java:254)
at org.apache.velocity.app.VelocityEngine.mergeTemplate(VelocityEngine.java:508)
at org.apache.velocity.app.VelocityEngine.mergeTemplate(VelocityEngine.java:473)
at org.eclipse.jpt.jpa.gen.internal.PackageGenerator.generateJavaFile(PackageGenerator.java:333)
at org.eclipse.jpt.jpa.gen.internal.PackageGenerator.generateClass(PackageGenerator.java:310)
at org.eclipse.jpt.jpa.gen.internal.PackageGenerator.generateInternal(PackageGenerator.java:132)
at org.eclipse.jpt.jpa.gen.internal.PackageGenerator.doGenerate(PackageGenerator.java:106)
at org.eclipse.jpt.jpa.gen.internal.PackageGenerator.generate(PackageGenerator.java:82)
at org.eclipse.jpt.jpa.ui.internal.wizards.gen.GenerateEntitiesFromSchemaWizard$GenerateEntitiesJob.runInWorkspace(GenerateEntitiesFromSchemaWizard.java:285)
at org.eclipse.core.internal.resources.InternalWorkspaceJob.run(InternalWorkspaceJob.java:39)
at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)
Caused by: java.lang.IllegalStateException: Pause.FK_Pause_Pointage - mismatched sizes: 0 vs. 1
at org.eclipse.jpt.jpa.db.internal.DTPForeignKeyWrapper.buildColumnPairArray(DTPForeignKeyWrapper.java:116)
at org.eclipse.jpt.jpa.db.internal.DTPForeignKeyWrapper.getColumnPairArray(DTPForeignKeyWrapper.java:106)
at org.eclipse.jpt.jpa.db.internal.DTPForeignKeyWrapper.getLocalColumnPairs(DTPForeignKeyWrapper.java:101)
at org.eclipse.jpt.jpa.db.internal.DTPForeignKeyWrapper.getBaseColumns(DTPForeignKeyWrapper.java:146)
at org.eclipse.jpt.jpa.db.internal.DTPForeignKeyWrapper.baseColumnsContains(DTPForeignKeyWrapper.java:150)
at org.eclipse.jpt.jpa.db.internal.DTPTableWrapper.foreignKeyBaseColumnsContains(DTPTableWrapper.java:216)
at org.eclipse.jpt.jpa.db.internal.DTPColumnWrapper.isPartOfForeignKey(DTPColumnWrapper.java:58)
at org.eclipse.jpt.jpa.gen.internal.ORMGenColumn.isForeignKey(ORMGenColumn.java:266)
at org.eclipse.jpt.jpa.gen.internal.ORMGenTable.buildColumnTypesMap(ORMGenTable.java:206)
at org.eclipse.jpt.jpa.gen.internal.ORMGenTable.getImportStatements(ORMGenTable.java:138)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.apache.velocity.runtime.parser.node.PropertyExecutor.execute(PropertyExecutor.java:137)
at org.apache.velocity.util.introspection.UberspectImpl$VelGetterImpl.invoke(UberspectImpl.java:350)
at org.apache.velocity.runtime.parser.node.ASTIdentifier.execute(ASTIdentifier.java:180)
... 14 more
This is a bug in Eclipse Dali (https://bugs.eclipse.org/bugs/show_bug.cgi?id=281991), which is the result of a bug in Eclipse DTP (https://bugs.eclipse.org/bugs/show_bug.cgi?id=282206). There has been some recent development activity in DTP, after several years of nearly complete inactivity; so, maybe if you add your vote to the latter bug, it might get fixed. :-)
It was a permissions problem with the database. I had not restarted the database connection in Eclipse after granting more rights to the user. After doing this, everything worked.
I created a multi-node Cassandra cluster: node0 and node1 (Cassandra version 1.1.1). Then I used cassandra-cli to connect to node0 and create a column family. Everything looks fine on node0, but node1 throws an exception like this:
ERROR 18:22:48,486 Exception in thread Thread[MigrationStage:1,5,main]
org.apache.cassandra.db.marshal.MarshalException: invalid UTF8 bytes 4fd5c745
at org.apache.cassandra.db.marshal.UTF8Type.getString(UTF8Type.java:56)
at org.apache.cassandra.cql3.ColumnIdentifier.<init>(ColumnIdentifier.java:47)
at org.apache.cassandra.cql3.CFDefinition.getKeyId(CFDefinition.java:125)
at org.apache.cassandra.cql3.CFDefinition.<init>(CFDefinition.java:59)
at org.apache.cassandra.config.CFMetaData.updateCfDef(CFMetaData.java:1303)
at org.apache.cassandra.config.CFMetaData.keyAlias(CFMetaData.java:224)
at org.apache.cassandra.config.CFMetaData.fromSchemaNoColumns(CFMetaData.java:1187)
at org.apache.cassandra.config.CFMetaData.fromSchema(CFMetaData.java:1215)
at org.apache.cassandra.config.KSMetaData.deserializeColumnFamilies(KSMetaData.java:291)
at org.apache.cassandra.db.DefsTable.mergeColumnFamilies(DefsTable.java:396)
at org.apache.cassandra.db.DefsTable.mergeSchema(DefsTable.java:271)
at org.apache.cassandra.db.DefsTable.mergeRemoteSchema(DefsTable.java:249)
at org.apache.cassandra.db.DefinitionsUpdateVerbHandler$1.runMayThrow(DefinitionsUpdateVerbHandler.java:48)
at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
at java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source)
at java.util.concurrent.FutureTask.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
Then I store data into the column family at node0. I can get the data at node0, but node1 says the column family is not found. After I restart node1, I can get the data at node1 just like at node0.
How can I fix this problem?
This is a bug fixed in the soon-to-be-released 1.1.2 version: https://issues.apache.org/jira/browse/CASSANDRA-4307. If you want to test the fix, you can pull the trunk branch from Apache's git repo.