Functor misses implicit value for parameter instance (only after sbt clean) - shapeless

I did some experiments with Kittens (https://github.com/milessabin/kittens) and am having trouble compiling my code. I receive the following error:
[error] ...danirey\scala\kittens\Kittens.scala:23: could not find implicit value for parameter instance: cats.Functor[danirey.scala.kittens.AdtDefns.Tree]
[error] val funct = Functor[Tree]
[error] ^
[error] one error found
[error] (compile:compileIncremental) Compilation failed
The complete file is as follows:
package danirey.scala.kittens

/**
 * @author Dani
 */

import cats.Functor
import cats.syntax.AllSyntax
import cats.derived.functor._
import legacy._
import cats.derived.iterable.legacy._
import org.typelevel.discipline.scalatest.Discipline
import shapeless.cachedImplicit

object Kittens extends App {
  val ft = new FunctorExperiment()
  ft.print()
}

class FunctorExperiment extends AllSyntax {

  import AdtDefns._

  def print(): Unit = {
    val funct = Functor[Tree]
    val tree: Tree[String] = Node(
      Leaf("Reto"),
      Node(
        Leaf("Sandra"),
        Leaf("Mike")
      )
    )
    println(funct.map(tree)(_.length))
  }
}
I have used almost identical code in a ScalaTest, which compiles without any issues.
package danirey.scala.kittens

import cats.Functor
import cats.syntax.AllSyntax
import cats.derived.functor._
import legacy._
import cats.derived.iterable.legacy._
import org.scalatest.FunSuite
import org.typelevel.discipline.scalatest.Discipline
import shapeless.cachedImplicit

/**
 * @author Dani
 */
class FunctorExperimentTest extends FunSuite with Discipline with AllSyntax {

  import AdtDefns._

  test("functors experiment") {
    val funct = Functor[Tree]
    val tree: Tree[String] = Node(
      Leaf("Reto"),
      Node(
        Leaf("Sandra"),
        Leaf("Mike")
      )
    )
    println(funct.map(tree)(_.length))
  }
}
My build.sbt looks as follows:
name := "shapeless-experiments"

version := "1.0-SNAPSHOT"

scalaVersion := "2.11.8"

exportJars := true

libraryDependencies ++= Seq(
  "com.chuusai" % "shapeless_2.11" % "2.3.0",
  "org.typelevel" % "kittens_2.11" % "1.0.0-M2",
  "org.scalatest" %% "scalatest" % "3.0.0-M7" % "test"
)

scalacOptions ++= Seq(
  "-feature",
  "-language:higherKinds",
  "-language:implicitConversions",
  "-unchecked"
)
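For reference, the shapeless and kittens dependencies can equivalently be written with sbt's %% operator, which appends the Scala binary version automatically (these resolve to the same _2.11 artifacts as the explicit suffixes above):
libraryDependencies ++= Seq(
  "com.chuusai" %% "shapeless" % "2.3.0",
  "org.typelevel" %% "kittens" % "1.0.0-M2",
  "org.scalatest" %% "scalatest" % "3.0.0-M7" % "test"
)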
The most interesting thing is that it compiles as part of an incremental compile.
If I comment out lines 16, 23 and 32 and execute "sbt compile",
then remove the comments again and execute "sbt compile/package", it compiles and I can even execute the program. But as soon as I run "sbt clean", it will not compile anymore.
The AdtDefns object is basically a copy of https://github.com/milessabin/kittens/blob/master/core/src/test/scala/cats/derived/adtdefns.scala
The relevant part is:
object AdtDefns {
  sealed trait Tree[T]
  final case class Leaf[T](t: T) extends Tree[T]
  final case class Node[T](l: Tree[T], r: Tree[T]) extends Tree[T]
}
PS: It would be nice if someone could create a tag for scala-kittens.

@DaniRey We use kittens in our projects, but only the sequence part. I am not aware of any project that uses kittens derivation. What's your use case?
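One thing worth trying (a sketch only, untested against your setup): pin the derived instance with the shapeless.cachedImplicit you already import, so that the derivation is resolved once, in a stable val, rather than at each summon site:
// Sketch (untested): cache the derived Functor in a stable implicit val.
// `treeFunctor` is an illustrative name, not from the original code.
implicit val treeFunctor: Functor[Tree] = cachedImplicit
val funct = Functor[Tree] // now resolves to treeFunctor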

Related

results of simple sql query with Flink 1.12.2 are type of org.apache.flink.table.data.binary.BinaryRowData

I am using Flink 1.12.2, and I wrote a simple Flink application to try the integration between the DataStream and Table APIs. The code is as follows:
package org.example

import org.apache.flink.api.common.typeinfo.TypeInformation
import org.apache.flink.streaming.api.scala._
import org.apache.flink.table.api.{EnvironmentSettings, FieldExpression}
import org.apache.flink.table.api.bridge.scala._

object FlinkDemo {
  def main(args: Array[String]): Unit = {
    implicit val typeInfo = TypeInformation.of(classOf[Int])
    val fsSettings = EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build()
    val fsEnv = StreamExecutionEnvironment.getExecutionEnvironment
    val fsTableEnv = StreamTableEnvironment.create(fsEnv, fsSettings)
    fsEnv.setParallelism(2)

    val ds: DataStream[Int] = fsEnv.fromElements(Seq(1, 2, 3, 4, 5): _*)
    fsTableEnv.createTemporaryView("t", ds)
    fsTableEnv.from("t").printSchema()

    val table = fsTableEnv.sqlQuery("select f0 from t")
    table.toAppendStream[Int].print()
    fsEnv.execute()
  }
}
The result is very surprising: it is not 1, 2, 3, 4, 5 but as follows. I am not sure where the problem is, could you please help me out?
2> org.apache.flink.table.data.binary.BinaryRowData@4fe4bbf0
1> org.apache.flink.table.data.binary.BinaryRowData@5759f99e
2> org.apache.flink.table.data.binary.BinaryRowData@564770ca
1> org.apache.flink.table.data.binary.BinaryRowData@d206e547
1> org.apache.flink.table.data.binary.BinaryRowData@f8248f4

PyFlink - Issue using Scala UDF in JAR

I'm trying to register a Scala UDF in PyFlink using an external JAR as follows, but I get the error below.
Scala UDF:
package com.dummy

import org.apache.flink.table.functions.ScalarFunction

class dummyTransform(factor: Int) extends ScalarFunction {
  def eval(s: String): Int = {
    s.hashCode()
  }
}
build.sbt:
name := "hello_scala_for_flink"
version := "0.1"
scalaVersion := "2.12.11"
libraryDependencies += "org.apache.flink" % "flink-table-common" % "1.11.2" % "provided"
assembly.sbt:
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.5")
Scala project structure:
hello_scala_for_flink/
  project/
  src/
    main/
      resources/
      scala/
        com.dummy/
          dummyTransform
          hello_scala
    test/
  target/
  build.sbt
shell:
cd hello_scala_for_flink
sbt assembly
cp ./target/scala-2.12/hello_scala_for_flink-assembly-0.1.jar /Users/py-r/opt/anaconda3/envs/venv_pyflink_37/lib/python3.7/site-packages/pyflink/lib/
Python:
from pyflink.dataset import ExecutionEnvironment
from pyflink.table import BatchTableEnvironment, TableConfig

exec_env = ExecutionEnvironment.get_execution_environment()
#exec_env = StreamExecutionEnvironment.get_execution_environment()
exec_env.set_parallelism(1)
t_config = TableConfig()
table_env = BatchTableEnvironment.create(exec_env, t_config)
table_env.register_java_function("hash_code", "com.dummy.dummyTransform")
Error (incl. after restarting Anaconda):
Py4JJavaError: An error occurred while calling o12.newInstance.
: java.lang.InstantiationException: com.dummy.dummyTransform
at java.base/java.lang.Class.newInstance(Class.java:598)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at org.apache.flink.api.python.shaded.py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at org.apache.flink.api.python.shaded.py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at org.apache.flink.api.python.shaded.py4j.Gateway.invoke(Gateway.java:282)
at org.apache.flink.api.python.shaded.py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at org.apache.flink.api.python.shaded.py4j.commands.CallCommand.execute(CallCommand.java:79)
at org.apache.flink.api.python.shaded.py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.base/java.lang.Thread.run(Thread.java:832)
Caused by: java.lang.NoSuchMethodException: com.dummy.dummyTransform.<init>()
at java.base/java.lang.Class.getConstructor0(Class.java:3427)
at java.base/java.lang.Class.newInstance(Class.java:585)
... 11 more
Versions:
jdk = 1.8.0_151.jdk
scala = 2.12.11
python = 3.7
apache-beam = 2.19.0
apache-flink = 1.11.2
Any idea what the issue might be?
Thanks for your support.
Looks like I found the issue myself: the class could not be instantiated reflectively because it lacked a no-argument constructor. Adding one to the above code fixes it:
class dummyTransform(factor: Int) extends ScalarFunction {
  def eval(s: String): Int = {
    s.hashCode() * factor
  }

  def this() = this(1)
}
Also, I changed to Scala 2.11.12 due to another error. Now everything seems to work. Awesome!
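For reference, a minimal sketch of how the registered function can then be called from PyFlink SQL (the table t and column word are made-up names for illustration):
# Hypothetical usage: assumes a table `t` with a string column `word`
# has already been registered in table_env.
result = table_env.sql_query("SELECT hash_code(word) FROM t")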

Slick can't connect to mssql database

I would like to use Slick (3.2.3) to connect to an MSSQL database.
My project currently looks as follows.
In application.conf, I have
somedbname = {
  driver = "slick.jdbc.SQLServerProfile$"
  db {
    host = "somehost"
    port = "someport"
    databaseName = "Recupel.Datawarehouse"
    url = "jdbc:sqlserver://"${somedbname.db.host}":"${somedbname.db.port}";databaseName="${somedbname.db.databaseName}";"
    user = "someuser"
    password = "somepassword"
  }
}
The "somehost" looks like XX.X.XX.XX where X's are numbers.
My build.sbt contains
name := "test-slick"

version := "0.1"

scalaVersion in ThisBuild := "2.12.7"

libraryDependencies ++= Seq(
  "com.typesafe.slick" %% "slick" % "3.2.3",
  "com.typesafe.slick" %% "slick-hikaricp" % "3.2.3",
  "org.slf4j" % "slf4j-nop" % "1.6.4",
  "com.microsoft.sqlserver" % "mssql-jdbc" % "7.0.0.jre10"
)
The file with the "main" object contains
import slick.basic.DatabaseConfig
import slick.jdbc.JdbcProfile
import slick.jdbc.SQLServerProfile.api._

import scala.concurrent.Await
import scala.concurrent.duration._

object Main { // the object declaration was missing from the posted snippet; "Main" is an illustrative name

  val dbConfig: DatabaseConfig[JdbcProfile] = DatabaseConfig.forConfig("somedbname")
  val db: JdbcProfile#Backend#Database = dbConfig.db

  def main(args: Array[String]): Unit = {
    try {
      val future = db.run(sql"SELECT * FROM somettable".as[(Int, String, String, String, String,
        String, String, String, String, String, String, String)])
      println(Await.result(future, 10.seconds))
    } finally {
      db.close()
    }
  }
}
This, according to all the documentation that I could find, should be enough to connect to the database. However, when I run this, I get
[error] (run-main-0) java.sql.SQLTransientConnectionException: somedbname.db - Connection is not available, request timed out after 1004ms.
[error] java.sql.SQLTransientConnectionException: somedbname.db - Connection is not available, request timed out after 1004ms.
[error] at com.zaxxer.hikari.pool.HikariPool.createTimeoutException(HikariPool.java:548)
[error] at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:186)
[error] at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:145)
[error] at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:83)
[error] at slick.jdbc.hikaricp.HikariCPJdbcDataSource.createConnection(HikariCPJdbcDataSource.scala:14)
[error] at slick.jdbc.JdbcBackend$BaseSession.<init>(JdbcBackend.scala:453)
[error] at slick.jdbc.JdbcBackend$DatabaseDef.createSession(JdbcBackend.scala:46)
[error] at slick.jdbc.JdbcBackend$DatabaseDef.createSession(JdbcBackend.scala:37)
[error] at slick.basic.BasicBackend$DatabaseDef.acquireSession(BasicBackend.scala:249)
[error] at slick.basic.BasicBackend$DatabaseDef.acquireSession$(BasicBackend.scala:248)
[error] at slick.jdbc.JdbcBackend$DatabaseDef.acquireSession(JdbcBackend.scala:37)
[error] at slick.basic.BasicBackend$DatabaseDef$$anon$2.run(BasicBackend.scala:274)
[error] at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)
[error] at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
[error] at java.base/java.lang.Thread.run(Thread.java:844)
[error] Nonzero exit code: 1
Perhaps related, and also annoying: when I run this code a second (and any subsequent) time, I get the following error instead:
Failed to get driver instance for jdbcUrl=jdbc:sqlserver://[...]
which forces me to kill and reload sbt each time.
What am I doing wrong? Worth noting: I can connect to the database with the same credentials from a tool like Valentina.
As suggested by @MarkRotteveel, and following this link, I found a solution.
First, I explicitly set the driver, adding the line
driver = "com.microsoft.sqlserver.jdbc.SQLServerDriver"
in the db dictionary, after password = "somepassword".
Secondly, the default timeout (one second) appears to be too short for my purposes, so I added the line
connectionTimeout = "30 seconds"
after the previous driver line, still in the db dictionary.
Now it works.
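Putting the two changes together, the relevant part of application.conf then reads (host, credentials and database name are still the placeholders from above):
somedbname = {
  driver = "slick.jdbc.SQLServerProfile$"
  db {
    host = "somehost"
    port = "someport"
    databaseName = "Recupel.Datawarehouse"
    url = "jdbc:sqlserver://"${somedbname.db.host}":"${somedbname.db.port}";databaseName="${somedbname.db.databaseName}";"
    user = "someuser"
    password = "somepassword"
    driver = "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    connectionTimeout = "30 seconds"
  }
}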

Error when Importing keras in embedded python in C

I'm trying to embed Python in my C application. I downloaded the package from the official Python website and managed to do a simple Hello World.
Now I want to go deeper and use some Python libraries like numpy, keras, tensorflow...
I'm working with Python 3.5.4 and installed all the needed packages on my PC with pip3:
pip3 install keras
pip3 install tensorflow
...
Then I created my script and launched it in a Python environment; it works fine:
Python:
# Importing the libraries
#
import numpy as np
import pandas as pd
dataset2 = pd.read_csv('I:\RNA\dataset19.csv')
X_test = dataset2.iloc[:, 0:228].values
y_test = dataset2.iloc[:, 228].values
# 2.
import pickle
sc = pickle.load(open('I:\RNA\isVerb_sc', 'rb'))
X_test = sc.transform(X_test)
# 3.
import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
classifier = Sequential()
classifier.add(Dense(units = 114, kernel_initializer = 'uniform', activation = 'relu', input_dim = 228))
classifier.add(Dropout(p = 0.3))
classifier.add(Dense(units = 114, kernel_initializer = 'uniform', activation = 'relu'))
classifier.add(Dropout(p = 0.3))
classifier.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'sigmoid'))
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
classifier.load_weights('I:\RNA\isVerb_weights.h5')
y_pred = classifier.predict(X_test)
y_pred1 = (y_pred > 0.5)
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred1)
But when I execute the same script in a C environment with embedded Python, it doesn't work.
At first I executed my script directly with PyRun_SimpleFile with no luck, so I sliced it into multiple instructions with PyRun_SimpleString to pinpoint the problem:
C:
result = PyRun_SimpleString("import numpy as np"); // result = 0 (ok)
result = PyRun_SimpleString("import pandas as pd"); // result = 0 (ok)
...
result = PyRun_SimpleString("import pickle"); // result = 0 (ok)
... (all instructions above work)
result = PyRun_SimpleString("import keras"); // result = -1 !!
... (everything after this fails)
but there is not a single stack trace for this error. I tried this, but I just got:
"Here's the output: (null)"
My initialization of Python in C seems correct, since other libraries import fine:
// Python
wchar_t *stdProgramName = L"I:\\LIBs\\cpython354";
Py_SetProgramName(stdProgramName);
wchar_t *stdPythonHome = L"I:\\LIBs\\cpython354";
Py_SetPythonHome(stdPythonHome);
wchar_t *stdlib = L"I:\\LIBs\\cpython354;I:\\LIBs\\cpython354\\Lib\\python35.zip;I:\\LIBs\\cpython354\\Lib;I:\\LIBs\\cpython354\\DLLs;I:\\LIBs\\cpython354\\Lib\\site-packages";
Py_SetPath(stdlib);
// Initialize Python
Py_Initialize();
From a Python prompt, the line import keras takes some time (about 3 s) but works (there is a warning, but I found no harm in it):
>>> import keras
I:\LIBs\cpython354\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
Using TensorFlow backend.
>>>
I'm at a loss now; I don't know where to look since there is no stack trace.
It seems that when you import keras, it executes this line:
sys.stderr.write('Using TensorFlow backend.\n')
but sys.stderr is not defined in embedded Python on Windows.
A simple correction is to define sys.stderr, for example:
import sys

class CatchOutErr:
    def __init__(self):
        self.value = ''

    def write(self, txt):
        self.value += txt

catchOutErr = CatchOutErr()
sys.stderr = catchOutErr
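In the embedded case, the same fix can be applied from C before the failing import, in the style of the PyRun_SimpleString calls above (a sketch; error handling omitted):
/* Define a minimal sys.stderr so that keras' startup write succeeds. */
PyRun_SimpleString(
    "import sys\n"
    "class CatchOutErr:\n"
    "    def __init__(self):\n"
    "        self.value = ''\n"
    "    def write(self, txt):\n"
    "        self.value += txt\n"
    "sys.stderr = CatchOutErr()\n");
result = PyRun_SimpleString("import keras"); /* should now return 0 */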

Try statement in Cython for cimport (for use with mpi4py)

Is there a way to have the equivalent of the Python try statement in Cython for cimport?
Something like this:
try:
    cimport something
except ImportError:
    pass
I would need this to write a Cython extension that can be compiled with or without mpi4py. This is very standard in compiled languages, where the MPI commands can be put between #ifdef and #endif preprocessor directives. How can we obtain the same result in Cython?
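For comparison, this is the C idiom I mean (an illustrative sketch; USE_MPI would be defined, or not, by the build system):
/* Illustrative only: the MPI call is compiled in only when USE_MPI is defined. */
#ifdef USE_MPI
#include <mpi.h>
#endif

int get_rank(void)
{
    int rank = 0;
#ifdef USE_MPI
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
#endif
    return rank;
}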
I tried this but it does not work:
try:
    from mpi4py import MPI
    from mpi4py cimport MPI
    from mpi4py.mpi_c cimport *
except ImportError:
    rank = 0
    nb_proc = 1

# solve an incompatibility between openmpi and mpi4py versions
cdef extern from 'mpi-compat.h': pass

does_it_work = 'Not yet'
Actually, it works well if mpi4py is correctly installed, but if import mpi4py raises an ImportError, the Cython file does not compile and I get the error:
Error compiling Cython file:
------------------------------------------------------------
...
try:
from mpi4py import MPI
from mpi4py cimport MPI
^
------------------------------------------------------------
mod.pyx:4:4: 'mpi4py.pxd' not found
The file setup.py:
from setuptools import setup, Extension
from Cython.Distutils import build_ext
import os

here = os.path.abspath(os.path.dirname(__file__))
include_dirs = [here]

try:
    import mpi4py
except ImportError:
    pass
else:
    INCLUDE_MPI = '/usr/lib/openmpi/include'
    include_dirs.extend([
        INCLUDE_MPI,
        mpi4py.get_include()])

name = 'mod'
ext = Extension(
    name,
    include_dirs=include_dirs,
    sources=['mod.pyx'])

setup(name=name,
      cmdclass={"build_ext": build_ext},
      ext_modules=[ext])
Using a try-except block in this way is something you won't be able to do.
The extension module you are making must be statically compiled and linked against the things it cimports at the C level. A try-except block is executed when the module is imported, not when it is compiled.
On the other hand, in theory, you should be able to get the effect you're looking for using Cython's support for conditional compilation.
In your setup.py file you can check whether the needed modules are present and then define compile-time environment variables to be passed to the Cython compiler.
There's an example of how to do this in one of Cython's tests.
There they pass a dictionary containing the desired environment variables to the constructor of Cython's Extension class as the keyword argument pyrex_compile_time_env (since renamed to cython_compile_time_env; for Cython.Build.Dependencies.cythonize it is called compile_time_env).
Thank you for your very useful answer @IanH. I include an example to show what it gives.
The file setup.py:
from setuptools import setup
from Cython.Distutils.extension import Extension
from Cython.Distutils import build_ext
import os

here = os.path.abspath(os.path.dirname(__file__))

import numpy as np
include_dirs = [here, np.get_include()]

try:
    import mpi4py
except ImportError:
    MPI4PY = False
else:
    MPI4PY = True
    INCLUDE_MPI = '/usr/lib/openmpi/include'
    include_dirs.extend([
        INCLUDE_MPI,
        mpi4py.get_include()])

name = 'mod'
ext = Extension(
    name,
    include_dirs=include_dirs,
    cython_compile_time_env={'MPI4PY': MPI4PY},
    sources=['mod.pyx'])

setup(name=name,
      cmdclass={"build_ext": build_ext},
      ext_modules=[ext])

if not MPI4PY:
    print('Warning: since importing mpi4py raises an ImportError,\n'
          '         the extensions are compiled without mpi and\n'
          '         will work only sequentially.')
And the file mod.pyx, with a little bit of real mpi commands:
import numpy as np
cimport numpy as np

try:
    from mpi4py import MPI
except ImportError:
    nb_proc = 1
    rank = 0
else:
    comm = MPI.COMM_WORLD
    nb_proc = comm.size
    rank = comm.Get_rank()

IF MPI4PY:
    from mpi4py cimport MPI
    from mpi4py.mpi_c cimport *

    # solve an incompatibility between openmpi and mpi4py versions
    cdef extern from 'mpi-compat.h': pass

    print('mpi4py ok')
ELSE:
    print('no mpi4py')

n = 8
if n % nb_proc != 0:
    raise ValueError('The number of processes is incorrect.')

if rank == 0:
    data_seq = np.ones([n], dtype=np.int32)
    s_seq = data_seq.sum()
else:
    data_seq = np.zeros([n], dtype=np.int32)

if nb_proc > 1:
    data_local = np.zeros([n/nb_proc], dtype=np.int32)
    comm.Scatter(data_seq, data_local, root=0)
else:
    data_local = data_seq

s = data_local.sum()
if nb_proc > 1:
    s = comm.allreduce(s, op=MPI.SUM)

if rank == 0:
    print('s: {}; s_seq: {}'.format(s, s_seq))
    assert s == s_seq
Build with python setup.py build_ext --inplace and test with python -c "import mod" and mpirun -np 4 python -c "import mod". If mpi4py is not installed, one can still build the module and use it sequentially.
