error "can't expand macros compiled by previous versions of Scala". Using sbt and Scala Test, - scalatest

I get the above error when I run 'test' under sbt.
Environment: ScalaTest, sbt version 0.13.8.
In build.sbt I set scalaVersion := "2.10.4" and tried both of the dependency definitions below:
//libraryDependencies += "org.scalatest" % "scalatest_2.11" % "2.2.4" % "test"
libraryDependencies += "org.scalatest" % "scalatest_2.10" % "2.0" % "test"
I refreshed my sbt project after the above changes.
The error still occurs. Can anybody shed some light on this?

After playing around, and with help from a colleague, it turned out that the artifact ID for ScalaTest was incorrect. The working version now picks up the right Scala version automatically (i.e. use GroupID %% ArtifactID % revision INSTEAD OF GroupID % ArtifactID % revision).
libraryDependencies += "org.scalatest" %% "scalatest" % "2.2.4" % Test //note 2.2.2 works too
For more details see: http://www.scala-sbt.org/0.13/tutorial/Library-Dependencies.html
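As a minimal illustration of the difference (assuming scalaVersion := "2.10.4" as above): with %% sbt appends the Scala binary version to the artifact name for you, so the two declarations below resolve to the same scalatest_2.10 artifact, but only the %% form stays in sync if scalaVersion is changed later.
// build.sbt (sketch)
scalaVersion := "2.10.4"
// %% expands to scalatest_2.10 based on scalaVersion above
libraryDependencies += "org.scalatest" %% "scalatest" % "2.2.4" % Test
// equivalent here, but pins the Scala binary version by hand
// libraryDependencies += "org.scalatest" % "scalatest_2.10" % "2.2.4" % Test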


Making Spark and Slick work at the same time

I am writing a program in Scala and I have run into this problem. I have been trying to solve it for days now and it is really getting on my nerves, but who knows, maybe it is not even possible. I am building a GUI with ScalaFX which first shows a login window, where I want to log in against MSSQL; I decided to use Slick for that. It works fine if I assemble everything together with sbt assembly into an executable jar and execute it with
java -jar target/scala-2.12/myjarname.jar
If I don't run sbt assembly, then the only way to make the program work is sbt run.
But later on, in another scene, I want to start a SparkSession where data comes in through Kafka. That part alone works fine: if I comment out the Slick/SQL Server usage, spark-submit with the spark-kafka connector package added works. I am using sbt with Scala.
But as soon as I combine the two, I get this error back:
java.lang.ClassNotFoundException: com.microsoft.sqlserver.jdbc.SQLServerDriver
Just to make sure, I checked that the driver class can be found inside the assembled jar. Am I missing something with the spark-submit? Or are they not able to work at the same time/together in one program? My spark-submit looks like this:
spark-submit --class Login --master local[*] --packages "org.apache.spark:spark-sql-kafka-0-10_2.12:3.1.3" target/scala-2.12/myjarname.jar
I don't know if this is useful or not, but here is my application.conf:
sqlserver = {
  profile = "slick.jdbc.SQLServerProfile$"
  db {
    driver = "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    host = localhost
    port = <myport>
    databaseName = <my database's name>
    url = "jdbc:sqlserver://"${sqlserver.db.host}":"${sqlserver.db.port}";databaseName="${sqlserver.db.databaseName}";encrypt=true;trustServerCertificate=true;"
    user = <username>
    password = <password>
  }
}
And some of my build.sbt:
scalaVersion := "2.12.10"
libraryDependencies += "org.scalafx" %% "scalafx" % "16.0.0-R24"
libraryDependencies ++= Seq(
  "com.typesafe.slick" %% "slick" % "3.2.1",
  "org.slf4j" % "slf4j-nop" % "2.0.1",
  "com.typesafe.slick" %% "slick-hikaricp" % "3.2.1",
  "com.microsoft.sqlserver" % "mssql-jdbc" % "10.2.1.jre8"
)
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "3.2.0" % "provided",
  "org.apache.spark" %% "spark-sql" % "3.2.0" % "provided",
  "com.datastax.spark" %% "spark-cassandra-connector" % "3.1.0"
)
Update:
Found what caused this. Adding ShadeRules for HikariCP to sbt-assembly in build.sbt solved my problem.
I got my solution from this.
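For reference, a shade rule along these lines is a minimal sketch of that kind of fix (the exact package pattern and setting syntax depend on the sbt-assembly version; this assumes the 1.x slash syntax and that the clashing classes live under com.zaxxer.hikari):
// build.sbt (sketch)
assembly / assemblyShadeRules := Seq(
  ShadeRule.rename("com.zaxxer.hikari.**" -> "shaded.hikari.@1").inAll
)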

Flink (1.11.2) - Can't find implementation for S3 despite correct plugins being set. Using JDK11 and Scala 2.12.11

I'm running a single-node Flink standalone cluster using Docker on Linux. I've been running a previous version in production for a while with Flink 1.10.0 and JDK 8, and I was able to get S3 working properly there. Now I'm trying to update to a newer version, running Docker on my dev machine against a local S3 implementation. No matter what I try, this error keeps popping up:
org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 's3'.
It would seem that the S3 scheme isn't being mapped to the appropriate classes. I'm positive that the right plugins are being picked up by Flink. I have the following dependencies:
val testDependencies = Seq(
  "org.scalatest" %% "scalatest" % "3.2.0" % "test"
)
val miscDependencies = Seq(
  "com.github.tototoshi" %% "scala-csv" % "1.3.6",
  "org.lz4" % "lz4-java" % "1.5.1",
  "org.json4s" %% "json4s-jackson" % "3.6.1",
  "org.apache.hadoop" % "hadoop-common" % "3.2.1",
  "redis.clients" % "jedis" % "2.9.0",
  "com.googlecode.plist" % "dd-plist" % "1.21",
  "com.couchbase.client" % "java-client" % "2.7.14",
  "org.apache.parquet" % "parquet-avro" % "1.11.1"
)
val flinkDependencies = Seq(
  "org.apache.flink" %% "flink-scala" % flinkVersion % "provided",
  "org.apache.flink" %% "flink-streaming-scala" % flinkVersion % "provided",
  "org.apache.flink" % "flink-s3-fs-hadoop" % flinkVersion % "provided",
  "org.apache.flink" % "flink-metrics-dropwizard" % flinkVersion,
  "org.apache.flink" % "flink-formats" % flinkVersion pomOnly(),
  "org.apache.flink" % "flink-compress" % flinkVersion,
  "org.apache.flink" %% "flink-statebackend-rocksdb" % flinkVersion,
  "org.apache.flink" %% "flink-clients" % flinkVersion,
  "org.apache.flink" %% "flink-parquet" % flinkVersion
)
As far as I can tell, I'm following the documentation to the letter.
After struggling with this for a while I was able to solve the problem. I'm leaving my solution here in case anyone has the same issue.
Plugin classes, such as the S3 file system factory, are detected when the jobmanager and taskmanager start; however, they're not loaded. In my setup, the classes must be loaded dynamically once the job starts. You can find more information about how Flink loads its classes here.
As explained here, the cue to load a class is given by the existence of a file in META-INF/services inside the job's jar. For the S3 plugins to work, you need to have the file:
META-INF/services/org.apache.flink.core.fs.FileSystemFactory
which contains one line for each class that Flink should load dynamically as a dependency of your job. For example:
org.apache.flink.fs.s3hadoop.S3FileSystemFactory
org.apache.flink.fs.s3hadoop.S3AFileSystemFactory
I'm using sbt assembly to create a fat JAR with my job. In my project dependencies I was including flink-s3-fs-hadoop as a provided dependency, which prevented the correct services entries from being included. Once I removed that qualifier, the correct services file was created and everything worked.
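A sketch of the corresponding build.sbt change; the merge strategy below is an assumption about how the ServiceLoader registrations should be combined when several jars ship a META-INF/services/org.apache.flink.core.fs.FileSystemFactory file (newer sbt-assembly releases already merge these sensibly by default):
// build.sbt (sketch)
libraryDependencies += "org.apache.flink" % "flink-s3-fs-hadoop" % flinkVersion // note: no "provided"
// concatenate ServiceLoader registrations instead of keeping only one of them
assembly / assemblyMergeStrategy := {
  case PathList("META-INF", "services", xs @ _*) => MergeStrategy.concat
  case x =>
    val oldStrategy = (assembly / assemblyMergeStrategy).value
    oldStrategy(x)
}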

Bitbake error - Nothing RPROVIDES

I would like to split an application into multiple packages. Basically I just want to add another package which could be built by using a specific image.
Inside the .bb file associated with the application I added:
SRC_URI = " \
...
file://api.conf \
file://script.sh \
"
PACKAGES =+ "${PN} ${PN}-tools"
FILES_${PN}-tools = "${bindir}/mrvl/tools/script.sh \
${sysconfdir}/mrvl/api.conf \
"
Then I added the following line to my test image recipe:
IMAGE_INSTALL += " mrvl-tools"
Running bitbake image-test returns:
ERROR: Nothing RPROVIDES 'mrvl-tools' (but /home/usr/../image-test.bb RDEPENDS on or otherwise requires it)
NOTE: Runtime target 'mrvl-tools' is unbuildable, removing...
Missing or unbuildable dependency chain was: ['mrvl-tools']
ERROR: Required build target 'image-test' has no buildable providers.
Missing or unbuildable dependency chain was: ['image-test', 'mrvl-tools']
I followed the same pattern as the bluez5-obex package, and IMAGE_INSTALL += " bluez5-obex" works.
What did I forget?
Anders is close.
First, your PACKAGES definition is wrong; all you need is PACKAGES += "${PN}-tools".
But the important thing to remember is that FILES is evaluated in the order of PACKAGES, so ${PN} is processed first, and the default FILES_${PN} contains ${bindir} and ${sysconfdir}, so all of ${bindir} and ${sysconfdir} ends up in ${PN}. It then tries to process ${PN}-tools, and none of the expressions in its FILES match any files remaining, so the package is empty.
So, either set FILES_${PN} to what you want it to contain, or use PACKAGE_BEFORE_PN = "${PN}-tools" to inject ${PN}-tools before ${PN} in the default PACKAGES value. Reading bitbake.conf will help make this clearer, I promise.
Note that I'd have expected the error to be a rootfs-time failure not an image construction failure, but hopefully this is the problem.
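A minimal sketch of what the recipe could look like with that approach (paths taken from the question; the do_install_append step is an assumption, added because the files also need to be installed into ${D} before FILES can package them):
PACKAGE_BEFORE_PN = "${PN}-tools"
FILES_${PN}-tools = "${bindir}/mrvl/tools/script.sh ${sysconfdir}/mrvl/api.conf"

do_install_append() {
    install -d ${D}${bindir}/mrvl/tools ${D}${sysconfdir}/mrvl
    install -m 0755 ${WORKDIR}/script.sh ${D}${bindir}/mrvl/tools/
    install -m 0644 ${WORKDIR}/api.conf ${D}${sysconfdir}/mrvl/
}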
It's also good to verify that the layer has been added to conf/bblayers.conf; this is usually where "Nothing RPROVIDES" starts:
BBLAYERS += " \
    ${BSPDIR}/sources/<your layer> \
"
Thank you Ross Burton for your answer. I modified the .bb file and it currently contains the following lines:
SUMMARY_${PN}-tools="mrvl tools test"
PACKAGE_BEFORE_PN += "${PN}-tools"
RDEPENDS_${PN}-tools = ""
FILES_${PN}-tools = "${bindir}/mrvl/tools/script.sh ${sysconfdir}/mrvl/api.conf"
ALLOW_EMPTY_${PN}-tools = "1"
The build finishes and the package named mrvl-test-tools_0.1-r0.ipk is created under /build/tmp/deploy/ipk/board/, but it contains nothing. This is due to ALLOW_EMPTY_${PN}-tools = "1"; without that line the build fails and the following message is displayed:
Collected errors:
* opkg_install_cmd: Cannot install package mrvl-test-tools.
ERROR: Function failed: do_rootfs
ERROR: Logfile of failure stored in: /home/../build/tmp/work/oe-linux/test-img/1.0-r0/temp/log.do_rootfs.4639
ERROR: Task 7 (/home/../sources/meta-board/recipes-images/images/test-img.bb, do_rootfs) failed with exit code '1'
I do not understand why the files are still not included in the .ipk.
Don't you need to add the files to the extra files search path?
THISAPPENDFILESDIR := "${THISDIR}/file"
FILESEXTRAPATHS_prepend := "${THISDIR}/file:"

fabric for offline package installation

The project I'm working on uses Fabric for many build steps and requires an offline build as a fallback.
I'm currently stuck on installing Python packages provided as tarballs.
The thing is, I have trouble changing into the newly extracted directory and running setup.py install in there.
from fabric.api import cd, env, lcd, put, run, sudo, task
from fabric.contrib import files
import os
import tempfile

@task
def deploy_artifacts():
    """Installs dependencies from local path, useful for offline builds"""
    # TODO: Handle downloading files and do something like this below
    tmpdir = tempfile.mkdtemp()
    artifacts_path = ''
    if not 'http' in env.artifacts_path:
        artifacts_path = env.artifacts_path
    with lcd(artifacts_path):
        for f in os.listdir(artifacts_path):
            if 'gz' in f:
                put(f, tmpdir)
                tar = os.path.join(tmpdir, f)
                target_dir = os.path.join(tempfile.gettempdir(), normalize(f))
                if not files.exists(target_dir):
                    run('mkdir %s' % target_dir)
                else:
                    run('rm -rf %s' % target_dir)
                    run('mkdir %s' % target_dir)
                run('tar xf %s -C %s' % (tar, target_dir))
                run('rm %s' % tar)
                with cd(target_dir):
                    sudo('python setup.py install')
I've just come from reading the tar man page for the bazillionth time and I'm nowhere near getting what I want.
Have any of you faced a situation like this? Is there some other (read: better) approach to this scenario?
There's nothing wrong (in principle) with what you're trying to do. Maybe just take smaller steps getting there. Rather than using temporary directories, it might make debugging easier if everything were put in a systematic location with known permissions that nothing else writes to by convention. At least that would let you use some combination of Fabric and manual intervention to check what is going wrong.
In the longer term, there are a few alternatives that I see. For simplicity you want the online and offline versions to work the same way, and that means fetching packages using easy_install / pip for both cases.
One way to do this is to build a mirror of PyPI. The right way to do this, if you've got plenty of storage space (30 GB), is to use software that implements PEP 381 (Mirroring Infrastructure for PyPI); there is already a client that does this (pep381client). A number of other projects are available that do similar things (basketweaver, djangopypi2, chishop).
An alternative is to consider a lighter-weight proxying scheme. I've been looking at pip2pi and pipli. I'm unsure whether they will work directly with easy_install, but it would be worth a try.
It's also worth noting that if you were using pip, you could have installed directly from the tarballs.
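To make that last point concrete, here is a rough sketch in the same style as the task in the question (the name deploy_artifacts_pip is made up here; it assumes pip is available on the remote host and reuses env.artifacts_path from the question):
@task
def deploy_artifacts_pip():
    """Sketch: push each local tarball to the host and let pip install it directly."""
    tmpdir = tempfile.mkdtemp()
    for f in os.listdir(env.artifacts_path):
        if f.endswith('.tar.gz'):
            remote_tar = os.path.join(tmpdir, f)
            put(os.path.join(env.artifacts_path, f), remote_tar)
            # --no-index keeps pip from reaching out to PyPI during an offline build
            sudo('pip install --no-index %s' % remote_tar)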

SyntaxError when trying to create a new app using django nonrel

I'm playing around with django-nonrel and Google App Engine, and I get the following error when trying to create a new app.
Traceback (most recent call last):
  File "./manage.py", line 2, in <module>
    from django.core.management import execute_manager
  File "/home/levi/Projects/DoneBox/django/__init__.py", line 14
    if svn_rev != u'SVN-unknown':
                               ^
SyntaxError: invalid syntax
The file that caused the exception is included below.
VERSION = (1, 3, 0, 'final', 0)

def get_version():
    version = '%s.%s' % (VERSION[0], VERSION[1])
    if VERSION[2]:
        version = '%s.%s' % (version, VERSION[2])
    if VERSION[3:] == ('alpha', 0):
        version = '%s pre-alpha' % version
    else:
        if VERSION[3] != 'final':
            version = '%s %s %s' % (version, VERSION[3], VERSION[4])
    from django.utils.version import get_svn_revision
    svn_rev = get_svn_revision()
    if svn_rev != u'SVN-unknown':
        version = "%s %s" % (version, svn_rev)
    return version
I've looked at this file in Emacs and I can't see the problem, and I've tried searching Google with no luck. Can someone please point me in the right direction?
(For those interested in what I'm doing, please see http://www.allbuttonspressed.com/projects/djangoappengine.)
Running this code in a buffer with an IPython shebang:
#!/usr/bin/env ipython
# -*- coding: utf-8 -*-
and extending the end like this:
    print version
    return version
get_version()
I get "1.3" at the IPython prompt.
A similar thing in the plain Python shell fails like this:
>>> from django.utils.version import get_svn_revision
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: No module named utils.version
so I assume some install/path issue.
Just an FYI, the latest version of django-appengine is found here:
https://github.com/django-nonrel
However, the allbuttonspressed version should work. You may not have your Python environment set up properly.
