Scala Slick-Extensions SQLServerDriver 2.1.0 usage - can't get it to compile - sql-server

I am trying to use Slick-Extensions to connect to an SQL Server Database from Scala. I use slick 2.1.0 and slick-extensions 2.1.0.
I can't seem to get the code I wrote to compile. I followed the examples on Slick's website, and the same code compiled fine when the driver was H2. Please see below:
package com.example
import com.typesafe.slick.driver.ms.SQLServerDriver.simple._
import scala.slick.direct.AnnotationMapper.column
import scala.slick.lifted.TableQuery
import scala.slick.model.Table
class DestinationMappingsTable(tag: Tag) extends Table[(Long, Int, Int)](tag, "DestinationMappings_tbl") {
  def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
  def mltDestinationType = column[Int]("mltDestinationType")
  def mltDestinationId = column[Int]("mltDestinationId")
  def * = (id, mltDestinationType, mltDestinationId)
}
I am getting a range of errors: scala.slick.model.Table does not take type parameters, column does not take type parameters, and O is not found.
If the SQLServerDriver does not use the same syntax as Slick, where do I find its documentation?
Thank you!

I think your import of scala.slick.model.Table shadows the Table brought in by com.typesafe.slick.driver.ms.SQLServerDriver.simple._.
Try simply removing:
import scala.slick.model.Table
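With that import gone (scala.slick.direct.AnnotationMapper.column shadows the driver's column method in the same way, and scala.slick.lifted.TableQuery is already provided by simple._), the file should reduce to something like this:
package com.example

import com.typesafe.slick.driver.ms.SQLServerDriver.simple._

class DestinationMappingsTable(tag: Tag) extends Table[(Long, Int, Int)](tag, "DestinationMappings_tbl") {
  def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
  def mltDestinationType = column[Int]("mltDestinationType")
  def mltDestinationId = column[Int]("mltDestinationId")
  def * = (id, mltDestinationType, mltDestinationId)
}
Queries can then be built through a TableQuery[DestinationMappingsTable] value defined inside an object, which the same simple._ import provides.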

Related

Why is Table Schema inference from Scala case classes not working in this official example

I am having trouble with schema inference from Scala case classes during conversion from DataStreams to Tables in Flink. I've tried reproducing the examples given in the documentation but cannot get them to work, and I'm wondering whether this might be a bug.
I have commented on a somewhat related issue in the past. My workaround is to avoid case classes and instead, somewhat laboriously, define a DataStream[Row] with return type annotations.
Still, I would like to learn whether it is somehow possible to get schema inference from case classes working.
I'm using Flink 1.15.2 with Scala 2.12.7. I'm using the Java libraries but add flink-scala as a separate dependency.
This is my implementation of Example 1 as a quick sanity check:
import org.apache.flink.runtime.testutils.MiniClusterResourceConfiguration
import org.apache.flink.test.util.MiniClusterWithClientResource
import org.scalatest.BeforeAndAfter
import org.scalatest.funsuite.AnyFunSuite
import org.apache.flink.api.scala._
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment
import java.time.Instant
class SanitySuite extends AnyFunSuite with BeforeAndAfter {
  val flinkCluster = new MiniClusterWithClientResource(
    new MiniClusterResourceConfiguration.Builder()
      .setNumberSlotsPerTaskManager(2)
      .setNumberTaskManagers(1)
      .build
  )

  before {
    flinkCluster.before()
  }

  after {
    flinkCluster.after()
  }

  test("Verify that table conversion works as expected") {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val tableEnv = StreamTableEnvironment.create(env)

    case class User(name: String, score: java.lang.Integer, event_time: java.time.Instant)

    // create a DataStream
    val dataStream = env.fromElements(
      User("Alice", 4, Instant.ofEpochMilli(1000)),
      User("Bob", 6, Instant.ofEpochMilli(1001)),
      User("Alice", 10, Instant.ofEpochMilli(1002))
    )

    val table =
      tableEnv.fromDataStream(
        dataStream
      )

    table.printSchema()
  }
}
According to the documentation, this should result in:
(
`name` STRING,
`score` INT,
`event_time` TIMESTAMP_LTZ(9)
)
What I get:
(
`f0` RAW('SanitySuite$User$1', '...')
)
If I instead modify my code in line with Example 5, that is, explicitly define a Schema that mirrors the case class, I get an error that very much looks like it results from the case class fields not being extracted:
Unable to find a field named 'event_time' in the physical data type derived from the given type information for schema declaration. Make sure that the type information is not a generic raw type. Currently available fields are: [f0]
The issue is with the imports: you are importing the Java API classes while using Scala case classes as POJOs.
Using the following works:
import org.apache.flink.api.common.eventtime.WatermarkStrategy
import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.configuration.Configuration
import org.apache.flink.connector.kafka.source.KafkaSource
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer
import org.apache.flink.streaming.api.scala.{DataStream, StreamExecutionEnvironment}
import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment
import org.apache.flink.streaming.api.scala._
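For reference, a minimal sketch of how the test might look with the Scala-bridge imports (the mini-cluster setup is omitted, and the User case class is defined at the top level of the file instead of inside the test method):
import java.time.Instant
import org.scalatest.funsuite.AnyFunSuite
import org.apache.flink.streaming.api.scala._
import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment

// Top-level case class so the Scala type extraction can see its fields.
case class User(name: String, score: java.lang.Integer, event_time: Instant)

class SanitySketch extends AnyFunSuite {
  test("Table conversion with the Scala API") {
    // Both the stream environment and the table environment come from the Scala APIs.
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val tableEnv = StreamTableEnvironment.create(env)

    val dataStream = env.fromElements(
      User("Alice", 4, Instant.ofEpochMilli(1000)),
      User("Bob", 6, Instant.ofEpochMilli(1001)),
      User("Alice", 10, Instant.ofEpochMilli(1002))
    )

    // With the Scala bridge, the case class fields should show up as named columns.
    tableEnv.fromDataStream(dataStream).printSchema()
  }
}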

Not able to import AndroidNativeUtil

I am not able to import "com.codename1.impl.android.AndroidNativeUtil" in my project. I have to implement SIM binding functionality using native code, so in the native code I am trying to import the classes below:
import com.codename1.impl.android.AndroidNativeUtil;
import android.telephony.TelephonyManager;
import android.content.Context;
but I am not able to import these classes. Do I need to include any third-party library?
I am getting the error below:
.java:8: error: cannot find symbol
TelephonyManager tMgr = (TelephonyManager) getSystemService
(Context.TELEPHONY_SERVICE);
symbol: method getSystemService(String)
location: class MyNativeImpl
while executing the code below:
TelephonyManager tMgr = (TelephonyManager) getSystemService(Context.TELEPHONY_SERVICE);
String mPhoneNumber = tMgr.getLine1Number();
What is the solution for that?
It looks like the symbol that isn't found is getSystemService, which is indeed not declared in that class. You need to use AndroidNativeUtil.getActivity().getSystemService(...) instead.

How to construct and store the geopoint

I am working on GAE with Python. How do I construct and store the latitude and longitude using GeoPoint?
This is my DB model:
class Customer(db.Model):
    name = db.StringProperty()
    address = db.PostalAddressProperty()
    geopoint = db.GeoPtProperty()
My code:
class AddCustomerHandler(BaseHandler):
    def post(self):
        input_fullname = self.request.get('fullname')
        input_cusaddress = self.request.get('cusaddress')
        input_latitude = self.request.get('latitude')
        input_longitude = self.request.get('longitude')
        input_geopoint = GeoPoint(input_latitude, input_longitude)
        logging.info('****************** xxxx= %s', input_geopoint)
        newcustomer = Customer(name=input_fullname, address=input_cusaddress, geopoint=input_geopoint)
        newcustomer.put()
I am trying this code, but I am getting the error below. How do I store this geopoint field?
input_geopoint= GeoPoint(input_latitude, input_longitude)
NameError: global name 'GeoPoint' is not defined.
You're getting that error because you haven't imported the module where GeoPoint is defined.
Add this import:
from google.appengine.api import search
and then use it like this:
input_geopoint = search.GeoPoint(input_latitude, input_longitude)
Also see the docs

Python/Nose/Testing: Why does pickling an OrderedDict fail?

I've come across some very strange behavior in one of my nose tests for GAE, and I'm not quite sure how to debug it further... Any idea why it fails would be appreciated...
# Main Testing file stripped to the basics
# -*- coding: utf-8 -*-
import unittest
import pickle
from collections import OrderedDict
from ptest import SomeClass

class PickleTest(unittest.TestCase):
    def runTest(self):
        res = OrderedDict()
        for item in [1, 2, 3]:
            res[item] = "test"
        # works
        pickle.dumps(res)
        # fails
        otherClass = SomeClass()
        test = otherClass.pTest("Nav")

if __name__ == '__main__':
    unittest.main()
The imported class file:
import pickle
from collections import OrderedDict

class SomeClass:
    def pTest(self, tableName=None, rightsTrimmed=True):
        return pickle.dumps(OrderedDict())
leads to
PicklingError: Can't pickle <class 'collections.OrderedDict'>: it's not the same object as collections.OrderedDict
But strangely enough only for the statement in the imported class, not the main one.
I'm at the end of my wisdom. When executed in the normal GAE dev/production environment, the code works... The system Python version is Python 2.7.5.
Thanks to Oleksiy, I've figured out that when inserting a breakpoint, the test ran through without modifying the code. Strange. I can't really imagine why, but this has given me the idea that something really strange in terms of timing is going on. And, to confirm that suspicion, I've tried a late import of the OrderedDict, which works.
It's a first find; changing my production code to late imports just to allow tests to run seems crazy. I'll read up on late imports and think about how to proceed...
# -*- coding: utf-8 -*-
import pickle
# from collections import OrderedDict

class SomeClass:
    def pTest(self, tableName=None, rightTrimmed=True):
        # The Demons seem to be pleased by the late import...
        from collections import OrderedDict
        return pickle.dumps(OrderedDict())

different development/production databases in scalaquery

ScalaQuery requires (AFAIK) a provider-specific import in your code, for example:
import org.scalaquery.ql.extended.H2Driver.Implicit._
We are trying to use H2 in development mode and MySQL in production. Is there a way to achieve this?
My approach was:
class Subscribers(database: Database)(profile: ExtendedProfile) {
  import profile.Implicit._
}
where Subscribers is basically my data-access object.
I'm not sure this is the best approach out there, but it solved my case.
You would create such a DAO like this:
...in production code:
new Subscribers(database)(MySQLDriver)
...and in test code:
new Subscribers(database)(H2Driver)
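For illustration only, here is a rough sketch of how the rest of such a DAO might look against ScalaQuery's 0.9.x API (the subscribers table, its columns and the emails method are invented for this example):
import org.scalaquery.session.Database
import org.scalaquery.session.Database.threadLocalSession
import org.scalaquery.ql._
import org.scalaquery.ql.TypeMapper._
import org.scalaquery.ql.extended.{ExtendedTable => Table, ExtendedProfile}

class Subscribers(database: Database)(profile: ExtendedProfile) {
  // Every profile-specific implicit (query building, table lifting) comes from the injected driver.
  import profile.Implicit._

  // Hypothetical table, just to have something to query.
  object SubscribersTable extends Table[(Int, String)]("subscribers") {
    def id = column[Int]("id")
    def email = column[String]("email")
    def * = id ~ email
  }

  // Runs against whichever database/profile pair was injected (H2 in tests, MySQL in production).
  def emails: List[String] = database withSession {
    (for (s <- SubscribersTable) yield s.email).list
  }
}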
I use the following in Play Framework:
object test {
  lazy val extendedProfile = {
    val extendedProfileName = Play.configuration getString "db.default.extendedProfile" get
    companionObjectNamed(extendedProfileName).asInstanceOf[ExtendedProfile]
  }

  def companionObjectNamed(name: String): AnyRef = {
    val c = Class forName (name + "$")
    c.getField("MODULE$") get c
  }
}
And then import
import util.extendedProfile.Implicit._
org.scalaquery.ql.extended.MySQLDriver is the string I used in the config to make MySQL work.
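So the only environment-specific piece is one line of configuration, along these lines (key name taken from the lookup above):
# conf/application.conf
db.default.extendedProfile = "org.scalaquery.ql.extended.MySQLDriver"  # or org.scalaquery.ql.extended.H2Driver in development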
