How to write/use an anorm extractor like rowToStringSequence Column[Seq[String]] - database

I wrote this row converter:
implicit def rowToStringSequence: Column[Seq[String]] = Column.nonNull { (value, meta) =>
  val MetaDataItem(qualified, nullable, clazz) = meta
  value match {
    case data: Seq[String] => Right(data)
    case _ => Left(TypeDoesNotMatch(
      "Cannot convert " + value + ":" + value.asInstanceOf[AnyRef].getClass +
        " to String Array for column " + qualified))
  }
}
Unfortunately, I do not know how to use it within a case class.
For instance:
case class Profile(eyeColor: Seq[String])
The profile companion object:
object Profile {
  val profile = {
    get[Seq[String]]("eyeColor") map {
      case eyeColor => Profile(eyeColor)
    }
  }
}
The compilation error message is:
could not find implicit value for parameter extractor: anorm.Column[Seq[String]]
I need a hint.
Thank you!!

anorm.Column is meant to convert JDBC data to a desired Scala type. So the first question is which JDBC type you want to convert to Seq[String] (which is not itself a JDBC type).
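As a side note, the conversion such a Column has to perform usually means unpacking whatever object the JDBC driver returns for an ARRAY column. A plain-Java sketch of just that unpacking step (the class and the helper toStringList are hypothetical names of mine, not part of anorm):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class JdbcArrayDemo {
    // Hypothetical helper: mimics what a Column[Seq[String]] converter must do
    // with the raw value a JDBC driver hands back for an ARRAY column.
    static List<String> toStringList(Object jdbcValue) {
        if (jdbcValue == null) {
            return new ArrayList<>();           // treat SQL NULL as an empty list
        }
        if (jdbcValue instanceof String[]) {    // driver returned a typed array
            return Arrays.asList((String[]) jdbcValue);
        }
        if (jdbcValue instanceof Object[]) {    // some drivers return Object[]
            List<String> out = new ArrayList<>();
            for (Object o : (Object[]) jdbcValue) {
                out.add(String.valueOf(o));
            }
            return out;
        }
        // anything else corresponds to the Left(TypeDoesNotMatch(...)) branch
        throw new IllegalArgumentException(
            "Cannot convert " + jdbcValue.getClass() + " to List<String>");
    }

    public static void main(String[] args) {
        System.out.println(toStringList(new String[] { "blue", "green" }));
        System.out.println(toStringList(null));
    }
}
```

The same branching is what the `value match { ... }` in the converter above has to express, keeping in mind that the incoming value will never literally be a `Seq[String]`.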


Is there a way to accept null in a ResultSet

I am trying to view data from SQL Server, and I have a column (note) that accepts null values because it is not necessary for the user to input any value. I tried using the "rs.wasNull" method as below:
var Note : String = rs.getString("note")
if (rs.wasNull()) {
    Note = ""
}
But I am still facing an error saying that rs.getString("note") cannot be null. Is there any possible way to return a null value?
Below is my whole code:
private fun displayData() {
    val sqlCon = SQLCon()
    connection = sqlCon.connectionClass()!!
    var cUser : String? = intent.getStringExtra("Current User")
    if (connection == null) {
        Toast.makeText(this, "Failed to make connection", Toast.LENGTH_LONG).show()
    } else {
        try {
            val sql : String =
                "SELECT * FROM Goals where Username = '$cUser' "
            statement = connection!!.createStatement()
            var rs : ResultSet = statement!!.executeQuery(sql)
            while (rs.next()) {
                var Note : String = rs.getString("note")
                if (rs.wasNull()) {
                    Note = ""
                }
                gList.add(GoalList(rs.getString("gName"), rs.getDouble("tAmount"), rs.getDouble("sAmount"), rs.getString("note"), rs.getString("date")))
            }
            rs.close()
            statement!!.close()
            Toast.makeText(this, "Success", Toast.LENGTH_LONG).show()
        } catch (e: Exception) {
            Log.e("Error", e.message!!)
        }
    }
}
You use a non-nullable type for the note variable:
var Note : String = rs.getString("note")
First, the variable name should start with a lowercase letter -> note.
Secondly, the type must be nullable:
var note : String? = rs.getString("note")
According to the JavaDoc for the getString function:
Returns: the column value; if the value is SQL NULL, the value
returned is null
So there is no need to check whether it was null; you can use the following construct to supply a default value:
var note : String = rs.getString("note") ?: ""
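For comparison, plain Java (JDBC's home language) has no elvis operator, so the same defaulting is usually written with a ternary or a small helper. A minimal sketch (orEmpty is a hypothetical helper name of mine):

```java
public class NullDefaultDemo {
    // Mirrors Kotlin's `rs.getString("note") ?: ""` in plain Java:
    // getString returns null for SQL NULL, so default it explicitly.
    static String orEmpty(String value) {
        return value != null ? value : "";
    }

    public static void main(String[] args) {
        System.out.println("[" + orEmpty(null) + "]");  // the SQL NULL case
        System.out.println(orEmpty("some note"));       // a non-null value passes through
    }
}
```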

How can I filter and update a table in Slick with composite key?

I have a table:
trait Schema {
  val db: Database

  class CoffeeTable(tag: Tag) extends Table[Coffee](tag, "coffees") {
    def from = column[String]("from")
    def kind = column[String]("kind")
    def sold = column[Boolean]("sold")
    def * = (from, kind, sold) <> (Coffee.tupled, Coffee.unapply) // default projection
  }

  protected val coffees = TableQuery[CoffeeTable]
}
I want to update the entries that were sold. Here is the method I ended up with:
def markAsSold(soldCoffees: Seq[Coffee]): Future[Int] = {
  val cmd = coffees
    .filter { coffee =>
      soldCoffees
        .map(sc => coffee.from === sc.from && coffee.kind === sc.kind)
        .reduceLeftOption(_ || _)
        .getOrElse(LiteralColumn(false))
    }
    .map(coffee => coffee.sold)
    .update(true)
  db.run(cmd)
}
While it works for a small soldCoffees collection, it fails badly with an input of hundreds of elements:
java.lang.StackOverflowError
at slick.ast.TypeUtil$$colon$at$.unapply(Type.scala:325)
at slick.jdbc.JdbcStatementBuilderComponent$QueryBuilder.expr(JdbcStatementBuilderComponent.scala:311)
at slick.jdbc.H2Profile$QueryBuilder.expr(H2Profile.scala:99)
at slick.jdbc.JdbcStatementBuilderComponent$QueryBuilder.$anonfun$expr$8(JdbcStatementBuilderComponent.scala:381)
at slick.jdbc.JdbcStatementBuilderComponent$QueryBuilder.$anonfun$expr$8$adapted(JdbcStatementBuilderComponent.scala:381)
at slick.util.SQLBuilder.sep(SQLBuilder.scala:31)
at slick.jdbc.JdbcStatementBuilderComponent$QueryBuilder.expr(JdbcStatementBuilderComponent.scala:381)
at slick.jdbc.H2Profile$QueryBuilder.expr(H2Profile.scala:99)
at slick.jdbc.JdbcStatementBuilderComponent$QueryBuilder.$anonfun$expr$8(JdbcStatementBuilderComponent.scala:381)
at slick.jdbc.JdbcStatementBuilderComponent$QueryBuilder.$anonfun$expr$8$adapted(JdbcStatementBuilderComponent.scala:381)
at slick.util.SQLBuilder.sep(SQLBuilder.scala:31)
So the question is: is there another way to do such an update?
What comes to my mind is using a plain SQL query, or introducing an artificial column holding the concatenated values of the from and kind columns and filtering against it.
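A third option, sketched here under the assumption that running a handful of UPDATE statements instead of one is acceptable: split soldCoffees into fixed-size chunks and issue one UPDATE per chunk, so each generated predicate tree stays shallow. In Scala this is just `soldCoffees.grouped(100)`; the equivalent chunking logic in plain Java looks like this (class and method names are mine):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchDemo {
    // Generic partitioner: equivalent to Scala's seq.grouped(size).toList.
    // Each sub-list would back one UPDATE with a bounded OR-chain.
    static <T> List<List<T>> partition(List<T> items, int size) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < items.size(); i += size) {
            batches.add(new ArrayList<>(
                items.subList(i, Math.min(i + size, items.size()))));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<Integer> ids = new ArrayList<>();
        for (int i = 0; i < 250; i++) ids.add(i);
        List<List<Integer>> batches = partition(ids, 100);
        System.out.println(batches.size());         // prints 3
        System.out.println(batches.get(2).size());  // prints 50
    }
}
```

The deep recursion in the stack trace comes from the statement builder walking one node per `||`, so capping the chain length per statement sidesteps the StackOverflowError without changing the schema.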

Spark 2.0: Moving from RDD to Dataset

I want to adapt my Java Spark app (which currently uses RDDs for some calculations) to use Datasets instead of RDDs. I'm new to Datasets and not sure how to map each transformation to a corresponding Dataset operation.
At the moment I map them like this:
JavaSparkContext.textFile(...) -> SQLContext.read().textFile(...)
JavaRDD.filter(Function) -> Dataset.filter(FilterFunction)
JavaRDD.map(Function) -> Dataset.map(MapFunction)
JavaRDD.mapToPair(PairFunction) -> Dataset.groupByKey(MapFunction) ???
JavaPairRDD.aggregateByKey(U, Function2, Function2) -> KeyValueGroupedDataset.???
And the corresponding questions are:
Is JavaRDD.mapToPair equivalent to the Dataset.groupByKey method?
Does JavaPairRDD map to KeyValueGroupedDataset?
Which method is equivalent to JavaPairRDD.aggregateByKey?
Concretely, I want to port the following RDD code to a Dataset version:
JavaRDD<Article> goodRdd = ...

JavaPairRDD<String, Article> ArticlePairRdd = goodRdd.mapToPair(new PairFunction<Article, String, Article>() { // Build PairRDD<<Date|Store|Transaction><Article>>
    public Tuple2<String, Article> call(Article article) throws Exception {
        String key = article.getKeyDate() + "|" + article.getKeyStore() + "|" + article.getKeyTransaction() + "|" + article.getCounter();
        return new Tuple2<String, Article>(key, article);
    }
});

JavaPairRDD<String, String> transactionRdd = ArticlePairRdd.aggregateByKey("", // Aggregate distributed data -> PairRDD<String, String>
    new Function2<String, Article, String>() {
        public String call(String oldString, Article newArticle) throws Exception {
            String articleString = newArticle.getOwg() + "_" + newArticle.getTextOwg(); // <<Date|Store|Transaction><owg_textOwg###owg_textOwg>>
            return oldString + "###" + articleString;
        }
    },
    new Function2<String, String, String>() {
        public String call(String a, String b) throws Exception {
            String c = a.concat(b);
            ...
            return c;
        }
    }
);
So far my code looks like this:
Dataset<Article> goodDS = ...

KeyValueGroupedDataset<String, Article> ArticlePairDS = goodDS.groupByKey(new MapFunction<Article, String>() {
    public String call(Article article) throws Exception {
        String key = article.getKeyDate() + "|" + article.getKeyStore() + "|" + article.getKeyTransaction() + "|" + article.getCounter();
        return key;
    }
}, Encoders.STRING());
// here I need something similar to aggregateByKey! Not reduceByKey as I need to return another data type (String) than I have before (Article)
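For that missing step, KeyValueGroupedDataset offers mapGroups, which hands you the key together with an Iterator over all Article values for that key, so the result type can differ from the element type, just as with aggregateByKey. Spark itself cannot run here, so the snippet below only demonstrates the per-group fold in plain Java; the commented lines are my assumption of how it would plug into mapGroups, not tested code:

```java
import java.util.Arrays;
import java.util.Iterator;

public class AggregateSketch {
    // The seqOp from the question, reduced to its essence: fold all
    // "owg_textOwg" strings of one key into a single "###"-separated string.
    static String foldGroup(Iterator<String> articleStrings) {
        StringBuilder sb = new StringBuilder();
        while (articleStrings.hasNext()) {
            sb.append("###").append(articleStrings.next());
        }
        return sb.toString();
    }

    // Inside Spark this logic would sit in mapGroups, roughly:
    //
    // Dataset<String> transactionDS = ArticlePairDS.mapGroups(
    //     (MapGroupsFunction<String, Article, String>) (key, articles) -> {
    //         // build the "###"-joined string from the Article values, as above
    //     }, Encoders.STRING());

    public static void main(String[] args) {
        Iterator<String> values = Arrays.asList("1_milk", "2_bread").iterator();
        System.out.println(foldGroup(values));  // prints ###1_milk###2_bread
    }
}
```

Note that mapGroups gives you all values of a key at once, so the separate combOp of aggregateByKey (merging partial strings across partitions) is no longer needed.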

morphline#flume - looking for regexp change and a hash function

I am Flume-ing data to Solr. The data gets transformed using a morphline.
I am looking for a couple of basic functions in the morphline library:
create a hash value based on other attribute values (e.g. hash = ("sha-1", timestamp, message, host, ...))
change the case of an attribute's string value (something more generic like regexp_replace would do as well).
I don't yet want to write a custom Java handler... I think there should be an easier way :)
(1) A non-generic solution for the hash function, as I wasn't able to find an out-of-the-box morphline implementation; SHA-1 is hard-coded (e.g. no for loop, hard-coded 20 bytes):
{ java {
    imports : "import java.security.*;"
    code: """
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-1");
            String value;
            value = (String) record.getFirstValue("message");
            if (value != null) { digest.update(value.getBytes("ISO-8859-1"), 0, value.length()); }
            value = (String) record.getFirstValue("timestamp");
            if (value != null) { digest.update(value.getBytes("ISO-8859-1"), 0, value.length()); }
            value = (String) record.getFirstValue("hostname");
            if (value != null) { digest.update(value.getBytes("ISO-8859-1"), 0, value.length()); }
            byte[] a = digest.digest();
            record.replaceValues("id"
                , String.format("%02X%02X%02X%02X%02X%02X%02X%02X%02X%02X%02X%02X%02X%02X%02X%02X%02X%02X%02X%02X"
                    , a[0], a[1], a[2], a[3], a[4], a[5], a[6], a[7], a[8], a[9]   // SHA-1 has exactly 20 bytes
                    , a[10], a[11], a[12], a[13], a[14], a[15], a[16], a[17], a[18], a[19]));
        }
        catch (java.security.NoSuchAlgorithmException e) { logger.error("hash to id: caught NoSuchAlgorithmException for SHA-1"); }
        catch (java.io.UnsupportedEncodingException e) { logger.error("hash to id: caught UnsupportedEncodingException"); }
        finally {
            return child.process(record);
        }
    """
} }
(2) A non-generic implementation for the lower-case transformation (I would hope morphline had something like regexp_replace):
java {
    code: """
        String program = (String) record.getFirstValue("program");
        String program_lc = program.toLowerCase();
        if (!program.equals(program_lc)) {
            record.replaceValues("program", program_lc);
        }
        return child.process(record);
    """
}
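The hard-coded 20-byte format string in (1) can be replaced by a loop over the digest bytes, which makes the command work for any algorithm. A plain-Java sketch of just that formatting step (class and method names are mine; the morphline record wiring is left out):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class HashDemo {
    // Generic replacement for the hard-coded %02X...%02X format string:
    // works for SHA-1 (20 bytes), SHA-256 (32 bytes), or any other digest.
    static String hashHex(String algorithm, String... fields)
            throws NoSuchAlgorithmException {
        MessageDigest digest = MessageDigest.getInstance(algorithm);
        for (String field : fields) {
            if (field != null) {   // skip missing record fields, as in (1)
                digest.update(field.getBytes(StandardCharsets.ISO_8859_1));
            }
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : digest.digest()) {
            hex.append(String.format("%02X", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        // Well-known SHA-1 test vector for "abc".
        System.out.println(hashHex("SHA-1", "abc"));
        // prints A9993E364706816ABA3E25717850C26C9CD0D89D
    }
}
```

Using StandardCharsets.ISO_8859_1 also removes the checked UnsupportedEncodingException, so one catch block disappears.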

parse JSON data with different structures

I have a problem with accessing a certain property (hlink) in JSON code. This is because the structure of the JSON output is not always the same, and as a result I get the following error: "Cannot use object of type stdClass as array in...". Can someone help me solve this problem?
JSON output 1 (Array)
Array (
    [0] => stdClass Object (
        [hlink] => http://www.rock-zottegem.be/
        [main] => true
        [mediatype] => webresource
    )
    [1] => stdClass Object (
        [copyright] => Rock Zottegem
        [creationdate] => 20/03/2013 14:35:57
        [filename] => b014933c-fdfd-4d93-939b-ac7adf3a20a3.jpg
        [filetype] => jpeg
        [hlink] => http://media.uitdatabank.be/20130320/b014933c-fdfd-4d93-939b-ac7adf3a20a3.jpg
    )
)
JSON output 2
stdClass Object (
    [copyright] => Beschrijving niet beschikbaar
    [creationdate] => 24/04/2013 19:22:47
    [filename] => Cinematek_F14281_1.jpg
    [filetype] => jpeg
    [hlink] => http://media.uitdatabank.be/20130424/Cinematek_F14281_1.jpg
    [main] => true
    [mediatype] => photo
)
And this is my code:
try {
    if (!empty($img[1]->hlink)) {
        echo "<img src=" . $img[1]->hlink . "?maxheight=300></img>";
    }
}
catch (Exception $e) {
    $e->getMessage();
}
Assuming this is PHP, and you know that the JSON always contains either an object or an array of objects, the problem boils down to detecting which you've received.
Try something like:
if (is_array($img)) {
    $hlink = $img[0]->hlink;
} else {
    $hlink = $img->hlink;
}
This isn't directly answering your question, but it should give you the means to investigate the problems you are having.
Code Sample
var obj1 = [ new Date(), new Date() ];
var obj2 = new Date();
obj3 = "bad";

function whatIsIt(obj) {
    if (Array.isArray(obj)) {
        console.log("obj has " + obj.length + " elements");
    } else if (obj instanceof Date) {
        console.log(obj.getTime());
    } else {
        console.warn("unexpected object of type " + typeof obj);
    }
}

// Use objects
whatIsIt(obj1);
whatIsIt(obj2);
whatIsIt(obj3);

// Serialize objects as strings
var obj1JSON = JSON.stringify(obj1);
// Notice how the Date object gets stringified to a string
// which no longer parses back to a Date object.
// This is because the Date object can be fully represented as a string.
var obj2JSON = JSON.stringify(obj2);
var obj3JSON = JSON.stringify(obj3);

// Parse strings back, possibly into JS objects
whatIsIt(JSON.parse(obj1JSON));
// This one became a string above and stays one
// unless you construct a new Date from the string.
whatIsIt(JSON.parse(obj2JSON));
whatIsIt(JSON.parse(obj3JSON));
Output
obj has 2 elements JsonExample:6
1369955261466 JsonExample:8
unexpected object of type string JsonExample:10
obj has 2 elements JsonExample:6
unexpected object of type string JsonExample:10
unexpected object of type string JsonExample:10
undefined
