Adding tables to SQLite database using Slick and Scala

So I have an SQLite database using Slick and I want to add and remove tables from it.
Here is what I have now. The database element class:
class Data(tag: Tag)
  extends Table[(Int, String)](tag, "myDB") {
  // This is the primary key column:
  def id = column[Int]("ID", O.PrimaryKey)
  def name = column[String]("NAME")
  // Every table needs a * projection with the same type as the table's type parameter
  def * : ProvenShape[(Int, String)] = (id, name)
}
I need to be able to create multiple tables using the class above. Something like this:
def addTable(name: String) {
  db withSession { implicit session =>
    val newTable = TableQuery[Data]
    newTable.ddl.create
  }
}
The problem is that I can't create a new table, because one already exists with the name "myDB". I tried to add a parameter for the table name to the class Data, like so:
class Data(tag: Tag, tableName: String)
But then I couldn't create a table at all and got an error:
unspecified value parameter tableName
And how can I query a specific table from the database given the table name?
I tried to implement this using a Map with the table name pointing to a table, but it doesn't work because the Map is not saved anywhere and is reset every time the program starts.
This is what I had for querying a table:
def getDataFromTable(tableName: String): String = {
  var res = ""
  db withSession { implicit session =>
    tables(tableName) foreach {
      case (id, name) =>
        res += id + " " + name + " "
    }
  }
  res
}
Any help is appreciated!
Thanks!

Definition
class Data(tag: Tag, tableName: String)
  extends Table[(Int, String)](tag, tableName) {
...
Usage
new TableQuery(new Data(_, "table_1")).ddl.create
new TableQuery(new Data(_, "table_2")).ddl.create
...
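With the table name as a constructor parameter, querying a specific table works the same way: build a TableQuery from the name and read from it. Here is a minimal sketch of getDataFromTable rewritten on top of the parameterized Data class, assuming the same Slick 2.x withSession style as in the question:

def getDataFromTable(tableName: String): String = {
  db withSession { implicit session =>
    // Point a query at the table whose name was passed in
    val table = TableQuery(new Data(_, tableName))
    table.list.map { case (id, name) => id + " " + name }.mkString(" ")
  }
}

As for remembering which tables exist (the in-memory Map is lost on restart), the table names can be read back from SQLite's own catalog with a plain SQL query: SELECT name FROM sqlite_master WHERE type = 'table'.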

Related

Django: How can I expand a list into models without overwriting the ID?

# Model
class You(models.Model):
    Name = models.TextField(null=False)
    Age = models.IntegerField(null=False)

# What I want to do
data = [["john", 16], ["jax", 18]]
for d in data:
    You(*d)
Then it tries to override the id.
I set editable=False, but that didn't help either.
Is there any way to skip the id field?
Assign it with named parameters:
# What I want to do
data = [['john', 16], ['jax', 18]]
for name, age in data:
    You(Name=name, Age=age)
Another option is to pass DEFERRED for the primary key, so:
from django.db.models import DEFERRED

for d in data:
    You(DEFERRED, *d)
Note: normally the names of the fields in a Django model are written in snake_case, not PascalCase, so it should be age instead of Age.
Note: specifying null=False [Django-doc] is not necessary: fields are not NULLable by default.

How to create an object attribute without writing it to the database -- peewee -- python

Maybe I have an understanding problem. I am trying to create two tables in one database, but additionally I need some temporary values in one class that I don't want to write to the database.
I am trying to switch to peewee and have read the documentation, but I couldn't find a solution on my own.
Without peewee I would create an __init__ method where I set my attributes. But where do I have to set them now?
from peewee import *
import datetime

db = SqliteDatabase('test.db', pragmas={'foreign_keys': 1})

class BaseModel(Model):
    class Meta:
        database = db

class Sensor(BaseModel):
    id = IntegerField(primary_key=True)
    sort = IntegerField()
    name = TextField()

    #def __init__(self):
    #    self.sometemporaryvariable = "blabla"

    def meineparameter(self, hui):
        self.hui = hui
        print(self.hui)

class Sensor_measure(BaseModel):
    id = ForeignKeyField(Sensor, backref="sensorvalues")
    timestamp = DateTimeField(default=datetime.datetime.now)
    value = FloatField()

    class Meta:
        primary_key = CompositeKey("id", "timestamp")

db.connect()
db.create_tables([Sensor_measure, Sensor])

sensor1 = Sensor.create(id=2, sort=20, name="Sensor2")
#sensor1.sometemporaryvariable = "not so important to write to the database"
sensor1.save()
Remember to call super() whenever overriding a method in a subclass:
class Sensor(BaseModel):
    id = IntegerField(primary_key=True)
    sort = IntegerField()
    name = TextField()

    def __init__(self, **kwargs):
        self.sometemporaryvariable = "blabla"
        super().__init__(**kwargs)

Is It Possible To Handle Invalid References (SQL Server)?

I've very much enjoyed being able to shorten my code by using reference in my Table objects as well as referencedOn and referrersOn in my Entities. But as I've gotten further with my project, I've realized that I might have to undo all that work and recreate these processes manually in the event that the rows being referenced are deleted.
Is there any way of keeping these without risking IllegalStateExceptions (like being able to provide a default foreign key you know exists), or will I have to give it all up for manual reference methods?
Here is a minimal example:
fun main(args: Array<String>) {
    /* Connecting to DB here */
    transaction {
        val timesheets = TimeSheet.all()
        println("Printing time sheet list:")
        timesheets.forEach { sheet ->
            println("Time Sheet Object for employee ${sheet.employee}\n$sheet")
        }
    }
    /* Disconnecting from DB */
}

object EmployeeTable : IdTable<UUID>("employeeTable") {
    override val id = uuid("EmployeeUID").entityId().primaryKey()
}

object TimeSheetTable : IdTable<Int>("timeSheetTable") {
    override val id = integer("JobUID").primaryKey().autoIncrement().entityId()
    val employee = reference("EmployeeUID", EmployeeTable)
}

class Employee(id: EntityID<UUID>) : UUIDEntity(id) {
    companion object : UUIDEntityClass<Employee>(EmployeeTable)
}

class TimeSheet(id: EntityID<Int>) : IntEntity(id) {
    companion object : IntEntityClass<TimeSheet>(TimeSheetTable)

    var employee by Employee referencedOn TimeSheetTable.employee
}
Once the list of time sheets tries to print a row where an Employee UID doesn't match any Employee rows, it'll throw:
Exception in thread "main" java.lang.IllegalStateException: Cannot find employeeTable WHERE id=some-invalid-id-string

Spark Scala Typesafe config: safely iterate over the values of a specific column name

I have found a similar post on Stack Overflow. However, I could not solve my issue, so this is why I am writing this post.
Aim
The aim is to perform a column projection [projection = filter columns] while loading a SQL table (I use SQL Server).
According to the Scala Cookbook, this is the way to filter columns [using an Array]:
sqlContext.read.jdbc(url, "person", Array("gender='M'"), prop)
However, I do not want to hardcode Array("col1", "col2", ...) inside my Scala code; this is why I am using a Typesafe config file (see hereunder).
Config file
dataset {
  type = sql
  sql {
    url = "jdbc://host:port:user:name:password"
    tablename = "ClientShampooBusinesLimited"
    driver = "driver"
    other = "i have a lot of other single string elements in the config file..."
    columnList = [
      {
        colname = "id"
        colAlias = "identifient"
      }
      {
        colname = "name"
        colAlias = "nom client"
      }
      {
        colname = "age"
        colAlias = "âge client"
      }
    ]
  }
}
Let's focus on 'columnList': the name of the SQL column corresponds exactly to 'colname'. 'colAlias' is a field that I will use later.
data.scala file
lazy val columnList = configFromFile.getList("dataset.sql.columnList")
lazy val dbUrl = configFromFile.getString("dataset.sql.url")
lazy val DbTableName = configFromFile.getString("dataset.sql.tablename")
lazy val DriverName = configFromFile.getString("dataset.sql.driver")
configFromFile is created by myself in another custom class, but this does not matter. The type of columnList is "ConfigList"; this type comes from Typesafe Config.
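For reference, a minimal sketch of how such a Config might be produced with Typesafe Config (the custom class is not shown in the question, and the file name dataset.conf is an assumption):

import com.typesafe.config.{Config, ConfigFactory}
import java.io.File

// Parse the HOCON file shown above and resolve any substitutions
val configFromFile: Config = ConfigFactory.parseFile(new File("dataset.conf")).resolve()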
main file
def loadDataSQL(): DataFrame = {
  val url = datasetConfig.dbUrl
  val dbTablename = datasetConfig.DbTableName
  val dbDriver = datasetConfig.DriverName
  val columns = // I need help to solve this

  /* EDIT 2 March 2017
     This code should not be used. Have a look at the accepted answer.
  */
  sparkSession.read.format("jdbc").options(
    Map("url" -> url,
        "dbtable" -> dbTablename,
        "predicates" -> columns,
        "driver" -> dbDriver))
    .load()
}
So my whole issue is to extract the 'colname' values in order to put them in a suitable array. Can someone help me write the right operand of 'val columns'?
Thanks
If you're looking for a way to read the list of colname values into a Scala Array - I think this does it:
import scala.collection.JavaConverters._
val columnList = configFromFile.getConfigList("dataset.sql.columnList")
val colNames: Array[String] = columnList.asScala.map(_.getString("colname")).toArray
With the supplied file this would result in Array(id, name, age)
EDIT:
As to your actual goal, I actually don't know of any option named predicates (nor can I find evidence for one in the sources, using Spark 2.0.2).
JDBC Data Source performs "projection pushdown" based on the actual columns selected in the query used. In other words - only selected columns would be read from DB, so you can use the colNames array in a select immediately following the DF creation, e.g.:
import org.apache.spark.sql.functions._

sparkSession.read
  .format("jdbc")
  .options(Map("url" -> url, "dbtable" -> dbTablename, "driver" -> dbDriver))
  .load()
  .select(colNames.map(col): _*) // selecting only desired columns
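Since the config also carries a colAlias for each column, the same pattern can rename the columns while selecting them. A sketch under the same assumptions (the config file from the question, df being the DataFrame loaded above):

import scala.collection.JavaConverters._
import org.apache.spark.sql.functions.col

// Build one Column per config entry: select colname, rename it to colAlias
val selectCols = configFromFile
  .getConfigList("dataset.sql.columnList").asScala
  .map(c => col(c.getString("colname")).as(c.getString("colAlias")))

df.select(selectCols: _*)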

Slick MSSQL inserting object with auto increment

I've recently had to move a project over from MySQL to MSSQL. I'm using IDENTITY(1,1) on my id columns for my tables to match MySQL's auto-increment feature.
When I try to insert an object though, I'm getting this error:
[SQLServerException: Cannot insert explicit value for identity column in table 'categories' when IDENTITY_INSERT is set to OFF.]
Now after some research I found out that it's because I'm trying to insert a value for my id(0) on my tables. So for example I have an object Category
case class Category(
  id: Long = 0L,
  name: String
)

object Category extends Table[Category]("categories") {
  def name = column[String]("name", O.NotNull)
  def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
  def * = id ~ name <> (Category.apply _, Category.unapply _)

  def add(model: Category) = withSession { implicit session =>
    Category.insert(model)
  }

  def remove(id: Long) = withSession { implicit session =>
    try { Some(Query(Category).filter(_.id === id).delete) }
    catch { case _ => None }
  }
}
Is there a way to insert my object into the database, ignoring the 0L, without MSSQL throwing an SQLException? MySQL would just ignore the id's value and do the increment as if it hadn't received an id.
I'd really rather not create a new case class with everything but the id.
Try redefining your add method like this and see if it works for you:
def add(model: Category) = withSession { implicit session =>
  Category.name.insert(model.name)
}
If you had more columns then you could have added a forInsert projection to your Category table class that specified all fields except id, but since you don't, this should work instead.
EDIT
Now if you do have more than 2 fields on your table objects, then you can do something like this, which is described in the Lifted Embedding documentation here:
case class Category(
  id: Long = 0L,
  name: String,
  foo: String
)

object Category extends Table[Category]("categories") {
  def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
  def name = column[String]("name", O.NotNull)
  def foo = column[String]("foo", O.NotNull)
  def * = id ~ name ~ foo <> (Category.apply _, Category.unapply _)
  def forInsert = name ~ foo <> (
    t => Category(0L, t._1, t._2),
    (c: Category) => Some((c.name, c.foo)))

  def add(model: Category) = withSession { implicit session =>
    Category.forInsert insert model
  }

  def remove(id: Long) = withSession { implicit session =>
    try { Some(Query(Category).filter(_.id === id).delete) }
    catch { case _ => None }
  }

  def withSession(f: Session => Unit) {
  }
}
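A hedged usage sketch: with the forInsert projection, the 0L placeholder id is simply never sent to the database, so MSSQL's IDENTITY is free to assign the real value:

// id is omitted by forInsert, so the IDENTITY column fills it in
Category.add(Category(name = "books", foo = "bar"))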
