I am creating a bunch of case classes in Scala which I will use to write to a database. The columns in the database have certain restrictions (length, type, null/not null, etc.). How can I enforce the length restrictions on my case class fields without checking every field one by one?
This is how you can put the restrictions on the fields of a case class:
object Solution1 extends App {
  case class Payload(name: String, id: Int, address: String) {
    require(name.length < 10, "name must be shorter than 10 characters")
    require(address.length <= 50, "address must be at most 50 characters")
  }

  println(Payload("name5678910", 120, "earth")) // this will throw an IllegalArgumentException
  println(Payload("name", 121, "earth"))
}
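If you would rather not have construction throw at runtime, a common alternative is a validating factory method that returns an Either. Here is a minimal, untested sketch along those lines (the createPayload helper and its error messages are my own, not part of the original answer):

object Solution2 extends App {
  case class Payload(name: String, id: Int, address: String)

  // validating factory: returns Left with a message instead of throwing
  def createPayload(name: String, id: Int, address: String): Either[String, Payload] =
    if (name.length >= 10) Left("name must be shorter than 10 characters")
    else if (address.length > 50) Left("address must be at most 50 characters")
    else Right(Payload(name, id, address))

  println(createPayload("name5678910", 120, "earth")) // Left(name must be shorter than 10 characters)
  println(createPayload("name", 121, "earth"))        // Right(Payload(name,121,earth))
}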
I have this NUMERIC(20) field on MSSQL and I'm trying to read it as a string data type to get the full number, because of integer limits in JavaScript:
code: {
  type: DataTypes.STRING
}
But the returned value is a truncated integer:
[ {code: 4216113112911594000 } ]
No matter what data type I choose, it returns a truncated integer.
The original value is 4216113112911594192. This is a Java UUID.
How can I convert this value using model.findAll()?
This is an already-created table for another application, and I'm trying to read it using Sequelize.
Here you go:
code: {
  type: Sequelize.INTEGER, // <------ keep it as the DB's data type
  get() { // <------ use a getter method to modify the output of the query
    return this.getDataValue('code').toString();
  }
}
I think this might help you:
model.findOne({
  attributes: [[db.sequelize.literal('cast(`code` as varchar)'), 'code_string']],
  where: { id: YOUR_ID }
}).then(data => {
  console.log(data); // <------ check your output here
})
I have not fully understood what you are trying to achieve, nor is my code tested.
Idea 1: convert the value into a float.
Idea 2: append an alphabetic character to the original integer value before sending it to JavaScript.
The original value is 4216113112911594192, so the string becomes 'A' + '4216113112911594192' = 'A4216113112911594192'. Now you can work with 'A4216113112911594192'.
There are many JavaScript libraries that support big integers, for example BigInteger.
This worked for Postgres (thanks Bessonov & vitaly-t!)
Just add this before you initialize Sequelize:
import pg from 'pg'
// Parse bigints and bigint arrays
pg.types.setTypeParser(20, BigInt) // Type Id 20 = BIGINT | BIGSERIAL
const parseBigIntArray = pg.types.getTypeParser(1016) // 1016 = Type Id for arrays of BigInt values
pg.types.setTypeParser(1016, (a) => parseBigIntArray(a).map(BigInt))
A little background:
I didn't want Sequelize to parse bigint columns as strings, so I added pg.defaults.parseInt8 = true. That made pg start parsing bigints as regular numbers, but it truncated numbers that were too big. I ended up going with the above solution.
I'm trying to test a single API that internally does different things based on inputs:
country
customer
amount of items
The following simulation is what I came up with:
val countries = List("US", "CAN")
val customerTypes = List("TYPE1", "TYPE2")
val basketSizes = List(1, 10, 50)
val scenarioGenerator: Seq[(String, String, Int)] = for {
  country <- countries
  customerType <- customerTypes
  basketSize <- basketSizes
} yield (country, customerType, basketSize)

def scenarios(): Seq[PopulationBuilder] = {
  val scenarioList = new ArraySeq[PopulationBuilder](countries.size * customerTypes.size * basketSizes.size)
  var i = 0
  for ((country: String, customerType: String, basketSize: Int) <- scenarioGenerator) {
    // fetch customer data for the scenario
    val customers = DataFetcher.customerRequest(country, customerType)
    // fetch product data for the scenario
    val products = DataFetcher.productRequest(country)
    // generate a scenario with the given data and parameters
    val scen = scenario(s"Pricing-(${country},${customerType},${basketSize})")
      // feeder that creates the request object for the gatling user
      .feed(new PricingFeeder(country, customers, products, basketSize))
      .repeat(10) {
        exec(Pricing.price)
          .pause(500 milliseconds)
      }
      .inject(
        rampUsers(10) over (10 seconds)
      )
    scenarioList(i) = scen
    i = i + 1
  }
  scenarioList
}

setUp(scenarios(): _*).protocols(httpProto)
This is run with the Maven plugin (and tracked in Jenkins using the Gatling plugin), but it results in a single tracked case: Pricing. This is useless, as even the item count alone will cause a near-linear increase in response time.
The simulation.log has the data for each scenario type, but the out-of-the-box reporting handles it as a single type of query and merges all the results into a single graph, which means it's impossible to see whether a certain combination causes a spike due to a calculation or data bug.
I'd like to get separate metrics for each of the combinations, so it would be easy to see, for example, that a code or data change in the API caused a latency spike in the Pricing-(US,TYPE1,50) scenario.
What is the idiomatic way of achieving this with Gatling? I don't want to create a simulation for each case, as that would be a nightmare to manage (getting rid of manually managed data and Jenkins jobs with JMeter is what we are trying to achieve).
First of all, it is not good practice to run so many scenarios in one simulation: they run in parallel, not sequentially, so you should be sure that is what you want.
If so, you can use the fact that the Gatling report can show graphs per group. Wrap all your requests in a group named after the parameters; that way, in the detailed view of the report, you will be able to select which group to show, e.g.:
val singleScenario = scenario(s"Pricing-(${country},${customerType},${basketSize})")
  .group(s"Pricing-(${country},${customerType},${basketSize})") {
    feed(new PricingFeeder(country, customers, products, basketSize))
      .repeat(10) {
        exec(Pricing.price)
          .pause(500 milliseconds)
      }
  }
If you do not need all scenarios to run in parallel and want separate reports for separate scenarios, the best way is to implement the simulation as a parametrized abstract class and add a separate subclass for each parameter set, since in Gatling one simulation equals one report, e.g.:
package com.performance.project.simulations
import io.gatling.core.Predef._
import scala.concurrent.duration._
class UsType1Simulation1 extends ParametrizedSimulation("US", "TYPE1", 1)
class UsType1Simulation10 extends ParametrizedSimulation("US", "TYPE1", 10)
class UsType1Simulation50 extends ParametrizedSimulation("US", "TYPE1", 50)
class UsType2Simulation1 extends ParametrizedSimulation("US", "TYPE2", 1)
class UsType2Simulation10 extends ParametrizedSimulation("US", "TYPE2", 10)
class UsType2Simulation50 extends ParametrizedSimulation("US", "TYPE2", 50)
class CanType1Simulation1 extends ParametrizedSimulation("CAN", "TYPE1", 1)
class CanType1Simulation10 extends ParametrizedSimulation("CAN", "TYPE1", 10)
class CanType1Simulation50 extends ParametrizedSimulation("CAN", "TYPE1", 50)
class CanType2Simulation1 extends ParametrizedSimulation("CAN", "TYPE2", 1)
class CanType2Simulation10 extends ParametrizedSimulation("CAN", "TYPE2", 10)
class CanType2Simulation50 extends ParametrizedSimulation("CAN", "TYPE2", 50)
sealed abstract class ParametrizedSimulation(country: String, customerType: String, basketSize: Int) extends Simulation {
  val customers = DataFetcher.customerRequest(country, customerType)
  val products = DataFetcher.productRequest(country)

  val singleScenario = scenario(s"Pricing-(${country},${customerType},${basketSize})")
    .feed(new PricingFeeder(country, customers, products, basketSize))
    .repeat(10) {
      exec(Pricing.price)
        .pause(500 milliseconds)
    }
    .inject(
      rampUsers(10) over (10 seconds)
    )

  setUp(singleScenario).protocols(httpProto)
}
Of course, this makes sense only for a small number of combinations; with hundreds of them it will get messy.
Let's say you have a publisher using broadcast with some fast and some slow subscribers, and you would like to be able to drop sets of messages for the slow subscribers without having to keep them in memory. The data consists of chunked ByteStrings, so dropping a single ByteString is not an option.
Each set of ByteStrings is followed by a terminator ByteString("\n"), so I would need to drop a set of ByteStrings ending with that.
Is that something you can do with a custom graph stage? Can it be done without aggregating and keeping the whole set in memory?
Avoid Custom Stages
Whenever possible, try to avoid custom stages: they are very tricky to get correct, as well as being pretty verbose. Usually some combination of the standard akka-stream stages and plain old functions will do the trick.
Group Dropping
Presumably you have some criterion that you will use to decide which group of messages will be dropped:
type ShouldDropTester = () => Boolean
For demonstration purposes I will use a simple switch that drops every other group:
val dropEveryOther : ShouldDropTester =
  Iterator.from(1)
    .map(_ % 2 == 0)
    .next
We will also need a function that takes a ShouldDropTester and uses it to determine whether an individual ByteString should be dropped:
val endOfFile = ByteString("\n")

val dropGroupPredicate : ShouldDropTester => ByteString => Boolean =
  (shouldDropTester) => {
    var dropGroup = shouldDropTester()
    (byteString) =>
      if (byteString equals endOfFile) {
        val returnValue = dropGroup
        dropGroup = shouldDropTester()
        returnValue
      } else {
        dropGroup
      }
  }
Combining the above two functions will drop every other group of ByteStrings. This functionality can then be converted into a Flow:
val filterPredicateFunction : ByteString => Boolean =
  dropGroupPredicate(dropEveryOther)

// filterNot, because the predicate answers "should this element be dropped?"
val dropGroups : Flow[ByteString, ByteString, _] =
  Flow[ByteString] filterNot filterPredicateFunction
As required, the groups of messages do not need to be buffered; the predicate works on individual ByteStrings and therefore consumes a constant amount of memory regardless of stream size.
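To illustrate how this plugs into a stream, here is a minimal, untested wiring sketch; the ActorSystem setup and the sample chunks are my own assumptions, not part of the answer:

import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Sink, Source}
import akka.util.ByteString

object DropGroupsExample extends App {
  implicit val system: ActorSystem = ActorSystem("drop-groups")
  implicit val materializer: ActorMaterializer = ActorMaterializer()

  // two groups of chunks, each terminated by ByteString("\n")
  val chunks = Source(List(
    ByteString("group1-chunk1"), ByteString("group1-chunk2"), ByteString("\n"),
    ByteString("group2-chunk1"), ByteString("\n")
  ))

  // groups whose drop flag is set are filtered out element by element
  chunks.via(dropGroups).runWith(Sink.foreach(bs => println(bs.utf8String)))
}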
I am trying to get a Seq[String] containing the field names of a case class, and another Seq[String] containing the values of the case class, in a generic way. I think I will have to map the values with a Poly1 function to turn an arbitrary type into a String.
But right now, I'm not able to extract the keys and values from a LabelledGeneric.
def apply[T, R <: HList](value: T)(implicit gen: LabelledGeneric.Aux[T, R],
                                   keys: Keys[R],
                                   valuesR: Values[R]) {
  val hl = gen.to(value)
  val keys = hl.keys ...
  val values = hl.values.map ...
}
I'm not sure whether I have to ask for the Keys and Values implicits, or whether it's possible to get these from the LabelledGeneric alone.
I have tried to map the following Poly over the keys to get an HList of Strings, but it seems the keys are not Witnesses:
object PolyWitnesToString extends Poly1 {
  implicit def witnessCase = at[Witness]{ w => w.toString }
}
I'm a little bit lost now.
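For what it's worth, a common pattern for this is shapeless's record syntax, which exposes the keys and values of a LabelledGeneric representation directly. A minimal, untested sketch (the Person example and the toStr Poly are made up for illustration):

import shapeless._
import shapeless.record._

case class Person(name: String, age: Int)

// Poly1 that turns any value into a String
object toStr extends Poly1 {
  implicit def default[A]: Case.Aux[A, String] = at[A](_.toString)
}

val gen  = LabelledGeneric[Person]
val repr = gen.to(Person("Ada", 36))

// keys come from the record ops (backed by the Keys type class)
val fieldNames: List[String] = repr.keys.toList.map(_.name)   // List("name", "age")
// values are mapped to Strings with the Poly1, then converted to a List
val fieldValues: List[String] = repr.values.map(toStr).toList // List("Ada", "36")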
I'm struggling with Slick's lifted embedding and mapped tables. The API feels strange to me, maybe just because it is structured in a way that's unfamiliar to me.
I want to build a Task/Todo-List. There are two entities:
Task: Each task has an optional reference to the next task; that way, a linked list is built. The intention is that the user can order the tasks by priority. This order is represented by the references from task to task.
TaskList: Represents a TaskList with a label and a reference to the first Task of the list.
case class Task(id: Option[Long], title: String, nextTask: Option[Task])
case class TaskList(label: String, firstTask: Option[Task])
Now I tried to write a data access object (DAO) for these two entities.
import scala.slick.driver.H2Driver.simple._
import slick.lifted.MappedTypeMapper

implicit val session: Session = Database.threadLocalSession

val queryById = Tasks.createFinderBy(t => t.id)

def task(id: Long): Option[Task] = queryById(id).firstOption

private object Tasks extends Table[Task]("TASKS") {
  def id = column[Long]("ID", O.PrimaryKey, O.AutoInc)
  def title = column[String]("TITLE")
  def nextTaskId = column[Option[Long]]("NEXT_TASK_ID")
  def nextTask = foreignKey("NEXT_TASK_FK", nextTaskId, Tasks)(_.id)
  def * = id ~ title ~ nextTask <> (Task, Task.unapply _)
}

private object TaskLists extends Table[TaskList]("TASKLISTS") {
  def label = column[String]("LABEL", O.PrimaryKey)
  def firstTaskId = column[Option[Long]]("FIRST_TASK_ID")
  def firstTask = foreignKey("FIRST_TASK_FK", firstTaskId, Tasks)(_.id)
  def * = label ~ firstTask <> (Task, Task.unapply _)
}
Unfortunately it does not compile. The problems are in the * projections of both tables, at nextTask and firstTask respectively:
could not find implicit value for evidence parameter of type scala.slick.lifted.TypeMapper[scala.slick.lifted.ForeignKeyQuery[SlickTaskRepository.this.Tasks.type,justf0rfun.bookmark.model.Task]]
could not find implicit value for evidence parameter of type scala.slick.lifted.TypeMapper[scala.slick.lifted.ForeignKeyQuery[SlickTaskRepository.this.Tasks.type,justf0rfun.bookmark.model.Task]]
I tried to solve that with the following TypeMapper, but that does not compile either.
implicit val taskMapper = MappedTypeMapper.base[Option[Long], Option[Task]](
  option => option match {
    case Some(id) => task(id)
    case _ => None
  },
  option => option match {
    case Some(task) => task.id
    case _ => None
  })
could not find implicit value for parameter tm: scala.slick.lifted.TypeMapper[Option[justf0rfun.bookmark.model.Task]]
not enough arguments for method base: (implicit tm: scala.slick.lifted.TypeMapper[Option[justf0rfun.bookmark.model.Task]])scala.slick.lifted.BaseTypeMapper[Option[Long]]. Unspecified value parameter tm.
Main question: How do I use Slick's lifted embedding and mapped tables the right way? How do I get this to work?
Thanks in advance.
The short answer is: use ids instead of object references and use Slick queries to dereference the ids. You can put the queries into methods for re-use; see the sketch after the case classes below.
That would make your case classes look like this:
case class Task(id: Option[Long], title: String, nextTaskId: Option[Long])
case class TaskList(label: String, firstTaskId: Option[Long])
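For illustration, a minimal, untested sketch of such re-usable query methods, building on the task(id) finder from the question (the helper names nextTaskOf and firstTaskOf are my own):

// dereference the optional next-task id (re-uses task(id) defined above)
def nextTaskOf(t: Task): Option[Task] =
  t.nextTaskId.flatMap(task)

// dereference the optional first-task id of a task list
def firstTaskOf(list: TaskList): Option[Task] =
  list.firstTaskId.flatMap(task)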
I'll publish an article about this topic at some point and link it here.