I am trying to connect to an MS SQL Server and execute a COUNT statement. I've gotten this far:
import scala.dbc._
import scala.dbc.Syntax._
import scala.dbc.syntax.Statement._
import java.net.URI

object MsSqlVendor extends Vendor {
  val uri = new URI("jdbc:sqlserver://173.248.X.X:Y/DataBaseName")
  val user = "XXX"
  val pass = "XXX"
  val retainedConnections = 5
  val nativeDriverClass = Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver")
  val urlProtocolString = "jdbc:sqlserver:"
}
object Main {
  def main(args: Array[String]) {
    println("Hello, world!")
    val db = new Database(MsSqlVendor)
    val count = db.executeStatement {
      select (count) from (technical)
    }
    println("%d rows counted", count)
  }
}
I get an error saying: "scala.dbc.syntax.Statement.select of type dbc.syntax.Statement.SelectZygote does not take parameters"
How do I set this up?
This can be a problem:
val count = db.executeStatement {
  select (count) from (technical)
}
The count inside the statement refers to val count itself, not to some other count. Still, there's the other problem you report: there's neither a count nor a technical definition anywhere. Maybe something is missing from wherever you found this snippet. The following does compile, though it's anyone's guess whether it does what you want:
val countx = db.executeStatement {
  select fields "count" from "technical"
}
At any rate, I thought scala.dbc was long deprecated. I can't find a deprecation notice, however, and it is still shipped in the library jar, even on trunk.
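Since scala.dbc is apparently on its way out, a plain-JDBC sketch of the same COUNT query may be a safer route. This is only a sketch: it reuses the placeholder host, port, and credentials from the question, and assumes the table is named technical as in the snippet above.

import java.sql.DriverManager

object JdbcCount {
  def main(args: Array[String]) {
    // Register Microsoft's JDBC driver, as in the Vendor object above.
    Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver")
    val conn = DriverManager.getConnection(
      "jdbc:sqlserver://173.248.X.X:Y;databaseName=DataBaseName", "XXX", "XXX")
    try {
      val rs = conn.createStatement().executeQuery("SELECT COUNT(*) FROM technical")
      if (rs.next()) println(rs.getInt(1) + " rows counted")
    } finally {
      conn.close()
    }
  }
}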
I am executing two consecutive scenarios. I need to record the current time before the start of the first scenario and then pass that value to the next scenario. Can someone please suggest how this can be implemented? Please see my code below:
def fileUpload() = foreach("${datasetIdList}", "datasetId") {
  println("File Upload Start Time::::" + Calendar.getInstance().getTime + " for datasetId ::: ${datasetId}")
  exec(http("file upload").post("/datasets/${datasetId}/uploadFile")
    .formUpload("File", "./src/test/resources/data/Scan_good.csv")
    .header("content-type", "multipart/form-data")
    .check(status is 200).check(status.saveAs("uploadStatus")))
    .exec(session => {
      if (session("uploadStatus").as[Int] == 200)
        counter += 1
      session
    })
}
def getDataSetId() = foreach("${datasetIdList}", "datasetId") {
  exec(http("get datasetId")
    .get("/datasets/${datasetId}")
    .header("content-type", "application/json")
    .check(status is 200)
  )
}
I need to record the upload start time for each iteration of datasetIdList, pass that value to the next scenario, and print it for each datasetId. Can someone please suggest how this can be implemented?
You may try using the before section:
package load

import io.gatling.core.Predef._
import io.gatling.http.Predef._

class TransferTimeSimulation extends Simulation {

  var beforeScn1Start: Long = 0L

  before {
    println("Simulation is about to start!")
    beforeScn1Start = System.currentTimeMillis()
  }

  after {
    println("Simulation is finished!")
  }

  val scn1 = scenario("Scenario 1").exec(
    http("get google")
      .get("http://google.com")
      .check(status.is(200))
  )

  val scn2 = scenario("Scenario 2")
    .exec { session =>
      println("beforeScn1Start = " + beforeScn1Start)
      session
    }

  setUp(
    scn1.inject(atOnceUsers(1))
      .andThen(scn2.inject(atOnceUsers(1)))
  )
    .protocols(http)
    .maxDuration(10)
    .assertions(
      forAll.failedRequests.count.is(0)
    )
}
For more flexibility you may also consider using lazy val initialization:
https://www.baeldung.com/scala/lazy-val
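For instance, a minimal sketch (the simulation and value names here are made up for illustration): a lazy val is initialized on first access rather than when the simulation class is constructed, so the timestamp is captured only when a scenario first reads it.

import io.gatling.core.Predef._
import io.gatling.http.Predef._

class LazyStartSimulation extends Simulation {

  // Initialized on first access, not at class-construction time, so the
  // value is fixed by whichever virtual user reads it first.
  lazy val scn1StartTime: Long = System.currentTimeMillis()

  val scn1 = scenario("Scenario 1").exec { session =>
    println("scn1StartTime = " + scn1StartTime)
    session
  }

  setUp(scn1.inject(atOnceUsers(1))).protocols(http)
}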
I have a simple Flink application to illustrate the usage of KeyedStream#max
import com.huawei.flink.time.Box
import org.apache.flink.streaming.api.TimeCharacteristic
import org.apache.flink.streaming.api.scala.{StreamExecutionEnvironment, _}

object KeyStreamMaxTest {
  val env = StreamExecutionEnvironment.getExecutionEnvironment

  def main(args: Array[String]): Unit = {
    env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime)
    env.setParallelism(1)
    env.setMaxParallelism(1)
    val ds = env.fromElements("X,Red,10", "Y,Blue,10", "Z,Black, 22", "U,Green,22", "N,Blue,25", "M,Green,23")
    val ds2 = ds.map { line =>
      val Array(name, color, size) = line.split(",")
      Box(name.trim, color.trim, size.trim.toInt)
    }.keyBy(_.color).max("size")
    ds2.print()
    env.execute()
  }
}
The output is:
Box(X,Red,10)
Box(Y,Blue,10)
Box(Z,Black,22)
Box(U,Green,22)
Box(Y,Blue,25) -- I thought this should be Box(N,Blue,25)
Box(U,Green,23)
It looks like Flink only replaces the size but keeps the name and color unchanged. What is the practical use of this behavior? I would have thought it more natural to get the whole record that has the max size.
Sometimes all you need to know is the max value, for each key, of one field. I believe max is able to provide this information while doing less work than the more generally useful maxBy, which returns the whole record that has the max size.
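For comparison, here is a minimal sketch of the maxBy variant, reusing ds and the Box case class from the question:

// Same pipeline as in the question, but maxBy keeps the whole winning
// record: the Blue maximum is then Box(N,Blue,25) instead of Box(Y,Blue,25).
val ds3 = ds.map { line =>
  val Array(name, color, size) = line.split(",")
  Box(name.trim, color.trim, size.trim.toInt)
}.keyBy(_.color).maxBy("size")
ds3.print()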
I am trying to use a cursor with Objectify and Google App Engine to return a subset of data and a cursor so that I can retrieve more data when the user is ready. I found an example here that looks exactly like what I need but I don't know how to return the final list plus the cursor. Here is the code I have:
@ApiMethod(name = "listIconThemeCursor") // https://code.google.com/p/objectify-appengine/wiki/Queries#Cursors
public CollectionResponse<IconTheme> listIconThemeCursor(@Named("cursor") String cursorStr) {
  Query<IconTheme> query = ofy().load().type(IconTheme.class).limit(10);
  if (cursorStr != null) {
    query.startAt(Cursor.fromWebSafeString(cursorStr));
  }
  List<IconTheme> result = new ArrayList<IconTheme>();
  int count = 0;
  QueryResultIterator<IconTheme> iterator = query.iterator();
  while (iterator.hasNext()) {
    IconTheme theme = iterator.next();
    result.add(theme);
    count++;
  }
  Cursor cursor = iterator.getCursor();
  String encodeCursor = cursor.toWebSafeString();
  return serial(tClass, result, encodeCursor);
}
Note that this was modified from a previous endpoint in which I returned a CollectionResponse of ALL the data. My dataset is large enough that this is no longer practical. Basically, I don't know what was in the user's serial(tClass, result, encodeCursor) function that let it be returned to the user.
There is another example here but it doesn't appear to answer my question either.
I don't quite understand what you are asking, but I see one immediate bug in your code:
query.startAt(Cursor.fromWebSafeString(cursorStr));
...should be:
query = query.startAt(Cursor.fromWebSafeString(cursorStr));
Objectify command objects are immutable, functional objects.
After a long slog, I figured out that CollectionResponse has the cursor in it :(
Here is the complete code I used incorporating the comment from stickfigure above:
@ApiMethod(name = "listIconThemeCursor", path = "get_cursor")
public CollectionResponse<IconTheme> listIconThemeCursor(@Named("cursor") String cursorStr) {
  Query<IconTheme> query = ofy().load().type(IconTheme.class)
      .filter("errors <", 10)
      .limit(10);
  if (cursorStr != null) {
    query = query.startAt(Cursor.fromWebSafeString(cursorStr));
  }
  List<IconTheme> result = new ArrayList<IconTheme>();
  QueryResultIterator<IconTheme> iterator = query.iterator();
  while (iterator.hasNext()) {
    IconTheme theme = iterator.next();
    result.add(theme);
  }
  Cursor cursor = iterator.getCursor();
  CollectionResponse<IconTheme> response = CollectionResponse.<IconTheme>builder()
      .setItems(result)
      .setNextPageToken(cursor.toWebSafeString())
      .build();
  return response;
}
I am trying to connect to an MSSQL database using the Slick framework. The following code shows my first attempt, but I can't figure out what is wrong.
This error occurs when I leave the code as shown below:
[1] value create is not a member of scala.slick.lifted.DDL
If I delete that line, since I do not necessarily need to create the table from within my Scala code, another error arises:
[2] value map is not a member of object asd.asd.App.Coffees
package asd.asd

import scala.slick.driver.SQLServerDriver._
import scala.slick.session.Database.threadLocalSession

object App {

  object Coffees extends Table[(String, Int, Double)]("COFFEES") {
    def name = column[String]("COF_NAME", O.PrimaryKey)
    def supID = column[Int]("SUP_ID")
    def price = column[Double]("PRICE")
    def * = name ~ supID ~ price
  }

  def main(args: Array[String]) {
    println("Hello World!")
    val db = slick.session.Database.forURL(url = "jdbc:jtds:sqlserver", user = "test", password = "test", driver = "scala.slick.driver.SQLServerDriver")
    db withSession {
      Coffees.ddl.create // [1]
      // Coffees.insertAll(
      //   ("Colombian", 101, 7.99),
      //   ("Colombian_Decaf", 101, 8.99),
      //   ("French_Roast_Decaf", 49, 9.99)
      // )
      val q = for {
        c <- Coffees // [2]
      } yield (c.name, c.price, c.supID)
      println(q.selectStatement)
      q.foreach { case (n, p, s) => println(n + ": " + p) }
    }
  }
}
Problem solved. What I did was the following: update to the latest Slick version and then adjust the code as demonstrated here. Afterwards you need to exchange the line
import scala.slick.driver.H2Driver.simple._
with
import scala.slick.driver.SQLServerDriver.simple._
and modify the connection string to:
[...]
Database.forURL("jdbc:jtds:sqlserver://localhost:1433/<DB>;instance=<INSTANCE>", driver = "net.sourceforge.jtds.jdbc.Driver") withSession {
[...]
After this worked, I decided to use a c3p0 pooled connection (which makes Slick much faster and thus actually usable; I highly recommend connection pooling!). This left me with the following database object:
package utils

import scala.slick.driver.SQLServerDriver.simple._
import com.mchange.v2.c3p0.ComboPooledDataSource

object DatabaseUtils {

  private val ds = new ComboPooledDataSource
  ds.setDriverClass("net.sourceforge.jtds.jdbc.Driver") // the JDBC driver class, not the Slick driver
  ds.setUser("supervisor")
  ds.setPassword("password1")
  ds.setMaxPoolSize(20)
  ds.setMinPoolSize(3)
  ds.setTestConnectionOnCheckin(true)
  ds.setIdleConnectionTestPeriod(300)
  ds.setMaxIdleTimeExcessConnections(240)
  ds.setAcquireIncrement(1)
  ds.setJdbcUrl("jdbc:jtds:sqlserver://localhost:1433/db_test;instance=SQLEXPRESS")
  ds.setPreferredTestQuery("SELECT 1")

  private val _database = Database.forDataSource(ds)

  def database = _database
}
You can use this as shown below.
DatabaseUtils.database withSession { implicit session =>
  [...]
}
Finally, the Maven dependencies for c3p0 and the latest Slick version:
<dependency>
  <groupId>com.typesafe.slick</groupId>
  <artifactId>slick_2.10</artifactId>
  <version>2.0.0-M3</version>
</dependency>
<dependency>
  <groupId>com.mchange</groupId>
  <artifactId>c3p0</artifactId>
  <version>0.9.2.1</version>
</dependency>
I can't find a string result contained in a column. Here is the table:
object Equivalences extends Table[(Option[Int], String, String)]("EQUIVALENCES") {
  def id = column[Int]("EQV_ID", O.PrimaryKey, O.AutoInc)
  def racine = column[String]("RAC")
  def definition = column[String]("DEF")
  def * = id.? ~ racine ~ definition
}
And here is the wrong code:
def ajoute_si_racine_absente(racine_ajoutée: String, definition_ajoutée: String) = {
  val cul = Query(Equivalences).filter(
    equ => {
      println(equ.racine)
      equ.racine.toString.contains(racine_ajoutée)
    })
  if (cul.list().length == 0) {
    Equivalences.insert(None, racine_ajoutée, definition_ajoutée)
  }
}
The code aims to insert a value if it does not already exist, but the println inside displays "(EQUIVALENCES #409303125).RAC", which does not match the column's content. Maybe I should use the getResult method, but I did not find any example on the net. Thanks.
Karol S is right. This does what you want:
def ajoute_si_racine_absente(racine_ajoutée: String, definition_ajoutée: String) = {
  // After .list() the rows are plain (Option[Int], String, String) tuples,
  // so the racine column is the second element.
  val cul = Query(Equivalences).list().filter {
    case (_, racine, _) =>
      println(racine)
      racine.contains(racine_ajoutée)
  }
  if (cul.length == 0) {
    Equivalences.insert(None, racine_ajoutée, definition_ajoutée)
  }
}
But it may not be efficient, because you fetch the complete table. Slick is a query builder with a collection-like API. Everything you write just builds up a query description until you finally call .list or .run; only then is the query executed. Everything before that is just placeholder objects representing tables, queries, and columns, which is why the placeholder object for the column racine prints as "(EQUIVALENCES #409303125).RAC".
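If you want the check to stay in the database, something like this should work (a sketch against the same Slick 1.x lifted embedding as the question; the LIKE pattern is my assumption about how loose the match should be):

def ajoute_si_racine_absente(racine_ajoutée: String, definition_ajoutée: String) = {
  // LIKE is translated to SQL and runs server-side, so only matching rows
  // are fetched instead of the whole table.
  val existantes = Query(Equivalences).filter(_.racine like "%" + racine_ajoutée + "%")
  if (existantes.list().isEmpty) {
    Equivalences.insert(None, racine_ajoutée, definition_ajoutée)
  }
}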