Can't execute a transaction block while working with a database (Kotlin, Exposed)

I am trying to run transactions against a PostgreSQL database (the table exists) using the Exposed framework for Kotlin, but an error occurs that prevents this. The error is raised on the line SchemaUtils.create(tableTest).
Source code:
import org.jetbrains.exposed.dao.id.IntIdTable
import org.jetbrains.exposed.sql.*
import org.jetbrains.exposed.sql.transactions.transaction

fun main(args: Array<String>) {
    val db = Database.connect("jdbc:postgresql://localhost:5432/testBase", driver = "org.postgresql.Driver", user = "user", password = "123")
    println("Database name: ${db.name}")
    transaction {
        addLogger(StdOutSqlLogger)
        SchemaUtils.create(tableTest)
        println("People: ${tableTest.selectAll()}")
    }
}
object tableTest : Table() {
    val id = integer("id")
    val name = text("name")
    val surname = text("surname")
    val height = integer("height")
    val phone = text("phone")

    override val primaryKey = PrimaryKey(id)
}
The error:
Exception in thread "main" java.lang.ExceptionInInitializerError
    at MainKt$main$1.invoke(main.kt:12)
    at MainKt$main$1.invoke(main.kt)
    at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt$inTopLevelTransaction$1.invoke(ThreadLocalTransactionManager.kt:170)
    at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt$inTopLevelTransaction$2.invoke(ThreadLocalTransactionManager.kt:211)
    at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt.keepAndRestoreTransactionRefAfterRun(ThreadLocalTransactionManager.kt:219)
    at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt.inTopLevelTransaction(ThreadLocalTransactionManager.kt:210)
    at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt$transaction$1.invoke(ThreadLocalTransactionManager.kt:148)
    at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt.keepAndRestoreTransactionRefAfterRun(ThreadLocalTransactionManager.kt:219)
    at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt.transaction(ThreadLocalTransactionManager.kt:120)
    at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt.transaction(ThreadLocalTransactionManager.kt:118)
    at org.jetbrains.exposed.sql.transactions.ThreadLocalTransactionManagerKt.transaction$default(ThreadLocalTransactionManager.kt:117)
    at MainKt.main(main.kt:10)
Caused by: java.lang.IllegalStateException: javaClass.`package` must not be null
    at org.jetbrains.exposed.sql.Table.<init>(Table.kt:306)
    at org.jetbrains.exposed.sql.Table.<init>(Table.kt:303)
    at tableTest.<init>(main.kt:30)
    at tableTest.<clinit>(main.kt:30)
    ... 12 more
build.gradle.kts:
import org.jetbrains.kotlin.gradle.tasks.KotlinCompile

plugins {
    kotlin("jvm") version "1.4.0"
    application
}

group = "me.amd"
version = "1.0-SNAPSHOT"

repositories {
    mavenCentral()
    jcenter()
}

dependencies {
    testImplementation(kotlin("test-junit"))
    implementation("org.jetbrains.exposed", "exposed-core", "0.26.2")
    implementation("org.jetbrains.exposed", "exposed-dao", "0.26.2")
    implementation("org.jetbrains.exposed", "exposed-jdbc", "0.26.2")
    implementation("org.postgresql:postgresql:42.2.16")
    implementation("org.slf4j", "slf4j-api", "1.7.25")
    implementation("org.slf4j", "slf4j-simple", "1.7.25")
    implementation("org.xerial:sqlite-jdbc:3.30.1")
}

tasks.withType<KotlinCompile>() {
    kotlinOptions.jvmTarget = "1.8"
}

application {
    mainClassName = "MainKt"
}
I tried the following:

transaction {
    addLogger(StdOutSqlLogger)
    val schema = Schema("tableTest", authorization = "postgres", password = "123456")
    SchemaUtils.setSchema(schema)
    println("People: ${tableTest.selectAll()}")
}
but the error just moved to the line println("People: ${tableTest.selectAll()}"). I also tried sending the queries to SQLite instead, and the result is exactly the same. How can I fix this error and actually query the database? I hope for your help!

Add a package statement above your import statements. Furthermore, add your main method in a class. The IllegalStateException: javaClass.`package` must not be null is thrown because your file lives in the default (unnamed) package, where Class.getPackage() returns null when Exposed's Table constructor tries to derive the table name from the tableTest object.
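For illustration, a minimal sketch of the package fix; the package name me.amd and the trimmed column list are assumptions, not the exact code from the question:

package me.amd

import org.jetbrains.exposed.sql.*
import org.jetbrains.exposed.sql.transactions.transaction

// With a package statement, javaClass.`package` is no longer null, so Exposed
// can derive the table name when the tableTest singleton is initialized.
object tableTest : Table() {
    val id = integer("id")
    val name = text("name") // other columns as in the question
    override val primaryKey = PrimaryKey(id)
}

fun main() {
    Database.connect(
        "jdbc:postgresql://localhost:5432/testBase",
        driver = "org.postgresql.Driver",
        user = "user",
        password = "123"
    )
    transaction {
        addLogger(StdOutSqlLogger)
        SchemaUtils.create(tableTest)
        // selectAll() returns a Query object, not the row data;
        // iterating it executes the query and yields ResultRow values.
        tableTest.selectAll().forEach { row -> println(row[tableTest.name]) }
    }
}

Note that printing tableTest.selectAll() directly does not print the rows; iterate the returned Query, as above, to see the results.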

Related

How to use Terraform to enable a managed private endpoint on a Data Factory Azure SQL Database linked service

I am trying to use Terraform to create ADF linked services. However, the Terraform resource doesn't give the option to select an already existing managed private endpoint for the linked service to communicate over, whereas this is possible when creating it from the portal. Below is my code:
resource "azurerm_data_factory" "process-adf" {
resource_group_name = module.resourcegroup.resource_group.name
location = module.resourcegroup.resource_group.location
name = "adf"
managed_virtual_network_enabled = true
public_network_enabled = false
tags = var.tags
identity {
type = "SystemAssigned"
}
}
resource "azurerm_data_factory_linked_service_azure_sql_database" "process-mssql-adf" {
name = "mssql-adf"
data_factory_id = azurerm_data_factory.process-adf.id
integration_runtime_name = azurerm_data_factory_integration_runtime_azure.adf.id
connection_string = "data source=servername;initial catalog=databasename;user id=admin;Password=password;integrated security=True;encrypt=True;connection timeout=30"
}
resource "azurerm_data_factory_managed_private_endpoint" "adf-msssql-pe" {
name = "adf"
data_factory_id = azurerm_data_factory.process-adf.id
target_resource_id = azurerm_mssql_server.process-control.id
subresource_name = "sqlServer"
}
resource "azurerm_data_factory_integration_runtime_azure" "adf" {
name = "adf"
data_factory_id = azurerm_data_factory.process-adf.id
location = module.resourcegroup.resource_group.location
virtual_network_enabled = true
}
How do I point the resource azurerm_data_factory_linked_service_azure_sql_database to the resource azurerm_data_factory_managed_private_endpoint?

Getting better performance with JpaRepository.saveAll()

I am using a REST API which is supposed to import data out of CSV files. The uploading and mapping-to-object part is working, but the saveAll() is not: it takes forever to save about 130,000 rows to the database (which is running on an MS SQL Server), and it should work with much bigger files in less time.
This is what my data class looks like:
@Entity
data class Street(
    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "streetSeq")
    @SequenceGenerator(name = "streetSeq", sequenceName = "streetSeq", allocationSize = 1)
    val id: Int,
    val name: String?,
    val municipalityId: Int?
)
And I'm just using the saveAll method. Everything in the import method works relatively fast (around 10 seconds) until the saveAll():
override fun import(file: MultipartFile) {
    val inputStream = file.inputStream
    var import: List<Street> = listOf()
    tsvReader.open(inputStream) {
        val csvContents = readAllWithHeaderAsSequence()
        val dataClasses = grass<ImportStreet>().harvest(csvContents)
        dataClasses.forEach { row ->
            import = import + toStreet(row)
        }
        println("Data Converted")
    }
    streetRepository.saveAll(import)
    inputStream.close()
}
I already tried to adjust the application.yml, but it's not making a big difference:
jpa:
  properties:
    hibernate:
      ddl-auto: update
      dialect: org.hibernate.dialect.SQLServer2012Dialect
      generate_statistics: true
      order_inserts: true
      order_updates: true
      jdbc:
        batch_size: 1000
As you can see in the implementation of SimpleJpaRepository, it doesn't do any kind of batched save; it just saves each entity one by one, which is why it is slow:
https://github.com/spring-projects/spring-data-jpa/blob/d35ee1a82bf0fdf2de2724a02619eea1cf3c98bd/src/main/java/org/springframework/data/jpa/repository/support/SimpleJpaRepository.java#L584
Assert.notNull(entities, "Entities must not be null!");

List<S> result = new ArrayList<S>();

for (S entity : entities) {
    result.add(save(entity));
}

return result;
So try to implement the batch save without spring-data-jpa, for example using spring-batch.
As the other comments/answers suggested, I tried to use spring-batch, but it didn't work out for me (mostly because I didn't know how to pull it off). After trying more things I found JdbcTemplate, which worked out perfectly for me; the inserts are now much faster than with saveAll().
override fun import(file: MultipartFile) {
    val inputStream = file.inputStream
    var import: List<Street> = listOf()
    tsvReader.open(inputStream) {
        val csvContents = readAllWithHeaderAsSequence()
        val dataClasses = grass<ImportStreet>().harvest(csvContents)
        dataClasses.forEach { row ->
            import = import + toStreet(row)
        }
        println("Data Converted")
    }
    batchInsert(import)
    inputStream.close()
}
The batchInsert method uses the jdbcTemplate.batchUpdate() function:
fun batchInsert(streets: List<Street>): IntArray? {
    return jdbcTemplate.batchUpdate(
        "INSERT INTO street (name, municipalityId) VALUES (?, ?)",
        object : BatchPreparedStatementSetter {
            @Throws(SQLException::class)
            override fun setValues(ps: PreparedStatement, i: Int) {
                ps.setString(1, streets[i].name)
                streets[i].municipalityId?.let { ps.setInt(2, it) }
            }

            override fun getBatchSize(): Int {
                return streets.size
            }
        })
}
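If the input files grow further, one optional refinement (my sketch, not part of the original answer) is to cap the size of each JDBC batch by chunking the list before handing it to batchInsert:

fun batchInsertChunked(streets: List<Street>, chunkSize: Int = 1000) {
    // chunked() splits the list into sublists of at most chunkSize elements,
    // so every call to batchInsert() sends a bounded, predictable batch.
    streets.chunked(chunkSize).forEach { chunk -> batchInsert(chunk) }
}

This keeps memory per round trip flat regardless of how many rows the CSV contains.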

No checkpoint files are created in my simple application

I have the following simple Flink application running within the IDE. It takes a checkpoint every 5 seconds, and I would like the checkpoint data written to the directory file:///d:/applog/out/mycheckpoint/. However, after running the application for a while and then stopping it, I find nothing under file:///d:/applog/out/mycheckpoint/.
The code is:
import java.util.Date

import io.github.streamingwithflink.util.DateUtil
import org.apache.flink.api.common.state.{ListState, ListStateDescriptor}
import org.apache.flink.api.scala._
import org.apache.flink.runtime.state.filesystem.FsStateBackend
import org.apache.flink.runtime.state.{FunctionInitializationContext, FunctionSnapshotContext}
import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction
import org.apache.flink.streaming.api.environment.CheckpointConfig.ExternalizedCheckpointCleanup
import org.apache.flink.streaming.api.functions.source.SourceFunction
import org.apache.flink.streaming.api.scala.{DataStream, StreamExecutionEnvironment}

object SourceFunctionExample {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    env.setParallelism(4)
    env.getCheckpointConfig.setCheckpointInterval(5 * 1000)
    env.getCheckpointConfig.enableExternalizedCheckpoints(ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION)
    env.setStateBackend(new FsStateBackend("file:///d:/applog/out/mycheckpoint/"))

    val numbers: DataStream[Long] = env.addSource(new ReplayableCountSource)
    numbers.print()

    env.execute()
  }
}

class ReplayableCountSource extends SourceFunction[Long] with CheckpointedFunction {
  var isRunning: Boolean = true
  var cnt: Long = _
  var offsetState: ListState[Long] = _

  override def run(ctx: SourceFunction.SourceContext[Long]): Unit = {
    while (isRunning && cnt < Long.MaxValue) {
      ctx.getCheckpointLock.synchronized {
        // increment cnt
        cnt += 1
        ctx.collect(cnt)
      }
      Thread.sleep(200)
    }
  }

  override def cancel(): Unit = isRunning = false

  override def snapshotState(snapshotCtx: FunctionSnapshotContext): Unit = {
    println("snapshotState is called at " + DateUtil.format(new Date) + s", cnt is ${cnt}")
    // remove previous cnt
    offsetState.clear()
    // add current cnt
    offsetState.add(cnt)
  }

  override def initializeState(initCtx: FunctionInitializationContext): Unit = {
    // obtain operator list state to store the current cnt
    val desc = new ListStateDescriptor[Long]("offset", classOf[Long])
    offsetState = initCtx.getOperatorStateStore.getListState(desc)
    // initialize cnt variable from the checkpoint
    val it = offsetState.get()
    cnt = if (null == it || !it.iterator().hasNext) {
      -1L
    } else {
      it.iterator().next()
    }
    println("initializeState is called at " + DateUtil.format(new Date) + s", cnt is ${cnt}")
  }
}
I tested the application on Windows and Linux, and in both cases the checkpoint files were created as expected. Note that the program keeps running if a checkpoint fails, for example due to permission errors or an invalid path; Flink only logs a WARN message with the exception that caused the checkpoint to fail.
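If you would rather have such failures surface immediately instead of scrolling past in the log, Flink can fail the job on the first unsuccessful checkpoint. A minimal sketch in Kotlin against Flink's Java API (assuming Flink 1.9+, where this setter exists):

import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment

fun main() {
    val env = StreamExecutionEnvironment.getExecutionEnvironment()
    env.enableCheckpointing(5_000L) // checkpoint every 5 seconds
    // Tolerate zero checkpoint failures: the first failed checkpoint
    // (e.g. an unwritable state backend path) fails the whole job
    // instead of only producing a WARN log entry.
    env.checkpointConfig.tolerableCheckpointFailureNumber = 0
    // ... add the source, sinks and env.execute() as in the question
}

With that setting, a bad checkpoint directory shows up as a job failure rather than a silently empty folder.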

Terraform Error: vsphere provider doesn’t support resource

I have a small issue: my Terraform code is saying the vsphere provider does not support a vsphere_instance resource. When I run terraform plan, I get:
1 error(s) occurred:

* vsphere_instance.node1: Provider doesn't support resource: vsphere_instance
Terraform template:
provider "vsphere" {
user = "andm"
password = "Welcome123!"
vsphere_server = "vcenter1.domain.com"
allow_unverified_ssl = true
}
resource "vsphere_instance" "node1" {
name = "node1.domain.com"
vcpu = 4
memory = 4096
time_zone = "040"
domain = "hosting.domain.com"
dns_servers = ["8.8.8.8"]
disk {
datastore = "WS006_LUN_197"
vmdk = "templates_01/AV_W2K8_Tmlate/AV_W2K8_Template.vmdk"
type = "thin"
}
network_interface {
ipv4_address = "192.168.0.1"
ipv4_gateway = "192.168.1.1"
ipv4_prefix_length = "24"
}
}
Can you change the resource type from vsphere_instance to vsphere_virtual_machine? This should fix your issue.
https://www.terraform.io/docs/providers/vsphere/index.html

VMware vSphere Provider resources:

vsphere_virtual_machine
vsphere_folder
vsphere_file
vsphere_virtual_disk

Sending LDAP request with message ID

I need to send an LDAP search request with the message ID set to 0 (as part of RFC validation testing). I tried the following modified code from the Apache Directory API examples section:
import java.io.IOException;

import org.apache.directory.api.ldap.model.cursor.SearchCursor;
import org.apache.directory.api.ldap.model.entry.DefaultEntry;
import org.apache.directory.api.ldap.model.entry.ModificationOperation;
import org.apache.directory.api.ldap.model.exception.LdapException;
import org.apache.directory.api.ldap.model.exception.LdapInvalidDnException;
import org.apache.directory.api.ldap.model.exception.LdapNoPermissionException;
import org.apache.directory.api.ldap.model.message.SearchRequest;
import org.apache.directory.api.ldap.model.message.SearchRequestImpl;
import org.apache.directory.api.ldap.model.name.Dn;
import org.apache.directory.ldap.client.api.LdapConnection;
import org.apache.directory.ldap.client.api.LdapNetworkConnection;

public class ManageLDAPConnection {

    private static Dn getSafeSearchBaseDn(String dn) throws LdapInvalidDnException {
        Dn searchBaseDn = null;
        if (dn != null && !dn.isEmpty()) {
            searchBaseDn = new Dn(dn);
        } else {
            searchBaseDn = Dn.ROOT_DSE;
        }
        return searchBaseDn;
    }

    public static void main(String[] args) {
        int messageId = 0;
        int port = 389;
        String username = "<Admin CN>";
        String password = "<Password>";
        String hostname = "<IP>";
        SearchCursor searchResult = null;
        String dn = "<DN>";
        String filterExpr = "(objectclass=*)";
        org.apache.directory.api.ldap.model.message.SearchScope searchScopeValue =
                org.apache.directory.api.ldap.model.message.SearchScope.OBJECT;

        LdapConnection connection = new LdapNetworkConnection(hostname, port);
        try {
            connection.bind(username, password);
            System.out.println("Connected successfully");
        } catch (LdapException e) {
            System.out.println("Unable to bind");
        }

        try {
            SearchRequest searchRequest = new SearchRequestImpl();
            System.out.println(searchRequest.getMessageId());
            searchRequest.setMessageId(0);
            System.out.println(searchRequest.getMessageId());
            searchRequest.setBase(getSafeSearchBaseDn(dn));
            searchRequest.setFilter(filterExpr);
            searchRequest.setScope(searchScopeValue);
            searchResult = connection.search(searchRequest);
        } catch (LdapNoPermissionException e) {
            System.out.println("No permission exception");
        } catch (LdapException e) {
            System.out.println("LDAP Exception: " + e.getMessage());
        }
    }
}
The above code is able to send the request, but the message ID is still sent as non-zero, even though searchRequest.setMessageId(0) has been called.
You're clearly going to have to use a different library, or modify this one, or go to a lower level. It isn't at all surprising that this library prevents you from shooting yourself in the foot.
I found a solution using Python's pyasn1-modules. The following seems to work well:
from pyasn1.type import univ, namedval, namedtype, tag
from pyasn1.codec.ber import encoder
import socket
from pyasn1_modules.rfc2251 import *
ldap_bind_request = BindRequest()
ldap_bind_request.setComponentByName('version', 3)
ldap_bind_request.setComponentByName('name', 'cn=admin,o=org')
ldap_auth = AuthenticationChoice()
ldap_auth.setComponentByName('simple', 'mypwd')
ldap_bind_request.setComponentByName('authentication', ldap_auth)
ldap_message = LDAPMessage()
ldap_message.setComponentByName('messageID', 0)
ldap_message.setComponentByName('protocolOp', ldap_bind_request)
print(ldap_bind_request.prettyPrint())
print(dir(ldap_bind_request))
encoded_request = encoder.encode(ldap_message)
print(encoded_request)
asock = socket.socket()
asock.connect(('127.0.0.1', 389))
asock.send(encoded_request)
There is also something named the Java ASN.1 Compiler (JAC). I am trying to see whether it provides something similar, with less of the object-oriented complexity that is common in Java. :)
