Execute an operation n times on a Mono in Webflux with reactive MongoDB - arrays

I have the following scenario for a data loader in a WebFlux application using the reactive MongoDB driver and Spring:
create X objects of type B
create Y objects of type A: object A contains an array field and a reference to an object of type B; the reference to B is chosen randomly from the objects created in the first step
add N entries to the array of each previously created object A
The problem I am facing seems to be parallel execution of the Mono/Flux, which, from my understanding, should not happen. According to the documentation, things are always executed in sequence unless specified otherwise.
Can someone please give me a hint as to what I am doing wrong?
Here is an example code snippet. Object A is a toilet, object B is a user, and the array field is the comments field:
Flux.range(0, 10)
    // create 10 objects of type user
    .flatMap {
        LOG.debug("Creating user $it")
        userRepository.save(
            User(
                id = ObjectId(),
                name = userNames.random(),
                email = "${userNames.random()}@mail.com"
            )
        )
    }
    .collectList()
    // create 2 objects of type toilet
    .flatMapMany { userList ->
        Flux.range(0, 2).zipWith(Flux.range(0, 2).map { userList })
    }
    .flatMap {
        LOG.debug("Creating toilet ${it.t1}")
        val userList = it.t2
        toiletRepository.save(
            Toilet(
                id = ObjectId(),
                title = userList.random().name
            )
        )
    }
    // add 5 entries to array of toilet
    .flatMap { toilet ->
        Flux.range(0, 5).zipWith(Flux.range(0, 5).map { toilet })
    }
    .flatMap { tuple ->
        val toilet = tuple.t2
        LOG.debug("Creating comment ${tuple.t1} for toilet $toilet")
        // get current values from toilet
        toiletRepository.findById(toilet.id).map {
            // and push a new element to the comments array
            LOG.debug("Comment size ${it.commentRefs.size}")
            toiletRepository.save(it.apply { commentRefs.add(ObjectId()) })
        }
    }
    .subscribe {
        GlobalScope.launch {
            exitProcess(SpringApplication.exit(context))
        }
    }
Executing this code produces the following log:
2020-11-15 19:42:54.197 DEBUG 13524 --- [ main] c.g.q.t.DataLoaderRunner : Creating user 0
2020-11-15 19:42:54.293 DEBUG 13524 --- [ main] c.g.q.t.DataLoaderRunner : Creating user 1
2020-11-15 19:42:54.295 DEBUG 13524 --- [ main] c.g.q.t.DataLoaderRunner : Creating user 2
2020-11-15 19:42:54.296 DEBUG 13524 --- [ main] c.g.q.t.DataLoaderRunner : Creating user 3
2020-11-15 19:42:54.300 DEBUG 13524 --- [ main] c.g.q.t.DataLoaderRunner : Creating user 4
2020-11-15 19:42:54.301 DEBUG 13524 --- [ main] c.g.q.t.DataLoaderRunner : Creating user 5
2020-11-15 19:42:54.304 DEBUG 13524 --- [ main] c.g.q.t.DataLoaderRunner : Creating user 6
2020-11-15 19:42:54.310 DEBUG 13524 --- [ main] c.g.q.t.DataLoaderRunner : Creating user 7
2020-11-15 19:42:54.316 DEBUG 13524 --- [ main] c.g.q.t.DataLoaderRunner : Creating user 8
2020-11-15 19:42:54.318 DEBUG 13524 --- [ main] c.g.q.t.DataLoaderRunner : Creating user 9
2020-11-15 19:42:54.348 DEBUG 13524 --- [tLoopGroup-3-10] c.g.q.t.DataLoaderRunner : Creating toilet 0
2020-11-15 19:42:54.380 DEBUG 13524 --- [tLoopGroup-3-10] c.g.q.t.DataLoaderRunner : Creating toilet 1
2020-11-15 19:42:54.386 DEBUG 13524 --- [ntLoopGroup-3-8] c.g.q.t.DataLoaderRunner : Creating comment 0 for toilet Toilet(id=5fb176aef24f4c248fbb051c, title=wholesale, location=Point [x=0.000000, y=0.000000], previewID=null, averageRating=0.0, ratingRefs=[], disabled=false, toiletCrewApproved=false, description=, commentRefs=[], imageRefs=[])
2020-11-15 19:42:54.405 DEBUG 13524 --- [ntLoopGroup-3-8] c.g.q.t.DataLoaderRunner : Creating comment 1 for toilet Toilet(id=5fb176aef24f4c248fbb051c, title=wholesale, location=Point [x=0.000000, y=0.000000], previewID=null, averageRating=0.0, ratingRefs=[], disabled=false, toiletCrewApproved=false, description=, commentRefs=[], imageRefs=[])
2020-11-15 19:42:54.406 DEBUG 13524 --- [ntLoopGroup-3-8] c.g.q.t.DataLoaderRunner : Creating comment 2 for toilet Toilet(id=5fb176aef24f4c248fbb051c, title=wholesale, location=Point [x=0.000000, y=0.000000], previewID=null, averageRating=0.0, ratingRefs=[], disabled=false, toiletCrewApproved=false, description=, commentRefs=[], imageRefs=[])
2020-11-15 19:42:54.407 DEBUG 13524 --- [ntLoopGroup-3-8] c.g.q.t.DataLoaderRunner : Creating comment 3 for toilet Toilet(id=5fb176aef24f4c248fbb051c, title=wholesale, location=Point [x=0.000000, y=0.000000], previewID=null, averageRating=0.0, ratingRefs=[], disabled=false, toiletCrewApproved=false, description=, commentRefs=[], imageRefs=[])
2020-11-15 19:42:54.409 DEBUG 13524 --- [ntLoopGroup-3-8] c.g.q.t.DataLoaderRunner : Creating comment 4 for toilet Toilet(id=5fb176aef24f4c248fbb051c, title=wholesale, location=Point [x=0.000000, y=0.000000], previewID=null, averageRating=0.0, ratingRefs=[], disabled=false, toiletCrewApproved=false, description=, commentRefs=[], imageRefs=[])
2020-11-15 19:42:54.410 DEBUG 13524 --- [ntLoopGroup-3-8] c.g.q.t.DataLoaderRunner : Creating comment 0 for toilet Toilet(id=5fb176aef24f4c248fbb051d, title=imaginary, location=Point [x=0.000000, y=0.000000], previewID=null, averageRating=0.0, ratingRefs=[], disabled=false, toiletCrewApproved=false, description=, commentRefs=[], imageRefs=[])
2020-11-15 19:42:54.412 DEBUG 13524 --- [ntLoopGroup-3-8] c.g.q.t.DataLoaderRunner : Creating comment 1 for toilet Toilet(id=5fb176aef24f4c248fbb051d, title=imaginary, location=Point [x=0.000000, y=0.000000], previewID=null, averageRating=0.0, ratingRefs=[], disabled=false, toiletCrewApproved=false, description=, commentRefs=[], imageRefs=[])
2020-11-15 19:42:54.413 DEBUG 13524 --- [ntLoopGroup-3-8] c.g.q.t.DataLoaderRunner : Creating comment 2 for toilet Toilet(id=5fb176aef24f4c248fbb051d, title=imaginary, location=Point [x=0.000000, y=0.000000], previewID=null, averageRating=0.0, ratingRefs=[], disabled=false, toiletCrewApproved=false, description=, commentRefs=[], imageRefs=[])
2020-11-15 19:42:54.414 DEBUG 13524 --- [ntLoopGroup-3-8] c.g.q.t.DataLoaderRunner : Creating comment 3 for toilet Toilet(id=5fb176aef24f4c248fbb051d, title=imaginary, location=Point [x=0.000000, y=0.000000], previewID=null, averageRating=0.0, ratingRefs=[], disabled=false, toiletCrewApproved=false, description=, commentRefs=[], imageRefs=[])
2020-11-15 19:42:54.415 DEBUG 13524 --- [ntLoopGroup-3-8] c.g.q.t.DataLoaderRunner : Creating comment 4 for toilet Toilet(id=5fb176aef24f4c248fbb051d, title=imaginary, location=Point [x=0.000000, y=0.000000], previewID=null, averageRating=0.0, ratingRefs=[], disabled=false, toiletCrewApproved=false, description=, commentRefs=[], imageRefs=[])
2020-11-15 19:42:54.425 DEBUG 13524 --- [tLoopGroup-3-10] c.g.q.t.DataLoaderRunner : Comment size 0
2020-11-15 19:42:54.425 DEBUG 13524 --- [ntLoopGroup-3-8] c.g.q.t.DataLoaderRunner : Comment size 0
2020-11-15 19:42:54.425 DEBUG 13524 --- [ntLoopGroup-3-6] c.g.q.t.DataLoaderRunner : Comment size 0
2020-11-15 19:42:54.425 DEBUG 13524 --- [ntLoopGroup-3-2] c.g.q.t.DataLoaderRunner : Comment size 0
2020-11-15 19:42:54.425 DEBUG 13524 --- [ntLoopGroup-3-3] c.g.q.t.DataLoaderRunner : Comment size 0
2020-11-15 19:42:54.425 DEBUG 13524 --- [ntLoopGroup-3-9] c.g.q.t.DataLoaderRunner : Comment size 0
2020-11-15 19:42:54.425 DEBUG 13524 --- [ntLoopGroup-3-7] c.g.q.t.DataLoaderRunner : Comment size 0
2020-11-15 19:42:54.429 DEBUG 13524 --- [ntLoopGroup-3-2] c.g.q.t.DataLoaderRunner : Comment size 0
2020-11-15 19:42:54.429 DEBUG 13524 --- [ntLoopGroup-3-9] c.g.q.t.DataLoaderRunner : Comment size 0
2020-11-15 19:42:54.464 DEBUG 13524 --- [tLoopGroup-3-10] c.g.q.t.DataLoaderRunner : Comment size 0
I now have three questions:
Why does the thread switch from main to the event-loop group? If everything is executed in sequence, it should not need multi-threading at all, should it?
Why are the "Comment size" log messages grouped together at the end?
How do I correctly push elements to the array using the reactive Mongo repository implementation?
Any hints are appreciated. I assume that the nested execution of findById and save is not correct, but how would you write that differently? Since save requires an entity, I need to pass in the latest version of the entity, which contains one additional element in the array.
I try to achieve that by getting the latest version with findById and directly modifying it with 'map -> save'.
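To illustrate that last point (this sketch is my addition, not part of the original code, and it reuses the repositories and entities from the snippet above): map wraps the save in another Mono that nothing ever subscribes to, whereas flatMap makes the save part of the chain:

// map: the lambda returns a Mono, so the result is a Mono<Mono<Toilet>>;
// the inner save is only assembled and never actually executed by the chain
toiletRepository.findById(toilet.id)
    .map { toiletRepository.save(it.apply { commentRefs.add(ObjectId()) }) }

// flatMap: the inner save is subscribed as part of the pipeline, so the write
// completes before the next operator sees the updated toilet
toiletRepository.findById(toilet.id)
    .flatMap { toiletRepository.save(it.apply { commentRefs.add(ObjectId()) }) }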
Thank you all!
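As an aside on question 3 (again my addition, not part of the original post, and it assumes a ReactiveMongoTemplate bean is available next to the repositories): the read-modify-write of findById plus save can be replaced by an atomic $push update, which avoids the lost-update problem entirely because MongoDB appends the element on the server:

import org.bson.types.ObjectId
import org.springframework.data.mongodb.core.ReactiveMongoTemplate
import org.springframework.data.mongodb.core.query.Criteria
import org.springframework.data.mongodb.core.query.Query
import org.springframework.data.mongodb.core.query.Update

// Append one comment reference atomically; concurrent updates cannot overwrite each other
fun addCommentRef(template: ReactiveMongoTemplate, toiletId: ObjectId, commentId: ObjectId) =
    template.updateFirst(
        Query.query(Criteria.where("_id").`is`(toiletId)),
        Update().push("commentRefs", commentId),
        Toilet::class.java
    )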

I am not sure if this is the best way to do it, but I was able to achieve what I want by splitting the operations into functions so that they are grouped more logically.
Here are the snippets for the following operations:
create users
create comments
create ratings
private fun createUsers() = Flux.range(0, userNames.size + 1)
    .flatMap {
        if (it < userNames.size) {
            LOG.debug("Creating user $it")
            userRepository.save(
                User(
                    id = ObjectId(),
                    name = userNames[it],
                    email = "${userNames[it]}@mail.com"
                )
            )
        } else {
            LOG.debug("Creating dev-user")
            userRepository.save(
                User(
                    id = ObjectId("000000000000012343456789"),
                    name = "devuser",
                    email = "devuser@mail.com"
                )
            )
        }
    }
    .collectList()
private fun createComments(users: List<User>) = Flux.range(0, numComments)
    .flatMap {
        LOG.debug("Creating comment $it")
        commentRepository.save(
            Comment(
                id = ObjectId(),
                text = commentTexts.random(),
                userRef = users.random().id
            )
        )
    }
    .collectList()
private fun createRatings(users: List<User>) = Flux.range(0, numRatings)
    .flatMap {
        LOG.debug("Creating rating $it")
        ratingRepository.save(
            Rating(
                id = ObjectId(),
                userRef = users.random().id,
                value = Random.nextInt(0, 5)
            )
        )
    }
    .collectList()
And finally, creating the toilets with the results from above:
private fun createToilets(comments: List<Comment>, ratings: List<Rating>) = Flux.range(0, numToilets)
    .flatMap {
        val toilet = Toilet(
            id = ObjectId(),
            title = titles.random(),
            location = GeoJsonPoint(Random.nextDouble(10.0, 20.0), Random.nextDouble(45.0, 55.0)),
            description = descriptions.random()
        )
        // add comments
        val commentsToAdd = Random.nextInt(0, comments.size)
        for (i in 0 until commentsToAdd) {
            toilet.commentRefs.add(comments[i].id)
        }
        // add average rating and rating references
        val ratingsToAdd = Random.nextInt(0, ratings.size)
        for (i in 0 until ratingsToAdd) {
            toilet.ratingRefs.add(ratings[i].id)
            toilet.averageRating += ratings[i].value
        }
        if (toilet.ratingRefs.isNotEmpty()) {
            toilet.averageRating /= toilet.ratingRefs.size
        }
        LOG.debug("Creating toilet $it with $commentsToAdd comments and $ratingsToAdd ratings")
        toiletRepository.save(toilet)
    }
    // upload preview image
    .flatMap { toilet ->
        val imageName = "toilet${Random.nextInt(1, 10)}.jpg"
        imageService.store(
            Callable {
                DataLoaderRunner::class.java.getResourceAsStream("/sample-images/$imageName")
            },
            "${toilet.title}-preview"
        ).zipWith(Mono.just(toilet))
    }
    // set preview image
    .flatMap {
        val imageId = it.t1
        val toilet = it.t2
        toiletRepository.save(toilet.copy(previewID = imageId))
    }
    .collectList()
This is the final reactive operation chain:
createUsers()
    .flatMap { users ->
        createComments(users).map { comments ->
            Tuples.of(users, comments)
        }
    }
    .flatMap {
        val users = it.t1
        val comments = it.t2
        createRatings(users).map { ratings ->
            Tuples.of(comments, ratings)
        }
    }
    .flatMap {
        val comments = it.t1
        val ratings = it.t2
        createToilets(comments, ratings)
    }
    // close application when all toilets are processed
    .subscribe {
        GlobalScope.launch {
            exitProcess(SpringApplication.exit(context))
        }
    }
I am not sure if this is the best way to do it, but it works. The approach in the opening post uses nested map/flatMap operations, which should be avoided anyway, and they may be the reason why it was not working.
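For completeness, here is a small, self-contained sketch (my addition, not part of the original answer; it only needs reactor-core) of why the original chain interleaved and hopped between threads: flatMap subscribes to all of its inner publishers eagerly, while concatMap subscribes to one inner publisher at a time and preserves order:

import reactor.core.publisher.Flux
import java.time.Duration

fun main() {
    // flatMap: inner publishers are subscribed eagerly, so emissions can overtake each other
    Flux.range(0, 3)
        .flatMap { i -> Flux.just("flatMap $i").delayElements(Duration.ofMillis((3 - i) * 10L)) }
        .doOnNext(::println)
        .blockLast()

    // concatMap: strictly one inner publisher at a time, order is preserved
    Flux.range(0, 3)
        .concatMap { i -> Flux.just("concatMap $i").delayElements(Duration.ofMillis((3 - i) * 10L)) }
        .doOnNext(::println)
        .blockLast()
}

Applied to the data-loader chains above, replacing the inner flatMap calls with concatMap (and flattening the findById/save nesting with flatMap instead of map) would keep the updates strictly sequential, if that is what is needed.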

Related

Flink KafkaSink connector with exactly once semantics too many logs

Configuring a KafkaSink from the new Kafka connector API (available since Flink 1.15) with DeliveryGuarantee.EXACTLY_ONCE and a transactionalId prefix produces an excessive amount of logs each time a new checkpoint is triggered.
The logs look like this:
2022-11-02 10:04:10,124 INFO org.apache.flink.connector.kafka.sink.FlinkKafkaInternalProducer [] - Flushing new partitions
2022-11-02 10:04:10,125 INFO org.apache.kafka.clients.producer.ProducerConfig [] - ProducerConfig values:
acks = -1
batch.size = 16384
bootstrap.servers = [localhost:9092]
buffer.memory = 33554432
client.dns.lookup = use_all_dns_ips
client.id = producer-flink-1-24
compression.type = none
connections.max.idle.ms = 540000
delivery.timeout.ms = 120000
enable.idempotence = true
interceptor.classes = []
internal.auto.downgrade.txn.commit = false
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metadata.max.idle.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 2147483647
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = flink-1-24
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
2022-11-02 10:04:10,131 INFO org.apache.kafka.clients.producer.KafkaProducer [] - [Producer clientId=producer-flink-1-24, transactionalId=flink-1-24] Overriding the default enable.idempotence to true since transactional.id is specified.
2022-11-02 10:04:10,161 INFO org.apache.kafka.clients.producer.KafkaProducer [] - [Producer clientId=producer-flink-0-24, transactionalId=flink-0-24] Overriding the default enable.idempotence to true since transactional.id is specified.
2022-11-02 10:04:10,161 INFO org.apache.kafka.clients.producer.KafkaProducer [] - [Producer clientId=producer-flink-0-24, transactionalId=flink-0-24] Instantiated a transactional producer.
2022-11-02 10:04:10,162 INFO org.apache.kafka.clients.producer.KafkaProducer [] - [Producer clientId=producer-flink-0-24, transactionalId=flink-0-24] Overriding the default acks to all since idempotence is enabled.
2022-11-02 10:04:10,159 INFO org.apache.kafka.clients.producer.KafkaProducer [] - [Producer clientId=producer-flink-1-24, transactionalId=flink-1-24] Instantiated a transactional producer.
2022-11-02 10:04:10,170 INFO org.apache.kafka.clients.producer.KafkaProducer [] - [Producer clientId=producer-flink-1-24, transactionalId=flink-1-24] Overriding the default acks to all since idempotence is enabled.
2022-11-02 10:04:10,181 INFO org.apache.kafka.common.utils.AppInfoParser [] - Kafka version: 2.8.1
2022-11-02 10:04:10,184 INFO org.apache.kafka.common.utils.AppInfoParser [] - Kafka commitId: 839b886f9b732b15
2022-11-02 10:04:10,184 INFO org.apache.kafka.common.utils.AppInfoParser [] - Kafka startTimeMs: 1667379850181
2022-11-02 10:04:10,185 INFO org.apache.kafka.clients.producer.internals.TransactionManager [] - [Producer clientId=producer-flink-0-24, transactionalId=flink-0-24] Invoking InitProducerId for the first time in order to acquire a producer ID
2022-11-02 10:04:10,192 INFO org.apache.kafka.common.utils.AppInfoParser [] - Kafka version: 2.8.1
2022-11-02 10:04:10,192 INFO org.apache.kafka.common.utils.AppInfoParser [] - Kafka commitId: 839b886f9b732b15
2022-11-02 10:04:10,192 INFO org.apache.kafka.common.utils.AppInfoParser [] - Kafka startTimeMs: 1667379850192
2022-11-02 10:04:10,209 INFO org.apache.kafka.clients.producer.internals.TransactionManager [] - [Producer clientId=producer-flink-1-24, transactionalId=flink-1-24] Invoking InitProducerId for the first time in order to acquire a producer ID
2022-11-02 10:04:10,211 INFO org.apache.kafka.clients.Metadata [] - [Producer clientId=producer-flink-0-24, transactionalId=flink-0-24] Cluster ID: MCY5mzM1QWyc1YCvsO8jag
2022-11-02 10:04:10,216 INFO org.apache.kafka.clients.producer.internals.TransactionManager [] - [Producer clientId=producer-flink-0-24, transactionalId=flink-0-24] Discovered transaction coordinator ubuntu:9092 (id: 0 rack: null)
2022-11-02 10:04:10,233 INFO org.apache.kafka.clients.Metadata [] - [Producer clientId=producer-flink-1-24, transactionalId=flink-1-24] Cluster ID: MCY5mzM1QWyc1YCvsO8jag
2022-11-02 10:04:10,241 INFO org.apache.kafka.clients.producer.internals.TransactionManager [] - [Producer clientId=producer-flink-1-24, transactionalId=flink-1-24] Discovered transaction coordinator ubuntu:9092 (id: 0 rack: null)
2022-11-02 10:04:10,345 INFO org.apache.kafka.clients.producer.internals.TransactionManager [] - [Producer clientId=producer-flink-0-24, transactionalId=flink-0-24] ProducerId set to 51 with epoch 0
2022-11-02 10:04:10,346 INFO org.apache.flink.connector.kafka.sink.KafkaWriter [] - Created new transactional producer flink-0-24
2022-11-02 10:04:10,353 INFO org.apache.kafka.clients.producer.internals.TransactionManager [] - [Producer clientId=producer-flink-1-24, transactionalId=flink-1-24] ProducerId set to 52 with epoch 0
2022-11-02 10:04:10,354 INFO org.apache.flink.connector.kafka.sink.KafkaWriter [] - Created new transactional producer flink-1-24
The ProducerConfig values log is repeated for each new producer created (one per sink subtask, depending on the sink parallelism). With a checkpoint interval of 10 or 15 seconds, I lose valuable job logs among these messages.
Is there a way to disable these logs without setting the log level to WARN?

Camel reactive streams not completing when subscribed more than once

@Component
class TestRoute(
    context: CamelContext,
) : EndpointRouteBuilder() {
    val streamName: String = "news-ticker-stream"
    val logger = LoggerFactory.getLogger(TestRoute::class.java)
    val camel: CamelReactiveStreamsService = CamelReactiveStreams.get(context)
    var count = 0L
    val subscriber: Subscriber<String> =
        camel.streamSubscriber(streamName, String::class.java)

    override fun configure() {
        from("timer://foo?fixedRate=true&period=30000")
            .process {
                count++
                logger.info("Start emitting data for the $count time")
                Flux.fromIterable(
                    listOf(
                        "APPLE", "MANGO", "PINEAPPLE"
                    )
                )
                    .doOnComplete {
                        logger.info("All the data are emitted from the flux for the $count time")
                    }
                    .subscribe(
                        subscriber
                    )
            }

        from(reactiveStreams(streamName))
            .to("file:outbox")
    }
}
2022-07-07 13:01:44.626 INFO 50988 --- [1 - timer://foo] c.e.reactivecameltutorial.TestRoute : Start emitting data for the 1 time
2022-07-07 13:01:44.640 INFO 50988 --- [1 - timer://foo] c.e.reactivecameltutorial.TestRoute : All the data are emitted from the flux for the 1 time
2022-07-07 13:01:44.646 INFO 50988 --- [1 - timer://foo] a.c.c.r.s.ReactiveStreamsCamelSubscriber : Reactive stream 'news-ticker-stream' completed
2022-07-07 13:02:14.616 INFO 50988 --- [1 - timer://foo] c.e.reactivecameltutorial.TestRoute : Start emitting data for the 2 time
2022-07-07 13:02:44.610 INFO 50988 --- [1 - timer://foo] c.e.reactivecameltutorial.TestRoute : Start emitting data for the 3 time
2022-07-07 13:02:44.611 WARN 50988 --- [1 - timer://foo] a.c.c.r.s.ReactiveStreamsCamelSubscriber : There is another active subscription: cancelled
The reactive stream does not complete when it runs more than once. As you can see in the logs, the message I added in doOnComplete only appears the first time the timer route is triggered; when the timer route fires a second time, there is no completion message. I set a breakpoint in ReactiveStreamsCamelSubscriber and found that on the first run the flow goes into the onNext() and onComplete() methods, but it does not enter these methods when the timer runs a second time. I am not able to understand why this happens.

System.ArgumentException: 24100: The spatial reference identifier (SRID) is not valid. SRIDs must be between 0 and 999999

I want to save new geometry data to the database, but I keep receiving this error message:
A .NET Framework error occurred during execution of user-defined routine or aggregate "geometry".
System.ArgumentException: 24100: The spatial reference identifier (SRID) is not valid. SRIDs must be between 0 and 999999.
Logging
2019-10-02 06:00:41.009 DEBUG 55688 --- [on(2)-127.0.0.1] o.h.e.j.e.i.JdbcEnvironmentInitiator : Database ->
name : Microsoft SQL Server
version : 14.00.1000
major : 14
minor : 0
2019-10-02 06:00:41.010 DEBUG 55688 --- [on(2)-127.0.0.1] o.h.e.j.e.i.JdbcEnvironmentInitiator : Driver ->
name : Microsoft JDBC Driver 7.4 for SQL Server
version : 7.4.1.0
major : 7
minor : 4
2019-10-02 06:00:41.010 DEBUG 55688 --- [on(2)-127.0.0.1] o.h.e.j.e.i.JdbcEnvironmentInitiator : JDBC version : 4.2
2019-10-02 06:00:41.054 INFO 55688 --- [on(2)-127.0.0.1] org.hibernate.dialect.Dialect : HHH000400: Using dialect: org.hibernate.spatial.dialect.sqlserver.SqlServer2008SpatialDialect
...
2019-10-02 05:50:22.232 DEBUG 62340 --- [nio-8080-exec-6] org.hibernate.SQL : insert into teste_geo (geom, nome) values (?, ?)
Hibernate: insert into teste_geo (geom, nome) values (?, ?)
2019-10-02 05:50:22.232 TRACE 62340 --- [nio-8080-exec-6] o.h.type.descriptor.sql.BasicBinder : binding parameter [1] as [VARBINARY] - [POLYGON ((4 0, 2 2, 4 4, 6 2, 4 0))]
2019-10-02 05:50:22.232 TRACE 62340 --- [nio-8080-exec-6] o.h.type.descriptor.sql.BasicBinder : binding parameter [2] as [VARCHAR] - [5f230d1b-ad0d-44a8-997e-02f4533bcfcd]
2019-10-02 05:50:26.452 INFO 62340 --- [ scheduling-1] c.v.g.o.service.ExemploService : Executou chamada do servico!
2019-10-02 05:50:26.452 WARN 62340 --- [nio-8080-exec-6] o.h.engine.jdbc.spi.SqlExceptionHelper : SQL Error: 6522, SQLState: S0001
My entity class:
import lombok.Getter;
import lombok.Setter;
import org.locationtech.jts.geom.Geometry;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "TESTE_GEO")
@Getter
@Setter
public class TesteGeom {

    @Id
    @Column(name = "nome")
    private String name;

    @Column(name = "geom")
    private Geometry geometry;
}
...........
UUID idUnique = UUID.randomUUID();
TesteGeom t = new TesteGeom();
t.setName(idUnique.toString());

GeometryFactory geometryFactory = new GeometryFactory(new PrecisionModel(), 4326);
Coordinate[] coords =
        new Coordinate[] {new Coordinate(4, 0), new Coordinate(2, 2),
                new Coordinate(4, 4), new Coordinate(6, 2), new Coordinate(4, 0)};
LinearRing ring = geometryFactory.createLinearRing(coords);
LinearRing[] holes = null; // use LinearRing[] to represent holes
int SRID = geometryFactory.getSRID();
Polygon polygon = geometryFactory.createPolygon(ring, holes);
t.setGeometry(polygon);
t.getGeometry().setSRID(4326);
Executing the same SQL in Management Studio works:
insert into teste_geo (geom, nome) values ('POLYGON ((4 0, 2 2, 4 4, 6 2, 4 0))', 'OK');
The table columns are:
nome varchar(50)
geom geometry
When running a query I got the error below. There is probably some dialect problem.
org.springframework.orm.jpa.JpaSystemException: could not deserialize; nested exception is org.hibernate.type.SerializationException: could not deserialize
at org.springframework.orm.jpa.vendor.HibernateJpaDialect.convertHibernateAccessException(HibernateJpaDialect.java:351)
at org.springframework.orm.jpa.vendor.HibernateJpaDialect.translateExceptionIfPossible(HibernateJpaDialect.java:253)
at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.translateExceptionIfPossible(AbstractEntityManagerFactoryBean.java:527)
at org.springframework.dao.support.ChainedPersistenceExceptionTranslator.translateExceptionIfPossible(ChainedPersistenceExceptionTranslator.java:61)
I found the solution.
In my model I was using:
import org.locationtech.jts.geom.Geometry;
But Hibernate Spatial only accepts:
import com.vividsolutions.jts.geom.Geometry;
So I changed the library and everything works.

@RestController - Collection issue

I am implementing a Spring Boot application using @RestController with an Angular.js client, and I want to create/update a Group with one or multiple Users (as children). This does not work with multiple users.
Model Signature:
Grp {
    externalId (string, optional),
    grpOwner (User, optional),
    grpTxt (string, optional),
    users (Array[User], optional)
}
User {
    externalId (string, optional),
    username (string, optional)
}
grp.json (both users exist in the database):
{
    "externalId": "G0000001",
    "users": [
        {
            "username": "admin"
        },
        {
            "username": "system"
        }
    ]
}
When trying to create/update one Group with multiple users, only the first one is taken into account. In fact, when debugging the Java side, the Grp that reaches the controller already contains only one user (see the trace below). Any idea?
@RequestMapping(method = RequestMethod.PUT, produces = MediaType.APPLICATION_JSON_VALUE)
@Timed
public ResponseEntity<Grp> updateGrp(@RequestBody Grp grp) throws URISyntaxException {
    log.debug("REST request to update Grp : {}", grp);
    // Check if the group has been created
    Optional<Grp> oGrp = gDao.findOneByExternalIdAndGrpOwner(grp.getExternalId(), this.getCurrentUser());
    if (oGrp.isPresent()) {
        Grp mGrp = oGrp.get();
        this.merge(mGrp, grp);
        Grp result = gDao.save(mGrp);
        return ResponseEntity.ok()
                .headers(HeaderUtil.createEntityUpdateAlert("grp", grp.getExternalId().toString()))
                .body(result);
    }
    return createGrp(grp);
}
Trace:
2016-07-06 11:29:12.563 DEBUG 3216 --- [nio-8080-exec-5] o.s.web.servlet.DispatcherServlet : DispatcherServlet with name 'dispatcherServlet' processing PUT request for [/api/grps]
2016-07-06 11:29:12.566 DEBUG 3216 --- [nio-8080-exec-5] s.w.s.m.m.a.RequestMappingHandlerMapping : Looking up handler method for path /api/grps
2016-07-06 11:29:12.567 DEBUG 3216 --- [nio-8080-exec-5] s.w.s.m.m.a.RequestMappingHandlerMapping : Returning handler method [public org.springframework.http.ResponseEntity<com.coopyrightcode.common.domain.Grp> com.coopyrightcode.common.web.rest.GrpResource.updateGrp(com.coopyrightcode.common.domain.Grp) throws java.net.URISyntaxException]
2016-07-06 11:29:12.604 DEBUG 3216 --- [nio-8080-exec-5] m.m.a.RequestResponseBodyMethodProcessor : Read [class com.coopyrightcode.common.domain.Grp] as "application/json;charset=UTF-8" with [org.springframework.http.converter.json.MappingJackson2HttpMessageConverter#4982ce48]
2016-07-06 11:29:12.613 DEBUG 3216 --- [nio-8080-exec-5] c.c.common.aop.logging.LoggingAspect : Enter: com.coopyrightcode.common.web.rest.GrpResource.updateGrp() with argument[s] = [Grp [grpTxt=null, grpOwner=null, users=[User [username=admin, password=null, fName=null, lName=null, email=null, activated=false, langKey=null, appKey=63d71c52-95cd-43db-9, resetKey=3dfaf9f7-b46d-4daf-9, resetDate=null, usrAccountNonLocked=true, usrAccountNonExpired=true, usrCredentialsNonExpired=true, usrBio=null, usrMobile=null, usrPhone=null, usrPics=null, usrReferent=null, usrCtrId=null]]]]
2016-07-06 11:29:12.613 DEBUG 3216 --- [nio-8080-exec-5] c.c.common.web.rest.PortfolioResource : REST request to update Grp : Grp [grpTxt=null, grpOwner=null, users=[User [username=admin, password=null, fName=null, lName=null, email=null, activated=false, langKey=null, appKey=63d71c52-95cd-43db-9, resetKey=3dfaf9f7-b46d-4daf-9, resetDate=null, usrAccountNonLocked=true, usrAccountNonExpired=true, usrCredentialsNonExpired=true, usrBio=null, usrMobile=null, usrPhone=null, usrPics=null, usrReferent=null, usrCtrId=null]]]

Power Query Looping

I used Power Query to pull all of the unique Item Types tested in the past month:
let
    Source = Sql.Database("XXX", "YYY"),
    dbo_tblTest = Source{[Schema="dbo",Item="tblTest"]}[Data],
    #"Filtered Rows" = Table.SelectRows(dbo_tblTest, each Date.IsInPreviousNMonths([Test_Stop], 1)),
    #"Added Custom" = Table.AddColumn(#"Filtered Rows", "Custom", each Text.Start([Item],5)),
    #"Removed Duplicates" = Table.Distinct(#"Added Custom", {"Custom"})
in
    #"Removed Duplicates"
To get:
Test_ID --- Item --- Test_Start --- Test_Stop --- Custom
2585048 --- B1846-6-02 --- 1/14/2014 12:46 --- 6/25/2015 14:28 --- B1846
2589879 --- B1843-5-05 --- 12/23/2013 16:46 --- 6/25/2015 14:19 --- B1843
2633483 --- B1907-1-04 --- 8/21/2014 20:47 --- 6/10/2015 6:20 --- B1907
2638786 --- B1361-2-04 --- 6/13/2013 14:21 --- 6/16/2015 14:15 --- B1361
2675663 --- B1345-2-02 --- 5/23/2014 18:39 --- 6/25/2015 21:27 --- B1345
Next, I want to use Power Query to pull the past 10 tests for each of the Item Types listed in Query1, regardless of time period. I figured out how to pull the past 10 tests for the Item Types separately, but not all together in one query.
let
    Source = Sql.Database("XXX", "YYY"),
    dbo_tblTest = Source{[Schema="dbo",Item="tblTest"]}[Data],
    #"Filtered Rows" = Table.SelectRows(dbo_tblTest, each Text.StartsWith([Item], "B1846")),
    #"Sorted Rows" = Table.Sort(#"Filtered Rows",{{"Test_Stop", Order.Descending}}),
    #"Kept First Rows" = Table.FirstN(#"Sorted Rows",10)
in
    #"Kept First Rows"
To get:
Test_ID --- Item --- Test_Start --- Test_Stop --- Value
11717643 --- B1846-6-02 --- 7/23/2015 12:48 --- 7/23/2015 12:57 --- 43725341
11716432 --- B1846-1-21 --- 7/23/2015 10:23 --- 7/23/2015 10:29 --- 43724705
11715802 --- B1846-1-21 --- 7/23/2015 9:28 --- 7/23/2015 10:29 --- 43724720
11715505 --- B1846-1-21 --- 7/23/2015 8:59 --- 7/23/2015 9:06 --- 43724675
11715424 --- B1846-1-21 --- 7/23/2015 8:36 --- 7/23/2015 8:59 --- 43724690
11713680 --- B1846-1-55 --- 7/23/2015 5:50 --- 7/23/2015 6:07 --- 43725239
11691169 --- B1846-6-04 --- 7/20/2015 22:47 --- 7/22/2015 20:18 --- 43642835
11690466 --- B1846-6-04 --- 7/20/2015 21:30 --- 7/22/2015 18:41 --- 43642729
11701183 --- B1846-1-140 --- 7/21/2015 21:34 --- 7/21/2015 22:24 --- 43667358
11701184 --- B1846-6-04 --- 7/21/2013 20:35 --- 7/21/2015 20:46 --- 43667359
Is it possible to use Power Query to pull all needed data in one query? If not, is it possible to use VBA with Power Query to get it done?
In Power Query, if you're thinking about how to loop, you can often find a higher-order library function that does just what you want. In this case, it's grouping.
Grouping splits a table up by some key, in your case the Custom column of the first table. You can rewrite your "keep past 10" logic as a function that is applied within each grouped table using Table.TransformColumns, then expand the grouped tables back out into one flat table.
Your query should be something like:
let
    Source = Sql.Database("XXX", "YYY"),
    dbo_tblTest = Source{[Schema="dbo",Item="tblTest"]}[Data],
    #"Added Custom" = Table.AddColumn(dbo_tblTest, "Custom", each Text.Start([Item],5)),
    #"Grouped Rows" = Table.Group(#"Added Custom", {"Custom"}, {{"Grouped", each _, type table}}),
    Custom2 = Table.TransformColumns(#"Grouped Rows", {{"Grouped", (groupedTable) =>
        let
            #"Sorted Rows" = Table.Sort(groupedTable, {{"Test_Stop", Order.Descending}}),
            #"Kept First Rows" = Table.FirstN(#"Sorted Rows", 10)
        in
            #"Kept First Rows"}}),
    #"Removed Other Columns1" = Table.SelectColumns(Custom2, {"Grouped"}),
    #"Expanded Grouped" = Table.ExpandTableColumn(#"Removed Other Columns1", "Grouped", Table.ColumnNames(#"Added Custom"))
in
    #"Expanded Grouped"
