akka-testkit without ScalaTest - scalatest

I want to use utest with akka-testkit.
How can I disable ScalaTest in akka-testkit?
When I run the tests, an empty ScalaTest run is executed every time:
[info] ScalaTest
[info] Run completed in 957 milliseconds.
[info] Total number of tests run: 0
[info] Suites: completed 0, aborted 0
[info] Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
[info] No tests were executed.
[info] utest
[info] Tests: 1, Passed: 1, Failed: 0
[info] Passed: Total 1, Failed 0, Errors 0, Passed 1

Try unregistering ScalaTest by filtering it out of the testFrameworks setting in build.sbt, like so:
testFrameworks := {
  testFrameworks.value.filterNot(_.toString.contains("scalatest"))
}
Running show testFrameworks now gives:
show testFrameworks
[info] * TestFramework(org.scalacheck.ScalaCheckFramework)
[info] * TestFramework(org.specs2.runner.Specs2Framework, org.specs2.runner.SpecsFramework)
[info] * TestFramework(org.specs.runner.SpecsFramework)
[info] * TestFramework(com.novocode.junit.JUnitFramework)
[info] * TestFramework(utest.runner.Framework)
where we can see that TestFramework(org.scalatest.tools.Framework, org.scalatest.tools.ScalaTestFramework) is no longer present.
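For reference, a minimal build.sbt sketch that registers utest and drops ScalaTest from sbt's default framework list could look like the following (the utest version and the exact dependency line are assumptions, not taken from the question):

// register utest as a test framework (library version here is an assumption)
libraryDependencies += "com.lihaoyi" %% "utest" % "0.8.1" % Test
testFrameworks += new TestFramework("utest.runner.Framework")

// remove ScalaTest from sbt's framework list so it no longer produces an empty run
testFrameworks := testFrameworks.value.filterNot(_.toString.contains("scalatest"))

With that in place, show testFrameworks should list utest.runner.Framework but no ScalaTest entry, as in the output above.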

Related

Flink KafkaSink connector with exactly once semantics too many logs

Configuring a KafkaSink from the new Kafka connector API (available since Flink 1.15) with DeliveryGuarantee.EXACTLY_ONCE and a transactionalId prefix produces an excessive amount of logs each time a new checkpoint is triggered.
The logs look like this:
2022-11-02 10:04:10,124 INFO org.apache.flink.connector.kafka.sink.FlinkKafkaInternalProducer [] - Flushing new partitions
2022-11-02 10:04:10,125 INFO org.apache.kafka.clients.producer.ProducerConfig [] - ProducerConfig values:
acks = -1
batch.size = 16384
bootstrap.servers = [localhost:9092]
buffer.memory = 33554432
client.dns.lookup = use_all_dns_ips
client.id = producer-flink-1-24
compression.type = none
connections.max.idle.ms = 540000
delivery.timeout.ms = 120000
enable.idempotence = true
interceptor.classes = []
internal.auto.downgrade.txn.commit = false
key.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
linger.ms = 0
max.block.ms = 60000
max.in.flight.requests.per.connection = 5
max.request.size = 1048576
metadata.max.age.ms = 300000
metadata.max.idle.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
receive.buffer.bytes = 32768
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 30000
retries = 2147483647
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
security.providers = null
send.buffer.bytes = 131072
socket.connection.setup.timeout.max.ms = 30000
socket.connection.setup.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.endpoint.identification.algorithm = https
ssl.engine.factory.class = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.certificate.chain = null
ssl.keystore.key = null
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLSv1.3
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.certificates = null
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.timeout.ms = 60000
transactional.id = flink-1-24
value.serializer = class org.apache.kafka.common.serialization.ByteArraySerializer
2022-11-02 10:04:10,131 INFO org.apache.kafka.clients.producer.KafkaProducer [] - [Producer clientId=producer-flink-1-24, transactionalId=flink-1-24] Overriding the default enable.idempotence to true since transactional.id is specified.
2022-11-02 10:04:10,161 INFO org.apache.kafka.clients.producer.KafkaProducer [] - [Producer clientId=producer-flink-0-24, transactionalId=flink-0-24] Overriding the default enable.idempotence to true since transactional.id is specified.
2022-11-02 10:04:10,161 INFO org.apache.kafka.clients.producer.KafkaProducer [] - [Producer clientId=producer-flink-0-24, transactionalId=flink-0-24] Instantiated a transactional producer.
2022-11-02 10:04:10,162 INFO org.apache.kafka.clients.producer.KafkaProducer [] - [Producer clientId=producer-flink-0-24, transactionalId=flink-0-24] Overriding the default acks to all since idempotence is enabled.
2022-11-02 10:04:10,159 INFO org.apache.kafka.clients.producer.KafkaProducer [] - [Producer clientId=producer-flink-1-24, transactionalId=flink-1-24] Instantiated a transactional producer.
2022-11-02 10:04:10,170 INFO org.apache.kafka.clients.producer.KafkaProducer [] - [Producer clientId=producer-flink-1-24, transactionalId=flink-1-24] Overriding the default acks to all since idempotence is enabled.
2022-11-02 10:04:10,181 INFO org.apache.kafka.common.utils.AppInfoParser [] - Kafka version: 2.8.1
2022-11-02 10:04:10,184 INFO org.apache.kafka.common.utils.AppInfoParser [] - Kafka commitId: 839b886f9b732b15
2022-11-02 10:04:10,184 INFO org.apache.kafka.common.utils.AppInfoParser [] - Kafka startTimeMs: 1667379850181
2022-11-02 10:04:10,185 INFO org.apache.kafka.clients.producer.internals.TransactionManager [] - [Producer clientId=producer-flink-0-24, transactionalId=flink-0-24] Invoking InitProducerId for the first time in order to acquire a producer ID
2022-11-02 10:04:10,192 INFO org.apache.kafka.common.utils.AppInfoParser [] - Kafka version: 2.8.1
2022-11-02 10:04:10,192 INFO org.apache.kafka.common.utils.AppInfoParser [] - Kafka commitId: 839b886f9b732b15
2022-11-02 10:04:10,192 INFO org.apache.kafka.common.utils.AppInfoParser [] - Kafka startTimeMs: 1667379850192
2022-11-02 10:04:10,209 INFO org.apache.kafka.clients.producer.internals.TransactionManager [] - [Producer clientId=producer-flink-1-24, transactionalId=flink-1-24] Invoking InitProducerId for the first time in order to acquire a producer ID
2022-11-02 10:04:10,211 INFO org.apache.kafka.clients.Metadata [] - [Producer clientId=producer-flink-0-24, transactionalId=flink-0-24] Cluster ID: MCY5mzM1QWyc1YCvsO8jag
2022-11-02 10:04:10,216 INFO org.apache.kafka.clients.producer.internals.TransactionManager [] - [Producer clientId=producer-flink-0-24, transactionalId=flink-0-24] Discovered transaction coordinator ubuntu:9092 (id: 0 rack: null)
2022-11-02 10:04:10,233 INFO org.apache.kafka.clients.Metadata [] - [Producer clientId=producer-flink-1-24, transactionalId=flink-1-24] Cluster ID: MCY5mzM1QWyc1YCvsO8jag
2022-11-02 10:04:10,241 INFO org.apache.kafka.clients.producer.internals.TransactionManager [] - [Producer clientId=producer-flink-1-24, transactionalId=flink-1-24] Discovered transaction coordinator ubuntu:9092 (id: 0 rack: null)
2022-11-02 10:04:10,345 INFO org.apache.kafka.clients.producer.internals.TransactionManager [] - [Producer clientId=producer-flink-0-24, transactionalId=flink-0-24] ProducerId set to 51 with epoch 0
2022-11-02 10:04:10,346 INFO org.apache.flink.connector.kafka.sink.KafkaWriter [] - Created new transactional producer flink-0-24
2022-11-02 10:04:10,353 INFO org.apache.kafka.clients.producer.internals.TransactionManager [] - [Producer clientId=producer-flink-1-24, transactionalId=flink-1-24] ProducerId set to 52 with epoch 0
2022-11-02 10:04:10,354 INFO org.apache.flink.connector.kafka.sink.KafkaWriter [] - Created new transactional producer flink-1-24
The ProducerConfig values log is repeated for each new producer created (one per sink subtask, based on the sink parallelism).
With a checkpoint interval of 10 or 15 seconds, the valuable job logs get drowned out.
Is there a way to disable these logs without setting the log level to WARN globally?
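One possibility, assuming the job uses Flink's default log4j2 setup (conf/log4j.properties), is to raise the level only for the chatty Kafka client loggers while keeping the root logger at INFO; the logger keys below (kafka_producer_config etc.) are arbitrary names chosen for this sketch:

# silence the per-producer config dump and transaction chatter, keep everything else at INFO
logger.kafka_producer_config.name = org.apache.kafka.clients.producer.ProducerConfig
logger.kafka_producer_config.level = WARN
logger.kafka_producer.name = org.apache.kafka.clients.producer.KafkaProducer
logger.kafka_producer.level = WARN
logger.kafka_txn_manager.name = org.apache.kafka.clients.producer.internals.TransactionManager
logger.kafka_txn_manager.level = WARN
logger.flink_internal_producer.name = org.apache.flink.connector.kafka.sink.FlinkKafkaInternalProducer
logger.flink_internal_producer.level = WARN

This is still a WARN setting, but scoped to those loggers only, so the rest of the job logs stay at INFO.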

Camel reactive streams not completing when subscribed more than once

@Component
class TestRoute(
    context: CamelContext,
) : EndpointRouteBuilder() {
    val streamName: String = "news-ticker-stream"
    val logger = LoggerFactory.getLogger(TestRoute::class.java)
    val camel: CamelReactiveStreamsService = CamelReactiveStreams.get(context)
    var count = 0L
    val subscriber: Subscriber<String> =
        camel.streamSubscriber(streamName, String::class.java)

    override fun configure() {
        from("timer://foo?fixedRate=true&period=30000")
            .process {
                count++
                logger.info("Start emitting data for the $count time")
                Flux.fromIterable(
                    listOf(
                        "APPLE", "MANGO", "PINEAPPLE"
                    )
                )
                    .doOnComplete {
                        logger.info("All the data are emitted from the flux for the $count time")
                    }
                    .subscribe(
                        subscriber
                    )
            }

        from(reactiveStreams(streamName))
            .to("file:outbox")
    }
}
2022-07-07 13:01:44.626 INFO 50988 --- [1 - timer://foo] c.e.reactivecameltutorial.TestRoute : Start emitting data for the 1 time
2022-07-07 13:01:44.640 INFO 50988 --- [1 - timer://foo] c.e.reactivecameltutorial.TestRoute : All the data are emitted from the flux for the 1 time
2022-07-07 13:01:44.646 INFO 50988 --- [1 - timer://foo] a.c.c.r.s.ReactiveStreamsCamelSubscriber : Reactive stream 'news-ticker-stream' completed
2022-07-07 13:02:14.616 INFO 50988 --- [1 - timer://foo] c.e.reactivecameltutorial.TestRoute : Start emitting data for the 2 time
2022-07-07 13:02:44.610 INFO 50988 --- [1 - timer://foo] c.e.reactivecameltutorial.TestRoute : Start emitting data for the 3 time
2022-07-07 13:02:44.611 WARN 50988 --- [1 - timer://foo] a.c.c.r.s.ReactiveStreamsCamelSubscriber : There is another active subscription: cancelled
The reactive stream does not complete when the route runs more than once. As you can see in the logs, the doOnComplete log message only appears the first time the timer route is triggered; when the timer fires the second time there is no completion message. I put a breakpoint in ReactiveStreamsCamelSubscriber and found that the flow goes into onNext() and onComplete() the first time, but not on the second run. I am not able to understand why this is happening.
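The "There is another active subscription: cancelled" warning suggests the shared subscriber field is being attached to a second publisher, which a Reactive Streams Subscriber is not expected to accept. A hedged sketch of one possible workaround, requesting a fresh subscriber from the Camel reactive streams service for every emission, might look like this:

// sketch only, inside configure(); assumes the rest of TestRoute stays as above
from("timer://foo?fixedRate=true&period=30000")
    .process {
        count++
        logger.info("Start emitting data for the $count time")
        Flux.fromIterable(listOf("APPLE", "MANGO", "PINEAPPLE"))
            .doOnComplete {
                logger.info("All the data are emitted from the flux for the $count time")
            }
            // create a new subscriber per Flux instead of reusing the single field,
            // so each run gets its own subscription and completion signal
            .subscribe(camel.streamSubscriber(streamName, String::class.java))
    }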

Apache Camel: Idempotent SFTP is throwing exception on other node unnecessarily? (Infinispan)

The following is the code:
return "sftp://"+getSftpHostName() +getSftpImportDirectory()
+ "?username="+getSftpUserName()
+ "&password="+getSftpPassword() // Stored on wildfly server
+ "&download=true" //Shall be read chunk by chunk to avoid heap space issues. Earlier download=true was used: Harpreet
+ "&useList=true"
+ "&stepwise=false"
+ "&disconnect=true"
+ "&passiveMode=true"
+ "&reconnectDelay=10000"
+ "&bridgeErrorHandler=true"
+ "&delay="+getSftpDelay()
+ "&include="+ getSftpFileName()
+ "&preMove=$simple{file:onlyname}.$simple{date:now:yyyy-MM-dd'T'hh-mm-ss}.processing"
+ "&move="+getSftpSuccessDirectory()+"$simple{file:onlyname.noext}.$simple{date:now:yyyy-MM-dd'T'hh-mm-ss}.success"
+ "&moveFailed="+getSftpFailedDirectory()+"$simple{file:onlyname.noext}.$simple{date:now:yyyy-MM-dd'T'hh-mm-ss}.failed"
+ "&readLock=idempotent-changed"
+ "&idempotentRepository=#infinispan"
+ "&readLockRemoveOnCommit=true";
We have three nodes. The file gets locked on Node 1 and the route starts (all logs are printed), but Node 2 shows only the following exception (Node 3 is fine):
Cannot rename file from: Hrm/test/From_HRM/import/Integrator_2.xml to: Hrm/test/From_HRM/import/Integrator_2.xml.2020-05-07T01-27-19.processing, StackTrace: org.apache.camel.component.file.GenericFileOperationFailedException: Cannot rename file from: Hrm/test/From_HRM/import/Integrator_2.xml to: Hrm/test/From_HRM/import/Integrator_2.xml.2020-05-07T01-27-19.processing
at org.apache.camel.component.file.remote.SftpOperations.renameFile(SftpOperations.java:467)
at org.apache.camel.component.file.strategy.GenericFileProcessStrategySupport.renameFile(GenericFileProcessStrategySupport.java:113)
at org.apache.camel.component.file.strategy.GenericFileRenameProcessStrategy.begin(GenericFileRenameProcessStrategy.java:45)
at org.apache.camel.component.file.GenericFileConsumer.processExchange(GenericFileConsumer.java:360)
at org.apache.camel.component.file.remote.RemoteFileConsumer.processExchange(RemoteFileConsumer.java:137)
at org.apache.camel.component.file.GenericFileConsumer.processBatch(GenericFileConsumer.java:219)
at org.apache.camel.component.file.GenericFileConsumer.poll(GenericFileConsumer.java:183)
at org.apache.camel.impl.ScheduledPollConsumer.doRun(ScheduledPollConsumer.java:174)
at org.apache.camel.impl.ScheduledPollConsumer.run(ScheduledPollConsumer.java:101)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: 2: No such file
at com.jcraft.jsch.ChannelSftp.throwStatusError(ChannelSftp.java:2873)
at com.jcraft.jsch.ChannelSftp.rename(ChannelSftp.java:1950)
at org.apache.camel.component.file.remote.SftpOperations.renameFile(SftpOperations.java:463)
Why is Node 2 even trying to process that file? (I am not able to understand what changes I should make.)
(A smaller issue we face about once a month: a file gets lost and the above error appears on all three nodes. If anyone can comment on that as well, it would help too.)
Updated Infinispan settings:
public class InfinispanIdempodentRepositoryProducer {

    @Resource(lookup = "java:jboss/infinispan/container/camel")
    private CacheContainer container;

    @Produces
    @Named("infinispan")
    public InfinispanIdempotentRepository createInfinispanInfinispanIdempotentRepository() {
        return new InfinispanIdempotentRepository(container, "camel-default-cache");
    }
}
***Configuration***
The configuration of an Infinispan cache container
Aliases
Default Cache: camel-default-cache
Module: org.jboss.as.clustering.infinispan
Statistics Enabled: false
***Async Operations***
Defines a thread pool used for asynchronous operations.
Keepalive Time: 60000 MILLISECONDS
Max Threads: 25
Min Threads: 25
Queue Length: 1000
***Expiration***
Defines a thread pool used for evictions.
Keepalive Time: 60000 MILLISECONDS
Max Threads: 1
***Listener***
Defines a thread pool used for asynchronous cache listener notifications.
Keepalive Time: 60000 MILLISECONDS
Max Threads: 1
Min Threads: 1
Queue Length: 100000
***Persistence***
Defines a thread pool used for interacting with the persistent store.
Keepalive Time: 60000 MILLISECONDS
Max Threads: 4
***Remote Command***
Defines a thread pool used to execute remote commands.
Keepalive Time: 60000 MILLISECONDS
Max Threads: 200
Min Threads: 1
Queue Length: 0
***State Transfer***
Defines a thread pool used for state transfer.
Keepalive Time: 60000 MILLISECONDS
Max Threads: 60
Min Threads: 1
Queue Length: 0
***Transport***
Defines a thread pool used for asynchronous transport communication.
Keepalive Time: 60000 MILLISECONDS
Max Threads: 25
Min Threads: 25
Queue Length: 100000
***JGroups***
The description of the transport used by this cache container
Channel: ee
Lock Timeout: 240000 MILLISECONDS
More settings in the replicated cache:
***Locking***
Acquire Timeout: 15000 MILLISECONDS
Concurrency Level: 1000
Isolation: READ_COMMITTED
Striping: false
***Partition Handling***
Enabled: false
***State Transfer***
Chunk Size: 512
Timeout: 240000 MILLISECONDS
***Transaction***
Locking: OPTIMISTIC
Mode: FULL_XA
Stop Timeout: 10000 MILLISECONDS
***Expiration***
Interval: 60000 MILLISECONDS
Lifespan: -1 MILLISECONDS
Max Idle: -1 MILLISECONDS
***Attributes***
Module:
Remote Timeout: 17500 MILLISECONDS
Statistics Enabled: false
***Memory***
Size: -1

Elixir - Phoenix server-side rendering React using std_json_io and react-stdio

I followed the tutorial from https://medium.com/@chvanikoff/phoenix-react-love-story-reph-1-c68512cfe18 and developed an application, but with different versions:
elixir - 1.3.4
phoenix - 1.2.1
poison - 2.0
distillery - 0.10
std_json_io - 0.1
The application ran successfully when running locally.
But when I created a mix release (MIX_ENV=prod mix release --env=prod --verbose) and ran rel/utopia/bin/utopia console (the OTP application name is :utopia), I ran into this error:
Interactive Elixir (1.3.4) - press Ctrl+C to exit (type h() ENTER for help)
14:18:21.857 request_id=idqhtoim2nb3lfeguq22627a92jqoal6 [info] GET /
panic: write |1: broken pipe
goroutine 3 [running]:
runtime.panic(0x4a49e0, 0xc21001f480)
/usr/local/Cellar/go/1.2.2/libexec/src/pkg/runtime/panic.c:266 +0xb6
log.(*Logger).Panicf(0xc210020190, 0x4de060, 0x3, 0x7f0924c84e38, 0x1, ...)
/usr/local/Cellar/go/1.2.2/libexec/src/pkg/log/log.go:200 +0xbd
main.fatal_if(0x4c2680, 0xc210039ab0)
/Users/alco/extra/goworkspace/src/goon/util.go:38 +0x17e
main.inLoop2(0x7f0924e0c388, 0xc2100396f0, 0xc2100213c0, 0x7f0924e0c310, 0xc210000000, ...)
/Users/alco/extra/goworkspace/src/goon/io.go:100 +0x5ce
created by main.wrapStdin2
/Users/alco/extra/goworkspace/src/goon/io.go:25 +0x15a
goroutine 1 [chan receive]:
main.proto_2_0(0x7ffce6670101, 0x4e3e20, 0x3, 0x4de5a0, 0x1, ...)
/Users/alco/extra/goworkspace/src/goon/proto_2_0.go:58 +0x3a3
main.main()
/Users/alco/extra/goworkspace/src/goon/main.go:51 +0x3b6
14:18:21.858 request_id=idqhtoim2nb3lfeguq22627a92jqoal6 [info] Sent 500 in 1ms
14:18:21.859 [error] #PID<0.1493.0> running Utopia.Endpoint terminated
Server: 127.0.0.1:8080 (http)
Request: GET /
** (exit) an exception was raised:
** (Protocol.UndefinedError) protocol String.Chars not implemented for {#PID<0.1467.0>, :result, %Porcelain.Result{err: nil, out: {:send, #PID<0.1466.0>}, status: 2}}
(elixir) lib/string/chars.ex:3: String.Chars.impl_for!/1
(elixir) lib/string/chars.ex:17: String.Chars.to_string/1
(utopia) lib/utopia/react_io.ex:2: Utopia.ReactIO.json_call!/2
(utopia) web/controllers/page_controller.ex:12: Utopia.PageController.index/2
(utopia) web/controllers/page_controller.ex:1: Utopia.PageController.action/2
(utopia) web/controllers/page_controller.ex:1: Utopia.PageController.phoenix_controller_pipeline/2
(utopia) lib/utopia/endpoint.ex:1: Utopia.Endpoint.instrument/4
(utopia) lib/phoenix/router.ex:261: Utopia.Router.dispatch/2
goon panicked, and hence so did Porcelain. Could someone please provide a solution?
Related issues: https://github.com/alco/porcelain/issues/13
EDIT: My page_controller.ex
defmodule Utopia.PageController do
  use Utopia.Web, :controller

  def index(conn, _params) do
    visitors = Utopia.Tracking.Visitors.state
    initial_state = %{"visitors" => visitors}

    props = %{
      "location" => conn.request_path,
      "initial_state" => initial_state
    }

    result = Utopia.ReactIO.json_call!(%{
      component: "./priv/static/server/js/utopia.js",
      props: props
    })

    render(conn, "index.html", html: result["html"], props: initial_state)
  end
end

ExtJS 4.2.0 error: Layout run failed

I am using the Bryntum Scheduler v2.2.5 with Ext JS 4.2.0.
I have added the diagnostic tools and the console output is below.
I am using a Sch.panel.SchedulerTree.
Is this diagnostic telling me I need to set a width for the SchedulerTree / SchedulerGrid headercontainer columns?
==================== LAYOUT ====================
[E] Layout run failed
[E] ----------------- FAILURE -----------------
--schedulergridview-1036<tableview> - size: calculated/calculated
triggeredBy: count=1
headercontainer-1035.columnWidthsDone () dirty: false, setBy: ?
Cycles: 2, Flushes: 0, Calculates: 1 in 2 msec
Calculates by type:
tableview: 1 in 1 tries (1x) at 0 msec (avg 0 msec)
I added
schedulerConfig: { layout: 'auto' }
to my Sch.panel.SchedulerTree and my 'Layout run failed' error is gone!
