I am writing a Ring / Compojure app in Clojure that fetches the content for its pages from a database. To be able to test how the content is displayed, I created prod and dev environments, and when the dev environment is used, a mock database is used instead of the production database. I achieve this by reading the database from another file and passing it as a parameter to my routes. Here's a simplified version:
(defn www-routes [db]
  (defroutes www-routes
    (GET "/" [] ...)))

(def config (delay (load-file (.getFile (resource "config.clj")))))

(defn db []
  (if (= "dev" (:database (force config)))
    'kipsu.db-mock
    'kipsu.database))

(def app (routes (www-routes (db))))
The setup is largely taken from the example here, with the addition of setting the database as a parameter.
This setup works great for running the tests against the mock database and displaying real content in the prod environment. Things run fine when I start a lein server locally, run the tests, or call any of the functions in a lein repl. My problem comes when I'd like to create an uberjar for deploying the changes to my server.
This is where I get a NullPointerException when compiling, starting from the (db) function call inside the def app form. I've tried debugging with little success and am not even 100% sure where the actual error is. All I know is that the db function is never even called. Here's the stack trace:
Compiling kipsu.jdbc.json
Compiling kipsu.database
Compiling kipsu.api_converter
Compiling kipsu.web
java.lang.NullPointerException, compiling:(web.clj:124:29)
at clojure.lang.Compiler$InvokeExpr.eval(Compiler.java:3628)
at clojure.lang.Compiler$InvokeExpr.eval(Compiler.java:3622)
at clojure.lang.Compiler$BodyExpr.eval(Compiler.java:5879)
at clojure.lang.Compiler$DefExpr.eval(Compiler.java:439)
at clojure.lang.Compiler.compile1(Compiler.java:7323)
at clojure.lang.Compiler.compile(Compiler.java:7390)
at clojure.lang.RT.compile(RT.java:399)
at clojure.lang.RT.load(RT.java:444)
at clojure.lang.RT.load(RT.java:412)
at clojure.core$load$fn__5448.invoke(core.clj:5866)
at clojure.core$load.doInvoke(core.clj:5865)
at clojure.lang.RestFn.invoke(RestFn.java:408)
at clojure.core$load_one.invoke(core.clj:5671)
at clojure.core$compile$fn__5453.invoke(core.clj:5877)
at clojure.core$compile.invoke(core.clj:5876)
at user$eval9$fn__16.invoke(form-init1768231915654429312.clj:1)
at user$eval9.invoke(form-init1768231915654429312.clj:1)
at clojure.lang.Compiler.eval(Compiler.java:6782)
at clojure.lang.Compiler.eval(Compiler.java:6772)
at clojure.lang.Compiler.load(Compiler.java:7227)
at clojure.lang.Compiler.loadFile(Compiler.java:7165)
at clojure.main$load_script.invoke(main.clj:275)
at clojure.main$init_opt.invoke(main.clj:280)
at clojure.main$initialize.invoke(main.clj:308)
at clojure.main$null_opt.invoke(main.clj:343)
at clojure.main$main.doInvoke(main.clj:421)
at clojure.lang.RestFn.invoke(RestFn.java:421)
at clojure.lang.Var.invoke(Var.java:383)
at clojure.lang.AFn.applyToHelper(AFn.java:156)
at clojure.lang.Var.applyTo(Var.java:700)
at clojure.main.main(main.java:37)
Caused by: java.lang.NullPointerException
at kipsu.web$fn__4568.invoke(web.clj:115)
at clojure.lang.Delay.deref(Delay.java:37)
at clojure.lang.Delay.force(Delay.java:27)
at clojure.core$force.invoke(core.clj:730)
at kipsu.web$db.invoke(web.clj:118)
at clojure.lang.AFn.applyToHelper(AFn.java:152)
at clojure.lang.AFn.applyTo(AFn.java:144)
at clojure.lang.Compiler$InvokeExpr.eval(Compiler.java:3623)
... 30 more
Exception in thread "main" java.lang.NullPointerException, compiling (web.clj:124:29)
at clojure.lang.Compiler$InvokeExpr.eval(Compiler.java:3628)
at clojure.lang.Compiler$InvokeExpr.eval(Compiler.java:3622)
at clojure.lang.Compiler$BodyExpr.eval(Compiler.java:5879)
at clojure.lang.Compiler$DefExpr.eval(Compiler.java:439)
at clojure.lang.Compiler.compile1(Compiler.java:7323)
at clojure.lang.Compiler.compile(Compiler.java:7390)
at clojure.lang.RT.compile(RT.java:399)
at clojure.lang.RT.load(RT.java:444)
at clojure.lang.RT.load(RT.java:412)
at clojure.core$load$fn__5448.invoke(core.clj:5866)
at clojure.core$load.doInvoke(core.clj:5865)
at clojure.lang.RestFn.invoke(RestFn.java:408)
at clojure.core$load_one.invoke(core.clj:5671)
at clojure.core$compile$fn__5453.invoke(core.clj:5877)
at clojure.core$compile.invoke(core.clj:5876)
at user$eval9$fn__16.invoke(form-init1768231915654429312.clj:1)
at user$eval9.invoke(form-init1768231915654429312.clj:1)
at clojure.lang.Compiler.eval(Compiler.java:6782)
at clojure.lang.Compiler.eval(Compiler.java:6772)
at clojure.lang.Compiler.load(Compiler.java:7227)
at clojure.lang.Compiler.loadFile(Compiler.java:7165)
at clojure.main$load_script.invoke(main.clj:275)
at clojure.main$init_opt.invoke(main.clj:280)
at clojure.main$initialize.invoke(main.clj:308)
at clojure.main$null_opt.invoke(main.clj:343)
at clojure.main$main.doInvoke(main.clj:421)
at clojure.lang.RestFn.invoke(RestFn.java:421)
at clojure.lang.Var.invoke(Var.java:383)
at clojure.lang.AFn.applyToHelper(AFn.java:156)
at clojure.lang.Var.applyTo(Var.java:700)
at clojure.main.main(main.java:37)
Caused by: java.lang.NullPointerException
at kipsu.web$fn__4568.invoke(web.clj:115)
at clojure.lang.Delay.deref(Delay.java:37)
at clojure.lang.Delay.force(Delay.java:27)
at clojure.core$force.invoke(core.clj:730)
at kipsu.web$db.invoke(web.clj:118)
at clojure.lang.AFn.applyToHelper(AFn.java:152)
at clojure.lang.AFn.applyTo(AFn.java:144)
at clojure.lang.Compiler$InvokeExpr.eval(Compiler.java:3623)
... 30 more
Compilation failed: Subprocess failed
I'm not the most fluent with Clojure and am working with this app to learn more. Any help steering me in the right direction from here is greatly appreciated!
I think your problem is that def executes at compile time. You've done the right thing with (def config) and (defn db), but (def app) is going to cause compile-time errors if it cannot find your file. To understand why, let's look at def.
(def hello (println "hello"))
If you try to compile this code you'll see "hello" printed to your console at compile time and at runtime the var hello will have the value nil.
(def hello (delay (println "hello")))
(def world @hello)
The var hello now won't get evaluated at compile time, but by introducing the var world we get the exact same problem.
Now back to your specific problem. You don't want your configuration to get read at compile time, and you don't want your configuration to have to read a file from disk every single time you need it.
Not wanting your configuration read at compile time suggests that it should be a function. If it is a function, you can simply use memoize to ensure it doesn't read from disk every time you call it.
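For example, a minimal sketch of that idea could look like the following. The names are illustrative (not the original code), and config.clj is assumed to be readable at runtime just as in the question:

(ns kipsu.config-sketch
  (:require [clojure.java.io :as io]))

(defn- read-config* []
  ;; load-file runs only when this function is called, never at compile time
  (load-file (.getFile (io/resource "config.clj"))))

;; memoize caches the result, so the file is read at most once per process
(def read-config (memoize read-config*))

(defn db []
  (if (= "dev" (:database (read-config)))
    'kipsu.db-mock
    'kipsu.database))

With the configuration behind a function like this, (def app ...) can also become a function (or be wrapped in a delay), so nothing tries to read the file while the uberjar is being AOT-compiled.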
I am using XTDB 1.21.0 deployed on AWS/ECS (Fargate) with checkpoints configured (frequency 30 minutes) and stored in an S3 bucket (RocksDB). After a couple of successful checkpoints, they seem to be constantly failing with an XTDB warning caused by an exception in the HTTP request to AWS, as shown below:
This leaves the S3 bucket with incomplete checkpoints (i.e., a folder containing a set of SSTs and other RocksDB files and no associated EDN index file):
The XTDB documentation mentions that an optional S3Configurator can be passed to the node configuration, and after a bit of Googling around I figured that makeClient should be overridden so that connectionAcquisitionTimeout can be set:
NettyNioAsyncHttpClient.builder()
    .maxConcurrency(200)
    .connectionAcquisitionTimeout(Duration.ofMillis(20000))
I am not too familiar with Netty, so I would appreciate it if someone could help with the right incantation.
Also, I am configuring the XT node from an EDN file and haven't figured out how to write an S3 configurator in an EDN file (or whether it is even possible).
Thanks in advance!
This can happen for large datasets, where the default S3 client creates a new async request for each object (and the number of objects may be very large, particularly if using the RocksDB index). Internally it uses the connectionAcquisitionTimeout as a kind of backpressure, to ensure that incoming requests don't wait indefinitely for a connection from the connection pool. In this case, however, we are the only source of these requests and we definitely want them to complete before starting the node, so it's reasonable to set connectionAcquisitionTimeout to something very high (the default is only 10 seconds). A good choice of limit might be the maximum amount of time you want to wait for the node to start before failing.
This appears to be a non-optional parameter of the SDK, for what I can only assume is a sensible default strategy for requests coming from an external source; in our case we essentially want it to behave as if it were a synchronous operation.
Configuring this in Clojure with xtdb would look something like this:
(ns foo.db
  (:require
   [xtdb.api :as xtdb]
   [xtdb.checkpoint]
   [xtdb.rocksdb]
   [xtdb.s3.checkpoint])
  (:import
   (java.time Duration)
   (software.amazon.awssdk.http.nio.netty NettyNioAsyncHttpClient)
   (software.amazon.awssdk.services.s3 S3AsyncClient)
   (xtdb.checkpoint Checkpointer)
   (xtdb.s3 S3Configurator)))

(def s3-configurator
  (reify S3Configurator
    (makeClient [this]
      (.. (S3AsyncClient/builder)
          (httpClientBuilder
           (.. (NettyNioAsyncHttpClient/builder)
               (connectionAcquisitionTimeout
                (Duration/ofSeconds 600)) ;; Set a high limit here
               ;; We can rely on the defaults for maxConcurrency and
               ;; maxPendingConnectionAcquires
               ;; (maxConcurrency (Integer. 200))
               ;; (maxPendingConnectionAcquires (Integer. 10000))
               ))
          (build)))))

(defn start-node!
  []
  (xtdb/start-node
   {:xtdb/index-store
    {:kv-store {:xtdb/module 'xtdb.rocksdb/->kv-store
                :db-dir "/var/xtdb/idxs"
                :checkpointer {:xtdb/module 'xtdb.checkpoint/->checkpointer
                               :store {:xtdb/module 'xtdb.s3.checkpoint/->cp-store
                                       :configurator (constantly s3-configurator)
                                       :bucket "checkpoints"}
                               :approx-frequency "PT3H"}}}}))
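As a small usage sketch (not part of the original answer), the node returned by start-node! is java.io.Closeable, so during experimentation it can be managed with with-open:

(comment
  ;; start the node, wait until it has caught up with the transaction log,
  ;; and close it again when the body finishes
  (with-open [node (start-node!)]
    (xtdb/sync node)))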
I recently bumped into a new problem with reticulate and doParallel that I have never seen before.
The following code works well with %do% (a regular for loop) BUT NOT with %dopar% (from the doParallel package):
library(doParallel)
library(reticulate)

cl <- makeCluster(4)
registerDoParallel(cl)

foreach(j = 1:4, .combine = rbind, .packages = "reticulate") %dopar% {
  test <- "Testing to see if dopar can recognize reticulate"
  py_run_string("for i in range(len(r.test)): print(i)")
}
Here is the error message:
"Error in { :
task 1 failed - "RuntimeError: Evaluation error: object 'test' not found."
By default, test is an R object, and when using reticulate, r.test is recognized as an object inside py_run_string. However, %dopar% spins up separate workers, and therefore r.test is not recognized anymore. The strange thing here is that test is assigned inside the foreach loop and reticulate is loaded on every worker.
Any hint would help. Thanks in advance.
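For what it's worth, here is a hedged sketch of the separate-session point: each %dopar% worker runs its own R process and its own embedded Python, and reticulate's r object resolves names in that worker's global environment, so placing the value there makes the lookup succeed. This is an illustration of the mechanism, not a verified fix for every setup:

library(doParallel)
library(reticulate)

cl <- makeCluster(4)
registerDoParallel(cl)

res <- foreach(j = 1:4, .combine = rbind, .packages = "reticulate") %dopar% {
  test <- "Testing to see if dopar can recognize reticulate"
  # r.test resolves names in this worker's global environment, so put it there
  assign("test", test, envir = globalenv())
  py_run_string("for i in range(len(r.test)): print(i)")
}

stopCluster(cl)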
I'm trying to run an OptaPlanner benchmark with the following configuration:
<?xml version="1.0" encoding="UTF-8"?>
<plannerBenchmark xmlns="https://www.optaplanner.org/xsd/benchmark" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                  xsi:schemaLocation="https://www.optaplanner.org/xsd/benchmark https://www.optaplanner.org/xsd/benchmark/benchmark.xsd">
  <benchmarkDirectory>data/acme</benchmarkDirectory>

  <inheritedSolverBenchmark>
    <solver>
      <solutionClass>org.acme.domain.Main</solutionClass>
      <entityClass>org.acme.domain.Standstill</entityClass>
      <entityClass>org.acme.domain.Visit</entityClass>
      <scoreDirectorFactory>
        <constraintProviderClass>org.acme.solver.MainConstraintProvider</constraintProviderClass>
        <initializingScoreTrend>ONLY_DOWN</initializingScoreTrend>
      </scoreDirectorFactory>
      <termination>
        <secondsSpentLimit>30</secondsSpentLimit>
      </termination>
    </solver>
    <problemBenchmarks>
      <solutionFileIOClass>org.acme.bootstrap.AcmeSolutionIO</solutionFileIOClass>
      <inputSolutionFile>data/latest.xml</inputSolutionFile>
    </problemBenchmarks>
  </inheritedSolverBenchmark>

  <solverBenchmark>
    <solver>
      <constructionHeuristic>
        <constructionHeuristicType>FIRST_FIT</constructionHeuristicType>
      </constructionHeuristic>
      <localSearch>
        <localSearchType>LATE_ACCEPTANCE</localSearchType>
      </localSearch>
    </solver>
  </solverBenchmark>
</plannerBenchmark>
However, I get this error message:
Exception in thread "main" org.optaplanner.benchmark.api.PlannerBenchmarkException: Benchmarking failed: failureCount (1). The exception of the firstFailureSingleBenchmarkRunner (blankData_Config_0_0) is chained.
at org.optaplanner.benchmark.impl.DefaultPlannerBenchmark.benchmarkingEnded(DefaultPlannerBenchmark.java:326)
at org.optaplanner.benchmark.impl.DefaultPlannerBenchmark.benchmark(DefaultPlannerBenchmark.java:100)
at org.optaplanner.benchmark.impl.DefaultPlannerBenchmark.benchmarkAndShowReportInBrowser(DefaultPlannerBenchmark.java:424)
at org.acme.bootstrap.BenchmarkApp.main(BenchmarkApp.kt:29)
Caused by: java.lang.IllegalStateException: The class (class java.lang.Long) should have a no-arg constructor to create a planning clone.
at org.optaplanner.core.impl.domain.solution.cloner.FieldAccessingSolutionCloner.lambda$retrieveCachedConstructor$0(FieldAccessingSolutionCloner.java:90)
at java.base/java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1708)
at org.optaplanner.core.impl.domain.common.ConcurrentMemoization.computeIfAbsent(ConcurrentMemoization.java:44)
at org.optaplanner.core.impl.domain.solution.cloner.FieldAccessingSolutionCloner.retrieveCachedConstructor(FieldAccessingSolutionCloner.java:85)
at org.optaplanner.core.impl.domain.solution.cloner.FieldAccessingSolutionCloner$FieldAccessingSolutionClonerRun.constructClone(FieldAccessingSolutionCloner.java:158)
at org.optaplanner.core.impl.domain.solution.cloner.FieldAccessingSolutionCloner$FieldAccessingSolutionClonerRun.clone(FieldAccessingSolutionCloner.java:150)
at org.optaplanner.core.impl.domain.solution.cloner.FieldAccessingSolutionCloner$FieldAccessingSolutionClonerRun.process(FieldAccessingSolutionCloner.java:207)
at org.optaplanner.core.impl.domain.solution.cloner.FieldAccessingSolutionCloner$FieldAccessingSolutionClonerRun.processQueue(FieldAccessingSolutionCloner.java:194)
at org.optaplanner.core.impl.domain.solution.cloner.FieldAccessingSolutionCloner$FieldAccessingSolutionClonerRun.cloneSolution(FieldAccessingSolutionCloner.java:136)
at org.optaplanner.core.impl.domain.solution.cloner.FieldAccessingSolutionCloner.cloneSolution(FieldAccessingSolutionCloner.java:73)
at org.optaplanner.core.impl.score.director.AbstractScoreDirector.cloneSolution(AbstractScoreDirector.java:257)
at org.optaplanner.core.impl.score.director.AbstractScoreDirector.cloneWorkingSolution(AbstractScoreDirector.java:250)
at org.optaplanner.core.impl.solver.recaller.BestSolutionRecaller.updateBestSolutionAndFire(BestSolutionRecaller.java:129)
at org.optaplanner.core.impl.constructionheuristic.DefaultConstructionHeuristicPhase.phaseEnded(DefaultConstructionHeuristicPhase.java:146)
at org.optaplanner.core.impl.constructionheuristic.DefaultConstructionHeuristicPhase.solve(DefaultConstructionHeuristicPhase.java:98)
at org.optaplanner.core.impl.solver.AbstractSolver.runPhases(AbstractSolver.java:99)
at org.optaplanner.core.impl.solver.DefaultSolver.solve(DefaultSolver.java:192)
at org.optaplanner.benchmark.impl.SubSingleBenchmarkRunner.call(SubSingleBenchmarkRunner.java:122)
at org.optaplanner.benchmark.impl.SubSingleBenchmarkRunner.call(SubSingleBenchmarkRunner.java:42)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.lang.NoSuchMethodException: java.lang.Long.<init>()
at java.base/java.lang.Class.getConstructor0(Class.java:3585)
at java.base/java.lang.Class.getDeclaredConstructor(Class.java:2754)
at org.optaplanner.core.impl.domain.solution.cloner.FieldAccessingSolutionCloner.lambda$retrieveCachedConstructor$0(FieldAccessingSolutionCloner.java:88)
... 22 more
I've tried many other benchmark configs, including one with <solverBenchmarkBluePrintType>CONSTRUCTION_HEURISTIC_WITH_AND_WITHOUT_LOCAL_SEARCH</solverBenchmarkBluePrintType>, but none of them work either.
I had it working previously, but I went back to it after a number of changes and it no longer works.
I'm now using a SimpleLongScore instead of a SimpleScore, and I'm wondering if that has anything to do with the error, since it wasn't throwing this exception before.
Any ideas?
Edit #1:
I wish I could provide more code, but I really don't know what code to include.
The bizarre thing is that OptaPlanner runs just fine through mvn compile quarkus:dev, but when I run my benchmark app from its main entry point, it gives the error mentioned above: java.lang.IllegalStateException: The class (class java.lang.Long) should have a no-arg constructor to create a planning clone.
Here's my main:
fun main(args: Array<String>) {
    val plannerBenchmarkFactory: PlannerBenchmarkFactory =
        PlannerBenchmarkFactory.createFromXmlResource("benchmarkBluePrintConfig.xml")
    val plannerBenchmark: PlannerBenchmark = plannerBenchmarkFactory.buildPlannerBenchmark()
    plannerBenchmark.benchmarkAndShowReportInBrowser()
}
The solverConfig.xml can also be the same for both, but only the benchmark app throws the error. It's pretty odd to me.
I figured it out.
In my Visit class I was using a @CustomShadowVariable:
@DeepPlanningClone
@CustomShadowVariable(
    variableListenerClass = ArrivalTimeUpdatingVariableListener::class,
    sources = [PlanningVariableReference(variableName = "previousStandstill")]
)
var arrivalTime: Long? = null
Removing @DeepPlanningClone solved the problem.
I have no idea why it runs fine when the SolverManager is @Inject'ed, as opposed to throwing the error when the SolverConfig/SolverManager is created through the API (as it is in my benchmark app).
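For reference, a sketch of the field after the fix (the same declaration as above, just without @DeepPlanningClone). Presumably the annotation made the solution cloner try to construct a new java.lang.Long reflectively, which fails because Long has no no-arg constructor; without it, the immutable value is simply carried over into the planning clone:

// Same shadow variable as above, minus @DeepPlanningClone
@CustomShadowVariable(
    variableListenerClass = ArrivalTimeUpdatingVariableListener::class,
    sources = [PlanningVariableReference(variableName = "previousStandstill")]
)
var arrivalTime: Long? = null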
In a bare bones project, I added these build hints:
android.gradleDep=compile 'com.erikagtierrez.multiple_media_picker:multiple-media-picker:1.0.5'
android.min_sdk_version=23
I would like to import the following Android library to make a CN1Lib (that requires at least Android SDK 23):
https://github.com/erikagtierrez/multiple-media-picker
In short: I spent a day trying to import it, and I also experimented with Android Studio and with suggestions found on Stack Overflow (trying to make a custom .aar), without success.
Could you help me import that library? There is a manifest merger error.
In fact, the issue reported by the build server is:
* What went wrong:
Execution failed for task ':processReleaseManifest'.
> Manifest merger failed : Attribute application#label value=(BareBones) from AndroidManifest.xml:15:17-42
is also present at [com.erikagtierrez.multiple_media_picker:multiple-media-picker:1.0.5] AndroidManifest.xml:23:9-41 value=(#string/app_name).
Suggestion: add 'tools:replace="android:label"' to <application> element at AndroidManifest.xml:15:3-43:103 to override.
I also tried to add the build hint:
android.xapplication_attr=tools:replace="android:label"
as suggested by the previous error, without success.
In the last case, I get:
Merging result: ERROR
/tmp/build1659178556337293135xxx/Test/src/main/AndroidManifest.xml:15:3-43:103 Error:
tools:replace specified at line:15 for attribute android:label, but no new value specified
/tmp/build1659178556337293135xxx/Test/src/main/AndroidManifest.xml Error:
Validation failed, exiting
-- Merging decision tree log ---
The last full log is here: https://gist.github.com/jsfan3/dd6c23f86a2ac949f996910c8cece62b
Thank you
This is happening because our code thinks you injected android:label on your own and doesn't inject it, to avoid a collision...
Change the code to this:
android.xapplication_attr=tools:replace="android:label" android:label="App Name"
Can someone please tell me how I can run a failed test again in Behave using Python?
I want to re-run the failed test case automatically if it fails.
The behave library actually has a RerunFormatter which can help you rerun the failing scenarios of your previous test-run. It creates a text file of all your failing scenarios like:
# -- file:rerun.features
# RERUN: Failing scenarios during last test run.
features/auth.feature:10
features/auth.feature:42
features/notifications.feature:67
To use the RerunFormatter all you need to do is put it in your behave configuration file (behave.ini):
# -- file:behave.ini
[behave]
format = rerun
outfiles = rerun_failing.features
To rerun the failing scenarios, use this command:
behave @rerun_failing.features
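For completeness, a typical cycle with the behave.ini above looks like this: the first run writes rerun_failing.features via the formatter, and the second run executes only the scenarios listed in that file:

behave
behave @rerun_failing.features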
I know this is a late answer, but it could help others.
There is another approach that could also help: implement it in the environment.py file, where you can retry scenarios that carry a specific tag.
Provides support functionality to retry scenarios a number of times
before their failure is accepted. This functionality can be helpful
when you use behave tests in an unreliable server/network
infrastructure.
For example, I am running the tag "@smoke_test" on CI, so I chose this tag to patch with the retry condition.
First, on your environment.py import the following:
# -- file: environment.py
from behave.contrib.scenario_autoretry import patch_scenario_with_autoretry
Then add the method:
# -- file: environment.py
def before_feature(context, feature):
    for scenario in feature.scenarios:
        if "smoke_test" in scenario.effective_tags:
            patch_scenario_with_autoretry(scenario, max_attempts=3)
*max_attempts defaults to 3. I passed it explicitly just to make it clear that you can set how many retries you want.