I need to invoke a task twice in Thor. In Rake, this could be accomplished by "re-enabling" it, but I can't find an equivalent in either of http://www.rubydoc.info/github/wycats/thor/master/Thor/Invocation or https://github.com/erikhuda/thor/wiki/Invocations
Some background: because of old code, I sometimes need to reset a database between tests (I know this is not ideal, but it is old code), so my scenario looks like:
desc "all-tests", "all the tests"
def all_tests
invoke :"clean-db"
invoke :"first-tests"
invoke :"clean-db"
invoke :"second-tests"
end
I had a very similar situation. What worked for me was calling the methods directly rather than using invoke. For example:
class Tests < Thor
  desc('all', 'run all tests')
  def all
    setup()
    invoke :some_test_suite
    teardown()
    setup()
    invoke :some_other_test_suite
    teardown()
    # etc.
  end

  desc('setup', 'set up the test database')
  def setup
    # some setup tasks here
  end

  desc('teardown', 'tear down the test database')
  def teardown
    # some teardown tasks here
  end
end
I have a simple Apache Flink job that looks very much like this:
public final class Application {

    public static void main(final String... args) throws Exception {
        final var env = StreamExecutionEnvironment.getExecutionEnvironment();
        final var executionConfig = env.getConfig();
        final var params = ParameterTool.fromArgs(args);
        executionConfig.setGlobalJobParameters(params);
        executionConfig.setParallelism(params.getInt("application.parallelism"));

        final var source = KafkaSource.<CustomKafkaMessage>builder()
                .setBootstrapServers(params.get("application.kafka.bootstrap-servers"))
                .setGroupId(params.get("application.kafka.consumer.group-id"))
                // .setStartingOffsets(OffsetsInitializer.committedOffsets(OffsetResetStrategy.EARLIEST))
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setTopics(params.get("application.kafka.listener.topics"))
                .setValueOnlyDeserializer(new MessageDeserializationSchema())
                .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "custom.kafka-source")
                .uid("custom.kafka-source")
                .rebalance()
                .flatMap(new CustomFlatMapFunction())
                .uid("custom.flatmap-function")
                .filter(new CustomFilterFunction())
                .uid("custom.filter-function")
                .addSink(new CustomDiscardSink()) // Will be a Kafka sink in the future
                .uid("custom.discard-sink");

        env.execute(params.get("application.job-name"));
    }
}
Problem is that I would like to provide an integration test for the entire application, sort of an end-to-end test (or set of tests) for the entire job. I'm using Testcontainers, but I'm not really sure how to move forward with this. For instance, this is what the test looks like (for now):
@Testcontainers
final class ApplicationTest {

    private static final DockerImageName DOCKER_IMAGE = DockerImageName.parse("confluentinc/cp-kafka:7.0.1");

    @Container
    private static final KafkaContainer KAFKA_CONTAINER = new KafkaContainer(DOCKER_IMAGE);

    @ClassRule // How come this works in JUnit Jupiter? :/
    public static MiniClusterResource cluster;

    @BeforeAll
    static void init() {
        KAFKA_CONTAINER.start();
        // ...probably need to wait and create the topic(s) as well
        final var config = new MiniClusterResourceConfiguration.Builder()
                .setNumberSlotsPerTaskManager(2)
                .setNumberTaskManagers(1)
                .build();
        cluster = new MiniClusterResource(config);
    }

    @Test
    void main() throws Exception {
        // new Application(); // ...what's next?
    }
}
I'm not sure how to implement what's required to trigger the job as-is from that point on. Basically, I would like to execute what was defined before, without (almost) any modifications — I've seen plenty of examples that practically build the entire job again, so that's not an option.
Can somebody provide any pointers here?
MessageDeserializationSchema is unbounded, so isEndOfStream returns false. Not sure if that's an impediment.
In order to make the pipeline more testable, I suggest you create a method on your Application class that takes a source and a sink as parameters, and creates and executes the pipeline, using those connectors.
In your tests you can call that method with special sources and sinks that you use for testing. In particular, you will want to use a KafkaSource that uses .setBounded(...) in the tests so that it cleanly handles just the range of data intended for the test(s).
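A minimal sketch of that refactoring, reusing the chain from the question (runPipeline is a hypothetical name; it is assumed that the flatMap keeps the CustomKafkaMessage element type and that CustomDiscardSink is a SinkFunction<CustomKafkaMessage>):

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.connector.source.Source;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.SinkFunction;

public final class Application {

    public static void main(final String... args) throws Exception {
        final var env = StreamExecutionEnvironment.getExecutionEnvironment();
        // ...parse the args and build the production KafkaSource exactly as before,
        // then hand everything to the shared pipeline method:
        // runPipeline(env, source, new CustomDiscardSink(), params.get("application.job-name"));
    }

    // The whole job definition, with the connectors injected.
    public static void runPipeline(final StreamExecutionEnvironment env,
                                   final Source<CustomKafkaMessage, ?, ?> source,
                                   final SinkFunction<CustomKafkaMessage> sink,
                                   final String jobName) throws Exception {
        env.fromSource(source, WatermarkStrategy.noWatermarks(), "custom.kafka-source")
                .uid("custom.kafka-source")
                .rebalance()
                .flatMap(new CustomFlatMapFunction())
                .uid("custom.flatmap-function")
                .filter(new CustomFilterFunction())
                .uid("custom.filter-function")
                .addSink(sink)
                .uid("custom.discard-sink");
        env.execute(jobName);
    }
}

The test can then build a KafkaSource against KAFKA_CONTAINER.getBootstrapServers() and mark it bounded, e.g. with .setBounded(OffsetsInitializer.latest()), so that env.execute(...) returns once the test data has been consumed.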
The solutions and tests for the Apache Flink training exercises are organized along these lines; for example, see RideCleansingSolution.java and RideCleansingIntegrationTest.java. These examples don't use Kafka or Testcontainers, but hopefully they'll still be helpful.
I would suggest you instrument your application as an opaque-box test by interacting with it through its public API. This can be done either as an out-of-process test (e.g. by running your application in a container as well, using Testcontainers) or as an in-process test (by creating your Application and calling its main() method).
Now, in your comments you explained that you want to check for the side effects of interacting with your application (Kafka messages being published). To check this, connect to the KafkaContainer with your own KafkaConsumer from within the test and use a library such as Awaitility to wait until the messages have been received.
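A sketch of that verification step (the topic and group names are assumptions, and the values are assumed to be plain strings):

import static org.junit.jupiter.api.Assertions.assertFalse;

import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.awaitility.Awaitility;

// Inside the test, after the job has been started:
final var props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, KAFKA_CONTAINER.getBootstrapServers());
props.put(ConsumerConfig.GROUP_ID_CONFIG, "it-verifier"); // hypothetical group id
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

try (final var consumer = new KafkaConsumer<String, String>(props)) {
    consumer.subscribe(List.of("output-topic")); // hypothetical output topic name
    final var received = new ArrayList<String>();
    Awaitility.await()
            .pollInSameThread() // KafkaConsumer is not safe for multi-threaded access
            .atMost(Duration.ofSeconds(30))
            .untilAsserted(() -> {
                consumer.poll(Duration.ofMillis(500)).forEach(r -> received.add(r.value()));
                assertFalse(received.isEmpty()); // replace with the actual expected messages
            });
}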
I checked the docs and Stack Overflow but didn't find a suitable approach.
E.g. this post seems very close: Dispatch a blocking service in a Reactive REST GET endpoint with Quarkus/Mutiny
However, I don't want so much unnecessary boilerplate code in my service; ideally, no service code change at all.
I generally just want to call a service method which uses the entity manager and is thus a blocking action, but I want to return a string to the caller immediately, like "query started" or something. I don't need a callback object; it's just a fire-and-forget approach.
I tried something like this
@NonBlocking
@POST
@Produces(MediaType.TEXT_PLAIN)
@Path("/query")
public Uni<String> triggerQuery() {
    return Uni.createFrom()
            .item("query started")
            .call(() -> service.startLongRunningQuery());
}
But it's not working. Error message returned to the caller:

You have attempted to perform a blocking operation on a IO thread. This is not allowed, as blocking the IO thread will cause major performance issues with your application. If you want to perform blocking EntityManager operations make sure you are doing it from a worker thread.

I actually expected Quarkus to take care of distributing the tasks accordingly, that is, the REST call on an I/O thread and the blocking entity manager operations on a worker thread. So I must be using it wrong.
UPDATE:
I also tried a proposed workaround that I found in https://github.com/quarkusio/quarkus/issues/11535, changing the method body to:
return Uni.createFrom()
        .item("query started")
        .emitOn(Infrastructure.getDefaultWorkerPool())
        .invoke(() -> service.startLongRunningQuery());
Now I don't get an error, but service.startLongRunningQuery() is not invoked, so there are no logs and no query is actually sent to the db.
Same with (How to call long running blocking void returning method with Mutiny reactive programming?):
return Uni.createFrom()
        .item(() -> service.startLongRunningQuery())
        .runSubscriptionOn(Infrastructure.getDefaultWorkerPool());
Same with (How to run blocking codes on another thread and make http request return immediately):
ExecutorService executor = Executors.newFixedThreadPool(10, r -> new Thread(r, "CUSTOM_THREAD"));

return Uni.createFrom()
        .item(() -> service.startLongRunningQuery())
        .runSubscriptionOn(executor);
Any idea why service.startLongRunningQuery() is not called at all, and how to achieve fire-and-forget behaviour, with the REST call handled on an I/O thread and the service call handled by a worker thread?
It depends on whether you want to return immediately (before your startLongRunningQuery operation is effectively executed), or whether you want to wait until the operation completes.
In the first case, use something like:
@Inject
EventBus bus;

@NonBlocking
@POST
@Produces(MediaType.TEXT_PLAIN)
@Path("/query")
public void triggerQuery() {
    bus.send("some-address", "my payload");
}

@Blocking // Will be called on a worker thread
@ConsumeEvent("some-address")
public void executeQuery(String payload) {
    service.startLongRunningQuery();
}
In the second case, you need to execute the query on a worker thread.
@POST
@Produces(MediaType.TEXT_PLAIN)
@Path("/query")
public Uni<String> triggerQuery() {
    return Uni.createFrom().item(() -> service.startLongRunningQuery())
            .runSubscriptionOn(Infrastructure.getDefaultWorkerPool());
}
Note that you need RESTEasy Reactive for this to work (and not classic RESTEasy). If you use classic RESTEasy, you would need the quarkus-resteasy-mutiny extension (but I would recommend using RESTEasy Reactive, it will be way more efficient).
Use the EventBus for that https://quarkus.io/guides/reactive-event-bus
Send and forget is the way to go.
I am trying to shut down a StreamExecutionEnvironment that is started during one of our JUnit integration tests. Once all the items in the stream are processed, I want to be able to shut down this execution environment in a deterministic fashion.
Right now, when I call the StreamExecutionEnvironment.execute method, it never returns from that call.
[Maybe I am too late here, but I will answer for those who are looking for an answer or a hint]
Actually, what you need to do is exit gracefully from the SourceFunction<T>.
Then the whole StreamExecutionEnvironment will be closed automatically. To do that, you may need a special end event to send into your source function.
Write your own source function, or extend a pre-defined one if that is what you are using, so that it checks for this special incoming event, which will be emitted at the end of your integration tests, and breaks the loop or unsubscribes from the source. The basic pattern is shown below.
public class TestSource implements SourceFunction<Event> {

    private volatile boolean running = true;

    @Override
    public void run(SourceContext<Event> ctx) throws Exception {
        while (running && hasContent()) {
            Event event = readNextEvent();
            if (isAnEndEvent(event)) {
                break;
            }
            ctx.collect(event);
        }
    }

    @Override
    public void cancel() {
        // required by SourceFunction; lets Flink abort the loop
        running = false;
    }
}
Depending on your situation, in your JUnit tests you should send this special event at the end of each test case or after all of them:
// or @AfterClass
@After
public void doFinish() {
    // send the special event from a different thread.
}
Sometimes you might have to do this from a different thread (basically, from wherever you generate the test events), rather than from the test method itself as above.
For this reason, it is recommended to have a separate source function implementation for your tests, so it is easier to modify it to accept a special close event; never do this in the actual source function that is meant to go into production. :)
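For example, a minimal sketch of such a test-only source (QueueBackedTestSource and Event.END are hypothetical names): it drains a static queue, so the test can push events, and finally the end marker, from any thread:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import org.apache.flink.streaming.api.functions.source.SourceFunction;

public class QueueBackedTestSource implements SourceFunction<Event> {

    // Shared with the test code, which offers events (and the end marker) to it.
    public static final BlockingQueue<Event> QUEUE = new LinkedBlockingQueue<>();

    private volatile boolean running = true;

    @Override
    public void run(SourceContext<Event> ctx) throws Exception {
        while (running) {
            Event event = QUEUE.take(); // blocks until the test offers an event
            if (event == Event.END) {   // hypothetical end marker
                break;                  // graceful exit; execute() then returns
            }
            ctx.collect(event);
        }
    }

    @Override
    public void cancel() {
        running = false;
    }
}

The @After (or @AfterClass) method then only has to call QueueBackedTestSource.QUEUE.offer(Event.END). Note that a static queue like this only works when the job runs in the same JVM as the test, which is the case with a local MiniCluster.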
The javadoc of SourceFunction also mentions a stop() function; you can see an example of it in how TwitterSource has been implemented:
Gracefully Stopping Functions
Functions may additionally implement the {@link org.apache.flink.api.common.functions.StoppableFunction} interface. "Stopping" a function, in contrast to "canceling", means a graceful exit that leaves the state and the emitted elements in a consistent state.
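A hedged sketch of what that looks like, for the older Flink versions that still shipped StoppableFunction (the interface has since been removed; hasContent() and readNextEvent() are the same placeholder helpers as in the snippet above):

import org.apache.flink.api.common.functions.StoppableFunction;
import org.apache.flink.streaming.api.functions.source.SourceFunction;

public class StoppableTestSource implements SourceFunction<Event>, StoppableFunction {

    private volatile boolean running = true;

    @Override
    public void run(SourceContext<Event> ctx) throws Exception {
        while (running && hasContent()) {
            ctx.collect(readNextEvent());
        }
    }

    @Override
    public void cancel() {
        // hard abort; in-flight state may be inconsistent
        running = false;
    }

    @Override
    public void stop() {
        // graceful variant: stop emitting, leaving state and emitted elements consistent
        running = false;
    }
}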
I'm trying to build a security system that is configurable via a config file, perhaps XML. One of the configurable options is the hours of the day/night when the system should be recording; there will be one or more entries for each day of the week. I'm trying to implement this in Python, which is not a language I know well, and my main problem is how to implement this cleanly. Basically, the program will look something like this:
import time

def check_if_on():
    """
    check if the system should be on based on current time and config file
    returns True iff system should be on, False otherwise
    """
    ...

# main loop
while True:
    # do something
    # do something else
    if check_if_on():
        pass  # do something
    else:
        pass  # do something else or nothing
    time.sleep(10)
The config file will look something like:

<on-times>
  <on day="1" time="1900"/>
  <off day="2" time="0700"/>
  <on day="2" time="1800"/>
  <off day="3" time="0900"/>
</on-times>
A friend with a lot more experience than me said to implement the on/off times as timed events in a queue, but I'm not sure that's a good idea, or even how to do it.
If super-precise timing is not necessary, you could run the check every minute as a cron job to save some CPU cycles: if the current time has passed a configured switch time, do something; otherwise do nothing.
I have fully isolated this problem to a very simple Play app.
I think it has to do with some DB caching, but I can't figure it out.
BasicTest.java
==========
import org.junit.*;
import play.test.*;
import play.Logger;
import models.*;
import play.mvc.Http.*;

public class BasicTest extends FunctionalTest {

    @Before
    public void setUp() {
        Fixtures.deleteDatabase();
        Fixtures.loadModels("data.yml");
        Logger.debug("countFromSetup=%s", User.count());
    }

    @Test
    public void test() {
        Response response = GET("/");
        Logger.debug("countFromTest=%s", User.count());
        assertIsOk(response);
    }
}
Uncommented Configs
================
%prod.application.mode=prod
%test.application.mode=dev
%test.db.url=jdbc:h2:mem:play;MODE=MYSQL;LOCK_MODE=0
%test.db=mysql:root:xxx#t_db
%test.jpa.ddl=create
%test.mail.smtp=mock
application.mode=dev
application.name=test
application.secret=jXKw4HabjhaNvosxgzq39to9BJECtOr39EXrEabsQAZKi7YoWAwQWo3B BFUOQnJw
attachments.path=data/attachments
date.format=yyyy-MM-dd
db=mysql:root:xxx#db
mail.smtp=mock
Application.java
============
package controllers;

import play.*;
import play.mvc.*;
import models.*;

public class Application extends Controller {

    public static void index() {
        Logger.debug("countFromIndex=%s", User.count());
        render();
    }
}
>play test
Output of the log after running BasicTest at http://localhost:9000/@tests
==================================================
11:54:59,008 DEBUG ~ countFromSetup=1
11:54:59,021 DEBUG ~ countFromIndex=0
11:54:59,034 DEBUG ~ countFromTest=1
point to browser=> http://localhost:9000
12:25:59,781 DEBUG ~ countFromIndex=1
What happened to the record during this call?

Response response = GET("/");

This 'bug' almost makes my test cases useless.
It probably has something to do with transactions. I came across a similar case once with the Spring/JUnit couple.
Here is the transactional execution of the test (I think):

Start transaction t1.
Execute setup; the result is fetched from the cache.
Execute the test.
Start transaction t2 for the controller execution GET("/").
The result is fetched from the database, but since t1 hasn't been committed, it isn't visible.
Close transaction t2 and commit it.
Close transaction t1 and commit it.

By the way, that is not really a functional test. In functional tests, you are not supposed to check such data, only the HTTP status; turn to unit tests for that. When looking at the source code of the functional tests, you can see that all the checks implemented are for response/HTTP checking.
I think it's the default behavior of JUnit; the @Before annotation makes the method run before every test:

When writing tests, it is common to find that several tests need similar objects created before they can run. Annotating a public void method with @Before causes that method to be run before the Test method. The @Before methods of superclasses will be run before those of the current class.

From: http://junit.sourceforge.net/javadoc/org/junit/Before.html
If you want the setup to be run once, you can use the @BeforeClass annotation: http://junit.sourceforge.net/javadoc/org/junit/BeforeClass.html
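A sketch of that change against the test from the question (note that @BeforeClass methods must be static):

// Runs once for the whole class instead of before every test.
@BeforeClass
public static void setUpClass() {
    Fixtures.deleteDatabase();
    Fixtures.loadModels("data.yml");
}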
In the Play Framework, there are n+1 threads for prod and 1 thread for the test or compile profiles. So if you have a dual-core CPU, there are 3 threads if you are running in prod, and one thread if you started the application with "test".
Now, one more interesting fact: there's one Tx per execution. Thus, when your application starts and you launch your very first test, here is what happens:

Play starts with one thread.
The JUnitRunner starts and the first test, myTest, gets executed. It's an HTTP connection to the application. The reason why you see 0 is that the Response GET is executed before the @Before statement.
The @Before gets executed and creates your entries; the result count is accurate in the @Before because it's done in the same Tx.

So what I suggest is that you either use @BeforeClass, or perform the setup not in a @Before but with a direct call in myTest, for the very specific test case that uses Response.
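For instance, a sketch of the direct-call variant, moving the fixture loading from the question's @Before into the test itself:

@Test
public void myTest() {
    // Set up the data inside the test, in the same flow as the request.
    Fixtures.deleteDatabase();
    Fixtures.loadModels("data.yml");
    Response response = GET("/");
    assertIsOk(response);
}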
I assume that if you replace this code:

@Test
public void myTest() {
    Response response = GET("/test");
}

with this:

@Test
public void myTest() {
    assertEquals(1, User.count());
}

then the count comes out right. Correct?
So the reason why you get this is not a bug. It's simply because of the one-thread configuration we have for the test environment.
Nicolas