I have created a Suite with nested suites. I suppose this will run the tests sequentially, in order: http://doc.scalatest.org/1.8/org/scalatest/Suites.html
class RepositoryTestController extends Suites {
new AnswersRepositorySpecs
new AnswersTransactionRepositorySpecs
new CassandraRepositorySpecs
new PartitionsOfATagRepositorySpecs
new PasswordRepositorySpecs
new PracticeQuestionsRepositorySpecs
new PracticeQuestionsTagsRepositorySpecs
new QuestionsAnsweredByAUserForATagSpecs
new QuestionsCreatedByAUserForATagSpecs
new SupportedTagsRepositorySpecs
new UserProfileAndPortfolioRepositorySpecs
new UsersRepositorySpecs
new UserTokenRepositorySpecs
}
When I run these tests (by pressing Ctrl+Shift+F10 in IntelliJ), I see the message "Tests Passed: 0".
UPDATE
I suppose the problem could be that I haven't created a PlaySpec. I have only partially created this class, as I don't know how to go about creating the Args required by the run method.
class RepositorySpecs extends PlaySpec {
"Repository Specs" should {
"run all specs" in {
val specs = new RepositoryTestController()
specs.run(None) // run requires Args, which I am unable to create. http://doc.scalatest.org/1.8/org/scalatest/Suites.html
}
}
}
It seems the tests are not getting executed. How do I run the tests?
I have a simple Apache Flink job that looks very much like this:
public final class Application {
public static void main(final String... args) throws Exception {
final var env = StreamExecutionEnvironment.getExecutionEnvironment();
final var executionConfig = env.getConfig();
final var params = ParameterTool.fromArgs(args);
executionConfig.setGlobalJobParameters(params);
executionConfig.setParallelism(params.getInt("application.parallelism"));
final var source = KafkaSource.<CustomKafkaMessage>builder()
.setBootstrapServers(params.get("application.kafka.bootstrap-servers"))
.setGroupId(params.get("application.kafka.consumer.group-id"))
// .setStartingOffsets(OffsetsInitializer.committedOffsets(OffsetResetStrategy.EARLIEST))
.setStartingOffsets(OffsetsInitializer.earliest())
.setTopics(params.get("application.kafka.listener.topics"))
.setValueOnlyDeserializer(new MessageDeserializationSchema())
.build();
env.fromSource(source, WatermarkStrategy.noWatermarks(), "custom.kafka-source")
.uid("custom.kafka-source")
.rebalance()
.flatMap(new CustomFlatMapFunction())
.uid("custom.flatmap-function")
.filter(new CustomFilterFunction())
.uid("custom.filter-function")
.addSink(new CustomDiscardSink()) // Will be a Kafka sink in the future
.uid("custom.discard-sink");
env.execute(params.get("application.job-name"));
}
}
The problem is that I would like to provide an integration test for the entire application, sort of like an end-to-end (set of) test(s) for the whole job. I'm using Testcontainers, but I'm not really sure how to move forward with this. For instance, this is what the test looks like (for now):
@Testcontainers
final class ApplicationTest {
private static final DockerImageName DOCKER_IMAGE = DockerImageName.parse("confluentinc/cp-kafka:7.0.1");
@Container
private static final KafkaContainer KAFKA_CONTAINER = new KafkaContainer(DOCKER_IMAGE);
@ClassRule // How come this works in JUnit Jupiter? :/
public static MiniClusterResource cluster;
@BeforeAll
static void init() {
KAFKA_CONTAINER.start();
// ...probably need to wait and create the topic(s) as well
final var config = new MiniClusterResourceConfiguration.Builder().setNumberSlotsPerTaskManager(2)
.setNumberTaskManagers(1)
.build();
cluster = new MiniClusterResource(config);
}
@Test
void main() throws Exception {
// new Application(); // ...what's next?
}
}
I'm not sure how to implement what's required to trigger the job as-is from that point on. Basically, I would like to execute what was defined before, without (almost) any modifications — I've seen plenty of examples that practically build the entire job again, so that's not an option.
Can somebody provide any pointers here?
MessageDeserializationSchema is unbounded, so isEndOfStream returns false. Not sure if that's an impediment.
In order to make the pipeline more testable, I suggest you create a method on your Application class that takes a source and a sink as parameters, and creates and executes the pipeline, using those connectors.
In your tests you can call that method with special sources and sinks that you use for testing. In particular, you will want to use a KafkaSource that uses .setBounded(...) in the tests so that it cleanly handles just the range of data intended for the test(s).
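A rough sketch of that refactoring, assuming the flat map and filter keep the CustomKafkaMessage element type (adjust the generics to your real types); runPipeline and its parameters are illustrative names, not part of the original code:

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.java.utils.ParameterTool;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.SinkFunction;

public final class Application {

    public static void main(final String... args) throws Exception {
        final var env = StreamExecutionEnvironment.getExecutionEnvironment();
        final var params = ParameterTool.fromArgs(args);
        env.getConfig().setGlobalJobParameters(params);
        env.getConfig().setParallelism(params.getInt("application.parallelism"));
        // Production wiring: the real Kafka source and the real sink.
        final var source = KafkaSource.<CustomKafkaMessage>builder()
                .setBootstrapServers(params.get("application.kafka.bootstrap-servers"))
                .setGroupId(params.get("application.kafka.consumer.group-id"))
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setTopics(params.get("application.kafka.listener.topics"))
                .setValueOnlyDeserializer(new MessageDeserializationSchema())
                .build();
        runPipeline(env, source, new CustomDiscardSink(), params.get("application.job-name"));
    }

    // The pipeline itself, reusable from tests: pass a bounded KafkaSource and a
    // collecting test sink instead of the production connectors.
    static void runPipeline(final StreamExecutionEnvironment env,
                            final KafkaSource<CustomKafkaMessage> source,
                            final SinkFunction<CustomKafkaMessage> sink,
                            final String jobName) throws Exception {
        env.fromSource(source, WatermarkStrategy.noWatermarks(), "custom.kafka-source")
           .uid("custom.kafka-source")
           .rebalance()
           .flatMap(new CustomFlatMapFunction())
           .uid("custom.flatmap-function")
           .filter(new CustomFilterFunction())
           .uid("custom.filter-function")
           .addSink(sink)
           .uid("custom.sink");
        env.execute(jobName);
    }
}

A test can then call runPipeline with a KafkaSource built against the Testcontainers broker and .setBounded(OffsetsInitializer.latest()), plus a sink that collects elements for assertions.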
The solutions and tests for the Apache Flink training exercises are organized along these lines; for example, see RideCleansingSolution.java and RideCleansingIntegrationTest.java. These examples don't use Kafka or Testcontainers, but hopefully they'll still be helpful.
I would suggest you instrument your application as an opaque-box test by interacting with it through its public API. This can be done either as an out-of-process test (e.g. by running your application in a container as well, using Testcontainers) or as an in-process test (by creating your Application and calling its main() method).
In your comments you explained that you want to check for the side effects of interacting with your application (Kafka messages being published). To check this, connect to the KafkaContainer with your own KafkaConsumer from within the test and use a library such as Awaitility to wait until the messages have been received.
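A sketch of that check, assuming the job publishes String payloads to an illustrative "output-topic" (swap in your real topic, deserializers, and assertions):

import static org.junit.jupiter.api.Assertions.assertFalse;

import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.awaitility.Awaitility;

// Inside ApplicationTest: called after the job has been started against KAFKA_CONTAINER.
static void assertMessagesWerePublished() {
    final var props = new Properties();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, KAFKA_CONTAINER.getBootstrapServers());
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "it-verifier");
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    try (var consumer = new KafkaConsumer<String, String>(props)) {
        consumer.subscribe(List.of("output-topic")); // topic name is illustrative
        final var received = new ArrayList<String>();
        // KafkaConsumer is not thread-safe, so evaluate the condition on this thread,
        // polling repeatedly until messages show up or the timeout expires.
        Awaitility.await().pollInSameThread().atMost(Duration.ofSeconds(30)).untilAsserted(() -> {
            consumer.poll(Duration.ofMillis(250)).forEach(record -> received.add(record.value()));
            assertFalse(received.isEmpty()); // replace with assertions on the expected payloads
        });
    }
}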
I wrote the following test for my simple Spring Boot Web application:
@SpringApplicationConfiguration(classes=[PDK])
@WebIntegrationTest
@DirtiesContext
class GebMainpageSpec extends GebSpec {
@Autowired
WebApplicationContext context;
def setup() {
System.setProperty("webdriver.chrome.driver", "chromedriver/win32/chromedriver.exe");
browser.driver = new ChromeDriver();
browser.baseUrl = "http://localhost:8080/";
}
def 'Static page present and works, check without pages'() {
when:
go ""
then:
assert title == "MyApp"
}
def 'Static page present and works, check WITH pages'() {
when:
to Mainpage
then:
LoginWithFormUsername.value() == "root"
}
}
These tests apparently work, i.e. they pass or fail depending on the page data.
The problem is that it opens TWO instances of the Chrome browser (one per test).
How do I prevent that? Maybe reuse the browser? Or maybe close it after each test?
UPDATE
If I add something like the following
def cleanupSpec() {
browser.driver.quit()
}
then all my tests start to run twice; moreover, every second run they try HtmlUnit (i.e. the "in memory" web browser, not Chrome).
You shouldn't instantiate the driver yourself when using GebSpec, because it already handles the lifecycle of a driver instance through a browser instance it lazily initializes in its getBrowser() method.
To learn more, see the sections of The Book of Geb about how Geb interacts with WebDriver instances and about configuring the driver via the config script.
I need to run some Node require commands in a WebdriverJS test script. Because these don't get entered into the WebdriverJS command queue, I am wrapping them in .then() functions (to deal with the asynchrony).
e.g.
var webdriver = require('selenium-webdriver');
// create webdriver instance so promise chain can be setup
var promise_builder = new webdriver.Builder().withCapabilities(webdriver.Capabilities.chrome()).
build();
// wrap all functions in webdriver promises so they get managed by webdriver's
// command queue
promise_builder.sleep(0).then(function() {
// Run "non-command-queue" commands
var tests = require('./test_commands');
tests(helpers, setup, webdriver, driver);
}).then(function(){
// more non-webdriver commands
});
The problem here (other than the fact that it's inelegant) is that a browser instance is launched just to achieve promise chaining.
Is there a better way to create the initial promise, e.g. a static method within the webdriver api for creating promises?
This seems to work:
// create an instance of webdriver.promise.ControlFlow
var flow = webdriver.promise.controlFlow();
// use webdriver.promise.controlFlow#execute() to schedule statements into command queue
flow.execute(function() {
// Run "non-command-queue" commands
var tests = require('./test_commands');
tests(helpers, setup, webdriver, driver);
}).then(function(){
// more non-webdriver commands
});
An explanation can be found on the WebdriverJS docs site, i.e.
At the heart of the promise manager is the ControlFlow class. You can obtain an instance of this class using webdriver.promise.controlFlow(). Tasks are enqueued using the execute() function. Tasks always execute in a future turn of the event loop, once those before it in the queue (if there are any) have completed.
I would use webdriver.promise.createFlow(callback) to start a new control flow.
So you'd have something like this:
webdriver.promise.createFlow(function() {
// Run "non-command-queue" commands
var tests = require('./test_commands');
tests(helpers, setup, webdriver, driver);
}).then(function(){
// more non-webdriver commands
});
Documentation: http://selenium.googlecode.com/git/docs/api/javascript/namespace_webdriver_promise.html
Update
I am now leaning towards the webdriver.promise.controlFlow.execute() option that @the_velour_fog described, since I get errors with the after hook failing when a new controlFlow is created. I guess creating a new flow messes with Mocha's async functionality.
I have a batch Apex class:
global class apexBatch implements Database.Batchable<sObject>{
global final string query;
List<user> lstUser= new List<user>();
Set<id> setUserID= new Set<id>();
//constructor
global apexBatch() {
if (system.Test.isRunningTest())
{
this.query='SELECT id FROM user limit 100';
}
else
{
this.query='SELECT id FROM user ;
}
}
global Database.QueryLocator start(Database.BatchableContext BC) {
return Database.getQueryLocator(query);
}
global void execute(Database.BatchableContext BC, List<sObject> scope) {
// do some processing
}
global void finish(Database.BatchableContext BC) {
}
}
I am calling this class from a test class using this code:
Test.startTest();
apexBatch ba = new apexBatch();
Database.executeBatch(ba);
Test.stopTest();
When I check the code coverage I can see that only the constructor is covered; the start and execute methods are not covered at all.
Any idea what could cause this?
Thanks
Are there any exceptions in your debug log when you run the tests? This is the exact same method I use for testing batch classes, so I took this code (I know it's simplified), added the missing close quote on the second query (I assume your code did save correctly and this isn't the problem!), and put the test code into a class; sure enough, it covered the batch code correctly.
Finally, I have seen some weird issues with test coverage reporting recently; how are you running the tests at the moment? I just ran all tests in the org and got 90% coverage (it missed the second query line, for obvious reasons).
I am writing my first Android database backend and I'm struggling to unit test the creation of my database.
Currently the problem I am encountering is obtaining a valid Context object to pass to my implementation of SQLiteOpenHelper. Is there a way to get a Context object in a class extending TestCase? The solution I have thought of is to instantiate an Activity in the setUp method of my TestCase and then assign the Context of that Activity to a field which my test methods can access... but it seems like there should be an easier way.
You can use InstrumentationRegistry methods to get a Context:
InstrumentationRegistry.getTargetContext() - provides the application Context of the target application.
InstrumentationRegistry.getContext() - provides the Context of this Instrumentation’s package.
For AndroidX use InstrumentationRegistry.getInstrumentation().getTargetContext() or InstrumentationRegistry.getInstrumentation().getContext().
New API for AndroidX:
ApplicationProvider.getApplicationContext()
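As a minimal sketch of the AndroidX route, assuming an instrumented test in the androidTest source set (MyDatabaseHelper stands in for your SQLiteOpenHelper subclass):

import static org.junit.Assert.assertNotNull;

import android.content.Context;
import androidx.test.core.app.ApplicationProvider;
import androidx.test.ext.junit.runners.AndroidJUnit4;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(AndroidJUnit4.class)
public class DatabaseHelperTest {

    @Test
    public void databaseOpens() {
        // Application Context of the app under test.
        Context context = ApplicationProvider.getApplicationContext();
        // MyDatabaseHelper is a placeholder for your SQLiteOpenHelper subclass.
        MyDatabaseHelper helper = new MyDatabaseHelper(context);
        assertNotNull(helper.getWritableDatabase());
    }
}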
You might try switching to AndroidTestCase. From looking at the docs, it seems like it should be able to provide you with a valid Context to pass to SQLiteOpenHelper.
Edit:
Keep in mind that you probably have to have your tests set up in an "Android Test Project" in Eclipse, since the tests will try to execute on the emulator (or a real device).
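A rough JUnit 3 style sketch of that setup (MyDatabaseHelper is a placeholder for your SQLiteOpenHelper subclass):

import android.test.AndroidTestCase;

public class DatabaseTest extends AndroidTestCase {

    public void testDatabaseOpens() {
        // getContext() returns a Context supplied by the instrumentation.
        MyDatabaseHelper helper = new MyDatabaseHelper(getContext());
        assertNotNull(helper.getWritableDatabase());
    }
}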
Your test is not a unit test!!!
When you need
a Context,
to read from or write to storage,
to access the network,
or to change any config to test your function,
you are not writing a unit test.
You need to write your test in the androidTest package.
Using the AndroidTestCase:getContext() method only gives a stub Context in my experience. For my tests, I'm using an empty activity in my main app and getting the Context via that. I am also extending the test suite class with the ActivityInstrumentationTestCase2 class. This seems to work for me.
public class DatabaseTest extends ActivityInstrumentationTestCase2<EmptyActivity> {
EmptyActivity activity;
Context mContext = null;
...
@Before
public void setUp() {
activity = getActivity();
mContext = activity;
}
... //tests to follow
}
What does everyone else do?
You can derive from MockContext and return for example a MockResources on getResources(), a valid ContentResolver on getContentResolver(), etc. That allows, with some pain, some unit tests.
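A sketch of that approach; which methods you override depends on what your code under test actually calls:

import android.content.ContentResolver;
import android.content.res.Resources;
import android.test.mock.MockContentResolver;
import android.test.mock.MockContext;
import android.test.mock.MockResources;

// Override just the methods your code under test touches; everything else
// inherited from MockContext throws UnsupportedOperationException.
public class TestContext extends MockContext {

    private final MockContentResolver resolver = new MockContentResolver();

    @Override
    public Resources getResources() {
        return new MockResources();
    }

    @Override
    public ContentResolver getContentResolver() {
        return resolver;
    }
}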
The alternative is to run, for example, Robolectric, which simulates a whole Android OS. That is more suited to system tests: it's a lot slower to run.
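And a hedged sketch of the Robolectric route (assuming Robolectric 4.x, where ApplicationProvider also works on the JVM; MyDatabaseHelper is again a placeholder):

import static org.junit.Assert.assertNotNull;

import android.content.Context;
import androidx.test.core.app.ApplicationProvider;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.robolectric.RobolectricTestRunner;

@RunWith(RobolectricTestRunner.class)
public class DatabaseHelperRobolectricTest {

    @Test
    public void databaseOpens() {
        // Robolectric supplies a simulated application Context on the JVM.
        Context context = ApplicationProvider.getApplicationContext();
        assertNotNull(new MyDatabaseHelper(context).getWritableDatabase());
    }
}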
You should use ApplicationTestCase or ServiceTestCase.
Extending AndroidTestCase and calling AndroidTestCase:getContext() has worked fine for me to get a Context and use it with an SQLiteDatabase.
The only niggle is that the database it creates and/or uses will be the same as the one used by the production application, so you will probably want to use a different filename for each,
e.g.
public static final String NOTES_DB = "notestore.db";
public static final String DEBUG_NOTES_DB = "DEBUG_notestore.db";
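One way to wire that up is to let the helper take the database name, so production code passes NOTES_DB and tests pass DEBUG_NOTES_DB; the constructor shape here is illustrative:

import android.content.Context;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteOpenHelper;

public class NoteStoreHelper extends SQLiteOpenHelper {

    private static final int DB_VERSION = 1;

    // Production code passes NOTES_DB, tests pass DEBUG_NOTES_DB.
    public NoteStoreHelper(Context context, String databaseName) {
        super(context, databaseName, null, DB_VERSION);
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
        // create tables here
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        // migrate here
    }
}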
First, create a test class under androidTest.
Now use the following code:
public class YourDBTest extends InstrumentationTestCase {
private DBContracts.DatabaseHelper db;
private RenamingDelegatingContext context;
@Override
public void setUp() throws Exception {
super.setUp();
context = new RenamingDelegatingContext(getInstrumentation().getTargetContext(), "test_");
db = new DBContracts.DatabaseHelper(context);
}
@Override
public void tearDown() throws Exception {
db.close();
super.tearDown();
}
@Test
public void test1() throws Exception {
// here is your context, the RenamingDelegatingContext created in setUp
assertNotNull(context);
}
}
Initialize the context like this in your test file:
private val context = mock(Context::class.java)