Timeout for AndroidJUnitRunner + ActivityInstrumentationTestCase2?

The setup:
An older project I've inherited has a lot of legacy instrumentation tests, and I would like to impose a timeout on them: many of them can hang indefinitely, which makes it hard to get a test report. I'm in the process of updating the tests to JUnit 4 style, but at the moment they all extend ActivityInstrumentationTestCase2.
Tried so far:
In the documentation for AndroidJUnitRunner it says to set this flag:
Set timeout (in milliseconds) that will be applied to each test: -e timeout_msec 5000
...
...
All arguments can also be specified in the AndroidManifest via a meta-data tag
I've tried adding AndroidJUnitRunner configuration to the app manifest and the test manifest, but the timeout_msec meta-data item has had no effect so far.

You can use a rule to provide a timeout for each test in the class, as shown below.
@Rule
public Timeout timeout = new Timeout(120000, TimeUnit.MILLISECONDS);
You can also specify timeouts on a per-test basis:
@Test(timeout = 100) // Exception: test timed out after 100 milliseconds
public void test1() throws Exception {
    Thread.sleep(200);
}
You can read more about the differences between the two approaches at this link:
https://stackoverflow.com/a/32034936/2128442
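For context, here is a minimal sketch of a complete test class using both mechanisms at once, assuming the tests have already been migrated to JUnit 4 style (the class and method names are illustrative):

import java.util.concurrent.TimeUnit;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.Timeout;

public class LegacyMigrationTest {

    // Class-wide limit: applies to every test method in this class.
    @Rule
    public Timeout globalTimeout = new Timeout(120000, TimeUnit.MILLISECONDS);

    // Per-test limit: both limits apply, and whichever expires first fails the test.
    @Test(timeout = 5000)
    public void formerlyHangingTest() throws Exception {
        Thread.sleep(100); // stand-in for the real test body
    }
}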

Related

Flink integration test(s) with Testcontainers

I have a simple Apache Flink job that looks very much like this:
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.java.utils.ParameterTool;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public final class Application {

    public static void main(final String... args) throws Exception {
        final var env = StreamExecutionEnvironment.getExecutionEnvironment();
        final var executionConfig = env.getConfig();
        final var params = ParameterTool.fromArgs(args);
        executionConfig.setGlobalJobParameters(params);
        executionConfig.setParallelism(params.getInt("application.parallelism"));
        final var source = KafkaSource.<CustomKafkaMessage>builder()
                .setBootstrapServers(params.get("application.kafka.bootstrap-servers"))
                .setGroupId(params.get("application.kafka.consumer.group-id"))
                // .setStartingOffsets(OffsetsInitializer.committedOffsets(OffsetResetStrategy.EARLIEST))
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setTopics(params.get("application.kafka.listener.topics"))
                .setValueOnlyDeserializer(new MessageDeserializationSchema())
                .build();
        env.fromSource(source, WatermarkStrategy.noWatermarks(), "custom.kafka-source")
                .uid("custom.kafka-source")
                .rebalance()
                .flatMap(new CustomFlatMapFunction())
                .uid("custom.flatmap-function")
                .filter(new CustomFilterFunction())
                .uid("custom.filter-function")
                .addSink(new CustomDiscardSink()) // Will be a Kafka sink in the future
                .uid("custom.discard-sink");
        env.execute(params.get("application.job-name"));
    }
}
The problem is that I would like to provide an integration test for the entire application, sort of an end-to-end (set of) test(s) for the entire job. I'm using Testcontainers, but I'm not really sure how to move forward with this. For instance, this is what the test looks like (for now):
@Testcontainers
final class ApplicationTest {

    private static final DockerImageName DOCKER_IMAGE = DockerImageName.parse("confluentinc/cp-kafka:7.0.1");

    @Container
    private static final KafkaContainer KAFKA_CONTAINER = new KafkaContainer(DOCKER_IMAGE);

    @ClassRule // How come this works in JUnit Jupiter? :/
    public static MiniClusterResource cluster;

    @BeforeAll
    static void init() {
        KAFKA_CONTAINER.start();
        // ...probably need to wait and create the topic(s) as well
        final var config = new MiniClusterResourceConfiguration.Builder()
                .setNumberSlotsPerTaskManager(2)
                .setNumberTaskManagers(1)
                .build();
        cluster = new MiniClusterResource(config);
    }

    @Test
    void main() throws Exception {
        // new Application(); // ...what's next?
    }
}
I'm not sure how to implement what's required to trigger the job as-is from that point on. Basically, I would like to execute what was defined before, without (almost) any modifications — I've seen plenty of examples that practically build the entire job again, so that's not an option.
Can somebody provide any pointers here?
MessageDeserializationSchema is unbounded, so isEndOfStream returns false. Not sure if that's an impediment.
In order to make the pipeline more testable, I suggest you create a method on your Application class that takes a source and a sink as parameters, and creates and executes the pipeline, using those connectors.
In your tests you can call that method with special sources and sinks that you use for testing. In particular, you will want to use a KafkaSource that uses .setBounded(...) in the tests so that it cleanly handles just the range of data intended for the test(s).
The solutions and tests for the Apache Flink training exercises are organized along these lines; for example, see RideCleansingSolution.java and RideCleansingIntegrationTest.java. These examples don't use Kafka or Testcontainers, but hopefully they'll still be helpful.
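A rough sketch of that refactoring, under the assumption that the element types line up with the original custom functions (the method name runPipeline and the test sink type are illustrative, not from the original code):

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.connector.source.Source;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.SinkFunction;

public final class Application {

    public static void main(final String... args) throws Exception {
        // ...build env, params, and the production KafkaSource exactly as before...
        // runPipeline(env, source, new CustomDiscardSink(), params.get("application.job-name"));
    }

    // The pipeline is defined once; tests inject their own source and sink.
    static void runPipeline(final StreamExecutionEnvironment env,
                            final Source<CustomKafkaMessage, ?, ?> source,
                            final SinkFunction<CustomKafkaMessage> sink,
                            final String jobName) throws Exception {
        env.fromSource(source, WatermarkStrategy.noWatermarks(), "custom.kafka-source")
                .rebalance()
                .flatMap(new CustomFlatMapFunction())
                .filter(new CustomFilterFunction())
                .addSink(sink);
        env.execute(jobName);
    }
}

In the test, the injected source would be built against the Testcontainers broker with a bounded offset range, e.g. KafkaSource.<CustomKafkaMessage>builder()...setBounded(OffsetsInitializer.latest()), so the job terminates once the test data has been read.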
I would suggest you instrument your application as an opaque-box test by interacting with it through its public API. This can be done either as an out-of-process test (e.g. by running your application in a container as well, using Testcontainers) or as an in-process test (by creating your Application and calling its main() method).
In your comments you explained that you want to check for the side effects of interacting with your application (Kafka messages being published). To check this, connect to the KafkaContainer with your own KafkaConsumer from within the test and use a library such as Awaitility to wait until the messages have been received.
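A minimal sketch of that verification, assuming the job publishes string messages to a hypothetical output-topic (the topic name, group id, and deserializers are placeholders):

import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import static org.awaitility.Awaitility.await;

// ...inside ApplicationTest...
@Test
void publishesExpectedMessages() {
    final var props = new Properties();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, KAFKA_CONTAINER.getBootstrapServers());
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "it-verifier");
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

    try (var consumer = new KafkaConsumer<String, String>(props)) {
        consumer.subscribe(List.of("output-topic"));
        final var received = new ArrayList<ConsumerRecord<String, String>>();
        // Keep polling until at least one message arrives, or fail after 30 seconds.
        await().atMost(Duration.ofSeconds(30)).until(() -> {
            consumer.poll(Duration.ofMillis(500)).forEach(received::add);
            return !received.isEmpty();
        });
    }
}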

Running failed test using RetryAnalyzer - not working as expected for test using data provider

I am using IRetryAnalyzer for re-running failed test cases, and IAnnotationTransformer for setting the annotation at run time. For a @Test using a data provider it gives a strange result.
I have set the retry limit to 3, i.e. a failing test should re-run 3 times. The issue: if the test fails for the first data set, it retries 3 times (as expected). Then, for all remaining data sets, the re-run count is 2; I am not sure whether that is 2 retries, or 1 run plus 1 retry.
Here is the class implementing the data provider (class declaration reconstructed; the test fails unconditionally to exercise the retry logic):

public class ShapeTest {

    @Test(dataProvider = "data-source")
    public void toolbarActionsOnShapes(String selectShape) throws InterruptedException {
        Assert.assertTrue(false);
    }

    @DataProvider(name = "data-source")
    public Object[][] allShapes() {
        return new Object[][] { { "Rectangle" }, { "Circle" }, { "Triangle" } };
    }
}
On running this I get the output shown here:
https://drive.google.com/open?id=1FxercluPinPiOOUAZKe_dMa6NvVMCE0j
For every set of data, if the test fails, there should be 3 retries. A dummy project zip is attached for reference:
https://drive.google.com/open?id=1Mt7V2TO4TWRKU9dN4FIFzprkDingUKaE
Thanks!!
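For reference, the retry pattern described above typically looks something like this (a sketch only; the question's actual implementation lives in the linked zip and is not shown):

import org.testng.IRetryAnalyzer;
import org.testng.ITestResult;

public class RetryAnalyzer implements IRetryAnalyzer {

    private static final int MAX_RETRIES = 3;
    private int attempts = 0;

    // TestNG calls this after each failure; returning true schedules a re-run.
    @Override
    public boolean retry(ITestResult result) {
        return attempts++ < MAX_RETRIES;
    }
}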
This is due to a bug that exists in TestNG 7.0.0-beta1. Please see GITHUB-1946 for more details.
I went ahead and fixed this as part of my pull request PR-1948.
Please make use of TestNG 7.0.0-SNAPSHOT to get past this problem. The fix should be part of the upcoming TestNG 7.0.0-beta2 or the final 7.0.0 release; which one is not decided yet.

How to have a Selenium test locate elements generated by Angular?

I'm currently working through the Angular tutorial using the Wisdom framework as the back end. As a consequence, I run end-to-end tests using FluentLenium, as the Wisdom framework docs recommend.
My test for step 3, although dead simple, doesn't pass.
Full test can be found at github : Step03IsImplementedIT
However, here is the offending extract (around line 30):
@Test
public void canTestPageCorrectly() {
    if (getDriver() instanceof HtmlUnitDriver) {
        HtmlUnitDriver driver = (HtmlUnitDriver) getDriver();
        if (!driver.isJavascriptEnabled()) {
            driver.setJavascriptEnabled(true);
        }
        Assert.assertTrue("Javascript should be enabled for Angular to work!", driver.isJavascriptEnabled());
    }
    goTo(GoogleShopController.LIST);
    // And load the list of phones
    FluentWebElement phones = findFirst(".phones");
    assertThat(phones).isDisplayed();
    FluentList<FluentWebElement> items = find(".phone");
    assertThat(items).hasSize(3); // <-- this is the assert that fails
}
Failure message :
canTestPageCorrectly(org.ndx.wisdom.tutorial.angular.Step03IsImplementedIT) Time elapsed: 2.924 sec <<< FAILURE!
java.lang.AssertionError: Expected size: 3. Actual size: 1.
at org.fluentlenium.assertj.custom.FluentListAssert.hasSize(FluentListAssert.java:60)
at org.ndx.wisdom.tutorial.angular.Step03IsImplementedIT.canTestPageCorrectly(Step03IsImplementedIT.java:33)
From that failure, I guess the Angular controllers weren't loaded.
How can I make sure they are? And how can I get a working test?
It turned out the error wasn't the expected one... well, it was, but in a hidden fashion.
HtmlUnitDriver, as one may be aware, is a pure Java implementation of a browser and, as such, has some limitations.
One of its limitations is JavaScript interpretation, which seems to go awfully badly with Angular.
To make a long story short, the simplest way to fix that is to replace the default driver with the Firefox one, which implies:
setting fluentlenium.browser to firefox
making sure the driver loads correctly (firefox.exe must be on the PATH when using its driver) by adding a small assert at the beginning of the test
The final test is then
assertThat(getDriver()).isInstanceOf(FirefoxDriver.class);
goTo(GoogleShopController.LIST);
FluentList<FluentWebElement> items = find("li");
FluentLeniumAssertions.assertThat(items).hasSize(3);
fill("input").with("nexus");
await();
items = find(".phone");
FluentLeniumAssertions.assertThat(items).hasSize(1);
fill("input").with("motorola");
await();
items = find(".phone");
FluentLeniumAssertions.assertThat(items).hasSize(2);
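If the property-based configuration isn't picked up in a given setup, older FluentLenium versions also let the test class force a driver by overriding getDefaultDriver(); a minimal sketch, with the class name taken from the question (whether this hook is available depends on the FluentLenium version Wisdom pulls in):

import org.fluentlenium.adapter.FluentTest;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class Step03IsImplementedIT extends FluentTest {

    // Force Firefox instead of the default HtmlUnitDriver.
    @Override
    public WebDriver getDefaultDriver() {
        return new FirefoxDriver();
    }

    // ...tests as above...
}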

playframework: Database/Test Framework/Cache bug

I have fully isolated this problem to a very simple Play app.
I think it has to do with some DB caching, but I can't figure it out.
BasicTest.java
==========
import org.junit.*;
import play.test.*;
import play.Logger;
import models.*;
import play.mvc.Http.*;

public class BasicTest extends FunctionalTest {

    @Before
    public void setUp() {
        Fixtures.deleteDatabase();
        Fixtures.loadModels("data.yml");
        Logger.debug("countFromSetup=%s", User.count());
    }

    @Test
    public void test() {
        Response response = GET("/");
        Logger.debug("countFromTest=%s", User.count());
        assertIsOk(response);
    }
}
Uncommented Configs
================
%prod.application.mode=prod
%test.application.mode=dev
%test.db.url=jdbc:h2:mem:play;MODE=MYSQL;LOCK_MODE=0
%test.db=mysql:root:xxx@t_db
%test.jpa.ddl=create
%test.mail.smtp=mock
application.mode=dev
application.name=test
application.secret=jXKw4HabjhaNvosxgzq39to9BJECtOr39EXrEabsQAZKi7YoWAwQWo3BBFUOQnJw
attachments.path=data/attachments
date.format=yyyy-MM-dd
db=mysql:root:xxx@db
mail.smtp=mock
Application.java
============
package controllers;

import play.*;
import play.mvc.*;
import models.*;

public class Application extends Controller {

    public static void index() {
        Logger.debug("countFromIndex=%s", User.count());
        render();
    }
}
>play test
Output of the log after running BasicTest at http://localhost:9000/#tests
==================================================
11:54:59,008 DEBUG ~ countFromSetup=1
11:54:59,021 DEBUG ~ countFromIndex=0
11:54:59,034 DEBUG ~ countFromTest=1
then, pointing the browser to http://localhost:9000:
12:25:59,781 DEBUG ~ countFromIndex=1
What happened to the record during Response response = GET("/")?
This 'bug' almost makes my test cases useless.
It probably has something to do with transactions. I came across a similar case once with the Spring/JUnit couple.
Here is the transactional execution of the test (I think):
Start transaction t1.
Execute setup; the result is fetched from the cache.
Execute the test.
Start transaction t2 for the controller execution GET("/").
The result is fetched from the database, but since t1 hasn't been committed, it isn't visible.
Close transaction t2 and commit it.
Close transaction t1 and commit it.
By the way, that is not really a functional test. In functional tests you are not supposed to check such data, only HTTP status; turn to unit tests for that. When looking at the source code of functional tests, you can see all the checks implemented are for response/HTTP checking.
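A minimal sketch of moving the data assertion into a Play unit test (the class name is illustrative; the fixture calls are from the question):

import org.junit.Before;
import org.junit.Test;
import play.test.Fixtures;
import play.test.UnitTest;
import models.User;

public class UserCountTest extends UnitTest {

    @Before
    public void setUp() {
        Fixtures.deleteDatabase();
        Fixtures.loadModels("data.yml");
    }

    // Data checks run in the same transaction as the setup, so the count is stable here.
    @Test
    public void countMatchesFixtures() {
        assertEquals(1, User.count());
    }
}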
I think it's the default behavior of JUnit: the @Before annotation makes the method run before every test:
When writing tests, it is common to find that several tests need similar objects created before they can run. Annotating a public void method with @Before causes that method to be run before the Test method. The @Before methods of superclasses will be run before those of the current class.
From: http://junit.sourceforge.net/javadoc/org/junit/Before.html
If you want the setup to run only once, you can use the @BeforeClass annotation: http://junit.sourceforge.net/javadoc/org/junit/BeforeClass.html
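A minimal sketch of that change, reusing the fixture-loading code from the question (note that @BeforeClass methods must be static):

public class BasicTest extends FunctionalTest {

    // Runs once for the whole class instead of before every test.
    @BeforeClass
    public static void setUpClass() {
        Fixtures.deleteDatabase();
        Fixtures.loadModels("data.yml");
    }

    // ...tests as before...
}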
In PlayFramework there are n+1 threads for prod, and 1 thread for the test or compile profile. So if you have a dual-core CPU, there are 3 threads when running in prod, and one thread when you start the application with "test".
Now, one more interesting fact: there's one Tx per execution. So when your application starts and you launch your very first test, here is what happens:
Play starts with one thread.
The JUnitRunner starts and the first test, myTest, gets executed. It's an HTTP connection to the application. The reason you see 0 is that the Response GET is executed before the @Before statement.
The @Before gets executed and creates your entries; the result count is accurate in the @Before because it's done in the same Tx.
So what I suggest is that you either use @BeforeClass, or perform the setup not in a @Before but via a direct call in myTest for the very specific test case with Response.
I assume that if you replace this code

@Test
public void myTest() {
    Response response = GET("/test");
}

with this

@Test
public void myTest() {
    assertEquals(1, User.count());
}

it will behave as you expect. Correct?
So the reason you get this is not a bug; it's simply because of the one-thread configuration we have for the test environment.
Nicolas

Silverlight 4 WCF RIA Service Timeout Problem

I have a Silverlight 4 usercontrol that calls a very long running WCF RIA service. As shown below, I am increasing the default timeout period.
_domainContext = new WindowsDashboardDomainContext();

// Increase timeout -- this can be a very long running query
((WebDomainClient<WindowsDashboardDomainContext.IWindowsDashboardDomainServiceContract>)
    _domainContext.DomainClient).ChannelFactory.Endpoint.Binding.SendTimeout = new TimeSpan(99, 0, 0);

_domainContext.GetSections("All", "All", "All").Completed += GetAllSectionsCompleted;
Unfortunately, it seems to ignore this timeout and still throws the timeout exception:
Error: Unhandled Error in Silverlight Application Load operation failed for query 'GetClicks'. An error occurred while executing the command definition. See the inner exception for details. Inner exception message: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. at System.Data.EntityClient.EntityCommandDefinition.ExecuteStoreCommands(EntityCommand entityCommand, CommandBehavior behavior)
Why is this happening?
I answered the same question here: WCF ria service SP1 timeout expired
The answer:
I'll explain my context, and I hope it will work for you; I'm sure about that.
First of all, to call RIA services you go through some domain context; in my example:

EmployeeDomainContext context = new EmployeeDomainContext();
InvokeOperation<bool> invokeOperation = context.GenerateTMEAccessByEmployee(1, "Bob");
invokeOperation.Completed += (s, x) =>
{....};

Nothing new so far, and with this I was hitting that same timeout exception after 1 minute, every time. I spent quite a lot of time trying to figure out how to change the timeout definition; I tried all possible changes in Web.config, and nothing worked. The solution was:
Create a CustomEmployeeDomainContext: a partial class located in the same path as the generated code. This class uses the OnCreated hook method to change the behavior of the created domain context. In this class you should write:
public partial class EmployeeDomainContext : DomainContext
{
    partial void OnCreated()
    {
        PropertyInfo channelFactoryProperty = this.DomainClient.GetType().GetProperty("ChannelFactory");
        if (channelFactoryProperty == null)
        {
            throw new InvalidOperationException(
                "There is no 'ChannelFactory' property on the DomainClient.");
        }
        ChannelFactory factory = (ChannelFactory)channelFactoryProperty.GetValue(this.DomainClient, null);
        factory.Endpoint.Binding.SendTimeout = new TimeSpan(0, 10, 0);
    }
}
I look forward to your feedback.
There are two possibilities that come to mind:
You have not configured your DomainService to serialize enough objects. The default is very small. Try the tip I posted yesterday on increasing the result-set allocation.
Your data source may be timing out. In that case you need to increase the command timeout for LINQ to SQL, EF, or ADO.NET accordingly. This is the less likely cause, but one to consider.
