I am trying to run my selenium-cucumber tests on Azure DevOps.
I run my tests locally from the command prompt or terminal with
mvn test -Dcucumber.filter.tags="@ForgotPassword" -Dbrowser=Firefox
This works perfectly fine. Now I want to run the same thing on Azure DevOps, so I have created a pipeline as below.
The Maven task is configured as below.
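In YAML form, the task amounted to roughly the following (a reconstruction based on the answer below, not the original screenshot; note the -D arguments sitting in the Goal field):
# Reconstruction of the failing setup (assumed): the tag and browser
# arguments were placed in the Goal field rather than in Options.
- task: Maven@3
  displayName: 'Maven pom.xml'
  inputs:
    goals: 'test -Dcucumber.filter.tags="@ForgotPassword" -Dbrowser=Firefox'
    testResultsFiles: '**/surefire-reports/TEST-*.xml'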
But when I run the pipeline, the logs show that no tests are detected and none are executed:
│ src/test/resources/cucumber.properties:        cucumber.publish.enabled=true      │
│ src/test/resources/junit-platform.properties:  cucumber.publish.enabled=true      │
│ Environment variable:                          CUCUMBER_PUBLISH_ENABLED=true      │
│ JUnit:                                         @CucumberOptions(publish = true)   │
│                                                                                   │
│ More information at https://cucumber.io/docs/cucumber/environment-variables/      │
│                                                                                   │
│ Disable this message with one of the following:                                   │
│                                                                                   │
│ src/test/resources/cucumber.properties:        cucumber.publish.quiet=true        │
│ src/test/resources/junit-platform.properties:  cucumber.publish.quiet=true        │
└───────────────────────────────────────────────────────────────────────────────────┘
[INFO] Tests run: 0, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.65 s - in com.mohg.automation.cucumberOptions.TestNGTestRunner
[INFO]
[INFO] Results:
[INFO]
[INFO] Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:09 min
[INFO] Finished at: 2022-08-04T15:01:19Z
[INFO] ------------------------------------------------------------------------
Am I missing anything here?
From your screenshot, you are defining the Maven command arguments in the Maven task's Goal field. This is the root cause of the issue.
You need to define the arguments in the Options field instead: the Goal field should carry only Maven goals (here, test), while the -D system properties belong in Options.
For example:
YAML sample:
- task: Maven@3
  displayName: 'Maven pom.xml'
  inputs:
    goals: test
    options: '-Dcucumber.filter.tags="@ForgotPassword" -Dbrowser=Firefox'
    testResultsFiles: '**/surefire-reports/TEST-*.xml'
Related
I have a local cluster running in Minikube. My pipeline job is written in Python and is a basic consumer of Kafka. My pipeline looks as follows:
import apache_beam as beam
from apache_beam.io.kafka import ReadFromKafka
from apache_beam.options.pipeline_options import (
    PipelineOptions, SetupOptions, StandardOptions)

def run():
    options = PipelineOptions([
        "--runner=FlinkRunner",
        "--flink_version=1.10",
        "--flink_master=localhost:8081",
        "--environment_type=EXTERNAL",
        "--environment_config=localhost:50000",
        "--streaming",
        "--flink_submit_uber_jar"
    ])
    options.view_as(SetupOptions).save_main_session = True
    options.view_as(StandardOptions).streaming = True
    with beam.Pipeline(options=options) as p:
        (p
         | 'Create words' >> ReadFromKafka(
                topics=['mullerstreamer'],
                consumer_config={
                    'bootstrap.servers': '192.168.49.1:9092,192.168.49.1:9093',
                    'auto.offset.reset': 'earliest',
                    'enable.auto.commit': 'true',
                    'group.id': 'BEAM-local'
                })
         | 'print' >> beam.Map(print))

if __name__ == "__main__":
    run()
The Flink runner shows no records passing through in "Records received"
Am I missing something basic?
--environment_type=EXTERNAL means you are starting up the workers manually, and is primarily for internal testing. Does it work if you don't specify an environment_type/config at all?
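For instance, a minimal sketch of the same options with the environment flags removed, so Beam falls back to its default (Docker-based) SDK harness; the pipeline body stays unchanged:
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Sketch: same options as above, minus --environment_type/--environment_config.
options = PipelineOptions([
    "--runner=FlinkRunner",
    "--flink_version=1.10",
    "--flink_master=localhost:8081",
    "--streaming",
    "--flink_submit_uber_jar",
])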
import apache_beam as beam
from apache_beam.io.kafka import ReadFromKafka
from apache_beam.options.pipeline_options import PipelineOptions

def run(bootstrap_servers, topic, pipeline_args):
    bootstrap_servers = 'localhost:9092'
    topic = 'wordcount'
    pipeline_args.append('--flink_submit_uber_jar')  # list.append returns None, so don't reassign
    pipeline_options = PipelineOptions([
        "--runner=FlinkRunner",
        "--flink_master=localhost:8081",
        "--flink_version=1.12",
    ] + pipeline_args,
    save_main_session=True, streaming=True)
    with beam.Pipeline(options=pipeline_options) as pipeline:
        _ = (
            pipeline
            | ReadFromKafka(
                consumer_config={'bootstrap.servers': bootstrap_servers},
                topics=[topic])
            | beam.FlatMap(lambda kv: log_ride(kv[1])))  # log_ride: helper defined elsewhere in the original script
I'm facing another issue with the latest Apache Beam 2.30.0 and Flink 1.12.4:
2021/06/10 17:39:42 Initializing python harness: /opt/apache/beam/boot --id=1-2 --provision_endpoint=localhost:42353
2021/06/10 17:39:50 Failed to retrieve staged files: failed to retrieve /tmp/staged in 3 attempts: failed to retrieve chunk for /tmp/staged/pickled_main_session
caused by:
rpc error: code = Unknown desc = ; failed to retrieve chunk for /tmp/staged/pickled_main_session
caused by:
rpc error: code = Unknown desc = ; failed to retrieve chunk for /tmp/staged/pickled_main_session
caused by:
rpc error: code = Unknown desc = ; failed to retrieve chunk for /tmp/staged/pickled_main_session
caused by:
rpc error: code = Unknown desc =
2021-06-10 17:39:53,076 WARN org.apache.flink.runtime.taskmanager.Task [] - [3]ReadFromKafka(beam:external:java:kafka:read:v1)/{KafkaIO.Read, Remove Kafka Metadata} -> [1]FlatMap(<lambda at kafka-taxi.py:88>) (1/1)#0 (9d941b13ae9f28fd1460bc242b7f6cc9) switched from RUNNING to FAILED.
org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.UncheckedExecutionException: java.lang.IllegalStateException: No container running for id d727ca3c0690d949f9ed1da9c3435b3ab3af70b6b422dc82905eed2f74ec7a15
I want to test that all methods of a REST API are covered by tests.
All HTTP calls are recorded into a mutable set, and I have a piece of code that checks the correspondence between the specification and the resulting set of recorded API calls.
I can place this check in a separate test at the end of a FunSuite and it would be executed after all other tests. However, there are two problems: I have to copy-paste it into every file that tests the API, and I have to make sure it is at the end of the file.
Using a common trait does not work: tests from the parent class are executed before tests from the child class. Placing the check inside afterAll does not work either: ScalaTest swallows all exceptions (including test failures) thrown there.
Is there a way to run some test after all others without boilerplate?
Personally, I would go with a dedicated coverage tool such as scoverage; one advantage is avoiding global state.
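For reference, a minimal sketch of wiring scoverage into an sbt build (the version number is illustrative; use the current sbt-scoverage release):
// project/plugins.sbt -- version is illustrative
addSbtPlugin("org.scoverage" % "sbt-scoverage" % "1.9.3")
Then run sbt clean coverage test coverageReport and inspect the report under target/scala-*/scoverage-report.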
Nevertheless, as per the question, a way to execute a test after all other tests is via the Suites and BeforeAndAfterAll traits, like so:
import org.scalatest.{BeforeAndAfterAll, Matchers, Suites}

class AllSuites extends Suites(
  new FooSpec,
  new BarSpec
) with BeforeAndAfterAll with Matchers {
  override def afterAll(): Unit = {
    // matchers here as usual
  }
}
Here is a toy example with the global state described in the question:
AllSuites.scala
import org.scalatest.{BeforeAndAfterAll, Matchers, Suites}

object GlobalMutableState {
  val set = scala.collection.mutable.Set[Int]()
}

class AllSuites extends Suites(
  new HelloSpec,
  new GoodbyeSpec
) with BeforeAndAfterAll with Matchers {
  override def afterAll(): Unit = {
    GlobalMutableState.set should contain theSameElementsAs Set(3, 2)
  }
}
HelloSpec.scala
import org.scalatest.{DoNotDiscover, FlatSpec, Matchers}

@DoNotDiscover
class HelloSpec extends FlatSpec with Matchers {
  "The Hello object" should "say hello" in {
    GlobalMutableState.set.add(1)
    "hello" shouldEqual "hello"
  }
}
GoodbyeSpec.scala
import org.scalatest.{DoNotDiscover, FlatSpec, Matchers}

@DoNotDiscover
class GoodbyeSpec extends FlatSpec with Matchers {
  "The Goodbye object" should "say goodbye" in {
    GlobalMutableState.set.add(2)
    "goodbye" shouldEqual "goodbye"
  }
}
Now executing sbt test gives something like
[info] example.AllSuites *** ABORTED ***
[info] HashSet(1, 2) did not contain the same elements as Set(3, 2) (AllSuites.scala:15)
[info] HelloSpec:
[info] The Hello object
[info] - should say hello
[info] GoodbyeSpec:
[info] The Goodbye object
[info] - should say goodbye
[info] Run completed in 377 milliseconds.
[info] Total number of tests run: 2
[info] Suites: completed 2, aborted 1
[info] Tests: succeeded 2, failed 0, canceled 0, ignored 0, pending 0
[info] *** 1 SUITE ABORTED ***
[error] Error during tests:
[error] example.AllSuites
[error] (Test / test) sbt.TestsFailedException: Tests unsuccessful
I'm leveraging the DynamicTest functionality of JUnit 5.3.0-M1, and there's a showstopping issue I can't seem to figure out with regard to the test name.
My expectation (right or wrong) is that the displayName of the DynamicTest translates to what I'm used to seeing as the test name in the test output.
For example, given the following @TestFactory method:
public class RequestAcceptTest {

    private final Proxy.Location proxy = Proxy.location();

    @TestFactory
    Collection<DynamicTest> testAllMimeTypes() {
        final List<DynamicTest> tests = new ArrayList<>();
        for (final Method method : Method.values()) {
            for (final MimeType mimeType : MimeType.values()) {
                final String displayName = String.format("testAccept%svia%s", mimeType.name(), method);
                tests.add(dynamicTest(displayName, () -> {
                    assertAccept(200, method, mimeType);
                }));
            }
        }
        return tests;
    }
I would expect the failure output from maven-surefire-plugin to contain the DynamicTest displayName somewhere, possibly like so:
[ERROR] Tests run: 6, Failures: 6, Errors: 0, Skipped: 0, Time elapsed: 1,578.898 s <<< FAILURE! - in com.example.prototype.proxy.RequestAcceptTest
[ERROR] testAcceptApplicationJsonviaGET Time elapsed: 0.013 s <<< FAILURE!
org.opentest4j.AssertionFailedError
at com.example.prototype.proxy.RequestAcceptTest.assertAccept(RequestAcceptTest.java:73)
at com.example.prototype.proxy.RequestAcceptTest.lambda$testAllMimeTypes$0(RequestAcceptTest.java:64)
[ERROR] testAcceptApplicationJsonviaPUT Time elapsed: 0.001 s <<< FAILURE!
org.opentest4j.AssertionFailedError
at com.example.prototype.proxy.RequestAcceptTest.assertAccept(RequestAcceptTest.java:73)
at com.example.prototype.proxy.RequestAcceptTest.lambda$testAllMimeTypes$0(RequestAcceptTest.java:64)
[ERROR] testAcceptApplicationJsonviaPOST Time elapsed: 0.001 s <<< FAILURE!
org.opentest4j.AssertionFailedError
at com.example.prototype.proxy.RequestAcceptTest.assertAccept(RequestAcceptTest.java:73)
at com.example.prototype.proxy.RequestAcceptTest.lambda$testAllMimeTypes$0(RequestAcceptTest.java:64)
Instead, the test names are just array-indexed:
[ERROR] Tests run: 6, Failures: 6, Errors: 0, Skipped: 0, Time elapsed: 1,578.898 s <<< FAILURE! - in com.example.prototype.proxy.RequestAcceptTest
[ERROR] testAllMimeTypes[1] Time elapsed: 0.013 s <<< FAILURE!
org.opentest4j.AssertionFailedError
at com.example.prototype.proxy.RequestAcceptTest.assertAccept(RequestAcceptTest.java:73)
at com.example.prototype.proxy.RequestAcceptTest.lambda$testAllMimeTypes$0(RequestAcceptTest.java:64)
[ERROR] testAllMimeTypes[2] Time elapsed: 0.001 s <<< FAILURE!
org.opentest4j.AssertionFailedError
at com.example.prototype.proxy.RequestAcceptTest.assertAccept(RequestAcceptTest.java:73)
at com.example.prototype.proxy.RequestAcceptTest.lambda$testAllMimeTypes$0(RequestAcceptTest.java:64)
[ERROR] testAllMimeTypes[3] Time elapsed: 0.001 s <<< FAILURE!
org.opentest4j.AssertionFailedError
at com.example.prototype.proxy.RequestAcceptTest.assertAccept(RequestAcceptTest.java:73)
at com.example.prototype.proxy.RequestAcceptTest.lambda$testAllMimeTypes$0(RequestAcceptTest.java:64)
Likewise, the failure summary shows only the lambda names and contains no reference to the displayName:
[INFO] Results:
[INFO]
[ERROR] Failures:
[ERROR] RequestAcceptTest.lambda$testAllMimeTypes$0:64->assertAccept:73
[ERROR] RequestAcceptTest.lambda$testAllMimeTypes$0:64->assertAccept:73
[ERROR] RequestAcceptTest.lambda$testAllMimeTypes$0:64->assertAccept:73
In reality, I don't have 6 or 10 tests, but 10,000. I don't care how, but I do need some information I control to show up in the test output.
If I'm unable to communicate to the casual observer what the test was, then the feature is unfortunately not usable, and I'll probably have to go back to bytecode-generating test methods.
I also do not know whether this is a JUnit question or a maven-surefire-plugin question.
Maven took back the Surefire integration but didn't integrate with displayName yet :(. See https://github.com/junit-team/junit5/issues/990
Romain
If you configure maven-surefire-plugin as below, the XML report will contain the testcase name as "A() - B", where:
A is the name of the method annotated with @TestFactory
B is the displayName of the DynamicTest
For the factory above, that yields names like testAllMimeTypes() - testAcceptApplicationJsonviaGET.
pom.xml/build/plugins:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>3.0.0-M6</version>
  <configuration>
    <statelessTestsetReporter implementation="org.apache.maven.plugin.surefire.extensions.junit5.JUnit5Xml30StatelessReporter">
      <usePhrasedTestCaseMethodName>true</usePhrasedTestCaseMethodName>
    </statelessTestsetReporter>
  </configuration>
</plugin>
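With this reporter enabled, the testcase entries in the XML report under target/surefire-reports should carry the phrased names; roughly, as a sketch (the attribute set is illustrative, not an exact Surefire transcript):
<!-- sketch: testcase entry with the phrased name "A() - B" -->
<testcase name="testAllMimeTypes() - testAcceptApplicationJsonviaGET"
          classname="com.example.prototype.proxy.RequestAcceptTest"
          time="0.013"/>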
https://maven.apache.org/surefire/maven-surefire-plugin/examples/junit-platform.html#Surefire_Extensions_and_Reports_Configuration_for_.40DisplayName
https://issues.apache.org/jira/browse/SUREFIRE-1546
I had a problem when I tried to integrate Solr with Nutch:
version of Nutch: 1.13
version of Solr: 5.5.0 (as recommended by the official documentation, https://wiki.apache.org/nutch/NutchTutorial#Verify_your_Nutch_installation)
The error is:
Active IndexWriters :
SOLRIndexWriter
solr.server.url : URL of the SOLR instance
solr.zookeeper.hosts : URL of the Zookeeper quorum
solr.commit.size : buffer size when sending to SOLR (default 1000)
solr.mapping.file : name of the mapping file for fields (default solrindex-mapping.xml)
solr.auth : use authentication (default false)
solr.auth.username : username for authentication
solr.auth.password : password for authentication
Indexer: number of documents indexed, deleted, or skipped:
Indexer: finished at 2017-11-30 01:34:49, elapsed: 00:00:01
Cleaning up index if possible
apache-nutch-1.13/bin/nutch clean -Dsolr.server.url=http://localhost:8983/solr/nutch crawling_dir/crawldb
SolrIndexer: deleting 1/1 documents
ERROR CleaningJob: java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:865)
at org.apache.nutch.indexer.CleaningJob.delete(CleaningJob.java:174)
at org.apache.nutch.indexer.CleaningJob.run(CleaningJob.java:197)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.nutch.indexer.CleaningJob.main(CleaningJob.java:208)
Error running:
apache-nutch-1.13/bin/nutch clean -Dsolr.server.url=http://localhost:8983/solr/nutch crawling_dir/crawldb
Failed with exit value 255.
on the log file:
2017-11-30 01:34:50,851 WARN output.FileOutputCommitter - Output Path is null in cleanupJob()
2017-11-30 01:34:50,851 WARN mapred.LocalJobRunner - job_local531807742_0001
java.lang.Exception: java.lang.IllegalStateException: Connection pool shut down
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:529)
Caused by: java.lang.IllegalStateException: Connection pool shut down
at org.apache.http.util.Asserts.check(Asserts.java:34)
at org.apache.http.pool.AbstractConnPool.lease(AbstractConnPool.java:169)
at org.apache.http.pool.AbstractConnPool.lease(AbstractConnPool.java:202)
at org.apache.http.impl.conn.PoolingClientConnectionManager.requestConnection(PoolingClientConnectionManager.java:184)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:415)
at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:481)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:240)
at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:229)
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:482)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:463)
at org.apache.nutch.indexwriter.solr.SolrIndexWriter.commit(SolrIndexWriter.java:191)
at org.apache.nutch.indexwriter.solr.SolrIndexWriter.close(SolrIndexWriter.java:179)
at org.apache.nutch.indexer.IndexWriters.close(IndexWriters.java:117)
at org.apache.nutch.indexer.CleaningJob$DeleterReducer.close(CleaningJob.java:122)
at org.apache.hadoop.io.IOUtils.cleanup(IOUtils.java:244)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:459)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2017-11-30 01:34:51,458 ERROR indexer.CleaningJob - CleaningJob: java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:865)
at org.apache.nutch.indexer.CleaningJob.delete(CleaningJob.java:174)
at org.apache.nutch.indexer.CleaningJob.run(CleaningJob.java:197)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.nutch.indexer.CleaningJob.main(CleaningJob.java:208)
Does anyone have any idea?
I had the same problem, and yours is probably due to the same cause:
https://issues.apache.org/jira/browse/NUTCH-2269
Try patching it and the error should be gone.
From my findings, it appears to be a bug. Here's a blog post that explains it well: https://reformatcode.com/code/apache-configuration/apache-nutch-112-with-apache-solr-621-give-an-error
We have a small Java project that we need to deploy.
It includes 9000+ files.
Command: mvn gcloud:deploy
But I get the log:
...
[INFO] INFO: Uploading [/home/steven/work/idigisign/target/appengine-staging/__static__/node_modules/rx/src/core/linq/observable/when.js] to [7dfb30ad32893c5042dba03601f006a40419fab0]
[INFO] DEBUG: Uploading [/home/steven/work/idigisign/target/appengine-staging/assets/global/plugins/bootstrap-switch/js/bootstrap-switch.min.js] to [7e0725897d7b99c3c33b56915d202e2dde552ea9]
[INFO] INFO: Uploading [/home/steven/work/idigisign/target/appengine-staging/assets/global/plugins/bootstrap-switch/js/bootstrap-switch.min.js] to [7e0725897d7b99c3c33b56915d202e2dde552ea9]
[INFO] DEBUG: Uploading [/home/steven/work/idigisign/target/appengine-staging/node_modules/is-redirect/index.js] to [7e0afe4775bf7f8558665760171c01948c22f771]
[INFO] INFO: Uploading [/home/steven/work/idigisign/target/appengine-staging/node_modules/is-redirect/index.js] to [7e0afe4775bf7f8558665760171c01948c22f771]
[INFO] DEBUG: Uploading [/home/steven/work/idigisign/target/appengine-staging/node_modules/rxjs/src/util/Map.ts] to [7e11722f4cd9ce91ec99b97710fbc4e7f40be09d]
...
About 50 files per minute,
so it will take about 180 minutes.
It is extraordinarily slow.
Can anybody help me?
Set the environment variable CLOUDSDK_APP_USE_GSUTIL=1 and try again; this uses a less-reliable but faster codepath for file upload (there are plans to speed up the default codepath).
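For example, with the Maven-based deploy from the question (a minimal sketch; the variable just needs to be set in the environment that runs Maven):
export CLOUDSDK_APP_USE_GSUTIL=1   # opt in to the faster gsutil-based upload path
mvn gcloud:deploy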
We have the same issue; it's very slow.
I think we have solved it.
First, we traced the gcloud logs and found that many files were being uploaded again even though they had not been modified. So we traced the gcloud source code and found that the issue is caused by the Google Cloud Storage JSON API:
when it queried the list of the bucket, it returned 1000 items, but we have 1325 items, so we suspected this was the issue.
Then we looked at the API reference and found a parameter, maxResults; we tried modifying the source code (cloud_storage.py), but it has no effect when the value is over 1000.
Finally, we found another parameter, nextPageToken, and we now query the list until nextPageToken is None. With that, it gets all items from Google Cloud Storage and the existing files are not uploaded again.
def ListBucket(bucket_ref, client):
  request = STORAGE_MESSAGES.StorageObjectsListRequest(bucket=bucket_ref.bucket)
  items = set()
  try:
    response = client.objects.List(request)
    for item in response.items:
      items.add(item.name)
    # Keep following nextPageToken until the listing is exhausted; the API
    # returns at most 1000 items per page.
    while response.nextPageToken:
      request = STORAGE_MESSAGES.StorageObjectsListRequest(
          bucket=bucket_ref.bucket, pageToken=response.nextPageToken)
      response = client.objects.List(request)
      for item in response.items:
        items.add(item.name)
  except api_exceptions.HttpError as e:
    raise UploadError('Error uploading files: {e}'.format(e=e))
  return items