I'm using the Chutzpah console to run all tests in TeamCity.
Command line options:
/junit jasmine_report.html /teamcity /failOnError
This produces a JUnit report in XML format. How can I visualize it as a TeamCity tab (as SpecFlow does, etc.)?
It would be great to see all passed/failed tests. I know there is a Tests tab, but it is not a very human-readable one.
I don't know anything about Chutzpah, but have you looked at jasmine-reporters? It is a collection of reporters supporting Jasmine 1.x or 2.x, and it includes a TeamCity reporter. Maybe you could add this extra reporter to your spec runner to get better TeamCity reporting?
https://github.com/larrymyers/jasmine-reporters
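If it helps, the README shows registration along these lines; a minimal sketch, assuming Jasmine 2.x running under Node (adapt to however Chutzpah boots your specs):

    // Register jasmine-reporters' TeamCity reporter alongside
    // whatever reporters are already configured.
    var reporters = require('jasmine-reporters');
    jasmine.getEnv().addReporter(new reporters.TeamCityReporter());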
To show HTML as a tab on the build details page, publish the HTML as a build artifact and configure a report tab for it in the TeamCity project settings.
Also, the jasmine-reporters package @bloveridge mentioned allows on-the-fly reporting of tests, so the tests will show up on the Tests tab.
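For context, a TeamCity reporter works by writing TeamCity service messages to stdout as the suite runs, roughly like this (spec names are illustrative); TeamCity picks these up live and populates the Tests tab:

    ##teamcity[testStarted name='my first spec']
    ##teamcity[testFailed name='my first spec' message='Expected true to be false.']
    ##teamcity[testFinished name='my first spec']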
I want to implement a TestRunner in my project to execute the test features one by one, or in a given order, but I couldn't find any proper documentation or video on how to integrate a TestRunner into a project. As I am new to SpecFlow and automation, I am not getting the idea. If anyone has implemented a TestRunner, please suggest how I can implement it.
I am trying to run my feature file by right-clicking it and choosing 'Run SpecFlow Scenarios', but test execution doesn't start. Do I need to add an extra library to execute feature files via 'Run SpecFlow Scenarios'?
I don't know why, but the 'Run SpecFlow Scenarios' option in the context menu indeed doesn't work.
To be able to run your tests you need to install an adapter for your test framework.
If you use MSTest, then install MSTest.TestAdapter.
If you use NUnit, then install NUnit3TestAdapter.
Once you do, you'll see the tests in your Test Explorer.
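For example, the adapters can be added from the NuGet Package Manager Console (package names as published on nuget.org):

    Install-Package MSTest.TestAdapter
    Install-Package NUnit3TestAdapter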
Denis Koreyba provided what you are probably looking for.
Another way could be to run the tests from the command line.
These two topics on StackOverflow provide information on how to do this, depending on your test framework:
Console Application to launch Specflow features by code not using ncode runner
How do you run SpecFlow scenarios from the command line using MSTest?
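As a rough illustration, with the VSTest runner this can be as simple as the following (the assembly name is hypothetical):

    vstest.console.exe bin\Debug\MyProject.Specs.dll
    vstest.console.exe bin\Debug\MyProject.Specs.dll /TestCaseFilter:"Name~Login"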
You can change the Test Explorer view to "Group by namespace"; then you will see all the features and can run the scenarios inside each feature.
I am writing an integration test (not a unit test) for my AngularJS / CouchDB application using Karma. I realize that in unit tests mocking the database is important. I am deliberately NOT mocking the database since this is an integration test. I explicitly DO want to interact with the database. I also do NOT want to use protractor for these integration tests. The motivation for these integration tests is to move tests that don't necessarily rely on the UI away from protractor to increase the overall speed of our tests.
The following works when I run code in Protractor: exec("my command that loads data into the database").
In Karma, exec() is not available. How would you run a shell command in Karma? Given how Karma works, is this even possible?
If Karma is not the best way to run integration tests, what would you recommend instead?
Perhaps there is some way to run the shell command while Karma is starting up?
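One possibility, since karma.conf.js is an ordinary Node module: run the command synchronously at the top of the config, before Karma launches the browsers. A minimal sketch, where the seed script name is made up:

    // karma.conf.js
    const { execSync } = require('child_process');

    module.exports = function (config) {
      // Hypothetical script that loads fixture data into CouchDB
      // before any spec runs.
      execSync('./seed-couchdb.sh', { stdio: 'inherit' });

      config.set({
        frameworks: ['jasmine'],
        files: ['src/**/*.js', 'test/**/*.spec.js'],
        browsers: ['ChromeHeadless'],
        singleRun: true
      });
    };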
I've created an Angular 2 TypeScript application and use Protractor to run the e2e test cases. I need to generate a coverage report for the e2e tests using Istanbul or any other available tool.
I cannot use grunt-protractor-coverage. Also, protractor-istanbul-plugin is abandoned now and cannot be used for production-level applications.
Is there any other tool available for generating coverage reports for e2e test cases?
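One pattern that may work, sketched here under stated assumptions rather than as a polished tool: serve an Istanbul-instrumented build, then read the global __coverage__ object out of the browser when the Protractor suite finishes (file paths are made up):

    // protractor.conf.js (fragment): assumes the app under test was
    // instrumented with Istanbul, which exposes window.__coverage__.
    const fs = require('fs');

    exports.config = {
      specs: ['e2e/**/*.e2e-spec.ts'],
      onComplete: function () {
        return browser.executeScript('return window.__coverage__;')
          .then(function (coverage) {
            // Dump the raw coverage; a report can then be built offline,
            // e.g. with nyc/istanbul.
            fs.writeFileSync('coverage/e2e-coverage.json',
                             JSON.stringify(coverage));
          });
      }
    };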
While a Google search explains the purposes of Karma and Protractor, I am keen to know the best practices for writing automated tests. Is it a recommended practice to write both Karma and Protractor tests, or is this overkill for a project? How can one find the optimal balance?
It is not overkill!
Only Protractor runs end-to-end (e2e) tests, i.e. your complete application, so this is the only reliable way to test the end result.
However, errors in individual pieces of your code are hard to track down with e2e tests. Also, Protractor is slow and not suited to running in the background on every source file edit, as Karma can do.
See my answer here for a more detailed discussion of the use cases, advantages and limitations of Karma and Protractor.
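To make the split concrete, here is a hedged sketch (reverse() and the #title element are made-up names):

    // Karma + Jasmine: a fast unit test, re-run on every file save.
    describe('reverse', function () {
      it('reverses a string', function () {
        expect(reverse('abc')).toBe('cba');
      });
    });

    // Protractor: a slower e2e test that drives the whole running app.
    describe('home page', function () {
      it('shows the title', function () {
        browser.get('/');
        expect(element(by.id('title')).getText()).toBe('My App');
      });
    });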
I recently implemented JBehave with WebDriver for automation. I have a few queries:
1) Can JBehave store the results in a DB after the suite is completed?
2) Can we modify the JBehave report to display the build number?
3) Can we run WebDriver tests from the JBehave web runner? The etsy.com example doesn't actually run the WebDriver stories.
4) Can we integrate the results with the web runner? I.e. instead of opening target/view/index.html, can we host it on a web server along with the web runner?
To answer your questions:
1) No, not out of the box, but you can write your own utility classes and call the methods for DB insertion as you wish. Here are the official Selenium database access docs, but note that the DatabaseStorage interface is deprecated.
2) To make templateable views, look at the JBehave view generation documentation; it can help you.
The solution for some of these things was to use continuous integration with Jenkins.
Create a Jenkins job as described here: Integration of jbehave with jenkins.
The results are available for all test runs until the runs are purged in Jenkins (which is configurable).