Find out all testcases not assigned to a test plan in Kiwi TCMS - kiwi-tcms

Is there a way to find out all testcases not assigned to a test plan?
We have many test cases and it is difficult to find out which test cases have not yet been assigned to a test plan.

Finding test cases which are not attached to a Test Plan is possible, but not very convenient at the moment. Please open an issue on GitHub to request an enhancement for the Test Case search page.
If you use the JavaScript API inside a browser's console then the following will work for you:
jsonRPC('TestCase.filter', {plan:4638}, console.log) - will filter all test cases attached to TP 4638.
jsonRPC('TestCase.filter', {plan:null}, console.log) - will filter all test cases which are not attached to any plans. On https://public.tenant.kiwitcms.org currently this returns 117 cases.
Disclaimer: this answer has been provided to you by a Kiwi TCMS team member.

Related

How to delete a link to a bug from a test execution in Kiwi TCMS

Steps to reproduce
Configure an external bug tracker (in my case, Redmine)
Complete a test run
Use ‘Report bug’ to create a new bug in Redmine
Expected result
A bug is created and linked to the test case run
There is a way to delete the link to the bug
Actual result
As expected :+1:
There is no way to delete the link, or at least I can't find it
Screenshot (some parts are blurred): Test Case Run with linked bug reports
Deleting links to bugs (or anything else) from a Test Execution is currently not possible.
You may open a feature request on GitHub Issues to request this functionality.
However, these URLs serve the purpose of traceability between your bug reports and test executions. If the link is removed, that breaks traceability. You'd better have a very good justification for removing it, because it doesn't sound like something a test team would want to do.

Code Coverage Failure: Your code coverage is 72%. You need at least 75% coverage to complete this deployment

I am working on a new project where the client's pre-existing production code has low coverage (72%), which prevents me from deploying any work done in the Sandbox.
Error:
Code Coverage Failure
Your code coverage is 72%. You need at least 75% coverage to complete this deployment.
Does anyone have recommendations as to how to increase code coverage?
Compile all classes in production
Run all your unit tests (local ones; no need to run tests that come with managed packages)
Go to the Developer Console, open the Query Editor, and tick the Tooling API checkbox at the bottom
Run this query:
SELECT ApexClassOrTrigger.Name, NumLinesCovered, NumLinesUncovered
FROM ApexCodeCoverageAggregate
ORDER BY NumLinesUncovered DESC
LIMIT 10
It should give you a good idea which classes/triggers are least covered. Some of these will be quick wins; time spent on creating/improving their tests will give you the best results in overall coverage. I mean, it's better to spend an hour fixing a class that has 60 out of 100 lines covered than one that has 2 out of 4 covered. Work in the sandbox until you're above 75%.
(There's a chance your sandbox is outdated and somebody created validation rules, required fields, etc. straight in production without deploying... that's why I asked you to compile & run all tests in prod.)
If there are classes/methods that aren't used anymore and it'd be safe to delete them: you can't do that with a changeset, you need a special destructive deployment. For now you could comment them out and deploy that version. Just check whether this is beneficial for you (of course it's good to get rid of old code for easier maintenance, but if it happens to be well covered with tests you'll shoot yourself in the foot).
Add the created/updated test classes to the changeset and you should be able to deploy it to prod.

Advice and experience for testing a CN1 app

I would like to start automating the testing of my app written in CodenameOne, but I find it difficult to visualize how to use the TestRecorder (section "Unit Testing") for "industrial" testing.
If anyone here is already using it, could you share a few tips about how you use it?
E.g. how do you use the different "Asserts" buttons, how do you structure your tests into suites and how do you chain them together (e.g. so each test case starts in the right context, like the place in the navigation structure where it is supposed to run), do you need to manually edit the tests, ...? And is there anything to be aware of before creating lots of tests interactively, e.g. to avoid having your tests invalidated by some irrelevant change to your UI?
I read in the blog post from May 2017 that the TestRecorder "wasn’t picked up by many developers and as such it stagnated". I tried TestRecorder and immediately came across a seemingly basic error in it (a missing check for null) when recording a test case using the Toolbar, which gave me the impression this is still the case. So, if anyone here is using another approach that is working well for you, I'd love to hear about it.
See the test classes we use to test Codename One itself here: https://github.com/codenameone/CodenameOne/tree/master/tests/core
You can use the test recorder to generate a skeleton but you can do this manually just like any test. The test API lets you invoke the app or just pieces of it and perform assertions on the behaviors within.
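For illustration only, a hand-written test could look roughly like the sketch below; the form titles and the button label are hypothetical placeholders for your own app, and I'm assuming the TestUtils helpers used here (waitForFormTitle, clickButtonByLabel) behave as documented in the current API:

import com.codename1.testing.AbstractTest;
import com.codename1.testing.TestUtils;

public class NavigateToSettingsTest extends AbstractTest {
    @Override
    public boolean runTest() throws Exception {
        // wait until the app's main form is showing before interacting with it
        TestUtils.waitForFormTitle("Main");
        // simulate the user tapping the button labeled "Settings"
        TestUtils.clickButtonByLabel("Settings");
        // the test passes only if the Settings form actually comes up
        TestUtils.waitForFormTitle("Settings");
        return true;
    }
}

Recorder-generated tests take the same general shape, which is why the recorder is mainly useful as a way to get a skeleton that you then edit by hand.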

How to integrate Protractor test cases with Hiptest?

For a website built with AngularJS, our organization uses Protractor as the tool to automate test cases.
Our organization has now adopted a new tool named 'HipTest' to manage test case automation.
How do we integrate Protractor test cases with HipTest? I went through the following links but was unable to find much useful information.
https://docs.hiptest.net/automate-your-tests/
https://github.com/hiptest/hiptest-publisher
Can anyone help me figure out how to start?
I'm one of the main contributors to hiptest-publisher, so I should be able to help you.
The quick way to start with hiptest-publisher is to download the bootstrap of the tests from Hiptest (under the automation tab, you will have a "Javascript/Protractor" link).
You will get a zip file with four files (you should add all of them to your version control system, alongside the code of the application you are testing):
- one for the configuration of hiptest-publisher to use the command-line tool
- one for all the tests (you can split them later on, using the --with-folders option in the config file)
- one for the action words: that's the place where you will do the automation
- one for storing the status of the action words you exported (which hiptest-publisher uses to detect which action words have changed since the last export)
Once the action words are implemented, the test files generated can be integrated in your test suite like any other Protractor test.
On the Hiptest side itself, the only requirement is that your tests are written using action words only. From what I understand from your post, you do not work directly in Hiptest yourself and you only manage the automation part (or did I get that wrong?)
For pushing the execution results back to Hiptest, the principle is pretty simple:
- create a test run dedicated to the CI
- run the command "hiptest-publisher --config-file <config file> --test-run-id <test run ID>" before the tests (so only the tests inside the test run are generated and executed; you do not want a test that someone is currently writing to be executed and fail, of course)
- run your tests
- run the command "hiptest-publisher --config-file <config file> --push <path to the results file>" to push the results back to Hiptest.
Note that those two commands (including the test run ID) can be found directly inside Hiptest, from the "Automate" button in the test run.
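For illustration, a CI job built around those two commands might look roughly like this; the config file name, test run ID and results path below are hypothetical placeholders (your real values come from the "Automate" button mentioned above), and I'm assuming your Protractor run writes its results to an XML file that hiptest-publisher can read:

# generate only the tests that belong to the CI test run
hiptest-publisher --config-file hiptest-publisher.conf --test-run-id 1234
# execute the generated Protractor tests
protractor conf.js
# push the execution results back to Hiptest
hiptest-publisher --config-file hiptest-publisher.conf --push output/results.xml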
If you have a Hiptest account, you can contact us directly on the chat; that might make it easier to help you through the process.
Oh, and I have a recording of the last webinar I did about automation; I guess you could find some useful information there too :)

Selenium automation report

I am using the Selenium framework for my test case execution.
I need an instant report of the test cases that have passed while the full suite is still executing.
For example: there are 100 test cases in the suite and five have run so far, of which 3 passed and 2 failed; I need this report instantly while the suite is in progress. Can you please help me with this task?
You can use ExtentReport.
You can use it to log your test steps, and once it's done it will generate a report to show your results.
For what you're looking for, ExtentReport uses a "flush".
If you call this flush after each test step, it will append the step and update the report.
This is something I'm looking into myself at the moment, so I wouldn't consider this a definitive answer, just something I've stumbled across myself; hope it helps.
Here is how to set up ExtentReports on your project with examples - http://www.ontestautomation.com/creating-html-reports-for-your-selenium-tests-using-extentreports/
You must use it in conjunction with a test runner, e.g. TestNG or JUnit.
What you are trying to achieve is slightly different from the example. You need to call a flush after every test step so it appends to the report after the step is completed, rather than when all the tests are completed. It's not something I have done before, but it was explained to me as follows:
Just call .flush() after every test instead of once at the end of your test run. BUT you need to make sure the ExtentReports object itself is only initialized once, instead of being reinitialized at the start of every test. For example, I used TestNG: the ExtentReports object is initialized once using @BeforeSuite, but .flush() is called after every test using @AfterMethod. I hope this makes sense.
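To make that concrete, here is a rough sketch of that pattern with TestNG; the report path and the ExtentSparkReporter class are assumptions based on ExtentReports 5, so adjust them to the version you actually use:

import com.aventstack.extentreports.ExtentReports;
import com.aventstack.extentreports.ExtentTest;
import com.aventstack.extentreports.reporter.ExtentSparkReporter;
import org.testng.ITestResult;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.BeforeSuite;
import java.lang.reflect.Method;

public class BaseReportingTest {
    // one ExtentReports instance for the whole suite
    protected static ExtentReports extent;
    protected ExtentTest test;

    @BeforeSuite
    public void initReport() {
        extent = new ExtentReports();
        extent.attachReporter(new ExtentSparkReporter("target/extent-report.html"));
    }

    @BeforeMethod
    public void startTest(Method method) {
        test = extent.createTest(method.getName());
    }

    @AfterMethod
    public void recordResultAndFlush(ITestResult result) {
        if (result.getStatus() == ITestResult.FAILURE) {
            test.fail(result.getThrowable());
        } else if (result.getStatus() == ITestResult.SKIP) {
            test.skip("skipped");
        } else {
            test.pass("passed");
        }
        // flushing after every test writes the HTML report immediately,
        // so it can be opened while the rest of the suite is still running
        extent.flush();
    }
}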
The only thing that can’t be solved via code is the HTML refresh as this is outside the control of the ExtentReports library (it doesn’t know where you’ve opened the actual HTML file). But this can be taken care of by using a simple browser plugin as I said. At least for Chrome there are a lot of them, just do a Google search for ‘chrome auto refresh’.
Hope this helps. If you need anymore advice don't hesitate to contact me.