How to reorder test cases in KIWI

We need to reorder some test cases so that they match a specific order of the test flow.
Is there a way in Kiwi to reorder these?
There is an option to sort in the Test plan, but the reordering does not follow any pattern.
A better way to phrase the request would be: how do we persist the execution order of test cases?

Related

Is there a way to show completion level?

As a test lead, I want to convey to stakeholders the current status of testing based on projects under test.
When I view telemetry, test plans, test cases, and test executions, I don't see a chart or summary of the planned work, nor a comparison of the planned work versus the completed work. I also don't see this kind of measure listed in the planned updates to Telemetry.
If Kiwi doesn't present this data, has anyone worked out a good way to gather it and present it outside of Kiwi?
Looks like what you are asking for doesn't currently exist, but it sounds like a good report to have. It needs more definition, though.
"As a test lead, I want to convey to stakeholders the current status of testing based on projects under test."
First, let me start by saying that in Kiwi TCMS there are Products, not projects. One product may comprise many projects and vice versa. You may also have multi-layered products.
So let's define what a project means for you. Is that a collection of products, or something different?
How do you define "planned work" ?
#Prome suggested planned work == number of test cases, but that is incomplete. I can have test cases which are never executed during a specific timeframe, such as cases that are already old and obsolete (but which you don't want to remove because you want to keep historical data), and you may have a TestRun which is already finished but contains test cases which have not been executed (reasons may vary).
So how do you "plan testing work" in your team currently ?
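Until something like that exists in Telemetry, one rough way to gather a "planned vs. completed" figure outside of Kiwi is to pull the executions for a run and count how many are in a final status. The Python sketch below is only an illustration: the input format and the status names are assumptions and would need to be mapped onto whatever your RPC client or export actually returns.

    # Rough sketch only: compute a completion ratio for one test run.
    # The input format and the status names are assumptions -- map them onto
    # whatever your Kiwi TCMS RPC client or CSV export actually returns.
    FINAL_STATUSES = {"PASSED", "FAILED", "BLOCKED", "ERROR", "WAIVED"}  # assumed names

    def completion_ratio(executions):
        """executions: list of dicts such as {"case_id": 1, "status": "PASSED"}."""
        if not executions:
            return 0.0
        completed = sum(1 for e in executions if e["status"] in FINAL_STATUSES)
        return completed / len(executions)

    run = [
        {"case_id": 1, "status": "PASSED"},
        {"case_id": 2, "status": "IDLE"},    # planned but not executed yet
        {"case_id": 3, "status": "FAILED"},
    ]
    print(f"completed: {completion_ratio(run):.0%}")   # -> completed: 67%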

Table design for workflow with conditions

I need to design tables for an execution flow [this may not be the right word]. Each step will be skipped or executed based on the conditions set. The user should be able to add multiple steps with conditions, so the 'next' step will change based on those conditions.
We currently have the tables designed as below: the routing table holds the links between steps, and based on the conditions (runtime data) the next step is chosen.
Is this the correct approach, and is there any standard way to design this? As it stands, if we want to add a new step, I need to create a new step table; is there any way we can avoid that? In some cases there won't be any conditions and the flow is straightforward, e.g. from step1 to step3 and then end.
Sample DB structure
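For illustration, one generic shape that avoids a table per step is a single steps table plus a routing table whose rows carry an optional condition (NULL condition = unconditional transition). The sketch below uses SQLite from Python purely as an example; every table, column, and step name is made up, so treat it as a shape to adapt rather than a recommendation.

    import sqlite3

    # Hypothetical schema: one row per step, and a routing table that says
    # "from step X, if <condition> holds, go to step Y".
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE step (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    );
    CREATE TABLE routing (
        from_step  INTEGER NOT NULL REFERENCES step(id),
        to_step    INTEGER NOT NULL REFERENCES step(id),
        cond_key   TEXT,          -- NULL means: unconditional transition
        cond_value TEXT
    );
    """)
    conn.executemany("INSERT INTO step VALUES (?, ?)",
                     [(1, "collect input"), (2, "manual review"), (3, "finish")])
    conn.executemany("INSERT INTO routing VALUES (?, ?, ?, ?)",
                     [(1, 2, "amount_over_limit", "yes"),  # conditional branch
                      (1, 3, None, None),                  # default: straight to finish
                      (2, 3, None, None)])

    def next_step(current, runtime_data):
        """Pick the first matching conditional route, else the unconditional one."""
        rows = conn.execute(
            "SELECT to_step, cond_key, cond_value FROM routing WHERE from_step = ?",
            (current,)).fetchall()
        for to_step, key, value in rows:
            if key is not None and runtime_data.get(key) == value:
                return to_step
        for to_step, key, _ in rows:
            if key is None:
                return to_step
        return None  # end of flow

    print(next_step(1, {"amount_over_limit": "yes"}))  # -> 2
    print(next_step(1, {}))                            # -> 3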
Don't reinvent the wheel. Using a database to implement workflows is a clear antipattern. I would recommend looking into Cadence Workflow, which provides a much higher-level API and includes tons of features that you would never be able to add to your home-grown ad hoc solution.

Sub-queries using XSB Prolog's C API (embedded)

I have a program (C++) that embeds XSB Prolog to use as a constraint solver. I have already written code using the low-level C API to inject facts and run queries. But I am getting stuck on a particular side-problem.
I would like to (for debugging purposes) run a query and then output each term that the query unifies with to a stream. To make the output readable, I thought I would use the output of string:term_to_atom/2 to generate the strings.
So, I'd like to put the query term in register 1, run xsb_query(), and then run string:term_to_atom/2 on the results. But running string:term_to_atom/2 is a query itself, and you can't run xsb_query() when you are in the middle of a query.
I tried using xsb_query_save(), hoping that I could then do a sub-query, followed by xsb_query_restore(), but that doesn't appear to work. The call to my sub-query still bombs out because there is already a query in progress.
I thought about saving a vector of variables created with p2p_new() that have been unified using p2p_unify() with reg_term(1), but I have no idea how or when these terms might get garbage collected, as I see no way for XSB Prolog to know that my C program is using them. (Unless I am supposed to call the undocumented p2p_deref() on them when I am done with them?)
Finally, I would like to do this in a single query (if possible) in order to avoid cluttering up the namespace with what would amount to a temporary rule. But maybe I am trying too hard and I should be using another approach entirely. Ideas?

Unit testing a select query

I am returning a set of rows, each representing a desktop machine.
I am stumped on finding a way to unit test this. There aren't really any edge cases or criteria I can think of to test. It's not like share prices, where I might want to check that I am getting data which is indeed 5 months old. It's not like storing person details, where you could check that a certain length always works, or special characters, etc. Or currency values and different currencies (£, $, etc.) as strings.
How would I test this sort of resultset?
Also, in testing the result set of a query, there are a few problems:
1) Testing that you get the same number of rows as when you run the query on the server is brittle, because someone might change the table data. Is this where a test server comes in, which nobody changes unless they upload change scripts?
2) Do you test that the dataset object is not null? That is, if it's instantiated as null but is not null after the query executes, it's holding a value (this doesn't prove the data is correct, just that data has been retrieved).
Thanks
You can use a component like NBuilder that will simulate your database. Since you can manage all aspects of your dataset, you can test several aspects of the database interaction: the number of records your query returns, the range of values in some field. And because the dataset is always created with the arguments you choose, the data are always the same, so you can reproduce your tests completely decoupled from your database.
1 -
a) Never ever test against the production server, for any reason.
b) Tests should start from a known configuration, which you can achieve either with mock objects or with a test database (some might argue that unit tests should use mock objects and integration tests should use a test database).
As for test cases, start with a basic functionality test - put a "normal" row in and make sure you get it back. You'll appreciate having that test if you later refactor. Make sure the program responds correctly to columns being null, or blank. Put the maximum and minimum values in all the DB fields and make sure the object fields you're storing them in can fit that resolution. Check duplicate records in the DB, or missing records. If you have production data, grab a snapshot of it to put in your test DB and make sure that loads correctly. Is there a value that chronically causes difficulties in other parts of the program? Check it here too. And once you release the code, add to the test list any values you find in production that break the system (Regression testing).
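If you go the known-configuration route, such a test might look like the following sketch. It uses Python with an in-memory SQLite database rather than your real engine, and the desktop-machine schema and column names are invented purely for illustration.

    import sqlite3
    import unittest

    def get_desktops(conn):
        """The query under test (a simplified stand-in for the real one)."""
        return conn.execute(
            "SELECT hostname, os, ram_mb FROM machine WHERE kind = 'desktop'"
        ).fetchall()

    class DesktopQueryTest(unittest.TestCase):
        def setUp(self):
            # Known configuration: every test starts from the same seeded data.
            self.conn = sqlite3.connect(":memory:")
            self.conn.execute(
                "CREATE TABLE machine (hostname TEXT, os TEXT, ram_mb INTEGER, kind TEXT)")
            self.conn.executemany(
                "INSERT INTO machine VALUES (?, ?, ?, ?)",
                [("pc-01", "Windows 10", 8192, "desktop"),   # a "normal" row
                 ("pc-02", None,         4096, "desktop"),   # NULL column
                 ("srv-01", "Linux",     65536, "server")])  # must be filtered out

        def test_normal_row_comes_back(self):
            self.assertIn(("pc-01", "Windows 10", 8192), get_desktops(self.conn))

        def test_servers_are_excluded(self):
            self.assertEqual(len(get_desktops(self.conn)), 2)

        def test_null_os_does_not_break_the_query(self):
            self.assertIn(("pc-02", None, 4096), get_desktops(self.conn))

    if __name__ == "__main__":
        unittest.main()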

How do I test a code generation tool?

I am currently developing a small project of mine that generates SQL calls dynamically, to be used by other software. The SQL calls are not known beforehand, and therefore I would like to be able to unit test the object that generates the SQL.
Do you have any idea what the best approach would be? Bear in mind that there is no possible way to know all the possible SQL calls that will be generated.
Currently the only idea I have is to create test cases of the SQL accepted by the DB using regex and make sure that the SQL will compile, but this does not ensure that the call returns the expected result.
Edited: Adding more info:
My project is an extension of Boo that will allow the developer to tag his properties with a set of attributes. These attributes are used to identify how the developer wants to store the object in the DB. For example:
    # This attribute tells the Boo compiler extension that you want to
    # store the object in a MySQL db. The boo compiler extension will make sure that you meet
    # the requirements
    [Storable(MySQL)]
    class MyObject():
        # Tells the compiler that name is the PK
        [PrimaryKey(Size = 25)]
        [Property(Name)]
        private name as String

        [TableColumn(Size = 25)]
        [Property(Surname)]
        private surname as String

        [TableColumn()]
        [Property(Age)]
        private age as int
The idea is that the generated code won't need to use reflection; instead, the methods are added to the class at compile time. Yes, compilation will take longer, but there won't be a need to use reflection at all. I currently have the code working: it generates the required methods that return the SQL at compile time, and they are added to the object and can be called, but I need to test that the generated SQL is correct :P
The whole point of unit testing is that you know the answer to compare the code's results against. You have to find a way to know the SQL calls beforehand.
To be honest, as other answerers have suggested, your best approach is to come up with some expected results, and essentially hard-code those in your unit tests. Then you can run your code, obtain the result, and compare against the hard-coded expected value.
Maybe you can record the actual SQL generated, rather than executing it and comparing the results, too?
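As a sketch of that idea: the expected SQL text is reviewed by hand once and then hard-coded, and from then on the test pins the generator's output. The generate_insert function below is only a stand-in for whatever your compiler extension actually emits.

    import unittest

    def generate_insert(table, columns):
        """Stand-in for the real generator under test."""
        placeholders = ", ".join("?" for _ in columns)
        return f"INSERT INTO {table} ({', '.join(columns)}) VALUES ({placeholders})"

    class GeneratedSqlTextTest(unittest.TestCase):
        def test_insert_for_myobject_matches_reviewed_sql(self):
            # The expected value was written (and reviewed) by hand once.
            expected = "INSERT INTO MyObject (name, surname, age) VALUES (?, ?, ?)"
            self.assertEqual(
                generate_insert("MyObject", ["name", "surname", "age"]), expected)

    if __name__ == "__main__":
        unittest.main()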
This seems like a chicken-and-egg situation. You aren't sure what the generator will spit out, and you have a moving target to test against (the real database). So you need to tie down the loose ends.
Create a small test database (for example with HSQLDB or Derby). This database should use the same features as the real one, but don't make a copy! You will want to understand what each thing in the test database is for and why it is there, so invest some time to come up with some reasonable test cases. Use your code generator against this (static) test database, save the results as fixed strings in your test cases. Start with a single feature. Don't try to build the perfect test database as step #1. You will get there.
When you change the code generator, run the tests. They should only break in the expected places. If you find a bug, replicate the feature in question in your test database. Create a new test, check the result. Does it look correct? If you can see the error, fix the expected output in the test. After that, fix the generator so it will create the correct result. Close the bug and move on.
This way, you can build more and more safe ground in a swamp. Do something you know, check whether it works (ignore everything else). If you are satisfied, move on. Don't try to tackle all the problems at once. One step at a time. Tests don't forget, so you can forget about everything that is being tested and concentrate on the next feature. The test will make sure that your stable foundation keeps growing until you can erect your skyscraper on it.
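A minimal sketch of that loop, assuming SQLite from Python in place of HSQLDB or Derby; the schema and the generated statement are placeholders for whatever your generator really produces, not its actual output.

    import sqlite3
    import unittest

    # Placeholder for the generator's output for the MyObject class above.
    GENERATED_SELECT = "SELECT name, surname, age FROM MyObject WHERE name = ?"

    class GeneratedSqlAgainstTestDbTest(unittest.TestCase):
        def setUp(self):
            # Small, hand-built test database mirroring the features of the real one.
            self.conn = sqlite3.connect(":memory:")
            self.conn.execute(
                "CREATE TABLE MyObject (name TEXT PRIMARY KEY, surname TEXT, age INTEGER)")
            self.conn.execute("INSERT INTO MyObject VALUES ('Ada', 'Lovelace', 36)")

        def test_generated_sql_compiles_and_returns_the_expected_row(self):
            # Executing the statement shows it is syntactically valid for this engine
            # *and* that it returns the fixture row we expect.
            rows = self.conn.execute(GENERATED_SELECT, ("Ada",)).fetchall()
            self.assertEqual(rows, [("Ada", "Lovelace", 36)])

    if __name__ == "__main__":
        unittest.main()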
Regarding "regex": I think the grammar of SQL is non-regular, but context-free; subexpressions are the key to realizing this. You may want to write a context-free parser for SQL to check for syntax errors.
But ask yourself: what is it you want to test for? What are your correctness criteria?
If you are generating the code, why not also generate the tests?
Short of that, I would test/debug generated code in the same way you would test/debug any other code without unit tests (i.e. by reading it, running it and/or having it reviewed by others).
You don't have to test all cases. Make a collection of example calls, be sure to include as many of the difficult aspects the function will have to handle as possible, then check whether the generated code is correct.
I would have a suite of tests that put in a known input and check that the generated SQL is as expected.
You're never going to be able to write a test for every scenario but if you write enough to cover at least the most regular patterns you can be fairly confident your generator is working as expected.
If you find it doesn't work in a specific scenario, write another test for that scenario and fix it.
