Is there a way to show completion level? - kiwi-tcms

As a test lead, I want to convey to stakeholders the current status of testing based on projects under test.
When I view Telemetry, test plans, test cases, and test executions, I don't see a chart or summary of the planned work, nor a comparison of planned versus completed work. I also don't see this kind of measure listed in the planned updates to Telemetry.
If Kiwi doesn't present this data, has anyone worked out a good way to gather it and present it outside of Kiwi?

Looks like what you are asking for doesn't currently exist, but it sounds like a good report to have. It needs more definition though.
"As a test lead, I want to convey to stakeholders the current status of testing based on projects under test."
First let me start by saying that in Kiwi TCMS there are Products, not projects. One product may comprise many projects and vice versa. You may also have multi-layered products.
So let's define what a project means for you. Is that a collection of products or something different?
How do you define "planned work" ?
#Prome suggested planned work == number of test cases, but that is incomplete. I can have test cases which are never executed during a specific timeframe, such as ones that are already old and obsolete (but which you don't want to remove because you want to keep the historical data). You may also have a TestRun which is already finished but contains test cases that were never executed (reasons may vary).
So how do you "plan testing work" in your team currently ?
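In the meantime, if you do want to gather and present this outside of Kiwi, a minimal sketch using the tcms-api Python client and Kiwi's XML-RPC interface might look like the following. The plan ID is hypothetical, and the Django-style filter and field names ("run__plan", "status__name") should be verified against your Kiwi TCMS version:

```python
# A minimal sketch, assuming the tcms-api client and Kiwi's XML-RPC
# TestExecution.filter method. Filter and field names are Django-style
# and should be checked against your Kiwi TCMS version.
from tcms_api import TCMS

rpc = TCMS().exec  # credentials come from ~/.tcms.conf

PLAN_ID = 42  # hypothetical test plan standing in for "the project"

executions = rpc.TestExecution.filter({"run__plan": PLAN_ID})
total = len(executions)
# Count anything no longer IDLE as done; adjust the set of "completed"
# statuses to match how your team defines finished work.
done = sum(1 for e in executions if e["status__name"] != "IDLE")

if total:
    print(f"Plan {PLAN_ID}: {done}/{total} executions done ({100 * done / total:.0f}%)")
else:
    print(f"Plan {PLAN_ID}: no executions planned")
```

From there the numbers can go into whatever the stakeholders already read, such as a spreadsheet or a dashboard.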

Related

Thoughts on creating a searchable database

I guess what I need is two things: first, a way to input data into an Excel-like application or a form builder, and then a way to search those entries. For example, with car parts: put car Part A into Field 1, then Field 2 would be the car type, followed by make and model. The fields would need to be made into a form consisting of preset inputs such as (Title/Type) and (Variable Categories), so a drop-down menu, icons, or checkboxes would help narrow down the list of results. What pieces need to be in place to build or use a lightweight database/application like this that allows inputting new information and later searching it by those variables? Also, is there an application that does this already, a programming language I could learn to build it, or an estimated cost and set of requirements to have it built?
First, there might be something off the shelf that does this already, and there are applications like this. Microsoft Access would be a good place to start to see if it fits your needs -- you can build forms and store data without much programming effort. As your needs grow, you can scale up to SQL Server.
It's not clear to me if your data is relational or not, and it might not matter much at first (any database will likely handle your queries to start). I originally thought your data was not relational, but re-reading your post, I'm not so sure now.
If that doesn't work, or you want more flexibility, then I'd start looking at NoSQL as an option. Some good choices include Mongo and RavenDB (there are many others).
You can program it yourself with just about any major language -- some provide more or less functionality based on the tie-in to the data.
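To make that concrete, here is a lightweight sketch of the form-fields-plus-search idea using Python's built-in sqlite3 module. The table and columns are invented for the car-part example, and a real form front end would sit on top of this:

```python
# Sketch only: a preset-field table plus a parameterized search, which is
# what drop-downs and checkboxes map onto in a relational design.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE car_part (
        id        INTEGER PRIMARY KEY,
        part_name TEXT NOT NULL,   -- Field 1: the part itself
        car_type  TEXT NOT NULL,   -- Field 2: preset drop-down values
        make      TEXT NOT NULL,
        model     TEXT NOT NULL
    )
""")
conn.execute(
    "INSERT INTO car_part (part_name, car_type, make, model) VALUES (?, ?, ?, ?)",
    ("Part A", "Sedan", "Toyota", "Camry"),
)

# Searching narrows by whichever preset categories the user picked.
rows = conn.execute(
    "SELECT part_name, make, model FROM car_part WHERE car_type = ? AND make = ?",
    ("Sedan", "Toyota"),
).fetchall()
print(rows)  # [('Part A', 'Toyota', 'Camry')]
```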

How best to relate tables in a database

I've worked with databases on and off, but this is my first time designing one from scratch. Apologies if this already has an answer somewhere, I couldn't find anything satisfying.
The objective is to store quality-testing data during product assembly. A variable number of tests may be run on each unit, so I have many-to-one related tables for tests and builds.
The next table to add is a list of part numbers in the build (each unit is made of several hundred parts). From a physical and logical standpoint, it makes sense that these should be related to the Builds table. However, the client stated they must be related to the tests because parts are sometimes switched out between tests if a mistake is identified.
It seems like a huge waste of space to duplicate hundreds of parts each time a test is re-run, when only one or two are actually changing. However, I can't think of a better way. Any ideas?
Thanks in advance.
It sounds like you're running the test on the build itself, rather than on the parts in particular. So it is as if the build has versions, each one different from the one before because a part was changed.
That suggests to me that you need a build_version table that relates to the set of parts, and which is the subject of the test.
If there are a great many parts but only a few of them change between versions then you might have a build_version_part_changes table that expresses the relation between a build_version and its parts in terms of parts added and parts removed.
So if there is a test failure and parts are then changed, a new build_version record is created with an associated set of parts changes. The new build_version is then subject to another test.
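As a rough illustration of that shape, here is a sketch of the schema in SQLite (via Python's sqlite3); the table and column names are illustrative, not a finished design:

```python
# Sketch of the versioned-build idea: tests point at a build_version, and
# only part deltas are stored between versions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE build (
        id INTEGER PRIMARY KEY
    );

    CREATE TABLE build_version (
        id       INTEGER PRIMARY KEY,
        build_id INTEGER NOT NULL REFERENCES build(id),
        version  INTEGER NOT NULL
    );

    -- Only the delta between versions is stored, so re-testing after a
    -- one-part swap does not duplicate hundreds of unchanged part rows.
    CREATE TABLE build_version_part_change (
        build_version_id INTEGER NOT NULL REFERENCES build_version(id),
        part_number      TEXT NOT NULL,
        change           TEXT NOT NULL CHECK (change IN ('added', 'removed'))
    );

    -- Each test references the exact version it ran against, which is what
    -- the client is really asking for when they say parts relate to tests.
    CREATE TABLE test (
        id               INTEGER PRIMARY KEY,
        build_version_id INTEGER NOT NULL REFERENCES build_version(id),
        result           TEXT
    );
""")

# Version 2 of build 1 swaps a single part; nothing else is re-stored.
conn.executescript("""
    INSERT INTO build (id) VALUES (1);
    INSERT INTO build_version (id, build_id, version) VALUES (1, 1, 1), (2, 1, 2);
    INSERT INTO build_version_part_change VALUES (2, 'P-1234', 'removed'),
                                                 (2, 'P-5678', 'added');
""")
```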

TFS Burndown AuthorizedDate

I'm looking for a way to manually adjust TFS task start dates so that my burndown appears correct.
Essentially the iteration has fixed start/end dates, and some user stories did not get filled out until halfway through the iteration.
This put a bump in the road on the burndown, so it looks like we are below target.
I have full access to the TFS database and am wondering what queries I would need to write to get my tasks backdated to the start of the iteration.
I have read somewhere that it is System.AuthorizedDate that controls the burndown chart.
Any help appreciated.
J
You are correct on System.AuthorizedDate being used.
You won't be able to change System.AuthorizedDate by means of the public API; it won't let you. And you cannot change it by means of SQL update commands and remain in a supported state. Officially, Microsoft does not allow that while still maintaining their ability to support you, unless the SQL changes were made under their guidance, such as through a support incident.
I doubt a support incident with Microsoft will yield the update query, as this is not a defect, and as I explain below it could put you in a very bad place. Could you create a series of updates on the appropriate tables to backdate System.AuthorizedDate? Without doubt. It might even work, but I am not certain it would if you dared to try. The reason is that work items receive System.Id numbers sequentially as they are created. I do know that in version control the system expects a higher changeset number to have a later commit date (I can't recall the exact field name) than any lower changeset number, and it would not surprise me if there are similar expectations for work items. Changing the field via SQL might produce errors or unexpected outcomes in various places; I can imagine a future upgrade, or even a routine update, simply bombing out and failing to complete. That is all hypothetical, though, because unless you want your environment in an unsupported state you would not change it via SQL.
Outside of creating your own burndown that evaluates the dates differently, I am not aware of a way to meet your goal under those conditions.
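If you do go the build-your-own-burndown route, one option is to pull the sprint's tasks over the TFS REST API with a WIQL query and, when plotting, clamp any System.AuthorizedDate earlier than the iteration start to day one. A rough sketch; the server URL, project, iteration path, credentials, and api-version are placeholders that depend on your TFS installation:

```python
# Sketch only: assumes the TFS 2015+ REST API; adjust URL, auth scheme,
# and api-version for your server.
import requests

WIQL_URL = ("http://tfs:8080/tfs/DefaultCollection/MyProject"
            "/_apis/wit/wiql?api-version=2.0")
query = {
    "query": "SELECT [System.Id] FROM WorkItems "
             "WHERE [System.IterationPath] = 'MyProject\\Sprint 1' "
             "AND [System.WorkItemType] = 'Task'"
}
resp = requests.post(WIQL_URL, json=query, auth=("user", "password"))
ids = [item["id"] for item in resp.json()["workItems"]]
print(f"{len(ids)} tasks in the sprint: {ids}")
# Next step (not shown): fetch each work item's System.AuthorizedDate and
# remaining work, then treat any date before the sprint start as day one.
```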

Unit testing a select query

I am returning a set of rows, each representing a desktop machine.
I am stumped on finding a way to unit test this. There aren't really any edge cases or criteria I can think of to test. It's not like share prices, where I might want to check that I am getting data which is indeed 5 months old. It's not like storing person details, where you could check that a certain length always works, or special characters, etc., or currency amounts in different currencies (£, $, etc.) as strings.
How would I test this sort of resultset?
Also, in testing the result set of a query, there are a few problems:
1) Testing that you have the same number of rows as when you run the query on the server is brittle, because someone might change the table data. Is this where a test server comes in -- one which nobody changes unless they upload change scripts?
2) Do you test that the dataset object is not null? So if it's instantiated as null, but is not null after the query has executed, it's holding a value (this doesn't prove the data is correct, just that data has been retrieved).
Thanks
You can use a component like NBuilder to simulate your database. Because you control every aspect of your dataset, you can test several aspects of the database interaction: the number of records your query returns, the range of values in a given field, and so on. And because the dataset is always created with the arguments you choose, the data is always the same, so you can reproduce your tests completely decoupled from your database.
1 -
a) Never ever test against the production server, for any reason.
b) Tests should start from a known configuration, which you can achieve either with mock objects or with a test database (some might argue that unit tests should use mock objects and integration tests should use a test database).
As for test cases, start with a basic functionality test: put a "normal" row in and make sure you get it back. You'll appreciate having that test if you later refactor. Make sure the program responds correctly to columns being null or blank. Put the maximum and minimum values in all the DB fields and make sure the object fields you're storing them in can hold that resolution. Check duplicate records in the DB, and missing records. If you have production data, grab a snapshot of it to put in your test DB and make sure it loads correctly. Is there a value that chronically causes difficulties in other parts of the program? Check it here too. And once you release the code, add to the test list any values you find in production that break the system (regression testing).
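NBuilder is a .NET library, but the "known configuration" idea above is language-agnostic. As a sketch of the same approach in Python, you might seed an in-memory SQLite database in setUp so every run starts from identical data; the schema and query here are invented for the desktop-machines example:

```python
# Sketch: each test owns its data, so row counts and contents are stable.
import sqlite3
import unittest

def get_desktops(conn):
    # The query under test (illustrative).
    return conn.execute(
        "SELECT hostname FROM machines WHERE kind = 'desktop' ORDER BY hostname"
    ).fetchall()

class SelectDesktopsTest(unittest.TestCase):
    def setUp(self):
        # Known configuration: a fresh in-memory DB for every test.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE machines (hostname TEXT, kind TEXT)")
        self.conn.executemany(
            "INSERT INTO machines VALUES (?, ?)",
            [("desk-01", "desktop"), ("desk-02", "desktop"), ("srv-01", "server")],
        )

    def test_returns_only_desktops(self):
        rows = get_desktops(self.conn)
        self.assertEqual([("desk-01",), ("desk-02",)], rows)

    def test_empty_table_returns_empty_result(self):
        self.conn.execute("DELETE FROM machines")
        self.assertEqual([], get_desktops(self.conn))

if __name__ == "__main__":
    unittest.main()
```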

Find redundant pages, files, stored procedures in legacy applications

I have several horrors of old ASP web applications. Does anyone have any easy ways to find what scripts, pages, and stored procedures are no longer needed? (besides the stuff in "old___code", "delete_this", etc ;-)
Chances are that if a stored proc won't run, it isn't being used, because nobody ever bothered to update it when something else changed. Table columns that are null for every single record are probably not being used either.
If you have your procs and database objects in source control (and if you don't, why don't you?), you may be able to search through and find what other code was moved to production alongside them, which should give you a clue as to what might call them. You will also be able to see who touched a proc last, and that person might know whether it is still needed.
I generally approach this by first listing all the procs (you can get this from the system tables) and then marking off the ones I know are being used. Profiler can help you here, as you can see which are commonly being called. (But don't assume that because Profiler didn't show a proc it isn't being used; that just gives you the list of ones to research.) This makes the set that needs to be researched much smaller. Depending on your naming convention, it might be relatively easy to see what part of the code should use them. When researching, don't forget that procs are called in places other than the application, so you will need to check through jobs, DTS or SSIS packages, SSRS reports, other applications, triggers, etc. to be sure something is not being used.
Once you have identified a list of ones you don't think you need, share it with the rest of the development staff and ask if anyone knows whether a proc is needed. You'll probably get a couple more taken off the list this way that are used for something specialized. Then, when you have the list, change the names to some convention that identifies them as candidates for deletion. At the same time, set a deletion date (how far out that date is depends on how often something might be called: if it is called something like AnnualXYZReport, make that date a year out). If no one complains by the deletion date, delete the proc (of course, if it is in source control you can always get it back even then).
Once you have gone through the hell of identifying the bad ones, it is time to realize you need to train people that part of the development process is identifying procs that are no longer used and getting rid of them as part of any change to a section of code. Depending on code reuse, this may mean searching the code base to see whether some other part of it uses the proc and then doing the same thing as above: let everyone know it will be deleted on a given date, change the name so that any code referencing it will break, and then delete it on that date. Or you could keep a metadata table where you record candidates for deletion as soon as you know you have stopped using something, and send a report around once a month or so to see if anyone else needs them.
I can't think of any easy way to do this, it's just a matter of identifying what might not be used and slogging through.
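For the "search the code base" part of that slog, even a crude script beats doing it by hand. A hedged sketch in Python: given proc names pulled from the system tables, report which ones never appear anywhere in the source tree. The root path, extensions, and proc list are placeholders, and this only covers application code, not jobs, SSIS packages, or reports:

```python
# Rough helper: flag procs with no textual reference in the source tree.
import pathlib

procs = ["usp_GetOrders", "usp_AnnualXYZReport"]  # from the system tables
root = pathlib.Path("C:/src/legacy-app")

referenced = set()
for path in root.rglob("*"):
    if not path.is_file() or path.suffix.lower() not in {".asp", ".vbs", ".sql", ".cs"}:
        continue
    text = path.read_text(errors="ignore").lower()
    referenced.update(p for p in procs if p.lower() in text)

for proc in procs:
    status = "referenced" if proc in referenced else "candidate for deletion"
    print(f"{proc}: {status}")
```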
For SQL Server only, 3 options that I can think of:
modify the stored procs to log usage
check if code has no permissions set
run profiler
And of course, remove access or delete it and see who calls...
