I am returning a set of rows, each representing a desktop machine.
I am stumped on finding a way to unit test this. There aren't really any edge cases or criteria I can think of to test. It's not like share prices, where I might want to check that I am getting data which is indeed 5 months old. It's not like storing person details, where you could check that a certain length always works, or special characters, etc., or currency values in different currencies (£, $, etc.) stored as strings.
How would I test this sort of resultset?
Also, in testing the result set of a query, there are a few problems:
1) Testing that you have the same number of rows as when you run the query on the server is brittle, because someone might change the table data. Is this where a test server comes in, one that nobody changes except through change scripts?
2) Do you test that the dataset object is not null? So if it starts out null but is not null after the query executes, it is holding a value (this doesn't prove the data is correct, just that data has been retrieved).
Thanks
You can use a component like NBuilder to simulate your database. Since you control every aspect of the dataset it builds, you can test several aspects of the database interaction: the number of records your query returns, the range of values in some field, and so on. And because the dataset is always created with the arguments you choose, the data is always the same, so you can reproduce your tests completely decoupled from your database.
1 -
a) Never ever test against the production server, for any reason.
b) Tests should start from a known configuration, which you can achieve either with mock objects or with a test database (some might argue that unit tests should use mock objects and integration tests should use a test database).
As for test cases, start with a basic functionality test: put a "normal" row in and make sure you get it back. You'll appreciate having that test if you later refactor. Make sure the program responds correctly to columns being null or blank. Put the maximum and minimum values in all the DB fields and make sure the object fields you're storing them in can fit that range and resolution. Check duplicate records in the DB, and missing records. If you have production data, grab a snapshot of it to put in your test DB and make sure that loads correctly. Is there a value that chronically causes difficulties in other parts of the program? Check it here too. And once you release the code, add to the test list any values you find in production that break the system (regression testing).
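As a minimal sketch of that first round-trip test (written in Apex, the language used elsewhere on this page, since the asker's stack isn't stated; the pattern translates directly to NUnit or MSTest, and the Desktop_Machine__c object and its fields are made up for illustration):

@isTest
private class DesktopMachineQueryTest {
    static testMethod void returnsTheRowYouPutIn() {
        // Start from a known configuration: insert one "normal" row.
        Desktop_Machine__c m = new Desktop_Machine__c(Name = 'WS-001', RAM_GB__c = 16);
        insert m;
        List<Desktop_Machine__c> rows =
            [SELECT Name, RAM_GB__c FROM Desktop_Machine__c WHERE Id = :m.Id];
        // Assert more than "not null": check the row count and the field values.
        System.assertEquals(1, rows.size());
        System.assertEquals('WS-001', rows[0].Name);
        System.assertEquals(16, rows[0].RAM_GB__c);
    }
}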
I wish to create a generic component which can save the object name and field names, with old and new values, in a BigObject.
The brute-force approach is, on every update of each object, to get the field API names using describe and compare the old and new values of those fields. If a field was modified, insert a record into the new BigObject.
But this will consume a lot of CPU time, and I am looking for a more optimal way to handle it.
Any suggestions are appreciated.
Well, do you have any code written already? Maybe benchmark it and then see what you can optimise, instead of overdesigning it from the start... Keep it simple: write a test harness and then try to optimise (without breaking the unit tests).
Couple random ideas:
You'd be doing this in a trigger? Then your describe could happen only once. You don't need to describe every single field; you need only one describe operation, outside the trigger's main loop:
Set<String> fieldNames = Account.sObjectType.getDescribe().fields.getMap().keyset();
System.debug(fieldNames);
This will get you "only" the field names, but that's enough; you don't care whether they're picklists or dates or whatever. Use that with the generic sObject.get('fieldNameHere') and it's a good start.
or maybe skip describe altogether: sObject's getPopulatedFieldsAsMap() will give you a Map which you can easily iterate and compare (see the sketch after this list).
or JSON.serialize the old and new versions of the object, and if they aren't identical, you know what to do. I have no idea whether they'll always serialise with the same field order, though, so checking whether the maps are identical might be safer.
do you really need to hand-craft this field history tracking like that? You have 1M records of free storage, but it could explode really easily in a busier SF org, especially if you have workflows, processes, or other triggers that translate to multiple updates (= multiple trigger runs) in the same transaction. Perhaps normal field history tracking + Chatter feed tracking + even Salesforce Shield (it comes with 60 more tracked fields, I think) would be more sensible for your business needs.
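A minimal sketch of the getPopulatedFieldsAsMap() idea from the list above, assuming a hypothetical Field_Change__c object standing in for the asker's BigObject (a real Big Object would need Database.insertImmediate rather than a plain insert):

trigger AccountFieldAudit on Account (after update) {
    List<Field_Change__c> changes = new List<Field_Change__c>();
    for (SObject newRec : Trigger.new) {
        SObject oldRec = Trigger.oldMap.get(newRec.Id);
        // Union of the fields populated on either version of the record.
        Set<String> fieldNames = new Set<String>(newRec.getPopulatedFieldsAsMap().keySet());
        fieldNames.addAll(oldRec.getPopulatedFieldsAsMap().keySet());
        for (String f : fieldNames) {
            Object oldVal = oldRec.get(f);
            Object newVal = newRec.get(f);
            if (oldVal != newVal) {
                // You'd probably want to skip system fields like SystemModstamp here.
                changes.add(new Field_Change__c(
                    Record_Id__c  = newRec.Id,
                    Field_Name__c = f,
                    Old_Value__c  = String.valueOf(oldVal),
                    New_Value__c  = String.valueOf(newVal)));
            }
        }
    }
    if (!changes.isEmpty()) {
        insert changes;
    }
}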
I'm looking for a way to manually adjust TFS task start dates so that my burndown appears correct.
Essentially, the iteration has fixed start/end dates, and some user stories did not get filled out until halfway through the iteration.
This gave the burndown a bump in the road, so it looks like we are below target.
I have full access to the TFS database and am wondering what queries I would need to write to get my tasks backdated to the start of the iteration.
I have read somewhere that it is System.AuthorizedDate that controls the burndown chart.
Any help appreciated.
J
You are correct on System.AuthorizedDate being used.
You won't be able to change System.AuthorizedDate by means of the public API; it won't let you. And you cannot change System.AuthorizedDate by means of SQL update commands and remain in a supported state. Officially, Microsoft does not allow this, and you retain Microsoft's support only if the SQL changes were made under their guidance, such as through a support incident.
I doubt a support incident with Microsoft will yield the update query, as this isn't a defect, and as I explain below it could put you in a very bad place. Could you create a series of updates on the appropriate tables to backdate System.AuthorizedDate? Without doubt. It might even work, but I am not certain it would. The reason is that work items receive System.Id numbers sequentially as they are created. I do know that in version control the system expects a higher changeset number to have a later commit date (I can't recall the exact field name) than any lower changeset number, and it would not surprise me if there are similar expectations for work items. Changing that field via SQL might therefore produce errors or unexpected outcomes in various places; I can imagine a future upgrade or even an update simply bombing out and failing to complete. That's all hypothetical, though, because unless you want your environment in an unsupported state you would not change it via SQL.
Outside of creating your own burndown that evaluates the data differently, I am not aware of a way to meet your goal under those conditions.
I have a class which queries sales for last year and updates a field on another object.
How can I assert this value in my test class? The result would not depend only on my test data, as the environment could contain pre-existing records besides the ones I create.
Thanks
If you update your class to use the latest API version (as of Spring '12), you'll find that there are now restrictions in place so that test methods can't access any data except that which they've created themselves. Not only does this help massively in scenarios such as the one you've described (where you want to be sure the code uses specific data), but it also enforces best practice and means tests will always run properly when deploying to another environment.
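For example, a minimal sketch, assuming a hypothetical SalesRollup class that sums last year's opportunity amounts into a hypothetical Total_Sales__c field on Account:

@isTest
private class SalesRollupTest {
    static testMethod void testLastYearRollup() {
        // Under API v24.0 (Spring '12) and later, the test sees only the records
        // it creates, so the assertion is deterministic regardless of org data.
        Account a = new Account(Name = 'Test Account');
        insert a;
        insert new Opportunity(Name = 'Last year deal', AccountId = a.Id,
                               StageName = 'Closed Won', Amount = 100,
                               CloseDate = Date.today().addYears(-1));
        Test.startTest();
        SalesRollup.updateTotals(new List<Id>{ a.Id });  // hypothetical method
        Test.stopTest();
        a = [SELECT Total_Sales__c FROM Account WHERE Id = :a.Id];
        System.assertEquals(100, a.Total_Sales__c);
    }
}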
I'm working on a data warehouse and I'm trying to figure out how best to verify that data from our data cleansing (normalized) database makes it into our data marts correctly. I've done some searches, but the results so far talk more about ensuring things like constraints are in place and that you need to do data validation during the ETL process (e.g. dates are valid). The dimensions were pretty easy, as I could either leverage the primary key or write a very simple and verifiable query to get the data. The fact tables are more complex.
Any thoughts? We're trying to make this very easy for a subject matter expert to run a couple of queries, see some data from both the data cleansing database and the data marts, and visually compare the two to ensure they are correct.
You test your fact table loads by implementing a simplified, pared-down subset of the same data manipulation elsewhere, and comparing the results.
You calculate the same totals, counts, or other figures at least twice: once from the fact table itself, after it has finished loading, and once from some other source, such as:
the source data directly, controlling for all the scrubbing steps in between source and fact
a source system report that is known to be correct
etc.
If you are doing this in the database, you could write each test as a query that returns no records if everything is correct. Any records that do get returned are exceptions: "count of x by (y,z) does not match".
See this excellent post by ConcernedOfTunbridgeWells for more recommendations.
Although it has some drawbacks, and potential problems if you do a lot of cleansing or transforming, I've found you can round-trip an input file by re-generating the input file from the star schema(s) and then simply comparing the input file to the output file. It might require some massaging to make them match (e.g. one is left-padded, the other right-padded).
Typically, I had a program which used the same layout the ETL used and did a compare, ignoring alignment within a field. The files might also have to be sorted; I used a command-line sort for that.
If your ETL transforms incorrectly on the way in and your re-generation makes the same mistake on the way out, the two errors cancel, so this method won't show every problem in the DW, and I wouldn't claim it has complete coverage. Still, it's a pretty good first whack at a regression test for each load.
I am currently developing a small project of mine that dynamically generates SQL calls for use by other software. The SQL calls are not known beforehand, and therefore I would like to be able to unit test the object that generates the SQL.
Do you have any ideas about the best approach for doing this? Bear in mind that there is no way to know all the possible SQL calls that might be generated.
Currently the only idea I have is to create test cases that match the SQL accepted by the db using regex and make sure that the SQL will compile, but this does not ensure that the call returns the expected result.
Edit: adding more info.
My project is an extension of Boo that allows the developer to tag his properties with a set of attributes. These attributes are used to specify how the developer wants to store the object in the DB. For example:
# This attribute tells the Boo compiler extension that you want to
# store the object in a MySQL db. The Boo compiler extension will
# make sure that you meet the requirements.
[Storable(MySQL)]
class MyObject():
    # Tells the compiler that name is the PK
    [PrimaryKey(Size = 25)]
    [Property(Name)]
    private name as String

    [TableColumn(Size = 25)]
    [Property(Surname)]
    private surname as String

    [TableColumn()]
    [Property(Age)]
    private age as int
The great idea is that the generated code won't need to use reflection; instead it will be added to the class at compile time. Yes, compilation will take longer, but there won't be a need to use reflection at all. I currently have the code working: the required methods that return the SQL are generated at compile time, added to the object, and can be called. But I need to test that the generated SQL is correct :P
The whole point of unit testing is that you know the answer to compare the code's results to. You have to find a way to know the SQL calls beforehand.
To be honest, as other answerers have suggested, your best approach is to come up with some expected results, and essentially hard-code those in your unit tests. Then you can run your code, obtain the result, and compare against the hard-coded expected value.
Maybe you can also assert on the actual SQL text generated, rather than executing it and comparing the results?
This seems like a chicken-and-egg situation: you aren't sure what the generator will spit out, and you have a moving target to test against (the real database). So you need to tie the loose ends down.
Create a small test database (for example with HSQLDB or Derby). This database should use the same features as the real one, but don't make a copy! You will want to understand what each thing in the test database is for and why it is there, so invest some time to come up with some reasonable test cases. Use your code generator against this (static) test database, save the results as fixed strings in your test cases. Start with a single feature. Don't try to build the perfect test database as step #1. You will get there.
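As a sketch of the "fixed strings" idea, in Apex-style test syntax to match the code elsewhere on this page (your generator is Boo/.NET, so the test framework will differ; MyObjectSql.selectByKey() is a hypothetical stand-in for a generated method, and the expected SQL is merely what the MyObject class above might produce):

@isTest
private class MyObjectSqlTest {
    static testMethod void selectByPrimaryKeyIsStable() {
        // The expected SQL is a fixed string captured from a known-good run
        // against the static test database.
        System.assertEquals(
            'SELECT name, surname, age FROM MyObject WHERE name = ?',
            MyObjectSql.selectByKey());  // hypothetical generated method
    }
}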
When you change the code generator, run the tests. They should only break in the expected places. If you find a bug, replicate the feature in question in your test database. Create a new test, check the result. Does it look correct? If you can see the error, fix the expected output in the test. After that, fix the generator so it will create the correct result. Close the bug and move on.
This way, you can build more and more safe ground in the swamp. Do something you know, check whether it works (ignore everything else). If you are satisfied, move on. Don't try to tackle all the problems at once; one step at a time. Tests don't forget, so you can forget about everything that is already tested and concentrate on the next feature. The tests will make sure that your stable foundation keeps growing until you can erect your skyscraper on it.
Regarding the regex idea: the grammar of SQL is not regular, but context-free; nested subexpressions are the key to realizing this. You may want to write a context-free parser for SQL to check for syntax errors.
But ask yourself: what is it you want to test for? What are your correctness criteria?
If you are generating the code, why not also generate the tests?
Short of that, I would test/debug generated code in the same way you would test/debug any other code without unit tests (i.e. by reading it, running it and/or having it reviewed by others).
You don't have to test all cases. Make a collection of example calls, being sure to include as many of the difficult aspects the function will have to handle as possible, then check whether the generated code is correct.
I would have a suite of tests that put in a known input and check that the generated SQL is as expected.
You're never going to be able to write a test for every scenario, but if you write enough to cover at least the most common patterns, you can be fairly confident your generator is working as expected.
If you find it doesn't work in a specific scenario, write another test for that scenario and fix it.