I'm using NUnit + WebDriver + C#. The setup class has the following structure:
[TestFixture(typeof(InternetExplorerDriver))]
[TestFixture(typeof(ChromeDriver))]
public partial class SetupBase<TWebDriver> where TWebDriver : IWebDriver, new()
{
    public IWebDriver _driver;

    [OneTimeSetUp]
    public void OneTimeSetUp()
    {
        Init(); // defined in another part of this partial class
    }
}
How can I set the names of the tests to include the method name, arguments, and the name of the browser?
I tried setting it via capabilities, but that didn't help:
ChromeOptions options = new ChromeOptions();
options.AddAdditionalCapability("Name", String.Format ("{0}_Chrome", TestContext.CurrentContext.Test.Name), true);
I also tried the following code, but I was not able to find a way to pass the driver type to NameAttribute:
public class NameAttribute : NUnitAttribute, IApplyToTest
{
    public NameAttribute(string name)
    {
        // Ideally this would be String.Format("{0} {1}", name, driverType),
        // but I can't find a way to get the driver type here.
        Name = name;
    }

    public string Name { get; set; }

    public void ApplyToTest(Test test)
    {
        test.Properties.Add("Name", Name);
    }
}
Can you help me, please? Maybe I need to update the base class structure somehow?
This is how I use it in tests:
public class _001_General<TWebDriver> : SetupBase<TWebDriver> where TWebDriver : IWebDriver, new()
{
    [OneTimeSetUp]
    public void OneTimeSetupTest()
    {
        //somework
    }

    [Test]
    public void Test()
    {
        //somework
    }
}
The SetupBase class also contains helper functions that are used in the tests.
In NUnit, test cases, test methods, test fixtures and generic test fixture classes are all "tests" even though we sometimes talk loosely about "tests" as meaning test cases.
In your example, the following names are created:
_001_General<TWebDriver> (or maybe _001_General<>)
    _001_General<InternetExplorerDriver>
        Test
    _001_General<ChromeDriver>
        Test
Tests also have "fullnames", similar to those for a type or method. For example:
_001_General<ChromeDriver>.Test
(Note: the fullname would also include a namespace, which I haven't shown.)
If you are using a runner that displays fullnames, like the NUnit 3 Console Runner, then there is no problem. So, I assume you are not. :-)
Some runners, like the NUnit 3 Visual Studio Test Adapter, use simple test case names. So you would end up with a bunch of tests displayed with the name "Test".
Is that your problem? If so, this is not much of an answer. :-) However, I'll leave it as partial and add to it after hearing what runner you want to use and what you would like to see displayed in it.
UPDATE:
Based on your comment, what you really want to see is the test FullName - just as it is displayed by the NUnit 3 Console runner that TC runs for you. You'd like to see them in the VS IDE using the NUnit 3 VS Adapter.
Unfortunately, you can't right now. :-) More on that below. Meanwhile, here are some workarounds:
Use the console runner on your desktop. It's not as visual but works quite well. It's how I frequently work. Steps:
Install the console runner. I recommend using Chocolatey to install it globally, allowing you to use it with all your projects.
Set up your project to run the console runner with any options you like.
Make sure you use an external console window so you get the color display options that make the console runner easier to use.
Size your windows so you can see everything (if you have enough screen space) or just let the console window pop up on top of VS.
Try to trick VS by setting the test name in a way that includes the driver parameters. That's what you are already doing, and it sounds as if you have already gotten almost all you can out of this option, i.e. the class name without class parameters. You could take it a step further by creating separate classes for each driver. This would multiply the number of classes you have, obviously, but it doesn't have to duplicate the code: you could inherit from the generic classes to create a new class that is nothing but a header in each place where it's needed. Like...
public class TestXYZDriver : TestDriver ...
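For example, a minimal sketch of that idea applied to the fixtures in your question (the class names are made up for illustration; note that with this approach the [TestFixture(typeof(...))] attributes would come off SetupBase, since each concrete subclass now picks its own driver, and you would typically mark the generic classes abstract so NUnit only runs the concrete ones):

// Hypothetical non-generic subclasses, one per driver, so the fixture name
// (and therefore every test's displayed name) carries the browser.
[TestFixture]
public class _001_General_Chrome : _001_General<ChromeDriver> { }

[TestFixture]
public class _001_General_InternetExplorer : _001_General<InternetExplorerDriver> { }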
This might be a lot of work, so it really depends on how important it is to you to get visual results that include fixture parameters right now.
For the future, you could request that the NUnit 3 Adapter project give an option of listing tests by their full names. I haven't worked on that project for a few years, so I'm not sure if it's actually possible. It may not be entirely in the control of the adapter, since VS controls what is displayed.
Please see the JSON below, which is returned when I ask the question "What are the basic methods of Optional?". This is not the best-matching answer that is returned in the Retrieve and Rank tooling (pasted below this JSON snippet). Can you please help me understand why this is happening?
{
"context": {
"conversation_id": "f87c08f1-2122-4d44-b0bc-d05cd458162d",
"system": {
"dialog_stack": "[root]",
"dialog_turn_counter": "1.0",
"dialog_request_counter": "1.0"
}
},
"inquiryText": "what are the basic methods of Optional",
"responseText": "Going to get answers from Devoxx corpus, ",
"resources": [
{
"id": "\"50a305ba-f8fd-4470-afde-884df5170e29\"",
"score": "1.5568095",
"title": "\"no-title\"",
"body": "\"Voxxed JUnit 5 – The Basics Nicolai Parlog 5 months ago Categories: Methodology Tags: java , JUnit , JUnit 5 19 SHARES Facebook Twitter Reddit Google Mail Linkedin Digg Stumbleupon Buffer Last time, we set up JUnit 5 to be able to write tests. So let’s do it! Overview This post is part of a series about JUnit 5: Setup Basics Architecture Conditions Injection … Most of what you will read here and more can be found in the emerging JUnit 5 user guide . Note that it is based on an alpha version and hence subject to change.Indeed, we are encouraged to open issues or pull requests so that JUnit 5 can improve further. Please make use of this opportunity! It is our chance to help JUnit help us, so if something you see here could be improved, make sure to take it upeam .This post will get updated when it becomes necessary. The code samples I show here can be found on GitHub . Philosophy The new architecture, which we will discuss another time, is aimed at extensibility. It is possible that someday very alien (at least to us run-of-the-mill Java devs) testing techniques will be possible with JUnit 5. But for now the basics are very similar to the current version 4. JUnit 5’s surface undergoes a deliberately incremental improvement and developers should feel right at home. At least I do and I think you will, too: Basic Lifecycle And Features class Lifecycle { #BeforeAll static void initializeExternalResources() { System.out.println(\\\"Initializing external resources...\\\"); } #BeforeEach void initializeMockObjects() { System.out.println(\\\"Initializing mock objects...\\\"); } #Test void someTest() { System.out.println(\\\"Running some test...\\\"); assertTrue(true); } #Test void otherTest() { assumeTrue(true); System.out.println(\\\"Running another test...\\\"); assertNotEquals(1, 42, \\\"Why wouldn't these be the same?\\\"); } #Test #Disabled void disabledTest() { System.exit(1); } #AfterEach void tearDown() { System.out.println(\\\"Tearing down...\\\"); } #AfterAll static void freeExternalResources() { System.out.println(\\\"Freeing external resources...\\\"); } } See? No big surprises. The Basics Of JUnit 5 Visibility The most obvious change is that test classes and methods do not have to be public anymore. Package visibility suffices but private does not. I think this is a sensible choice and in line with how we intuit the different visibility modifiers. Great! I’d say, less letters to type but you haven’t been doing that manually anyways, right? Still less boilerplate to ignore while scrolling through a test class. Test Lifecycle #Test The most basic JUnit annotation is #Test, which marks methods that are to be run as tests. It is virtually unchanged, although it no longer takes optional arguments. Expected exceptions can now be verified via assertions but as far as I know there is not yet a replacement for timeouts . JUnit 5 creates a new test instance for each test method (same as JUnit 4). Before And After You might want to run code to set up and tear down your tests. There are four method annotations to help you do that: #BeforeAll Executed once; runs before the tests and methods marked with #BeforeEach. #BeforeEach Executed before each test. #AfterEach Executed after each test. #AfterAll Executed once; runs after all tests and methods marked with #AfterEach. Because a new instance is created for each test, there is no obvious instance on which to call the #BeforeAll/ #AfterAll methods, so they have to be static. 
The order in which different methods annotated with the same annotation are executed is undefined. As far as I can tell the same is true for inherited methods. Whether it should be possible to define an order is currently being discussed . Except in name, these annotations work exactly like in JUnit 4. While not uncommon , I am not convinced of the names, though. See this issue for details. Disabling Tests It’s Friday afternoon and you just want to go home? No problem, just slap#Disabled on the test (optionally giving a reason) and run. A Disabled Test #Test #Disabled(\\\"Y U No Pass?!\\\") void failingTest() { assertTrue(false); } Test Class Lifecycle Compared to the prototype it is interesting to note that the test class lifecycle didn’t make it into the alpha version. It would run all tests on the same instance of the test class, thus allowing the tests to interact with each other by mutating state. As I already wrote while discussing the prototype: I think this is a typical case of a feature that is harmful in 99% of the cases but indispensable in the other 1%. Considering the very real risk of horrible inter-test-dependencies I’d say it was a good thing that it was taken out in its original form. But the JUnit team is discussing ways to bring it back in with a different name and added semantics. This would make its use very deliberate. What do you think? Assertions If #Test, #Before..., and #After... are a test suite’s skeleton, assertions are its heart. After the instance under test was prepared and the functionality to test was executed on it, assertions make sure that the desired properties hold. If they don’t, they fail the running test. Classic Classic assertions either check a property of a single instance (e.g. that it is not null) or do some kind of comparison (e.g. that two instances are equal). In both cases they optionally take a message as a last parameter, which is shown when the assertion fails. If constructing the message is expensive, it can be specified as a lambda expression, so construction is delayed until the message is actually required. Classic Assertions #Test void assertWithBoolean() { assertTrue(true); assertTrue(this::truism); assertFalse(false, () -> \\\"Really \\\" + \\\"expensive \\\" + \\\"message\\\" + \\\".\\\"); } boolean truism() { return true; } #Test void assertWithComparison() { List expected = asList(\\\"element\\\"); List actual = new LinkedList<>(expected); assertEquals(expected, actual); assertEquals(expected, actual, \\\"Should be equal.\\\"); assertEquals(expected, actual, () -> \\\"Should \\\" + \\\"be \\\" + \\\"equal.\\\"); assertNotSame(expected, actual, \\\"Obviously not the same instance.\\\"); } As you can see JUnit 5 doesn’t change much here. The names are the same as before and comparative assertions still take a pair of an expected and an actual value (in that order). That the expected-actual order is so critical in understanding the test’s failure message and intention, but can be mixed up so easily is a big blind spot. There’s nothing much to do, though, except to create a new assertion framework. Considering big players like Hamcrest (ugh!) or AssertJ (yeah!), this would not have been a sensible way to invest the limited time. Hence the goal was to keep the assertions focused and effort-free. New is that failure message come last. I like it because it keeps the eye on the ball, i.e. the property being asserted. As a nod to Java 8, Boolean assertions now accept suppliers , which is a nice detail. 
Extended Aside from the classical assertions that check specific properties, there are a couple of others. The first is not even a real assertion, it just fails the test with a failure message. 'fail' #Test void failTheTest() { fail(\\\"epicly\\\"); } Then we have assertAll, which takes a variable number of assertions and tests them all before reporting which failed (if any). #Test void assertAllProperties() { Address address = new Address(\\\"New City\\\", \\\"Some Street\\\", \\\"No\\\"); assertAll(\\\"address\\\", () -> assertEquals(\\\"Neustadt\\\", address.city), () -> assertEquals(\\\"Irgendeinestraße\\\", address.street), () -> assertEquals(\\\"Nr\\\", address.number) ); } Failure Message for ‘AssertALL’ org.opentest4j.MultipleFailuresError: address (3 failures) expected: but was: expected: but was: expected: but was: This is great to check a number of related properties and get values for all of them as opposed to the common behavior where the test reports the first one that failed and you never know the other values. Finally we have assertThrows and expectThrows. Both fail the test if the given method does not throw the specified exception. The latter also returns the exceptions so it can be used for further verifications, e.g. asserting that the message contains certain information. #Test void assertExceptions() { assertThrows(Exception.class, this::throwing); Exception exception = expectThrows(Exception.class, this::throwing); assertEquals(\\\"Because I can!\\\", exception.getMessage()); } Assumptions Assumptions allow to only run tests if certain conditions are as expected. This can be used to reduce the run time and verbosity of test suites, especially in the failure case. #Test void exitIfFalseIsTrue() { assumeTrue(false); System.exit(1); } #Test void exitIfTrueIsFalse() { assumeFalse(this::truism); System.exit(1); } private boolean truism() { return true; } #Test void exitIfNullEqualsString() { assumingThat( \\\"null\\\".equals(null), () -> System.exit(1) ); } Assumptions can either be used to abort tests whose preconditions are not met or to execute (parts of) a test only if a condition holds. The main difference is that aborted tests are reported as disabled, whereas a test that was empty because a condition did not hold is plain green. Nesting Tests JUnit 5 makes it near effortless to nest test classes. Simply annotate inner classes with #Nested and all test methods in there will be executed as well: package org.codefx.demo.junit5;// NOT_PUBLISHED import org.junit.gen5.api.BeforeEach; import org.junit.gen5.api.Nested; import org.junit.gen5.api.Test; import static org.junit.gen5.api.Assertions.assertEquals; import static org.junit.gen5.api.Assertions.assertTrue; class Nest { int count = Integer.MIN_VALUE; #BeforeEach void setCountToZero() { count = 0; } #Test void countIsZero() { assertEquals(0, count); } #Nested class CountGreaterZero { #BeforeEach void increaseCount() { count++; } #Test void countIsGreaterZero() { assertTrue(count > 0); } #Nested class CountMuchGreaterZero { #BeforeEach void increaseCount() { count += Integer.MAX_VALUE / 2; } #Test void countIsLarge() { assertTrue(count > Integer.MAX_VALUE / 2); } } } } As you can see, #BeforeEach (and #AfterEach) work here as well. Although currently not documented the initializations are executed outside-in. This allows to incrementally build a context for the inner tests. For nested tests to have access to the outer test class’ fields, the nested class must not be static. 
Unfortunately this forbids the use of static methods so #BeforeAll and#AfterAll can not be used in that scenario. ( Or can they? ) Maybe you’re asking yourself what this is good for. I use nested test classes to inherit interface tests , others to keep their test classes small and focused . The latter is also demonstrated by the more elaborate example commonly given by the JUnit team , which tests a stack: class TestingAStack { Stack stack; boolean isRun = false; #Test void isInstantiatedWithNew() { new Stack(); } #Nested class WhenNew { #BeforeEach void init() { stack = new Stack(); } // some tests on 'stack', which is empty #Nested class AfterPushing { String anElement = \\\"an element\\\"; #BeforeEach void init() { stack.push(anElement); } // some tests on 'stack', which has one element... } } } In this example the state is successively changed and a number of tests are executed for each scenario. Naming Tests JUnit 5 comes with an annotation #DisplayName, which gives developers the possibility to give more easily readable names to their test classes and methods. With it, the stack example which looks as follows: #DisplayName(\\\"A stack\\\") class TestingAStack { #Test #DisplayName(\\\"is instantiated with new Stack()\\\") void isInstantiatedWithNew() { /*…*/ } #Nested #DisplayName(\\\"when new\\\") class WhenNew { #Test #DisplayName(\\\"is empty\\\") void isEmpty() { /*…*/ } #Test #DisplayName(\\\"throws EmptyStackException when popped\\\") void throwsExceptionWhenPopped() { /*…*/ } #Test #DisplayName(\\\"throws EmptyStackException when peeked\\\") void throwsExceptionWhenPeeked() { /*…*/ } #Nested #DisplayName(\\\"after pushing an element\\\") class AfterPushing { #Test #DisplayName(\\\"it is no longer empty\\\") void isEmpty() { /*…*/ } #Test #DisplayName(\\\"returns the element when popped and is empty\\\") void returnElementWhenPopped() { /*…*/ } #Test #DisplayName( \\\"returns the element when peeked but remains not empty\\\") void returnElementWhenPeeked(){ /*…*/ } } } } This creates nicely readable output and should bring joy to the heart of BDD ‘ers! Reflection That’s it, you made it! We rushed through the basics of how to use JUnit 5 and now you know all you need to write plain tests: How to annotate the lifecycle methods (with #[Before|After][All|Each]) and the test methods themselves ( #Test), how to nest ( #Nested) and name ( #DisplayName) tests and how assertions and assumptions work (much like before). But wait, there’s more! We didn’t yet talk about conditional execution of tests methods, the very cool parameter injection, the extension mechanism, or the project’s architecture. And we won’t right now because we will take a short break from JUnit 5 and come back to it in about a month. Stay tuned! Window size: x Viewport size: x\"",
"docName": "\"JUnit 5 – The Basics - Voxxed.htm\""
},
{
"id": "\"0054b4e9-6b55-420e-84bc-8f31c79a949f\"",
"score": "1.2038735",
"title": "\"By Stefan Bulzan\"",
"body": "\"With the advent of lambdas in Java we now have a new tool to better design our code. Of course, the first step is using streams, method references and other neat features introduced in Java 8. Going forward I think the next step is to revisit the well established Design Patterns and see them through the functional programming lenses. For this purpose I’ll take the Decorator Pattern and implement it using lambdas. We’ll take an easy and delicious example of the Decorator Pattern: adding toppings to pizza. Here is the standard implementation as suggested by GoF: First we have the interface that defines our component: public interface Pizza { String bakePizza(); } We have a concrete component: public class BasicPizza implements Pizza { #Override public String bakePizza() { return \\\"Basic Pizza\\\"; } } We decide that we have to decorate our component in different ways. We go with Decorator Pattern. This is the abstract decorator: public abstract class PizzaDecorator implements Pizza { private final Pizza pizza; protected PizzaDecorator(Pizza pizza) { this.pizza = pizza; } #Override public String bakePizza() { return pizza.bakePizza(); } } We provide some concrete decorators for the component: public class ChickenTikkaPizza extends PizzaDecorator { protected ChickenTikkaPizza(Pizza pizza) { super(pizza); } #Override public String bakePizza() { return super.bakePizza() + \\\" with chicken topping\\\"; } } public class ProsciuttoPizza extends PizzaDecorator { protected ProsciuttoPizza(Pizza pizza) { super(pizza); } #Override public String bakePizza() { return super.bakePizza() + \\\" with prosciutto\\\"; } } And this is the way to use the new structure: Pizza pizza = new ChickenTikkaPizza(new BasicPizza()); String finishedPizza = pizza.bakePizza(); //Basic Pizza with chicken topping pizza = new ChickenTikkaPizza(new ProsciuttoPizza(new BasicPizza())); finishedPizza = pizza.bakePizza(); //Basic Pizza with prosciutto with chicken topping We can see that this can get very messy, and it did get very messy if we think about how we handle buffered readers in Java: new DataInputStream(new BufferedInputStream(new FileInputStream(new File(\\\"myfile.txt\\\")))) Of course, you can split that in multiple lines, but that won’t solve the messiness, it will just spread it. Now lets see how we can do the same thing using lambdas. We start with the same basic component objects: public interface Pizza { String bakePizza(); } public class BasicPizza implements Pizza { #Override public String bakePizza() { return \\\"Basic Pizza\\\"; } } But now instead of declaring an abstract class that will provide the template for decorations, we will create the decorator that asks the user for functions that will decorate the component. public class PizzaDecorator { private final Function toppings; private PizzaDecorator(Function... desiredToppings) { this.toppings = Stream.of(desiredToppings) .reduce(Function.identity(), Function::andThen); } public static String bakePizza(Pizza pizza, Function... desiredToppings) { return new PizzaDecorator(desiredToppings).bakePizza(pizza); } private String bakePizza(Pizza pizza) { return this.toppings.apply(pizza).bakePizza(); } } There is this line that constructs the chain of decorations to be applied: Stream.of(desiredToppings).reduce(identity(), Function::andThen); This line of code will take your decorations (which are of Function type) and chain them using andThen. 
This is the same as… (currentToppings, nextTopping) -> currentToppings.andThen(nextTopping) And it makes sure that the functions are called subsequently in the order you provided. Also, Function.identity() is translated to elem -> elem lambda expression. OK, now where will we define our decorations? Well, you can add them as static methods in PizzaDecorator or even in the interface: public interface Pizza { String bakePizza(); static Pizza withChickenTikka(Pizza pizza) { return new Pizza() { #Override public String bakePizza() { return pizza.bakePizza() + \\\" with chicken\\\"; } }; } static Pizza withProsciutto(Pizza pizza) { return new Pizza() { #Override public String bakePizza() { return pizza.bakePizza() + \\\" with prosciutto\\\"; } }; } } And now, this is how this pattern gets to be used: String finishedPizza = PizzaDecorator.bakePizza(new BasicPizza(),Pizza::withChickenTikka, Pizza::withProsciutto); //And if you static import PizzaDecorator.bakePizza: String finishedPizza = bakePizza(new BasicPizza(),Pizza::withChickenTikka, Pizza::withProsciutto); As you can see, the code got more clear and more concise, and we didn’t use inheritance to build our decorators. This is just one of the many design patterns that can be improved using lambdas. There are more features that can be used to improve the rest of them like using partial application (currying) to implement Adapter Pattern. I hope I got you thinking about adopting a more functional programming approach to your development style.\"",
"docName": "\"Decorator Design Pattern Using Lambdas - Voxxed.htm\""
}
]
}
When you submit a query to R&R with a trained ranker id, it will take the responses from the retrieve side of the service, and use the ranker to sort them into order based on what the ranker has "learned" about relevance.
The number of rows you fetch from the retrieve service in the first place is crucial for this.
To take the extreme case as an example, if you fetch only one row, it doesn't matter what training you've given the ranker. It has one thing to sort into order. So it will return that one thing.
If you fetch a small number of rows, for example 3, you will retrieve those three rows, and then the ranker will sort them. You will always get the same three rows - the difference the ranker makes will be what order they come in.
If you fetch a very large number of rows, for example 100, the ranker has 100 results to sort into order, so the top answer from such a query may well be different to what the top answer would be if you'd only fetched the top few.
When comparing the top result from two different apps querying the R&R service, it's therefore essential to take the rows parameter into account.
The web tooling you included a screenshot of uses a rows parameter of 30. It's retrieving 30 rows, and then using the ranker you've selected to sort them into order and display the top results.
My guess is that your application is either setting rows to something different, or not setting it at all and using the default value of 10. If you set the rows parameter in your application to 30, matching what the web tool is doing, I would expect the results to then be consistent.
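For illustration, here is a rough sketch of what setting rows explicitly might look like from a C# client. The endpoint shape, IDs and credentials below are placeholders based on the standard R&R fcselect handler, so adapt them to whatever your application already calls:

// Rough sketch only: substitute your own credentials, cluster id, collection name and ranker id.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class RetrieveAndRankQuery
{
    static async Task Main()
    {
        var client = new HttpClient();
        var credentials = Convert.ToBase64String(Encoding.UTF8.GetBytes("username:password"));
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", credentials);

        // rows=30 matches what the web tooling uses; leaving rows out means the default of 10.
        var url = "https://gateway.watsonplatform.net/retrieve-and-rank/api/v1"
                + "/solr_clusters/YOUR_CLUSTER_ID/solr/YOUR_COLLECTION/fcselect"
                + "?ranker_id=YOUR_RANKER_ID"
                + "&q=what+are+the+basic+methods+of+Optional"
                + "&rows=30&wt=json";

        Console.WriteLine(await client.GetStringAsync(url));
    }
}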
There is more background about the rows parameter here: https://www.ibm.com/watson/developercloud/doc/retrieve-rank/training_data.shtml
I use SpecFlow with Coded UI to create some automated functional tests for a WPF application. Test case execution is performed using MsTest and Visual Studio Premium 2012.
I have a lot of test cases. If I execute them one by one, everything is OK. If I put them all in an ordered test, I receive the following error:
Microsoft.VisualStudio.TestTools.UITest.Extension.UITestControlNotAvailableException: The following element is no longer available: Name [], ControlType [Custom], AutomationId [reags:LoadView_1], RuntimeId [7,1620,64780193] ---> System.Windows.Automation.ElementNotAvailableException: The following element is no longer available: Name [], ControlType [Window], AutomationId [UnitializedCB3702D1-14B6-4001-8BC7-CD4C22C18BE1], RuntimeId [42,1770052]
at Microsoft.VisualStudio.TestTools.UITest.Extension.Uia.UiaUtility.MapAndThrowException(SystemException e, IUITechnologyElement element)
at Microsoft.VisualStudio.TestTools.UITest.Extension.Uia.UiaElement.get_AutomationId()
at Microsoft.VisualStudio.TestTools.UITest.Extension.Uia.UiaElement.HasValidAutomationId()
at Microsoft.VisualStudio.TestTools.UITest.Extension.Uia.UiaElement.get_FriendlyName()
at Microsoft.VisualStudio.TestTools.UITest.Common.UIMap.UIMapUtil.FillPropertyFromUIElement(UIObject obj, IUITechnologyElement element)
at Microsoft.VisualStudio.TestTools.UITest.Common.UIMap.UIMapUtil.FillPropertyOfTopLevelElementFromUIElement(UIObject obj, IUITechnologyElement element)
at Microsoft.VisualStudio.TestTools.UITest.Common.UIMap.UIMapUtil.FillTopLevelElementFromUIElement(IUITechnologyElement element, TopLevelElement obj, Boolean stripBrowserWindowTitleSuffix)
at Microsoft.VisualStudio.TestTools.UITest.Common.UIMap.UIMapUtil.GetCompleteQueryId(UITechnologyElement pluginNode)
at Microsoft.VisualStudio.TestTools.UITesting.UITestControl.GetQueryIdForCaching()
at Microsoft.VisualStudio.TestTools.UITesting.UITestControl.<>c__DisplayClass6.<CacheQueryId>b__5()
at Microsoft.VisualStudio.TestTools.UITesting.CodedUITestMethodInvoker.InvokeMethod[T](Func`1 function, UITestControl control, Boolean firePlaybackErrorEvent, Boolean logAsAction)
at Microsoft.VisualStudio.TestTools.UITesting.UITestControl.CacheQueryId(String queryId)
at Microsoft.VisualStudio.TestTools.UITesting.UITestControl..ctor(IUITechnologyElement element, UITestControl searchContainer, String queryIdForRefetch)
at Microsoft.VisualStudio.TestTools.UITesting.TechnologyElementPropertyProvider.GetPropertyValue(UITestControl uiControl, String propertyName)
at Microsoft.VisualStudio.TestTools.UITesting.UITestPropertyProvider.TryGetPropertyFromTechnologyElement(UITestControl uiControl, String propertyName, Object& value)
at Microsoft.VisualStudio.TestTools.UITesting.PropertyProviderBase.GetPropertyValue(UITestControl uiControl, String propertyName)
at Microsoft.VisualStudio.TestTools.UITesting.UITestPropertyProvider.GetPropertyValueWrapper(UITestControl uiControl, String propertyName)
at Microsoft.VisualStudio.TestTools.UITesting.UITestControl.GetPropertyValuePrivate(String propertyName)
The first couple of errors were fixed using this hint, but I have some auto-generated steps, and in order to re-search the controls I would have to move the code around... a lot of unnecessary and annoying work.
Could you suggest another solution to fix this? Is there some trick for ordered tests? Or some nice clean-up methods for problems like this?
Thanks!
Here's what I did with a recent project.
First I created some CodedUI test methods as if SpecFlow didn't exist so I could keep those layers separate. Then I created step definition classes in C# that delegate to the coded UI test methods I created.
In a before scenario hook I created my UIMap instances (the classes generated by the CodedUI test generator) so each scenario had a fresh instance of my UIMap classes. You need this because object references in these classes are cached. Each new screen in your app is a whole new object tree that CodedUI must traverse.
Many times my step definitions just dive right into the CodedUI API to create custom searches, and I used the auto generated methods in my UIMap classes as a point of reference.
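As a rough illustration of that layering (the class and method names here are hypothetical, not from my actual project):

// The step definition stays thin and delegates to a CodedUI-level helper
// that was written as if SpecFlow didn't exist.
[Binding]
public class LoginSteps
{
    [When(@"I log in as ""(.*)""")]
    public void WhenILogInAs(string userName)
    {
        new LoginActions(User.General).LogInAs(userName);
    }
}

User.General is the UIMap loader described further down; LoginActions would be one of the plain CodedUI helper classes.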
A little elaboration on how I set up my test project.
About My Test Project
I created a new "Test" project in Visual Studio 2010, which references the following libraries:
Microsoft (probably comes with default Test project template)
Microsoft.VisualStudio.QualityTools.CodedUITestFramework
Microsoft.VisualStudio.QualityTools.UnitTestFramework
Microsoft.VisualStudio.TestTools.UITest.Common
Microsoft.VisualStudio.TestTools.UITest.Extension
Microsoft.VisualStudio.TestTools.UITesting
UIAutomationTypes
NuGet Packages
AutoMapper
AutoMapper.Net4
SpecFlow.Assist.Dynamic
TechTalk.SpecFlow
Test Project Structure
This was my first stab at CodedUI Tests. I came from a Ruby on Rails background, and did a fair amount of reading online about implementing CodedUI Tests and SpecFlow tests. It's not a perfect setup, but it seems to be pretty maintainable for us.
Tests (Test project)
    Features/
        Bar.feature
        Foo.feature
    Regression/
        Screen1/
            TestsA.feature
            TestsB.feature
    StepDefinitions/
        CommonHooks.cs
        DataAssertionSteps.cs
        DataSteps.cs
        FormSteps.cs
        GeneralSteps.cs
        PresentationAssertionSteps.cs
        Screen1Steps.cs
        Screen2Steps.cs
    UI/
        FormMaps/
            Screen1FormMap.cs
            Screen2FormMap.cs
        UIMapLoader/
            User.cs
    UIMap.uitest (created by CodedUI test framework)
Models (C# Class Library Project)
    Entities/
        Blog.cs
        Comment.cs
        Post.cs
    Repositories/
        BlogRepository.cs
        CommentRepository.cs
        PostRepository.cs
    ViewModels/
        Screen1ViewModel.cs
        Screen2ViewModel.cs
Tests/Features
This folder contains all the SpecFlow feature files implementing the basic business rules, or acceptance tests. Simple screens got their own feature file, whereas screens with more complex business logic were broken into multiple feature files. I tried to keep these features easy to read for both business people and developers.
Tests/Regression
Because our web application was not architected in a manner that allows unit testing, all of our testing must be done through the UI. The Tests/Regression folder contains all the SpecFlow feature files for our full regression of the application. This includes the really granular tests, like typing too many characters into form fields, etc. These features weren't really meant as business documentation. They are only meant to prevent us from being woken up at 3 a.m. because of production problems. Why do these problems always happen at 3 a.m.? ...
Tests/StepDefinitions
The Tests/StepDefinitions folder contains all the SpecFlow step definition files. I broke these files down first into common steps, and then into steps pertaining to a particular screen in my application.
CommonHooks.cs -- Created by SpecFlow
[Binding]
public class CommonHooks
{
    [BeforeTestRun]
    public static void BeforeTestRun()
    {
        ...
    }

    [BeforeScenario]
    public void BeforeScenario()
    {
        User.General.OpenLauncher();
    }

    [AfterScenario]
    public void AfterScenario()
    {
        User.General.CloseBrowser();
        User.General = null;
    }
}
The BeforeScenario and AfterScenario methods are where I create and/or destroy instances of the CodedUI UIMap classes (More on that further down)
DataAssertionSteps.cs -- Step definitions asserting that data shows up, or doesn't show up in the database. These are all Then ... step definitions.
Scenario: Foo
Then a Foo should exist
In DataAssertionSteps.cs:
[Then(#"a Foo should exist")]
public void ThenAFooShouldExist()
{
// query the database for a record
// assert the record exists
}
DataSteps.cs -- Steps to seed the database with data, or remove data. These are all Given ... step definitions used to set up a scenario.
FormSteps.cs -- Step definitions for interacting with forms. These all tend to be When I ... steps
GeneralSteps.cs -- Really generic step definitions. Things like When I click the "Foo" link go here.
PresentationAssertionSteps.cs -- Generic steps asserting that the UI is behaving properly. Things like Then I should see the text "Foo" go here.
Screen1Steps.cs -- When I needed steps for a particular screen, I created a step definition file for that screen. For example, if I needed steps for the "Blog Post" screen, I created a file called BlogPostSteps.cs, which contained all those step definitions.
Tests/UI
The Tests/UI folder contains a bunch of custom-written C# classes that we used to map label text found in our *.feature files to the names of form controls. You might not need this layer, but we did. It makes it easier to refactor your test project if form control names change, especially for web projects, because the HTML form field names change based on the <asp /> containers in our ascx files.
Example class:
namespace Tests.UI.FormMaps
{
    public static class Screen1FormMap
    {
        public static IDictionary<string, string> Fields = new Dictionary<string, string>()
        {
            { "First Name", "UserControlA_PanelB_txtFirstName" },
            { ... },
            ...
        };
    }
}
Example Step:
When I enter "Joe" in the "First Name" textbox in the "Screen 1" form
Example Step Definition:
[When(#"I enter ""(.*)"" in the ""(.*)"" textbox in the ""(.*)"" form")]
public void WhenIEnterInTheTextboxInTheForm(string text, string labelText, string formName)
{
if (formName == "Screen 1")
{
// form control name: Screen1FormMap.Fields[labelText]
}
...
}
The step definition then used the Tests.UI.FormMaps.Screen1FormMap.Fields property to retrieve the form control name based on the label text in the *.feature files.
Tests.UI.FormMaps.Screen1FormMap.Fields["First Name"]
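For instance, inside the if block shown above, the lookup might feed a CodedUI search along these lines (a sketch only; _browser is assumed to be a BrowserWindow the step class already holds, and is not part of my actual code):

// Resolve the HTML input by the id taken from the form map, then type into it.
string controlName = Tests.UI.FormMaps.Screen1FormMap.Fields[labelText];

var textbox = new HtmlEdit(_browser);
textbox.SearchProperties[HtmlEdit.PropertyNames.Id] = controlName;
textbox.Text = text;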
Tests/UI/UIMapLoader/User.cs
The other thing inside this folder is the UI/UIMapLoader/User.cs file. This is a custom-written class providing easy access to all the UIMap classes generated by the CodedUI test framework.
namespace Tests.UI.UIMapLoader
{
    public static class User
    {
        private static UIMap _general;

        public static UIMap General
        {
            get { return _general ?? (_general = new UIMap()); }
            set { _general = value; }
        }
    }
}
That way the Step Definition classes can easily access the UI maps via:
User.General.SomeCodedUITestRecordedMethod(...);
You saw a reference to this class in the BeforeScenario and AfterScenario methods in the CommonHooks.cs file referenced above.
Models Project
This is just a class library to encompass the entities and repositories, allowing the test project to access the database. Nothing special here except the ViewModels directory. Some of the screens have complex relationships with data in the database, so I created a ViewModel class to allow my SpecFlow step definitions to easily seed the database with data for these screens.
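For example, a seeding step in DataSteps.cs might look roughly like this (the step text, the ToPost mapping and the repository call are invented for illustration; CreateInstance comes from TechTalk.SpecFlow.Assist):

[Given(@"the following blog post exists")]
public void GivenTheFollowingBlogPostExists(Table table)
{
    // Build the ViewModel straight from the Gherkin table.
    var viewModel = table.CreateInstance<Screen1ViewModel>();

    // How the ViewModel is persisted is up to you; a repository call is assumed here.
    new PostRepository().Add(viewModel.ToPost());
}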
I have a simple, standard repository that loads a composite entity from the database. It injects all the dependencies it needs to read the complete entity tree from a database through IDbConnection (which gives the repository access to IDbCommand, IDbTransaction and IDataReader), which I could mock.
public class SomeCompositionRootEntityRepository :
    IRepository<SomeCompositionRootEntity>
{
    public SomeCompositionRootEntityRepository(IDbConnection connection) { ... }

    public void Add(SomeCompositionRootEntity item) { ... }
    public bool Remove(SomeCompositionRootEntity item) { ... }
    public void Update(SomeCompositionRootEntity item) { ... }
    public SomeCompositionRootEntity GetById(object id) { ... }
    public bool Contains(object id) { ... }
}
The question is: how would I write a unit test for this in a good way? If I want to test that the repository has read the whole tree of objects, and has read it correctly, I would need to write a huge mock that records and verifies the read of each and every property of each and every object in the tree. Is this really the way to go?
Update:
I think I need to refactor my repository to break the repository functionality and the unit tests into smaller units. How could this be done?
I am sure I do not want to write unit test that involve reading and writing from and to an actual database.
The question is: What functionality do you want to test?
Do you want to test that your repository actually loads something? In this case I would write a few (!) tests that go through the database.
Or do you want to test the functionality inside the repository methods? In this case your mock approach would be appropriate.
Or do you want to make sure that the (trivial) methods of your repository aren't broken? In this case mocks and one test per method might be sufficient.
Just ask yourself what the tests should ensure, and design them toward that target!
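To make the mock approach a bit more concrete, here is a rough sketch using Moq with MSTest. The column names and the reader interaction are assumptions about your repository's internals, so treat it as a shape rather than a drop-in test:

[TestMethod]
public void GetById_ReadsTheRootEntityFromTheReader()
{
    // The reader returns exactly one row; the column names are invented for this sketch.
    var reader = new Mock<IDataReader>();
    reader.SetupSequence(r => r.Read()).Returns(true).Returns(false);
    reader.Setup(r => r["Id"]).Returns(42);
    reader.Setup(r => r["Name"]).Returns("root");

    var command = new Mock<IDbCommand>();
    command.Setup(c => c.ExecuteReader()).Returns(reader.Object);
    command.Setup(c => c.CreateParameter()).Returns(new Mock<IDbDataParameter>().Object);
    command.SetupGet(c => c.Parameters).Returns(new Mock<IDataParameterCollection>().Object);

    var connection = new Mock<IDbConnection>();
    connection.Setup(c => c.CreateCommand()).Returns(command.Object);

    var repository = new SomeCompositionRootEntityRepository(connection.Object);
    var entity = repository.GetById(42);

    // Verify a handful of representative properties rather than every value in the tree.
    Assert.IsNotNull(entity);
}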
I think I understand your question, so correct me if I'm wrong...
This is where you cross the line from unit to integration. The test makes sense (I've written these myself), but you're not really testing your repository; rather, you're testing that your entity (SomeCompositionRoot) maps correctly. It isn't inserting/updating, but it does involve a read from the database.
Depending on what ORM you use, you can do this a million different ways, and typically it's fairly easy.
::EDIT::
Like this for LINQ, for example...
[TestMethod]
public void ShouldMapBlahBlahCorrectly()
{
    CheckBasicMapping<BlahBlah>();
}

private T CheckBasicMapping<T>() where T : class
{
    var target = _testContext.GetTable<T>().FirstOrDefault();
    target.ShouldNotBeNull();
    return target;
}