Cucumber - testing principles vs speed - selenium-webdriver

After reading many articles, my understanding is that all Cucumber tests should be independent of each other and autonomous, so those are the rules I follow when automating my web app tests.
Let's say I am testing a web page that has multiple input fields.
Currently, for CRUD operations I have two types of scenarios:
Scenario: Check page displays correct data
Given: I populate DB with data
When: I open the page
Then: Page data should match with data from DB
Scenario: Update page data
Given: I populate DB with data
When: I open the page
And: I update each field with some new data
When: I press save button to save data
Then: Page data should match with data from DB
So in this case I have two types of scenarios: one that checks whether data is displayed properly, and another that updates the data and checks it as well. But because the step that populates the database takes a long time (1-3 seconds), I was thinking: why not combine these two types of scenarios into a single one and greatly cut execution time:
Scenario: Update page data
Given: I populate DB with data
When: I open the page
Then: Page data should match with data from DB
And: I update each field with some new data
When: I press save button to save data
Then: Page data should match with data from DB
As you can see, first I populate the database, then I check that it is properly displayed, next I modify it and check again. This way I have covered two CRUD operations (read and update) in a single scenario, but I believe it would be against the principles.

It's perfectly fine to combine two CRUD operations in one scenario if your tests are focused more on integration and end-to-end behaviour than on unit/component behaviour (which is probably the case here).
Of course you should always consider the balance between putting too much in one scenario and fragmenting a feature into a lot of scenarios. And of course the trade-off of asserting more than one thing in a scenario is that it potentially forces you to debug more when a scenario fails. So it's not about principles but rather a conscious choice that you may have to reconsider depending on the speed and stability of your application under test.

A couple of ideas I can share.
...
When: I ...
And: I ...
When: ...
...
can become
...
When: I ...
And: I ...
And: ...
Then: ...
Even better if you can abstract it into a declarative business function, which will allow you to see the forest and not get swamped by long end-to-end scenarios.
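For example (the step wording here is purely illustrative), the combined scenario above could be expressed at the business level as:
Scenario: Update page data
Given: the page is backed by existing data
When: I open the page
Then: the page shows the stored data
When: I change the details and save them
Then: the page shows the updated data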
It is good to think of your BDD journeys from the end-user perspective:
Given: I populate DB with data
is something that happens to the typical user very rarely, right? Unless you are covering some specific admin/dev case. If you are using it as a precondition, take a look at the xUnit Fixture Setup patterns. DB validations are a recommended consideration, just not at the topmost layer of your framework.
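For illustration only (the step wording is hypothetical), the rough Cucumber analogue of xUnit fixture setup is a Background or a Before hook, which keeps the DB plumbing out of each individual scenario:
Background:
Given: the DB is populated with test data
Scenario: Check page displays correct data
When: I open the page
Then: Page data should match with data from DB
Moving the population into a tagged Before hook in the step-definition layer removes it from the feature file entirely.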
And
greatly cutting execution time
can be achieved via parallel execution of your features/scenarios, not by cutting test scenarios. Again, the trade-off is in favor of meaningful scenarios.

Related

Seamless Integration with REST API

Many examples on the net show you how to use ng-repeat with in-memory data, but in my case I have a long table with infinite scroll that gets its data by sending requests to a REST API (scroll down - fetch some data, scroll down again - fetch some more data, etc.). It works, but I'm wondering how I can integrate that with filters?
Right now I have to call a specific method of the API service that makes a request based on the text in the "search" input box, and then the controller updates $scope.data.
Is it possible to build a custom filter that would do that? Then my view would be utterly decoupled from the service, and I could declaratively tell it how to group, order and filter data, regardless of whether it's in memory or comes from a remote server that can only serve a limited number of records at a time.
Also, later I'm going to need grouping and ordering as well. I'm so tempted to download the entire dataset and lock the parts of the app responsible for grouping, searching and ordering (until all data is on the client), but:
a) that dataset is huge (hundreds of thousands of records)
b) nobody wants to deal with cache invalidation headaches
c) doing so feels so damn wrong - you don't really expect me to 'keep' all that data in memory, right?
Can you guys point me to some open-source examples I can steal ideas from?
Basically, I need to build a service and filters that let me work with my "pageable" data coming from the API as if it were in-memory data.
Regardless of how you choose to solve it (there are many ways to do infinite scroll with Angular; here is one: http://binarymuse.github.io/ngInfiniteScroll/), in its latest beta version ng-repeat works really badly with large amounts of data - and so do filters. The reason is obvious: checking that much data for changes is a tough job. Moreover, ng-repeat by default will re-draw your complete list every time something changes.
There are many solutions you can explore in this area, here are the ones I found productive:
http://kamilkp.github.io/angular-vs-repeat/#?tab=8
http://www.williambrownstreet.net/blog/2013/07/angularjs-my-solution-to-the-ng-repeat-performance-problem/
https://github.com/allaud/quick-ng-repeat
You should also consider the following, which really helps with large amounts of data.
https://github.com/Pasvaz/bindonce
Updated
I guess you can't really control your server output, since filtering and ordering large amounts of data are better done on the server side.
I was pointing out the links above because, even though writing your own filters (and order-bys) is quite simple to do - http://jsfiddle.net/gdefpfqL/ (filter by some company name and then click the "Add More" button to add more items) - ordering is virtually impossible if you can't control the data coming from the server; the only option is getting it all, ordering it, and then lazy-loading it from the client's memory. So if each of your list items doesn't have many bindings by itself (as in the example I've added) and the list item is a fairly simple one (for instance, you simply present the results as plain text in <li>{{item.name}}</li>), then Angular's ng-repeat might work for you. In this case, filters will work as expected - say you filter by the searched text:
<li ng-repeat="item in items | filter:searchedText"></li>
Even for new items added after the user has searched for a text, it will still work, thanks to the magic of binding.

Short lived DbContext in WPF application reasonable?

In his book on DbContext, Rowan Miller shows how to use the DbSet.Local property to avoid 1) unnecessary roundtrips to the database and 2) passing collections (created with e.g. ToList()) around the application (page 24). I then tried to follow this approach. However, I noticed that from one using {} block to the next, the DbSet.Local property becomes empty:
ObservableCollection<Destination> destinationsList;
using (var context = new BAContext())
{
var query = from d in context.Destinations …;
query.Load();
destinationsList = context.Destinations.Local; //Nonzero here.
}
//Do stuff with destinationsList
using (var context = new BAContext())
{
//context.Destinations.Local zero here again;
//So no way of getting the in-memory data from the previous using- block here?
//Do I have to do another roundtrip to the database here to get the same data I wanted
//to cache locally???
}
Then, what is the point of page 24? How can I avoid passing my collections around if DbSet.Local is only usable inside the using block? Furthermore, how can I benefit from change tracking if these short-lived context instances don't hand over any cached data to each other under the hood? So, if the contexts should be short-lived to free resources such as connections, do I have to give up caching? I.e., I can't have both at the same time (short-lived connections but a long-lived cache)? Then my only option would be to store the results returned by the query in my own variables, which is exactly what is discouraged in the motivation on page 24?
I am developing a WPF application which may also become multi-tiered in the future, involving WCF. I know Julia has an example of this in her book, but I currently don't have access to it. I found several others on the web, e.g. http://msdn.microsoft.com/en-us/magazine/cc700340.aspx (the old ObjectContext, but good at explaining the inter-layer collaboration). There, a long-lived context is used (the disadvantages are mentioned, but no solution to them is provided).
It's not only the single Destinations.Local that gets lost; as you surely know, all other entities fetched by the query are, too.
[Edit]:
After some more reading in Julia Lerman's book, it seems to boil down to the fact that EF does not have second-level caching by default; with some (considerable, I think) effort, however, one can add third-party caching solutions, as described in the book and in various articles on MSDN, CodeProject, etc.
I would have appreciated it if this had been mentioned in the section about DbSet.Local in the DbContext book - that it is in fact a first-level cache which is destroyed when the using {} block ends (just my proposal to make it more transparent to readers). After the first reading I had the impression that DbSet.Local would always return the same reference (singleton-style) in the second using {} block as well, despite the new DbContext instance.
But I am still unsure whether the second-level cache is the way to go for my WPF application (Julia mentions the second-level cache in her article in the context of distributed applications). Or is the way to go to get the aggregate root instances (DDD, Eric Evans) of my domain model into memory with one or a few queries in a using {} block, dispose the DbContext and only hold on to the references to the aggregate instances, thereby avoiding a long-lived context? It would be great if you could help me with this decision.
http://msdn.microsoft.com/en-us/magazine/hh394143.aspx
http://www.codeproject.com/Articles/435142/Entity-Framework-Second-Level-Caching-with-DbConte
http://blog.3d-logic.com/2012/03/31/using-tracing-and-caching-provider-wrappers-with-codefirst/
The Local property provides a “local view of all Added, Unchanged, and Modified entities in this set”. Like all change tracking, it is specific to the context you are currently using.
The DbContext is a workspace for loading data and preparing changes.
If two users were to make changes at the same time, they must not see each other's changes before they are saved. Either of them may also discard their prepared changes, which would otherwise suddenly lead to problems for the other user as well.
A DbContext should indeed be short-lived, but it may live longer than "super short" when necessary. Also consider that you may not save resources by keeping it short-lived if you do not load and discard data but only add changes you are going to save. But it is not only about resources; it is also about the database state potentially changing while the DbContext is still active and has data loaded, which may be important to keep in mind for longer-living contexts.
If you do not yet know all the related changes you want to save to the database at once, then I suggest you do not use the DbContext to store your changes in memory, but rather a data structure in your own code.
You can of course use entity objects for that without an active DbContext. This makes sense if you do not have another appropriate data class for it and do not want to create one, or if you decide that preparing the changes in the entities makes more sense. You can then use DbSet.Attach to attach the entities to a DbContext for saving the changes when you are ready.

How to model Data Transfer Objects for different front ends?

I've run into a recurring problem for which I haven't found any good examples or patterns.
I have one core service that performs all the heavy database operations and sends the results to different front ends (HTML, Silverlight/Flash, web services, etc.).
One of the service operations is "GetDocuments", which provides a list of documents based on different filter criteria. If I only had one front end, I would package the result in a list of Document DTOs (data transfer objects) that just contain the data. However, different front ends need different amounts of "metadata". The simplest client just needs the document headline and a link reference. Another client wants a short text snippet of the document, another also wants a thumbnail, and a third wants the name of the author. It's basically up to the implementation of the GUI what needs to be displayed.
What's the best way to model this?
As a lot of different DTOs (Document, DocumentWithThumbnail, DocumentWithTextSnippet)
tends to become a lot of classes
As one DTO containing all the data, where the client choose what to display
Lots of unnecessary data sent
As one DTO where certain fields are populated based on what the client requested
Tends to become a very large class that needs to be extended over time
One DTO but with some kind of generic "Metadata" field containing requested metadata.
Or are there other options?
Since I want a high performance service, I need to think about both network load and caching strategies.
Does anyone have any good patterns or practices that might help me?
What I would do is give the front end the ability to request the presence of the wanted metadata ( say getDocument( WITH_THUMBNAILS | WITH_TEXT_SNIPPET ) )
Then this DTO is built with only this requested information.
Adding all the possible metadata is, as you said, unacceptable.
I would surely stay with one class defining all the possible accessors (getTitle(), getThumbnail()), and if possible have it return a placeholder when the thumbnail was not requested - something like "Image not available".
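A rough sketch of that flag-based idea in Python (DocumentParts, DocumentDto and the dict-shaped rows are illustrative assumptions, not an existing API):
from dataclasses import dataclass
from enum import Flag, auto
from typing import Optional

class DocumentParts(Flag):
    TEXT_SNIPPET = auto()
    THUMBNAIL = auto()
    AUTHOR = auto()

@dataclass
class DocumentDto:
    headline: str
    link: str
    text_snippet: Optional[str] = None   # only filled in when requested
    thumbnail_url: Optional[str] = None
    author: Optional[str] = None

def build_document_dtos(rows, parts: DocumentParts):
    # rows: whatever the data layer returned; assumed here to be dicts
    dtos = []
    for row in rows:
        dto = DocumentDto(headline=row["headline"], link=row["link"])
        if DocumentParts.TEXT_SNIPPET in parts:
            dto.text_snippet = row.get("snippet")
        if DocumentParts.THUMBNAIL in parts:
            dto.thumbnail_url = row.get("thumbnail_url")
        if DocumentParts.AUTHOR in parts:
            dto.author = row.get("author")
        dtos.append(dto)
    return dtos

# The simplest client asks for nothing extra: build_document_dtos(rows, DocumentParts(0))
# A richer one: build_document_dtos(rows, DocumentParts.THUMBNAIL | DocumentParts.TEXT_SNIPPET)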
If you want to model this like a pattern, take a look at the factory patterns.
Hope this helps you.
Is there any noticeable cost to creating a DTO that has all the data any of your views could need and using it everywhere? I would do that, especially since it insulates you from a requirement change down the line where one of the views needs data that another view already uses.
E.g. maybe your Silverlight/Flash view doesn't show the title itself because it's in the thumbnail now, but they decide they want to sort by it later.
To clarify, I do not necessarily think you need to pass down all of the data every time, but I think your DTO class should define all of it. Just don't fall into the pits of premature optimization or analysis paralysis. Do the simplest thing first, then justify added complexity. Throw it all in and profile it. If the performance is unacceptable, optimize and try again.

What are the best practices for database development with Delphi?

How can I use the RAD way productively (reusing code)? Any samples, existing libraries, basic CRUD generators?
How can I design the OOP way? Which design patterns should I use for connections, for abstracting the different engines/DB access layers (BDE, dbExpress, ADO), and for basic CRUD operations?
I have my own Delphi/MySQL framework that lets me add 'new screens' very rapidly. I won't share it, but I can describe the approach I take:
I use a tabbed interface with a TFrame based hierarchy. I create a tab and link a TFrame into it.
I take care of all the CRUD plumbing and concurrency controls using a standard MySQL stored procedure implementation: CustomerSEL, CustomerGET, CustomerUPD, CustomerDEL, etc.
My main form essentially contains a navbar panel and a panel containing a TPageControl.
An example of the classes in my hierarchy
TFrame
TMFrame - my derivation, with interface implementations capturing OnShow, OnHide, and some other particulars
--TWebBrowserFrame
--TDataAwareFrame
--TObjectEditFrame
--TCustomerEditFrame
--TOrderEditFrame
etc...
--TObjectListFrame
--TCustomerListFrame
etc...
and some dialogs..
TDialog
TMDialog
--TDataAwareDialog
--TObjectEditDialog
-- TContactEditDialog
etc..
--TObjectSelectDialog
--TContactSelectDialog
etc...
When I add a new object to manage - it could be a new attribute of customers; let's say we want to track which vehicles a customer owns - I:
create table CustomerVehicles
I run my special sproc generator that creates my SEL, GET, UPD, DEL
test those...
Derive from the base classes I mentioned above, drop some controls. Add a tab to the TCustomerEdit.
Delphi has always had the DataSet as the abstraction layer; expose this to your GUI via DataSources. Add the dataset to the customer data module and "register" it - my own custom function in my derived data module class, TMDataModule.
Security control is similarly taken care of in the framework: I 'register' components that require a security flag to be visible or enabled.
I can usually add a new object, build the sprocs, add the maintenance screens within an hour.
Of course, that is usually just the start; usually when you add something, you use it for more than tracking. If this is a garage application, we want to add the vehicle the customer brought into the garage and give it an ID so we can track its history. But even so, it is fast.
I have tried subcontracting to younger guys using 'newer development tools', and they never seem to believe me when I say I can do all this ten times faster with Delphi! I can do in two hours, bug-free, what seems to take them two days - and they still have bugs...
DO - Be careful planning your VFI! As someone mentioned, if you want to change the name of a component on one of your parent classes, be prepared for trouble. You will need to open and 'edit' each child in the hierarchy, and even if you clean DCUs you can still end up in some DFM hell. I can assure you that in 2006 this is still a problem.
DON'T create one monster datamodule
DO take your time in the upfront design, refactoring after you have created a ton of dependents can be a fun challenge, but a nightmare when you have to get something new working quickly!
Be very careful if you use the "put every DB object into one big data module" (or "a few big data modules" in huge applications) approach. This can make your data module so big that you will need an HD monitor to see all the dataset components on it.
Bottom line: switch to using specialized classes for business logic instead of big global data modules. Use global data modules with logic ONLY in very small projects.
Well, I strongly suggest you use actions (TActionList) when designing your user interface. There are many predefined actions, including Next/Prev/Insert/Delete/Edit/Update operations that can be performed on datasets, so it is good practice to use these actions and link them to the buttons/menus on your forms. This prevents repeated code for UI logic.
There is no need for a CRUD generator in Delphi! Add a TDataSource, a TDBGrid and a TActionList to a form, add the predefined dataset actions to the action list, link those actions to buttons or menus, and you are done!
For large applications, I use the tiOPF object persistence framework. That lets me deal with objects rather than datasets and swap databases easily. Most of my business logic moves into the business object model (BOM) and my forms are pretty dumb. tiOPF has a few ways to connect the BOM to forms: persistence-aware controls, a TtiDataset for data-aware controls, and Model-GUI-Mediator classes for connecting to normal controls.
For small and quick apps, I just use data modules and database components. The main things to remember are:
Put as much code in the data modules (and as little in the forms) as possible.
Do multiple data modules broken down by functionality, e.g. the email module, the income module, the invoicing module...
Test, test, test
Use VFI (visual form inheritance). Design a standard DB form - for example, an empty DataSet, a DataSource, and a PageControl with two sheets. The first will be empty; later on you'll add edit controls to manipulate data on the child forms. Add a DBGrid to the second sheet. Beware, this isn't the OOP way, though - but it's easy and fast.
I would take a look at Data Abstract from RemObjects.

How to unit test an object with database queries

I've heard that unit testing is "totally awesome", "really cool" and "all manner of good things", but 70% or more of my files involve database access (some read and some write) and I'm not sure how to write a unit test for these files.
I'm using PHP and Python but I think it's a question that applies to most/all languages that use database access.
I would suggest mocking out your calls to the database. Mocks are basically objects that look like the object you are trying to call a method on, in the sense that they have the same properties, methods, etc. available to the caller. But instead of performing whatever action they are programmed to do when a particular method is called, they skip that altogether and just return a result. That result is typically defined by you ahead of time.
In order to set up your objects for mocking, you probably need to use some sort of inversion of control / dependency injection pattern, as in the following pseudo-code:
class Bar
{
private FooDataProvider _dataProvider;
public instantiate(FooDataProvider dataProvider) {
_dataProvider = dataProvider;
}
public getAllFoos() {
// instead of calling Foo.GetAll() here, we are introducing an extra layer of abstraction
return _dataProvider.GetAllFoos();
}
}
class FooDataProvider
{
public Foo[] GetAllFoos() {
return Foo.GetAll();
}
}
Now in your unit test, you create a mock of FooDataProvider, which allows you to call the method GetAllFoos without having to actually hit the database.
class BarTests
{
public TestGetAllFoos() {
// here we set up our mock FooDataProvider
mockRepository = MockingFramework.new()
mockFooDataProvider = mockRepository.CreateMockOfType(FooDataProvider);
// create a new array of Foo objects
testFooArray = new Foo[] {Foo.new(), Foo.new(), Foo.new()}
// the next statement will cause testFooArray to be returned every time we call FooDAtaProvider.GetAllFoos,
// instead of calling to the database and returning whatever is in there
// ExpectCallTo and Returns are methods provided by our imaginary mocking framework
ExpectCallTo(mockFooDataProvider.GetAllFoos).Returns(testFooArray)
// now begins our actual unit test
testBar = new Bar(mockFooDataProvider)
baz = testBar.GetAllFoos()
// baz should now equal the testFooArray object we created earlier
Assert.AreEqual(3, baz.length)
}
}
A common mocking scenario, in a nutshell. Of course you will still probably want to unit test your actual database calls too, for which you will need to hit the database.
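Since the question mentions Python: the same shape of test can be written with the standard library's unittest.mock (Bar and the provider mirror the pseudo-code above; the names are illustrative):
import unittest
from unittest.mock import Mock

class Bar:
    def __init__(self, data_provider):
        self._data_provider = data_provider   # injected, so tests can pass a mock

    def get_all_foos(self):
        return self._data_provider.get_all_foos()

class BarTests(unittest.TestCase):
    def test_get_all_foos(self):
        test_foos = ["foo1", "foo2", "foo3"]
        mock_provider = Mock()
        mock_provider.get_all_foos.return_value = test_foos   # no database hit

        bar = Bar(mock_provider)
        self.assertEqual(3, len(bar.get_all_foos()))
        mock_provider.get_all_foos.assert_called_once()

if __name__ == "__main__":
    unittest.main()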
Ideally, your objects should be persistence ignorant. For instance, you should have a "data access layer" that you make requests to and that returns objects. This way, you can leave that part out of your unit tests, or test it in isolation.
If your objects are tightly coupled to your data layer, it is difficult to do proper unit testing. The first part of "unit test" is "unit". All units should be testable in isolation.
In my C# projects, I use NHibernate with a completely separate Data layer. My objects live in the core domain model and are accessed from my application layer. The application layer talks to both the data layer and the domain model layer.
The application layer is also sometimes called the "Business Layer".
If you are using PHP, create a specific set of classes ONLY for data access. Make sure your objects have no idea how they are persisted and wire up the two in your application classes.
Another option would be to use mocking/stubs.
The easiest way to unit test an object with database access is to use transaction scopes.
For example:
[Test]
[ExpectedException(typeof(NotFoundException))]
public void DeleteAttendee() {
using(TransactionScope scope = new TransactionScope()) {
Attendee anAttendee = Attendee.Get(3);
anAttendee.Delete();
anAttendee.Save();
//Try reloading. Instance should have been deleted.
Attendee deletedAttendee = Attendee.Get(3);
}
}
This will revert the state of the database, basically like a transaction rollback, so you can run the test as many times as you want without any side effects. We've used this approach successfully in large projects. Our build does take a little long to run (15 minutes), but that is not horrible for 1800 unit tests. Also, if build time is a concern, you can change the build process to have multiple builds: one for building src, and another that fires up afterwards and handles unit tests, code analysis, packaging, etc.
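The same rollback-per-test idea can be sketched in Python with the standard sqlite3 module (the test.db file and attendees table are illustrative; any DB-API driver with transaction support works the same way):
import sqlite3
import unittest

class AttendeeDbTests(unittest.TestCase):
    def setUp(self):
        # sqlite3 opens an implicit transaction before the first write
        # and keeps it open until commit() or rollback()
        self.conn = sqlite3.connect("test.db")

    def tearDown(self):
        self.conn.rollback()      # undo everything the test changed
        self.conn.close()

    def test_delete_attendee(self):
        self.conn.execute("DELETE FROM attendees WHERE id = ?", (3,))
        count = self.conn.execute(
            "SELECT COUNT(*) FROM attendees WHERE id = ?", (3,)).fetchone()[0]
        self.assertEqual(0, count)   # gone inside the open transaction
        # tearDown rolls back, so the row is back for the next test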
I can perhaps give you a taste of our experience when we began looking at unit testing our middle-tier process that included a ton of "business logic" sql operations.
We first created an abstraction layer that allowed us to "slot in" any reasonable database connection (in our case, we simply supported a single ODBC-type connection).
Once this was in place, we were then able to do something like this in our code (we work in C++, but I'm sure you get the idea):
GetDatabase().ExecuteSQL( "INSERT INTO foo ( blah, blah )" )
At normal run time, GetDatabase() would return an object that fed all our sql (including queries), via ODBC directly to the database.
We then started looking at in-memory databases - the best by a long way seems to be SQLite (http://www.sqlite.org/index.html). It's remarkably simple to set up and use, and allowed us to subclass and override GetDatabase() to forward SQL to an in-memory database that was created and destroyed for every test performed.
We're still in the early stages of this, but it's looking good so far. However, we do have to make sure we create any tables that are required and populate them with test data - although we've reduced the workload somewhat by creating a generic set of helper functions that do a lot of this for us.
Overall, it has helped immensely with our TDD process, since making what seem like quite innocuous changes to fix certain bugs can have quite strange effects on other (difficult to detect) areas of your system - due to the very nature of SQL/databases.
Obviously, our experiences have centred around a C++ development environment, however I'm sure you could perhaps get something similar working under PHP/Python.
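For what it's worth, a minimal Python sketch of the same idea using the standard sqlite3 module (get_database() and the foo table are illustrative, mirroring the snippet above):
import sqlite3

def get_database(in_memory=False):
    # tests pass in_memory=True; production code opens the real database file
    return sqlite3.connect(":memory:" if in_memory else "app.db")

def test_insert_foo():
    db = get_database(in_memory=True)
    db.execute("CREATE TABLE foo (blah TEXT)")          # create the tables each test needs
    db.execute("INSERT INTO foo (blah) VALUES (?)", ("blah",))
    assert db.execute("SELECT COUNT(*) FROM foo").fetchone()[0] == 1
    db.close()                                           # the in-memory database disappears here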
Hope this helps.
You should mock the database access if you want to unit test your classes. After all, you don't want to test the database in a unit test. That would be an integration test.
Abstract the calls away and then insert a mock that just returns the expected data. If your classes don't do more than executing queries, it may not even be worth testing them, though...
The book xUnit Test Patterns describes some ways to handle unit-testing code that hits a database. I agree with the other people who are saying that you don't want to do this because it's slow, but you gotta do it sometime, IMO. Mocking out the db connection to test higher-level stuff is a good idea, but check out this book for suggestions about things you can do to interact with the actual database.
I usually try to break up my tests between testing the objects (and ORM, if any) and testing the db. I test the object-side of things by mocking the data access calls whereas I test the db side of things by testing the object interactions with the db which is, in my experience, usually fairly limited.
I used to get frustrated with writing unit tests until I started mocking the data access portion so I didn't have to create a test db or generate test data on the fly. By mocking the data you can generate it all at run time and be sure that your objects work properly with known inputs.
Options you have:
Write a script that will wipe out the database before you start the unit tests, then populate the db with a predefined set of data and run the tests. You can also do that before every test - it'll be slow, but less error-prone.
Inject the database. (Example in pseudo-Java, but applies to all OO-languages)
class Database {
public Result query(String query) {... real db here ...}
}
class MockDatabase extends Database {
public Result query(String query) {
return "mock result";
}
}
class ObjectThatUsesDB {
public ObjectThatUsesDB(Database db) {
this.database = db;
}
}
Now in production you use the normal database, and for all tests you just inject the mock database that you can create ad hoc.
Do not use the DB at all throughout most of the code (that's a bad practice anyway). Create a "database" object that, instead of returning raw results, returns normal objects (i.e. returns a User instead of a tuple {name: "marcin", password: "blah"}). Write all your tests with ad hoc constructed real objects, and write one big test that depends on a database and makes sure this conversion works OK.
Of course these approaches are not mutually exclusive and you can mix and match them as you need.
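A rough Python sketch of approaches 2 and 3 combined (all class names here are illustrative): the gateway maps rows to domain objects, and tests inject a hand-rolled fake instead of the real thing.
class User:
    def __init__(self, name, password):
        self.name = name
        self.password = password

class Database:
    # the real implementation would run SQL and map each row to a User
    def find_user(self, name):
        raise NotImplementedError

class FakeDatabase(Database):
    # test double: returns canned domain objects, never touches a database
    def find_user(self, name):
        return User(name, "blah")

class Greeter:
    def __init__(self, db):
        self.db = db              # production injects the real Database

    def greeting(self, name):
        return "hello " + self.db.find_user(name).name

# in a test: assert Greeter(FakeDatabase()).greeting("marcin") == "hello marcin"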
Unit testing your database access is easy enough if your project has high cohesion and loose coupling throughout. This way you can test only the things that each particular class does without having to test everything at once.
For example, if you unit test your user interface class the tests you write should only try to verify the logic inside the UI worked as expected, not the business logic or database action behind that function.
If you want to unit test the actual database access you will actually end up with more of an integration test, because you will be dependent on the network stack and your database server, but you can verify that your SQL code does what you asked it to do.
The hidden power of unit testing for me personally has been that it forces me to design my applications in a much better way than I might without them. This is because it really helped me break away from the "this function should do everything" mentality.
Sorry I don't have any specific code examples for PHP/Python, but if you want to see a .NET example I have a post that describes a technique I used to do this very same testing.
I agree with the first post - database access should be stripped away into a DAO layer that implements an interface. Then, you can test your logic against a stub implementation of the DAO layer.
You could use mocking frameworks to abstract out the database engine. I don't know whether PHP/Python has any, but for typed languages (C#, Java, etc.) there are plenty of choices.
It also depends on how you designed that database access code, because some designs are easier to unit test than others, as the earlier posts have mentioned.
I've never done this in PHP and I've never used Python, but what you want to do is mock out the calls to the database. To do that you can implement some IoC, whether with a 3rd-party tool or by managing it yourself; then you can implement a mock version of the database caller, which is where you will control the outcome of that fake call.
A simple form of IoC can be achieved just by coding to interfaces. This requires some kind of object orientation going on in your code, so it may not apply to what you are doing (I say that since all I have to go on is your mention of PHP and Python).
Hope that's helpful, if nothing else you've got some terms to search on now.
Setting up test data for unit tests can be a challenge.
When it comes to Java, if you use Spring APIs for unit testing, you can control the transactions at the unit level. In other words, you can execute unit tests that involve database updates/inserts/deletes and then roll back the changes. At the end of the execution you leave everything in the database as it was before you started. To me, it is as good as it can get.
