Persistent login during Django Selenium tests - selenium-webdriver

Here is the snippet I'm using for my end-to-end tests with Selenium (I'm totally new to Selenium Django testing):
import time

from django.contrib.auth.models import User
from django.contrib.staticfiles.testing import StaticLiveServerTestCase
from selenium.webdriver.chrome.webdriver import WebDriver


class MyTest(StaticLiveServerTestCase):

    @classmethod
    def setUpClass(cls):
        super(MyTest, cls).setUpClass()
        cls.selenium = WebDriver()
        cls.user = User.objects.create_superuser(username=...,
                                                 password=...,
                                                 email=...)
        time.sleep(1)
        cls._login()

    @classmethod
    def _login(cls):
        cls.selenium.get(
            '%s%s' % (cls.live_server_url, '/admin/login/?next=/'))
        ...

    def test_login(self):
        self.selenium.implicitly_wait(10)
        self.assertIn(self.username,
                      self.selenium.find_element_by_class_name("fixtop").text)

    def test_go_to_dashboard(self):
        query_json, saved_entry = self._create_entry()
        self.selenium.get(
            '%s%s' % (
                self.live_server_url, '/dashboard/%d/' % saved_entry.id))
        # assert on displayed values

    def _create_entry(self):
        # create an entry using the form and return it
        ...

    def test_create(self):
        self.maxDiff = None
        query_json, saved_entry = self._create_entry()
        # ... assert on displayed values
I've noticed that the login does not persist between tests. I could call _login in setUp, but that makes my tests slower.
So how can I keep a persistent login between tests? What are the best practices for writing these tests (Django Selenium tests)?

Through-the-browser tests with Selenium are slow, period. They are, however, very valuable as they're the best shot you have at automating the true user experience.
You shouldn't try to write true unit tests with Selenium. Instead, use it to write one or two large functional tests. Try to capture an entire user interaction from start to finish. Then structure your test suite so that you can run your fast, non-Selenium unit tests separately, and only have to run the slow functional tests on occasion.
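If your Django version supports test tagging (1.10+), one way to keep the slow functional tests out of the everyday run is to tag the Selenium test classes and exclude that tag when you only want the fast unit tests; a minimal sketch:
from django.contrib.staticfiles.testing import StaticLiveServerTestCase
from django.test import tag


@tag('selenium')  # run these only with: manage.py test --tag=selenium
class FunctionalTests(StaticLiveServerTestCase):
    def test_full_user_journey(self):
        ...

# everyday run, skipping the browser tests:
#   manage.py test --exclude-tag=selenium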
Your code looks fine, but in this scenario you'd combine test_go_to_dashboard and test_create into one method.
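As a sketch (reusing the helpers and attributes from the snippet above), the combined method could look like this:
def test_dashboard_workflow(self):
    # one end-to-end journey: create an entry, then view it on its dashboard page
    self.maxDiff = None
    query_json, saved_entry = self._create_entry()
    # ... assert on the values displayed after creation ...
    self.selenium.get(
        '%s%s' % (self.live_server_url, '/dashboard/%d/' % saved_entry.id))
    # ... assert on the values displayed on the dashboard ...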

kevinharvey pointed me to the solution! I finally found a way to reduce testing time while keeping track of all the tests:
I renamed all methods starting with test... to _test_... and added a main method that calls each _test_ method:
def test_main(self):
    for attr in dir(self):
        # call each test and avoid a recursive call
        if attr.startswith('_test_') and attr != self.test_main.__name__:
            with self.subTest("subtest %s" % attr):
                self.selenium.get(self.live_server_url)
                getattr(self, attr)()
This way, I can still test (debug) each method individually :)

Related

How to clear/teardown db with pytest in Flask app

I know this might seem like a duplicate; I have found similar questions on SO on this topic, but none of them really worked for me. I simply need to clear out (or tear down) the database after each test, so every test works with a new, empty one.
I am using a fixture and my code looks like this:
@pytest.fixture(scope="module", autouse=True)
def test_client_db():
    # set up
    app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///"
    with app.app_context():
        db.init_app(app)
        db.create_all()
    testing_client = app.test_client()
    ctx = app.app_context()
    ctx.push()

    # do the testing
    yield testing_client

    # tear down
    with app.app_context():
        db.session.remove()
        db.drop_all()
    ctx.pop()
I am new to pytest and from what I have learnt, whatever goes before yield works as sort of a "set up", whatever goes after works as "teardown". Yet, when I run several tests, the database is not clear for each test, it holds data between them.
Why is it so? What is wrong with this fixture? What am I missing?
You have set the scope to module - this means the fixture will only be reset after all tests in a module have run.
Either set the scope to function or leave it out completely, as function is the default.
See https://docs.pytest.org/en/stable/fixture.html#fixture-scopes
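A minimal sketch of the same fixture with the default function scope, so the database is created and dropped around every single test (app and db are the objects from the question, assumed to be importable from your application module with db.init_app(app) already done there):
import pytest

from myapp import app, db  # assumption: adjust the import to your project layout


@pytest.fixture(autouse=True)  # no scope argument: defaults to "function", i.e. per test
def test_client_db():
    app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///"
    with app.app_context():
        db.create_all()
        yield app.test_client()  # each test gets a fresh client and an empty in-memory database
        db.session.remove()
        db.drop_all()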

Is there a way to speed up AngularJS protractor tests?

I have created tests for my application. Everything works, but it runs slowly: even though only 1/3 of the application is tested, it still takes around ten minutes for Protractor to create the test data, fill out the fields, click the submit button, etc.
I am using Google Chrome for the testing. It feels slow as I watch Protractor fill out the fields one by one.
Here's an example of my test suite:
suites: {
    login: ['Login/test.js'],
    homePage: ['Home/test.js'],
    adminPage: ['Admin/Home/test.js'],
    adminObjective: ['Admin/Objective/test.js'],
    adminObjDetail: ['Admin/ObjectiveDetail/test.js'],
    adminTopic: ['Admin/Topic/test.js'],
    adminTest: ['Admin/Test/test.js'],
    adminUser: ['Admin/User/test.js'],
    adminRole: ['Admin/Role/test.js']
},
This is one test group:
login: ['Login/test.js'],
homePage: ['Home/test.js'],
adminUser: ['Admin/User/test.js'],
adminRole: ['Admin/Role/test.js']
This is another test group:
adminPage: ['Admin/Home/test.js'],
adminObjective: ['Admin/Objective/test.js'],
adminObjDetail: ['Admin/ObjectiveDetail/test.js'],
adminTopic: ['Admin/Topic/test.js'],
adminTest: ['Admin/Test/test.js'],
The two groups can run independently, but they must run in the order I have above. After the answers I did read about sharding, but I am not sure it helps my situation, as my tests need to run in order. Ideally I would like to have one set of tests run in one browser and the other set in another browser.
I read about headless browsers such as PhantomJS. Does anyone have experience with these being faster?
Any advice on how I could do this would be much appreciated.
We currently use "shardTestFiles: true" which runs our tests in parallel, this could help if you have multiple tests.
I'm not sure what you are testing here, whether its the data creation or the end result. If the latter, you may want to consider mocking the data creation instead or bypassing the UI some other way.
Injecting Data
One thing you can do that will give you a major boost in performance is to not double-test. What I mean by this is that you end up filling in dummy data a number of times to get to a step. It's also one of the major reasons that people need tests to run in a certain order (to speed up data entry).
An example of this is if you want to test filtering on a grid (data-table). Filling in data is not part of this action. It's just an annoying thing that you have to do to get to testing the filtering. By calling a service to add the data you can bypass the UI and Selenium's general slowness (I'd also recommend this on the server side by injecting values directly into the DB using migrations).
A nice way to do this is to add a helper to your page object as follows:
module.exports = {
    projects: {
        create: function(data) {
            return browser.executeAsyncScript(function(data, callback) {
                var api = angular.injector(['ProtractorProjectsApp']).get('apiService');
                api.project.save(data, function(newItem) {
                    callback(newItem._id);
                });
            }, data);
        }
    }
};
The code in this isn't the cleanest, but you get the general gist of it. Another alternative is to replace the module with a double or mock using Protractor#addMockModule. You need to add this code before you call Protractor#get(). It will load after your application services, overriding an existing service if it has the same name.
You can use it as follows:
var dataUtilMockModule = function () {
    // Create a new module which depends on your data creation utilities
    var utilModule = angular.module('dataUtil', ['platform']);
    // Create a new service in the module that creates a new entity
    utilModule.service('EntityCreation', ['EntityDataService', '$q', function (EntityDataService, $q) {
        /**
         * Returns a promise which is resolved/rejected according to entity creation success
         * @returns {*}
         */
        this.createEntity = function (details, type) {
            // This is your business logic for creating entities
            var entity = EntityDataService.Entity(details).ofType(type);
            var promise = entity.save();
            return promise;
        };
    }]);
};

browser.addMockModule('dataUtil', dataUtilMockModule);
Either of these methods should give you a significant speedup in your testing.
Sharding Tests
Sharding the tests means splitting up the suites and running them in parallel. Doing this in Protractor is quite simple: adding shardTestFiles and maxInstances to your capabilities config should allow you to (in this case) run at most two tests in parallel. Increase maxInstances to increase the number of tests run. Note: be careful not to set the number too high. Browsers may require multiple threads and there is also an initialisation cost in opening new windows.
capabilities: {
    browserName: 'chrome',
    shardTestFiles: true,
    maxInstances: 2
},
Setting up PhantomJS (from protractor docs)
Note: We recommend against using PhantomJS for tests with Protractor. There are many reported issues with PhantomJS crashing and behaving differently from real browsers.
In order to test locally with PhantomJS, you'll need to either have it installed globally, or relative to your project. For global install see the PhantomJS download page (http://phantomjs.org/download.html). For local install run: npm install phantomjs.
Add phantomjs to the driver capabilities, and include a path to the binary if using local installation:
capabilities: {
    'browserName': 'phantomjs',
    /*
     * Can be used to specify the phantomjs binary path.
     * This can generally be omitted if you installed phantomjs globally.
     */
    'phantomjs.binary.path': require('phantomjs').path,
    /*
     * Command line args to pass to ghostdriver, phantomjs's browser driver.
     * See https://github.com/detro/ghostdriver#faq
     */
    'phantomjs.ghostdriver.cli.args': ['--loglevel=DEBUG']
}
Another speed tip I've found: for every test I was logging in and then logging out after the test was done. Now I check whether I'm already logged in with the following in my helper method:
# Login to the system and make sure we are logged in.
login: ->
  browser.get("/login")
  element(By.id("username")).isPresent().then((logged_in) ->
    if logged_in == false
      element(By.id("staff_username")).sendKeys("admin")
      element(By.id("staff_password")).sendKeys("password")
      element(By.id("login")).click()
  )
I'm using grunt-protractor-runner v0.2.4 which uses protractor ">=0.14.0-0 <1.0.0".
This version is 2 or 3 times faster than the latest one (grunt-protractor-runner#1.1.4, which depends on protractor#^1.0.0).
So I suggest you give a previous version of protractor a try.
Hope this helps.
Along with the great tips found above, I would recommend disabling Angular/CSS animations to help speed everything up when the tests run in non-headless browsers. I personally use the following code in my test suite, in the "onPrepare" function of my 'conf.js' file:
onPrepare: function() {
    var disableNgAnimate = function() {
        angular
            .module('disableNgAnimate', [])
            .run(['$animate', function($animate) {
                $animate.enabled(false);
            }]);
    };

    var disableCssAnimate = function() {
        angular
            .module('disableCssAnimate', [])
            .run(function() {
                var style = document.createElement('style');
                style.type = 'text/css';
                style.innerHTML = '* {' +
                    '-webkit-transition: none !important;' +
                    '-moz-transition: none !important;' +
                    '-o-transition: none !important;' +
                    '-ms-transition: none !important;' +
                    'transition: none !important;' +
                    '}';
                document.getElementsByTagName('head')[0].appendChild(style);
            });
    };

    browser.addMockModule('disableNgAnimate', disableNgAnimate);
    browser.addMockModule('disableCssAnimate', disableCssAnimate);
}
Please note: I did not write the above code, I found it online while looking for ways to speed up my own tests.
From what I know:
run tests in parallel
inject data when you are only testing a UI element
use CSS selectors, not XPath (browsers have a native engine for CSS, while the XPath engine is not as performant as the CSS one)
run them on high-performance machines
use beforeAll() and beforeEach() as much as possible for instructions that you repeat often across multiple tests
Using PhantomJS will considerably reduce the duration compared to a GUI-based browser, but the better solution I found is to organise tests so that each one can run in any order, independently of the other tests. This is easy to achieve with an ORM (jugglingdb, sequelize and many more) and TDB frameworks, and to make the tests more manageable you can use a framework such as Jasmine or Cucumber, which has before and after hooks for individual tests. Then you can gear up with the maximum number of instances your machine can bear using "shardTestFiles: true".

App Engine + NoseGAE bizarre broken test

I'm using the NoseGAE to write local unit tests for my App Engine application, however something is suddenly going wrong with one of my tests. I have standard setUp and tearDown functions, but one test seemingly broke for a reason I can't discern. Even stranger, setUp and tearDown are NOT getting called each time. I added global variables to count setUp/tearDown calls, and on my 4th test (the now seemingly broken one), setUp has been called twice and tearDown has been called once. Further, one of the objects from the third test exists when I query it by id, but not in a general query for its type. Here's some code that gives the bizarre picture:
class GameTest(unittest.TestCase):
    def setUp(self):
        self.testapp = webtest.TestApp(application)
        self.testbed = testbed.Testbed()
        self.testbed.activate()
        self.testbed.init_datastore_v3_stub(
            consistency_policy=datastore_stub_util.PseudoRandomHRConsistencyPolicy(probability=1),
            require_indexes=True,
            root_path="%s/../../../" % os.path.dirname(__file__)
        )

    def tearDown(self):
        self.testbed.deactivate()
        self.testapp.cookies.clear()

    def test1(self):
        ...

    def test2(self):
        ...

    def test3(self):
        ...
        # I create a Game object with the id 123 in this particular test
        Game(id=123).put()
        ...

    def test4(self):
        print "id lookup: ", Game.get_by_id(123)
        print "query: ", Game.query().get()
        self.assertIsNone(Game.get_by_id(123))
This is an abstraction of the tests, but illustrates the issue.
The 4th test fails because it asserts that an object with that id does not exist. When I print out the two statements:
id lookup: Game(key=Key('Game', 123))
query: None
The id lookup shows the object created in test3, but the query lookup is EMPTY. This makes absolutely no sense to me. Further, I am 100% sure the test was working earlier. Does anyone have any idea how this is even possible? Could I possibly have some local corrupted file causing an issue?
I somewhat "solved" this. This issue only reproduced when I had other test cases in other files that were failing. Once I solved those, all my tests passed. I still don't fully understand why other failing tests should cause these bizarre issues with the testbed, but to anyone else having this issue, try fixing your other test cases first and see if that doesn't cause it to go away.
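If the leaked state came from a testbed that was never deactivated because an earlier test errored out before tearDown ran, one defensive pattern (my own sketch, not part of the original fix) is to register the cleanup right after activation, so it runs even when setUp or the test itself fails partway:
import unittest

from google.appengine.ext import testbed


class GameTest(unittest.TestCase):
    def setUp(self):
        self.testbed = testbed.Testbed()
        self.testbed.activate()
        # addCleanup runs even if the rest of setUp raises or a test fails,
        # so the stubs never leak into the next test case
        self.addCleanup(self.testbed.deactivate)
        self.testbed.init_datastore_v3_stub()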

How are my tests not affecting the database...?

I recently switched from using regular old tests to using WebTest and this "No Database Test Runner"
from django.test.simple import DjangoTestSuiteRunner


class NoTestDbDatabaseTestRunner(DjangoTestSuiteRunner):
    def setup_databases(self, **kwargs):
        pass

    def teardown_databases(self, old_config, **kwargs):
        pass
Here's an example test which HAS to be hitting the database somehow...
What is happening? Are my tests hitting the database but rolling back to some old state? Test-to-test I can see that each created listing has an incremented id.
def test_image_upload(self):
    form_data = self.listing_form_defaults.copy()
    form_data['images-TOTAL_FORMS'] = '3'
    upload_files = [
        ('images-0-image', 'testdata/1.png'),
        ('images-1-image', 'testdata/2.png'),
        ('images-2-image', 'testdata/3.png'),
    ]
    form_resp = self.app.post(
        reverse('listing_create'),
        form_data,
        upload_files=upload_files,
        user='kmike'
    ).follow()
    assert len(form_resp.context['listing'].images.all()) == 3
form_resp.context['listing'].images.all() HAS to be hitting the database; I printed it and it contained records from my database.
I'm just confused: my tests run blazingly fast and don't seem to actually change my database. How is this working/happening?!
Tests that require a database (namely, model tests) will not use your “real” (production) database. Separate, blank databases are created for the tests.
See Here
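For reference, a rough sketch of what a stock runner's setup_databases/teardown_databases normally do per connection, which the no-op runner above skips (simplified, using Django's old creation API; not a drop-in implementation):
from django.db import connection

# roughly what DjangoTestSuiteRunner.setup_databases does for each connection:
old_name = connection.settings_dict['NAME']       # remember the real database name
connection.creation.create_test_db(verbosity=1)   # create "test_<old_name>" and repoint the connection at it

# ... the tests then run against that blank test database ...

# and what teardown_databases does afterwards:
connection.creation.destroy_test_db(old_name, verbosity=1)  # drop the test database, restore the original name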

GAE Python Testing

First off, I'm quite new to GAE/Python, so please bear with me. Here's the situation:
I have a test_all.py which runs all the test suites in my package. The TestCase's setUp currently looks like this:
import test_settings as policy  # has consistency variables <- does not work
...

def setUp(self):
    # First, create an instance of the Testbed class.
    self.testbed = testbed.Testbed()
    # Then activate the testbed, which prepares the service stubs for use.
    self.testbed.activate()
    # Consistency policy for normal operations
    self.policy = None
    if policy.policy_flag == 'STRICT':
        # Create a consistency policy that will simulate the High Replication consistency model.
        self.policy = datastore_stub_util.PseudoRandomHRConsistencyPolicy(probability=0)
    # Initialize the datastore stub with this policy.
    self.testbed.init_datastore_v3_stub(consistency_policy=self.policy)
This is my primitive attempt to set up the datastore with different consistency policies and run tests against them. Total fail.
What I want is to run my test cases against different datastore consistencies in one go, from my root test_all.py. Using the above or some other approach, how can I do this? How do I pass parameters to the TestCase from the test runner?
Python, unit test - Pass command line arguments to setUp of unittest.TestCase
The above thread shows exactly how to best pass runtime arguments to the test case. Specifically answer 2.
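For example, a sketch of the environment-variable variant of that idea (CONSISTENCY_POLICY is a hypothetical variable name; the policy mapping mirrors the setUp from the question):
import os
import unittest

from google.appengine.datastore import datastore_stub_util
from google.appengine.ext import testbed


class DatastoreConsistencyTest(unittest.TestCase):
    def setUp(self):
        self.testbed = testbed.Testbed()
        self.testbed.activate()
        self.addCleanup(self.testbed.deactivate)
        # the runner picks the policy, e.g.:
        #   CONSISTENCY_POLICY=STRICT python test_all.py
        if os.environ.get('CONSISTENCY_POLICY') == 'STRICT':
            policy = datastore_stub_util.PseudoRandomHRConsistencyPolicy(probability=0)
        else:
            policy = None  # use the stub's default behaviour
        self.testbed.init_datastore_v3_stub(consistency_policy=policy)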
