I know this might seem like a duplicate; I have found similar questions on SO about this topic, but none of them really worked for me. I simply need to clear out (or tear down) the database after each test, so that every test works with a new, empty one.
I am using a fixture and my code looks like this:
@pytest.fixture(scope="module", autouse=True)
def test_client_db():
    # set up
    app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///"
    with app.app_context():
        db.init_app(app)
        db.create_all()
    testing_client = app.test_client()
    ctx = app.app_context()
    ctx.push()

    # do the testing
    yield testing_client

    # tear down
    with app.app_context():
        db.session.remove()
        db.drop_all()
    ctx.pop()
I am new to pytest, and from what I have learnt, whatever goes before the yield works as a sort of "set up" and whatever goes after works as "teardown". Yet when I run several tests, the database is not cleared for each test; it holds data between them.
Why is that? What is wrong with this fixture? What am I missing?
You have set the scope to module - this means the fixture will only be reset after all tests in a module have run.
Either set the scope to function or leave it out completely, as function is the default.
See https://docs.pytest.org/en/stable/fixture.html#fixture-scopes
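For example, here is the fixture from the question with only the scope changed (a rough sketch, assuming the same app and db objects as above and that db.init_app(app) has already been called once during app setup):

@pytest.fixture(scope="function", autouse=True)  # "function" is also the default
def test_client_db():
    # set up: a fresh in-memory database for every single test
    app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///"
    with app.app_context():
        db.create_all()
    testing_client = app.test_client()
    ctx = app.app_context()
    ctx.push()

    # do the testing
    yield testing_client

    # tear down: drop everything so the next test starts empty
    with app.app_context():
        db.session.remove()
        db.drop_all()
    ctx.pop()

With function scope, the set-up and tear-down run around every test instead of once per module.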
Here is the snippet I'm using for my end-to-end tests with Selenium (I'm totally new to Selenium Django testing):
import time

from django.contrib.auth.models import User
from django.contrib.staticfiles.testing import StaticLiveServerTestCase
from selenium.webdriver.chrome.webdriver import WebDriver


class MyTest(StaticLiveServerTestCase):

    @classmethod
    def setUpClass(cls):
        super(MyTest, cls).setUpClass()
        cls.selenium = WebDriver()
        cls.user = User.objects.create_superuser(username=...,
                                                 password=...,
                                                 email=...)
        time.sleep(1)
        cls._login()

    @classmethod
    def _login(cls):
        cls.selenium.get(
            '%s%s' % (cls.live_server_url, '/admin/login/?next=/'))
        ...

    def test_login(self):
        self.selenium.implicitly_wait(10)
        self.assertIn(self.username,
                      self.selenium.find_element_by_class_name("fixtop").text)

    def test_go_to_dashboard(self):
        query_json, saved_entry = self._create_entry()
        self.selenium.get(
            '%s%s' % (
                self.live_server_url, '/dashboard/%d/' % saved_entry.id))
        # assert on displayed values

    def _create_entry(self):
        # creates an entry using the form and returns it
        ...

    def test_create(self):
        self.maxDiff = None
        query_json, saved_entry = self._create_entry()
        ...
        # assert on displayed values
I noticed that the login does not persist between tests. I could call _login in setUp, but that would make my tests slower.
So how do I keep a persistent login between tests? What are the best practices for this kind of test (Django Selenium tests)?
Through-the-browser tests with Selenium are slow, period. They are, however, very valuable as they're the best shot you have at automating the true user experience.
You shouldn't try to write true unit tests with Selenium. Instead, use it to write one or two large functional tests. Try to capture an entire user interaction from start to finish. Then structure your test suite so that you can run your fast, non-Selenium unit tests separately, and only have to run the slow functional tests on occasion.
Your code looks fine, but in this scenario you'd combine test_go_to_dashboard and test_create into one method.
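For illustration, a rough sketch of what that combined functional test could look like, reusing the helpers from the question (self.username and _create_entry are assumed to exist as shown above):

def test_dashboard_flow(self):
    # login already happened once in setUpClass; walk the whole flow in one test
    self.selenium.implicitly_wait(10)
    self.assertIn(self.username,
                  self.selenium.find_element_by_class_name("fixtop").text)
    query_json, saved_entry = self._create_entry()
    self.selenium.get(
        '%s%s' % (self.live_server_url, '/dashboard/%d/' % saved_entry.id))
    # assert on the displayed dashboard values here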
kevinharvey pointed me to the solution! I finally found a way to reduce testing time while keeping track of all the tests:
I renamed all methods starting with test_ to _test_ and added a main method that calls each _test_ method:
def test_main(self):
    for attr in dir(self):
        # call each test and avoid a recursive call
        if attr.startswith('_test_') and attr != self.test_main.__name__:
            with self.subTest("subtest %s" % attr):
                self.selenium.get(self.live_server_url)
                getattr(self, attr)()
This way, I can still test (debug) each method individually :)
I'm building an SPA in AngularJS served by a Laravel (5.1) backend. Of late I've been encountering an annoying error, a server 500 or status code 0, which is a bit hard to explain, but let me try; maybe someone will understand the anatomy of my problem.
When I start my AngularJS controller, I make several server calls (via independent $http calls from services) to retrieve information I might later need in the controller. For example,
Functions.getGrades()
    .then(function(response)
    {
        $scope.grades = response.data;
    });

Subjects.offered()
    .then(function(response)
    {
        $scope.subjects = response.data;
    });
Later on I pass these variables (grades or subjects) to a service where they are used for processing. However, these calls randomly return 500 server errors after they run, and sometimes status code 0, and it is hard for me to pin down the circumstances that lead to them. This leaves me with frequent empty Laravel-ised error screens like the ones shown below.
Anyone reading my mind?
Ok, after a suggestion in a comment above that I check my Laravel log files (located in storage/logs/laravel.log in Laravel 5.1), I found out that the main error most of the time was this one: 'PDOException' with message 'SQLSTATE[HY000] [1044] Access denied for user ''@'localhost' to database 'forge'' in ..., plus another one paraphrased as No valid encrypter found. These were the key.
Reading another SO thread here, it said in part:
I solved it: sometimes Laravel does not read APP_KEY in .env and returns the value "SomeRandomString" (the default defined in config/app.php), which gives the error "key length is invalid". So the solution is to copy the value of APP_KEY to the value of 'key' in config/app.php. That's all! Solved!
That was exactly the issue! When loading the DB params from .env into config/database.php, Laravel was sometimes unable to read the environment variables and fell back to the defaults (forge for the DB name and username, and SomeRandomString for the APP_KEY). So, to solve this, I just did as advised: copied the APP_KEY from .env into config/app.php and edited the default DB parameters to the actual DB name, username and password I'm using. Just that, and the errors were gone. Hope someone finds this helpful.
I'm using the NoseGAE to write local unit tests for my App Engine application, however something is suddenly going wrong with one of my tests. I have standard setUp and tearDown functions, but one test seemingly broke for a reason I can't discern. Even stranger, setUp and tearDown are NOT getting called each time. I added global variables to count setUp/tearDown calls, and on my 4th test (the now seemingly broken one), setUp has been called twice and tearDown has been called once. Further, one of the objects from the third test exists when I query it by id, but not in a general query for its type. Here's some code that gives the bizarre picture:
class GameTest(unittest.TestCase):

    def setUp(self):
        self.testapp = webtest.TestApp(application)
        self.testbed = testbed.Testbed()
        self.testbed.activate()
        self.testbed.init_datastore_v3_stub(
            consistency_policy=datastore_stub_util.PseudoRandomHRConsistencyPolicy(probability=1),
            require_indexes=True,
            root_path="%s/../../../" % os.path.dirname(__file__)
        )

    def tearDown(self):
        self.testbed.deactivate()
        self.testapp.cookies.clear()

    def test1(self):
        ...

    def test2(self):
        ...

    def test3(self):
        ...
        # I create a Game object with the id 123 in this particular test
        Game(id=123).put()
        ...

    def test4(self):
        print "id lookup: ", Game.get_by_id(123)
        print "query: ", Game.query().get()
        self.assertIsNone(Game.get_by_id(123))
This is an abstraction of the tests, but illustrates the issue.
The 4th test fails because it asserts that an object with that id does not exist. When I print out the two statements:
id lookup: Game(key=Key('Game', 123))
query: None
The id lookup shows the object created in test3, but the query lookup is EMPTY. This makes absolutely no sense to me. Further, I am 100% sure the test was working earlier. Does anyone have any idea how this is even possible? Could I possibly have some local corrupted file causing an issue?
I somewhat "solved" this. This issue only reproduced when I had other test cases in other files that were failing. Once I solved those, all my tests passed. I still don't fully understand why other failing tests should cause these bizarre issues with the testbed, but to anyone else having this issue, try fixing your other test cases first and see if that doesn't cause it to go away.
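One defensive tweak that can help in situations like this is to register the testbed teardown with addCleanup, so the stubs are deactivated even when a test (or the rest of setUp) raises. This is only a sketch, not a fix for the root cause, and the import of application is a hypothetical path standing in for wherever your WSGI app actually lives:

import unittest

import webtest
from google.appengine.datastore import datastore_stub_util
from google.appengine.ext import testbed

from main import application  # hypothetical; adjust to your own app module


class GameTest(unittest.TestCase):

    def setUp(self):
        self.testapp = webtest.TestApp(application)
        self.testbed = testbed.Testbed()
        self.testbed.activate()
        # cleanups run even if the test (or the remainder of setUp) raises,
        # unlike code that relies on tearDown being reached
        self.addCleanup(self.testbed.deactivate)
        self.testbed.init_datastore_v3_stub(
            consistency_policy=datastore_stub_util.PseudoRandomHRConsistencyPolicy(
                probability=1),
            require_indexes=True)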
First, I'm quite new to GAE/Python, so please bear with me. Here's the situation:
I have a test_all.py which runs all the test suites in my package. The TestCase's setUp currently looks like this:
import test_settings as policy  # has consistency variables <- does not work

...

def setUp(self):
    # First, create an instance of the Testbed class.
    self.testbed = testbed.Testbed()
    # Then activate the testbed, which prepares the service stubs for use.
    self.testbed.activate()
    # Consistency policy for normal operations
    self.policy = None
    if policy.policy_flag == 'STRICT':
        # Create a consistency policy that will simulate the High Replication consistency model.
        self.policy = datastore_stub_util.PseudoRandomHRConsistencyPolicy(probability=0)
    # Initialize the datastore stub with this policy.
    self.testbed.init_datastore_v3_stub(consistency_policy=self.policy)
This is my primitive attempt to set up the datastore with different consistency policies and run tests against them. Total fail.
What I want is to run my test cases against different datastore consistencies in one go from my root test_all.py. Using the above or some other approach, how can I do this? How do I pass parameters to the TestCase from the test runner?
Python, unit test - Pass command line arguments to setUp of unittest.TestCase
The above thread shows exactly how best to pass runtime arguments to the test case; see the second answer in particular.
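To make that concrete for this case, here is a rough sketch (the names test_datastore and DatastoreTest are hypothetical stand-ins for your own modules) of a test_all.py that sets a class attribute on the TestCase and runs the same suite once per consistency setting; setUp would then read self.policy_flag instead of importing test_settings:

# test_all.py
import unittest

import test_datastore  # hypothetical module containing DatastoreTest


def run_with_policy(policy_flag):
    # parameterise the TestCase via a class attribute, then build and run the suite
    test_datastore.DatastoreTest.policy_flag = policy_flag
    suite = unittest.TestLoader().loadTestsFromModule(test_datastore)
    unittest.TextTestRunner(verbosity=2).run(suite)


if __name__ == '__main__':
    for flag in ('STRICT', 'EVENTUAL'):
        run_with_policy(flag)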
Is it possible to get info about which instance you're running on? I want to output a simple identifier for the instance the code is currently running on, for logging purposes.
Since there is no language tag, and seeing your profile history, I assume you are using GAE/J?
In that case, the instance ID information is embedded in one of the environment attributes that you can get via the ApiProxy.getCurrentEnvironment() method. You can then extract the instance ID from the resulting map using the key BackendService.INSTANCE_ID_ENV_ATTRIBUTE.
Even though the key is stored in BackendService, this approach will also work for frontend instances. So in summary, the following code would fetch the instance ID for you:
String tInstanceId = ApiProxy.getCurrentEnvironment()
        .getAttributes()
        .get( BackendService.INSTANCE_ID_ENV_ATTRIBUTE )
        .toString();
Please keep in mind that this approach is largely undocumented by Google and might be subject to change without warning in the future. But since your use case is only logging, I think it would be sufficient for now.
With the advent of Modules, you can get the current instance id in a more elegant way:
ModulesServiceFactory.getModulesService().getCurrentInstanceId()
Even better, you should wrap the call in a try catch so that it will work correctly locally too.
Import these:
import com.google.appengine.api.modules.ModulesException;
import com.google.appengine.api.modules.ModulesServiceFactory;
Then your method can run this
String instanceId = "unknown";
try {
    instanceId = ModulesServiceFactory.getModulesService().getCurrentInstanceId();
} catch (ModulesException e) {
    instanceId = e.getMessage();
}
Without the try catch, you will get some nasty errors when running locally.
I have found this super useful for debugging when using Endpoints mixed with Pub/Sub and other bits, to work out why some things behave differently and whether it is related to new instances.
Not sure about before, but today in 2021 the system environment variable GAE_INSTANCE appears to contain the instance id:
String instanceId = System.getenv("GAE_INSTANCE");
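The same environment variable is also set in the newer (second-generation) Python standard runtimes, so, for completeness, the equivalent lookup in Python is simply:

import os

instance_id = os.environ.get("GAE_INSTANCE", "unknown")  # unset when running locally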