How to clean database after scenario in Python behave

I'm pretty new to the world of Python/behave and API testing, and I'm trying to clean the database after a scenario runs by tagging it with @clean_database.
Can you please assist?
I guess I will need a database_context.py in my context_steps folder, but I'm not sure how to set up the connection to the database...

Seems like you have two questions here:
(1) How do I connect to the database?
This part doesn't involve behave, so you should ask it elsewhere, perhaps under the MySQL-Python tag if you're using MySQL (which you haven't specified) or under the Python tag.
(2) How do I use behave to run specific tags?
For the latter, check out the documentation for running tagged tests and see how to run behave from your Python program.
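That said, the behave side is straightforward: an after_scenario hook in environment.py can react to the tag. A minimal sketch, assuming MySQL via the pymysql driver (the credentials and table name are placeholders, not from the question):

# environment.py -- behave loads this file automatically.
import pymysql  # assumption: MySQL via pymysql; swap in your own driver

def _connect():
    # Placeholder credentials; replace with your real settings.
    return pymysql.connect(host="localhost", user="test",
                           password="secret", database="testdb")

def after_scenario(context, scenario):
    # Runs after every scenario; only clean up those tagged @clean_database.
    if "clean_database" in scenario.tags:
        conn = _connect()
        try:
            with conn.cursor() as cur:
                cur.execute("DELETE FROM patients")  # placeholder table
            conn.commit()
        finally:
            conn.close()

Tag the scenario with @clean_database in your feature file and the hook fires automatically; at run time you can also select just those scenarios with behave --tags=clean_database.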

A way to check Oracle finished SQL

I've got a Docker Compose cluster, and one of the containers is Oracle 12c. There is a schema.sql file to initialize the DB. I would like my application to wait until the DB has executed all the SQL. How can I do this automatically with bash?
Thank you very much for any suggestions!
There's a lot to explain here, but I'll link one of my previous answers for a similar problem; the steps are actually the same, because only the database service and setup differ.
1)
First, you have to provide a bash script that waits until a service responds on its TCP port. For databases, that usually happens once the DB is ready to go and all initialization is done. A good candidate is the wait-for-it.sh script written by vishnubob in his wait-for-it repo on GitHub.
2)
Second, you have to copy that script into each container that depends on your DB.
3)
Third, you specify an entrypoint in your compose file that executes the waiting script before the actual command that runs your service.
An example entrypoint (adapted from the answer I link to):
docker-entrypoint.sh:
#!/bin/bash
set -e
# Block (up to 30s) until the Oracle listener accepts connections.
# Note: Oracle listens on 1521 by default, not MySQL's 3306 as in the
# original answer this was adapted from.
./wait-for-it.sh oracle:1521 -t 30
# Hand control over to the container's original command.
exec "$@"
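To wire this up, point the service's entrypoint at the script in your compose file. A rough sketch of the relevant part of docker-compose.yml (the service and image names are placeholders, not from the question):

version: "3"
services:
  oracle:
    image: my-oracle-12c-image   # placeholder; any Oracle 12c image that runs schema.sql
  app:
    build: .
    depends_on:
      - oracle
    entrypoint: ["./docker-entrypoint.sh"]
    command: ["./start-app.sh"]   # placeholder; whatever normally starts your service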
All these steps are explained in detail in scenario 2 of the answer linked above; be aware that it references another answer of mine. This is a very common problem for beginners and takes quite a lot of explanation, so I can't post it all here.
A note concerning depends_on, which you might think is Docker's native solution to this problem: as the docs state, it only waits until the container is running, not until it has actually finished its internal jobs; Docker is not aware of how much there is to be done.

How to skip phoronix-test-suite initial questions

I would like to use phoronix-test-suite to benchmark cloud instances of different providers.
Nevertheless, automation hangs, because phoronix-test-suite asks three initial questions: to accept the license agreement, whether to upload benchmark results to OpenBenchmarking.org, and so on.
I know that batch runs can be preconfigured using the user-config.xml file, but this does not seem sufficient to run benchmarks non-interactively the first time.
phoronix-test-suite still asks its initial questions, which prevents automated benchmarking of the instances.
Can anybody help? Is there another file phoronix-test-suite needs so that it does not ask its initial questions?
Running "phoronix-test-suite enterprise-setup" on PTS 5.6+ is another way to avoid the initial setup questions.
After digging in the sources, I discovered an environment variable, PTS_SILENT_MODE. Just set it to 1.
Example:
On a fresh install, when I run
PTS_SILENT_MODE=1 phoronix-test-suite benchmark pts/openssl-1.9.0
the three initial questions are not asked.

Pass SQLCMD variables to dbDacFx provider with msdeploy

I'm currently using msdeploy's dbDacFx provider to deploy a .dacpac to a database. The dacpac expects three SQLCMD variables. The syntax I am using looks like this:
-setParam:kind=SqlCommandVariable,scope=Database.dacpac,match=foo1,value="foo1 value"
I've been trying everything I can find, but unfortunately there is next to no documentation around this process. The output I get from msdeploy says: "Missing values for the following SqlCmd variables: foo1 foo2 foo3".
If anybody can spot what I'm doing wrong, that would be fantastic; a pointer to documentation would be just as welcome. I would gladly accept an answer that says what I should do, but I would love to understand what all of these values are and why I'm doing it wrong.
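For context, the full invocation I'm attempting looks roughly like this (the source path and destination connection string are placeholders; it is wrapped here for readability but is one line in practice):

msdeploy.exe -verb:sync
  -source:dbDacFx="C:\build\MyDb.dacpac"
  -dest:dbDacFx="Data Source=myserver;Initial Catalog=MyDb;Integrated Security=True"
  -setParam:kind=SqlCommandVariable,scope=Database.dacpac,match=foo1,value="foo1 value"
  -setParam:kind=SqlCommandVariable,scope=Database.dacpac,match=foo2,value="foo2 value"
  -setParam:kind=SqlCommandVariable,scope=Database.dacpac,match=foo3,value="foo3 value"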
Edit:
At this point it would appear that such an option does not exist for dbDacFx. Our use case for this feature is deploying the same template database (managed in Visual Studio SQL projects) to 70+ databases, which we would like to do in parallel. dbSqlPackage (besides being deprecated) is not thread-safe and does not allow parallel deployments. dbDacFx overcomes this shortcoming; however, in my experimentation it cannot be passed SqlCmdVariables that reside at the project level, and it cannot use publish profiles the way dbSqlPackage can. I'm in contact with a member of the Web Deploy team and will post updates if I figure out how to overcome any of these shortcomings.
After emailing some Microsoft employees on the MSDeploy team, it turns out this is not possible at this time, but it may be considered for a future release.

"Unit" Testing Database

I'm running Oracle 11g SE1.
Just wondering if there are any tools that would allow me to test the data integrity of a (mostly read-only) schema. Essentially, what I want to do is have some queries run every night or so and check that they return the expected results. For example:
SELECT COUNT(*) FROM PATIENTS WHERE DISEASE = 'Clone-Killing Nanovirus';
Expected result: 59.
How do people normally do such testing?
I've used SQLUnit and written about it here. I don't believe any new development is being done on it but it should accomplish your goal.
SQL Developer (free, as in beer) also has a unit testing framework. I have installed it, and that's about it; I want to use it more, but I've been working in BI the past few years, so there's been no external pressure to learn it.
The tests you want to create sound pretty simple, so either of those should work well for you. The next step would be to have them run on a schedule (cron, Windows Task Scheduler, etc.), or you can go all in with a continuous integration tool like Atlassian's Bamboo (which I haven't used).
Of course, you could skip the tools altogether and just write scripts that are called from the command line. The fancy option is writing the results to a database table so you can easily report on them; the simple option is piping the results to a text file and reviewing it each day.
Hope this helps.
You could batch up your queries and run a simple Perl script using DBI that runs the queries, checks them against an accepted tolerance, and emails you if something doesn't meet its threshold. I've written this sort of DB-checking code before to make sure items were within limits. Perl is a good tool for this: the DBI module can connect to your database, you can run canned queries, and you can easily send yourself an email using the MIME package. http://www.perl.com/pub/1999/10/DBI.html
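If Perl isn't in your toolbox, the same pattern is a few lines of Python. A rough sketch using the cx_Oracle driver (the credentials, DSN, query, and expected count are placeholders):

# check_counts.py: run canned queries nightly and report mismatches.
import cx_Oracle  # assumes the Oracle client libraries are installed

# Each entry pairs a query with its expected result (examples only).
CHECKS = [
    ("SELECT COUNT(*) FROM patients WHERE disease = 'Clone-Killing Nanovirus'", 59),
]

def run_checks():
    conn = cx_Oracle.connect("scott", "tiger", "dbhost/orcl")  # placeholder credentials/DSN
    failures = []
    try:
        cur = conn.cursor()
        for query, expected in CHECKS:
            cur.execute(query)
            (actual,) = cur.fetchone()
            if actual != expected:
                failures.append((query, expected, actual))
    finally:
        conn.close()
    return failures

if __name__ == "__main__":
    for query, expected, actual in run_checks():
        print("FAIL: %s -> expected %s, got %s" % (query, expected, actual))

Schedule it with cron and pipe the output to mail, and you have the nightly check described above.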

Work with a database using Spock and Geb

I hope someone has already faced the issue of verifying that an application shows correct data from a database. I reviewed how Groovy uses SQL, but I have no idea where or how I should do that. I'm just starting to use Gradle+Spock+Geb for testing an application. I have a few files describing a couple of pages from the application, a couple of modules, and a file with a Spock specification. Where and how do I connect to the Oracle DB, use SQL, and compare the resulting data with the application's?
P.S. I write everything in Notepad++ and launch from the command line with 'gradlew firefoxTest'. Is there a more comfortable way to work with Gradle+Spock+Geb?
Thanks in advance.
Because there are no other answers, I wanted to provide a solution someone at my company thought of. This assumes you already have a project that uses some sort of JDBC. In our case it is JDBI.
The idea is to extend ClassLoader and then use it to load the data access object class directly in the JVM. That idea should work.
I have not tested it because it doesn't completely fit our use case. I'll admit this does not completely apply to your use case either, but technically you could just run the jar of an existing project, which can access the database.
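For the narrower question of connecting to Oracle and comparing query results inside a Spock spec, groovy.sql.Sql is the usual route. A minimal sketch (the JDBC URL, credentials, table, and expected value are placeholders; the Oracle JDBC driver must be on the test classpath):

import groovy.sql.Sql
import spock.lang.Specification

class PatientDataSpec extends Specification {

    def "page shows the same patient count as the database"() {
        setup:
        // Placeholder connection details; adjust to your environment.
        def sql = Sql.newInstance('jdbc:oracle:thin:@dbhost:1521:orcl',
                                  'user', 'password', 'oracle.jdbc.OracleDriver')

        when:
        def dbCount = sql.firstRow('SELECT COUNT(*) AS c FROM patients').c

        then:
        // In a real Geb test you would compare against what the page reports.
        dbCount == 59

        cleanup:
        sql.close()
    }
}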
