How to pass a variable argument to context.execute_steps in python-behave

I want to execute a certain step before executing my function. The step takes a variable as an argument, but I am not able to pass it in context.execute_steps.
eg.
call1 = "version"
call1_method = "get"
context.execute_steps('''When execute api{"method":call1_method, "call":call1}''')
However, this does not work: I get an error in argument parsing because the variables are not in quotes. I don't see any such example in the behave documentation. Any help would be really appreciated.

There's a bunch of stuff that could be going on. I found here that if you're using Python 3 you need to make the step text unicode by including a u in front of the '''. I was also a bit tripped up by the fact that you have to include When, Then or Given as part of the execute_steps text (not just the name of the step), but your example seems to be doing that. I'm not quite sure how you're trying to pass variables, but I can tell you that this is how execute_steps is used, and it works in Python 2 & 3 with behave 1.2.5:
@step('I set the Number to "{number:d}" exactly')
def step_impl(context, number):
    context.number = 10
    context.number = number

@step('I call another step')
def step_impl(context):
    context.execute_steps(u'''when I set the Number to "5" exactly''')
    print(context.number)
    assert context.number == 10, 'This will fail'
Then calling:
Given I call another step
When some not mentioned step
Then some other not mentioned step
would execute I set the Number to "5" exactly as part of running I call another step.
It's hard to say for your exact example because I'm unfamiliar with the other step you're trying to execute, but if you defined the previous step with something like:
@step('execute api "{method}" "{version}"')
def step_impl(context, method, version):
    # do stuff with the passed-in method and version variables
you should then be able to use it like so in another step:
@step('I call another step')
def step_impl(context):
    context.execute_steps(u'''When execute api "get" "version1"''')
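If you need to pass the values from your own variables instead of literals, one option (a sketch, assuming the step is defined with quoted parameters as above) is to build the step text with ordinary string formatting before handing it to execute_steps:
call1 = "version"
call1_method = "get"
# Quote the values so they match the '"{method}" "{version}"' pattern above.
context.execute_steps(u'When execute api "{0}" "{1}"'.format(call1_method, call1))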
If your problem is just passing information between steps, you can use context to pass it between them.
@step('I do the first function')
def step_impl(context):
    context.call1 = "version"
    context.call1_method = "get"

@step('I call another step')
def step_impl(context):
    print("%s and %s are available in this function" % (context.call1, context.call1_method))
And then call the steps in a row
When I do the first function
And I call another step
...

Related

Behave BDD reusing steps from common step file

How do I reuse steps from common_step.py?
example:
common_feature:
Given log in with "abc" user
common_step:
@given('log in with "{user}" user')
anotherFile_feature:
Given log in with "xyz" user
anotherFile_steps:
How do I pass that "xyz" user in here? Can I get an example?
Thanks
If you need to reuse your user in other steps or parts of your code, shouldn't you just assign the value of the user to a variable?
@given('log in with "{user}" user')
def do_login(context, user):
    context.user = user  # the context variable will be accessible during execution time
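For example, a later step could read the stored value back from context (the step text below is hypothetical, just to illustrate):
@then('the logged in user is "{expected_user}"')  # hypothetical step for illustration
def step_impl(context, expected_user):
    # context.user was set by the login step above
    assert context.user == expected_user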
First of all, I recommend that you post better-documented questions, as it is difficult to understand what you are asking.
As @herickmota says, if you want to share values between different functions in your code you should use the context variable, which is shared by all the steps during the execution of the test and in which you can store as many values as you want, so that you can retrieve them wherever you need them.
If, on the other hand, what you want is to call another step that you have already defined from inside a step, you should use the context.execute_steps method, which is documented at https://behave.readthedocs.io/en/stable/api.html?=execute_steps#behave.runner.Context.execute_steps, and you can see an example at https://jenisys.github.io/behave.example/tutorials/tutorial08.html.
I'll give you some examples:
The first is an example of a valid directory tree. You must have the *.feature files inside a directory named "features", and inside that directory you must also have the "steps" directory with the step definitions inside.
The second directory tree is invalid, because the another_file_feature.feature file is unable to find the step definitions. In any case, I can't think of any situation where you should separate the features in this way.
The last of the directory trees is the most commonly used layout, as it allows you to separate features into folders according to whatever criteria you see fit. This is the structure I would recommend for organising your Behave files.
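Reconstructed from the paths used in the examples below, that recommended layout looks roughly like this:
features/
    common_features/
        common_feature.feature
    another_features/
        another_file_feature.feature
    steps/
        common_step.py
        another_file_steps.py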
Once we are clear about the valid distributions of the files in different folders, let's talk about how to call the methods from the different *.feature files.
To begin with, the first Scenario we have as an example would be the features/common_features/common_feature.feature, in which we make the most typical call to a step to which we pass a parameter, in this case the name "Manuel".
Feature: Common_feature
Scenario: Scenario number one
Given log in with "Manuel" user
The definition of the step is in the file features/steps/common_step.py
from behave import *

@given(u'log in with "{user}" user')
def step_impl(context, user):
    print("I'm logging with the user: " + user)
The output if we run the command behave features/common_features/common_feature.feature is:
$ behave features/common_features/common_feature.feature
Feature: Common_feature # features/common_features/common_feature.feature:1
Scenario: Scenario number one # features/common_features/common_feature.feature:3
Given log in with "Manuel" user # features/steps/common_step.py:4
1 feature passed, 0 failed, 0 skipped
1 scenario passed, 0 failed, 0 skipped
1 step passed, 0 failed, 0 skipped, 0 undefined
Took 0m0.001s
In another path, we have the file features/another_features/another_file_feature.feature, in which we have defined the following two examples: one invoking again the step we already declared in features/steps/common_step.py, and another with a new step, which in turn calls the step log in with "{user}" user:
Feature: Common_feature
Scenario: Scenario number two
Given log in with "José" user
Scenario: Scenario number three
Given calling indirectly with "Antonio" user
The definition of the step in features/steps/another_file_steps.py is as follows:
from behave import *

@given(u'calling indirectly with {user} user')
def step_impl(context, user):
    context.execute_steps("Given log in with " + user + " user")
And the output of running the command behave features/another_features/another_file_feature.feature is:
$ behave features/another_features/another_file_feature.feature
Feature: Common_feature # features/another_features/another_file_feature.feature:1
Scenario: Scenario number two # features/another_features/another_file_feature.feature:3
Given log in with "José" user # features/steps/common_step.py:4
Scenario: Scenario number three # features/another_features/another_file_feature.feature:7
Given calling indirectly with "Antonio" user # features/steps/another_file_steps.py:4
1 feature passed, 0 failed, 0 skipped
2 scenarios passed, 0 failed, 0 skipped
2 steps passed, 0 failed, 0 skipped, 0 undefined
Took 0m0.006s
As you can see, these are some basic examples of the file structure in Behave and of how to call steps from the features, both directly and indirectly.

R automate testing?

Currently, I employ the following script to test for something called Granger causality. Note: my main question is about the script structure, not the method.
#These values always have to be specified manually
dat <- data.frame(df[2],df[3])
lag = 2
# VAR-Model
V <- VAR(dat,p=lag,type="both")
V$var
summary(V)
wald.test(b=coef(V$var[[1]]), Sigma=vcov(V$var[[1]]), Terms=c(seq(2, by=2, length=lag)))
names(cof1[2])
wald.test(b=coef(V$var[[2]]), Sigma=vcov(V$var[[2]]), Terms=c(seq(1, by=2, length=lag)))
names(cof1[1])
The main issue is that I always have to manually change the testing pair in dat <- data.frame(..). Furthermore, I always enter "lag = x" manually after some stationarity tests that cannot readily be automated.
Let's say I would have to test following pairs:
df[2],df[3]
df[2],df[4]
df[2],df[5]
df[6],df[7]
df[8],df[9]
Can I somehow state that in an array for the test? Assuming I would also know the lag for each testing pair, could I add that to it as well?
It would be perfect to output my test results directly into a table, instead of manually changing the data and then entering the results into Excel/LaTeX.

Odoo - Cannot loop through model records

I want to call a method every time my module gets installed or updated. Inside that method I want to loop through model records, but I'm only getting different errors.
This documentation looks pretty straightforward: https://www.odoo.com/documentation/9.0/reference/orm.html
But it doesn't work for me. I'm getting this error:
ParseError: "'account.tax' object has no attribute '_ids'" while parsing
This is how I call the method:
<openerp>
    <data>
        <function model="account.tax" name="_my_method" />
    </data>
</openerp>
I took this from the first answer here: https://www.odoo.com/forum/help-1/question/how-can-i-execute-a-sql-statement-on-module-update-and-installation-6131
My model:
class my_account_tax(models.Model):
    _name = 'account.tax'
    _inherit = 'account.tax'

    def _my_method(self, cr, uid, ids=None, context=None):
        self.do_operation()

    def do_operation(self):
        print self
        for record in self:
            print record
It is basically a copy-paste from the docs. I only added the method parameters cr, uid, etc. If I take them away (and just leave self), the error is a little different:
ParseError: "_my_method() takes exactly 1 argument (3 given)"
But it also does not tell me much.
Use the new API:
@api.multi  # with the new API you don't have to list all the parameters in the function
def _my_method(self):
Or you can keep the method as it is, look up the records on your model and loop through the result you get instead of looping over self.
If you use the new API, use: self.env['model_name'].search([domain])
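Putting that together, a minimal sketch of the new-API method for this case (the empty search domain and the printed field are assumptions; adjust them to the records you actually need):
from openerp import api, models

class my_account_tax(models.Model):
    _inherit = 'account.tax'

    @api.multi
    def _my_method(self):
        # The <function> call from XML passes no record ids, so search for
        # the records to loop over instead of iterating over self.
        for record in self.env['account.tax'].search([]):
            print record.name  # Odoo 9 runs on Python 2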

Google AppEngine Pipelines API

I would like to rewrite some of my tasks as pipelines, mainly because I need a way of detecting when a task has finished, or a way to start tasks in a specific order. My problem is that I'm not sure how to rewrite the recursive tasks as pipelines. By recursive I mean tasks that call themselves, like this:
class MyTask(webapp.RequestHandler):
    def post(self):
        cursor = self.request.get('cursor', None)
        [set cursor if not null]
        [fetch 100 entities from datastore]
        if len(result) >= 100:
            [ create the same task in the queue and pass the cursor ]
        [do actual work the task was created for]
Now I would really like to write it as a pipeline and do something similar to:
class DoSomeJob(pipeline.Pipeline):
    def run(self):
        with pipeline.InOrder():
            yield MyTask()
            yield MyOtherTask()
            yield DoSomeMoreWork(message2)
Any help with this one will be greatly appreciated. Thank you!
A basic pipeline just returns a value:
class MyFirstPipeline(pipeline.Pipeline):
    def run(self):
        return "Hello World"
The value has to be JSON serializable.
If you need to coordinate several pipelines you will need to use a generator pipeline and the yield statement.
class MyGeneratorPipeline(pipeline.Pipeline):
    def run(self):
        yield MyFirstPipeline()
You can treat the yielding of a pipeline as if it returns a 'future'.
You can pass this future as the input arg to another pipeline:
class MyGeneratorPipeline(pipeline.Pipeline):
    def run(self):
        result = yield MyFirstPipeline()
        yield MyOtherPipeline(result)
The Pipeline API will ensure that the run method of MyOtherPipeline is only called once the result future from MyFirstPipeline has been resolved to a real value.
You can't mix yield and return in the same method. If you are using yield the value has to be a Pipeline instance. This can lead to a problem if you want to do this:
class MyRootPipeline(pipeline.Pipeline):
    def run(self, *input_args):
        results = []
        for input_arg in input_args:
            intermediate = yield MyFirstPipeline(input_arg)
            result = yield MyOtherPipeline(intermediate)
            results.append(result)
        yield results
In this case the Pipeline API just sees a list in your final yield results line, so it doesn't know to resolve the futures inside it before returning and you will get an error.
They're not documented but there is a library of utility pipelines included which can help here:
https://code.google.com/p/appengine-pipeline/source/browse/trunk/src/pipeline/common.py
So a version of the above which actually works would look like:
import pipeline
from pipeline import common

class MyRootPipeline(pipeline.Pipeline):
    def run(self, *input_args):
        results = []
        for input_arg in input_args:
            intermediate = yield MyFirstPipeline(input_arg)
            result = yield MyOtherPipeline(intermediate)
            results.append(result)
        yield common.List(*results)
Now we're ok, we're yielding a pipeline instance and Pipeline API knows to resolve its future value properly. The source of the common.List pipeline is very simple:
class List(pipeline.Pipeline):
    """Returns a list with the supplied positional arguments."""
    def run(self, *args):
        return list(args)
...at the point that this pipeline's run method is called the Pipeline API has resolved all of the items in the list to actual values, which can be passed in as *args.
Anyway, back to your original example, you could do something like this:
class FetchEntitites(pipeline.Pipeline):
    def run(self, cursor=None):
        if cursor is not None:
            cursor = Cursor(urlsafe=cursor)
        # I think it's ok to pass None as the cursor here, haven't confirmed
        results, next_curs, more = MyModel.query().fetch_page(100, start_cursor=cursor)
        # queue up a task for the next page of results immediately
        future_results = []
        if more:
            future_results = yield FetchEntitites(next_curs.urlsafe())
        current_results = [ do some work on `results` ]
        # (assumes current_results and future_results are both lists)
        # this will have to wait for all of the recursive calls in
        # future_results to resolve before it can resolve itself:
        yield common.Extend(current_results, future_results)
Further explanation
At the start I said we can treat result = yield MyPipeline() as if it returns a 'future'. This is not strictly true; obviously we are actually just yielding the instantiated pipeline. (Needless to say, our run method is now a generator function.)
The weird part of how Python's yield expressions work is that, despite what it looks like, the value that you yield goes somewhere outside the function (to the Pipeline API apparatus) rather than into your result var. The value of the result var on the left side of the expression is also pushed in from outside the function, by calling send on the generator (the generator being the run method you defined).
So by yielding an instantiated Pipeline, you are letting the Pipeline API take that instance and call its run method somewhere else at some other time (in fact it will be passed into a task queue as a class name and a set of args and kwargs and re-instantiated there... this is why your args and kwargs need to be JSON serializable too).
Meanwhile the Pipeline API sends a PipelineFuture object into your run generator and this is what appears in your result var. It seems a bit magical and counter-intuitive but this is how generators with yield expressions work.
It's taken quite a bit of head-scratching for me to work it out to this level and I welcome any clarifications or corrections on anything I got wrong.
When you create a pipeline, it hands back an object that represents a "stage". You can ask the stage for its id, then save it away. Later, you can reconstitute the stage from the saved id, then ask the stage if it's done.
See http://code.google.com/p/appengine-pipeline/wiki/GettingStarted and look for has_finalized. There's an example that does most of what you need.
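A rough sketch of that idea, assuming the DoSomeJob pipeline from the question (from_id and has_finalized come from the Pipeline API, but treat the details here as an outline rather than tested code):
# Kick off the pipeline and remember its id somewhere (datastore, memcache, ...)
stage = DoSomeJob()
stage.start()
saved_id = stage.pipeline_id

# Later, e.g. in another request handler, reconstitute the stage from the id:
stage = DoSomeJob.from_id(saved_id)
if stage and stage.has_finalized:
    # the whole pipeline, including its child pipelines, is done
    handle_result(stage.outputs.default.value)  # handle_result is a placeholder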

parallel code execution python2.7 ndb

In my app, for one of the handlers, I need to get a bunch of entities and execute a function for each one of them.
I have the keys of all the entities I need. After fetching them I need to execute 1 or 2 instance methods for each one of them, and this slows my app down quite a bit. Doing this for 100 entities takes around 10 seconds, which is way too slow.
I'm trying to find a way to get the entities and execute those functions in parallel to save time, but I'm not really sure which way is best.
I tried the _post_get_hook, but then I have a future object and need to call get_result() and execute the function in the hook. That works kind of ok in the SDK, but I get a lot of 'maximum recursion depth exceeded while calling a Python object' errors; I can't really understand why, and the error message is not very elaborate.
Is the Pipeline API or ndb.Tasklets what I'm searching for?
At the moment I'm going by trial and error, but I would be happy if someone could point me in the right direction.
EDIT
My code is something similar to a filesystem: every folder contains other folders and files. The path of a Collection is set on another entity, so to serialize a collection entity I need to get the referenced entity and read its path. On a Collection, the serialized_assets() function gets slower the more entities it contains. If I could execute a serialize function for each contained asset side by side, it would speed things up quite a bit.
class Index(ndb.Model):
    path = ndb.StringProperty()

class Folder(ndb.Model):
    label = ndb.StringProperty()
    index = ndb.KeyProperty()
    # contents is a list of keys of contained Folders and Files
    contents = ndb.StringProperty(repeated=True)

    def serialized_assets(self):
        assets = ndb.get_multi(self.contents)
        serialized_assets = []
        for a in assets:
            kind = a._get_kind()
            assetdict = a.to_dict()
            if kind == 'Collection':
                assetdict['path'] = a.path
                # other operations ...
            elif kind == 'File':
                assetdict['another_prop'] = a.another_property
                # ...
            serialized_assets.append(assetdict)
        return serialized_assets

    @property
    def path(self):
        return self.index.get().path

class File(ndb.Model):
    filename = ndb.StringProperty()
    # other properties....

    @property
    def another_property(self):
        # compute something here
        return computed_property
EDIT2:
@ndb.tasklet
def serialized_assets(self, keys=None):
    assets = yield ndb.get_multi_async(keys)
    raise ndb.Return([asset.serialized for asset in assets])
Is this tasklet code ok?
Since most of the execution time of your functions is spent waiting for RPCs, NDB's async and tasklet support is your best bet. That's described in some detail here. The simplest usage for your requirements is probably to use the ndb.map function, like this (from the docs):
@ndb.tasklet
def callback(msg):
    acct = yield msg.author.get_async()
    raise ndb.Return('On %s, %s wrote:\n%s' % (msg.when, acct.nick(), msg.body))

qry = Message.query().order(-Message.when)
outputs = qry.map(callback, limit=20)
for output in outputs:
    print output
The callback function is called for each entity returned by the query, and it can do whatever operations it needs (using _async methods and yield to do them asynchronously), returning the result when it's done. Because the callback is a tasklet, and uses yield to make the asynchronous calls, NDB can run multiple instances of it in parallel, and even batch up some operations.
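Applied to the question's models, a rough sketch of the same idea (the names are taken from the question; treat this as an outline under those assumptions, not tested code):
@ndb.tasklet
def serialize_one_async(key):
    asset = yield key.get_async()
    assetdict = asset.to_dict()
    if isinstance(asset, Folder):
        # fetch the referenced Index asynchronously instead of going through
        # the synchronous .path property
        index = yield asset.index.get_async()
        assetdict['path'] = index.path
    raise ndb.Return(assetdict)

@ndb.tasklet
def serialized_assets_async(keys):
    # yielding a list of tasklet futures waits for all of them in parallel
    dicts = yield [serialize_one_async(k) for k in keys]
    raise ndb.Return(dicts)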
The pipeline API is overkill for what you want to do. Is there any reason why you couldn't just use a taskqueue?
Use the initial request to get all of the entity keys, and then enqueue a task for each key having the task execute the 2 functions per-entity. The concurrency will be based then on the number of concurrent requests as configured for that taskqueue.
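A minimal sketch of that approach (the /process_entity URL and the two method names are placeholders for whatever your entities actually do):
from google.appengine.api import taskqueue
from google.appengine.ext import ndb
import webapp2

# In the initial request: fan out one task per entity key.
for key in keys:
    taskqueue.add(url='/process_entity', params={'key': key.urlsafe()})

# Handler mapped to /process_entity: load the entity and run the two methods.
class ProcessEntity(webapp2.RequestHandler):
    def post(self):
        entity = ndb.Key(urlsafe=self.request.get('key')).get()
        entity.first_method()   # placeholder for the first instance method
        entity.second_method()  # placeholder for the second instance method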
