Behave BDD reusing steps from common step file - python-behave

How do I reuse steps from common_step.py?
Example:
common_feature:
Given log in with "abc" user
common_step:
@given('log in with "{user}" user')
anotherFile_feature:
Given log in with "xyz" user
anotherFile_steps:
How do I pass that "xyz" user in here? Can I get an example?
Thanks

If you need to reuse your user in other steps or parts of your code, shouldn't you just assign the value of the user to a variable?
@given('log in with "{user}" user')
def do_login(context, user):
    context.user = user  # the context variable will be accessible during execution time
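Any later step can then read the stored value back from context — a minimal sketch (the step text here is just an illustration, not from the question):
@then('the current user is "{user}"')
def check_user(context, user):
    # context.user was stored by the login step above
    assert context.user == user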

First of all, I recommend that you post better-documented questions, as it is difficult to understand what you are asking.
As @herickmota says, if you want to share values between different functions in your code you should use the context variable, which is shared by all the steps during the execution of the test and in which you can store as many values as you want, so you can retrieve them wherever you need.
If, on the other hand, what you want is to call another step that you have already defined from within a step, you should use the context.execute_steps method. Its documentation is at https://behave.readthedocs.io/en/stable/api.html?=execute_steps#behave.runner.Context.execute_steps and you can see an example at https://jenisys.github.io/behave.example/tutorials/tutorial08.html.
I'll give you some examples:
This is an example of a valid directory tree. You must have the *.feature files inside a directory named "features", and inside that directory you must also have the "steps" directory with the step definitions inside.
The second directory tree is an invalid directory tree, because the another_file_feature.feature file is unable to find the step definitions. In any case, I can't find any situation in which you should separate the features this way.
The last of the directory trees is the most commonly used layout, as it allows you to separate features into folders according to whatever criteria you see fit. This is the structure I would recommend for organising your Behave files.
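For reference, the recommended layout used in the examples that follow (reconstructed from the file paths shown in the command output below) is:
features/
    steps/
        common_step.py
        another_file_steps.py
    common_features/
        common_feature.feature
    another_features/
        another_file_feature.feature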
Once we are clear about the valid distributions of the files in different folders, let's talk about how to call the methods from the different *.feature files.
To begin with, the first Scenario we have as an example would be the features/common_features/common_feature.feature, in which we make the most typical call to a step to which we pass a parameter, in this case the name "Manuel".
Feature: Common_feature

  Scenario: Scenario number one
    Given log in with "Manuel" user
The definition of the step is in the file features/steps/common_step.py
from behave import *

@given(u'log in with "{user}" user')
def step_impl(context, user):
    print("I'm logging with the user: " + user)
The output if we run the command behave features/common_features/common_feature.feature is:
$ behave features/common_features/common_feature.feature
Feature: Common_feature # features/common_features/common_feature.feature:1
Scenario: Scenario number one # features/common_features/common_feature.feature:3
Given log in with "Manuel" user # features/steps/common_step.py:4
1 feature passed, 0 failed, 0 skipped
1 scenario passed, 0 failed, 0 skipped
1 step passed, 0 failed, 0 skipped, 0 undefined
Took 0m0.001s
In another path we have the file features/another_features/another_file_feature.feature, in which we have defined the following two scenarios: one that invokes the step we already declared in features/steps/common_step.py again, and another with a new step, which indirectly calls the step log in with "{user}" user:
Feature: Common_feature

  Scenario: Scenario number two
    Given log in with "José" user

  Scenario: Scenario number three
    Given calling indirectly with "Antonio" user
The definition of the new step in features/steps/another_file_steps.py is as follows:
from behave import *

@given(u'calling indirectly with {user} user')
def step_impl(context, user):
    # the pattern has no quotes around {user}, so the captured value already
    # includes the quotes written in the feature file
    context.execute_steps("Given log in with " + user + " user")
And the output of running the command behave features/another_features/another_file_feature.feature is:
$ behave features/another_features/another_file_feature.feature
Feature: Common_feature # features/another_features/another_file_feature.feature:1
Scenario: Scenario number two # features/another_features/another_file_feature.feature:3
Given log in with "José" user # features/steps/common_step.py:4
Scenario: Scenario number three # features/another_features/another_file_feature.feature:7
Given calling indirectly with "Antonio" user # features/steps/another_file_steps.py:4
1 feature passed, 0 failed, 0 skipped
2 scenarios passed, 0 failed, 0 skipped
2 steps passed, 0 failed, 0 skipped, 0 undefined
Took 0m0.006s
As you can see, these are some basic examples of the file structure in Behave and of how to call the steps from the features, both directly and indirectly.
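One more possibility worth mentioning (a sketch reusing the steps defined above; this step is not part of the original example): context.execute_steps also accepts a multi-line string, so several shared steps can be chained in a single call:
@given(u'"{user}" is fully logged in')
def step_impl(context, user):
    # every line of the text must start with a step keyword (Given/When/Then/And)
    context.execute_steps(u'''
        Given log in with "%s" user
        Given calling indirectly with "%s" user
    ''' % (user, user))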

Related

how to pass variable argument to context.execute_steps in python-behave

I want to execute a certain step before executing my function. The step takes a variable as an argument. I am not able to pass it in context.execute_steps.
e.g.
call1 = "version"
call1_method = "get"
context.execute_steps('''When execute api{"method":call1_method, "call":call1}''')
However, this does not work; I get an error in argument parsing because the variables are not in quotes. I don't see any such example in the behave documentation. Any help would be really appreciated.
There's a bunch of stuff that could be going on. I found here that if you're using Python 3 you need to make the string unicode by including a u in front of '''. I was also a bit tripped up by the fact that you have to include When, Then or Given as part of the text you pass to execute_steps (not just the name of the step), but your example seems to be doing that. I'm a bit confused by the way you're passing variables, but I can tell you this is how you use execute_steps, and it works in Python 2 and 3 with behave 1.2.5.
@step('I set the Number to "{number:d}" exactly')
def step_impl(context, number):
    context.number = number

@step('I call another step')
def step_impl(context):
    context.execute_steps(u'''when I set the Number to "5" exactly''')
    print(context.number)
    assert context.number == 10, 'This will fail'
Then calling:
Given I call another step
When some not mentioned step
Then some other not mentioned step
This would execute when I set the Number to "5" exactly as part of the step I call another step.
It's hard to say for your exact example because I'm unfamiliar with the other step you're trying to execute, but if you defined the previous step with something like:
@step('execute api "{method}" "{version}"')
def step_impl(context, method, version):
    # do stuff with the passed-in method and version variables
    pass
you should then be able to use it like so in another step:
@step('I call another step')
def step_impl(context):
    context.execute_steps(u'''When execute api "get" "version1"''')
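If the values live in variables, as in the question, one approach is to interpolate them into the step text before calling execute_steps — a sketch using the asker's variable names:
call1 = "version"
call1_method = "get"
# build the step text first, keeping the quotes the step pattern expects,
# then execute it
context.execute_steps(u'When execute api "%s" "%s"' % (call1_method, call1))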
If your problem is just passing information between steps, you could simply use context to pass it between them.
@step('I do the first function')
def step_impl(context):
    context.call1 = "version"
    context.call1_method = "get"

@step('I call another step')
def step_impl(context):
    print("%s and %s are available in this function" % (context.call1, context.call1_method))
And then call the steps in a row
When I do the first function
And I call another step
...

How to generate separate requests by loop method & using Beanshell preprocessor

Can someone help me out with a solution?
Objective/Issue: In JMeter, the search-by-subject-name request needs to be passed the variable "subjName" from a Beanshell PreProcessor.
"subjName" is the variable holding the list of subjects which are being sent to the "search by subject" request.
My objective is to send each subject separately, and FOR EACH SUBJECT A NEW REQUEST SHOULD BE GENERATED.
The Beanshell script is as follows
and the search by subject name
Please suggest the logic/script for how I can generate a different request for each subject.
Define a loop with loop count 9.
Add a Counter: Start: 0, Increment: 1, Maximum: 8, Reference Name: cnt.
Inside the Beanshell script use vars.get("cnt") instead of the variable i (you don't need the loop inside Beanshell).

R automate testing?

Currently, I employ the following script to test for something called Granger causality. Note: my main question is about the script structure, not the method.
#These values always have to be specified manually
dat <- data.frame(df[2],df[3])
lag = 2
# VAR-Model
V <- VAR(dat,p=lag,type="both")
V$var
summary(V)
wald.test(b=coef(V$var[[1]]), Sigma=vcov(V$var[[1]]), Terms=c(seq(2, by=2, length=lag)))
names(cof1[2])
wald.test(b=coef(V$var[[2]]), Sigma=vcov(V$var[[2]]), Terms=c(seq(1, by=2, length=lag)))
names(cof1[1])
The main issue is that I always have to manually change the testing pair in dat <- data.frame(..). Further, I always manually enter "lag = x" after some stationarity tests that cannot readily be automated.
Let's say I would have to test the following pairs:
df[2],df[3]
df[2],df[4]
df[2],df[5]
df[6],df[7]
df[8],df[9]
Can I somehow state that in an array for the test? Assuming I also know the lag for each testing pair, could I add that to it as well?
It would be perfect to output my test results directly into a table, instead of manually changing the data and then entering the result into an Excel/LaTeX table.

boost log every hour

I'm using Boost.Log and I want to implement a basic logging policy: a new error log at the beginning of each hour (if errors exist), named like "file_%Y%m%d%H.log".
I have 2 problems with this boost library:
1. How do I rotate the file at the beginning of each hour?
This isn't possible with the rotation_at_time_interval parameter, because it creates the new file relative to the first record written to the file, so the hour in the file name doesn't follow that rule. Is it possible to have multiple rotation_at_time_point values for one file sink, or is there some other solution?
2. When the file exceeds some size I want it to start a new file, and in that case it should append an index to the file name. With the rotation_size parameter and %N in the file name, N keeps incrementing the whole time the application is running. I want N to be reset at the beginning of each hour, just as the file name changes. Does anybody have any idea how to do that with this boost log library?
This is a basic principle of creating log files in industry. I really don't understand how this can't be done with a library which is dedicated to creating log files.
The library itself doesn't provide a way to rotate the file at the beginning of every hour, but I had the same problem, so I used a function wrapper which returns true at the beginning of every hour.
I find this way better for me, because I can control the efficiency of the code.
From boost.org:
bool is_it_time_to_rotate();  // should return true at the beginning of every hour

void init_logging()
{
    boost::shared_ptr< sinks::text_file_backend > backend =
        boost::make_shared< sinks::text_file_backend >(
            keywords::file_name = "file_%5N.log",
            // rotation is delegated entirely to the predicate above
            keywords::time_based_rotation = &is_it_time_to_rotate
        );
}
As for the second question, I honestly don't understand it well.

parallel code execution python2.7 ndb

In my app, for one of the handlers, I need to get a bunch of entities and execute a function for each one of them.
I have the keys of all the entities I need. After fetching them I need to execute 1 or 2 instance methods for each one of them, and this slows my app down quite a bit. Doing this for 100 entities takes around 10 seconds, which is way too slow.
I'm trying to find a way to get the entities and execute those functions in parallel to save time, but I'm not really sure which way is best.
I tried the _post_get_hook, but then I have a future object and need to call get_result() and execute the function in the hook. That works kind of OK in the SDK, but I get a lot of 'maximum recursion depth exceeded while calling a Python object' errors and I can't really understand why; the error message is not very elaborate.
Is the Pipeline API or ndb.Tasklets what I'm searching for?
At the moment I'm going by trial and error, but I would be happy if someone could point me in the right direction.
EDIT
My code is something similar to a filesystem: every folder contains other folders and files. The path of a Collection is set on another entity, so to serialize a collection entity I need to get the referenced entity and read its path. On a Collection, the serialized_assets() function gets slower the more entities it contains. If I could execute a serialize function for each contained asset side by side, it would speed things up quite a bit.
from google.appengine.ext import ndb

class Index(ndb.Model):
    path = ndb.StringProperty()

class Folder(ndb.Model):
    label = ndb.StringProperty()
    index = ndb.KeyProperty()
    # contents is a list of keys of contained Folders and Files
    contents = ndb.KeyProperty(repeated=True)

    def serialized_assets(self):
        assets = ndb.get_multi(self.contents)
        serialized_assets = []
        for asset in assets:
            kind = asset._get_kind()
            assetdict = asset.to_dict()
            if kind == 'Collection':
                assetdict['path'] = asset.path
                # other operations ...
            elif kind == 'File':
                assetdict['another_prop'] = asset.another_property
                # ...
            serialized_assets.append(assetdict)
        return serialized_assets

    @property
    def path(self):
        return self.index.get().path

class File(ndb.Model):
    filename = ndb.StringProperty()
    # other properties....

    @property
    def another_property(self):
        # compute something here
        return computed_property
EDIT2:
@ndb.tasklet
def serialized_assets(self, keys=None):
    assets = yield ndb.get_multi_async(keys)
    raise ndb.Return([asset.serialized for asset in assets])
Is this tasklet code OK?
Since most of the execution time of your functions is spent waiting for RPCs, NDB's async and tasklet support is your best bet. That's described in some detail here. The simplest usage for your requirements is probably the query's map() function, like this (from the docs):
@ndb.tasklet
def callback(msg):
    acct = yield msg.author.get_async()
    raise ndb.Return('On %s, %s wrote:\n%s' % (msg.when, acct.nick(), msg.body))

qry = Message.query().order(-Message.when)
outputs = qry.map(callback, limit=20)
for output in outputs:
    print output
The callback function is called for each entity returned by the query, and it can do whatever operations it needs (using _async methods and yield to do them asynchronously), returning the result when it's done. Because the callback is a tasklet, and uses yield to make the asynchronous calls, NDB can run multiple instances of it in parallel, and even batch up some operations.
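Applied to the Folder/File models from the question, a rough, untested sketch of the same idea (the kind name mirrors the check in serialized_assets(), and contents is assumed to hold keys) might look like this:
@ndb.tasklet
def serialize_one_async(key):
    asset = yield key.get_async()              # fetch the asset without blocking
    assetdict = asset.to_dict()
    if asset._get_kind() == 'Collection':
        index = yield asset.index.get_async()  # the path lookup is also asynchronous
        assetdict['path'] = index.path
    raise ndb.Return(assetdict)

@ndb.tasklet
def serialized_assets_async(folder):
    # yielding a list of futures waits for all of them in parallel
    results = yield [serialize_one_async(k) for k in folder.contents]
    raise ndb.Return(results)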
The pipeline API is overkill for what you want to do. Is there any reason why you couldn't just use a taskqueue?
Use the initial request to get all of the entity keys, and then enqueue a task for each key, having each task execute the 2 functions for its entity. The concurrency will then be based on the number of concurrent requests configured for that taskqueue.
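A minimal sketch of that approach using the deferred library (process_entity is a hypothetical helper, not something from the question):
from google.appengine.ext import deferred, ndb

def process_entity(urlsafe_key):
    # each call runs in its own task, so the queue executes them concurrently
    entity = ndb.Key(urlsafe=urlsafe_key).get()
    # ... call the 1 or 2 instance methods on the entity here ...

# in the handler, after collecting the keys:
for key in keys:
    deferred.defer(process_entity, key.urlsafe())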
