I have been asked to provide a list of every behave Feature and Scenario we run as part of our regression pack (not the steps) for a document going to an external client.
As our regression test suite is currently around 50 feature files with at least 10 scenarios in each, I would rather not copy and paste manually.
Is there a way to export the Feature name and ID, and then the name and ID of each Scenario that comes under that Feature, to either a CSV or text file?
Currently our behave tests are run locally, and I am using the PyCharm IDE to edit them.
I have found a roundabout way to do this.
Set behave to write its output to an external txt file using the configuration setting
outfiles = test_list
Then use the behave -d command to run the tests as a dry run.
This populates the txt file with the feature, scenario and step of every test.
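Putting the two pieces together, the setting lives in behave's config file (behave.ini or setup.cfg, whichever you use):

# behave.ini
[behave]
outfiles = test_list

and then the dry run from the command line, so no steps actually execute:

behave -d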
I can then import this into Excel and, through filtering, isolate the feature and scenario lines (removing the steps), then use Text to Columns to split the feature/scenario description from its test path/name.
If there is a less roundabout way of doing this it would be good to know, as it looks like this is information we will need to provide on a semi-regular basis going forward.
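One less roundabout option might be to drive behave's own Gherkin parser from a short script and write the names straight to CSV. This is only a sketch: it relies on behave's internal behave.parser.parse_file API (which may change between versions) and assumes the feature files live under a features/ directory:

import csv
from pathlib import Path

from behave.parser import parse_file  # behave's own Gherkin parser

with open('regression_scenarios.csv', 'w', newline='') as out:
    writer = csv.writer(out)
    writer.writerow(['feature file', 'feature', 'scenario'])
    for feature_path in sorted(Path('features').rglob('*.feature')):
        feature = parse_file(str(feature_path))
        for scenario in feature.scenarios:
            writer.writerow([str(feature_path), feature.name, scenario.name])

Scenario Outlines should show up once per outline rather than once per generated example, which is usually what you want for a client-facing list.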
You can take advantage of context.scenario to get the scenario name and feature name and then write them to a text file.
You should put this code in after_scenario in environment.py so that you can also get the scenario status.
I am using this to export the scenario name, status and feature name into a text file, each separated by "|". I later import this file into an Excel file for reporting.
Here is the code you can use for reference:
import logging
import os


class ReportingHelper:

    @staticmethod
    def write_scenario_summary(context, scenario, report_path):
        try:
            # scenario status can be one of: untested, skipped, passed, failed
            status = scenario.compute_status().name.upper()
            feature = ReportingHelper.get_feature_name(scenario)
            logging_info = '{status} | {feature} | {scenario_name}'.format(
                status=status,
                feature=feature,
                scenario_name=scenario.name)
            with open(report_path, 'a') as report_file:
                print(logging_info, file=report_file)
        except Exception:
            logging.exception('Failed to write the summary for: %s', scenario.name)

    @staticmethod
    def get_feature_name(scenario):
        feature_file_path = scenario.feature.filename
        return os.path.basename(feature_file_path)
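A minimal environment.py hook to call it could look like this (the module name reporting_helper and the report path are only examples; adjust the import to wherever the helper actually lives):

# environment.py
from reporting_helper import ReportingHelper

def after_scenario(context, scenario):
    # append one summary line per scenario once its final status is known
    ReportingHelper.write_scenario_summary(context, scenario, 'scenario_summary.txt')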
Hope it helps.
I have created a Logic App where I need to check whether my file name contains "ABC"; if so, I need to copy the file and paste it into the ABC folder in Azure, otherwise check whether the file name contains "ZYX" and paste it into the ZYX folder in Azure.
The Switch function is giving me an error:
"The execution of template action 'Switch' failed: The result of the evaluation of 'scope' action expression '#body('Get_file_content')' is not valid. It is of type 'Object' but is expected to be a value of type 'String, Integer'." [Image1]
Or, if I try to use two conditions in a parallel branch, it gives me the below error.
[Image2]
I also tried Conditions: if the file name contains "ABC" then copy and paste into the ABC folder, and if false I tried using another condition inside the false branch.
Also, a follow-up question: if I have multiple file names containing ABC, can I merge them into one file and paste that into the ABC folder in Azure Blob?
[Image: using the Switch function]
[Image: Conditions using a parallel branch]
Attaching the latest screenshot with your suggestion.
I just tried using one and it's still giving me an error.
Reason for the error: you cannot pass file content as a condition check in the Switch connector.
Solution: in order to get the desired output described above, you need to pass the file name as the condition check in the Switch control. As we receive the file name from the SharePoint connector in base64 format, we need to decode it to a string using **base64ToString(FileName)** in order to compare it against the folder names present in the containers/blob.
Here is the code view of the logic app based on the above-discussed requirement.
Also, a follow-up question: if I have multiple file names containing ABC, can I merge them into one file and paste that into the ABC folder in Azure Blob?
Using the above flow, merging of files is not possible, since each file is saved with a different extension; however, the file is overwritten in the blob if you upload or update the same file.
For more information about merging files using a Logic App, you can refer to this blog.
UPDATED ANSWER
Before comparing the string/filename, you can use a 'Compose' action before the 'Condition' connector and convert the string to lower case:
toLower(base64ToString(triggerOutputs()['headers']['x-ms-file-name-encoded']))
I currently have a Python program which exports Test Runs, Test Plans, and Test Cases to CSV.
I am using the TestRun model but I cannot get the information highlighted in the status column seen here. Is there any way to get this information?
Thanks!
I currently have a Python program which exports Test Runs, Test Plans, and Test Cases to CSV.
This is great. Please consider contributing your script under https://github.com/kiwitcms/api-scripts
I am using the TestRun model but I cannot get the information highlighted in the status column seen here. Is there any way to get this information?
Note: these screenshots are old!
There is no TestRun.status field; the status is only a visual property which is calculated by the UI based on the presence or absence of the TestRun.stop_date field. If this field is null/None then the status is "Running", otherwise it is "Stopped".
In the current TR search page, for example, we query the TestRun.filter API with a stop_date__isnull=True parameter.
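If you want to reproduce that label in your own CSV export, the logic boils down to something like this (a sketch only, assuming each record is a dict as returned by the TestRun.filter RPC method):

def test_run_status(test_run):
    # there is no status field on TestRun; the UI label is derived
    # purely from whether stop_date has been set
    return 'Running' if not test_run.get('stop_date') else 'Stopped'

You can then write test_run_status(run) into the status column of your CSV.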
I want to read a single file (the file is an HTML document) from my computer and store it in a Corpus (I'm using the tm package).
Do you have any solution to do that?
Here is what I tried:
data<-read.csv(fileName)
c2<-Corpus(VectorSource(data))
It mostly works, but I sometimes get the error: "more columns than column names".
I guess I'm not supposed to use read.csv for a webpage, but I didn't find a better solution.
Thanks for your help =)
A webpage definitely does not conform to the specification that a CSV file should. Instead, you probably want to use the readHTMLTable function from the XML package.
This grabs from an actual webpage, but it should be the same idea:
file <- "http://xkcd.com/"
dat <- readLines(file)
c2 <- Corpus(VectorSource(dat))
I am using the Drupal 7 Migrate module to create a series of nodes from JPG and EPS files. I can get them to import just fine, but I notice that when I am done importing them, if I look at the nodes it creates, none of the attached filefield and thumbnail files contain filename information.
Upon inspecting the file_managed table I see that both the filename and filemime fields are empty for ONLY the files that I attached via the migrate module. This also creates an issue with downloading the files.
Now, I think the problem has to do with the fact that I am using "file_link" instead of "file_copy" as the file operation I specify. The problem is that I am importing around 2 TB (that's terabytes) of image files. We had to put in a special request with Rackspace just to get access to that much disk space on our server, so I can't go around copying from one directory to the next because of space issues. So "file_link" seems like the obvious choice.
Now you probably want to see how I am doing this exactly, so here is the code snippet:
$jpg_arguments = MigrateFileFieldHandler::arguments(NULL,
  'file_link', FILE_EXISTS_RENAME, 'en', array('source_field' => 'jpg_name'),
  array('source_field' => 'jpg_filename'), array('source_field' => 'jpg_filename'));
$this->addFieldMapping('field_image', 'jpg_uri')
  ->arguments($jpg_arguments);
As you can see I am specifying no base path (just like the beer.inc example file does). I have set file_link, the language, and the source fields for the description, title, and alt.
It is able to generate thumbnails from the JPGs, but those columns of data are still missing from the db table. I traced through the functions as best I could, but I don't see what is causing this. I tried running the URI from the table through the functions that generate the filename and the filemime, and they output just fine. It is as though something is removing just those segments of data.
Does anyone have any idea what this could be? I am using the Drupal 7 Migrate module version 2.2. It is running on Drupal 7.8.
Thanks,
Patrick
Ok, so I have found the answer to yet another question of mine. This is actually an issue with the migrate module itself. The issue is documented here. I will be repealing this bounty (as soon as I figure out how).
I need to write a small R script for people who have never used R before that imports a file and does some things with it. I would like to minimize user input as much as possible, and since supplying the file path is basically all the user input required, I was wondering: is it possible to get a popup screen (basically your usual "open file" dialog) allowing someone to select a file, and import its name as a string into R or something?
The file.choose function does this, e.g.:
fname <- file.choose()
source(file.choose())
You may also want to look at choose.files (for multiple files) and choose.dir (for just selecting a directory path).
The tcltk package gives you tk_choose.files.
If you want to go beyond file choosers, then you can use the tcltk package to build user interfaces.
It's worth mentioning rChoiceDialogs::rchoose.files. I'm not completely sold yet, but it is advertised as being completely cross-platform and as fixing the annoying problem, common to choose.files and tk_choose.files, of dialogs popping up behind other windows. See their vignette here.