z.show limit problem in python interpreter - apache-zeppelin

I am using the Python interpreter in Zeppelin and I want to display a pandas DataFrame through z.show() without the 1000-row limit.
But when I increase zeppelin.python.maxResult, nothing changes and z.show() still displays only 1000 rows.
Does anyone know what the problem is?

An additional change is needed.
You modified zeppelin.python.maxResult correctly in the interpreter GUI. What you should also try is modifying the property zeppelin.interpreter.output.limit in zeppelin-site.xml in Zeppelin's conf directory.
To get a zeppelin-site.xml you can edit, make a copy of zeppelin-site.xml.template, rename it to zeppelin-site.xml, then modify it.
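As a minimal sketch, the entry in conf/zeppelin-site.xml would look something like this (the default limit is 102400; the larger value here is only an example, not a recommendation):

    <property>
      <name>zeppelin.interpreter.output.limit</name>
      <value>204800</value>
      <description>Output message from interpreter exceeding the limit will be truncated</description>
    </property>

Restart Zeppelin after the change so the new limit takes effect.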

Related

Behave print all tests to a text file

I have been asked to provide a list of every behave Feature and Scenario we run as part of our regression pack (not the steps) for a document for an external client.
As our regression test suite is currently around 50 feature files with at least 10 scenarios in each, I would rather not copy and paste manually.
Is there a way to export the Feature name and ID, and then the name and ID of each scenario under that feature, to either a CSV or text file?
Currently our behave tests are run locally, and I am using the PyCharm IDE to edit them.
I have found a roundabout way to do this.
Set behave to write to an external txt file using the setting
outfiles = test_list
Then use the behave -d flag to run my tests as a dry run.
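A minimal sketch of that setup, assuming the setting lives in a behave.ini at the project root (behave also reads .behaverc, setup.cfg, and tox.ini):

    [behave]
    outfiles = test_list

behave -d is short for behave --dry-run, which invokes the formatters without actually executing any steps.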
This then populates the txt file with the feature, scenario, and step of every test.
I can import this into Excel, filter to isolate the feature and scenario lines (removing the steps), and then use Text to Columns to split the feature/scenario description from its test path/name.
If there is a less roundabout way of doing this, it would be good to know, as it looks like this is information we will need to provide on a semi-regular basis going forward.
You can take advantage of context.scenario to get the scenario name and feature name and then write them into a text file.
You should put this code in after_scenario in environment.py so that you can also get the scenario status.
I am using this to export the scenario name, status, and feature name into a text file, each separated by "|". I later import this file into an Excel file for reporting.
Here is the code you can use for reference:
import os

def write_scenario_summary(context, scenario, report_path):
    # scenario status could be [untested, skipped, passed, failed]
    status = scenario.compute_status().name.upper()
    feature = get_feature_name(scenario)
    logging_info = '{status} | {feature} | {scenario_name}'.format(
        status=status,
        feature=feature,
        scenario_name=scenario.name)
    # append one line per scenario to the report file
    with open(report_path, 'a') as report:
        print(logging_info, file=report)

def get_feature_name(scenario):
    feature_file_path = scenario.feature.filename
    return os.path.basename(feature_file_path)
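To wire this up, a minimal after_scenario hook in environment.py could look like this (the report path is just an example):

    def after_scenario(context, scenario):
        # append one summary line per scenario as it finishes
        write_scenario_summary(context, scenario, 'scenario_summary.txt')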
Hope it helps.

Export individual cell in IPython/Jupyter notebook

I am able to export the entire notebook as HTML, but I would like to export just a single cell, together with its output.
Is there some way of doing this?
One way to do this is to use a custom preprocessor.
I explain how to do this briefly in my response to "Simple way to choose which cells to run in ipython notebook during run all".
To summarize: you can extend nbconvert.preprocessors.ExecutePreprocessor to create a preprocessor that checks cell metadata to determine whether that cell should be executed and/or output.
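As a minimal sketch of the idea, here is a preprocessor built on the simpler Preprocessor base class (rather than ExecutePreprocessor) that keeps only cells opted in through a hypothetical "export" metadata key:

    from nbconvert.preprocessors import Preprocessor

    class FilterCellsPreprocessor(Preprocessor):
        # Keep only cells whose metadata marks them for export.
        def preprocess(self, nb, resources):
            nb.cells = [cell for cell in nb.cells
                        if cell.metadata.get('export', False)]
            return nb, resources

Registered with an exporter (for example via the exporter's preprocessors list in an nbconvert config), this lets you export just the flagged cell, together with its output, to HTML.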
I use Jupyter Notebooks for report generation all the time, so I wrote a collection of custom processors to extend nbconvert behavior:
- a meta-language to determine which cells get executed and included in the final report (if/else logic on entire notebook sections)
- executing code in markdown cells
- removing code cells from the output
- taking input arguments from the command line
I haven't had time to wrap these in a distributable extension, but you can see the code here: https://gist.github.com/brazilbean/3ebb31324f6dad212817b3663c7a0219.
Please feel free to use/modify/do-great-things with these examples. :)

Get values from an array. JMeter

I have these values in a file:
en-us, de-de, es-es, cs-cz, fr-fr, it-it, ja-jp, ko-kr, pl-pl, pt-br, ru-ru, tr-tr, zh-cn, zh-tw.
How can I get these values, one for each request?
I want to create a query that takes these values in turn and writes each one to a variable.
This scenario can be achieved using the JMeter component "CSV Data Set Config".
Please refer to the link below:
Jmeter CSV Data Set Config
Hope this will help
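A minimal sketch of the relevant CSV Data Set Config fields, assuming the values are saved one per line in etc/filters.csv and using locale as an example variable name (the element reads the next row on each iteration):

    Filename: etc/filters.csv
    Variable Names: locale
    Delimiter: ,
    Recycle on EOF?: True
    Stop thread on EOF?: False

The current value is then available in the request as ${locale}.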
Can't comment, not enough karma. In response to the above question: your path is probably wrong. If you use a Debug Sampler to show what path the CSV reader is using, I think you will find it is looking at something like C:/Jmeter/C:/path/to/CSV/file.
Another option for accomplishing this is to use the inline __CSVRead function. In your HTTP request, use code like this:
${__CSVRead(etc/filters.csv,0)}${__CSVRead(etc/filters.csv,next)}
etc/filters.csv is the RELATIVE path from JMeter's active running directory. In my case this evaluates to
C:/git/JmeterScripts/etc/filters.csv
In either case, I am sure your problem is that JMeter's active running directory is not what you think it is. I have had this problem several times with the same error.

Advanced Merge File Content

I am on Windows and I am using the Sublime Text 2 editor (I can download and use any other software) or a PHP script.
I am searching for a solution to merge two files (or lines in one file, it doesn't matter). KEY is the part the two files have in common.
sourcefileone.txt:
KEY|CCC
sourcefiletwo.txt:
KEY|BBB
I need to merge them and receive this:
KEY|BBB|CCC
Any solution? Thanks
It seems there is no ready-made solution. To solve the problem I wrote PHP code that loads the two files into arrays, explodes each line on the delimiter, and merges them into a new array in the new format.
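The answer above describes a PHP implementation; as an illustration of the same load/explode/merge logic, here is a minimal sketch in Python (filenames taken from the question, output filename hypothetical):

    def merge_files(path_one, path_two, out_path):
        # collect values per KEY; file two first so its value comes first,
        # matching the KEY|BBB|CCC output shown in the question
        merged = {}
        for path in (path_two, path_one):
            with open(path) as f:
                for line in f:
                    line = line.strip()
                    if line:
                        key, value = line.split('|', 1)
                        merged.setdefault(key, []).append(value)
        with open(out_path, 'w') as out:
            for key, values in merged.items():
                out.write('|'.join([key] + values) + '\n')

    merge_files('sourcefileone.txt', 'sourcefiletwo.txt', 'merged.txt')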

Difficulty with filename and filemime when using Migrate module

I am using the Drupal 7 Migrate module to create a series of nodes from JPG and EPS files. I can get them to import just fine. But I notice that when I am done importing them and look at the nodes it creates, none of the attached filefield and thumbnail files contains filename information.
Upon inspecting the file_managed table I see that both the filename and filemime fields are empty for ONLY the files that I attached via the migrate module. This also creates an issue with downloading the files.
Now I think the problem has to do with the fact that I am using "file_link" instead of "file_copy" as the file operation I specify. The problem is I am importing around 2 TB (that's terabytes) of image files. We had to put in a special request with Rackspace just to get access to that much disk space on our server. So I can't go around copying files from one directory to the next because of space issues. So "file_link" seems like the obvious choice.
Now you probably want to see how I am doing this exactly, so here is the code snippet:
$jpg_arguments = MigrateFileFieldHandler::arguments(NULL,
    'file_link', FILE_EXISTS_RENAME, 'en', array('source_field' => 'jpg_name'),
    array('source_field' => 'jpg_filename'), array('source_field' => 'jpg_filename'));
$this->addFieldMapping('field_image', 'jpg_uri')
    ->arguments($jpg_arguments);
As you can see I am specifying no base path (just like the beer.inc example file does). I have set file_link, the language, and the source fields for the description, title, and alt.
It is able to generate thumbnails from the JPGs, but those columns of data are still missing in the db table. I traced through the functions as best I could, but I don't see what is causing this. I tried running the URI in the table through the functions that generate the filename and the filemime, and they output just fine. It is like something is removing just those segments of data.
Does anyone have any idea what this could be? I am using the Drupal 7 Migrate module version 2.2. It is running on Drupal 7.8.
Thanks,
Patrick
OK, so I have found the answer to yet another question of mine. This is actually an issue with the Migrate module itself. The issue is documented here. I will be rescinding this bounty (as soon as I figure out how).
