JMeter - file compare

I am new to JMeter and I am looking for a way to compare two files using JMeter. Both files are generated using Save Responses to a file in JMeter, and both contain the response to a JDBC request, with hundreds of values across multiple columns and rows. I am using the __FileToString() function in my Response Assertion to compare the two files, but this fails if a file contains data with some special characters. Any tips on how I could handle this? Or any other way to compare two JMeter-created files? I would also like to know which records differ between the two files.
I know the files could be compared with a lot of other tools, but I would really like to do this using JMeter. Thank you!

You can do it using a JSR223 Sampler and the Groovy language. For example, this code compares two text files line by line:
def file1 = new File('/path/to/file1')
def file2 = new File('/path/to/file2')

def file1Lines = file1.readLines('UTF-8')
def file2Lines = file2.readLines('UTF-8')

if (file1Lines.size() != file2Lines.size()) {
    SampleResult.setSuccessful(false)
    SampleResult.setResponseMessage('Files size is different, omitting line-by-line compare')
} else {
    def differences = new StringBuilder()
    file1Lines.eachWithIndex { String file1Line, int number ->
        String file2Line = file2Lines.get(number)
        if (!file1Line.equals(file2Line)) {
            differences.append('Difference # ').append(number).append('. Expected: ')
                    .append(file1Line).append('. Actual: ' + file2Line)
            differences.append(System.getProperty('line.separator'))
        }
    }
    if (differences.toString().length() > 0) {
        SampleResult.setSuccessful(false)
        SampleResult.setResponseMessage(differences.toString())
    }
}
In case of any differences the sampler will fail and you will see the information about the deltas in the "Response Message" section.
Note that this will not necessarily work for all types of files, for example binary files.
References:
Groovy: reading files
SampleResult class JavaDoc
Apache Groovy - Why and How You Should Use It

Related

How to export Snowflake Web UI Worksheet SQL to file

The Classic Snowflake Web UI and the new Snowsight are great at importing SQL from a file, but neither allows you to export SQL to a file. Is there a workaround?
You can use an IDE to connect to Snowflake and write queries. The scripts can then be downloaded using IDE features and can be synced with a git repo as well.
DBeaver is one such IDE that supports Snowflake:
https://hevodata.com/learn/dbeaver-snowflake/
The query pane is interactive, so the obvious workaround is:
CTRL + A (select all)
CTRL + C (copy)
<open_favourite_text_editor>
CTRL + P (paste)
CTRL + S (save)
This tool can help you while the team develops a native feature to export worksheets:
"Snowflake Snowsight Extensions wrap Snowsight features that do not have API or SQL alternatives, such as manipulating Dashboards and Worksheets, and retrieving Query Profile and step timings."
https://github.com/Snowflake-Labs/sfsnowsightextensions
Further explained in this post:
https://medium.com/snowflake/importing-and-exporting-snowsight-dashboards-and-worksheets-3cd8e34d29c8
For example, to save to a file within PowerShell:
PS > $dashboards | foreach {$_.SaveToFolder("path/to/folder")}
PS > $dashboards[0].SaveToFile("path/to/folder/mydashboard.json")
ETA: I'm adding this edit to the front because this is what actually worked.
Again, BSON was a dead end and punycode is irrelevant. I don't know why punycode is referenced in the metadata file; my best guess is that they might use punycode to encode the worksheet name itself (though I'm not sure why that would be needed, since it shouldn't need to be part of a URL).
After doing terrible things and trying a number of complex ways of dealing with escape-character hell, I found that the actual encoding is very simple. It just works as an 8-bit encoding with anything that might cause problems escaped away (null, control codes, double quotes, etc.). To load it, treat the file as a text file using an 8-bit encoding, extract the data as a JSON field, then re-encode that extracted data with the same encoding. I just used latin_1 to read; it may not even matter which encoding you use, as long as you are consistent and use the same one to re-encode. The re-encoded field will then be valid zlib compressed data.
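To make that concrete, here is a minimal sketch of just the decode step for a single worksheet file already downloaded from #~/worksheet_data/ (the file name is a placeholder; the full backup script appears below):

import json
import zlib

# Read the downloaded worksheet file as an 8-bit encoding (latin_1 maps every byte 1:1),
# pull out the "body" field, re-encode it with the same encoding, then zlib-decompress it.
with open('some_worksheet.json', 'r', encoding='latin_1') as f:
    data = json.load(f)

body_bytes = data['wsContents']['body'].encode('latin_1')
print(zlib.decompress(body_bytes).decode('utf-8'))  # the worksheet SQL text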
I decided that I wanted to start from scratch, so I needed to back up the worksheets first, and I made a Python script based on my findings above. Be warned that this may return even worksheets that you previously closed for good. After running this and verifying that backups were created, I just ran rm #~/worksheet_data/;, closed the tab & reopened it.
Here's the code (fill in the appropriate base directory location):
import os
from collections import OrderedDict
import configparser
from sqlalchemy import create_engine, exc
from snowflake.sqlalchemy import URL
import pathlib
import json
import zlib
import string

def format_filename(s: str) -> str:  # From https://gist.github.com/seanh/93666
    """Take a string and return a valid filename constructed from the string.
    Uses a whitelist approach: any characters not present in valid_chars are
    removed. Also spaces are replaced with underscores.
    Note: this method may produce invalid filenames such as ``, `.` or `..`
    When I use this method I prepend a date string like '2009_01_15_19_46_32_'
    and append a file extension like '.txt', so I avoid the potential of using
    an invalid filename.
    """
    valid_chars = "-_.() %s%s" % (string.ascii_letters, string.digits)
    filename = ''.join(c for c in s if c in valid_chars)
    # filename = filename.replace(' ','_')  # I don't like spaces in filenames.
    return filename

def trlng_dash(s: str) -> str:
    """Removes a trailing dash if present."""
    return s[:-1] if s[-1] == '-' else s

sso_authenticate = True

# Assumes CLI config file exists.
config = configparser.ConfigParser()
home = pathlib.Path.home()
config_loc = home/'.snowsql/config'  # Assumes it's set up from Snowflake CLI.

base_dir = home/r'{your Desired base directory goes here.}'
json_dir = base_dir/'json'  # Location for your worksheet stage JSON files.
sql_dir = base_dir/'sql'    # Location for your worksheets.

# Assumes CLI config file exists.
config.read(config_loc)

# Add connection parameters here (assumes CLI config exists).
# Using SSO, so only 2 are needed.
# If there's no config file, etc. enter by hand here (or however you want to do it).
connection_params = {
    'account': config['connections']['accountname'],
    'user': config['connections']['username'],
}
if sso_authenticate:
    connection_params['authenticator'] = 'externalbrowser'
if config['connections'].get('password', None) is not None:
    connection_params['password'] = config['connections']['password']
if config['connections'].get('rolename', None) is not None:
    connection_params['role'] = config['connections']['rolename']
if locals().get('database', None) is not None:
    connection_params['database'] = database
if locals().get('schema', None) is not None:
    connection_params['schema'] = schema

sf_engine = create_engine(URL(**connection_params))

if not base_dir.exists():
    base_dir.mkdir()
if not json_dir.exists():
    json_dir.mkdir()
if not sql_dir.exists():
    sql_dir.mkdir()

with sf_engine.connect() as connection:
    connection.execute(f'get #~/worksheet_data/ \'file://{str(json_dir.as_posix())}\';')

for file in [path for path in json_dir.glob('*') if path.is_file()]:
    if file.suffix != '.json':
        file.replace(file.with_suffix(file.suffix + '.json'))

with open(json_dir/'metadata.json', 'r') as metadata_file:
    files_meta = json.load(metadata_file)

# List of files from metadata file will contain some empty worksheets.
files_description_orig = OrderedDict(
    (file_key_value['name'], file_key_value)
    for file_key_value in sorted(
        files_meta['activeWorksheets'] + list(files_meta['inactiveWorksheets'].values()),
        key=lambda x: x['name'])
    if file_key_value['name'])

# files_description will only track non-empty worksheets.
files_description = files_description_orig.copy()

# Create updated files description filtering out empty worksheets.
for item in files_description_orig:
    json_file = json_dir/f"{files_description_orig[item]['name']}.json"
    # If a file didn't make it or was deleted by hand, we should
    # remove it from the filtered description & continue to the next item.
    if not (json_file.exists() and json_file.is_file()):
        del files_description[item]
        continue
    with open(json_file, 'r', encoding='latin_1') as f:
        json_dat = json.load(f)
    # If the file represents a worksheet with a body field, we want it.
    if not json_dat['wsContents'].get('body'):
        del files_description[item]
        ## Delete JSON files corresponding to empty worksheets.
        # f.close()
        # try:
        #     (json_dir/f"{files_description_orig[item]['name']}.json").unlink()
        # except:
        #     pass

# Produce a list of normalized filenames (no illegal or awkward characters).
file_names = set(
    format_filename(trlng_dash(files_description[item]['encodedDetails']['scriptName']).strip())
    for item in files_description)

# Add useful information to our files_description OrderedDict.
for file_name in file_names:
    repeats_cnt = 0
    file_name_repeats = (
        item
        for item in files_description
        if file_name == format_filename(trlng_dash(files_description[item]['encodedDetails']['scriptName']).strip())
    )
    for file_uuid in file_name_repeats:
        files_description[file_uuid]['normalizedName'] = file_name
        files_description[file_uuid]['stemSuffix'] = '' if repeats_cnt == 0 else f'({repeats_cnt:0>2})'
        repeats_cnt += 1

# Now we iterate on non-empty worksheets only.
for item in files_description:
    json_file = json_dir/f"{files_description[item]['name']}.json"
    with open(json_file, 'r', encoding='latin_1') as f:
        json_dat = json.load(f)
    body = json_dat['wsContents']['body']
    body_bin = body.encode('latin_1')
    body_txt = zlib.decompress(body_bin).decode('utf8')
    sql_file = sql_dir/f"{files_description[item]['normalizedName']}{files_description[item]['stemSuffix']}.sql"
    with open(sql_file, 'w') as sql_f:
        sql_f.write(body_txt)
    creation_stamp = files_description[item]['created']/1000
    os.utime(sql_file, (creation_stamp, creation_stamp))

print('Done!')
As mentioned at Is there any option in snowflake to save or load worksheets? (and in Snowflake's own documentation), in the Classic UI, the worksheets are saved at the user stage under #~/worksheet_data/.
You can download it with a get command like:
get #~/worksheet_data/<name> file:///<your local location>; (though you might need quoting if running from Windows).
The problem is that I do not know how to access it programmatically. The downloaded files look like JSON, but they are not valid JSON. The main key is "wsContents" and it contains most of the worksheet information. Its value includes two subkeys, "encoding" and "body".
The "encoding" key denotes that gzip is being used. The "body" key seems to be the actual worksheet data which looks a lot like a straight binary representation of the compressed text data. As such, any JSON reader will choke on it.
If it is anything like that, I do not currently know how to access it programmatically using Python.
I do see that a JSON-like format exists, BSON, which is bundled with PyMongo. Trying to use it on these files fails. I even tried bson.is_valid and it returns False, so I am assuming that these files in Snowflake are not actually BSON.
Edited to add: Again, BSON is a dead end.
Examining the "body" value as just binary data, the first two bytes of sample files do seem to correspond to default zlib compression (0x789c). However, attempting to run straight zlib.decompress on the slice created from that first byte to the last corresponding to the first & last characters of the "body" value results in the error:
Error - 3 while decompressing data: invalid code lengths set
This makes me think that the bytes there, as is, are at least partly garbage and still need some processing before they can be decompressed.
One clue that I failed to mention earlier is that the metadata file (called "metadata", which serves as an inventory of the remaining files at the #~/worksheet_data/ location) declares that the files use the punycode encoding. However, I have not figured out how to use that information. The data in these files doesn't particularly look like what I feel punycode should look like, nor does it make much sense to me to use punycode on binary data that is never meant to directly generate text, such as zlib compressed data.

Good way to avoid java.lang.IndexOutOfBoundsException, when joining a list

So I am trying to read a rather large XML file into a String. Currently joining a list of .readLines() like this:
def is = zipFile.getInputStream(entry)
def content = is.getText('UTF-8')
def xmlBodyList = content.readLines()
return xmlBodyList[1..xmlBodyList.size].join("")
However I am getting this output in console:
java.lang.IndexOutOfBoundsException: toIndex = 21859
I don't need any explanation on IndexOutOfBoundsExceptions, but I am having a hard time figuring out how to program around this issue.
How can I implement this differently, so it allows for a large enough file size?
About Good way to avoid java.lang.IndexOutOfBoundsException
The error is here:
return xmlBodyList[1..xmlBodyList.size].join("")
It is a good idea to check variables before accessing them, and you can use relative (negative) range indexes:
assert xmlBodyList.size>1 //check value
return xmlBodyList[1..-1].join("") //use relative indexes -1 = the last one
About large file processing
If you need to iterate through all the lines and execute some operation on each one, here is an example:
def stream = zipFile.getInputStream(entry)
stream.eachLine("UTF-8") { line, index ->
    if (index > 1) { // skip first line
        // do something here with each line from the file
        println "$line $index"
    }
}
There are a lot of additional Groovy methods on java.io.InputStream that could help you process a large file without loading it into memory:
http://docs.groovy-lang.org/latest/html/groovy-jdk/java/io/InputStream.html

Merge results of ExecuteSQL processor with Json content in nifi 6.0

I am dealing with json objects containing geo coordinate points. I would like to run these points against a postgis server I have locally to assess point in polygon matching.
I'm hoping to do this with preexisting processors - I am successfully extracting the lat/lon coordinates into attributes with an "EvaluateJsonPath" processor, and successfully issuing queries to my local postgis datastore with "ExecuteSQL". This leaves me with avro responses, which I can then convert to JSON with the "ConvertAvroToJSON" processor.
I'm having conceptual trouble with how to merge the results of the query back together with the original JSON object. As it is, I've got two flow files with the same fragment ID, which I could theoretically merge together with "mergecontent", but that gets me:
{"my":"original json", "coordinates":[47.38, 179.22]}{"polygon_match":"a123"}
Are there any suggested strategies for merging the results of the SQL query into the original json structure, so my result would be something like this instead:
{"my":"original json", "coordinates":[47.38, 179.22], "polygon_match":"a123"}
I am running nifi 6.0, postgres 9.5.2, and postgis 2.2.1.
I saw some reference to using the ReplaceText processor in https://community.hortonworks.com/questions/22090/issue-merging-content-in-nifi.html - but that seems to merge content from an attribute into the body of the content. What I'm missing is how to merge the content of the original flow file with either the content of the SQL response, or with attributes extracted from the SQL response (without its content).
Edit:
The following Groovy script appears to do what is needed. I am not a Groovy coder, so any improvements are welcome.
import org.apache.commons.io.IOUtils
import java.nio.charset.*
import groovy.json.JsonSlurper

def flowFile = session.get();
if (flowFile == null) {
    return;
}

def slurper = new JsonSlurper()

flowFile = session.write(flowFile,
    { inputStream, outputStream ->
        def text = IOUtils.toString(inputStream, StandardCharsets.UTF_8)
        def obj = slurper.parseText(text)
        def originaljsontext = flowFile.getAttribute('original.json')
        def originaljson = slurper.parseText(originaljsontext)
        originaljson.put("point_polygon_info", obj)
        outputStream.write(groovy.json.JsonOutput.toJson(originaljson).getBytes(StandardCharsets.UTF_8))
    } as StreamCallback)

session.transfer(flowFile, ExecuteScript.REL_SUCCESS)
If your original JSON is relatively small, a possible approach might be the following...
Use ExtractText before getting to ExecuteSQL to copy the original JSON into an attribute.
After ExecuteSQL, and after ConvertAvroToJSON, use an ExecuteScript processor to create a new JSON document that combines the original from the attribute with the results in the content.
I'm not exactly sure what needs to be done in the script, but I know others have had success using Groovy and JsonSlurper through the ExecuteScript processor.
http://groovy-lang.org/json.html
http://docs.groovy-lang.org/latest/html/gapi/groovy/json/JsonSlurper.html

solr: how to add extra field values not in the document

I know Lucene, and I have just started to learn how to use Solr. In the simple example, the way to add documents is to use post.jar against the /update handler. The question is: without writing my own add-document code in Java, and using the same approach (post.jar), is there a way to add additional fields that are not in the document? For example, say my schema includes name, age, and id fields, but the document has no 'id' field; I want the id and its value to be included. Of course I know what id and value I want, but how do I include it?
Thanks in advance!
I don't believe you can mix the two. You can use post.jar to add documents using arguments passed in on the command line, a file, stdin, or a simple crawl from a web page, but there is no way to combine them. In the source code for post.jar you can see it's a series of else-if statements, so the modes are mutually exclusive.
-Ddata args, stdin, files, web
Use args to pass arguments along the command line (such as a command
to delete a document). Use files to pass a filename or regex pattern
indicating paths and filenames. Use stdin to use standard input. Use
web for a very simple web crawler (arguments for this would be the URL
to crawl).
https://cwiki.apache.org/confluence/display/solr/Simple+Post+Tool
/**
 * After initialization, call execute to start the post job.
 * This method delegates to the correct mode method.
 */
public void execute() {
    final long startTime = System.currentTimeMillis();
    if (DATA_MODE_FILES.equals(mode) && args.length > 0) {
        doFilesMode();
    } else if(DATA_MODE_ARGS.equals(mode) && args.length > 0) {
        doArgsMode();
    } else if(DATA_MODE_WEB.equals(mode) && args.length > 0) {
        doWebMode();
    } else if(DATA_MODE_STDIN.equals(mode)) {
        doStdinMode();
    } else {
        usageShort();
        return;
    }
    if (commit) commit();
    if (optimize) optimize();
    final long endTime = System.currentTimeMillis();
    displayTiming(endTime - startTime);
}
http://svn.apache.org/repos/asf/lucene/dev/trunk/solr/core/src/java/org/apache/solr/util/SimplePostTool.java
You could try to modify the code, but I think a better bet would be to either pre-process your XML files to include the missing fields, or learn to use the API (either via Java or by hitting it with curl) to do this on your own.
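As a rough sketch of the pre-processing option (not part of the original answer; the file names and id values below are placeholders), a small script can inject the missing field into a standard Solr add-XML document before you hand it to post.jar:

import xml.etree.ElementTree as ET

# Hypothetical input: docs.xml in Solr's add format, e.g.
# <add><doc><field name="name">Alice</field><field name="age">30</field></doc></add>
tree = ET.parse('docs.xml')
for i, doc in enumerate(tree.getroot().iter('doc')):
    # Only add an 'id' field if the document doesn't already have one.
    if not any(f.get('name') == 'id' for f in doc.findall('field')):
        field = ET.SubElement(doc, 'field')
        field.set('name', 'id')
        field.text = f'doc-{i}'  # whatever id value you want to assign
tree.write('docs_with_ids.xml', encoding='utf-8', xml_declaration=True)

The rewritten file then goes through post.jar as usual.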

Scala - Remove files from folder by file suffix name

I'm looking for an elegant way to remove all files in a folder that have the extension .jpg
I have the following to count the total jpg files in a folder:
Option(new File(path).list).map(_.filter(_.endsWith(".jpg")).size).getOrElse(0)
Thanks in advance, any help much appreciated :)
for {
  files <- Option(new File(path).listFiles)
  file <- files if file.getName.endsWith(".jpg")
} file.delete()
Some extra comments, to extend @Debilski's answer:
Touching files obviously causes side effects. To make this functionally effectful, do something like:
import java.io.File
import scala.util.Try
import cats.effect.Sync

def deleteFilesBySuffix[G[_]: Sync](suffix: String)(dirName: String): G[Unit] =
  Sync[G].suspend(Sync[G].fromTry(Try(for {
    files <- Option(new File(dirName).listFiles)
    file <- files if file.getName.endsWith(suffix)
  } file.delete())))
Then you'll have to run this code with an effect that can delay its execution, like:
import cats.effect.IO
import cats.syntax.foldable._

val r = deleteFilesBySuffix[IO]("jpg")("/tmp")
// Still nothing has happened

// Another example with multiple dirs:
val dirNames = List("/tmp", "/tmp/myDir")
val res = dirNames.traverse_(deleteFilesBySuffix[IO]("jpg"))

// Actually run it:
r.unsafeRunSync()
// Now the files are deleted
In my opinion this is much safer, and it uses Scala's power of effects.
os-lib lets you delete all files with a certain extension with a one-liner:
os.list(os.pwd/"pics").filter(_.ext == "jpg").map(os.remove)
As described here, os-lib is the easiest way to perform filesystem operations with Scala. It lets you write beautiful, concise Scala code without dealing with the ugliness of the underlying Java libs.
Here's some setup code if you'd like to test this out on your machine:
os.makeDir(os.pwd/"pics")
os.write(os.pwd/"pics"/"family.jpg", "")
os.write(os.pwd/"pics"/"cousins.txt", "")
os.write(os.pwd/"pics"/"gf.jpg", "")
os.write(os.pwd/"pics"/"friend.gif", "")
