How to run multiple scenarios with different protocols sequentially? - gatling

I have 2 scenarios
val scn = scenario("getJson").exec(getJson).inject(atOnceUsers(1)).protocols(httpProtocol)
val scn1 = scenario("sendJson").exec(processJsons).inject(atOnceUsers(1)).protocols(amqpConf)
I want to run them sequentially in Gatling 3.5.1
I tried the following:
1. setUp(scn, scn1)
2. scn.andThen(scn1)
setUp(scn)
but neither works; every time, scn1 is executed first.
Someone please help me. TIA

Gatling DSL components are immutable, so option 2 is broken: andThen returns a new, chained component, and you're passing the original, unchained scn to setUp. Option 1 doesn't do what you want either, because setUp with several populations starts them concurrently. Chain the scenarios and pass the result:
setUp(
  scn.andThen(scn1)
)
or
val chained = scn.andThen(scn1)
setUp(chained)

Related

Django - Certain Py Module doesn't work in IIS

This is my first time posting a question here. Some background: I started coding in Django this year and have used Python for between one and two years, so I don't know much. This problem concerns what I am developing at work, and my team has little or no experience with Python, Django, or both.
The Problem
We have a web app based on Django 3.0.2 that uses MSSQL for the database. Company policy is to use Windows Server and IIS for the production and test servers. We have done a lot of work on it, and everything works well except a few Python libraries and Django modules, mainly xlwings and django-post-office. With xlwings, the code doesn't run and Excel doesn't launch (we have a valid license and the latest Excel installed on the server).
Code below:
import shutil
from uuid import uuid4

import pythoncom
import xlwings as xw

# BASE_DIR comes from the Django settings
filepath = BASE_DIR + '\\media\\Template.xlsm'
temp_path = BASE_DIR + '\\media\\' + '{}.{}'.format(uuid4().hex, 'xlsm')
shutil.copyfile(filepath, temp_path)
pythoncom.CoInitialize()  # initialize COM on this thread before starting Excel
app1 = xw.App(visible=True)
wt = xw.Book(temp_path)
sht = wt.sheets['Cover']
sht.range('E5').value = request.POST.get('year')
sht.range('E6').value = request.POST.get('company')
sht.range('E7').value = companydata.employer_no
sht.range('E8').value = email
wt.save(temp_path)
app1.quit()
As for django-post-office, one module that uses it works, but the other modules that use it don't. The code is the same except for the template and subject.
Code below:
from django.template.loader import get_template
from post_office import mail

plaintext = get_template('template.txt')
htmly = get_template('template.html')
mail.send(
    [email],
    NAME_EMAIL_DOMAIN,
    message=plaintext.render(info),
    html_message=htmly.render(info),
    subject=f'Subject',
    priority='now',
    attachments={
        'RemunerationTemplate.xlsm': temp_path,
    },
)
The bizarre thing is that it works from CMD but just doesn't work under IIS. We don't know why or how to fix it. We asked IT Support (the department responsible for the servers), and they said the test server is not restricted.
The Failed Fixes
Tried changing the IIS config, environment, etc. - no dice
Changed the code - we are novices; I don't know what's wrong, but it clearly doesn't work
Googled it - information is too sparse
Asked the IT support guy if he had any ideas - he doesn't care much about it and just told me to go away
So I would be grateful for any help with this, since we are nearing the deployment date and this issue has been haunting me in my sleep for the past week. Thank you.

How to send custom DocumentOperation to DocumentProcessing pipeline from a Processor?

Scenario: I've been stuck on this for way too long, and I think the solution might be easy, but I just can't see it. This is the scenario:
cURL POST to http://localhost:8080/my_imports (raw JSON data on body)
->
MyImportsCustomHandler (extends ThreadedHttpRequestHandler) [Validations]
->
MyObjectProcessor (extends Processor) [JSON deserialize and data massage]
->
MyFirstDocumentProcessor (extends DocumentProcessor) [Set some fields and save]
The problem is that execution never reaches MyFirstDocumentProcessor, likely because the request didn't start from the document_api endpoints (intentionally).
No errors are thrown; the processing route just never reaches the document processor chain. I think it should, because in MyObjectProcessor I'm doing:
DocumentType type =
localDocHandler.getDocumentTypeManager().getDocumentType("my_doc");
DocumentId id = new DocumentId("id:default:my_doc::2");
Document document = new Document(type, id);
DocumentPut docPut = new DocumentPut(document);
Processing proc = com.yahoo.docproc.Processing.of(docPut);
I got this idea from here: https://github.com/vespa-engine/vespa/blob/master/docproc/src/test/java/com/yahoo/docproc/util/SplitterJoinerTestCase.java
but in that test I see the line splitter.process(p);, for which I can't find a suitable replacement that works inside a Processor; in that context I only have the Request, Execution, and DocumentProcessingHandler.
I hope somebody versed in Vespa can shine some light on this; it's just the last hop in the processing chain that I can't bridge.
To write documents from Java code, you need to use the Document Access API:
http://docs.vespa.ai/documentation/document-api-guide.html#document-access
A working solution is in https://github.com/vespa-engine/sample-apps/pull/44
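For orientation, here is a minimal sketch of that approach, assuming the com.yahoo.documentapi classes are on the classpath and a document type named my_doc is deployed. The standalone createDefault()/shutdown() wiring is only for illustration; inside a container component you would normally have a DocumentAccess injected instead.
import com.yahoo.document.Document;
import com.yahoo.document.DocumentId;
import com.yahoo.document.DocumentPut;
import com.yahoo.document.DocumentType;
import com.yahoo.documentapi.DocumentAccess;
import com.yahoo.documentapi.SyncParameters;
import com.yahoo.documentapi.SyncSession;

public class MyDocWriter {
    public void writeExample() {
        // Standalone setup for illustration; in a container component, inject DocumentAccess instead.
        DocumentAccess access = DocumentAccess.createDefault();
        SyncSession session = access.createSyncSession(new SyncParameters.Builder().build());
        try {
            DocumentType type = access.getDocumentTypeManager().getDocumentType("my_doc");
            Document document = new Document(type, new DocumentId("id:default:my_doc::2"));
            session.put(new DocumentPut(document));  // write through the Document API
        } finally {
            session.destroy();
            access.shutdown();
        }
    }
}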

Google appengine pipelines - define the queue to use

I'd like to be able to set which queue to use within a pipeline, so that I can use custom settings for that pipeline in queue.yaml. The only way I can see to do this is to do so when the stage is started, via:
first_stage = ingest.CustomPipelineA(some_data)
first_stage.start(queue_name=foo)
However, I have nested and pre-requisite pipelines, such as:
with pipeline.InOrder():
    yield CustomPipelineA(some_shared_data)
    future_b = yield CustomPipelineB(some_shared_data)
    with pipeline.After(future_b):
        future_c = yield CustomPipelineC(some_shared_data, future_b)
        with pipeline.After(future_c):
            future_d = yield CustomPipelineD(some_shared_data, future_c)
It would be nice if I could set the queue name on the constructor, but it's not possible based on the pipeline docs: https://code.google.com/p/appengine-pipeline/wiki/GettingStarted#Execution_ordering.
Any ideas?
I think it's possible in Python (but not in Java). Here's an example from the same webpage you linked to:
stage = MySearchEnginePipeline(15)
stage.start(queue_name='pipelinequeue')
I believe I've figured this out for Execution Ordering: within the run method, you can do:
self._context.queue_name = "my-custom-queue-name"

Jmeter: How to create an array in bean shell post processor and make it available in other thread groups?

Does anyone know how to create an array in a Beanshell PostProcessor and make it available in other thread groups?
I've been searching for a while and I'm not managing to solve this.
Thanks
There is no need to do this by writing and reading files. The Beanshell extension mechanism is smart enough to handle it without interim third-party entities.
Short answer: bsh.shared namespace
Long answer:
assuming following Test Plan Structure:
Thread Group 1
Beanshell Sampler 1
Thread Group 2
Beanshell Sampler 2
Put the following Beanshell code into Beanshell Sampler 1:
Map map = new HashMap();
map.put("somekey","somevalue");
bsh.shared.my_cool_map = map;
And the following into Beanshell Sampler 2:
Map map = bsh.shared.my_cool_map;
log.info(map.get("somekey"));
Run it and look into the jmeter.log file. You should see something like:
2014/01/04 10:32:09 INFO - jmeter.util.BeanShellTestElement: somevalue
Voila.
References:
Sharing variables (from JMeter Best Practices)
How to use BeanShell: JMeter's favorite built-in component guide
Following some advice, here's how I did it:
The HTTP request has a Regular Expression Extractor to extract the XPTO variable from the request. Then a Beanshell PostProcessor saves the data to a CSV file:
String xpto_str = vars.get("XPTO");
log.info("Variable is: " + xpto_str);
// append the value to a CSV file so other thread groups can read it
f = new FileOutputStream("/tmp/xptos.csv", true);
p = new PrintStream(f);
this.interpreter.setOut(p);
print(xpto_str + ",");
f.close();
Then, in the second thread group, I added a CSV Data Set Config, which reads the variable from the file. This is really easy; just read the guide (http://jmeter.apache.org/usermanual/component_reference.html#CSV_Data_Set_Config).
Thanks

AppEngine - Optimize read/write count on POST request

I need to optimize the read/write count for a POST request that I'm using.
Some info about the request:
The user sends a JSON array of ~100 items
The servlet needs to check whether any of the received items is newer than its counterpart in the datastore, using a single long attribute
I'm using JDO
What I currently do is (pseudocode):
foreach(item : json.items) {
    storedItem = persistenceManager.getObjectById(item.key);
    if (item.long > storedItem.long) {
        // Update storedItem
    }
}
Which obviously results in ~100 read requests per request.
What is the best way to reduce the read count for this logic? Using a JDO Query? I read that "IN" queries simply result in multiple queries executed one after another, so I don't think that would help me.
There is also PersistenceManager.getObjectsById(Collection). Does that help in any way? I can't find any documentation on how many requests it will issue.
I think you can use the call below to do a batch get:
Query q = pm.newQuery("select from " + Content.class.getName() + " where contentKey == :contentKeys");
List results = (List) q.execute(contentKeys);
Executed with the list of keys, a query like the one above should return all the objects you need.
And you can handle all the rest from here.
The best bet is
pm.getObjectsById(ids);
since that is intended for getting multiple objects in a single call (particularly since you have the ids, hence keys). Current code (2.0.1 and later) ought to do a single datastore call for getEntities(). See this issue.
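To make the batch approach concrete, here is a hedged sketch. Item, StoredItem, and the lastModified long are placeholders for the actual model (they are not classes from the question), and building object ids with newObjectIdInstance assumes application identity:
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.jdo.PersistenceManager;

public class BatchUpdater {

    // Placeholder model classes, standing in for the real entities.
    static class Item { Object key; long lastModified; }
    static class StoredItem {
        Object key;
        long lastModified;
        Object getKey() { return key; }
        long getLastModified() { return lastModified; }
        void setLastModified(long v) { lastModified = v; }
    }

    void updateNewer(PersistenceManager pm, List<Item> items) {
        // Build JDO object ids from the incoming keys.
        List<Object> oids = new ArrayList<Object>();
        for (Item item : items) {
            oids.add(pm.newObjectIdInstance(StoredItem.class, item.key));
        }
        // One batch fetch instead of ~100 single getObjectById calls.
        @SuppressWarnings("unchecked")
        Collection<StoredItem> stored = (Collection<StoredItem>) pm.getObjectsById(oids);
        Map<Object, StoredItem> byKey = new HashMap<Object, StoredItem>();
        for (StoredItem s : stored) {
            byKey.put(s.getKey(), s);
        }
        // Compare the long attribute and update only the items that are newer.
        for (Item item : items) {
            StoredItem s = byKey.get(item.key);
            if (s != null && item.lastModified > s.getLastModified()) {
                s.setLastModified(item.lastModified);  // managed object, persisted at commit
            }
        }
    }
}
Whether this actually becomes a single datastore call depends on the DataNucleus GAE plugin version, as the answer notes.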
