JMeter script that loops and increases throughput until failure

Curious if anyone has ideas on how I might create a JMeter script that loops the scenario while increasing throughput and load until an error is received.
Any guidance would be appreciated.

You can go about it the following way:
Set your test to run "Forever" at the Thread Group level
Set "Action to be taken after a Sampler error" to "Stop Test"
Add a Constant Throughput Timer to your Test Plan with a very low initial value, e.g. 60 requests per minute (= 1 request per second)
Despite the word "Constant" in its name, the Constant Throughput Timer's value can be changed on the fly, for example by setting its Target throughput to a property expression like ${__P(throughput,60)} and updating that property while the test runs; see the official documentation for an example.
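A minimal sketch of that on-the-fly increase (my own illustration, not from the official documentation): assume the timer's Target throughput is set to ${__P(throughput,60)} and a JSR223 element runs once per loop iteration. Groovy is the more common scripting choice, but the jython engine (requires the Jython jar on JMeter's classpath) keeps the example in Python:

# JSR223 sketch (jython engine assumed): raise the global "throughput"
# property by 60 requests/minute (= 1 request/second) on every iteration.
# "props" is the standard JSR223 binding for JMeter's properties.
current = float(str(props.getProperty("throughput", "60")))
props.put("throughput", str(current + 60))

With "Action to be taken after a Sampler error" set to "Stop Test", the load keeps climbing until the first failure ends the run.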


Wrk vs Gatling benchmark test comparison

With wrk, I run the following command:
wrk -t10 -c10 -d30s http://localhost:8080/myService --latency -H "Accept-Encoding: gzip"
As a result, I obtain Requests/sec: 15000 and no error
I am trying to reproduce the same kind of test with Gatling. So I have tried the following :
scn.inject(
  rampUsersPerSec(1) to 15000 during (30 seconds)
)
But as a result, I obtain errors :
---- Errors --------------------------------------------------------------------
i.n.c.AbstractChannel$AnnotatedSocketException: Can't assign requested address: localhost/127.0.0.1:8080    573 (42,44%)
i.n.c.AbstractChannel$AnnotatedSocketException: Resource temporarily unavailable: localhost/0:0:0:0:0:0:0:1:8080    530 (39,26%)
j.i.IOException: Premature close    247 (18,30%)
From wrk, I believe my server can handle 15,000 requests/s, but with Gatling it seems that's not the case. Do you have an idea why there is such a difference?
Disclaimer: Gatling's creator here
You're comparing apples and oranges.
With wrk, you're opening 10 connections and looping as fast as possible during 30s.
With your current Gatling setup, you're spawning 225,015 virtual users ((1 + 15,000) / 2 * 30), each one trying to open its own connection.
I recommend reading this article about picking injection profiles that make sense for your use case.
If you really want to do the same thing as wrk here, you need to wrap your scenario in a during(30) loop and change your injection profile to atOnceUsers(10).
You also have the option of using a shared connection pool.
Beyond that, you can't expect any other load testing tool to be as fast as wrk for this kind of logicless, static test.
Also note that:
there was a mistake in Gatling's JVM configuration, fixed in Gatling 3.4.0, that hurt performance in this kind of minimalistic, super-high-throughput test; see the issue
Gatling runs on a JVM, hence with a runtime that needs to warm up, so throughput right after boot will be lower than once the JVM is warm

How to fix System.LimitException: Apex CPU time limit exceeded caused by workflows with email alert?

I'm trying to execute a batch test method on 100 records and I get a CPU Runtime Limit error.
I placed the Limits.getCpuTime() method in my code and noticed that my code without the workflow segment takes 3148 ms to complete. However, when I activate two workflows that send emails to one user each, I get the CPU runtime limit error. In total, my process without those two workflows takes around 10 seconds to complete, while with them activated it takes around 20 seconds.
@IsTest
static void returnIncClientAddress(){
    // Select required records
    User incidentClient = [SELECT Id FROM User WHERE Username = 'bbaggins@shire.qa.com' LIMIT 1];
    BMCServiceDesk__Category__c category = [SELECT Id FROM BMCServiceDesk__Category__c WHERE Name = 'TestCategory'];
    BMCServiceDesk__BMC_BaseElement__c service = [SELECT Id FROM BMCServiceDesk__BMC_BaseElement__c WHERE Name = 'TestService'];
    BMCServiceDesk__BMC_BaseElement__c serviceOffering = [SELECT Id FROM BMCServiceDesk__BMC_BaseElement__c WHERE Name = 'TestServiceOffering'];
    // Create 100 incidents
    List<BMCServiceDesk__Incident__c> incidents = new List<BMCServiceDesk__Incident__c>();
    for(Integer i = 0; i < 100; i++){
        BMCServiceDesk__Incident__c incident = new BMCServiceDesk__Incident__c(
            BMCServiceDesk__FKClient__c = incidentClient.Id,
            BMCServiceDesk__FKCategory__c = category.Id,
            BMCServiceDesk__FKServiceOffering__c = serviceOffering.Id,
            BMCServiceDesk__FKBusinessService__c = service.Id,
            BMCServiceDesk__FKStatus__c = awaiting_for_handling // Id of the status record, assumed to be defined elsewhere in the test class
        );
        incidents.add(incident);
    }
    Test.startTest();
    insert incidents;
    Test.stopTest();
}
I expected the email workflows and alerts to be processed in batch and sent without being so expensive in CPU time, but it seems that Salesforce takes a lot of time both checking the workflow rules and executing them when needed. The majority of the process's time seems to be spent on sending the workflows' emails (which it doesn't actually do, because it's a test method).
There's not much you can do to control the execution time of Workflow Rules. You could try converting them into Apex and benchmarking to see whether that results in improvement in time consumed, but I suspect the real solution is that you're going to have to dial down your bulk test.
The CPU limit for a transaction is 10 seconds. If your unit test code is already taking approximately 10 seconds to complete without Workflows (I'm not sure exactly what bounds your 3148 ms and 10 s refer to), you've really got only two choices:
Make the sum total of automation running on insert of this object faster;
Reduce the quantity of data you're processing in this unit test.
It's not clear what you're actually testing here, but if it's an Apex trigger, you should make sure that it's properly bulkified and does not consume unnecessary CPU time, including through trigger recursion. Reviewing the call stack in your logs (or simply adding System.debug() statements) may help with that.
Lastly - make sure you write assertions in your test method. Test methods without assertions are close to worthless.
Are there triggers on BMCServiceDesk__Incident__c or on objects modified by the workflows? Triggers on updates could possibly cause the code to execute multiple times in the same execution context, causing you to hit the CPU limit. Consider preventing re-entry into triggers, or performing a check so that triggers only run when specific criteria are met.
Otherwise, consider refactoring the code to have work executed within the same loop where possible, as loops (especially nested loops) drive up your CPU usage. Usually workflows on their own don't drive up CPU time unless triggers are executed due to workflow field updates.

Constant Timer delays the thread more than is set

I have a Test Plan with 10 requests. Just the requests, without a Constant Timer, take about 18 seconds. When I add one Constant Timer with a 1000 millisecond delay after the third request, it takes about 28 seconds.
Is it a problem with JMeter, or am I doing something wrong?
I'm running on Ubuntu (elementary OS) with JMeter v2.11 r1554548.
I'm testing another server, not my own laptop.
In the Test Plan I'm using an HTTP Cache Manager, an HTTP Cookie Manager and HTTP Request Defaults at the beginning; one request with a POST action; and a Summary Report, Graph Results, View Results in Table and a Simple Data Writer at the end of the Test Plan.
Everything is in one thread.
The position of a Timer in the tree has no impact; it does not execute where it is located. In fact, it applies to every child request of the Timer's parent. If your timer is scoped to all 10 samplers, its 1000 ms delay applies to each of them, which would account for the extra ~10 seconds you see.
Read this:
http://jmeter.apache.org/usermanual/test_plan.html
4.10 Scoping Rules

Which NDB query function is more efficient to iterate through a big set of query results?

I use NDB for my app and use iter() with a limit and starting cursor to iterate through 20,000 query results in a task. A lot of the time I run into a timeout error:
Timeout: The datastore operation timed out, or the data was temporarily unavailable.
The way I make the call is like this:
results = query.iter(limit=20000, start_cursor=cursor, produce_cursors=True)
for item in results:
    process(item)
save_cursor_for_next_time(results.cursor_after().urlsafe())
I can reduce the limit, but I thought a task can run as long as 10 minutes, and 10 minutes should be more than enough time to go through 20,000 results. In fact, on a good run, the task can complete in just about a minute.
If I switched to fetch() or fetch_page(), would they be more efficient and less likely to run into the timeout error? I suspect there's a lot of overhead in iter() that causes it.
Thanks.
Fetch is not really any more efficient; they all use the same underlying mechanism. The exception is if you know how many entities you want upfront - then fetch can be more efficient, as you end up with just one round trip.
You can increase the batch size for iter; that can improve things. See https://developers.google.com/appengine/docs/python/ndb/queryclass#kwdargs_options
From the docs, the default batch size is 20, which for 20,000 entities means a lot of batches.
Other things that can help: consider using map and/or map_async for the processing, rather than explicitly calling process(entity). Have a read of https://developers.google.com/appengine/docs/python/ndb/queries#map - introducing async into your processing can mean improved concurrency. See the sketch below.
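A rough illustration of both suggestions (my sketch, not the answerer's code; it reuses the query, cursor and process() from the question):

# Sketch: larger batches mean fewer Datastore round trips (default is 20).
results = query.iter(limit=20000, batch_size=500,
                     start_cursor=cursor, produce_cursors=True)
for item in results:
    process(item)

# Or let ndb drive the iteration; map_async overlaps the RPCs for you.
future = query.map_async(process, limit=20000, batch_size=500)
future.get_result()  # waits for all callbacks to finish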
Having said all of that, you should profile so you can understand where the time is spent. For instance, the delays could be in your process(item), due to things you are doing there.
There are other things to consider with ndb, like context caching - you need to disable it here. I also used the iter method for these cases, and I made an ndb version of the mapper API from the old db module.
Here is my ndb mapper API, which should solve the timeout problem, handle ndb caching, and make it easy to create this kind of thing:
http://blog.altlimit.com/2013/05/simple-mapper-class-for-ndb-on-app.html
With this mapper API you can write something like the following (and improve on it as you like):
class NameYourJob(Mapper):

    def init(self):
        self.KIND = YourItemModel
        self.FILTERS = [YourItemModel.send_email == True]

    def map(self, item):
        # here is your process(item)
        item.send_email = False
        self.update(item)

# Then run it like this
from google.appengine.ext import deferred
deferred.defer(NameYourJob().run, 50,  # <-- this is your batch size
               _target='backend_name_if_you_want', _name='a_name_to_avoid_dups')
For potentially long query iterations, we use a time check to ensure slow processing can be handled. Given the disparities in GAE infrastructure performance, you will likely never find an optimal processing number.
The code excerpt below is from an online maintenance handler we use, which generally runs within ten seconds. If it doesn't, we get a return code saying it needs to be run again, thanks to our timer check. In your case, you would likely break the process after passing the cursor to your next queue task (there is a sketch of that after the excerpt).
Here is some sample code, edited down to hopefully give you a good idea of our logic. One other note: you may choose to break this up into smaller bites and then fan out the smaller tasks by re-enqueueing the task until it completes. Doing 20k things at once seems very aggressive in GAE's highly variable environment. HTH -stevep
def over_dt_limit(start, milliseconds):
    dt = datetime.datetime.now() - start
    mt = float(dt.seconds * 1000) + (float(dt.microseconds) / float(1000))
    if mt > float(milliseconds):
        return True
    return False

# set a start time
start = datetime.datetime.now()

# handle a timeout issue inside your query iteration
for item in query.iter():
    # do your loop logic
    if over_dt_limit(start, 9000):
        # your specific time-out logic here
        break
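Tying that back to the question, here is a minimal sketch (my own, with a hypothetical handler name process_batch) of breaking on the time check and re-enqueueing with the saved cursor; it reuses over_dt_limit from the excerpt and the query and process() from the question:

import datetime
from google.appengine.ext import deferred, ndb

def process_batch(cursor_urlsafe=None):
    # resume the iteration where the previous task left off
    start = datetime.datetime.now()
    cursor = ndb.Cursor(urlsafe=cursor_urlsafe) if cursor_urlsafe else None
    results = query.iter(batch_size=500, start_cursor=cursor,
                         produce_cursors=True)
    for item in results:
        process(item)
        if over_dt_limit(start, 9000):
            # out of budget: hand the rest to a fresh task and exit cleanly
            deferred.defer(process_batch, results.cursor_after().urlsafe())
            return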

Why does the task name contain "now / 30"?

In the video/PDF from "Data pipelines with Google App Engine" Brett puts "now / 30" into the task name noting that he will explain the reason later, but somehow he never does. :)
http://www.youtube.com/watch?v=zSDC_TU7rtc#t=41m35
task_name = '%s-%d-%d' % (sum_name, int(now / 30), index)
Do you have any idea about the reason? Does it have anything to do with the 7 day period in which one can't re-use task names?
Link to the session page
Brett Slatkin's own explanation
[Brett]
Hey all,
The int(time.time()/30) part of the task name is to prevent queue stalls. When memcache gets evicted the work index counter will be reset to zero. That means new fork-join work items may insert tasks that are named the same as tasks that were already inserted. By including a time window of ~30 seconds in the task name, we ensure that this problem can only last for about thirty seconds. This is also why you should raise an exception when you see a TombstonedTaskError exception.
Worst-case scenario if the clocks are wonky is that two tasks are run to do the fan-in work instead of just one, which is an acceptable trade-off in many cases and a fundamental possibility when using the task queue API. This can be mitigated using pigeon-hole acknowledgment entities, like I use in my materialized view example.
Hope that helps,
[/Brett]
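For reference, a small sketch of the pattern Brett describes - the ~30-second window in the task name plus the TombstonedTaskError handling. The helper get_work_index() and the fan-in URL are hypothetical names; sum_name and index come from the snippet above:

import time
from google.appengine.api import taskqueue

index = get_work_index()  # hypothetical memcache-backed work index counter
task_name = '%s-%d-%d' % (sum_name, int(time.time() / 30), index)
try:
    taskqueue.add(name=task_name, url='/work/fan_in', params={'index': index})
except taskqueue.TaskAlreadyExistsError:
    pass  # another writer already inserted this fan-in task; that's fine
except taskqueue.TombstonedTaskError:
    raise  # name was used and deleted within ~7 days; re-raise so work isn't silently lost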
