Is there a way in JMeter to execute a sample just before thread shutdown?
For example, I have a test plan that inserts data into a database and autocommit is disabled on the connection. Each thread spawns its own connection to the database. The plan runs on a schedule (i.e. I don't know the sample count in advance) and I want to commit all inserted rows at the end of the test. Is there a way to do that?
The easiest option is to use a tearDown Thread Group, which is designed for performing clean-up actions after the main test.
The harder way is to add a separate Thread Group with 1 thread and 1 iteration and 1 JSR223 Sampler with the following Groovy code:
class ShutdownListener implements Runnable {
    @Override
    public void run() {
        // your code which needs to be executed before the test ends
    }
}

// register the listener as a JVM shutdown hook so it fires when JMeter shuts down
Runtime.getRuntime().addShutdownHook(new Thread(new ShutdownListener()))
Try running the commit sampler based on an If Controller condition on the test duration or the iteration number.
For example, if you are supposed to run 100 iterations:
An If Controller with the condition
${__groovy(${__iterationNum} == 100)}
should help.
OK, this might not be the most optimal approach, but it could be workable.
Add the following code in a JSR223 Sampler inside a Once Only Controller:
def scenarioStartTime = System.currentTimeMillis()
def timeLimit = ctx.getThreadGroup().getDuration() - 10 // time limit (in seconds) at which to execute the commit sampler
vars.put("scenarioStartTime", scenarioStartTime.toString())
vars.put("timeLimit", timeLimit.toString())
Now, after your DB insert sampler, add an If Controller with the following condition and place the commit sampler under it.
${__groovy(System.currentTimeMillis()-Long.valueOf(vars.get("scenarioStartTime"))>=Long.valueOf(vars.get("timeLimit"))*1000)}
This condition should let you execute the commit sampler just before the end of the test duration.
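For reference, the commit sampler itself could be another JSR223 Sampler. A minimal Groovy sketch, assuming each thread opened its own java.sql.Connection earlier in the test and stored it with vars.putObject("conn", connection) (the variable name "conn" is made up for illustration):

import java.sql.Connection

// fetch the per-thread connection that was stored earlier in the test
def conn = (Connection) vars.getObject("conn")

if (conn != null && !conn.isClosed()) {
    conn.commit() // flush every row inserted by this thread
    log.info("Committed pending inserts for thread " + ctx.thread.threadName)
}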
While integration testing, I am attempting to test the execution of a stored procedure. To do so, I need to perform the following steps:
Insert some setup data
Execute the stored procedure
Find the data in the downstream repo that is written to by the stored proc
I am able to complete all of this successfully; however, after completion of the test only the rows written by the stored procedure are rolled back. The rows inserted via the JdbcAggregateTemplate are not rolled back. Obviously I can delete them manually at the end of the test, but I feel like I must be missing something in my configuration (perhaps in the @Transactional or @Rollback annotations).
@SpringBootTest
@Transactional
@Rollback
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
class JobServiceIntegrationTest @Autowired constructor(
    private val repo: JobExecutorService,
    private val template: JdbcAggregateTemplate,
    private val generatedDataRepo: GeneratedDataRepo,
) {
    @Nested
    inner class ExecuteMyStoredProc {
        @Test
        fun `job is executed`() {
            // arrange
            val supportingData = supportingData()
            // act
            // this data does not get rolled back but I would like it to
            val expected = template.insert(supportingData)
            // this data does get rolled back
            repo.doExecuteMyStoredProc()
            val actual = generatedDataRepo.findAll().first()
            assertEquals(expected.supportingDataId, actual.supportingDataId)
        }
    }

    fun supportingData() : SupportingData {
        ...
    }
}
If this were all done as part of a single physical database transaction, I would expect the inner transactions to be rolled back when the outer transaction rolls back. Obviously this is not that, but that's the behavior I'm hoping to emulate.
I've made plenty of integration tests and all of them roll back as I expect, but typically I'm just applying some business logic and writing to a database, nothing as involved as this. The only thing unique about this test compared to my others is that I'm executing a stored proc (and the stored proc contains transactions).
I'm writing this data to a SQL Server DB, and I'm using Spring JDBC with Kotlin.
Making my comment into an answer since it seemed to have solved the problem:
I suspect the transaction in the SP commits the earlier changes. Could you post the code for a simple SP that causes the problems you describe, so I can play around with it?
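As an illustration of that suspicion (the procedure below is hypothetical, not the one from the question): in SQL Server a COMMIT inside a stored procedure decrements @@TRANCOUNT, and if that brings the count to zero the surrounding transaction, here the one Spring opened for the test, is committed, so the setup rows inserted before the call can no longer be rolled back.

-- hypothetical procedure sketching the suspected behaviour
CREATE PROCEDURE dbo.DoExecuteMyStoredProc
AS
BEGIN
    -- If the test's transaction is the only one open (@@TRANCOUNT = 1),
    -- this COMMIT ends it and permanently commits the rows the test inserted beforehand.
    COMMIT TRANSACTION;

    -- Work done from here on sits in a fresh transaction, and that is what the
    -- test framework's rollback then appears to undo at the end of the test.
    BEGIN TRANSACTION;
    INSERT INTO dbo.GeneratedData (SupportingDataId)
        SELECT SupportingDataId FROM dbo.SupportingData;
END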
I'm having a bit of a problem getting a stored procedure to execute from an Azure timer-triggered function and am struggling to find out why.
I have a bunch of small stored procedures that do some row updates and inserts based on some logic; they are executed using Azure timer-triggered functions with no issues, however one of them misbehaves.
The misbehaving stored procedure is the adaptive index defragmentation one from here: https://github.com/microsoft/tigertoolbox/tree/master/AdaptiveIndexDefrag
It takes about a minute to run and it is scheduled to run during the night on a daily basis.
Here is the code of the function responsible for executing the above sproc:
public static class MyFunction {
    [FunctionName("MyFunction")]
    public static async Task Run([TimerTrigger("0 0 0 */1 * * ")] TimerInfo myTimer, ILogger log) {
        await using var conn = new SqlConnection("connection string");
        await using var command = new SqlCommand("do_the_thing", conn) {CommandType = CommandType.StoredProcedure};
        try {
            command.Connection.Open();
            var result = await command.ExecuteScalarAsync();
            log.LogInformation($"Query result: {result}");
        }
        catch (Exception ex_) {
            log.LogError(ex_, "OH NO!");
        }
        log.LogInformation("Went smoothly");
    }
}
The result value is -1 and Azure Monitor says the query executed successfully, but looking at the logs and the load on the SQL Server side, the stored procedure has not been run.
All other functions running smaller sprocs reuse the above code. I have made a test function with a test sproc using the exact same code and it works fine; however, executing the index defrag one always fails, yet no error or exception is given. Everything looks fine; the only indicator that the sproc failed to run is the duration of the Azure function executing it, which is always too short, at most 2.5 seconds when it should be over a minute.
Any help will be appreciated.
In the end you tried passing the @debugMode = 1 parameter and, via the SqlConnection.InfoMessage event, got a message that the code has only limited permissions to do some of the tasks. Once the permissions were granted, you could successfully execute the stored procedure asynchronously via the timer-triggered Azure function.
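A minimal sketch of that diagnostic step, assuming Microsoft.Data.SqlClient and a @debugMode parameter on the procedure (as mentioned above); subscribing to SqlConnection.InfoMessage forwards the procedure's informational messages to the function log, which is what made the permission problem visible:

using System.Data;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Data.SqlClient;
using Microsoft.Extensions.Logging;

public static class MyFunctionDebug {
    [FunctionName("MyFunctionDebug")]
    public static async Task Run([TimerTrigger("0 0 0 */1 * *")] TimerInfo myTimer, ILogger log) {
        await using var conn = new SqlConnection("connection string");

        // surface PRINT / informational messages raised inside the procedure
        conn.InfoMessage += (sender, e) => log.LogInformation($"SQL message: {e.Message}");

        await using var command = new SqlCommand("do_the_thing", conn) {CommandType = CommandType.StoredProcedure};
        command.Parameters.AddWithValue("@debugMode", 1);
        command.CommandTimeout = 0; // the defrag run takes minutes; the default 30 s timeout would cut it short

        await conn.OpenAsync();
        await command.ExecuteNonQueryAsync();
    }
}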
I use the submitJob function to get the jobId and I try to cancel the job by using the cancelJob function, but I failed to stop the job. What function should I use to stop the job?
I use the code below:
submitJob("aa", "a1", replay, [ds], [sink], date, `time, 10)
cancelJob(aa)
The submitJob function returns the actual jobId, which might be different from the jobId you passed in, so use the jobId returned by submitJob.
DolphinDB uses a thread pool to run jobs, so if the job is simple and contains no sub-tasks, we still can't cancel it.
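A minimal sketch of the corrected call, reusing the arguments from the question:

jobId = submitJob("aa", "a1", replay, [ds], [sink], date, `time, 10)
cancelJob(jobId)   // pass the jobId returned by submitJob, not the name you submitted
getRecentJobs()    // optionally check the job's status afterwards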
I have a test plan with 10 requests. Just the requests, without a Constant Timer, take about 18 seconds. When I add one Constant Timer with a 1000 millisecond delay after the third request, it takes about 28 seconds.
Is it a problem with JMeter or am I doing something wrong?
I'm running on Ubuntu (elementary OS) with JMeter v2.11 r1554548.
I'm testing another server, not my own laptop.
In the JMeter test plan I'm using an HTTP Cache Manager, HTTP Cookie Manager and HTTP Request Defaults at the beginning, one request with a POST action, and a Summary Report, Graph Results, View Results in Table and Simple Data Writer at the end of the test plan.
Everything is in one thread.
The position of a timer element has no impact; a timer does not execute at the point where it is placed.
In fact it applies to every sampler in the scope of its parent, so your 1000 ms Constant Timer adds a delay before each of the 10 requests, which accounts for the roughly 10 extra seconds.
Read this:
http://jmeter.apache.org/usermanual/test_plan.html
4.10 Scoping Rules
I am building a job in Talend that queries a restful service. In the job, I initiate a job and get a job ID back. I then query a status service, and need to wait for the job to complete. How would I go about doing this in Talend? I have been playing around with tLoop, tFlowToIterate, tIterateToFlow and tJavaRow components to try get this to work, but am not sure how to configure it.
Here's a summary of what I'm trying to do:
1. tRest: Start a job and get job ID
|
--> 2. tRest: Poll status of job
|
--> 3. tUnknown?: If the job is running, sleep and re-run Step 2.
|
--> 4. tRest: when the job is complete, retrieve the results
How would I set up step 3 above?
Basically you want something like
tInfiniteLoop --iterate--> (subjob for querying the service and determining if the result is ready) --if (result is ready)--> (subjob for fetching the result) --on subjob ok--> tJava with "counter_tInfiniteLoop_1 = -1;" to leave the loop (I don't know of a better alternative)
I would advise implementing a timeout or a maximum number of lookups, and maybe even an automatically increasing sleep time; a sketch of such a wait step is below.
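A small sketch of what the wait step between polls could look like in a tJava component (the globalMap key "pollAttempt" and the limits are made up for illustration, not Talend built-ins):

// plain Java inside a tJava component, executed on every loop iteration
int attempt = (Integer) globalMap.getOrDefault("pollAttempt", 0);

if (attempt >= 20) { // hypothetical maximum number of lookups
    throw new RuntimeException("Job did not finish after " + attempt + " polls");
}

// exponential back-off: 1 s, 2 s, 4 s, ... capped at 60 s
long sleepMillis = Math.min(60_000L, 1_000L * (1L << attempt));
try {
    Thread.sleep(sleepMillis);
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}

globalMap.put("pollAttempt", attempt + 1);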