My requirement was that whenever there is an error during the JMeter execution, there should be a delay before the next iteration starts.
"Start Next Thread Loop" is selected at the Thread Group level as the action to be taken after a Sampler error, so JMeter starts a new iteration whenever an error occurs during execution.
For this purpose I have used a BeanShell Timer at the start of the iteration. The following code adds the delay if the response code is anything other than "200":
String code = prev.getResponseCode();
if (!"200".equals(code)) { // compare string values, not references
    log.info("::::::::::::::::::::::::::::Response Code = " + code);
    log.info("sleep for 10 sec");
    return 10000;
} else {
    return 0;
}
Please let me know if there are any better ways of doing this.
I believe prev.getResponseCode() can also be used to perform any kind of cleanup task in case there is an error.
For example, if a user logs into the application and gets an error before logging out, we can check at the start of the next iteration whether the previous response code indicates an error and, if so, log the user out of the application.
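A rough sketch of that idea (assuming a BeanShell/JSR223 element at the start of the iteration; the needLogout flag and the If Controller that would read it are hypothetical):
String code = prev.getResponseCode(); // prev is the previous SampleResult
if (!"200".equals(code)) {
    log.info("Previous iteration ended with response code " + code + ", scheduling a logout");
    vars.put("needLogout", "true"); // an If Controller checking "${needLogout}" == "true" can then run the logout sampler
}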
You can do this using:
An If Controller that checks whether the last response was in error, using
${__jexl2("${JMeterThread.last_sample_ok}" == "false",)}
A Test Action configured to Start Next Thread Loop
The Test Plan would have roughly the following layout (sampler names are placeholders):
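Thread Group
    Your Sampler(s)
    If Controller        Condition: ${__jexl2("${JMeterThread.last_sample_ok}" == "false",)}
        Test Action      Action: Start Next Thread Loop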
Background:
I want to perform a test where I need to check the time taken to transfer data from one API to another API.
For example:
/API_1 is sending data.
/API_2 gets the data from API_1. Not all of the data is received at once; it arrives in chunks and takes time.
So I want to record this delay.
I am trying this:
val ExeScn = scenario("CheckDelay")
  .exec(http("XX-Post")
    .post("src/v1/req")
    .header("Authorization", "ABCD")
    .body(StringBody(postBody))
    .asJSON
    .check(jsonPath("$.data.name[*].address[*].property").findAll.is("Completed"))
  )
How do I apply a loop here?
I want to keep running scenario("CheckDelay") until the below check passes:
.check(jsonPath("$.data.name.address.property").find.is("Completed"))
loop: asLongAs
condition for the loop: isUndefined()
check: a simple find with a notExists
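Put together, a rough sketch of that shape (the session key propertyStatus and the 1-second pause are illustrative; postBody is assumed to be defined as in your scenario):
val ExeScn = scenario("CheckDelay")
  .asLongAs("${propertyStatus.isUndefined()}") {
    exec(http("XX-Post")
      .post("src/v1/req")
      .header("Authorization", "ABCD")
      .body(StringBody(postBody))
      .asJSON
      .check(jsonPath("$.data.name.address.property")
        .find
        .saveAs("propertyStatus"))) // saveAs only fires once the value is present; earlier polls are reported KO
      .pause(1)                     // small pause between polls
  }
The loop keeps issuing the request while propertyStatus is undefined in the session and exits on the first iteration where the check extracts a value.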
About 6 months or so ago I started developing an automated test suite, using Protractor, for an AngularJS application developed by my company.
After I had been working on this for a couple of months, some other work came up which I had to prioritise over the test suite, so I haven't looked at it since around the end of November/December last year.
When I stopped working on it, I made sure that everything I had done up to that date was in a fully working state (commented out or removed the tests I had started but hadn't finished, etc.), and committed that git branch. At that point, I was able to run all of the tests I had written up until then using the command protractor conf.js, and I could see that they were all passing as expected.
I recently checked my testing branch out again, as I have a day or two in between other projects and thought I could make use of the time by working on the testing again.
The first thing I did once I had checked out the testing branch was to try running my test scripts again, to ensure everything I had implemented so far was still working.
However, while most of the tests do still pass, a few of them are now failing due to timeouts, even though I had made sure that all of the timing elements were working correctly before I shelved the testing for a while.
I have tried increasing the times that my tests sleep or wait for things at the points at which they are failing, but this doesn't seem to have made a difference.
The particular tests that are now failing due to timeouts are:
1.
it('should navigate to the Config/Platform page & check the values are all correct', function() {
browser.waitForAngularEnabled(false);
browser.actions().mouseMove(configMenuBtn).perform();
browser.wait(EC.visibilityOf(pageConfigPlatformBtn), 8000).then(browser.sleep(5000).then( /*ERF(14/11/2017 # 1630) browser.sleep() required to give DialogMgr service time to complete */
pageConfigPlatformBtn.click().then(function(){
browser.sleep(10000); /*ERF(14/11/2017 # 1640) This line is needed- because of how the form HTML is created, it needs time to be replaced by configured HTML that is displaying the required fields */
var eth0Mode = element(by.model('modelinstances.platform.eth_0_mode'));
var eth0Address = element(by.model('modelinstances.platform.static_ip.eth_0_address'));
var eth0Netmask = element(by.model('modelinstances.platform.static_ip.eth_0_netmask'));
var eth0gateway = element(by.model('modelinstances.platform.static_ip.eth_0_gateway'));
var eth1mode = element(by.model('modelinstances.platform.eth_1_mode'));
var eth1Address = element(by.model('modelinstances.platform.static_ip.eth_1_address'));
var eth1netmask = element(by.model('modelinstances.platform.static_ip.eth_1_netmask'));
var eth1gateway = element(by.model('modelinstances.platform.static_ip.eth_1_gateway'));
expect(browser.getCurrentUrl()).toMatch(moxaConfigPlatformUrlRegExpPattern);
expect(eth0Mode.getAttribute('value')).toBe("Static IP");
expect(eth0Address.getAttribute('value')).toBe("192.168.1.127");
expect(eth0Netmask.getAttribute('value')).toBe("255.255.255.0");
expect(eth0gateway.getAttribute('value')).toBe("192.168.1.1");
expect(eth1mode.getAttribute('value')).toBe("Static IP");
expect(eth1Address.getAttribute('value')).toBe("192.168.2.127");
expect(eth1netmask.getAttribute('value')).toBe("255.255.255.0");
expect(eth1gateway.getAttribute('value')).toBe("");
})));
})
The failure message for this test is:
App should navigate to the Config/Platform page & check the values are all correct
Message:
Failed: Wait timed out after 8002ms
Stack:
TimeoutError: Wait timed out after 8002ms
2.
it('should navigate to the Config/Date/Time page', function() {
browser.waitForAngularEnabled(false);
browser.actions().mouseMove(configMenuBtn).perform();
browser.wait(EC.visibilityOf(pageConfigDateTimeBtn), 2000).then(browser.sleep(1000).then( /*ERF(14/11/2017 # 1630) browser.sleep() required to give DialogMgr service time to complete */
pageConfigDateTimeBtn.click().then(function() {
expect(browser.getCurrentUrl()).toBe(VM + '/#/config/systemtime');
})));
})
The failure message for this test is:
App should navigate to the Config/Date/Time page
Message:
Failed: Wait timed out after 2023ms
Stack:
TimeoutError: Wait timed out after 2023ms
3.
it('should navigate to the Tag Browser page (final test)', function() {
console.log("Start final Tag Browser page test");
browser.waitForAngularEnabled(false);
browser.wait(EC.visibilityOf(pagesMenuBtn), 10000).then(
browser.actions().mouseMove(pagesMenuBtn).perform().then(
browser.wait(EC.visibilityOf(pageConfigDateTimeBtn), 2000).then(browser.sleep(1000)).then( /*ERF(14/11/2017 # 1650) browser.sleep() required to give DialogMgr service time to complete */
browser.wait(EC.visibilityOf(pageTagBrowserBtn), 12000).then(
pageTagBrowserBtn.click().then(
function() {
console.log("Tag Browser menu button clicked");
}).then(
browser.wait(EC.visibilityOf(tagBrowserPageTagsLink), 20000).then(
function(){
console.log("End Tag Browser page test (then call)");
expect(browser.getCurrentUrl()).toBe(VM + '/#/pages/tagbrowser');
}
)
)
)
)
)
);
});
The failure message for this test is:
App should navigate to the Tag Browser page (final test)
Message:
Failed: Wait timed out after 2009ms
Stack:
TimeoutError: Wait timed out after 2009ms
I have tried increasing the timeout values passed to the wait() calls, but this hasn't resolved the issue.
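For context, this is the kind of change I have been making (the values are arbitrary examples), along with the framework-level timeouts in conf.js that could also be raised:
// In the spec: give the explicit wait more headroom.
browser.wait(EC.visibilityOf(pageConfigPlatformBtn), 30000);

// In conf.js: standard Protractor/Jasmine timeout options.
exports.config = {
  // ...existing options...
  allScriptsTimeout: 30000,
  jasmineNodeOpts: {
    defaultTimeoutInterval: 60000
  }
};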
I have read in the past that automated tests can be quite flaky, and that changes to the environment in which they're run can cause them to fail. So I'm guessing it's possible that, because my computer will have changed since the tests were last run successfully (i.e. new software installed), this may be causing the tests to fail...?
Is there a 'best practice' method for resolving this sort of issue with automated testing, or is it just a case of having to go back and tweak my test scripts until they start passing again?
It's probably worth mentioning that all of my tests are written in one spec.js file, and that the tests which failed are the last 3 of the 18 test scripts in that file (i.e. the first 15 all still pass).
Does anyone have any ideas how I can resolve this and get my tests passing again?
I'm attempting to create a test plan in which, when a certain value is reached, some functionality happens. The test plan consists of multiple threads running with a loop, and when some condition is reached I'd like to fire an HTTP request.
I'll drill down to the guts of it:
In my test the logic runs in a loop across multiple threads. When a condition is met (the condition is met every 10 seconds), I need to iterate using a value that has to be carried over from the previous iteration. That value is defined as a property (inside user.properties): startIndex = 0 (initialized to 0).
So I've made a While Controller whose condition is this:
${__javaScript(${__P(startIndex,)}<=${currBulk},)}
And I expect the HTTP request inside the While Controller, which depends on the startIndex value, to be executed while startIndex <= currBulk.
Inside the While Controller the HTTP request should be fired until all indexes are covered, and I've written the increment like this inside a BeanShell PostProcessor:
int startIndexIncInt = Integer.parseInt(props.getProperty("startIndex")); //get the initiated index of the loop
startIndexIncInt = startIndexIncInt + 1; //increment it and see if needed to fire the request again, by the original While condition
vars.put("startIndexIncIntVar", String.valueOf(startIndexIncInt));
props.put("startIndex",vars.get("startIndexIncIntVar")); //the property incremental and update
I designed it this way so that the next time (after 10 more seconds) I'll have an updated startIndex that is compared to the new currBulk (which is continually updated by my test plan).
And I just can't get it to work. I keep receiving errors like:
startIndexIncInt = Integer.parseInt(props.ge . . . '' : Typed variable declaration : Method Invocation Integer.parseInt
Needless to say, the startIndexIncIntVar variable I defined isn't set either (I checked via a Debug Sampler).
Also, my problem isn't with the timing of entering the While Controller; my problems are with the variable that I should increment and use inside my HTTP request (the While condition and the BeanShell PostProcessor script).
Just for more info, if I wrote it as pseudo code it would look like this:
startIndex = 0
---- Test plan loop ----
------ test logic; currBulk is incremented throughout the test ------
if (time condition to enter the while) {
    while (startIndex <= currBulk) {
        send HTTP request (the request depends on the startIndex value)
        startIndex++
    }
}
Please assist.
It appears to be a problem with your startIndex property: I don't see any Beanshell script error and the code is good, so my expectation is that the startIndex property is unset or cannot be parsed as an integer. You can get much more information about the problem in your Beanshell script in two ways:
Add the debug() command to the beginning of your script - you will see a lot of debugging output in the console window.
Put your code inside a try block, like:
try {
int startIndexIncInt = Integer.parseInt(props.getProperty("startIndex")); //get the initiated index of the loop
startIndexIncInt = startIndexIncInt + 1; //increment it and see if needed to fire the request again, by the original While condition
vars.put("startIndexIncIntVar", String.valueOf(startIndexIncInt));
props.put("startIndex", vars.get("startIndexIncIntVar")); //the property incremental and update
} catch (Throwable ex) {
log.error("Beanshell script failure", ex);
throw ex;
}
This way you will be able to see the cause of the problem in the jmeter.log file.
Actually it appears that you are over-scripting, as incrementing a variable can be done with built-in components such as the Counter test element or the __counter() function. See the How to Use a Counter in a JMeter Test article for more information on the topic.
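For illustration, a minimal sketch of the __counter() alternative (the reference name startIndex is just an example):
${__counter(FALSE,startIndex)}
Here FALSE makes it a single counter shared by all threads; each evaluation of the function increments it and stores the current value in the startIndex variable, so no scripting is needed for the increment.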
I want to repeat a Volley request more than once.
Something like:
for (int i = 0; i < 10; i++) {
    RequestA {
        TextView.setText(TextView.getText + JsonArray[i].toString);
    }
}
When I run the program, the for loop runs 10 times, but the Volley request seems to run only once, with i=9, so the TextView prints just the content of JsonArray[i=9].
So my question is: why doesn't the Volley request also loop 10 times?
I don't know exactly how you use Volley, but I believe Volley does loop 10 times. The reason you only see JsonArray[i=9] is that you are overwriting the value in the TextView on every response. Instead of using the TextView, try logging the value and reading it from Logcat.
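For example, a rough sketch of a per-request response listener that logs to Logcat and appends instead of replacing the text (textView is assumed to be your existing TextView, the "VolleyLoop" tag is just an example, and the response type should match your actual request):
Response.Listener<JSONObject> listener = new Response.Listener<JSONObject>() {
    @Override
    public void onResponse(JSONObject response) {
        Log.d("VolleyLoop", "response: " + response); // each of the 10 responses shows up in Logcat
        textView.append(response.toString());         // append, so earlier responses are not overwritten
    }
};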
Stack community,
I'm using the eval() function in PHP so my users can execute their own code on my website (yes, I know it is a dangerous function, but that's not the point).
I want to store all the PHP errors that occur during the interpretation of the code. Is there a way to fetch all of them? I want to get them and register them in a table of my database.
error_get_last() gets only the last error, but I want all of them.
Help me, please. Is it even possible?
General
You cannot use eval() for this, as the evaled code will run in the current context, meaning that the evaled code can overwrite all the variables in your context. Aside from the security considerations, this could/would break functionality. Check this contrived example:
$mode = 'execute';
// here comes a common user code example; it will overwrite `$mode`
eval('
    $mode = "test";
    if (...) { ...
');
// here comes your code again; it will fail
switch ($mode) {
    ...
}
Error Tracking
You cannot track the errors this way. One method would be to use set_error_handler() to register a custom error handler which stores the errors in the DB. This would work, but what if the user calls restore_error_handler() in their own code? Check the following examples:
set_error_handler('my_handler');
function my_handler($errno, $errstr, $errfile, $errline) {
db_save($errstr, ...);
}
eval('
$a = 1 / 0; // will trigger a warning
echo $b; // variable not defined
'
);
This would work. But problems will arise if the evaled code looks like this:
eval('
restore_error_handler();
$a = 1 / 0; // will trigger a warning
echo $b; // variable not defined
'
);
Solution
A common solution that makes it possible for others to execute code on your servers is (see the sketch after this list):
store the user code in a temporary file
disable critical functions like fopen() ... via disable_functions in php.ini
execute the temporary PHP file with the php CLI and display the output (and errors) to the user
if you separate stderr from stdout when calling the php CLI, you can parse the error messages and store them in a DB
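For instance, a rough sketch of that flow (the file name, the ini override and the parsing/storage step are illustrative):
$userCode = 'echo $undefinedVariable;'; // example snippet that triggers a notice
$file = tempnam(sys_get_temp_dir(), 'usercode_') . '.php';
file_put_contents($file, "<?php\n" . $userCode);

$descriptors = array(
    1 => array('pipe', 'w'), // stdout: what you show back to the user
    2 => array('pipe', 'w'), // stderr: the PHP errors you want to store
);
$proc = proc_open('php -d display_errors=stderr ' . escapeshellarg($file), $descriptors, $pipes);
$output = stream_get_contents($pipes[1]);
$errors = stream_get_contents($pipes[2]); // parse these lines and insert them into your DB table
fclose($pipes[1]);
fclose($pipes[2]);
proc_close($proc);
unlink($file);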
According to the documentation, you just can't:
If there is a parse error in the evaluated code, eval() returns FALSE and execution of the following code continues normally. It is not possible to catch a parse error in eval() using set_error_handler().
EDIT: you can't do it with eval(), but you apparently can with the php_check_syntax() function. You have to write the code to a file in order to check its syntax.