I want to execute some test cases in my Selenium framework indefinitely, meaning they should run 24*7. I have searched a lot on Google but have been unable to find a solution. How can I achieve this kind of execution using TestNG?
If you want to do it through TestNG, then what @mackowski suggested should also work, though the reports will get overwritten on each run. If you want long-running tests rather than tests that literally run all the time, you can also set invocationCount to a high number.
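For reference, here is a minimal sketch of the invocationCount idea; the class name, method name, and the count below are arbitrary placeholders rather than a recommendation.
import org.testng.annotations.Test;

public class LongRunningTest {

    // TestNG will invoke this method 100000 times within a single run
    @Test(invocationCount = 100000)
    public void repeatedCheck() {
        // your Selenium steps go here
    }
}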
However, I think you should take advantage of Jenkins to schedule this job, say every 2 minutes, every hour, every day of the week; a simple build-trigger configuration (sketched below) will handle this for you.
Your reports would be saved for each run.
A failure in one run will not cause the whole execution to be aborted.
Plus, you may run out of memory if you do everything in one run.
Take your pick.
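If you go the Jenkins route, the 'Build periodically' trigger accepts a cron-style schedule. As an illustration only (the exact cadence is up to you), something like the following would kick the job off roughly every two minutes:
# Jenkins 'Build periodically' schedule - an assumed example, adjust as needed
H/2 * * * *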
There are a couple of ways of doing that. You need to run your tests in an infinite loop. One way of doing this is to write a simple Java program that runs your tests over and over again.
Here is some example code:
import org.testng.TestListenerAdapter;
import org.testng.TestNG;

public static void main(String[] args) {
    while (true) {
        // run the suite, collect the results, then immediately start over
        TestListenerAdapter tla = new TestListenerAdapter();
        TestNG testng = new TestNG();
        testng.setTestClasses(new Class[] { Run2.class });
        testng.addListener(tla);
        testng.run();
    }
}
You can find how to run TestNG programmatically here: http://testng.org/doc/documentation-main.html#running-testng-programmatically
I'm having trouble getting a class that implements Support.MilestoneTriggerTimeCalculator not to run every time I edit something on the case.
This class is called in the Entitlement Processes inside the milestone.
Can this be stopped? I only want it to run once, when the case is created, and not again.
I really need help with this problem.
Thanks!!
So, I've written a few Gatling tests and know how to write test setup for a max duration.
setUp(testScenario.inject(atOnceUsers(3))).maxDuration(5 minutes)
Now, I want to achieve something along these lines:
setUp(testScenario.inject(atOnceUsers(3))).maxRequests(1000 requests)
How should I approach that?
Here, instead of limiting by time, I want to limit my test setup by the number of requests executed.
Any assistance is appreciated. Thanks.
In general there is no maxRequests() option. You should think of each injected user as an actual user that independently executes some steps and finishes its work, rather than as a thread that executes steps in a loop. With that approach it is as simple as setting up a suitable injection strategy, e.g. inject(constantUsersPerSec(10) during(100 seconds)). This way you simulate actual user behaviour (real users are independent and do not rely on other users). Of course, there may be cases where you want to simulate users that make a lot of requests, but in that case you should write a scenario that executes a certain number of requests, e.g. with a repeat loop:
val floodingScenario = scenario("Flood").repeat(250) {
  // some execs here
}

setUp(
  floodingScenario.inject(
    atOnceUsers(4) // each of the 4 users executes the steps 250 times = 1000 executions in total
  )
)
In my AnyLogic project, I want to terminate the execution and run the simulation N times. In each simulation run I store my output in an Excel file, which depends on the run count.
Instead of stopping and restarting it by clicking, I want to do it automatically. How can I do that?
I tried to use an event with a while loop (myparm <= N) in which I called getEngine().run(), but it didn't work!
If it is possible, please help me.
Thanks
Below is an overview of how you can do this using AnyLogic's existing simulation experiment framework.
You need to make use of the Simulation experiment setup in order to run the model multiple times and save the output. My suggested setup is the following:
Have a button on your Simulation experiment page (the first page you see when running the model) that you will use to start off the multiple model runs. In its action code, set the engine not to run in real-time mode by using
getEngine().setRealTimeMode(false);
You might also want to set the initial seed and any other model parameters that you intend to vary and perhaps save after model execution. When you have set up the model the way you want, call run() to start running the model.
Now, on the Simulation experiment setup page, under the 'Java actions' section, you need to specify what must happen after a model run has finished. In the 'After simulation run' field, write some code to save the data from the model into your Excel files. To access variables and objects from the model, use root, e.g.
saveSomeData(root.myDataset);
where saveSomeData is a function on the Simulation page that saves the data set found in the model, called myDataset, to an Excel file. It would also be useful to save the seed and the specific parameters, if you changed any, to the Excel file for future reference.
Once you have saved the data output from the model, you can specify a new seed, perhaps change parameters again, and then call run() again to run the model for another iteration. When that run has finished it will again execute the 'After simulation run' code, so do put a stop condition here, otherwise it will just continue running one iteration after the other. You can access the number of model runs so far by using
getEngine().getRunCount()
Also, each model run needs its own stop condition, otherwise once it starts running it will never stop. You can specify this on the Simulation experiment page under the 'Model time' section, or programmatically in your model using
finishSimulation();
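Putting the pieces above together, the 'After simulation run' code could look roughly like the sketch below; maxRuns is a hypothetical variable and the seed/parameter handling is only an assumption, so adapt it to your own experiment.
int maxRuns = 100;                      // hypothetical target number of iterations
saveSomeData(root.myDataset);           // save this run's output to Excel, as described above
if (getEngine().getRunCount() < maxRuns) {
    // optionally set a new seed or change parameters here before the next iteration
    run();                              // start the next run
}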
In order to run the model cyclically, you can use the following code in the Action field of a timeout-triggered event or in the On destroy field of the top-level agent:
new Thread() {
    @Override
    public void run() {
        // stop the current model run
        getExperiment().stop();
        try {
            // short delay before restarting
            Thread.sleep(1000);
        } catch (Exception e) {
        }
        // start it again via the button on the Simulation page
        ((Simulation) getExperiment()).button.action();
    }
}.start();
The model results should be written to the Excel file before executing this code.
As Jaco-Ben suggested, you can use getEngine().getRunCount() in the condition that decides whether to restart the Simulation experiment, as in the sketch below.
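For illustration, assuming a hypothetical maxRuns limit defined on the experiment or agent, the restart call inside the thread above could be guarded like this:
// only restart while the desired number of runs has not been reached yet
if (getEngine().getRunCount() < maxRuns) {
    ((Simulation) getExperiment()).button.action();
}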
I am trying to run a simple test on multiple browsers; here is a mock-up of the code I've got:
String url = "http://www.anyURL.com";

// note: all three drivers are constructed as soon as this array is initialized,
// which is why all the browsers open up front
WebDriver[] drivers = { new FirefoxDriver(), new InternetExplorerDriver(),
        new ChromeDriver() };

@Test
public void testTitle() {
    for (int i = 0; i < drivers.length; i++) {
        // navigate to the desired url
        drivers[i].get(url);
        // assert that the page title starts with foo
        assertTrue(drivers[i].getTitle().startsWith("foo"));
        // close current browser session
        drivers[i].quit();
    } // end for
} // end test
For some reason this code is opening multiple browsers seemingly before the first iteration of the loop has completed.
What is actually happening here, and what is a good/better way to do this?
Please understand that I am by no means a professional programmer, and I am also brand new to using Selenium, so if what I am attempting is generally bad practice please let me know, but please don't be rude about it. I will respect your opinion much more if you are respectful in your answers.
No, it's not.
In fact, most test frameworks have convenient ways to handle sequential/parallel execution of tests. You can parametrize the test class to run the same tests on multiple browsers. TestNG has a @Parameters annotation which can be used together with the testng.xml suite file for cross-browser testing without duplicating the code. An example is shown here.
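As a rough illustration of that approach (not the original poster's exact setup), a parametrized test class might look like the sketch below; the class name, the browser values, and the "foo" title check are assumptions carried over from the question.
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Parameters;
import org.testng.annotations.Test;
import static org.testng.Assert.assertTrue;

public class CrossBrowserTest {

    private WebDriver driver;

    // the "browser" parameter is supplied per <test> block in testng.xml
    @Parameters("browser")
    @BeforeClass
    public void setUp(String browser) {
        if (browser.equalsIgnoreCase("chrome")) {
            driver = new ChromeDriver();
        } else {
            driver = new FirefoxDriver();
        }
    }

    @Test
    public void testTitle() {
        driver.get("http://www.anyURL.com");
        assertTrue(driver.getTitle().startsWith("foo"));
    }

    @AfterClass
    public void tearDown() {
        driver.quit();
    }
}
In testng.xml you would then define one <test> block per browser, each passing its own browser parameter, and TestNG runs the class once per block (in parallel if you configure it to).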
I would not do that.
Most of the time it is pointless to immediately run your tests against multiple browsers. Most of the problems you run into as you are developing new code or changing old code are not due to browser incompatibilities. Sure, those happen, but most of the time a test fails because, well, your logic is wrong, and it will not just fail on one browser but on all of them. What do you gain from being told X times rather than just once that your code is buggy? You've just wasted your time. I typically get the code working on Chrome and then run it against the other browsers.
(By the way, I run my tests against about 10 different combinations of OS, browser and browser version. Three combinations are definitely not enough for good coverage. IE 11 does not behave the same as IE 10, for instance; I know from experience.)
Moreover, the interleaving of tests from multiple browsers just seems generally confusing to me. I like one test report to cover only one configuration (OS, browser, browser version) so that, if there are any problems, I know exactly which configuration is problematic without having to untangle what failed on which browser.
I am executing 1000 tests using Selenium WebDriver.
For each test case I need to click the "ID" element on the web page.
I used driver.findElement(By.xpath("//*[@id='TEST']")).click();
But unfortunately, for a couple of test scenarios (2 or 3 out of 1000) it throws an error saying "Unable to find an element". The remaining test cases execute as usual.
I tried to use try/catch and refresh the page; the functionality then works, but performance is too slow.
Have you tried using the ExpectedConditions class (http://selenium.googlecode.com/svn/trunk/docs/api/java/org/openqa/selenium/support/ui/ExpectedConditions.html)?
It should be enough to just add a wait for elementToBeClickable before clicking the element, as sketched below.
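For example, a minimal sketch using WebDriverWait (the 15-second timeout and the locator are assumptions based on the question):
import org.openqa.selenium.By;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

// wait up to 15 seconds for the element to become clickable, then click it
WebDriverWait wait = new WebDriverWait(driver, 15);
WebElement idElement = wait.until(
        ExpectedConditions.elementToBeClickable(By.xpath("//*[@id='TEST']")));
idElement.click();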
What do you mean by "try/catch and refresh the page"?
Selenium drives a real web browser, so it can sometimes be unstable.
1000 tests take a lot of time, so make sure your computer doesn't fall asleep, and don't disturb its test process until it is done.
Some tests will fail if you minimize the browser.
I would recommend increasing your implicit wait. Something like:
driver.manage().timeouts().implicitlyWait(15, TimeUnit.SECONDS);