Is it possible to 'capture' or persist the time each unit test takes when running a team build on TFS 2010? Ideally it would be saved to a database (similar to how a load test can save its results to a result store).
Thanks in advance!
If you run Visual Studio unit tests during the build, you can choose to publish the test results to the server; later you can query the test run and its results to find out the duration of each test result.
The code to query the test results per build looks like this:
var tcmService = TeamProjectCollection.GetService<ITestManagementService>();
var tcmProject = tcmService.GetTeamProject(TeamProjectName);
ITestRun testRun = tcmProject.TestRuns.ByBuild(BuildUri).First();
ITestCaseResultCollection results = testRun.QueryResults();
foreach (ITestResult result in results)
{
    Console.WriteLine(result.Duration);
}
You will need to obtain the team project collection and know the team project name and the build URI. This code assumes that your build has only one published test run, though that is not always true, because you can publish other test runs to the same build after it has completed.
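As a minimal sketch of obtaining those pieces, assuming the TFS client assemblies are referenced (the collection URL, team project name, and build URI below are placeholder values you would replace with your own):
using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.TestManagement.Client;

// Placeholder values; substitute your own collection URL, project name, and build URI.
var teamProjectCollection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
    new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
string teamProjectName = "MyTeamProject";
Uri buildUri = new Uri("vstfs:///Build/Build/123");

// These feed directly into the query code above.
var testManagement = teamProjectCollection.GetService<ITestManagementService>();
var testProject = testManagement.GetTeamProject(teamProjectName);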
Hope this helps.
I know this might seem like a duplicate; I have found similar questions on SO about this topic, but none of them really worked for me. I simply need to clear out (or tear down) the database after each test, so that every test works with a new, empty one.
I am using a fixture and my code looks like this:
@pytest.fixture(scope="module", autouse=True)
def test_client_db():
    # set up
    app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///"
    with app.app_context():
        db.init_app(app)
        db.create_all()
    testing_client = app.test_client()
    ctx = app.app_context()
    ctx.push()
    # do the testing
    yield testing_client
    # tear down
    with app.app_context():
        db.session.remove()
        db.drop_all()
    ctx.pop()
I am new to pytest, and from what I have learnt, whatever goes before yield works as a sort of setup and whatever goes after works as the teardown. Yet when I run several tests, the database is not cleared for each test; it holds data between them.
Why is that? What is wrong with this fixture? What am I missing?
You have set the scope to module - this means the fixture will only be reset after all tests in the module have run.
Either set the scope to function or leave it out completely, as function is the default.
See https://docs.pytest.org/en/stable/fixture.html#fixture-scopes
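For example, changing only the decorator is enough (a sketch; the fixture body stays the same as in the question):
@pytest.fixture(autouse=True)  # scope defaults to "function": setup and teardown run around every test
def test_client_db():
    # same body as in the question: create_all() before the yield, drop_all() after
    ...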
Is it possible in Jenkins to create a job that will run n times?
I would like to write a script in the job configuration (Windows batch command / Groovy) which allows me to do this. In this script, I would like to have an array of parameters and then run the job with each parameter in a loop. It should look something like this:
paramArray[] = ["a", "b", "c"];
for (int i = 0; i < paramArray.length; i++)
{
    // Here I want to run this job with each parameter
    job.run(paramArray[i]);
}
Please help me with this issue.
I found the answer!
We need to create two pipelines in Jenkins: a downstream and an upstream job.
1. The downstream job is parameterized and takes one string parameter in the 'General' section.
Then it just prints the chosen parameter in the 'Pipeline' section.
Here is the result of this downstream job.
2. The upstream job has an array with all possible parameters for the downstream job.
In a loop, it runs the downstream job with each parameter from the array.
As a result, the upstream job will run the downstream job three times, once with each parameter.
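Since the original screenshots are unavailable, here is a minimal sketch of what the two pipeline scripts could look like (the job and parameter names are examples):
// Downstream job ('downstream-job'), parameterized with a string parameter PARAM
pipeline {
    agent any
    parameters {
        string(name: 'PARAM', defaultValue: '', description: 'Value passed from upstream')
    }
    stages {
        stage('Print') {
            steps {
                echo "Running with parameter: ${params.PARAM}"
            }
        }
    }
}

// Upstream job (scripted pipeline): runs the downstream job once per value in the array
def paramArray = ["a", "b", "c"]
for (p in paramArray) {
    build job: 'downstream-job', parameters: [string(name: 'PARAM', value: p)]
}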
:)
I don't think you can run a Jenkins job according to your above code, but you can configure a cron job in Jenkins using 'Build periodically' to run the job periodically.
Go to the Jenkins job > Configure > tick 'Build periodically' in Build Triggers,
and put in a cron schedule like the one below, then Save.
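The exact expression from the screenshot isn't available, but something along these lines gives the schedule described below:
H/15 * * * *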
This job runs every 15 minutes, and you can also set a specific time in the schedule.
Please see the example from https://jenkins.io/doc/book/pipeline/jenkinsfile/ in the "Handling parameters" section. With a Jenkinsfile like the one below (copied from that doc), you can launch "Build with parameters" and supply the params. Since you want multiple params, you can delimit them with ',' or ';' or something else based on your data; you just need to parse the input params using the delimiter you chose.
pipeline {
    agent any
    parameters {
        string(name: 'Greeting', defaultValue: 'Hello', description: 'How should I greet the world?')
    }
    stages {
        stage('Example') {
            steps {
                echo "${params.Greeting} World!"
            }
        }
    }
}
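As a sketch of the parsing idea, assuming a single string parameter named 'Items' that holds comma-separated values (the parameter name and delimiter are just examples):
pipeline {
    agent any
    parameters {
        string(name: 'Items', defaultValue: 'a,b,c', description: 'Comma-separated values')
    }
    stages {
        stage('Loop') {
            steps {
                script {
                    // Split on the chosen delimiter and act once per value
                    def items = params.Items.split(',')
                    for (item in items) {
                        echo "Processing ${item}"
                    }
                }
            }
        }
    }
}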
Based on the documentation for Objectify and Google Cloud Datastore, I would expect the queries and the batch loads in the following code to execute in parallel:
List<Iterable<Key<MyType>>> results = new ArrayList<>();
for (...) {
    results.add(ofy().load()
        .type(MyType.class)
        .filter(...)
        .keys()
        .iterable());
}
...
Iterable<MyType> keys = ...;
Collection<MyType> c = ofy().load().keys(keys).values();
But the trace makes it look like each query and each entity load executes in sequence.
What gives?
It looks like this only happens when doing a cached get from Memcache. With similar code I see the expected async behavior for datastore_v3.Get/Put/Delete.
It seems the reason for this is that Objectify doesn't use AsyncMemcacheService. Indeed, there is an open issue for this on the project page, and this can also be confirmed by checking out the source and doing a grep -r AsyncMemcacheService.
Regarding the serial datastore_v3.RunQuery calls: calls to ofy().load().type(...).filter(...).iterable() are 'asynchronous' in that they return immediately; however, the actual Datastore queries themselves get executed serially, as the App Engine Datastore API doesn't expose an explicitly async API for queries.
I have created tests for my application. Everything works, but it runs slowly: even though only a third of the application is tested, it still takes around ten minutes for Protractor to create the test data, fill out the fields, click the submit buttons, etc.
I am using Google Chrome for the testing. It seems slow as I watch Protractor fill out the fields one by one.
Here's an example of my test suite:
suites: {
    login: ['Login/test.js'],
    homePage: ['Home/test.js'],
    adminPage: ['Admin/Home/test.js'],
    adminObjective: ['Admin/Objective/test.js'],
    adminObjDetail: ['Admin/ObjectiveDetail/test.js'],
    adminTopic: ['Admin/Topic/test.js'],
    adminTest: ['Admin/Test/test.js'],
    adminUser: ['Admin/User/test.js'],
    adminRole: ['Admin/Role/test.js']
},
This is one test group:
login: ['Login/test.js'],
homePage: ['Home/test.js'],
adminUser: ['Admin/User/test.js'],
adminRole: ['Admin/Role/test.js']
This is another test group:
adminPage: ['Admin/Home/test.js'],
adminObjective: ['Admin/Objective/test.js'],
adminObjDetail: ['Admin/ObjectiveDetail/test.js'],
adminTopic: ['Admin/Topic/test.js'],
adminTest: ['Admin/Test/test.js'],
The two groups can run independently, but they must run in the order I have above. After the answers I did read about sharding, but I am not sure if this helps my situation, as my tests need to run in order. Ideally I would like to have one set of tests run in one browser and the other set in another browser.
I read about headless browsers such as PhantomJS. Does anyone have experience with these being faster?
Any advice on how I could do this would be much appreciated.
We currently use "shardTestFiles: true", which runs our tests in parallel; this could help if you have multiple test files.
I'm not sure what you are testing here, whether it's the data creation or the end result. If the latter, you may want to consider mocking the data creation instead, or bypassing the UI some other way.
Injecting Data
One thing that you can do that will give you a major boost in performance is to not double test. What I mean by this is that you end up filling in dummy data a number of times to get to a step. It's also one of the major reasons that people need tests to run in a certain order (to speed up data entry).
An example of this is if you want to test filtering on a grid (data-table). Filling in data is not part of this action; it's just an annoying thing that you have to do to get to testing the filtering. By calling a service to add the data you can bypass the UI and Selenium's general slowness (I'd also recommend this on the server side, by injecting values directly into the DB using migrations).
A nice way to do this is to add a helper to your pageobject as follows:
module.exports = {
    projects: {
        create: function(data) {
            return browser.executeAsyncScript(function(data, callback) {
                var api = angular.injector(['ProtractorProjectsApp']).get('apiService');
                api.project.save(data, function(newItem) {
                    callback(newItem._id);
                });
            }, data);
        }
    }
};
The code in this isn't the cleanest, but you get the general gist of it. Another alternative is to replace the module with a double or mock using Protractor#addMockModule. You need to add this code before you call Protractor#get(). It will load after your application services, overriding any existing service with the same name.
You can use it as follows :
var dataUtilMockModule = function () {
    // Create a new module which depends on your data creation utilities
    var utilModule = angular.module('dataUtil', ['platform']);
    // Create a new service in the module that creates a new entity
    utilModule.service('EntityCreation', ['EntityDataService', '$q', function (EntityDataService, $q) {
        /**
         * Returns a promise which is resolved/rejected according to entity creation success
         * @returns {*}
         */
        this.createEntity = function (details, type) {
            // This is your business logic for creating entities
            var entity = EntityDataService.Entity(details).ofType(type);
            var promise = entity.save();
            return promise;
        };
    }]);
};

browser.addMockModule('dataUtil', dataUtilMockModule);
Either of these methods should give you a significant speedup in your testing.
Sharding Tests
Sharding the tests means splitting up the suites and running them in parallel. This is quite simple to do in Protractor. Adding shardTestFiles and maxInstances to your capabilities config should allow you to (in this case) run at most two tests in parallel. Increase maxInstances to increase the number of tests run. Note: be careful not to set the number too high; browsers may require multiple threads, and there is also an initialisation cost in opening new windows.
capabilities: {
    browserName: 'chrome',
    shardTestFiles: true,
    maxInstances: 2
},
Setting up PhantomJS (from protractor docs)
Note: We recommend against using PhantomJS for tests with Protractor. There are many reported issues with PhantomJS crashing and behaving differently from real browsers.
In order to test locally with PhantomJS, you'll need to have it installed either globally or relative to your project. For a global install, see the PhantomJS download page (http://phantomjs.org/download.html). For a local install, run: npm install phantomjs.
Add phantomjs to the driver capabilities, and include a path to the binary if using local installation:
capabilities: {
    'browserName': 'phantomjs',
    /*
     * Can be used to specify the phantomjs binary path.
     * This can generally be omitted if you installed phantomjs globally.
     */
    'phantomjs.binary.path': require('phantomjs').path,
    /*
     * Command line args to pass to ghostdriver, phantomjs's browser driver.
     * See https://github.com/detro/ghostdriver#faq
     */
    'phantomjs.ghostdriver.cli.args': ['--loglevel=DEBUG']
}
Another speed tip I've found: for every test I was logging in, and then logging out after the test was done. Now I check whether I'm already logged in, with the following in my helper method:
# Login to the system and make sure we are logged in.
login: ->
    browser.get("/login")
    element(By.id("username")).isPresent().then((logged_in) ->
        if logged_in == false
            element(By.id("staff_username")).sendKeys("admin")
            element(By.id("staff_password")).sendKeys("password")
            element(By.id("login")).click()
    )
I'm using grunt-protractor-runner v0.2.4 which uses protractor ">=0.14.0-0 <1.0.0".
This version is 2 or 3 times faster than the latest one (grunt-protractor-runner#1.1.4, which depends on protractor#^1.0.0).
So I suggest you give a previous version of Protractor a try.
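For example, the older runner can be pinned in package.json (a sketch; the version is the one mentioned above):
"devDependencies": {
    "grunt-protractor-runner": "0.2.4"
}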
Hope this helps
Along with the great tips found above, I would recommend disabling Angular/CSS animations to help speed everything up when tests run in non-headless browsers. I personally use the following code in the "onPrepare" function in my 'conf.js' file:
onPrepare: function() {
    var disableNgAnimate = function() {
        angular
            .module('disableNgAnimate', [])
            .run(['$animate', function($animate) {
                $animate.enabled(false);
            }]);
    };
    var disableCssAnimate = function() {
        angular
            .module('disableCssAnimate', [])
            .run(function() {
                var style = document.createElement('style');
                style.type = 'text/css';
                style.innerHTML = '* {' +
                    '-webkit-transition: none !important;' +
                    '-moz-transition: none !important;' +
                    '-o-transition: none !important;' +
                    '-ms-transition: none !important;' +
                    'transition: none !important;' +
                    '}';
                document.getElementsByTagName('head')[0].appendChild(style);
            });
    };
    browser.addMockModule('disableNgAnimate', disableNgAnimate);
    browser.addMockModule('disableCssAnimate', disableCssAnimate);
}
Please note: I did not write the above code; I found it online while looking for ways to speed up my own tests.
From what I know:
run tests in parallel
inject data if you are only testing a UI element
use CSS selectors, not XPath (browsers have a native engine for CSS, and the XPath engine is not as performant as the CSS engine)
run them on high-performance machines
use beforeAll() and beforeEach() as much as possible for instructions that you repeat often in multiple tests (see the sketch below)
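For instance, a small Jasmine-style sketch of hoisting repeated setup out of individual specs (the URLs, element IDs and selectors are illustrative):
describe('admin objectives', function() {
    // Log in once for the whole suite instead of before every spec.
    beforeAll(function() {
        browser.get('/login');
        element(by.id('username')).sendKeys('admin');
        element(by.id('password')).sendKeys('password');
        element(by.id('login')).click();
    });

    // Navigate back to the page under test before each spec.
    beforeEach(function() {
        browser.get('/admin/objectives');
    });

    it('lists existing objectives', function() {
        expect(element.all(by.css('.objective')).count()).toBeGreaterThan(0);
    });
});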
Using PhantomJS will considerably reduce the duration compared to a GUI-based browser, but a better solution I found is to manage the tests in such a way that they can be run in any order, independently of other tests. This can be achieved easily by use of an ORM (jugglingdb, sequelize and many more) and TDB frameworks, and to make them more manageable one can use the Jasmine or Cucumber framework, which have before and after hooks for individual tests. So now we can gear up with the maximum number of instances our machine can bear, with "shardTestFiles: true".
Is it possible to get info on which instance you're running on? I want to output a simple identifier for the instance the code is currently running on, for logging purposes.
Since there is no language tag, and seeing your profile history, I assume you are using GAE/J?
In that case, the instance ID information is embedded in one of the environment attributes that you can get via the ApiProxy.getCurrentEnvironment() method. You can then extract the instance ID from the resulting map using the key BackendService.INSTANCE_ID_ENV_ATTRIBUTE.
Even though the key is stored in BackendService, this approach will also work for frontend instances. So in summary, the following code would fetch the instance ID for you:
String tInstanceId = ApiProxy.getCurrentEnvironment()
        .getAttributes()
        .get(BackendService.INSTANCE_ID_ENV_ATTRIBUTE)
        .toString();
Please keep in mind that this approach is quite undocumented by Google and might be subject to change without warning in the future. But since your use case is only for logging, I think it would be sufficient for now.
With the advent of Modules, you can get the current instance id in a more elegant way:
ModulesServiceFactory.getModulesService().getCurrentInstanceId()
Even better, you should wrap the call in a try/catch so that it will also work correctly locally.
Import this
import com.google.appengine.api.modules.ModulesException;
import com.google.appengine.api.modules.ModulesServiceFactory;
Then your method can run this
String instanceId = "unknown";
try {
    instanceId = ModulesServiceFactory.getModulesService().getCurrentInstanceId();
} catch (ModulesException e) {
    instanceId = e.getMessage();
}
Without the try/catch, you will get some nasty errors when running locally.
I have found this super useful for debugging when using endpoints mixed with pub/sub and other bits, to try to determine why some things work differently and whether it is related to new instances.
Not sure about before, but today in 2021 the system environment variable GAE_INSTANCE appears to contain the instance id:
String instanceId = System.getenv("GAE_INSTANCE");
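A minimal usage sketch for logging (the class name and fallback value are illustrative; GAE_INSTANCE is set by the App Engine runtime and is null when running locally):
import java.util.logging.Logger;

public class InstanceInfo {
    private static final Logger LOG = Logger.getLogger(InstanceInfo.class.getName());

    // Returns the App Engine instance id, or a placeholder when running locally.
    public static String currentInstanceId() {
        String instanceId = System.getenv("GAE_INSTANCE");
        return instanceId != null ? instanceId : "local";
    }

    public static void logInstance() {
        LOG.info("Handling request on instance " + currentInstanceId());
    }
}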