How to integrate Protractor test cases with Hiptest?

For a website built with AngularJS, our organization uses Protractor as the tool to automate test cases.
Our organization has now adopted a new tool named 'HipTest' to manage test case automation.
How do I integrate Protractor test cases with HipTest? I went through the following links but was unable to find any useful information.
https://docs.hiptest.net/automate-your-tests/
https://github.com/hiptest/hiptest-publisher
Can anyone help me figure out how to start?

I'm one of the main contributors to hiptest-publisher, so I should be able to help you.
The quick way to start with hiptest-publisher is to download the bootstrap of the tests from Hiptest (under the automation tab, you will have a "Javascript/Protractor" link).
You will get a zip file with four files (you should add all of them to your version control system, alongside the code of the application you are testing):
- one for the configuration of hiptest-publisher to use the command-line tool
- one for all the tests (you can split them later on, using the --with-folders option in the config file)
- one for the action words: that's the place where you will do the automation
- one for storing the status of the action words you exported (which is used with hiptest-publisher to see which action words have been updated since the last update)
Once the action words are implemented, the test files generated can be integrated in your test suite like any other Protractor test.
On the Hiptest side itself, the only requirement is that your tests are written using action words only. From what I understand from your post, you do not work directly in Hiptest yourself and you only manage the automation part (or did I get that wrong?)
For pushing the execution results back to Hiptest, the principle is pretty simple:
- create a test run dedicated to the CI
- run the command "hiptest-publisher --config-file --test-run-id " before the tests (so only the tests inside the test run are executed, you do not want to run a test that someone is currently writing to be executed on fail of course)
- run your tests
- run the command "hiptest-publisher --config-file --push " to push the results back to hiptest.
Note that those two commands (including the test run ID) can be found directly inside Hiptest, from the "Automate" button in the test run.
If you have a Hiptest account, you can contact us directly on the chat; that might make it easier to help you through the process.
Oh, and I have a recording of the last webinar I gave about automation; I guess you could find some useful information there too :)

Related

Salesforce apex not passing validation

I have two apex classes that I am trying to push to production from a sandbox. When I go to validate the change set, it fails on the code coverage part saying that the code coverage is 50% and needs to be 75%. Both of the classes have well above 75% code coverage as one class is at 100% and the other is at 95% from one of the test classes that I wrote within the sandbox. Is there something that I am missing here?
By default all local tests are run during deployment, meaning all custom code your company has written (it doesn't run tests from installed managed packages, because they're likely to fail on your required fields and those failures won't stop your deployment).
If you understand what you're doing and are under time pressure to deploy, you can use the "run specified tests" option and list the test classes. As long as they all give 75% coverage and all triggers being deployed have at least 1% coverage, it'll work. It'll also be quicker. You can do it in the changeset deployment UI as well as in the sfdx deploy command options.
But it's a bit of a "pro" move; I wouldn't do it unless you have some CI setup and something runs all tests in a sandbox from time to time. You could deploy without realising you've broken functionality you thought was unrelated. It's a bit like deploying a new required field or validation rule - "what could possibly go wrong", it's just config, tests don't need to run... boom, headshot.
The proper way to do it would be to refresh a sandbox from production, deploy your code there, run all tests, keep investigating until you bring the whole org back to 75%+ coverage, and then deploy that.

Need to understand the pipeline issue that I am facing

I am working on a POC for my client to implement VSTS pipelines for CI/CD.
While working on it, I have observed that my pipeline picks up all the components instead of just one.
Example: I have 4 components and a change was made to only one of them. When I create a pull request for deployment to the target org, ideally it should pick up only the component that was modified; instead, during deployment it picks up all 4 components.
What's being deployed is controlled by the manifest file (package.xml). You specify what you're interested in, what you want to retrieve & deploy. Sometimes you can put wildcards in it (deploy all apex classes you can find), sometimes you really have to list stuff (standard objects, reports, email templates).
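For illustration, a minimal manifest could look like the sketch below; the API version and the email template entry are placeholder values, not taken from your project:

    <?xml version="1.0" encoding="UTF-8"?>
    <Package xmlns="http://soap.sforce.com/2006/04/metadata">
        <!-- wildcard: retrieve/deploy every Apex class the package can see -->
        <types>
            <members>*</members>
            <name>ApexClass</name>
        </types>
        <!-- some metadata types have to be listed member by member -->
        <types>
            <members>MyFolder/MyEmailTemplate</members>
            <name>EmailTemplate</name>
        </types>
        <version>50.0</version>
    </Package>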
So an out-of-the-box deployment is always a complete package, whether files changed or not. It's a bit overkill, but on the other hand, what are you going to sign off in the user acceptance test phase? Not just the changed tickets, but the state of the whole system, including regression tests.
If you don't want that, you'd need a script that cherry-picks the files changed from commit X to commit Y, or something similar. There have been some attempts to do it; check the answers to "How to create Salesforce incremental package.xml automatically?"
Next year (safe harbor blah blah blah) SF plans to release better DevOps tools: https://admin.salesforce.com/blog/2020/new-devops-center-is-awesome-for-admins

Advice and experience for testing a CN1 app

I would like to start automating the testing of my app written in CodenameOne, but I find it difficult to visualize how to use the TestRecorder (section "Unit Testing") for "industrial" testing.
If anyone here is already using it, could you share a few tips about how you use it?
E.g. how do you use the different "Asserts" buttons, how do you structure your tests into suites and how do you chain them together (e.g. so each test case will start in the right context like where in the navigation structure it is supposed to run), do you need to manually edit the tests, ... And is there anything to be aware of before creating lots of tests interactively, e.g. to avoid that your tests are invalidated by some irrelevant change to your UI?
I read in the blog post from May 2017 that the TestRecorder "wasn't picked up by many developers and as such it stagnated". I tried TestRecorder and immediately came across a seemingly basic error in it (a missing test for null) when recording a test case using the Toolbar, which gave me the impression that this is still the case. So, if anyone here is using another approach that is working well for you, I'd love to hear about that.
See the test classes we use to test Codename One itself here: https://github.com/codenameone/CodenameOne/tree/master/tests/core
You can use the test recorder to generate a skeleton but you can do this manually just like any test. The test API lets you invoke the app or just pieces of it and perform assertions on the behaviors within.
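As a hedged illustration of writing one by hand (not code taken from the linked repository), a test would typically extend AbstractTest and drive the UI through the static helpers in com.codename1.testing.TestUtils; the form titles and component names below are made-up placeholders:

    import com.codename1.testing.AbstractTest;
    import static com.codename1.testing.TestUtils.*;

    public class LoginFlowTest extends AbstractTest {
        @Override
        public boolean runTest() throws Exception {
            waitForFormTitle("Login");                 // wait until the Login form shows
            setText("username", "demo");               // fill the text field named "username"
            setText("password", "secret");
            clickButtonByLabel("Sign in");             // simulate a tap on the button
            waitForFormTitle("Home");                  // navigation should have happened
            assertBool(findByName("welcomeLabel") != null, "Welcome label missing");
            return true;                               // true marks the test as passed
        }
    }

Recording with the TestRecorder produces a class of the same shape, so you can also record a skeleton and then edit it by hand.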

Selenium automation report

I am using the Selenium framework for my test case execution.
I need an instant report of the test cases that have passed while the full suite is still executing.
For example: there are 100 test cases in the suite and five have run, of which 3 passed and 2 failed; I need this report instantly while the suite is in progress. Can you please help me with this task?
You can use ExtentReports.
You can use it to log your test steps, and once it's done it will generate a report to show your results.
For what you're looking for, ExtentReports uses a "flush".
If you call this flush after each test step, it will append the step and update the report.
This is something I'm looking into myself at the moment, so I wouldn't consider this an answer, just something I've stumbled across myself; hope it helps.
Here is how to set up ExtentReports on your project with examples - http://www.ontestautomation.com/creating-html-reports-for-your-selenium-tests-using-extentreports/
You must use it in conjunction with a test runner, e.g. TestNG or JUnit.
What you are trying to achieve is slightly different from the example. You need to call a flush after every test step so it is appended to the report after the step is completed, rather than when all the tests are completed. It's not something I have done before, but it was explained to me like the following:
Just call .flush() after every test instead of once at the end of your test run. BUT you need to make sure the ExtentReports object itself is only initialized once, instead of being reinitialized at the start of every test. For example, I used TestNG: the ExtentReports object is created once using @BeforeSuite, but .flush() is called after every test using @AfterMethod. I hope this makes sense.
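A minimal sketch of that setup, assuming ExtentReports 3.x/4.x with its HTML reporter and TestNG (the class name, file path, and messages are made up):

    import com.aventstack.extentreports.ExtentReports;
    import com.aventstack.extentreports.ExtentTest;
    import com.aventstack.extentreports.reporter.ExtentHtmlReporter;
    import org.testng.ITestResult;
    import org.testng.annotations.*;

    public class LiveReportingTest {
        private static ExtentReports extent;   // created once for the whole suite
        private ExtentTest test;

        @BeforeSuite
        public void setUpReport() {
            extent = new ExtentReports();
            extent.attachReporter(new ExtentHtmlReporter("target/extent-report.html"));
        }

        @BeforeMethod
        public void startTest(java.lang.reflect.Method method) {
            test = extent.createTest(method.getName());   // one ExtentTest per test method
        }

        @Test
        public void searchReturnsResults() {
            // ... Selenium steps ...
            test.pass("Search returned results");
        }

        @AfterMethod
        public void flushAfterEachTest(ITestResult result) {
            if (result.getStatus() == ITestResult.FAILURE) {
                test.fail(result.getThrowable());
            }
            extent.flush();   // writes the HTML now, so the report can be opened mid-run
        }
    }

With this layout the report file already contains results for every finished test while later tests are still executing.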
The only thing that can’t be solved via code is the HTML refresh as this is outside the control of the ExtentReports library (it doesn’t know where you’ve opened the actual HTML file). But this can be taken care of by using a simple browser plugin as I said. At least for Chrome there are a lot of them, just do a Google search for ‘chrome auto refresh’.
Hope this helps. If you need any more advice, don't hesitate to contact me.

What is a good approach to writing automated tests that depend on data that needs to be set up before executing the test?

I am currently working on writing automated tests using Selenium Webdriver. We use MTM to run our test suites. I need some ideas as to what would be a good way to write these tests.
Currently before running these tests, we perform a basic setup that sets the username and password that would be required to login to the site, set the browser that the test should use, and few other things.
Currently, the data that is required for each of the tests is set up manually and is already present in the database. The test simply performs a keyword search, finds the data it needs, and then performs the assertions. What we would like to achieve is to find such data that is already present in the database and use it instead of creating it manually. That way I can run these tests across different environments (dev, QA, production).
The site I am testing is an e-commerce website. I mostly write tests for specific features that my team develops, and thus many of these tests require some specific data, e.g. setting up a store that has products with certain shipping rates, with particular offers, etc. I would like to find a way to automate or almost remove this manual process of setting up the data. That way I have the flexibility to run these tests across environments. Could you please direct me to some articles/suggestions that can help me achieve this?
If I am understanding your question correctly, you want to automate the test data setup.
You can achieve this in the following ways:
If possible, write a SQL script which inserts the desired data into the database, and execute it while running your tests. If you are using the TestNG framework, there is already an annotation available for this, @BeforeTest. You can execute that SQL script from a method with this annotation; it will run once before your test and the data will be ready (a minimal sketch is shown below).
Prepare the data in a spreadsheet. Create an algorithm to fill the data dynamically into the spreadsheet, and from there either read it directly and feed it to your test using @BeforeTest or, if required, insert the spreadsheet data into the database as well.
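A hedged sketch of the first option, assuming a JDBC-accessible database and TestNG; the connection URL, credentials, and table are placeholders, not details from the question:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;
    import org.testng.annotations.BeforeTest;

    public class StoreTestDataSetup {

        @BeforeTest
        public void seedTestData() throws Exception {
            // placeholder JDBC URL/credentials -- point these at the environment under test
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:mysql://localhost:3306/shop", "test_user", "test_password");
                 Statement stmt = conn.createStatement()) {
                // insert the product the UI test will later search for
                stmt.executeUpdate(
                    "INSERT INTO products (sku, name, shipping_rate) " +
                    "VALUES ('SKU-123', 'Test product', 4.99)");
            }
        }
    }

The UI test can then search for 'SKU-123' and assert on the known shipping rate, instead of hunting for whatever data happens to exist in that environment.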
