We have quite an extensive wizard UI flow, and to test development changes (e.g. DOM changes) at the end of the flow we need to go through all the steps every time, since each step depends on data gathered in the previous ones.
This is tedious and time-consuming, every time.
I have been thinking about some way of defaulting the data, but even then we would still have to click buttons to fetch async data based on the input and press the Next button in each wizard step.
Using Protractor-like behaviour would be excellent. We already have tests set up for that which can take us to the point we need to verify, while developing, in seconds, with all the (stubbed) data in place.
I'd like to hear your thoughts on this, and whether such an automated Protractor way of getting to a certain point is possible.
EDIT: why not just run the Protractor test we use on the test server locally, let it go through the development steps, and stop it at a certain point?
While writing and re-reading the post I answered my own question in the EDIT: Protractor. I haven't tried it yet, but it should do the trick.
My company hired an agency to create an MQL Salesforce object. It's constructed from an Apex class with various triggers.
We no longer have a need for it, and as the standing Salesforce admin, there is no one at the company who knows Apex. I'm taking classes to learn it, but I wanted to check in and see how I can deprecate the object from Salesforce by archiving/deleting it (or even just commenting out the code) and push the update to production.
Does anyone have insight into how to go about doing this? All of the courses I've taken cover only a basic understanding of Apex and how to write small triggers, classes and queries. The agency that built the class left zero documentation on its code.
You can't write code in production, so whatever you try to do will have to be done in a sandbox, tested and then deployed.
There's a way to do a "destructive deployment" and really delete it, but you'll need programming tools (VSCode, Eclipse IDE or Ant + Migration Tool). It's a bit of an advanced topic; I'd suggest you hire a dev ;) or try to just comment the code out.
In the sandbox you can comment out the body (bodies?) of the triggers and classes. You shouldn't kill the whole file; leave empty skeletons like
public with sharing class MqlGenerator {
    /* kill everything */
}

trigger MqlTrigger on MQL__c (after insert) {
    /* kill everything */
}
Of course, if there's a trigger on Account that does 10 things and only 2 of them relate to MQL, don't comment everything out ;) It'll be a bit of trial and error for you, depending on how clean the code is.
You will have to touch triggers, normal classes and likely unit tests too, because if they did a decent job there will be tests that verify these triggers do something, and those tests will now start to fail.
Add the files to a changeset as you go (you do use changesets, right? It doesn't sound like you deploy with Git + SFDX, for example). From time to time run Apex Classes -> Compile all classes and run the unit tests. Some manual testing wouldn't hurt either. If you are unsure what's left, you can click on MQL's fields; there's a "Where is this used?" button. Or even try clicking delete and repeating until it succeeds ;)
After you deploy this changeset...
If MQL__c has no triggers (for example it is created in Account updates but doesn't have triggers of its own), you might actually be able to delete the object. If there are related triggers, workflows etc., SF will stop you. The only way to really delete it would be to run a destructive deploy. It's possible without installing anything; use the link I included, and for example Workbench would let you make such a deployment. But it's a bit "pro"; if you're unsure, start with commenting stuff out and maybe leave the empty skeletons until you're more comfortable. You can always hide the object's tab and remove the right to read the object, and it'll disappear from list views, reports... it'll be an eyesore only for sysadmins.
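For reference, a destructive deploy is driven by a destructiveChanges.xml placed next to an otherwise empty package.xml (which carries only the API version); Workbench accepts a zip containing both. A minimal sketch, assuming the class and trigger names from the skeleton above:
<?xml version="1.0" encoding="UTF-8"?>
<!-- destructiveChanges.xml: lists the metadata components to delete -->
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>MqlGenerator</members>
        <name>ApexClass</name>
    </types>
    <types>
        <members>MqlTrigger</members>
        <name>ApexTrigger</name>
    </types>
</Package>
Remember that components still referenced elsewhere can't be deleted this way, which is another reason to comment the bodies out first.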
If the object has to stay around but its data storage is significant, you could try truncating the object. If that gives you trouble: Data Loader, export all records (just the IDs), then delete them. Maybe even with the hard-delete option so you skip the recycle bin.
I would like to start automating the testing of my app written in CodenameOne, but I find it difficult to visualize how to use the TestRecorder (section "Unit Testing") for "industrial" testing.
If anyone here is already using it, could you share a few tips about how you use it?
E.g. how do you use the different "Asserts" buttons, how do you structure your tests into suites, and how do you chain them together (e.g. so that each test case starts in the right context, such as the right place in the navigation structure)? Do you need to manually edit the tests? And is there anything to be aware of before creating lots of tests interactively, e.g. to avoid your tests being invalidated by some irrelevant change to your UI?
I read in the blog post from May 2017 that the TestRecorder "wasn’t picked up by many developers and as such it stagnated". I tried TestRecorder and immediately came across a seemingly basic error in it (a missing test for null) when recording a test case using the Toolbar, which gave the impression that this is still the case. So, if anyone here is using another approach that is working well for you, I'd love to hear about that.
See the test classes we use to test Codename One itself here: https://github.com/codenameone/CodenameOne/tree/master/tests/core
You can use the test recorder to generate a skeleton, but you can also write tests manually just like any other test. The test API lets you invoke the app, or just pieces of it, and perform assertions on the behaviour within.
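For illustration, a hand-written test might look like the sketch below. It assumes a hypothetical app with forms titled "Login" and "Home", a text field named "username", a button labeled "Sign in" and a label named "welcomeLabel"; all of these names are made up, so adjust them to your own UI:

import com.codename1.testing.AbstractTest;
import com.codename1.testing.TestUtils;

public class LoginFlowTest extends AbstractTest {
    @Override
    public boolean runTest() throws Exception {
        // Block until the (hypothetical) Login form is showing
        TestUtils.waitForFormTitle("Login");
        // Fill in a field and navigate; component names here are invented for the sketch
        TestUtils.setText("username", "demo");
        TestUtils.clickButtonByLabel("Sign in");
        TestUtils.waitForFormTitle("Home");
        // Assert that the target form actually contains the expected component
        TestUtils.assertBool(TestUtils.findByName("welcomeLabel") != null, "welcome label missing");
        return true;
    }
}

Writing tests this way, against named components rather than recorded coordinates, also makes them less likely to be invalidated by irrelevant UI changes.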
For my work I used DotTrace to analyze slowness that one of our clients experiences in our WPF desktop application.
I have used it for this before, which resulted in the conclusion that the database calls were slow, something we could then find a solution for.
This time, however, I see 75% of the execution time in native code and no clear slowness in the user code.
I searched around and saw a few other people with the same question.
The answer there was either that this is normal (the previous snapshot also had just a tiny part of the execution time in user code, so that seems okay), or that you can analyze it further if you check the "Collect native allocations" box when making the snapshot (which I unfortunately didn't check).
If I look at just the user code, most of the execution time resides in DevExpress DLLs, which are third-party UI components. Could you then say that this points towards hardware-related slowness (see the user-code part of the snapshot below)?
I used the Timeline option to create the snapshot.
My questions:
Since the snapshot doesn't show a lot of time in true user code (excluding the DevExpress components), could I then conclude that this slowness isn't caused by inefficiency in our code?
Can I tell anything from the native code part of the snapshot (see screenshot below)?
Is Timeline even handy for this case, or is one of the other sampling options clearer?
How would I proceed in such a case to move closer to the source of the slowness?
Thanks in advance for your help!
Sebastiaan
Native code part of the snapshot:
The native part is always called by the managed code.
The Timeline mode is not efficient in this case; here you have filtered only the native part.
For this kind of analysis, I recommend using the Sampling mode, where you get a better view of your hot spots. The native part will still be there, but you can see which managed code called it.
Is there any way to trigger a .bat script when a Selenium node is idle?
I have a Selenium Grid setup consisting of one hub and three nodes on separate machines. What I'm trying to do is have a script that cleans up the testing environment on each node after a test suite has been executed. As another test suite might start directly after one finishes, I somehow need to trigger the script only when the node has been idle for a few seconds.
The script itself is relatively fast and takes about 1-2 seconds to run. How can I trigger it at an appropriate time?
The short answer is that you cannot do this from outside the grid (at least it's not that straightforward). The reason I say this is that at any given point in time you can easily find out the current usage statistics, but just before you trigger a cleanup action the grid may end up routing a new test to the node that is being cleaned up, causing invalid test failures.
Some time back I wrote a blog post which talks about how to go about building a Self Healing Grid (which is what you are after). The details are laid out in an elaborate manner here.
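To give an idea of the direction, the sketch below shows the general shape of such a solution for Grid 2.x/3.x: a custom proxy registered for the node that fires the cleanup once no active sessions remain. The class name, script path and registry type are illustrative and vary between Selenium versions, and note that the proxy code runs on the hub, so in practice it still has to reach the node machine to execute the .bat:

import java.io.IOException;
import org.openqa.grid.common.RegistrationRequest;
import org.openqa.grid.internal.GridRegistry;
import org.openqa.grid.internal.TestSession;
import org.openqa.grid.selenium.proxy.DefaultRemoteProxy;

// Hypothetical proxy, registered via the node's -proxy option, so cleanup
// decisions are made inside the grid and routing stays consistent
public class CleanupProxy extends DefaultRemoteProxy {

    public CleanupProxy(RegistrationRequest request, GridRegistry registry) {
        super(request, registry);
    }

    @Override
    public void afterSession(TestSession session) {
        super.afterSession(session);
        // getTotalUsed() counts active sessions on this node; depending on the Grid
        // version the session that just ended may still be counted, hence <= 1
        if (getTotalUsed() <= 1) {
            try {
                // Illustrative path; on a real grid the hub would have to trigger this
                // on the node machine, e.g. via a small HTTP endpoint running there
                new ProcessBuilder("cmd", "/c", "C:\\grid\\cleanup.bat").start();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}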
If you are interested in consuming something that's already built and don't want to spend time re-inventing the wheel, you can take a look at the following open-source implementations:
SeLion's Enhanced Grid built by PayPal (I was involved in building this).
Selenium Grid Extras built by GroupOn.
I am using the Selenium framework for my test case execution.
I need an instant report of the test cases that have passed while the full suite is still executing.
For example: there are 100 test cases in the suite, five have run so far, of which 3 passed and 2 failed, and I need this reported instantly while the suite is in progress. Can you please help me with this task?
You can use ExtentReports.
You can use it to log your test steps, and once it's done it will generate a report to show your results.
For what you're looking for, ExtentReports uses a "flush".
If you call this flush after each test step, it will append the step and write out the report.
This is something I'm looking into myself at the moment, so I wouldn't consider this a definitive answer, just something I've stumbled across myself. Hope it helps.
Here is how to set up ExtentReports on your project with examples - http://www.ontestautomation.com/creating-html-reports-for-your-selenium-tests-using-extentreports/
You must use it in conjunction with a test runner, e.g. TestNG or JUnit.
What you are trying to achieve is slightly different from the example: you need to call a flush after every test step so the report is updated after each step completes, rather than only when all the tests are finished. It's not something I have done before, but it was explained to me as follows:
Just call .flush() after every test instead of once at the end of your test run. BUT you need to make sure the ExtentReports object itself is only initialized once, instead of being reinitialized at the start of every test. For example, I used TestNG: the ExtentReports object is created once using @BeforeSuite, but .flush() is called after every test using @AfterMethod. I hope this makes sense.
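A rough sketch of that setup, using the com.aventstack ExtentReports 3.x/4.x API (the linked post uses the older 2.x API, but the idea is the same); the report path, class name and log messages are purely illustrative:

import java.lang.reflect.Method;
import com.aventstack.extentreports.ExtentReports;
import com.aventstack.extentreports.ExtentTest;
import com.aventstack.extentreports.reporter.ExtentHtmlReporter;
import org.testng.ITestResult;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.BeforeSuite;

public class LiveReportTest {

    private static ExtentReports extent; // created exactly once for the whole suite
    private ExtentTest test;

    @BeforeSuite
    public void setUpReport() {
        extent = new ExtentReports();
        extent.attachReporter(new ExtentHtmlReporter("live-report.html")); // path is illustrative
    }

    @BeforeMethod
    public void startTest(Method method) {
        test = extent.createTest(method.getName());
    }

    @AfterMethod
    public void logAndFlush(ITestResult result) {
        if (result.getStatus() == ITestResult.SUCCESS) {
            test.pass("passed");
        } else if (result.getStatus() == ITestResult.FAILURE) {
            test.fail(result.getThrowable());
        } else {
            test.skip("skipped");
        }
        extent.flush(); // rewrites the HTML now, so the report is current mid-suite
    }
}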
The only thing that can’t be solved via code is the HTML refresh as this is outside the control of the ExtentReports library (it doesn’t know where you’ve opened the actual HTML file). But this can be taken care of by using a simple browser plugin as I said. At least for Chrome there are a lot of them, just do a Google search for ‘chrome auto refresh’.
Hope this helps. If you need anymore advice don't hesitate to contact me.