In Google Website Optimizer, can I use a page in 2 tests simultaneously?

Best explained via example...I want:
One A/B test to alternate between PageA and PageB.
A second A/B test to alternate between PageC and PageB.
In other words, I want PageB to be an active option in 2 tests at the same time.
My question is: will the test results make sense? I'm concerned that if 100 visitors go through the first test and 100 visitors go through the second test, then the test results report will look like this:
First A/B test: PageA 20/50 (conversions / visits)
PageB 60/100
Second A/B test: PageC 40/50 (conversions / visits)
PageB 60/100
(See: if 15 of the first test's 50 PageB visitors converted and 45 of the second test's 50 PageB visitors converted, then PageB should lose the first test but win the second - yet the pooled 60/100 makes PageB look like it's winning the first test and losing the second.)
Or is GWO smart enough to know "this page view is the result of a Test 1 redirect, so only count the results for that test"?

It depends on what you mean by "alternates".
GWO uses two scripts: a control script and a tracking script. The control script goes on any page where you want something to happen, and the tracking script goes on the destination / conversion page only. A single page can be the destination / conversion page for many tests without a problem.
If you're hoping to control two tests on one page, you'll have to provide more detail on what you are doing. Are you using JavaScript to do a redirect in a multivariate test, or are you running A/B tests?

Related

Having one query for Cucumber BDD Selenium automation

I have 2 scenarios to be executed.
The 1st scenario returns a variable, a card ID.
I want to consume this card ID in the 2nd scenario.
You can't preserve state between scenarios or connect scenarios together. Each scenario is a separate test, and everything is reset between scenarios. This is by design.
You need to change your approach to writing your second scenario so that it has a Given which sets up your card.
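A sketch of what that could look like (the step wording is hypothetical; the point is that the Given creates the card itself instead of relying on the previous scenario):
Scenario: Create a card
  When I create a card
  Then I should receive a card ID
Scenario: Consume a card
  Given a card has been created
  When I consume that card by its ID
  Then the card should be marked as consumed
In the step definitions, "Given a card has been created" would call the same creation code as the first scenario and stash the resulting ID in scenario-scoped state (e.g. the World object in cucumber-js).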

Cucumber - testing principles vs speed

After reading many articles, my understanding is that all Cucumber tests should be independent of each other and autonomous, so those are the rules I follow when I am automating my web app tests.
Let's say I am testing a web page that has multiple input fields.
Currently, for CRUD operations I have two types of scenarios:
Scenario: Check page displays correct data
Given: I populate DB with data
When: I open the page
Then: Page data should match with data from DB
Scenario: Update page data
Given: I populate DB with data
When: I open the page
And: I update each field with some new data
When: I press save button to save data
Then: Page data should match with data from DB
So in this case I have one scenario that checks if data is displayed properly, and another that updates the data and checks it as well. But because the step that populates the database takes long (1-3 seconds), I was thinking: why not combine these two types of scenarios into a single one, greatly cutting execution time:
Scenario: Update page data
Given: I populate DB with data
When: I open the page
Then: Page data should match with data from DB
And: I update each field with some new data
When: I press save button to save data
Then: Page data should match with data from DB
As you can see, first I populate the database, then I check if it is properly displayed, next I modify it and check again. This way I have checked two CRUD operations (read and update) in a single scenario, but I believe it would be against the principles.
It's perfectly fine to combine two CRUD operations in one scenario if your tests are focused more on integration and end-to-end behaviour than on unit / component behaviour (which probably is the case).
Of course you should always consider the balance between putting too much in one scenario and fragmenting a feature into a lot of scenarios. And of course the trade-off of asserting more than one thing in a scenario is that it potentially forces you to debug more when a scenario fails. So it's not about principles, but rather a conscious choice that you may have to reconsider depending on the speed and stability of your application under test.
A couple of ideas I can share:
...
When: I ...
And: I ...
When: ...
...
can become
...
When: I ...
And: I ...
And: ...
Then: ...
Even better if you can abstract it into a declarative business step, which will allow you to see the forest and not get swamped by long end-to-end scenarios.
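For example (the step wording here is hypothetical), an imperative sequence such as
When: I fill the "Email" field with "user@example.com"
And: I fill the "Password" field with "secret"
And: I press the "Log in" button
can be collapsed into a single declarative business step:
When: I log in as a registered user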
It is good to think of your BDD journeys from the end-user perspective:
Given: I populate DB with data
is something that happens to the usual user very rarely, right? Unless you are covering some specific admin/dev case. If you are using it as a precondition, take a look at the xUnit fixture setup patterns. DB validations are a recommended consideration, just not at the topmost layer of your framework.
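One Gherkin-native way to apply that idea, assuming the population step really is a precondition for every scenario in the feature, is to move it into a Background so each scenario reads as pure user behaviour:
Background:
  Given: I populate DB with data
Scenario: Check page displays correct data
  When: I open the page
  Then: Page data should match with data from DB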
And
greatly cutting execution time
can be achieved via parallel execution of your features/scenarios, not by cutting test scenarios. Again, the trade-off is in favor of the meaningful scenarios.
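For instance, if you happen to be running cucumber-js (other runners have similar options), scenarios can be spread across parallel worker processes:
npx cucumber-js --parallel 4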

After changing a test case, the test case in the test run is not changed. Does it work as designed?

I add a test case to a test run, then change the test case, e.g., its steps or expected results. But the test case in the test run still displays as it was before the change.
At first I thought it might be a bug. I discussed it with my workmate, and he thinks it is right: the test run keeps a copy after the test case is added to the run.
Yes, that seems reasonable. Is this as designed?
This is working as designed. A TestRun keeps a copy of the TestCase text from the time when the TestRun was created. You see, for build 1 I may want a very basic test for, say, the login screen, but for builds 2, 3, etc. I want to improve this.
This is why such functionality exists.
On the TestRun page there is the Cases -> Update menu, which will refresh the TestCase text for IDLE entries.
For the ones that have already been executed, either change them to IDLE before updating or just create a new TestRun.

Google Analytics timingVar

I have a React application which is integrated with Google Analytics (GA). We're piloting this integration to collect user behaviour in different scenarios, following the guides from GA. As an example, we have an external link on one of our pages which takes users to a different application. On clicking this link we fire two GA functions: one to record the click event and the other to record the time the user spent on the base page before clicking the link. We can see the click event in GA but are not able to see the timing variable.
In the console we can see the GA calls like:
[react-ga] called ga('send', fieldObject);
log.js:2 [react-ga] with fieldObject:{"hitType":"event","eventCategory":"ExternalLink","eventAction":"Clicked","eventLabel":"Acme-Web"}
log.js:2 [react-ga] called ga('send', fieldObject);
log.js:2 [react-ga] with fieldObject: {"hitType":"timing","timingCategory":"ExternalLink","timingVar":"timeSpent","timingValue":17432,"timingLabel":"Acme-Web"}
Any pointers would be helpful.
According to the docs, there are sample rates applied to timing hits.
This means that if you don't have enough pageviews you will only see 100 timing hits, and even if you have many pageviews, timing hits will be capped at around 10,000/day or 1% of pageviews.
This means that timing hits are pretty unreliable for most use cases. You should instead fire a second event with a value attached to it.
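A minimal sketch with react-ga, assuming the integration shown in the logs above and that timeSpentMs has already been measured (the category/action/label strings are just illustrative):
import ReactGA from 'react-ga';

// Send the duration as a regular event instead of a timing hit,
// so it is not subject to the timing-hit sampling caps.
function reportTimeOnPage(timeSpentMs) {
  ReactGA.event({
    category: 'ExternalLink',
    action: 'TimeSpent',
    label: 'Acme-Web',
    value: Math.round(timeSpentMs), // GA event values must be non-negative integers
    nonInteraction: true,           // don't let this hit affect bounce rate
  });
}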

What useful information can I get from my heatmap?

I have some sort of a heatmap and I can see how my users are using my website. Now I know where they are clicking, how much time it takes them to complete a set of instructions, how much they navigate between pages, etc.
So, given that I have information of this kind:
12/12/2014 12:45:00 - User pressed button 1 on page 1
12/12/2014 12:45:15 - User pressed button 2 on page 1
12/12/2014 13:00:00 (15 minutes delay) - User pressed button 3 on page 1
Now comes the hard part - how do I process this kind of information? For example, how do I know that a user is lost on my website (if there is a 15-minute delay, does this mean that his phone rang or that my UI is bad)? And also, how can I find patterns in a large amount of data - say, every third user spends 15 minutes after the second click trying to find what to click next?
What is the correct approach here? Thanks.
In order to derive useful information from raw data you need some context. You need to be clear about what the expected user behaviour is and, where appropriate, what you are aiming for the user to do (e.g. buy a product, register, make a comment, etc.).
For example, if you have an event splash page with a big button to book a place, and you find that a lot of people click on that button very soon after they arrive, that's probably a good thing. If you have a page full of important information that you want people to read, and they click away just as quickly - that's really not a good thing.
It sounds obvious, but so many people fall into the trap of trying to evaluate user behaviour without being clear about the context - and without acknowledging that the very same number can mean very different things depending on that context.
Evaluate each page or major section of your site and outline what is there and how you'd expect users to interact with it. How long would you expect a user to spend on that page? Where would be the logical place to go next? Is this a logical place to leave the site (I booked, I'm done), or is a user leaving the site here a failure? And so on. Then compare these expectations to the reality you see from your heat map.
Don't get too hung up on individual cases - if one person took 15 minutes on a page that should take 30 seconds, that was probably the phone ringing. If 90% of visitors take 15 minutes, then your page needs re-evaluating.
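As a starting point for the processing itself, here is a minimal sketch (assuming the click log has already been parsed into {user, page, timestamp} records sorted by user and time; the field names, the 30-second default, and the 50% threshold are all illustrative):
// Find pages where most users' click-to-click gaps exceed the expected
// dwell time, so one person's ringing phone doesn't flag a page.
function findSlowPages(events, expectedMsByPage) {
  const gapsByPage = new Map(); // page -> gaps (ms) between consecutive clicks
  for (let i = 1; i < events.length; i++) {
    const prev = events[i - 1];
    const cur = events[i];
    // Only measure gaps within the same user's visit to the same page.
    if (prev.user !== cur.user || prev.page !== cur.page) continue;
    if (!gapsByPage.has(cur.page)) gapsByPage.set(cur.page, []);
    gapsByPage.get(cur.page).push(cur.timestamp - prev.timestamp);
  }
  const slowPages = [];
  for (const [page, gaps] of gapsByPage) {
    const expected = expectedMsByPage[page] ?? 30000; // illustrative default: 30s
    const slowShare = gaps.filter((g) => g > expected).length / gaps.length;
    if (slowShare > 0.5) slowPages.push({ page, slowShare });
  }
  return slowPages;
}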
Lastly, pay as much attention to what people don't do as what they do. Everyone's eye is drawn to the bright spots on a heat map or the rows at the top of a chart with big numbers. With analytics, a lot of the most useful information is what you expected to see people doing, but they aren't. Again, to realise this information you need to have defined that expected behaviour.
