For the past 2 years Secret Escapes have been using Google Content Experiments (or Google Website Optimizer, as it was formerly known). We ran over 120 tests, ranging from simple copy changes to complex site-wide overhauls. Whilst it's a great free tool, it does have its limitations. As we try to build upon our testing culture here, it seemed odd that only the tech team had access to launch and push tests live. Even simple single-word changes required the input of a developer to output the test snippet onto the page, and often the biggest hold-up of all was the need for a release. With a marketing team, stretched across 3 territories, desperate to test new copy for PPC campaigns, it was pretty unsustainable for tech to be implementing all the tests.
This is where Optimizely came in… Optimizely is probably one of the better-known testing platforms, along with Monetate, SiteSpect and Visual Website Optimizer. It's relatively simple to use, and its WYSIWYG editor makes it easy for marketing teams to launch and manage their own testing programme. We switched to Optimizely 3 weeks ago, and here are some of my early thoughts on using it and comparing it to GCE.
Test Design & Options
For a marketer, there is no comparison between the two. Optimizely, with its user-friendly UI, allows the marketing and product teams to devise, plan and launch tests within a matter of minutes, with no technical knowledge required. Simple copy tests are literally a 5-minute set-up. More complex tests changing position and CSS are a little trickier, and for a site such as Secret Escapes, which has conditional changes depending on location or user status, it is quite easy to break sections of the site simply by moving an object in the WYSIWYG editor. To help avoid such issues we have introduced a few sign-off and testing steps; however, with only the tech team knowing the real nuances of the site, it is still possible to miss something. More complex tests containing significant changes to the site we have implemented using a redirect in Optimizely. This is very much the same method used in Content Experiments, so it allows us to re-run previous tests with relative ease. As we are seemingly unusual in using query parameters (as opposed to completely different URLs) to distinguish each variation, a little input was needed from the support team at Optimizely, who helped set out the blueprint for how to set up these tests.
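To illustrate the query-parameter approach, here is a minimal sketch of building a variation URL for a redirect test. The parameter name "variation" is purely illustrative, not the one we actually use:

```javascript
// Build a redirect-test variation URL by appending a query parameter,
// rather than redirecting to a completely different URL.
// The "variation" parameter name is a hypothetical example.
function variationUrl(baseUrl, variation) {
  // Preserve any query string the page already has.
  var separator = baseUrl.indexOf('?') === -1 ? '?' : '&';
  return baseUrl + separator + 'variation=' + encodeURIComponent(variation);
}

console.log(variationUrl('https://example.com/deals', 'b'));
// → https://example.com/deals?variation=b
console.log(variationUrl('https://example.com/deals?country=de', 'b'));
// → https://example.com/deals?country=de&variation=b
```

Because both variations share one canonical URL, the same pattern works whether the redirect is fired by Optimizely or by Content Experiments, which is what lets us re-run old tests.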
Content Experiments, on the other hand, is totally anti-marketer. Every test requires tech input in order to edit the HTML/CSS. This means tests flow through the regular development process, requiring a release of the application to launch any AB test. One advantage of this is the security in knowing that the site has been thoroughly tested and validated through the standard QA process. The obvious disadvantage is that every test needs to be included in the sprint process we follow at Secret Escapes. This simply meant we couldn't get as many tests live as we should have.
Ease of adding test code
Optimizely requires a single snippet on the site. Super simple, and it means future tests don't require a release, unless they need a server-side change.
Content Experiments requires a separate snippet for every test you run. At times we had 10 snippets all being loaded onto the site. To keep the site maintainable and clean of old AB test code, closing out a test had to be prioritised through our sprint process. Every test launch or close-out required a release, again more work for the tech team.
Whilst there are no built-in tools in Content Experiments to target users, with some thought it was very easy for a dev to target particular users. We wrapped every test snippet in 'if' statements to ensure it fired only on the correct pages, to the right users. Complex tests sometimes contained up to 7 or 8 conditions, but with it all being carried out server-side we found it very easy to test, and were confident that tests running on concurrent pages wouldn't impact each other.
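The targeting wrapper can be sketched roughly as below. This is only an illustration of the pattern, not our actual code: the field names (path, country, loggedIn, device) and the conditions themselves are hypothetical, and in reality the checks lived in our server-side templates.

```javascript
// Sketch of the server-side 'if' wrapper around a Content Experiments
// snippet. All field names and conditions here are illustrative.
function shouldServeSnippet(request) {
  return request.path === '/sale' &&      // correct page only
         request.country === 'GB' &&      // right territory
         !request.loggedIn &&             // e.g. logged-out visitors only
         request.device !== 'mobile';     // exclude mobile traffic
}

// Only emit the test snippet when every condition holds.
if (shouldServeSnippet({ path: '/sale', country: 'GB', loggedIn: false, device: 'desktop' })) {
  // render the Content Experiments snippet into the page here
}
```

Because the decision is made server-side, a user who fails any condition never receives the test code at all, which is why concurrent tests couldn't bleed into each other.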
Conversion goals & metrics

Content Experiments wins hands down for conversion goals and metrics. As a feature of GA, you're able to use any goals set up in the Analytics profile.
Optimizely makes it very easy to add click- or URL-based goals; it's all done using the WYSIWYG editor. It does seem a little more complex to get goals such as margin or average order value in. At times it feels like you're just duplicating what's already in GA.
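For order-value goals, Optimizely's classic JavaScript API does expose a custom event queue that can carry a revenue figure. If I've remembered the API correctly, it looks something like the sketch below; the "purchase" event name and the value are illustrative, and revenue is reported in cents:

```javascript
// Sketch of pushing an order-value goal into Optimizely's classic API
// queue, as a way to feed metrics like average order value into reports.
// The "purchase" event name and value are illustrative examples.
var window = typeof window !== 'undefined' ? window : {}; // stand-in for the browser global
window.optimizely = window.optimizely || [];
window.optimizely.push(['trackEvent', 'purchase', { revenue: 12999 }]); // £129.99 in pence/cents
```

Even so, the order value usually already exists in GA's e-commerce tracking, hence the feeling of duplication.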
Test reporting and segmenting
As a Google Analytics based tool, Content Experiments is great for reporting and segmentation. We make extensive use of custom segments to analyse a test after it has run. Building these segments is a little extra work, but it has allowed us to analyse test performance by source, time and device.
One thing I do love about Optimizely, however, is the real-time statistics. You do have to be careful not to stop a test early, such as the moment you see a variation showing a 95% chance to beat the original (see "How Not to Run an A/B Test" on why you shouldn't do this!), but it does help flag up any bugs with the AB test or goal set-up very quickly.
As ever, switching to a new platform requires time to familiarise yourself with it. After 2 years of Content Experiments, tech had become familiar with ways to manipulate it to best suit their needs. Its integration with Analytics also made it incredibly easy to analyse those tests without the need to generate new segments. However, it does restrict a company from quickly launching tests. At times the sprint and release processes do cause a roadblock, particularly for a company looking to move fast in its testing programme.
Optimizely will no doubt widen the testing culture within Secret Escapes. And with our international sites all keen to test their respective landing pages, it will allow them to try more market-centric tests without impacting the tech team, which is based in the UK. It does bring the worry of tech losing control or visibility of what code and content is entering the site; however, a process involving thorough testing and validation will minimise the risk of this impacting users.
Will it prove a good investment? Almost definitely. Within 3 weeks of launching we have seen simple tests run in Germany deliver 15% uplifts in conversion, and we have also been able to spot tech bugs almost immediately after launch thanks to the real-time analytics. With some time spent setting up goals, educating our internal teams and familiarising ourselves with the tool and its limitations, it'll mean better AB tests, rolled out quicker.