A/B testing (a.k.a. “website optimization”) refers to a practice in which a modification to the existing user experience co-exists, for a time, with the original experience in the form of a randomized controlled experiment (RCE). The existing experience serves as the control and the new one as the variant. Traffic is split randomly (but not necessarily equally) between the two experiences, and measurements are collected to compare their relative performance with respect to some target metric, e.g. conversion. The experiment runs for as long as it takes the measurements to reach statistical significance — a statistical term meaning that enough traffic has passed through the experiment that the observed difference is unlikely to be due to chance.
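As a concrete illustration of the significance test described above, the following sketch applies a standard two-proportion z-test to conversion counts from a control and a variant. The function name and the sample numbers are hypothetical, chosen only to show the mechanics:

```python
import math

def z_test_conversions(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is the variant's conversion rate
    significantly different from the control's?"""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference.
    p = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical data: 2,400 visitors per experience,
# 120 control conversions (5.0%) vs. 156 variant conversions (6.5%).
z, p = z_test_conversions(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
significant = p < 0.05
```

With these numbers the difference is significant at the conventional 5% level; with less traffic the same observed rates would not be, which is why an experiment must run until enough traffic has passed through it.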
Client-side variance injection suffers from a number of fundamental shortcomings:
- Very limited variance: no change involving the server can be instrumented.
- Impossible to use operational data in test targeting.
- Impossible to correlate experiment data with the operational data.
- Implementation is entirely throwaway code.
- Operational risk: test definitions are not subject to regular DevOps controls.
- Productization risk: winning experiments need to be re-implemented.
While client-side variance injection may work well for content-oriented websites, it is categorically inadequate for interactive applications, where most changes involve the server. It also offers no viable option for non-HTML-fronted applications, e.g. native mobile or telephony applications.
Client-side injection may seem attractive because it is simple and easy to implement (hence all the claims of “no programming required”), but in reality this simplicity is 1) overstated and 2) a consequence of its limitations, not of its power.
Even though some of the existing commercial tools enjoy success with smaller customers, they are grossly inadequate for mid-size and enterprise applications. At this time, such companies must choose between doing no A/B testing at all and relying on custom development, which is expensive, error-prone, and frequently futile. In recent months, a number of bellwether technology companies have publicized their A/B testing efforts and even open-sourced their internally developed A/B testing frameworks.
The novelty of our approach is that it does away with client-side variance injection and replaces it with a sophisticated proprietary technology that overcomes all of the former’s shortcomings. The sophistication with which Variant’s technology models tests is groundbreaking: it operates on concepts and supports scenarios that current tools cannot even express.
At the core of this technology is a novel Experiment Definition Model (XDM) — a self-contained abstraction comprising a complete set of entities, the relationships between them, and the operations on them required to fully describe and manage a set of RCEs operating on a computer application. XDM directly supports:
- Separation of experiment instrumentation from experience implementation — the application programmer writes code to implement the new experience without any concern for how it will be instrumented. Whether the new experience is a small improvement or a complex new feature, once it is developed, instrumenting it as a Variant experiment is a small, constant-time configuration task.
- Productization of the new code is virtually risk-free because it is the same code that’s already been deployed.
- Full stack instrumentation — Variant is capable of instrumenting any change, no matter how complex.
- Platform and domain neutrality — Variant makes no assumptions about the technology stack on which the target application is built, or about its domain. The only assumption is that it is an interactive application that responds to real-time user input, such as a Web, native mobile, or telephony application. This makes Variant ideal for heterogeneous experiments, e.g. an experiment may span a Web application and a native mobile app.
- Full support for concurrent experiments — XDM is capable of completely defining the entire variance space across all concurrent experiments, including hybrid variants.
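Purely as illustration of the kind of model the list above describes (this is not Variant’s actual schema or API; every name here is hypothetical), a set of concurrent experiments and their variance space might be sketched as:

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class Experience:
    name: str
    weight: float          # relative traffic weight (not necessarily equal)
    is_control: bool = False

@dataclass
class Experiment:
    name: str
    experiences: list      # one control plus one or more variants

@dataclass
class Schema:
    """A set of concurrent experiments. The full variance space is the
    cross-product of their experiences."""
    experiments: list

    def variance_space(self):
        # Every combination of one experience per experiment, including
        # "hybrid" cells where variants of multiple experiments overlap.
        return list(product(*[e.experiences for e in self.experiments]))

checkout = Experiment("NewCheckout", [
    Experience("existing", 1.0, is_control=True),
    Experience("one_page", 1.0),
])
banner = Experiment("HeroBanner", [
    Experience("existing", 2.0, is_control=True),  # 2:1 split toward control
    Experience("video", 1.0),
])
schema = Schema([checkout, banner])
cells = schema.variance_space()  # 2 x 2 = 4 cells, one a hybrid variant
```

Even this toy sketch shows why concurrent experiments are hard for client-side tools: the cell in which a user sees both the one-page checkout and the video banner is a distinct experience combination that the definition model must be able to name and measure.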