Evaluation of Online Controlled Experiments

Somit Gupta
Dec 6, 2019

A/B testing is the gold standard for estimating the causal relationship between a change in a product and its impact on key outcome measures. It is widely used in industry to test changes ranging from simple copy or UI tweaks to more complex changes, such as using machine learning models to personalize the user experience. A key aspect of A/B testing is the evaluation of experiment results. Designing the right set of metrics (correct outcome measures, data quality indicators, guardrails that prevent harm to the business, and a comprehensive set of supporting metrics to understand the "why" behind key movements) is the #1 challenge practitioners face when trying to scale their experimentation programs.

On the technical side, improving the sensitivity of experiment metrics is a hard problem and an active research area, with large practical implications as more and more small and medium-sized businesses adopt A/B testing and suffer from insufficient statistical power. In this tutorial we discuss challenges, best practices, and pitfalls in evaluating experiment results, focusing on lessons learned and practical guidelines as well as open research questions.
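To see why insufficient power bites smaller businesses, consider a standard sample-size calculation for a two-proportion z-test (a minimal sketch for illustration, not part of the tutorial materials; the function name and example numbers are our own):

```python
from math import ceil
from scipy.stats import norm

def samples_per_variant(baseline_rate, mde_rel, alpha=0.05, power=0.8):
    """Approximate per-variant sample size for a two-proportion z-test.

    baseline_rate: conversion rate in control (e.g. 0.05 for 5%)
    mde_rel: minimum detectable relative effect (e.g. 0.02 for a 2% lift)
    """
    p1 = baseline_rate
    p2 = baseline_rate * (1 + mde_rel)
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided test
    z_beta = norm.ppf(power)
    pooled = (p1 + p2) / 2
    n = ((z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
          + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
         / (p2 - p1) ** 2)
    return ceil(n)

# With a 5% baseline conversion rate, detecting a 2% relative lift
# requires roughly 750,000 users per variant.
print(samples_per_variant(0.05, 0.02))
```

Traffic requirements on this scale are out of reach for many small and medium-sized products, which is what motivates the variance-reduction and sensitivity-improvement techniques covered in the tutorial.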

Presenters:

  • Somit Gupta, Microsoft.
  • Xiaolin Shi, Snap Inc.
  • Pavel Dmitriev, Outreach.
  • Xin Fu, Facebook.
  • Avijit Mukherjee, Facebook.

Tutorial @ The Web Conference, 20th April, 2020 (PST).

Slides for the tutorial:

  • Introduction
  • Best Practices
  • Advanced Topics
