This article will help you set up metrics for your experiment and choose an experiment type.
An experiment can’t tell you anything without events to track. You add metrics to your experiment on the Plan tab, in the Experiment Goals panel. Here, you tell Amplitude Experiment which event to use as your exposure event and which metric to use as your primary metric, as well as any secondary metrics. The primary metric determines whether your hypothesis is accepted or rejected, and therefore whether your experiment has succeeded or failed.
There’s a lot riding on your primary metric, so it’s important to select the right one. If you’re not experienced in A/B testing, it can be hard to know which one that is. But if you know what to look for, your odds of a successful variant improve dramatically:
- Try to identify the single user action that will tell you if your variant is successful.
- Measure an event that is directly affected by the change you’ve made in your variant.
- Pick an event that fully captures the user behavior you’re trying to affect.
One common mistake is defaulting to a revenue metric when it’s not appropriate. This happens when the metric you’ve selected is several steps removed from the change your variant introduces. If your variant changes how your product page looks and functions, choose a metric on that page as your primary metric, rather than a revenue metric that might not come into play until several steps further down the funnel.
Amplitude Experiment lets you define multiple metrics when running an experiment. Unlike a primary metric, secondary metrics aren’t required—but they are often helpful. They can not only improve the quality of your analysis, but help evaluate whether it’s even worthwhile to roll out your experiment at all.
To set up the metrics for your experiment, follow these steps:
- On your experiment’s Plan tab, choose the exposure event. This is the event users must trigger before joining the experiment. Amplitude strongly recommends using the Amplitude exposure event.
The Amplitude exposure event is sent when your app calls .variant(). It sets the user properties Amplitude Experiment uses to conduct its analyses. When you use the Amplitude exposure event, you can be certain your app will trigger the event at the correct time.
That said, you can also select a custom exposure event instead. Click Custom Exposure, then Select event … to do so. Be aware that there is a much greater risk of triggering a custom exposure event at the wrong time; this can lead to a sample ratio mismatch.
For more information, see our article in the Amplitude Developer Center about exposure events.
- Select your primary metric from the Primary Metric drop-down, or create a new metric. Next to Direction, specify whether you expect the success metric to increase or decrease; if you’re not sure, choose Any. Then set the minimum goal for the experiment, also known as the minimum detectable effect (MDE). This is the smallest difference between the control and the variant needed for the experiment to be considered a success.
- Under Secondary Metrics, repeat this process for any secondary metrics you want to include.
NOTE: Amplitude Experiment does not support the use of custom metrics.
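As noted above, a mistimed custom exposure event can produce a sample ratio mismatch, where the observed split between variants drifts from the intended allocation. One common way to detect this, sketched below with illustrative names (this helper is not part of Amplitude Experiment), is a chi-squared test comparing observed exposure counts against the expected 50/50 split:

```typescript
// Sketch of a sample ratio mismatch (SRM) check: a one-degree-of-freedom
// chi-squared test comparing observed exposure counts against the intended
// 50/50 split. Function names here are illustrative, not part of Amplitude.
function srmChiSquared(controlCount: number, treatmentCount: number): number {
  const total = controlCount + treatmentCount;
  const expected = total / 2; // intended 50/50 allocation
  return (
    (controlCount - expected) ** 2 / expected +
    (treatmentCount - expected) ** 2 / expected
  );
}

// Critical value for chi-squared with 1 df at alpha = 0.001. A strict
// threshold is typical for SRM alerts, to avoid false alarms on
// healthy experiments.
const SRM_CRITICAL = 10.83;

function hasSampleRatioMismatch(control: number, treatment: number): boolean {
  return srmChiSquared(control, treatment) > SRM_CRITICAL;
}
```

With 10,000 control and 10,300 treatment exposures, the statistic stays below the threshold; a 10,000 vs. 10,700 split would trip it and is worth investigating before trusting the results.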
Use the sample size calculator to estimate the sample size you'll need to achieve significant results in your experiment, given your success metric settings. Amplitude Experiment will pre-populate reasonable industry defaults based on historical data, but you can adjust the confidence level, statistical power, minimum detectable effect, standard deviation, and test type as needed. For more information on these and other Amplitude Experiment concepts, be sure to see our glossary of key experimentation terms.
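For a conversion metric, sample size estimates like the calculator's typically come from the standard two-sample proportion formula. The sketch below shows that calculation with made-up example numbers; Amplitude's calculator may use different defaults or corrections:

```typescript
// Approximate per-variant sample size for a two-sided test on a conversion
// metric, using the standard two-sample proportion formula. The baseline
// rate and MDE below are example numbers, not Amplitude defaults.
function sampleSizePerVariant(
  baselineRate: number, // e.g. 0.10 = 10% conversion in the control
  absoluteMde: number,  // minimum detectable effect, absolute (e.g. 0.02)
  zAlpha = 1.96,        // 95% confidence level, two-sided
  zBeta = 0.8416        // 80% statistical power
): number {
  const p1 = baselineRate;
  const p2 = baselineRate + absoluteMde;
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / absoluteMde ** 2);
}

// Detecting a lift from 10% to 12% at 95% confidence and 80% power
// needs roughly 3,800-3,900 users per variant.
const n = sampleSizePerVariant(0.1, 0.02);
```

Note that the MDE appears squared in the denominator: halving your minimum goal roughly quadruples the required sample size, which is why this setting has such a large effect on how long an experiment must run.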
Pick an experiment type
Amplitude Experiment allows you to choose the type of experiment you run. Under Experiment Goals on the Plan tab, you'll have the choice of one of the following experiment types:
- Hypothesis Testing (default): Experiments where you’re using data to determine which variant to roll out based on performance. If no variant outperforms the control, you’ll usually want to roll back the experiment and stick with the control experience.
- Do No Harm (DNH): Experiments where you already have a direction in mind, and the purpose of the experiment is to make sure that this change does not significantly harm key metrics. This type of experiment is often used for design system changes, or features that have to be sunset.
As an example, let's say you've chosen to run a hypothesis testing experiment with a direction setting of "increase" and a minimum goal (MDE) of 2%. This means you believe the metric will increase by at least 2%. If you change the experiment type to Do No Harm, you'd instead be saying that you expect the metric to not decrease by more than 2%. A good use case for a Do No Harm experiment is launching a service agreement in your app and then testing for a lack of change in user retention.
NOTE: When the results of an experiment are not statistically significant, try rolling back to the control for hypothesis testing experiments, and rolling out the highest-performing variant for DNH experiments. If a DNH experiment has more than one treatment variant, you should:
- Choose the treatment with the most positive lift if the direction on the primary metric is “increase.”
- Choose the treatment with the most negative lift if the direction on the primary metric is “decrease.”
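The rollout rule above can be sketched as a small helper that picks the treatment whose lift best matches the primary metric's direction (the types and function names here are illustrative, not part of Amplitude Experiment):

```typescript
// Illustrative helper for the Do No Harm rollout rule: given each
// treatment's observed lift on the primary metric, pick the one to
// roll out. Types and names are examples, not an Amplitude API.
interface TreatmentResult {
  name: string;
  lift: number; // relative lift vs. control, e.g. 0.03 = +3%
}

function pickDnhTreatment(
  treatments: TreatmentResult[],
  direction: "increase" | "decrease"
): TreatmentResult {
  return treatments.reduce((best, t) =>
    direction === "increase"
      ? t.lift > best.lift ? t : best // most positive lift
      : t.lift < best.lift ? t : best // most negative lift
  );
}
```

For example, given treatments with lifts of +1%, -2%, and +0.5% on a metric whose direction is "increase," the helper returns the +1% treatment; with direction "decrease," it returns the -2% one.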
Create a new metric
If you don’t want to use any of the metrics in the drop-down list, you can create a new metric. To do so, follow these steps:
- From any of the metrics drop-downs, click + Create new Metric.
- In the Create Metric modal, give your new metric a name and select its type. A metric can be one of four types: unique conversions, average event totals, sum of property value, or average of property value.
- Choose the metric event, which is the event that best represents that metric. Then click Create.
Edit an existing metric
You can edit a metric after you’ve created it. Be careful, though: when you edit an existing metric, the change applies to every experiment that uses it.
To edit an existing metric, follow these steps:
- Click any of the metrics drop-downs and hover over the metric you wish to edit.
- Click Edit Full Definition. The Edit Metric modal will open.
- Make the changes you need and click Save Changes.
The next step is creating and adding your variants.