For decades, product teams have relied on experimentation to prioritize and implement product adjustments. But it’s never been easy. Because of that, experiments often just tweak issues around the margins instead of driving the big-picture changes that optimize the overall product experience.
Amplitude Experiment is a workflow-driven behavioral experimentation platform that actually accelerates your roadmaps, letting you focus on making better decisions and building better products.
With Experiment, you can easily modify and configure product experiences for unique audiences through:
- Product experimentation: Improve key KPIs by running experiments and A/B tests to onboard new users, reduce friction for checkout experiences, roll out new features, and more.
- Progressive feature delivery: Pre-plan and stage new features for beta testers, a percentage of your users, or even specific target audiences.
- Dynamic in-product experiences: Deploy and adapt custom experiences at scale.
Amplitude Experiment enables all this through flags: easy-to-set-up switches that let you modify your product's experience without having to change code. Use them to set up experiments in your product, or to stage and roll out new features straight to your users. Your code uses the Amplitude Experiment SDK or REST API to communicate with Amplitude Experiment.
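For example, a client-side integration might look something like the minimal sketch below. It assumes the JavaScript/TypeScript browser SDK's initialize/fetch/variant pattern; the deployment key, flag key, and user ID shown here are placeholders, not values from your project:

```typescript
import { Experiment } from '@amplitude/experiment-js-client';

// Initialize the client with your deployment's key (placeholder value)
const experiment = Experiment.initialize('DEPLOYMENT_KEY');

export async function applyFlags(userId: string): Promise<void> {
  // Fetch the variants Amplitude Experiment has assigned to this user
  await experiment.fetch({ user_id: userId });

  // Branch on a flag; flipping it in Amplitude changes the experience
  // without a code change or redeploy ('streamlined-checkout' is a hypothetical flag key)
  if (experiment.variant('streamlined-checkout').value === 'on') {
    // ...show the new checkout experience
  } else {
    // ...keep the existing checkout experience
  }
}
```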
NOTE: Amplitude Experiment uses a sequential testing statistical model in all experiments.
This article will provide a high-level overview of the Amplitude Experiment workflow, complete with links to more detailed articles in our Help Center. We’ll start with the workflow for creating an experiment, and follow that with the workflow for creating a feature flag.
Creating an experiment: an overview
Many experimentation programs fail in the first step of the process—nobody can articulate the problem experimentation is supposed to solve. If you can’t explain—in clear, simple language—why you’re running an experiment, you can’t realistically hope to learn anything useful from it.
Before doing anything else, spend some time coming up with a strong mission statement for your experiment. It should, at the very least, answer these two questions: What’s the problem, and how can running an experiment help you solve it? The effort you put in at this stage will pay big dividends later.
Once you’ve done that, you’re ready to configure your experiment. This means creating a new deployment (or choosing one you’ve created earlier) for the experiment, and installing the SDK you’ll be using.
Create a hypothesis
Next, reach back to the mission statement you came up with for your experiment. This will serve as the foundation for your experiment’s hypothesis. What’s a hypothesis? Think of it as a prediction of how your experiment is likely to turn out. This is how you’ll know if your experiment succeeds or fails.
But that problem statement is only the first part of a hypothesis. There are two others: a proposed solution and a predicted result. The proposed solution describes the changes you want to make to fix the problem (for example, consolidating two steps of the onboarding process into one), while the predicted result is the outcome you expect (for example, "decrease onboarding churn by 20%").
Here’s an example of a hypothesis statement:
“User churn in our onboarding funnel is significantly higher than industry average. Product data suggests our funnel may be too confusing; we believe this can be fixed by consolidating steps two and three in the funnel. As a result of this change, we expect onboarding churn to decrease by 20%.”
The hypothesis statements you write will differ, particularly in how you define the problem. Your question may be more exploratory in nature (for example, "why are so many users dropping out of our onboarding funnel?"), or you may be more interested in testing different solutions to a problem you already understand ("we've come up with several potential UI changes to rectify a known user pain point; which one works best?"). Still, this basic template is a good one to follow, especially if you’re new to experimentation.
Pick a metric
Take a look at the last sentence in that hypothesis statement. What do you notice about it? For one thing, it includes a specific measurement of how you expect user behavior to change—onboarding churn will decrease by 20%. This is what will determine whether your experiment is a success: either you’ll hit this number, or you won’t.
But how will you know? You need a way to measure that decrease in churn. To do that, you’ll need a metric. In Amplitude Experiment, any event you log in Amplitude Analytics can serve as an experiment metric. For the example experiment described above, your metric would be the event your product uses to track drop-off in the onboarding funnel.
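If your product isn’t already tracking that event, instrumenting it is usually a one-line call. Here’s a minimal sketch assuming the Amplitude Analytics Browser SDK; the API key, event name, and properties are placeholders for whatever your product actually tracks:

```typescript
import * as amplitude from '@amplitude/analytics-browser';

// Initialize Analytics with your project's API key (placeholder value)
amplitude.init('ANALYTICS_API_KEY');

// A hypothetical funnel event; any event logged like this can later be
// selected as an experiment metric in Amplitude Experiment
amplitude.track('Onboarding Step Completed', { step: 2, flow: 'onboarding' });
```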
Create a variant
At this point, you could go ahead and just roll out the new onboarding process to all your users and see what happens. But if you did that, you wouldn’t know whether the product changes you made were responsible for any improvement in your onboarding churn rate. And what if that rate gets worse after you roll out the new funnel? It could be the result of poor design choices, random chance, or some external influence you didn’t consider, but you’ll never know for sure.
This is why you’ll need to create at least one variant for your experiment. A variant is simply a different user experience that you’ll show to a percentage of your users. Keeping with the example we’ve been using, the variant here would be the new, streamlined version of the onboarding process. Some of your users will see that, while others will see the current process instead (also known as the control). It’s the differences in how your users respond to each variant that will determine the experiment’s success.
When coming up with variants, it’s good practice to keep the number of changes in each one low (if you can get that number down to one, so much the better). It’s also good to make sure the variants are noticeably different from each other. This way, you can be confident that users in each variant segment are truly experiencing the product differently from each other, and that your changes are what’s driving any differences in behavior between the segments.
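In code, serving the control and the variant usually comes down to a simple branch on the variant value. The sketch below again assumes the JavaScript/TypeScript SDK; the flag key, variant value, and render functions are hypothetical placeholders standing in for your own onboarding experiences:

```typescript
import { Experiment } from '@amplitude/experiment-js-client';

// Placeholder functions standing in for your two onboarding experiences
const renderCurrentOnboarding = (): void => { /* control: the existing funnel */ };
const renderConsolidatedOnboarding = (): void => { /* variant: steps two and three merged */ };

const experiment = Experiment.initialize('DEPLOYMENT_KEY'); // placeholder deployment key

export async function showOnboarding(userId: string): Promise<void> {
  await experiment.fetch({ user_id: userId });

  // 'onboarding-consolidation' and 'treatment' are hypothetical names
  const variant = experiment.variant('onboarding-consolidation');
  if (variant.value === 'treatment') {
    renderConsolidatedOnboarding();
  } else {
    renderCurrentOnboarding();
  }
}
```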
Allocate users
Now that you’ve got your variants, you’ll need to decide how many of your users will see them. You can choose to roll your experiment out to your entire user base, or you can just roll it out to a fraction of them instead. You’ll specify how many users in your experiment will see the control experience and how many will see your variants. You can define specific user segments to include or exclude from your experiment, and you can even choose a specific experience for individual user or device IDs.
Activate your experiment
At this point, you’re ready to roll out your experiment to your users. Just flip the toggle switch to Active, and your experiment will be live. Congratulations!
Analyze your results
After your experiment goes live, you can generate and view your results at any time. Experiment will tell you when your experiment has reached statistical significance, and it gives you all the data you need to analyze and interpret your results, and to apply those learnings to your product experience going forward.
To learn more about how to design, roll out, and learn from experiments, check out our Help Center articles on working with Amplitude Experiment.
Creating a feature flag: an overview
If you’re planning a phased feature rollout instead, your workflow is even simpler. Because you’re not asking a question about user behavior in your product, you don’t need to worry about creating a hypothesis, picking a metric, or analyzing your results. All you have to do is create a feature flag.
NOTE: Behind the scenes, experiments and flags are very similar, but the basic difference is this: An experiment helps you make sure you’re building the right thing for your business, while feature flags allow seamless feature releases and rollbacks. You can use a similar approach for both, but you should think about them differently, and your workflows should probably reflect that.
Once you’ve configured your deployment, go straight to creating your variants. The basic idea—i.e., a new and different product experience that some users see but others do not—remains the same. But instead of exploring how different user segments react to different user experiences, you’ll be choosing which users get access to new features first. When working with feature flags, the variant represents code for a new feature that isn’t yet released to your entire user base.
You’ll still allocate users to your variants as you would if you were running an experiment, and activating your flag works the same way: just flip the toggle switch to Active.
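In code, that usually means gating the new feature’s code path behind the flag, so rolling back is just a matter of switching the flag off, with no redeploy needed. A minimal sketch, once more assuming the JavaScript/TypeScript SDK, with a hypothetical flag key and feature function:

```typescript
import { Experiment } from '@amplitude/experiment-js-client';

// Placeholder standing in for the new feature's entry point
const enableNewEditor = (): void => { /* ... */ };

const experiment = Experiment.initialize('DEPLOYMENT_KEY'); // placeholder deployment key

export async function maybeEnableNewEditor(userId: string): Promise<void> {
  await experiment.fetch({ user_id: userId });

  // Only users allocated to the flag get the unreleased feature; everyone
  // else, and everyone after a rollback, falls through to the existing behavior
  if (experiment.variant('new-editor').value === 'on') {
    enableNewEditor();
  }
}
```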
Check out our Help Center article to learn more about feature flags and how they work in Amplitude Experiment.
Delete old experiments and flags
Deleting experiments and feature flags you no longer need is simple. Just select Archive from the drop-down menu next to the Active toggle switch.
Further reading
Learn more about Amplitude Experiment and how it can help you optimize the product experience for your users with these articles in our Help Center:
- Amplitude Experiment: key terms
- Configure your experiment
- Design your experiment
- Roll out your experiment
- Learn from your experiment
- QA your experiment
- Troubleshoot your experiment
- Set up mutually exclusive experiments
- How randomization works in Amplitude Experiment
- Performance and scaling in Amplitude Experiment
- Amplitude Experiment SDK documentation