Impact Analysis helps you understand how first-time engagement with one feature affects the rate of another behavior. The chart shows how often users take an action, and how many users take it, before and after those users do something else for the first time.
What you will learn in this article
This article explains the different ways you can view the Impact Analysis chart, along with a few best practices for validating hypotheses about user behavior.
- It is important to keep in mind that correlation does not imply causation.
- This feature is available only to Enterprise, Growth, and Scholarship customers.
- When analyzing impact, it is best practice to form alternate hypotheses about other actions your users might take.
Table of Contents
- What you will learn in this article
- Setting up the report
- Understanding the chart
- Causal Inference Interpretation Best Practices
- Video Walkthrough
Teams use Impact Analysis charts to:
- learn whether discovering a feature for the first time changes how often their users take an action
- determine if users who interacted with a new or changed feature are doing more of a behavior, relative to the time they tried the new or changed feature
In the following example from a music listening app, you can see the change in the average number of times users play a song or video after they discover the ability to favorite a song for the first time:
Setting up the report
Start by selecting a treatment event: a user action that you hypothesize affects your users' propensity to take some key action.
Next, select the outcome event: the behavior you want to check for a change after users perform the treatment event for the first time.
Finally, select a date range. This defines the time window in which Amplitude finds all users who performed the treatment event for the first time. Note that "first time" means the user has not done the treatment event at any time in the X calendar days before the beginning of the selected date range.
The value of X depends on the time interval you've chosen:
- Daily: X is 90 calendar days.
- Weekly: X is 91 calendar days (or 13 weeks).
- Monthly: X is 120 calendar days (or 4 months).
- Quarterly: X is 360 calendar days (or 4 quarters).
In the music app example, if you selected the 10/15/2018 - 11/15/2018 range, the results would include every user who did "Favorite Song or Video" within that window and had NOT performed that event at any point between 7/17/2018 and 10/15/2018 (the 91 calendar days before the beginning of the selected time window).
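To make the windowing concrete, here is a minimal sketch of that first-time filter, assuming a hypothetical pandas event log with `user_id`, `event`, and `ts` (timestamp) columns. The lookback values come from the list above; none of these names are part of Amplitude's API.

```python
import pandas as pd

# Lookback windows from the list above: a user's treatment event only
# counts as a "first time" if they have no treatment event in the
# X calendar days before the start of the selected date range.
LOOKBACK_DAYS = {"daily": 90, "weekly": 91, "monthly": 120, "quarterly": 360}

def first_time_users(events: pd.DataFrame, treatment: str,
                     start: pd.Timestamp, end: pd.Timestamp,
                     interval: str) -> pd.Index:
    """Users whose first treatment event falls within [start, end]."""
    cutoff = start - pd.Timedelta(days=LOOKBACK_DAYS[interval])
    t = events[events["event"] == treatment]
    # Users who did the treatment event inside the selected range...
    in_range = t.loc[(t["ts"] >= start) & (t["ts"] <= end), "user_id"].unique()
    # ...minus anyone who already did it during the lookback window.
    prior = t.loc[(t["ts"] >= cutoff) & (t["ts"] < start), "user_id"].unique()
    return pd.Index(in_range).difference(prior)

# Music-app example (weekly interval, so a 91-day lookback):
# cohort = first_time_users(events, "Favorite Song or Video",
#                           pd.Timestamp("2018-10-15"),
#                           pd.Timestamp("2018-11-15"), "weekly")
```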
Understanding the chart
The chart plots the outcome event metric on a relative n-day basis from the time each user performed the treatment event for the first time. Amplitude lines up each user's relative timeline for you, so you can easily see the pattern. The center line represents the day or week in which users first performed the treatment event.
So, in the above example, you can see that users who favorited a song for the first time between 10/15 and 11/15 played an average of just over four songs or videos in the week after they first tried favoriting. In contrast, those same users played an average of around two songs in the week before they discovered the favoriting feature.
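That relative alignment can be sketched under the same assumptions as above, where `first_treatment` is a hypothetical Series mapping each cohort user to the timestamp of their first treatment event (this is an illustration, not Amplitude's implementation):

```python
def add_relative_interval(outcomes: pd.DataFrame,
                          first_treatment: pd.Series,
                          interval_days: int = 7) -> pd.DataFrame:
    """Tag each outcome event with its interval relative to the user's
    first treatment event."""
    anchor = outcomes["user_id"].map(first_treatment)
    days_since = (outcomes["ts"] - anchor).dt.days
    # Floor division puts every user on the same relative timeline:
    # 0 = the interval containing the first treatment event,
    # -1 = the week before it, 1 = the week after, and so on.
    return outcomes.assign(rel_interval=days_since // interval_days)
```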
When set to Average, the Y-axis represents the mean number of times users performed the outcome event in each n-relative-day/n-relative-week interval, among those who did the outcome event at least once in that interval. You can hover over each data point to see how many users did the given outcome event at least once in that interval.
Per the example below, 61,647 users played a song or video in the week after favoriting a song for the first time between 10/15 and 11/15.
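Assuming the outcome events already carry the `rel_interval` column from the sketch above (and contain only the cohort's events), the Average view reduces to roughly this:

```python
def average_per_interval(outcomes: pd.DataFrame) -> pd.Series:
    """Average view: mean outcome-event count per relative interval,
    taken only over users with at least one outcome event in it."""
    counts = outcomes.groupby(["rel_interval", "user_id"]).size()
    # Users with zero outcome events in an interval have no row here,
    # so the mean is restricted to users who did the event at least once.
    # The hover tooltip's user count corresponds to
    # counts.groupby(level="rel_interval").size().
    return counts.groupby(level="rel_interval").mean()
```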
When set to Active %, the Y-axis represents the percentage of users who performed any active event in each n-relative-day/n-relative-week interval who also did the outcome event at least once in that interval. You can hover over each data point to see how many users did the given outcome event at least once in each interval.
Per the example below, of the 66,339 users who were active in the week after favoriting a song for the first time, 92.9% played a song or video.
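A sketch of that ratio, where `active` is a hypothetical frame of all active events tagged with `rel_interval` in the same way as `outcomes`:

```python
def active_pct_per_interval(outcomes: pd.DataFrame,
                            active: pd.DataFrame) -> pd.Series:
    """Active % view: of the users who performed any active event in a
    relative interval, the share who also did the outcome event."""
    did_outcome = outcomes.groupby("rel_interval")["user_id"].nunique()
    was_active = active.groupby("rel_interval")["user_id"].nunique()
    # Per the article's example: 61,647 / 66,339 ≈ 92.9% in the week
    # after favoriting a song for the first time.
    return 100 * did_outcome / was_active
```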
When set to Frequency, the Y-axis shows the distribution of the number of times users performed the outcome event in each n-relative-day/n-relative-week interval.
In the example below, 7,540 users played 4 songs or videos in the week after favoriting a song for the first time.
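Under the same assumptions, the Frequency view for a single relative interval is essentially a histogram of per-user event counts:

```python
def frequency_distribution(outcomes: pd.DataFrame,
                           rel_interval: int) -> pd.Series:
    """Frequency view: for one relative interval, how many users did
    the outcome event exactly 1, 2, 3, ... times."""
    in_interval = outcomes[outcomes["rel_interval"] == rel_interval]
    per_user = in_interval.groupby("user_id").size()
    # e.g. dist.loc[4] would hold the 7,540 users who played exactly
    # 4 songs or videos in the week after first favoriting.
    return per_user.value_counts().sort_index()
```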
Properties allow you to compute the average or sum of an event property for a given outcome event, across every instance of that outcome event in each n-relative-day/n-relative-week interval. For example, you could plot the average song/video length across the songs users played in the weeks before and after favoriting a song. A sketch follows below.
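Continuing the sketch, a property aggregation is a per-interval reduction over event instances rather than users; `prop` names a hypothetical column such as `song_length_seconds`:

```python
def property_metric(outcomes: pd.DataFrame, prop: str,
                    how: str = "mean") -> pd.Series:
    """Properties view: average ("mean") or "sum" of an event property
    across every outcome-event instance in each relative interval."""
    return outcomes.groupby("rel_interval")[prop].agg(how)
```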
Causal Inference Interpretation Best Practices
Impact Analysis helps you directionally validate hypotheses, so you can develop a better understanding of the relationships between user behaviors. It is not a replacement for randomized experimentation, which is still the gold standard for determining causal effects. We encourage you to think of Impact Analysis as a tool for deciding where to experiment, to help your users engage more successfully with your product.
Here are a few things to consider before making causal conclusions:
- Alternate hypotheses: Have you thought about other potential actions that users take around the same time that they perform your hypothesized causal behavior for the first time? Such actions might also be contributing to the change in the rate of the outcome behavior. If those alternative actions are instrumented (tracked in Amplitude), try creating other Impact Analysis charts with those actions as the treatment event to evaluate whether you see a similar pattern. If you do observe a similar pattern, you'll need to further investigate how much each treatment is contributing to the change in outcome through user research and randomized experiments whenever possible.
- User counts: If your outcome metric shows high volatility (changing dramatically between intervals), or shows a dramatic change in the intervals after the treatment relative to those before it, check for a small user count, which can explain the inconsistency or the magnitude. A small handful of users can swing the metric one way or the other, while large user counts typically have a "smoothing" effect that gives the metric more stability. Be cautious when drawing conclusions from small user counts, because they don't necessarily reflect a broader pattern!