In an industry first, Apteligent has quantified the correlation between mobile app crashes and increased churn rates. In this report, we not only deduce that a correlation exists — we leverage our data to show exactly how crashes drive an increase in churn. For those new to the space, churn can be thought of as the inverse of retention. There are many ways to define retention and churn. We focus on two of the most popular definitions:
Rolling Retention
We analyze users that have a crash on a certain day, and then look a few weeks into the future to see if they are still using the app.
Nth Day Retention
We ask the simple but important question: of users that had a crash on a certain day, how many return the next day?
Per-User Crash Rate
The crash rate for an app is the number of its crashes divided by the number of its app loads. Since we are focused on user churn, we need to distill this metric down into a per-user crash rate. For example, a single user may have a 100% crash rate (they loaded the app once and it crashed once on a given day). This doesn’t mean the app itself is crashing at a 100% rate for all users, but allows us to consider the segments of the population experiencing that issue.
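The distinction above can be sketched in a few lines. This is a minimal illustration, with a hypothetical helper name, of how a single user's crash rate differs from an app-wide one:

```python
def per_user_crash_rate(app_loads: int, crashes: int) -> float:
    """Fraction of one user's app loads that ended in a crash.

    A user who loaded the app once and crashed once has a personal
    crash rate of 1.0 (100%), even if the app's overall crash rate
    across all users is far lower.
    """
    if app_loads <= 0:
        raise ValueError("app_loads must be positive")
    return crashes / app_loads

# One unlucky user: a single load that crashed -> 100% personal crash rate.
unlucky = per_user_crash_rate(app_loads=1, crashes=1)

# A heavier user: 10 loads, 1 crash -> 10% personal crash rate.
typical = per_user_crash_rate(app_loads=10, crashes=1)
```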
When most people think of user churn, they think of “hard churn,” which occurs when a user leaves a mobile app and never returns. Churn rate is the percentage of users that do not return over a specified time period; in many cases, this means uninstalling the app completely. Our rolling retention calculation is a closer estimation of hard churn. However, it’s possible for a user to stop using an app for a period of time because of a crash and then return at a later date. This is where Nth Day Retention comes into play; specifically, in this report we look at the impact that crashes have on users returning the next day.
For more in-depth definitions, see the Methodology section at the end of the report.
CRASHES INCREASE CHURN BY AS MUCH AS 534%
This is a 6X Increase From Normal Churn
The charts above and below are an analysis of rolling retention (“hard churn”) on Android. The X-axis is the per-user crash rate and the Y-axis is the churn rate. The bar chart above illustrates the percentage increase in churn rate due to crash rate. The best-fit line graph below models how the actual churn rate increases as the per-user crash rate increases; the 0% crash rate at the beginning of the line represents “normal” churn that occurs regardless of crashes. In this case that number is about 1.2%. Churn increases to about 7.4% as the per-user crash rate approaches 100%.
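The fitted line described above can be approximated as a simple linear model anchored at the two endpoint figures quoted in this section. The function name and the linear form are our assumptions for illustration, not the report's actual fitting procedure:

```python
def churn_rate_android(crash_rate: float) -> float:
    """Linear approximation of the Android rolling-retention churn model.

    The intercept (~1.2%) is the baseline churn at a 0% per-user crash
    rate; the slope is chosen so churn reaches ~7.4% at a 100% crash
    rate. Both endpoints are taken from the report's figures.
    """
    baseline = 0.012            # "normal" churn with no crashes
    slope = 0.074 - baseline    # extra churn per unit of crash rate
    return baseline + slope * crash_rate

no_crashes = churn_rate_android(0.0)   # ~1.2% baseline churn
all_crashes = churn_rate_android(1.0)  # ~7.4% churn at a 100% crash rate
```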
We believe that viewing churn through the lens of Android data gives a more accurate view than using iOS data, since the iOS platform prevents us from sending a crash report until the next app load. This means it’s possible for a user to load an app, experience a crash, and never load the app again, without the crash ever being counted as a cause of churn (since the crash report would only be sent on the next app load). The iOS graph is shown below. Largely due to the platform limitation described above, the churn impact of a crash is 2.4x on iOS, compared to 6.3x in the Android analysis.
USERS ARE UP TO 8X LESS LIKELY TO RETURN THE NEXT DAY AFTER A CRASH
The graphs in this section were calculated using Nth Day Retention, specifically Day1 retention. This allows us to determine whether crashes cause fewer users to return to the app the next day. As in the previous graphs, the X-axis is the per-user crash rate and the Y-axis is the churn rate. On average, 1.8% of users who had a crash-free app experience didn’t use the app the next day. This is a weighted average influenced by high-traffic apps that have very high retention rates. The model shows a steep increase in churn rate as the per-user crash rate increases. In fact, as the per-user crash rate approaches 100%, the churn rate increases to almost 15%.
If every app were weighted equally in our system, the unweighted average churn for users who had a crash-free app experience would actually be 22%. Under that same equal weighting, the figure for next-day app opens jumps to 45%.
The iOS platform limitations described under rolling retention apply to Nth Day Retention as well. This causes the measured churn rate to increase by about 2.4x on iOS instead of the 8x seen on Android. We believe the Android figures are closer to what actually takes place on iOS.
70% of users that have a crash experience one every other time they load the app
The graph below shows the average per-user crash rate distribution for all users that experience a crash. On average, 70% of users that have a crash experience one every other time they load the app!
One surprising result is that for users that do experience a crash, about 13% of them have a 100% crash rate (i.e., the app crashes every time they load it!). Unsurprisingly, the rolling and Nth day retention results show that those are the users that churn the most.
On the graph above, a crash rate of 0% is left out because we’re focused on users that did experience a crash. The majority of users actually do not experience a crash, which is why you see per-app average crash rates in the 4-7% range.
Users with Low Engagement Have a Higher Crash Sensitivity
As part of our methodology, described below, we made sure to segment users based on the number of app loads per day and crashes per day. The graph below is the raw data for our rolling retention analysis:
The red dots are user segments that are churning, and the teal dots are retained segments. At 0 crashes per day, you see natural churn occurring regardless of crashes. As crashes per day increase we see a steady stream of churning users, especially those that load an app 10 times or fewer per day. The cluster towards the bottom of the graph is a strong indication of something you may know intuitively: it is harder to retain users with lower engagement, especially those who are experiencing crashes. The inverse result is also interesting — users with higher engagement are more resilient (less sensitive) to crashes. Apteligent’s dashboards allow you to track the impact of poor performance on your app’s daily active users. In a future report we will explore how crashes and other performance metrics negatively impact engagement.
App Store Categories: Shopping Apps Susceptible
In this section we break down our churn data by app store category. We analyze which categories have users that are the most crash-sensitive, meaning that experiencing a crash carries a higher likelihood of churn. We also look at which categories are the most resilient to crashes, meaning crashes increase churn at a lower rate. In the tables below, “Churn Multiplier” is the multiple by which churn increases for a user that experiences a 100% crash rate.
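As a quick illustration of the metric, the multiplier falls out of a one-line ratio. The helper name is hypothetical, and the figures plugged in are the Android rolling-retention numbers quoted earlier in the report:

```python
def churn_multiplier(churn_at_full_crash: float, baseline_churn: float) -> float:
    """Multiple by which churn increases for users at a 100% per-user
    crash rate, relative to the baseline churn at a 0% crash rate."""
    if baseline_churn <= 0:
        raise ValueError("baseline churn must be positive")
    return churn_at_full_crash / baseline_churn

# Android rolling retention: ~7.4% churn at a 100% crash rate
# against a ~1.2% baseline -> roughly a 6x multiplier.
multiplier = churn_multiplier(0.074, 0.012)
```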
Rolling Retention (“Hard Churn”)
The results below are surprising and illustrate the impact that a poorly performing app has on its user base. Shopping and Finance, two of the most revenue-critical categories, came out on top of the most sensitive list.
It’s interesting to speculate on why some categories are more “crash resilient” than others. For example, an addictive game that crashes frequently may not sway a user enough to abandon the app. An airline app may perform terribly, but if you need to pull up your e-ticket you’re going to try again, regardless of how frustrating or futile the experience.
We excluded some categories with higher statistical error rates on the base level of churn (churn at 0% crash rate) from the analysis above.
Below is a more complete list of categories sorted by how much churn increases at a 100% per-user crash rate. Unlike the tables above, this list does not take into account the base level of churn, which gave us the multiplier metric.
1-Day Retention (“Crash and Come Back the Next Day?”)
Our previous analysis of 1-day retention showed that the impact on churn was much higher than the rolling retention analysis, and similar results are reflected in the app category data below. In other words, fewer users will come back the very next day after a crash, but over a longer time period they may give the app another chance.
The results show Shopping again among the top three most crash-sensitive categories. Business and Education appear here but not in the top three for rolling retention. Those types of apps may be essential to use (for school or work, for example), so even though a crash causes a user to abandon the app the next day, the results of this analysis show that users come back later at a higher rate out of necessity.
As we saw with rolling retention, Games and Travel again appear on this list as resilient to crashes. As before, some categories with higher statistical error rates on the base level of churn were excluded. Below is a more complete list of categories sorted by how much next-day churn increases at a 100% per-user crash rate. Unlike the tables above, this list does not take into account the base level of churn, which gave us the multiplier metric.
Rolling Retention vs N-Day Retention
In this final section we explore the difference between the two retention methodologies discussed in the report. The graph below combines the churn models.
The Nth day retention model (the teal fitted line) has a steeper slope, which means churn rate increases more rapidly as the per-user crash rate increases. This makes sense intuitively; users are less likely to come back the very next day if the app isn’t working correctly. However, they may try the app again in the coming weeks which would decrease the churn rate (and slope of the fitted line) which we see in the rolling retention analysis.
Another characteristic of the graph is the spread, or error, between the dots and the fitted line. There is much less spread in the rolling retention model because fewer external forces, such as usage patterns, influence the result. Whether or not a user returns to an app the next day can depend on many things besides a crash; maybe it’s a commuting app and the next day is Saturday, or perhaps it’s a news app that is loaded daily regardless of its performance. These usage patterns are averaged out in the rolling retention model since we’re considering almost a month’s worth of data.
- In an industry first, Apteligent has established that an increase in crashes is strongly correlated with higher churn rates.
- Crashes increase churn by as much as 534%: This represents a 6x increase from your “average” churn rates.
- Crashes decrease next-day app opens by as much as 8x the normal rate.
- The data shows that “light” users, those with fewer app opens per day, tend to churn at higher rates when they experience crashes.
- The impact of crashes on churn also varies by app store category: Shopping & Finance in particular are vulnerable to crashes causing increased churn rates, while Games & Travel were much more resilient.
As an app owner, the ROI to invest in performance is clear — revenue is lost as your customers churn, and it is much more expensive to acquire new users than to keep existing users¹. Our platform is designed to analyze and surface these critical issues impacting your business metrics.
Methodology & Assumptions
Rolling Retention
Take all of your users on a certain day; call this Day0. Then choose a day in the future, for example, Day28. Rolling retention is the percentage of users from Day0 that are still using the app on Day28 or any day thereafter.
For our report, we’ve improved on this definition. We instead look at rolling retention averaged over the course of a week. In other words, we calculate rolling retention for each day, Day0 through Day7, and then average the result based on performance and usage cohorts. Each user is segmented into a group of similar users that load the app the same average number of times per day and experience the same average number of crashes per day. For each starting day, we looked ahead 21 days to see the percentage of users in each group that returned to the app. In practice, the 21 days represents the “any day thereafter” defined above.
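The look-ahead step above can be sketched roughly as follows. This is a simplified illustration assuming activity logs keyed by date (names and data shapes are ours, not Apteligent's production pipeline), showing only the 21-day look-ahead for a single Day0 rather than the full weekly cohort averaging:

```python
from datetime import date, timedelta

def rolling_retention(day0_users: set, activity_by_day: dict,
                      day0: date, horizon: int = 21) -> float:
    """Fraction of Day0's users seen again on ANY day within `horizon` days.

    `activity_by_day` maps a date to the set of user ids active that day.
    Returning on any day inside the look-ahead window counts as retained,
    mirroring the "any day thereafter" part of the definition.
    """
    if not day0_users:
        return 0.0
    retained = set()
    for offset in range(1, horizon + 1):
        day = day0 + timedelta(days=offset)
        retained |= day0_users & activity_by_day.get(day, set())
    return len(retained) / len(day0_users)

# Tiny example: 3 users active on Day0; "a" returns on day 5,
# "b" on day 20, and "c" never comes back -> 2/3 retained.
day0 = date(2024, 1, 1)
activity = {day0 + timedelta(days=5): {"a"},
            day0 + timedelta(days=20): {"b"}}
retained_fraction = rolling_retention({"a", "b", "c"}, activity, day0)
```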
Nth Day Retention
Take all of your users on a certain day, call this Day0. Pick a day in the future, for example, Day7, and report back the percent of users still using the app on that day. This can be done for longer time periods as well. Instead of using just Day0, we could choose Day0 through Day 7, and then ask for that entire week’s worth of users, whether or not they return the following week.
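The basic definition can be expressed in a couple of lines; a minimal sketch with hypothetical names:

```python
def nth_day_retention(day0_users, dayn_users) -> float:
    """Fraction of Day0's users that are active again on DayN.

    Unlike rolling retention, only activity on that exact day counts.
    """
    day0 = set(day0_users)
    if not day0:
        return 0.0
    return len(day0 & set(dayn_users)) / len(day0)

# Of 4 users active on Day0, only "a" is active on Day7 -> 25% retention.
retention = nth_day_retention({"a", "b", "c", "d"}, {"a", "z"})
```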
For our report, we asked: of users that had a crash on Day0, how many return the next day? We take the average of this Day0 analysis over a span of 30 days (i.e., Day1 compared to Day2, Day2 to Day3, and so on).
For both rolling and Nth day retention, we grouped users into usage and performance cohorts. For example, we determined whether a user loaded an app an average of X times per day and had an average of Y crashes per day. For each (X, Y) tuple we tracked the total users fitting that profile and how many returned the next day (Nth day retention) or any time in the 21-day period (rolling). This means we tracked two counts for each (App Loads Per Day, Crashes Per Day) tuple: how many users remained and how many users left.
For Nth day retention, we calculated “did these users come back the next day?” each day for 30 days and then took an unweighted average, meaning each tuple was given equal weight. For rolling retention we took a weighted average, meaning tuples with higher user counts contributed more to the average.
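The two averaging schemes can be illustrated side by side. This toy sketch (names and data layout are our assumptions) groups users into (App Loads Per Day, Crashes Per Day) tuples and computes both averages:

```python
from collections import defaultdict

def cohort_churn(users):
    """Group users into (loads_per_day, crashes_per_day) cohorts and
    report per-cohort churn plus unweighted and weighted averages.

    `users` is an iterable of (loads_per_day, crashes_per_day, returned)
    tuples, where `returned` is True if the user came back.
    """
    counts = defaultdict(lambda: [0, 0])  # tuple -> [remained, left]
    for loads, crashes, returned in users:
        counts[(loads, crashes)][0 if returned else 1] += 1

    churn = {k: left / (ret + left) for k, (ret, left) in counts.items()}
    # Unweighted: every cohort tuple counts equally (Nth day approach).
    unweighted = sum(churn.values()) / len(churn)
    # Weighted: larger cohorts contribute more (rolling approach).
    total = sum(ret + left for ret, left in counts.values())
    weighted = sum(churn[k] * (ret + left) / total
                   for k, (ret, left) in counts.items())
    return churn, unweighted, weighted

# Example: cohort (2, 0) has 3 users, 1 of whom churned;
# cohort (5, 1) has 1 user, who churned.
churn, unweighted, weighted = cohort_churn([
    (2, 0, True), (2, 0, True), (2, 0, False), (5, 1, False),
])
```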
For the purposes of this analysis, we only looked at users that loaded an app more than once a day.