How bad can a 3.7-star rating really be? Imagine a Fortune 500 retailer with considerable online revenue, yet only a minuscule fraction of those sales coming from its mobile app. Exactly such a retailer approached us for assistance. To get their mobile app revenue back on track, we would need to fix the app's performance and improve its app store rating.
Drawing on our considerable mobile experience, we suspected that the customer's revenue woes were tied to the app's poor rating. On closer inspection, the negative reviews centered on quality and performance: users regularly complained that features did not work, that the app crashed, or that it suffered from other serious performance problems. Following up, we reviewed the app's vitals and found that the crash rate was alarmingly high and test coverage was minimal. The crash rate and the other KPIs that determine app stability were compromised and needed fixing. This case study shows how fixing these stability issues stemmed the flow of negative reviews and improved the app's ratings; favorable ratings increase downloads and usage and, in turn, boost mobile revenue.
Identifying areas of improvement
After our initial evaluation, we compiled a list that noted areas for improvement that we felt would have the most significant impact on the app’s quality:
A high crash rate of around 6%, frustrating roughly 1 in every 16 users.
A dated homepage that had not kept up with modern design standards.
Shopping cart quality issues - the cart would sometimes double in size at random, or refuse to accept any coupon codes.
Integration builds took far too long, adding a couple of hours to development and maintenance cycles and creating a development bottleneck.
Regression testing was manual and lengthy, and tests were not run often enough; a complete pass could take up to 4 days. Manual testing bottlenecked releases and was unreliable because of human error, so regression bugs were not identified promptly.
Test cases were mostly designed after the development phase. The difficulty of designing a test plan after the fact was evident in the dismal 4% test coverage.
QA mainly relied on simulators for testing, missing any issues appearing only on actual hardware.
An obsolete architectural framework made maintenance difficult. MVC (Model-View-Controller) dates back decades and was not designed with the needs of mobile applications in mind.
Inefficient code - roughly 3,000 lines of duplicated code in the main program bloated the executable and contributed to stability issues, and several embedded third-party libraries were never used.
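The "crash rate" and "1 in N users" figures quoted throughout this case study are two views of the same number, related by a simple reciprocal. A minimal sketch (the helper name is ours; the rates are the ones from this study):

```java
public class CrashRateImpact {
    // Convert a crash rate (fraction of users affected) into the
    // "1 in N users" form used in this case study. floor() rounds
    // conservatively: 6% is about 1 crash per 16.7 users.
    static long oneInN(double crashRate) {
        return (long) Math.floor(1.0 / crashRate);
    }

    public static void main(String[] args) {
        System.out.println(oneInN(0.06));   // 6% before the fix   -> 1 in 16 users
        System.out.println(oneInN(0.0012)); // 0.12% after the fix -> 1 in 833 users
    }
}
```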
Solution: steps taken to tackle the problem areas
After identifying the problem areas, we created a development plan to address the various problems. Here is a summary of the steps that we took:
Redesigned homepage and shopping cart, giving the shopping experience a more modern and streamlined look and feel.
Implemented CI/CD (continuous integration and continuous delivery), which let us roll out development and testing improvements more smoothly and more frequently.
Increased test automation. By using CI/CD methods, we automated much of the regression testing.
All newly written code included a test plan implemented simultaneously with code development. When practical, tests were also written to cover existing code modules.
Vastly expanded test suites on actual hardware platforms. QA testing now included test reports using “real world” scenarios.
Moved the architectural model to VIPER (View, Interactor, Presenter, Entity and Routing).
Removed code duplications and unnecessary libraries to make the program more compact and efficient, which is vital with limited size and computing resources on a mobile device.
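To make the VIPER migration above concrete, here is a minimal sketch of the pattern's five roles in Java (the class and screen names are illustrative, not taken from the retailer's codebase; VIPER originated on iOS, but the separation of concerns is the same on Android):

```java
import java.util.List;
import java.util.Locale;

public class ViperSketch {

    // Entity: plain data with no behavior.
    record Product(String name, double price) {}

    // Interactor: business logic, knows nothing about the UI.
    static class CartInteractor {
        double total(List<Product> cart) {
            return cart.stream().mapToDouble(Product::price).sum();
        }
    }

    // View: renders whatever the Presenter hands it.
    interface CartView {
        void showTotal(String formattedTotal);
    }

    // Router: owns navigation decisions.
    static class CartRouter {
        String nextScreen(boolean cartEmpty) {
            return cartEmpty ? "catalog" : "checkout";
        }
    }

    // Presenter: mediates between View, Interactor, and Router,
    // so each piece can be tested in isolation.
    static class CartPresenter {
        private final CartInteractor interactor = new CartInteractor();
        private final CartRouter router = new CartRouter();

        void present(List<Product> cart, CartView view) {
            view.showTotal(String.format(Locale.US, "$%.2f", interactor.total(cart)));
        }

        String onCheckoutTapped(List<Product> cart) {
            return router.nextScreen(cart.isEmpty());
        }
    }

    public static void main(String[] args) {
        var cart = List.of(new Product("socks", 4.50), new Product("hat", 12.00));
        var presenter = new CartPresenter();
        presenter.present(cart, total -> System.out.println(total)); // prints $16.50
        System.out.println(presenter.onCheckoutTapped(cart));        // prints checkout
    }
}
```

Because the Interactor and Router are plain classes with no UI dependencies, each can be unit-tested directly, which is a large part of why the migration made higher test coverage practical.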
Results: our end-of-project statistics
From the start to the end of the project, we achieved almost all our performance milestones. Here are some of our more notable development results:
The crash rate dropped to 0.12%, or 1 in over 800 users
The simplified and enriched home page produced a 7.2% increase in overall conversion rate on Android and a 4.3% increase on iOS
Automated testing dramatically decreased testing time, and increased developer productivity. Testing time was reduced from 4 days to about an hour, with more reliable results.
Increased test coverage to over 20% of existing code, and over 80% of new code modules.
Code duplications were eliminated (0.0%).
The move to VIPER architecture resulted in a cleaner code design that is easier to maintain.
Program image size was reduced by 30%, resulting in faster downloads and execution.
The indirect results were also impressive. The retailer’s customer satisfaction, app rating, downloads and revenue all grew significantly:
The app store ratings rose from 3.7 to 4.8 stars
Annual revenue generated by the mobile application rose to more than 30 times its original value
The app became one of the top 10 app store downloads leading up to Black Friday
Black Friday visits were up by 76%, and revenue was up by 64% over the previous year’s Black Friday.
This case study illustrates the direct correlation between app quality, ratings and revenue. User frustration over crashes and instability had dragged down the retailer's app ratings and sales; once we addressed the underlying issues, ratings and revenue rebounded immediately. Generally speaking, customers remain loyal to an application that meets their expectations of stable, predictable operation. They express that loyalty with high app ratings and, most importantly, increased sales.