Nobody wants e-commerce performance issues that lead to lost revenue. Nothing upsets engineering teams more than frantic application troubleshooting and patching in the middle of the night. And absolutely nothing upsets business people and destroys careers more than watching customers switch to a competitor during the year’s heaviest sales days because the competitor’s website or mobile application is visibly faster. Continuous Performance Testing can help prevent these problems by catching them during the development process instead of after an application is in production.
Here are four studies that show how software performance problems can affect busy e-commerce properties:
- 2012: Research estimated that Amazon would lose $1.6 billion in sales every year if its site slowed down by just one second.
- 2013: 39% of e-retailers reported losing money in the previous year due to performance or stability problems.
- 2014: The web performance monitoring company Catchpoint Systems looked at aggregate performance on Black Friday and compared it to the same timeframe in 2013. The results are notable: desktop web pages were 19.85 percent slower, while mobile web pages were a whopping 57.21 percent slower.
- 2015: Several major e-retailers’ sites buckled under heavy holiday traffic during Cyber Monday peak times.
With stakes this high, it is not surprising that companies are willing to make substantial investments in the identification and prevention of performance issues. The central question is, “How do we do this, systematically and reliably?”
Continuous Performance Testing is gaining increased attention and support from the developer and vendor communities. The core idea is simple and straightforward: as developers routinely and continuously develop new features and make changes to an application, wouldn’t it be great to know, immediately and precisely, what kind of impact each change would have on the application’s performance?
In order to do that, the new application build has to be tested for performance under stress — after the build process is complete, but before the build is released to production. Since most development teams already have a functioning CI process, performance testing simply becomes a step in the CI pipeline — usually performed after regression testing but before production deployment.
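To make the pipeline step concrete, the gate can be a small script that runs after the load-test stage and fails the build when a KPI threshold is violated. Here is a minimal sketch in Python; the threshold names, values, and sample data are hypothetical, and a real setup would read them from a load generator's report rather than hard-code them:

```python
# Hypothetical KPI thresholds, agreed with the business side -- illustrative
# values, not taken from any specific project.
P95_LATENCY_MS = 500
MIN_THROUGHPUT_RPS = 200

def percentile(samples, pct):
    """Nearest-rank percentile of a list of numeric samples."""
    ordered = sorted(samples)
    k = max(0, int(round(pct / 100.0 * len(ordered))) - 1)
    return ordered[k]

def performance_gate(latencies_ms, throughput_rps):
    """Return a list of KPI violations; an empty list means the build may ship."""
    failures = []
    p95 = percentile(latencies_ms, 95)
    if p95 > P95_LATENCY_MS:
        failures.append(f"p95 latency {p95} ms exceeds {P95_LATENCY_MS} ms")
    if throughput_rps < MIN_THROUGHPUT_RPS:
        failures.append(f"throughput {throughput_rps} rps below {MIN_THROUGHPUT_RPS} rps")
    return failures

# Results that a preceding load-test step might have produced.
latencies = [120, 180, 240, 310, 290, 410, 260, 200, 330, 470]
print(performance_gate(latencies, throughput_rps=250))  # -> [] : the gate passes
```

In a real pipeline, a non-empty failure list would translate into a non-zero exit code so the CI server halts the deployment.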
The goals of continuous performance testing can be broadly classified in four categories:
1. Certify that the application meets performance targets. Performance testing objectives must correlate to business needs and represent real-world scenarios involving real customers. Start with input from the business side: what transaction volume and response latency are required to support the immediate business? Which scenarios are most critical: checking out many small orders simultaneously, or processing a relatively modest number of “big baskets” containing dozens of items? Each company has different critical performance constraints. These KPIs must be met by every production release, and therefore must be codified as the top priority in all test cases.
2. Plan capacity and manage growth. Even excellent performance from the applications you currently have in production won’t necessarily protect you from crashes if the number of customers, requests, or data sets suddenly increases. Preparation for growth is a serious challenge. Performance testing helps build a capacity planning model and gives you a chance to update your production environment before a demand surge causes a major problem.
3. Track useful performance metrics. All too often, business users only receive ‘green’ reports that tell them performance is good, or panic reports about catastrophic failures, with nothing in between. A well-designed performance testing strategy can include many key metrics so business users can see trends and prepare for them. These metrics can span user experience, capacity, and business areas: response time, maximum throughput, resource utilization, and so on.
4. Identify load-related weaknesses and track SLA break points. Solid knowledge of your weak points, and of the load factors that can break your digital services, is essential for developing upgrade plans and backup plans. A system-wide capacity upgrade well ahead of demand is often neither practical nor the only way to manage performance risks. However, knowledge of your risks and exposures is a critical element of any sensible risk management strategy. Performance stress testing is how you discover these weak areas and gather the information needed, in advance, to make informed risk management decisions.
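The break-point discovery described in item 4 can be sketched as a step-load search: keep increasing the simulated load until the SLA is violated, and record the last level that held. The sketch below uses a toy latency model as a stand-in for a real system under test; in practice the `measure_latency_ms` callback (a name assumed here for illustration) would drive an actual load generator:

```python
def find_break_point(measure_latency_ms, sla_ms, start_rps=50, step_rps=50, max_rps=2000):
    """Step the load up until the SLA is violated; return the last compliant level.

    measure_latency_ms: callable taking a load level (rps) and returning an
    observed latency in ms -- here a toy model, in reality a load-test run.
    """
    last_ok = None
    rps = start_rps
    while rps <= max_rps:
        if measure_latency_ms(rps) > sla_ms:
            return last_ok  # SLA broken at this step; previous level is the break point
        last_ok = rps
        rps += step_rps
    return last_ok  # SLA held across the whole tested range

# Toy latency model: latency grows sharply as load approaches capacity,
# mimicking the classic queueing-theory blow-up near saturation.
def toy_latency(rps, capacity=800, base_ms=100):
    utilization = min(rps / capacity, 0.99)
    return base_ms / (1.0 - utilization)

print(find_break_point(toy_latency, sla_ms=500))  # -> 600
```

A finer step size (or a binary search between the last two levels) narrows the break point further; the key output is the load level at which your SLA stops holding, which feeds directly into the capacity planning of item 2.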
The challenge lies in the practical application of these ideas. Making a fully-automated performance regression and stress testing system is non-trivial, and for many development teams still falls into the category of “rocket science”. We will look at the complexities in detail in the next blog post.
Grid Dynamics has been an avid supporter, advocate and pioneer of continuous performance testing, essentially since the company’s inception. From early on, continuous performance testing has been an integral part of how we have designed, tested and delivered all mission-critical digital platforms for clients including eBay, PayPal, Macy’s, Kohl’s, and others. It goes hand-in-hand with our beliefs and practices when it comes to cloud enablement and continuous delivery. In 2010 we open-sourced Jagger, our internal tool for facilitating continuous automated performance testing, which we developed over the years to support large-scale digital platform development for our clients.