The Early Bird and Second Mouse — a QA Story

By Russ Lott | 4/19/18

This week, I'm making what many consider a life-altering, religious change. I’m switching from Android to an iPhone.

I had an iPhone 3GS years ago and it worked well enough, but when Android hit the market hard, I followed suit. Android devices had newer features, more customization with widgets, better cameras, and a list of capabilities that consistently came out ahead of Apple. With them, though, came the pains of Android. Glitchy applications. Frozen screens. Poor hardware resulting in blown fuses and short-lived batteries. My decision to make the change came during a call with one of my team members, where part of the conversation was, "Do you know what’s better than something that is cutting edge? Something that actually works."

See, like many people, I love having the newest, latest, and greatest. However, when the glitches, the black screen of death, and other issues came up, my ability to use the basic functions of a smartphone was cut off. Having to manually restart my phone when the screen blanked out. Not being able to see a map or enter a dial-in code for a conference line. Or having to hold the charger in place in the port to keep the phone alive while trying to get directions and dial into a conference call because the car wasn't picking up the Bluetooth. At some point you say, “I need a phone that just works.” It needs to make calls, send texts, receive and send emails, and have GPS. Cutting edge is cool, but reliability has been a pain I'm resolving by changing brands.

I raise this story because the same question comes up throughout the IT industry. Velocity is important. Getting changes, new features, and new products released is important. Getting out products that customers want, need, and get value from is important. Getting them out before your competition is often the difference between being a market leader and a niche player. Yet according to Capgemini's World Quality Report, only 4 percent of applications are still in use one month after download. According to a TechCrunch report, 79 percent of people will give an app a second chance, but only 16 percent will give it a third chance. The net result is that lots of people are willing to give applications a chance, but if you have issues out of the gate, you run the risk of rapidly losing market share.

This puts enormous stress on organizations to decide whether to be the early bird, first to market, or the second mouse, delivering a usable, quality experience. But why can't we do both? Isn't that the whole reason we have QA organizations in the first place? QA's role is to be the first mouse, to hit the traps before anything is ever released. The problem is that our QA organizations are not maturing; they are devolving. The amount of code being generated worldwide every day is unimaginable. And if a three-week sprint is supposed to be two weeks of dev and one week of testing, it becomes 13 days of dev and two days of test. Or, if you have five weeks to develop and one week to test, business leaders will ask whether you can cut out the testing week and release a week early. So QA devolves into, “Can I validate our code, and how much of it can I validate in the shortest timeframe?” Instead, what if we moved from an organization that checked the box on testing code to a QA organization that validated the user experience ahead of release? What if we took that a step further and matured our QA groups into business assurance teams that not only made sure the user experience was clean but that the application's behavior actually delivered the value our companies were touting to users?

We can certainly do this today. Ultimately, it boils down to how we manage the people, processes, and solutions that support our organizations. AI, machine learning, automation, and analytics provide the framework needed to increase velocity and be the early bird, while still providing the business assurance from QA that makes our production releases the second mouse.

The real question is, why aren't we?

I’d love to know your thoughts on the topic. Please use the comments to share your feedback.

Topics: QA, QA testing, User Experience, User experience testing, Performance testing, Functional testing, App testing

Written by Russ Lott

Russ Lott is the Enterprise Business Segment Manager at Eggplant. He has just over 15 years of experience in the IT industry as both an end user and a consultant. Russ has helped lead several large-scale digital transformations within the Fortune 1000 and has helped organizations save millions of man-hours through automation. He prefers waffles to pancakes, but French toast over both.
