Click. That’s the sound of a customer seeking out your competitor because your point-of-sale (POS) system didn’t deliver the experience they wanted or expected. You know that your QA teams tested the code and it worked. So, what happened?
Quality assurance (QA) used to be a compliance activity. You were releasing a product and needed to test it and stamp it “approved.” QA was about testing that the code worked. You might manually test the code. You might have even tried some automation — coding a set of test scripts that would try to capture regressions or errors that you had eradicated in the past, but which somehow crept back in. All in all, you were reasonably satisfied that you achieved a level of test coverage that met your goals. Then, you put your code into production and crossed your fingers that nothing went wrong. And if it did, you tried to fix it as quickly as humanly possible.
It used to be that software testers could test their applications on just one platform and worry only about whether the code worked.
Sometimes I feel as if I’m the Forrest Gump of quality assurance (QA). Since 1998, I’ve been there for the beginning of automated integration testing and service virtualization as a co-founder of Class I.Q. (now IBM Greenhat). I’ve been through the first phases of an automated testing center of excellence (ACOE). I’ve been there for the start of risk-based testing, and I’ve been part of the transformation of QA from a somewhat necessary function into the central concern of any company putting out quality software and apps.
Everything about software has changed: how it’s architected, developed, and produced; what it does; what users want from it; and how often they expect new features. To keep up, organizations are turning to continuous delivery and DevOps. Yet product teams still do a lot of manual testing, which consumes time they don’t have as test windows keep shrinking. Incorporating automation into your testing approach is a great strategy, but figuring out where and how to start isn’t necessarily quick or easy.
We recently co-hosted a webinar with Bloor Research about the Future of Testing, and during it, we ran an informal poll about artificial intelligence (AI) and testing. When we asked attendees what they saw as the biggest advantage of incorporating AI into a test automation strategy, they overwhelmingly selected team productivity and efficiency.
For a while now (about 10 years), Dev and Ops have been trying to get along. After all, collaboration between the two creates fast feedback loops and gets high-quality software into users’ hands faster. But with the emergence of a new space, digital experience management, Dev and Ops need to make a new BFF, the business, to stay in sync.
It’s not often you hear dev teams shouting from the rooftops about a relatively minor software release. (Actually, developers rarely shout in the first place, except when playing a lively game of foosball.) But we think this one is pretty cool.
We recently commissioned a study of 750 development team leaders in the UK and the U.S. to gauge the extent of the pressure today’s organizations are experiencing with respect to app development. On the same day that we announced our App Gap research results, which revealed that almost half of businesses feel pressure to launch apps that are often untested, we hosted the first in our series of Digital Automation Intelligence Roadshows.
You can find 2.8 million apps on Google Play and 2.2 million in Apple’s App Store. Yet nearly one in four people who download an app use it only once. Apps are painfully slow under certain circumstances, don’t work in key parts of the workflow, and have less-than-optimal usability. The app scrap heap is growing because many organizations are still testing to ensure code quality, not a superior user experience (UX).