Today’s complex digital world relies on computer systems and software, and downtime is more than an inconvenience: in many industries, it’s a critical failure that can have a lasting negative impact on customer experience, the corporate brand, and the bottom line.
Wikipedia will tell you that “software testing is an investigation conducted to provide stakeholders with information about the quality of the software product or service under test.” While technically accurate, this definition does not encapsulate all that testing does and how critical the function is in our complex, hyperconnected world.
A recent SD Times article suggested that AI is not delivering tangible testing improvements, cautioning “…customers need to realize there’s probably more hype than reality in most of what test solution vendors are saying.” This sentiment recalls the unhelpfully polarized “manual-vs-automated test execution” debate that held back testing for a decade, and from which we have only recently moved on with the realization that there is a place for both. It would be catastrophic for the testing industry to enter another ten-year debate of “AI-vs-non-AI,” only to conclude at the end that AI offers strong benefits but probably doesn’t solve everything.
These sentiments are unhelpful and dangerous to the testing community. Yes, some products make exaggerated claims about the use and impact of AI, but there are also excellent products demonstrating that the technology can deliver real improvements today. Below are just a few examples of how AI can help testing now, not several years down the road:
A recent Forbes Insights survey confirmed strong C-suite interest in artificial intelligence, with 80% of CEOs and 85% of IT leaders pointing to AI as a core component of their digital transformation efforts. While the technology is sometimes, and mistakenly, associated with job loss, AI’s true power lies in its ability to augment human decision-making.
People make mistakes. Human behavior falls short of ‘expected standards’ so often that it raises the question of why we hold ourselves to such standards at all. Too often, we build systems and processes on the implicit assumption that the people using them will be rational, infallible, and consistent. In truth, most of us are anything but.
Our general fallibility is directly relevant to AI and test automation: automated testing is immune to the unintentional biases and lapses in concentration that affect human testers.