Shift Up: Out of One Tool, Many

By Michael Giacometti | 5/31/18

This is Michael's third blog in his Shift Up series. You can read the first blog here and the second blog here.

On May 21, 2018, Bank of America announced that it was rolling out its chatbot, Erica, to all its mobile customers. On the surface, the premise makes sense. It’s making the bank more relatable. It’s providing real-time customer support at a time when artificial intelligence (AI) assistants like Siri and Alexa are becoming the norm. It doesn’t have the limitations of many phone-based IVRs, and it aims to provide immediate assistance instead of making us wait for a human (we’ve all shouted “representative” or pressed zero dozens of times to reach a real person). Erica is a great way for Bank of America to optimize the customer experience.

But let’s pull back the covers and ask some basic questions. How does Erica know the customer so well? How does Erica pull from different sources of information? How does Erica know what products and services to offer? What systems, both homegrown and third party, does Erica need to be effective?

In this age, AI and machine learning are supercharging humans, which makes it even more important for Bank of America, and other organizations building custom AI, to get product quality right and make sure the product meets customer needs rather than alienating customers and damaging the brand.

Providing quality assurance (QA) and quality control for Erica surfaces some interesting challenges. Decisions and data are transmitted in near-real time. Interaction is fast. Permutations and combinations of interactions are endless. In the QA lexicon, what’s considered a test pass or a fail? What is proper test-case or requirements coverage? And don’t forget test data management. There’s complexity on top of complexity. How would we have covered such challenges in the past, before AI, before digital, before CI/CD, and before DevOps?
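To make just one of those challenges concrete, take the pass/fail question. A chatbot can phrase the same correct answer many different ways, so a test has to assert on the intent of the reply rather than an exact string. Here’s a minimal Python sketch of that idea; the chatbot client, its canned reply, and the dollar amount are hypothetical stand-ins, not Erica’s actual interface:

```python
# Sketch: pass/fail for a chatbot is intent coverage, not string equality.
# get_bot_reply() is a hypothetical stand-in for the chatbot under test.

def get_bot_reply(prompt: str) -> str:
    """Stand-in for a call to the chatbot under test."""
    canned = {"what's my balance?": "Your checking balance is $1,204.52."}
    return canned.get(prompt.lower(), "Sorry, I didn't catch that.")

def reply_satisfies_intent(reply: str, required_facts: list[str]) -> bool:
    """Pass if the reply covers every fact the intent requires,
    however the bot chooses to phrase it."""
    return all(fact.lower() in reply.lower() for fact in required_facts)

reply = get_bot_reply("What's my balance?")
# The assertion targets the facts, not the exact sentence.
assert reply_satisfies_intent(reply, ["balance", "$1,204.52"])
print("pass:", reply)
```

That kind of fuzziness simply didn’t exist when the system under test returned one deterministic screen.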

In my experience, we would have used a spreadsheet to capture the data, a combination of automated and manual testing to design and run the test cases, and a test management tool to record it all. That method was a solid choice then, but the arrangement is now too slow. Automation has changed, and the test management tool is dead. Which brings us to the second component of shift up: out of one tool, many.

AI-assisted testing has overtaken the test management tool as the keystone of any testing effort.

Test management tools had a useful purpose. They supported the test manager’s efforts to organize, track, and report on requirements, test cases, and defects. Later, these tools added integrations with common automated testing tools, requirements management tools, and reporting tools, among others. In the transition to shift left, test management tools had to integrate with development management and project management tools. During the transition to shift right, they needed to keep up with new continuous testing and DevOps tools and techniques.

Even with patchwork integrations, testers, IT, and management had to be retrained, and processes had to change. There was always a delta: a gap between what was tested, what needed to be tested, and what was considered done. Introducing DevOps and CI/CD made the test management tool a bottleneck, something to be bypassed in favor of tools like JIRA and Rally (now CA Agile Central).

The problems that test management tools once solved, and that ultimately brought about their downfall under DevOps, still point to a fundamental issue: in the Digital Age, test managers have to cover more permutations and more technologies, with very limited budgets to make it all happen.

This is where AI-assisted testing comes in. Automated testing tools, test data management tools, and platforms like JIRA and Rally (CA Agile Central) need to integrate with your AI testing engine. Simply put, the AI testing engine becomes the one tool through which the many supporting tools are leveraged.
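What does that integration look like in practice? One common seam is defect tracking: when the AI engine finds a failure, it files the bug itself. Here’s a minimal Python sketch of that pattern using JIRA’s standard REST endpoint for creating issues; the engine’s failure record, the JIRA URL, project key, and credentials are all assumptions for illustration:

```python
# Sketch: an AI testing engine files its own defects in JIRA.
# The failure dict is a hypothetical result format; the endpoint
# (POST /rest/api/2/issue) is JIRA's standard issue-creation API.
import requests

JIRA_URL = "https://jira.example.com"  # assumption: your JIRA instance
AUTH = ("qa-bot", "api-token")         # assumption: service credentials

def raise_defect(failure: dict) -> str:
    """Create a JIRA bug from one failed AI-generated test run."""
    payload = {
        "fields": {
            "project": {"key": "QA"},  # assumption: your project key
            "issuetype": {"name": "Bug"},
            "summary": f"AI-generated test failed: {failure['test_name']}",
            "description": failure["evidence"],
        }
    }
    resp = requests.post(f"{JIRA_URL}/rest/api/2/issue",
                         json=payload, auth=AUTH, timeout=30)
    resp.raise_for_status()
    return resp.json()["key"]  # e.g., "QA-123"

failure = {"test_name": "transfer-funds path 7",
           "evidence": "Expected confirmation screen; got a timeout."}
print("filed", raise_defect(failure))
```

The point isn’t the specific tracker; it’s that the engine, not a human, decides when this fires.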

AI-assisted testing solutions such as Eggplant AI are specifically designed to:

  • Test cross-platform, cross-browser, and cross-technology to provide a seamless, end-to-end testing experience.
  • Leverage traditional testing KPIs such as requirements, test, and defect coverage, and combine them with external factors to test exactly how your end user will use the product.
  • Quickly identify the riskiest parts of your application, assemble automated test cases to exercise them, and report the pass or fail of such actions much faster than any human could (a minimal sketch of this risk weighting follows this list).
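Here’s what that third capability can look like under the hood, as a minimal Python sketch. It is not Eggplant AI’s actual algorithm; the state model, actions, and failure counts are invented to show the idea of biasing test assembly toward historically risky paths:

```python
# Sketch: risk-biased test assembly over a tiny state model.
# All names, states, and failure counts here are invented.
import random

# The app under test as a model: state -> available actions.
MODEL = {
    "home":       ["open_transfers", "open_statements"],
    "transfers":  ["enter_amount", "cancel"],
    "statements": ["download_pdf", "cancel"],
}
NEXT_STATE = {"open_transfers": "transfers", "open_statements": "statements",
              "enter_amount": "home", "cancel": "home", "download_pdf": "home"}

# Historical defect counts per action: the "risk" signal.
FAILURES = {"enter_amount": 9, "download_pdf": 3, "open_transfers": 2,
            "open_statements": 1, "cancel": 0}

def assemble_test(steps: int = 6, seed: int = 7) -> list[str]:
    """Walk the model, weighting each choice by past failures + 1,
    so risky actions are exercised more often but none is starved."""
    rng = random.Random(seed)
    state, path = "home", []
    for _ in range(steps):
        actions = MODEL[state]
        weights = [FAILURES[a] + 1 for a in actions]
        action = rng.choices(actions, weights=weights, k=1)[0]
        path.append(action)
        state = NEXT_STATE[action]
    return path

print(assemble_test())  # e.g., ['open_transfers', 'enter_amount', ...]
```

Run thousands of these walks and the riskiest paths get the most coverage, which is the prioritization a human test designer can only approximate by hand.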

There’s one more thing Eggplant AI can do today that can’t easily be done through traditional testing or traditional test automation. At the beginning of this blog, I wrote about Erica, the AI chatbot from Bank of America. Eggplant AI can actually work with Erica, or your own proprietary AI, to create the automated tests that Erica, based on the internal analytics it records, decides need to be run.

This creates another paradigm that deserves more study: the need for AI acceptance testing. That is, does the AI that’s working with your suite of applications understand, and accept, the performance of that suite well enough to accomplish its mission? The fusion engine within Eggplant AI makes this possible.
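As a thought experiment, an AI acceptance check might reduce to a sign-off policy like the one below. This is a minimal Python sketch; the metric names and thresholds are entirely hypothetical, not anything Eggplant or Bank of America has published:

```python
# Sketch: "AI acceptance testing" as a sign-off policy. After a test
# run, ask whether the AI fronting the suite can still do its job.
# All field names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class SuiteObservation:
    intent_match_rate: float     # share of user intents resolved correctly
    backend_timeout_rate: float  # share of calls the suite failed to answer
    median_reply_ms: int         # responsiveness the AI's users experience

def ai_accepts(obs: SuiteObservation) -> bool:
    """The AI signs off on the suite only if every metric clears
    the bar it needs to accomplish its mission."""
    return (obs.intent_match_rate >= 0.95
            and obs.backend_timeout_rate <= 0.01
            and obs.median_reply_ms <= 800)

obs = SuiteObservation(intent_match_rate=0.97,
                       backend_timeout_rate=0.004,
                       median_reply_ms=410)
print("AI accepts suite:", ai_accepts(obs))
```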

Eggplant AI’s API allows for seamless, bilateral communication among Eggplant, the application under test, and the AI under test to verify that quality targets are met. At the end of the process, not only does your application reach an acceptable level of quality, but the Eggplant AI engine helps ensure it will delight your customers.

Eggplant AI’s ability to integrate diverse AI capabilities, technologies, and platforms is what makes shift up possible.

Topics: user experience, user experience testing, voice recognition software, test management, DevOps, Eggplant AI, artificial intelligence, shift up, shift right, shift left

Written by Michael Giacometti

Michael Giacometti is the director of technical services at Eggplant. With more than 20 years of experience, he is an internationally recognized leader in QA. Michael was a co-founder of Class I.Q. (now part of IBM Greenhat), has designed features for HP ALM, and has developed licensed QA products for Cognizant. In addition to speaking at several conferences, Michael has published white papers on the future of QA and has led several large-scale QA and digital assurance transformations within the Fortune 100.
