Testing for Emotional Outcomes: Another Shift-Up Blog

By Michael Giacometti | 2/13/19

With the Shift-Up series thus far, we have explored the importance of testing and thinking as a customer. The basic premise is that we need to add another dimension to Quality Assurance beyond Shift-Left and Shift-Right. This new dimension focuses on how your customer is actually using your application and on whether your application, customer behavior, and your company’s business objectives all align.

Shift-Up says that even if your application has zero defect leakage and meets all the internal business and technical requirements, it could still fail because it is not meeting revenue or customer expectations. Losing revenue, or not meeting revenue projections with your application, is the most important type of defect to fix. By implementing Shift-Up, we are looking at how testing or designing for business outcomes becomes the new primary driver in a constantly online world that no longer runs on transactional relationships between your company and your customer.

Testing for Business Outcomes means testing to ensure that the application performs as expected in the market. By their nature, Business Outcomes are about either collecting revenue or ensuring that your application is habitually used by your customer. This is why we create software. However, let me introduce you to another idea that is just as important and adds another vector to how your application performs for both your company and your customer.

Are we testing for the proper emotional outcome? Indeed, when testing, are we taking any target emotional outcomes into account? Do we want the user to be happy or angry? Do we want them to feel a sense of urgency that they must buy your product or use your app? We know what we want the business outcome to be, but how do we want the customer to feel when using the application?

Let me give you some examples.

If you run an online marketplace, and your customer is buying on emotion (the need to buy it now), do you know how your application needs to perform for them to maintain that sense of urgency? Do you need something different from your baseline performance timings? Do customers require a different look and feel in the UI?

Or what if your software requires medical personnel to enter and read medical information while treating a patient in a life-or-death situation? It is understandably a very stressful environment where patient care is much more important than going through a workflow. You have your performance baseline, but what should the performance target be if the doctor, nurse, or paramedic is stressed? Are your workflow and UI simple enough to provide and extract the maximum amount of information with the minimum amount of human interaction?

Finally, if a passenger is checking in for a flight at an airport kiosk, and they are a late or nervous flier, is the kiosk software providing an experience that abates frustration and creates a feeling that everything is under control?

These elements may or may not be accounted for by those designing requirements. Are you testing for them? Are they fringe test cases that will be executed if you have the time? Are the performance or UI requirements different based upon the emotional state of your customer? More to the point, how do you test for them? Can any of this be automated? What are the different types of questions you need to ask?

Remember the difference between Quality Assurance and Quality Control. No matter how different or how “Digital” we get, the foundations of QA stay the same. In this case, Quality Assurance needs to focus on the emotional outcomes and ensure that those questions are being asked while you are reviewing requirements. Ask simple questions such as “How is the customer supposed to be feeling when using this function or feature?”, “Are you trying to evoke a certain emotion?” or “Is the application easy to use if the user is emotionally distressed?” Obviously not all questions will be applicable to all situations, but you should get the idea.

The answers to “Can emotional outcomes be tested for?” and “Can this be automated?” are an obvious yes. Assuming functional testing is already accepted as a given, you are focusing on two things. The first is UI testing. A UI that is easy to use, clean, welcoming, and intuitive is required. You would be testing for color contrasts, button sizes, text features/fonts, and even layout. Maybe the UI changes depending upon the situation. For example, using the paramedic example above, maybe the buttons for functions that need to be performed, or results that need to be read, are bigger and bolder in an emergency. Maybe non-essential UI is hidden. An automated tool can easily test for this, as the sketch below illustrates.
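To make that concrete, here is a minimal sketch in Python of what one such automated check might look like. The button properties and size threshold are hypothetical values standing in for what a UI automation tool would capture from the rendered screen; the contrast check follows the WCAG 2.x relative-luminance formula.

def relative_luminance(rgb):
    """WCAG 2.x relative luminance for an (r, g, b) tuple of 0-255 channels."""
    def linearize(c):
        c /= 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(color_a, color_b):
    """WCAG contrast ratio (from 1:1 up to 21:1) between two colors."""
    lighter, darker = sorted(
        (relative_luminance(color_a), relative_luminance(color_b)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Hypothetical properties captured from an "administer treatment" button
# while the application is in its emergency-mode layout.
emergency_button = {
    "width": 220, "height": 88,          # rendered size in pixels
    "text_color": (255, 255, 255),       # white label
    "background_color": (178, 34, 34),   # bold red background
}

# Emergency-mode expectations: larger touch targets and high-contrast labels.
assert emergency_button["height"] >= 72, "emergency buttons should be large"
assert contrast_ratio(
    emergency_button["text_color"], emergency_button["background_color"]
) >= 4.5, "label contrast should meet at least WCAG AA for normal text"
print("emergency-mode UI checks passed")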

The second is performance testing. The speed with which your application performs can drive the emotional outcome you are looking for and prevent the ones you aren’t. For example, say your baseline performance for an online retail app is 10 seconds between looking at the shopping cart and checking out. How many people back out during that time because they have second thoughts about the purchase? How many would buy on the emotional impulse if the same workflow took 5 seconds? Testing this requires no special software; it is simply a matter of adjusting the baseline performance targets to reflect the desired emotional outcomes, as sketched below.
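As a rough sketch, and assuming a hypothetical run_checkout_flow() driver that stands in for whatever tooling actually exercises the cart-to-checkout workflow, the change is little more than tightening the threshold your performance assertion uses:

import time

# Hypothetical performance targets, in seconds, for the same workflow under
# different emotional contexts. The "impulse_purchase" target is tighter so
# the customer's sense of urgency is not lost between cart and checkout.
THRESHOLDS_SECONDS = {
    "baseline": 10.0,
    "impulse_purchase": 5.0,
}

def run_checkout_flow():
    # Placeholder: drive the real cart-to-checkout workflow here with
    # whatever automation tooling you already use.
    time.sleep(0.1)

def assert_checkout_within(target):
    """Time the workflow and compare it against the chosen emotional target."""
    start = time.perf_counter()
    run_checkout_flow()
    elapsed = time.perf_counter() - start
    limit = THRESHOLDS_SECONDS[target]
    assert elapsed <= limit, (
        f"checkout took {elapsed:.1f}s, exceeding the {limit:.1f}s '{target}' target"
    )
    print(f"checkout completed in {elapsed:.1f}s (within the '{target}' target)")

assert_checkout_within("impulse_purchase")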

So, testing for emotional outcomes is possible. It is simply a matter of asking the right questions and being cognizant of them during requirements or story design. It is a matter of tweaking your UI and working with your customer or user experience team to make sure your tests exercise their UI layouts, and of tweaking the expected results of your performance tests to ensure that the application will not cause frustration, or create a serious situation, because it takes too long to work.

Learn more about how Eggplant DAI can help you automate testing and improve business outcomes.

Topics: DevOps, QA testing, User Experience, Performance testing, User experience testing, testing best practices, artificial intelligence, digital automation intelligence, QA, shift up, AI-assisted testing, user acceptance testing, usability testing

Written by Michael Giacometti

Michael Giacometti is the director of technical services at Eggplant. With more than 20 years of experience, he is an internationally recognized leader in QA. Michael was a co-founder of Class I.Q. (now part of IBM Greenhat), has designed features for HP ALM, and developed licensed QA products for Cognizant. In addition to speaking at several conferences, Michael has published white papers on the future of QA and has led several large-scale QA and digital assurance transformations within the Fortune 100.

