Revenue, conversions, response start time, page load duration, active sessions and bounces – six key KPIs across millions of possible combinations of device types, operating systems and browser versions – all monitored for their actual vs expected performance. The real-time insight engine in Real Customer Insights is pretty impressive.
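To make the "actual vs expected" idea concrete, here is a minimal illustrative sketch (not Eggplant's actual implementation — the segment names, KPI values, and z-score threshold are all hypothetical) of flagging a KPI segment whose latest value deviates sharply from its historical baseline:

```python
# Illustrative sketch: flag (device, OS, browser) segments whose actual KPI
# deviates from an expected baseline derived from historical values.
from statistics import mean, stdev

# Hypothetical historical conversion rates per segment.
history = {
    ("mobile", "iOS", "Safari 12"): [0.041, 0.043, 0.040, 0.042, 0.044],
    ("desktop", "Windows", "Chrome 75"): [0.061, 0.060, 0.062, 0.059, 0.061],
}

# Latest observed conversion rates for the same segments.
actual = {
    ("mobile", "iOS", "Safari 12"): 0.027,       # well below its baseline
    ("desktop", "Windows", "Chrome 75"): 0.060,  # within its normal range
}

def anomalies(history, actual, threshold=3.0):
    """Return segments whose actual KPI sits more than `threshold`
    sample standard deviations from the historical mean."""
    flagged = []
    for segment, values in history.items():
        expected, spread = mean(values), stdev(values)
        z_score = (actual[segment] - expected) / spread
        if abs(z_score) > threshold:
            flagged.append(segment)
    return flagged

print(anomalies(history, actual))
# → [('mobile', 'iOS', 'Safari 12')]
```

A production system would of course use far richer baselines (seasonality, traffic volume, confidence intervals) across millions of segment combinations; the point here is only the shape of the comparison.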
We’ve recently made some major enhancements to Eggplant AI, our intelligent test automation solution. Version 2.2 not only adds a number of new reporting features – it also transforms the product’s capabilities.
Burndown charts, feature completeness, code quality, pass/fail testing. Dev and test managers have access to lots of data from many sources about an upcoming release. But none of it directly relates to business outcomes. So while decisions might be influenced by data, they’re still largely subjective, based more on experience than on evidence.
There are certain universal truths about businesses: they all want to succeed and they all want to beat their competitors. What differs is how each business defines success. For a healthcare company, it might be lives saved. For an insurance company, it's the number of policies bought. For an e-commerce retailer, it's shopping basket conversions.
Testing is critical for organizations like NASA, the US Army, Northrop Grumman, BAE Systems, Lockheed Martin, MBDA, the UK’s Ministry of Defence, the Metropolitan Police, and Police Scotland, where lives are on the line. Working with customers like these over many years, we've seen that testing is about much more than making sure the system works: it’s about testing for mission success and continuously optimizing mission outcomes. Whether you're designing systems for command and control (C2), for supporting complex police operations such as hostage negotiations, or for shooting down an enemy missile, you should plan your testing and monitoring strategy to continuously test against the desired mission outcomes.
The weather, the tennis, the football — with all the distractions, you’d think those of us on the Real User Monitoring team would be putting our feet up, right? Not a chance! I'm super excited to tell you about our latest release: a brand-new version of our Performance Trends Report.
Everything about software has changed—how it’s architected, developed and produced, what it does, what users want from it, and how often they expect new features. To keep up, organizations are turning to continuous delivery and DevOps. Yet product teams still do a lot of manual testing, which consumes time they don’t have as test windows shrink. Incorporating automation into your testing approach is a sound strategy, but figuring out where and how to start isn’t always quick and easy.
We recently co-hosted a webinar with Bloor Research about the Future of Testing, which included an informal poll about artificial intelligence (AI) and testing. When we asked what attendees saw as the biggest advantage of incorporating AI into a test automation strategy, they overwhelmingly selected team productivity and efficiency.
For a while now (about 10 years), Dev and Ops have been trying to get along. After all, collaboration between the two creates fast feedback loops and gets high-quality software into users’ hands faster. But with a new space emerging, digital experience management, Dev and Ops need to make a new BFF to stay in sync: the business.