Why You Need Synthetic as Well as Real User Monitoring and APM

By Alex Painter | 1/15/19

Is your website meeting KPIs for speed and availability? Are key user journeys working as they should? Is your website’s performance now the same as it was this time yesterday? Are you sure? How can you tell?

There are a lot of ways to find out how your website is performing. Real user monitoring (RUM) can give you a wealth of real-time insight into the experience your website is actually delivering. Our Real Customer Insights solution goes one step further and allows you to correlate performance data with business KPIs. Application performance management (APM) solutions give you a great understanding of what’s going on under the hood.

But there’s still one job for which you need something quite different.

If you want consistent, reliable benchmarking of your website’s speed and availability from an external perspective, you need an external monitoring solution.

Synthetic monitoring is like a metronome for your website, testing at regular intervals and precisely engineered to give you results you can trust every time.
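To make the metronome idea concrete, here is a minimal sketch of what a synthetic check looks like in principle: a script that fetches a page on a fixed schedule and records load time and availability. The URL, interval, and field names are illustrative assumptions, not a description of how any particular monitoring service is built.

```python
# A minimal synthetic check (illustrative only). It fetches a page at a
# fixed interval, like a metronome, and records response time and status.
import time
import requests

URL = "https://www.example.com/"   # hypothetical page under test
INTERVAL_SECONDS = 300             # test every five minutes

def run_check(url: str) -> dict:
    """Fetch the page once and return a timing/availability sample."""
    started = time.monotonic()
    try:
        response = requests.get(url, timeout=30)
        return {
            "timestamp": time.time(),
            "available": response.ok,
            "status": response.status_code,
            "load_time_s": time.monotonic() - started,
        }
    except requests.RequestException:
        return {"timestamp": time.time(), "available": False,
                "status": None, "load_time_s": None}

while True:
    sample = run_check(URL)
    print(sample)                  # in practice, write to a results store
    time.sleep(INTERVAL_SECONDS)
```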

Why does this matter?

Synthetic monitoring allows you to test from a controlled environment, giving you a clean baseline that you can’t get anywhere else.

This delivers a uniquely objective view of performance.

If that view changes, you know that something in your system has changed. You can also track your competitors in the same way, allowing you to benchmark your relative performance over time – something you can’t do with RUM or an APM. Similarly, synthetic monitoring can tell you about your own availability and your competitors’ – something else RUM and APMs can’t do.

Making sure you can trust the results

Synthetic monitoring is invaluable if you can trust it. And that means testing from a carefully controlled, sanitized environment. Otherwise, it’s a bit like using an elastic tape measure – pointless, frustrating and potentially painful.

For example, at Eggplant we’ve gone to great lengths to make sure our own Monitoring Insights service gives you clean, reliable data.

This is partly about connectivity – we have strict SLAs with a select group of ISPs (including Tier 1 carriers) to keep any variations to a bare minimum.

But it’s also about the testing environment. What else is it doing at the same time? Is it downloading updates? Running other processes? Could it even be infected with malware? All this can contribute to unwanted peaks and troughs in what should be a level playing field.

Then there’s the ‘observer effect’ – is the testing itself affecting the results? Is the machine’s performance being affected by the fact that it’s also collecting data, making calculations, writing to a database, preparing results, and so on?

This makes capacity planning essential, both for the test server and the platform. There is also a great deal of work involved in avoiding contamination by concurrent tests and managing bugs.
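One common way to limit the observer effect is sketched below. The names and structure are assumptions for illustration, not a description of Eggplant’s implementation. The idea is to end the measurement window before any bookkeeping starts, handing each sample to a separate worker for storage and calculation so that work never overlaps the timed request.

```python
# A sketch of keeping bookkeeping off the measurement path (illustrative
# only; names and structure are assumptions). The timed window closes
# before any storage or calculation begins.
import queue
import threading
import time
import requests

results: "queue.Queue[dict]" = queue.Queue()

def process_results() -> None:
    """Heavy work – aggregation, database writes, report preparation –
    happens here, off the measurement path."""
    while True:
        sample = results.get()
        # ... write to a database, update aggregates, prepare reports ...
        results.task_done()

threading.Thread(target=process_results, daemon=True).start()

def timed_fetch(url: str) -> dict:
    started = time.monotonic()
    response = requests.get(url, timeout=30)
    elapsed = time.monotonic() - started  # measurement window ends here
    return {"status": response.status_code, "load_time_s": elapsed}

sample = timed_fetch("https://www.example.com/")  # hypothetical page
results.put(sample)   # everything after this point is off the clock
results.join()        # wait for processing before exiting
```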

Ultimately, it’s all about filtering out any external noise, so you get test results you know you can trust.

How do you know you’re getting results you can rely on?

Not all test environments offer the same level of stability.

There are several risks associated with using a monitoring service that fails to deliver consistent, reliable results.

The first is the potential for false positives – alerts triggered not by changes in the system under test but by changes in network conditions or in the system doing the testing. This isn’t just an irritation – it can also mean time and resources devoted to investigating non-existent problems.

The second is that it becomes difficult or impossible to establish a baseline – one of the reasons for using synthetic monitoring in the first place.

The third is that genuine changes in performance can be drowned out by the noise, and real problems get ignored.
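A toy example makes the third risk concrete. A common alerting rule flags any sample more than three standard deviations above the baseline mean; the noisier the environment, the wider that band, and the bigger a genuine slowdown has to be before it triggers anything. The figures below are invented purely for illustration.

```python
# Why noise hides real regressions: a 3-sigma alert threshold computed
# over a stable baseline vs. a noisy one. All numbers are invented.
import statistics

def alert_threshold(baseline: list[float]) -> float:
    """Mean + 3 sigma over a week of baseline load times (seconds)."""
    return statistics.mean(baseline) + 3 * statistics.stdev(baseline)

clean_baseline = [2.0, 2.1, 1.9, 2.0, 2.1, 2.0, 1.9]   # stable environment
noisy_baseline = [2.0, 3.5, 1.2, 4.1, 1.8, 3.0, 1.5]   # unstable environment

print(f"clean threshold: {alert_threshold(clean_baseline):.2f}s")  # ~2.2s
print(f"noisy threshold: {alert_threshold(noisy_baseline):.2f}s")  # ~5.7s

# A genuine regression to 3.0s load time trips the alert against the
# stable baseline but disappears into the noise of the unstable one.
regression = 3.0
print("clean alerts:", regression > alert_threshold(clean_baseline))  # True
print("noisy alerts:", regression > alert_threshold(noisy_baseline))  # False
```

With the stable baseline, the regression stands out immediately; with the noisy one, the alert never fires – which is exactly how real problems get ignored.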

This is illustrated by the two graphs below. Both represent a week’s worth of monitoring data, with tests carried out at the same frequency and load times shown on the y axis.

The first graph shows the variations in performance picked up by Eggplant’s monitoring.

The second is for the same website over the same period, but shows the fluctuations seen by another monitoring service.

[Two graphs: a week of load times as measured by Eggplant’s monitoring and by another monitoring service]

It certainly appears that there is a lot more noise in the second graph, making it harder to pick out genuine performance issues or establish a baseline.

The lesson? Synthetic performance monitoring is essential, but it’s not going to give you what you need unless you’re doing it properly.

Download our white paper on How to Build a Complete Picture of Performance with Synthetic & Real User Monitoring


Written by Alex Painter

Having worked for a number of years in marketing and web development, Alex joined Eggplant as a web performance consultant, helping organizations deliver faster, more reliable online experiences. He is now Eggplant's product marketing director.

