Real-World Use Cases For End User Experience Monitoring

Clearly, there’s no silver bullet when it comes to monitoring complex digital infrastructures. However, being able to gather a range of data – including what the end user is doing on the network – can help keep you better informed and alert you to issues.

Furthermore, fully understanding what end users are experiencing lets you see how effective your company is at delivering applications and services. But, without precise infrastructure monitoring, you can’t get this insight. And you may not realize when there’s an issue, like lagging webpage response times, for instance.

Survey Says: End User Experience Monitoring Yields Results

In June 2015, Gartner released survey results centered around the importance of monitoring the end user experience.

In “Survey Analysis: End-User Experience Monitoring Is The Critical Dimension For Enterprise APM Consumers,” Gartner reported that 61 percent of survey participants said application performance monitoring (APM) technology was either important or critically important.

Gartner suggested that monitoring the end user experience might not be the first tool enterprises look for, but one that they should more strongly consider.

“The preference for end user experience monitoring aligns well with the desire to improve the customer experience quality as a purchase criteria,” Gartner reports. “The interest in analytics, at first, does not seem to correlate with improving troubleshooting, but because of the increasing complexity of the application and infrastructure environment, we have observed rising client interest in analytics to improve root cause analysis and other capabilities.”

Gartner recommends that enterprise APM customers deploy a solution that not only enables understanding of the end user experience, but the context of the business impact of poor performance as well.

How it Works

End user experience monitoring typically involves deploying appliances at strategic locations within the enterprise or service provider environment. For example, appliances may be placed in a bank’s branch office or a retail store to monitor customers’ experiences when they attempt to access Wi-Fi in those heavily trafficked locations. Or a service provider might deploy appliances in specific mobile zones to better determine actual experience of their customers accessing the network via small cell sites.

These appliances collect performance metrics by running synthetic tests over the network, as if they were an actual person on a smart phone, laptop or some other network-enabled device. They actively test entire network paths, collecting in-depth data across enterprise, cloud and communication service provider infrastructures.

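As a rough illustration of how such a synthetic probe works – a hypothetical sketch, not any vendor’s actual agent code – the core loop simply times a repeated transaction the way an appliance times a user request:

```python
import time

def run_synthetic_test(probe, repeats=3):
    """Run a probe several times and return latencies in milliseconds."""
    latencies = []
    for _ in range(repeats):
        start = time.perf_counter()
        probe()  # e.g. an HTTP GET against the application under test
        latencies.append((time.perf_counter() - start) * 1000.0)
    return latencies

# Stand-in probe so the sketch is self-contained; a real deployment
# would issue requests over the actual network path being tested.
def fake_probe():
    time.sleep(0.01)

results = run_synthetic_test(fake_probe)
print([round(ms, 1) for ms in results])  # three latencies of roughly 10 ms each
```

A production agent would swap `fake_probe` for real HTTP, DNS, or VoIP transactions and ship the latencies to a central collector.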

High frequency polling

Traditional five-minute polling cycles are insufficient in many instances. A spike in traffic that lasts only a couple of seconds represents less than 1 percent of a five-minute polling cycle, so the anomaly is completely flattened and undetectable when averaged over that span. Yet this brief spike can disrupt business transactions, VoIP communications and other latency-sensitive applications. You need a monitoring solution that allows high frequency polling down to the second; even one-minute polling cycles may be insufficient. Bell Mobility, a wireless provider in Canada, revealed an interesting internal study: average traffic spikes appeared as much as 350 percent higher when viewed at 1.5-second rates than at 60-second polling intervals.
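A toy simulation (illustrative numbers, not from the Bell Mobility study) makes the flattening effect concrete: a two-second burst that dominates a 1-second view all but disappears in a 5-minute average.

```python
# Per-second traffic in Mbps over one 5-minute window: mostly ~10 Mbps,
# with a 2-second spike to 500 Mbps.
traffic = [10.0] * 300
traffic[120] = 500.0
traffic[121] = 500.0

# 1-second polling sees the spike directly.
peak_1s = max(traffic)

# 5-minute polling reports a single averaged value for the window,
# flattening the spike into near-invisibility.
avg_5min = sum(traffic) / len(traffic)

print(f"1-second polling peak: {peak_1s:.1f} Mbps")   # 500.0 Mbps
print(f"5-minute average:      {avg_5min:.1f} Mbps")  # 13.3 Mbps
```

The 500 Mbps burst that disrupted latency-sensitive traffic shows up as a 13.3 Mbps average – indistinguishable from normal load.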

Raw data retention

Granular data collection is only useful when you can retain that data for a sufficient time frame. Some monitoring solutions average and consolidate historical data over time to save storage. This degrades your understanding of historical events and weakens your ability to forecast future capacity needs accurately. Many organizations seek a performance monitoring platform that maintains a year of as-polled data, and you should not have to invest in extra storage capacity to do it.
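A back-of-the-envelope estimate (the figures below are illustrative assumptions, not vendor specifications) shows that a year of raw, as-polled data is a tractable storage problem:

```python
# Hypothetical environment: 50,000 metrics polled once a minute,
# 16 bytes per uncompressed sample (timestamp + value).
metrics = 50_000
poll_interval_s = 60
bytes_per_sample = 16

samples_per_year = (365 * 24 * 3600) // poll_interval_s
total_bytes = metrics * samples_per_year * bytes_per_sample

print(f"Samples per metric per year: {samples_per_year:,}")   # 525,600
print(f"Raw storage for one year:    {total_bytes / 1e12:.1f} TB")  # 0.4 TB
```

Even before compression, the raw figure is well under a terabyte in this scenario, which is why rolling up data purely for storage reasons is often an unnecessary trade-off.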

Massive scale

We live in the age of Big Data. Applications, systems and network devices produce massive volumes of machine data, and the rise of virtualization and cloud services exacerbates the issue. As you spin up new resources faster than ever, your environment produces exponentially more data. Your performance monitoring platform must scale with your data collection needs. An inability to scale forces you to make tough decisions about what you will and won’t monitor, creating a visibility gap. Because you never know what might go wrong in your environment, a broad approach to data collection, supported by a highly scalable platform, is the best strategy.

Use Case 1: Monitoring an Application's Migration to the Cloud

In a world where more and more organizations are transferring physical assets to the cloud, it’s critical to monitor this migration at every turn.

If you’re an organization with physical assets, you’re accustomed to having direct control over each application: you typically know what to expect and can easily predict application response times. When moving to the cloud, it may seem as if you’re losing this sense of control.

But when transitioning to the cloud, you must maintain that same level of monitoring precision. So, you need to compare how your application behaved as a physical asset to how it functions in the cloud.

To do this, you need to know what the end user is experiencing. If applications aren’t responding in time, it’s not worth moving them to the cloud. Furthermore, if service is slow or degraded, you can drill down into the data to find where the issue is and why it happened. This information will guide your next move – maybe you need to tweak the pipe size, or maybe a route isn’t configured correctly. Whatever the case, you’ll likely find your answer in the data.

It’s wise to start with non-critical business applications. Look for differences in how the applications behave as a physical asset compared to how they perform on the cloud. If response times and other data indicators match up, your transition is going well. You can then likely move other applications to the cloud.
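This before-and-after comparison can be automated. A minimal sketch (with made-up test names and timings) flags any synthetic test whose cloud response time regressed beyond an acceptable threshold:

```python
# Baseline response times (ms) measured on physical infrastructure,
# and the same synthetic tests re-run after the cloud migration.
# All names and numbers are hypothetical.
on_prem_ms = {"login": 120, "search": 250, "checkout": 400}
cloud_ms   = {"login": 130, "search": 510, "checkout": 390}

THRESHOLD = 1.20  # tolerate up to a 20% slowdown

regressions = {}
for test, before in on_prem_ms.items():
    after = cloud_ms[test]
    if after > before * THRESHOLD:
        regressions[test] = (before, after)

print(regressions)  # {'search': (250, 510)} - only "search" regressed
```

If the comparison comes back empty, response times match up and the migration of further applications can proceed with confidence.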

Use Case 2: Monitoring Mobile Access to Most Popular Online Applications

A leading wireless cell phone carrier recently deployed appliances to monitor end user experience at locations across the United States. The mobile service provider set up these devices to synthetically test how users are reaching the 50 most popular applications – including Facebook, Twitter, Netflix, Hulu and others – used on the network. Spread out geographically, these devices test response times and more.

Carriers don’t typically own their entire infrastructure, so it can be challenging to test response times. However, since end user experience monitoring solutions usually allow testing across third party infrastructures, you can easily find bottlenecks or degraded service along every route in the network.

The devices monitor every hop taken along the path to get from the user’s phone to Facebook, for instance. By using a performance monitoring solution that looks at the end-user experience, you get a detailed and granular view of the connection and how the user accesses a particular application through the network.
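Hop-by-hop analysis of this kind boils down to simple arithmetic on the cumulative round-trip times a traceroute-style probe returns. A simplified sketch with made-up timings:

```python
# Cumulative round-trip time (ms) to each hop along a path,
# as a traceroute-style probe would report it (hypothetical data).
cumulative_rtt_ms = [2, 5, 9, 84, 88, 93]

# Latency each hop adds is the difference from the previous hop.
per_hop = [
    cumulative_rtt_ms[i] - (cumulative_rtt_ms[i - 1] if i else 0)
    for i in range(len(cumulative_rtt_ms))
]
worst = max(range(len(per_hop)), key=lambda i: per_hop[i])

print(f"Per-hop latency (ms): {per_hop}")               # [2, 3, 4, 75, 4, 5]
print(f"Hop {worst + 1} adds the most latency: {per_hop[worst]} ms")
```

Here the fourth hop contributes 75 ms of the 93 ms total, immediately localizing the bottleneck even when that hop sits in a third party’s infrastructure.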

But remember, the path data takes isn’t always the same. Just because one pipe is bigger than another doesn’t mean there won’t be latency in the network. But by taking a closer look at the end user experience, you’ll be able to see that latency or any other service issues on your dashboard.

It’s important that your end-user monitoring work in tandem with your overall performance monitoring solution. Just like with routine monitoring, you’ll want to baseline your end user monitoring data so you can differentiate between what’s normal and what’s abnormal and possibly problematic. With these baselines, you can set alerts to inform your system administrators when your network strays above or below the norm.
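The baselining step can be as simple as learning the mean and spread of historical samples and alerting on large deviations. A minimal sketch (sample data and the three-sigma threshold are illustrative choices, not a prescribed method):

```python
import statistics

# Historical response times (ms) representing "normal" behavior.
history_ms = [101, 98, 105, 99, 102, 97, 103, 100, 104, 96]
mean = statistics.mean(history_ms)
stdev = statistics.stdev(history_ms)

def is_anomalous(sample_ms, k=3.0):
    """Alert when a sample strays more than k standard deviations from baseline."""
    return abs(sample_ms - mean) > k * stdev

print(is_anomalous(102))  # False: within the learned norm
print(is_anomalous(180))  # True: well above baseline, alert administrators
```

Real platforms typically learn separate baselines per metric and per time of day, but the alerting principle is the same.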

Finally, you’ll want to stack the end user data side-by-side with routine performance monitoring data to get a uniquely whole picture of your network at any given moment.

Use Case 3: Monitoring End User Experience During the Presidential Election

During the 2012 U.S. Presidential election, one software company put their end user monitoring solution to the test. The goal – to see how the election would affect the end user experience on both candidate and news websites, which all saw significant traffic increases.

The solution performed HTTP requests between remote sites and important web services. User experience was broken out by server, network and browser response times and was trended in charts, as well.
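Breaking user experience out by server, network and browser time is a matter of subtracting successive timestamps from a single page-load measurement. A rough sketch with hypothetical timings:

```python
# Milestones (ms since the request started) from one synthetic page load.
# The numbers are made up for illustration.
timings_ms = {
    "tcp_connect_done": 40,   # connection established
    "first_byte": 220,        # first byte of the response received
    "page_rendered": 600,     # browser finished rendering
}

network_ms = timings_ms["tcp_connect_done"]                          # network share
server_ms = timings_ms["first_byte"] - timings_ms["tcp_connect_done"]  # server share
browser_ms = timings_ms["page_rendered"] - timings_ms["first_byte"]    # browser share

print(f"network: {network_ms} ms, server: {server_ms} ms, browser: {browser_ms} ms")
# network: 40 ms, server: 180 ms, browser: 380 ms
```

Trending each component separately is what lets an analyst say, as in the election-night data, that a latency spike was server-side rather than a network problem.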

The software company noticed an overall uptick in server latency throughout election evening, corresponding with poll closing times. Both Barack Obama’s and Mitt Romney’s websites experienced steady utilization, dropping off around 5 p.m. EST, when, presumably, undecided voters finished their last-minute research.

The end user monitoring solution also synthetically tested news sites like NBC and Fox News for site response times. Both NBC and Fox News had heavy usage, with latency spiking on the sites from 1 to 5 p.m. EST. Fox News saw a latency spike around 9 p.m., when Florida was still undecided and Pennsylvania had been awarded to Barack Obama. And there was a steep buildup on both sites to the 11:30 p.m. EST presidential announcement, with a sharp drop-off to “normal” traffic immediately after.

When you can match up real-time events with the synthetic end user experience data, you get an accurate and detailed view of network performance that can aid in future capacity planning around major world events.

Use Case 4: Enterprise Consumer Products

A large enterprise consumer products company utilized an end user experience monitoring solution to address a list of challenges – slow voice and video applications; lack of visibility into the performance of the carrier’s network; and complaints of poor performance from end users.

As a result of deploying this solution, the company reduced the troubleshooting time of user issues across the application, network and end user domains. And, it’s estimated they saved between 25 and 49 percent of the time needed to pinpoint performance problems, as compared to before the solution was utilized.

“Good tool to verify the SLA agreed with service provider and QoS configuration in the cloud,” the enterprise reported. “Faster troubleshooting of WAN issues.”

Use Case 5: Large Food Services Chain

A large food services chain deployed end user experience monitoring to address a lack of visibility into the performance of the carrier’s network.

The food services chain enabled the end user monitoring solution, they said, to conduct pre-deployment assessments of network and application environments, to manage infrastructure spending by understanding bandwidth consumption, and to verify that the WAN provider was delivering against stated service level agreements.

The organization reported that they achieved ROI on the end user monitoring solution and realized the business value of the tool in under a week.

Use Case 6: Moore-Wilson

Moore-Wilson, a British design agency, sought a way to ensure new projects got built smoothly, while keeping an eye on older ones. The agency, which provides a full web app experience – from design and development, to lifecycle maintenance – looked to resolve operational problems with as little effort as possible.

The organization deployed end user experience monitoring on its 60 production servers. Almost immediately, the agency reported that a year-long problematic and unpredictable memory consumption issue had been resolved. Because of the insight gained into the application’s behavior, the agency was able to find the root cause – rarely visited pages were continuously being hit by a web crawler.

“Memory consumption would go through the roof, and it would just burn itself into the ground. And that’s been going on for some time. We never really identified it in the traditional means,” said the company’s hosting and service manager, who cited that the end user experience monitoring platform “made it blatantly obvious what the problems were.”

Best Practices for End User Experience Monitoring

  • To avoid visibility gaps, first look for an infrastructure monitoring platform that supports collection of any time series data, regardless of source. You’ll also need the ability to poll at sub-minute (high frequency) intervals and to retain raw, as-polled data for a year.
  • Seek a scalable infrastructure monitoring platform that can baseline any metric to learn “normal” performance, and then alert when deviations occur.
  • Make sure you have the ability to drill down into the raw data for a granular view of your digital infrastructure. Your monitoring solution should be able to go to each network device, server, storage or any entity and retrieve data using traditional methods like SNMP and also data collected via third party platforms.
  • Deploy a synthetic end user experience monitoring solution, which can monitor apps inside the firewall, your global SaaS offering, or even third-party tools. These metrics should be graphed alongside your existing infrastructure KPIs, all in a single dashboard or report, to present a service-level view of performance.


There’s no silver bullet for troubleshooting today’s networks. You need to be able to gather a range of data in one spot; if you rely solely on one data source, you may be missing much of what your end users are experiencing. Bringing all your data into one dashboard gives you the context you need to keep your network functioning at a high level.

Whether you want to gauge the end user experience via mobile, Wi-Fi or fixed environments, your organization can significantly benefit from monitoring the end user experience.

You can synthetically test users from different geographies to see how they’re accessing websites and applications, for a complete understanding of what a full transaction or connection to the network looks like.

With this data – and combined with other routine performance monitoring data – you’ll always know what’s going on within your network and you can resolve issues immediately.

Data in all forms is a powerful tool: the more data, the better informed you’ll be. A collection of varied, rich and robust data will give you the business-critical insight you need to keep your network functioning at its best.
