You have chest pains during your morning jog. You’re a little out of breath and your stamina’s fading fast. But, when you go to the doctor, no one checks your vital signs or hooks you up to a heart monitor to find out what’s going on. Instead, you’re put into a room with a nurse whose sole job is to watch you in case you have a heart attack.
What? This approach makes no sense. It’s inefficient, a poor use of manpower, and most significantly, it does nothing to identify and treat the problem before it causes real damage.
The same could be said about the traditional approach to application performance monitoring (APM).
Many companies focus time and resources on watching the IT infrastructure but ignore the inner workings of the applications themselves. Instead of proactively preventing issues, they rush in with an emergency response only when application performance falls so low that it stalls the company's ability to do business.
In today’s world of instant access, this reactive approach has the potential to do critical damage to a company’s reputation and its profitability.
Moving Beyond the “Physical”
Proactive monitoring has been around since the 1990s, but its typical focus was infrastructure—making sure the server was running and the network connection was up. The process also involved a lot of IT personnel "watching the glass," ready to react before the trouble tickets started pouring in.
All of this worked pretty well back in the pre-mobile, less connected world. Ten years ago, if email went down or a website took a while to load, it was no big deal. Today, if the website isn't loaded in two seconds, your customer is gone. If a patient database, order-handling or manufacturing control system goes down for an hour, productivity comes to a standstill.
It’s no longer enough that the “box” is up. The applications themselves have to be performing at their individual peaks or the business will suffer.
But here's the thing: Monitoring applications isn't a black-and-white process like monitoring infrastructure. Applications are not necessarily "up" or "down." Instead, you have to monitor for latency, detecting that the application is not delivering exactly what you want in the timeframes you want. You're looking for the IT equivalent of a slower pulse, a drop in blood pressure, or a subtle change that could foreshadow a decline in the hours or days ahead.
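The difference can be sketched in a few lines. Instead of a binary up/down check, an application health check maps measured response time onto a spectrum of states. This is a minimal illustration, and the threshold values are assumptions, not standards any particular tool uses:

```python
# Hypothetical SLA thresholds in seconds -- illustrative values only.
WARN_AFTER = 1.0   # slower than this: the "slower pulse" worth watching
FAIL_AFTER = 2.0   # slower than this: users are likely abandoning

def classify_response(elapsed_seconds, responded=True):
    """Map one measured response into a health state.

    Unlike an infrastructure ping, the interesting states are the
    shades between "up" and "down": the app answered, but slowly.
    """
    if not responded:
        return "down"
    if elapsed_seconds >= FAIL_AFTER:
        return "critical"
    if elapsed_seconds >= WARN_AFTER:
        return "degraded"
    return "healthy"
```

A server that answers in 1.5 seconds is "up" by any infrastructure check, yet "degraded" here, and it's exactly that gap where proactive application monitoring lives.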
To compound the challenge, today’s applications aren’t all in one place. Most enterprises employ a distributed application model, in which data and resources could be located literally anywhere. Remote monitoring might show that the system is up, but there’s no way to compare the user experience for the app running in Boston to the user experience of the same app running in Brussels.
Finally, application monitoring tools are complex and require a lot of manpower to run. Older tools could require code changes, which are intrusive and increase application complexity. Few companies have the resources to devote to this initiative. So the approach reverts to reactive: searching for the problem once it impacts the company's ability to process claims on time, take reservations, or meet delivery deadlines.
The good news is: Companies can gain a complete end-to-end view of application availability and performance behavior without adding a staff of specialists, disrupting operations or breaking the bank.
It all starts with automation.
The Rx: Automation, Synthetic Transactions and a Managed Service Approach
Think back to our chest pain scenario and the subsequent trip to the emergency room. A patient presenting with chest pains would likely be placed on a heart monitor that continuously tracks cardiac rhythm. If irregularities occur, an attending nurse gets an alert. The patient might undergo a stress test to see how the heart reacts under simulated conditions, like exertion. By analyzing the cardiac patterns and test data, doctors can identify the cause and determine what they need to do to get that heart healthy.
By applying these same types of proactive techniques to business application monitoring, companies can gain a detailed view of performance, from a true user perspective, so they can mitigate issues before major damage is done.
Real User Monitoring (RUM) is a passive monitoring method that records actual user interactions with a website or application to determine whether the application is performing to predetermined service levels and, if not, what part of the business process is breaking down. While this approach is automated, it is still reactive because nothing happens until a user actually encounters a problem. In other words, we see the heart attack quickly enough to save the day with CPR, but we're not preventing the heart attack from happening.
But when companies incorporate an automated solution that uses synthetic transactions, they can proactively uncover performance issues without relying on a user to experience a problem. Instead of waiting for the "heart attack," these tools anticipate it by mimicking typical user transactions to detect latency without taxing system resources. For example, if a retailer has a high-traffic consumer website, the automated tool would access that URL, mimic the steps required to make a purchase online, then measure the response. It's like a stress test applied to your global application portfolio, all monitored automatically.
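At its core, a synthetic transaction is a scripted user journey with a stopwatch on each step. The sketch below shows the shape of such a probe; the step names and data structure are illustrative assumptions, and in a real probe each callable would drive the site's actual URLs with an HTTP client:

```python
import time

def run_synthetic_transaction(steps):
    """Execute a scripted user journey and time each step.

    `steps` is a list of (name, callable) pairs, e.g. the stages of a
    purchase: load the home page, search, add to cart, check out. Each
    callable performs one step and raises an exception on failure.
    """
    results = []
    for name, action in steps:
        start = time.perf_counter()
        try:
            action()
            ok = True
        except Exception:
            ok = False
        elapsed = time.perf_counter() - start
        results.append({"step": name, "ok": ok, "seconds": round(elapsed, 3)})
        if not ok:
            break  # later steps depend on this one, so stop here
    return results
```

Run on a schedule from several locations, a probe like this yields exactly the comparison the distributed model otherwise denies you: the same scripted purchase timed from Boston and from Brussels, side by side.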
The best automated APM solutions include an alert feature to notify designated personnel via email or text if latency occurs, a drilldown diagnostic mechanism to identify the root cause, and a customizable dashboard with a unified view of application availability status.
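The alerting piece reduces to a routing decision: given a health state, who gets notified, and with what message? This is a minimal sketch under assumed names; the contact structure and severity labels are illustrative, not any vendor's API, and a real tool would hand the resulting message to an email or SMS gateway:

```python
def build_alert(app_name, state, contacts):
    """Route a latency alert to the designated personnel.

    `contacts` maps a severity state (e.g. "degraded", "critical")
    to a list of addresses, with an optional "default" fallback.
    Returns None when the application is healthy and no alert is due.
    """
    if state == "healthy":
        return None  # nothing to send
    recipients = contacts.get(state, contacts.get("default", []))
    subject = "[APM] {} is {}".format(app_name, state)
    return {"to": recipients, "subject": subject}
```

The design choice worth noting is the early `None` return: alerting only on non-healthy states is what lets staff stop "watching the glass" and engage only when real action is needed.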
Some latency issues can be both identified and rectified through automation. If latency results from increases in volume (think of a retail website after a new smartphone launch) or seasonal peaks, IT personnel can quickly identify the cause, then add capacity or take other steps to bring performance back up to predetermined standards before the site goes down completely.
Instead of using valuable human resources to “watch” the applications, you use technology tools, then engage your staff when real action is needed. You increase app availability, minimize business interruption and prevent lost sales, all while making better use of your existing staff.
Keeping the Performance Pumping
When it comes to managing application health, the holistic approach is the best approach. Look for a comprehensive APM solution that can monitor strategic and business-critical applications and feed data back into a single dashboard. Just as you wouldn't want a partial physical exam, taking the "bits and pieces" approach to APM is ineffective in an integrated business environment. You need a solution that not only identifies potential issues, but can diagnose the cause of the problem. It's like a heart monitor that not only shows a coming breakdown but quickly identifies what's causing the degradation, be that an irregular heartbeat, a blockage or some other stress factor. By understanding the underlying issue before it causes a real problem, you keep applications from ever needing CPR after a performance failure. Just as a holistic, proactive approach is best for human wellness, it is the approach of choice to ensure the health of your applications.
The good news is, getting this level of proactive monitoring doesn’t mean a large capital investment or long wait times.
HP's Application Performance Monitoring Solution, for example, is a single managed service solution that monitors entire application portfolios, whether or not HP hosts the infrastructure and supports the applications. The toolset combines and automates many existing monitoring tools and can be delivered in a flexible "as a Service" model and deployed rapidly.
Today, applications are the heart of your business. By automating performance monitoring, you can maximize application uptime and reduce business risk—without missing a beat. The health of your company depends on it.