Companies in nearly every industry are using utility computing, although you might better recognize this trend by current buzzwords such as on-demand computing, virtualization, service-oriented architecture (SOA), R&D clusters, and compute farms. Dave Jackson, CTO and founder of Cluster Resources, Inc., says it’s the number-one model companies are selecting for their new IT systems and for high-performance computing (HPC) needs.
It’s no wonder, considering the competitive advantage it provides. Utility computing evolved from its first-generation concept (tapping into a grid of computing resources and lowering total cost of ownership by paying only for the computing power an organization actually used) by marrying it to the capacity-on-demand approach (rapid access and scalability as needed).
Today, the model optimizes computing resources, aligning technology so an organization can respond in real time to its dynamically changing business needs, maintaining high service levels when it has more data to process or less time to process it. It allows dynamic provisioning (redeployment or reconfiguration) of assets, moving software to run on servers with available capacity at a particular moment.
Jackson says this resource-sharing operating strategy was constrained in the first-generation model of utility computing by issues of ownership and management when it came to allocating workloads to shared resources. So he designed and developed the Moab line of cluster-, grid-, and utility-computing management products, which Cluster Resources provides.
Many large organizations have spent the money and have “boatloads of resources” but are not achieving anticipated return on investment, says Jackson. “They’ve found out that just having resources is not enough. If you can’t re-apply a resource to a new task within a matter of minutes, that resource’s value is really only 10 percent of what it should be.”
Clients of Cluster Resources fall into two groups, both facing the same pressure: the need to be more responsive to business needs. One group comprises enterprises running internal clusters and grids in an on-demand scenario to better handle dynamic workloads. The other is outsourcing service providers whose customers demand that they become more responsive to the buyers’ quickly changing business needs.
“The demand in doing business today requires you to be responsive,” states Jackson. “You can’t say, ‘We have to wait three months while we put in a cooling system and other electricals.’”
IBM’s Virtual Loaner Program
Four years ago, IBM’s Systems and Technology Group began developing a program that today enables thousands of IBM Business Partners who are registered with IBM PartnerWorld to have free remote access to its hardware, operating systems, and software resources on demand. IBM designed the Virtual Loaner Program to speed the development and testing of applications on IBM platforms, ultimately enabling IBM’s partners to achieve faster time to market with solutions for their customers.
But the Virtual Loaner Program had to work in the same manner as if the machines were located in the Business Partner’s own facilities. “When I came on board, we were looking for a very flexible and extensible workload scheduler,” recalls Dennis Nadbornik, Project/Program Manager, IBM, Systems and Technology Group. “We needed something that could integrate with the hosting software that we were developing ourselves as well as with commercial IBM software and some recent strategic acquisitions.”
For the solution, IBM needed a scheduling and policy engine that could integrate with IBM products like Tivoli Provisioning Manager and WebSphere Application Server and provide a central repository for time- and criteria-based business process automation.
“Moab’s Workload Manager tool became a cornerstone of our solution,” states Nadbornik. “Moab filled the gap between the Tivoli software and WebSphere Web Portal to enable our partners to request an advance reservation, get real-time availability information, book the reservation, and have the free resources provisioned from scratch in less than two real-time hours.”
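In rough pseudocode, that reserve-then-provision flow might look like the sketch below. It is a minimal illustration under assumed names: the service object and its methods (next_available, book, provision) are hypothetical stand-ins, not the actual Moab Workload Manager, Tivoli, or WebSphere interfaces.

```python
# Hypothetical sketch of the Virtual Loaner reserve-then-provision flow.
# The service object and its methods are illustrative assumptions, not
# the real Moab Workload Manager, Tivoli, or WebSphere APIs.
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class Reservation:
    partner_id: str
    platform: str        # a platform label (assumed field, for illustration)
    node_count: int
    start: datetime
    end: datetime


def request_loaner(service, partner_id, platform, node_count, hours):
    """Check real-time availability, book an advance reservation, and
    trigger from-scratch provisioning of the reserved resources."""
    # 1. Query real-time availability for a window of the requested size.
    window_start = service.next_available(platform, node_count, hours)
    if window_start is None:
        raise RuntimeError("No capacity available for the requested window")

    # 2. Book the advance reservation against that window.
    reservation = Reservation(partner_id, platform, node_count,
                              start=window_start,
                              end=window_start + timedelta(hours=hours))
    service.book(reservation)

    # 3. Kick off provisioning (OS image, storage, user accounts); the
    #    article reports this completing in under two hours.
    service.provision(reservation)
    return reservation
```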
Nadbornik says the Cluster Resources team brought insight and know-how to the project that enabled the Virtual Loaner Program to be launched not only on time but also with features that were six months ahead of schedule. Their collaboration has continued through the years. “Over the past four years, our requirements for this solution have changed, and the Moab software changed to meet our new requirements. Cluster Resources was flexible to meet the evolution of our offering,” says Nadbornik.
Three Trends
Trend #1. Shift from Internal to External Resources
Jackson says there are three ways organizations are using utility computing these days. In the first model, an organization has various internal clusters in departments or business units, which adapt to various workloads.
In the second model, the departmental clusters remain, but they are backed by a centralized adaptive computing center. As departments or business units require more resources, they use the Moab technology to import a portion of the adaptive center’s resources into the local cluster, growing it dynamically to run the workload (an overflow pattern sketched below).
The third model is to outsource, taking advantage of the remote resources of providers such as IBM. This is an important trend, although its rate of adoption largely falls along industry lines.
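A minimal sketch of how a workload might cascade across these three tiers appears below. The pool objects and their methods (free_nodes, lend, run_remote, and so on) are assumptions for illustration, not Moab’s actual API.

```python
# Illustrative overflow logic for the three models above. The pool objects
# and their methods are hypothetical stand-ins, not Moab's actual interface.

def place_workload(job, local, central=None, provider=None):
    """Run on the local cluster when it has capacity; otherwise borrow
    nodes from a central adaptive computing center (model 2) or burst
    out to an external provider such as IBM (model 3)."""
    if local.free_nodes() >= job.nodes:
        return local.run(job)                    # model 1: in-unit cluster

    if central is not None and central.free_nodes() >= job.nodes:
        borrowed = central.lend(job.nodes)       # model 2: import a portion
        local.attach(borrowed)                   # of the center's resources
        try:
            return local.run(job)                # run on the grown cluster
        finally:
            local.detach(borrowed)               # then return the nodes
            central.reclaim(borrowed)

    if provider is not None:
        return provider.run_remote(job)          # model 3: outsource

    raise RuntimeError("No capacity available in any tier")
```

The key point the sketch tries to capture is that the local cluster grows and shrinks dynamically: borrowed nodes return to the central pool as soon as the workload completes.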
HPC resources provider R-Systems, N.A., Inc., based at the Research Park at the University of Illinois at Urbana-Champaign, is reaping the results of organizations shifting their HPC needs to external providers. The company has a significant number of manufacturing clients experiencing major internal computing-capacity constraints.
Brian Kucic, Vice President Business Development, says a driver in the manufacturing industry is that “their simulation model sizes are getting larger. One manufacturer, for instance, has to do simulation in chunks because they have only minimal resources. By doing simulation in chunks, they’re not getting the best results possible.”
R-Systems’ utility-computing model is ideal for such clients. “Our costs are in line with their R&D funding budgets, and they don’t need to have in-house technology expertise to do the ramp-up and constant upgrades of HPC resources,” says Kucic.
The hosted utility-computing model is also attractive to R-Systems’ academic and industrial research clients. Kucic says they need fast turnaround times, and they also don’t have the machine rooms to house their computing clusters. He says, “A lot of manufacturing, academic, and other clients don’t have the budget for the infrastructure when their computing needs are sporadic, and they need access to our resources through our utility-computing model. Some clients, however, prefer our dedicated hosting model, where we house, manage, and maintain their clusters in our facilities.”
In both models, R-Systems provides rapid-response service. Part of the foundation for that high level of service is the Cluster Resources Moab products, which enable quick reactions and resource configurations to meet clients’ needs. “With Moab, we can turn around multiple configurations within a single environment very quickly,” states Kucic. “It’s helping us get new clients as well as taking care of our existing clients, and it differentiates our services from others in the marketplace today.”
Jackson at Cluster Resources says the oil and gas industry’s use of utility computing is “getting to the point where it’s almost pervasive.” The workload is volatile, with computing power requirements “surging for weeks at a time and then almost disappearing.” Similarly, the retail industry needs more computing resources to support its peak workload months. Both are leveraging the outsourced model.
Financial institutions, he says, are using utility computing because they “need to switch resources around on a dime. They need almost unlimited resources that they can redirect to certain applications in order to capitalize on money.” Because of the sensitive nature of their business, they don’t outsource this need.
The pharmaceutical industry, according to Jackson, “has a hodge-podge of clusters and grids scattered all over the place, and it’s becoming a management nightmare.” These companies see a surge in workload when projects are close to patent filing or when studies need to be completed. In recent years, they began using the outsourced model for R&D or for clinical data processing, leveraging a provider’s expertise in making clusters and grids more efficient.
Jackson says the interest in the government sector is in adaptive computing: making clusters customize themselves by adding servers, networks, and operating systems as needed to adapt to the workload. These deployments are seldom outsourced.
Healthcare organizations are not yet using the utility computing model to a wide extent.
Trend #2. Intelligent Management of Applications
A big trend is what Jackson calls “outsourced boutique services,” where an organization outsources a particular HPC application to run remotely instead of on an internal cluster if that’s the most efficient solution at the time. Data mining, regression testing, and materials analysis are examples of these kinds of HPC deals.
Here’s how it works: the desktop user pushes a button to run an application. Based on the cost of a resource and the response time required for the computing task, the intelligent Moab solution seamlessly determines where the application will run. If the local cluster is full, it moves the application out to the outsourcing provider’s remote site (with a fully customized provisioning environment for that application) and then sends the results back to the user’s desktop, either in real time or upon completion. “The user can’t tell if the system just ran in Malaysia, or in Massachusetts, or wherever. It’s seamless,” says Jackson.
Jackson compares Moab’s routing functionality to a puppet master. The application routes through Moab, but Moab never actually touches the application or the data. Moab just orchestrates the provisioning of the tool environment (storage, user creation, network security, etc.).
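As a sketch, that routing decision might be expressed as follows. Every name here (can_finish_by, estimated_cost, provision_env, and so on) is assumed for illustration; this shows the pattern Jackson describes, not Moab’s actual implementation.

```python
# Hedged sketch of the "puppet master" routing pattern. All names here
# are assumptions for illustration, not Moab's actual implementation.

def route_application(app, local_cluster, providers, deadline, budget):
    """Decide where an application runs based on resource cost and the
    required response time, then orchestrate the environment without
    ever touching the application or its data."""
    # Prefer the local cluster if it can finish within the deadline.
    if local_cluster.can_finish_by(app, deadline):
        return local_cluster.submit(app)

    # Otherwise pick the cheapest remote provider meeting both limits.
    candidates = [p for p in providers
                  if p.estimated_finish(app) <= deadline
                  and p.estimated_cost(app) <= budget]
    if not candidates:
        raise RuntimeError("No resource meets the deadline within budget")
    provider = min(candidates, key=lambda p: p.estimated_cost(app))

    # Orchestrate only the environment (storage, user creation, network
    # security) and leave the application and its data untouched.
    env = provider.provision_env(storage=app.storage_needs,
                                 user=app.owner,
                                 network_policy=app.network_policy)
    return provider.submit(app, env)  # results stream back to the desktop
```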
Although few organizations yet outsource their applications in this manner, Cluster Resources is currently involved in numerous pilot projects, and Jackson predicts there will be many deals of this nature by the end of the year. “This is truly Software-as-a-Service,” he says.
Trend #3. Service Provider Performance Guarantees
Moab’s greatest power, says Jackson, is that it “can see the future.” Thus, it can make guarantees and deliver true business service-level agreements for utility computing and capacity-on-demand-type services. It can calculate the probability of failure and line up resources to meet a company’s request to have a certain number of nodes by a certain time. “The Moab solution can guarantee that request will be met. So outsourcing customers can trust and depend on their service provider’s ability to deliver on such business promises.”
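One way such a guarantee can be grounded mathematically: given a per-node failure probability, a scheduler can line up enough spare nodes that the chance of delivering the requested count by the start time stays above the service-level target. The sketch below works that calculation with a simple binomial model; the function name and the example numbers are illustrative assumptions, not Moab’s internal method.

```python
from math import comb

def spares_needed(requested: int, p_fail: float, sla: float) -> int:
    """Smallest number of spares s such that the probability of having at
    least `requested` healthy nodes out of requested + s meets `sla`.
    Assumes independent node failures, each with probability p_fail."""
    s = 0
    while True:
        total = requested + s
        # P(at least `requested` of `total` nodes survive to the start time)
        p_ok = sum(comb(total, k) * (1 - p_fail) ** k * p_fail ** (total - k)
                   for k in range(requested, total + 1))
        if p_ok >= sla:
            return s
        s += 1

# Illustrative numbers: how many spares to line up so that 100 nodes can
# be guaranteed at 99.9% confidence when each node independently fails
# before the start time with probability 2%.
print(spares_needed(100, p_fail=0.02, sla=0.999))
```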
However, Jackson points out that most people don’t yet realize this high-level capability exists. Software solutions such as Moab that manage the resources involved in utility computing and capacity-on-demand models are still in their infancy regarding levels of adoption.
“People are currently in the first phase of low-level control–still wowed at seeing the solution orchestrate the resources. Their expectations are not high enough yet to take advantage of the real capabilities that Moab offers in higher-level control of business guarantees,” Jackson states.
He believes the inability to exert high-level and low-level control over the model at the same time was one of the biggest factors that prevented grid computing from really taking off and delivering on its promise. “It was like people threw things into a cloud,” recalls Jackson. “They lost control, and nobody could make guarantees or promises. But this new utility-computing model, with policy-rich SLAs and a quality-of-service (QoS)-rich intelligence solution managing a vast set of capabilities, is very much a winner. The adoption rate is faster than anything I’ve seen.”
Kucic says both R-Systems and R-People (which provides HPC consulting services) are also seeing a high adoption rate for the model. “We were just spun out from the National Center for Supercomputing Applications (NCSA) […] and our Web site’s still under construction. But we’re already getting so many calls from organizations needing our services that it’s unbelievable. We’re looking at the next level of what we and Cluster Resources can do for our clients.”
Lessons from Outsourcing Journal:
- More and more outsourcing clients in nearly every industry are demanding that their service providers become more responsive to clients’ quickly changing business needs.
- A hosted utility-computing model in an outsourced solution provides buyers the benefit of keeping their high-performance-computing (HPC) needs in line with organizational project budgets and also avoids the capital expense of acquiring, upgrading, and maintaining the resources.
- An emerging Software-as-a-Service (SaaS) and outsourcing trend leverages a software solution that seamlessly manages when and where an outsourced HPC application will run remotely instead of on a client’s internal cluster. Data mining, regression testing, and materials analysis are some examples of these kinds of HPC deals.
- The HPC software-management industry has evolved to the point where such a solution can enable a provider’s guaranteed level of service. Although this use of the software is still nascent, industry experts predict it will soon change the way ITO clients trust and depend on their outsourcing providers’ service-delivery promises for HPC needs.