Calculating Client/Server Value

By Tom Abate

[Sidebar]: Case Study: Millipore Corp.
[Sidebar]: Case Study: AirTouch Communications

It may not be easy for users to get a clear measure of how well their client/server systems are doing their job. Some organizations that have found reliable metrics are willing to share them.

Ron Hawkins had a problem. As the director of information services for the North American division of Millipore Corp. in Bedford, MA, Hawkins runs a Unix-based client/server operation that serves 800 users and supports the company's core business of manufacturing filtration systems for pharmaceutical and research laboratories. Hawkins thought that his client/server operation was successful, but without a set of metrics against which to measure Millipore's performance, he could not be sure he was delivering the best bang for the buck, or that his internal processes matched those of the best-of-breed client/server shops.

From past experience, Hawkins knew the value of using client/server metrics. Millipore was a founding member in 1992 of the Massive Open Systems Environment Standard (MOSES), an effort by pioneering open systems users to set standards and devise solutions for building and managing Unix-based relational database environments. But his experience with MOSES had also taught Hawkins that measuring the effectiveness of client/server systems was far more difficult than assessing mainframe operations, for which such practices and metrics had evolved over years.

"In the old days of IBM mainframes and dumb terminals, it was much simpler to gather metrics, because all the systems were inside the glass house," Hawkins says. "With an open systems, client/server model, you have functions distributed throughout the enterprise. It's hard to get a handle on cost, much less reliability or customer satisfaction.

"For instance, my budget reflects what goes on in the data center here in North America," he explains. "But there are lots of department-level systems, and to get an honest reading of cost-effectiveness, you have to factor them in, too."

The problem Hawkins faced is shared by a growing number of IS managers, who need metrics to judge the effectiveness of already established client/server systems, or in some cases to create baseline standards to guide new installations as they transition away from mainframe computing. "There are several reasons why an IT manager might look for a metrics program, and the initial reason will dictate whether they devise one in-house or hire a consultant," says Richard Reed, retired systems strategy manager for the semiconductor division of Texas Instruments in Plano, TX. "Where they're starting from will determine the type of metrics program they need." In April 1995 Reed completed an in-house study for TI that took into account the MOSES group's work and laid out the roadmap for TI's transition to open systems.

Asking for Advice

Whether they are experienced with open systems, like Hawkins, or planners of a prospective client/server operation, like Reed, IS managers may look for outside expertise in creating metrics. In some cases, where a client/server system is under attack for supposed deficiencies, consultants can not only measure effectiveness (or shortcomings) but also help repair an open systems operation and restore its usefulness to the parent company.

Hawkins, for instance, having built Millipore's client/server system with the aid of the MOSES work, needed a relatively simple set of metrics to tell him whether the system was providing a competitive advantage for his company. He turned to Coopers & Lybrand, the international accounting and consulting firm, which runs a benchmark assessment program through its quality assurance group in Boston. Nancy Corbett, program director, says work on the metrics program began in 1992, when Coopers & Lybrand was helping some larger clients devise performance measurements for their open systems deployments. Over time, C&L learned how to automate its assessment program, using a multimedia workstation and a question-and-answer format that can elicit responses in as little as three hours. This self-rated metrics program costs from $3,000 to $5,000 for small or mid-size firms. The result is a customized report that compares a company's client/server operations to those of similar-size firms in the same industry. Hawkins recently used this metrics program, as described in the accompanying story.

"This is a way to collect information about your IT operation and its cost-effectiveness, staffing, reliability and maintenance practices," says Corbett. "It's a tool IT managers and their executive teams can use to decide where an open systems deployment needs improvement and where it's already best-of-breed."

The Coopers & Lybrand metrics program contains a database of several hundred firms that have responded to the same self-assessment survey. This pool, composed mainly of respondents from insurance, manufacturing, health care, government and higher education, allows C&L to show each new survey-taker how its IT spending and staffing compare to industry averages, and how its application performance and system maintenance stack up against those of best-of-breed companies. Corbett says checks have been put in place to make sure respondents don't inflate their answers to look good, which would distort the results for other firms. The program follows standards for self-assessment surveys set forth by the International Benchmarking Clearinghouse in Houston. Corbett says that confidentiality is guaranteed.
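To make the comparison concrete, here is a minimal sketch in Python, using a purely hypothetical pool of survey responses rather than C&L's actual data or methodology, of how one respondent's IT spending might be measured against the average for its peer group:

    # Illustrative only -- a hypothetical pool of survey responses,
    # not Coopers & Lybrand's actual data or methodology.
    peer_pool = [
        {"industry": "manufacturing", "it_spend_pct_revenue": 2.1},
        {"industry": "manufacturing", "it_spend_pct_revenue": 3.4},
        {"industry": "insurance",     "it_spend_pct_revenue": 4.0},
    ]

    def peer_average(pool, industry, metric):
        """Average a metric over respondents in the same industry."""
        values = [r[metric] for r in pool if r["industry"] == industry]
        return sum(values) / len(values)

    # A new survey-taker's own figure (also hypothetical).
    respondent = {"industry": "manufacturing", "it_spend_pct_revenue": 2.8}
    average = peer_average(peer_pool, respondent["industry"], "it_spend_pct_revenue")
    print("Peer average: %.1f%%  This firm: %.1f%%"
          % (average, respondent["it_spend_pct_revenue"]))

The value of the real program lies less in the arithmetic than in the size and honesty of the pool being compared against.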

"The metrics survey has given me a hit list of things I have to improve," says Hawkins. "For instance, on operational assurance we were below the line, and that's important to me. That's something we plan to improve."

All on the Same Page

Not every company has as much open systems experience as Millipore. For newcomers to the client/server style, other types of metrics programs exist to provide a baseline on which to build and measure future development.

For Allen Shaheen, vice president of western operations in the Los Angeles office of consulting firm Cambridge Technology Partners, the most important metric for any client/server application is whether it serves the business need for which it was intended, within the budget and time-to-market parameters established when the project was initiated. While those may seem like self-evident criteria, Shaheen says that newcomers to the client/server model often fail to specify what functions end users want and what systems IS will deploy in response. The final, most critical step in any project is to get agreement at the company's highest levels on the stated objectives and costs. "When client/server systems fall short of expectations, it usually has nothing to do with technology," Shaheen says. "It usually has to do with a lack of user input, incomplete or changing requirements, or a lack of executive support."

With these pitfalls in mind, Shaheen's approach is to define the project explicitly, so success or failure can be judged on relatively objective criteria. Key to this approach is curing mainframe-centric IS managers of the notion that every application must be bullet-proof and contain every desirable feature.

"Client/server computing requires a different thought process where the goal is to deploy quick, flexible, scalable applications," he says. "Part of the job is managing client expectations. Yes, you can build mission-critical, open systems applications that have mainframe-like reliability and redundancy. But you have to ask at the outset whether the business need justifies the extra expense in time and money. Getting these sorts of agreements up front is the most important metric you can establish."

If client/server programs fail to establish firm initial guidelines, or if they do so and performance falls short of promise, it may become necessary to undertake a more radical type of metrics program, one that assesses the gap between what IS is delivering and what users require. According to Dan Weinstein, a partner with Computer Sciences Corp. in San Bruno, CA, in these cases consultants must not only measure performance but also devise strategies to improve it. "The impetus for this type of metrics assessment usually doesn't come from IT but from dissatisfied business units," Weinstein says. "At that point we would be trying to help IT identify and fix the main sources of friction."

A first step in such emergency metrics programs is to improve feedback mechanisms, Weinstein says. This can involve putting a member of the IS team into the operating unit to act as liaison. The liaison conveys legitimate gripes back to IS staff and suggests solutions that can be given to special development teams. When development teams turn out applications, the liaison can help sell the solution to the business unit. In the most extreme circumstances, consultants will force the IS group to bid for the right to provide a certain application. These in-house bids may actually be matched against outsourced bids, subjecting IS to the toughest of all metrics--competition.

"If an IT shop is getting complaints over an in-house human relations system, for instance, you have to ask whether that's something you should try to fix or outsource," Weinstein says. "On the other hand, on applications that directly support the company's core business, you would expect IT to be competitive because they understand the mission."

Measuring the Load

Even experienced client/server shops feel obliged to call in outside help from time to time, to provide baseline metrics and standards for future development. As a founding member of the MOSES effort, US West New Vector, based in Seattle, was such a shop. Last year it became part of San Francisco-based AirTouch Communications. But despite his past client/server experience, AirTouch's Dave Beal called for help when his applications development team was preparing to take client/server to the next level, pushing functionality down to mid-level servers and desktop computers.

For advice, AirTouch hired Terry Hodges, a founder of Thinc Corp., a client/server consultancy based in Redmond, WA. The process is described in the accompanying story.

Hodges says that most technology managers understand how to deploy hardware, but they don't necessarily understand the key architectural question in client/server applications, which is how to apportion functionality between the server and the client. Inexperienced client/server architects often overload functions on one side or the other, which bogs down performance. "The idea is to strike a middle range between the server-centric and client-centric," Hodges says.
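To see the trade-off Hodges describes, consider a minimal sketch, hypothetical and not drawn from the AirTouch project, that contrasts a client that pulls an entire table across the network and filters it locally with one that asks the database server to do the filtering and aggregation:

    # Hypothetical sketch of apportioning work between client and server.
    # SQLite stands in for a remote relational database server.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                     [(i, "west" if i % 2 else "east", i * 10.0) for i in range(1000)])

    # Client-centric: pull every row to the desktop, then filter there.
    # Over a real network this ships 1,000 rows to answer a question about 500.
    all_rows = conn.execute("SELECT id, region, amount FROM orders").fetchall()
    west_rows = [row for row in all_rows if row[1] == "west"]

    # Server-centric for this step: let the database filter and aggregate,
    # so only the answer crosses the network.
    west_total = conn.execute(
        "SELECT SUM(amount) FROM orders WHERE region = ?", ("west",)).fetchone()[0]

    print(len(west_rows), west_total)

In practice the right split depends on network bandwidth, server load and how often the data changes; the point is to weigh where each function runs rather than defaulting to one side.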

Regardless of whether an IS manager hires an outside consultant, buys a self-assessment service or creates an in-house study mechanism, measuring the effectiveness of client/server systems is a necessary part of running an open systems operation, says Millipore's Hawkins. The metrics program can start out as something as simple as an informal get-together with fellow IS managers, which Hawkins still does on a regular basis. But however it is accomplished, some form of metrics should be a routine part of IS practices. "You have to know, and everyone on your team has to know, how your performance measures up," Hawkins says.

Tom Abate covers science and technology for the San Francisco Examiner. He can be reached at abate@ccnet.com.