Strategies for Distributed Systems

Desktop on a Diet

By Larry Stevens

The adoption of thin clients is a trend with ramifications for users and administrators of desktop systems, and the vendors who supply them.

Here's a switch: Perhaps for the first time in its history, the IT community is excited about a new architecture that is simpler and uses fewer resources than its predecessor. Every other change, from the desktop revolution to client/server computing, was enabled by advances such as faster processors and cheaper direct access storage devices (DASDs) and memory, and all of them produced more burdensome administrative duties. By contrast, the thin client model features smaller processors and usually uses less complex software. And because it is a centralized, rather than distributed, model, it's easier to manage, cheaper and more secure.

Accordingly, this is not an architecture just for the rich and famous. In fact, smaller organizations burdened with the problems of distributed computing may be among the first to embrace it. For example, state governments are not known for being technologically advanced. Most are saddled with old legacy systems. But the Wisconsin Department of Workforce Development is about to embark on a project that represents a major paradigm shift. Not only is it planning to move some of its functions to a client/server system, it's also hoping to incorporate a thin client model.

The prime mover behind Wisconsin's technological shift is an equally revolutionary change in an American institution: welfare. The congressionally mandated Personal Responsibility and Work Opportunity Reconciliation Act of 1996, better known as the welfare reform bill, gives the states more freedom in how they administer welfare, but it also passes on more data collection and reporting responsibilities. So the Wisconsin Department of Workforce Development is planning to move some of its systems off its Amdahl mainframe to a Unix-based IBM RS/6000 server on the back end, with front-end applications that will let recipients of Aid to Families with Dependent Children (AFDC) view combined information from departments of education and health as well as job banks. The system will also allow them to create their own personal responsibility plan (a roadmap toward self-sufficiency) and track progress as they complete each step of the plan.

But there's a hitch. The various locations where recipients may need to access the system may be running anything from a dumb terminal to an old Macintosh to a fully loaded PC. "We can't ask our agencies to replace their hardware, and we can't create different versions of client software to support all those front-end systems," says Rollin Ager, director of information technology services at the Department of Workforce Development in Madison, WI. Ager's solution is to develop an intranet-based application. The client, a browser, is easily available and can run on any machine. "We decided the answer was a very, very thin client," he says.

Less Fat, Less Money

The inability to control the types and configurations of front-end hardware is one important reason that organizations with many users, or with users spread beyond the physical walls of the corporation, are moving to thin clients. The second most significant motivator for a thin client architecture is money: It costs less to manage than a fat client setup.

Peter Kotiuga, senior staff engineer at airplane engine manufacturer Pratt & Whitney Canada in Montreal, says that approximately half of the 300 users in his engineering department sit at X terminals rather than "Wintel" personal computers with Intel processors and Microsoft Windows. The choice comes down to dollars and cents. "The initial cost of a PC can be significantly higher than the cost of an X terminal, and the cost of managing a PC is many multiples of that," he says.

Some studies by research companies have estimated the annual cost of owning and managing a fully loaded Wintel machine at more than $10,000. Included in this figure are amortization of the hardware and the costs of installing new software, upgrading operating systems and providing end-user support. "PCs tend to be customized very quickly," Kotiuga says. "When people call the help desk with a problem, the support people could spend an hour first figuring out what the user did to the machine."

Higher management costs are generated by having the application stored on the clients, which are under the control of end users. By contrast, when you keep virtually all of the application in one central server, administration, software support and data integrity and security are less complicated. Software upgrades take place at one location; IS can be assured that everyone's using the same version and that no one has reconfigured the software to run in an unexpected way. And while you have to take pains to secure the server, the architecture is certainly safer than one in which users travel with sensitive data on their laptops.

"It's Economics 101--complexity equals cost," says Peter Burris, vice president and director of open computing and server strategies with the Meta Group in Burlingame, CA. "You can spend $30,000 on a Rolex, which has hundreds of intricate moving parts and which will cost hundreds to fix when it breaks down. Or you can buy a $9.99 Casio digital that has a few transistors and keeps as good time as a Rolex at a tiny percentage of the cost. It's true that Rolex apps, like Rolex watches, provide a more elegant solution. But the days of stepping back and marveling at complexity for complexity's sake are over."

Not for Everyone

Of course, the thin client architecture is not an optimal solution for all organizations. According to David Matthews, vice president of business development for UniKix Technologies, a Phoenix-based maker of client/server and Internet development tools, the size of the user base is the key determinant. "Fat clients are wonderful for departments that have 50 users," he says. "If you have 1,000 users, that architecture is virtually impossible. Somewhere in the middle, you have areas of gray. For large enterprise applications, I would never advise a fat client solution."

Matthews reasons that having more users increases the complexity of maintaining all the desktop systems. You may be able to ensure that in a department of 200 users, all machines are upgraded with sufficient RAM, DASD and processor speed to run the latest version of a key application, and software distribution and support may be manageable. But increasing the number of users to 700 or 800, and spreading the users over 10 states, creates a serious problem, Matthews contends.

Cheryl Currid, president of Currid & Co., a technology research and advisory firm in Houston, recommends three considerations when determining if a thin client architecture is right for your organization: server power, client power and bandwidth. "When you know those three variables, you can look at each application to see how much each needs," she says. For example, applications that need a lot of bandwidth may do better on a fat client, because that architecture requires fewer transmissions than a thin client.
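Currid's three variables can be made concrete with a rough decision sketch. The function below is purely illustrative; the parameter names, abstract capacity scores and thresholds are hypothetical, not part of her methodology:

```python
def recommend_client(app_bandwidth_need, app_cpu_need,
                     net_bandwidth, client_power, server_power):
    """Illustrative sketch of weighing server power, client power
    and bandwidth per application.  All inputs are abstract,
    comparable capacity scores (hypothetical, for illustration)."""
    # A chatty application on a constrained network favors a fat
    # client, which needs fewer transmissions per transaction.
    if app_bandwidth_need > net_bandwidth:
        return "fat"
    # If the server cannot carry the processing load but the
    # desktop can, the work has to live on the client.
    if app_cpu_need > server_power and app_cpu_need <= client_power:
        return "fat"
    # Otherwise the centralized model's management savings win out.
    return "thin"
```

A real evaluation would repeat this check for each application, as Currid suggests, and fold in the long-term questions of projected user growth and bandwidth.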

Currid adds that companies also need to look at their long-term strategic direction. "You have to ask yourself, 'Do I want to commit to always having a fat client?' or, on the other hand, 'Will I have sufficient bandwidth to support a thin client even if X-number more users are connected to the system?'"

Finally, Currid says, "The industry is in a state of flux because of the technological changes in thin clients." Pointing primarily to the Java language and its related tools, she notes that people who rejected thin clients two years ago may find the time ripe to look again.

Second Looks

The Federal National Mortgage Association (Fannie Mae) in Washington, DC, is in the process of developing client/server applications that its customers, primarily lenders, can use to speed up the loan authorization process. Daniel Packer, director of applications development, explains why Fannie Mae has chosen a thin client architecture. "As we push out from our corporation, we can't control the external domain, and we don't want to be concerned with configuration or management of all those clients," he says.

This is actually the second time Fannie Mae has tried a thin client approach; the first, initiated three years ago, fell on its face. The clients were Intel 386-based PCs, while Sun Microsystems servers performed much of the processing. "It solved the software distribution problem, but performance suffered as the number of users contending for the same server resources increased," Packer says. Also, modem and telephone line bandwidth limitations at the time slowed down performance. "The amount of interactivity we needed required too much messaging," says Packer.

So Packer's team used tools such as Microsoft's Visual C++ and Visual Basic to build software that added processing power to the PCs. Now, just a few years later, Packer considers those applications to be legacy systems. When Fannie Mae rolls out new applications this spring, they will be one of two kinds, both of which use a thin client approach.

The first category will be fully Web-based applications built with Java. In Packer's view, Java provides a more flexible way to distribute processing between client and server. Since the application is downloaded from the server, some parts of the application--those that need faster processing--can be client-centric, while others can be server-centric.

The second category will be modules that third-party vendors can use as plug-ins to their proprietary software applications. For example, dozens of vendors sell loan origination applications to lenders. Fannie Mae is developing an automated underwriting system, which uses artificial intelligence (AI) to authorize loans. Vendors can use the plug-in as a bridge to the AI program from the loan origination software. The plug-in will be written using Java tools to work on any client, fat or thin.

Software developer Fisher Technology Group in Pittsburgh is another company taking a second look at a thin client architecture. "We started developing our application two years ago. At the time, the thin client model wasn't yet viable," says Greg Such, vice president.

This application, called SupplyLink, is an electronic procurement and requisitioning system, which was developed partly using UniKix tools. It's a combination of electronic catalogs and transactions. For example, the company supplies an electronic version of the catalog of a large office supply dealer. End users can browse the catalog on their PCs and select the products they want. The information is then sent to all the people in the corporation who need to authorize the purchase, and finally, using electronic data interchange (EDI), fax or e-mail, the system transmits the order to the supplier.

Currently, the system runs on a number of Unix servers and Windows or OS/2 clients. But the company is using Java tools to create a thin client version. Java applets cause only strands of application logic and data to be downloaded. This eliminates or greatly reduces performance problems that sometimes resulted when Fisher Technology tried to create a thin client two years ago. Another advantage of Java applications, according to Such, is that they look the same on all machines. Conversely, browser-based Web applications look slightly different depending on the browser being used.

Such adds that, by and large, his customers do not face the performance issues Fannie Mae's did, because most of the work in SupplyLink involves lookups. With so little processing, each transaction requires only one or two transmissions from client to server.

Fisher Technology will continue to support its Windows and OS/2 clients even after it releases a thin client version. But Such expects new customers to prefer the Java-based version.

Not Everyone Convinced

But not all companies that tried thin clients in the past are ready to try again. Cincinnati Bell Information Systems had to back off from a thin client approach, because it found the network too slow. Two years ago, the company started building a client/server billing application that made use of thin clients. According to Jim Holtman, vice president of systems architecture, "This seemed the most direct way to create the program." For example, when an employee filled out a form, each field--name, address, city, etc.--was transmitted to the server separately. One form might include 10 transactions, each taking two-tenths of a second.

Two seconds per form didn't seem bad during initial testing phases, when users were relatively few. But Holtman found that as users increased, the large volume of small transactions shuttling between servers and clients at times slowed the network to a crawl. Holtman's group is now in the process of developing fatter clients. In the new architecture, the client will know when the user has completed a "work unit" (which, for example, may be a complete form). Once that happens, the client will automatically organize the data and update the server. "We don't feel ready to try a thin client again even if there are new tools," Holtman says. In other words, he'd rather be safe than sorry.
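The "work unit" fix amounts to batching: buffer field edits on the client and send one update per completed form instead of one transaction per field. The article does not describe Cincinnati Bell's tools in that detail, so the sketch below is a hypothetical illustration of the pattern, with invented names:

```python
class FormClient:
    """Illustrative sketch of the 'work unit' pattern: field edits
    accumulate locally, and the server sees one batched update per
    completed form rather than one round trip per field."""

    def __init__(self, send):
        self.send = send      # callable that transmits a batch to the server
        self.pending = {}     # fields edited since the last update

    def set_field(self, name, value):
        # No network traffic here; the edit is just recorded locally.
        self.pending[name] = value

    def complete_work_unit(self):
        # One transmission for the whole form: a 10-field form now
        # costs one round trip instead of 10.
        batch, self.pending = self.pending, {}
        self.send(batch)
        return batch
```

The trade-off is exactly the one Holtman describes: the client gets fatter, because it must now know what constitutes a complete work unit and how to organize the data before updating the server.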

While it's true that a thin client architecture isn't for everyone, virtually every organization should seriously consider it. Because the architecture reduces cost and complexity, you don't have to be a Fortune 100 company to benefit. Still, even if fat clients are legacy systems, as many people believe them to be, there's no reason to ditch them just because something new has come along. But if support, administration, security or data integrity are becoming too burdensome or expensive, a thin client architecture may provide an inexpensive alternative.

Larry Stevens writes about business and technology from Monson, MA. He can be reached at