Behind the News

Analysis of Industry Events

Supercomputers Under Stress

As microprocessor engineers cram more and more transistors onto each chip, and massively parallel processing (MPP) systems and clusters combine more and more microprocessors, supercomputers based on vector processing no longer look like the unassailable giants they once were. "The word supercomputer is ill-defined these days," says Larry Smarr, director of the National Center for Supercomputing Applications (NCSA) at the University of Illinois, Urbana.

In the past, the term referred to the fastest, most powerful computers available. Now the word more often refers to a specific type of computer: a vector-based machine used mostly for scientific and engineering applications. The reason for the change is that other architectures--in particular, reduced instruction set computing (RISC)--can now challenge vector machines in performance when running many kinds of applications.

There are two basic reasons why vector-based supercomputers have lost some of their appeal. The first is parallel technologies, which allow many microprocessors to work on a single problem. One example of a microprocessor-based system that challenges vector-based supercomputers is IBM's RS/6000 SP, a RISC-based machine that uses clustering. Another, not yet available, is the MPP system that Intel's scalable systems division in Beaverton, OR, is developing with 9,000 Pentium Pro processors and 262GB of memory. Both systems are being positioned to take on jobs formerly handled by vector machines.
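For readers who want a concrete picture of the parallel approach, here is a minimal sketch in C. It uses POSIX threads on a single machine as a stand-in for the separate processors of an MPP system or cluster; the array size and thread count are arbitrary, and the example is purely illustrative rather than drawn from any of the systems mentioned above.

    #include <pthread.h>
    #include <stdio.h>

    #define N 1000000        /* size of the problem (arbitrary) */
    #define THREADS 4        /* stand-in for "many microprocessors" */

    static double data[N];
    static double partial[THREADS];

    /* Each thread sums its own slice of the array. */
    static void *sum_slice(void *arg) {
        long id = (long)arg;
        long lo = id * (N / THREADS);
        long hi = (id == THREADS - 1) ? N : lo + (N / THREADS);
        double s = 0.0;
        for (long i = lo; i < hi; i++)
            s += data[i];
        partial[id] = s;
        return NULL;
    }

    int main(void) {
        pthread_t t[THREADS];
        for (long i = 0; i < N; i++)
            data[i] = 1.0;

        /* One problem, many processors: each thread works on part of it. */
        for (long i = 0; i < THREADS; i++)
            pthread_create(&t[i], NULL, sum_slice, (void *)i);

        double total = 0.0;
        for (long i = 0; i < THREADS; i++) {
            pthread_join(t[i], NULL);
            total += partial[i];
        }
        printf("total = %f\n", total);
        return 0;
    }

The point of the sketch is simply that one job is divided into pieces and the pieces run side by side; in a real MPP system or cluster the pieces run on separate processors or nodes and coordinate over an interconnect rather than through shared memory.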

Business Leverage

The second reason vector-based supercomputers have lost ground is the increasing acceptance of the Unix operating system for both scientific and business workstations. Microprocessor-based high-performance systems that run Unix enjoy a much wider library of available software than vector machines do. The reason is economy of scale: software written for Unix can run on anything from a desktop server to a high-end parallel system. Ironically, this is the same economy-of-scale advantage that has worked against Unix systems when they compete with machines running Microsoft operating systems.

On the other hand, software written for a machine from Cray Research, now a division of Silicon Graphics, Inc. (SGI), is limited to Cray customers. There are currently about 600 applications in Cray's catalog. Virtually all of them are in the scientific or engineering fields, and almost all of them also have a microcomputer version.

Like vector computers, parallel systems were first put to work on scientific and engineering applications. But unlike vector processing, parallel processing has been pushed aggressively into the business arena. According to David Gelardi, manager of DSS (decision support systems) and OLTP (online transaction processing) marketing for the RS/6000 division of IBM in Poughkeepsie, NY, IBM shipped no RS/6000 SP machines into the business market in 1993. Last year, two-thirds of those computers were sold for business applications, and this year's figure will probably be a few percentage points higher.

The business market increases the number of units IBM can sell, allowing the company to maintain lower price points than vector computer makers. "The broader market for microprocessor systems gives them better price/performance ratios compared with vector machines," says Willy Shih, vice president of marketing for the advanced systems division of SGI in Mountain View, CA.

Still a Large Niche

Microprocessor-based Unix systems will continue to encroach on markets formerly exclusive to vector machines. But many observers believe that traditional supercomputers are far from irrelevant.

As with mainframes and other large systems, there is still a large body of installed code for vector machines. Companies are naturally reluctant to throw out years of work even if they can get better price/performance by switching to a microprocessor-based machine.

In addition, there are some applications that, at least some people believe, still run faster on vector machines. According to Shih, applications that require a lot of memory bandwidth (that is, fast transfer of data between memory and the CPU) generally run faster on vector machines than on comparable microprocessor-based systems.
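As a rough illustration of what "memory bandwidth bound" means, consider the hypothetical C loop below. Each iteration performs only one multiply and one add but touches three large arrays, so the rate at which data moves between memory and the CPU, not the arithmetic hardware, sets the pace. The kernel and array size are arbitrary and are not taken from any benchmark or application cited in this article.

    #include <stdlib.h>
    #include <stdio.h>

    #define N 10000000   /* arrays far larger than any cache (arbitrary size) */

    int main(void) {
        double *a = malloc(N * sizeof *a);
        double *b = malloc(N * sizeof *b);
        double *c = malloc(N * sizeof *c);
        if (!a || !b || !c) return 1;

        for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

        /* One multiply and one add per iteration, but three memory
           accesses: performance is limited by how fast data moves
           between memory and the CPU, not by the arithmetic. */
        double scale = 3.0;
        for (long i = 0; i < N; i++)
            a[i] = b[i] + scale * c[i];

        printf("a[0] = %f\n", a[0]);
        free(a); free(b); free(c);
        return 0;
    }

Vector supercomputers were built with unusually wide, fast paths to memory precisely so that loops of this kind keep streaming data without stalling.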

Who needs this power? According to Shih, most people who buy high-end vector processors are interested in two kinds of capabilities: to run bigger problems and to run those problems faster.

The best example of an application category that, Shih says, may run faster on a high-end vector-based system is simulation. This includes such exotic uses as wind tunnel analysis and crash testing. "All of the major automakers and aerospace companies have at least one big Cray for things like simulations and thermal analysis. They could do the same job on a microprocessor, but speed is essential to them," Shih says.

Dissenting Voices

Not everyone agrees that vector-based computers have a computational edge over microprocessor-based systems. Mark Seager, assistant department head at Lawrence Livermore National Laboratory (LLNL) in Livermore, CA, says, "Traditional vector supercomputers don't scale to the performance level of parallel systems." LLNL is involved in a four-year, $6 million project called the Accelerated Strategic Computing Initiative (ASCI).

The goal of ASCI is to give researchers within the U.S. Department of Energy the computational power required to certify the nuclear stockpile without actual nuclear testing. This will be accomplished with a combination of non-nuclear experiments and 3-D modeling and simulation. The project involves developing an IBM RS/6000 SP that can sustain one teraflop (one trillion floating-point operations per second) and reach a peak of three teraflops.

In contrast, Cray's top of the line, the T90, offers 1.8 gigaflops, according to Shih. The comparison is hypothetical, however, because the ASCI system so far exists only on paper and is at least four years in the future.
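As a rough back-of-the-envelope comparison, and assuming the sustained and peak figures quoted above can be set side by side at all:

    \[
      \frac{1\ \text{teraflop}}{1.8\ \text{gigaflops}}
      = \frac{1 \times 10^{12}\ \text{flops}}{1.8 \times 10^{9}\ \text{flops}}
      \approx 556
    \]

In other words, the sustained ASCI target is on the order of 500 times the single figure Shih cites for the T90.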

While Seager is confident that parallel machines will soon eclipse vector-based systems in terms of power, he's unwilling to state categorically that vector computers are no longer needed. "That's too strong a statement. They have their niches," he says. However, when pressed, the only such niche he names for vector computers is the many applications companies already have that run on them, which they don't want to rewrite for parallel systems. Other than that, "I can't really think of a reason" to use a vector machine, Seager says.

Howard Richmond, a vice president at Gartner Group in Stamford, CT, is less equivocal. "Vector processing is an architecture which is definitely on the way out. What we see today is the last of the dinosaurs," he says.

In his view, the reason is simple. RISC-based microprocessors are less expensive and have the potential to provide more power than vector systems. "They served us well, but it's time to put them in museums," Richmond says.

While vector-based machines currently have a market, keeping it will be difficult. "Parallel machines are grabbing everything but the most high-end applications," says Smarr of NCSA. "If vector computer vendors don't have a broad base of revenue, it will be hard to keep up the research and development needed to continue to beat microprocessor-based systems." He believes vector machines will be around until the year 2000; after that he declines to speculate.

--Larry Stevens


Research and Relativity

Because IT changes so rapidly, and because vendors often make claims at odds with each other, users and journalists alike turn to market research companies for analysis and guidance. Yet despite their efforts at objectivity, research firms come up with different conclusions regarding the same issues. A recent instance brought this situation to mind again.

Last spring, Sentry Market Research (SMR), a division of IT magazine publisher Sentry Publishing in Westborough, MA, sent out a press release. Its headline said, "Strategic Use of NT Grows in Major Firms as Unix Begins to Slip." The text continued, "Fully half of the survey respondents say NT is likely to become their strategic server operating system, a 12-point increase from last year. In addition, 49 percent say they will use NT as their network operating system as well as their client system. Just under half, 44 percent, will also use NT as a strategic Web application server."

About one month later, the mail brought a report summary from Datapro Information Services Group of Delran, NJ, which is owned by publisher McGraw-Hill. "The global market for Unix-based software products continues to grow strongly," it said. The release quoted Mary Hubley, principal analyst for Unix and open systems, "Unix is alive and well in the marketplace. . . . Unix purchases overall will continue to grow at a healthy 8 percent clip through the end of the decade. Surprisingly, this will be accompanied by an 8 percent decline in non-Unix purchases, including those for Microsoft Windows and Windows NT systems."

Both of these predictions can't come true, can they? Their incompatibility led us to ask how a reader of such reports can decide whom to believe.

When contacted, Datapro's Mary Hubley neither defended her company's surveys nor denounced the research methodology of its competitor. In fact, although she hadn't looked at the SMR survey in detail, she surmised that it and Datapro's report are actually saying much the same thing. She pointed out that her survey measured Unix not only against NT but against all other operating systems, including proprietary ones. While NT has expanded and will continue to expand, as SMR reported, proprietary operating systems will decline. Taken as a whole, then, revenue related to non-Unix operating systems will drop, even though the NT component will continue to rise.

By the year 2000, when, as the Datapro release said, "global purchases will be split evenly between Unix and non-Unix systems," the lion's share of the non-Unix purchases will be NT, with only a small portion related to proprietary systems. That is about the same as saying, as the SMR report did, "Fully half of the survey respondents say that NT is likely to become their strategic server operating system."

Judging the Spin

While this explanation solved the mystery of the divergent reports, it raised further questions. Why do reports like these so often seem to be contradictory? Can you trust reports, either summarized in the press or in complete form? And how do you know which reports to trust?

"I've learned to triangulate," says Hugh Brownstone, vice president of strategic business development at IMS America, an information provider to the pharmaceutical industry in Philadelphia. He says he normally takes "no single survey at face value" but looks for trends that he can gather from reading many different reports.

Brownstone also considers the source. "I need a rough sense of their biases," he says. "Every one of them has a bias. Sometimes they'll spin things to make their clients happy."

Who was surveyed matters as well. Small business owners may have a different viewpoint from IS executives at Fortune 500 companies, yet newspaper and magazine summaries of research reports don't always specify the kinds of people surveyed. Often research companies survey only their own subscribers; in that case it's important to know what types of clients subscribe to the company's services.

Industry veteran Bob Metcalfe is vice president of technology at International Data Group (IDG), a Framingham, MA, publisher that is part of the same company as research firm International Data Corp. Metcalfe acknowledges that companies in the industry often have prejudices, and he recommends that people reading reports try to understand what those biases are and take them into account. "Some slants are derivative," he says. "If a company repeatedly surveys a market segment and gets consistent results, over time it will have an opinion such as 'we're high on Unix,' which may affect how it slants report summaries."

In fact, it's clear from reading the Datapro survey that it was intended to convey the idea that Unix would prevail. Hubley more or less confirmed that opinion. "Datapro considers Unix to be a very important operating system because it's open. NT is made by one single company," she said.

On the other hand, Brian Klatt, director of client services at SMR, says that focus groups SMR has conducted have shown that many corporate IS managers "are concerned about unchecked heterogeneity," which results from having many different Unix boxes, and are looking to NT "to simplify."

Apples and Oranges

Metcalfe of IDG asserts that the most frequent source of conflicting conclusions is not bias but means of categorization. "One company may say the LAN industry grew by 48 percent, and a second company claims it grew by 28 percent," he says. "But when you read the report carefully, you find that the first company included telecommunications equipment and computers, while the second confined itself to network devices such as routers, hubs and switches." He adds that this problem arises often in emerging markets where categories are fluid.

Accordingly, Metcalfe believes that it is best not to compare results from two companies. He says that while research methods and definitions will vary from company to company, they will usually remain constant in the same company's reports year after year. In this line of thinking, for example, it's probably valid to compare Datapro surveys from 1995 and 1996, but it may be misleading to compare Datapro's results with SMR's.

While there are legitimate reasons why survey companies may come up with different results, Metcalfe is quick to point out that he is not saying there is no such thing as good or bad research. For example, many surveys today rely on self-selected respondents because that method is cheap. But it can also be misleading. "If I post a survey on a Web page saying, 'Everyone who uses e-mail send me an e-mail message,' most likely I'll get a response which says 100 percent of those surveyed use e-mail," he says. Metcalfe argues that serious surveys have to be conducted scientifically, using a small but carefully selected group of respondents who are contacted by telephone if necessary to achieve a 50 to 60 percent response rate.

How do you know whom to trust? "That's where branding comes in," Metcalfe says. If over time you find a particular company tends to be right, or at least you've determined that its research methods are proper, you can normally trust reports from that company.

Brownstone of IMS America warns that a company's success in correctly identifying trends in the past isn't always a measure of the accuracy of a current report. "The new report may have been compiled by a different analyst or even a different team of analysts," he says.

Where does all this leave people who are reading summaries of surveys in the press and, because they don't have the actual report, can't look at the details? "I don't think you can get anything from research without knowing a lot about how the research was conducted. If you see a summary of a report, you have to be careful," Metcalfe says. In other words, if you trust the company, and you know a bit about its research methods and slant, you can give the report summary some consideration. But don't bet your IT budget on it.

--Larry Stevens