Users of Large-Capacity Systems May Find Open Solution

Increased power, lower costs may be available

Users with large databases and high-volume systems who are looking to lower the cost of maintaining their current mainframe will be interested in the track titled "Large High-Volume Solutions" at UniForum '95. High-volume systems and very large databases are a growing segment of the commercial open systems market as users look to lower costs and take advantage of the increased power of PCs. The track will explore the issues surrounding building, maintaining, and integrating systems for high transaction volumes using large databases in client/server environments.

For example, operating systems like OS/2 and Windows NT might coexist with UNIX in three-tier architectures, the subject of a session chaired by Peter Coffee, advanced technologies analyst for PC Week. In this client/server model, the three tiers, Coffee says, consist of a high-bandwidth, rugged data server, typically a mainframe, located in a central area; a middle tier of application servers with the processing power needed for new enterprise applications; and an economically priced desktop client with a familiar, easy-to-use interface. Typically, the application servers run some variant of UNIX on a RISC CPU. "The question is, what's the opportunity for PC technologies such as OS/2 and NT to grow up into that middle tier and challenge the UNIX family of products at that level?" Coffee says. In addition, Apple could contend with a PowerPC-based system, and Sun would also like a piece of the middle-layer business. "Whether we'll ever see NT running on the SPARC architecture is an interesting possibility," he adds.
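The three-tier split Coffee describes can be sketched in miniature. This is an illustrative sketch only; the class names, account keys, and method names are hypothetical, not any vendor's API:

```python
# A minimal sketch of the three-tier client/server model described above.
# All names and data here are illustrative assumptions.

class DataServer:
    """Tier 3: the rugged, high-bandwidth central data server
    (the role typically played by a mainframe)."""
    def __init__(self):
        self._records = {"acct-1001": 250.0, "acct-1002": 75.5}

    def fetch(self, key):
        return self._records[key]

class AppServer:
    """Tier 2: the application server where enterprise logic runs,
    typically some variant of UNIX on a RISC CPU; the tier that
    OS/2 and NT hope to contend for."""
    def __init__(self, data_server):
        self._data = data_server

    def balance_report(self, account):
        # Business logic lives in the middle tier, not on the desktop.
        balance = self._data.fetch(account)
        return f"Account {account}: ${balance:.2f}"

class DesktopClient:
    """Tier 1: an economically priced desktop with a familiar interface.
    It only presents what the application server computes."""
    def __init__(self, app_server):
        self._app = app_server

    def show(self, account):
        print(self._app.balance_report(account))

client = DesktopClient(AppServer(DataServer()))
client.show("acct-1001")   # prints "Account acct-1001: $250.00"
```

The point of the split is visible even at this scale: the desktop holds no data and no business logic, so the middle tier can be rehosted (UNIX, OS/2, NT) without touching the client or the data server.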

The selling point will be improved processing capacity. "A lot of different wheels are all interlocking in different ways to try to gain a strong position in what is probably the fastest growing market for additional processing capacity," Coffee says. "Users don't really need a lot more speed on their desks. But when you look at the things organizations can benefit from, such as manufacturing efficiency and improved delivery of services, clearly there is a market for perhaps another tenfold growth in processing capacity at that middle layer. People who want to make systems they can actually make some money on, as opposed to consumer products with a razor-thin profit margin, definitely need to be looking at this layer of the enterprise architecture."

The panel chaired by Coffee will include Ram Sudama, vice president of technology and chief scientist with Open Environment Corp.

Those who are more interested in large database management may want to attend the session on "Optimizing Data Management Servers," chaired by Steve Harad, senior UNIX systems administrator with IMS America. "We're going to look at problems associated with large databases, and we'll define large as in the billions of records and hundreds of millions of lines of code, from four points of view: an architectural, system engineering, database administration, and UNIX administration point of view," Harad says. "We will address the problems and pitfalls of database optimization."

Examples of such problems include how to improve performance and reliability, how to make the database scalable, how to use buffer caching, RAID arrays, and disk devices, and what UNIX parameters may have to be tuned.
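One of those techniques, buffer caching, amounts to keeping recently used disk pages in memory so repeat reads skip the disk. A minimal sketch of an LRU (least-recently-used) page cache, with hypothetical capacities and names rather than any particular database's implementation, might look like this:

```python
from collections import OrderedDict

class BufferCache:
    """A minimal LRU buffer cache sketch: recently read disk pages stay
    in memory, so repeat reads avoid disk I/O. Capacity is in pages."""
    def __init__(self, capacity, read_page_from_disk):
        self.capacity = capacity
        self.read_page_from_disk = read_page_from_disk  # called on a cache miss
        self.pages = OrderedDict()                      # page_no -> page data
        self.hits = self.misses = 0

    def read(self, page_no):
        if page_no in self.pages:
            self.hits += 1
            self.pages.move_to_end(page_no)     # mark as most recently used
            return self.pages[page_no]
        self.misses += 1
        data = self.read_page_from_disk(page_no)
        self.pages[page_no] = data
        if len(self.pages) > self.capacity:
            self.pages.popitem(last=False)      # evict the least recently used page
        return data

# Simulated "disk": page contents are derived from the page number.
cache = BufferCache(capacity=2, read_page_from_disk=lambda n: f"page-{n}")
for n in (1, 2, 1, 3, 1):
    cache.read(n)
print(cache.hits, cache.misses)   # 2 hits (the re-reads of page 1), 3 misses
```

At billions of records, the tuning question the session raises is exactly the trade-off this sketch exposes: how much memory to give the cache, and which eviction policy keeps the hot pages resident.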

The session will be geared toward the system architect, UNIX administrator, or database administrator, Harad says. Speakers, all from IMS America, will include Rod Gunn, director of database administration; Steve Keller, senior systems engineer; and Jeff Schott, director of software services.

Other sessions in the track include: