Under Control

HEAT, LIGHT AND HARDWARE

When I started in this line of work, we were still in the era of wooden ships and iron men. In those days of yesterdecade, we built the algorithms into the hardware topology itself. Now we make magic with the software. But it's hardware that houses those software systems.

There are three basic configurations. First, the classic single processor, such as in a personal computer (PC). Second, the node network of heterogeneous or homogeneous single processors, widely used in business and industry, e.g., local area networks and client/server computing. And third, the most recent innovation, massively parallel processors (MPP), which are normally homogeneous and installed in a single box. From a designer's standpoint, all three hardware configurations are related and differ only in details.

The PC is but a node in a system. A network is several nodes put together on an ad hoc basis. In MPP, the designer formally configures the assembly of nodes for special purposes. The classic supercomputer is a small group of high-performance vector processors in a box. Multiple nodes are then grouped for more power. Eventually we end up with a massively parallel processor.

Obviously, even computers gotta have the hardware components of processor, memory and I/O channel. These elements can be configured in any of four basic forms. The single-instruction, single-data (SISD) architecture is used in PCs and workstations. The single-instruction, multiple-data (SIMD) architecture was used in early MPP finite element analysis applications. The multiple-instruction, multiple-data (MIMD) approach is used for distributed control systems in the process industries. Least common is multiple-instruction, single-data (MISD), used for real-time control in programmable controllers, as well as in Flavors Technology's PIM system.
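
To make the first two forms concrete, here is a toy sketch in C. It is my own illustration, not any particular machine's instruction set. A SISD machine steps through the data one element at a time; a SIMD machine applies the same instruction to a whole group of elements in lockstep, modeled here by the fixed four-wide inner loop.

    #include <stdio.h>

    #define N 8

    int main(void) {
        float a[N] = {1, 2, 3, 4, 5, 6, 7, 8};
        float b[N] = {8, 7, 6, 5, 4, 3, 2, 1};
        float c[N];
        int i, lane;

        /* SISD: one instruction stream, one data element at a time.
           This is the classic PC/workstation model. */
        for (i = 0; i < N; i++)
            c[i] = a[i] + b[i];

        /* SIMD, conceptually: one instruction applied to a group of
           elements at once.  The four-wide inner step stands in for a
           vector unit's lanes; real SIMD hardware would issue a single
           vector add per group. */
        for (i = 0; i < N; i += 4)
            for (lane = 0; lane < 4; lane++)  /* all lanes in lockstep */
                c[i + lane] = a[i + lane] + b[i + lane];

        for (i = 0; i < N; i++)
            printf("%.0f ", c[i]);
        printf("\n");
        return 0;
    }

On real vector hardware, each four-lane group becomes a single instruction, which is why early MPP machines favored SIMD for regular problems such as finite element grids.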

The I/O is usually not considered part of the initial architecture. This is changing, however. Computer scientists now understand that a "philosopher computer" solipsistically performing logic functions has only limited application in the real world. The basic architecture should include all three: processor, memory and I/O.

The fundamental constraints in system design are heat and light. Heat dissipation is one of the most frustrating because we always feel we should be able to overcome it with flow and dissipation mechanisms, or by using less power per calculation. We don't feel so bad if we can't overcome constraints that are a function of the speed of light. We can skirt around those constraints by using parallel processors and reducing run length. This has a ripple effect, as semiconductor chips become more complex in order to work in large groups, and special software is written for the chips.
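
To put a number on the light constraint, here is a back-of-the-envelope sketch in C. The one-half propagation factor is an assumption: signals on copper traces move at roughly half the speed of light, and real boards vary.

    #include <stdio.h>

    int main(void) {
        const double c = 3.0e8;          /* speed of light, m/s */
        const double board_factor = 0.5; /* assumed trace propagation speed,
                                            as a fraction of c */
        double clocks_hz[] = {33e6, 100e6, 1e9};
        int i;

        for (i = 0; i < 3; i++) {
            double per_cycle_m = (c * board_factor) / clocks_hz[i];
            printf("%5.0f MHz clock: signal travels about %6.1f cm per cycle\n",
                   clocks_hz[i] / 1e6, per_cycle_m * 100.0);
        }
        return 0;
    }

At 33 MHz a signal can cross several meters of wire in one clock cycle; at a gigahertz it gets about 15 centimeters. Shorter run lengths, and therefore parallel processors packed into small boxes, fall straight out of the physics.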

When you design a computer, you don't manage by setting a goal. You start by managing the available resources. It's not in our interests as engineers to design machines that can't be built. On the other hand, because silicon performance in memory and processor power improves by a factor of ten every five years, a designer must plan for chips that don't exist yet.
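
Taken at face value, that growth rate compounds quickly. Here is the bare arithmetic of the factor-of-ten rule, nothing more:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        /* Silicon improves roughly 10x every five years, i.e. by a
           factor of 10^(t/5) after t years.  A designer starting a
           multi-year project can estimate the parts at shipment. */
        double years;
        for (years = 1.0; years <= 5.0; years += 1.0)
            printf("after %.0f year(s): about %.1fx today's silicon\n",
                   years, pow(10.0, years / 5.0));
        return 0;
    }

Over the four-year concept-to-shipment cycle described below, the silicon improves by better than a factor of six. Design to today's parts and you guarantee an obsolete machine.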

Design for manufacturing must also be considered. It's important to guess right. What chips are available? And when? What board sizes? How many layers? What software is there? What will new software cost? And most of all, who will be the user?

Recently, we designed a machine for export that had to meet Department of Commerce specifications. Meeting those regulations had to be our top consideration. Other considerations might be: What kind of database is needed? Is the system for real-time control, or is it for an MIS department? We don't design simply for our own edification, but for an application or use.

The team approach is strongly recommended. We now know that skunk works work, solving problems in half the time and at one-fifth the cost. It takes about two years to nail down a concept design and about another two years till you have a significant customer shipment. Note that these estimates are for a new architecture, not simply building a "special," or updating a current design.

In the future we can expect to see memories in the kilogigabit range, gigabit solid-state memories, and 100 MIPS available in desktop computers. Swarming software and autonomous agents will alleviate the need to generate huge amounts of software. Power and intelligence will accelerate over the next five years to astronomical levels, possibly beyond our ability to apply it. It's where we're going.

As appeared in Manufacturing Systems Magazine May 1993 Page 62
http://www.manufacturingsystems.com
