It's a milestone: By midyear, Seagate and Hitachi will offer 3.5-inch disk drives that store a terabyte. Other disk makers are right on their heels. Processor makers with multicore offerings will reach other technology milestones, as will network gear makers whose products will foster ever-faster communications. It's hard to think of so many metaphorical milestones without recalling the originals, stone pillars that marked thousand-pace distances on the vast web of Roman Empire roads. These highways linked Rome to the rest of Europe, spanned Britain and even crossed Asia Minor, producing dramatic consequences, both intended and unintended.
Some Roman roads and their milestones are still intact, two thousand years after they were first built. Computer systems won't last that long. But the most valuable information they house will be, because the purpose of computers is not to last but to allow knowledge to last, and to be passed safely through generations of equipment, and of people, too.
But this is not to diminish the importance of computing, and the advances it will make this year. On the contrary, for even though just about every development in information technology is promoted as if it were truly of historical importance, quite a bit of what is going on looks like it will turn out to be significant. Skeptics may argue that nothing all that big is happening, that computers are getting faster, smaller, and cheaper, as they always have, and that's all. But that may be a bit too glib. Frederick Engels, who used to live around the corner from us, pointed out that there are times when quantitative change is so great that it produces qualitative change. What is going on right now in computing just might be one of those times.
If something important is truly afoot, chances are it won't be the stuff that makes headlines in the press. In the media, the headlines are likely to emphasize developments in the PC market, because so many people have PCs, and because that is where the companies that make disks and processors and network gadgets can achieve high unit sales.
We can understand what is behind the hubbub. Later this year, the PC vendors will offer to put terabyte disk drives in home computers and component vendors will offer the same devices as replacements or additions to installed machines. Terabyte drives could retail for as much as $400 apiece, far more than typical PC disks. This will pep up the sales of component dealers and raise the average selling price of machines offered by computer vendors. But the impact of the recording technology that has made it possible to pack a terabyte of storage into a standard 3.5-inch disk form factor will be far more substantial in glass houses and server hotels, and in offices where smaller servers live. These servers represent market segments that are unlikely to grab headlines.
It won't take computer professionals long to figure out just how important the compact and capacious new drives will be. Compared to current high capacity disks, which typically provide about 250 GB of storage, the new units can reduce the physical volume required for storage to a fourth of its current level while delivering a similar reduction in power consumption, and thus heat, per unit of storage capacity.
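The arithmetic behind that claim is simple enough to check. Here is a back-of-the-envelope sketch; the 10 TB target and the per-drive wattage are illustrative assumptions, not vendor figures:

```python
# Back-of-the-envelope comparison: how many drives, and how much power,
# it takes to provide 10 TB of raw storage with 250 GB versus 1 TB units.
# The per-drive wattage is an illustrative assumption, not a vendor spec.
import math

def drives_needed(total_gb, per_drive_gb):
    """Number of drives required to reach a raw capacity target."""
    return math.ceil(total_gb / per_drive_gb)

TARGET_GB = 10_000        # 10 TB raw
WATTS_PER_DRIVE = 12      # assumed, same class of 3.5-inch drive

old = drives_needed(TARGET_GB, 250)     # 40 drives
new = drives_needed(TARGET_GB, 1000)    # 10 drives

print(old, new)                  # 40 10
print(old * WATTS_PER_DRIVE)     # 480 watts
print(new * WATTS_PER_DRIVE)     # 120 watts: a fourth of the old figure
```

Since both drive generations occupy the same 3.5-inch form factor and draw similar power per spindle, the count of spindles is the whole story.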
When the very same perpendicular recording techniques are used in 2.5-inch disks, the results are also noteworthy. These drives fall into two classes. The most abundant class is defined by the 2.5-inch drives used in laptop computers. They provide a lot of storage in a small space and use very little power. The other kind is a miniaturized version of the fast and sturdy drives used in servers that give their storage subsystems a real pounding.
For now, makers of these disks will probably be limited to drives storing 73 GB, because larger drives yield unsatisfactory performance in random access applications. The new small server disks will generally use the SAS (serial attached SCSI) interface: It's fast, it's physically compact, and it's affordable. The big performance advantage of these new disks is due in part to their high rotational speed: 15,000 RPM. Vendors reckon the smaller form factor can cut power and heat by about a third compared to 3.5-inch drives, and given a bit of time they might be able to do even better than that.
Cousins of these server drives, spinning at half this rate (or less) but providing even higher capacity, will end up in notebook computers, where there is a never-ending conflict between tight power budgets and burgeoning storage requirements. They also can be used in servers that don't keep their disks buzzing like angry hornets. The industry ought to be able to deliver a lot of 160 GB laptop drives this year and, perhaps in time for Christmas, ship drives that hold considerably more data. The SATA interface that hooks these drives to disk controllers isn't as smart as SAS technology, but it's very cheap. The result is disk storage with respectable performance that costs less than a buck a gigabyte.
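That last figure is easy to verify. Assuming a street price of $150 for a 160 GB SATA drive (an illustrative number, not a quoted price):

```python
# Cost per gigabyte for a hypothetical 160 GB SATA laptop drive.
# The $150 street price is an assumption chosen for illustration.
price_usd = 150.0
capacity_gb = 160

cost_per_gb = price_usd / capacity_gb
print(round(cost_per_gb, 2))   # 0.94 -- under a buck a gigabyte
```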
It's pretty clear some new patterns are emerging. You can decide for yourself if the change is qualitative or merely quantitative by taking a quick look at the basic specifications of a few contemporary disk drives from one supplier. And here's another clue: The disk business has moved ahead so far and so fast that computer makers are starting to design some products around the new drives. Laptop makers such as Toshiba and Hewlett-Packard sell machines with two hard drives configured to allow mirroring. The machines already on the market mirror roughly 100 GB, but the engineering in them would not have to be altered to accommodate drives with higher capacity. They separate the road warriors from the road worriers.
Laptop computing also seems to be the arena in which another storage concept is attracting a fair amount of attention: hybrid disks that combine a rotating drive with flash memory. The flash memory helps laptops boot up in a jiffy, among other things. What could be more impressive than computers that help cover Microsoft's shame?
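The principle at work in a hybrid disk is ordinary caching: blocks the machine reads at every boot get parked in flash, so the platters can stay still. A toy model, with every name invented for illustration:

```python
# Toy model of a hybrid disk: a small flash cache sitting in front of a
# rotating drive. All class and variable names are invented for illustration.
class HybridDisk:
    def __init__(self, platter_blocks, flash_slots):
        self.platters = platter_blocks   # block id -> data (slow, spinning)
        self.flash = {}                  # small, fast, nonvolatile cache
        self.flash_slots = flash_slots
        self.flash_hits = 0

    def read(self, block_id):
        if block_id in self.flash:           # fast path: served from flash
            self.flash_hits += 1
            return self.flash[block_id]
        data = self.platters[block_id]       # slow path: spin-up and seek
        if len(self.flash) < self.flash_slots:
            self.flash[block_id] = data      # remember it for the next boot
        return data

disk = HybridDisk({n: f"block-{n}" for n in range(100)}, flash_slots=8)
disk.read(0)
disk.read(0)
disk.read(1)
print(disk.flash_hits)   # 1 -- the second read of block 0 never touched the platters
```

Real hybrid drives keep the cache map in firmware rather than in Python dictionaries, but the flow is the same: a hit skips the spindle entirely, which is where the fast boot comes from.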
The gap between high performance disks and high capacity disks is bound to change the way data centers operate. Most server hotels also house a ton of data that ought to be kept on high-capacity arrays, data that doesn't require ultra rapid access but which can't be relegated to tape or optical storage, either.
The challenge of making two-tier storage practical will fall on the providers of storage management software. The biggest players here are also the leading disk array vendors, such as IBM and EMC. They might prefer to have all their customers buy very large high performance disk arrays, but that's impossible. Even if their customers have ample cash budgets, they don't have big enough environmental budgets. Halving the heat dissipation of disk farms can yield very substantial savings, not just in terms of electricity and air conditioning, but also in terms of the power backup systems needed to keep those disks spinning no matter what. So the storage vendors are not going to fight the inevitable. Instead, they are going to make it more pleasant for their customers, and charge for the software that provides all the comfort.
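The heart of such tiering software is nothing more exotic than a placement policy: data touched recently stays on the fast arrays, while colder data migrates to the big, slow, cool drives. A minimal sketch, with the one-month threshold an assumption chosen purely for illustration:

```python
# Sketch of a two-tier placement policy: recently accessed data stays on
# fast 15K RPM arrays, colder data moves to high-capacity drives.
# The threshold and the tier names are illustrative assumptions.
import time

COLD_AFTER_SECONDS = 30 * 24 * 3600   # untouched for a month -> capacity tier

def assign_tier(last_access, now=None):
    """Return the tier a piece of data belongs on, given its last access time."""
    now = now if now is not None else time.time()
    return "capacity" if now - last_access > COLD_AFTER_SECONDS else "performance"

now = time.time()
print(assign_tier(now - 3600, now))             # performance (touched an hour ago)
print(assign_tier(now - 90 * 24 * 3600, now))   # capacity (untouched for 90 days)
```

Commercial products dress this up with quotas, schedules, and per-application rules, but the decision they charge for is essentially this one, made continuously across petabytes.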
If the storage suppliers happen to need inspiration, they might consider taking a road trip on a Roman road, possibly in the company of archeologists and civil engineers. They could gain a keen appreciation of how the Romans built their roads, which involved thinking in terms of layers. Along the way, perhaps the Appian way, they could also learn something about milestones.
In today's language, the word milestone is as thoroughly eroded as some of the originals put in place by the Romans. Roman milestones were the GPS gizmos of their time, and then some. They not only noted distance and provided other guidance to one's current position; they said where you've been and where you might go, too. In some cases, they also provided advice for tourists or for armies on the march at the Ides of March.
Knowing that empires millennia in the future would need a little help with their storage architectures, the Romans built their roads in layers. The classic design used on major thoroughfares had a foundation made of large stones. This provided the structures with both a firm footing and drainage. On top of the coarse rubble was a layer of small stones and gravel. Finally, on the surface, there were paving stones. The middle layer was a vital part of the Roman highway maintenance system. If a road surface got out of kilter, the middle layer made it easy to realign the paving stones. And if a paving stone broke, it was easy to adjust the contours of the middle layer so the top surface of the replacement paving stone would align with surrounding pavement.
The result was a smooth, dependable road that was able to carry the full range of contemporary traffic: walkers, riders, small chariots and buggies, and larger wagons.
The Romans built secondary and tertiary roads, too, and the ones with less traffic were scaled back in terms of their technology. The Romans were not only great engineers; they were efficient ones, too.
Many of the roads led to and from Rome, the seat of the empire. But the Romans built roads elsewhere, too. Roman roads connected ports and settlements across Britain. There were also roads crossing territory that now lies in Greece, Macedonia, Albania, and part of Turkey.
The Romans used local materials and adapted details of their construction projects to different terrains. Nevertheless, they maintained consistent architectural concepts. That enabled Roman soldiers, who participated in building projects, and officers, who supervised the construction and maintenance of roads, to perform their jobs well wherever they were assigned.
There are quite a few similarities in the computer industry. Standards for device interfaces, chip architectures, networking protocols, and the expression of ideas in linguistic form are shared the world over. Many (but not all) innovations are built on the shoulders of established technologies, and that has contributed to the speed at which ideas diffuse across the computing landscape.
If the recent history of the computer industry and the ancient history of Rome are any sort of guide, a new wave of standardization will arise from the development of multicore processors.
The industry has already boiled down its general purpose processing architectures to a few families. The dominant theme is the one first set by Intel and then enhanced by AMD, and there are other important architectures, too, such as Power and Sparc. In addition, there are a number of processor designs used in devices that can be tied to computing networks but which are not, strictly speaking, computers.
The most popular examples can be found in mobile phones. The applications processors (still distinct from the digital signal processors) in phones are influential for a few reasons. They are pretty fast. They don't use much power. And more than a billion of them will be sold this year. The innards of mobile phones are quite different from the guts of servers. The leading architecture is called ARM, and it's not a variation on the X86, Power, or Sparc themes. Yet the engines in general purpose computers and the applications processors in mobile phones have quite a bit in common, including some aspects that are visible to sufficiently curious end users. For instance, most servers, most PCs, and most mobile phones can understand Java. Also, all are likely to have technology that is shaped by the use of Unicode or its ASCII subset, so they agree about sorting. (While IBM mainframes and System i servers use EBCDIC for encoding data, they have been taught Unicode as a second language.) In Rome, Romans spoke Latin and it was used for official purposes throughout the empire, but on the ground far from Rome, the Romans conducted their affairs in other languages, such as Greek.
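That agreement about sorting can be shown in a couple of lines: ordering strings by Unicode code point gives the same result on any of these platforms, whereas native EBCDIC collation would put lowercase before uppercase and digits after both:

```python
# Sorting by Unicode code point: digits (U+0030..) come before uppercase
# (U+0041..), which comes before lowercase (U+0061..). A server, a PC,
# and a Java-capable phone all produce this same order.
names = ["zebra", "Apple", "42nd", "apple"]
print(sorted(names))   # ['42nd', 'Apple', 'apple', 'zebra']
```

Under native EBCDIC the same list would come out with "apple" ahead of "Apple" and "42nd" at the end, which is exactly why the mainframes were taught Unicode as a second language.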
The computer industry is on the verge of determining, by trial and error and market reception, which of its architectures will work best in multicore designs. The winner or winners will be the cores that not only work well at the heart of general-purpose computers, but which also deliver good results when used as storage controllers, routers, and maybe even handheld devices. The reason is quite simple: The more completely the industry exploits standard building blocks, the more quickly improvements in those core circuits can yield advancement in every application of the building blocks.
While it's impossible to predict which architectures will win, it's a safe bet that the champions will not only be very good at all kinds of information processing, but also cheap to make and economical, in power terms, to use. The leading designs will involve compromises beyond those any Roman engineer had to make, so it's possible the arguments among computer designers will never end. Roman roads may have been largely planar, but computing technology exists in a multi-dimensional space. For all we know, the killer chip could turn out to be a multicore ARM rather than a CPU familiar to computer users.
This year will also be a crucial one for communications. The networking world has long since consolidated on Ethernet for wired interconnection, but wireless technologies are too young to settle down, particularly when it comes to the last mile linking mobile devices to terra firma, where the glass houses are built. It is starting to look like ideas that have caught on in Japan and are now spanning Europe, including not only GSM for low speed cellular networks but also 3G (WCDMA) for midrange mobile communications, will succeed. But it's far too early to write off WiMAX, an offspring of technology that supports wireless connectivity in homes, offices, and coffee shops.
In the end, computers will have to support transactions originating on mobile devices as well as they support transactions carried out on desktop and laptop personal computers. If it becomes clear that the end user side of Internet commerce will reach past operating systems and touch base in Java country or inside browsers, there will be a milestone that is unloved in Redmond even if it causes jubilation in Googleville.
Maybe the companies erecting milestones on the digital highways should put in a little more time studying how the Roman road system worked out over the long haul, particularly when it comes to unintended consequences. When the Empire declined, producing more millstones than milestones, barbarians used its roads, which of course ran both ways, to more efficiently sack and pillage Rome.
— Hesh Wiener February 2007