The Protocol Problem

by Doc Searls

Does the name Ungermann-Bass ring a bell? If it doesn't, chances are Sytek, Corvus, Bridge and Excelan are just as mysterious. Yet there was a time, back in the early- to mid-eighties, when these were big names in networking. If you read Data Communications back then, or Communications Week, you would have seen a lot of stories about (and advertising by) those companies, alongside more durable names like 3Com, IBM, Digital Equipment Corp. and Wang.

In those days the arguments were mostly about “pipes and protocols”. IBM pushed Token Ring (IEEE 802.5), which ran on special IBM wiring. DEC had a breed of Ethernet (802.3) that ran on its own coaxial cable. Corvus had Omninet, an Ethernet variant that ran on twisted pair. Wang had Wangnet, which ran on...something, I forget what. Same with AT&T's ISN (Information Systems Network). Everybody had their own arcane implementation of one “standard” datalink protocol or another. Each required its own special topology, of course, as well as its own transmission methods. You couldn't get away from articles comparing broadband against baseband and star vs. bus vs. ring topologies.

Around 1987 or so I did a bunch of work for Ungermann-Bass, which specialized in big networking jobs for big companies. U-B was naturally attracted to MAP, the Manufacturing Automation Protocol developed by General Motors in 1982 to standardize networking within the company's many factories and manufacturing facilities. MAP came to be known as MAP/TOP after it was mooshed together with Boeing's Technical Office Protocol. MAP/TOP was conceived around IEEE 802.4, which required a bus topology, although it distributed data with a token in the manner of IBM's Token Ring networks. This was supposed to be a more robust and noise-immune approach than the more popular Ethernet alternatives.

U-B's efforts were well meaning and highly regarded by the press but were undermined by Ethernet's success, even on factory floors. Not only was Ethernet relatively cheap, but it integrated nicely with networked PCs, which by default used Ethernet as well—thanks to the successful efforts of companies like 3Com, which made Ethernet interface cards by the millions.

To give U-B all due credit, there was no way to know MAP wouldn't catch on, or even that the combined industrial heft of General Motors and Boeing couldn't lever the market to 802.4. Even at hundreds of dollars apiece, 3Com's Ethernet cards were cheap by the standards of the day, and they were ubiquitous. As for higher-level transport and internetworking protocols, there was even less reason to believe that TCP/IP would ultimately define the most universal network ever created: the Internet.

What's amazing to me now, looking back over the years, is how long it takes to establish a given protocol, and how so many huge companies bet and lost their ranches on protocols that markets never ratified.

Today we're accustomed to thinking of “internet time” as something that moves very fast. We credit Moore's Law with improving new computers while obsoleting old ones in less time than it takes to grow a crop of asparagus. Yet protocols mature in about the time it takes an apple seed to become a fruit-bearing tree. Craig Burton, CEO of JanusLogix, says,

The time it takes to invent, create, develop and standardize on a protocol for any given service is way out of alignment with what is happening in the marketplace. Take LDAP. LDAP v3 was introduced some six years ago at this point. You think the Linux kernel takes a long time to move? It is lightning fast compared to how fast a protocol moves.

I just checked and it's now closer to seven years. The same thing seems to be happening, by the way, with IPv6. Any bets on how long it's going to take before it becomes a de facto standard?

So the protocol problem is a big reason why the Internet is so successful yet so limited.

Look at the core protocols that enable the Net's basic services: HTTP, FTP, LDAP, SMTP, POP3, IMAP and SNMP, to name seven. In effect, these are the Net's true infrastructure because they govern the few services the Net can universally support. The Net grew around this set of protocols precisely because they were simple, unencumbered by corporate ownership and easily ubiquitized. The X.400 standard defined a rich set of e-mail functionalities, but it was a pain in the butt to implement. SMTP and POP3 were free and easy, so those protocols came to serve as the universal infrastructure for e-mail service.
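To appreciate how free and easy SMTP really is, consider how little it takes to speak the protocol directly. Below is a minimal sketch, in Python, of the plain-text conversation SMTP defines; the server name and addresses are placeholders for illustration, not real hosts.

    # A minimal SMTP exchange over a raw socket, showing the handful of
    # plain-text commands the protocol defines. "mail.example.com" and the
    # addresses are placeholders; point this at a real server to run it.
    import socket

    def send_line(sock, line):
        sock.sendall(line.encode("ascii") + b"\r\n")
        print("C:", line)

    def read_reply(sock):
        reply = sock.recv(1024).decode("ascii", "replace").rstrip()
        print("S:", reply)
        return reply

    with socket.create_connection(("mail.example.com", 25)) as s:
        read_reply(s)                                  # 220 server greeting
        send_line(s, "HELO client.example.org")
        read_reply(s)                                  # 250 OK
        send_line(s, "MAIL FROM:<doc@example.org>")
        read_reply(s)
        send_line(s, "RCPT TO:<editor@example.com>")
        read_reply(s)
        send_line(s, "DATA")
        read_reply(s)                                  # 354 start mail input
        send_line(s, "Subject: Hello\r\n\r\nProtocols matter.\r\n.")
        read_reply(s)                                  # 250 message accepted
        send_line(s, "QUIT")
        read_reply(s)                                  # 221 closing connection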

Yet even today no internet protocols support the rich and complex file, print and directory services that come standard with, say, Novell's NetWare.

Many internet services remain undeployed (or underdeployed) for lack of enabling protocols. Instant messaging (IM) is one example. The fact that any of us can download an IM client from AOL or Microsoft doesn't mean that the Net itself possesses any kind of IM standard. AOL, Microsoft, Yahoo! and Lotus all have IM systems that use arcane, proprietary and closed protocols, each restricting its own clients to its own servers. Thus, AOL's AIM clients are like browsers that can only visit one web site. Without the IM equivalent of an HTTP or an SMTP, there is no IM equivalent of Apache or Sendmail. The only serious candidate at this point is Jabber's protocol (www.jabber.org), but it's still new—more shrub than seed, but far from a fruit-bearing tree. Meanwhile, the Net has no IM infrastructure.
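For a sense of what an open alternative looks like, here is a hedged sketch of a Jabber message stanza, the plain XML that any compliant client or server can generate and parse. The addresses are made up for illustration, and in practice the stanza travels inside an authenticated XML stream; this only shows the shape of the thing.

    # Building a Jabber (XMPP-style) message stanza as plain XML. The JIDs
    # below are placeholders; a real client would send this over an
    # established, authenticated stream to its server.
    from xml.sax.saxutils import escape

    def message_stanza(sender, recipient, body):
        return (
            f'<message from="{escape(sender)}" to="{escape(recipient)}" type="chat">'
            f"<body>{escape(body)}</body>"
            "</message>"
        )

    print(message_stanza("doc@jabber.example.org",
                         "editor@jabber.example.com",
                         "Open protocols beat walled gardens."))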

Security is another one. Computing has been around for 50 years or more, and we still think about security largely in terms of firewalls. (I got an e-mail this morning from a guy who can't see one of our web pages here at Linux Journal because his Fortune 500 company's firewall excludes pages that contain the word “suck”.)

Firewalls are a medieval solution to a 21st-century problem. Their models are forts and castles, moats and drawbridges. The result is what Craig Burton calls

Web noir—a technological Dark Age obscured by the apparent brilliance of the Internet, as we know it. The dark—that noir—is what we don't see, what we don't know because it doesn't yet exist.

(See www.craigburton.com/stories/storyReader$19.)

The first bright hope for a renaissance, Burton says, is XML (eXtensible Markup Language). With the recent development of SOAP, WSDL, UDDI and WSIL, XML makes web services possible. Some of these services look so promising that Microsoft, IBM and Sun are pushing their own web service frameworks: .NET, J2EE and Sun ONE, to name three.
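What a web service call actually looks like on the wire is an XML envelope carried over HTTP. Here is a hedged sketch; the endpoint URL and the GetQuote operation are invented for illustration, and a real service would describe its own operations in a WSDL document.

    # POSTing a SOAP 1.1 envelope over HTTP. The endpoint and the GetQuote
    # operation are hypothetical; real services publish theirs via WSDL.
    import urllib.request

    SOAP_ENVELOPE = """<?xml version="1.0"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <GetQuote xmlns="http://example.com/stockquote">
          <symbol>LNUX</symbol>
        </GetQuote>
      </soap:Body>
    </soap:Envelope>"""

    request = urllib.request.Request(
        "http://example.com/services/stockquote",
        data=SOAP_ENVELOPE.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": "http://example.com/stockquote/GetQuote"},
    )
    # response = urllib.request.urlopen(request)  # would return a SOAP reply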

Yet for all their inclusiveness (especially in promotional rhetoric), frameworks have an exclusive effect. They lock in customers and lock out competitors. If the Net has proven anything, it is that exclusive architectures, frameworks and environments all fail as universal infrastructure. Yet they have their appeal, even to people who ought to know better. Take Mark Forman, the White House technology czar. In April 2002, the Seattle Times reported that Forman was operating on the erroneous assumption that Microsoft's Passport technology might serve as a kind of national identity infrastructure.

The Seattle Times wrote,

Forget about a national ID card. Instead, the federal government might use Microsoft's Passport technology to verify the on-line identity of America's citizens, federal employees and businesses, according to the White House technology czar.

On Sept. 30, the government plans to begin testing web sites where businesses can pay taxes and citizens can learn about benefits and social services. It's also exploring how to verify the identity of users so the sites can share private information.

(See seattletimes.nwsource.com/html/businesstechnology/134438173_passport18.html.)

Passport will fail that mission for the same reason AOL's and Microsoft's instant-messaging systems will ultimately fail as universal standards, no matter how popular they are today: they are owned by people with vested interests in keeping their properties closed and exclusive.

Identity should be another of those fundamental web services. Your self, your car and a drill press on a factory floor should all have unique identities that make sense in the universal context of the Net—someday. But how will that happen? Will it depend on yet another protocol or some new set of protocols? By what decade?

So let's raise a question: if we can't speed the evolution of protocols, can we at least find a way to reduce or eliminate their role as a gating factor for progress? Can we eliminate this drag on Moore's Law, and on our own sense of what's possible?

When we find the answer, Burton says, “the result will be as transformative as the discovery of calculus, which accelerated scientific revolutions and helped bring on the Renaissance.”

Doc Searls is senior editor of Linux Journal.
