Published on OpenP2P.com (http://www.openp2p.com/)


The Great Rewiring

by Richard Koman
08/20/2001

P2P and Web Services Speaker

Clay Shirky will deliver his keynote, "The Great Rewiring," Wednesday, Sept. 19 at the O'Reilly P2P & Web Services Conference.

[Photo: Clay Shirky at February's O'Reilly P2P Conference]

Related Articles:

Interoperability, Not Standards

Hailstorm: Open Web Services Controlled by Microsoft

Next Step for P2P? Open Services



A year ago, peer-to-peer was a nebulous concept, and many people had trouble understanding exactly what it meant and how it encompassed everything from Napster and Gnutella to instant messaging and distributed computing. To many people, Clay Shirky captured the essence of P2P with his observation that users' PCs were essentially the "dark matter of the Internet": intermittently connected machines cut off from the DNS system that makes computers visible and reachable.

Shirky kicks off the O'Reilly Conference on P2P and Web Services in September with a keynote titled "The Great Rewiring." OpenP2P.com editor Richard Koman reached Shirky by phone to talk about this "rewiring," Web services and the future of P2P.

Richard Koman: Your keynote is called "The Great Rewiring." What does that mean?

Clay Shirky: The argument I'm advancing is partly historical and partly rooted in recent changes in technology. The argument I want to make is that what we're seeing is the result of a bunch of forces that were put in place about 15 years ago. In the first years of the 1980s, both halves of what we think of as the modern computing landscape launched. On January 1, 1983, the ARPANet switched over to running on the Internet protocol and became, for all intents and purposes, the Internet we recognize today. Then in January 1984, we saw the PC with a GUI launch in the form of the Apple Macintosh. And for 10 years those two revolutions didn't connect to one another. PCs were very rarely connected directly to the Internet. The two revolutions were sort of going on parallel tracks.

Then a period that I'm calling "the great wiring" came about -- from '94 to '99 -- when Mosaic, and then later Netscape and IE, gave people a reason to connect their PCs directly to the Internet for the first time. But those connections were really kind of weak and lame -- temporary IP addresses and so forth. And the change that peer-to-peer brought about seems to me to be a "great rewiring": a way of rethinking the ways in which the personal computer, with all of the expectations that it brings to the user in terms of local speed and graphic interfaces, can be connected more directly to the Internet.

Koman: So this is specifically a P2P innovation? You put P2P right at the center of this rewiring?

Shirky: Peer-to-peer is the more kind of philosophical question; Web services right now is a more technological set of questions. But I think the two can actually inform one another quite a bit. It's plain to anybody looking at the peer-to-peer movement that one of the things it's critically lacking is an agreed-upon set of infrastructure and data standards. This is what Web services is trying to create, obviously, at its core. It seems likelier to me that peer-to-peer will converge on standards pioneered by the Web services people, rather than on standards arising directly out of the peer-to-peer world.

Koman: As far as standards in the peer-to-peer world, it seems like it's been a complete nonstarter for the past year.

Shirky: Exactly right. The only things we have in the peer-to-peer world that are even starting to look like standards are essentially bilateral interop agreements around certain technologies. "Here is a way you can write XML to the Groove network" and "Here is a way to use Jabber to pass XML documents in real time." But really those are hardly standards. They're barely application frameworks.

Web services, by starting with a standards-driven goal, stands a much better chance in my mind of providing not only the standards and infrastructure for Web services but the spill-over: standards and infrastructure for peer-to-peer. So the obvious win for peer-to-peer from Web services is better interoperability. And I think there are a couple of important things the Web services people can learn from the peer-to-peer people as well.

The first is that HTTP is not the be-all and end-all of transport mechanisms. What the peer-to-peer people have been really good at is finding different transport mechanisms for different needs. Jabber will handle things in real time, logging presence and identity. Napster took the idea of HTTP but rewrote it in asynchronous mode so you wouldn't get traffic jams. Companies like 3Path are building on top of SMTP as a transport mechanism. There are people looking at BEEP or BXXP. And one of the things I think the Web services people are going to learn from peer-to-peer is that there are reasons to use protocols other than HTTP in certain circumstances. When the wealth of innovation in the peer-to-peer world is exposed, I think it's going to be very valuable for the Web services people.

The other thing I think Web services can learn from peer-to-peer is that the idea of client-server is an attitude about transactions but not an attitude about machines. What peer-to-peer has shown us is that a machine can as easily be a client in one second and a server in the next, or indeed a client and a server at the same time. So notions like "Oh, there's the Web server and there's the Web browser and those are two fundamentally separate things" are, I think, going to break down in the Web services world. And we've seen a great deal of innovation with clervers and transceivers and nodes and all these other words that the peer-to-peer people have for a device that acts as both client and server in different environments.

Koman: What is the Web services people's understanding of client-server right now? Fairly traditional?

Shirky: It's almost completely traditional. What essentially they are trying to do is take what the Web did for publishing and apply that to a computing environment in general, to say that simple requests and structured replies can form the backbone of everything from exposing business processes to remote procedure calls. But they're still very much in this mode of request-and-response, where one machine is consistently acting as the client and the other machine is consistently acting as the server. And I think what we've seen with Napster is if you come and are downloading from me a Frank Sinatra song, a Motown song and a Talking Heads song, I might think, "Well, that guy's got really diverse taste. Why don't I go see what he's doing." So at the same time I'm being a server to you, you are being a server to me.

In the Web services world, you can readily imagine instead of having a cross-border tax calculating engine, having two machines essentially communicating with one another bilaterally: "Hey, here's what the tax rates are in my country. What are they in your country?" And that kind of two-way conversation is I think going to become a more integral part of Web services than people are now imagining.
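[Editor's note: A minimal, purely illustrative sketch of the kind of symmetric exchange Shirky describes -- each node is client and server at once. The /tax-rate endpoint, the ports and the rate data are assumptions invented for the example, not part of any product discussed here.]

```python
# Illustrative sketch: a node that both serves a (made-up) tax rate
# and fetches one from a peer, acting as client and server at once.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

MY_RATE = {"country": "US", "vat": 0.0}  # hypothetical data

class RateHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/tax-rate":
            body = json.dumps(MY_RATE).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

def serve(port):
    HTTPServer(("", port), RateHandler).serve_forever()

# Act as a server in a background thread ...
threading.Thread(target=serve, args=(8001,), daemon=True).start()

# ... and, in the same process, act as a client to a peer node.
def ask_peer(host, port):
    with urllib.request.urlopen(f"http://{host}:{port}/tax-rate") as resp:
        return json.loads(resp.read())

# e.g. print(ask_peer("localhost", 8002))  # another node running the same code
```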

Koman: So does that equal something more like "peer services?" I mean is the term "Web services" a holdover from this traditional view of client-server?

Shirky: Yes, absolutely. "Web services" is a holdover from traditional client-server, and it started with the notion that HTTP was such a flexible transport protocol that it was going to become the baseline of everything new, and that all we needed to replace was HTML, and we needed to replace it with XML. I think what we're seeing is that of a traditional Web services stack -- HTTP, SOAP, WSDL and UDDI -- the only thing in that stack that's really questionable is the HTTP part. It might as well be SMTP or FTP or even BXXP, or what have you. But I think a little bit like peer-to-peer, the label "Web services" is going to stick even if the definition becomes mutable. You know, five years ago "the Web" meant HTML documents moderated or mediated by HTTP. "The Web" now means the publicly accessible Internet, so I think we're not only going to keep the word "Web services," I think inasmuch as it becomes popular, it's going to change the meaning of the word "Web" away from the narrower, protocol-driven definition and toward the larger public Internet definition.

Koman: So as far as morphing the definition of "the Web," though, "the Web" is still defined by content or data servers somehow assembling files and handing them to requesters.

Shirky: That's right. Yeah, I think that that will change. What I don't know is whether or not people will really experience themselves as running Web servers when they are offering services out from under the cloud. But given that Web services is the narrower of the two, between Web services and peer-to-peer, my guess is that wherever the two overlap, Web services is going to be the name people use because it's easier to get your hands around. Peer-to-peer, as many people have commented on, is a very kind of slippery and open-ended label for a lot of separate movements. Web services feels like something you can point to. So my guess is that while the changes peer-to-peer portends for Web services will be more fundamental, the name "Web services" is probably going to be around for a lot longer.

Koman: This seems to be a big problem for peer-to-peer in terms of being taken seriously and being able to move it along technically. So you pointed out that standards are not happening in peer-to-peer and that's because it's not technically explicit enough for people to dig their hands into.

Shirky: Right. Peer-to-peer is such a vast idea -- which is to say essentially there are all of these edge-connected devices, there are a handful of novel ways of addressing them to collect, collate or aggregate their resources, and there are a wide number of ways of then exposing those resources to other users. Suddenly you're dealing with something that encompasses computation, storage, bandwidth, human presence. You're dealing with everything from architectures that are sort of inside-out star topologies, like SETI@Home, all the way to completely decentralized webs as with Gnutella, and you get all of those people in a room and many of them simply don't have that much to say to one another. The Napster people and the SETI@Home people can each admire what the other has done with those unused resources, without them having a large number of sort of technical solutions they can discuss with one another.

Web services is in some ways a less interesting problem, but it's certainly a more tractable problem. It has what many people feel is lacking in peer-to-peer; you can point to something and there is a generally agreed-upon definition of it. One of the arguments I want to make in the idea of "The Great Rewiring" is that peer-to-peer is going to go away -- "go away" in the same way that the telephone went away, which is to say become ubiquitous -- as we take it for granted that the edge-connected devices can be full participants in the network. Web services, on the other hand, are going to sort of become increasingly high-profile as we think of ourselves as subscribing to particular services directly over the network.

Koman: We just saw P2P go through this rather nasty hype trajectory, where at first there was all this publicity and interest from VCs and then a rash of articles saying that P2P was overblown, that there are no real business opportunities, and so on. So in three months' time, will we be there with Web services again? You know, Web services is dead, and we're off chasing the next big thing?

Shirky: I think that nothing is immune to the hype trajectory. That in a way is how things develop antibodies. And we've certainly seen the peer-to-peer world develop antibodies after it rode that particular hype trajectory. I think that the Web services hype trajectory is going to be somewhat different because Web services is primarily a B2B arrangement.

The only reason to have Web services is to have something that is both automatable and can operate with machines at both ends of the transaction. Humans will never use Web services except at the end of the pipeline. And because of that, it's never going to have the profound effect that things like Napster and ICQ had on people's perceptions of the computing environment. And so you're not going to have front-page stories on USA Today about how Web services is transforming things.

That having been said, there will be a number of businesses that slap a whole bunch of four-letter acronyms on their front page. And there will be venture-capital money going into those businesses. And then there will be a sorting of the good from the bad. Not only do I think that's inevitable, I think that any energy spent attempting to avoid that is probably pointless. What I do hope comes out of that is that there are some fairly real criticisms of Web services, because right now the ideas as they are being put forth have a couple of very significant weaknesses.

One of the things that made the Web the Web is this sort of very narrow list of primitive concepts that you were allowed to work with. There is a URL; there are a handful of methods -- GET, POST, PUT, DELETE; and then there are a handful of ways of including extra data -- a query string, an additional path, standard input and a cookie. And that's it. I mean I can write down what you need to use as a Web developer on a postcard.
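[Editor's note: As a deliberately minimal illustration of that postcard-sized toolkit, the sketch below issues a single GET request carrying a query string and a cookie. The URL, query parameters and cookie value are made up for the example.]

```python
# The Web's whole toolkit in one request: a URL, a method (GET),
# a query string, and a cookie. The values here are invented.
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({"q": "peer-to-peer", "page": "1"})
req = urllib.request.Request(
    url=f"http://www.example.com/search?{params}",  # URL + query string
    method="GET",                                   # one of GET/POST/PUT/DELETE
    headers={"Cookie": "session=abc123"},           # extra data via a cookie
)

with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read(200))
```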

Because Web services has said, "We're going to start from scratch and we're going to make everything not only definable but we can even have late binding," it's all up in the air again. So my hope is that when there is the crescendo of hype and counterhype, that at least some of it has to do with the technology and the places where Web services have real weaknesses that need to be addressed, rather than just the fact that there are venture capitalists who don't know what to do with their money.

Koman: So that seems to suggest that if Microsoft can offer a good definable framework for Web services in .Net, it's likely that they could win.

Shirky: Well, yes, although there's an open question in my mind about what constitutes "winning" in this world. And this is the big issue as always with Microsoft. Microsoft does not like to win 40 percent of anything. And yet in a world of interoperability, I'd argue that 40 percent is about the high-water mark. We've seen Microsoft say with the HailStorm announcement that for the first time they're going to allow Microsoft software to be used on non-Microsoft devices. The real question is whether or not down the road they will attempt to restrict, alter or reverse that idea in order to try to transfer the monopoly from the PC desktop directly into the network.

I know that there are elements in Microsoft that would like to do that. On the other hand, the world is more interoperable than it was when Windows 95 launched, so Microsoft has steadily been losing ground on the ability to lock out interoperability. I'm hopeful that this will be another phase in that change from Microsoft being a monopolist to being a significant competitor. So I don't think that there's any doubt that .Net will be a significant and ultimately easy-to-use and adaptable framework for Web services. I think the question is how much openness can that company tolerate?

Koman: Well, Dave Stutz would say to us that Microsoft realizes that they can't own every platform that they want to play on and so interop is a good thing for them because it lets them reach people where they don't own their platform.

Shirky: And were Dave Stutz the CEO, I would absolutely take that to the bank.

You know, I've had these conversations with Stutz, I count Stutz as a friend. I think he's fantastic and I think his instincts are exactly in that mode, but I also sat on the stage [at OSCON 2001] with both Craig Mundie and David Stutz and I sensed that what Mundie was saying showed that the management committee is being guided by a different set of principles than Stutz is using in his work.

So again, Microsoft hires smart developers, they ship good code. There's no reason to think that a Microsoft without the desktop monopoly would not nevertheless be a ferocious competitor and very successful in their space. But the strategy tax that Microsoft is willing to pay in order to preserve or extend its desktop monopoly is in many cases very, very high, and the question I think that's before us is, "Does that strategy tax extend to things like breaking SOAP or launching copyright- or patent-based lawsuits against otherwise open technologies?" And the jury's obviously very much out on that.

Koman: So moving on to the subject of another panel you're hosting at the conference, is P2P dead?

Shirky: Well, you can't have a conference like this without talking about the P2P backlash and the attempts to bury P2P. There was one criticism that it was overbroad, that there's nothing to hold these technologies together except TCP/IP. There was another criticism that came from Lee Gomes of the Wall Street Journal that said it was an investment fad that's sort of come and gone as quickly as push and so forth. And there's no particular agenda for that panel except to say we need to talk about that. There's obviously a lot of hype that came out around peer-to-peer, and I think we essentially need to sort out what are the real and underlying changes from what are sort of transient effects, and that panel is an attempt to sort of bring those issues up. I actually don't know what people are going to say except that I think it's going to be an interesting conversation.

Koman: One thing we haven't really talked about that is actually front and center is the recording industry versus Napster, and the way that Napster has been tarred as an application of pirates, and the way that P2P has been tarred by Napster as the technology of pirates.

Shirky: The Napster situation is obviously still emerging. You know, I think that Napster is going to be one of those technologies that sort of presages a new world even though the company that founded it may not last ...

Koman: I should just expound on the question actually. I just got a call from a reporter who's working on a story apparently about some congressmen that are saying that P2P networks are a haven of pornography. So, it's another black mark on P2P and sort of an underlying theme here is that these things [P2P applications] are just all bad. They're at best about stealing music. At worst, they're about child pornography.

Shirky: Right. Well, any time a technology comes along that lets people do something new outside of an area that was previously heavily controlled, there will always be elements of society that are concerned about that. I mean it happened with the telegraph. It happened with the telephone. One of the big concerns when the telephone came about was that men and women would be able to talk to one another directly without going through intermediaries, and God knows what that could lead to. This has been a constant theme. The car was, of course, seen to be a tremendously destructive force because freedom of mobility outside of your community meant heaven knows who you would be consorting with and so forth.

The easiest thing I think, or rather perhaps the most important thing to say about many of these criticisms is that, of course, they're true. When you have a car and you have a higher degree of personal mobility, you do meet different people. Plainly the telephone has had a huge effect in breaking down aspects of traditional society. And there is certainly some loss in those things. I think what we have to consistently stand up for is to say that the gain in freedom is greater than the loss.

Every medium ever invented has been used to distribute pornography. That's something that's true about humanity, not about technology. And I don't mean in the past 50 years. I mean since the Babylonians figured out writing. So this is not even a criticism. That's a criticism that can be leveled at any given technology. I think what people are particularly concerned about is for the past 100 years all of our two-way media has been narrow-band, which is to say I can only address one or two other people -- the telephone, the fax, letters. And all of our broadcast media are one-way: radio, television, and so forth. To suddenly have a two-way mass media threatens a huge number of vested interests who much prefer the bottlenecks we have.

This is not the first piece of propaganda about peer-to-peer; it won't be the last. And I think the two things that we have to stand up for are: First, the enormous amount of intellectual property created in the world today is destined for an audience of five rather than 5 million -- which is to say, I'm writing a report for my boss and three other people. And any technology that makes it easier to share intellectual property is good, and we will have to work out the issues surrounding copyright holders. But there is simply no point in suggesting that some technology ought not to be allowed to exist because it changes the way intellectual property is dealt with.

The second thing is that there are lots and lots of normal models for selling intellectual property as a service, rather than as a product. The Recording Industry Association of America is mostly in this game to save their own skins. They're not actually acting on behalf of the music industry. The music industry knows perfectly well how to make money off of music as a service. They do it with every radio station in the world every day. So the sort of narrow sets of concerns around Napster and peer-to-peer technologies I think can't be allowed to distract us from the fact that this is a huge boon to personal freedom and in particular it's a huge increase in the flexibility and fluidity with which we can deal with intellectual property, and that new business models are going to be required to deal with that, that the genie is not going back in the bottle.

Koman: I'm just struck by how much more virulent the social conversation is over Napster and peer-to-peer than over, say, the Web. There was concern about child pornography on the Web, but the Web being essentially a one-way publishing or broadcast medium as opposed to a two-way medium -- is that the difference?

Shirky: Yes. When the Web came out, everyone focused on the fact that it was a visual medium, and that's why advertisers were interested in it. But plainly the fact that it cannot be annotated or followed up on was the real reason advertisers were interested in it. You can make outlandish claims without anyone contradicting you.

And you can see the effect of that on the corporate landscape by noting that every time a technology comes along like Third Voice that threatens to let people annotate Web sites, there's a huge backlash and "No one is going to have a conversation about my Web site that I don't control." Peer-to-peer, by being much more Usenet-like, much more IRC-like, is more resistant to that kind of control and is therefore upsetting the people for whom that bottleneck is a useful bottleneck.

Koman: In our report, "2001 P2P Networking Overview," we talk at some length about making a shift from the Center Net to the Edge Net, and one of the points that Lucas Gonze makes is that that's a shift from being machine-specific to not really knowing which machine you're talking to. It's a shift from machine-centrism to content-centrism. That is, I don't care who you are. I don't care what machine you have or what your IP address is. I just want this file and I'll take any copy of it that's out there.

Shirky: I would argue in fact that what the peer-to-peer world has shown us is that all kinds of things can have addresses. And that when you point to Napster, of course you're going to end up with a content-centered model. But when you point to ICQ, you're getting a people-centric model. And if you're going to start ending up in a world where Web services are offered over, say, Jabber, you're going to end up with a service-centric model and so forth.

It's the same problem people have been having in the hardware space. Everybody wanted to know, "What comes after the PC?" And the answer, of course, is "everything." Everything is going to start happening all at once. We've got the PDAs, but we've also got mainframes running Linux and we've got wristwatches and, you know, there is never going to be a sole class of hardware that mediates all our connections together.

In the same way I think that this is the lesson of peer-to-peer: It's the decentering of the address space. We've just lived through 25 years of a completely machine-centric world, where every other protocol is hung off of a machine address somewhere -- you know richard@oreilly.com, telnet.oreilly.com, www.oreilly.com. And we're now moving into a world where some things don't have that. When you and I collaborate on Groove, I don't care where your machine is. When you and I talk on ICQ, I don't care where the machine you're using as a terminal is, and so forth. So I think that the huge legacy of peer-to-peer -- even if peer-to-peer is "dead," I think the legacy that is going to fundamentally change the landscape is the notion that address spaces no longer have to be tied to machines, but they don't have to be tied to content either.

Koman: Content, people, resources.

Shirky: Right. Any resource that can be named can be addressed.
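[Editor's note: One way to see what "decentering the address space" can mean in practice is content addressing: the address is derived from the thing itself rather than from the machine that holds it. The sketch below is an assumption for illustration only -- it is not how Napster, Gnutella or any system named here actually addressed files.]

```python
# Content-addressing sketch: the "address" is a digest of the bytes,
# independent of which machine happens to hold a copy.
import hashlib

def content_address(data: bytes) -> str:
    return "sha256:" + hashlib.sha256(data).hexdigest()

song = b"...some file contents..."
print(content_address(song))
# Any peer holding byte-identical data derives the same address,
# so a request can be satisfied by whichever copy is reachable.
```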

Koman: One other thought, this one about Code Red, the worm that among other things launched a massive denial-of-service attack against the White House's Web site. Denial-of-service attacks seem to me to be a problem of the center of the Net. In a distributed edge Net, where you're free of being tied to one actual machine, and you have sort of massive redundancy of content at least -- that is to say, there are 100,000 copies of whatever the hit song is -- you don't care what the status of any one machine is as long as you can get one of those redundant copies. Does that make the Net more stable, in that things like DoS attacks become irrelevant if you don't really care about the status of any single machine?

Shirky: Well, you could have a denial-of-service attack on Napster by attacking the central look-up servers, server.napster.com and server2.napster.com. Gnutella is more resistant to that kind of attack. I think the real core of the resistance is not so much denial of service, although that's a good example, as just another iteration of something we have seen in the history of computer engineering over and over and over again -- which is that when a certain part becomes so unreliable, relative to the application it's deployed for, that it's no longer to be trusted, the solution is almost invariably to move to redundancy: repeating that part across many cheap instances. When the CPU can't be tweaked any more, we go to parallel processing. When a disk drive isn't reliable enough for a bank to entrust its data to, we go to RAID.

And so what Napster has shown us is that you can build a redundant array of inexpensive servers, and it would be much harder to bring down at the level of the individual service. If you wanted to prevent anyone from downloading a certain Britney Spears song on the Gnutella network, it would be nearly impossible. And I think that is a model that corporations are going to begin to respond to, because they are in fact the people who own thousands and tens of thousands of desktops and for whom redundancy and backup is a permanently critical issue. And since they've already spent the money on the hardware, they might as well use it for these other kinds of things. So I think it is not so much a question of center and edge as a question of redundancy: an array of inexpensive and unreliable parts is often a superior strategy in general, and I think denial-of-service attacks show us another place where that is the case.

Koman: Right, the dichotomy is between highly reliable but very rare machines versus very unreliable but massively redundant --

Shirky: Exactly. When Napster was at its height, the chance that any given Napster server was online at any given moment was very small, but the chance that you could get a copy of "Oops, I Did It Again" was 100 percent, with perhaps no copy being more than 10 percent reliable. There was never a moment when you couldn't get that Britney Spears song. That is computationally a really interesting model, in engineering terms, because it follows on from a lot of what we've seen. Napster is forward-error correcting in a sort of metaphorical way by providing tremendous redundancy.
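[Editor's note: The arithmetic behind that claim is simple redundancy math. Assuming, as Shirky suggests, that each individual copy is only about 10 percent likely to be reachable at a given moment, the chance of finding at least one of n independent copies is 1 - (1 - p)^n. The numbers below are made up to match his rough figure.]

```python
# Redundancy arithmetic: availability of "at least one copy" out of n,
# when each copy is independently reachable with probability p.
def availability(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

p = 0.10  # Shirky's rough figure: any single copy ~10% reliable
for n in (1, 10, 50, 100):
    print(f"{n:4d} copies -> {availability(p, n):.4%} chance of success")
# With 100 copies at 10% each, availability is already ~99.997%.
```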

One of the other things -- I mean the equivalent of a denial-of-service attack on Napster was when a bunch of boneheads from the music industry announced that they were going to "flood" the Napster network with bogus copies of popular songs, not realizing of course that the point of Napster is that no one stores songs on their hard drive that they don't listen to. If I download a copy of something that purports to be "Wild Horses" by the Rolling Stones and is in fact an anti-Napster screed, I'm going to delete it. So good content propagates in the system and bad content falls out. And it was a mark of how little people understood the value of the Napster network that they thought they could even begin to flood it with bogus content.

Koman: So back to your keynote theme, and we'll finish up. Whether P2P is alive or dead, it will have a lasting impact in terms of rewiring of the Net, of re-architecting the Net?

Shirky: I wouldn't even say re-architecting. The Net is like a car whose engine you tinker with even as you're driving it. So when I think about peer-to-peer, I don't think, "Oh this is going to replace everything that's gone before." I think that what it's done is put a whole lot of new tools in the tool box. And for all the hype that came and went, there is an army of 23-year-olds for whom what Napster did wasn't a revelation or a revolution, it was just new information. And now they think, "Oh, if we need a 600-gig hard drive, we don't have to buy a 600-gig hard drive. We just need 600 people to give us a gig on their PC."

Someone out there right now is working on an application that needs 30,000 sound cards to run. I don't know what that application is, but I'm going to be awfully interested when it launches. And these kinds of notions, that the domain name system is not the only way to address the IP network, that aggregated resources can either be more reliable or more scalable than buying and bundling all the applications in the center ... These tools and techniques are going to become normal during the next five years until we will take it for granted that certain classes of applications simply distribute the load across the network and not even really notice that that was something that didn't become part of the general lexicon until the last couple of years.

Richard Koman is a freelance writer and editor based in Sonoma County, California. He works on SiliconValleyWatcher, ZDNet blogs, and is a regular contributor to the O'Reilly Network.


Copyright © 2009 O'Reilly Media, Inc.