P2P in Flash 10 Beta — the Questions Facing a YouTube, Skype, and BitTorrent Killer

May 21, 2008

As I’ve reported, the inclusion of P2P in Flash 10 Beta represents a fundamental disruption of the Internet platform. As with all disruptions, however, this one will progress in fits and starts. The details of the Flash 10 release limit the full power of its P2P features. While features like VoIP will be fully enabled, it will take some ingenuity to turn Flash 10 into a more generalized P2P platform. Here are the issues:

1) Flash Media Server (FMS)

You’ll need Flash Media Server (FMS) to take advantage of Flash P2P. At $995 for the “Streaming Server” and $4,500 for the “Interactive Server”, FMS is beyond the reach of most developers working on their own projects, severely limiting Flash P2P’s disruptive potential. In an ideal world, the new P2P protocols would be openly specified, allowing open source developers to write their own implementations. As it stands now, a single company controls a potentially vital part of the Internet infrastructure, and encryption will likely thwart the initial reverse engineering efforts of open source groups like Red5.

2) No Flash Player in the Background

As David Barrett (formerly of Akamai/Red Swoosh) has emphasized on the Pho list, Flash Player only runs when it’s loaded in your browser. As soon as you navigate to another page, Flash can no longer act as a P2P server. P2P programs like Red Swoosh, BitTorrent, and LittleShoot don’t have this limitation, and it means Flash can’t save web sites as much bandwidth as those full-blown applications can. This limits but does not eliminate Flash’s threat to CDNs. Sure, you could get around this using AIR, but that creates another major barrier to adoption.

3) Usability

While Flash 10 can save files to your computer and load them from your computer (essential for P2P), it pops up a dialog box each time either happens. This is an important security measure, but it cripples Flash 10’s ability to mimic BitTorrent: dialogs would pop up constantly to confirm that you, the user, had authorized the upload of each piece of each file.

4) Limited APIs

While all the required technology is there in the Real Time Media Flow Protocol (RTMFP), ActionScript’s API limits some of the P2P potential of Flash 10. P2P downloading breaks files up into smaller chunks so you can get them from multiple other computers. Flash 10, however, can only save complete files to your computer; you can’t save individual chunks. As a result, you’d have to use ActionScript very creatively to achieve BitTorrent- or LittleShoot-style distribution or to significantly lower bandwidth bills for sites serving video. It might be possible, but you’d have to work some magic.
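
To see why that matters, here’s a minimal sketch, in Python rather than ActionScript, of the chunk-and-verify pattern P2P downloaders depend on. The simulated peers and helper names are mine for illustration; none of this is Flash’s actual API.

    import hashlib

    CHUNK_SIZE = 4  # tiny chunks so the demo stays readable

    def split(data):
        return [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]

    # Two simulated peers, each holding every piece of the file.
    FILE = b"the quick brown fox jumps over the lazy dog"
    PIECES = split(FILE)
    HASHES = [hashlib.sha1(p).hexdigest() for p in PIECES]
    peers = [dict(enumerate(PIECES)), dict(enumerate(PIECES))]

    def swarm_download(peers, num_chunks):
        chunks = [None] * num_chunks
        for i in range(num_chunks):
            piece = peers[i % len(peers)][i]  # round-robin across peers
            assert hashlib.sha1(piece).hexdigest() == HASHES[i]  # verify each piece
            chunks[i] = piece
            # A native P2P client writes each verified piece to disk right
            # here, enabling resume and bounded memory use. Flash 10 has no
            # per-chunk save: it must buffer everything in RAM and write the
            # reassembled file in one user-authorized shot.
        return b"".join(chunks)

    assert swarm_download(peers, len(PIECES)) == FILE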

So, that’s the deal. There’s still a lot more documentation coming our way from Adobe, so there are undoubtedly useful nuggets yet to be discovered.

Even given all these limitations, however, the key point to remember is the Internet has a new, immensely powerful protocol in its arsenal: Matthew Kaufman and Michael Thornburgh’s Real Time Media Flow Protocol (RTMFP). While Flash might use it primarily for direct streaming between two computers now (think VoIP), it introduces the potential for so much more.

Keep your helmet on.


LimeWire Arista RIAA Deposition Recap

February 16, 2008

So I finished my grueling six-hour deposition an hour or so ago. Present at the deposition were Greg Bildson of LimeWire; Charles Baker, counsel for me and LimeWire; the RIAA counsel; RIAA special advisor Kelly Truelove; the counsel for Arista et al. from Cravath; the stenographer; and the videographer. I would have liked to have released and distributed the video of the deposition on LittleShoot, Gnutella, and my web servers as a clear demonstration of non-infringing uses, but it looks like it will not be publicly released for the time being.

I fear my testimony damaged LimeWire’s case in large part due to various discussions I’d had with Mark Cuban, Jim Griffin, Serguei Osokine, and others on the Pho list. Here’s a little excerpt I wrote on 10/27/06:

I believe passionately in p2p and believe it has a bright future, but I do not support the vast majority of p2p companies out there because they’re almost entirely devoted to infringement.

The Cravath lawyer highlighted this and several similar comments as indicating I think LimeWire is completely devoted to distributing infringing content. They successfully pinned me down on this point with precise “yes” and “no” questions, as in “do you have any reason to think you did not write that statement?” I don’t think LimeWire actively sought to make money from infringing content. I think LimeWire was in large part a victim of its historical time, a time when the Internet was still a baby and when users were not savvy about producing and distributing their own works. As a result, the vast majority of digital content available at the time was copyrighted, but only because that’s what the users had. YouTube was not possible then because there wasn’t yet a critical mass of users comfortable uploading videos to servers, and because bandwidth wasn’t cheap enough.

That said, LimeWire is primarily used for distributing infringing material, but it’s clearly the users distributing that material, against the intent of LimeWire’s creators, myself included. When I started working at LimeWire, we were building the Lime Peer Server and planning how Gnutella could be used to search for everything from apartment listings to cars. Despite our best efforts, those plans never came to fruition. My primary critique of LimeWire and of other p2p applications, Skype excepted, is that they didn’t think as creatively as they could have about other uses of the technology. The conversation on Pho took place in the aftermath of the YouTube sale, when the potential for distributing non-infringing content was obvious. I think we could have seen that sooner at LimeWire and could have more actively pursued a p2p-enabled YouTube using DMCA protections, but that’s easy to say in retrospect.

My comments on Pho were somewhat taken out of context. The Cravath lawyer succeeded in what apparently is the oldest trick in the book: put you to sleep with hours of mind-numbing questioning about the details of query routing hashes and long-forgotten forum posts before slipping in the key, potentially incriminating questions just when they think your brain has turned to complete mush. By the time they got to the questions on Pho, I couldn’t remember my name, let alone articulately clarify my thoughts on a forum thread from over a year ago. This prevented me from continually pointing out that the Pho forum threads were focused on the details of YouTube’s protections under the DMCA safe harbors and how they could apply to p2p.

Here’s another snippet from Pho they highlighted. I believe I wrote this in response to one of Jim Griffin’s comments:

I agree the underlying technology for LimeWire and Skype are similar. The point is that one makes all of its money off of infringing content while the other does not. You think that’s all great in the spirit of innovation. I think they should be as innovative with their businesses as they are with their technology, like Skype. You say they make money from the same source, I guess the technology. I think that’s ridiculous. There’s so much room to innovate with p2p outside of infringement that it’s mind boggling there hasn’t been more.

The key issue is that, while LimeWire clearly makes money from users’ infringement, it never intended that to be the case. It’s the content that’s infringing, not LimeWire. I simply wish we had thought bigger, beyond the existing uses of the technology, along the lines of what Skype was able to do. That’s not to say it would have been easy, however, and it’s not to say LimeWire is liable for failing to pursue more creative paths more vigorously.

As I emphasized continually in the deposition, we were always creating a generalized tool for media distribution. It was a tool for dynamically searching millions of computers for any type of content. We worked with universities around the world, particularly the Stanford Peers Group, on creating the most efficient algorithms for distributed search. Our competitors included Google and Yahoo as much as they did Kazaa, a point the Cravath lawyer failed to fully appreciate or take seriously, even though I could not have been more serious.

If you’re giving a deposition any time soon, my advice is to stay on your toes at all times and to watch out for the ol’ put-you-to-sleep-with-the-most-boring-questions-imaginable trick. It’s a trap.

Hopefully in the long run the First Amendment will matter more than making sure the record industry has plenty of cash to pay the most expensive lawyers in the business to help line their pockets.


Mark Cuban an Investor, Not a Lawyer

September 29, 2006

Mark Cuban came through New York yesterday and told a group of advertisers the YouTube founders “are just breaking the law.” Really, Mark? In fact, that’s far from clear, and Mark knows it. YouTube has substantial protection under the DMCA section 512 safe harbors, as Fred von Lohmann and others have made clear. They appear on particularly firm ground in terms of section 512(a), the law designed to protect people like Cisco from liability when they route infringing bits. Just like Cisco, the argument goes, YouTube is at the whim of an automated system where users are choosing to send those bits. While the DMCA safe harbors have not been tested much at all in this context, Cuban knows very well that YouTube could easily win this one.

So, why all the fuss? Cuban has an interest in the outcome of the online video wars. He’s a significant investor in Red Swoosh, the p2p content delivery network. I have friends at Red Swoosh, and I like what they do. Their technology makes YouTube look like the kids’ stuff it is. If Red Swoosh won the video wars instead of YouTube, the Internet would be a better, more efficient place with higher-resolution video. The trouble is, Cuban would also stand to make a heck of a lot of money, and I’m struggling to find another explanation for his crusade. He knows perfectly well he’s overplaying his hand with his predictions of YouTube’s demise. He knows perfectly well the legal questions hang in the balance, even tilting in YouTube’s favor in my own reading of the DMCA safe harbors. So why’s he doing it?

I’d love to hear any other explanations out there, as I like Cuban’s general style and thinking.  I’d love someone to tell me I’m wrong.


BitTorrent: Old Technology in a New Box

August 21, 2006

The myth of BitTorrent goes something like this: Bram Cohen, hacker extraordinaire, realized circa 2001 that it would be more efficient to break files up into pieces on different servers and to download those pieces separately. This would distribute the load across multiple servers, providing a more robust architecture for accessing the file. The trouble is, the practice was common well before BitTorrent came on the scene. Cohen simply wrote another implementation of a technology that had already become commonplace in the P2P community. The first implementation I know of was Justin Chapweske’s work on SwarmCast in 2000. As I remember it, Justin’s creativity pointed the way for us all.
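
For a sense of what swarming looked like in practice, here’s a minimal sketch of the general technique: fetching byte ranges of a single file from several HTTP mirrors in parallel. The mirror URLs are placeholders, and the sketch is illustrative only, not SwarmCast’s or LimeWire’s actual code.

    import concurrent.futures
    import urllib.request

    MIRRORS = ["http://mirror-a.example/file.iso",
               "http://mirror-b.example/file.iso"]

    def fetch_range(url, start, end):
        # Standard HTTP/1.1 Range request for one piece of the file.
        req = urllib.request.Request(url, headers={"Range": f"bytes={start}-{end}"})
        with urllib.request.urlopen(req) as resp:
            return resp.read()

    def swarm(size, piece=1 << 20):
        # size is the total file length, assumed known (e.g., from a HEAD request).
        ranges = [(i, min(i + piece, size) - 1) for i in range(0, size, piece)]
        with concurrent.futures.ThreadPoolExecutor() as pool:
            futures = [pool.submit(fetch_range, MIRRORS[n % len(MIRRORS)], s, e)
                       for n, (s, e) in enumerate(ranges)]
            return b"".join(f.result() for f in futures)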

Heck, we even released swarm downloading in LimeWire long before BitTorrent ever made a public release, as I first announced here. I wrote almost none of the downloading code, but my old LimeWire buddies Chris Rohrs and Sumeet Thadani have more of a claim to having “invented” swarm downloading than Bram Cohen. LimeWire’s also an open source project, and we were working on the swarming implementation as early as January of 2001, as you can see from the CVS logs. Cohen didn’t even start working on it until May of 2001. What’s more, it never occurred to us at LimeWire to think of it as a new idea because, well, it wasn’t.

Why do I care? It’s just that it keeps coming up, most recently in the O’Reilly e-mail from a couple of days ago seeking ETech 2007 participants, where they describe “BitTorrent’s use of sufficiently advanced resource locators and fragmented files” as the type of new innovation they’re looking for. I was a history major in college (in addition to computer science), so these things matter to me. Cohen himself perpetuates the myth, most blatantly on the BitTorrent web site, where it says: “While it wasn’t clear it could be done, Bram wanted to enable effective swarming distribution — transferring massive files from server to client with the efficiency of peer-to-peer — reliably, quickly and efficiently.” The fact is, it was clear it could be done because people like Justin and us over at LimeWire had already done it!

The Wired article on Cohen from January 2005 takes the cake, though. The article says “Cohen realized that chopping up a file and handing out the pieces to several uploaders would really speed things up.” Again, he “realized” it because he saw that others were already doing it. They go on to describe how traditional file sharing networks are “slow because they suffer from supply bottlenecks. Even if many users on the network have the same file, swapping is restricted to one uploader and downloader at a time.” It’s all just blatantly wrong.

Now, don’t get me wrong. I love BitTorrent. I think BitTorrent is amazing and a perfect example of the kind of enabling technology that makes us all more free. It offers the clearest hope for a future of media distribution beyond the inadequate cable and network broadcast model we see today. It’s just that BitTorrent’s innovation was far less sexy. BitTorrent worked because it did less, not because it had any amazing new ideas. BitTorrent took what many p2p applications were already doing and scrapped most of it. BitTorrent scrapped search. It didn’t bother with a fully connected network. It didn’t worry about file management. It just took the downloading component and packaged it up nicely. Cohen realized that the downloading technology alone was all people wanted or needed in many cases, and that the tricky distributed search part was often unnecessary. Hats off: it has really changed the way many of us think about technology.
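
To make “packaged it up nicely” concrete, here’s a simplified sketch of the one piece BitTorrent kept: a metainfo manifest that reduces a file to a tracker location plus verifiable pieces, with no search layer at all. The field names mirror the real .torrent info dictionary; the tracker URL is a placeholder, and the bencoding step is omitted.

    import hashlib

    PIECE_LENGTH = 256 * 1024  # 256 KB, a common piece size

    def make_metainfo(name, data, announce="http://tracker.example/announce"):
        pieces = [data[i:i + PIECE_LENGTH]
                  for i in range(0, len(data), PIECE_LENGTH)]
        return {
            "announce": announce,  # where to find other peers; no search needed
            "info": {
                "name": name,
                "length": len(data),
                "piece length": PIECE_LENGTH,
                # Concatenated SHA-1 digests: enough to verify any piece
                # received from any untrusted peer.
                "pieces": b"".join(hashlib.sha1(p).digest() for p in pieces),
            },
        }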

That said, BitTorrent was old technology in a new package. The innovation was in the packaging.


Towards an Open Infrastructure

July 27, 2006

Writers such as Jon Udell, Tim O’Reilly, and Yochai Benkler have alluded to the rise of an “open infrastructure”, where open source and, quite likely, peer-to-peer projects provide the network primitives that can be combined and built upon to create services competing with everything from Akamai’s content delivery to Amazon S3’s data storage.

OK, I’m on board. Love it. Stepping back for a moment, though, this is really a new description of an idea that’s been around for some time under different guises. Sun’s JXTA project, for example, has for years attempted to deliver precisely the type of network primitives that would characterize such an open infrastructure, even providing for the type of “service delivery network” Udell envisions. JXTA is a pain to set up, though, and the project is simply not as focused or coordinated as it could be. It also has not had a killer use case to put it over the top. I say this as someone who has participated actively in the JXTA community and as a genuine fan of the project.

The Globus project took this step into an open infrastructure years ago as well, emerging from the world of grid computing. Globus allows developers to use the grid for everything from data backup to distributed processing. New terminology can nudge old technologies over the hump into the mainstream consciousness, however, as we’ve seen in the last several years with AJAX. Could Globus be to “open infrastructure” what LightStreamer is to “AJAX”? It’s possible.

OK, so let’s forge ahead. What are the key components of an open infrastructure? In many ways, an open infrastructure resembles an open, networked operating system. Much like an operating system, it would provide access to CPU, disk space, memory, and network resources, and the p2p/grid computing world offers all of these. To compete with the Googles and Yahoos of the world, what Udell calls the “galactic clusters”, an open infrastructure needs to leverage p2p’s resource pooling.

Projects like Globus emerged from the academic world, from the desire of researchers with access to supercomputers to share their processing power. Projects like SETI@home have demonstrated how much further the idea can go if we bring edge resources into the fold, creating the world’s largest supercomputer by networking hundreds of thousands of far less powerful machines. Neither SETI@home nor the more generalized BOINC quite meets the open infrastructure demand, however, as both demonstrate a single use case: distributed processing coordinated by a central node. To utilize all edge resources, we need a more generalized system that does not rely on centralized coordination and that can fulfill any task.

The problem then becomes the heterogeneous nature of Internet hosts, particularly the fact that most nodes sit behind Network Address Translators (NATs) or firewalls. NATs cut off a node’s resources, preventing it from contributing to the pool. That’s where the Session Initiation Protocol (SIP) steps in. Fresh from its resounding success in VoIP, SIP, along with the associated STUN and ICE protocols, provides a robust, generalized way for two or more hosts to connect on the Internet regardless of their specific NAT or firewall configuration.
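
To make the NAT traversal step concrete, here’s a minimal sketch of a STUN binding request, the query that lets a host behind a NAT discover its public address. It uses the modern RFC 5389 framing, and the server address is a placeholder for any public STUN server.

    import os, socket, struct

    MAGIC = 0x2112A442  # RFC 5389 magic cookie

    def stun_public_address(server=("stun.example.org", 3478)):
        txn_id = os.urandom(12)
        # Binding Request: type 0x0001, zero-length attribute section.
        request = struct.pack("!HHI", 0x0001, 0, MAGIC) + txn_id
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(3)
            sock.sendto(request, server)
            data, _ = sock.recvfrom(2048)
        if data[8:20] != txn_id:
            return None  # not a reply to our request
        pos = 20  # attributes start after the 20-byte header
        while pos + 4 <= len(data):
            attr_type, attr_len = struct.unpack_from("!HH", data, pos)
            if attr_type == 0x0020:  # XOR-MAPPED-ADDRESS
                port = struct.unpack_from("!H", data, pos + 6)[0] ^ (MAGIC >> 16)
                raw = struct.unpack_from("!I", data, pos + 8)[0] ^ MAGIC
                return socket.inet_ntoa(struct.pack("!I", raw)), port
            pos += 4 + attr_len + (-attr_len % 4)  # attributes are 32-bit aligned
        return None

This only solves address discovery; ICE builds on it to pick a working candidate pair between two peers, falling back to a relay when a direct connection fails.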

So, an open infrastructure needs p2p to be most effective, and p2p needs SIP. Together, they raise the possibility of a generalized system where the collective bandwidth, CPU, memory, and disk space of every internetworked computer on the planet can be dynamically pooled to perform arbitrary tasks. Other protocols and specifications such as XMPP, RDF, and SPARQL would also likely play vital roles in such a system, as would distributed hash tables, but I’ll get more into that later.

As Udell points out, the open infrastructure would closely parallel the pooling of knowledge resources we see in Wikipedia or the collaborative filtering of Slashdot. In this case, though, we’re collectively sharing the resources of the computers themselves. The computers are doing the collaborating.