P2P in Flash 10 Beta – a YouTube, Skype, and BitTorrent Killer

May 16, 2008

The inclusion of P2P in the Flash 10 beta threatens to bring down everyone from YouTube to Skype. Using P2P, Flash sites will be able to serve higher quality video than YouTube at a fraction of the cost. Meanwhile, the combination of the Speex audio codec and the Real Time Media Flow Protocol (RTMFP) will enable sites to seamlessly integrate VoIP without requiring a Skype install. The impact of this change is hard to fathom. We’re talking about a fundamental shift in what is possible on the Internet, with Flash demolishing almost all barriers to integrating P2P on any site.

Hank Williams and Om Malik have discussed the potential for Flash 10 to be used for P2P CDNs, and they’re largely right. Oddly enough, the biggest problem I see with P2P CDNs is latency. While P2P theoretically lets you choose copies of content closer to you on the network, you still have to negotiate with a server somewhere to establish the connection (for traversing NATs), nullifying the P2P advantage unless you’re talking about really big files. As Hank identifies, the sites serving large files are the CDNs’ best customers, so we are talking about a significant chunk of the CDN business up for grabs. That said, CDNs could easily start running Flash Media Servers themselves with integrated RTMFP. They’ve already addressed the server locality problem, and taking advantage of Flash deployments would simply be an optimization. Whether the CDNs will realize this shift has taken place before it’s too late is another question.

To me, the really vulnerable players are the video sites themselves and anyone in the client-side VoIP space. Writing a VoIP app is now equivalent to writing your own Flash video player. All the hard stuff is already done. Same with serving videos. You no longer have to worry about setting up an infinitely scalable server cluster — you just offload everything to Flash. No more heavy lifting and no more huge bandwidth bills. In the BitTorrent case, it’s mostly a matter of usability. As with Skype, you no longer need a separate install. Depending on what’s built into the Flash Media Server, you also no longer need to worry about complicated changes on the server side, and downloads will happen right in the browser.

The stunning engineering behind all of this deserves note. The Real Time Media Flow Protocol (RTMFP) underlies all of these changes. On closer inspection, RTMFP appears to be the latest iteration of Matthew Kaufman and Michael Thornburgh’s Secure Media Flow Protocol (SMP) from Adobe’s 2006 acquisition of Amicima. Adobe appears to have acquired Amicima specifically to integrate SMP into Flash, now in the improved form of RTMFP. RTMFP is a very fast media transfer protocol running over UDP with IPSec-like security and congestion control built in. The strength of the protocol was clear to me when Matthew first posted his “preannouncement” on the p2p hackers list. Very shrewd move on Adobe’s part.

Are there any downsides? Well, RTMFP is, for now, a closed if breathtakingly cool protocol, and it’s tied to Flash Media Server. That means Adobe holds all the cards, and this isn’t quite the open media platform to end all platforms. If they open up the protocol and open source implementations start emerging, however, the game’s over.

Not that I have much sympathy, but this will also shift a huge amount of traffic to ISPs, as ISPs effectively take the place of CDNs without getting paid for it. While Flash could implement the emerging P4P standards to limit the bleeding at the ISPs and to further improve performance, this will otherwise result in higher bandwidth bills for consumers over the long term. No matter — I’d rather have us all pay a little more in exchange for dramatically increasing the number of people who can set up high bandwidth sites on the Internet. The free speech implications are too good to pass up.

Just to clear up some earlier confusion, the Flash 10 beta is not based on SIP or P2P-SIP in any way. Adobe’s SIP work has so far only seen the light of day in Adobe Pacifica, not in the Flash Player.


OpenSocial, Facebook, Google, OpenGadgets

January 28, 2008

I still find all of the attacks on OpenSocial to be naive. Did anyone ever really think each company would open up its social graph? Apparently so, but I certainly didn’t. How could Google possibly get everyone to join and dictate they all open their data? One step at a time! Perhaps I’ve never been disappointed because I never conceived of OpenSocial as anything but OpenGadgets. I don’t think the importance of OpenGadgets should be overlooked, however. Google Gadgets is a generally sound approach to standardizing gadget making using HTML and JavaScript. It’s certainly a big step up from learning some new proprietary markup language invented by Zuckerberg & co.

Perhaps this ties in to my general disdain for Facebook. Sure, it’s a heck of a lot better than MySpace, but is that really saying much? It’s still a social network, which is just inherently cheesy and doesn’t solve any interesting technical problem whatsoever. I find it shocking there’s talk of people leaving Google to go to Facebook. Maybe I’m a tech snob, but old Sergey and Larry actually solved a really challenging technical problem at Google. It makes sense to me that they have legions of programmers rallying behind them. Facebook? This guy was a freshman at Harvard who knew a little PHP. Not to mention he stole the idea from his “buddies” and broke away to do it on his own. All this excitement over a company that’s technically uninteresting and ethically sketchy? I just don’t get it.

I’ll stop soon, but let’s just touch on the “Facebook platform.” Come on. Do we really need another proprietary platform? There’s a platform I’ve come to know and love that’s simply an astounding place for innovation called the “Internet.” It’s really cool. It’s really open. I’ve come to like other platforms like Ning, but I just have no interest in writing something in Facebook’s markup language that will make them more advertising cash. People like Facebook because there’s money to be made. That’s the only conclusion I can draw. It’s not a horrible reason, but we should at least call it like it is. All the ramblings about how innovative the platform is remind me of company valuations in the late 90s. A lot of talk. I have yet to see any truly innovative Facebook app. Seriously. Please don’t super poke me. Ever. Most Facebook apps aren’t just uninteresting; I actively wish they didn’t exist. They make my life worse and waste my time.

I’ll take the Internet any day.


O’Reilly, GData, Open Standards

September 4, 2006

Tim O’Reilly’s post about GData and the importance of open standards articulates the argument for expanding the open infrastructure, for standardizing the “small pieces” that together do the heavy lifting of the Internet and make everything work together.

I like David Weinberger’s “small pieces” phrase, and I’ll adopt it here. Open standards and open source work so well, and so well together, because the pieces are small. Each standard solves a very specific problem. This allows each open source implementation of those standards to be limited in scope, lowering the barriers to entry for writing and maintaining them. The Internet today exists because of small pieces, particularly HTTP, HTML, CSS, XML, etc.

Together, these small pieces form the web platform that has fostered the startling array of innovations over the last ten years. O’Reilly’s key phrase is “A Platform Beats an Application Every Time”. If there’s any lesson to take away from the Internet, this is it. A platform beats an application because it fosters an entire ecosystem of applications that can talk to each other using these small pieces. The ability to talk to each other makes each application far more powerful than if it were an isolated island. Just like an ecosystem, platforms create new niches and continually evolve as new actors emerge, and they create needs for new protocols.

This is why the current Internet lies in such a precarious state. The ecosystem has evolved, and has created needs for new protocols that do everything from traversing NATs to publishing data. As the system becomes more complex, however, we’re forgetting the central tenet that small pieces made the whole thing work in the first place. In most cases, standards for solving these problems exist, but private actors either don’t realize it or have decided to use their own versions regardless. This is like companies in 1994 deciding to ignore HTTP and implement their own versions.

Take NATs, for example. The IETF’s SIP, TURN, STUN, and ICE provide an excellent, interoperable framework for traversing NATs. Nevertheless, Skype, BitTorrent, and Gnutella all implement their own proprietary versions of the same thing, and they don’t work as well as the IETF versions. As a result, none of them can interoperate, and the resources of all NATted computers remain segmented off from the rest of the Internet, wasted. Skype can only talk to Skype, BitTorrent can only talk to BitTorrent, and Gnutella can only talk to Gnutella, in spite of standards that could make all three interoperate. In Skype and BitTorrent’s case, they even ignore HTTP, completely forgoing interoperability with the rest of the Internet for file transfers.
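To make the STUN piece concrete: at its core, STUN just lets a client ask a public server what the client’s address looks like from outside the NAT. Here’s a minimal sketch in Python; the server shown is just one example of a public STUN server, and the wire format is the later RFC 5389 form of what was still an evolving draft when this was written.

```python
# A minimal STUN Binding Request: ask a public server what our address
# looks like from outside the NAT. Real clients (and ICE) gather many
# such "candidates" and probe them in pairs; this is only step one.
# The server below is just one publicly available example.
import os
import socket
import struct

STUN_SERVER = ("stun.l.google.com", 19302)

# 20-byte header: type = Binding Request (0x0001), length = 0,
# the RFC 5389 magic cookie, and a random 96-bit transaction ID.
txid = os.urandom(12)
request = struct.pack("!HHI", 0x0001, 0, 0x2112A442) + txid

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3)
sock.sendto(request, STUN_SERVER)
data, _ = sock.recvfrom(1024)

# Walk the response attributes looking for XOR-MAPPED-ADDRESS (0x0020).
pos = 20
while pos < len(data):
    attr_type, attr_len = struct.unpack_from("!HH", data, pos)
    if attr_type == 0x0020:
        port = struct.unpack_from("!H", data, pos + 6)[0] ^ 0x2112
        raw = struct.unpack_from("!I", data, pos + 8)[0] ^ 0x2112A442
        ip = socket.inet_ntoa(struct.pack("!I", raw))
        print("Public endpoint: %s:%d" % (ip, port))
        break
    pos += 4 + attr_len + (-attr_len % 4)  # attributes are 32-bit aligned
```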

GData, in contrast, gets high marks for interoperability. It uses the Atom Publishing Protocol (APP), RSS, and HTTP. RSS and HTTP are, of course, widely deployed already. APP is a good standard that leverages HTTP and solves very specific publishing problems on top of it. APP lets you modify any data you submit, Tim Bray’s first criterion for “open” data. Google Base, built on top of GData, also shares AdSense revenue with users, fulfilling Tim Bray’s second criterion of sharing value-added information from submitted data.
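To give a sense of just how small this piece is, here’s a sketch of the full APP publishing round trip in Python. The collection URL is hypothetical, and the third-party requests library is standing in for any HTTP client; the point is that creating and modifying your data is nothing more than HTTP POST and PUT.

```python
# The Atom Publishing Protocol round trip: POST a new entry to a
# collection, then edit it with PUT at the URI the server hands back.
# The endpoint is hypothetical; "requests" is a third-party HTTP client.
import requests

ENTRY = """<?xml version="1.0"?>
<entry xmlns="http://www.w3.org/2005/Atom">
  <title>Hello, APP</title>
  <content type="text">Plain HTTP doing the heavy lifting.</content>
</entry>"""

headers = {"Content-Type": "application/atom+xml"}

# Create: the server replies 201 Created with a Location header
# pointing at the new entry's edit URI.
resp = requests.post("http://example.com/collection", data=ENTRY, headers=headers)
edit_uri = resp.headers["Location"]

# Modify: PUT a revised entry to that same URI; DELETE would remove it.
requests.put(edit_uri, data=ENTRY.replace("Hello, APP", "Hello again"), headers=headers)
```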

The only part of GData I have a problem with is OpenSearch. OpenSearch is sort of half of an Internet standard: it emerged from a single company, Amazon, in the face of better standards from the W3C, namely RDF and SPARQL.

SPARQL and RDF together create an abstraction layer for any type of data and allow that data to be queried. They create the data portion of the web platform. As Tim says, “The only defense against [proprietary data] is a vigorous pursuit of open standards in data interchange.” Precisely. RDF and SPARQL are two of the primary pieces we need in this vigorous pursuit on the data front. The Atom Publishing Protocol is another. There are many fronts in this war, however. We also need to push SIP, STUN, TURN, and ICE to make the “dark web” interoperable, just as we need to re-emphasize the importance of HTTP for simple file transfers. These are the protocols that need to form, as Tim says, “a second wave of consolidation, which weaves it all together into a new platform”. If we do things right, this interoperable platform can create a world where free calling on the Internet works as seamlessly as web browsers and web servers, where every browser and every server automatically distributes load using multisource “torrent” downloads, and where all data is shared.
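For a taste of what that abstraction buys you, here’s a short sketch using Python’s rdflib library. The data URL is hypothetical; the same query would run unchanged against any RDF source.

```python
# One query language over any RDF data: load a (hypothetical) FOAF
# document and ask it for names and mailboxes. Requires the third-party
# rdflib library.
from rdflib import Graph

g = Graph()
g.parse("http://example.com/people.rdf")  # any RDF document, any source

results = g.query("""
    PREFIX foaf: <http://xmlns.com/foaf/0.1/>
    SELECT ?name ?mbox
    WHERE {
        ?person foaf:name ?name .
        ?person foaf:mbox ?mbox .
    }
""")

for name, mbox in results:
    print(name, mbox)
```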

Standards are the key to this open infrastructure.


Chris Holmes and Architectures of Participation

August 30, 2006

My good friend Chris Holmes’s recent Tech Talk to Google is now available on Google video. Chris’s work touches on a lot of things, but you can think of it as helping to implement an open standards and open source-based infrastructure for things like Google Maps and Google Earth. You should check out his thoughts.

I get all excited when Chris talks about open standards as a cornerstone of democracy. With the web changing rapidly, we all need to remember this lesson. The web itself was based on the simple open architecture of HTTP and HTML. Analogous standards exist for geographic data. Chris’s work focuses on expanding the web platform to also support geographic data, much as my work focuses on expanding the web platform to support P2P.

I’ll write more about “architectures of participation” in the future. While “Web 2.0” is a much catchier name, I think “architectures of participation” clears up a lot of the confusion surrounding these issues. I also think it digs deeper. A lot of the Web 2.0 thinking focuses on collaboration at the level of individual web sites. I have no problem with that, and I just love collaborative projects like Wikipedia. There’s a distinct lack of discussion about how architectures of participation at the standards layer enable all of this, though, I think because more people understand web sites than the standards driving them.

Wikipedia would, of course, never exist if we didn’t have HTTP and HTML. HTTP and HTML are really quite simple protocols, but look what they’ve enabled! Imagine what could happen if we really started growing the protocol layer of the web, integrating things like geographic standards and SIP into standard web projects. What could collaborative projects do atop a more powerful infrastructure? I’m not sure, but it’s a question we should be taking a harder look at.


Skype and Click To Call

August 29, 2006

Om Malik posted a fascinating piece about eBay pushing Skype as the standard protocol for “click-to-call”, the process of clicking on a hyperlink to initiate a VoIP call. As I mentioned last week, Skype’s push of its proprietary protocol for click-to-call is as if Yahoo had decided to introduce its own alternative to HTTP circa 1994. Imagine if half of all hyperlinks started with “http:” while the other half started with “yahoo:”. Every browser and every web server would have to implement both. SIP is today’s HTTP. It powers VoIP with the almost singular exception of Skype. It’s well-architected and widely implemented in open source projects, just like HTTP was 10 years ago.

The picture gets uglier. Skype’s protocol is proprietary, and eBay is pushing it as a standard to lock out all the other players. Imagine if we only had one web browser and one web server from a single company today because the protocols were proprietary. This would have set the Internet back years.

I predict this attempt will fail. It ignores the importance of open protocols as the glue of the Internet, as the bedrock for the competition that makes it all work. While the Internet is built on Apache and Linux, it’s also built on the IETF.


Cringely, Skype, Open Infrastructure

August 16, 2006

I have seemingly plunged myself into a running debate with Robert X. Cringely about the finer points of p2p telephony and NAT traversal. I would first like to acknowledge the remarkable breadth of Cringely’s technical knowledge. An early Apple employee and a longtime savvy observer of technology, Cringely somehow has a strong grasp of the highly specialized technology underlying VoIP. His range is startling.

That said, Cringely continues to stumble over the finer technical details. I only bring this up because the fundamental problems with Skype lie in those details, as I’ll explain. They are what make Skype a “closed garden” and a detriment to the “open infrastructure” I’ve advocated. Cringely puts forth an explanation of Skype’s NAT traversal, asserting that Skype uses STUN, TURN, and ICE servers to do the heavy lifting. There are several problems with this assertion. First, there’s no such thing as an “ICE server”. ICE is a client-side protocol that makes use of STUN and TURN “candidate” endpoints to establish the best possible connection between two peers. Second, and most importantly, Skype doesn’t implement a single one of these protocols anyway. While Cringely likely understands this, his post makes no reference to this key distinction.

For the uninitiated, STUN, TURN, and ICE allow clients to traverse NATs. They are all IETF drafts that continue to change frequently, and they are typically used alongside SIP to power VoIP. This is true for almost every VoIP provider, except for Skype. Skype unfortunately chose to implement a proprietary version of each one, breaking interoperability with other VoIP providers. This makes Skype much like your cell phone in the U.S., where you typically cannot switch cell phone providers while keeping the same phone. If you switch from Sprint to Verizon, for example, you have the joy of putting your $200 phone in a box in your closet or going through the hassle of selling it on eBay. Skype has given us a similar gift. You could never use Skype software with Vonage or Gizmo, for example. If Skype used SIP, TURN, STUN, and ICE, you theoretically could.
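To make the “no ICE server” point concrete, here’s a cartoon of what an ICE-style client actually does, sketched in Python with made-up addresses. The servers only help gather candidates; the clients do all the connecting.

```python
# A cartoon of the client side of ICE: gather candidate addresses, then
# probe pairs of local and remote candidates until one works. There is
# no "ICE server" anywhere; STUN/TURN only helped build these lists.
# All addresses are made up for illustration.
import socket

local_port = 40000
remote_candidates = [          # learned from the peer via signaling
    ("10.0.0.7", 40000),       # the peer's host candidate
    ("203.0.113.7", 52801),    # the peer's server-reflexive (STUN) candidate
]

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", local_port))
sock.settimeout(1)

# Real ICE prioritizes candidate pairs and runs authenticated STUN
# checks; bare datagrams stand in for those checks here.
for remote in remote_candidates:
    try:
        sock.sendto(b"connectivity-check", remote)
        data, addr = sock.recvfrom(1024)
        print("Working pair found; peer reachable at %s:%d" % addr)
        break
    except socket.timeout:
        continue
```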

Skype’s proprietary protocol is also an issue on web pages. On eBay, you can now press a hyperlink to call users over Skype. This link starts with “skype:” much as typical links start with “http:”. Links on web pages that initiate phone calls will become increasingly common. You could easily imagine links on MySpace pages for calling other users, for example, and some savvy MySpace users likely already have them. The problem is that every other VoIP provider uses SIP, which long ago standardized its own interoperable URIs starting with “sip:”. So, because Skype chose to implement proprietary versions of everything, you will likely have to choose between two links when making a call, one of the form “skype:afisk” and another of the form “sip:afisk@lastbamboo.org”.

Imagine if web servers made a similar choice circa 1994. In this world, instead of every link starting with “http:”, some would start with “http:” and others would start with “mywretchedprotocol:”. All browsers would have to support both. What a nightmare! We have Skype to thank for starting us along that path with VoIP.

The implications of this issue go further. While SIP certainly has its problems (why did they ever include UDP?), the carefully designed interworking of the SIP family of protocols is a thing of beauty. SIP does not depend on STUN or TURN or ICE, for example, just as STUN does not depend on TURN or SIP or ICE, etc. This allows each protocol to evolve independently and allows different developers to implement different parts of the protocol family. One open source project can simply write a STUN server, for example, while another could write a SIP server, and another a TURN client. In the end, the user gets better software because engineers can break apart the problem and focus on implementing one piece well. And they’re all documented in Internet drafts that anyone can read. Skype’s use of proprietary protocols butchers this system.

Because of its careful engineering, SIP can also carry any type of traffic over any type of transport. You can use SIP to transfer RSS feeds using HTTP over TCP as LittleShoot does, for example. Or you can just make a phone call. SIP is simply the signaling protocol used to establish the connection. Skype doesn’t have anywhere near this flexibility.
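For anyone who has never seen one, a SIP INVITE is just a short, HTTP-like text message, and the session description it carries is what makes it a phone call rather than something else. Here’s a sketch in Python; every address and identifier in it is illustrative.

```python
# What a SIP INVITE looks like on the wire: a short, HTTP-like text
# message whose SDP body describes the session being set up. Every
# address and identifier here is illustrative.
import socket

sdp = (
    "v=0\r\n"
    "o=afisk 2890844526 2890844526 IN IP4 192.0.2.10\r\n"
    "s=call\r\n"
    "c=IN IP4 192.0.2.10\r\n"
    "t=0 0\r\n"
    "m=audio 49170 RTP/AVP 0\r\n"  # offer one audio stream (PCMU)
)

invite = (
    "INVITE sip:bob@example.com SIP/2.0\r\n"
    "Via: SIP/2.0/TCP 192.0.2.10:5060;branch=z9hG4bK776asdhds\r\n"
    "Max-Forwards: 70\r\n"
    "From: <sip:afisk@lastbamboo.org>;tag=1928301774\r\n"
    "To: <sip:bob@example.com>\r\n"
    "Call-ID: a84b4c76e66710\r\n"
    "CSeq: 314159 INVITE\r\n"
    "Contact: <sip:afisk@192.0.2.10:5060>\r\n"
    "Content-Type: application/sdp\r\n"
    "Content-Length: {}\r\n"
    "\r\n"
).format(len(sdp)) + sdp

# Hand the request to a proxy over TCP; UDP is an equally valid transport.
with socket.create_connection(("sip.example.com", 5060)) as conn:
    conn.sendall(invite.encode("ascii"))
```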

Skype’s decision to use a closed protocol has security implications as well. When calls are routed through supernodes, for example, there’s a built-in “man-in-the-middle” that can monitor all traffic. Skype encrypts calls, but do they use both server and client authentication to prevent the man-in-the-middle from launching a replay attack? If they don’t, then it’s theoretically possible for an attacker to become a supernode to listen to all of your calls. As a closed protocol, Skype isn’t open to public scrutiny in the security community that could otherwise identify and fix such vulnerabilities. There could be people implementing this exploit to monitor and decrypt your Skype calls right now. While one independent security audit claims Skype does implement both client and server authentication, this is one person evaluating their architecture as opposed to the throngs of security experts who would be eager to identify holes if the system were open. We just don’t know.

These issues all point to the importance of an open infrastructure and to the power of SIP as a bedrock of the next generation of Internet applications. As people like Vint Cerf have noted, SIP may be to the next ten years what HTTP was to the last ten, unless Skype gets in the way and everything degenerates into a battle of ugly proprietary implementations of the same thing.

I choose to believe that good engineering wins in the end. Protocols like SIP, HTTP, and XMPP will enable a new generation of far more powerful applications capable of seamlessly circumventing NATs and pooling resources to put the maximum possible power in the hands of the user.


Cringely Doesn’t Understand P2P

August 2, 2006

Mark Stephens, a.k.a. Robert X. Cringely, the guy who seems to get everything, just doesn’t seem to understand peer-to-peer. His recent post “The Skype is Falling” misses the mark several times. As Yochai Benkler has noted so clearly, p2p is about computers collaborating to share resources, much as Wikipedia is about humans collaborating to share knowledge. On Wikipedia, different people have different levels of knowledge on different topics. That’s what makes it work so well! Specialists in different areas on Wikipedia don’t have to worry about the things they don’t know about — they just contribute what they can. Together, they combine the best of human knowledge for everyone’s maximum benefit.

Peer-to-peer is at its most powerful when it does precisely the same thing — when each peer contributes as much as it’s capable of contributing. Just like humans, computers come in many shapes and sizes. Some have 200 GB hard drives and a dial-up modem. Others are cell phones with no hard drive and little memory, but with bandwidth to spare. Well-architected p2p networks take this into account, harvesting whatever resources are available at any time for the maximum benefit of the network as a whole.

Computers that are neither firewalled nor behind NATs are among the most precious resources on p2p networks because they supply services others can’t. Because anyone can connect to them, they serve as the glue holding any p2p network together. Without these nodes, most p2p networks simply would not function. Perhaps most importantly, they facilitate connections between NATted or firewalled nodes, allowing any node on a network to theoretically connect to any other.

This is where Cringely just doesn’t get it. He goes into detail describing how Skype uses “servers” to facilitate NAT traversal between peers. He points to the use of servers as somehow making Skype not p2p. The fact is, Skype uses distributed non-firewalled peers as “servers” to allow other peers to connect. This is, ahh, well, precisely like every other p2p network on the planet. This architecture could not be more peer-to-peer. In fact, this is p2p at its best – the network is harvesting all available peer resources dynamically.

Cringely claims that “a lot of Skype connections aren’t p2p at all” because of this server interaction and that these servers need to have a “surplus of bandwidth to handle the conversation relay.” This is where his thesis really starts to unravel. He apparently does not understand that the servers are simply part of the signaling path, relaying contact information between the two peers. The call itself typically happens directly between the two computers using UDP NAT hole punching, a practice that VoIP really brought into mainstream use but that also appears in various games and in most p2p networks. There are certainly cases where hole punching fails, such as when both peers are behind symmetric NATs, but the call is typically direct. If it weren’t, call quality would often be horrible.
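Hole punching sounds exotic, but the core trick fits in a few lines. Here’s a minimal sketch in Python, with placeholder addresses standing in for what the signaling step would actually deliver.

```python
# UDP hole punching in miniature: each side learns the other's public
# endpoint through the signaling step, then both fire packets at each
# other. Each outbound packet opens a mapping in the sender's own NAT
# that the other side's packets can then flow through. The peer address
# is a placeholder for what signaling would actually deliver.
import socket
import time

PEER = ("203.0.113.7", 40000)  # peer's public endpoint, via signaling

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 40000))
sock.settimeout(1)

# The first packets may be dropped while the far side's NAT mapping
# doesn't exist yet, so keep trying until something comes back.
for _ in range(10):
    sock.sendto(b"punch", PEER)
    try:
        data, addr = sock.recvfrom(1024)
        print("Direct path established with %s:%d" % addr)
        break
    except socket.timeout:
        time.sleep(0.5)
```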

This is a key difference because the vast majority of the bandwidth a call requires is for the call itself. The signaling exchanged to establish the call is negligible. I would hazard a guess that 99% of the packets transferred in VoIP calls around the globe are voice packets, not signaling. These packets never touch the server unless UDP hole punching fails. It’s an open question what Skype does when hole punching fails, but I’d actually be surprised if they were even using “supernodes” in that case, likely instead routing those calls through their own servers. I just don’t think supernodes would be able to provide enough bandwidth for high enough call quality.
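That guess is, if anything, conservative. Here’s the back-of-the-envelope arithmetic, assuming a typical codec sending one packet every 20 milliseconds:

```python
# Back-of-the-envelope: a typical codec sends one RTP packet every
# 20 ms, i.e. 50 packets per second in each direction.
packets_per_second = 50
call_seconds = 5 * 60                            # a five-minute call
media = packets_per_second * call_seconds * 2    # both directions
signaling = 10    # INVITE, 200 OK, ACK, BYE, and a few friends

print("media share: %.3f%%" % (100.0 * media / (media + signaling)))
# -> media share: 99.967%
```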

Far from the sort of faux p2p Cringely describes, Skype’s use of non-firewalled nodes to negotiate voice sessions between two or more peers is p2p at its finest. Now, don’t get me wrong, I have plenty of problems with Skype’s use of a proprietary protocol instead of SIP, and I’d prefer them to be open source, but this part of Cringely’s analysis just doesn’t make any sense.