TechCrunch 50, LittleShoot, and the Aftermath

August 15, 2008

The public release of LittleShoot has been on hold here while we awaited word on our TechCrunch 50 application. We didn't make it! Darn. It came down to the wire — we were in the last batch of companies to be notified, with word that we weren't in the conference arriving Friday at 1:24 AM EST. The 50 companies that made it in should know now. Here's the e-mail:


Dear TechCrunch50 Candidate:

We are sorry to inform you that your company was not selected as a finalist for the TechCrunch50 conference. As you know, we are only able to select a very, very small percentage of the more than 1,000 outstanding applications we receive.

Your company was among a select set of candidates that we considered, and it was a difficult decision driven purely by the limited number of presentation slots. Since we regarded your business so highly, we want to make sure you still get the opportunity to participate in the conference in our DemoPit (http://techcrunch50demopit.eventbrite.com).

As a DemoPit company, you will have the opportunity to be nominated for the People’s Choice award and win the 50th spot on the TechCrunch50 main stage. As the 50th company to present, the People’s Choice award winner will be able to compete for the $50,000 TechCrunch50 award. Act fast, as spaces are very limited and first come, first served.

Additionally, all DemoPit companies will benefit from the exposure generated by media attending the event. We do anticipate having approximately 300 members of the international press in attendance.

If you have questions regarding the TechCrunch50 Demo Pit opportunity, please email Dan Kimerling at dan@techcrunch.com.

Sincerely,

–Jason, Heather & Michael
and the TechCrunch50 Team


I've got a lot of respect for the TechCrunch folks and the way they give unfunded companies a shot, and I thoroughly enjoyed meeting Jason Calacanis down at the Mahalo Tech Meetup last night, on one of my first nights in LA. I appreciate the tremendous work Jason, Michael, Heather, and the other folks at TechCrunch have put into the process.

That said, it's on. I feel like the guy on draft day who didn't go in the first round. It's the "meritocracy" thing that gets me – TechCrunch 50 is touted as a pure meritocracy. I'd put LittleShoot's technology up against anyone's, and it just kills me to think 50 startups beat us out. We can tell ourselves they had better business models, better marketing plans, yada yada yada, but I'm taking it to mean they had better technology. If there's anything that motivates me, that's it. I have great respect for the other applicants, and we all supported each other on the TechCrunch blog as we agonized through the waiting process. I wish everyone the best of luck, but the LittleShoot public beta is on its way.

For people unfamiliar with the Little Fella', here's a link to the LittleShoot demo video we submitted to TechCrunch:

LittleShoot Demo


P2P in Flash 10 Beta — the Questions Facing a YouTube, Skype, and BitTorrent Killer

May 21, 2008

As I've reported, the inclusion of P2P in Flash 10 Beta represents a fundamental disruption of the Internet platform. As with all disruptions, however, this one will progress in fits and starts. The details of Flash 10's implementation limit the full power of its P2P features. While features like VoIP will be fully enabled, it will take some ingenuity to turn Flash 10 into a more generalized P2P platform. Here are the issues:

1) Flash Media Server (FMS)

You’ll need Flash Media Server (FMS) to take advantage of Flash P2P. At $995 for the “Streaming Server” and $4,500 for the “Interactive Server”, FMS is beyond the reach of most developers working on their own projects, severely limiting Flash P2P’s disruptive potential. In an ideal world, the new P2P protocols would be openly specified, allowing open source developers to write their own implementations. As it stands now, a single company controls a potentially vital part of the Internet infrastructure, and encryption will likely thwart the initial reverse engineering efforts of open source groups like Red5.

2) No Flash Player in the Background

As David Barrett (formerly of Akamai/Red Swoosh) has emphasized on the Pho List, Flash Player only runs when it’s loaded in your browser. As soon as you navigate to another page, Flash can no longer act as a P2P server. P2P programs like Red Swoosh, BitTorrent, and LittleShoot don’t have this limitation, and it means Flash can’t save web sites as much bandwidth as those full-blown applications can. This limits but does not eliminate Flash’s threat to CDNs. Sure, you could get around this using AIR, but that creates another major barrier to adoption.

3) Usability

While Flash 10 can save files to your computer and load them from your computer (essential for P2P), it pops up a dialog box each time that happens. This is an important security measure, but it cripples Flash 10's ability to mimic BitTorrent: you'd have dialogs popping up constantly to confirm that you, the user, had authorized the upload of each piece of a file.

4) Limited APIs

While all the required technology is there in the Real Time Media Flow Protocol (RTMFP), ActionScript's API limits some of the P2P potential of Flash 10. P2P downloading breaks files into smaller chunks so you can get them from multiple other computers at once. Flash 10 can only save complete files to your computer; you can't save in small chunks. As a result, you'd have to use ActionScript very creatively to achieve BitTorrent or LittleShoot-style distribution, or to significantly lower bandwidth bills for sites serving video. It might be possible, but you'd have to work some magic.
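
To make the chunking idea concrete, here's a minimal Java sketch of what a multisource downloader does outside of Flash: request ranged chunks of the same file from several sources and write each chunk at its own offset. This isn't the LittleShoot code, just the shape of the technique, and the mirror URLs and file size are made up for illustration.

```java
import java.io.InputStream;
import java.io.RandomAccessFile;
import java.net.HttpURLConnection;
import java.net.URL;

public class MultiSourceSketch {

    // Hypothetical mirrors of the same file. In real P2P these would be
    // peers discovered through the network, not hard-coded URLs.
    private static final String[] SOURCES = {
        "http://mirror1.example.com/video.flv",
        "http://mirror2.example.com/video.flv",
    };

    private static final int CHUNK_SIZE = 256 * 1024; // 256 KB per request

    public static void main(final String[] args) throws Exception {
        // Pretend we learned the length from an earlier HEAD request.
        final long fileLength = 4L * 1024 * 1024;
        final RandomAccessFile out = new RandomAccessFile("video.flv", "rw");
        out.setLength(fileLength);

        // Round-robin the chunks across sources. A real downloader runs
        // these in parallel threads and reassigns failed chunks elsewhere.
        for (long offset = 0; offset < fileLength; offset += CHUNK_SIZE) {
            final long end = Math.min(offset + CHUNK_SIZE, fileLength) - 1;
            final String source =
                SOURCES[(int) ((offset / CHUNK_SIZE) % SOURCES.length)];
            final HttpURLConnection conn =
                (HttpURLConnection) new URL(source).openConnection();

            // A standard HTTP/1.1 Range header is all the "magic" required.
            conn.setRequestProperty("Range", "bytes=" + offset + "-" + end);

            final InputStream in = conn.getInputStream();
            final byte[] buf = new byte[8192];
            long pos = offset;
            int read;
            while ((read = in.read(buf)) != -1) {
                out.seek(pos);      // write each chunk at its own offset
                out.write(buf, 0, read);
                pos += read;
            }
            in.close();
        }
        out.close();
    }
}
```

The crux is seeking to each chunk's offset in a partially written file, which is precisely the random access that Flash 10's save-a-complete-file model doesn't expose.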

So, that’s the deal. There’s still a lot more documentation coming our way from Adobe, so there are undoubtedly useful nuggets yet to be discovered.

Even given all these limitations, however, the key point to remember is that the Internet has a new, immensely powerful protocol in its arsenal: Matthew Kaufman and Michael Thornburgh's Real Time Media Flow Protocol (RTMFP). While Flash might use it primarily for direct streaming between two computers for now (think VoIP), it introduces the potential for so much more.

Keep your helmet on.


P2P in Flash 10 Beta – a YouTube, Skype, and BitTorrent Killer

May 16, 2008

The inclusion of P2P in the Flash 10 beta threatens to bring down everyone from YouTube to Skype. Using P2P, Flash sites will be able to serve higher quality video than YouTube at a fraction of the cost. Meanwhile, the combination of the Speex audio codec and the Real Time Media Flow Protocol (RTMFP) will enable sites to seamlessly integrate VoIP without requiring a Skype install. The impact of this change is hard to fathom. We're talking about a fundamental shift in what is possible on the Internet, with Flash demolishing almost all barriers to integrating P2P on any site.

Hank Williams and Om Malik have discussed the potential for Flash 10 to be used for P2P CDNs, and they're largely right on. Oddly enough, though, the biggest problem I see with P2P CDNs is latency. While P2P theoretically lets you choose copies of content closer to you on the network, you still have to negotiate with a server somewhere to establish the connection (for traversing NATs). For small files, those rendezvous round trips can cost more than the transfer itself, nullifying the P2P advantage; only with really big files does the cheaper P2P bandwidth win out. As Hank identifies, the sites serving large files are the CDNs' best customers, so we are talking about a significant chunk of the CDN business up for grabs. That said, CDNs could easily start running Flash Media Servers themselves with integrated RTMFP. They've already addressed the server locality problem, and taking advantage of Flash deployments would simply be an optimization. Whether the CDNs will realize this shift has taken place before it's too late is another question.

To me, the really vulnerable players are the video sites themselves and anyone in the client-side VoIP space. Writing a VoIP app is now equivalent to writing your own Flash video player. All the hard stuff is already done. Same with serving videos. You no longer have to worry about setting up an infinitely scalable server cluster — you just offload everything to Flash. No more heavy lifting and no more huge bandwidth bills. In the BitTorrent case, it's mostly a matter of usability. As with Skype, you no longer need a separate install. Depending on what's built into the Flash Media Server, you also no longer need to worry about complicated changes on the server side, and downloads will happen right in the browser.

The stunning engineering behind all of this deserves note. The Real Time Media Flow Protocol (RTMFP) underlies all of these changes. On closer inspection, RTMFP appears to be the latest iteration of Matthew Kaufman and Michael Thornburgh's Secure Media Flow Protocol (SMP), which Adobe picked up in its 2006 acquisition of Amicima. Adobe appears to have acquired Amicima specifically to integrate SMP into Flash, now in the improved form of RTMFP. It's a very fast media transfer protocol built on UDP, with IPSec-like security and congestion control baked in. The strength of the protocol was clear to me when Matthew first posted his "preannouncement" on the p2p hackers list. A very shrewd move on Adobe's part.

Are there any downsides? Well, RTMFP is, for now, a closed if breathtakingly cool protocol, and it's tied to Flash Media Server. That means Adobe holds all the cards, and this isn't quite the open media platform to end all platforms. If they open up the protocol and open source implementations start emerging, however, the game's over.

Not that I have much sympathy, but this will also shift a huge amount of traffic to ISPs, with ISPs effectively taking the place of CDNs without getting paid for it. While Flash could implement the emerging P4P standards to limit the bleeding at the ISPs and to further improve performance, this will otherwise result in higher bandwidth bills for consumers over the long term. No matter — I'd rather have us all pay a little more in exchange for dramatically increasing the number of people who can set up high-bandwidth sites on the Internet. The free speech implications are too good to pass up.

Just to clear up some earlier confusion, Flash 10 Beta is not based on SIP or P2P-SIP in any way. Adobe's SIP work has so far only seen the light of day in Adobe Pacifica, not in the Flash Player.


Ian Clarke’s Freenet 0.7 Released

May 9, 2008

After 3 years of development, the latest version of Freenet is here. This version protects users from persecution for even using Freenet, let alone for the content they’re distributing. Freenet is a vital tool against censorship, particularly in countries like China where freedom of speech is often severely curtailed. For the unfamiliar, here’s the quick description of Freenet from their site:

Freenet is free software which lets you publish and obtain information on the Internet without fear of censorship. To achieve this freedom, the network is entirely decentralized and publishers and consumers of information are anonymous. Without anonymity there can never be true freedom of speech, and without decentralization the network will be vulnerable to attack.

Congratulations to Ian Clarke, Matthew Toseland, and the other Freenet developers. The quote on the Freenet site epitomizes the importance of the project:

“I worry about my child and the Internet all the time, even though she’s too young to have logged on yet. Here’s what I worry about. I worry that 10 or 15 years from now, she will come to me and say ‘Daddy, where were you when they took freedom of the press away from the Internet?'”
–Mike Godwin, Electronic Frontier Foundation

Freenet is a vital weapon in that war.

I'm also excited to have Ian as a new addition to the LittleShoot advisory board, one of many things we'll be making more announcements about soon. I've always had great respect for Ian's emphasis on P2P's importance as a politically disruptive tool for free speech. We all got caught up in the copyright wars and missed the big picture, but not Ian.


OpenSocial, Facebook, Google, OpenGadgets

January 28, 2008

I still find all of the attacks on OpenSocial naive. Did anyone ever really think each company would open up its social graph? Apparently so, but I certainly didn't. How could Google possibly get everyone to join while dictating that they all open their data? One step at a time! Perhaps I've never been disappointed because I never conceived of OpenSocial as anything but OpenGadgets. The importance of OpenGadgets shouldn't be overlooked, however. Google Gadgets is a generally sound approach to standardizing gadget making using HTML and JavaScript. It's certainly a big step up from learning some new proprietary markup language invented by Zuckerberg & co.

Perhaps this ties in to my general disdain for Facebook. Sure, it's a heck of a lot better than MySpace, but is that really saying much? It's still a social network, which is just inherently cheesy and doesn't solve any interesting technical problem whatsoever. I find it shocking there's talk of people leaving Google to go to Facebook. Maybe I'm a tech snob, but old Sergey and Larry actually solved a really challenging technical problem at Google. It makes sense to me that they have legions of programmers rallying behind them. Facebook? This guy was a freshman at Harvard who knew a little PHP. Not to mention that he stole the idea from his "buddies" and broke away to do it on his own. All this excitement about a company that isn't technically interesting and has sketchy business ethics? I just don't get it.

I'll stop soon, but let's just touch on the "Facebook platform." Come on. Do we really need another proprietary platform? There's a platform I've come to know and love that's simply an astounding place for innovation. It's called the "Internet." It's really cool. It's really open. I've come to like other platforms like Ning, but I just have no interest in writing something in Facebook's markup language that will make them more advertising cash. People like Facebook because there's money to be made. That's the only conclusion I can draw. It's not a horrible reason, but we should at least call it like it is. All the ramblings about how innovative the platform is remind me of company valuations in the late 90s. A lot of talk. I have yet to see a truly innovative Facebook app. Seriously. Please don't super poke me. Ever. Most Facebook apps are not only uninteresting; I actively wish they didn't exist. They make my life worse and waste my time.

I’ll take the Internet any day.


Elusiva, Virtualization, Google Base, LittleShoot Updates

April 10, 2007

Yes, I’m still alive. Barely. Actually, that’s not true. I’m alive and well, but I’ve been buried chin deep in LittleShoot code for the last several months and have been severely neglecting my blog.

First, I want to encourage everyone to check out my good buddy Igor Shmukler's launch of his office virtualization company, Elusiva. Igor is of that breed of Russian-born programmers, which I'm sure many of you have encountered, with truly intimidating systems (as in operating systems) knowledge. Igor was soldering circuit boards and hacking Windows drivers at the age of 11 in Russia while I was trading baseball cards in Gill, MA. By the way, anyone want 50 Roger Clemens Topps rookies? Hmm… maybe not. Seriously, though, Igor knows his stuff. If you have a need for office virtualization tools, I'd highly recommend checking Elusiva out.

In other news, LittleShoot hacking continues behind the scenes. The endless details have kept us from launching just yet, but we're as excited as ever. For any open source folks out there, there are some really exciting modules to either use or contribute to. The multi-source downloading code, for example, is completely reusable as a separate jar, as is just about every other part of LittleShoot. If you feel like hacking on Ajax, the Hibernate database layer, or the SIP, TURN, ICE, or STUN implementations, they're all carefully parceled out for your coding pleasure. They're just not released quite yet.

LittleShoot has taken many twists and turns. Back in September I was ecstatic to start playing with Google Base, essentially offloading all of our searching and file publishing to Google instead of our servers. I love many of the concepts behind Google Base, like its embrace of open standards such as the Atom Publishing Protocol (APP) and Amazon's OpenSearch, and its simple REST API. In practice, though, Google Base was a disaster for us. The performance was just too inconsistent, and the documentation is contradictory. In some places they say updates to your data are instantly searchable from within your private account, while elsewhere they claim it can take up to 24 hours. In practice, updates are usually instantly searchable but can take up to 24 hours every once in a while. Good luck getting a straight answer from the Google Base folks. If you're considering Google Base for anything mission-critical beyond advertising your eBay sale items, proceed with caution!
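
The appeal of that architecture is how little plumbing it needs: a search is just an HTTP GET that returns an Atom feed, and publishing is an APP POST. Here's a minimal Java sketch of the query side, with a hypothetical OpenSearch-flavored endpoint standing in for the real Google Base feed URLs (check their docs for the actual addresses and parameters):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLEncoder;

public class FeedSearchSketch {
    public static void main(final String[] args) throws Exception {
        // Hypothetical OpenSearch-style endpoint: query terms in, Atom feed out.
        final String query = URLEncoder.encode("creative commons mp3", "UTF-8");
        final URL url = new URL("http://example.com/feeds/snippets?q=" + query);

        final BufferedReader in = new BufferedReader(
            new InputStreamReader(url.openStream(), "UTF-8"));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line); // raw Atom; a real client parses the entries
        }
        in.close();
    }
}
```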

We decided Google Base didn’t perform well enough for future LittleShoot users, and I implemented a centralized searching solution using Hibernate, MySQL, etc. I also hired Leo Kim to set up our server cluster over at Cari.net, and he did a great job implementing a scalable architecture that will keep LittleShoot humming along as the user base grows. Leo actually just got hired by my good old buddies over at LimeWire. I couldn’t quite match their salary offer just yet (ahh, like not even close). Someday, Mark, someday!

Let's see, what else? Oh, I feel extremely fortunate to have retained Wendy Seltzer as our legal counsel, particularly for making sure LittleShoot adheres to the DMCA. Wendy is just awesome. She's so passionate. She's also going one-on-one with the NFL, and she's winning. Check it out. What a great example of what you can do with a little knowledge. It's one thing to have someone who knows the law, but it's something completely different to work with someone who is also truly passionate about the future of free speech and digital media.

Oh, one last thing. Bloomberg News recently published a feature on Mark Gorton, my former boss at LimeWire and a friend. Beyond coming up with the idea for LimeWire and forming our original team, Mark is also passionate about making New York more bicycle-friendly and about bringing free, open source software to non-profits and governments that often don't have the knowledge or resources to take advantage of technology. Mark's an amazing example of how you can live the life you want if you just give it a little effort.

That’s it for now. I’ll write more often as the LittleShoot launch draws near.


MySpace Zapr Link Tool, Bandwidth Hell, and NAT Traversal

September 12, 2006

I just read Mark Cuban's blog for the first time in a while, and I like his fast and loose style, so don't be surprised if my posts get a little less formal.

Moments after catching up with Mark, Mick from Zapr blogged about the new MySpace Zapr link tool. I quickly gave it a spin. At first, it blew me away. The link for downloading some of Henning Schulzrinne's fascinating lecture slides from my machine ultimately looked like this:

http://72.3.247.245:81/GETF?(null)&(null)&adamfisk&HORTON-LAPTO&2f615f21d986d501

I looked at that and scratched my head. I even shot off a quick e-mail to my good buddy Jerry Charumilind to figure out what I was missing. I assumed 72.3.247.245 was the IP address of the Starbucks I'm sitting in here in Sheridan Square, New York City, and that they had somehow figured out how to publicly address my machine, using my locally-running Zapr instance to open up some ports. UPnP? No, that just wouldn't work in most cases. Too brittle. Were they doing an ICE-style direct transfer, the way I would? Not possible without Zapr installed on both ends.

Then I turned to my trusty nslookup and discovered "72.3.247.245" is, in fact, one of their servers. I suspect they use the raw IP address to make it look like they're doing something fancy when, in fact, they're just relaying traffic. Suddenly the world made sense again!
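
For the curious, replicating that check takes a couple of lines of Java: a reverse lookup resolves the raw address back to a host name, which tells you whether you're looking at a home connection or a hosting company.

```java
import java.net.InetAddress;

public class WhoIsThat {
    public static void main(final String[] args) throws Exception {
        // Reverse DNS on the address from the Zapr link. A hosting company's
        // host name, rather than a home ISP's, means it's one of their servers.
        final InetAddress address = InetAddress.getByName("72.3.247.245");
        System.out.println(address.getCanonicalHostName());
    }
}
```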

Don't get me wrong, it's still a nifty service and a nice implementation. It's getting toward the seamless integration of the "dark web" I'd like to see: bringing edge resources, in this case content, onto the web. If they were using open protocols, I'd give them a thumbs up. Unfortunately, we can add them to the long list of services ignoring interoperability. If they are using standards, they're certainly not advertising it. Aside from that, the main problem I see is their bandwidth costs if the service gets popular. Yikes! They're not caching anything, as your file immediately becomes unavailable if you go offline. This means that if something gets popular, not only will the user's machine get hammered, but so will Zapr's servers. The user's machine would just never hold up (think of the most popular YouTube videos hosted on your machine at home), and the Zapr servers would have a heck of a time too.

How do you get around this? Just like Skype does, just like Gizmo does, and just like LittleShoot does: require users on both ends to have the software installed, and pierce the NATs and firewalls to connect the computers directly. That solves the problem of killing the central server. What about the user's poor machine? Keep track of all the locations for the file and load balance requests across the Internet. How you do that is another question I'll leave for another day (hint: not with BitTorrent).


O’Reilly, GData, Open Standards

September 4, 2006

Tim O'Reilly's post about GData and the importance of open standards articulates the argument for expanding the open infrastructure: standardizing the "small pieces" that together do the heavy lifting of the Internet and make everything interoperate.

I like David Weinberger’s “small pieces” phrase, and I’ll adopt it here. Open standards and open source work so well, and so well together, because the pieces are small. Each standard solves a very specific problem. This allows each open source implementation of those standards to be limited in scope, lowering the barriers to entry for writing and maintaining them. The Internet today exists because of small pieces, particularly HTTP, HTML, CSS, XML, etc.

Together, these small pieces form the web platform that has fostered the startling array of innovations of the last ten years. O'Reilly's key phrase is "A Platform Beats an Application Every Time". If there's any lesson to take away from the Internet, this is it. A platform beats an application because it fosters an entire ecosystem of applications that can talk to each other using these small pieces. The ability to talk to each other makes each application far more powerful than it would be as an isolated island. Just like an ecosystem, a platform creates new niches and continually evolves as new actors emerge, creating needs for new protocols.

This is why the current Internet lies in such a precarious state. The ecosystem has evolved and has created needs for new protocols that do everything from traversing NATs to publishing data. As the system becomes more complex, however, we're forgetting the central tenet that small pieces made the whole thing work in the first place. In most cases, standards for solving these problems exist, but private actors either don't realize it or decide to use their own versions regardless. This is like companies in 1994 deciding to ignore HTTP and implement their own versions.

Take NATs, for example. The IETF's SIP, TURN, STUN, and ICE provide an excellent, interoperable framework for traversing NATs. Nevertheless, Skype, BitTorrent, and Gnutella all implement their own proprietary versions of the same thing, and they don't work as well as the IETF versions. As a result, none of them can interoperate, and the resources of all NATted computers remain segmented off from the rest of the Internet as a wasted resource. Skype can only talk to Skype, BitTorrent can only talk to BitTorrent, and Gnutella can only talk to Gnutella, in spite of standards that could make all three interoperate. Skype and BitTorrent even ignore HTTP, completely forgoing interoperability with the rest of the Internet for file transfers.
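
Part of what makes the standards route so compelling is how simple the base case is. STUN, the most basic piece of the puzzle, just asks a server on the public Internet, "What address and port do you see me as?" Here's a rough Java sketch of a raw RFC 3489 Binding Request; the server hostname is a placeholder, and a real client handles retransmission, NAT-type detection, and the other details:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.security.SecureRandom;

public class StunSketch {
    public static void main(final String[] args) throws Exception {
        // Placeholder: substitute any public STUN server here.
        final InetAddress server = InetAddress.getByName("stun.example.org");
        final int stunPort = 3478; // IANA-assigned STUN port

        // RFC 3489 Binding Request: 2-byte type, 2-byte length (zero, since
        // we send no attributes), and a 16-byte random transaction ID.
        final byte[] request = new byte[20];
        request[1] = 0x01; // message type 0x0001 = Binding Request
        final byte[] transactionId = new byte[16];
        new SecureRandom().nextBytes(transactionId);
        System.arraycopy(transactionId, 0, request, 4, 16);

        final DatagramSocket socket = new DatagramSocket();
        socket.setSoTimeout(3000); // real clients retransmit; we just time out
        socket.send(new DatagramPacket(request, request.length, server, stunPort));

        final byte[] buf = new byte[512];
        final DatagramPacket response = new DatagramPacket(buf, buf.length);
        socket.receive(response);
        socket.close();

        // Walk the response attributes for MAPPED-ADDRESS (0x0001), which
        // holds the address and port the server saw: your public mapping.
        final int messageLength = ((buf[2] & 0xff) << 8) | (buf[3] & 0xff);
        int i = 20;
        while (i + 4 <= 20 + messageLength) {
            final int type = ((buf[i] & 0xff) << 8) | (buf[i + 1] & 0xff);
            final int length = ((buf[i + 2] & 0xff) << 8) | (buf[i + 3] & 0xff);
            if (type == 0x0001) { // MAPPED-ADDRESS: pad, family, port, IPv4 address
                final int port = ((buf[i + 6] & 0xff) << 8) | (buf[i + 7] & 0xff);
                final String ip = (buf[i + 8] & 0xff) + "." + (buf[i + 9] & 0xff)
                    + "." + (buf[i + 10] & 0xff) + "." + (buf[i + 11] & 0xff);
                System.out.println("Public mapping: " + ip + ":" + port);
            }
            i += 4 + length;
        }
    }
}
```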

GData, in contrast, gets high marks for interoperability. It uses the Atom Publishing Protocol (APP), RSS, and HTTP. RSS and HTTP are, of course, widely deployed already. APP is a good standard that leverages HTTP and solves very specific publishing problems on top of it. APP lets you modify any data you submit, one of Tim Bray's first criteria for "Open" data. Google Base, built on top of GData, also shares AdSense revenue with users, fulfilling Tim Bray's second criterion of sharing value-added information from submitted data.
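
To see just how small this piece is, consider that publishing with APP amounts to POSTing an Atom entry to a collection URI over plain HTTP, with editing a PUT to the entry's edit link. A minimal sketch, with a placeholder collection URL and authentication omitted:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class AppPublishSketch {
    public static void main(final String[] args) throws Exception {
        final String entry =
            "<?xml version='1.0'?>" +
            "<entry xmlns='http://www.w3.org/2005/Atom'>" +
            "<title>Hello, APP</title>" +
            "<content type='text'>Published with plain HTTP.</content>" +
            "</entry>";

        // Placeholder collection URI -- a real one comes from the service document.
        final URL collection = new URL("http://example.com/collection");
        final HttpURLConnection conn =
            (HttpURLConnection) collection.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/atom+xml");
        conn.setDoOutput(true);

        final OutputStream out = conn.getOutputStream();
        out.write(entry.getBytes("UTF-8"));
        out.close();

        // Expect 201 Created plus a Location header pointing at the new
        // entry, which you can later GET, PUT (to modify), or DELETE.
        System.out.println(conn.getResponseCode() + " "
            + conn.getHeaderField("Location"));
    }
}
```

Everything rides on the HTTP verbs, which is exactly why any client or server that already speaks HTTP can interoperate with it.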

The only part of GData I have a problem with is OpenSearch. OpenSearch is only sort of half an Internet standard: it emerged from a single company, Amazon, in the face of better standards out of the W3C, RDF and SPARQL.

SPARQL and RDF together create an abstraction layer for any type of data and allow that data to be queried. They create the data portion of the web platform. As Tim says, "The only defense against [proprietary data] is a vigorous pursuit of open standards in data interchange." Precisely. RDF and SPARQL are two of the primary protocols we need in this vigorous pursuit on the data front. The Atom Publishing Protocol is another. There are many fronts in this war, however. We also need to push SIP, STUN, TURN, and ICE to make the "dark web" interoperable, just as we need to re-emphasize the importance of HTTP for simple file transfers. These are the protocols that need to form, as Tim says, "a second wave of consolidation, which weaves it all together into a new platform". If we do things right, this interoperable platform can create a world where free calling on the Internet works as seamlessly as web browsers and web servers, where every browser and every server automatically distributes load using multisource "torrent" downloads, and where all data is shared.
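
For a taste of what querying that data layer looks like, here's a hedged sketch that assumes Jena's ARQ engine on the classpath and a hypothetical SPARQL endpoint; the same query would run unchanged against any endpoint serving any RDF data, which is the whole point of the abstraction:

```java
import com.hp.hpl.jena.query.QueryExecution;
import com.hp.hpl.jena.query.QueryExecutionFactory;
import com.hp.hpl.jena.query.QuerySolution;
import com.hp.hpl.jena.query.ResultSet;

public class SparqlSketch {
    public static void main(final String[] args) {
        // One query syntax over any RDF data: ask for names of people.
        final String query =
            "PREFIX foaf: <http://xmlns.com/foaf/0.1/> " +
            "SELECT ?name WHERE { ?person foaf:name ?name } LIMIT 10";

        // The endpoint URL is a placeholder for illustration.
        final QueryExecution qe = QueryExecutionFactory.sparqlService(
            "http://example.org/sparql", query);
        try {
            final ResultSet results = qe.execSelect();
            while (results.hasNext()) {
                final QuerySolution row = results.nextSolution();
                System.out.println(row.get("name"));
            }
        } finally {
            qe.close();
        }
    }
}
```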

Standards are the key to this open infrastructure.


Chris Holmes and Architectures of Participation

August 30, 2006

My good friend Chris Holmes’s recent Tech Talk to Google is now available on Google video. Chris’s work touches on a lot of things, but you can think of it as helping to implement an open standards and open source-based infrastructure for things like Google Maps and Google Earth. You should check out his thoughts.

I get all excited when Chris talks about open standards as a cornerstone of democracy. With the web changing rapidly, we all need to remember this lesson. The web itself was based on the simple open architecture of HTTP and HTML. Analogous standards exist for geographic data. Chris’s work focuses on expanding the web platform to also support geographic data, much as my work focuses on expanding the web platform to support P2P.

I'll write more about "architectures of participation" in the future. While "Web 2.0" is a much catchier name, I think "architectures of participation" clears up a lot of the confusion surrounding these issues. I also think it digs deeper. A lot of Web 2.0 thinking focuses on collaboration at the level of individual web sites. I have no problem with that, and I just love collaborative projects like Wikipedia. There's a distinct lack of discussion, though, about how architectures of participation at the standards layer enable all of this, I think because more people understand web sites than the standards driving them.

Wikipedia would, of course, never exist if we didn't have HTTP and HTML. HTTP and HTML are really quite simple protocols, but look what they've enabled! Imagine what could happen if we really started growing the protocol layer of the web, integrating things like geographic standards and SIP into standard web projects. What could collaborative projects do atop a more powerful infrastructure? I'm not sure, but it's a question we should be taking a harder look at.


Skype and Click To Call

August 29, 2006

Om Malik posted a fascinating piece about eBay pushing Skype as the standard protocol for "click-to-call", the process of clicking on a hyperlink to initiate a VoIP call. As I mentioned last week, Skype's push of its proprietary protocol for click-to-call is as if Yahoo had decided to introduce a separate standard for HTTP circa 1994. Imagine if half of all hyperlinks started with "http:" while the other half started with "yahoo:". Every browser and every web server would have to implement both. SIP is today's HTTP. It powers VoIP with the almost singular exception of Skype, and a standards-based click-to-call link is just a "sip:" hyperlink (say, sip:alice@example.com) that any SIP-aware application can answer. SIP is well-architected and widely implemented in open source projects, just like HTTP was 10 years ago.

The picture gets uglier. Skype's protocol is proprietary, and eBay is pushing it as a standard precisely to lock out all the other players. Imagine if today we had only one web browser and one web server from a single company because the protocols were proprietary. That would have set the Internet back years.

I predict this attempt will fail.  It ignores the importance of open protocols as the glue of the Internet, as the bedrock for the competition that makes it all work.  While the Internet is built on Apache and Linux, it’s also built on the IETF.