Liminal Existence

Fixing Sign-in

Sign-in is tricky business, and it doesn’t get careful enough attention. The folly I’ve allowed myself on Poetica is to try to make things better. After all, identity (and consequently sign-in) was basically all I thought about for the few years before starting on Poetica, and I had some ideas I wanted to try out.

Tim Bray, who’s taken it upon himself to work on these problems for Google, has written up a post about Poetica’s sign-in, and has said some nice things. I wanted to write up some notes on how we think about the problem from an interaction design and technology standpoint, and what’s still missing from our approach. Hopefully, this conversation will inform and increase the chances of success for Tim’s hosted version (and corresponding open source project).

The general principle can be stated simply, in two parts: first, give users a trustworthy way to identify themselves. Second, do so with as little information as possible, because users don’t want to (and simply can’t) remember things like passwords in a secure way.

The first part is solved by any number of readily available standards: OpenID, OAuth, SAML, Kerberos, and so on, to name just a few. We support as many as is possible and practical, including proprietary variants. I use the PassportJS library to handle the protocol bits, and it offers a great many authentication strategies.

The second part – creating a simple and consistent way to sign in while preserving user choice and preventing vendor lock-in – is, frankly, the hard bit. But it’s also easy – there’s just a lot of social inertia to overcome. To cut to the chase, we try really hard to take our users’ most memorable globally unique thing, the thing 99% of sites already use to identify users – their email address – and use that to sign them in without requiring a password or forcing them to remember information that is meaningless to them like “Identity Provider” or “Delegated Authentication”, like this:

  1. If the user already exists in our system, sign them in using the method they used last time.
  2. If not, try to discover an authentication provider using their email address:
    1. Webfinger, to allow user choice regardless of domain, in the future where everyone’s email address supports it, amiright?
    2. MX record – if the provider that hosts your email supports delegated authentication (for example, Google Apps for Domains / Gmail, drawn from an internal lookup table), then we use that.
    3. OpenID and so on (e.g., IndieAuth) discovery against the email address’s domain.
    4. I want to add support for Mozilla’s Persona for discovery, but haven’t gotten around to it yet as the spec has been in flux and I’m not sure it currently supports the bits we’d need.
  3. If we still can’t figure it out, and the user is new, we ask them to sign up using a provider of their choice. Currently we only support Google, Hotmail, Twitter, and Facebook as sign-in options at this step. This is the weakest part of our process, and one we’re actively working to improve.
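For illustration, the MX-record step of the cascade above might look something like this sketch. The table entries, function name, and matching logic are my own inventions, not Poetica’s actual code:

```javascript
// Hypothetical lookup table mapping known MX hosts to sign-in providers.
// The entries here are illustrative examples only.
var MX_PROVIDERS = {
  'aspmx.l.google.com': 'google',            // Google Apps for Domains / Gmail
  'mail.protection.outlook.com': 'microsoft' // hosted Exchange / Hotmail-style domains
};

// Given an email address and its domain's resolved MX hosts, pick a
// delegated-authentication provider, or return null to fall through to
// step 3: asking the user. (The email argument is kept for the earlier
// Webfinger step, which this sketch omits.)
function discoverProvider(email, mxHosts) {
  for (var i = 0; i < mxHosts.length; i++) {
    var host = mxHosts[i].toLowerCase();
    for (var key in MX_PROVIDERS) {
      // match either the exact host or a known suffix of it
      if (host === key || host.slice(-(key.length + 1)) === '.' + key) {
        return MX_PROVIDERS[key];
      }
    }
  }
  return null;
}
```

The real lookup table would be much larger, and a production version would also need to handle DNS failures and domains with no MX records at all.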

Once we have their sign-in provider, we show the user a single “Sign In” button and send them over to approve the request (first time only). We open a pop-up for them to interact with their provider rather than just redirecting so that if they get scared or for any other reason close the authentication window, we can catch that and increase the amount of help and number of choices we offer them. Our current implementation of this isn’t great, and is the other main thing we’re working to improve.
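A minimal sketch of that popup-watching idea follows. The function name and the injectable timer hooks are my own (added so the logic can be exercised without a browser); our actual implementation differs:

```javascript
// Poll the provider popup; if the user closes it before completing
// sign-in, call onAbandoned so the page can offer more help and choices
// instead of leaving them at a dead end.
function watchPopup(popup, onAbandoned, intervalMs, timers) {
  timers = timers || { set: setInterval, clear: clearInterval };
  var id = timers.set(function () {
    if (popup.closed) {
      timers.clear(id);
      onAbandoned();
    }
  }, intervalMs || 250);
  return id;
}

// Typical use in a browser:
// var popup = window.open(providerUrl, 'signin', 'width=600,height=500');
// watchPopup(popup, showExtraHelpAndChoices);
```

Polling `popup.closed` is the pragmatic option here because cross-origin restrictions prevent the opener from receiving events from the provider’s window directly.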

Subsequent sign-ins are automatically redirected back with no user intervention. Even on the first sign-in, most often the user is already signed into their identity provider, so they don’t even need to type in a password. We don’t ask our users if they’ve signed up or not – our database already has that information, so on first sign-in, we take the user through the sign-up process automatically, showing them a tutorial and asking them for any additional information we need.

We’re in private beta right now, but the process has been working really well. There are still kinks, to be sure, but the beautiful thing as far as I’m concerned is that most users don’t even notice that they’re signing in – they just do, and then get down to using Poetica. Which is exactly what we should be aiming for (but often can’t achieve) with security systems. The two bits of negative feedback I’ve heard, always from very savvy users, are that it feels a bit too simple to be trusted (are they being phished? A valid criticism, but I’d argue we’re better on the phishing front than most other solutions) and that our “which provider?” fallback page kind of sucks (it does).

I’m looking forward to contributing to Tim’s project to bring this to more people, and I hope you’ll consider using this approach in future (and current) projects.


Tim Bray recently posted about AccountChooser, a project that’s come out of Google and made its way into the OpenID foundation. Go read that post first, as this post is explicitly a response (as requested). Against my better judgement, I’m compelled to say something, because damn it, I actually care about this stuff and I think it matters. And I think AccountChooser is a terrible, counter-productive approach to solving the increasingly large problem of “identity”.

The problem, stated succinctly, is “how can we allow users to sign in easily, without requiring passwords?” The existing solutions are, simply: email+password (I’m not going to get into why passwords suck), Sign in with Facebook, Sign in with Twitter, and Sign in with Google. I’m not sure if anything else works in practice (maybe LinkedIn? Does anyone use that? Simon/Nat?). The latter three use open standards, but lack mechanisms for discovery and thus end up with a “NASCAR” sign-in interface. AccountChooser is an evolution of the XAuth idea (which was riddled with bad security problems; AccountChooser addresses those, but not the fundamental questions). It’s related to Mozilla’s Persona, but uses OpenID instead of Persona’s custom infrastructure.

Preamble finished, I think there are several key issues with AccountChooser, and I believe each one is sufficient to thwart adoption.

  • Design matters: AccountChooser hijacks your site’s design intent by placing an AccountChooser-branded page in every sign-in interaction. You can customise it to some extent, but the account buttons remain the same. Website owners aren’t going to be happy about giving up this level of control – I’m not, and I won’t implement AccountChooser for this reason alone.
  • It doesn’t work: When I tried to sign in to Tim’s demo site with two separate Google accounts at the same time, AccountChooser (not Tim’s demo) actually failed. I didn’t even try to do anything weird! This is obviously fixable, but not exactly inspiring of confidence for the future, given that they’ve had well over a year to make this simple, primary integration work.
  • Browser independence matters: AccountChooser is premised on the idea that people can’t remember anything, and that the browser can remember everything for them. Despite having a work computer that’s mine, a laptop that’s mine, a phone that’s mine, and a tablet that’s mine, each of those devices is sometimes used for others’ logins (guests, friends with dead or forgotten phones, coworkers, etc.). AccountChooser doesn’t provide an easy path to temporary sign-in, and it demotes the user’s own sense of agency when signing in. Which of my six accounts did I use to sign up to a given site? Not sure? The solution: sign in to each in turn with AccountChooser, because the computer has helped me forget.
  • It’s run by the OpenID Foundation: the gatekeeper domain necessary to make AccountChooser work is run by the unaccountable, corporate, 90% white, 90% male OpenID Foundation. There’s no option to change this, and there’s no story for why this is OK, nor an explanation for why it’s not. By centralising identity in a single domain, AccountChooser effectively thwarts user choice, and does so by placing control in the hands of people and organisations who are not user-focused and whom I actively distrust. I’ve made similar comments about Mozilla Persona, but at least I kind of trust the Mozilla Foundation, and they have a story for how you don’t need to be stuck trusting a single domain that they own.

There are ways to fix this; most of them involve being smart about discovery and making things easy for users without locking those users in to any one solution. I’m trying a strategy that I think already works well at Poetica, and I’m still improving it. In fact, most of the issues I’ve had come down to shoddy OpenID / OAuth implementations (and their implementors not listening to their customers).

I remain doggedly hopeful that we can fix all these things.

Private Webhooks. Private Feeds.

This post is for people who want to be able to subscribe to private feeds, or people who want to be able to communicate from one site to another using webhooks. I’ve talked a number of times on the subject at various conferences, but haven’t posted publicly about the approach.

Thankfully, it’s simple. You can see the whole thing here, in this nice set of slides:

Or, you can look at this diagram that illustrates the protocol flow. Note that all the curl commands needed to make a secure, private connection are included in the diagram.

The goal is to allow crypto-less communication across sites while retaining a familiar user experience. This approach achieves that, I think. What do you think?

Pipe Cleaners

London’s not a clean city, as anyone who’s ever spent more than a day there knows very well. The black crap that builds up in your nose after a tube-heavy day is one of London’s most striking features to the new visitor, and apparently it’s not getting any better.

There’s probably an opportunity here:

Paperback Web

“The only way to get authors and publishers to embrace this device is to sell 20,000,000 of them. You either become the best and only platform for consuming books worth buying or you fail. And the only way to create that footprint in the face of an iPad is to make it so cheap to buy and use it’s irresistible.” — Seth Godin

This statement is total bollocks. If there’s only going to be one best and only platform for consuming books, it’s not going to be some chintzy app made by Apple or Amazon and without meaningful social features. If there’s only going to be one best and only platform, it’s not going to be a DRM solution, unless something’s changed and DRM is now suddenly viable for books where it wasn’t for movies and music.

If there’s going to be one best and only platform for consuming books, it’s going to be the web. The reality is more complicated, of course, and we’ll probably have as many platforms for reading books as we do types of paper. Those platforms will also have learned from the internet, unlike Seth’s suggestions (which are good nonetheless).

Beautiful Lines

Update: This was written just before the iPhone 4 came out, with Apple’s new 326 ppi display. With screens that vary from 75 to 326 ppi (and no doubt, beyond), this stuff matters now. Go and make your sites resolution independent. If you don’t care about the critique of a designer’s blog, scroll to the bottom to learn how to make your site look amazing on all these very shiny new devices.

Typography on the web is ugly. Ragged-right is an abomination, a carry-over from when text rendering was done by Netscape Navigator on 486s with 16 MB of RAM. Oliver Reichenstein writes at length about how Wired Magazine’s typography looks terrible on the iPad, but his own design blog has some not-so-subtle typographic issues. I’m going to quote Oliver by way of a screenshot:

Oliver’s lines are between 80 and 90 characters long, 50% longer than what he recommends here. While he has clear paragraph breaks, it’s effectively impossible to usably increase the font size on his site since everything is done with relative scales — a larger font means that the left gutter grows like a tumor, pushing the text off the right edge of the canvas, while the text container grows too, keeping the line lengths extra long.

More problematically, the iA blog doesn’t adapt to devices with different pixel densities. They have an iPhone-specific stylesheet, but that only looks good for the portrait orientation — flip to landscape, and words-per-inch drops to about one or two. On the iPad, Apple’s reasonably smart scaling saves them, but the margins are horrific:

It’s a little hard (but easier than it should be) to tell that the visual effect of the text running up against the iPad’s right bezel is incredibly distracting, and unnecessary given all that white space to the left. Never mind the fact that the font size is too big on the iPad in landscape mode, and actually a little too small in portrait.

I don’t mean to harp on iA or Oliver — text across the web looks roughly like the visual art equivalent of MS Paint. Hell, this blog is using a fixed-width layout that looks terrible on low-resolution screens without the saving grace of content zoom. The iA site is better than most, but the work of Knuth and Bringhurst and so many others isn’t being honoured.

We can do better.

It’s not hard. The tools we have available to us today are amazing. HTML rendering engines are wicked fast, support letter-spacing and word-spacing and all forms of justification and hyphenation and drop caps and indents, oh my! In the past year, we’ve finally gained the ability to render custom fonts across browsers, basically bringing typography on the web up to LaTeX or Microsoft Word standards. Ahem.

Extracted from some of the work I’ve been doing on rePublish, here’s an unobtrusive approach to presenting properly sized text for any reading device that might happen upon your carefully written text. Don’t worry about your layout; any approach is fine, whether fluid or fixed, grid or not. It’s the job of this approach to work around your constraints, making your text more readable and lovely.

First, if you’re not already, drop everything and size all your text in ems. Your text should be 1.0 em, everything else in ems as appropriate.

* { font-size: 1.0em }
h1 { font-size: 1.5em }
small { font-size: 0.75em }

The strategy from here is to determine how big the text needs to be (at 1.0 em) in order to fill a given text container with a desired number of characters. In the “olden days”, this was done with rulers and text sizing charts. In the modern era, we’re going to do exactly the same thing, except that we’ll build our text sizing chart every time we want to display text.

To do this, we take a string that will give us the average number of characters per pixel. A lower-case alphabet will do, but a closer approximation takes the letter frequency into account: “aaaaaaaabbcccddddeeeeeeeeeeeeeffgghhhhhhiiiiiiijkllllmmnnnnnnnooooooooppqrrrrrrsssssstttttttttuuuvwxyyz”. Next, we’ll measure how many pixels wide that string is in the default font size:

var sizer = document.createElement('p');
sizer.style.cssText = 'margin: 0;' +
                      'padding: 0;' +
                      'color: transparent;' +
                      'background-color: transparent;' +
                      'white-space: nowrap;' + // prevent wrapping, which would break the measurement
                      'position: absolute;';
var letters = 'aaaaaaaabbcccddddeeeeeeeeeeeeeffgghhhhhhiiiiiii' +
              'jkllllmmnnnnnnnooooooooppqrrrrrrssssssttttttttt' +
              'uuuvwxyyz';
sizer.textContent = letters;
document.body.appendChild(sizer); // the element must be in the DOM to have a measurable width
var characterWidth = sizer.offsetWidth / letters.length;
document.body.removeChild(sizer);

The characterWidth variable is an approximate measure of the per-character width in pixels for the default font size.

Next, we need to know how much horizontal space we need to fill. Get out your rulers, and let’s start measuring! The specifics here will vary for every design, but the approach is always the same. First, find the space in which the main body text lives and get its width in pixels:

var contentWidth = document.getElementById('content').offsetWidth;

Now we can find out how many characters long our lines are by dividing contentWidth by characterWidth to obtain actualMeasure. Dividing that by our ideal number of characters per line gives us the relative factor by which we need to scale the default font size. For example, if our target is 66 characters per line, but the current font size produces 85 characters per line, then we need to scale up the font size by 85/66 or 129%.
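That arithmetic can be captured in a tiny helper (the function name is mine, not part of the code below):

```javascript
// How much to scale the base font size so lines hit the target measure.
function fontScaleFactor(contentWidth, characterWidth, targetMeasure) {
  var actualMeasure = contentWidth / characterWidth; // characters per line now
  return actualMeasure / targetMeasure;              // e.g. 85 / 66 ≈ 1.29
}
```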

In order to do the last step, we obtain the current body font size like so:

var measuredFontSize = parseFloat(
    getComputedStyle(document.body, null)
        .getPropertyValue('font-size')
        .replace('px', ''));

And putting it all together, we find our desired base font size with the following formula: desiredFontSize = measuredFontSize x actualMeasure / targetMeasure. Armed with that knowledge, we can circle back around and update the base font size:

var actualMeasure = contentWidth / characterWidth;
var targetMeasure = 66;
document.body.style.fontSize =
    (measuredFontSize *
     actualMeasure / targetMeasure) + 'px';
That’s it! No matter what device your visitors are using, they’ll have an easy time reading your carefully written text. Add hyphenation using the unobtrusive (and fast) Hyphenator.js, turn on justification, and you’re publishing texts that have the same careful and consistent rendering exhibited by virtually every paper book published today.

To round out the approach, it’s absolutely possible and desirable to set maximum and minimum font sizes. Large, high density displays may produce oversized fonts if the content area is flexible, and constrained devices will favour very small fonts. For small displays, this is enough since displaying less text on a small display just makes sense.
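The clamping itself can be as simple as this sketch (the limits shown in the comment are illustrative, not recommendations):

```javascript
// Keep the computed font size within sane bounds for very large,
// high-density displays and for constrained, small ones.
function clampFontSize(desiredPx, minPx, maxPx) {
  return Math.min(maxPx, Math.max(minPx, desiredPx));
}

// e.g. clampFontSize(30, 12, 24) caps an oversized result at 24px.
```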

For larger displays, displaying smaller-than-ideal text will leave more white space or more characters per line, which may or may not be acceptable. If it isn’t, there are a number of options; moving to columns might be a good one — this is what newspapers do, to good effect. A 30” high-pixel-density screen isn’t that far off a broadsheet newspaper, and a series of columns is a far more appealing idea than a website that forces me to scroll down to continue reading a tower of text.

The simpler option would be to use relative sizes for the content container, sizing in ems rather than pixels, and guaranteeing that your content can reach the ideal number of characters per line. For this to work, you need to key the font size off the available pixels or the device’s native resolution, rather than the area into which you’re sizing text. This blog uses a fixed-pixel design, and I’m not in a position to rework it at the moment, but this latter approach is the one I’d use in the future; it is, in effect, the technique I use in rePublish to ensure that text is properly sized regardless of device resolution.

The code to do all this is posted here:

To use it in your site, just paste it into script tags in your template and call it from your onload handler.

I’m releasing it into the public domain, so please modify to suit and share with anyone and everyone. If you create a plugin for jQuery or any other JavaScript framework, please leave a comment here so others can find it.

A Comment, Republished Here for Posterity

In-Reply-To: Let’s Implement the Open Pile! It’ll Be Great! by Johannes Ernst.

You’re absolutely right. Try, as a new commenter, to leave a comment on your blog. Seriously. It’s horrendous. Here’s how it went for me:

Act 1: First, I saw the WordPress logo. So I tried to enter my WordPress username and password. Oops, I guess I shouldn’t have told you that, since now you can dig into your logs and pretend to be me on hosted blogs. When that didn’t work, I thought, well, maybe I’ve forgotten my login info. So, I tried a few other options, none of which worked. I guess you could probably log into a few more sites as me now, assuming you’ve been keeping careful logs…

Act 2: Giving up on the username / password option, but not wanting to go through the login dance for what was now clearly “just your blog,” I tried to use my OpenID login, for which Google has chosen a not-totally-unreasonable URL: - but, of course, that didn’t work. So I tried again, this time using my experience as a web developer to change the URL to, just in case was returning something more useful than, or in case your OpenID library wasn’t following a redirect or something. Fail.

Act 3: Now, since I *really* wanted to leave a comment on your blog, I clicked the dreaded ‘register’ button. And, to my delight, I saw that it wanted a username and an email address. Right, because I’m going to remember my username for this particular WordPress install. Ha! Thankfully, I got my first choice. I guess the kids haven’t started lining up around the block…

Act 4: Being a good piece of software, WordPress did not ask for my password. So, off I go to my inbox to retrieve the password, which thankfully is sitting right there. It’s a horrendous mess (‘*QOj9rc8D$%X’ fwiw) and Chrome doesn’t like the idea of neatly selecting it, because it’s not really a word, y’know? I manage nevertheless, and go back to the other tab (whatever did we do before tabs?!).

Act 5: Now I enter my password, eager to make my blog post. Click enter, and *bam*, I’m pushed face-first into my brand-new WordPress profile page. W00t!


Oh, right. I was trying to make a blog post.

Act 6: So, back I go to the blog to find the post that I wanted to comment on. No, wait, wrong page. Rewind. Back I go to find the post that I wanted to comment on. The post footer says I’m logged in as romeda (oh, wait, I guess I didn’t get my first choice - why did I use ‘romeda’ instead of ‘blaine’? D’Oh!), so I click on the textarea, and away I go!

Now, Umm, What was I going to say?

Oh, yeah:

Facebook Connect is the best experience for both parties, because chances are the commenter has a Facebook account (and if they don’t, do you really want to hear from them?) so that’s good for the site, and it’s really just one click on that pretty blue Facebook Connect button and then one click to approve the connection (nevermind the privacy implications, pshaw), so that’s great for the user.

But that only works if you trust Facebook. You Dumb Fuck.

So, if you’re like me, and try not to be a Dumb Fuck, you should just skip all the bullshit and use email addresses that do automagical discovery, thanks to Webfinger. Which is a shitty name, but do you have a better idea? (no really, if you do, PLEASE tell me) Tantek’s called it RelMeAuth; I think we should forgo HTTP URLs altogether for this, simplify, simplify, simplify, and just use email addresses. Whatever happens under the covers doesn’t fucking matter one iota. You start from the user experience and then, as web developers, we make it work. Period.

So to say it again, you’re absolutely right. The Open Pile is a totally useless heap of marketing buzzwords. The only thing that matters is user experience (well, the experience of developers building this stuff matters, too, but it’s a secondary concern. We wouldn’t be in this business if we didn’t enjoy at least a little pain). Except that the Open Pile has some real gems in it, and I very much look forward to mining for them with you next week [at the IIW]!

Facebook Is My New Boatcar

Facebook’s relentless drive away from privacy has garnered a lot of attention lately. For those of us who have been working towards building decentralised networks for some time now, the attention heaped upon Diaspora comes as no surprise. They’ve done a fantastic job raising the need for open alternatives to Facebook.

Matt Asay’s post yesterday, Facebook has problems, Diaspora isn’t one of them, argues that being free and open isn’t enough. The end-user experience of social networks is what matters, he says. Because a great user experience isn’t at Diaspora’s heart, it’s doomed to fail. His argument is persuasive and, as anyone who’s ever built a user-facing application knows, it’s absolutely correct.

Here’s the thing: while Diaspora’s aim is freedom, that doesn’t mean that open alternatives to Facebook are all prioritising the same thing. The biggest challenge that Facebook is facing, above privacy, above the threat of falling out of fashion, above up-and-coming competition from Twitter or Foursquare or that-social-network-you’ve-never-heard-of, is this:

Facebook is building a Boat-Car.
A Ducky Tours Boat Car

Boat-cars seem like a pretty awesome idea, but the fundamental challenge of combining a sealed hull with external wheels means that boat-cars will never be able to match the performance or aesthetics of cars or boats. Pursuing the entire social market, Facebook has attempted to adapt itself to every new feature of the social web. They started out as a Friendster-alike that emphasised intentional communities, and did it well, providing elegant social utilities to university students. But since then, they’ve systematically bolted on features in an attempt to build a vehicle that does everything that Flickr, Twitter, Foursquare, Email and IM do, to name a few examples. Increasingly, they’re trying to become a framework for the web in general so that everything a web user does is done through Facebook. Instead of offering a carefully constructed vehicle that offers amazing social experiences, they have created a clumsy boat-car that can never truly compete with more focused sites.

What Facebook does have, fundamentally, is the social graph. Where Flickr has a careful treatment of photo sharing, Facebook has photo sharing built on an expansive substrate of communities. Where Twitter has an insane ability to capture and amplify the low-level hum of human communication, Facebook has an insane ability to execute at scale unlike anyone since Google. Where Google has an intimate understanding of the flows of data on the web, Facebook has an intimate understanding of how to keep their users engaged. Most importantly, Facebook has hundreds of millions of users, and the network effects are in full force.

While no one will ever be able to overcome Facebook’s advantage on Facebook’s terms, just as no one was able to defeat Microsoft on Microsoft’s terms, it’s downright easy to create better social experiences than Facebook’s. It’s easy to create better tools than Facebook’s. It’s also easy to imagine a better social environment than theirs. Logging into Facebook is for me like walking into a room where everyone I’ve ever met is standing around, talking to each-other. My bosses, my family, friends old and new, co-workers, acquaintances, everyone! It’s like attending a nightmare wedding in hell.

Social Anxiety

The challenge isn’t social network portability; I regularly fly all the way around the world just to reconfigure my social network and have different conversations than the ones I normally have. I’ll gladly log into a different site if it means I can see just work-related conversations, or just family photos. The challenge is that the only viable place for those activities today is Facebook. Their network effects are of so much larger a magnitude than anyone else’s that creating a new social site without leveraging Facebook’s network is a downright crazy idea. Therein lies Facebook’s weakness, and the weakness of every dominant but “closed” network.

This is where open, decentralised alternatives come in. Instead of relying on Facebook’s social graph, social web tools can be built on top of the one true social network: everyone. Instead of building boat-cars — ugly tools that try to do too much — developers could focus on building the best photo sharing site in the world, or the best recipe sharing site, or the best book sharing site. In this world, if someone wants to come along and compete, they do so on features and execution, without first having to steal away all the users from the site that got there first. We’d end up with better experiences and tools instead of just dominant ones.

Sailboat Regatta

Facebook’s tools might be the very best for right now, but it’s frankly ridiculous to think that Facebook will be able to provide either the tools or even the infrastructure for the next five or ten or twenty years of development of the web. The job of serious web developers today is to ignore the siren call of Facebook, Twitter, Apple, Adobe, or any other comers that would define the parameters of the web for them, and instead build the best experiences possible. If you protest, and say that Facebook allows you to connect your users with each-other more easily than any alternative, ask yourself if Facebook’s interface is the best you can imagine, or if you feel closely connected to your network on Facebook (or Twitter, or any “platform” provider), or if your network on Facebook represents all of your social interactions. If the answer isn’t emphatically YES!, then it’s worth your while to consider the alternatives.

Hell, if you work at Facebook and you can’t emphatically answer yes to those questions, then it’s worth your while to consider the alternatives. After all, if you can’t beat ‘em, join ‘em. And trust me, you can’t beat the web, because in the long term, the web isn’t subject to anti-trust suits, doesn’t have financial constraints, and can keep evolving until something works.

Three Simple Things That Browser Developers Can Do Today to Make HTML5 Apps Real.

I’ve had this draft sitting around for a while now, but prompted by Tim’s and Ben’s posts on HTML5 and the web as they pertain to rich applications, herewith some thoughts based on fighting with HTML5 Apps in the context of rePublish.

Cross Domain, Already

The largest barrier to HTML5 as a viable platform is cross-domain AJAX. Full stop. If you think I’m wrong or just whining and that I should just use JSONP or CORS, go try building any of the following without relying upon a server-side component and all the privacy, cost, and maintenance issues that such a beast entails:

  • A .doc editor.
  • An ePub reader.
  • An image editor.
  • A multi-protocol IM client.
  • A P2P client.

The short answer: you can’t. Yes, there are HTML5 Offline Apps, which help in that apps can work offline until they can sync to a server, but that’s not a complete answer. Solutions exist (WRT, Widgets, etc.) but they’re for widgets, not apps, and in any event they’re not a single-serving approach. You still need to repackage your app for each new runtime environment.

If we’re going to build applications that read documents in HTML5, we need cross-domain requests. JSONP doesn’t cut it. CORS doesn’t cut it. Downloadable applications don’t need CORS headers in order to make HTML requests; why should installable HTML5 Apps be subject to this crippling restriction, based fundamentally in a stupid policy decision around cookies?

With the advent of the File API, client-side code can finally read local files (though not directories or recursive paths). So there’s that.

Web Protocol Handlers

Once upon a time, when faced with a mailto: link, the operating system or browser would do the right thing: look up the application that the user has chosen to handle mailto URIs in a system registry, launch it, and create a new message addressed to the linked address.

Email links don’t work for me across all the browsers I use, because there aren’t hooks to tell browsers (or the OS) to use a web URL as the handler instead of some application in my path. This is stupid, and based entirely in technology decisions made over twenty years ago. Thinking about the future, for example, wouldn’t it be awesome if Delicious or Digg could register a “share” protocol handler, so that instead of having a horrible NASCAR mess of social sharing links, we could have our browser fill in the blanks with the site we use — “share this using your preferred tool” rather than “share this with any of these tools you’ve never heard of.”
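To make the idea concrete, here’s a sketch against the registerProtocolHandler API that shipped in Firefox 3. The scheme, handler URL, and wrapper function are hypothetical; note that current specs require non-whitelisted custom schemes to carry a “web+” prefix:

```javascript
// Wrap navigator.registerProtocolHandler so a sharing site could claim a
// hypothetical "web+share" scheme. The %s in the handler URL is replaced
// with the shared link when the handler fires.
function registerShareHandler(nav, handlerUrl, title) {
  nav.registerProtocolHandler('web+share', handlerUrl, title);
  return 'web+share';
}

// e.g., in a browser:
// registerShareHandler(navigator,
//     'https://example.com/share?url=%s', 'Example Sharing Service');
```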

… and Content-Type Handlers

Likewise, when clicking a link to a zip archive, the browser would check the Content-Type header for a MIME type (in this case application/zip), look up the application designated to handle application/zip files, and launch it with the file in question as an argument.

There’s a W3C / WhatWG proposal based on a feature added in Firefox 3 to add both protocol and content-type handlers that can be fielded by HTML5 Apps, but all you get at the other end is a URL - there’s explicitly no way for your HTML5 app to do anything with the URL, because of cross domain restrictions.

Sure, you can refer your app to your server component or build a “native app” for every OS to which you’d like to deploy, but there are a whole bunch of issues that arise if you’re not trying to lay your dirty hands on every bit of your users’ UGC. Privacy, performance, UX, policy, bandwidth, costs; these are all non-trivial factors that are much easier to deal with in the context of client-side applications than they are in the context of a vendor-owned website.

So, Browser Developers

This is the era of platform independent client-side web apps, right? Applications that are web-native, weaving and linking and knitting the strands of information and communication together, doing so using the underlying technology of the web. Cocoa apps can’t carefully represent the sorts of information flows that happen on the web, nor can Windows apps or any traditional desktop app. The conceptual advantage that working in HTML and Javascript has over so-called “native” code is immense.

But, we need tools to build these things. CSS Animations are great, Canvas is amazing, but how about some low-level tools? Mobile Safari already has the “add to home screen” button, why not add something similar to the desktop browsers? “Install this [web] application [with extra permissions]” would be an amazing boost for the web, fill in missing pieces in Tim O’Reilly’s Internet Operating System, and give us some real alternatives to the multifarious app stores that lurk in every corner.

tl;dr: HTML5 Apps need cross-domain requests, protocol handlers, and content-type handlers in order to be first-class citizens. Browser developers can and should make this happen, sooner rather than later.