Posts Tagged ‘web’

This Week in Google’s discussion of H.264

In Web development on January 27, 2011 by Matt Giuca

I’ve been an avid listener of This Week in Google for the past few months now. Don’t get me wrong — I really love Leo, Jeff and Gina. They are usually very insightful and nearly always defend Google when they need defending (e.g., in the Wi-Fi spying debacle), but they’ll go against Google when they do the wrong thing (e.g., the Verizon net neutrality deal). Being three web entrepreneurs, these guys seem to really “get” the concept of the open web. So I couldn’t understand why, in episode #77, the three of them, unanimously and almost without any debate or counter-arguments, bagged out the recent Google decision to drop H.264 from Chrome (which I blogged about previously). They brought up all the same arguments: Google are just doing this to get back at Apple, Google are trying to support Flash, Google made an Evil move. I tweeted “Surprised & dismayed to hear a unanimously anti-Google stance re #h264 on #TWiG. Was expecting a less short-sighted and Apple-centric view.”

So I was keen to catch their follow-up discussion this week, and it wasn’t what I expected. They came back a little sheepish (from having been too hard on Google), a little apologetic, and a little more understanding. Unfortunately, though, they seem to have misunderstood some crucial technical details, rendering their new defense of Google (“it was a bad PR move”) almost entirely invalid. I took the time to transcribe the entire conversation on this topic, from TWiG #78 (from 26 minutes in):

Leo: Now it’s been a week. We’ve absorbed the dropping of H.264. We talked a lot about it, if you didn’t hear last week’s TWiG, Kevin Marks was on. He was great, he explained it all. We’ve had time to think about it. Do either of you feel a little bit better about Google dropping H.264, cause I’ll say up front, I kinda do. I kind of, maybe I’m a true believer, but I kind of think that they took a pretty big hit in the name of openness. Do you feel better, Gina?
Gina: They definitely took a pretty big hit. We were pretty hard on them. I do feel a little better about it, although I have to say I’m looking at the Engadget article now. Did they issue a public defense?
Leo: Well it wasn’t very …
Gina: It wasn’t very public, was it?
Leo: It was public, but I don’t think it was that compelling. They basically just re-iterated, “no, we did this for openness.”
Gina: And the core of this argument is “we’re not evil, like we’re doing this thing that could be interpreted as evil or not, just trust us, our motives are not evil.”
Leo: I don’t think that’s a good argument. (laughs) “Trust us, we’re not evil.  Trust us! Really! Honest!”
Gina: Well, you know, we care about the future of the web, we want to make it open.
Leo: I mean, it carried water for me because I believe fundamentally that’s true. I know far too many Google people, which is always a mistake, I admit it. In fact one of the things I’ve always done in my career is distance myself from the companies I cover, cause once you start knowing them and liking them it’s very hard to believe ill of them.
Jeff: Amen.
Leo: So maybe I know them too well.
Gina: I admit to that too.
Leo: But all the engineers I know at Google really are highly committed to open. So even if corporate says “No” for business reasons, the troops at Google aren’t going to go along for the ride. And I do believe you can say, look, we understand WebM may be slightly encumbered, may be slightly undesirable, nobody’s using WebM, no browser supports it, but we want to put a flag in the sand. We want to say “open is better.” There’s gonna be some pain. We’re gonna suffer some pain. I understood their argument. Don’t confuse Flash (cause we said that this de facto helps Flash) — but they said OK, yes, maybe, but don’t confuse Flash with H.264. These are different things. What we are talking about is the video, and I did say this last week, it’s the <video> thing. What is HTML5 gonna do when it sees <video>. And you can all add plugins for every codec you want, that’s fine, but we are gonna put a stake in the sand and say, “What it should do is use WebM. It should use an unencumbered open standard.”
Jeff: Isn’t that a different way to say it, Leo, is that that’s a default, rather than support. They said “We’re going to stop supporting H.264. You can ship the codec with it, but we’re gonna default to WebM.”
Leo: If you use Windows, and Chrome, it will support H.264, cause Windows builds in the codec.
Gina: Well the Chromium project never supported H.264.
Leo: Nor did Mozilla.
Gina: But Chromium and Chrome are different, right. So Chrome is the Google product that they’re gonna ship, so they used to support H.264. Chromium didn’t, but now they’re not. I think honestly this was just a bad PR move.
Jeff: I think that’s what I’m saying, is you can say it differently, and say “we’re gonna default to open now“, and everyone would have hugged them.
Leo: I think the problem is it’s too technical of a … that’s kind of what they’re saying but it’s technical. They said “<video>”, but I don’t think that most even technical people understand the difference between a plugin and native support. There has never been in HTML a definition of what happens when you see video. In every case, up until now, when you see video you have to have a plugin. HTML does not support video. HTML5 will support a <video> tag. What happens when that tag happens? What does the browser do? Does it go out and launch a plugin? Well it could, in fact it will have to in Safari, and that’s why Google says they’re gonna make WebM plugins for Safari, we’re gonna make a WebM plugin for IE9, but that’s interim. The real question is if it’s an HTML5 browser, and it runs across a <video> tag, has no plugins, has no Flash, there’s nothing installed, what’s it gonna do when it sees <video>? And we believe that all browsers should default to a baseline of WebM. If you want to add more, that’s fine.
Jeff: That’s fine. The problem here is the precedent set by Apple, with Flash. It sounded like that.
Leo: You’re right. You know what, you’re exactly right. You hit the nail on the head. In the context of what Apple did, it sounded like they were doing the same thing.
Jeff: Exactly. That’s the PR problem they had, and so it was so easy to say “you can install whatever you want, folks, but we are now going to default to open.” Would have been fine.
Leo: Perfect. You’re right.
Jeff: “We’re gonna not support H.264,” that sounds like such a Steve Jobs thing (love you, Steve, hope you’re better).
Leo: They should hire us to do their PR. (laughs)
Gina: Yeah, I wonder. I don’t think they expected the backlash they got, like some project manager posted this on the Chromium blog, I don’t think they maybe expected the response that they got.
Leo: I think that’s the case, and this is one of the reasons I like Google. They’re not that polished actually.

So firstly, some factual issues. “No browser supports it” is bullshit — WebM is fully supported in recent releases of Chrome/Chromium and Opera, and in recent betas of Firefox. As for the claim that Chrome on Windows will use H.264 regardless, I believe this is false (and Leo won’t like it). It is my understanding (supported by this blog post) that Chrome, like Firefox and Opera, but unlike IE and Safari, uses only its own bundled codecs (Theora and WebM, and for the time being H.264), not the system ones. This means that dropping H.264 support from Chrome would mean it couldn’t play H.264 video, even if Windows had the codec installed. I think that’s the way it should be, because otherwise web admins will upload videos in a thousand random codecs of whatever they have installed on their system, and the web will fragment. As argued in the same post, it is better to have a small set of standard codecs in the browser.

Now I really don’t get what Gina meant by “this thing that could be interpreted as evil or not, just trust us, our motives are not evil.” As I argued previously, I don’t like having to trust companies not to be evil, and in this case, I don’t have to. Google has granted a perpetual royalty-free patent license for WebM, so they cannot turn around and stab us in the back. Google’s intentions here are out in the open. They don’t want browser manufacturers (which includes themselves) and video content hosts (which includes themselves) to have to pay a license to serve and play video. This isn’t an altruistic position. It is pragmatic and in their own interests (and happily, ours too). When a company does something altruistic, you need to be suspicious and look for the hidden agenda. There is no hidden agenda here; they are looking out for their own bottom line, and the health of the web (which they profit from).

And then we get bogged down in this very murky discussion of “defaults” versus “support”. Re-reading the transcript, Leo seems to be suggesting that which codec to use is up to the browser — somewhat true, but only if the website supports both codecs. They seem relieved to discover that this news means not that Chrome won’t support H.264, but only that it will prioritise WebM over H.264 if given the choice. Is that what they’re saying? It isn’t correct. It is true that Chrome won’t support H.264. If Chrome finds a website that only supports HTML5/H.264, it will not play the video. Jeff claims that saying “we won’t support H.264” is an Apple thing to say, whereas saying “we default to WebM” is what they should have said. I disagree — “we won’t support H.264” is the truth, whether you like it or not. “We default to WebM” sounds exactly like the sort of spin Apple would put on this announcement. It seems we’re now in such an Apple-centric world that the truth is considered “bad PR”, whereas a vague and confusing spin which hides the true nature of the underlying technology is something people are happy with.

But it gets murkier. They seem to be confusing “no support” as in “this software does not contain this feature” with “no support” as in “you are expressly (technically and legally) forbidden from adding this feature to your own device.” This is what distinguishes Google from Apple, and that’s what they missed. Because it’s true that Google will no longer be supplying the H.264 feature in Chrome, just as Apple decided not to supply the Flash feature on iOS — in that regard, this is the same thing as Steve Jobs saying “we will not support Flash.” Which, by the way, I have no problem with — Jobs can put any software he wants in his product, and leave out any he doesn’t want. The difference is that not only is Chromium open source, so you can add the feature back if you really want it, but that even the closed-source Chrome supports plugins, so you could add an H.264 plugin. iOS explicitly forbids you from installing Flash.

I feel like it’s been a week, we’ve had all this arguing, and we have finally forgiven Google for all the wrong reasons. We should be celebrating a major step towards removing the last holdout for the open web. We should be realising that however much you like H.264, it would never have been supported by Firefox anyway (not because Mozilla are pig-headed, but because as an open source company they literally can’t support it, and even if they did, Linux distros would have to take it out again), and that this is precisely why it is a bad thing. Instead, we have forgiven Google because it turns out we can work around this decision and use H.264 after all. Way to entirely miss the point.

A response to criticism over the Chrome H.264 debacle

In Web development on January 14, 2011 by Matt Giuca

There has been a lot of arguing on the Internet in the past few days over Google’s decision to drop H.264 video support from Chrome. I was really surprised to see the majority of posts (I read) were negative towards Google. I thought the Internet had more sense. I think this move just might save us from twelve years of patent hell so I absolutely applaud Google for doing it.

So here is a rebuttal to pretty much every negative comment on the Chromium blog (linked above), as well as this Ars Technica opinion. If there are any negative arguments I’ve missed, yell in the comments and I’ll either add a rebuttal, or acknowledge it as a fair point. Now, on the exact licensing terms of the MPEG-LA, I refer to this document and this additional press release. Firstly, as I understand it, here are the terms of the license, divided up (simplified) into three use cases:

  • For manufacturers of encoders (that is, programs which create H.264 video), there is a license fee with a maximum of $6.5 million per year.
  • For manufacturers of decoders (that is, programs which view H.264 video, such as web browsers), there is also a license fee with a maximum of $6.5 million per year.
  • For distributors of content (that is, websites which serve H.264 video), there is a license fee with a maximum of $6.5 million per year (with a maximum of $100,000 per video). However, for distributors of content where the end user does not pay, there are no royalties for the life of the patent.

H.264 supporters are quick to jump on that last point, claiming that this makes H.264 free. But it doesn’t, because you still need to pay if you are serving video behind a paywall. It is still illegal to make a web browser without paying the fee (assuming H.264 becomes a mandatory feature of all web browsers), and it is illegal to make software for encoding video without paying the fee. Importantly, these fees still apply even if you are giving away your software for free. This means the end of open source web browsers and video encoders.

Now basically every comment boils down to one of the following complaints:

  • It’s not like Google can’t afford the $6.5 million-per-year fee.
    • True, but that misses the point. Firstly, when you say “Google should continue to pay the license fee,” you are making a business decision for them. If a company has a $6.5M-per-year expense and they want to cut it, they are well within their rights to do so, even if it affects you personally or they could afford it. They don’t have a contract with you to provide this service. Secondly, not all browser manufacturers can afford it. Mozilla can’t (or won’t), and if I decided to write a browser tomorrow, which suddenly became popular, I wouldn’t be able to afford it either. If Google paid the yearly fee, they would be asserting their vast wealth, and saying “look, we’re one of the few companies in the world that can afford to make a popular web browser.” By not paying the fee, and dealing a blow to H.264, they are saying “we want smaller companies and people to be able to make browsers too. More browsers is better for the web.”
  • Google is a hypocrite because they are dropping H.264 in the name of openness, but still support Flash. / H.264 is patented but at least fully open, whereas Flash is closed source. (This makes up about a third of the comments.)
    • Not even Google can change the world overnight. Flash is currently so entrenched that you couldn’t possibly drop support for it (unless you are Apple and millions of developers and users will bend to your every whim). Google will probably eventually drop support for Flash, once HTML5 is far enough along. But for now it is simply impractical to drop Flash support, while it is quite practical to drop H.264 support.
    • In terms of which is more open out of H.264 and Flash, both are published standards. H.264 has open source implementations, but is patented, whereas Flash is a closed-source implementation that nobody has fully replicated yet. This is a totally different issue. With Flash, Adobe is protecting their implementation — their own work (as is their right), but they won’t stop competing implementations. With H.264, MPEG-LA is outlawing all possible implementations, even those which they didn’t write.
    • Update: Can I just re-iterate the first point? Google has to pay to put H.264 in the browser. They don’t have to pay to put Flash in the browser. It’s their wallet, not yours. It isn’t hypocritical to use something someone gives you for free (even if it’s “bad”), yet not be willing to pay for something else which is “bad”. (Hey, I should know: I used an iPod which my parents bought me for years, but like hell I would pay for one!)
  • Google are probably being paid by Adobe to hold back adoption of HTML5 video
    • Google have ties with Adobe due to their support of Flash on Android. But supporting Flash natively is just a way to make the browsing experience better. As I said above, Flash is too entrenched to get rid of, whatever your ideals are — for now. I’ll believe accusations of back-room deals when I see them.
  • Google is a hypocrite because YouTube supports H.264.
    • Yes. Your point? Everybody supports H.264 (at least in a Flash wrapper). That’s precisely the problem Google is trying to break away from. YouTube also supports WebM.
  • Stupid move, and it won’t have any impact. Chrome doesn’t have enough market share. / Nobody will bother to encode WebM just for the benefit of Chrome users.
    • True, Chrome only has 10% market share, and that might not be enough to convince web admins to support the format. But Firefox doesn’t support H.264 either (and will soon support WebM). Combined, they have over 30% of the market.
  • This is bad for the open web because sites will just go back to supporting Flash. / This will slow the adoption of HTML5 video.
    • Perhaps a bit, but since Firefox has double the market share of Chrome, and it doesn’t support H.264, this was already a problem — we can’t move to a pure HTML5+H.264 web while Firefox doesn’t support it. It is better to stay with a proprietary old standard while we build towards an open new standard than transition from a proprietary old standard to a proprietary new standard. Once the transition is complete, we’ll be too exhausted to do it again for another ten years. I disagree that transitioning to H.264 is better for the “open web”.
    • I couldn’t believe this extremely narrow-minded comment on the Ars Technica article, under the heading “This hurts the open web”: “even Firefox users would be able to use H.264 video through Microsoft’s plugin for that browser” — how the hell can we call it the “open web” if users of the leading open source browser are forced to use a proprietary plugin which only works on a single proprietary operating system? That’s just the same as Flash, only worse, because it’s for Windows only.
  • This is bad for site admins because now they have to encode their video in two formats.
    • True. But it’s already bad for site admins who have to support both Flash and HTML5+H.264 for Apple devices (though to be fair these both support the same underlying H.264 codec). These same admins are looking forward to a future where they can drop Flash support, but cannot due to browsers which don’t support HTML5. The problem is, H.264 video will never be supported by the open source browsers, so they will always have to support either Flash or an open video codec. This move might help move towards a single, open video codec.
    • Also, since Adobe has announced that Flash will soon support WebM, site admins will be able to provide HTML5+WebM content with a fallback to Flash+WebM for browsers which don’t support WebM directly, leaving only iOS (which supports neither). That would be just as reasonable as HTML5+H.264 with a fallback to Flash+H.264, only Apple can implement WebM if they want to (whereas open browsers cannot implement H.264). (For a concrete idea of how this kind of fallback is detected, see the sketch after this list.)
  • This is bad for users because suddenly a whole bunch of sites will stop working.
    • False. Currently, there are no websites which exclusively support HTML5 video; they all fall back to Flash (obviously this won’t break the web, because otherwise Firefox would already be broken). Therefore, now is the time for any willing browser manufacturers to drop support for H.264 without Flash, before it becomes the standard. It is too late to drop support for Flash without it first being replaced by another standard. Dropping H.264 at this early stage will not affect any users. If we wait, it will be too late.
    • By contrast, when Apple dropped support for Flash, that was bad for users because it broke a shitload of existing websites. Apple was so powerful that they managed to get pretty much the whole Internet to switch over to their new patent-encumbered standard, H.264. That was bad for site admins and users.
  • All the other browsers support H.264. If only Google continued to support it, we could finally agree on a standard.
    • False. Firefox has never supported H.264 (without Flash), and never will. An open source product can’t ever support it, so therefore we will never have an open source browser supporting this standard. Bear in mind that Chromium, the open source version of Chrome, has never supported H.264 either, for the same reason. This is part of Google’s motivation: to make Chrome more open source (the H.264 part of Chrome is proprietary, by definition). Hence the announcement: “we are changing Chrome’s HTML5 <video> support to make it consistent with the codecs already supported by the open Chromium project.”
  • H.264 was around in HTML5 first. Why is Google trying to change it now?
    • False. HTML5 video was first introduced with Ogg Theora as the standard format. Due to refusal by certain browser manufacturers to support Theora (largely Apple’s support of H.264 on iOS), the codec was removed from the standard. As it stands now, browser manufacturers are free to implement any codec.
  • This won’t kill open source browsers. You can still distribute source code, just not binaries.
    • False. Or, maybe technically true, but that isn’t how open source works. “Open source” does not refer to programs distributed only in source code. It refers to programs whose source code is available. The vast majority of open source users do not build their software from source. If, for example, Mozilla were to put H.264 support in Firefox, it would become illegal to distribute Ubuntu with a binary of Firefox, and all Ubuntu users would need to compile Firefox from source. Even Google, who currently pays the $6.5M-per-year fee, does not include H.264 support in the open source version of Chromium.
    • Furthermore, even if you could argue in court that you didn’t distribute an implementation of the patent, only the source code, this is not a risk many small open source developers would be willing to take. The cost of implementing a web browser is simply too high under this regime.
    • Edit: Nick Chadwick points out that FFmpeg (an open source video encoder/decoder) supports H.264 for both encoding and decoding. I am openly wondering how they are able to distribute binaries (and the implications for distros such as Ubuntu). Edit: David Coles points out that free distributions such as Debian and Ubuntu have removed H.264 support (and other codecs) for this reason.
  • This is about free-as-in-cost (gratis) not free-as-in-speech (libre). H.264 is open, you just have to pay for it.
    • This is a point the Ars Technica article raised. It treads the subtle line between gratis and libre — the implication being that you shouldn’t make this a moral issue when it is merely a financial issue. The problem is, patents are a free-as-in-speech issue. Gratis is about how much something costs, whereas libre is about what you can do with something once you have it (i.e., your freedoms). Now if MPEG-LA had developed an H.264 encoder and decoder, for example, and were charging for it, that would be a gratis issue. You would have to pay for it, but if someone figured out the protocol, they could make their own without being restricted. Instead, the MPEG-LA has given away the spec for free (gratis). But in doing so, they have told you what you can and cannot do with it, and what you cannot do is make a web browser without paying them. Imposing a financial cost on somebody if they take a certain action (once they already have your property) is limiting their freedoms (libre), not charging for a service (gratis).
  • H.264 is an open standard. The MPEG-LA have promised not to charge any royalties.
    • No, it isn’t. As I outlined above, MPEG-LA will still seek royalties from encoder and browser manufacturers, and site operators distributing behind a paywall.
  • WebM is an inferior codec / WebM isn’t as fast because H.264 is implemented in hardware.
    • Maybe it is inferior, but it’s the best codec we have that isn’t patent-encumbered. Here is a technical analysis (which is way over my head) of the WebM codec, which doesn’t speak well for it. Edit: See the analysis of WebM and its patent risk (in reply to the above link), which basically explains that WebM was specifically designed to be inferior to H.264 to avoid treading on MPEG-LA’s patents. It is sad times indeed.
    • Now of course, H.264 is implemented in very fast hardware on iPhones and many other devices, whereas WebM needs to be decoded in software. But of course, if WebM took off, newer devices could support it in hardware instead, so that argument doesn’t work in the long run.
  • Nobody other than Google supports WebM; it isn’t going anywhere.
    • For what it’s worth, the WebM Project page shows a very large list of supporters, including Mozilla (WebM will be implemented in Firefox 4), Opera (Opera already supports WebM), Adobe (WebM will be implemented in an upcoming version of Flash), FFmpeg, AMD (owner of ATI), NVidia and ARM.
    • Not to mention, Google. Between Chrome, Android and YouTube, that’s a significant chunk of the browser, hardware and content delivery markets.
  • WebM might be patented too.
    • Of course the problem with patents is you never know when you’ve infringed one. Unlike copyright, where if I create something entirely on my own then I own it, with patents I can infringe someone’s patent merely by inventing the same thing they did. Therefore, nobody can say for sure that WebM doesn’t infringe on any patents and MPEG-LA has audaciously suggested they will begin charging for use of WebM — typical behaviour of a patent troll. But so far, nobody has named any specific patents infringed by WebM.
    • Edit: Here is an analysis of WebM and its patent risk.
  • This will put Apple at a disadvantage; if the web moves over to WebM, iPhone won’t be able to play video any more / This is a power play by Google to lock out the iPhone.
    • If WebM does become the standard, Apple can easily implement it. It is open source and patent free, so it’s not like Google is trying to make everyone use a format they control and lock out the competition. (Of course, implementing WebM in hardware isn’t trivial, but I’m sure Apple have the resources to do it if they were pressed to.)
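
To make the HTML5+WebM with Flash fallback strategy (from the site admin point above) concrete, here is a minimal sketch of how a site can choose a source at runtime, using the standard canPlayType() method on a video element. The helper names useHtml5Source and embedFlashPlayer are hypothetical placeholders for site-specific code, not a real API:

// Minimal sketch: feature-detect codec support, then fall back to Flash.
// useHtml5Source and embedFlashPlayer are hypothetical placeholders.
var video = document.createElement("video");
if (video.canPlayType && video.canPlayType('video/webm; codecs="vp8, vorbis"')) {
    useHtml5Source("movie.webm");   // HTML5 + WebM
} else if (video.canPlayType && video.canPlayType('video/mp4; codecs="avc1.42E01E, mp4a.40.2"')) {
    useHtml5Source("movie.mp4");    // HTML5 + H.264 (e.g. Safari, IE9)
} else {
    embedFlashPlayer("movie");      // Flash fallback (Flash+WebM once Adobe ships it)
}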

Edit: I just found this post by an Oracle developer which provides some similar rebuttals to mine.

URI Encoding Done Right

In Web development on May 24, 2008 by Matt Giuca

Well I guess it’s about time to actually make some content-filled posts!

I’m going to start by talking about web development, and correctly handling URIs on the client and server side. This is something that I expect all good web frameworks should be able to deal with, but I’m not a big fan of such contrivances, and I believe that whether you use one or not, you should understand how this stuff works.

With no work at all, you can make URIs that work for a bunch of strings you might try, but they’ll fail as soon as someone uses a special character. With a tiny amount of work, you can make your URIs encode and decode correctly for most things people might try. But there are still a bunch of edge cases you might not know about. This article is about the extra work you need to do (or at least the extra things you need to think about) for your URIs to encode and decode correctly for all characters including non-ASCII / Unicode ones.

Notes:

  • I use the term “URI” – Uniform Resource Identifier – to refer to what are commonly called “URLs”. It’s just a slightly more general term for the same concept.
  • Apologies for the curly quotes in this article. They’re done automatically by the blogging software; please consider them to be straight quotes when quoting characters and strings.
  • You should know a bit about Unicode – if you don’t then go read up on it! You should not be programming if you don’t know at least a bit about Unicode.

URI Encoding Rules

The issue here is that URI syntax defines a set of “reserved characters”, including the following: :, /, ?, #, &, =.

These characters are used in the URI syntax as delimiters. For example, the ‘?‘ indicates the beginning of the query string, and the ‘&‘ separates key/value pairs in the query string. When you put arbitrary strings into a URI, it becomes important to escape characters to avoid confusing them with delimiters.

For example, say you are writing a wiki, and you have a page called “black&white”, you might create a URL like this: “?page=black%26white&action=view”. Note the “%26” is a proper URL-encoded form for the ‘&’ character – it is a ‘%’ sign followed by the 2-digit hexadecimal ASCII code for ‘&’ (0x26). It is very common to see space characters (‘ ‘) encoded as “%20”, the ASCII code for ‘ ‘. Spaces may also be encoded as ‘+’ characters (though this is the behaviour of HTML forms, not part of the URI syntax).

The lesson is, you need to escape such characters, or they’ll be misconstrued as delimiters. If you wrote this wiki in JavaScript, and had code to construct part of a URI:

qs = "?page=" + pagename + "&action=view";

Then if pagename == “black&white”, your URI would be “?page=black&white&action=view”. This will be interpreted as key/value pairs: page = “black”, white (with no value) and action = “view”. So “black&white” should have been escaped as “black%26white”.

You also need to escape other characters in order to correctly generate the URI syntax. In particular, all non-ASCII characters (ie. Unicode characters) need to be first encoded as UTF-8 (a stream of bytes), and then those bytes need to be URI-encoded. For example, the string “Ω” (character code U+2126) is encoded in UTF-8 as the 3-byte sequence “\xe2\x84\xa6”. So it is encoded as the URI “%E2%84%A6”. URI encoding libraries should take care of this automatically.

In JavaScript

JavaScript has always provided the “escape” function, which takes a string and produces a URI-encoded version of that string. Don’t use it. It’s poorly specified (it differs across browsers), doesn’t escape enough characters, and doesn’t handle non-ASCII characters properly. Newer (new enough that they’re safe to use) JavaScript versions provide two more escaping functions: “encodeURI” and “encodeURIComponent“. These three functions differ only in which characters they escape, and how they treat non-ASCII characters.

All of these results are empirically verified in Mozilla Firefox 3.

escape does not escape:

* + - . / @ _

encodeURI does not escape:

! # $ & ' ( ) * + , - . / : ; = ? @ _ ~

encodeURIComponent does not escape:

! ' ( ) * - . _ ~

(Also, none of the three escapes ASCII alphabetic or numeric characters.)

RFC 3986 defines the “unreserved” characters as alphanumeric, plus -, ., _ and ~ – these are the only characters that should not be escaped. Which makes you wonder why the JavaScript functions differ so much!

escape

As you can see, escape doesn’t encode ‘+‘, which could cause trouble if the decoder decides to treat “+” as a space (as it may do if it’s expecting an HTML form). It also doesn’t encode ‘/’ which means you can get away with escaping a path, but will cause problems if you are encoding a string with an actual ‘/’ in it (which isn’t a path delimiter)! It does encode ‘~’, which should not be encoded according to the RFC.

Finally, it epically fails dealing with non-ASCII characters. escape(“\xd3”) gives “%D3”. This seems correct, but it isn’t. Remember, these things are not actually ASCII values – they are UTF-8 byte values. The UTF-8 value of U+00D3 is not “\xd3”, but “\xc3\x93”. So the correct URI-encoding for “\xd3” is “%C3%93”. For character codes above ‘\xff’, it fails even harder. escape(“\u2126”) gives “%u2126”. Now there is nowhere in the URI syntax which says to do that! Once again, it should be UTF-8-encoded first, so the correct URI-encoding for “\u2126” is “%E2%84%A6”.

So escape is crap. Don’t use it.

Fortunately, encodeURI and encodeURIComponent both behave correctly with respect to non-ASCII characters, so I won’t talk about that behaviour (it matches the expected output above).
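
For example, both functions produce the expected output for the characters discussed above:

encodeURIComponent("\u2126") gives "%E2%84%A6" (the “Ω” example from earlier).

encodeURI("\u00d3") gives "%C3%93".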

encodeURI

As you can see, encodeURI is very relaxed about what it encodes. It deliberately does not encode any URI delimiters (such as : / ? & and =). The reason for this is that it is designed so that you can shove a whole bunch of strings together into a URI-like thing, and then call encodeURI to encode as much as possible, while still preserving the delimiter characters.

As far as I can tell, this makes it totally useless. It’s only useful if you’ve already constructed a malformed URI. You should never be shoving strings together like this without escaping their components first. By the time you’re ready to call encodeURI, you’ve already blurred the distinction between what is a delimiter and what is an actual character.

To continue our wiki example, you would use it like this:

qs = encodeURI("?page=" + pagename + "&action=view");

If pagename is “my rôle”, it will correctly encode the URI as “?page=my%20r%C3%B4le&action=view”. But if pagename is “black&white”, it will be just as useless as no encoding at all, because ‘&’ is not escaped.

The only use for this function is if you’re positive your strings don’t contain delimiter characters. But IMHO if you’re making that assumption, you’re asking for trouble.

Basically, (IMHO) you should never use encodeURI, because you should never construct the sort of string it’s expecting.

encodeURIComponent

Of course, the proper solution is to use encodeURIComponent, which escapes just about everything. The important trick is to use it on all the components, before concatenating them together. So to fix our example, you would do this:

qs = "?page=" + encodeURIComponent(pagename) + "&action=view";

Now this will correctly encode any string you throw at it. A ‘&’ character in pagename will correctly become “%26”, while the ‘&’ in “&action=view” will remain a ‘&’ delimiter.

Note that if you’re encoding a path with ‘/’ characters in it, and you want to keep them unescaped, you need to split on ‘/’, encode all the path segments, and then recombine! This sucks, and I suspect it’s a motivation for using encodeURI. But don’t be tempted! Just write your own function to do it. (It would be nice if there was an encodeURIPath which is the same as encodeURIComponent but doesn’t escape ‘/’ characters).
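
Such a function is easy to write yourself. Here is a minimal sketch (encodeURIPath is my own name for it, not a built-in):

// Split on the '/' delimiters, escape each path segment, then rejoin.
// (encodeURIPath is a hypothetical helper, not part of JavaScript.)
function encodeURIPath(path) {
    var segments = path.split("/");
    for (var i = 0; i < segments.length; i++)
        segments[i] = encodeURIComponent(segments[i]);
    return segments.join("/");
}

encodeURIPath("wiki/black&white/my rôle") gives "wiki/black%26white/my%20r%C3%B4le".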

I’m still confused as to why encodeURIComponent doesn’t escape ! ' ( ) or * (all of these are reserved characters). However, it doesn’t seem to do much harm, so I won’t complain.

Decoding

JavaScript provides decoding functions for each encoding function.

escape <=> unescape

encodeURI <=> decodeURI

encodeURIComponent <=> decodeURIComponent

Each of the decoding functions comes with the same pitfalls as their encoding counterparts. For instance, decodeURI is just as useless as encodeURI, because if you use it on a full URI, it will give you a malformed URI. If you use it on a URI component, it won’t have decoded certain characters.

decodeURIComponent is the correct solution. It is guaranteed to decode all %xx sequences, but as with encodeURIComponent, you have to break up the URI into components first, or the output will be meaningless.
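
To make “break up the URI into components first” concrete, here is a minimal sketch for a query string (parseQueryString is my own name for it; it assumes spaces are encoded as %20 rather than ‘+’):

// Split on the '&' and '=' delimiters first, *then* decode each piece.
// (parseQueryString is a hypothetical helper, not part of JavaScript.)
function parseQueryString(qs) {
    var params = {};
    var pairs = qs.replace(/^\?/, "").split("&");
    for (var i = 0; i < pairs.length; i++) {
        var eq = pairs[i].indexOf("=");
        var key = (eq < 0) ? pairs[i] : pairs[i].substring(0, eq);
        var value = (eq < 0) ? "" : pairs[i].substring(eq + 1);
        params[decodeURIComponent(key)] = decodeURIComponent(value);
    }
    return params;
}

parseQueryString("?page=black%26white&action=view") gives {page: "black&white", action: "view"}.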

In Python

On the server side, you’ll be using some other language, and you’ll have the same problem. Since I primarily use Python, I’ll mention it here. Every language has its own library, and every library has slightly different rules. Read the documentation carefully before blindly sending your strings off to battle!

In Python, all of this is handled with the urllib module. This provides two quoting functions, quote and quote_plus.

By default, quote escapes all characters except alphanumeric and ‘_’, ‘-’, ‘.’ and ‘/’. So it’s basically encodeURIComponent from JavaScript, except it also doesn’t escape ‘/’ – this means it can be used to encode paths. Also note that it does escape ‘~’, which it should not.

The good thing, though, is that it lets you override what it doesn’t escape (except alphanumeric, ‘_’, ‘-’ and ‘.’, which are always safe). So quote(…, safe="~") gives you a version that does escape ‘/’, but doesn’t escape ‘~’. I’d recommend you use this most of the time. Only if you are escaping a path should you use the default, and I’d still recommend you allow ‘~’ to be unescaped: quote(…, safe="/~").

There is also quote_plus, which converts spaces into ‘+’ symbols instead. You may want to do this for aesthetics, but be aware that this is not considered a space in URI syntax, so you will need to fix this up on the other end (and if you’re talking to JavaScript, remember none of JavaScript’s functions see this as a space).
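
If you do need to accept the ‘+’ convention on the JavaScript side, the fix is a one-liner (decodeFormComponent is my own name for it):

// None of JavaScript's decoding functions treat '+' as a space, so
// convert '+' to '%20' before decoding. (A hypothetical helper.)
function decodeFormComponent(s) {
    return decodeURIComponent(s.replace(/\+/g, "%20"));
}

decodeFormComponent("black+%26+white") gives "black & white".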

Unquoting is straightforward. Choose unquote_plus if your URLs use ‘+’ for spaces, and remember that HTML forms do this automatically. (That is, if for some reason you are reading a URL from an HTML form by hand, you would use this.)

Lastly, Python’s urllib doesn’t yet know how to deal with Unicode strings, so these functions do not handle non-ASCII characters properly. Recall that in JavaScript, escape("\xd3") gives "%D3", and it should have given "%C3%93". Well in Python, urllib.quote("\xd3") also gives "%D3", but I consider this to be “okay”. The reasoning is that in JavaScript, strings are considered Unicode strings, and should be treated as such. In Python (pre 3.0), strings are considered just 8-bit byte strings, so it is valid to encode this as the byte 0xd3, not the character U+00D3.

This simply means if you need to encode a Unicode string, you should manually encode it as UTF-8 first, using the encode("utf-8") method.

urllib.quote(u"\xd3".encode("utf-8")) gives "%C3%93" (correct).

Similarly, if you decode a URI, technically it will come to you as a UTF-8-encoded byte string.

urllib.unquote("%C3%93") gives "\xc3\x93".

If you want to treat this as a Unicode string, just decode it using the decode("utf-8") method.

urllib.unquote("%C3%93").decode("utf-8") gives u"\xd3".

Also note the existence of the “urlencode” function, which encodes a dictionary into a query string, using quote_plus on all the strings, largely automating this whole process. Similarly, the cgi module provides a lot of functionality for parsing these things.

Double-encoding and double-decoding

It’s a pretty common problem to accidentally double-encode a URI. That means you’ve got two places in the code where the string gets escaped (or perhaps you encode it, then pass it to a library function which also encodes it). You end up with a string like this:

"black%2526white"

(Note the ‘%’ in “%26” was encoded to “%25”). The only way to prevent this is careful documentation and reasoning about the properties of the strings (ie. “this string is a raw string”, “this string is a URI component”). Check to see if your library expects a raw string or a URI component, for example.

A harder issue to catch is double-decoding. This can appear harmless unless you have good test cases. Consider some code which accidentally decodes a URI twice. The URI component “black%26white” is decoded to “black&white” then decoded again to “black&white” – it looks fine.

However, there are special cases where this won’t be fine – specifically cases with % signs in them. The URI component “26%2524” is an encoding of the string “26%24”. If you accidentally decode it twice, you will get “26$”. So it is a bug if you decode something twice, even if it rarely shows through. Once again, careful documentation.
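
Both failure modes are easy to reproduce in JavaScript:

encodeURIComponent(encodeURIComponent("black&white")) gives "black%2526white" (double-encoded).

decodeURIComponent(decodeURIComponent("26%2524")) gives "26$", when the correct single decoding is "26%24".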

Summary

As indicated by the length of this post, URI encoding is a harsh mistress. In general, you should carefully read and test any library functions you use to do encoding or decoding, and think about all the characters that may be encoded or decoded. Think about non-ASCII / Unicode characters and how they will be treated.

If you can get away with it, find higher-level functions such as Python’s urllib.urlencode which takes a lot of the work out of it. But even if your web framework does all of this for you, it’s a good thing to know what’s going on.

References

RFC 3986

Wikipedia: URI scheme