Securing content without proxying through PHP is a common and easily solved problem. You get Lighttpd (or any other webserver) to check some sort of hash when it serves content. In the hash you include the user's IP address, a timestamp, and probably the path to the file. This solution works remarkably well for the average desktop user. Sure, there are outliers behind non-sticky load-sharing proxies, but most people confidently ignore that portion of their user base.
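For context, the scheme looks roughly like this (a minimal sketch in Python; the secret, URL layout, and function names are illustrative, not the exact Lighttpd mod_secdownload format):

```python
import hashlib
import hmac
import time

SECRET = b"shared-with-the-webserver"  # illustrative secret

def make_secure_link(path, client_ip, lifetime=3600):
    """Build a signed URL embedding the client IP, path, and an expiry."""
    expires = int(time.time()) + lifetime
    msg = f"{client_ip}|{path}|{expires}".encode()
    token = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"/secure/{token}/{expires}{path}"

def check_secure_link(token, expires, path, client_ip):
    """What the webserver verifies before serving the file."""
    if int(expires) < time.time():
        return False  # link has timed out
    msg = f"{client_ip}|{path}|{expires}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(token, expected)
```

Because the client's IP is part of the signed message, a shared link fails validation from any other address — which is exactly the property that breaks down for mobile users.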


Mobile users present a more serious problem: their IP changes more frequently, and under a wider range of circumstances. As users move from tower to tower, or as their carrier changes moods, their IP changes, generally within the same subnet. That last point teases a possible solution: hash only a portion of the IP address to allow for greater flexibility.
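That partial-IP idea could be sketched like this, assuming IPv4 and a /24 prefix (both assumptions; the right prefix length depends on the carrier):

```python
import ipaddress

def ip_prefix(client_ip, prefix_len=24):
    """Reduce an address to its network prefix, so nearby IPs hash alike."""
    net = ipaddress.ip_network(f"{client_ip}/{prefix_len}", strict=False)
    return str(net.network_address)

# Feed ip_prefix(client_ip) into the hash instead of the full address:
# two clients in the same /24 then produce identical hash input.
```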


The problem, however, goes a bit deeper.


As devices get smarter, we're now able to do nifty things like surf the web at home (on our mobile device) using our WiFi connection. As we walk outside and down the street, the device seamlessly fails over to 3G, EDGE, or whatever connection is available, and our browsing session continues. The browser, blissfully unaware of the change, can do nothing to help us. During a test over the weekend, I saw tens of thousands of instances of switching within the same subnet, and a few thousand instances of changes beyond it. This is a real problem for a mobile site.


In order to solve the problem I've had to balance a few needs: our need to provide even a modicum of protection for our content, and our users' need to have links work when they view a page. To that end I've added checks to our software that detect when a user's IP changes; when a change is detected, the secure links we generate omit the IP address but use a far more aggressive timeout. Hopefully this balances our need to protect content with our desire to serve it to expectant customers.
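The detection logic might look roughly like this (a sketch; the session structure and the timeout values are illustrative, not our production code):

```python
import time

NORMAL_LIFETIME = 3600      # one hour when the IP looks stable
AGGRESSIVE_LIFETIME = 120   # two minutes once the IP has been seen changing

def link_params(session):
    """Decide what goes into the signed link for this session.

    `session` is a dict tracking the last IP seen -- an illustrative
    stand-in for whatever session storage the application uses.
    """
    current_ip = session["current_ip"]
    if session.get("last_ip") not in (None, current_ip):
        session["ip_changed"] = True  # remember instability for good
    session["last_ip"] = current_ip
    if session.get("ip_changed"):
        # IP is unstable: drop it from the hash, tighten the window instead
        return {"ip": None, "expires": int(time.time()) + AGGRESSIVE_LIFETIME}
    return {"ip": current_ip, "expires": int(time.time()) + NORMAL_LIFETIME}
```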



Have you solved this problem differently?




Comments »

Mentioned to you on IRC, but consider using a cookie that somehow validates the user's browser.

Users are likely to share URLs with others (which is what you're trying to avoid), but much less likely to go mucking in their browser's cookie sqlite DB (or similar).

S
#1 Sean Coates (Homepage) on 2009-09-09 19:05 (Reply)

Hi Sean,

Thanks for the hint. I've actually got our dev team looking into it already. I like the idea as it does balance usability (user sees nothing) with security. It actually does so better than my solution.
#2 Paul Reinheimer (Homepage) on 2009-09-09 19:21 (Reply)

Yeah, a cookie is the obvious choice.

It can be a random sequence, which is used in the hash for the URLs instead of the IP address.

The only other alternative is having single-use URLs. Once they're used, they can't be used again.
#3 Ren on 2009-09-09 19:36 (Reply)
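Sean's cookie suggestion, combined with Ren's random sequence, might be sketched like this (illustrative names; the HMAC construction is an assumption about the hashing scheme):

```python
import hashlib
import hmac
import secrets
import time

SECRET = b"server-side secret"  # illustrative

def issue_token():
    """Random value set as a cookie; it survives IP changes."""
    return secrets.token_hex(16)

def sign_with_token(path, token, lifetime=3600):
    """Sign the URL with the cookie token in place of the client IP."""
    expires = int(time.time()) + lifetime
    msg = f"{token}|{path}|{expires}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest(), expires

def verify_with_token(sig, path, token, expires):
    """Serve the file only if the presented cookie matches the signature."""
    if expires < time.time():
        return False
    msg = f"{token}|{path}|{expires}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

A shared URL then fails for anyone who doesn't also have the cookie, while a legitimate user keeps working across IP changes.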

Single use URLs are attractive, but architecturally complex.

We're not dealing with a single server, but tens of servers spanning multiple data centers. Managing some sort of global token revocation pool would be kind of a big deal, and tying each token to a single server would create a single point of failure and make some other things a bit harder. Overall, given the choice between blocking a valid request in order to also block more fraudulent ones, or letting more fraudulent requests in to ensure no valid ones get blocked, I would choose the latter.
#4 Paul Reinheimer (Homepage) on 2009-09-09 19:49 (Reply)

I did that with a download system before. Worked fine so long as people only used browsers, but if they used a download manager it would completely mess up.
#5 Martin Fjordvald on 2009-09-09 20:22 (Reply)

Ah, multiple data centres.

Limiting downloads to a maximum of once per data centre would be easier, as it wouldn't have to rely on communication between DCs.

Each DC shares the same secret key, so they could all generate & validate each other's hashes/authentication codes.
#6 Ren on 2009-09-09 23:14 (Reply)
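Ren's once-per-data-centre limit avoids cross-DC coordination entirely; each DC only needs a local record of spent tokens (a sketch with illustrative names):

```python
class DataCenter:
    """Tracks tokens redeemed locally; no communication with other DCs."""

    def __init__(self, name):
        self.name = name
        self.spent = set()

    def redeem(self, token):
        """Allow each token at most one download from this data centre."""
        if token in self.spent:
            return False  # already used here
        self.spent.add(token)
        return True
```

The trade-off is the one Paul describes above: a token can be redeemed once per DC rather than once globally, letting a few extra requests through in exchange for keeping no shared state.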

As far as I understood your problem, the Nginx webserver with the HttpSecureDownload addon could be a solution for you. See here: http://wiki.nginx.org/NginxHttpSecureDownload

Best Regards,
Uli
#7 Uli (Homepage) on 2009-09-10 09:08 (Reply)

Hi Uli,

I'm not quite sure what Nginx will get us that we don't have already with Lighttpd. It uses a very similar hashing system.
#8 Paul Reinheimer (Homepage) on 2009-09-10 14:31 (Reply)

Hi Paul,
you wrote about taking the IP into account and so on, which is not the case with the nginx solution. I thought it would be helpful to see other solutions for your problem ;-)

Best Regards,
Uli
#9 Uli (Homepage) on 2009-09-10 20:46 (Reply)



Hi, I’m Paul Reinheimer, a developer working on the web.

I co-founded WonderProxy, which provides access to over 200 proxies around the world to enable testing of geoip-sensitive applications. We've since expanded to offer more granular tooling through Where's it Up.

My hobbies are cycling, photography, travel, and engaging Allison Moore in intelligent discourse. I frequently write about PHP and other related technologies.
