Expert to Expert: Helen Wang and Alex Moshchuk - Inside Gazelle

Download

Right click “Save as…”

  • High Quality WMV (PC)
  • MP3 (Audio only)
  • MP4 (iPhone, Android)
  • Mid Quality WMV (Lo-band, Mobile)
  • WMV (WMV Video)
Microsoft Research was in the news not long ago for the innovative, outside-the-box research its scientists showed off at the annual MSR TechFest event. One of the stars of the show was a new web browser project named Gazelle.

Gazelle is a Microsoft Research prototype web browser constructed as a multi-principal OS (emphasis on research and prototype). From the Gazelle Microsoft Research Technical Report: "Gazelle's Browser Kernel is an operating system that exclusively manages resource protection and sharing across web site principals. This construction exposes intricate design issues that no previous work has identified, such as legacy protection of cross-origin script source, and cross-principal, cross-process display and events protection."
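
To make the multi-principal idea a bit more concrete, here is a minimal C++ sketch of the bookkeeping such a browser kernel might do. This is purely illustrative and is not Gazelle's actual code: the names BrowserKernel and PrincipalInstance are invented, and the real system backs each principal instance with a separate, restricted OS process rather than an in-process object.

    #include <iostream>
    #include <map>
    #include <memory>
    #include <string>
    #include <tuple>

    // A principal is identified by its origin.
    struct Origin {
        std::string scheme;
        std::string host;
        int port;
        bool operator<(const Origin& other) const {
            return std::tie(scheme, host, port) <
                   std::tie(other.scheme, other.host, other.port);
        }
    };

    // Stand-in for a sandboxed rendering instance. A real multi-principal
    // browser would spawn a separate, restricted OS process here.
    class PrincipalInstance {
    public:
        explicit PrincipalInstance(Origin origin) : origin_(std::move(origin)) {}
        void Render(const std::string& content) const {
            std::cout << "[" << origin_.host << "] rendering "
                      << content.size() << " bytes\n";
        }
    private:
        Origin origin_;
    };

    // The browser kernel is the only component that touches shared resources.
    // Each distinct principal gets its own instance, so cross-origin content
    // (an embedded ad, for example) never runs inside the embedder's domain.
    class BrowserKernel {
    public:
        PrincipalInstance& InstanceFor(const Origin& origin) {
            auto it = instances_.find(origin);
            if (it == instances_.end()) {
                it = instances_.emplace(origin,
                         std::make_unique<PrincipalInstance>(origin)).first;
            }
            return *it->second;
        }
    private:
        std::map<Origin, std::unique_ptr<PrincipalInstance>> instances_;
    };

    int main() {
        BrowserKernel kernel;
        const Origin site{"https", "a.com", 443};
        const Origin ad{"https", "ad.com", 443};

        // Even though a.com embeds ad.com, the kernel hands each origin its
        // own protection domain.
        kernel.InstanceFor(site).Render("<html>top-level page</html>");
        kernel.InstanceFor(ad).Render("<html>embedded ad frame</html>");
        return 0;
    }

The only point of the sketch is that isolation is keyed on the web site origin (scheme, host, and port), so an embedded ad.com frame never shares a protection domain with the a.com page that hosts it.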

Interesting, Captain. This really piqued our curiosity, so Erik Meijer and I decided to find out the inside scoop on Gazelle. Why choose an OS architecture to model a web browser? How does it work, exactly? What does multi-principal mean in the context of executing web pages? Aren't we talking about isolated processes? What happens when a principal is compromised? Is the browser kernel completely isolated from code executing in a principal context (is it possible to "blue screen" Gazelle)? What are the intrinsic challenges in implementing this design? How performant is a multi-principal, kernel-based web browser (what if you have 40 principal contexts running simultaneously, for example)?

This is a great conversation with Gazelle project lead Helen Wang and Alex Moshchuk, a PhD student working on the Gazelle project as an intern developer. We cover a lot of ground, and Erik and I are unusually curious given the fascinating model Gazelle represents for a truly secure web browser.

Enjoy! This is a birthday present from Channel 9 to you!

Follow the Discussion

  • Interesting topic and a good video.  If you liked it, you might also want to check the docs on Google Chrome:

    http://dev.chromium.org/developers/design-documents/process-models
    http://dev.chromium.org/developers/design-documents/multi-process-architecture
    http://dev.chromium.org/developers/design-documents
  • Charles Welcome Change
    Helen talks about the Chrome design in this interview and briefly points out the flaws (or differences) compared to the more locked-down kernel principal management approach... Of course, Chrome's isolated-process design is a solid architecture for today's browser, but certainly not the end of browser security innovation... IE incorporates tabbed processes as well. Gazelle leaps ahead of all current browsers with the principal approach because there is no dependable way to execute code in the Gazelle kernel (so there can be no process hijacking or remote code execution on the host machine; code only runs in the context of a principal, and a principal context can include more than one process). At least that's the theory, anyway, and Gazelle is a research project, not a product... Still, when you think of the state of the art in web browser security technology today (I don't care which browser you talk about), there's a very long way to go until we reach somewhere close to browser security nirvana.

    C
  • "Gazelle leaps ahead of all current browsers with the principal approach because there is no dependable way to execute code in the Gazelle kernel (so there can be no process hijacking or remote code execution on the host machine - code only runs in the context of a principal and this can include more than one process in context -)"

    Chromium (Chrome's base) is separated into two protection domains: a browser kernel and a rendering engine. The rendering engine domain runs in a restricted sandbox environment. Web pages and plugins are both executed in the rendering engine domain, which means they have restricted access to your system. As with Gazelle, all communication with the kernel is done via a tight API proxied through IPC. From what I can tell, Gazelle offers no specific improvements over Chrome in this area.

    However, Gazelle does shine! Gazelle puts serious priority on DOM and script interaction, which is in desperate need of improvement in all current browsers. I definitely look forward to further information on this project in the future.

    As for my Chrome links, this area really interests me, but companies are still fairly hush-hush about what they're doing. Chrome is the exception, which is why I posted the links. There is a lot of valuable information there for anyone interested in this sort of thing.
  • Charles Welcome Change
    Thanks for the links!
    C
  • I agree with Erik - running C++ code in the web browser is not such a crazy idea, and I am very enthusiastic about Google Native Client! I had never heard of Xax before this interview; I will have a look into it.
  • Vesuvius Count Orlock
    A very interesting discussion. I think you need to get the Vista security people to argue the case for breaking the web. Yes, people had their problems, but I now have a more secure OS thanks to those initial courageous decisions. I am all for breaking it all!
     
    If I need to redesign my website as a consequence, then so be it. With Silverlight and out-of-browser stuff, Gazelle offers me an extra layer of assurance. If it breaks a 10-year-old website, then that is collateral damage.
  • Charles Welcome Change
    It depends on how well you can actually sandbox native code executing in a browser. I'm not against the notion of C++ being compiled by and executing in the browser. I just think it's kind of crazy and potentially dangerous if you don't get the runtime security plumbing right.

    C
  • @vesuvius

    Breaking the web sounds like a worse issue than it really is. In fact, IE8 already contains a solution for this: if the browser detects insecure scripting, it could block the action and indicate to the user that they may want to reload the site in compatibility mode.

    @LordKain

    Web browsers run on all kinds of devices today, from PCs to mobiles and fridges. Because of this, there is a strong need to abstract the code from the underlying system. This means that even if you were to compile C++ for a special web environment, there would still need to be a layer of abstraction such as a virtual machine or JIT. With Silverlight and Flash both having GPU support now, and with the progressive increases in speed, the only advantage C++ on the web would have is language preference. It is also worth pointing out the impossibility of creating an accepted standard for how such a language would work across browsers.
  • @Charles

    This is exactly what Google Native Client is all about: an attempt to build this kind of sandbox to enable native performance without sacrificing security. I understand there might be some security concerns if the runtime is flawed (and it is clearly more difficult to sandbox native code because of things like dynamically generated code, overlapping instructions, and so forth), but isn't it the same problem with the Silverlight and Flash runtimes?


    @pdev

    The performance discussion is a never-ending story, and I would be curious to see how some of the demos shipped with Native Client (like Xaos, for instance, the fractal viewer) would behave in the Silverlight/Flash world.
    I am not saying that C++ applications should be used in the browser instead of higher-level technologies; I am just saying that supporting this technology would clearly bring new opportunities to developers.

    As for the language consideration, the C/C++ pair is still the most popular today (according to http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html), and there are hundreds of millions of lines of code already written in these languages...

  • "Chromium (chromes base) is separated into two protection domains.  These are a browser kernel and rendering engine.  The rendering engine domain runs in a restricted sand box environment.  Web pages and plugins are both executed in the rendering engine domain which means they have restricted access to your system.  As with Gazelle, all communication to the kernel is done via a tight API proxied through IPC.  From what I can tell, Gazelle offers no specific improvements over chrome in this area."

    Gazelle is fundamentally different from Chromium here. In Gazelle, there is one protection domain per principal, namely, per web site. So the number of protection domains is the same as the number of web sites the user browses. This means that when a.com embeds ad.com, a.com and ad.com are placed in separate protection domains. In contrast, Chromium places them into the same protection domain. The key distinction between Gazelle and all previous browsers is that Gazelle's browser kernel manages all cross-principal protection and resource management. In contrast, Chromium must do cross-principal protection in its rendering engine. This is what makes Gazelle's browser kernel a real OS, and Chromium's browser kernel not really an OS. Please refer to the related work section of Gazelle's tech report for a very detailed comparison.

    I'd also like to clarify that the goal of Chromium's architecture is to protect the host machine from the browser and the web. The goal of Gazelle is to protect web site principals from one another; such protection is an operating system's job, hence the Gazelle approach. The resulting architecture naturally protects the host machine from the browser and the web as well.
  • stevo_ Human after all

    This is interesting. One thing I wonder about is, similar to the HttpOnly cookie (for reference, this wasn't something all browsers supported, making it a concern to use), how should developers target features? Consider that I wanted to use an HttpOnly cookie to protect my cookie from any JavaScript, but gah, Firefox at the time didn't support it, so JavaScript WOULD be able to use it.

    Isn't there a similar issue here, in the sense that while this is a really good addition, the security it helps enforce can't really be trusted until all common browsers do the same? Perhaps this is something that should become the norm, with each browser implementing this (or a similar) model?

  • Maybe I'm missing something here. Process isolation on modern OSes is for stability rather than security. Sure, processes can run under different security contexts, but that wasn't the primary driver for the model.
    If we were to architect, from scratch, a presentation technology that executes foreign code in a sandbox, would we end up with this?
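
To tie together the kernel-mediated access model discussed in this thread, here is one more toy C++ sketch. It is invented for illustration and is not code from Gazelle or Chromium: a principal gets no direct handle on shared state such as cookies, and every read goes through the browser kernel, which performs the cross-principal check itself. A plain function call stands in for the cross-process IPC both real designs use.

    #include <iostream>
    #include <map>
    #include <optional>
    #include <string>

    // What a sandboxed principal is allowed to send to the kernel.
    struct CookieReadRequest {
        std::string requesting_origin;  // the principal asking
        std::string cookie_origin;      // whose cookie jar it wants to read
    };

    // The kernel owns all shared resources; principals reach them only
    // through requests like the one above (over IPC in a real system).
    class BrowserKernel {
    public:
        void StoreCookie(const std::string& origin, const std::string& value) {
            cookies_[origin] = value;
        }

        std::optional<std::string> HandleCookieRead(const CookieReadRequest& req) {
            if (req.requesting_origin != req.cookie_origin) {
                // Cross-principal access is refused by the kernel itself,
                // not by logic inside a rendering engine.
                std::cout << "kernel: denied " << req.requesting_origin
                          << " access to cookies of " << req.cookie_origin << "\n";
                return std::nullopt;
            }
            auto it = cookies_.find(req.cookie_origin);
            if (it == cookies_.end()) return std::nullopt;
            return it->second;
        }

    private:
        std::map<std::string, std::string> cookies_;  // shared, kernel-owned
    };

    int main() {
        BrowserKernel kernel;
        kernel.StoreCookie("https://a.com", "session=42");

        // a.com reading its own cookie succeeds...
        auto own = kernel.HandleCookieRead({"https://a.com", "https://a.com"});
        std::cout << "a.com got: " << own.value_or("<nothing>") << "\n";

        // ...while an embedded ad.com principal is turned away.
        auto other = kernel.HandleCookieRead({"https://ad.com", "https://a.com"});
        std::cout << "ad.com got: " << other.value_or("<nothing>") << "\n";
        return 0;
    }

The thread's key distinction is where that check lives: Gazelle puts all of it in the browser kernel, while Chromium performs cross-principal protection inside its rendering engine.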
