Coffeehouse Thread

13 posts

What's after NT kernel?

  • ACT10Npack

    The NT kernel has been around for 8 years or more. I was wondering when Microsoft will drop the NT kernel for a newer and more advanced kernel. The reason I ask is that Microsoft replaced the Win9x kernel with NT because they could only take it so far. The same thing is going to happen with NT in 10 years or so. Are they working on a new kernel, or are they hoping NT will be like UNIX, which may never be replaced by something better?

  • icelava

    Well, if you look at the things Longhorn is trying to achieve, there are some pretty major changes that will be brought about by that new OS.

    Granted, those subsystems (Avalon, WinFS, Indigo, etc.) are very much outer layers and can't really be counted as part of the core kernel.

    If you want things to change at that level, it'll have to be the generation after Longhorn. Wait. Real long.

  • prog_dotnet

    The CLR and managed code are just for user-mode applications in Longhorn.
    Everything running in kernel mode will still be unmanaged.

  • Sven Groot

    The 9x kernel was not so much designed as it just kind of happened. MS-DOS and Windows were never designed to go as far as they did.

    The NT kernel has a pretty solid design philosophy behind it, and much more thought was put into putting it together. As such, it has the potential of being relevant to a modern OS for much longer than the 9x kernel. I personally don't see any need for Microsoft to waste years rewriting a perfectly good kernel.

  • Manip

    Why do you ask? What is wrong with the kernel?

    Seems fine as is to me.

  • msemack

    Believe it or not, the NT kernel is actually one of the newest kernels out there.

    Mac OS X isn't a new kernel. It's just Mach.

    Linux is a "new" kernel is the sense that all of its code is new.  However its design doesn't offer anything revolutionary.

    There aren't any brand-new kernel designs from the rest of the Unix crowd either. Most of them have been around longer than NT.

    Let's ask this question...  What do you think should be done to improve the kernel?  I can think of a few things, but I'd like to hear your list.

  • Minh

    Device drivers are moving out into user space -- Security w/ CPU support will probably move into kernel?

  • msemack

    Minh wrote:
    Device drivers are moving out into user space -- Security w/ CPU support will probably move into kernel?


    Drivers can already exist in user space today. You can do it with toolkits like Jungo WinDriver, or with the new WDF. No special changes to the kernel design are required.

    Also, just because a driver CAN run in user mode doesn't mean it's a good idea for it to run in user mode. Timing-critical drivers (IDE controller, video) will probably never make it out into user mode. Besides, you wouldn't gain much in the way of system stability by putting them there anyway. If your IDE controller driver crashes, your system is hosed, period.

    What putting things in user mode does do is make driver development easier. It insulates developers from some of the nasty bits of WDM. As a side effect, a crashing driver won't bring down the system, but that doesn't help you if the driver is for a critical piece of hardware.
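
    For illustration, here's roughly what a WDF driver skeleton looks like. This is a hypothetical sketch, not code from this thread, and it assumes the later unified KMDF/UMDF 2 interface, where essentially the same source can be built as a kernel-mode or a user-mode driver; that is the "easier development" point above.

        /* Minimal WDF driver skeleton (sketch, kernel-mode build). */
        #include <ntddk.h>
        #include <wdf.h>

        DRIVER_INITIALIZE DriverEntry;
        EVT_WDF_DRIVER_DEVICE_ADD MyEvtDeviceAdd;

        NTSTATUS MyEvtDeviceAdd(WDFDRIVER Driver, PWDFDEVICE_INIT DeviceInit)
        {
            WDFDEVICE device;
            UNREFERENCED_PARAMETER(Driver);
            /* Create the framework device object; the framework handles
               most of the PnP and power plumbing that raw WDM exposes. */
            return WdfDeviceCreate(&DeviceInit, WDF_NO_OBJECT_ATTRIBUTES, &device);
        }

        NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
        {
            WDF_DRIVER_CONFIG config;
            /* Register the device-add callback with the framework. */
            WDF_DRIVER_CONFIG_INIT(&config, MyEvtDeviceAdd);
            return WdfDriverCreate(DriverObject, RegistryPath,
                                   WDF_NO_OBJECT_ATTRIBUTES, &config, WDF_NO_HANDLE);
        }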

    What do you mean by "Security w/ CPU support"?

  • Tom Malone

    msemack wrote:

    What do you mean by "Security w/ CPU support"?


    I think he's talking about those new processors from AMD/Intel that stop dodgy instructions from running. I'm not sure I fully understand the principles behind it; SP2 brought support for AMD's, I think.

    Tom
    (sorry for being so vague)

  • msemack

    Tom Malone wrote:

    I think he's talking about those new processors from AMD/Intel that stop dodgy instructions from running. I'm not sure I fully understand the principles behind it; SP2 brought support for AMD's, I think.

    Tom
    (sorry for being so vague)


    You mean the NX bit.  NX is a nice feature, but it's really a hardware thing.  No real kernel changes required, aside from making sure the x86 memory manager knows to set bits on the pages.

    It doesn't really stop "bad" instructions from running. What it does is prevent memory pages from executing if they aren't flagged as "executable". It helps prevent a class of buffer-overrun exploits.

    Basically every non-x86 CPU arch has had this capability since forever.  It was added to x86 when AMD released their AMD64 CPUs, and then was also added by Intel with their new 64-bit Xeons.
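
    If you want to see the NX bit in action, a small hypothetical sketch like this (MSVC-specific, and assuming DEP is enabled for the process) shows the difference between a page allocated with and without the execute flag:

        /* Sketch: execute a single RET instruction from a data page.
           Without PAGE_EXECUTE, DEP/NX turns the call into an access
           violation; after VirtualProtect adds execute rights, it runs. */
        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            DWORD old;
            unsigned char *page;

            /* One page of plain read/write memory -- no execute permission. */
            page = (unsigned char *)VirtualAlloc(NULL, 4096,
                                                 MEM_COMMIT | MEM_RESERVE,
                                                 PAGE_READWRITE);
            if (page == NULL)
                return 1;

            page[0] = 0xC3;                       /* x86 RET */

            __try {
                ((void (*)(void))page)();         /* jump into the data page */
                printf("executed: DEP/NX is not enforced for this process\n");
            } __except (EXCEPTION_EXECUTE_HANDLER) {
                printf("access violation: the NX bit blocked execution\n");
            }

            /* Flag the same page as executable and the call becomes legal. */
            VirtualProtect(page, 4096, PAGE_EXECUTE_READWRITE, &old);
            ((void (*)(void))page)();
            printf("executed fine after PAGE_EXECUTE_READWRITE\n");

            VirtualFree(page, 0, MEM_RELEASE);
            return 0;
        }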

  • bwill

    Over the years, I've seen Microsoft Research work on several interesting OS projects - see http://research.microsoft.com/os/ for an example.  Realistically, I would expect any compelling innovations they produce to be integrated into the existing Windows product, for backwards-compatibility reasons; but if for some reason we really needed to start over with a completely new kernel, I'm guessing that the roots of that new kernel would rest on the work our research folks are doing.

  • msemack

    You're talking about Rings.  They've existed since the 386 days at least.  Your application runs in Ring 3.  All those "special" instructions require a lower Ring # to execute (usually ring 0). They can also execute in real-mode.

    Ring 3 code can only execute a subset of the Ring 2 instructions, Ring 2 a subset of the Ring 1 instructions, and Ring 1 a subset of the Ring 0 instructions. Rings have nothing to do with NX or Service Pack 2.

    If a higher-ring program tries to run a lower-ring instruction, the CPU hardware generates an exception. The operating system typically has a handler installed for that exception (kill the offending application, process the instruction on behalf of the application, etc.).

    If the NT kernel had to manually scan all the instructions, the cost in processor time would be enormous.

    Windows and Linux use only Ring 0 (kernel mode) and Ring 3 (user mode). Very few (if any) x86 operating systems use the intermediate rings.

    Back in the day, OS/2 used to use Ring 1 or Ring 2 for drivers, and only used Ring 0 for the kernel itself.  At some point (OS/2 Warp, I think), this was dropped.

    Switching ring levels is a slow operation.  Having your drivers run in Ring 1 may sound like a good idea, but it's expensive and offers little benefit (a bad driver can still kill the system).
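
    To make the "the hardware traps it for you" point concrete, here's a hypothetical sketch (32-bit MSVC only, since it uses inline asm). HLT is a Ring 0 instruction, so executing it from Ring 3 makes the CPU raise a fault, which Windows hands back to the process as EXCEPTION_PRIV_INSTRUCTION; the kernel never has to scan the instruction stream itself:

        /* Sketch: trigger and catch a privileged-instruction fault from Ring 3. */
        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            __try {
                __asm { hlt }   /* privileged opcode; only legal in Ring 0 */
            } __except (GetExceptionCode() == EXCEPTION_PRIV_INSTRUCTION
                            ? EXCEPTION_EXECUTE_HANDLER
                            : EXCEPTION_CONTINUE_SEARCH) {
                printf("CPU trapped the Ring 0 instruction in hardware\n");
            }
            return 0;
        }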

  • msemack

    Beer28 wrote:
    I assume you're talking about this



    Exactly.  I was going to post a picture, but I couldn't locate one easily on Google.

    Anyhow, I'm hardly a CPU expert.  I just do a lot of very low level programming (driver development, BIOS source code, etc).

    I don't know how long it would take to write an opcode assembler; I've never done it. Personally, I would just use the NASM source code instead of reinventing the wheel. If NASM's GPL license is a problem for you, you might be able to take advantage of something like this: http://www.tortall.net/projects/yasm/
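
    For what it's worth, the core of an opcode assembler is mostly a table that maps mnemonics to encodings. A tiny hypothetical sketch of one entry: MOV r32, imm32 is encoded as 0xB8 plus the register number, followed by the 32-bit immediate in little-endian order.

        /* Sketch: emit "mov <r32>, <imm32>"; reg is 0..7 (EAX=0, ECX=1, ...). */
        #include <stdint.h>
        #include <stdio.h>

        static size_t emit_mov_r32_imm32(uint8_t *buf, int reg, uint32_t imm)
        {
            buf[0] = (uint8_t)(0xB8 + reg);          /* opcode B8+rd */
            buf[1] = (uint8_t)(imm & 0xFF);          /* immediate, low byte first */
            buf[2] = (uint8_t)((imm >> 8) & 0xFF);
            buf[3] = (uint8_t)((imm >> 16) & 0xFF);
            buf[4] = (uint8_t)((imm >> 24) & 0xFF);
            return 5;
        }

        int main(void)
        {
            uint8_t code[5];
            size_t i, n;

            n = emit_mov_r32_imm32(code, 0, 0x12345678); /* mov eax, 0x12345678 */
            for (i = 0; i < n; i++)
                printf("%02X ", code[i]);            /* prints: B8 78 56 34 12 */
            printf("\n");
            return 0;
        }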


