Singularity: A research OS written in C#


MS Researchers Jim Larus and Galen Hunt lead an intriguing project where they've built an OS using managed code. The project is known as Singularity. In their own words:

Singularity is a research project focused on the construction of dependable systems through innovation in the areas of systems, languages, and tools. We are building a research operating system prototype (called Singularity), extending programming languages, and developing new techniques and tools for specifying and verifying program behavior.


Besides Singularity's kernel being successfully written in C# (how cool is that!), there are all kinds of interesting lessons learned with respect to what a managed OS enables. Again, this is a prototype research OS, not a full-fledged OS that can run the typical applications you've come to expect of an OS (or even provide a user interface beyond, say, that of DOS).


Enjoy.

Download Size: 168 MB

Follow the Discussion

  • This isn't the CLR.  In our world, we compile entire MSIL for the kernel into x86 instructions at installation time.  There is no libc at the bottom. 

    However, we do have around some assembly code.  Like a kernel written in C, our C# kernel needs assembly code to handle the lowest part of the interrupt dispatch on the x86.  But once the assembly code has finished, it dispatches directly into compiled C# (no C).  BTW, there is some C code in the system, primarily for the debugger stub.


  • In Singularity, you can add new code to your application.  However, instead of loading it into your own process, you load it into a child process.  The OS facilitates setting up channels between the child and its parent. 

    While this is still very much a work in progress, the results so far look promising.  For example, we have a dynamic web server that uses child processes.  Also all of our device drivers run in child processes.
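A rough way to picture those channels in everyday C# terms: two typed, one-way message queues standing in for the two endpoints (purely illustrative; Singularity's real channels are statically verified contracts between isolated processes, which ordinary C# threads can't reproduce):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class ChannelSketch
{
    static void Main()
    {
        // Two one-way queues stand in for the channel's two endpoints.
        var toChild = new BlockingCollection<string>();
        var toParent = new BlockingCollection<string>();

        // "Child process" is just a task here; real process isolation
        // is exactly what Singularity adds on top of this picture.
        var child = Task.Run(() =>
        {
            string request = toChild.Take();       // receive from parent
            toParent.Add("handled: " + request);   // reply
        });

        toChild.Add("GET /index.html");            // parent sends
        Console.WriteLine(toParent.Take());        // parent receives reply
        child.Wait();
    }
}
```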
  • I would kill (or pay) to get my hands on the source code. I am having such trouble learning about OS concepts using MINIX; being able to look at these ideas in C# (or original ideas) would be wonderful. But frankly that isn't going to happen so I'll just move on. Smiley

    This type of system has some great potential in areas where you need to be able to rely on the system to be secure and stable. It might not be a speed demon, but it is what I would like running on an ATM I am using or on hospital monitoring equipment.

  • figuerresfiguerres ???
    Dang! this sounds good... must watch and then comment....  I'd love to see this taken further...

    an OS kernel based on code that has been verified, with some aspects of managed code a la .NET built into it, could be the next step in making a much more "bullet proof" Windows OS later.
  • Can some similar system to Singularity be implemented as a device driver in the existing NT kernel?  Perhaps the Singularity runtime could run all of its code and "lightweight processes" in kernel mode, and all calls out of managed code would be simple function calls to normal NT device drivers without changing processes or switching VM spaces. 

    Also, Singularity has no shared memory; does it have shared type instances?  Or is the best one can do more like proxy objects?  Does Singularity even have a JITer in the kernel (perhaps for application and upper-level code)?

    This project seems great!
  • CharlesCharles Welcome Change
    Beer28 wrote:
    comment withdrawn.


    This is not good practice, Beer (replacing a post's text with "comment withdrawn").

    C
  • CharlesCharles Welcome Change
    Beer28 wrote:
    I just finished it, at the end Charles commented on the OS having a webserver asking if it "parsed html and stuff." (52:10)
    A webserver reads the file off the disk (the document portion of the HTTP request header), optionally shoots it to registered functions of interpreters loaded as modules in its proc address space, like mod_php or mod_perl, or even mod_mono, or does CGI piping (older style) to an interpreter process, then takes that output and throws it back down the TCP line with send(socket,,);
    An HTTP server doesn't do any type of document processing on its own; it's the browser on the client side that parses it and sets it up for drawing to the client area. That's why the guy came back right away and said "http".


    Did you listen to the reply to my vague/misleading question, Beer? That should suffice.

    C
  • It was just lots of edits as he watched the video, very messy, no loss.

  • CharlesCharles Welcome Change
    Manip wrote:
    It was just lots of edits as he watched the video, very messy, no loss.



    I don't like the practice of removing what you post because you decide you don't want it to be there any more. Live with what you say. Have some onions and accept it when you look like a fool or whatever. Hey, I do!

    C
  • You know you come across a lot better in person than you do when posting online.
  • CharlesCharles Welcome Change
    Manip wrote:
    You know you come across a lot better in person than you do when posting online.


    As I've mentioned, the Charles entity that exists online is human Charles' attempt at an AI construct. Funny how nobody believes this. I will try and adjust my autopoietic content-sensing mechanism to apply more human-like, emotionally balanced contexts to my posted replies.

    C
  • Hmm, ironic that you edited the above post. What were you saying about living with what you post again?
  • CharlesCharles Welcome Change
    Beer, did you read Galen's reply:

    "However, we do have around some assembly code.  Like a kernel written in C, our C# kernel needs assembly code to handle the lowest part of the interrupt dispatch on the x86.  But once the assembly code has finished, it dispatches directly into compiled C# (no C).  BTW, there is some C code in the system, primarily for the debugger stub."

    Clear?

    C
  • CharlesCharles Welcome Change
    Manip wrote:
    hmm ironic you edited the above post, what were you saying about living with what you post again?


    I appended content, which is orthogonal to the notion of content subtraction. Note that the original text was decorated with additional text, not changed in meaning or removed completely. There is a difference between editing a post for clarity and removing it to protect your ego.

    C
  • That's a very fine line you walk there. And appending text can also protect your ego, and change the meaning of it. Case in point:

    You jackass.

  • CharlesCharles Welcome Change
    I have no ego. Calling me names means nothing. End this fork now. The topic of Singularity deserves more than this silly drivel.

    Next.

    C
  • Ahh, but I defended my ego by calling you names and changed the meaning of the post. All just by adding text. So a very fine line indeed. Smiley

    PS - I don't think you're a jackass.

  • MauritsMaurits AKA Matthew van Eerde
    Charles wrote:
    Manip wrote: It was just lots of edits as he watched the video, very messy, no loss.



    I don't like the practice of removing what you post because you decide you don't want it to be there any more. Live with what you say. Have some onions and accept it when you look like a fool or whatever. Hey, I do!

    C


    An interesting point.  Do you preserve past versions of edited posts (a la wiki)?  What would you think about the idea of making them available to the viewing public?  Philosophically, that is - I'm sure that it wouldn't be a development priority.
  • Kinda like a wiki log.
  • MauritsMaurits AKA Matthew van Eerde
    Beer28 wrote:

    if he does have a copy he's welcome to put it back. I don't care. I deleted it so nobody would get confused by my using the post as a whiteboard.


    Oh... Smiley I wasn't fishing for your post.  I've done the same thing myself.  I was conceptually exploring Charles' "no rewriting history" idea, which I like.  Systems with perfect information are a good thing, I think.
  • CharlesCharles Welcome Change
    We do not store copies of posts; only indicators that represent the fact that a post has been edited and who did the editing and when. Not too interesting.

    EDIT: Can we please keep this thread directly related to Singularity. It's such a compelling topic. Thanks.

    C
  • You guys talked about the numbers but didn't say how fast it is compared to a conventional OS on the same x86 PC. I would imagine a lot slower; I mean, managed code on every level, which has an interpretation layer in there, and a microkernel too; that's just got to take a speed hit.
     
    If someone asked me to guess I would say a 50% speed hit.
  • From what I gather from the video, they used a cut-down version of the CLR, except for the low-level assembly stuff.

    They have the GC and strong typing at the kernel level.  Jim said that he did away with the stuff they didn't need, like the different ways to compare strings in the different languages (is there a difference between comparing strings in different languages? a string is a string?).  There is also no JIT, so it would be like running NGEN on the OS.

    Java had a similar go at trying to create a Java OS desktop some time ago. Anyone know how that went?

    I like the idea of this OS, but it will not replace Windows anytime soon.  But it could replace things like Windows CE / SmartPhone etc. (aren't these all OSes?)

    EDIT: NGEN pre-JITs your code base

  • Ugh...is this the ultimate attempt at trying to force me to upgrade to DSL or broadband?  I'm in the stone age here with a 56K modem, and when I tried to download the video it politely told me "7 hours" to go.  Why do the videos I am most interested in have to be the longest ones?

    P.S. Sorry, this was yet another "orthogonal post" that disrupted the continuity of this otherwise linear conversational construct through time and screen space.
  • Beer28 wrote:
    Buzza wrote: They have the GC and strong typing at the kernel level.  Jim said that he did away with the stuff they didn't need, like the different ways to compare strings in the different languages (is there a difference between comparing strings in different languages? a string is a string?).  There is also no JIT, so it would be like running NGEN on the OS.


    EDIT: NGEN pre-JITs your code base



    galenh wrote:

    This isn't the CLR.  In our world, we compile entire MSIL for the kernel into x86 instructions at installation time.  There is no libc at the bottom. 



    He says there's no CLR. The CLR would need the real kernel and libc anyway. It sounds like they're compiling the C# code to native instructions the way a C compiler would, with a library that it uses to do IO and memory access.

    What do you think, Buzza?


    The base class libraries are not there; as he said, Windows Forms and XML are not needed (and would probably need too much work to get working anyway). 

    They probably took the open-source 'Rotor' project and used that as the basis for their - let's call it the kernel runtime. This could then have been modified down to the assembly level to talk to the BIOS.  Once this BIOS-to-managed layer was complete, they moved on and created the rest of the microkernel components. 

    EDIT: there is no CLR as we know it - but there is a 'managed environment' down there - they do have a GC at that level - or that's the impression I got from the interview.  In the end, everything turns into assembly - it's just a question of at what stage we create the assembly.

    Is it possible to have the kernel look after the GC and strong typing for us? I think it is!  In the kernel, malloc and free are replaced by new and the garbage collector.

    How cool does this sound?
  • Beer28, .NET code is always compiled to native platform instructions before execution. By default, the code is compiled to IL so that the same files can be distributed to multiple platforms. The IL is then compiled at runtime to the particular platform's native instructions before execution starts.

    MS also provides developers the option of precompiling their code using NGen. Doing this, of course, ties the code to that one architecture just as with C/C++.

    NGen
    http://msdn.microsoft.com/library/en-us/cptools/html/cpgrfNativeImageGeneratorNgenexe.asp

    New NGen features in .NET 2.0
    http://msdn.microsoft.com/msdnmag/issues/05/04/NGen/default.aspx
  • codancodan I didn't do it.
    They didn't use ngen to develop this. They used a development tool that was developed internally by the Advanced Compiler Technology group at Microsoft Research called Bartok.

    http://research.microsoft.com/act/
  • The MSIL-to-x86 compiler we use is Bartok, developed by Microsoft Research's Advanced Compiler Technology Group (http://research.microsoft.com/act/).  David Tarditi and his team have created this fantastic whole-program optimizing compiler that reads in a collection of MSIL assemblies and outputs an x86 binary.  At the end of the day, it's just code.

    Beer28, remember that libc is just x86 code.  So, we replace whatever one might need from libc, with C# code.  Instead of calling a C version of libc, Singularity uses safe code written in C# to directly access the screen hardware (for example).

    This probably makes more sense when you realize that most OSes don't use the BIOS except during the very earliest stage of boot.  Singularity does the same: it only uses the BIOS during the 16-bit real-mode boot strap.  Once we jump to 32-bit mode, we never use the BIOS again, but use device drivers written in C# instead.  Yes, we had to replace a lot of CLR libraries with different code.  However, unlike the CLR, the Singularity runtime is written in C#.
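To make "safe code written in C# to directly access the screen hardware" concrete, here is a hypothetical sketch in unsafe C#. On bare metal the buffer would sit at physical address 0xB8000 (the standard VGA text-mode buffer); the sketch below writes into a scratch buffer instead so it can run anywhere, and all names are invented for illustration:

```csharp
using System;

class VgaSketch
{
    // Each VGA text cell is one character byte plus one attribute byte.
    static unsafe void WriteChar(ushort* vga, int row, int col, char c, byte color)
    {
        vga[row * 80 + col] = (ushort)((color << 8) | (byte)c);
    }

    static unsafe void Main()
    {
        // On bare metal the driver would do:  ushort* vga = (ushort*)0xB8000;
        // Here we use a scratch buffer so the sketch runs under any OS.
        ushort* vga = stackalloc ushort[80 * 25];
        WriteChar(vga, 0, 0, 'A', 0x07);          // light-grey 'A' at top-left
        Console.WriteLine((char)(vga[0] & 0xFF)); // character byte back out
        Console.WriteLine(vga[0] >> 8);           // attribute byte back out
    }
}
```

Compile with the /unsafe switch; the point is only that raw memory-mapped IO is expressible in the language itself.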
  • William Staceystaceyw Before C# there was darkness...
    There are 2 large differences. You can compile C code without importing anything, thus making it independent. The 2nd difference is that you can statically compile in both the C and C++ runtimes.

    I see no practical difference here.  If you can do it in C, you can do it in any language as long as you have the compiler support.  The language itself does not matter.  You could do it in Perl as long as you had the support to convert the MSIL into native code to run on the bare metal (which is what Bartok does, AFAICT).
  • staceyw wrote:
    There are 2 large differences. You can compile C code without importing anything, thus making it independent. The 2nd difference is that you can statically compile in both the C and C++ runtimes.

    I see no practical difference here.  If you can do it in C, you can do it in any language as long as you have the compiler support.  The language itself does not matter.  You could do it in Perl as long as you had the support to convert the MSIL into native code to run on the bare metal (which is what Bartok does, AFAICT).


    The difference is that there is a managed environment at the kernel level.  The ref counts / type safety are available at a much lower level.

    EDIT: I forgot to mention that the runtime takes care of what the compiler used to do with type safety and reference counting.

    alloc and free are just C functions in stdlib (I think) - they are just replaced with new and the GC - it's a very impressive design.

    Could this be a purely OO OS?
  • Beer28 wrote:
    The huge difference between C, C++, Perl, and java || C# is that the latter 2 have memory management through a garbage collector and cannot directly access memory pointers.

    Perl5 uses reference counting. Perl6 will use Parrot.

    If the compiler links all the runtime support as one huge linkable monolith binary then fine, that's what gcj does, except it's shared, and not usually statically linked. In a kernel though you'd have to link it statically.

    Case in point, one of the people in the video responded and wrote, in libc, you have functions written with C, but in their kernel you have functions equivalent to libc written with C#.

    How are you going to write a malloc routine with C#?

    You have to allocate memory at some point in a kernel, you have to have some IO and communication with devices through shared memory. You have to use memory addresses, you have to load the ivt with the entry points of your int handlers.

    C# won't let you manipulate memory pointers. There has to be something else there. That big runtime Bartok links your C# to must be full of C code.

    C# can't do these things because of the language, so yes it differs from C/C++ and Perl.

    EDIT: Or say you get an interrupt and god forbid your x86-compiled C# handler must pull values out of registers to handle it. C# has no such functionality, and you can't drop to asm.


    What is the stack? A place to store data; it gets pushed, and gets popped.  There is the Stack object in .NET, and it works the same way - it's not the same actual stack, but the functionality is exactly the same.

    I think of it as managed code at the lowest of levels that anyone will ever want to get at.

    Instead of alloc and free we can use new and (Dispose() or the GC).

    Registers / stack / etc. are so low-level - the last time I touched them was when I was doing control electronics on a 68HC11.

    I would class this as the lowest level of machine virtualisation with a focus on OO principles.

    I assume that this environment would only run managed code.
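Buzza's stack analogy is easy to see with .NET's own Stack<T>, which gives managed code the same push/pop discipline (ordinary C#, nothing Singularity-specific):

```csharp
using System;
using System.Collections.Generic;

class StackDemo
{
    static void Main()
    {
        // Managed analogue of the CPU stack's push/pop discipline.
        var stack = new Stack<int>();
        stack.Push(1);                   // like PUSH
        stack.Push(2);
        Console.WriteLine(stack.Pop());  // like POP: last in, first out
        Console.WriteLine(stack.Peek()); // top of stack is now the first value
    }
}
```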
  • Christian Liensbergerlittleguru <3 Seattle
    What are you fighting for, beer? I don't understand why you get so upset here. They did some thinking and created an OS written in C#. This code is then translated to something the CPU understands... C# is only a language. You can translate it to anything you like: just write a custom compiler.

    I mean, if there are no such functions in C# you simply write classes or functions that are accessible from C#. You may write them in some other language... It's no big deal (in the end everything is translated to machine code).

    They are researchers: I guess they have a lot of time to think about how this stuff can work.
    Look at universities. They produce so much cool stuff. Only like one percent goes public. They simply have the time and the money... At our university they created their own language and wrote a whole UNIX-based system with it. Only for research.

    I would not get upset like you did. We are on C9 here... simply ask questions: they created this (for us) to ask!


    I'm downloading the interview right now. Is there any way to get at the sources of this OS? Is this closed source, or are there plans to release it somehow for .NET enthusiasts?
  • Beer28 wrote:

    How can you swap memory for IO with a device if C# won't let you out of the sandbox and handle memory, for instance. The video says there is no underlying Win32 API, so there's no PInvoke there to do that.


    C# has pointers and allows you to access memory directly. You have to be within code explicitly marked "unsafe" but you can do it. PInvoke can be used to call assembly routines for the bits you just can't do in anything other than x86.

    It seems as though you're assuming that C# is like a subset of C/C++. It isn't.

    I shall have to sit and watch this video later, it sounds super interesting.
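For anyone who hasn't seen it, here is what C#'s unsafe pointer support looks like in ordinary code (compile with the /unsafe switch; this is standard C#, not Singularity code):

```csharp
using System;

class UnsafeDemo
{
    // Requires compiling with the /unsafe switch.
    static unsafe void Main()
    {
        int[] buffer = { 10, 20, 30 };

        // "fixed" pins the array so the GC can't move it while we
        // hold a raw pointer into it.
        fixed (int* p = buffer)
        {
            int* q = p + 2;               // C-style pointer arithmetic
            *q = 99;                      // write through the pointer
            Console.WriteLine(*p);        // first element, untouched
            Console.WriteLine(p[2]);      // third element, now overwritten
        }
    }
}
```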
  • Man... I can't decide whether I want to drool over Xbox 360 or over Singularity. I'm speechless.

    I wonder if this sort of thing might create incentive to make more .NET hardware? I'm no doubt pushing it here, but it would be amazing to see, say, a processor that takes in IL as its machine language. I doubt there'd be very much of an advantage to such a thing (it'd be ridiculous to manufacture cost-effectively I'm sure), but it's something to daydream about.

    And yeah, so what if a lot of the code actually turns out to be C or machine code? So what if it's 90% managed rather than the 99% that they say? What does it prove? What are you trying to prove?

  • Tom ServoTom Servo W-hat?
    I think the ideal way of implementing a managed OS would be to offer core services in native code form, that is a global memory manager and a global CLR and GC, and then try to implement as much in managed code (including drivers). Processes would be represented by appdomains.
  • TomasDemlTomasDeml Run Chiro, Run!
    reinux wrote:

    Man... I can't decide whether I want to drool over Xbox 360 or over Singularity. I'm speechless.

    I wonder if this sort of thing might create incentive to make more .NET hardware? I'm no doubt pushing it here, but it would be amazing to see, say, a processor that takes in IL as its machine language. I doubt there'd be very much of an advantage to such a thing (it'd be ridiculous to manufacture cost-effectively I'm sure), but it's something to daydream about.

    And yeah, so what if a lot of the code actually turns out to be C or machine code? So what if it's 90% managed rather than the 99% that they say? What does it prove? What are you trying to prove?



    There is a .NET CPU.
  • William Staceystaceyw Before C# there was darkness...
    C# won't let you manipulate memory pointers. There has to be something else there. That big runtime bartok links your C# to must be full of C code.

    C# can't do these things because of the language, so yes it differs from C/C++ and Perl.

    EDIT: Or say you get an interupt and god forbid your x86 compiled C# handler must pull values out of registers to handle it. C# has no such functionality, and you can't drop to asm.

    ?  C# has pointers.  Second, he said they use some asm and some C at the lowest levels.  Third, he said it was C#-"like" - not the C# we use in the SDK.  As others have pointed out, you can do anything you want when you "own" the compiler.  Plus they get the CLR in there at the lowest level, so when that is "on" you probably don't need pointers any longer at the layers above.  And you don't need to P/Invoke any longer.
  • William Staceystaceyw Before C# there was darkness...
    For what are you fighting, beer? I don't understand why you get so upset here. They did some thinking and created an OS written in C#. This code is then translated to something the CPU understands... C# is only a language. You can translate it to everything you like: Just write a custom compiler.

    Well said.
  • William Staceystaceyw Before C# there was darkness...
    Great job C9. This is one of the best.
    I wonder:
    - Could this be the start of a true VM OS?  Would you still need to say this VM is 512MB, this one is 512MB?  Or could VMs be just processes that share the same OS?
    - Display.  Is display in the kernel or still passing messages to a display driver?
    - Channels.  Are channels similar to the Indigo NamedPipe channel?
    - Putting on the dev hat, it would be nice to say this is my "shared" object and then tell the runtime about it.  Another process could know about it with a lookup (a la named mutex, MMF). Then I don't have to do the send/receive back and forth manually.  Both processes could update the same object by abstracting the two-way channel in the runtime even further.  Updates could even fire an event or callback delegate on the object; a method would get fired when the object changes state.  Then in my code, I can just get/set properties on the object and all processes can "see" it.  Naturally, there are sync issues that need to be addressed, but something like that would be handy.  Kind of a publish/subscribe object model.
    - Maybe you could at least shoehorn Monad onto it in the console for a script language.
    - And hack Indigo onto it, so you could at least export instrumentation interfaces so that something like a Perf Mon could be written on another Windows machine that makes Indigo calls to it for testing/demo/display of stats, internals, processes, etc.

    Very cool stuff guys!
    --William  
     
  • This video further confirms my suspicions that Microsoft wants to re-invent the Lisp Machines.
  • If the source code isn't released, could we at least get an ISO that can be booted in VPC? I would love to try it out just to say that I did.

    Cheers!
  • figuerresfiguerres ???
    WOW!

    I see a lot of ways this can help build better systems level code.


    and I see that as a possible way to start a new OS.


    Also, to the guys who did this: think security devices!

    For example, this might just be a killer way to build an embedded device OS, like one for a firewall / router / NAT box.

    An OS that's very limited, with lots of static checking of the code before using it, an isolated-process model, and fast due to less overhead (context switches etc...).

    A share-nothing memory model.

    Now put that on a PC with some network cards and set up isolated drivers for each card.
    Do a non-TCP internal message bus to pass data, and separate isolated processes for the NAT and filter and logging and the rules...

    Could be a way to build a very tight system.

    And use this for creating a test system for drivers and system code... run a system on a VPC that's modded to hook debugging data, try to kill it with bad data and other failure modes, and verify that it "works right".

    and many other ways this could be of use later....
  • Christian Liensbergerlittleguru <3 Seattle
    I love the part where they say that it also has a port of Cassini running on it Big Smile "Every operating system needs a webserver" - I had to laugh.

    Could you put some more information about this thing on the net? Or give us more URLs where we can find more information?

    I'm really interested to know more about it (as beer obviously is too).
  • Beer28 wrote:

    Wouldn't that mean that the bulk of the kernel is that dependency-free mini-CLR runtime binary that Bartok links the code to? What's that written in?


    C#, I'd guess. In much the same way that the C runtime libraries are written in C. Kinda messes with your head just thinking about it.

    Beer28 wrote:

    As for the stack, the stack is more than just an area of memory for pushing and popping, it works with the registers in the CPU for cpu instructions like push and pop, and call and ret.


    Yes, but only in a native-code sort of way. Since all of user mode is MSIL, you only really need the CPU stack for the kernel. And since it's a microkernel it's not going to be that heavily used, so I guess you can just initialize it to a predefined area of non-paged RAM and then pretty much forget about it.

    Having watched the video now I think this sounds like a fantastic idea, I'd certainly love to read more about it. Are there any papers etc on it?
  • Beer28 wrote:
    Buzza wrote: registers / stack / etc are so low level - last time I touched them was when i was doing control electronics on a 68HC11.


    I have a Motorola 68HC11 test board I want to get rid of, if you're feeling nostalgic and you want a good clean one for twenty bucks, let me know.


    You wouldn't have an HC12 with a board by any chance would ya?
  • Beer28 wrote:
    Buzza wrote: registers / stack / etc are so low level - last time I touched them was when i was doing control electronics on a 68HC11.


    I have a Motorola 68HC11 test board I want to get rid of, if you're feeling nostalgic and you want a good clean one for twenty bucks, let me know.


    Na, haven't done that stuff for 10 years - and I hated doing wire wrap (mine was wire wrap anyway).
  • figuerresfiguerres ???
    Beer28 wrote:

    If the CLR runtime that the kernel is linked to when Bartok compiles the C# code is written IN C#, what code is managing the GC of the CLR, if the CLR runtime itself is C#?

    What is the CLR linked to as a runtime?
    C# code can't exist on its own, because of its dependency on memory management and its inability to reach into memory. It's not like you can compile C#, within the confines of the language, for an 8051 chip and load it the way you can with C and SDCC.
    The language itself requires a runtime for GC and memory management. That's like saying you're going to write an 8051 (or any MCU/CPU) OS in Visual Basic when msvbvm is also written in VB; it doesn't compute.


    I think you are close but missing a few details.

    Why can't you write the CLR in C#?

    There's no reason you can't write, say, 99% of .NET in C# or any other .NET language.

    How?

    The same way you port a C compiler to a new CPU.

    You need a bit of asm code in some loaded form to call, to give you a base to start from, yes...

    but remember that the CLR, like a libc, has to be native code to run on a given CPU.

    An OS allocates memory for apps by having a set of data structures to know what memory is out there to use, and it maps who has what blocks.
    So when you boot your CLR/OS you make a few hardware calls (probably in asm) to get the memory size, and then build a map and manage it as stack and heap and some tables of pointers and such...

    just like some apps have 3rd-party memory managers that ask the OS for big chunks and then hand them out to the rest of that app... same thing.

    So you pre-JIT or NGEN the runtime code, take over the memory, and then start running app code inside the OS.

    As for "who manages": well, "managed code" is not entirely a "runtime" thing; part of the "management" is done by the compiler and the MSIL it builds, checking things for some static / predictable errors.
    Then the rest of the CLR handles some dynamic issues.

    But C# lets you build "unsafe" code like C or C++,
    so I would say that some of the things standard .NET does are left out of this system, but you still get a base that can then host managed code on a "clean" base and go from there....

    a mini-kernel that then could load a larger "full CLR" as part of the final OS that then becomes a "fully managed OS".
    Layers...
  • leighswordleighsword LeighSword
    Today, MS tells us their OS is written in C#;
    tomorrow, Intel tells us their CPU is .NET inside (no more x86 assembly instructions).

    oh, my god.
  • Like the OS, the compiler is a research prototype.  We use it to try out new ideas.
  • You've basically described it.  Basically all of the runtime is written in C#.  The GC itself is written in C# using the "unsafe" extensions. 
    The GC gets all of its memory from a very simple page manager at the very bottom of the system. 
    The page manager and GC are written carefully so that they don't require any GC'd memory.
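A toy model of the layering galenh describes, a page manager at the bottom handing raw memory upward while itself allocating no GC'd memory, might look like this in unsafe C# (purely illustrative; the names are invented and the real Singularity code is not public):

```csharp
using System;
using System.Runtime.InteropServices;

class PageManagerSketch
{
    const int PageSize = 4096;
    static IntPtr pool;   // stand-in for the machine's physical memory
    static int used;

    // "Page manager": hands out raw pages from a fixed pool.
    // Note it touches no GC'd memory itself -- only raw native bytes,
    // which is the property that lets a GC be built on top of it.
    static unsafe byte* AllocPage()
    {
        byte* p = (byte*)pool + used;
        used += PageSize;
        return p;
    }

    static unsafe void Main()
    {
        pool = Marshal.AllocHGlobal(16 * PageSize);
        byte* page = AllocPage();
        page[0] = 42;                // a GC above this layer would carve
        Console.WriteLine(page[0]);  // objects out of pages like this one
        Marshal.FreeHGlobal(pool);
    }
}
```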

  • So...  is anyone working on a version of this system to be implemented as a driver within the NT kernel?  It seems like there's no reason this should be impossible, when coLinux did this with an entire Linux kernel. 
  • Beer28 wrote:

    So why doesn't visual studio .NET have x86/64 as a C# compile target?



    With 2005/.NET 2.0, you can create platform-neutral binaries or target specific platforms. The target options in VS 2005 are Any CPU, x86, x64, or Itanium.
  • Beer28 wrote:

    are they dependency-free, like the Bartok-compiled C# binaries they're doing the kernel with, if you don't ever use anything but stack local vars?


    In MS' implementation, the dependencies should be similar to .NET 1.x. With Bartok, this wouldn't apply.

    In VS 2005, the platform-specific code is still compiled as IL by default. IIRC, a platform id is added to the PE header (there's a bit more to it -- I'll try to get a link to the specifics). The code is then compiled to native as usual before execution on the target platform.

    .NET 1.x had no notion of bitness, so the extra information was added for managed/unmanaged interop on 64-bit platforms. In fully managed code this wasn't a problem, but it is when accessing unmanaged code, because in .NET the pointer size is the same as the native platform's.
  • LeighSword
    galenh wrote:
    Like the OS, the compiler is a research prototype.  We use it to try out new ideas.

    imagine if Intel told MS they were trying to integrate an OS into their CPU; how would MS feel?
    the same goes for Longhorn: give us native APIs, just as Intel gives you (MS) basic instructions, and leave the rest to us, please.
  • Beer28, AFAIK, only Bartok does that.

  • Christian Liensberger (littleguru) <3 Seattle
    I have been reading the few documents available on Singularity (one PDF and one presentation).

    I got a question: wouldn't it be possible to create a virtual machine on top of singularity that emulates the Win32 API and other APIs of the current Windows implementation? This wouldn't break existing code and new code could be compiled directly using the new features.

    I guess you already did some of this thinking?! Are there problems with creating such a virtual machine?

    Could you please post more information about your project? The information posted on http://research.microsoft.com/os/singularity/ is not very much.

    Cheers
    Christian
  • figuerres ???
    Beer28 wrote:
    so there's no heap when bartok compiles C# to x86?
    it's all .bss and .data type reserved memory?

    The kernel has its own C library with a few functions, but they have no GC dependency or other dependency.

    Even x86-compiled C# by ngen has the GC dependency and the runtime dependencies. It's still linked to all the CLR imports.

    Maybe it compiles the C# to x86 and does all allocations as hard-allocated initialized or uninitialized data reserves instead of doing a heap and reclaiming memory after use?


    Beer:

    what is the classic Malloc() / Free() pattern?

    a library has a list of blocks and assigns a block to an app's process space, the app later calls free()
    OR
    when the app dies / exit() gets killed
    the OS has to "Free()" the memory that was all tagged for that process.

    Hmmm.... sounds a *LOT* like a GC to me!
    a very crude one, but it is a cleanup that apps have had for a long time.

    if the OS did not have a way to reclaim the blocks on a crash, the OS would be very unstable and die after a few bad apps crashed and left memory locked up.

    so all the .NET new / GC model does is take the classic pattern and build more detailed features into it.

    so you can build most of the .NET (memory) model from a classic malloc / free structure. so doing the reverse is not such a big jump ...
    no black magic, no "runtime dependency". just set up a class that handles the map of blocks and has cleanup code. attach event handlers and delegates as needed.
    which are function pointers and address slots when you drill down to the x86 level.

    but with a .NET model you really have one block per app to recover, not a chain of blocks.... that's an effect of the .NET management -- in a "best case".
    I am not sure if .NET ever allows fragmented space or not.... never had to check that, as it's never been a problem for my apps. Smiley
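    The "map of blocks plus cleanup code" view of a heap described above can be sketched in a few lines of C# (purely illustrative; all names are invented for the example):

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch only: a heap as "a library with a list of blocks",
// implemented as a tiny first-fit allocator over a byte array.
class TinyHeap
{
    readonly byte[] _memory;
    readonly SortedDictionary<int, int> _free = new SortedDictionary<int, int>(); // offset -> size
    readonly Dictionary<int, int> _used = new Dictionary<int, int>();             // offset -> size

    public TinyHeap(int size)
    {
        _memory = new byte[size];
        _free[0] = size;                // one big free block to start
    }

    // First-fit malloc: take the first free block big enough, split the rest.
    public int Malloc(int size)
    {
        foreach (var block in _free)
        {
            if (block.Value >= size)
            {
                _free.Remove(block.Key);
                if (block.Value > size)
                    _free[block.Key + size] = block.Value - size;
                _used[block.Key] = size;
                return block.Key;       // "address" = offset into _memory
            }
        }
        return -1;                      // out of memory
    }

    // Free just moves the block back to the free map (coalescing omitted).
    public void Free(int offset)
    {
        _free[offset] = _used[offset];
        _used.Remove(offset);
    }
}
```

    Process-exit cleanup is then just "free every offset still in the used map", which is exactly the crude GC-like reclamation described above.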
  • MasterPie taddah!
    I'm sorry to be the complete newbie here, but..

    Can someone explain to me what this is?

    Thanks! Smiley

    mVPstar
  • What I think you need to do is forget about malloc and free and look at what these functions do.

    New is part of the framework and free is implemented via the GC. In regards to what the functions do, they are extremely simple and most likely implemented in assembly; remember they said they dropped down to assembly for parts.

    I found this site about another OO language called Oberon:

    http://www.oberon.ethz.ch/native/

    Where they discuss memory management in another type of OS called Oberon.  It's also an OO language.

    http://www.oberon.ethz.ch/native/WebHeap.html

    They discuss memory allocation.  This is most likely what these guys did.

  • Beer28 wrote:


    like C++ or java, new is part of the language, it's a keyword, malloc is not, heap allocation and freeing is actually embedded in the language specs.



    that should have been C++, Java, and C#. As you said, there is no malloc/free in these languages; allocation is in the specs, which leads me to believe that's where the implementation is: in the low-level implementation of the CLR runtime subset.
  • figuerres ???
    Beer28 wrote:


    like C++ or java, new is part of the language, it's a keyword, malloc is not, heap allocation and freeing is actually embedded in the language specs.




    sure it is.... but the "new" has to turn into MSIL, which in turn has to turn into a set of x86 code which has to "do something" that does in part what a call to Malloc() or k*() does.

    with the new in .NET acting on a type, it does more than just set up a chunk, but that's one thing it has to do.... in the normal CLR it's calling CLR functions, like a C app with normal libs would do.

    in this OS I imagine that they have a type with mem-alloc functions that they use at the "microkernel" level and then go from there.

    the thing to keep in mind is that 1 MSIL op may be any amount of x86 code or calls to the runtime.

    so new in code is just a function (I think it's .ctor if I recall??)
    and that's what malloc / kmalloc etc... are.
    just that the language hides the function for you.
    ok, it may not be a "static function", it may also be a chunk of inline code plus function calls at runtime, but it's all just code in the end.
  • Beer28 wrote:

    Why not "new" the memory for the map table?


    Because you're writing the implementation of new, so obviously it isn't going to work.

    It's like saying why not just call malloc when implementing the C standard library version of malloc in C. After all, access to the standard library is part of the C standard right? So your implementation of malloc could just include stdlib.h and call malloc. Except that's obviously stupid.

    It doesn't require a "special" compiler or any kind of magic, it just requires that you don't try to use the heap when implementing the heap. If you did, you'd just end up with a nasty recursive loop and eventually a crash. Why is that so hard for you to grasp? Or are you being deliberately obtuse?

    If you really wanted to ensure that the heap manager couldn't call new then you could always protect it with suitable code access security permissions, but that's probably overkill.
  • figuerres ???
    Beer28 wrote:
    say you are writing the createpagetable() function, and you call new something(); , what's going to happen?

    Where's it going to allocate the object?



    at an address in memory.

    it's the OS.

    it OWNS the ram on the machine....

    all it needs is to track what it's using.

    just like in the old DOS apps that write to the display in text mode.

    you set up a base address and then handle the 80x25 as an array .... and you start writing to that chunk of space.

    and recall that they are, in this simple OS, limiting some things... so I do not think they page to disk,
    nor several other things....
    they have only done the code that was 100% needed to boot and play.

    think like an embedded device -- you own the system.
  • Again, I don't think new would be implemented in C#, but probably in low-level assembler, same as the GC?

  • Beer28 wrote:

    As a matter of fact the implementation of anything in the C library is not defined at all. It doesn't matter how it's implemented as long as it is according to the standard and as long as those functions are linkable in C code. That's why C is portable and works on everything.

    I could be wrong, but I believe new and relying on auto-destruction is part of C# and not the CLI.


    The implementation of GC and memory allocation isn't defined in the C# standard either. It's just required to be there. This is no different from C's requirement that the standard library be available, and the standard library includes malloc.

    C# doesn't stop you from writing a memory manager. You can do all the same things you can do in C, so you could, in theory, do something like:

    unsafe static void ZeroSomeRam()
    {
       // cast the literal address to a pointer; the original snippet
       // omitted the cast and the return type, so it wouldn't compile
       int* pX = (int*)0x0000000A;
       for (int i = 0; i < 100; i++)
       {
          *pX = 0;
          pX++;
       }
    }
    


    to zero a block of memory if you like. If you're writing the memory manager for an OS then you own the RAM and you own the VAS. It's up to you to decide how you arrange things in it.

    If you're writing the OS in C you can't just malloc in the Memory Manager. If you're writing an OS in C# then you can't just new in there. There is no difference.
  • Beer28 wrote:

    There is the C language, then there's the C library.


    The C standard library is part of the language. If you don't have the library available, technically you aren't C.

    Now there may be some implementations which miss bits out, like SDCC, because they're targeting very limited devices. Technically these aren't C.

    The Linux and Windows kernels don't have the full standard library available at kernel level because they were written in C before C was standardised. You could write a full-blown version of the standard library that works in kernel mode, but there is little point, as existing C functions for all the necessary bits exist already. After all, who needs printf in the kernel?
  • William Stacey (staceyw) Before C# there was darkness...

    Could you please post more information about your project? The information posted on http://research.microsoft.com/os/singularity/ is not very much.

    I agree. This could be by design, but that paper and PPT seem to target high-level people, not technical people interested in the design.

  • Here, maybe this helps. Code in C#:


     class Class1
     {
      static void Main()
      {
       Console.Write('.');
       Class1 c1 = new Class1();
       Console.Write('.');
      }
     }

    Compiled into IL:

    .method private hidebysig static void Main() cil managed
    {
          .entrypoint
          // Code Size: 21 byte(s)
          .maxstack 1
          .locals (
                Test167.Class1 class1)
          L_0000: ldc.i4.s 46
          L_0002: call void [mscorlib]System.Console::Write(char)
          L_0007: newobj instance void Test167.Class1::.ctor()
          L_000c: stloc.0
          L_000d: ldc.i4.s 46
          L_000f: call void [mscorlib]System.Console::Write(char)
          L_0014: ret
    }


    And finally, x86 instructions as disassembled by the VS.NET debugger:

     static void Main()
     {
      Console.Write('.');
    push        ebp 
    mov         ebp,esp
    push        eax 
    push        edi 
    push        esi 
    xor         edi,edi
    mov         ecx,2Eh
    call        dword ptr ds:[79C566A8h]
      Class1 c1 = new Class1();
    mov         ecx,0AD5098h
    call        FDBE1FC0
    mov         esi,eax
    mov         ecx,esi
    call        dword ptr ds:[00AD50D4h]
    mov         edi,esi
      Console.Write('.');
    mov         ecx,2Eh
    call        dword ptr ds:[79C566A8h]
     }
    nop             
    pop         esi 
    pop         edi 
    mov         esp,ebp
    pop         ebp 
    ret

    I think if you look at byte 7 of the IL, you can see that it's neither C# nor the IL that decides how objects are created. Since for Singularity they're using their own .NET runtime, making "newobj" do something other than what the normal .NET JITter would do is probably as easy as reimplementing malloc in C. At least, I don't see why it wouldn't be.

  • This sounds like a great research project, and it would be great if they opened it up more to the public. Of course, having a Monad port for it would be a great benefit for the developer team. I actually dreamed of a thing like this, and these guys are having all the fun doing it Smiley
    I'd also have a question: for instance, Apple does their driver development in a constrained C++. Don't you think it would be nice to have a special-purpose language that would be great for device driver writing? That would allow you to write the code once and then get different machine code depending on compiler switches or different compilers. Such code should be more verifiable, perhaps more maintainable over time, with better productivity and a smaller learning curve. Or not? Does it sound convenient, or is C/C++ a good language choice anyway? 
  • I think so. The second call jumps to the class constructor.
  • Manip wrote:
    It might not be a speed demon but it is what I would like running on an ATM machine I am using or on the hospital monitor equipment.



    <offtopic>
    It was on the news just last week that a bunch of thieves built an entire ATM that read cards and took in PINs but wouldn't give any money out. I couldn't help but laugh.
    </offtopic>
  • Cairo I want my waffle sundae, give me my carbs!
    Kernel in C# - interesting! More interesting: "native code" compiler for C#.

  • Beer28 wrote:

    Basically, are those allocation functions imported from the CLR?


    For the current user-mode version of the CLR, yes I'd imagine they are. That doesn't mean that they have to be though. The JIT (or native compiler) can compile a newobj into whatever is appropriate for the environment.

    Beer28 wrote:

    I think the compiler handles it after all this. I think the compiler, Bartok, does special heap allocations as bss and data (initialized and uninitialized reserved) for the C# that compiles the page manager and the stuff that cannot be GC'd, because I can't figure out how else it could be done.


    GC is only ever going to be used if you're calling new to create objects on the heap. Inside the memory manager you are never going to be calling new. What's more, you never need to. The memory manager decides where everything goes in physical memory. If it has data structures that need memory reserved, the programmer simply decides where to put them and hey presto, it's done: that memory is allocated.

    Everywhere else in the kernel (except where IRQLs forbid page faults) you could use new because the memory manager is there to provide an implementation of it. All you need is for the compiler to generate a suitable call to MM (or a kernel trap which is reserved for this purpose) in place of newobj.

    This sort of symbiotic relationship is common in systems-level programming. Who decides where in memory to load the memory manager? How do you load the filesystem driver from the filesystem before you have the driver? And who allocates memory for the memory manager?

    It's nothing to do with what language you use to solve it. Even if you wrote the entire OS in assembly you'd still have to deal with these issues.
  • figuerres ???
    Beer28 wrote:
    int* pX = (int*)0x0000000A;
    *pX = 0;

    Doesn't that defeat the purpose of managed code?

    <quote>If it has data structures that need memory reserved, the programmer simply decides where to put them and hey presto it's done - that memory is allocated.</quote>

    you mean like the bss and data segments of ELF or PE. Yeah, I've been writing that over and over again in this whole thread. You do the same on embedded MCUs.

    But as long as you're doing this and doing stack only programming and allocating the memory yourself in raw ram, wouldn't it be better to do it in C where there isn't OO and huge data structures in classes to push on the stack and reserved memory?

    Wouldn't you rather put task structs into a linked list than a managed array?

    I mean, think about how often a scheduler goes through a linked task struct list. Would you really want the compiler to generate all the extra type safe instructions for critical operations that execute hundreds of times a second?


    beer:

    1)  they said there are tradeoffs and some asm code.
    2) this is not a "ready for primetime" OS it's research.

    but if, say, Minix led to Linux,
    then perhaps this will inspire some other new OS writer to do something new and different from what we have today.

  • Beer28 wrote:

    Wouldn't you rather put task structs into a linked list than a managed array?

    I mean, think about how often a scheduler goes through a linked task struct list. Would you really want the compiler to generate all the extra type safe instructions for critical operations that execute hundreds of times a second?


    You can write pretty damn efficient linked lists in C# (even without 2.0's generics), and bounds checking can be eliminated in a lot of cases -- iterating through an array being one of them. Besides, last time I checked, a few hundred times a second isn't very frequent anymore, not even on most microcontrollers.
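    For example, a scheduler-style linked list in C# is no more elaborate than its C counterpart. This is a hypothetical sketch; the names (SchedTask, TaskList) are invented for illustration:

```csharp
using System;

// Minimal sketch of the kind of singly linked task list a scheduler might
// walk; the node IS the task object, so there is one object per task and
// traversal is a plain reference chase with no array bounds checks.
class SchedTask
{
    public int Id;
    public SchedTask Next;   // intrusive link
}

class TaskList
{
    SchedTask _head;

    public void Push(SchedTask t)
    {
        t.Next = _head;
        _head = t;
    }

    // Walk the list, as a scheduler would on every pass.
    public int Count()
    {
        int n = 0;
        for (SchedTask t = _head; t != null; t = t.Next) n++;
        return n;
    }
}

class Demo
{
    static void Main()
    {
        var list = new TaskList();
        for (int i = 0; i < 3; i++) list.Push(new SchedTask { Id = i });
        Console.WriteLine(list.Count());  // prints 3
    }
}
```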
  • Sorry to jump in so late, but I was traveling and only got around to reading these messages.


    This thread has a lot of different issues intertwined. Let me try to clear up some of the most common confusion. Galen and I would be happy to answer questions about Singularity.


    1. When are source/binary/more papers going to be available?


    We'll put papers on our website (http://research.microsoft.com/os/singularity/) as we finish the final versions of them. It's a quaint academic tradition to ship no paper before its time.

    Code and/or a running system is further in the future. We thought about it, but there are a lot of reasons why we're not ready (not the least of which is that the system is still very barebones, not useful to anyone but us, and in a state of rapid flux). Releasing code entails a lot of work on our part and at least a commitment to answer questions, so it isn't something we'll do until we are good and ready.

    2. What compiler do you use?


    As several people noted, we use the Bartok compiler and runtime from the ACT group in MSR (http://research.microsoft.com/act/). It is a highly optimizing compiler that compiles MSIL down to x86 code. It comes with a runtime system written entirely in C#, though parts of it, most notably the garbage collector (GC), are unsafe C#. (It is an open research challenge to write a real GC in a type-safe language.)

    Bartok is a very high quality compiler that produces good code, but it is a research prototype. It doesn't handle exactly the same language as MS's product compilers (e.g. no reflection) and isn't ready for widespread use. Don't ask when it will be shipped, since it isn't going to be. If you wonder why, say "research prototype" 10 times fast and you'll have the reason.

    3. How do you do xxx in C#?


    A couple things to note. Everything in Singularity is written in safe managed code (C#), except the kernel. This includes device drivers, system components, applications, etc. The kernel, since it implements the memory system and scheduler and manages devices, is pretty low-level code and is primarily written in safe C#, though there are parts written in unsafe C# and a HAL written in C++.

    Also note that we own the compiler and can control the code that it generates. Using an off-the-shelf compiler would introduce a lot of difficulties in predicting exactly what code would be generated in different situations. This is not fundamental, but rather a big convenience.

    And yes, you too can write a good part of your run-time system in safe code. Look at a library sometime. Most of it is pretty simple data manipulation that can be written in any language. There are a few tricky parts where the unsafe subset of C#, or its equivalent, is essential. The key is to factor your system so these parts live in the kernel, with a safe interface, or are inserted by your compiler.

    4. Didn't JavaOS do this?


    Not really. JavaOS is just a simple run-time system between the JVM and bare hardware, providing a bare minimum of services on the hardware to run the JVM.

    Singularity is much closer to the JX project in many respects. You might want to take a look at their paper to understand some of what we are doing:

    Golm, M., Felser, M., Wawersich, C. and Kleinoeder, J. The JX Operating System. in Proceedings of the USENIX 2002 Annual Conference, Monterey, CA, 2002, 45-58.


    Galen and I would be happy to answer questions on Singularity. It would be a lot easier to reply if there was a thread for each topic of discussion, rather than messages containing a collection of unrelated questions.

    Thanks a lot for your interest in Singularity!

  • figuerres ???
    larus wrote:

    Sorry to jump in so late, but I was traveling and only got around to reading these messages.


    This thread has a lot of different issues intertwined. Let me try to clear up some of the most common confusion. Galen and I would be happy to answer questions about Singularity.


    ...

    Galen and I would be happy to answer questions on Singularity. It would be a lot easier to reply if there was a thread for each topic of discussion, rather than messages containing a collection of unrelated questions.

    Thanks a lot for your interest in Singularity!



    thank you!

    And if I may, an idea that just might have some use:

    part VPC part OS Kernel

    a way to test OS building and also creating specialized PC based systems like a firewall / router.

    some sample code for boot-strap and a base kernel as sample code, then a VPC extension to allow testing in slow mode and/or single-step modes,

    so that one could try new low-level code in a kind of VPC debugger.

  • If you'd like a threaded way to communicate, there is the MS forum at betanews.microsoft.com, or you could open one on Google Groups.
    Great project. I hope you produce something that we can install and play with (perhaps via the shared source initiative). Also, can you envision a device-driver oriented language?

  • Very nice video. The idea of a managed OS is of course not new; as someone commented a few pages back (I skipped over the code debate), Lisp machines were essentially this, even with custom hardware to execute Lisp code faster.

    But the brief mention of code verifiability was very interesting. Perhaps someone else is doing something like this in the research labs? Maybe we could get a video of that?

    The concept alone is naturally worth pursuing. Mathematically verifiably safe code. No stupid cert messing; you can simply (receiver-side) verify a binary to be safe. That is so cool.
  • qux
    You can't look at a binary and decide what it will do.  All you can do is run it and then stop it from doing things like disk access that you don't want to allow.
  • Code verification is an idea that has been kicking around the programming language research community for about 10 years. George Necula and Peter Lee did the original work on proof-carrying code and Greg Morrisett elaborated the idea with typed assembly language.

    The work is great (and practical), but keep in mind that most of the properties that have been verified for machine code are safety properties (type and memory safety), just like what the Java JVM or CLR verify for their intermediate languages.

    Glad you liked the video. My kids nearly died laughing and suggested I take some PR training before I do another one.

  • The video was great, C9 isn't about PR and marketroid stuff. It's about what you're up to and I think that's one of the most thought provoking videos I've seen here.

    You guys are super lucky to do that and get paid for it. Smiley
  • Charles Welcome Change
    AndyC wrote:
    C9 isn't about PR and marketroid stuff.


    I'm glad this is perfectly clear! Smiley


    MS Research is an amazing place to work. So much cool stuff is going on there that it's hard to figure out where to go next with our camera.

    C
  • rhm
    This is a very interesting project, mostly because of the original ideas in the OS rather than the fact it's written in C#. I see C# as just an enabler for the new process model concept, as it allows for process separation without the need for heavyweight hardware protection.

    It's also nice to hear some validation of my claim that the security model on conventional operating systems is essentially broken in the network age.


    On a side note: The sound level on this video is very low. I don't know what editing/conversion software you use but hopefully it has an option somewhere for dynamic range compression or sound level maximizing or something like that. Not only would it save having to crank the loudspeakers up (and thus get deafened by any other programs that play sounds) but the higher sound level also improves the quality of lossy sound compression.
    AndyC wrote:
    The video was great, C9 isn't about PR and marketroid stuff. It's about what you're up to and I think that's one of the most thought provoking videos I've seen here.

    You guys are super lucky to do that and get paid for it.


    You don't have teenage kids, do you?

    I really enjoyed the Channel 9 interview, since it let us talk directly to a technical audience, without having to worry about it getting edited into a narrow perspective.

    And, yes, we are lucky. MSR is a great place to work. The rest of the company is pretty good too Smiley.

    /Jim
  • Amazing. Just a few days before this video came out, I wrote a small essay on the architectural form a managed OS might take and some rationale behind it.

    The post is at:
    http://tinyurl.com/dy8b7



    rhm wrote:
    This is a very interesting project, mostly because of the original ideas on the OS rather than the fact it's written in C# - I see C# as just an enabler for the new process model concept as it allows for process separation without the need for heavy-weight hardware protection.

    It's also nice to hear some validation of my claim that the security model on conventional operating systems is essentially broken in the network age.




    Your observation is correct. C#, because it is a type-safe, memory-safe language, enables us to explore new architectures for an OS. The key factor is the safety, rather than C# itself, though I like the language, particularly v2 with generics.

    C# (or another safe language) has an additional advantage as well. Programs written in a safe language are easier to analyze completely and accurately. My research group, SPT (research.microsoft.com/spt), has built many tools for finding defects in programs. Analyzing C/C++ is difficult and always carries an unsound assumption that a program isn't violating language semantics with dirty tricks, such as converting an integer to a pointer. In a safe language, these assumptions are enforced by the type system and runtime system, so tools can rely upon them.

  • How do you work around the issue of DLLs, and loading reusable code? Is that possible at all?

    Also, with regards to channels as the only communication method between two separate entities in the system, is the app/driver just statically linked against that code?

    I'm also working on a project that parallels a small number of your goals/achievements, and not knowing much of the internals of linking, etc., I'm curious how it all works. Is it the job of the JIT to link in static library code at runtime?

    Hope I'm not too vague, as I'm not quite sure what it is exactly I want to ask Wink
  • Beer28 wrote:
    larus wrote: an unsound assumption that a program isn't violating language semantics with dirty tricks, such as converting an integer to a pointer.


    I never thought of using pointers as integers a dirty trick....
    in x86 they're the same width so it doesn't really matter in the compiled code nor asm which is which type. I guess the same is true on 64 with long.

    Do you have any plans for embedded systems, 8051/2, arm?
    what about just arm? I don't forsee this going to 8 or 16 bit in short retrospect.



    I class it as a dirty trick because a pointer is just that: it points to data, but it is not the actual data; the data is what is pointed at by that memory location. An integer is data, not a pointer. Keep the two separate.
  • Beer28 wrote:
    Buzza wrote:
    I class it as a dirty trick as a pointer is just that - it points to data - but it is not the actual data - the data is what is pointed at that memory location.  An integer is data - not a pointer.  Keep the 2 seperate.


    in assembly it doesn't matter, C is shorthand for assembly

    it's 32 bits either way. All those types in windef.h are all typedefs, they are not compiler standard types. WORD, DWORD, WPARAM, all that stuff.
    A pointer to a type and an int are compiler types, they happen to be the same size on x86, long and * are the same size on x86_64.

    That's not cheating, that's programming. If you ever do small MCU's with limited addresses, you will quickly see, that you need addressability.

    Managed code like java is a whole other story. You can't say C or C++ code is dirty because it doesn't behave like java. It's not supposed to.


    The key phrase here is 'happen to be the same size on x86'; that's the dirty part, as it's just a coincidence.

    Could you give me a sample of this 'addressability'? As a C and C++ dev, I have never needed it.
  • Beer28 wrote:


    Also you can write out a function from code, like a JIT does, and call that new entry point. Say on Windows, you VirtualAlloc pages with +rwx and write out some opcodes; you can then cast that address to a function pointer with the params and return type, then funcptr(arg1, arg2);
    So in other words you can do dynamic code and call it after marking the heap pages executable.



    I believe the managed equivalent of that is using the System.Reflection.Emit namespace and its associated classes, which can generate IL on the fly.

    I haven't ever tried actually using it though; it's mostly used by compilers/script engines, but I guess you could use it if you really wanted to.

    Beer28 wrote:


    There are just tons of great uses for addressable memory. C# and Java are pretty limited in my opinion. It sounds like the Microsoft guys may be using some dirty tricks with that "bartok" compiler though


    "Safe C#" and Java are admittedly more limited in this respect than say C or C++. The advantage they give you, however, is much better code verifiabilty as larus mentioned.

    "Unsafe C#", i.e. code that is marked as unverifiable, however is easily as capable as C/C++ and performing all manner of dirty operations like this,  at this expense of requiring Full Trust code access security permissions to run.

    You really should try C# out Beer, even if only under Mono. I think, if you ignored you're preconceptions about Microsoft and .NET behind, you'd actually really like it as a language.
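    A minimal sketch of the Reflection.Emit approach mentioned above (DynamicMethod and ILGenerator are the real .NET APIs; the method built here is an invented example):

```csharp
using System;
using System.Reflection.Emit;

// Build a tiny method's IL at runtime and call it through a delegate.
class EmitDemo
{
    delegate int BinOp(int a, int b);

    static void Main()
    {
        // Equivalent to: int Add(int a, int b) { return a + b; }
        var add = new DynamicMethod("Add", typeof(int),
                                    new[] { typeof(int), typeof(int) });
        ILGenerator il = add.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0);   // push a
        il.Emit(OpCodes.Ldarg_1);   // push b
        il.Emit(OpCodes.Add);       // a + b
        il.Emit(OpCodes.Ret);       // return it

        var fn = (BinOp)add.CreateDelegate(typeof(BinOp));
        Console.WriteLine(fn(2, 3));  // prints 5
    }
}
```

    The runtime JIT-compiles the emitted IL the first time the delegate is invoked, which is the managed analogue of writing opcodes into an executable page and jumping to them.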
  • an unsound assumption that a program isn't violating language semantics with dirty tricks, such as converting an integer to a pointer.


    I never thought of using pointers as integers a dirty trick....
    in x86 they're the same width so it doesn't really matter in the compiled code nor asm which is which type. I guess the same is true on 64 with long.

    Do you have any plans for embedded systems, 8051/2, arm?
    what about just arm? I don't forsee this going to 8 or 16 bit in short retrospect.


    Sorry to offend old-time C programmers, but it is a dirty trick to treat pointers as integers. Moreover, in any version of C after K&R C, it is also unnecessary, except in a few obvious circumstances, such as referencing a literal memory address such as a device register:  *((int*)0xffffeeee)

    The fact that it works on a particular machine is just a coincidence that makes your code more difficult to read, more difficult to port, and a violation of the C language semantics (that's why they introduced void* a long time ago).


    But, in context, I was referring to the difficulty of analyzing code with static tools. Once you start confusing pointers and integers, the tools (including optimizing compilers) pretty much give up and assume the worst.


    Types serve a purpose, both to make code clearer and easier for humans to read, and to tell compilers and other tools that the universe of items accessible through a variable is distinct from the items accessible from another variable, and that only a limited set of operations can be applied to that variable.


    And, yes, the lowest level of our runtime system obviously manipulates raw memory pointers. But, we call them memory pointers (UInt_Ptr), not integers.

    We haven't looked at 8/16 bit systems, but that does not seem like a likely direction for our work. 64 bits, on the other hand, opens a lot of interesting possibilities.


    /Jim
  • These are good questions, and the answers probably aren't clear from the video.

    1. We don't have DLLs, in the sense of dynamically loaded libraries, but we do have libraries of code that can be reused in various applications.


    2. I'm not sure what you mean by "app/driver statically linked against the code"? Code in two processes is only related by the channels between them, which have contracts expressing the data that is transferred and the legal message patterns. Beyond those, there is no linking, and the code in each process is entirely independent.

    3. We don't have a JIT. We precompile everything before executing it. If you don't have dynamic code loading, you don't need a JIT.

    /Jim

  • How do you extend the libraries of code to allow reuse by various applications? For example, adding a SOAP library.

    What I meant by the static linking question is: How do you a) build the app, and b) load it into the system such that it can call the kernel library functions? This also builds on the previous question of extending the kernel library...

    I'm confused how you get by without a JIT, and how this precompiling process works. Especially how you create a SIP, and put code into it. Is that not dynamic code loading?

  • Any Singularity alpha that we can download just to play around? Any .PPT explaining the architecture?

  • To Mr. Larus:

    I'm currently finishing the Computer Engineering course at Universidade de São Paulo (São Paulo University -- http://www.usp.br) and ever since I started my degree, I've been studying C# and some CLR, which eventually allowed me to take the MS Student Consultant position in our university lab (http://www.usp.br/labms).

    I was planning to write my end-of-course monograph on managed-code kernels, and ever since I decided that, I've been gathering material about the subject (which, as you may know, is not an easy task). Eventually I found your project, and since it is much like the topic I chose, I was wondering if you could help me by pointing out any papers, books, or other research materials you may have used.

    I'm really willing to learn and would very much appreciate this kind of input, and would be more than happy to later share my research. Thanks in advance.

    Carlos
  • They mentioned that the current NT kernel can't really be changed that much, and I'm wondering why. Can't you just change the NT kernel and make sure the interface between kernel and userland stays the same? As long as the kernel interface is the same, the app wouldn't care how the kernel did something. I would think that the app wouldn't even have to know it changed.
  • Have you guys ever considered making an ARM port of the OS to use for robotics control and such? It'd be cool to have a .NET-based OS that can run without having to go through WinCE/Linux and Compact Framework/Mono.

    There's DotNetCPU, but... is it gone?

  • Wow. This is absolutely fantastic stuff! Don't listen to the old-time C programmers (the ones averse to change, anyway) in this thread, and don't let them get you down. It's truly a shame that they can't look outside of the box and see what else is possible.


    You've built a fabulous system, and your performance benchmarks were very exciting!


    Keep up the good work, I look forward to seeing what you produce or release next!
  • Charles Welcome Change
    chris31 wrote:
    They mentioned that the current NT kernel can't really be changed that much, and I'm wondering why. Can't you just change the NT kernel and make sure the interface between kernel and userland stays the same? As long as the kernel interface is the same, the app wouldn't care how the kernel did something. I would think that the app wouldn't even have to know it changed.


    The NT kernel can (and does) change, but the fundamental architecture remains static. Singularity represents a completely different operational model than NT. For one thing, there is no notion of shared memory, and processes are truly independent. Also, the Singularity kernel is closed (read: impossible to rootkit an OS like Singularity).

    The Vista kernel is a great piece of engineering and represents an evolution of the NT kernel (the Vista kernel is not a modified XP kernel, for example. It's new (for a client OS anyway)).

    Please stay tuned for more on Vista (and Singularity) in the Going Deep series.

    C
  • Tonatiúh ¡Cualli itch a cosamalot!
    Beer28 wrote:
    [...]
    I'm guessing it's linked to undisclosed low-level C or ASM libraries, since it's not libc. You can't write low-level code with C#; it's impossible, because it won't let you break free into the instruction set you need to reach the BIOS and/or service interrupts, such as faults, device interrupts, or system interrupts of any kind.
    [...]


    Beer, I have read several of your comments while studying this thread in order to enhance my background on the Singularity project of Microsoft Research on operating systems... While reading your post I found myself thinking of what I have learned from a recent paper from MSR titled "An Overview of the Singularity Project" (MSR-TR-2005-135), dated October 27, 2005, nearly five months after that post of yours (May 13, 2005), from which I have quoted a single paragraph above.

    That paragraph has finally broken my self-restraint to wait until I had read everything before replying... Sorry... I just could not avoid being fired up by such an event procedure call interrupt... triggered directly into my brain, bypassing the nutshell that my microkernel-aware operating system is...

    Would you be so kind as to read in full the above-linked document (277 KB in PDF format)? I would greatly appreciate it, especially if you do so before any further comment about Singularity.

    Thanks a lot for reading, Beer...

    Tonatiúh
  • What about the scalability of such a system? I suspect the transition from a single- to a multi-CPU environment will be quite painful performance-wise. Did you do any testing in multiprocessor environments?
  • dhi

    In the second video, I saw that you are doing message passing over channels and running everything in its own SIP.
    I did a lot of programming for µnOS, which is an OO OS written in C++, including a GC.
    It also uses channels for message passing (without message contracts), just like QNX does.
    So if every driver runs in its own SIP, can I write a SIP myself that sends messages to the NIC SIP to tell it to send raw Ethernet packets to the net?
    Or does the TCP/IP SIP have exclusive rights to communicate with the NIC SIP?
    In µnOS we decided to compile device drivers to DLLs.
    So the network service, which is a separate process in user mode, creates an instance of the device driver object(s)
    in its own process space and uses a well-defined interface to access these driver objects.
    Only the network service has a channel over which the other applications can create and use network connections.

    How about modularity and the extension of a certain system service?
    Let's say your TCP/IP SIP currently supports IP, ICMP, ARP, TCP and UDP.
    How would I extend it to support SCTP as well?
    Can I write an extension module (e.g. in the form of a C# class) that is loaded by the TCP/IP SIP at startup to support SCTP?
    Or do I have to write the extension in the form of another SIP?

    I also saw that you are doing process creation, channel management and security inside the microkernel.
    Did you think of implementing a SIP in user mode for doing stuff like that?
    E.g. the QNX process manager does some of these things in user mode.

    All in all, you did a great job with Singularity!
    I hope Microsoft sticks with it!

  • s_jetha 'Will it run on my 486?'
    Wow. Charles, I have new found respect for you. Going on about AI base, and Homeostatic OSes, just made my jaw drop.

    This video was really interesting, especially to someone who has practically no knowledge of what ACTUALLY goes on inside an operating system. The idea of managed code inside an OS is quite intriguing.

    Can't wait to see the second part!!
  • RichardRudek So what do you expect for nothin'... :P
    Wow. Even though this thread is really old, I just had to respond.

    Years ago, I had these exact same thoughts, the genesis of which was probably triggered by Helen Custer’s Inside Windows NT, and probably Gordon Letwin’s Inside OS/2  - tripping over my old, disused PC’s (22 at last count) also provided constant reminders, until I eventually made some shelves and moved them out of the way.  The hoarder’s stubbed toe theory… Smiley

    Back then, apart from processor speed and memory constraints (only?!), the biggest problem was that the processors didn’t have any, or the appropriate, hardware support to allow “protected” operations, and I really loved strongly-typed languages – I simply built better code.

    Then I’d heard about something called Pseudo-Code, and I quickly realised that combining the two ideas might lead to solutions for the lack-of-protection-hardware problem. Actually, I was already familiar with the idea of Pseudo-Code, but I knew it by its implementation: BASIC (tokens, etc).

    Anyway, at that time, the tools (IL, compiler, code analysis, etc) were obviously not around, and I was not able to build the tools – well, if I’d won the lottery, I’d probably just be getting to this point now. So, it’s great to see a lot of this stuff coming to fruition. It’s even better to see that you have the compiler people firmly entrenched in the team… maybe I should have gone to Uni…

    Anyway, just like with Java, I’ve been meaning to learn and actually use C#. I see that C# has one of the things I ranted about at one point: structured source code commenting, which aids the programmer in producing self-documenting source code. Though I’m not sure that I like its HTML-like “tagging” implementation.

    Keep up the good work.

  • magicalclick C9 slogan, #dealwithit. WinPh8.1 IE empty tab crash and removable video control edition.
    Hello guys, I just got this interview link from the MSDN forum. The project is quite interesting. I only have an undergrad degree, so this is out of my league. But I have some concerns about the flexibility of the OS.

    What kind of restrictions will be introduced with a managed environment? I know there will be restrictions, but is there a list of possible restrictions? Maybe an example. And will there be workarounds? How hard is it to introduce a new hardware device driver to the OS?
     
    I know a managed environment is cool, but as a programmer who has used both C++ and Java, I would rather choose C++ for its flexibility and expressiveness over weak Java. I have looked into C# a bit; it is more flexible than Java, so it is nice, but it is still not as flexible as C++. I am not sure how many people will be turned off by its restrictions.

    I am more focused on the impact on 3rd party software providers. About security, I don't care about viruses because I use PC Backup, but I care about worms and trojans. Hopefully you guys can come up with a better solution integrated into the OS.

    Best of luck and keep up the good work.
  • OK, the question of all questions Smiley

    Is there any chance to get our hands on Singularity in the future? Will it be the foundation of the next generation Windows OS?
  • Most impressive - quick and stable. I like it a lot.
