Kang Su Gatlin - "Phoenix," next-generation compiler

Kang Su Gatlin is a program manager on the Visual C++ compiler team. He talks with us about Phoenix, the code name for the new compiler technology that his team is working on with Microsoft Research (it'll be used to compile Longhorn and future versions of SQL Server and Visual Studio).

You'll hear more this week from Kang Su about C++ stuff.

Follow the Discussion

  • I'm not really sure I completely 'get' this framework... Is it an abstraction layer that can output to native binaries or .NET or whatever you wish? Or is it more complicated than that? The video didn't really talk about it, and the website is bare-bones.

    Also, is this a VC++ compiler, or does it sit between the input and output and let you alter each one?
  • Wow, that's sensational!  The academic needs had really never occurred to me.  I'm eager to hear more.
  • CU_0xff
    Hi...

    I saw Phoenix during a presentation of MS Research in Germany about 3 months ago. They showed cross compiling from a PC to Xbox 2.

    According to the presentation that day, Phoenix covers the whole code generation process (including debugging). Neither the input nor the output is bound to any particular language or target at first hand. So they said that, for example, the input could be a binary executable: it would be - how to call that? - 'de-linked', new code added in front of functions or calls, and then re-linked again to build a patched version.

    All in all it sounds quite cute to me Wink

    CU

    0xff


  • The XBox is a PC anyway, it is x86 and runs Windows.
  • KSG
    He was referring to Xbox 2 in his post, which is architecturally different from the Xbox.  I'm not sure if specs have been released for it.

    I don't know much about XBox-2 (unfortunately), but Phoenix is built to be rapidly retargetable, so that you can easily bring up a new platform (as we all know the compiler is one of the first things you need when you get a new ISA).

    I never thought Phoenix would be referred to as "cute"  Smiley

    Thanks,

    Kang Su Gatlin
    Visual C++ Program Manager
  • I know you're all going to think, sigh, moron... but wouldn't it be more efficient to write directly to the hardware, or to have the compiler 'convert' the DX9 calls into direct calls (a special DX9 lib)? For fixed hardware it is a waste to support a general interface when the hardware is always constant.

    I mean the front-end DirectX bits would be the same, but you would link against a lib that is cleaner and thinner because all the extra support has been cut out.

  • Xbox 2 is based on PowerPC processors from IBM; the current Xbox model is based on x86 PC technology.



    A document claimed to be intended for game developers involved in making software for Microsoft's next-generation game console, known as Xbox 2, has leaked onto the World Wide Web and was published by a number of web sites. The document outlines some specs of the software giant's game console, scheduled to emerge in 2005 or 2006, and confirms the details published earlier this year.

    Xbox 2 Main Specs Confirmed?

    The document addressed to game developers is believed to have been written by Pete Isensee, Development Lead, Xbox Advanced Technology Group, and to contain some preliminary information about the Xbox 2 console's internal hardware and architecture. While some specifications are not yet finalized, quite a lot of things have taken shape by now and are not likely to change substantially.

    Among the goals for Microsoft's future console hardware, the company notes maximization of general-purpose processing performance rather than fixed-function hardware, elimination of performance bottlenecks, and overall improved performance of the central processing unit and graphics processing unit.

    The Xbox 2 – a project code-named Xenon – is said to be powered by a triple-core 3.50GHz or faster IBM PowerPC processor and a 500MHz or faster ATI graphics processor, to have 256MB or more of unified memory, and to run a custom operating system based on Microsoft Windows NT, similar to the Xbox operating system. The graphics interface is claimed to be a superset of Direct3D version 9.0.

    Although the architecture of the two consoles is quite different, the Xbox 2 has the processing power to emulate the Xbox, according to Microsoft. Whether the next-generation console will be backward compatible involves a variety of factors, not the least of which is the massive development and testing effort required to allow Xbox games to run on Xenon.

    Xenon is a big-endian system. Both the CPU and GPU process memory in big-endian mode. Games ported from little-endian systems such as the Xbox or PC need to account for this in their game asset pipeline, the Microsoft engineer notes.
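
    For illustration, the conversion such an asset pipeline needs is a byte swap applied to every multi-byte field when little-endian source data is baked for the big-endian console. A minimal C++ sketch (the function names are ours, not from the document):

        #include <cstddef>
        #include <cstdint>

        // Swap a 32-bit value between little- and big-endian byte order.
        std::uint32_t swap32(std::uint32_t v) {
            return (v >> 24) | ((v >> 8) & 0x0000FF00u) |
                   ((v << 8) & 0x00FF0000u) | (v << 24);
        }

        // Hypothetical pipeline step: swap every 32-bit field of an asset
        // record before writing it into the console image.
        void to_big_endian(std::uint32_t* fields, std::size_t count) {
            for (std::size_t i = 0; i < count; ++i)
                fields[i] = swap32(fields[i]);
        }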

    Tapping into the power of the CPU is a daunting task. Writing multithreaded game engines is not trivial. Xenon system software is designed to take advantage of this processing power wherever possible. The Xbox Advanced Technology Group (ATG) is also exploring a variety of techniques for offloading graphics work to the CPU.

    Excerpts from the document are published below.

    Xbox 2 May Process up to 6 Threads

    The Xbox 2’s central processing unit is a custom processor based on PowerPC technology. The CPU includes three independent processors (cores) on a single die. Each core runs at 3.50GHz or faster. The Xbox 2 microprocessor can issue two instructions per clock cycle per core. At peak performance, Xenon can issue 21 billion instructions per second.

    The chip for Microsoft’s future console was designed by IBM in close consultation with the Xbox team, leading to a number of revolutionary additions, including a dot product instruction for extremely fast vector math and custom security features built directly into the silicon to prevent piracy and hacking.

    Each core has two symmetric hardware threads (SMT), for a total of six hardware threads available to games. Not only does the “Xenon CPU” include the standard set of PowerPC integer and floating-point registers (one set per hardware thread), the microprocessor also includes 128 vector (VMX) registers per hardware thread. This astounding number of registers can drastically improve the speed of common mathematical operations, according to the document.

    Each of the three cores includes a 32KB L1 instruction cache and a 32KB L1 data cache, and the three cores share a 1MB L2 cache. The L2 cache can be locked down in segments to improve performance. The L2 cache also has the very unusual feature of being directly readable from the GPU, which allows the GPU to consume geometry and texture data from L2 and main memory simultaneously.

    Microsoft claims that instructions of the next-generation console are exposed to games through compiler intrinsics, allowing developers to access the power of the chip using C language notation.
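
    The Xenon-specific intrinsics were not public at the time, but the standard AltiVec/VMX intrinsics from <altivec.h> show the style of "C language notation" the document means. A sketch using vec_madd, a real AltiVec intrinsic (the dot-product instruction mentioned above has no publicly documented intrinsic, so only the general flavour is shown):

        #include <altivec.h>

        // VMX-style fused multiply-add on four floats at once: d = a * b + c.
        vector float madd4(vector float a, vector float b, vector float c) {
            return vec_madd(a, b, c);
        }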

    Xbox 2 Graphics Processor to Use Shader Model 3.0

    The graphics processor designed for the Xbox 2 console is a custom 500MHz chip from ATI Technologies.

    The shader core has 48 Arithmetic Logic Units (ALUs) that can execute 64 simultaneous threads on groups of 64 vertices or pixels. ALUs are automatically and dynamically assigned to either pixel or vertex processing depending on load. The ALUs can each perform one vector and one scalar operation per clock cycle, for a total of 96 shader operations per clock cycle. Texture loads can be done in parallel to ALU operations. At peak performance, the GPU can issue 48 billion shader operations per second.

    The GPU has a peak pixel fillrate of 4 or more gigapixels/sec (16 gigasamples/sec with 4x antialiasing). The peak vertex rate is 500 or more million vertices/sec. The peak triangle rate is 500 or more million triangles/sec. Microsoft reportedly states that the figures are attainable with non-trivial shaders.

    Microsoft’s future console is designed for HDTV output. In order to fit 720p frame-buffer inside the chip, a special 10MB or larger on-die embedded dynamic RAM (EDRAM) buffer will be incorporated. Larger frame-buffers are also possible because of hardware-accelerated partitioning and predicated rendering that has little cost other than additional vertex processing. Along with the extremely fast EDRAM, the GPU also includes hardware instructions for alpha blending, z-test, and antialiasing.

    The Xbox 2 graphics architecture is a unique design that implements a superset of Direct3D version 9.0. It includes a number of important extensions, including additional compressed texture formats and a flexible tessellation engine. Xenon not only supports high-level shading language (HLSL) model 3.0 for vertex and pixel shaders but also includes advanced shader features well beyond model 3.0, Microsoft claims. For instance, shaders use 32-bit IEEE floating-point math throughout. Vertex shaders can fetch from textures, and pixel shaders can fetch from vertex streams. Xenon shaders also have the unique ability to directly access main memory, allowing techniques that have never before been possible.

    As with Xbox, Xenon will support precompiled push buffers (“command buffers” in Xenon terminology), but to a much greater extent than the Xbox console does. The Xbox team is exposing and documenting the command buffer format so that games are able to harness the GPU much more effectively.

    In addition to an extremely powerful GPU, Xenon also includes a very high-quality resize filter. This filter allows consumers to choose whatever output mode they desire. Xenon automatically scales the game’s output buffer to the consumer-chosen resolution.

    Xbox 2 Memory to Pump up to 22.4GB of Data per Second

    The Xbox 2 will have 256MB or more of unified memory, equally accessible to both the GPU and CPU.

    The main memory controller resides on the GPU (the same as in the Xbox architecture). It has 22.4GB/sec or higher aggregate bandwidth to RAM, distributed between reads and writes. Aggregate means that the bandwidth may be used for all reading or all writing or any combination of the two. Translated into game performance, the GPU can consume a 512×512×32-bpp texture in only 47 microseconds.

    The front side bus (FSB) bandwidth peak is 10.8GB/sec for reads and 10.8GB/sec for writes, over 20 times faster than for Xbox. Note that the 22.4GB/sec main memory bandwidth is shared between the CPU and GPU. If, for example, the CPU is using 2GB/sec for reading and 1GB/sec for writing on the FSB, the GPU has 19.4GB/sec available for accessing RAM.

    Eight pixels (where each pixel is colour plus z = 8 bytes) can be sent to the EDRAM every GPU clock cycle, for an EDRAM write bandwidth of 32GB/sec. Each of these pixels can be expanded through multisampling to 4 samples, for up to 32 multi-sampled pixel samples per clock cycle. With alpha blending, z-test, and z-write enabled, this is equivalent to having 256GB/sec of effective bandwidth! The important thing is that frame buffer bandwidth will never slow down the Xbox 2 GPU.
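
    The quoted numbers follow from the stated assumptions; here is the arithmetic as a compile-checked C++ sketch (the final doubling is our reading of the claim: blending and z-test turn each write into a read-modify-write, which the document does not spell out):

        // Assumptions as stated: 500MHz GPU clock, 8 bytes per pixel
        // (colour + z), 8 pixels per clock.
        constexpr double kGpuClockHz     = 500e6;
        constexpr double kBytesPerPixel  = 8;
        constexpr double kPixelsPerClock = 8;

        // 8 px * 8 B * 500 MHz = 32 GB/s of EDRAM write bandwidth.
        constexpr double kEdramWriteBps =
            kPixelsPerClock * kBytesPerPixel * kGpuClockHz;
        static_assert(kEdramWriteBps == 32e9, "32 GB/s");

        // 4x multisampling gives 32 samples/clock (128 GB/s written); with
        // alpha blend, z-test and z-write each sample is also read back,
        // doubling the effective figure.
        constexpr double kEffectiveBps = kEdramWriteBps * 4 * 2;
        static_assert(kEffectiveBps == 256e9, "256 GB/s");

        // Likewise, a 512x512x32bpp texture is 1 MiB; at 22.4 GB/s that is
        // 1,048,576 B / 22.4e9 B/s = ~47 microseconds, as quoted earlier.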

    New Audio Format for Xbox 2

    The Xbox 2 central processing unit is a superb processor for audio, particularly with its massive mathematical horsepower and vector register set. The microprocessor can process and encode hundreds of audio channels with sophisticated per-voice and global effects, all while using a fraction of the power of a single CPU core.

    The system’s south bridge also contains a key hardware component for audio – XMA decompression. XMA is the native Xenon compressed audio format, based on the WMA Pro architecture. XMA provides sound quality higher than ADPCM at even better compression ratios, typically 6:1–12:1. The south bridge contains a full silicon implementation of the XMA decompression algorithm, including support for multi-channel XMA sources. XMA is processed by the south bridge into standard PCM format in RAM. All other sound processing (sample rate conversion, filtering, effects, mixing, and multispeaker encoding) happens on the CPU.

    The lowest-level Xbox 2 audio software layer is XAudio, a new API designed for optimal digital signal processing. The Xbox Audio Creation Tool (XACT) API from Xbox is also supported, along with new features such as conditional events, improved parameter control, and a more flexible 3D audio model.

    No Built-in Wi-Fi in Xbox 2

    As with Xbox, the next-generation console code-named Xenon is designed to be a multiplayer console. It has built-in networking support including an Ethernet 10/100-BaseT port. It supports up to four controllers.

    From an audio/video standpoint, Xenon will support all the same formats as Xbox, including multiple high-definition formats up through 1080i, plus VGA output.

    In order to provide greater flexibility and support a wider variety of attached devices, the Xenon console includes standard USB 2.0 ports. This feature allows the console to potentially host storage devices, cameras, microphones, and other devices.

    Xbox 2 Unlikely to Feature HDD

    The Xenon console is designed around a larger world view of storage than Xbox was. Games will have access to a variety of storage devices, including connected devices (memory units, USB storage) and remote devices (networked PCs, Xbox Live). At the time of this writing, the decision to include a built-in hard disk in every Xenon console has not been made. If a hard disk is not included in every console, it will certainly be available as an integrated add-on component, Microsoft said.

    Xenon supports up to two attached memory units (MUs). MUs are connected directly to the console, not to controllers as on Xbox. The initial size of the MUs is 64MB, although larger MUs may be available in the future. MU throughput is expected to be around 8MB/sec for reads and 1MB/sec for writes.

    The Xenon game disc drive is a 12x DVD, with an expected outer edge throughput of 16MB/sec. Latency is expected to be in the neighbourhood of 100ms. The media format will be similar to Xbox, with approximately 6GB of usable space on the disk. As on Xbox, media will be stored on a single side in two 3GB layers.

    Design Not Finalised

    The Xenon industrial design process is well underway, but the final look of the box has not been determined. The Xenon console will be smaller than the Xbox console.

    The standard Xbox 2 controller will have a look and feel similar to the Xbox controller. The primary changes are the removal of the Black and White buttons and the addition of shoulder buttons. The triggers, thumbsticks, D-pad, and primary buttons are essentially unchanged. The controller will support vibration.

    Xenon Development Kit

    The Xenon development environment follows the same model as for Xbox. Game development occurs on the PC. The resulting executable image is loaded by the Xenon development kit and remotely debugged on the PC. Microsoft Visual Studio version 7.1 continues as the development environment for Xenon.

    The Xenon compiler is based on a custom PowerPC back end and the latest MS Visual C++ front end. The back end uses technology developed at MS for Windows NT on PowerPC. The Xenon software group includes a dedicated team of compiler engineers updating the compiler to support Xenon-specific CPU extensions. This team is also heavily focused on optimization work.

    The Xenon development kit will include accurate DVD emulation technology to allow developers to very precisely gauge the effects of the retail console disc drive.

    Representatives for Microsoft Corporation did not comment on the report.

    http://www.theregister.co.uk/2004/02/03/xbox_2_to_sport_three/

  • Manip wrote:

    I know you're all going to think, sigh, moron... but wouldn't it be more efficient to write directly to the hardware, or to have the compiler 'convert' the DX9 calls into direct calls (a special DX9 lib)? For fixed hardware it is a waste to support a general interface when the hardware is always constant.



    Possibly. But the back end of Xenon's DX9 implementation is probably heavily optimised for the underlying hardware anyway. Also, keeping a generic API will make it easier to keep Xbox 3 backwards compatible.
  • William Stacey (staceyw) Before C# there was darkness...
    Very kewl.  Sounds like "Master Control Program" Smiley
    Leverage all compiler "stuff" into one product. Inputs are VB, C#, C++, MSH, etc., and it spits out optimized exes or dlls. Wonder if this would allow side-by-side C# and C++ (or VB) methods in the same class, compiled as one project. This would be really handy if you just need to do one or two methods in C++ (or even Perl, ksh, etc.) and don't want to start another project and figure out refs to libs and locations and on and on. Just start coding in any language in your class. In the abstract this would seem very doable, but I guess the devil is in the details.

    Would think you could even change the exe and dll model a bit and abstract that further. Say you had one file type (i.e. *.code or something). If that file has a Main() you can run it as an EXE. You can also just reference it like a dll if you wish. Dlls would just be *.code without a Main(), and you can ref them and call them. Now if you have 10 *.code files (i.e. old dlls) that you use all the time and want to distribute one *.code file (i.e. exe), you can just bunch them into one project and compile to one *.code file (any language, including scripts). Now that would be cool. Cheers!
  • KSG
    Actually you'll be able to have VB.NET, C++, and C# in a single file in VS2005 using the C++ linker.  It's a feature we call "Managed Linking".  We have an internal demo (soon to be released on web) with something like 8 .NET languages in single exe.  The other cool thing is you can actually debug the application in Visual Studio, and when you step across language boundaries, it works just as you would hope it might.
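
    For anyone who wants to try this when VS2005 ships: csc and vbc can already emit .netmodule files, and the VC++ 2005 linker accepts netmodules as direct input. A hedged sketch of a two-language build, with the C# class name invented for illustration (beta-era switches; details may change):

        // Main.cpp -- the C++ half of a hypothetical two-language image.
        #using "Helper.netmodule"   // metadata from the C# module

        int main() {
            Helper::Greeter::SayHello();   // hypothetical class defined in C#
            return 0;
        }

        // Build sketch:
        //   csc  /target:module /out:Helper.netmodule Helper.cs
        //   cl   /clr /c Main.cpp
        //   link Main.obj Helper.netmodule /out:App.exe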

    Kang Su Gatlin
    Visual C++ Program Manager
  • my first understanding was that you're working on a generic back-end, with a public/unique interface for different front-ends to hook into... then you mention the jitters and of course, given the nature of .net application model, i realized that the so called "generic back-end" is more like an entire compiler (with cli front-end and back-end). what is distinct / new in this approach? today, the "super high level" interface is the cli itself... i can already design my compiler back-end to target cli and then use your stuff (runners, jitters, etc) to go from there... c#, vb, j# already do so. in case of c++, it's a little bit different... as a compiler engineer, i would like to know *how* would the new compiler make my life easier...

    regards,
    daniel - c++ r&d...
  • KSG wrote:

    Actually you'll be able to have VB.NET, C++, and C# in a single file in VS2005 using the C++ linker.  It's a feature we call "Managed Linking".  We have an internal demo (soon to be released on web) with something like 8 .NET languages in single exe.  The other cool thing is you can actually debug the application in Visual Studio, and when you step across language boundaries, it works just as you would hope it might.

    but we already have "managed linking" today (talking strictly about the way you transparently link together objs generated with (or without) /clr)... so, i understand that you will adopt the very same strategy for c#, vb (& maybe j#)? that would be nice!

    i always wanted to link c# or vb generated objs in a linker transparent manner! and of course, mix them with already existing (potentially unmanaged) objs or static libraries.

    so, my understanding is that instead of taking different (cli backend) routes with c# and vb, you will have a generic backend (obj based) 100% hookable into the existing linker, and all you have to do is take the existing c++ backend and publish it (as a unique interface) so different frontends (c#, vb, 3rd party) can be consistently usable via the common backend... is that right?

    this is super cool, because:

    1. it simplifies the building process a lot (many front-ends will use the very same backend and, of course, the very same linker).

    2. it lets 3rd party compiler engineers build their frontends and hook them into the common backend without worrying about partial backend compliance, linker issues, etc.

    3. it gives all the compliant languages (potential frontends) the power to generate (via the common backend) compatible objs, as follows.

    3.1 you use your c# folks to generate n objs
    3.2 you use your vb folks to generate m objs
    3.3 you use your existing objs or static libraries

    and link all of them in and you get a seamlessly integrated exe or dll or whatever you wanna name it, without worrying about frontend differences!

    interesting... can 3rd party compiler engineers have access to the backend interface before you release it? of course, under nda, etc...

    regards,
    daniel - c++ r&d // company doesn't matter
  • KSG wrote:

    Actually you'll be able to have VB.NET, C++, and C# in a single file in VS2005 using the C++ linker.  It's a feature we call "Managed Linking".  We have an internal demo (soon to be released on web) with something like 8 .NET languages in single exe.  The other cool thing is you can actually debug the application in Visual Studio, and when you step across language boundaries, it works just as you would hope it might.

    that would be very nice. so you formalized the backend services like "the nature of your frontend is irrelevant, as long as you comply with our backend interface" - that is, you feed our interface the right parse trees and you get the right obj out, and then go from there (use our unique linker)

    so far so good.

    but why not go one step further and formalize the linker also! so you can go like: this c# generated obj with that vb generated obj with that already existing (potentially unmanaged) obj/lib and (here it comes) *my* proprietary obj will link in fine, if *my* linker complies with your linker interface...

    extrapolating.... why not formalize the loader also... and so on and so on and so on...

    regards,
    daniel
  • KSG
    To be more clear about this...

    Inputs to the infrastructure can be ASTs, CIL (C++ IL), MSIL, or PE files.  The framework can then target MSIL (for a standard .NET compiler) or native code (for JITters or static compilers).

    So you can have your language output an AST, for example, which you then feed into Phoenix, and Phoenix can output MSIL (if you're targeting .NET) or x86 native code (if you're doing static compilation).

    Additionally this same framework can be used for building tools, such as static analysis or binary rewriting.
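
    Phoenix's actual API had not been published at this point, so purely as a hypothetical sketch of what a pluggable pass over such an infrastructure might look like (every name below is invented, not the Phoenix API):

        #include <memory>
        #include <vector>

        struct FunctionIR;   // stand-in for the framework's function-level IR

        struct Pass {
            virtual ~Pass() = default;
            virtual void run(FunctionIR& fn) = 0;
        };

        // In the spirit of the binary-rewriting use above: a pass that would
        // instrument every function entry with a call to a logging stub.
        struct EntryProbePass : Pass {
            void run(FunctionIR&) override { /* insert probe at entry block */ }
        };

        struct Pipeline {
            std::vector<std::unique_ptr<Pass>> passes;
            void run(FunctionIR& fn) {
                for (auto& p : passes) p->run(fn);
            }
        };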

    I hope that makes things a bit more clear.

    Thanks,

    Kang Su Gatlin
    Visual C++ Program Manager
  • Interesting... I'm wondering if your observations include the capability of generating 100% native objs from c# and vb... I understand that the linker will end up creating pes (native or managed/with pe stubs)...

    So, can you create 100% native objs from c# and vb? This would make sense, since you seem to have decoupled the frontends from the unique backend...

    PS
    Same observations about using the new c++ (2005) language extensions for generating 100% native applications...

    Regards,
    Daniel

  • one word: SUPER!!!


    ps
    you call the new c++ language extensions "managed code extensions, 2002 and 2003". i am aware of stroustrup's "appetite" for improving the c++ language definition (in the ansi committee). telling stroustrup that c++ doesn't have enough rtti, that it doesn't have support for true properties and events, is like selling fridges to eskimos... now, trying to avoid entering into more politics about the ansi committee, i would like to propose something:

    1. instead of calling your c++ language extensions "managed code extensions", just call them simply "c++ language extensions", especially now, when you will be able to use the new extended c++ dialect to generate native code as well.
    2. standardize them asap via ecma. ansi will not adopt them (mostly because ansi is 99.999% politics instead of properly innovating the c++ language for the benefit of the many)
    3. instead of trying to reach the lowest common denominator among language family members, try to increase the level of abstraction of each language (e.g. instead of reducing the "managed" c++ operator overloading capabilities to the level of overridable operators from vb or c#, make c# and vb capable of overriding (almost) everything...). you already started down this path (e.g. read "c# generics") but there is more to come...

    good luck & best regards, yours,
    daniel // sc*tts v*ll*y

  • Quick question. I'm wondering about the time frame in which 3rd party compiler manufacturers are going to benefit from Phoenix. I understand that the backend interface(s) will be relatively public.

    Also, I suspect that the linker itself will not need any changes at all (that's the whole catch, right? why maintain 2 linkers (one for c# & vb and another for c++) instead of one...) - so, implicitly, tool manufacturers which have already purchased deployment privileges for the c++ toolchain (compiler, resource compiler, librarian, linker, etc.) should be able to deploy the linker with Phoenix... Is this right?

    If my questions go beyond the scope of this area, please accept my apologies.

    Thanks,
    Daniel
  • Hi Kang,

    I have a quick question for you (since you are the program manager for c++):

    Q: Are you in feature complete mode with the new (2005) c++ language extensions and compiler driver?

    So, is the compiler frontend frozen? Knowing how you work (from the past), I'd say you should be in bug-fixing mode... But I'd like to get confirmation from you.

    PS
    By "compiler driver" I meant cl... I was referring to the options cl exposes...

    Thanks & Regards,
    Daniel
  • KSG
    Hi, to answer a few questions in one posting:

    1) Can you generate native code for VB and C# with Phoenix?  The answer is yes, but we can actually do that now with the pre-JIT (called NGen).  The pre-JIT is a JIT that runs before you run the application -- usually when you install the application.  (See the usage sketch after this list.)

    2) We call the new syntax introduced in VS2005, simply "C++" or the new C++.  I'm not in marketing, so as long as you know what I'm talking about I'm happy.  Smiley

    3) I don't know the time frame we'll have general availability for Phoenix.  I'd be surprised if anyone did.  It's one of those things that is always in discussion, but it needs to become more robust before that happens.
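
    A quick aside on the pre-JIT mentioned in (1): ngen.exe ships with the .NET Framework SDK, and pre-compiling an assembly is a one-line affair. A hedged sketch (the exact syntax differs between the 1.x tools and the forthcoming 2.0 tools, and MyApp.exe is a placeholder name):

        ngen MyApp.exe             (.NET Framework 1.x syntax)
        ngen install MyApp.exe     (.NET Framework 2.0 syntax)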

    Hope that helps.

    Thanks,

    Kang Su Gatlin
    Visual C++ Program Manager
  • Re(1): sure we do... i asked you q1 because jits cannot optimize at the level of a "static" compiler

    Re(2): :-)2

    Re(3): same here...

    Re("Hope that helps."):Oh ya!

    Thanks,
    Daniel

  • If this is about front-ends for various languages and back-ends for various architectures, what's the difference from the way gcc currently works (which is cool and convenient, BTW)?

    Rogier
  • KSG
    Have you tried to add a pass to gcc before?  Phoenix is made to be modular and understandable so that researchers can do research with it (Notice that despite gcc being open source and free, no one uses it in research?  Do you know why?).

    Also Phoenix is so configurable it can be used as a JIT, and is the basis for tools, such as binary rewriting or static checking of source code.  Things that gcc can not do.

    Thanks,

    Kang Su Gatlin
    Visual C++ Program Manager
  • KSG wrote:

    Notice that despite gcc being open source and free, no one uses it in research?  Do you know why?

    No, they actually do; see the following link for starters:

    http://gcc.gnu.org/readings.html

  • of course, who would do fundamental compiler research on existing/production compilers? do you know any universities using msvc++ for compiler research? anyway, a few gcc links (generic research) including stanford, princeton, mit, harvard, etc... using gcc for more or less compiler-related research... it's really different from production environments, and here is why:

    class 1. when you do compiler research for a company, you have deadlines, competitors, major compiler optimisations and language features coming from marketing

    class 2. when you do compiler research for a university, you have to make sure your students get the basics, a strong foundation of generic compiler axioms and so on; you couldn't care less about optimisations, etc

    so, when you said "Notice that despite gcc being open source and free, no one uses it in research?  Do you know why?" my answer is simple, here is why:

    class 1. because they have to ship their own compilers, because they compete with gcc, etc

    class 2. because gcc is a PRODUCTION COMPILER and not a DIDACTIC COMPILER... of course, i have absolutely no reason to believe that someone could actually do any kind of fundamental compiler research using a compiler which only comes in binary form...


    my2c, in good faith, of course!
    d

    http://gcc.gnu.org/readings.html
    http://suif.stanford.edu/suif/mlists/suif-talk/199408/19940812.html
    http://suif.stanford.edu/collective/
    http://www.stanford.edu/class/cs195/materials/asgns/asgn1/asgn1.pdf
    http://www.cse.msu.edu/~dengmin1/research.html
    http://www.intel.com/research/mrl/news/files/MRFKeynote_Compiler.pdf
    http://www.princeton.edu/~wqin/build.htm
    http://www.princeton.edu/~xzhu/mips.html
    http://www.princeton.edu/~raas/beowulf/Software.shtml
    http://www.mit.edu/afs/sipb/project/gcc-3.4/
    http://www.cs.rhul.ac.uk/research/languages/projects/rdp.shtml
    http://gcc.gnu.org/ml/gcc/1997-11/msg00773.html
    http://opensource.mimos.my/fosscon2003cd/paper/slides/18_nur_hussein.pdf
    http://216.239.57.104/search?q=cache:1JjWvwyUC1UJ:finiteloop.org/~btaylor/etc/bret_taylor-resume.pdf+gcc+research+stanford&hl=en


    KSG wrote:
    Have you tried to add a pass to gcc before?  Phoenix is made to be modular and understandable so that researchers can do research with it (Notice that despite gcc being open source and free, no one uses it in research?  Do you know why?).

    Also Phoenix is so configurable it can be used as a JIT, and is the basis for tools, such as binary rewriting or static checking of source code.  Things that gcc can not do.

    Thanks,

    Kang Su Gatlin
    Visual C++ Program Manager
  • well, despite both popularity and availability, gcc is actually the worst compiler money can "buy", and here is why:

    1. it's very buggy (despite the common source tree, many bugs bubble up in both front and back ends)
    2. it's very slow (talking about compile time)
    3. the level of optimization is very low (slow is bad, fast is good)
    4. it's falsely compliant (claiming you are 100% ansi compatible is definitely an exaggeration, mainly because the standard itself is over 5% ambiguous!)
    5. most of the source dates from 20 years ago and it suffers from so many syndromes that i would need 20 gigs to log a full analysis...
    6. it lacks crucial options big time! e.g. consistent alignment, etc - to add insult to injury, each platform adds its own inconsistencies...

    regards,
    daniel.

    KSG wrote:
    Have you tried to add a pass to gcc before?  Phoenix is made to be modular and understandable so that researchers can do research with it (Notice that despite gcc being open source and free, no one uses it in research?  Do you know why?).

    Also Phoenix is so configurable it can be used as a JIT, and is the basis for tools, such as binary rewriting or static checking of source code.  Things that gcc can not do.

    Thanks,

    Kang Su Gatlin
    Visual C++ Program Manager
  • KSGKSG
    I have to admit I looked at a random sampling of those links very quickly and none of them looked like they were using gcc as an infrastructure for compiler research.  People certainly USE gcc for research, but they're typically NOT doing COMPILER research. 

    You gave two reasons, and I disagree with both:

    1) I'm referring to researchers who don't have a "product" to ship.  People use SUIF so they can do their research.  They want to prove a result... they usually aren't building a compiler that they can then sell.

    2) Phoenix will also be a production compiler.  Writing well-factored code that can be modified in a research environment is a non-trivial endeavor, and that was never the charter of GCC.

    Remember the original question was how is this different from GCC. 

    And, yes, no one uses VC++ for compiler infrastructure research, as it was not built in any way to support that.  Phoenix is a different story.  That's the difference.

    Thanks,

    Kang Su Gatlin
    Visual C++ Program Manager
  • "it'll be used to compile Longhorn"

    So was it?

