Eric Aguiar

Back to Profile: HeavensRevenge

Comments

  • C9 Lectures: Dr. Erik Meijer - Functional Programming Fundamentals, Chapter 2 of 13

    One amazing add-on for GHC is WinGHCi, a stand-in frontend that replaces GHCi (while officially using GHC underneath), found at http://code.google.com/p/winghci/. I fully recommend it since I haven't been disappointed by it yet (and it prints to screen ~20x faster than the original GHCi output) :P

     

  • Niners on 9: Sven Groot - Past, Present and Future

    I think a nice feature specific to Channel 9 videos would be an account-specific perk that remembers the play position of a partially viewed video, so whenever you leave and come back you can continue from where you left off.

  • MSNZ Unplugged - Preparing for Cloud Computing & PDC

    He's indeed an oddball; all 7 of his mentioned myths are themselves... myths! He just picked the truths apart into partial truths to try and make a sales pitch. For the record though, I LOVE the cloud, but how he bends words for his benefit goes against what I believe.

    1. Azure is what? Architecture + Infrastructure = Platform.
    2. Damn right every major vendor will have their own "cloud"; Oracle, Microsoft, Dell, Google, etc. will all have their own clouds, out of fear of intellectual property, trade secrets, or source code leaking into another vendor's hands.
    3. SaaS IS a cloud app; it's an application which runs on many computers at once. Applications fit easily into a SaaS topology: treat each element of a naive array as one physical machine, then execute a closure on each machine for almost balanced workloads.
    4. Clusters and movie-production render farms are totally different; not even MapReduce qualifies as cloud computing to me, since it's far too limited in scope to be as general-purpose as what historical grids have provided, like Folding@home for example.
    5. Well, it's not like you really have a cloud on 1 machine; that's just being stuck in the historical rut of sequential code running on 1 core, which also prevents multi-threaded bugs from being seen on a single core for the same reason. The coolest (and only) way to achieve local cloud computing would be having 1 VM bind to 1 core of a future many-core chip, with hardware-supported message passing between cores on the motherboard to create a local virtual grid. :P
    6. The cloud is 100% dependent on the internet for both its transport mechanisms and its encapsulation methods, so he just got his thoughts reversed...
    7. This one is the most radical, and it's almost not a myth on one merit: the ONLY thing that cannot be in the cloud is what you use to access it, the bootstrap mechanism (point of entry). Once that is no longer the case, say with an embedded OS or a mobile device, THEN ALL DATA will be in the cloud; it would free up every company and every person for flexible confidentiality of their data, with accessibility from any location on Earth as long as they are bootstrapped to access the cloud (internet). So if you omit the method of accessing the cloud from "everything", then yes, EVERYTHING will be in the cloud.
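    Point 3's picture of one array element per machine can be sketched in Haskell, under the stated assumptions: a lightweight thread stands in for each "machine", and `runOnMachines` is a made-up name for illustration.

    ```haskell
    -- Sketch: each element of a "naive array" plays the role of one
    -- physical machine, and the same closure is executed on each one
    -- concurrently. A Haskell thread stands in for a machine here.
    import Control.Concurrent (forkIO)
    import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

    runOnMachines :: (a -> b) -> [a] -> IO [b]
    runOnMachines closure elems = do
      slots <- mapM (\x -> do
                       slot <- newEmptyMVar
                       -- one "machine" (thread) per array element
                       _ <- forkIO (putMVar slot $! closure x)
                       return slot)
                    elems
      mapM takeMVar slots  -- gather results, preserving array order

    main :: IO ()
    main = runOnMachines (* 2) [1 .. 8 :: Int] >>= print
    -- prints [2,4,6,8,10,12,14,16]
    ```

    The workloads come out "almost balanced" only when the closure costs roughly the same per element, which is the assumption behind the naive-array picture.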

    Just my thoughts for now; time to go watch another vid as I wait for PDC beauty. :)

  • Expert to Expert: Erik Meijer and Butler Lampson - Abstraction, Security and Embodiment

    Charles and Channel 9'ers, I actually believe I've found that magic sauce/silver bullet/free lunch for concurrency & parallelism: running sequential programs in a concurrent manner on a multi-core chip.

    In no relation to relativity, I call it Relativistic Computation (I'm in school, so time is limited). By relativistic computation I just mean that when code gets pumped into a scheduler sequentially, its functions are put into a CPU opcode pipeline; once a function has flowed over all the cores and wraps around to start on the first core again, the scheduler reassesses which function to execute based on its progress over the available array of cores.

    A data-dependent increment to 40, showing which number goes to which core during each step, would look like:

    Step #: 01 - 02 - 03 - 04 - 05 - 06 - 07 - 08 - 09 - 10 
    ————————————————————————————————————————————————— 
    Core 1: 01 - 05 - 09 - 13 - 17 - 21 - 25 - 29 - 33 - 37 
    Core 2: 02 - 06 - 10 - 14 - 18 - 22 - 26 - 30 - 34 - 38 
    Core 3: 03 - 07 - 11 - 15 - 19 - 23 - 27 - 31 - 35 - 39 
    Core 4: 04 - 08 - 12 - 16 - 20 - 24 - 28 - 32 - 36 - 40 

    Even though it looks plainly obvious, it has never been implemented!! Just have an input and output scheduler pump dependent opcodes sequentially into separate CPU pipelines to saturate each core's pipeline.

    Basically, it's striping a STREAM of assembly opcodes across the CPUs: after the scheduler does a quick data-flow analysis, the recomputed true op/function becomes +4 per core instead of the original +1 per step; the example above shows the concept at work.
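    The striping step itself can be sketched as a round-robin split, with plain `Int`s standing in for the opcode stream (`stripe` is a made-up name, assuming the 4-core layout from the table above):

    ```haskell
    -- Sketch: round-robin "striping" of a sequential stream across n
    -- cores, as in the 4-core table above. Core k then sees a stride-n
    -- substream, so a data-dependent +1 chain becomes an effective +4
    -- per core.
    stripe :: Int -> [a] -> [[a]]
    stripe n xs =
      [ [ x | (i, x) <- zip [0 ..] xs, i `mod` n == k ] | k <- [0 .. n - 1] ]

    main :: IO ()
    main = mapM_ print (stripe 4 [1 .. 40 :: Int])
    -- Core 1 gets [1,5,...,37], core 2 gets [2,6,...,38], and so on.
    ```

    This only shows the distribution; the hard part the scheduler would still face is that each core's stride-4 chain remains data-dependent internally.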

    The scheduler is huge in controlling data access to and from both the inputs and outputs of the data streams, helping manage conditional execution and data dependencies; having 1 scheduler control 1 chip and its cores, as a User-Mode Scheduler (UMS) or a VM abstraction for memory isolation, would be quite awesome.

    I'd really enjoy having a respectable Microsoft Technical Fellow see if this is even sensible, but it seems to work in my head :P

     

    This is my first post here, but I'm a long-time Channel 9'er and avid Haskell enthusiast, so it's nice seeing Erik challenged by a guy like Mr. Lampson.

     

    Until next time, Charles