
Erik Meijer: Latency, Native Relativity and Energy-Efficient Programming


I caught up with Erik Meijer recently to see what was on his mind (wish we could do this more often as his mind is typically full of very interesting things at any given time!).

Erik had just read an article - Every Programmer Should Know These Latency Numbers - and it got him thinking...
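
For reference, the figures in that article are the well-known latency numbers popularized by Jeff Dean and Peter Norvig. Here's a small sketch that prints a subset, rescaled so that an L1 cache hit takes one second; the values are approximate and dated, useful for relative magnitude only:

```cpp
#include <cstdio>

// Approximate latency numbers (circa 2012, popularized by Jeff Dean).
// The "human scale" column rescales everything so an L1 cache hit
// takes one second, which makes the spread easier to feel.
int main() {
    struct Item { const char* what; double ns; };
    const Item items[] = {
        {"L1 cache reference",                    0.5},
        {"Branch mispredict",                     5.0},
        {"L2 cache reference",                    7.0},
        {"Main memory reference",               100.0},
        {"Read 4 KB randomly from SSD",      150000.0},
        {"Round trip in same datacenter",    500000.0},
        {"Disk seek",                      10000000.0},
        {"Packet CA->Netherlands->CA",    150000000.0},
    };
    for (const Item& it : items) {
        const double human_s = it.ns / 0.5;  // seconds, if an L1 hit took 1 s
        std::printf("%-32s %15.1f ns  ~%11.0f s at human scale\n",
                    it.what, it.ns, human_s);
    }
    return 0;
}
```

At that scale a disk seek takes roughly 230 days, which is the point the article (and Erik) is making.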

Here, Erik discusses his perspective on latency. We also discuss native relativity, energy-efficient programming (what does that mean?) and nutrition labels for software. As usual, this is classic C9: the conversation just happens and it's all captured in audio and video. Whiteboarding included. Twists and turns, too.

Always a real pleasure to get the chance to geek out with Erik. Thank you, Dr. Meijer!

And remember, energy can neither be created nor destroyed, only converted from one form to another and moved from place to place. It should be fairly obvious how this relates to the energy required to compute your code in this age of clouds and battery-powered personal computers (aka phones...). How much energy does your code use? How could type systems help here (could they?)?
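
One crude but concrete way to ask that question of your own code today: on a Linux box where the Intel RAPL counters are exposed through the powercap sysfs interface, you can sample the package energy counter around a workload. This is a sketch under those assumptions; the path varies by machine, and recent kernels restrict the counter to root:

```cpp
#include <chrono>
#include <cstdint>
#include <fstream>
#include <iostream>

// Reads the cumulative package energy counter, in microjoules, that
// Intel RAPL exposes via Linux powercap. intel-rapl:0 is usually the
// CPU package domain; the counter wraps around, which this sketch ignores.
std::uint64_t read_energy_uj() {
    std::ifstream f("/sys/class/powercap/intel-rapl:0/energy_uj");
    std::uint64_t uj = 0;
    f >> uj;
    return uj;
}

int main() {
    const std::uint64_t e0 = read_energy_uj();
    const auto t0 = std::chrono::steady_clock::now();

    // The workload under measurement: a deliberately busy loop.
    volatile double sink = 0.0;
    for (long i = 0; i < 200000000L; ++i) sink += i * 0.5;

    const std::uint64_t e1 = read_energy_uj();
    const auto t1 = std::chrono::steady_clock::now();

    const double joules  = (e1 - e0) / 1e6;
    const double seconds = std::chrono::duration<double>(t1 - t0).count();
    std::cout << joules << " J in " << seconds << " s (~"
              << joules / seconds << " W average)\n";
}
```

It measures the whole package, not just your process, so it's a first approximation at best; but it turns "how much energy does your code use?" from rhetoric into a number.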

Tune in. Think.



Follow the Discussion

  • As you say, Charles: classic! I hope Erik pursues this and maybe writes something in long form (an actual paper or article would be nice). This is such an important and, frankly, fascinating area, I think. Delivering the maximum experience with the minimum of electrons is a worthwhile and deeply satisfying design constraint.

  • chris

    If you liked this talk, then:

    http://www.yosefk.com/blog/simd-simt-smt-parallelism-in-nvidia-gpus.html

    Chris

  • RobIII (RobThree) Life's unfair, but root password helps!

    I have been pointing people/co-workers at this diagram for some time now. It conveys the same idea as the diagram referred to by Erik and the article but, IMHO, even more clearly illustrates the magnitudes of latencies.

  • Bent Rasmussen (exoteric) stuck in a loop, for a while

    I wonder what Erik and Brian think about reconfigurable asynchronous logic automata, as shown in the talk "Programming with Bits and Atoms". It seems like just the sort of thing a computer scientist and a physicist would find fascinating.

  • felix9 the cat that walked by itself

    Maybe you need some infrastructure like this:

    http://blogs.msdn.com/b/ie/archive/2011/03/28/browser-power-consumption-leading-the-industry-with-internet-explorer-9.aspx

  • Richard Anthony Hein (Richard.Hein) Stay on Target

    Very interesting, and great links in the comments: lots of new favorite links today. Thanks, rabbits!

  • This was so awesome.

    This is kind of the "relativity theory" of computation: expanding the current theory from two dimensions to four, namely time, space, latency, and energy. Expressing a relationship between those dimensions via types sounds awesome. Maybe at the beginning you could give the programmer attributes to mark up their code to express areas that are or are not power-sensitive; as the platform's energy profile changes (dock, undock, etc.), some calculations would fall within the device's allowed expenditure range (a hypothetical sketch of this idea follows this comment).

    Restricting a language's expressivity seems tough to me because people always find a way to express themselves.

    @Charles... Native is a relative term. You could describe a process in a language and then have it "firmed up" by running it on an FPGA. There are people doing that with Java these days. If you want to talk about optimization, then you also have to take a long look at the problem you are trying to solve.

    These guys rolled out an FPGA cluster for JP Morgan:

    http://www.maxeler.com/technology/dataflow-computing/
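
A purely hypothetical sketch of the mark-up idea in the comment above; PowerClass, DeviceState and permitted are invented names for illustration, not a real platform API:

```cpp
#include <functional>
#include <iostream>
#include <utility>
#include <vector>

// Hypothetical annotation a programmer could attach to code regions.
enum class PowerClass { Critical, Interactive, Background };

// Hypothetical snapshot of the platform's energy profile.
struct DeviceState {
    bool on_battery;
    double charge_fraction;  // 0.0 .. 1.0
};

// The gate: does the current profile permit this class of work?
bool permitted(PowerClass c, const DeviceState& s) {
    if (!s.on_battery) return true;              // docked: run everything
    if (c == PowerClass::Critical) return true;  // always allowed
    if (c == PowerClass::Interactive) return s.charge_fraction > 0.10;
    return s.charge_fraction > 0.50;             // defer background work
}

int main() {
    const DeviceState state{true, 0.30};  // undocked, 30% charge

    std::vector<std::pair<PowerClass, std::function<void()>>> tasks = {
        {PowerClass::Critical,    []{ std::cout << "sync call state\n"; }},
        {PowerClass::Interactive, []{ std::cout << "render UI\n"; }},
        {PowerClass::Background,  []{ std::cout << "re-index photos\n"; }},
    };

    for (const auto& t : tasks) {
        if (permitted(t.first, state)) t.second();
        else std::cout << "deferred until the profile allows it\n";
    }
}
```

Types could push this further: imagine the Background annotation being part of a function's signature, so the compiler rejects a call path that spends a power budget it was never granted.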


  • Charles Welcome Change

    @B3NT: Thanks, B3NT... Yep, native is relative. Some things are more native than others as a consequence of this fact, relatively speaking.

    Thanks for the link, BTW.

    C

  • Having watched this a couple of times now, a few things spring to mind:

     Firstly, even if we knew that in 10 years' time a magical 'charges in 5 minutes, lasts a week' battery technology (one that we'd be happy keeping in our pockets and/or next to our skin) would become available, that still leaves 10 years in which we programmers can each do our bit (sic) for the planet and future generations by making a real effort not to write the watt-burning equivalent of 'bloatware' (wattware?).

     Secondly, I think we need to stop talking about native being about low-level issues like deterministic memory management (hugely desirable) and managed being about higher abstraction. If you take the 'safe path' through C++11, then templates, value-embedded objects and, at a push, smart pointers, together with judicious use of inheritance, give you oodles of abstraction, and what's more, it doesn't lock the performance door behind you. Sidebar: what we need here are oodles of high-quality C++11 samples, so that doubters and the fearful don't keep encountering scary C++98 (or worse) code; a minimal attempt at one follows this comment.

     Lastly, I thought the point about compromising by spending more energy to gain reduced time to market was well made, but then the stakeholders need to be mature and responsible enough to re-invest in more efficient representations once customer mindshare has been secured. I have my doubts that they will, unless it can be shown that not using a lower-energy representation bites them where it hurts... in their wallets (or purses).
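
In that spirit, a minimal 'safe path' C++11 sample: values in containers, inheritance only where runtime polymorphism is genuinely needed, and ownership stated exactly once via unique_ptr (make_unique only arrives in C++14, hence the single explicit new handed straight to the smart pointer):

```cpp
#include <algorithm>
#include <iostream>
#include <memory>
#include <string>
#include <vector>

struct Sensor {
    std::string name;
    double reading;
};

// An interface where we genuinely need runtime polymorphism.
struct Reporter {
    virtual ~Reporter() = default;
    virtual void report(const Sensor& s) const = 0;
};

struct ConsoleReporter : Reporter {
    void report(const Sensor& s) const override {
        std::cout << s.name << ": " << s.reading << '\n';
    }
};

int main() {
    // Values in a container: no heap bookkeeping for the programmer,
    // cache-friendly layout, destruction is deterministic and implicit.
    std::vector<Sensor> sensors{{"cpu", 41.5}, {"gpu", 63.0}, {"ssd", 35.2}};

    std::sort(sensors.begin(), sensors.end(),
              [](const Sensor& a, const Sensor& b) {
                  return a.reading < b.reading;
              });

    // Ownership expressed once, released exactly once.
    std::unique_ptr<Reporter> reporter(new ConsoleReporter);
    for (const auto& s : sensors) reporter->report(s);
}   // everything is reclaimed here, deterministically
```

No naked deletes, plenty of abstraction, and nothing here costs more at runtime than the equivalent hand-written cleanup code.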

  • @chris: Oh, nice article; many thanks for the link!

  • Richard Anthony Hein (Richard.Hein) Stay on Target

    Have you heard about SSDAlloc?  http://phys.org/news/2012-07-massive-power-big-companies.html


  • I think the situation nowadays is just a temporary stage in which our mobile devices consume too much energy because they are built on old desktop technology. Also, the server farms now being built up use desktop technologies too, so naturally energy consumption is an issue.

    But energy consumption is essentially a hardware issue, not a software one. As soon as we take the next step and start using ultra-low-energy transistors (many of which are in research labs already), all this obsession with energy will go away.

    So preparing general-purpose languages or IDEs for this seems like a completely unnecessary effort.

    I'd rather see strong research on information models and how we could make languages deal with information flows instead of machine details.

  • Richard Anthony Hein (Richard.Hein) Stay on Target

    There have been a lot of great breakthroughs in capacitors with steady output voltage and in new ways to create photovoltaics on silicon, so it seems the problem will change in 2-3 years. Devices will charge very quickly with hybrid capacitor-battery designs, so the lifetime of the battery will be a bit less of a concern in most devices, and charge time and/or charging capabilities (photovoltaic/hydrogen/motion/piezo/induction/heat/etc.) will be features. Most people move between places or conditions where the device can be charged, and their heat, motion, pressure, even their voice are ways to charge up a capacitor quickly (the problem of varying voltage has been overcome). Imagine if your battery is dead and you literally just need to squeeze out a few seconds of call time....
    Anyway, right now it is a problem, and later we will still have to program in a way that takes advantage of whatever energy-management capabilities are in a device. Also, it's not like you can make an app that uses up all the charge quickly, even if ubiquitous charging exists. Rather, it should be smart enough to know when it needs a lot of energy quickly versus being able to run in the background, similar to prioritizing threads.

  • > Assembly language isn't even the lowest level ... maybe the machine has microcode

    Yup. Micro-ops. Your x86 processor doesn't run x86 instructions; it dynamically translates (and caches) them into a small RISC-like instruction set, which can vary even between chip models from the same vendor.

    x86 is just a convenient "intermediate language". And by convenient, I mean that a lot of software exists that runs on it.

  • Charles Welcome Change

    Native relativity refers to the level of abstraction (conceptual distance or depth) between the underlying machine that will eventually execute your code (in a form that is nothing like what you typed into the editor...) and the code you write, read and reason about, developing algorithms, building apps, scripting web pages...

    Some folks compose closer to the machine at design time than others, and this distance from the machine is reflected in the specific language abstractions they use to author computer programs (C/C++ versus Java/C#/JavaScript, for example). The former more closely abstract the target machine and are in fact compiled for specific CPU architectures before the code you write ever runs. The latter target "VMs" like the JVM/CLR/Chakra, which take generic "intermediate" instructions (IL, bytecode) and output CPU-specific machine code at runtime. So, one becomes native earlier in the compilation workflow than the other. At the end of the day, they are all native when their inevitable instructions instruct the CPU to compute (a tiny illustration follows this comment).

    Everything is relative at some level, even relativity itself. Native is no exception.

    C
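
A tiny illustration of "native early" versus "native late". The assembly below is typical of what g++ -O2 emits for x86-64 (exact output varies by compiler and version), and the IL is roughly what a release-mode C# compile of the equivalent method produces; both are shown as comments so this stays a valid C++ translation unit:

```cpp
// square.cpp -- inspect the ahead-of-time output with: g++ -O2 -S square.cpp
//
// A C++ compiler emits CPU-specific machine code at build time
// ("native early"); for x86-64 the result is typically:
//
//     square(int):
//         mov  eax, edi      ; Intel syntax shown; g++ defaults to AT&T
//         imul eax, edi
//         ret
//
// The equivalent C# method compiles to generic IL instead, and the CLR's
// JIT produces comparable machine code only at runtime ("native late"):
//
//     ldarg.0
//     ldarg.0
//     mul
//     ret
int square(int x) { return x * x; }
```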
