Coffeehouse Thread

11 posts

Forum Read Only

This forum has been made read only by the site admins. No new threads or comments can be added.

IBM develops 'instantaneous' memory, 100x faster than flash

  • pavone

    "Made in IBM Labs: IBM Scientists Demonstrate Memory Breakthrough for the
    First Time


    • Reliable multi-bit phase-change memory technology demonstrated
    • Scientists achieved a 100 times performance increase in write latency
    compared to Flash
    • Enables a paradigm shift for enterprise IT and storage systems,
    including cloud computing by 2016"

    http://www.engadget.com/2011/06/30/embargo-ibm-develops-instantaneous-memory-100x-faster-than-fl/

     So, we may finally have that instant boot time.

  • JoshRoss

    These kinds of advances keep me interested in this kind of crap!

    -Josh

  • ScanIAm

    It's kind of cool how the bottleneck is moving away from the storage device and back towards the motherboard hardware. 

  • magicalclick

    Saw it on Emporia, so awesome indeed. Of course, compared to photonic processing this is still several factors slower, but still awesome nevertheless.

    Leaving WM on 5/2018 if no apps, no dedicated billboards where I drive, no Store name.
  • Richard.Hein

    Sounds great!  The press release says they have achieved a 10 microsecond write speed in the worst case.  I guess that is when it's writing 11, since they say they apply voltage in an iterative process that increases the voltage slightly each iteration until the desired level is reached.  So I suppose 10/4 microseconds is the best possible case.
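    The 10/4 microsecond guess above can be sketched as a toy model of the iterative program-and-verify write the press release describes. Everything here (a fixed per-step cost, 4 levels per cell) is an illustrative assumption, not IBM's actual scheme:

```python
# Toy model: write latency scales with the target level, since the
# voltage is stepped up iteratively until the cell reaches it.
# All numbers are illustrative assumptions, not IBM's figures.

LEVELS = 4                        # assume 4 levels per cell
WORST_CASE_US = 10.0              # quoted worst-case write latency
STEP_US = WORST_CASE_US / LEVELS  # assume every voltage step costs the same

def write_latency_us(level: int) -> float:
    """Latency (microseconds) to program a cell to `level` (1..LEVELS)."""
    if not 1 <= level <= LEVELS:
        raise ValueError("level out of range")
    return level * STEP_US

print(write_latency_us(LEVELS))  # highest level hits the worst case: 10.0
print(write_latency_us(1))       # lowest level is a quarter of that: 2.5
```

    Under these assumptions, the best case works out to exactly 10/4 = 2.5 microseconds.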

  • evildictaitor

    There's already volatile storage that works at that rate - the L1 cache and the onboard processor-register cache easily work at that rate. The latency of RAM isn't the time it takes to change a 1 to a 0; it's the time it takes to get the information from the CPU to the RAM (going through the onboard cache, L1 processor cache, L2 cache and then over QPI to the RAM bus), so I'd like to see what the like-for-like speed would be.
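    For a rough like-for-like comparison, here is a sketch with commonly cited ballpark latencies. The cache/DRAM numbers are generic textbook figures and the flash number is an order-of-magnitude assumption, not measurements from the article; only the 10 microsecond PCM figure comes from the press release:

```python
# Ballpark latencies (nanoseconds) for a like-for-like comparison.
# Cache/DRAM values are generic textbook figures; the flash write is
# an order-of-magnitude assumption (~1 ms); PCM is the quoted worst case.
latency_ns = {
    "L1 cache": 1,
    "L2 cache": 4,
    "DRAM": 100,
    "PCM write (quoted worst case)": 10_000,
    "NAND flash write": 1_000_000,
}

flash = latency_ns["NAND flash write"]
for name, ns in latency_ns.items():
    print(f"{name}: {ns:>9} ns  ({flash // ns}x faster than a flash write)")
```

    Under these assumptions the PCM write lands right on the article's "100x faster than flash" claim, but it is still two orders of magnitude slower than DRAM.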

    The thing that everyone always forgets when they see these amazing new technologies is that how big and fast your computer is basically comes down to cost. Your L1/L2 cache uses stupidly expensive and super-fast RAM because it's small. You could easily triple or quadruple the speed of RAM if you were willing to pay the $5k to upgrade all of your RAM to that quality, but RAM is so infrequently used that it's barely worth the upgrade. It's the bus speed that hurts the most in most computer builds.


    Edit: You failed to mention this key phrase from the article that changes the entire conversation: "and is cheap enough to be used in anything from enterprise-level servers all the way down to mobile phones". That's somewhat more of an interesting development.

  • ScanIAm

    @evildictaitor:

    I think it depends on the application.  I've seen charts and such that compare speeds of video encoding or other long-running processes, and they can see some improvement with faster memory, but since most of us aren't movie editors, it's mostly an artificial advantage.

    I wonder if the limitation on bus speed has anything to do with the form factor of ATX motherboards.  If manufacturers had the freedom to move the buses around (or closer), would it matter?

  • evildictaitor

    "I wonder if the limitation on bus speed has anything to do with the form factor of ATX motherboards.  If manufacturers had the freedom to move the buses around (or closer), would it matter?"

    The limitation on bus speeds comes from the fact that changing it would instantly kill lots of hardware that people still want to plug into their machines. QPI bus transfers and DMA+ sort of get around a lot of this restriction, but it's a somewhat inherent one.

    Heavy video manipulation (such as decompression of hi-def video) unfortunately requires good RAM, because most video codecs are designed for good file-size compression, not fast memory transfers. That causes problems both because video decompression tends to thrash the page file and because it generates lots of misaligned reads/writes that go to RAM rather than to the cache.

    I suppose that's the price we pay for letting software developers and not hardware developers invent video codecs though :/

  • QuickC

    I think the text says compared to flash RAM.  That would be the non-volatile memory that is in SSDs.  Today they are a collection of serial devices, and very slow compared to DDR.

  • Proton2

    Each "cell" can store the equivalent of 4 bits (via different voltage levels), meaning far less space is required for the same amount of storage, though I don't know the relative cell size of this technology versus flash. Also, the number of writes before degradation is in the millions, whereas current flash memory is only in the thousands.
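    As a quick sanity check on the multi-bit claim, bits per cell and distinguishable levels are related by bits = log2(levels). This is pure illustration: the 4-bit figure above would imply 16 levels, while a 4-level cell would store 2 bits:

```python
import math

# Bits stored per cell as a function of distinguishable levels per cell.
def bits_per_cell(levels: int) -> int:
    return int(math.log2(levels))

print(bits_per_cell(16))  # 16 levels -> 4 bits per cell
print(bits_per_cell(4))   # 4 levels  -> 2 bits per cell
```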

  • Proton2

    ok, trying to enter text while on my PlayBook is not working out well.

