There's already volatile storage that works at that rate - the L1 cache and the on-chip register file work easily at that rate. The latency of RAM isn't the time it takes to flip a 1 to a 0; it's the time it takes to get the information between the CPU and the RAM (out through the cache hierarchy, L1, L2, L3, and then across the memory controller and bus to the DRAM itself), so I'd like to see what the like-for-like speed would be.
The thing that everyone always forgets when they see these amazing new technologies is that how big and fast your computer is basically comes down to cost. Your L1/L2 cache uses stupidly expensive, super-fast SRAM precisely because it's small. You could easily triple or quadruple the speed of your RAM if you were willing to pay the $5k to upgrade all of it to that quality, but thanks to those caches main RAM is hit infrequently enough that it's barely worth the upgrade. It's the bus speed that hurts the most in most computer builds.
Edit: You failed to mention the key phrase from the article that changes the entire conversation: "and is cheap enough to be used in anything from enterprise-level servers all the way down to mobile phones". That's a rather more interesting development.