Coffeehouse Thread

41 posts


Where is my 128 GHz chip?

  • Sven Groot

    @kettch: Just keep it away from the cheese.

  • GoddersUK

    , evildictaitor wrote

    *snip*

    The biggest problem is not the CPU being unable to do computations at that speed - it's being able to put the register banks and the L1 and L2 caches close enough to the CPU that the processor doesn't spend most of its time waiting for data to process.

    There's no point being able to do a trillion operations a second if you can't load data onto the device or offload it fast enough to be able to make use of it.

    At that kind of speed you also need pretty ridiculous cooling systems in order to keep the chip from simply melting. 

    A more pressing problem is probably current leakage as they try to minimise transistor size.

    But yes, the fundamental physical limits of current technology are being approached. I'm sure one day we'll have quantum/bio/photon computers but, in the meantime, I think we've reached the point where lack of processing power is only really an issue for heavier-duty equipment (possibly desktops and servers, but I'm really thinking of computational clusters/supercomputers and so forth here) where size and cooling are easier to accommodate.
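    A quick back-of-envelope sketch of those limits (illustrative Python; the figures assume ideal light-speed signalling, so real hardware is even more constrained):

        # Rough numbers for a hypothetical 128 GHz core. Illustrative only:
        # real on-chip signals travel well below c, so the true constraints
        # are tighter than these.
        C = 299_792_458      # speed of light in vacuum, m/s
        FREQ = 128e9         # hypothetical clock frequency, Hz

        cycle_time = 1 / FREQ      # seconds per clock cycle
        reach = C * cycle_time     # metres a signal could travel per cycle
        feed = FREQ * 8            # bytes/s to supply one 64-bit operand per cycle

        print(f"cycle time:        {cycle_time * 1e12:.2f} ps")   # ~7.81 ps
        print(f"light-speed reach: {reach * 1e3:.2f} mm/cycle")   # ~2.34 mm
        print(f"operand feed rate: {feed / 1e12:.1f} TB/s")       # ~1.0 TB/s

    In other words, even at the speed of light a signal covers barely a couple of millimetres per cycle, and feeding a single 64-bit operand per cycle already demands terabyte-per-second bandwidth.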

  • User profile image
    Bass

    I hope that in the future there will be computers measured by how many neurons they have and their average activation delay. Imagine running a neural network algorithm on an actual neural network. That would be cool.

  • User profile image
    kettch

    @Bass: Even cooler when it comes back with improvements to your algorithm and incinerates you for being inefficient.

  • User profile image
    eddwo

    What do you need a 128 GHz chip for?  Are you modelling turbulent flows in fluid dynamics, or do you just want prettier graphics in your games?  

    I'm sure we all want to approach holodeck style realism one day, but for most tasks the current systems are probably more than powerful enough.

    People adapt a lot more slowly than computers. We still get people asking us for 'minimum system requirements', or what PC they should buy to run our software, when in truth, for all but the most demanding applications, any computer under five years old will do fine.

    I'm still using a Core 2 Duo 6300 from 2006 as my development machine, just with rather more RAM now than it started with.

     

    What we really need is a way to actually make full use of the capabilities of the systems we have without all the 'bloating' that has gone on. We have mostly just adapted to faster computers by creating more inefficient software, which leaves us roughly where we were a decade ago in terms of responsiveness.

  • User profile image
    exoteric

    , evildictaitor wrote

    *snip*

    Now, I'm no expert, but I'm pretty sure that latency isn't going to halve in the next 18 months without upsetting a whole ton of physicists.

    If it did, it would be spooky!

  • Proton2

    I was curious about what the fastest GHz is today, and I found this, but it's from 2005:

    "Feng and Hafez developed a transistor less than half a millionth of a metre long, with a maximum operating speed of 604 GHz, meaning it can carry out 604 billion operations every second."

    http://www.newscientist.com/article/dn7253-worlds-fastest-transistor-operates-at-blinding-speed.html

    Of course that is just the speed of a single transistor.

    Wolfram Alpha says half a millionth of a metre is 500 nm (nanometres), and Intel is about to make 14 nm features. I don't know how big a transistor is using 14 nm process node tech.
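    A trivial sanity check of those units (a sketch; note, as pointed out below, the node name isn't the same thing as a transistor's overall size):

        # Proton2's figures: "half a millionth of a metre" vs a 14 nm node.
        transistor_len = 0.5e-6   # half a millionth of a metre, in metres
        node = 14e-9              # 14 nm, in metres

        print(f"transistor length: {transistor_len * 1e9:.0f} nm")      # 500 nm
        print(f"linear shrink vs 14 nm: {transistor_len / node:.0f}x")  # ~36x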

  • ScanIAm

    @Proton2: I may not understand what it means to have 14nm features, but I'm pretty sure it means that's the smallest discrete thing that can be printed on the chip. It doesn't mean the path of the instruction is 14nm in total. It does mean, though, that the instruction path can be shorter overall.

    If you look at the visual layout of a modern CPU, you'll see literal centimeters between the processor and the cache, and signals take time to cross that distance, so you'll never get faster than that allows.
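    As a sketch of what those distances would cost at the hypothetical 128 GHz (assuming, loosely, that on-chip signals move at half the speed of light):

        # Clock cycles spent just on signal round-trip time over a given
        # distance. The 0.5c propagation speed is an assumption; real
        # on-chip wires are slower still.
        C = 299_792_458   # m/s
        FREQ = 128e9      # Hz
        PROP = 0.5        # assumed fraction of light speed

        for dist_cm in (0.1, 1.0, 2.0):
            round_trip = 2 * (dist_cm / 100) / (C * PROP)  # seconds there and back
            print(f"{dist_cm} cm: {round_trip * FREQ:.0f} cycles in flight")

    At one centimetre, that's roughly 17 cycles gone before any data arrives.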

  • kettch

    @eddwo: "If you build it, they will come"

    Software is always pushing the boundaries of what the hardware is capable of. You don't always need the latest hardware, but sometimes it helps.

    These days, I'd rather have more cores. At work, I have a Core i5 and 8GB of RAM. With two cores, I spend most of the day in a state of resource starvation. Yet my machine at home has an old Core 2 Quad. It still has 8GB of RAM, but the extra cores make all the difference. I can have a couple of Visual Studio instances open, and other miscellaneous stuff, and decide I want to switch contexts for a little while by firing up a game. If I did that on my work computer, it would explode.

    The same goes for servers: for most purposes I'd rather have parallelism than raw speed.
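    Amdahl's law makes that preference concrete: extra cores only pay off in proportion to how much of the workload actually runs in parallel (a minimal sketch):

        # Amdahl's law: overall speedup from n cores when a fraction p of
        # the work is parallelisable.
        def amdahl_speedup(p: float, n: int) -> float:
            return 1 / ((1 - p) + p / n)

        for cores in (2, 4, 8):
            print(f"{cores} cores: p=0.50 -> {amdahl_speedup(0.50, cores):.2f}x, "
                  f"p=0.95 -> {amdahl_speedup(0.95, cores):.2f}x")

    With half the work serial, eight cores buy less than a 2x speedup; with 95% parallel work, they buy nearly 6x.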

  • DCMonkey

    , ScanIAm wrote

    If you look at the visual layout of a modern CPU, you'll see literal centimeters between the processor and the cache, and signals take time to cross that distance, so you'll never get faster than that allows.

    You could always go vertical.

  • evildictaitor

    , ScanIAm wrote

    *snip*

    I may not understand what it means to have 14nm features, but I'm pretty sure it means that's the smallest discrete thing that can be printed on the chip.

    Normally when a processor manufacturer talks about lengths at that scale they're talking about the size of a transistor.

  • PopeDai

    , kettch wrote

    *snip*

    The same goes for servers: for most purposes I'd rather have parallelism than raw speed.

    I find IO to be the biggest bottleneck right now. There's no point being able to crunch through 10GB of data on your CPU in under 500ms if it still takes 30 seconds to read it from disk.

    I also want to know why 3D games from 12 years ago have more responsive hardware-accelerated UIs than a WinForms WebBrowser control embedded within a WinForms UserControl embedded within a WPF control, contained within a WinForms form (this is some of the internal software we use). The window takes 3 seconds to handle a resize event.
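    Running PopeDai's own figures through a quick sketch shows just how lopsided that is:

        # 10 GB crunched in 500 ms vs 30 s to read it from disk.
        data_gb, cpu_s, disk_s = 10, 0.5, 30

        print(f"CPU throughput:  {data_gb / cpu_s:.0f} GB/s")          # 20 GB/s
        print(f"disk throughput: {data_gb / disk_s * 1000:.0f} MB/s")  # ~333 MB/s
        print(f"CPU sits idle {1 - cpu_s / disk_s:.1%} of the time")   # ~98.3%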

  • kettch

    @PopeDai: IO is also a problem for me. I have a server that does heavy image processing, and I can see that CPU and memory are well within tolerance, but disk access is pegged. This probably also has something to do with our crappy SAN.

  • PopeDai

    , kettch wrote

    *snip*

    I'll also note that the new Task Manager in Windows 8 only measures IO throughput when it calculates % utilization. I find that IO-operations-per-second pose a much bigger threat to system responsiveness than a long sequential write. Maybe I'll file a bug on this...
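    A sketch of why operations-per-second can matter more than raw throughput for responsiveness (the drive figures here are assumptions typical of a spinning disk, not measurements):

        # Reading the same 1 GB sequentially vs as random 4 KB requests.
        SIZE = 1 * 1024**3   # 1 GB in bytes
        SEQ_MBPS = 150       # assumed sequential throughput, MB/s
        RAND_IOPS = 150      # assumed random 4 KB operations per second
        BLOCK = 4 * 1024     # bytes per random request

        seq_s = SIZE / (SEQ_MBPS * 1024**2)
        rand_s = (SIZE / BLOCK) / RAND_IOPS

        print(f"sequential read:   {seq_s:.0f} s")          # ~7 s
        print(f"random 4 KB reads: {rand_s / 60:.0f} min")  # ~29 min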

  • Sven Groot

    Percent utilization for an I/O device is typically calculated simply as "the percentage of time that the I/O device is busy". Are you sure that Windows 8 doesn't use that calculation? That's what the tooltip implies it's using, anyway.
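    For reference, that busy-time definition amounts to something like this (a sketch of the general idea, not the actual Task Manager algorithm):

        # "% utilization" as the fraction of elapsed time the device had at
        # least one request in flight. A sketch of the concept only.
        def percent_util(busy_s: float, elapsed_s: float) -> float:
            return 100.0 * busy_s / elapsed_s

        # A disk busy for 0.8 s out of a 1 s sample window:
        print(f"{percent_util(0.8, 1.0):.0f}% utilized")    # 80%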

  • Ion Todirel

    , PopeDai wrote

    *snip*

    I also want to know why 3D games from 12 years ago have more responsive hardware-accelerated UIs than a WinForms WebBrowser control embedded within a WinForms UserControl embedded within a WPF control, contained within a WinForms form (this is some of the internal software we use). The window takes 3 seconds to handle a resize event.

    Software rendering, and abstractions on top of abstractions on top of abstractions?

  • Ion Todirel

    , eddwo wrote

    *snip*

    I'm sure we all want to approach holodeck style realism one day, but for most tasks the current systems are probably more than powerful enough.

    No it isn't, and it might never be, not in our lifetimes anyway. It depends on how much of the world you want to simulate.

  • Sven Groot

    For games it doesn't really matter how fast the CPU in your PC is, since 99% of all games coming out now are console ports and so are designed to run on 7-year-old hardware (Xbox 360 and PS3). Hopefully once the PS4 and the new Xbox come out, there will be some new games that actually utilize modern PC hardware.
