Coffeehouse Thread

28 posts


Why are desktop cores not increasing as predicted (on C9)?

  • SteveRichter

    http://www.pcworld.com/article/252948/intel_ivy_bridge_chips_launch_dates_leaked.html

    The soon-to-be-released next-generation Intel processor offers a 5-15% speed improvement and no increase in the number of cores on the chip. Not to dismiss something that likely contains great technical improvements, but why has the core count in desktop processors stopped increasing?

  • spivonious

    @SteveRichter: Because most users don't need more than two cores? I think it's just a pure cost/benefit issue. Intel makes most of their money off of the low-end dual-core chips.

  • SteveRichter

    spivonious wrote:

    @SteveRichter: Because most users don't need more than two cores? I think it's just a pure cost/benefit issue. Intel makes most of their money off of the low-end dual-core chips.

    Would .NET apps run better on an octo-core desktop? All this talk that more apps should be written in C++ kind of hints at the idea that the processors of the future are not going to be able to deliver the power that .NET apps need.

  • MasterPi

    The focus is also on power efficiency - I would assume adding more cores would raise the power requirements as well as generate more heat. (Ivy Bridge is supposed to reduce power consumption.)

    Also, I believe that under Intel's Tick-Tock model, it's the Tock where they invest in actually changing the microarchitecture, which is where they could figure out how to add more cores. Ivy Bridge is the Tick phase, where they shrink the manufacturing process (down to 22nm from 32nm in this case). It's also one of the first CPUs to use Tri-gate transistor technology, which eases the limits on packing more transistors onto a chip - in the next Tock phase, they might explore Tri-gate further to add additional cores.

    SteveRichter said:
    Would .NET apps run better on an octo-core desktop?

    It depends on how the app is designed, among other things. Under high contention, when several apps are running concurrently, you might see increased responsiveness if the OS can schedule the .NET app onto more threads. If your app makes use of many parallel and asynchronous calls, then you might see an increase in performance.
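
    For instance (a minimal sketch, not from the original post - the work loop and URLs are made up): .NET's Task Parallel Library will spread CPU-bound iterations across the available cores, while awaited asynchronous calls let independent I/O overlap without tying up cores at all.

        using System;
        using System.Linq;
        using System.Net.Http;
        using System.Threading.Tasks;

        class ParallelSketch
        {
            static async Task Main()
            {
                // CPU-bound: Parallel.For partitions the iterations across
                // the available cores (roughly eight chunks on an octo-core).
                double[] results = new double[1000000];
                Parallel.For(0, results.Length, i =>
                {
                    results[i] = Math.Sqrt(i) * Math.Sin(i);  // placeholder work
                });

                // I/O-bound: independent downloads overlap while awaiting,
                // without occupying any core (hypothetical URLs).
                using var client = new HttpClient();
                string[] urls = { "http://example.com/a", "http://example.com/b" };
                string[] pages = await Task.WhenAll(urls.Select(u => client.GetStringAsync(u)));

                Console.WriteLine($"{results.Length} results, {pages.Length} pages fetched");
            }
        }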

  • davewill

    @SteveRichter: It seems like the CPU architects have started to look for other ways to get work done that don't involve adding more cores.  Things like bringing other components onto the chip that were previously a bus call away.  I get the sense they know they can fill the coke bottle with cores and now they are working on ways to widen the bottle opening.

  • DevTomSch

    @SteveRichter:

    AFAIK it's the memory / cache architecture bottleneck. A core will only run at maximum speed if the currently executing code AND the accessed data fit in its level 1 (L1) caches.
    Today's L1 caches are only about 32-64 KB! L2 is about 256 KB; L3 is a few MB, but shared across all cores!
    Beyond these caches, all cores compete for the same, much slower main (flat) memory (typically 4 GB today).
    Now compare these numbers with the RAM consumption of today's apps / OS, each needing many MB, adding up to GB.
    Another point I see in software is the often inappropriate, aimless use of too many threads (explicit, or hidden behind overrated 'smart' libs), which forces cache contents to be invalidated on each context switch.
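
    A quick way to see this bottleneck in action (a minimal C# sketch; the matrix size is arbitrary): the two loops below do identical arithmetic, but the row-major one walks memory sequentially and uses every cache line fully, while the column-major one strides far enough that nearly every access misses the cache, so it typically runs several times slower.

        using System;
        using System.Diagnostics;

        class CacheSketch
        {
            static void Main()
            {
                const int n = 4096;  // 4096 x 4096 doubles = 128 MB, far beyond any L1/L2/L3
                var a = new double[n, n];
                double sum = 0;

                var sw = Stopwatch.StartNew();
                for (int row = 0; row < n; row++)      // row-major: sequential addresses,
                    for (int col = 0; col < n; col++)  // every 64-byte cache line fully used
                        sum += a[row, col];
                Console.WriteLine($"row-major:    {sw.ElapsedMilliseconds} ms");

                sw.Restart();
                for (int col = 0; col < n; col++)      // column-major: 32 KB stride per step,
                    for (int row = 0; row < n; row++)  // a cache miss on nearly every access
                        sum += a[row, col];
                Console.WriteLine($"column-major: {sw.ElapsedMilliseconds} ms (sum={sum})");
            }
        }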

    To solve these problems, BOTH the CPU architecture AND especially the software design have to change fundamentally.
    Until that happens, adding more and more cores will not provide much more performance; it simply can't scale up that fast.
    IMHO Intel knows how to solve the hardware side very well (they have prototypes, and others have done it before), but Microsoft does not know how to solve the SW / OS design side.
    MS is wasting enormous resources on confusing, worthless Metro/WinRT concepts instead of a fresh, clean and lean OS restart.

  • SteveRichter

    DevTomSch wrote:

    MS is wasting enormous resources on confusing, worthless Metro/WinRT concepts instead of a fresh, clean and lean OS restart.

    Is anyone else (Apple) doing it better?

  • JoshRoss

    To answer your question in one initialism: GPU. Who needs sixteen cores in their desktop when you can offload so much work to the GPU?

    -Josh

  • DevTomSch

    @JoshRoss: GPUs (as currently available in mainstream Nvidia/AMD/Intel graphics card architectures) are fine for much display / video oriented ('pixel') data processing, or for some massively parallel maths.
    Thus, except for multimedia-related apps or academic use, it's quite difficult to use a GPU for most other common apps / tasks doing very different/mixed/random kinds of processing (e.g. database/file IO, communication/web access, XML/HTML/script parsing, ...).
    GPUs only scale to the max if you have a huge array of identical data (like 'pixels') to be processed in parallel, over and over again, by identical code sequences per 'thread'.
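
    That "identical code over every element" shape is the key. A hypothetical example (a CPU-side sketch; the image dimensions and brightness kernel are made up): each iteration below is independent and identical, which is exactly the pattern a GPU would run as one lightweight thread per pixel.

        using System;
        using System.Threading.Tasks;

        class KernelSketch
        {
            static void Main()
            {
                // Hypothetical 1920x1080 greyscale image, one byte per pixel.
                var pixels = new byte[1920 * 1080];
                new Random(1).NextBytes(pixels);

                // The "kernel": identical code, independent per element.
                // A GPU would run one lightweight thread per pixel; on the
                // CPU, Parallel.For at least spreads the chunks across cores.
                Parallel.For(0, pixels.Length, i =>
                {
                    pixels[i] = (byte)Math.Min(255, pixels[i] + 40);  // brighten by 40
                });

                Console.WriteLine($"first pixel after brightening: {pixels[0]}");
            }
        }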

  • magicalclick

    Well, that core race was a pure marketing stunt to begin with.

  • JoshRoss

    DevTomSch wrote:

    *snip*

    Thus, except for multimedia-related apps or academic use, it's quite difficult to use a GPU for most other common apps / tasks doing very different/mixed/random kinds of processing (e.g. database/file IO, communication/web access, XML/HTML/script parsing, ...).

    But those are server tasks. If you wanted, you could get a 10-core Intel Xeon E7 processor, turn on Hyper-Threading, and you'd have 20 hardware threads.

    -Josh

  • AndyC

    There are really two reasons that core counts haven't shot up. The increasing reliance on mobile devices means that power efficiency trumps raw processing grunt for most people; there's little point in being able to process huge amounts on your laptop if doing so kills the battery.

    And from the software side of things, developers still struggle to make effective use of multiple cores/processors. It fundamentally requires a significant amount of hard work to understand and structure code so that the benefits are noticeable without introducing new and hard-to-resolve bugs. Most devs are still waiting for it all to happen automagically, even though that's unlikely to occur in any reasonable timescale, if at all, as can already be seen in the comments in this thread.

  • SteveRichter

    AndyC wrote:

    There are really two reasons that core counts haven't shot up. The increasing reliance on mobile devices means that power efficiency trumps raw processing grunt for most people; there's little point in being able to process huge amounts on your laptop if doing so kills the battery.

    Cores can be shut down on a CPU until actually needed by the workload, and desktop systems do not have a power constraint. The hardware side of computing has a long history of providing processing power that many thought was unnecessary.

    The first desktop quad-core processor was released in November 2006:
    http://wiki.answers.com/Q/When_was_sealed_first_quad_core

  • AndyC

    @SteveRichter: Sure, but it's much easier to sell a laptop on longer battery life than on the fact that it contains an expensive multi-core CPU which will spend most of its life with the cores switched off to conserve power. As for the desktop, it's a diminishing market, so there's less motivation to develop expensive CPUs for it.

    Unless there is a radical change in the way developers utilise CPU resources, the massively multi-core CPU market will stay focused predominantly on servers, where the workloads generally require that kind of scale and the tools are better at scaling out.

  • fabian

    AndyC wrote:

    Unless there is a radical change in the way developers utilise CPU resources, the massively multi-core CPU market will stay focused predominantly on servers, where the workloads generally require that kind of scale and the tools are better at scaling out.

    Will C# async / await help with that?

  • Bas

    fabian wrote:

    *snip*

    Will C# async / await help with that?

    Can't speak for others, but I personally can't live without async/await anymore. It's made all my stuff so much more readable/maintainable.

  • AndyC

    fabian wrote:

    *snip*

    Will C# async / await help with that?

    Nope, async/await is still fundamentally single-threaded; it just makes it easier to keep a thread responsive.
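
    A minimal sketch of the distinction (CrunchNumbers and input.dat are made up): awaiting an I/O call releases the calling thread but uses no extra core, while getting CPU-bound work onto another core still takes an explicit step such as Task.Run.

        using System;
        using System.IO;
        using System.Threading.Tasks;

        class AsyncSketch
        {
            static async Task Main()
            {
                // I/O-bound: the thread is released while the OS performs the
                // read; no extra core is used, but the caller stays responsive.
                using var file = File.OpenRead("input.dat");  // hypothetical file
                var buffer = new byte[4096];
                int read = await file.ReadAsync(buffer, 0, buffer.Length);

                // CPU-bound: await alone never moves work to another core;
                // Task.Run explicitly hands it to a thread-pool thread.
                long checksum = await Task.Run(() => CrunchNumbers(buffer, read));
                Console.WriteLine($"read {read} bytes, checksum {checksum}");
            }

            // Made-up CPU-bound work, standing in for any heavy computation.
            static long CrunchNumbers(byte[] data, int count)
            {
                long sum = 0;
                for (int i = 0; i < count; i++) sum += data[i] * data[i];
                return sum;
            }
        }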

  • vesuvius

    AndyC wrote:

    *snip*

    Nope, async/await is still fundamentally single-threaded; it just makes it easier to keep a thread responsive.

    async/await is syntactic sugar at the language level for not implementing IAsyncResult manually. Nothing has changed at the core level (no pun intended).

    I could never have finished most of the work I've done without understanding wait handles and blocking threads. async/await is welcome, and it improves readability, but I never found asynchrony in .NET to be anything hard.
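
    To illustrate the sugar (a sketch; the same stream read expressed both ways - the compiler generates the callback and state machine that the old IAsyncResult pattern makes you write yourself):

        using System;
        using System.IO;
        using System.Threading.Tasks;

        class SugarSketch
        {
            // The old IAsyncResult pattern: a callback plus an explicit End call.
            static void ReadOldStyle(Stream stream, byte[] buffer)
            {
                stream.BeginRead(buffer, 0, buffer.Length, ar =>
                {
                    int read = ((Stream)ar.AsyncState).EndRead(ar);
                    Console.WriteLine($"old style read {read} bytes");
                }, stream);
            }

            // The same operation with await: the compiler builds the state
            // machine and callback for you; neither version creates a thread.
            static async Task ReadNewStyleAsync(Stream stream, byte[] buffer)
            {
                int read = await stream.ReadAsync(buffer, 0, buffer.Length);
                Console.WriteLine($"new style read {read} bytes");
            }

            static async Task Main()
            {
                var buffer = new byte[16];
                ReadOldStyle(new MemoryStream(new byte[16]), buffer);
                await ReadNewStyleAsync(new MemoryStream(new byte[16]), buffer);
            }
        }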
