Coffeehouse Thread

20 posts

Forum Read Only

This forum has been made read only by the site admins. No new threads or comments can be added.

The New vs Evolving the Now

  • Charles

    Why not create a new programming model that fundamentally abstracts many-core processors to a level where imperative programming with objects remains dominant in the software developer lingua franca?

     

    Hypothesis: The time it takes for the current set of software development tools (compilers, languages, libraries and runtimes) to evolve to take full advantage of radically new processor architectures (both on the horizon and here now) may be on par with the time it takes to reach a broad level of adoption of something new.

     

    C

  • JoshRoss

    I think Rx has the potential to revolutionize the solutions to problems that could be streamed but are not. The data-parallel problems seem much simpler to solve if you can come up with lock-free algorithms. STM, well, who knows if that will ever pan out.

     

    What is that saying... Stripe, stream or struggle?
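    Rx itself is a .NET library, but the push-based streaming idea can be sketched in a few lines of Python. This is a hypothetical toy `Stream` type for illustration, not Rx's actual API:

```python
# Minimal push-based "observable" sketch (hypothetical toy type; Rx proper
# is a .NET library with a much richer operator set and subscription model).
class Stream:
    def __init__(self):
        self._subscribers = []

    def subscribe(self, fn):
        self._subscribers.append(fn)

    def push(self, value):
        # Values are pushed to consumers as they arrive, rather than pulled.
        for fn in self._subscribers:
            fn(value)

    def map(self, fn):
        out = Stream()
        self.subscribe(lambda v: out.push(fn(v)))
        return out

    def filter(self, pred):
        out = Stream()
        self.subscribe(lambda v: pred(v) and out.push(v))
        return out

results = []
ticks = Stream()
# Build the pipeline declaratively, then feed values through it.
ticks.map(lambda x: x * x).filter(lambda x: x % 2 == 0).subscribe(results.append)
for i in range(5):
    ticks.push(i)
print(results)  # even squares of 0..4: [0, 4, 16]
```

The point of the sketch is that the pipeline is described once, up front, and the plumbing decides when work happens, which is the property that makes streamed solutions attractive.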

  • magicalclick

    Yeah, but the problem is syncing the data. That is always the hard part. One way I look at it is as something between SQL and OO programming, because SQL takes care of data locking automatically, and SQL itself is parallel. Not sure if there is a way to bring that to the OO style.

     

    Leaving WM on 5/2018 if no apps, no dedicated billboards where I drive, no Store name.
  • JoshRoss

    magicalclick said:

    Yeah, but the problem is syncing the data. That is always the hard part. One way I look at it is as something between SQL and OO programming, because SQL takes care of data locking automatically, and SQL itself is parallel. Not sure if there is a way to bring that to the OO style.

     

    I'm not completely convinced that object orientation is the only way forward. Speaking of SQL for a moment: if you were to look at tables as types and were able to pass them around like variables, you could enable a class of statically typed solutions for problems that would otherwise require all kinds of dynamic muck or schema diarrhea.

     

    For LINQ, I am still trying to wrap my mind around the idea that grouping and aggregation do not need to be coupled. Erik was talking about this in one of his posts.
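    The decoupling in question can be illustrated outside LINQ. Here is a rough Python sketch (hypothetical data) where grouping happens once and different aggregations are chosen afterwards, independently:

```python
# Sketch of decoupled grouping and aggregation: GroupBy-style grouping yields
# the groups; which fold you apply to each group is a separate, later choice.
from itertools import groupby

orders = [("alice", 10), ("bob", 5), ("alice", 7), ("bob", 1)]

# Step 1: group only -- no aggregate is baked into this step.
grouped = {k: [amt for _, amt in g]
           for k, g in groupby(sorted(orders), key=lambda t: t[0])}

# Step 2: apply aggregations independently of how the grouping was done.
totals = {k: sum(v) for k, v in grouped.items()}
counts = {k: len(v) for k, v in grouped.items()}
print(totals)  # {'alice': 17, 'bob': 6}
print(counts)  # {'alice': 2, 'bob': 2}
```

Because the grouping step never commits to an aggregate, the same grouped structure can feed sums, counts, averages, or any other fold without re-scanning the source data.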

  • elmer

    I don't understand why there should be a need to learn a new language in order to continue doing the same thing.

     

    I vote we just change the plumbing for existing languages and runtimes... let me continue to say what to do, using my existing knowledge and code base, and let the platform decide how best to do it.

  • Charles

    elmer said:

    I don't understand why there should be a need to learn a new language in order to continue doing the same thing.

     

    I vote we just change the plumbing for existing languages and runtimes... let me continue to say what to do, using my existing knowledge and code base, and let the platform decide how best to do it.

    The same thing is in fact part of the problem. What I mean is that in order for plumbing to be effective, the higher-level expressive abstraction (like a language) needs to describe what eventually gets plumbed. This is precisely why things like metaprogramming (and declarative expression generally) are the trend. The only reason functional programming has moved into the mainstream mindset, for example, is its declarative nature, explicitly controllable side effects, composability, etc., all paramount to building reliable concurrent systems that are compositional. Part of me agrees with your vote, but another part can't help but wonder whether we shouldn't, in parallel, start from first principles in the (con)current context.


    C

  • exoteric

    Where did that come from? Are you talking about auto-parallelization, or else how could you continue with imperative programming? I must say I like LINQ for the compositionality and the way I can uniformly attack every problem I can think of and build up small building blocks, but the same compositionality is not as easy with "purely imperative" programming... or is it?

  • elmer

    exoteric said:

    Where did that come from? Are you talking about auto-parallelization, or else how could you continue with imperative programming? I must say I like LINQ for the compositionality and the way I can uniformly attack every problem I can think of and build up small building blocks, but the same compositionality is not as easy with "purely imperative" programming... or is it?

    Are you talking about auto-parallelization

     

    Yes. I really don't want to know or have to care about what I can/can't do on a particular platform.

     

    The compiler should be able to work out what is possible, and the runtime/OS should provide the required services to make the best possible use of the hardware. Certainly, it should be able to do it a lot more accurately and reliably than I could.

     

    e.g. If I want to run a FOR loop, I just want it to go as fast as the platform will allow, and it should be the job of the compiler and underlying runtime/os to decide how best to implement/run it.

     

    Of course, all of that is much easier said than done, and it might not be possible to get ultimate performance, but asking the application programmer to decide when to implement parallel code is just not going to work for 99% of programmers.
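    The "just make my loop fast" wish can be sketched like so, assuming the loop body is pure (no shared mutable state), which is exactly the property a compiler would have to prove before auto-parallelizing. This is a hypothetical Python stand-in, not an actual auto-parallelizing compiler:

```python
# Data-parallel version of an embarrassingly parallel FOR loop: each
# iteration is independent, so the pool can run them in any order while
# pool.map still returns results in the original iteration order.
from concurrent.futures import ThreadPoolExecutor

def body(i):
    # Stand-in for an independent loop iteration with no side effects.
    return i * i

with ThreadPoolExecutor() as pool:
    results = list(pool.map(body, range(8)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The moment `body` touches shared state, this transformation stops being safe, which is why "the platform decides" requires the language to guarantee, or at least declare, independence of iterations.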

  • exoteric

    elmer said:
    exoteric said:
    *snip*

    Are you talking about auto-parallelization

     

    Yes. I really don't want to know or have to care about what I can/can't do on a particular platform.

     

    The compiler should be able to work out what is possible, and the runtime/OS should provide the required services to make the best possible use of the hardware. Certainly, it should be able to do it a lot more accurately and reliably than I could.

     

    e.g. If I want to run a FOR loop, I just want it to go as fast as the platform will allow, and it should be the job of the compiler and underlying runtime/os to decide how best to implement/run it.

     

    Of course, all of that is much easier said than done, and it might not be possible to get ultimate performance, but asking the application programmer to decide when to implement parallel code is just not going to work for 99% of programmers.

    Much easier said than done, yes. A single for loop does not an application make.

     

    I was asking Charles though.

  • PerfectPhase

    Charles said:
    elmer said:
    *snip*

    The same thing is in fact part of the problem. What I mean is that in order for plumbing to be effective, the higher-level expressive abstraction (like a language) needs to describe what eventually gets plumbed. This is precisely why things like metaprogramming (and declarative expression generally) are the trend. The only reason functional programming has moved into the mainstream mindset, for example, is its declarative nature, explicitly controllable side effects, composability, etc., all paramount to building reliable concurrent systems that are compositional. Part of me agrees with your vote, but another part can't help but wonder whether we shouldn't, in parallel, start from first principles in the (con)current context.


    C

    More and more these days I like the ideas of languages such as Erlang and Axum, and find myself trying to use some of their ideas in my everyday work.
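    The shared-nothing, message-passing style of Erlang and Axum can be roughly imitated in any language. A minimal Python sketch with one worker and two queues (all names here are hypothetical, for illustration only):

```python
# Rough sketch of the actor style: an isolated worker that shares no state
# and communicates only via messages (here, a thread plus two queues).
import threading
import queue

inbox = queue.Queue()
replies = queue.Queue()

def worker():
    while True:
        msg = inbox.get()
        if msg is None:          # poison pill: shut down cleanly
            break
        replies.put(msg * 2)     # all mutable state stays inside the worker

t = threading.Thread(target=worker)
t.start()
for n in (1, 2, 3):
    inbox.put(n)
inbox.put(None)
t.join()

out = [replies.get() for _ in range(3)]
print(out)  # [2, 4, 6]
```

Because the only coupling between threads is the queues, there is nothing to lock in user code, which is the property that makes the Erlang/Axum model attractive for concurrency.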

  • Charles

    exoteric said:
    elmer said:
    *snip*

    Much easier said than done, yes. A single for loop does not an application make.

     

    I was asking Charles though.

    I'm just trying to have a conversation about developing new (revolutionary versus evolutionary) methodologies versus modifying old ones to exploit the advancements in hardware in the most effective manner possible. Auto-parallelization at the machine level is pretty much science fiction without explicit support at the expressive level way up the abstraction stack... Or is it?

     

    Of course, throwing everything out that's been invested in for so long is unrealistic, but this is why theory is fun Smiley


    C

  • elmer

    Charles said:
    exoteric said:
    *snip*

    I'm just trying to have a conversation about developing new (revolutionary versus evolutionary) methodologies versus modifying old ones to exploit the advancements in hardware in the most effective manner possible. Auto-parallelization at the machine level is pretty much science fiction without explicit support at the expressive level way up the abstraction stack... Or is it?

     

    Of course, throwing everything out that's been invested in for so long is unrealistic, but this is why theory is fun Smiley


    C

    Auto-parallelization at the machine level is pretty much science fiction without explicit support at the expressive level way up the abstraction stack.... Or is it?

     

    Yep, it’s not an easy problem to solve, and people have been working on it for a long time... a Google search for ‘automatic code parallelization’ will turn up a long list of stuff on it, including some interesting research papers.

     

    In essence, the problem seems to be that it requires some level of “comprehension” by the compiler: understanding what you are trying to do, rather than simple structure analysis.

     

    All way over my head... I just *KNOW* that I could not hand-write parallel code, even if my life depended on it. Tongue Out

  • Charles

    elmer said:
    Charles said:
    *snip*

    Auto-parallelization at the machine level is pretty much science fiction without explicit support at the expressive level way up the abstraction stack.... Or is it?

     

    Yep, it’s not an easy problem to solve, and people have been working on it for a long time... a Google search for ‘automatic code parallelization’ will turn up a long list of stuff on it, including some interesting research papers.

     

    In essence, the problem seems to be that it requires some level of “comprehension” by the compiler: understanding what you are trying to do, rather than simple structure analysis.

     

    All way over my head... I just *KNOW* that I could not hand-write parallel code, even if my life depended on it. Tongue Out

    Not over your head at all. I've already stated the problem you mention: in order for plumbing to be effective, the higher-level expressive abstraction (like a language) needs to describe what eventually gets plumbed. New programming abstractions designed to express solutions to general-purpose problems in the many-core domain will go a long way toward speeding up innovation in the software that drives the eventual machine instructions.

     

    Where's Hal when you need him?

     

    C

  • magicalclick

    Charles said:
    elmer said:
    *snip*

    Not over your head at all. I've already stated the problem you mention: in order for plumbing to be effective, the higher-level expressive abstraction (like a language) needs to describe what eventually gets plumbed. New programming abstractions designed to express solutions to general-purpose problems in the many-core domain will go a long way toward speeding up innovation in the software that drives the eventual machine instructions.

     

    Where's Hal when you need him?

     

    C

    There are many parallel-oriented languages out there. Usually they have some kind of internal messaging system for parallel sync. I still think it is best to go the route of OO methodology. The main reason OO is so much more "useful" is that it is easier to debug, easier to predict the result, and very "expressive".

     

    Most of the languages that failed did so because they are hard to use. SQL survived because it is actually easy compared to the rest of the semantic DB languages. But even then, people are moving DBs to OO because OO is just that much easier to deal with (especially on high-performance tasks).

     

     

    Leaving WM on 5/2018 if no apps, no dedicated billboards where I drive, no Store name.
  • Bass

    Charles said:
    exoteric said:
    *snip*

    I'm just trying to have a conversation about developing new (revolutionary versus evolutionary) methodologies versus modifying old ones to exploit the advancements in hardware in the most effective manner possible. Auto-parallelization at the machine level is pretty much science fiction without explicit support at the expressive level way up the abstraction stack... Or is it?

     

    Of course, throwing everything out that's been invested in for so long is unrealistic, but this is why theory is fun Smiley


    C

    There are simply some practical problems that cannot be parallelized effectively; if algorithms depend on intermediate data, you are SOL until that intermediate data is computed. It's not science fiction, it's a logical impossibility. This whole parallel affliction is one of the worst things to ever happen to the software industry. I don't think it's something that most software developers should have to worry about.
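    The cost of inherently serial work is usually quantified with Amdahl's law: the serial fraction of a program caps the speedup no matter how many cores you add. A quick worked example:

```python
# Amdahl's law: with serial fraction s and N cores, the best possible
# speedup is 1 / (s + (1 - s) / N), which approaches 1/s as N grows.
def amdahl_speedup(serial_fraction, cores):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Even with only 10% inherently serial work, 64 cores give well under 10x.
print(round(amdahl_speedup(0.10, 64), 2))      # 8.77
print(round(amdahl_speedup(0.10, 10**9), 2))   # 10.0 -- the 1/s ceiling
```

This is the formal version of the "SOL until the intermediate data is computed" point: the dependent portion behaves as serial fraction and bounds the whole program.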

  • contextfree

    Bass said:
    Charles said:
    *snip*

    There are simply some practical problems that cannot be parallelized effectively; if algorithms depend on intermediate data, you are SOL until that intermediate data is computed. It's not science fiction, it's a logical impossibility. This whole parallel affliction is one of the worst things to ever happen to the software industry. I don't think it's something that most software developers should have to worry about.

    So use different algorithms.   Tongue Out

  • Sven Groot

    Bass said:
    Charles said:
    *snip*

    There are simply some practical problems that cannot be parallelized effectively; if algorithms depend on intermediate data, you are SOL until that intermediate data is computed. It's not science fiction, it's a logical impossibility. This whole parallel affliction is one of the worst things to ever happen to the software industry. I don't think it's something that most software developers should have to worry about.

    Unfortunately, this "affliction" is caused by the laws of physics, which prevent us from linearly scaling up the speed of CPUs. So unless someone can come up with a completely different way to make CPUs that doesn't have this issue, we're stuck with it.

  • AndyC

    Bass said:
    Charles said:
    *snip*

    There are simply some practical problems that cannot be parallelized effectively; if algorithms depend on intermediate data, you are SOL until that intermediate data is computed. It's not science fiction, it's a logical impossibility. This whole parallel affliction is one of the worst things to ever happen to the software industry. I don't think it's something that most software developers should have to worry about.

    Sometimes that is true; other times it is only true because we are attempting to do the absolute minimum amount of work possible to solve a problem. I suspect that as CPU cores become more numerous, you'll start to see much wider use of algorithms and languages that rely on attempting multiple possible solutions simultaneously and discarding the results of those that turn out to be unnecessary.
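    That speculative style, racing several strategies and discarding the losers, might be sketched like this (the `strategy` function and its delays are hypothetical stand-ins for real solvers):

```python
# Speculative parallelism sketch: submit several candidate strategies,
# keep the first result, and discard the rest as wasted-but-cheap work.
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait
import time

def strategy(delay, answer):
    time.sleep(delay)  # stand-in for doing real work at different speeds
    return answer

with ThreadPoolExecutor() as pool:
    futures = [pool.submit(strategy, d, a)
               for d, a in [(0.30, "slow path"), (0.01, "fast path")]]
    done, not_done = wait(futures, return_when=FIRST_COMPLETED)
    winner = done.pop().result()
    for f in not_done:
        f.cancel()  # best effort; already-running tasks cannot be interrupted

print(winner)  # "fast path"
```

The trade-off is deliberate overwork: with many idle cores, burning cycles on candidates that will be thrown away can still reduce wall-clock time.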
