Comments

sylvan
  • Programming in the Age of Concurrency: Software Transactional Memory

    Cristom, I'm not a huge fan of your style of argument. You take on a very superior tone (such as assuming I don't know anything about multithreading) and make your point by posting tons of links, which makes following your argument very time-consuming if not impossible. No doubt that leads you to "win" a lot of arguments by walk-over, but you're not actually convincing anyone. Summarize your point in a way that is relevant to this discussion; don't assume I'm willing to get involved in 20 previous discussions to try to deduce your point from there.

    However, you're missing the target completely. I already conceded that lock-free programming is all well and good in certain cases, so there is no point linking a whole bunch of lock-free abstractions. What I claimed was that while it is possible to write a lock-free abstraction that runs faster than an STM-based one (as you point out), you can trivially write an STM transaction that simply is not expressible with lock-free programming. STM gives you general composability (i.e. always); lock-free programming does not, which doesn't mean it's impossible to write composable abstractions in specific cases (i.e. sometimes).

    You also seem to have misunderstood my point about using STM in purely functional programming, because I can't make sense of your response to that. You're not making the erroneous assumption that "purely functional programming == no mutable data", are you? Pure FP simply means that there are strong static guarantees ensuring a pure function won't use mutable state in any way that interferes with other pure functions (read about the ST monad). You also need some "impure functions" (really they're just pure values which represent impure functions through monads, so it's still pure) to do IO and some of the low-level stuff, and that's the only place you'd use STM. So even if you think STM's performance is horrible, a typical functional program only has a handful of places where it would be useful anyway, so it's not a big problem: 90% of the program is purely functional and thus has no problem with parallelism (which is different from concurrency). Most of my FP applications have no mutable variables at all, and the ones that do typically have one or two (usually the ones with GUIs or lots of IO). This is an ideal setting to introduce STM.
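    The ST monad point can be sketched concretely. A minimal, hypothetical example (the function name `sumST` is my own, not from the discussion) of a pure function that uses mutable state internally without letting it leak:

    ```haskell
    import Control.Monad.ST
    import Data.STRef

    -- A pure function that uses a mutable cell internally; runST
    -- guarantees the mutation cannot be observed from outside, so
    -- sumST interferes with no other pure function.
    sumST :: [Int] -> Int
    sumST xs = runST $ do
      ref <- newSTRef 0
      mapM_ (\x -> modifySTRef' ref (+ x)) xs
      readSTRef ref
    ```

    Despite the imperative loop inside, `sumST [1 .. 10]` is an ordinary pure expression evaluating to 55.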

  • Programming in the Age of Concurrency: Software Transactional Memory

    cristom wrote:
    http://groups.google.com/group/comp.arch/msg/e0958ecf43f95f51

    Any thoughts?


    P.S.--

    may I suggest that you all visit comp.programming.threads once and a while… We have the goods wrt scalability and throughput. We will give you the honest appraisal, and not mislead you with TM!

    Thank you,

    Chris Thomasson
    http://appcore.home.comcast.net/
    (portable lock-free data-structures)



    Lock-less programming is all well and good for a few very specific cases, but you still don't get composability (which is the whole point of using transactions).

    Also, the performance argument is inherently flawed. First of all, "sufficient performance" is a moving target. If I can make use of four times as many threads in my program but it runs at half the speed, then it's still a win (if not now, then in a few years). Also, don't forget that STM basically gives you exception safety for free.

    I also think it's a mistake to consider transactions for everything. I understand why this is such a common knee-jerk reaction, since most programmers are more familiar with imperative languages, but in my opinion, using a language whose fundamental method of computation is "modify state" in a multithreaded environment, where the state is shared and all the threads depend on the state remaining consistent from their point of view, is pretty much a bad idea. You're just asking for problems.
    STM really begins to make perfect sense when you consider it in a purely functional context. You may sprinkle in some imperative nuggets here and there when needed for algorithmic reasons (see the ST monad in Haskell: imperative code in a pure setting, with static guarantees that no side effects leak out). These small sections of imperative code wouldn't be parallelized, of course. Then the vast majority of your code uses the pure functional approach with parallelism (e.g. using the "par" keyword in Haskell, or some of the new data-parallel stuff), and at the very bottom you have some thread-based shared-memory concurrency where it makes sense, which is where STM comes in.
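    A minimal sketch of what "par" buys you; the split into `x` and `y` is my own illustration, using `par`/`pseq` from GHC's base library:

    ```haskell
    import GHC.Conc (par, pseq)

    -- Evaluate two independent pure computations in parallel:
    -- par sparks x for evaluation on another core, while pseq
    -- forces y on the current thread first. Because both are
    -- pure, the result is identical either way.
    parSum :: Int
    parSum = x `par` (y `pseq` (x + y))
      where
        x = sum [1 .. 1000 :: Int]
        y = sum [1001 .. 2000 :: Int]
    ```

    Whether or not the runtime actually picks up the spark, `parSum` evaluates to 2001000; purity makes the parallelism semantically invisible.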

    I don't think shared state concurrency with threads is the best way to do concurrency, but I don't think we can do completely without it either. And for that code, which in typical programs would be a very small fraction, you need something clever to get composability - STM fits the bill perfectly.
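    The composability claim can be made concrete. A hypothetical account example (the names `withdraw`, `deposit`, and `transfer` are mine), using the stm package that ships with GHC:

    ```haskell
    import Control.Concurrent.STM

    withdraw, deposit :: TVar Int -> Int -> STM ()
    withdraw acct amt = modifyTVar' acct (subtract amt)
    deposit  acct amt = modifyTVar' acct (+ amt)

    -- Two transactions compose into one larger transaction that
    -- commits atomically or not at all; no extra locking protocol
    -- is needed, which is what lock-free code cannot promise in
    -- general.
    transfer :: TVar Int -> TVar Int -> Int -> STM ()
    transfer from to amt = withdraw from amt >> deposit to amt

    main :: IO ()
    main = do
      a <- newTVarIO 100
      b <- newTVarIO 0
      atomically (transfer a b 30)
      balances <- (,) <$> readTVarIO a <*> readTVarIO b
      print balances  -- prints (70,30)
    ```

    Note that no intermediate state (a debited `a` without a credited `b`) is ever visible to other threads.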
  • Programming in the Age of Concurrency: Software Transactional Memory

    jan.devaan wrote:
    Can we have it tomorrow?


    You can have it today!

    This is already implemented in Haskell (as mentioned in the video), and in a much more elegant way than will ever be possible in an imperative language. Haskell already separates code with side effects from code without side effects because it's purely functional, so it's "trivial" to simply disallow the side-effecting code within transactions (except, of course, the transactional side effects, which are kind of the point :)).

    Basically you'll layer your code into three layers: the lowest layer is the IO layer, which does all the nasty side-effecting things like networking, user input, etc.; on top of that is the transactional layer, in which you do all your reads and writes to shared memory (in atomic blocks as needed); and from both of these lower layers you'll call the topmost layer, the purely functional layer (which is where the actual application logic is written). The key is that you only allow calls "up", not "down". I.e. you can call pure functions and transactions from IO, but you can't call IO from transactions or pure functions. Likewise you can call pure functions from transactions, but you can't call transactions from pure functions.
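    The three layers might look like this sketch (the account and deposit names are hypothetical, not from the video):

    ```haskell
    import Control.Concurrent.STM

    -- Purely functional layer: application logic, no side effects.
    applyDeposit :: Int -> Int -> Int
    applyDeposit balance amount = balance + amount

    -- Transactional layer: reads and writes shared memory. It may
    -- call "up" into the pure layer, but its STM type statically
    -- prevents it from performing IO.
    deposit :: TVar Int -> Int -> STM ()
    deposit acct amount = do
      balance <- readTVar acct
      writeTVar acct (applyDeposit balance amount)

    -- IO layer: the only place transactions are actually run.
    main :: IO ()
    main = do
      acct <- newTVarIO 0
      atomically (deposit acct 100)
      atomically (deposit acct 50)
      readTVarIO acct >>= print  -- prints 150
    ```

    The "calls go up, never down" rule is not a convention here; the types enforce it, since there is no function from `IO a` into `STM a` or from `STM a` into a pure value.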

    The transactional layer is "new" (added late 2005, IIRC), but the strategy of separating your code into layers of increasing purity (now three, previously two) is not new and has proven to be extremely elegant and viable. In fact, even well-written imperative code usually does this for design reasons; Haskell just enforces it and buys all sorts of cool things with the properties that hold because of it, such as easy parallelism ("These two functions provably can't interfere with each other due to lack of side effects, you say? Well then, I'll just go ahead and compute them on different threads!"), transactions, far easier reasoning (and therefore maintenance), etc.

    Check it out! http://www.haskell.org
  • Paul Vick and Amanda Silver - VB Language Futures

    Motley wrote:

    I agree Alex, the syntax:
    Dim cust as New Customer = { .Name="Me", .Address="123 Main" }



    What?

    Would

     new int x = 5

    also make sense to you then?

    "as" specifies a TYPE, using new in that context doesn't make sense.

    Maybe the following would be reasonable, though:

    Dim cust as Customer = new { .Name="Me", .Address="123 Main" }