
Frank Hileman

VG.net

Niner since 2004

Lead developer for VG.net, an animated vector graphics system integrated in Visual Studio.
  • PerfView Tutorial 1 - Collecting data with the Run command

    Thanks Vance. I found the videos useful. My only suggestion would be a better microphone.

  • Stephen Toub: Inside TPL Dataflow

    Hi Charles,

    I did not notice I was not logged in. Maybe previously I had to log in to comment at all. Thanks.

  • Hanselminutes on 9 - Why Aren't There More WinForms Talks with Rocky Lhotka

    For vector graphics development, I find the WPF API clumsy. I am not happy with the memory use either. I am not comparing to windows forms, but to other vector graphics APIs.

  • Windows 7: Writing Your Application to Shine on Modern Graphics Hardware

    I am very, very happy to see the return of an immediate-mode 2D API, Direct2D. I only wish it were available on all Windows platforms. An immediate-mode API is especially needed on low-end hardware.

  • Expert to Expert: Contract Oriented Programming and Spec#

    The first question people have regarding contracts is, why bother? The speakers addressed this by explaining how much money Microsoft saved in manual testing time. But there are other savings as well.

    By specifying assumptions explicitly, the design improves. You move from fuzzy verbal communication to concise, precise communication about the behavior and expectations of code. This has the same benefits as writing documentation before code, or writing tests before code. It clarifies the design.

    Even without compile-time or run-time contract checks, a contract allows one to view classes or methods as black boxes with precise behavior, ignoring internal details. This helps to logically determine if a design functions correctly.

    Another benefit is reduced debugging time. If you use only run-time contract checks, you eliminate the time normally spent narrowing down a bug -- you find it earlier, before it manifests in peculiar ways. If you have compile-time checking as well, as in Spec#, the same bug would never occur -- it would be spotted by the compiler.

    The work done by the Spec# group is important. I hope it makes its way into products as soon as possible.
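The run-time side of this idea can be sketched in a few lines. Below is a minimal illustration of pre- and postconditions using plain Python assertions; the `withdraw` account example is hypothetical, and this only shows the run-time checks, not Spec#'s compile-time verification.

```python
# A sketch of design-by-contract checks using plain assertions.
# The account example is hypothetical, chosen only to illustrate
# explicit preconditions and postconditions.

def withdraw(balance: int, amount: int) -> int:
    # Preconditions: the caller's obligations, stated explicitly.
    assert amount > 0, "precondition: amount must be positive"
    assert amount <= balance, "precondition: cannot overdraw"

    new_balance = balance - amount

    # Postcondition: the method's guarantee back to the caller.
    assert new_balance == balance - amount, "postcondition violated"
    return new_balance

print(withdraw(100, 30))  # 70
```

Even this trivial version shows the point made above: the assumptions are no longer fuzzy verbal agreements but precise, checkable statements, and a violated assumption fails at the call site rather than manifesting later in peculiar ways.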

  • Expert to Expert: Erik Meijer and Bertrand Meyer - Objects, Contracts, Concurrency, Sleeping Barbers

    foostar wrote:
    I'm not convinced that SCOOP can work. The problem I see with it is that it allows for arbitrary sharing of state among concurrently executing objects. If it can be made to work then the compiler will probably have to make such conservative decisions w.r.t. synchronization that performance will be a tiny fraction of the theoretical optimum.

    Is this really true? My understanding is that SCOOP is very high level, and there are a number of different ways it could be implemented internally. For example, it could be implemented on top of message passing.

    I like the SCOOP concepts. Having been a fan of design by contract for many years, I especially like the way it works neatly with contracts. I would like to think foostar is incorrect in these assumptions.

  • Joe Duffy, Huseyin Yildiz, Daan Leijen, Stephen Toub - Parallel Extensions: Inside the Task Parallel

    For those who like language extensions: you may like an extensible language instead:

  • Burton Smith: On General Purpose Super Computing and the History and Future of Parallelism

    Message passing in games:


    That is only distributed server stuff. It seems message passing would work well on clients too, given that the game might be modeled as many independent actors interacting with one another.

  • Burton Smith: On General Purpose Super Computing and the History and Future of Parallelism

    There is no way to tell if an efficient message passing system would be as fast or faster than transactional memory for your scenario, without building and trying it out. But I suspect there is some bias against the idea of lightweight processes and efficient message passing, because we see so few common implementations with that level of efficiency.

    If you read some of the links I pointed out earlier, they explain how message passing is a lower level building block than transactional memory. Being lower level, it can be faster as well, when you do not need full transactions.

    I have nothing against transactional memory, except that it helps preserve existing serial ways of thinking. Share-nothing, message-passing concurrency seems to balance and scale almost automatically.

  • Burton Smith: On General Purpose Super Computing and the History and Future of Parallelism

    sylvan wrote:
    Frank Hileman wrote:

    If you have decided by design that all messages to the barrel modify its state, and that all messages are dependent upon the state of the barrel (i.e. are invalid if the barrel has been modified), you have serialized access to the barrel by design. It does not matter what form of concurrency you use, locks, message passing, transactions, it is the same problem, and is the same problem a CPU has when determining the dispatch order of instructions that write to a memory location.

    No, it's not serialized by design, in fact it's (deliberately) extremely parallel by design, with the occasional rare conflict. You have tens of thousands of objects, most of which don't care one bit about that barrel, but sometimes one of them does, and even more rarely two or more of them do.
    The point is that the mere infinitesimal possibility of conflicts causes 100% serialization when you use message passing, whereas with transactions you can run in parallel, and deal with those rare cases of conflicts when and only when they actually occur.

    If it's just the case of a single barrel you may be able to hack your own optimistic transactional memory on top of the messages (e.g. you have one message which does not block that you can use to check if you need to update the barrel, and if so you just do it again with the atomic version - that way 99.9% of the objects would just decide that they don't care about the barrel at all and leave it alone), but it gets much worse in real world scenarios. In practice you'll often have each object wanting to read N unspecified objects from the world, and modify M other unspecified objects in the world (which may or may not overlap with the N that you read). There is no way to know up front which objects you need to read/modify; you only know the exact set of objects that was needed after the operation has occurred. All this has to happen atomically, naturally, which means that with message passing you'll be forced to have a single service guarding "The World", and each object's operations on the world will be entirely serialized. It's simply impossible to do this concurrently if your world is guarded by a message process, even though the number of actual conflicts that these atomic operations have are very very low.
    And again, with transactional memory, the problem simply disappears and you get near linear speedup as you add more CPUs.

    Also, I didn't "design" the problem to be difficult for message passing, it just was difficult for message passing all by itself. Sometimes the thing you're simulating just isn't suitable to message passing. You can't blame the problem because the language doesn't offer a good way of solving it!

    Look, I'm the biggest FP advocate there is. I like Erlang et al. as much as the next guy (though my favourite language is Haskell), but the fact of the matter is that there are real problems that can not be solved with message passing. In my experience, most applications that are actually concurrent by nature (servers, etc.) can use message passing to good effect, but when you try to speed up non-concurrent applications the instances where your problems map nicely to threads and messages start to become more rare. We can't just ignore these problems (again, Amdahl's law won't let us), we need to provide a solution for them too. That's why we need many ways of doing these things. In most cases you can be data parallel, in some cases you need task based parallelism, and in yet fewer cases you can use threads and message passing, and in fewer cases still you need transactional memory. We can't leave any of these out though, as that would disqualify the language from being considered "general purpose" w.r.t. concurrency/parallelism, IMO.

    Games work exceptionally well with message passing. As I discussed regarding your previous AI scenario, if the message to the barrel (change state) includes a "barrel state stamp", then the barrel knows it can change state, assuming that stamp matches the current barrel state. If this is hard to envision, imagine the barrel increments a private counter every time it changes important state (important to the message sender). That counter is the state stamp.

    When the barrel receives your state-changing message, it can process it as long as there are no other similar messages competing. If the state stamp has changed, it must tell the sender that the message was discarded, as it is no longer valid. Then the sender can recompute or abandon.

    This is essentially what happens with transactions as well. The messages as I describe them are a type of optimistic transaction. Scalability in games is probably achieved by minimizing choke points, regardless of the concurrency mechanism used.
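The state-stamp idea above can be sketched in a few lines. The class and field names below are illustrative, not from the thread; the barrel keeps a private version counter and rejects any state-changing message that carries a stale stamp, leaving the sender to recompute or abandon.

```python
# A sketch of the "state stamp" mechanism: the barrel increments a
# private counter on every important state change, and discards
# messages whose stamp no longer matches. Names are illustrative.

class Barrel:
    def __init__(self):
        self.health = 100
        self.stamp = 0  # bumped on every important state change

    def handle_damage(self, msg_stamp: int, damage: int) -> bool:
        """Apply a state-changing message; reject it if stale."""
        if msg_stamp != self.stamp:
            return False  # discarded: sender must recompute or abandon
        self.health -= damage
        self.stamp += 1  # invalidates any other in-flight messages
        return True

barrel = Barrel()
seen = barrel.stamp                       # sender reads current stamp
assert barrel.handle_damage(seen, 25)     # accepted: stamp matches
assert not barrel.handle_damage(seen, 25) # rejected: stamp is now stale
```

This is the optimistic-transaction flavor described above: work proceeds without locks, and only a message that raced with an actual conflicting change is discarded.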

    There is no more serialization with message passing than with transactions. If you have many processes modifying the same mutable state in your barrel, and all these modifications are dependent on the state of the barrel (i.e. invalid if the state has changed), you have a contention problem that is not solved by any concurrency mechanism. Messaging does not make this worse. If most processes are not modifying your barrel state, they are not barrel-state dependent, and there is no serialization problem as you describe. Then both messaging and transactions work well.

    Your second argument, regarding N reads and M writes, you claim is better solved by a transaction. All you are doing is breaking up the granularity. You can do the exact same thing with messages. Instead of treating the whole world as one process, break up the writes into messages to M processes. If all must be done atomically, then you do need a transactional system built using messages. Ultimately such a transactional system must commit writes.

    One way you might do it is a two-stage commit. First the supervisor (modifying) process sends a message to each of the M processes acquiring a lock. Assuming a message is sent back with succeed or fail, the next step is to send a message to each M process to actually mutate data. During that time each M process cannot be modified by anything else (i.e. it is temporarily owned by the supervisor process). After mutation is complete, the lock is freed. This requires only two messages to each M process and one message back from each. This is a fine-grained form of locking and does not block any other processes from modifying any other mutable data in the meantime. Nor do the locks prevent reading messages from being processed. Only a writing or lock-acquiring message would fail, and only for those specific pieces of data.

    The point is you can do anything you wish with message passing. It is a fundamental building block, and can scale as well as your design permits. If your design has no need for atomic composite commits, you can do that. If you do need atomic composite commits, you can do that as well.
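The two-stage commit described above can be sketched as follows. Each of the M processes is modeled as a simple object answering acquire/mutate/release messages; the names (`Cell`, `commit`, `supervisor`) are illustrative, and a real actor system would deliver these as asynchronous messages rather than method calls.

```python
# A sketch of the two-stage commit built on messages: stage 1 acquires
# a lock on each target process, stage 2 mutates and releases. Any
# acquire failure aborts the whole commit. Names are illustrative.

class Cell:
    def __init__(self, value: int):
        self.value = value
        self.locked_by = None

    def acquire(self, owner: str) -> bool:
        if self.locked_by is not None:
            return False  # someone else holds the lock: fail fast
        self.locked_by = owner
        return True

    def mutate(self, owner: str, value: int):
        assert self.locked_by == owner  # only the lock holder may write
        self.value = value

    def release(self, owner: str):
        if self.locked_by == owner:
            self.locked_by = None

def commit(cells, updates, owner="supervisor"):
    """Stage 1: acquire all locks. Stage 2: mutate, then release."""
    acquired = []
    for c in cells:
        if not c.acquire(owner):
            for a in acquired:  # any failure aborts the whole commit
                a.release(owner)
            return False
        acquired.append(c)
    for c, new_value in zip(cells, updates):
        c.mutate(owner, new_value)
        c.release(owner)
    return True

cells = [Cell(0), Cell(0)]
assert commit(cells, [1, 2])              # both writes land atomically
assert [c.value for c in cells] == [1, 2]
```

As the post argues, only the specific cells being written are locked; unrelated processes keep running, and reads are never blocked, so the serialization is confined to the data that actually conflicts.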
