Mikhail, a task will drop references to its continuations when it completes. And if you register continuations with a task after that task has completed, the continuation is either immediately executed or scheduled, and no reference will be stored.
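A tiny sketch of that second case (assumes .NET 4.5 for Task.FromResult):

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        // An already-completed task.
        Task<int> completed = Task.FromResult(42);

        // Because the task has already completed, this continuation is
        // immediately executed or scheduled; no reference to it is stored
        // in the task's continuation list.
        completed.ContinueWith(t => Console.WriteLine(t.Result))
                 .Wait();
    }
}
```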
Anders, Task's IDisposable implementation in .NET 4.5 exists purely to enable disposing of its WaitHandle, which is only ever allocated if you explicitly access the ((IAsyncResult)task).AsyncWaitHandle. Even if disposed, the task will continue to work, except that trying to use its IAsyncResult.AsyncWaitHandle will result in an exception. So, you should feel comfortable caching your own tasks, as the worst that will happen if someone disposes of them is that they won't be able to use what's effectively a legacy property (you shouldn't need to use this explicitly-implemented property unless you're bridging the gap with the existing IAsyncResult pattern).
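To illustrate that behavior (a sketch; the exact exception shown is what I'd expect from accessing the handle after Dispose):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        Task<int> task = Task.FromResult(0);
        task.Dispose();

        // The task still works after Dispose:
        Console.WriteLine(task.Result);

        // Only the legacy IAsyncResult wait handle is off limits now:
        try
        {
            WaitHandle ignored = ((IAsyncResult)task).AsyncWaitHandle;
        }
        catch (ObjectDisposedException)
        {
            Console.WriteLine("AsyncWaitHandle is unavailable after Dispose");
        }
    }
}
```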
Anders, as mentioned in the talk, in .NET 4.5 tasks no longer throw their unobserved exception on the finalizer thread (you can re-enable the behavior with a configuration switch). The .NET 4 behavior is discussed at https://msdn.microsoft.com/en-us/library/dd997415.aspx, e.g. "If you do not wait on a task that propagates an exception, or access its Exception property, the exception is escalated according to the .NET exception policy when the task is garbage-collected." Regarding disposing of Tasks, in general you shouldn't need to Dispose of tasks, and unless you can prove that it's actually beneficial, I'd urge you to forget that Task even implements IDisposable. If we had it to do over again, I don't believe it would.
Regarding ConcurrentDictionary.GetOrAdd, that won't do what I want in this case. Note that I'm only storing the task into the dictionary if the task completes successfully... if I were to use GetOrAdd, the task would always end up getting stored, even if it faulted. That's why I wait to store the task until the task has completed, so that I can store it conditionally. You're right that if I were going to store it unconditionally, GetOrAdd would be a simpler approach.
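Roughly the shape of the conditional caching I'm describing (a sketch; PageCache, GetPageAsync, and the stubbed DownloadAsync are illustrative names, not anything from the talk):

```csharp
using System.Collections.Concurrent;
using System.Threading.Tasks;

public static class PageCache
{
    private static readonly ConcurrentDictionary<string, Task<string>> s_cache =
        new ConcurrentDictionary<string, Task<string>>();

    public static Task<string> GetPageAsync(string url)
    {
        Task<string> cached;
        if (s_cache.TryGetValue(url, out cached)) return cached;

        Task<string> task = DownloadAsync(url);

        // Store the task only once it's known to have completed successfully.
        // A faulted or canceled task is never cached, so a later caller retries.
        // GetOrAdd would instead store the task unconditionally, up front.
        task.ContinueWith(t =>
        {
            if (t.Status == TaskStatus.RanToCompletion) s_cache.TryAdd(url, t);
        }, TaskContinuationOptions.ExecuteSynchronously);

        return task;
    }

    // Hypothetical download operation, stubbed out for the sketch.
    private static Task<string> DownloadAsync(string url)
    {
        return Task.FromResult("contents of " + url);
    }
}
```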
Thanks, and I'm very glad to hear you enjoyed the talk!
This isn't a change in behavior, rather this case just doesn't map to the same cases that would result in inlining. In this case, there's actually nothing to be inlined: the task being returned from the async method doesn't have a delegate associated with it, so there's nothing to run. It's akin to waiting on a task created by a TaskCompletionSource<T>: you can use a TCS<T> to represent any arbitrary asynchronous operation, but when you Wait() on such a task, there's no invokable work associated with that Task<T>, so the Wait() call has no choice but to spin/block until the Task<T> is eventually signaled/completed. The implementation currently has no way to associate the work being posted back to the UI thread with the task separately returned from the async method.
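A minimal illustration of the TaskCompletionSource<T> analogy:

```csharp
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        var tcs = new TaskCompletionSource<int>();

        // tcs.Task has no delegate associated with it, so Wait() has nothing
        // to inline; it can only block until the task is signaled elsewhere.
        Task waiter = Task.Run(() => tcs.Task.Wait());

        tcs.SetResult(42);  // completing the source is what unblocks the Wait()
        waiter.Wait();
    }
}
```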
Good eye. TPL Dataflow is in part based on and inspired by concepts from CCR, along with concepts from Axum and Visual C++ 10's Asynchronous Agents library, so you'll see a lot of similarities in terms of the kinds of problems you can solve. The APIs were redesigned to fit in well with the rest of the .NET Framework and to take advantage of what the Task Parallel Library and other .NET goodies have to offer, as well as redesigned to incorporate some more scenarios and patterns we felt were important.
Regarding TaskSchedulers, sure, we'll see what we can pull together. Note that these dataflow blocks can be targeted to run on any TaskScheduler instance, so you could configure a block to run on the thread pool, or on the UI, or in a concurrent/exclusive fashion, or whatever underlying semantics you want to achieve by plugging in a custom scheduler. It sounds like you're interested in and asking about the other direction, implementing a TaskScheduler with a dataflow block (like ActionBlock)... you could certainly do that, too.
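For instance, a minimal sketch of that "other direction" (the class name is mine, and debugger support is omitted):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

// A TaskScheduler built on an ActionBlock. The block's default
// MaxDegreeOfParallelism of 1 means tasks queued to this scheduler
// execute one at a time, in order.
public sealed class ActionBlockTaskScheduler : TaskScheduler
{
    private readonly ActionBlock<Task> m_block;

    public ActionBlockTaskScheduler()
    {
        m_block = new ActionBlock<Task>(t => TryExecuteTask(t));
    }

    protected override void QueueTask(Task task)
    {
        m_block.Post(task);
    }

    protected override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued)
    {
        return false; // keep the sketch simple: never inline
    }

    protected override IEnumerable<Task> GetScheduledTasks()
    {
        return null; // debugger support omitted in this sketch
    }
}
```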
I'm glad to hear you find it useful. And it is possible to build custom blocks; the system is primarily based on two interfaces, ISourceBlock<TOutput> and ITargetBlock<TInput>, which the built-in blocks implement and which you can implement as well. You can then link your blocks up with others, just as is done with the built-in blocks, and all of the built-in extension methods that work with these interfaces will apply to your blocks as well. You can build custom blocks in a few ways, and there's actually a short section on doing so in the document at https://www.microsoft.com/downloads/en/details.aspx?FamilyID=d5b3e1f8-c672-48e8-baf8-94f05b431f5c&displaylang=en, which you may find useful.
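As one sketch of the idea, you can compose existing blocks into a custom one with DataflowBlock.Encapsulate (the even-numbers filter here is just an arbitrary example):

```csharp
using System.Threading.Tasks.Dataflow;

public static class CustomBlocks
{
    // Wraps a target side and a source side into a single
    // IPropagatorBlock that passes along only even numbers.
    public static IPropagatorBlock<int, int> CreateEvensOnlyBlock()
    {
        var output = new BufferBlock<int>();
        var input = new ActionBlock<int>(i =>
        {
            if (i % 2 == 0) output.Post(i);
        });

        // Propagate completion from the target side to the source side.
        input.Completion.ContinueWith(_ => output.Complete());

        return DataflowBlock.Encapsulate(input, output);
    }
}
```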
Regarding handling large numbers of small data packets, it should work well for that situation. If you find otherwise, please do let us know (note that we're still doing a fair amount of optimization to further decrease overheads and allocations, so this should also get better in the future).
Regarding which thread things run on, awaiting a task attempts to resume execution in the same threading environment where the operation was suspended. If there was a current SynchronizationContext when the await began, then execution will resume on that context (by Post'ing to it); otherwise, execution will resume on whatever TaskScheduler was current at the await. This means that if you await on a UI thread, execution will continue on the UI thread. If you await on a thread pool thread, you'll resume on a thread pool thread, though not necessarily the same thread that began the await.
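The thread pool case is easy to see for yourself (on a UI thread the same code would instead resume on the captured SynchronizationContext; this console sketch has none):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        DemoAsync().Wait();
    }

    static async Task DemoAsync()
    {
        // No SynchronizationContext here, so the await captures only the
        // current TaskScheduler (the default, i.e. the thread pool).
        int before = Thread.CurrentThread.ManagedThreadId;

        await Task.Delay(100);

        // Resumes on a thread pool thread -- not necessarily the same
        // thread that began the await.
        int after = Thread.CurrentThread.ManagedThreadId;
        Console.WriteLine("before={0}, after={1}", before, after);
    }
}
```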