Corrector2
Comments

Building Awesome Apps for Windows 7 (Session 1): A Lap Around the Windows API Code Pack
How does this relate to WPF for doing the exact same thing?

Building Awesome Apps for Windows 7: Overview
Wait, what's wrong with WPF for building these same managed apps without downloading any of this Windows API code pack mumbo jumbo?

C9 Lectures: Stephan T. Lavavej - Standard Template Library (STL), 4 of n
Looking much better in 1.3! I'd still do more refactoring on unreachable(), but it's a whole world better than before...

C9 Lectures: Stephan T. Lavavej - Standard Template Library (STL), 4 of n
Since most of the issues revolved around solve, let me make my argument clearer:
A complex function, even one consisting mostly of linear flow, can be broken down into several simple functions in order to become more hierarchical and less linear. Let me give a simple example to elucidate my point:
Instead of a long function such as:
void Car::Build()
{
    // Build the frame
    ... build the frame code
    // Add doors
    ... add doors code
    // Insert engine
    ... insert engine code
    // Insert transmission
    ... insert transmission code
    // Insert upholstery
    ... insert upholstery code
    // Paint the car
    ... paint the car code
    // Whatever else needs to be done
    ... whatever else needs to be done
}
It is much easier to see something like this (the comments are not part of the code; I am adding them only to show, for the purpose of explaining my point, what the functions being called do):
void Car::Build()
{
    BuildFrame(...);
    AddInternalParts(...); // Calls AddEngine(), AddTransmission(), AddUpholstery(), etc...
    AddBodyParts(...); // Calls AddDoors()
    Paint(...);
}
Now the Build function is easy to scan and comprehend. If the programmer wishes to look into the details and see how each step is actually implemented, he can go into each of the functions being called and examine them (or look at call graphs, etc.).
By creating a deeper (rather than shallower) Build() function, it becomes much easier to comprehend in a top-down (rather than bottom-up) fashion. This is how most people solve and analyze problems - from an abstract problem statement down to analyzing (i.e., "sweating") the details.
If you:
- Break up your solve function into subtasks (functions)
- Give them meaningful names
- Only call these subtasks from within solve
Then it would be much easier to see what solve does as a whole, and there would be no performance penalty to boot, because these subtasks would probably still be coarse-grained tasks. If any of the subtasks is still complex, you can further decompose it into more subtasks.
A developer reading solve could drill down to the level of interest to him, while seeing the big picture all along the way. Compare this to having to wade through 400 lines of code, even if only looking for comments, in order to try to figure out what solve does.
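To make the shape of what I am advocating concrete, here is a minimal C++ sketch. The subtask names (ReadInput, BuildModel, Search, ReportResult) are invented for illustration and do not come from the actual source; each one just records that it ran, standing in for the block of code it would own:

```cpp
#include <string>
#include <vector>

// Hypothetical decomposition of a long solve() into named subtasks.
struct Solver
{
    std::vector<std::string> steps;  // trace of the subtasks, in call order

    void ReadInput()    { steps.push_back("ReadInput"); }     // parse the problem
    void BuildModel()   { steps.push_back("BuildModel"); }    // set up data structures
    void Search()       { steps.push_back("Search"); }        // run the core algorithm
    void ReportResult() { steps.push_back("ReportResult"); }  // print the answer

    // The top-level function now reads as an outline of the solution;
    // a reader drills into a subtask only when that step interests him.
    void solve()
    {
        ReadInput();
        BuildModel();
        Search();
        ReportResult();
    }
};
```

The point is only the structure: solve() becomes a four-line table of contents, and each subtask is free to be decomposed further in the same way.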
My two cents...

C9 Lectures: Stephan T. Lavavej - Standard Template Library (STL), 4 of n
Well, he does state that the solve function, the meat of this application, will be covered in the next video, so I suppose we can take a wait-and-see approach. However, basic OO/structured programming tenets do not condone the use of 400-line "super-do-everything" functions, of which solve() is one. Bottom line: we'll have to give Stephan the benefit of the doubt and let him elaborate in the next episode...

C9 Lectures: Stephan T. Lavavej - Standard Template Library (STL), 4 of n
So, I opened the source file, saw a nearly 400-line solve function with minimal "island" comments, and pretty much lost all interest in what the program does. Does it really have to be this unstructured to perform? Could it be any less structured and still be instructive (/sarcasm)? Was it necessary to cram it all into one .cpp file, given that it was part of a zip archive anyway?
Just my 2 cents...

Techniques in Advanced .NET Debugging with John Robbins (Part 1 of 3)
I nominate this for the worst encoding of a very important presentation to ever grace Channel 9.

C# 4.0 and beyond by Anders Hejlsberg
Thanks for posting this wonderful presentation.
Please follow up with more from DevDays 2010. Also, the video and audio quality are superb! All encodings of presentations should be this good.

How to Embed PowerShell Within a C# Application
Why didn't you guys use a tripod? The tiny screen in the video is very hard to read, and the camera shake makes it even harder. Fortunately, the code was posted to the right of the video (and the subtitles helped, too), because it was nearly impossible to make it out on screen. In the end, I think this entire exercise would have worked better as a blog post.

TechDays 2010 Keynote by Anders Hejlsberg: Trends and future directions in programming languages
Before uploading, please do a good job of compressing the video. It is VC-1 compression, after all, and the WMV High version should not suffer from posterization, blocking, and other highly noticeable artifacts that make it not much better than the regular WMV version.
Thanks

All Data/All Day Dive into .NET Data Access (Part 4 of 6): Getting Started with ADO.NET Entity Framework
I am sorry, but most of the videos from this MSDN simulcast have shoddy audio, and the audio on this presentation is barely intelligible. Is this 2010 or 1910? Horrible echo, combined with audio compression beyond any reasonable limit, produces audio that one has to strain to understand. There is also a complete audio drop from 5:06 to 5:29 during the presentation.

MSDN Live Meeting - Visual Studio 2010 and .NET 4 Update
For anyone interested in the scalability of the various approaches presented, here are the results of a proper 64-bit release-mode run on a more modern box with 6 GB of RAM and a single Intel Xeon W5590 CPU (3.33 GHz / 8 MB L3 cache quad-core Nehalem-EP with hyper-threading turned on, for a total of 8 logical CPUs). The build was done using VS2010 RC / .NET 4.0 RC and run on the host just described, under the Windows Server 2008 R2 Standard OS:
CalcPrimes
Count of prime numbers: 283146
Elapsed = 1713

ThreadingDemo.CalcPrimes
Count of prime numbers: 283146
Elapsed = 1080

ParallelFor.CalcPrimes
Count of prime numbers: 283146
Elapsed = 431

ParallelForEach.CalcPrimes
Count of prime numbers: 283146
Elapsed = 481

ParallelFor.CalcPrimes_TLV
Count of prime numbers: 283146
Elapsed = 430

ParallelForEach.CalcPrimes_TLV
Count of prime numbers: 283146
Elapsed = 447

LinqDemo.CalcPrimes
Count of prime numbers: 283146
Elapsed = 1740

PLinqDemo.CalcPrimes
Count of prime numbers: 283146
Elapsed = 461

Summary: It looks like, with a proper 64-bit release build, there are many options available here to gain scalability, at least with this particular test and the specified number of CPUs.
Interestingly, if we make a slight improvement to the algorithm, we can improve performance and most likely not even need to consider multithreaded approaches. We can also leverage the contents of the container we are building, without worrying about (and paying a concurrency penalty for) locking it, as follows:
private static void CalcPrimes2()
{
    List<int> primes = new List<int>() { 2 };
    for (int nr = 3; nr <= 4000000; nr += 2)
    {
        int upperBound = (int)Math.Sqrt(nr);
        foreach (int prime in primes)
        {
            if (prime > upperBound)
            {
                primes.Add(nr);
                break;
            }
            if (nr % prime == 0)
                break;
        }
    }
    Console.WriteLine("Count of prime numbers: " + primes.Count);
}

Running with these small changes, we get:
CalcPrimes2 (SingleThreaded)
Count of prime numbers: 283146
Elapsed = 501

Notice how this result is comparable to our best multithreaded results.
For the next test, let's increase the number of numbers being tested for primality to 8 million and see how the various approaches fare:
CalcPrimes
Count of prime numbers: 539777
Elapsed = 4585

CalcPrimes2 (SingleThreaded)
Count of prime numbers: 539777
Elapsed = 1224

ThreadingDemo.CalcPrimes
Count of prime numbers: 539777
Elapsed = 2870

ParallelFor.CalcPrimes
Count of prime numbers: 539777
Elapsed = 1146

ParallelForEach.CalcPrimes
Count of prime numbers: 539777
Elapsed = 1210

ParallelFor.CalcPrimes_TLV
Count of prime numbers: 539777
Elapsed = 1144

ParallelForEach.CalcPrimes_TLV
Count of prime numbers: 539777
Elapsed = 1178

LinqDemo.CalcPrimes
Count of prime numbers: 539777
Elapsed = 4623

PLinqDemo.CalcPrimes
Count of prime numbers: 539777
Elapsed = 1183

Now the slightly improved single-threaded function (CalcPrimes2) is almost in line with our best parallel attempt using the previous algorithm, despite the fact that this is a decent multiprocessor host (i.e., 4 physical, 8 logical CPU cores), which offers plenty of potential parallelism to the various decent parallel approaches presented in this session. This example (i.e., extending the domain of numbers being tested to 8 million), and how it scales as the workload increases, underlines the point about taking a little time to improve the algorithm first, before jumping with both feet into a multithreaded solution.
If you are still not convinced, let's look at what happens when the domain is doubled again to the first 16 million numbers:
CalcPrimes
Count of prime numbers: 1031130
Elapsed = 12248

CalcPrimes2 (SingleThreaded)
Count of prime numbers: 1031130
Elapsed = 3017

ThreadingDemo.CalcPrimes
Count of prime numbers: 1031130
Elapsed = 7682

ParallelFor.CalcPrimes
Count of prime numbers: 1031130
Elapsed = 3054

ParallelForEach.CalcPrimes
Count of prime numbers: 1031130
Elapsed = 3119

ParallelFor.CalcPrimes_TLV
Count of prime numbers: 1031130
Elapsed = 3052

ParallelForEach.CalcPrimes_TLV
Count of prime numbers: 1031130
Elapsed = 3103

LinqDemo.CalcPrimes
Count of prime numbers: 1031130
Elapsed = 12318

PLinqDemo.CalcPrimes
Count of prime numbers: 1031130
Elapsed = 3106

In this run, the (improved) single-threaded solution (marginally) beats our best parallel attempts with the weaker algorithm.
Thanks for listening