WinDbg should evolve to be based around objects, like PowerShell. Filtering would be much easier, and you could also use the data to create custom visualizations and debugger-based monitoring (for example, deciding whether to dump the process based on the file path passed to CreateFile and whether MyModule!* is on the stack).
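To make the idea concrete, here is a minimal sketch of that kind of object-based predicate. This is not a real debugger API; `CreateFileEvent`, `should_dump`, and the `"MyModule!"` frame names are all hypothetical, just illustrating the filter the comment describes: dump only when our module is on the stack and the path matches.

```python
from dataclasses import dataclass, field

@dataclass
class CreateFileEvent:
    """Hypothetical debugger event object for a CreateFile call."""
    path: str                                   # file path passed to CreateFile
    stack: list = field(default_factory=list)   # frame symbols, innermost first

def should_dump(event: CreateFileEvent) -> bool:
    """Decide whether to capture a dump for this event."""
    our_module_on_stack = any(f.startswith("MyModule!") for f in event.stack)
    interesting_path = event.path.lower().startswith(r"c:\secrets")
    return our_module_on_stack and interesting_path

ev = CreateFileEvent(
    path=r"C:\Secrets\config.ini",
    stack=["kernel32!CreateFileW", "MyModule!LoadConfig", "MyApp!Main"],
)
print(should_dump(ev))  # True
```

With string-based scripting you would be parsing command output to express the same condition; with objects the predicate is just ordinary code over event properties.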
It's awesome that you're going through all the diagnostic tools used by MS support engineers. Now the biggest problem is choosing which tool to use when.
By the way, installing a Windows service (which is what DebugDiag does in order to collect data) seems a bit uncomfortable in production. Could I take dumps with procdump or WinDbg instead and open them in the DebugDiag analyzer to take advantage of the application-specific rules, like the one for SharePoint?
Does PerfView need full dumps for that? I've been trying to diagnose an alleged memory leak (it could just as well mean we have to scale out) in a process whose baseline is 5 GB of memory usage. The "leak" manifests itself in that after some time (it's a w3wp.exe, recycled automatically at 2pm, so less than 24 hours of uptime) the CPU spends more and more time in GC and there is quite a bit of paging (hard faults in resmon). Through perfmon counters, I've noticed that most of the memory-usage increase is in the large object heap.
My question is: will PerfView need over two minutes to take the snapshot, just like procdump's full memory dump does in this case?
Is the network I/O provider collecting the same data as the command-line "netsh trace start [parameters]"? Can you use this safely in production, for instance on an Exchange or SharePoint server or a Domain Controller?