@MagicAndre1981: ProcMon has the same problem: it can't filter until *after* it has started filling a huge log file. I need filtering applied *before* the dump is ever stored, because the volume is simply too large: a 1-2 hour build generates far too much data to filter after the fact.
I've tried using xperf for various things before, but the problem I keep running into is the sheer volume of data in the output files, which seems to limit xperf to very short time windows.
My use case is tracing/monitoring what happens over the course of a build (the entire process tree, files read, etc.), which can last over an hour. The kind of data I'm after is roughly what strace provides on *nix, but there appears to be no user-configurable way to filter at that level of granularity up front. Can you offer any tips?
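For comparison, this is roughly the strace invocation I have in mind on *nix: follow the whole process tree, keep only file-related syscalls, and write to a log as the build runs (the `make` target here is just a placeholder for whatever the build command is):

```shell
# Follow child processes (-f), record only file-related syscalls
# (-e trace=file), and stream the filtered output to a log (-o),
# so the filtering happens *before* anything hits disk.
strace -f -e trace=file -o build_trace.log make
```

That's the granularity of up-front filtering I'd like to reproduce on Windows.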