BradL
  • Case Study: Debugging the Load Test - 07

    @gt65345: Whoops... yeah, meant choice B.

    I never looked at the dumps from this customer for this issue - I solved it by looking at perfmon.  So I don't have any dump analysis to share.  But if I did, I would be employing the steps in Episode 5 to find which threads are consuming the CPU.  All I would expect to find are GC threads, which may or may not be in the midst of a GC, and worker threads, with or without custom code on the stack, doing whatever work they're supposed to do.  I say "with or without" because the dump may be captured at a time when threads are performing certain work or waiting for work.
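
    As a rough sketch of those Episode 5 steps, this is the kind of thing I'd run in WinDbg/cdb against such a dump (the .loadby line assumes a .NET 4 process - for .NET 2.0/3.5 it would be .loadby sos mscorwks - and thread 12 is just a placeholder):

    $$ Accumulated user-mode CPU time per thread; the busiest threads sort to the top
    !runaway

    $$ Load SOS so we can inspect managed stacks
    .loadby sos clr

    $$ Switch to a busy thread (e.g. thread 12) and dump its managed stack
    ~12s
    !clrstack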

  • Case Study: Debugging the Load Test - 07

    @gt65345: The customer engaged me because they were up against a release date, and that date was in danger of slipping due to this issue in the test environment.  Root cause for the high CPU *was* found - too much load.  Knowing this, their options at that point were A) continue to troubleshoot as an academic exercise, increasing the risk that their release date would slip, or B) re-test with a realistic load, keeping their planned ship date alive.

    Taking all things into consideration, they chose what most/all customers would - choice A. 

  • Case Study: Debugging the Load Test - 07

    @Frank: That's definitely an option, and a good one at that.  Though this test was done against one server.

    But the customer didn't ask me what tool to use to run the test; they already had their tools in place.  They just asked me to help them find the root cause of the problem.

  • Case Study: Debugging the Load Test - 07

    @gt65345: The handoff has little/nothing to do with the high CPU.  I didn't look at the dumps or have a profiler trace to verify which threads were consuming the CPU.  But based on past experience, it was the worker threads (from the CLR worker threadpool) - they are the ones doing all the work and executing the requests.  And maybe the GC threads, too (depending on how much memory was allocated).
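
    For what it's worth, a quick sanity check of that in a dump is SOS's !threadpool command, which reports the threadpool's own view of CPU utilization along with worker thread counts (again assuming a .NET 4 process for the .loadby line):

    $$ Load SOS and summarize the CLR threadpool: CPU %, worker threads, queued work requests
    .loadby sos clr
    !threadpool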

  • Case Study: Debugging the Load Test - 07

    @gt65345: The way the perfmon counters are updated has a say in this.  Requests Queued is incremented when the request is posted from native code to the CLR threadpool, and then decremented when the callback is invoked.  This all happens in native code.  With a heavy load or a "burst", you may see this counter go above zero.

    On the other side, if the CLR threadpool is draining these requests very quickly (e.g., very lightweight requests), then they'll never have to wait in the app-specific queue, and therefore Requests In Application Queue won't go above 0.

    Hence, "Requests Queued by itself isn't an indicator of a problem, per se."  A healthy ASP.NET server can have Requests Queued > 0.

  • Putting it all together: finding root cause of high memory pressure - 12

    @gt65345: You want the LocalDumps reg key, setting DumpType=2.
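
    Concretely, that's the Windows Error Reporting LocalDumps key; a minimal setup looks like the following, where DumpType=2 means "full dump" and the dump folder is just an example path:

    rem Have WER write full dumps for crashing processes
    reg add "HKLM\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps" /v DumpType /t REG_DWORD /d 2 /f
    reg add "HKLM\SOFTWARE\Microsoft\Windows\Windows Error Reporting\LocalDumps" /v DumpFolder /t REG_EXPAND_SZ /d "c:\dumps" /f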

  • Analyzing a dump of a process under memory pressure - 11

    @Jehanzeb: I've never heard of clrdump before, but from debuginfo.com, it says the default is a minidump, which sounds like a variation of .dump /m with cdb or the other debuggers from Microsoft's debugging tools package.  Honestly, I never use these minidumps, as they're very limited in what they provide and essentially useless for troubleshooting memory issues.  I tried running !address from a dump obtained by .dump /m (see debugger.chm from our Debugging Tools package) and !address wouldn't even run.

    In any case, try the sympath changes I suggested above, then tell me the results.
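
    For reference, capturing a full user-mode dump with cdb looks like this (the PID and output path are placeholders; /ma writes a dump with full memory):

    rem Attach non-invasively to PID 1234, write a full dump, then detach
    cdb -pv -p 1234 -c ".dump /ma c:\dumps\full.dmp; qd"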

  • Analyzing a dump of a process under memory pressure - 11

    @Jehanzeb: Hmmm... what arguments do you use when you issue a .dump command?  Also, try the sympath changes I suggested to see if that helps.

  • Preparing to troubleshoot memory pressure issues: a primer on virtual memory - 10

    @Randhir: Good catch.  Episodes 7-9 have been recorded, but there were issues and they need to be re-recorded before I post them.  I should be able to get them published in the next few weeks.

  • Analyzing a dump of a process under memory pressure - 11


    Initially, this looks like a symbol issue.

    1. What did you use to get this dump?  What tool, and what command?  You need to ensure you have a full user-mode dump.  While you can line up symbols correctly with a minidump (which is typically ~1%-5% of the file size of a full user-mode dump), minidumps have limited use when it comes to debugging production application issues.

    2. I don't know what your sympath is, but adding c:\windows\symbols isn't likely to help.  Try using our public symbol server:

    .sympath srv*c:\symcache*http://msdl.microsoft.com/download/symbols

    .reload /f
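
    If symbols still don't line up after the reload, turning on noisy symbol loading shows exactly where the debugger is looking for each PDB (lm m clr* below is just an example module check):

    !sym noisy
    .reload /f
    lm m clr*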