Having difficulty seeing what's important

When profiling code that uses anonymous methods or lambda expressions to wrap blocks of code, it is difficult to see where the real problem spots are, because these wrapper methods tend to make everything else look unimportant.

For example, if you have code like...

AppTools.ExecuteWithStandardExceptionHandling( delegate {

// Basically the entire program

} );

... where ExecuteWithStandardExceptionHandling is a simple method that wraps the call in a try/catch and emails errors to the developers in the case of an unhandled exception, the profiler makes it look like 99% of your processing power is spent in ExecuteWithStandardExceptionHandling. This is technically true, but it's not useful: yes, the whole program executes inside it, but the method spends almost no time (usually none) running its own error-emailing code; it is just running the code passed to it.
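
For reference, here is a minimal sketch of what such a wrapper might look like. Only the method name comes from the snippet above; the Action parameter, the SMTP host, and the email addresses are assumptions for illustration.

using System;
using System.Net.Mail;

public static class AppTools
{
    // Hypothetical sketch: run the supplied block and email any unhandled
    // exception to the developers before rethrowing it.
    public static void ExecuteWithStandardExceptionHandling(Action program)
    {
        try
        {
            // In a normal run, virtually all of the profiled time is spent here.
            program();
        }
        catch (Exception ex)
        {
            // Only reached when something goes wrong, so this wrapper normally
            // contributes almost no time of its own to the profile.
            using (var client = new SmtpClient("smtp.example.com"))
            {
                client.Send("app@example.com", "developers@example.com",
                            "Unhandled exception", ex.ToString());
            }
            throw;
        }
    }
}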

So everything the profiler is telling me is technically correct, and I can track down specific pieces of my "actual" code and see how many milliseconds each consumes, but it is very difficult to quickly identify trouble spots when these wrapping methods hog the charts.

Is there any way around this? Thank you.

Comments

  • Just a very quick check- are you using "line level" mode of profiling (most detail)?

    (I think the problem is actually deeper than this so I'll check with the developer but I just thought I'd ask this one :-)
  • I am using the defaults. So, Line-level and method-level timings; only methods with source (detailed), for the setting you are talking about, I think.
  • Can you post a screenshot of what the results look like in the source code view please (with line-level timings)?
  • http://picasaweb.google.com/lh/photo/Xj ... directlink

    Here's a view of the worst-offending single line of code I could find. It gets called 271 times and takes up 3.6 seconds of execution time cumulatively, and if we were to cache the result it would make for a big improvement (a rough caching sketch is included after the comments below). But look where it is in the big picture: it's about the 25th line in a list of methods that are mostly garbage. The methods above it include the ones that parse the command-line arguments and invoke the operation, and methods that help you invoke a database operation.

    I don't know how the profiler could know which methods I care about and which ones I don't, and most of this problem is surely due to my lack of experience with the program. I'm just looking for ideas to help me find critical areas faster. I've had some luck sorting by Hit Count or Time rather than Time With Children.

    The only thing that strikes me as outright wrong in this picture is that the chart above shows Time With Children as 944ms, while the same line in the source code view shows 3,681ms. I'm not sure what's causing the difference there.

    Thanks.
  • Hi,

    I can't seem to access the image. Could you check the URL?

    Thanks,

    Stephen
  • Picasa sucks. Try again.
  • Yes, I see what you mean: when represented in the methods view grid it's only claiming 3.6 milliseconds, but in the source code view it's 3.6 seconds. This is rather odd; is it easily reproducible?
  • I don't think it's as simple as it being off by a factor of 1000. But, regardless, this actually isn't central to the problem. There are other cases where the time is accurate but the fact remains that it is hard to sift through a dozen meaningless methods to find the real performance bottlenecks in the system.
  • Hello,

    I got some information earlier in the week that the source code view does not employ the ANTS Performance Profiler's overhead compensation. If you "disable overhead compensation" in the options, you should notice that the source and grid times match up.
  • We've acknowledged this as a UI bug (PP-895) and will hopefully enhance the presentation of such data in the next version.
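
Regarding the caching idea mentioned in the screenshot discussion above, a rough sketch of caching a repeatedly computed value might look like the following. ResultCache, GetOrCompute, and the example key are hypothetical names, not part of the code being profiled.

using System;
using System.Collections.Generic;

public static class ResultCache
{
    // Hypothetical helper: compute a value once per key and reuse it, instead
    // of recomputing it on every call (271 times in the profiled run above).
    private static readonly Dictionary<string, object> Cache = new Dictionary<string, object>();
    private static readonly object Gate = new object();

    public static T GetOrCompute<T>(string key, Func<T> compute)
    {
        lock (Gate)
        {
            object value;
            if (!Cache.TryGetValue(key, out value))
            {
                value = compute();
                Cache.Add(key, value);
            }
            return (T)value;
        }
    }
}

// Example usage, where ComputeExpensiveValue stands in for the hot line:
// int total = ResultCache.GetOrCompute("expensive-value", () => ComputeExpensiveValue());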