Ticket #5282 (closed bug: fixed)
Bizarre results from -P profiler on OS X
Reported by:       bos
Owned by:          simonmar
Operating System:  Unknown/Multiple
Architecture:      x86_64 (amd64)
Type of failure:   Incorrect result at runtime
Difficulty:
Test Case:
Blocked By:
I'm trying to use the profiler to understand the performance of an application on OS X. I have a smallish benchmark that prints a lot of lines of text to stdout, and the numbers reported when I use +RTS -P are very different from what I see with my eyes.
The benchmark itself is quite simple:
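(The original benchmark source was not attached to this extract. The following is a hypothetical reconstruction, sketched to match the output shown below: it prints a line of text for each iteration and reports the wall-clock rate. The argument handling and the `rateThousandsPerSec` helper are my assumptions, not the reporter's actual code.)

```haskell
-- Hypothetical sketch of the benchmark (the real source is not in this
-- extract). Usage: ./Simple one 500000
-- The first argument presumably selects a benchmark variant; here it is
-- simply echoed on each line.
import Control.Monad (forM_)
import Data.Time.Clock (diffUTCTime, getCurrentTime)
import System.Environment (getArgs)
import Text.Printf (printf)

-- Pure helper: iterations per second, in thousands.
rateThousandsPerSec :: Int -> Double -> Double
rateThousandsPerSec n secs = fromIntegral n / secs / 1000

main :: IO ()
main = do
  [name, countStr] <- getArgs
  let count = read countStr :: Int
  start <- getCurrentTime
  -- The hot loop: print one line of text per iteration.
  forM_ [1 .. count] $ \i ->
    putStrLn (name ++ " " ++ show i)
  end <- getCurrentTime
  let elapsed = realToFrac (diffUTCTime end start) :: Double
  printf "%d iterations in %.1f secs (%.1f thousand/sec)\n"
         count elapsed (rateThousandsPerSec count elapsed)
```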
When I run it on the command line, it produces simple enough output:
$ time ./Simple one 500000 +RTS -P >/dev/null
500000 iterations in 2.6 secs (191.7 thousand/sec)

real    0m2.626s
user    0m2.586s
sys     0m0.030s
The time measured within the app agrees with the wall-clock time.
However, if I look at the Simple.prof file left behind afterwards, its numbers seem bizarre:
        Mon Jun 27 11:35 2011 Time and Allocation Profiling Report  (Final)

           Simple +RTS -P -RTS one 500000

        total time  =        0.28 secs   (14 ticks @ 20 ms)
        total alloc = 2,200,395,720 bytes  (excludes profiling overheads)
It reports a time of just 0.28 seconds, and the tick count is arithmetically consistent with that figure (14 ticks × 20 ms = 0.28 secs) — but it is nearly an order of magnitude below the 2.6 seconds actually elapsed.
I have other programs that show the same discrepancy between wall-clock measurements and what the profiler reports, so at this point I don't feel I can trust the profiler at all. I'm at a loss for what to do about it, though.