How to interpret Catch2 output when running a BENCHMARK?

Hi,

I have this code:

TEST_CASE("Extract_error", "[Parser]")
{
	BENCHMARK("find_include_name")
	{
		// everything in this body is timed, including the ParserTest construction
		using namespace std;

		ParserTest parser{ false };

		string line = "   #include Martin";
		return parser.find_include_name(line, 2);
	};
}


and I get this output:



benchmark name                       samples       iterations    estimated
                                     mean          low mean      high mean
                                     std dev       low std dev   high std dev
-------------------------------------------------------------------------------
find_include_name                              100            23     8.3812 ms
                                        3.48196 us    3.46626 us    3.51257 us
                                        107.115 ns    65.9583 ns    194.605 ns


What does "estimated" measure, and why is the mean value so different from the estimated value?


Why do we have 100 samples but only 23 iterations?


The documentation is not clear!
JUANDENT wrote:
why is the mean value so different from the estimated value?

8.3812 ms / (100 × 23) = 3.644 µs

This is somewhat close to the mean, but it's not exactly the same, so maybe I'm wrong...

My thought was that maybe it runs the code 100 × 23 times, and "estimated" is the total time it takes to run everything.
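
Here is a quick sketch of that arithmetic in code, just plugging in the numbers from the report above (the constants are copied straight from the output; nothing here is queried from Catch2 itself):

#include <iostream>

int main()
{
	// numbers copied from the benchmark report above
	const double estimated_ms = 8.3812;  // "estimated" column
	const int samples = 100;             // "samples" column
	const int iterations = 23;           // "iterations" column

	// if every sample runs the body `iterations` times, that is 2300 runs total
	const double total_runs = static_cast<double>(samples) * iterations;

	// convert ms to us and divide by the number of runs
	const double per_run_us = estimated_ms * 1000.0 / total_runs;

	std::cout << "per-run estimate: " << per_run_us << " us\n";  // prints ~3.644 us
}

If that guess is right, the reported mean of 3.48196 us would come from the actual measured samples, while "estimated" would be a projection made beforehand, which would explain why the two are close but not identical.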