Benchmarking planners

The OMPL benchmarking utility is used to compare the performance of different planners. The following subsections show how to prepare and launch a benchmark and how to visualize its results. Examples are included in some of the provided demos (see The directory structure), like the Cube-sPassage problem (demos/OMPL_geo_demos/Cube-sPassage_R2/).

Preparing the benchmark

To benchmark different planners (or planner parameters) you need:

  1. To create a different problem file for each planner. All the problem files must have the same robot(s), obstacles, control files and query, the only difference being the planner used, i.e. the <Parameters> tag where the planner is chosen and its parameter values are set. For instance, to compare a PRM planner and an RRT planner:

 <Parameters>
   <Name>omplPRM</Name>
   <Parameter name="_Max Planning Time">1.5</Parameter>
   <Parameter name="_Speed Factor">1</Parameter>
   <Parameter name="DistanceThreshold">0.10</Parameter>
   <Parameter name="MinGrowTime">0.2</Parameter>
   <Parameter name="MinExpandTime">0.0</Parameter>
   <Parameter name="BounceDistanceThreshold">0.0</Parameter>
</Parameters>

<Parameters>
  <Name>omplRRT</Name>
  <Parameter name="_Max Planning Time">1.0</Parameter>
  <Parameter name="_Speed Factor">1</Parameter>
  <Parameter name="Range">0.02</Parameter>
  <Parameter name="Goal Bias">0.05</Parameter>
</Parameters>

or to compare an RRTConnect planner with different Range values:

 <Parameters>
  <Name>omplRRTConnect</Name>
  <Parameter name="_Max Planning Time">1.0</Parameter>
  <Parameter name="_Speed Factor">1</Parameter>
  <Parameter name="Range">0.02</Parameter>
 </Parameters>

 <Parameters>
  <Name>omplRRTConnect</Name>
  <Parameter name="_Max Planning Time">1.0</Parameter>
  <Parameter name="_Speed Factor">1</Parameter>
  <Parameter name="Range">0.05</Parameter>
 </Parameters>

  2. To write a benchmarking file, in XML format, which must include:

  1. The list of problems to be benchmarked. An alias has to be provided for each problem; it will appear in the graphical results.

  2. The benchmarking parameters:

  • maxTime: The maximum execution time (in seconds) per run. This value overrides the corresponding values set in the problem files.

  • runCount: The number of times each problem will be run.

  • maxMem: The maximum memory (in MB) allocated to store the results of each problem. Memory measurements are not very accurate, so it is recommended to set this to a very large value.

  • filename: The name of the log file where the data from the runs will be stored.

  • displayProgress: Boolean flag that enables or disables the storage of partial information during the runs (interesting for optimizing planners like RRT*).

  • timeBetweenUpdates: The update period used when the previous flag is set to true.

Store this benchmarking file in the same folder as your problem files.

As an example, the following benchmarking file is used to compare the different samplers that the PRM planner can use:

<Benchmark Name="experiment">
  <Problem File="OMPL_PRM_bigcube-s-passage_R2.xml" PlannerAlias="PRMrnd"/>
  <Problem File="OMPL_PRMhalton_bigcube-s-passage_R2.xml" PlannerAlias="PRMhal"/>
  <Problem File="OMPL_PRMsdk_bigcube-s-passage_R2.xml" PlannerAlias="PRMsdk"/>
  <Problem File="OMPL_PRMgaussian_bigcube-s-passage_R2.xml" PlannerAlias="PRMgau"/>
  <Parameter Name="maxTime" Value="10.0" />
  <Parameter Name="maxMem" Value="8000.0" />
  <Parameter Name="runCount" Value="10" />
  <Parameter Name="displayProgress" Value="false" />
  <Parameter Name="filename" Value="resultPRM.log" />
</Benchmark>
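
If partial information is to be recorded during the runs (e.g. for an optimizing planner such as RRT*), the displayProgress and timeBetweenUpdates parameters would be set accordingly. A minimal sketch, assuming timeBetweenUpdates is given as a <Parameter> entry like the others, with a merely illustrative value:

  <Parameter Name="displayProgress" Value="true" />
  <!-- assumed to follow the same syntax as the other parameters; the value is illustrative -->
  <Parameter Name="timeBetweenUpdates" Value="0.1" />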

Launching the benchmark

The benchmark is launched using the kautham-console application, called with the -b option and the absolute path of the benchmarking file:

$ kautham-console -b absolute_path/benchmarking.xml

The output log file is stored in the same folder as the benchmarking file.
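For instance, for the Cube-sPassage benchmark above, the call could look as follows (the file name benchmarking.xml and the repository location ~/gitrepos/kautham are only illustrative, matching the analysis example below):

$ kautham-console -b ~/gitrepos/kautham/demos/OMPL_geo_demos/Cube-sPassage_R2/benchmarking.xml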



Analyzing the results

To analyze the results, a database file first has to be generated using the ompl_benchmark_statistics.py script, also available in the python directory of the project (see The directory structure).

From that directory, run:

$ ompl_benchmark_statistics.py absolute_path_to_log_file -d absolute_path_to_db_file

For instance, for the benchmarking of the PRM planners in the Cube-sPassage problem shown previously:

$ ompl_benchmark_statistics.py ~/gitrepos/kautham/demos/OMPL_geo_demos/Cube-sPassage_R2/resultPRM.log -d ~/gitrepos/kautham/demos/OMPL_geo_demos/Cube-sPassage_R2/resultsPRM.db

Once this database file has been generated, it can be uploaded to plannerarena.org using the Change database menu option, and the different results can then be visualized, mainly using box plots.

[Figure: Planner Arena box plot of the time attribute]

Some of the interesting benchmark attributes that can be selected are:

  • solved: to evaluate the success rate,

  • time: to evaluate the computational time (in seconds),

  • solution length: to evaluate the length of the obtained solution,

  • collision checks: to evaluate the number of calls to the collision checker,

  • graph states: to report the number of states in the constructed graph,

  • valid segment fraction: the fraction of segments that turned out to be valid out of all the segments that were checked for validity.

See https://ompl.kavrakilab.org/benchmark.html for the complete list.