This module provides a Ciao interface to the scripts for running benchmarks of (modular, incremental) analysis.
Before running any command, the bench drivers have to be compiled:
    % load this module
    ?- compile_all.  % generate executables
    yes
Running and generating graphs of several benchmarks
    ?- run_benchmark(aiakl, add, ).  % Benchmark text
    yes
    ?- run_benchmark(aiakl, del, ).  % ...
    yes
    ?- generate_results_summary("aiakl*").
    % generates a graph with the aiakl add and del tests
    % This will open the generated graph in your default PDF viewer
    yes
    ?- run_benchmark(qsort, add, ).  % ...
    yes
    ?- generate_results_summary("*-add-").
    % generates a graph for all the tests of adding sequences
    % This will open the generated graph in your default PDF viewer
    yes
    ?- show_performed_tests_directory.
    aiakl-add-not_rand-1-shfr-dd
    aiakl-del-not_rand-1-shfr-dd
    graphs
    qsort-add-not_rand-1-shfr-dd
    ...
Checking the correctness of a performed test:
    ?- check_tests_semantic_results(shfr, ['test_results/aiakl-add-not_rand-1-shfr-dd']).
Runs all configurations of incremental and modular analysis for benchmark Bench, simulating the edition sequence Edition. Note that if there were previous results of a benchmark with the same configuration, those will be overwritten.
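As an illustration, a small driver predicate can enumerate several edition sequences for one benchmark. This is only a sketch: the helper name run_all_editions/2 and the Opts argument are assumptions, not part of this module, and the third argument expected by run_benchmark/3 is left as the unspecified Opts:

    :- use_module(library(lists), [member/2]).

    % Hypothetical helper: run the add and del edition sequences
    % for benchmark Bench. Opts stands in for whatever third
    % argument run_benchmark/3 expects (not documented here).
    run_all_editions(Bench, Opts) :-
        ( member(Edition, [add, del]),
          run_benchmark(Bench, Edition, Opts),
          fail                    % backtrack to try the next Edition
        ; true                    % succeed once all editions have run
        ).

The failure-driven loop simply forces backtracking through both editions; remember that results of runs with the same configuration overwrite earlier ones.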
Checks the correctness of the analysis results of an already performed benchmark. Dirs is a (sub)list of the directories displayed by show_performed_tests_directory.
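For example, assuming the test_results/ layout shown earlier, several performed tests can be checked in a single call by passing more than one directory in Dirs:

    ?- check_tests_semantic_results(shfr,
           ['test_results/aiakl-add-not_rand-1-shfr-dd',
            'test_results/aiakl-del-not_rand-1-shfr-dd']).

The first argument names the abstract domain used in the runs being checked (shfr in the sessions above).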