Unit testing

Author(s): Edison Mera, Pedro López, Manuel Hermenegildo, Jose F. Morales, Alvaro Sevilla San Mateo, Nataliia Stulova, Ignacio Casso, José Luis Bueno.

The Ciao assertion language (see The Ciao assertion language) allows writing tests (including unit tests) by means of test assertions. These assertions make it possible to write specific test cases at the predicate level. This library contains predicates that can be used to run tests in modules and gather or pretty-print the results. It also provides some special properties that are convenient when writing tests and the corresponding run-time support.

Writing test assertions

As described in The Ciao assertion language, a test assertion is written as follows:

:- test predicate(A1, A2, ..., An) 
   :  <Precondition>
   => <Postcondition>
   +  <Global properties>
   #  <Comment>.

Where the fields of the test assertion have the usual meaning in Ciao assertions, i.e., they contain conjunctions of properties which must hold at certain points in the execution. Here we give a somewhat more operational ("test oriented") reading to these fields:

  • predicate/n is the predicate to be tested.

  • Precondition is a goal (a literal or a conjunction of literals) that is called before the predicate being tested, and can be used to generate values for the input parameters. While in some other types of assertions these preconditions contain properties to be checked, their typical role here is to provide concrete input values for which the predicate can actually be executed.

  • Postcondition is a goal that should succeed after predicate/n has been called. This is used to test that the output of the predicate is the correct one for the input provided.

  • Global properties specifies global properties that the predicate should meet, in the same way as in other assertions. For example, not_fails means that the predicate does not fail, exception(error(a,b)) means that the predicate should throw the exception error(a,b), and so on (see the sketch after this list).

  • Comment is a string that documents the test.
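
For instance, a test stating that a call must throw a particular exception could look as follows (a sketch: safe_div/3 is a hypothetical predicate introduced here only for illustration):

:- test safe_div(A, B, C) : (A = 1, B = 0)
    + exception(error(evaluation_error(zero_divisor), safe_div/3)).

safe_div(_, 0, _) :- !,
    % Division by zero: throw the exception the test expects.
    throw(error(evaluation_error(zero_divisor), safe_div/3)).
safe_div(A, B, C) :-
    C is A / B.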

The following are some example tests for a complex number evaluator (see Examples (unittest) for the full code):

:- module(ceval2, [ceval/2], [assertions, regtypes, nativeprops]).

:- test ceval(A, B) : (A = c(3, 4) + c(1, 2) - c(2, 3))
    => (B = c(2, 3)) + (not_fails, is_det).

:- test ceval(A, B) : (A = c(3, 4) * c(1, 2) / c(1, 2))
    => (B = c(3.0, 4.0)) + (not_fails, is_det).

ceval(A,   A) :- complex(A), !.
ceval(A+B, C) :- ceval(A, CA), ceval(B, CB), add(CA, CB, C).
ceval(A-B, C) :- ceval(A, CA), ceval(B, CB), sub(CA, CB, C).
ceval(A*B, C) :- ceval(A, CA), ceval(B, CB), mul(CA, CB, C).
ceval(A/B, C) :- ceval(A, CA), ceval(B, CB), div(CA, CB, C).

...

:- regtype complex/1.
:- export(complex/1).

complex(c(A, B)) :-
    num(A),
    num(B).

Test assertions can be combined with other assertions:

:- test ceval(A, B) : (A = c(3, 4) + c(1, 2) - c(2, 3))
    => (B = c(2, 3)) + (not_fails, is_det).
:- test ceval(A, B) : (A = c(3, 4) * c(1, 2) / c(1, 2))
    => (B = c(3.0, 4.0)) + (not_fails, is_det).
:- check pred ceval/2 : gnd * term => gnd * complex.

Test assertions can also take the standard assertion status prefixes. In particular, a status of false can be used to state that a test fails, which can be useful for flagging known bugs.

:- false test ceval(A, B) : (A = c(3, 4) + c(1, 2) - c(2, 3))
    => (B = c(2, 3)) + (not_fails, is_det).

Tests with a false (or true) prefix are not run.

There are some specific properties that only apply to testing which are provided in module unittest_props.pl (see Special properties for testing). For example, the limit to the number of solutions to be generated for the tested predicate can be set with the property try_sols(N), a timeout to a test can be set with the property timeout(N), etc.
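
For example (a sketch: color/1 and count/1 are hypothetical predicates, and the exact meaning of the timeout argument is described in unittest_props):

% Check at most the first two solutions generated for color/1:
:- test color(X) => atm(X) + try_sols(2).

color(red).
color(green).
color(blue).

% Bound the running time of a potentially long computation:
:- test count(N) : (N = 1000000) + (not_fails, timeout(5000)).

count(0) :- !.
count(N) :- N1 is N - 1, count(N1).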

Test setup and cleanup can be specified with the setup(SetupPred) and cleanup(CleanupPred) properties, where SetupPred and CleanupPred are the predicates that perform the setup and cleanup, respectively.

The following is an example of setup and cleanup:

:- dynamic animal/2.

:- test animals_test1 + (setup(setup_db), cleanup(cleanup_db)).

animals_test1 :-
    findall((Name, Type), animal(Name, Type), AnimalList),
    print_animals(AnimalList).

setup_db :-
    add_animal(parakeet, bird),
    add_animal(dolphin, mammal).

cleanup_db :-
    retractall(animal(_, _)).

add_animal(Name, Type) :-
    assertz(animal(Name, Type)).

print_animals([]).
print_animals([(Name, Type) | Rest]) :-
    format('Animal: ~w, Type: ~w~n', [Name, Type]),
    print_animals(Rest).

Unit tests as examples

The special property example can be used to mark the unit test as an example, so that it is documented as such in manuals. The default behavior in lpdoc is to not include the unit tests in manuals unless they are marked this way. For example, the following test would be included in the manual as an example:

:- test ceval(A, B) : (A = c(3, 4) + c(1, 2) - c(2, 3))
    => (B = c(2, 3)) + (not_fails, is_det, example).

Running unit tests

There are several ways to run the unit tests:

  • Select the CiaoDbg menu within the development environment and choose, e.g., Run tests in current module.
  • Run all tests in a given bundle by running the following command at the top level of the source tree or a bundle:
    ciao test
  • Run from the top level, loading this module (unittest.pl) and calling the appropriate predicates (e.g., run_tests/3) (see the module Usage and interface section below), as illustrated in the sketch after this list. This can also be done from a program, provided it imports this module.
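
For instance, a top-level session could look like this (a sketch, assuming the ceval2.pl module above is in the current directory):

?- use_module(library(unittest)).
?- run_tests_in_module('ceval2.pl').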

Combination with run-time tests

These tests can be combined with the run-time checking of other assertions present in the involved modules. This can be done by including the rtchecks package in the desired modules. Any check assertions present in the code will then be checked dynamically during the execution of the tests and can detect additional errors.
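
For example, the module declaration of ceval2 above could include the rtchecks package so that its check assertions (such as the pred assertion for ceval/2 shown earlier) are also checked dynamically while the tests run:

:- module(ceval2, [ceval/2], [assertions, regtypes, nativeprops, rtchecks]).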

Integration tests

If you need to write tests for predicates that are spread over several modules, but work together, it may be useful to create a separate module, and reexport the predicates required to build the tests. This allows performing integration testing, using the syntax and functionality of the test assertions.
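
The following sketch illustrates the idea (all module and predicate names are hypothetical):

:- module(bank_itests, [], [assertions, nativeprops]).

% Reexport the predicates under test from the module(s) where they live:
:- reexport(accounts, [open_account/2, deposit/3, balance/2]).

% An integration test exercising several predicates together:
:- test deposit_flow + (not_fails, is_det).

deposit_flow :-
    open_account(acc1, 0),
    deposit(acc1, 100, _),
    balance(acc1, 100).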


Usage and interface

Documentation on exports

REGTYPE test_action/1

A global option that specifies a testing routine. The current set of actions is:
  • check : run tests and temporarily save results in the auto-rewritable module.testout file;
  • show_results : print the test results;
  • status(S) : unify S with the overall test status;
  • summaries(S) : unify S with the test summaries;
  • stats(S) : unify S with the statistical summary;
  • save : save the test results file as module.testout-saved;
  • briefcompare : check whether the current and saved test output files differ;
  • briefcompare(S) : like briefcompare, but unifies S with the comparison status;
  • compare : show the differences between the current and saved test output files, in diff format.

Usage: test_action(Action)

Action is a testing action.
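
For example, these actions can be combined into a simple regression workflow at the top level (a sketch; the module name is hypothetical):

% Run the tests and save the results as the reference output:
?- run_tests('ceval2.pl', [], [check, save]).

% Later, after modifying the code, check for regressions:
?- run_tests('ceval2.pl', [], [check, briefcompare(S)]).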

    PREDICATE run_tests_in_module/1

    Usage: run_tests_in_module(Alias)

    Executes all the tests for Alias and shows the results (without further options). Defined as follows:

    run_tests_in_module(Alias) :-
        run_tests(Alias,[],[check,show_results]).
    

    PREDICATE run_tests_in_module/2

    Usage: run_tests_in_module(Alias,Opts)

    Executes all the tests for Alias and shows the results. Defined as follows:

    run_tests_in_module(Alias,Opts) :-
        run_tests(Alias,Opts,[check,show_results]).
    

    PREDICATE run_tests_in_module_check_exp_assrts/1

    Usage: run_tests_in_module_check_exp_assrts(Alias)

    Executes all the tests for Alias and shows the results, with the rtc_entry option. Defined as follows:

    run_tests_in_module_check_exp_assrts(Alias) :-
        run_tests(Alias,[rtc_entry],[check,show_results]).
    

    PREDICATE run_tests_in_module/3

    Usage: run_tests_in_module(Alias,Opts,TestSummaries)

    Executes all the tests for Alias and unifies TestSummaries with the test results. Defined as follows:

    run_tests_in_module(Alias,Opts,TestSummaries) :-
        run_tests(Alias,Opts,[check,summaries(TestSummaries)]).
    

    PREDICATE run_tests_in_dir_rec/3

    Usage: run_tests_in_dir_rec(BaseDir,Opts,S)

    Executes all the tests in the modules of the given directory and its subdirectories (see the dir_rec test option). Unifies S with 1 if all tests passed, or 0 otherwise. Defined as follows:

    run_tests_in_dir_rec(BaseDir,Opts,S) :-
        run_tests(BaseDir,[dir_rec|Opts],[check,show_results,status(S)]).
    

    PREDICATE show_test_output/2

    Usage: show_test_output(Alias,Format)

    Shows the test results for Alias, where Format specifies what is shown: output for the full test trace, stats for a statistical summary only, and full for both.
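
    For example (a sketch, assuming results for ceval2.pl were already produced by a check action):

    ?- show_test_output('ceval2.pl', stats).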

    PREDICATE run_tests/3

    Usage: run_tests(Target,Opts,Actions)

    Performs the test actions Actions on the test target Target with options Opts.
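
    For example, a single call can check a module, print the results, and report the overall status (a sketch; the module name is hypothetical):

    ?- run_tests('ceval2.pl', [], [check, show_results, status(S)]).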

    (UNDOC_REEXPORT) test_option/1
    Imported from unittest_runner_common (see the corresponding documentation for details)

    (UNDOC_REEXPORT) statistical_summary/1
    Imported from unittest_statistics (see the corresponding documentation for details)

Documentation on multifiles

    PREDICATE define_flag/3

    Usage: define_flag(Flag,FlagValues,Default)

    The predicate is multifile.