
For instance, let V = {x, y, z, w} be the set of variables of interest, and consider the abstraction {x, xy, xyz, xz, y, yz, z, w}. This abstraction conveys no information about the subset of variables C = {x, y, z}, since any aliasing among them is possible: run-time variables may be shared by any pair of the three program variables, by all three of them, or not shared at all. We can therefore define a more compact representation that groups the powerset of C: the clique set. In our example, the clique that conveys the same information is simply the set C itself. It pays off to replace any set S of sharing groups that forms the (nonempty) powerset of a set of variables C by including C as a clique. Moreover, once C is present in the clique set, S becomes redundant and can be eliminated from the sharing set.
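The compaction step above can be sketched as follows. This is a minimal illustration in Python, not CiaoPP's implementation; the function names are invented for this example:

```python
from itertools import combinations

def nonempty_subsets(c):
    """All nonempty subsets of the candidate clique c."""
    c = sorted(c)
    return {frozenset(s) for r in range(1, len(c) + 1)
            for s in combinations(c, r)}

def compact(sharing, c):
    """If 'sharing' contains every nonempty subset of c, drop those
    sharing groups and represent them all by the single clique c."""
    subs = nonempty_subsets(c)
    if subs <= sharing:
        return sharing - subs, {frozenset(c)}
    return sharing, set()

# The abstraction {x, xy, xyz, xz, y, yz, z, w} from the example:
sh = {frozenset(g) for g in
      ["x", "xy", "xyz", "xz", "y", "yz", "z", "w"]}
new_sh, cliques = compact(sh, "xyz")
# new_sh keeps only {w}; cliques holds the single clique {x, y, z}
```

The replacement is exact here: the seven groups over {x, y, z} carry no more information than the clique C, so only the unrelated group {w} survives in the sharing set.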
The following domains are available with this representation:
Type analysis supports different degrees of precision. For example, when the type_precision flag is set to defined, the analysis restricts the inferred types to the finite domain of predefined types, i.e., the types defined by the user or in libraries, without generating new types. Alternatively, the normal analysis can be used, i.e., new type definitions are created internally, but only predefined types appear in the output. This is controlled by the type_output flag.
Greater precision can be obtained by evaluating builtins such as is/2 abstractly: eterms includes a variant that performs this evaluation on the inferred types, governed by the type_eval flag.
Partial evaluation is performed during analysis when the local_control flag is set to a value other than off. The fixpoint flag must be set to di. Unfolding then takes place while the program is being analyzed, creating new patterns to analyze. The unfolding rule is governed by the local_control flag (see transformation(codegen)).
For partial evaluation to take place, an analysis domain capable of tracking term structure must be used (e.g., eterms, pd, etc.). In particular:
Note that these two analyses will not infer useful information about the program. They are intended only to enable (classical) partial evaluation.
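As a language-neutral illustration of what unfolding during specialization achieves (a Python sketch, not CiaoPP's unfolding rule), consider specializing a recursive power function with respect to a statically known exponent:

```python
def power(x, n):
    """General recursive definition: one call per unfolding step."""
    return 1 if n == 0 else x * power(x, n - 1)

def specialize_power(n):
    """Unfold power for a static exponent n, producing a residual
    function in which the recursion has been eliminated."""
    expr = "1"
    for _ in range(n):
        expr = "x * (" + expr + ")"
    return eval("lambda x: " + expr)

cube = specialize_power(3)   # residual: lambda x: x * (x * (x * (1)))
```

The residual function computes the same result as the general one, but all control (the recursion on n) has been resolved at specialization time; only the data-dependent multiplications on x remain.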
Size analysis yields functions that give bounds on the size of the output data of procedures as a function of the size of their input data. The size can be expressed in various measures, e.g., term-size, term-depth, list-length, integer-value, etc.
Cost (steps) analysis yields functions that give bounds on the cost, expressed in number of resolution steps, of procedures as a function of the size of their input data.
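As a concrete (hypothetical) instance of both analyses, take list concatenation under the list-length measure. The Python sketch below mirrors the usual two-clause Prolog definition of app/3 while computing the result and counting resolution steps:

```python
def app(xs, ys):
    """Concatenation with an explicit resolution-step counter.
    Mirrors app([],Y,Y) and app([H|T],Y,[H|R]) :- app(T,Y,R)."""
    if not xs:
        return ys, 1                    # base clause: one step
    rest, steps = app(xs[1:], ys)
    return [xs[0]] + rest, steps + 1    # recursive clause: one more step

# Size analysis (list-length measure): |output| = |xs| + |ys|
size_out = lambda n, m: n + m
# Cost (steps) analysis: exactly n + 1 resolution steps, with n = |xs|
cost = lambda n: n + 1

out, steps = app([1, 2, 3], [4, 5])
# len(out) == size_out(3, 2) and steps == cost(3)
```

Here both bounds are exact; in general an analyzer infers upper (and sometimes lower) bound functions of this shape without running the program.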