On the efficiency of subsumption algorithms
- 1 April 1985
- journal article
- Published by Association for Computing Machinery (ACM) in Journal of the ACM
- Vol. 32 (2) , 280-295
- https://doi.org/10.1145/3149.214118
Abstract
The costs of subsumption algorithms are analyzed by an estimation of the maximal number of unification attempts (worst-case unification complexity) made for deciding whether a clause C subsumes a clause D. For this purpose the clauses C and D are characterized by the following parameters: number of variables in C, number of literals in C, number of literals in D, and maximal length of the literals. The worst-case unification complexity immediately yields a lower bound for the worst-case time complexity. First, two well-known algorithms (Chang-Lee, Stillman) are investigated. Both algorithms are shown to have a very high worst-case time complexity. Then, a new subsumption algorithm is defined, which is based on an analysis of the connection between variables and predicates in C. An upper bound for the worst-case unification complexity of this algorithm, which is much lower than the lower bounds for the two other algorithms, is derived. Examples in which exponential costs are reduced to polynomial costs are discussed. Finally, the asymptotic growth of the worst-case complexity for all discussed algorithms is shown in a table (for several combinations of the parameters).
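To make the quantity being analyzed concrete: C subsumes D when some substitution θ maps every literal of C onto a literal of D. A naive decision procedure tries to match each C-literal against every D-literal, backtracking on failure, so in the worst case it performs on the order of |D|^|C| matching (one-way unification) attempts — exactly the kind of count the paper bounds. The sketch below is an illustrative brute-force checker, not the paper's improved algorithm; the literal encoding (predicate name plus argument tuple, variables prefixed with `?`) is an assumption for this example.

```python
def match(lit_c, lit_d, subst):
    """Try to extend subst so that lit_c under subst equals lit_d.

    One-way matching: only variables of C (strings starting with '?')
    are bound; D is treated as ground. Returns the extended
    substitution, or None on failure.
    """
    pred_c, args_c = lit_c
    pred_d, args_d = lit_d
    if pred_c != pred_d or len(args_c) != len(args_d):
        return None
    s = dict(subst)
    for a, b in zip(args_c, args_d):
        if a.startswith('?'):          # variable of C
            if a in s and s[a] != b:   # already bound to something else
                return None
            s[a] = b
        elif a != b:                   # constant mismatch
            return None
    return s


def subsumes(C, D):
    """Brute-force subsumption test: does some substitution map C into D?

    Worst case tries every assignment of C-literals to D-literals,
    i.e. up to len(D)**len(C) matching attempts.
    """
    def backtrack(i, subst):
        if i == len(C):
            return True
        for lit_d in D:
            s = match(C[i], lit_d, subst)
            if s is not None and backtrack(i + 1, s):
                return True
        return False
    return backtrack(0, {})
```

For example, `[('p', ('?x',)), ('q', ('?x',))]` subsumes `[('p', ('a',)), ('q', ('a',)), ('r', ('b',))]` because θ = {?x → a} works, but not `[('p', ('a',)), ('q', ('b',))]`, where the shared variable forces conflicting bindings. The paper's contribution is an algorithm that exploits the variable/predicate connection structure of C to avoid this exponential blow-up in many cases.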