Explanations in knowledge systems: design for explainable expert systems
- 1 June 1991
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in IEEE Expert
- Vol. 6 (3), 58-64
- https://doi.org/10.1109/64.87686
Abstract
The explainable expert systems framework (EES), which focuses on capturing the design aspects that are important for producing good explanations, is discussed. These aspects include justifications of the system's actions, explications of general problem-solving strategies, and descriptions of the system's terminology. EES was developed as part of the Strategic Computing Initiative of the US Department of Defense's Defense Advanced Research Projects Agency (DARPA). Both the general principles from which the system was derived and how the system was derived from those principles can be represented in EES. The Program Enhancement Advisor (PEA), the main prototype on which the explanation work has been developed and tested, is presented. PEA is an advice system that helps users improve their Common Lisp programs by recommending transformations that enhance the user's code. How EES produces better explanations is shown.
This publication has 6 references indexed in Scilit:
- Using a description classifier to enhance deductive inference. Published by Institute of Electrical and Electronics Engineers (IEEE), 2002
- Generation and Explanation: Building an Explanation Facility for the Explainable Expert Systems Framework. Published by Springer Nature, 1991
- A Reactive Approach to Explanation: Taking the User's Feedback into Account. Published by Springer Nature, 1991
- Planning text for advisory dialogues. Published by Association for Computational Linguistics (ACL), 1989
- Enhanced Maintenance and Explanation of Expert Systems Through Explicit Models of Their Development. IEEE Transactions on Software Engineering, 1985
- An Overview of the KL-ONE Knowledge Representation System. Cognitive Science, 1985