J. Herrmann, M. Kloth, F. Feldkamp
Artif. Intell. Eng.
Abstract This paper sets out to illustrate the importance of transparency within software support systems, in particular for intelligent assistant systems that perform complex industrial design tasks. Such transparency (in the sense of ‘clear’ or ‘easy to understand’) can be achieved by two distinct strategies that complement each other: (i) the design of intelligible systems that avoid the need for in-depth explanation, and (ii) the flexible generation of explanations for those definitions or aspects of the system or domain that remain ambiguous. The paper illustrates that generating useful explanations which go beyond a simple justification of a problem-solving trace requires the acquisition of specific explanatory knowledge; problem-solving techniques by themselves are not sufficient. A new approach to acquiring and modelling explanatory knowledge for software systems is presented. The resulting four-layer explanatory model can be used to determine the range of explanations suitable for a given system’s domain. This model has been successfully used to develop an explanation component for the design assistant system ASSIST, which supports factory layout planning, itself a complex design task.