Started the project, added most of the existing text of 'Against Truth'.

This commit is contained in:
Simon Brooke 2020-04-22 11:41:21 +01:00
commit ce8c0a7057
No known key found for this signature in database
GPG key ID: A7A4F18D1D4DF987
30 changed files with 4916 additions and 0 deletions

CHANGELOG.md Normal file

@@ -0,0 +1,24 @@
# Change Log
All notable changes to this project will be documented in this file. This change log follows the conventions of [keepachangelog.com](http://keepachangelog.com/).
## [Unreleased]
### Changed
- Add a new arity to `make-widget-async` to provide a different widget shape.
## [0.1.1] - 2020-04-21
### Changed
- Documentation on how to make the widgets.
### Removed
- `make-widget-sync` - we're all async, all the time.
### Fixed
- Fixed widget maker to keep working when daylight savings switches over.
## 0.1.0 - 2020-04-21
### Added
- Files from the new template.
- Widget maker public API - `make-widget-sync`.
[Unreleased]: https://github.com/your-name/wildwood/compare/0.1.1...HEAD
[0.1.1]: https://github.com/your-name/wildwood/compare/0.1.0...0.1.1

LICENSE Normal file

@@ -0,0 +1,277 @@
Eclipse Public License - v 2.0
THE ACCOMPANYING PROGRAM IS PROVIDED UNDER THE TERMS OF THIS ECLIPSE
PUBLIC LICENSE ("AGREEMENT"). ANY USE, REPRODUCTION OR DISTRIBUTION
OF THE PROGRAM CONSTITUTES RECIPIENT'S ACCEPTANCE OF THIS AGREEMENT.
1. DEFINITIONS
"Contribution" means:
a) in the case of the initial Contributor, the initial content
Distributed under this Agreement, and
b) in the case of each subsequent Contributor:
i) changes to the Program, and
ii) additions to the Program;
where such changes and/or additions to the Program originate from
and are Distributed by that particular Contributor. A Contribution
"originates" from a Contributor if it was added to the Program by
such Contributor itself or anyone acting on such Contributor's behalf.
Contributions do not include changes or additions to the Program that
are not Modified Works.
"Contributor" means any person or entity that Distributes the Program.
"Licensed Patents" mean patent claims licensable by a Contributor which
are necessarily infringed by the use or sale of its Contribution alone
or when combined with the Program.
"Program" means the Contributions Distributed in accordance with this
Agreement.
"Recipient" means anyone who receives the Program under this Agreement
or any Secondary License (as applicable), including Contributors.
"Derivative Works" shall mean any work, whether in Source Code or other
form, that is based on (or derived from) the Program and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship.
"Modified Works" shall mean any work in Source Code or other form that
results from an addition to, deletion from, or modification of the
contents of the Program, including, for purposes of clarity any new file
in Source Code form that contains any contents of the Program. Modified
Works shall not include works that contain only declarations,
interfaces, types, classes, structures, or files of the Program solely
in each case in order to link to, bind by name, or subclass the Program
or Modified Works thereof.
"Distribute" means the acts of a) distributing or b) making available
in any manner that enables the transfer of a copy.
"Source Code" means the form of a Program preferred for making
modifications, including but not limited to software source code,
documentation source, and configuration files.
"Secondary License" means either the GNU General Public License,
Version 2.0, or any later versions of that license, including any
exceptions or additional permissions as identified by the initial
Contributor.
2. GRANT OF RIGHTS
a) Subject to the terms of this Agreement, each Contributor hereby
grants Recipient a non-exclusive, worldwide, royalty-free copyright
license to reproduce, prepare Derivative Works of, publicly display,
publicly perform, Distribute and sublicense the Contribution of such
Contributor, if any, and such Derivative Works.
b) Subject to the terms of this Agreement, each Contributor hereby
grants Recipient a non-exclusive, worldwide, royalty-free patent
license under Licensed Patents to make, use, sell, offer to sell,
import and otherwise transfer the Contribution of such Contributor,
if any, in Source Code or other form. This patent license shall
apply to the combination of the Contribution and the Program if, at
the time the Contribution is added by the Contributor, such addition
of the Contribution causes such combination to be covered by the
Licensed Patents. The patent license shall not apply to any other
combinations which include the Contribution. No hardware per se is
licensed hereunder.
c) Recipient understands that although each Contributor grants the
licenses to its Contributions set forth herein, no assurances are
provided by any Contributor that the Program does not infringe the
patent or other intellectual property rights of any other entity.
Each Contributor disclaims any liability to Recipient for claims
brought by any other entity based on infringement of intellectual
property rights or otherwise. As a condition to exercising the
rights and licenses granted hereunder, each Recipient hereby
assumes sole responsibility to secure any other intellectual
property rights needed, if any. For example, if a third party
patent license is required to allow Recipient to Distribute the
Program, it is Recipient's responsibility to acquire that license
before distributing the Program.
d) Each Contributor represents that to its knowledge it has
sufficient copyright rights in its Contribution, if any, to grant
the copyright license set forth in this Agreement.
e) Notwithstanding the terms of any Secondary License, no
Contributor makes additional grants to any Recipient (other than
those set forth in this Agreement) as a result of such Recipient's
receipt of the Program under the terms of a Secondary License
(if permitted under the terms of Section 3).
3. REQUIREMENTS
3.1 If a Contributor Distributes the Program in any form, then:
a) the Program must also be made available as Source Code, in
accordance with section 3.2, and the Contributor must accompany
the Program with a statement that the Source Code for the Program
is available under this Agreement, and informs Recipients how to
obtain it in a reasonable manner on or through a medium customarily
used for software exchange; and
b) the Contributor may Distribute the Program under a license
different than this Agreement, provided that such license:
i) effectively disclaims on behalf of all other Contributors all
warranties and conditions, express and implied, including
warranties or conditions of title and non-infringement, and
implied warranties or conditions of merchantability and fitness
for a particular purpose;
ii) effectively excludes on behalf of all other Contributors all
liability for damages, including direct, indirect, special,
incidental and consequential damages, such as lost profits;
iii) does not attempt to limit or alter the recipients' rights
in the Source Code under section 3.2; and
iv) requires any subsequent distribution of the Program by any
party to be under a license that satisfies the requirements
of this section 3.
3.2 When the Program is Distributed as Source Code:
a) it must be made available under this Agreement, or if the
Program (i) is combined with other material in a separate file or
files made available under a Secondary License, and (ii) the initial
Contributor attached to the Source Code the notice described in
Exhibit A of this Agreement, then the Program may be made available
under the terms of such Secondary Licenses, and
b) a copy of this Agreement must be included with each copy of
the Program.
3.3 Contributors may not remove or alter any copyright, patent,
trademark, attribution notices, disclaimers of warranty, or limitations
of liability ("notices") contained within the Program from any copy of
the Program which they Distribute, provided that Contributors may add
their own appropriate notices.
4. COMMERCIAL DISTRIBUTION
Commercial distributors of software may accept certain responsibilities
with respect to end users, business partners and the like. While this
license is intended to facilitate the commercial use of the Program,
the Contributor who includes the Program in a commercial product
offering should do so in a manner which does not create potential
liability for other Contributors. Therefore, if a Contributor includes
the Program in a commercial product offering, such Contributor
("Commercial Contributor") hereby agrees to defend and indemnify every
other Contributor ("Indemnified Contributor") against any losses,
damages and costs (collectively "Losses") arising from claims, lawsuits
and other legal actions brought by a third party against the Indemnified
Contributor to the extent caused by the acts or omissions of such
Commercial Contributor in connection with its distribution of the Program
in a commercial product offering. The obligations in this section do not
apply to any claims or Losses relating to any actual or alleged
intellectual property infringement. In order to qualify, an Indemnified
Contributor must: a) promptly notify the Commercial Contributor in
writing of such claim, and b) allow the Commercial Contributor to control,
and cooperate with the Commercial Contributor in, the defense and any
related settlement negotiations. The Indemnified Contributor may
participate in any such claim at its own expense.
For example, a Contributor might include the Program in a commercial
product offering, Product X. That Contributor is then a Commercial
Contributor. If that Commercial Contributor then makes performance
claims, or offers warranties related to Product X, those performance
claims and warranties are such Commercial Contributor's responsibility
alone. Under this section, the Commercial Contributor would have to
defend claims against the other Contributors related to those performance
claims and warranties, and if a court requires any other Contributor to
pay any damages as a result, the Commercial Contributor must pay
those damages.
5. NO WARRANTY
EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, AND TO THE EXTENT
PERMITTED BY APPLICABLE LAW, THE PROGRAM IS PROVIDED ON AN "AS IS"
BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, EITHER EXPRESS OR
IMPLIED INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OR CONDITIONS OF
TITLE, NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR
PURPOSE. Each Recipient is solely responsible for determining the
appropriateness of using and distributing the Program and assumes all
risks associated with its exercise of rights under this Agreement,
including but not limited to the risks and costs of program errors,
compliance with applicable laws, damage to or loss of data, programs
or equipment, and unavailability or interruption of operations.
6. DISCLAIMER OF LIABILITY
EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, AND TO THE EXTENT
PERMITTED BY APPLICABLE LAW, NEITHER RECIPIENT NOR ANY CONTRIBUTORS
SHALL HAVE ANY LIABILITY FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING WITHOUT LIMITATION LOST
PROFITS), HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OR DISTRIBUTION OF THE PROGRAM OR THE
EXERCISE OF ANY RIGHTS GRANTED HEREUNDER, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
7. GENERAL
If any provision of this Agreement is invalid or unenforceable under
applicable law, it shall not affect the validity or enforceability of
the remainder of the terms of this Agreement, and without further
action by the parties hereto, such provision shall be reformed to the
minimum extent necessary to make such provision valid and enforceable.
If Recipient institutes patent litigation against any entity
(including a cross-claim or counterclaim in a lawsuit) alleging that the
Program itself (excluding combinations of the Program with other software
or hardware) infringes such Recipient's patent(s), then such Recipient's
rights granted under Section 2(b) shall terminate as of the date such
litigation is filed.
All Recipient's rights under this Agreement shall terminate if it
fails to comply with any of the material terms or conditions of this
Agreement and does not cure such failure in a reasonable period of
time after becoming aware of such noncompliance. If all Recipient's
rights under this Agreement terminate, Recipient agrees to cease use
and distribution of the Program as soon as reasonably practicable.
However, Recipient's obligations under this Agreement and any licenses
granted by Recipient relating to the Program shall continue and survive.
Everyone is permitted to copy and distribute copies of this Agreement,
but in order to avoid inconsistency the Agreement is copyrighted and
may only be modified in the following manner. The Agreement Steward
reserves the right to publish new versions (including revisions) of
this Agreement from time to time. No one other than the Agreement
Steward has the right to modify this Agreement. The Eclipse Foundation
is the initial Agreement Steward. The Eclipse Foundation may assign the
responsibility to serve as the Agreement Steward to a suitable separate
entity. Each new version of the Agreement will be given a distinguishing
version number. The Program (including Contributions) may always be
Distributed subject to the version of the Agreement under which it was
received. In addition, after a new version of the Agreement is published,
Contributor may elect to Distribute the Program (including its
Contributions) under the new version.
Except as expressly stated in Sections 2(a) and 2(b) above, Recipient
receives no rights or licenses to the intellectual property of any
Contributor under this Agreement, whether expressly, by implication,
estoppel or otherwise. All rights in the Program not expressly granted
under this Agreement are reserved. Nothing in this Agreement is intended
to be enforceable by any entity that is not a Contributor or Recipient.
No third-party beneficiary rights are created under this Agreement.
Exhibit A - Form of Secondary Licenses Notice
"This Source Code may also be made available under the following
Secondary Licenses when the conditions for such availability set forth
in the Eclipse Public License, v. 2.0 are satisfied: {name license(s),
version(s), and exceptions or additional permissions here}."
Simply including a copy of this Agreement, including this Exhibit A
is not sufficient to license the Source Code under Secondary Licenses.
If it is not possible or desirable to put the notice in a particular
file, then You may include the notice in a location (such as a LICENSE
file in a relevant directory) where a recipient would be likely to
look for such a notice.
You may add additional accurate notices of copyright ownership.

README.md Normal file

@@ -0,0 +1,23 @@
# wildwood
A general inference library using a game theoretic inference mechanism.
## Usage
FIXME
## License
Copyright © 2020 FIXME
This program and the accompanying materials are made available under the
terms of the Eclipse Public License 2.0 which is available at
http://www.eclipse.org/legal/epl-2.0.
This Source Code may also be made available under the following Secondary
Licenses when the conditions for such availability set forth in the Eclipse
Public License, v. 2.0 are satisfied: GNU General Public License as published by
the Free Software Foundation, either version 2 of the License, or (at your
option) any later version, with the GNU Classpath Exception which is available
at https://www.gnu.org/software/classpath/license.html.

doc/AgainstTruth.md Normal file

@@ -0,0 +1,34 @@
# Against Truth
> Hey, what IS truth, man? [Beeblebrox, quoted in [Adams, 1978]]
*This title is, of course, a respectful nod to Feyerabend's Against Method*
## Introduction
This document is in two parts: a statement of a problem, and an account of an attempt to address it. The problem is stated briefly in the first chapter, and fleshed out in the following two with a history of attempts which have been made in the past to address it, and an analysis of what would be needed to solve it.
The second part starts with an account of a system built by the author in collaboration with Peter Mott, describing particularly how the problem was addressed by this system; subsequent chapters will describe the development of a further system, in which the analysis developed in the first section will be applied.
This document deals only with explanation. Issues relating to inference and especially to truth maintenance will undoubtedly be raised as it progresses, but such hares will resolutely not be followed.
## Contents
### Frontmatter
1. [Manifesto](Manifesto.html)
### Part one: Stating the problem
1. [The Problem](TheProblem.html)
2. [History](History.html)
3. [Analysis](Analysis.html)
### Part Two: Into the wild wood
### Endmatter
1. [Errata](Errata.html)
----
[Adams, 1978](https://en.wikipedia.org/wiki/The_Hitchhiker%27s_Guide_to_the_Galaxy)

doc/Analysis.md Normal file

File diff suppressed because it is too large

doc/Errata.md Normal file

@@ -0,0 +1,3 @@
# Errata
1. On title page: the claim that Zaphod Beeblebrox is quoted as saying 'Hey, what IS truth, man?' in the printed text of Douglas Adams' 'Hitchhiker's Guide to the Galaxy' is false.

doc/History.md Normal file

@@ -0,0 +1,642 @@
# History
## History: Introduction
The object of this chapter is to describe and discuss the development of Expert System explanations from the beginning to the most recent systems. The argument which I will try to advance is that development has been continuously driven by the perceived inadequacy of the explanations given; and that, while many ad hoc, and some principled, approaches have been tried, no really adequate explanation system has emerged. Further, I will claim that, as some of the later and more principled explanation systems accurately model the accounts of explanation advanced in current philosophy, the philosophical understanding of explanation is itself inadequate.
## Family Tree of Systems discussed
(diagram here)
Chronology relates to publication, and not to implementation. Links are shown where system designers acknowledge influence, or where family resemblance between systems is extremely obvious. In a small field like this, it is reasonably (but not absolutely) safe to assume that major practitioners are up to date with the current literature.
The current view is expressed by such authors as Weiner:
> "... (Expert) systems include some mechanism for giving explanations, since their credibility depends on the user's ability to follow their reasoning, thereby verifying that an answer is correct." [Weiner, 80]
This view might be paraphrased as saying that an explanation generator is an intrinsic and essential part of an expert system. By contrast, the first thing that I intend to argue is that:
## The earliest systems contained no explanation facilities
Two of the famous early expert systems, Internist [Pople 77] and Macsyma [Martin & Fateman 71] did not have anything approaching an explanation system and made no claims to have one. Consequently, these will not be discussed at any length here. One other, Dendral, had a command 'EXPLAIN'; and the last, MYCIN, is famous for its explanations. To maintain my claim that neither of these systems had, in their original conception, what we would recognise as an explanation system, we will examine them in detail.
## Dendral
### General description of the system
Dendral is one of the earliest programmes which are conventionally included in the history of Expert Systems. As Jackson says:
> "DENDRAL can be seen as a kind of stepping stone between the older, general-purpose problem solving programs and more recent approaches involving the explicit representation of domain knowledge." [Jackson 86 p 19]
The system is designed to deduce the molecular structure of an organic compound from mass-spectrum analysis. It differs from the modern, post-MYCIN conception of an 'expert system' - or indeed even the weaker conception of a 'knowledge based system' - in a number of ways.
Firstly, it operates in 'batch-mode' - that is, when the system is started, it prompts the user for input, and then 'goes away' and analyses this without further interaction. When this is completed, it outputs a report.
Secondly, the program explicitly implements an algorithm, which is described [Buchanan et al 69, section 7].
Most significantly for the purpose of the current argument, although an attempt is made to produce information from which a justification of the conclusion could be reconstructed (by printing out the states of some internal variables at the end of the run), and although the function which causes the state of the variables to be printed is called 'EXPLAIN', there is no 'explanation facility' as currently understood. This lack is partially made good by a 'speak' option, which causes information about the current hypothesis to be printed out at each stage in the inference process.
### Example output:
```
(EXPLAIN (QUOTE C8H160) s:09046 (QUOTE TEST1) (QUOTE JULY8)) *GOODLIST= (*ETHYL-KETONE 3*)
*BADLIST= (*c-2-ALCOHOL* *PRIMARY-ALCOHOL* *ETHYL-ETHER2* *METHYL-ETHER2* *ETHER2* *ALDEHYDE* *ALCOHOL* *ISO-PROPYL KETONE3* *N-PROPYL-KETONE3* *METHYL-KETONE 3*)
(JULY-4-1968 VERSION) c2*ETHYL-KETONE 3*H8 MOLECULES NO DOUBLE BOND EQUIVS
CH2..CH2.c3H7 c=.0 C2H5, CH2..CH..CH3 C2H5c=.0 C215 CH2..CH2.CH..CH3 CH3 c=.0 C2H5.
DONE
```
> {from op. cit. table 10, p 250}
### DENDRAL as an Expert System
So why should DENDRAL be considered an 'Expert System'? The programme consists of two major components, a 'structure generator' and an 'evaluation function'. Both of these incorporate inference mechanisms, supported by explicit representations of knowledge.
### The 'Generate' stage
The input data gives approximate information about the relative quantities of different ion-masses in the compound, and consequently roughly suggests the proportions of elements present. The 'structure generator' generates compounds compatible with the analysis data, by exploiting knowledge about possible and impossible atomic bonds. This knowledge appears to be held essentially as patterns, against which generated patterns are matched. Two primary collections of patterns are maintained, a 'badlist' and a 'goodlist'. The badlist comprises, initially, those primitive compounds which cannot exist in nature; those compounds which are ruled out by features of the input data are added.
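The filtering role of the two pattern lists can be sketched as follows. This is a minimal illustration, not DENDRAL's actual encoding: the substructure names, the list-of-strings representation, and the rule that a survivor must contain every goodlist item are all assumptions made for clarity.

```python
# Illustrative sketch of DENDRAL-style generate-stage filtering.
# A candidate structure is represented here as a list of substructure
# names (an invented encoding); it survives only if it contains no
# 'badlist' substructure and every 'goodlist' substructure.

BADLIST = ["primary-alcohol", "ether2"]   # substructures ruled out a priori or by the data
GOODLIST = ["ethyl-ketone3"]              # substructures the input data suggests

def plausible(candidate):
    """True if the candidate avoids all badlist patterns and
    exhibits all goodlist patterns."""
    has_bad = any(b in candidate for b in BADLIST)
    has_good = all(g in candidate for g in GOODLIST)
    return (not has_bad) and has_good

candidates = [
    ["ethyl-ketone3", "ch2-chain"],
    ["primary-alcohol", "ch2-chain"],
    ["ch2-chain"],
]
survivors = [c for c in candidates if plausible(c)]
```

Only the first candidate survives: the second matches the badlist, and the third lacks the goodlist substructure.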
### The 'Test' stage
The evaluation function takes structures passed by the generator, and uses a predictor to calculate what the spectrum to be expected from this structure would be. It then compares this against the spectrum originally entered, and scores it for similarity.
The predictor uses some form of a rule engine. My caution in that statement derives from the extremely technical nature of the passage in [Buchanan et al 69, section 4], and the fact that no actual examples of rules are given. These rules determine the way in which a compound is likely to break down under conditions inside the spectrometer, and what new compounds in what proportion will be the products of these breakdowns; generally the form of the rule appears to be a pair:
(<compound-specification> · <product-specification>)
where <compound-specification> is a description of a compound or class of compounds, and <product-specification> may be a list of compound specifications with information about their proportions, or may, where it is uncertain what the precise products would be, or no further decomposition is likely, be spectrum fragments. The spectrum fragments which form the nodes of the decomposition graph are then summed to generate a 'predicted spectrum'.
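On that reading, the predictor might be caricatured as below. Everything concrete here is invented for illustration (the rule syntax, the compound names, the fragment masses and proportions); only the overall shape - forward chaining over (compound, products) pairs, with leaf fragments summed into a predicted spectrum - comes from the description above.

```python
# Caricature of the predictor: each rule pairs a compound test with its
# breakdown products; decomposition bottoms out in spectrum fragments
# (mass -> relative intensity), which are summed into a predicted spectrum.
from collections import Counter

RULES = [
    # (compound-specification, product-specification)
    (lambda c: c == "ketone", ["acyl", "alkyl"]),     # decomposes further
    (lambda c: c == "acyl",   Counter({57: 0.6})),    # leaf: spectrum fragment
    (lambda c: c == "alkyl",  Counter({29: 0.4})),    # leaf: spectrum fragment
]

def predict(compound):
    """Forward-chain through the rules, summing leaf fragment spectra."""
    spectrum = Counter()
    agenda = [compound]
    while agenda:
        c = agenda.pop()
        for test, products in RULES:
            if test(c):
                if isinstance(products, Counter):
                    spectrum += products     # leaf: accumulate fragments
                else:
                    agenda.extend(products)  # decompose further
                break
    return spectrum
```

The predicted spectrum for the worked example is the sum of its two fragments; in DENDRAL this predicted spectrum would then be scored for similarity against the observed one.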
It appears, then, that two main inference mechanisms are employed in DENDRAL: a simple pattern matcher helps to generate hypotheses, and a more sophisticated forward chaining mechanism supports the test stage, eliminating the impossible ones.
### Summary
It is clear from the above that DENDRAL is an 'Intelligent knowledge Based System'; but the absence of any high-level explanation or justification system, or any method of exploring the inference interactively with the machine, make it very different from what we now think of as an 'expert system'.
Despite this, DENDRAL has a very direct relevance to the present project: as [Buchanan & Feigenbaum 78] report:
> "Another concern has been to exploit the AI methodology to understand better some fundamental questions in the philosophy of science, for example the processes by which explanatory hypotheses are discovered or judged adequate." (Buchanan & Feigenbaum 78, p 5)
Thus DENDRAL set out to contribute to exactly the same debate that I am addressing.
Interestingly, although later developments of the DENDRAL family included interactive editing of hypotheses, and although Buchanan was involved in the MYCIN project in the interim, no explanation facility had been added to the system by 1978, the date of the later of these two papers. This may be seen as providing some very tenuous support for one or other of two hypotheses of mine:
1. It was Davis, with TEIRESIAS, who developed what we now think of as MYCIN's explanation facility;
2. It is extremely difficult to add explanation post facto, if the knowledge representation and inference mechanism have not been designed to support it.
## Mycin
Mycin [Davis et al, 77] is perhaps the program most often seen as the starting point for expert system research, and is certainly the first system which is remembered for its explanation facilities.
### Explanation Facilities
What isn't so frequently remembered is that MYCIN itself, the consulting programme, didn't have any explanation facilities. These were provided by a separate front end module, TEIRESIAS, which was intended as a knowledge engineer's tool. The point here is that the MYCIN project did not (as I understand it) expect end users to use - or need - explanations. Rather, the explanation facility was conceived as a high level debugging trace to help the knowledge engineer, presumed to be an "experienced programmer" with knowledge of the problem domain, to discover what is going on inside the system. Consequently:
> "The fundamental goal of an explanation facility is to enable a program to display a comprehensible account of the motivation for all its actions." [Davis & Lenat, 82] (my emphasis)
The explanation tells why the machine carried out an action, not why it believes a proposition. This is justified on the grounds that:
> "We assume ... that a recap of program actions can be an effective explanation as long as the correct level of detail is chosen."
> "With a program that does symbolic reasoning, recapping offers an easily understood explanation." [ibid]
This understanding of the explanation as simply a development of high level trace facilities is confirmed by the fact that the fragments chosen for concatenation to form the explanation are constructed by applying fixed templates to the rules. It is a (perhaps the) classic sweetened backtrace.
Rules were assumed to be comprehensible in themselves because they had been formulated by human experts attempting to formalise their own knowledge of the domain. As such the rules were expected to:
> "...embody accepted patterns of human reasoning, implying that they should be relatively easy to understand, especially for those familiar with the domain. ... They also attack the problem at what has been judged (by the expert) to be an appropriate level of detail."
The explanation is also helped by the very high level language in which the rules are expressed, with its stylised code and small number of primitive operations, which together made it easy to apply templates to transform executable code into legible English.
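The template mechanism described above can be sketched very simply. The rule format and wording below are invented for illustration (MYCIN's rule language and templates were considerably richer); the point is only that the explanation is produced by slotting the rule's parts into fixed English scaffolding, not by any reasoning about the rule's content.

```python
# Sketch of template-based explanation: a rule's premises, certainty
# factor and conclusion are slotted into a fixed English template.
# The rule representation here is invented for illustration.

RULE_095 = {
    "if": ["the portal of entry of the organism is G.I.",
           "the abdomen is the locus of infection"],
    "then": "Enterobacteriaceae is the class of organisms for which therapy should cover",
    "cf": 0.9,
}

def explain(rule):
    """Render a rule as English by filling a fixed template."""
    premises = ", and ".join(rule["if"])
    return (f"If {premises}, then there is suggestive evidence "
            f"({rule['cf']}) that {rule['then']}.")
```

Because the transformation is purely syntactic, the legibility of the result depends entirely on the rules having been written at a human-comprehensible level in the first place - exactly the assumption discussed above.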
#### The WHY question
The WHY question had two different semantics, depending on the mode that MYCIN was in when the question was asked. MYCIN is generally thought of (as indeed it is) as an inference mechanism, so that the literature generally refers to MYCIN working in this mode. But before starting on an inference process on a new case, MYCIN asks a series of questions to allow it to set up the primary attributes of the objects it is instantiating. This is more akin to form filling than to rule driven behaviour.
In the more familiar inference mode, TEIRESIAS' response to a WHY question was to justify its action in asking the current question, by printing the rule currently being executed, in a special template. WHY is thus an 'immediate mode' command; it is not clear from the material whether it was possible to ask WHY of any question other than the current one.
However, WHY queries can be repeated, to climb back up the inference trace one rule at a time.
In the form filling mode referred to above, WHY queries are simply responded to by printing out some canned text associated with the primary attribute being requested.
#### The HOW question
The HOW query, by contrast, operates on a history list, and requires, as argument, a statement number. The response given is (again templated) a print out of the rule whose 'test' part is given in the numbered statement. Thus there are two quite different semantics to HOW. HOW of a rule which has been evaluated will give what is in some sense a justification (by modus ponens) for belief in the statement - in this sense it might be rendered "how do you know that...". HOW of a rule which has yet to be evaluated gives procedural information about how to find the truth value of the statement, and might be rendered "how would you find out whether...". These different semantics are to some extent signalled by the use of different templates.
Some numbered statements, e.g. [5.0] below, do not appear to be 'test' parts of any rule. It is not made clear what the effect of asking 'WHY [5.0]' would be.
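The dual semantics of HOW can be sketched as follows. The history-list representation, statement numbers and wording are invented for illustration; only the distinction itself - justification of an evaluated statement versus procedure for a pending one - is drawn from the account above.

```python
# Sketch of HOW's two semantics over a history list: for a statement
# already evaluated, report the rule that established it ("how do you
# know..."); for one not yet evaluated, report the rule that would
# determine it ("how would you find out..."). Entries are invented.

history = {
    "3.4": {"rule": "Rule 021", "evaluated": True},
    "5.2": {"rule": "Rule 021", "evaluated": False},
}

def how(stmt):
    """Answer HOW [stmt] with the template matching its evaluation state."""
    entry = history[stmt]
    if entry["evaluated"]:
        return f"{entry['rule']} was used to establish [{stmt}]."
    return f"{entry['rule']} would be used to determine [{stmt}]."
```

The choice of template, rather than any deeper analysis, is what signals to the user which sense of HOW applies.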
#### Example (user input in bold):
Where is the suspected portal of entry of organism-1 into this sterile site?
** WHY 
[I.e., why is it important to determine the suspected portal of entry of organism-1 into this sterile site?]
[3.0] ...in order to find out about an organism or class of organisms for which therapy should cover.
It has already been established that
[3.1] The site of the culture is blood
[3.2] The stain of the organism is gram positive
Therefore if:
[3.3] The portal of entry of the organism is G.I.
[3.4] [Abdomen is the locus of infection, or
Pelvis is the locus of infection]
Then:
There is strongly suggestive evidence (.9) that Enterobacteriacea is the class of organisms for which therapy should cover.
[Rule 095]
** HOW [3.4]
[I.e., how were you trying to determine that the Abdomen is the locus of infection, or that the pelvis is the locus of infection]
[4.0] At that point Rule 021 was being used.
**HOW [4.0] 
[I.e., how was Rule 021 used?]
[5.0] It has already been established that
[5.1] the culture is recent
Therefore if:
[5.2] There is therapeutically significant disease associated with the occurrence of this organism
Then
It is definite (1.0) that the site of the culture is the locus of infection in the patient.
> {Taken from Barr & Feigenbaum, vol ii pp 96-97; similar, but more extensive, examples may be found in Davis & Lenat, pp 265-285. More surprising examples appear in Davis et al. pp 35-37}
### Relevance filtering
Another feature of MYCIN (or rather, of TEIRESIAS) which is often forgotten is the advanced relevance filtering, which has rarely been equalled by imitators.
Briefly, the problem which gives rise to the need for filtering is this. An 'explanation' (at least one given in the form of a syntactically sugared inference trace) which gives all the steps used to reach a goal will, in real applications, tend to be too long and complex for the user to understand. The critical information will be lost in a mass of trivial detail. This problem does not, of course, arise with toy systems, where the knowledge base is not large enough for extended chains of inference to develop. As Davis writes:
> "In an explanation we must not carry reasoning too far back, or the length of our argument will cause obscurity; neither must we put in all the steps which lead to our conclusion, or we shall waste words by saying what is manifest."
> "Depending on the individual user, it might be best to show all steps in a reasoning chain, to omit those that are definitional or trivial, or, for the most sophisticated user, to display only the highlights." [Davis & Lenat]
Later, we find Weiner writing:
> "If an explanation is to be understood, it must not appear complex." [Weiner, 80]
TEIRESIAS' relevance filter was based on the 'certainty factor' of inference steps. A function of this was used as a measure of the significance of an inference step, with inferences having a CF of 1 (true in every case) being considered to have no contribution to make to explanation, and lower certainty factors having higher indices on a logarithmic scale. This was seen as a "...clearly imperfect..." solution, but provided:
> ".... a 'dial' with which the user can adjust the level of detail in the explanations. Absolute settings are less important than the ability to make relative adjustments."
This index of explanation abstraction could be used as an optional argument to the WHY query, as in the example:
** WHY 4
We are trying to find out whether the organism has been observed in significant numbers, in order to determine an organism or class of organisms for which therapy should cover.
> {taken from Davis and Lenat pp 269 - 270. See also ibid pp 265 - 266 for the 'low level' version of this reply. This is a very impressive feature which has been quite neglected in the general literature.}
This feature is further extended with an EXPLAIN command, which can be used to go over the same ground as an immediately previous WHY command, but at a different level of detail. Thus if the user tried, for example 'WHY 10' and got back an answer that was too sketchy, it would be possible to try 'EXPLAIN 3' to bring the level down. Whether the reverse would be possible - to try EXPLAIN 10 after WHY 3 - is not made clear, but it appears not, as "...the EXPLAIN command will cover the same starting and ending points in the reasoning...".
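This 'dial' can be sketched in a few lines of Python. The logarithmic function below is a guess at the shape of the published description, not TEIRESIAS' actual measure, and the trace format is invented for illustration:

```python
import math

def detail_index(cf):
    """Map a certainty factor (0 < cf <= 1) to a significance index:
    steps with CF 1 (true in every case) contribute nothing to an
    explanation; less certain steps score higher, on a log scale."""
    if cf >= 1.0:
        return 0
    return int(math.ceil(-math.log10(cf)))

def why(trace, level=1):
    """Answer WHY <level>: the larger the level, the sketchier the
    answer - only the most significant steps survive the filter."""
    return [step for step, cf in trace if detail_index(cf) >= level]

trace = [("definitional step", 1.0),
         ("strongly suggestive step", 0.9),
         ("weakly suggestive step", 0.05)]
```

On this sketch, `why(trace, 2)` keeps only the weakly suggestive step, `why(trace, 1)` shows both uncertain steps, and the definitional step is never shown at all.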
### Limitations of the explanation system
Parts of the MYCIN expertise (those parts concerned with the selection of drugs are mentioned) were not encoded as rules but were coded in LISP directly. This expertise could not be explained by TEIRESIAS.
Other limitations of the system recognised by Davis & Lenat include the limited semantics of the question commands (i.e. the user was tightly constrained in what could be asked), the fact that beyond the level at which knowledge was primitive in the system, further explanations could not be given, and the lack of any user model, which might help remove unneeded detail from explanations. Furthermore, they observe that "... the system should be able to describe its actions at different conceptual levels...".
### Conclusion
TEIRESIAS is one of the earliest examples of Expert Systems explanation. It is significant that explanation was not seen as being a critical or integral part of MYCIN, but was provided in a separate programme initially intended only as an aid to knowledge engineers and not as part of the consulting system. In this context it is not surprising that it should have developed out of high level backtrace facilities familiar from LISP programming environments.
Despite the fact that it was a very early attack on the problem, and is in essence simply a syntactically sugared backtrace, TEIRESIAS is highly sophisticated in a number of ways; notably in the provision of an effective (if crude) measure of detail, which allowed for remarkably successful abstraction of high-level explanations from the inference trace.
MYCIN/TEIRESIAS was undoubtedly a revolutionary system in many ways, and it has spawned many derivative systems. But it was less revolutionary than it appeared to be. The 'explanation facilities', which are made so much of in the literature, are not able to give declarative reasons for belief in propositions. They were not designed to, being conceptually merely very high level trace facilities for the knowledge engineer.
The fact that users eagerly accepted the facilities MYCIN/TEIRESIAS provided indicated that there was a demand for explanation systems.
# A wide variety of 'technical fixes' have been experimented with
A very wide range of approaches to the problem of providing a high-level account of a system's beliefs or actions has been tried. One of the more interesting avenues has been that followed by William Swartout, in attempting to provide 'explanations' of what conventional procedural programmes are doing.
## Digitalis Therapy Advisor
This is yet another medical expert system - this time dealing with the administration of digitalis to heart attack sufferers.
The knowledge base maintains a model of the patient's individual response to digitalis - a highly toxic drug to which people have widely varying response - as well as general information about its properties and administration.
Digitalis Therapy Advisor was written in, and clearly developed with, a prototype 'self-documenting' language called OWL 1. This is clearly a LISP-like language, which incorporates a parser which can translate programme statements into a high-level procedural account. This parser can be applied not only to pieces of code, to show what the programme would do to execute them, but also to items on the history list, to show what it has done.
### Explanation by translation of programming language
Explanation was a key goal in the design of the advisor programme, and its implementation largely exploited this feature of the underlying language. Broadly, English-language templates are associated with each primitive procedure of the language, into which the arguments passed, and on completion the results returned, can be spliced. However the programmer, when writing a new procedure, does not need to supply templates, as the system is able to splice together the templates belonging to the primitives. As the OWL interpreter runs, it builds up an 'event structure', or trace, of the procedures it has called and what their arguments were:
> "The system is 'self-documenting' in the sense that it can produce English explanations of the procedures it uses and the actions it takes directly from the code it executes. Most of its explanations are produced in this manner, although a few types of explanation are canned phrases... The explanations are designed to be understood by a physician with no programming experience."
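The splicing mechanism can be sketched as follows, in Python rather than OWL; the primitive names, templates and event format are all invented for illustration:

```python
# English templates attached to primitive procedures; a compound
# procedure needs none, since its account is spliced together from
# the templates of the primitives it calls.
TEMPLATES = {
    "ask": "I ASKED THE USER THE STATUS OF {arg}. "
           "THE USER RESPONDED THAT {arg} WAS {result}.",
    "add-to": "I ADDED {arg} TO THE {set}.",
}

def explain(event_structure):
    """Translate the interpreter's trace of (primitive, bindings)
    events into a procedural account in English."""
    return [TEMPLATES[primitive].format(**bindings)
            for primitive, bindings in event_structure]

events = [("ask", {"arg": "MYXEDEMA", "result": "PRESENT"}),
          ("add-to", {"arg": "MYXEDEMA",
                      "set": "PRESENT AND CORRECTABLE CONDITIONS"})]
```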
The limitations of this approach are acknowledged, and the 'work-around' used is described:
> "When writing a computer program, it is sometimes necessary to use methods that are totally foreign to users of the system. This may be because the methods employed by humans are unknown, (or) too inefficient... Whenever this situation occurs, it will not be possible to provide explanations by merely translating the code of the program into English...
> To deal with this problem ... we have attached English comments to the OWL code... When the ... method is explained, the comments are displayed along with the translated OWL code."
### Sample Explanation
DURING THE SESSION ON 9/21/76 AT 11:10, I CHECKED SENSITIVITY DUE TO THYROID-FUNCTION BY EXECUTING THE FOLLOWING STEPS:
1: I ASKED THE USER THE STATUS OF MYXEDEMA. THE USER RESPONDED THAT MYXEDEMA WAS PRESENT.
2: SINCE THE STATUS OF MYXEDEMA WAS PRESENT I DID THE FOLLOWING
2.1 I ADDED MYXEDEMA TO THE PRESENT AND CORRECTABLE CONDITIONS. THE PRESENT AND CORRECTABLE CONDITIONS THEN BECAME MYXEDEMA.
2.2 I REMOVED MYXEDEMA FROM THE DEGRADABLE CONDITIONS. THE DEGRADABLE CONDITIONS THEN BECAME HYPOKALEMIA, HYPOXEMIA, CARDIOMYOPATHIESMI, AND POTENTIAL POTASSIUM LOSS DUE TO DIURETICS
> {And so on ad nauseam. Taken from Swartout 77, page 822}
### Explanation: Discussion
Essentially this is a 'syntactic sugaring' system, which provides for splicing text fragments into the output where necessary. Clearly, as the methods which are executed are procedural, the explanation given is a procedural explanation, an explanation of why things were done, and not of why things were believed. It appears, for example, that we cannot ask why the various actions were carried out - i.e. what the system was attempting to achieve, as in a MYCIN 'WHY' question - nor why specific things are believed: for example, why 'hypoxemia' is one of the degradable conditions.
Instead of dividing the system into an inference engine and a knowledge base, the knowledge is hard-wired into OWL1 'methods' (procedures). This approach appears more applicable to domains where an algorithm is available, than to the more classic 'Expert System' domains.
## XPLAIN
### General Description
Swartout's next system, XPLAIN, grew out of the work on Digitalis Therapy Advisor, and parallels its development in that an objective in the design was the justification of programme actions.
The explanation system follows that of DTA, in which explanations were based on expansions of the actual executable code - a sophisticated variant of syntactic sugaring. Swartout argues against the use of canned text:
> "There are several problems with the canned text approach. The fact that the program code and the text strings that explain that code can be changed independently makes it difficult to maintain consistency between what the program does and what it claims to do. Another problem is that all questions must be anticipated in advance... Finally, the system has no conceptual model of what it is saying... Thus, it is difficult to use this approach to provide more advanced sorts of explanations..." [Swartout 83, p 291]
But now he explores the limitations of the approach used in DTA, mentioning, in addition to those problems noted in his earlier paper [Swartout 77], that redundant and irrelevant information is included in a mechanical expansion of code:
> "The fact that every operation must be explicitly spelled out sometimes forces the programmer to program operations which the physician would perform without thinking.... (eg) steps which are involved more with record keeping than with medical reasoning... Since they appear in the code, they are described in the explanation routines, although they are more likely to confuse a physician user than enlighten him. An additional problem is that it is difficult to get an overview of what is really going on..." (ibid, p 293)
Once again, we see that the inclusion of irrelevant material can mask the important points from the human reader.
Swartout's solution to his rejection both of canned text and of syntactic sugar is to have the computer generate the expert system - called 'the performance program' - and simultaneously, from the same knowledge base, generate a 'refinement structure' which records the transformations from the input information to the performance program. This is exploited in the later construction of explanations. XPLAIN itself is thus an automatic programmer, whose purpose is to write such a system.
The 'refinement structure' is a tree of goals, each being a subgoal of the one above it, at a less abstract level. Each subgoal is successively refined until a primitive of the underlying programming language is produced. The performance program is thus found at the leaves of the refinement structure.
In addition to the refinement structure, knowledge is held in a 'domain model' and a collection of 'domain principles'.
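The refinement structure might be sketched like this; the goal texts are invented, and the real structure records far more at each refinement step than this skeleton shows:

```python
class Goal:
    """A node in the refinement structure: an abstract goal together
    with the less abstract subgoals which refine it.  A goal with no
    subgoals is a primitive of the underlying language - a leaf."""
    def __init__(self, text, subgoals=()):
        self.text = text
        self.subgoals = list(subgoals)

def performance_program(goal):
    """The performance program is read off the leaves of the tree."""
    if not goal.subgoals:
        return [goal.text]
    steps = []
    for sub in goal.subgoals:
        steps.extend(performance_program(sub))
    return steps

plan = Goal("anticipate digitalis toxicity",
            [Goal("check serum potassium",
                  [Goal("ask the user for serum potassium")]),
             Goal("adjust the dose",
                  [Goal("reduce dose on decreased serum potassium")])])
```

An explanation generator can then answer a WHY question by walking upwards from a leaf towards the more abstract goals which motivated it.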
### Generating explanations
The explanation system which exploits this structure is constructed in two modules, a phrase generator and an answer generator. The phrase generator's task is to compile sections of the refinement structure into English fragments; the answer generator first chooses which sections of the refinement structure to compile, and then concatenates the generated phrases to produce the answer. In addition to generating phrases from the refinement structure, the explanation generator has access to the domain model and principles.
It applies relevance filtering by maintaining
> “...knowledge of the state of the program execution, knowledge of what has already been said, and knowledge of what the user is likely to be interested in. Other issues the answer generators confront include deciding whether to omit information the user can be presumed to know from the explanation and determining whether analogies can be suggested to previous explanations." [ibid, p 306]
This is a much more sophisticated conception of what it is relevant to tell the user than we have yet seen.
### Sample Explanation, user input in bold
Please enter the value of serum potassium:
**why?**
The system is anticipating digitalis toxicity. Decreased serum potassium causes increased automaticity, which may cause a change of ventricular fibrillation. Increased digitalis also causes increased automaticity. Thus, if the system observes decreased serum potassium, it reduces the dose of digitalis due to decreased serum potassium.
Please enter the value of serum potassium:
3.7
Please enter the value of serum calcium:
**why?** 
The system is anticipating digitalis toxicity. Increased serum calcium also causes increased automaticity. Thus, (as with decreased serum potassium) if the system observes increased serum calcium, it reduces the dose of digitalis due to increased serum calcium
> {after Swartout 83, p 294} 
Points to note are that, unlike the DTA example, this explanation tells us why the system is doing what it is, rather than how it is doing it. It now resembles, conceptually, the MYCIN type WHY explanation, except that the text is noticeably more fluid. It is also worth noticing how the generator of the second explanation fragment has taken into account the information given in the earlier one. The procedural explanations, as produced by DTA, are still available. However, the system still cannot produce an account of why it holds a particular belief.
## APES
Another interest which was developing in the Artificial Intelligence community at the same time as Expert Systems was 'Logic Programming': implementing restricted subsets of first order predicate calculus as programming languages. The most significant of these languages was PROLOG. It was inevitable that these two strands would come together, and one of the first signs of this was APES - A PROLOG Expert System - developed by Peter Hammond in the early 80s.
Hammond and Sergot discuss the motivation for writing an expert system in PROLOG: they show the structural similarity between the production rules used in the MYCIN family of systems and Horn clauses, note that Horn clauses offer greater expressive power, and claim that this will assist in the construction of:
> "... knowledge bases which are more flexible, more useful, and more elegant than would be possible with less powerful languages." [Hammond & Sergot 83, p 95]
The inference engine is constructed as a meta-interpreter in PROLOG, similar in concept to Walker's Syllog [Walker et al, 87]. The explanation mechanism is a syntactic sugaring of the rule trace, clearly modelled closely on MYCIN or some derivative.
Explanation fragments are generated by applying english-language templates written by the knowledge engineer to the rules, which are themselves written in a strict horn-clause form.
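The shape of such a meta-interpreter is easily sketched; this Python version is propositional, eliding both unification and the English templates, which are the real substance of APES:

```python
# A toy knowledge base of propositional Horn clauses: each rule maps
# a conclusion to the conjunction of conditions which establish it.
RULES = {"Peter should-take aspirin":
             ["Peter complains-of pain", "aspirin suppresses pain"]}
FACTS = {"Peter complains-of pain", "aspirin suppresses pain"}

def show(goal, trace):
    """Backward-chain on a goal, recording the kind of 'how' trace
    which an explanation generator would sugar into English."""
    if goal in FACTS:
        trace.append("You said " + goal)
        return True
    body = RULES.get(goal)
    if body is not None and all(show(g, trace) for g in body):
        trace.append("To show " + goal + " I used a rule")
        return True
    return False
```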
### Example explanation, user input in bold
Is it true that Peter suffers-from peptic-ulcer ?
**why?** 
aspirin aggravates peptic-ulcer
if Peter suffers-from peptic-ulcer
then aspirin is-unsuitable-for Peter
Peter complains-of pain
aspirin suppresses pain
if not aspirin is-unsuitable-for Peter
then Peter should-take aspirin
Is it true that Peter suffers-from peptic-ulcer ?
**yes**
Is it true that Peter suffers-from impaired-liver function ?
**no**
==> Peter should-take lomotil.
**how**
To show Peter should-take lomotil I used the rule
<person> should-take <drug> if
<person> complains-of <symptom> and
<drug> suppresses <symptom> and
not <drug> is-unsuitable-for <person>
You said Peter complains-of diarrhoea
I know lomotil suppresses diarrhoea
I can show not lomotil is-unsuitable-for Peter.
### Discussion
Although PROLOG is a declarative language, and it would seem natural to provide it with a declarative explanation facility, the implementers of APES seem to have been more concerned to demonstrate that existing Expert System functionality could be implemented in PROLOG than to consider what functionality was actually desirable. Thus they provide a system which is similar to but actually cruder than MYCIN - there is, for example, no relevance filtering.
So this must be seen as a toy system, whose only real interest is that it demonstrates that it may be possible to build an explanation system in Prolog. It does not demonstrate that a good explanation system can be built, and it would not effectively handle a knowledge base of any size.
## Syllog
Syllog, like APES, is an attempt to make a fusion between Expert Systems and logic programming. In some senses it is, as I hope to show, a better thought out and better engineered attempt than APES; this is reflected in the fact that Syllog has been employed in a number of experimental, but significant, applications by IBM (Syllog was developed by Adrian Walker of IBM's Thomas J Watson Research Center).
Syllog is a rule based system and, like APES, its rules are technically Horn clauses - but they are expressed in a high-level rule language, which makes them easier to understand, and are termed 'syllogisms' by Walker - even though they clearly aren't.
What makes Syllog interesting from the present viewpoint is its explanation system which, although it lacks interesting capabilities like relevance filtering, gives explanations that are declarative. The technique of explanation generation is also very different from that of preceding systems, in that the rule is (conceptually, at any rate) compiled into the explanation, in something like the way that a conventional language compiler works. The system compiles reasonable English with remarkably little knowledge of the language, and indeed is very simply adapted to work in other natural languages.
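The compilation idea can be sketched as follows: a proof tree, whose nodes are already-instantiated knowledge-base sentences, is rendered conclusion first, with the premises which establish it indented beneath a 'because...'. The tree format here is invented:

```python
def compile_explanation(proof, depth=0):
    """proof is a pair (conclusion, sub-proofs).  Render the
    conclusion, then, indented, the premises which establish it."""
    conclusion, subproofs = proof
    lines = ["  " * depth + conclusion]
    if subproofs:
        lines.append("  " * depth + "because...")
        for sub in subproofs:
            lines.extend(compile_explanation(sub, depth + 1))
    return lines

proof = ("we shall set up testers for 2273 of card1 in quarter 3",
         [("we plan to ship 1000 of box1 in quarter 3", []),
          ("box1 has 2 of the immediate part card1", [])])
```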
### Sample explanation (1):
This sample explains planning a flight from John F Kennedy airport, New York, to San Francisco. It is, essentially, a pretty-printed execution trace, without syntactic sugar.
FLY ( JFK, SFO, 9.30, 15.30)
OK ( JFK, SFO, AMERIC, 9.30,10.0, 15.25, 15.30)
FLIGHT ( AMERIC, 183, JFK, CHI, 10.0, 11.24)
BEFORE( 9.30, 10.0)
LT( 9, 10)
CONNECTION ( CHI, AMERIC, UNITED, 11.24, 11.44)
ADD( 11.24, 0.20, 11.44)
SUM( 11, 0, 11)
SUM( 24, 20, 44)
LT ( 44, 60)
OK( CHI, SFO, UNITED, 11.44, 13.5,15.30)
FLIGHT ( UNITED, 121, CHI, SFO, 13.5, 15.25)
BEFORE ( 11.44, 13.5)
LT ( 11, 13)
BEFORE ( 15.25, 15.30)
EQ ( 15, 15)
LE ( 25, 30)
> {from [Walker 82], page 9}
### Sample explanation (2):
We shall set up testers for 18719 of part chip2 in quarter 3
Yes, that's true
Because...
we shall set up testers for 2273 of card1 in quarter 3
card1 has 7 of immediate part chip2
2273 * 7 = 15911
the expected yield of card1 is 85% based on past experience
15911 divided by 85 (normalized and rounded up) is 18719 
we shall set up testers for 18719 of part chip2 in quarter 3
we plan to ship 1000 of box1 in quarter 3
box1 has 2 of the immediate part card1
the expected yield of card1 is 88%, based on past experience
1000 * 2 = 2000
2000 divided by 88 (normalised and rounded up) is 2273 
we shall set up testers for 2273 of card1 in quarter 3
> {after [Walker et al 87], p 244}
These two explanations look superficially very different, but a careful reading will show that the form of the later (published 1987) explanation is simply a - very competent - syntactic sugaring of exactly the same semantic form as that of the earlier explanation.
Note that (in the later version) the user has to give the system exactly the proposition to be explained. This is supported by a menu system which allows the user to browse through - and pick from - templates for all the statements the system knows about. Once the user has picked a template, further menus help with filling in the blanks.
The slightly weird arithmetic is, as they say, sic: otherwise we see a clearly expressed declarative statement of why just this number of testers are needed. We also see that without relevance filtering, this arrangement is only suitable for relatively shallow search spaces.
To be fair, there is something that serves in place of a relevance filter: the top few nodes of the proof tree constructed by the inference engine are compiled into explanation fragments, which are placed on the screen; this proceeds until the screen is filled. Because (as we argued in [Mott & Brooke 87] - although we were discussing a single path selected from the tree) an inference mechanism chaining backwards will generate a proof from the general to the particular, it can be assumed that a general statement of the explanation will be given first, with what follows being more and more tightly focussed detail. So that what is immediately presented on the screen is likely to be the most important - and perhaps the most relevant - part of the proof.
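That screen-filling behaviour might be sketched as a breadth-first walk over the proof tree; breadth-first traversal is my assumption about the order, chosen because it shows the most general statements first:

```python
from collections import deque

def fill_screen(proof, rows):
    """Emit proof-tree conclusions from the top of the tree - the most
    general statements first - stopping once the screen is full.
    proof is a pair (conclusion, sub-proofs); rows is the screen size."""
    shown, queue = [], deque([proof])
    while queue and len(shown) < rows:
        conclusion, subproofs = queue.popleft()
        shown.append(conclusion)
        queue.extend(subproofs)
    return shown

tree = ("general claim",
        [("first detail", [("fine detail", [])]),
         ("second detail", [])])
```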
So, once again, this system should be classified as a bit ad hoc - an explanation system constructed without a lot of thought for what explanation is. However, the explanation constructed now conforms to the deductive nomological account of explanation, rather than (to use Nagel's terminology) the genetic form. So we have arrived at last at the classic explanation form of the Philosophy of Science.
## Arboretum
### General description
Arboretum is more completely described in a later chapter, so I will not go into any great detail here. The system was built to demonstrate a decision procedure for a novel non-monotonic logic developed by Peter Mott. The other major innovation of the system was the graphical presentation of rules and of inference traces: this feature has been seen by others as a form of explanation, but is not my central interest here. The generation of textual explanation was not part of the original conception but was added in an ad-hoc manner during implementation.
The explanation system, as we wrote, depended on:
> "... the fact that DTrees (the knowledge representation used) are structured through exceptions from the very general to the more abstruse and particular; and that, in consequence, any path through a rule structure follows a coherent argument, again from the general to the particular. " [Mott & Brooke 87, p 110]
This allowed us to attach an explanation fragment to each node, knowing that each implied a unique conclusion for the structure in which it appeared. We used fragments of canned text, because we found this allowed us to produce more fluid explanations, but as we noted:
> "... there is no reason why the system should not be modified to generate explanation fragments itself, for example by using a text macro similar to '<feature of root-node> was found to be <colour of stick-node> because <feature of stick-node> was true'." [Ibid, p 111]
### Relevance filtering
The most interesting feature of this explanation system was that, fortuitously, the evaluation process enabled us to extract precisely that clause in each rule which was relevant to the eventual outcome. We also developed a neat heuristic to the effect that, when generating a 'no' explanation, we should:
> "... concatenate the explanation fragments from the deepest sticking node in each successive tree on the search path. The reason is that this represents the 'nearest' that the claimant got to succeeding in the claim... In the case of a 'yes' decision we chose the opposite approach and select the shallowest sticking node available... it is not relevent to describe how a long and tortuous inference path finally delivered 'yes' when a much shorter, less involved one, did so too." [Ibid]
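The heuristic reduces to a one-line choice. In this Python sketch each tree on the search path contributes its sticking nodes as (depth, fragment) pairs; the data format is invented, and picking per tree in the 'yes' case is a simplification:

```python
def choose_fragments(search_path, outcome):
    """For a 'no', take the deepest sticking node in each tree - the
    nearest the claimant got to succeeding; for a 'yes', take the
    shallowest, since the shorter inference path suffices."""
    pick = max if outcome == "no" else min
    return [pick(sticking_nodes)[1]
            for tree, sticking_nodes in search_path]

path = [("incapacity",
         [(1, "no valid certificate was provided"),
          (3, "there was no evidence of contact with the disease")])]
```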
### Sample explanation
The application here is to the adjudication of claims to health insurance benefits. The system would be used by the adjudication officer, and the explanation would be sent to the claimant.
Dear [Name of Claimant]
You are capable of work and there are no special circumstances permitting you to be deemed incapable of work. Although you provided a valid certificate of explanation, this is insufficient unless either there is evidence of contact with the disease or you are a known carrier thereof.
Yours Sincerely
[your name]
TODO: this is not a very good Arboretum explanation; I know we did better ones on Widows benefit. Check whether I can find a surviving good one, and substitute it.
### Discussion
It will be seen that this is a short, clear, declarative statement in seemingly natural English, which covers all (and only) the relevant points of a complex case. To be fair, the system does not always do this well, but most of its explanations are of this quality.
# Attempts at more principled approaches
After a long series of systems, such as those just described, in which the approach taken to explanation generation was essentially one of ad hoc mechanisms and technical fixes, systems began to emerge in the late 1970s which took a more principled approach to the problem. One of the first of these was BLAH.
## BLAH
### General description
This system sought to address issues of explanation structuring and complexity. Like XPLAIN, it sought to reduce detail by maintaining a model of what the user could be expected to know. However, its design was based on studies of human explanation behaviour, described in [Goguen et al., 83] and in [Weiner 79].
This system is also interesting in that for the first time we see declarative explanations:
> "The third type of question (supported by BLAH) is a request to BLAH to explain why some assertion, already in the knowledge base, is believed." [Weiner 80, p 20]
The inference mechanism used was written in AMORD [de Kleer et al., '78] with a truth maintenance sub-system described in [Doyle, 78]. Essentially this appears to be a production system.
The knowledge base contains assertions, each of which is supported by a list of other assertions which tend to justify belief in it, and optionally, a list of assertions which tend to question such belief. Justifications are based on a set of rules: PREMISE, STATEMENT/REASON, REASON/STATEMENT, IF/THEN, THEN/IF, AND, OR, GENERAL/SPECIFIC, EXAMPLES, and ALTERNATIVES; these are claimed to derive from justifications used by subjects in the studies of natural explanation. Each rule has associated with it a series of alternative templates into which the predicates and instantiated variables can be patched.
Two parallel views of this knowledge base are maintained: a system view and a user's view.
> "... When a user poses a question to BLAH, BLAH uses the knowledge in the system's view to reason about it; and when BLAH generates an explanation, it uses knowledge in the user's view to determine (by reasoning) what information the user already knows, so that it can be deleted from the explanation."
The system's view is built by the knowledge engineer; information given by the user is added to the user's view, and information generated by the inference process is added to both.
The knowledge base is also segmented into 'partitions' based on category; and further divided into separate 'hypothetical worlds'; these last being used, presumably, by the truth maintenance system.
The inference process generates a tree having at its terminals instantiated statements about the case, and at its non-terminals justification types, drawn from those listed above. This structure is passed to the explanation generator, which generates text by applying templates which are associated with the justification types. These templates, as well as english-ifying the system's statements, have the power to reorder the nodes of the tree below them, for example by converting an IF/THEN justification type to a THEN/IF. The reorderings are intended to improve the explanation structure.
However before applying the templates it prunes the tree by removing all those statements which the user is presumed to know (those which can be derived from the user's view of the knowledge base), and which have no dependents, using a bottom up, right to left search; and then further prunes the tree by removing sub-trees which are considered to contain detail.
The primary measure of detail used is a function of the depth of the explanation tree, but trees are also pruned for complexity if any node has more than two dependents.
Where complexity pruning has been used, explanations generated from the excised sub-trees are successively appended to the original explanation. A meaningless interjection ("uh") apparently culled from the study of human explanation is used as a marker that this has been done!
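The two prunings can be sketched as follows; the tree format, with each node a (statement, dependents) pair, is a reconstruction from the description above:

```python
def prune_known(node, user_view):
    """Remove statements derivable from the user's view which have no
    remaining dependents, working bottom-up through the tree."""
    statement, dependents = node
    kept = [p for p in (prune_known(d, user_view) for d in dependents)
            if p is not None]
    if not kept and statement in user_view:
        return None
    return (statement, kept)

def prune_complexity(node, overflow):
    """Excise dependents beyond the second; the excised sub-trees are
    explained separately, appended after the main explanation (and,
    in BLAH, marked with the interjection 'uh')."""
    statement, dependents = node
    if len(dependents) > 2:
        overflow.extend(dependents[2:])
        dependents = dependents[:2]
    return (statement,
            [prune_complexity(d, overflow) for d in dependents])

tree = ("Peter is a dependent of Harry's",
        [("Peter makes less than 750 dollars", []),
         ("Peter is under 19", []),
         ("Harry supports Peter", [])])
```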
The length of the explanation is thus clearly a function of the size (number of nodes) of the explanation tree, but with the rider that splitting the tree in order to improve the explanation structure will actually LENGTHEN the explanation. Weiner claims this as a benefit:
> "As we see in (9), by copying a node from one tree to another we cause the text associated with that node to be repeated in the explanation. As [Halliday and Hasan 76] point out, repetition is one factor which influences the view that sentences, although separate, are tied together to form a unified text."
BLAH provided three top level facilities to the user. These were of the form:
(SHOW <assertion>)
-> <assertion>
(CHOICE <assertion1><assertion2>{<category partition>})
-> (I CHOSE <assertionX>) (NOT (I CHOSE <assertionY>))
(EXPLAIN <explanation>)
-> <explanation>
Although these are all LISP-like in form (indeed the assertions themselves are in the form of lists), it is not clear whether the user had the option of entering:
(EXPLAIN (SHOW <assertion>))
### Example explanation:
Well, Peter makes less than 750 dollars, and Peter is under 19, and Harry supports Peter so Peter is a dependent of Harry's. Uh Peter makes less than 750 dollars because Peter does not work, and Peter is a dependent of Harry's because Harry provides more than one half of Peter's support.
I should explain that the application is to the US Federal Income Tax system. This explanation does indeed capture something of the flavour of a natural spoken explanation. Furthermore, it is clearly declarative rather than procedural. However, personally, I find its style rather too informal for textual presentation. I particularly dislike the meaningless 'Uh' which is used to tag the supporting point.
### Discussion
With this system we can begin to construct a model of what the designers have meant by explanation, and relate it to the philosophical work to be described in the following chapter. The form of the explanation is essentially the deductive-nomological explanation, as described by Hempel, but there are subtleties. The deductive nomological form essentially requires that the explanation must be given in terms of things which can be verified by reference to the world; we will discuss the meaning of this later. But BLAH's explanations are given simply in terms of things which BLAH knows that the user knows, making the assumption that the user can supply the rest of the argument.
## ATTENDING
### General Description
ATTENDING takes a radically novel approach to the problem of assisting decision making in complex domains: it works by inviting the user to describe a case, and then to describe the proposed course of action. The machine reviews the proposals, and produces a critique. The critique is generated by fragment concatenation, and appears to be of high quality, with very natural-seeming English. The application described [Miller, 84] is medical, considering plans for the anaesthetisation of patients requiring surgery.
### Explanation System
The explanation system is provided with limited ability to prevent repetitiousness by allowing the fragment concatenator to follow alternative routes at points in the knowledge base, thus allowing for differently worded explanations of the same inference.
### Interaction Style
The input methods are crude in the extreme, however, with the user being presented with fairly brief menus of options to describe the case being handled. Thus courses of action not foreseen by the knowledge engineer cannot be described, and, consequently, cannot be criticised.
### Inference mechanism
The explanation generator is based on the knowledge representation chosen, which is a variant of the Augmented Transition Network and is called an 'Augmented Decision Network'. The nodes of this network are 'states' which the patient may be in. These states are joined by arcs, labelled with actions which may move the patient from the initial to the consequent state of the arc. Each arc also holds a list of risks and benefits consequent on the action. Where a choice of arc exists between two nodes, the arc whose total risks score least will be preferred; where more than one arc has no risks associated with it, the arc whose total benefits score most will be preferred. Fragments (it is interesting to note that the author uses the word) are stored on arcs of further transition nets, which are themselves expansions of the arcs in the decision net, and the explanation generator chooses a path through this net collecting and concatenating fragments along the way.
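The arc-selection rule just described can be sketched as follows. This is a minimal illustration under assumed data structures, not a reconstruction of Miller's actual code; the example actions and scores are invented:

```python
def prefer_arc(arcs):
    """Each arc is a dict: {'action': str, 'risks': [...], 'benefits': [...]},
    where risks and benefits are lists of numeric scores."""
    risk_free = [a for a in arcs if not a['risks']]
    if risk_free:
        # among risk-free arcs, prefer the greatest total benefit
        return max(risk_free, key=lambda a: sum(a['benefits']))
    # otherwise prefer the least total risk
    return min(arcs, key=lambda a: sum(a['risks']))

# Invented example: three candidate anaesthetic plans between two states.
arcs = [
    {'action': 'general anaesthesia', 'risks': [2, 1], 'benefits': [5]},
    {'action': 'regional block',      'risks': [],     'benefits': [3]},
    {'action': 'local + sedation',    'risks': [],     'benefits': [4]},
]
print(prefer_arc(arcs)['action'])  # the risk-free arc with most benefit
```

The same rule, applied recursively along a path through the net, yields the plan that the critique is generated against.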
### Redundancy filtering
The concatenator maintains lists of topics mentioned at sentence, paragraph, and text level, and uses these to prevent redundancy. Where a topic is mentioned a second or subsequent time, a template is substituted for the reference. Thus it is clear that the fragments are more complex than just strings; they must also have some information about their content, in machine handleable form.
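A minimal sketch of such a redundancy filter, with invented fragment structures, tracking topics at text level only (the system described also tracks sentence and paragraph levels):

```python
class Concatenator:
    """Substitutes a short template for repeat mentions of a topic."""

    def __init__(self):
        self.text_topics = set()  # topics already mentioned in this text

    def render(self, fragment):
        """fragment: {'topic': str, 'full': str, 'short': str}"""
        if fragment['topic'] in self.text_topics:
            return fragment['short']      # repeat mention: substitute template
        self.text_topics.add(fragment['topic'])
        return fragment['full']           # first mention: full wording

c = Concatenator()
f = {'topic': 'hypotension',
     'full': 'Halothane may cause a marked fall in blood pressure',
     'short': 'this hypotensive risk'}
print(c.render(f))  # first mention, full wording
print(c.render(f))  # repeat, short template
```

The point the sketch makes concrete is the one argued above: fragments must carry their topic as machine-handleable data, not merely as strings.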
### Explanation: Discussion
This principled approach to explanation generation is seen as more sophisticated than the 'if x and y and z then print "this is an explanation"' school of explanation generation:
> "Many systems which produce prose output use a fairly ad-hoc approach. Sentences and sentence fragments are stored in the machine as 'Canned Text'. The control of the generation of this 'canned text' is embedded in the procedural logic, often in an ad-hoc way.
> This approach can work well if the system's discussion is straightforward and predictable. If complex analysis is attempted, however, and the system designer wants flexibility for the discussion to vary depending on the particulars of the content, then this approach can become quite unwieldy.
> There are a number of drawbacks. 1] the programming of the discussion itself becomes difficult. 2] Any major revision of the prose output may involve substantial reprogramming. 3] The logic that generates the prose expression may become hopelessly interwoven with the logic that determines and organises the content of the material to be discussed." (p 56)
The strategy used is described as less ambitious than schemes which involve constructing explanations from semantic information generated by an inference mechanism. This is seen to be a research problem in itself.
> "Attending has set itself an intermediate goal: developing a flexible formalism to facilitate the generation of polished prose. Although the PROSENET approach is clearly closer in spirit to canned text generation than to sophisticated language generation it does allow the system designer great flexibility to manipulate, massage, and refine the system's prose output, independent of the rest of the system's analysis." [Miller 84 p77] (Miller's emphasis)
> "From the standpoint of computer science, critiquing can be perceived as a mode of explanation which lets a system structure its advice around the particular concerns of the user in a direct and natural way." [Miller 84 p 74] (Miller's emphasis).
> "...critiquing allows the physician to be the final decision maker. The computer is never forced to commit itself to one approach or another." [Ibid]
[Waah! I forgot to copy a sample explanation!]
[Here insert all the analysis and discussion for this chapter...
## Models of Explanation
## Developing relevance
"Just as Thompson's lookup program displayed exasperating shallowness, so total lookahead has its own 'mentality' which from the point of view of the human questioner could be described as impenetrably deep. While the response of lookup is instantaneous, lookahead ruminates through combinatorially vast ramifications while constructing its forward tree of possibilities. Long rumination before each reply is not of course in itself a guarantee of mental depth. But when asked how it selected its move, lookahead is able to make an exceptionally profound response by disgorging the complete analysis tree. This comprises not only a complete strategy but at the same time... a complete justification of the strategy. Could anyone wish for a more profound response?
"On the contrary, mortal minds are overwhelmed by so much reactive detail. Reporting on the Three Mile Island nuclear plant accident the Malone committee stated that '.... the operator was bombarded with displays, warning lights, print-outs and so on to the point where detection of any error condition and the assessment of the right action to correct the condition was impossible'. So lookahead, with a quite opposite mentality from lookup, has its own reasons for inability to interact helpfully with a human." [Michie, 83; Michie's emphasis]
[look out and refer to recent work by Sheila Hughes and Allison Kidd]
## Endnotes
1 This should not be understood too literally, I think. The conceptual distinction between algorithmic and heuristic programmes had not developed at the time DENDRAL was first written. The algorithm simply provides a method of generating all the possible combinations of compounds in a fixed sequence, and thus supports only part of the generate stage.
2 This assertion will probably be seen as contentious. I take as evidence the following: the assertion [Davis and Lenat 1982 p 276] that '.... the current performance program (is) MYCIN', together with the diagram [ibid., p 243 figure 2-3] which clearly shows that the explanation module is outside the performance program. To support my argument that the explanation mechanism described in [Davis et al. 1977] - the MYCIN paper - is in fact the TEIRESIAS explanation module, compare e.g. the discussion of information metrics [Davis and Lenat p 269] with [Davis et al p 36]; and the sample explanations given in the two sources.
3 MYCIN/TEIRESIAS used "certainty factors" (not to be confused with formal indices of probability) to express its confidence in steps of reasoning. These were entered by the Knowledge Engineer for the individual rules, and manipulated arithmetically by the inference mechanism. They ranged in value from -1 (certainly false) through 0 (no confidence at all in the reasoning step) to 1 (certainty).
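The arithmetic in question can be illustrated by the combining function reported in the MYCIN literature for merging two certainty factors bearing on the same hypothesis. The sketch below is an illustration of that published rule, not a transcription of the original Lisp:

```python
def combine_cf(a, b):
    """Combine two certainty factors in [-1, 1] for the same hypothesis,
    per the MYCIN combining function. (Undefined when a and b are +1
    and -1, a contradiction the scheme does not resolve.)"""
    if a >= 0 and b >= 0:
        return a + b * (1 - a)          # two supporting rules reinforce
    if a < 0 and b < 0:
        return a + b * (1 + a)          # two opposing rules reinforce (negatively)
    return (a + b) / (1 - min(abs(a), abs(b)))  # mixed evidence

print(combine_cf(0.6, 0.5))    # about 0.8: reinforcement stays below 1
print(combine_cf(0.6, -0.6))   # 0.0: equal and opposite evidence cancels
```

Note that reinforcement is asymptotic: no accumulation of merely confident rules ever reaches certainty.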
## References
Barr, A & Feigenbaum, E A: The Handbook of Artificial Intelligence: Pitman, 82, especially articles VII B, TEIRESIAS, and VIII B1, MYCIN
Brooke, S: Interactive Graphical Representation of Knowledge: in Proceedings of the Alvey KBS Club SIG on Explanation second workshop, 87
Buchanan, B, Sutherland, G, & Feigenbaum, E A: Heuristic Dendral: a program for generating explanatory hypotheses in organic chemistry: in Meltzer & Michie, eds, Machine Intelligence 4: Edinburgh University Press, 1969
Buchanan, B G & Feigenbaum, E A: Dendral and Meta-Dendral: Their Applications Dimension: in Artificial Intelligence 11, 1978
Davis, R, Buchanan, B and Shortliffe, E: Production Rules as a Representation for a Knowledge-Based Consultation Program: in Artificial Intelligence 8, 1977
Davis, R & Lenat, D: Knowledge-based systems in Artificial Intelligence: McGraw-Hill, 1982, especially part 2 chap 3
Hammond, P, & Sergot, M: A PROLOG Shell for Logic Based Expert Systems: in Proceedings of Expert Systems 83: BCS
Martin, W A & Fateman, R J: The Macsyma System: in Proceedings of the 2nd Symposium on Symbolic and Algebraic Manipulation: ACM: Los Angeles 1971
Michie, D: Game playing programs and the conceptual interface: in Bramer, M A (ed): Computer Game Playing: theory and practice: Ellis Horwood, Chichester, 1983
Miller, Perry L: A Critiquing Approach to Expert Computer Advice: ATTENDING: Pitman Research Notes in Artificial Intelligence 1, London, 1984
Mott, P & Brooke, S: A Graphical Inference Mechanism: in Expert Systems iv, 2, May 87
Pople, H E: The Formation of Composite Hypotheses in Diagnostic Problem Solving - an Exercise in Synthetic Reasoning in Papers presented at the 5th International Joint Conference on Artificial Intelligence, MIT, 1977
Swartout, W: A Digitalis Therapy Advisor with Explanations: in Proceedings of the 5th International Joint Conference on Artificial Intelligence, MIT, 1977
Swartout, W R: XPLAIN: a System for Creating and Explaining Expert Consulting Programs: in Artificial Intelligence 21, 1983
Walker, A: Automatic Generation of Explanations of Results from Knowledge Bases: Research Report RJ3481, IBM Research Laboratory, San Jose, California, 1982
Walker, A, et al: Knowledge Systems and Prolog: Addison-Wesley, Reading (Mass.), 1987
Weiner, J L: BLAH, a system which explains its reasoning: in Artificial Intelligence 15, 1980

---

`doc/Manifesto.md`

Manifesto
=========
Machine inference (automated reasoning), the core of what gets called
Artificial Intelligence, has ab initio been based on the assumption
that the purpose of reasoning was to preserve truth. It is because this
assumption is false that the project has thus far failed to bear fruit,
that Alan Turing's eponymous test has yet to be passed.
Of course it is possible to build machines which, within the constraints
of finite store, can accurately compute theorems of first order predicate
calculus ad nauseam; but such machines do not display behaviour which is
convincingly intelligent. They are cold and mechanical; we do not
recognise ourselves in them. Like the Girl in the Fireplace's beautiful
clocks, they are precisely inhuman.
As Turing's test itself shows, intelligence is a hegemonic term, a term
laden with implicit propaganda. A machine is 'intelligent' if it can
persuade a person that it is a person. By 'intelligent' we don't mean
'capable of perfect reasoning'. We mean 'like us'; and in meaning 'like
us' we are smuggling under the covers, as semantic baggage, the claim
that we ourselves are intelligent.
I might argue that perfect reasoning has little utility in a messy
world, that to cope with the messiness of a messy world one needs messy
reasoning. I shall not do so: the core of my argument is not that there
is principle and value in the mode of reasoning that I propose, but
precisely that it is ruthlessly unprincipled.
In this thesis I shall argue that the purpose of real world argument is
not to preserve truth but to achieve hegemony: not to enlighten but to
persuade, not to inform but to convince. This thesis succeeds not if in
some arid, clockwork, mechanical sense I am right, but if, having read
it, you believe that I am.
On inference and explanation
----------------------------
I wrote the first draft of this thesis thirty two years ago. In that
draft I was concerned with the very poor explanations that mechanised
inference systems were able to provide for their reasons for coming to
the conclusions they did, with their unpersuasiveness. There was a
mismatch, an impedance, between machine intelligence and human
intelligence. Then, I did not see this as the problem. Rather I thought
that the problem was to provide better explanation systems as a way to
buffer that impedance. I wrote then:
> This document deals only with explanation. Issues relating to inference
> and especially to truth maintenance will undoubtedly be raised as it
> progresses, but such hares will resolutely not be followed.
In this I was wrong. The problem was not explanation; the problem was
inference. The problem was, specifically, that human accounts of
inference since Aristotle have been hegemonistic and self serving, so
that when we started to try to automate inference we tried to automate
not what we do but what we claim we do. We've succeeded. And having
succeeded, we've looked at it and said, 'no, that is not intelligence'.
It is not intelligence because it is not like us. It is clockwork,
inhuman, precise. It does things, let us admit this covertly in dark
corners, that we cannot do. But it does not do things we can do: it does
not convince. It does not persuade. It does not explain.
I shall do these things, and in doing them I shall provide an account of
how these things are done in order that we can build machines that can
do them. In doing this, I shall argue that truth does not matter; that
it is a tool to be used, not an end to achieve. I shall argue that
reason is profoundly unreasonable. The end to achieve, in argument as in
so much other human behaviour, is not truth but dominance, dominance
achieved by hegemony. In the end you will acknowledge that I am right;
you will acknowledge it because I am right. I am right not because in
some abstract sense what I say is true, but because you acknowledge it.

---

`doc/PredicateSubtext.md`

On the subtext of a predicate
-----------------------------
Predicates are not atomic. They do not come single spies, but freighted
with battalions of inferable subtexts. Suppose Anthony says
Brutus killed Caesar in Rome during the ides of March
I learn more than just that 'Brutus killed Caesar in Rome during the
ides of March'. I also learn that
- Brutus is a killer
- Caesar has been killed
- Rome is a place where killings happen
- The ides of March are a time to be extra cautious
Suppose Drusilla now says
E killed Caesar in Rome during the ides of March
this casts doubt on Anthony's primary claim, and on the belief that
Brutus is a killer; but it reinforces the beliefs that
- Caesar has been killed
- Rome is a place where killings happen
- The ides of March are a time to be extra cautious.
If Falco then says
No, I heard from Gaius that it happened in April
the beliefs that
- Caesar has been killed
- Rome is a place where killings happen
are still further strengthened.
In proposing a formalism to express predicates, we need to consider how
it allows this freight to be unpacked.
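One minimal sketch of how such freight might be unpacked. The subtext categories and the naive additive weighting are invented purely for illustration:

```python
from collections import defaultdict

# belief scores for derived subtexts, keyed by (category, subject)
beliefs = defaultdict(float)

def assert_killing(killer, victim, place, time):
    """Assert the primary claim and reinforce its inferable subtexts."""
    subtexts = [('killer', killer),
                ('killed', victim),
                ('dangerous-place', place),
                ('dangerous-time', time)]
    for s in subtexts:
        beliefs[s] += 1.0
    return subtexts

# Anthony's claim
assert_killing('Brutus', 'Caesar', 'Rome', 'ides of March')
# Drusilla's rival claim doubts Brutus as killer but reinforces the rest
beliefs[('killer', 'Brutus')] -= 1.0
assert_killing('E', 'Caesar', 'Rome', 'ides of March')

print(beliefs[('killed', 'Caesar')])   # reinforced by both claims
print(beliefs[('killer', 'Brutus')])   # cast into doubt
```

Even this toy shows the asymmetry argued above: rival accounts of the same event weaken each other's primary claims while jointly strengthening the shared freight.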

---

`doc/TheProblem.md`

# The Problem
In this chapter, talk about the perceived need for expert system explanations. Advance:

- the arguments used by expert systems designers, saying why explanations are needed;
- the arguments used by critics, who claim that the explanations given are not good enough.
### References
{pretty much the same as for History - see below}

---

`doc/intro.md`

## Introduction to Wildwood
I started building Wildwood nearly forty years ago on InterLisp-D workstations.
Then, because of changing academic projects, I lost access to those machines,
and the project was effectively abandoned. But I've kept thinking about it; it
has cool ideas.
### Explicable inference
Wildwood was a follow-on from ideas developed in Arboretum, an inference system
based on a novel propositional logic using defaults. Arboretum was documented in
our paper
[Mott, P & Brooke, S: A graphical inference mechanism: Expert Systems Volume 4, Issue 2, May 1987, Pages 106-117](https://onlinelibrary.wiley.com/doi/epdf/10.1111/j.1468-0394.1987.tb00133.x)
Two things were key about this system: first, we had a systematic mechanism for
eliciting knowledge from domain experts into visual representations which it
was easy for those experts to validate, and second, the system could easily
generate high quality natural language explanations of its decisions, which
could be understood (and therefore be challenged) by ordinary people.
This explicability was, I felt, a key value. Wildwood, while being able to infer
over much broader and more messy domains, should be at least as transparent
and easy to understand as Arboretum.
### Game theoretic reasoning
The insight which is central to the design of Wildwood is that human argument
does not seek to preserve truth, it seeks to be hegemonic: to persuade the
auditor of the argument of the advocate.
Consequently, an inference process should be a set of at least two arguing
processes, each of which takes a different initial view and seeks to defend it
using a system of legal moves.
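A toy sketch of such a pair of arguing processes. The move table, the claims, and the win condition are all invented for illustration; they are not a specification of Wildwood:

```python
def debate(opening_claim, moves):
    """Advocate A asserts the opening claim; A and B then alternate.
    moves maps each claim to the legal counter-claims that answer it.
    The advocate left with no unused legal reply concedes."""
    history = [opening_claim]
    mover = 'B'                              # B must answer A's opening
    while True:
        replies = [m for m in moves.get(history[-1], [])
                   if m not in history]      # no repeating old claims
        if not replies:
            winner = 'A' if mover == 'B' else 'B'
            return winner, history
        history.append(replies[0])
        mover = 'A' if mover == 'B' else 'B'

# Invented move table: each claim and its legal rebuttals.
moves = {
    'Tweety flies':        ['Tweety is a penguin'],
    'Tweety is a penguin': ['Tweety is on an aeroplane'],
}
winner, trace = debate('Tweety flies', moves)
print(winner, trace)
```

The outcome is decided not by the truth of any claim but by which advocate runs out of moves, which is exactly the hegemonic character of argument claimed above.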
### Against truth
Wildwood was originally intended to be a part of my (unfinished) thesis,
[Against Truth](AgainstTruth.html), which is included in this archive for
your amusement.

---
<!DOCTYPE html PUBLIC ""
"">
<html><head><meta charset="UTF-8" /><title>Against Truth</title><link rel="stylesheet" type="text/css" href="css/default.css" /><link rel="stylesheet" type="text/css" href="css/highlight.css" /><script type="text/javascript" src="js/highlight.min.js"></script><script type="text/javascript" src="js/jquery.min.js"></script><script type="text/javascript" src="js/page_effects.js"></script><script>hljs.initHighlightingOnLoad();</script></head><body><div id="header"><h2>Generated by <a href="https://github.com/weavejester/codox">Codox</a></h2><h1><a href="index.html"><span class="project-title"><span class="project-name">Wildwood</span> <span class="project-version">0.1.0-SNAPSHOT</span></span></a></h1></div><div class="sidebar primary"><h3 class="no-link"><span class="inner">Project</span></h3><ul class="index-link"><li class="depth-1 "><a href="index.html"><div class="inner">Index</div></a></li></ul><h3 class="no-link"><span class="inner">Topics</span></h3><ul><li class="depth-1 current"><a href="AgainstTruth.html"><div class="inner"><span>Against Truth</span></div></a></li><li class="depth-1 "><a href="Analysis.html"><div class="inner"><span></span></div></a></li><li class="depth-1 "><a href="Errata.html"><div class="inner"><span>Errata</span></div></a></li><li class="depth-1 "><a href="History.html"><div class="inner"><span>History</span></div></a></li><li class="depth-1 "><a href="Manifesto.html"><div class="inner"><span></span></div></a></li><li class="depth-1 "><a href="PredicateSubtext.html"><div class="inner"><span></span></div></a></li><li class="depth-1 "><a href="TheProblem.html"><div class="inner"><span>The Problem</span></div></a></li><li class="depth-1 "><a href="intro.html"><div class="inner"><span># Introduction to Wildwood</span></div></a></li></ul><h3 class="no-link"><span class="inner">Namespaces</span></h3><ul><li class="depth-1 "><a href="wildwood.core.html"><div class="inner"><span>wildwood.core</span></div></a></li></ul></div><div class="document" 
id="content"><div class="doc"><div class="markdown"><h1><a href="#against-truth" name="against-truth"></a>Against Truth</h1>
<blockquote>
<p>Hey, what IS truth, man? [Beeblebrox, quoted in [Adams, 1978]]</p>
</blockquote>
<p><em>This title is, of course, a respectful nod to Feyerabend's Against Method</em></p>
<h2><a href="#introduction" name="introduction"></a>Introduction</h2>
<p>This document is in two parts: a statement of a problem, and an account of an attempt to address it. The problem is stated briefly in the first chapter, and fleshed out in the following two with a history of attempts which have been made in the past to address it, and an analysis of what would be needed to solve it.</p>
<p>The second part starts with an account of a system built by the author in collaboration with Peter Mott, describing particularly how the problem was addressed by this system; subsequent chapters will describe the development of a further system, in which the analysis developed in the first section will be applied.</p>
<p>This document deals only with explanation. Issues relating to inference and especially to truth maintenance will undoubtedly be raised as it progresses, but such hares will resolutely not be followed.</p>
<h2><a href="#contents" name="contents"></a>Contents</h2>
<h3><a href="#frontmatter" name="frontmatter"></a>Frontmatter</h3>
<ol>
<li><a href="Manifesto.html">Manifesto</a></li>
</ol>
<h3><a href="#part-one-stating-the-problem" name="part-one-stating-the-problem"></a>Part one: Stating the problem</h3>
<ol>
<li><a href="TheProblem.html">The Problem</a></li>
<li><a href="History.html">History</a></li>
<li><a href="Analysis.html">Analysis</a></li>
</ol>
<h3><a href="#part-two-into-the-wild-wood" name="part-two-into-the-wild-wood"></a>Part Two: Into the wild wood</h3>
<h3><a href="#endmatter" name="endmatter"></a>Endmatter</h3>
<ol>
<li><a href="Errata.html">Errata</a></li>
</ol>
<hr />
<p><a href="https://en.wikipedia.org/wiki/The_Hitchhiker%27s_Guide_to_the_Galaxy">Adams, 1978</a></p></div></div></div></body></html>

---

`docs/codox/Analysis.html`

<!DOCTYPE html PUBLIC ""
"">
<html><head><meta charset="UTF-8" /><title></title><link rel="stylesheet" type="text/css" href="css/default.css" /><link rel="stylesheet" type="text/css" href="css/highlight.css" /><script type="text/javascript" src="js/highlight.min.js"></script><script type="text/javascript" src="js/jquery.min.js"></script><script type="text/javascript" src="js/page_effects.js"></script><script>hljs.initHighlightingOnLoad();</script></head><body><div id="header"><h2>Generated by <a href="https://github.com/weavejester/codox">Codox</a></h2><h1><a href="index.html"><span class="project-title"><span class="project-name">Wildwood</span> <span class="project-version">0.1.0-SNAPSHOT</span></span></a></h1></div><div class="sidebar primary"><h3 class="no-link"><span class="inner">Project</span></h3><ul class="index-link"><li class="depth-1 "><a href="index.html"><div class="inner">Index</div></a></li></ul><h3 class="no-link"><span class="inner">Topics</span></h3><ul><li class="depth-1 "><a href="AgainstTruth.html"><div class="inner"><span>Against Truth</span></div></a></li><li class="depth-1 current"><a href="Analysis.html"><div class="inner"><span></span></div></a></li><li class="depth-1 "><a href="Errata.html"><div class="inner"><span>Errata</span></div></a></li><li class="depth-1 "><a href="History.html"><div class="inner"><span>History</span></div></a></li><li class="depth-1 "><a href="Manifesto.html"><div class="inner"><span></span></div></a></li><li class="depth-1 "><a href="PredicateSubtext.html"><div class="inner"><span></span></div></a></li><li class="depth-1 "><a href="TheProblem.html"><div class="inner"><span>The Problem</span></div></a></li><li class="depth-1 "><a href="intro.html"><div class="inner"><span># Introduction to Wildwood</span></div></a></li></ul><h3 class="no-link"><span class="inner">Namespaces</span></h3><ul><li class="depth-1 "><a href="wildwood.core.html"><div class="inner"><span>wildwood.core</span></div></a></li></ul></div><div class="document" 
id="content"><div class="doc"><div class="markdown"><ol>
<li>Analysis
<ol>
<li>
<p>Accounts from the Philosophy of Science</p></li>
</ol>
</li>
</ol>
<p>&lt;Towards another chapter. What I want to do is:</p>
<p>1] present the DN account, and its defenders.</p>
<p>2] present the conventional response</p>
<p>3] present the radical response</p>
<p>4] add some notes of my own, if possible supported by sources, on the polemic nature of explanation&gt;</p>
<p>This section looks at accounts of explanation culled from the Philosophy of Science. My reason for starting with this field of study is that philosophers in this field have a commitment to express their ideas in strictly formal ways.</p>
<p>Furthermore, the matter of explanation has always been central to the study [see e.g. van Fraassen 80, p 92; Nagel 61, p 4]. Thus it might be hoped that they would produce a formal account of what constitutes a good argument. While it is not necessarily the case that what can be expressed formally will be computationally tractable, a formal statement of an idea is at least a good start towards computability.</p>
<p>In fact this link between formal statement and computability is what makes the methodology of Artificial Intelligence such a good tool for the philosopher. Where a formal description can be encoded into a computer language in a computationally tractable manner, the computer programme can be used to evaluate how well the description matches the phenomenon described, simply by observing how well the computer models this phenomenon.</p>
<ol>
<li>
<ol>
<li>Some definitions</li>
</ol></li>
</ol>
<p>In part of the discussion that follows, it will be necessary to use a shorthand to describe some of the different positions that have been adopted at one time or another by philosophers of science. Two important positions are realism and empiricism. The positions are only peripheral to the discussion that follows, but in using them I will generally follow van Fraassen's usage: by realism, I will mean the view that “…science aims to find a true description of unobservable processes that explain the observable ones…” [van Fraassen 80 page 3] This view holds that a theory is only adequate if its description of the world is correct in the finest detail.</p>
<p>By contrast, by empiricism I shall mean the view that a theory is adequate provided it gives a correct account of the observable phenomena. Any underlying mechanism may be postulated, provided that it accounts for the observable behaviour.</p>
<p>I shall also use another term from the same debate - positivism. I shall not use this in the precise sense that van Fraassen gives it, but in a looser, more colloquial sense, to define that group of doctrines (including all realist and most empiricist doctrines) which hold that there is a real world, and that that real world is accurately reflected in our perceptions.</p>
<ol>
<li>
<ol>
<li>Aristotle to Nagel</li>
</ol></li>
</ol>
<p>The Philosophy of Science has had the development of an account of explanation as one of its central projects since Aristotle. Aristotle's account of explanation was broadly that an explanation was an argument which had the explicandum as its consequent. As, to Aristotle, arguments should be constructed as syllogisms or chains of syllogisms, the correct explanation in response to 'How do you know that Socrates is mortal?' would be:</p>
<p>Socrates is a man;</p>
<p>All men are mortal;</p>
<p>Therefore Socrates is mortal.</p>
<p>[a syllogism of the mood Barbara]</p>
<p>This account, which has become known as the hypothetico-deductive or deductive-nomological account of explanation, has effectively been the dominant account ever since. The classic statement of this account in the present century is probably that given by Hempel [Hempel, 65]. Ernest Nagel [Nagel, 61] attempted to go beyond this to a more general account of explanation, but he included the deductive-nomological form as the first of the four types of explanation which he describes.</p>
<p>Nagel claims, rather baldly, that “Explanations are answers to the question 'Why?'”, but later goes on to note “…the important point that even answers to the limited class of questions introduced by 'Why' are not all of the same kind.” His four types of explanation are: deductive explanations, probabilistic explanations, functional or teleological explanations, and genetic explanations.</p>
<ol>
<li>
<ol>
<li>
<ol>
<li>Deductive explanations</li>
</ol></li>
</ol></li>
</ol>
<p>Nagel's deductive explanations have the formal structure of a deductive argument, rather in the manner of a logical proof. This is essentially the deductive nomological explanation. In the case of the explanation of singularities, it fits neatly with the Toulmin [below] - or indeed the syllogistic - form of argument: data, plus principle, imply conclusion:</p>
<p>“It is evident that at least one of the premises in a deductive explanation of a singular explicandum must be a universal law…” (page 31)</p>
<p>“… a deductive scientific explanation, whose explicandum is the occurrence of some event or the possession of some property by a given object, must satisfy two logical conditions. The premises must include at least one universal law, whose inclusion is essential to the deduction of the explicandum. And the premises must also contain a suitable number of initial conditions.” (page 32)</p>
<p>In the case of the explanation of scientific laws, however:</p>
<p>“…all the premises are universal statements; (and) there is more than one premise, each of which is essential in the derivation of the exp1icandum…” (page 34)</p>
<p>“…at least one of the premises must be more general than the law being explained.” (page 37)</p>
<p>The notion of more general is discussed at length, and is defined as follows:</p>
<p>"Let L~1~ be a law (or a set of laws and theories constituting some science such as physics), and let P~1~ P~2~ … , P~n~ be a set of primitive predicates in terms of which the predicates occurring in L~1~ are in some sense definable. (For the sake of simplicity, and without any loss of the generality of statement, we shall assume that the predicates are all adjectives or one-place predicates such as 'rigid' or 'heavy' …. ) Similarly, let Q~1~ Q~2~ … , Q~n~ be the corresponding set of primitives for a law L~2~. Finally, let K be a class of objects, each of which can be significantly (or meaningfully) characterised, whether truly or falsely, by the predicates of either set …. We shall also say that an object in K satisfies a law L non-vacuously only if the object actually possesses the various traits mentioned in the law and, moreover, the traits do stand to each other in the relations asserted by the law…</p>
<p>We now assume the following conditions: (1) Some (and perhaps all) of the predicates in the first set occur in the second, but some predicates in the second set do not occur in the first set. (2) Every object in K has at least one P-property, that is, a property designated by a predicate in the first set. (3) There is a non-empty subclass A of objects in K possessing only P-properties. (4) There is a non-empty subclass <del>A</del> of objects in K each of which possesses at least one Q- property that is not a P-property. …. (5) There is a non-empty (but not necessarily proper) subclass B of objects in K each of which satisfies L~1~ non-vacuously, and such that some objects in B belong to A while others belong to <del>A</del>. …. (6) There is a non-empty subclass C of objects in A for which L~2~ holds non-vacuously, and such that some (and perhaps all) of the objects in C also belong to B ….. When these six conditions are fulfilled, L~1~ may be said to be more general in K than L~2~." (pp 40- 41).</p>
<p>[my transcription differs from the original in that where Nagel uses a character formed of an upper case letter A with a bar over it, I have used a struck-through A; and, more significantly, Nagel uses an unbarred A in condition 4. From my understanding of the text I have assumed this is a misprint, and have substituted a struck-through A]</p>
<p>Deductive explanations, as stated above, have the general form of the syllogism Barbara. However, Nagel adds a condition: while all the clauses of Barbara are universally quantified, one of the premisses must be more general than the conclusion to qualify as an explanation of it. In the case of an explanation about a singularity, it is trivial that the universal law is more general than the initial conditions; in the case of explanations of generalities, Nagel has gone to lengths to assert this condition.</p>
<p>Nagel rejects as too strong the Aristotelian requirements that the premises of a deductive explanation should not only be true but should be <em>known</em> to be true, and that they should be better known than the explicandum. He observes that it cannot be known that the universal premises which are subsumed into scientific laws are true. Instead he supplies the weaker condition that:</p>
<p>“…the explanatory premises be compatible with established empirical facts and be in addition adequately supported (or made probable) by evidence based on data other than the observational data upon which the acceptance of the explicandum is based.” (p 43)</p>
<p>“In maintaining that the premises in an explanation must be better known than the explicandum, Aristotle was thus simply making explicit his conception of science. This conception is true of nothing that can be identified as part of the asserted content of modern empirical science,” (p 45)</p>
<ol>
<li>
<ol>
<li>
<ol>
<li>Probabilistic explanations</li>
</ol></li>
</ol></li>
</ol>
<p>Nagel's second category, probabilistic explanations, depends not on formal implication of the explicandum by the premises contained within the explanation, but on some statistical regularity taking the general form:</p>
<p>Most x are y</p>
<p>a is x</p>
<p>=&gt; it is probable that a is y</p>
<p>Nagel notes that:</p>
<p>“It is still an unsettled question whether an explanation must contain a statistical assumption in order to be a probabilistic one, or whether nonstatistical premises may make an explicandum probable in some nonstatistical sense of the word.” (p 23)</p>
<p>Such explanations can be expressed in the Toulmin scheme (see below); but as syllogisms they have the general form IAA, which is not valid.</p>
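<p>The probabilistic schema above can be sketched in code. This is my own illustration, not Nagel's: the statistical premiss is modelled as an observed frequency within a population, and the conclusion is qualified rather than deduced (the function name and the 0.5 threshold are assumptions of the sketch):</p>

```python
# A minimal sketch (not from Nagel) of the probabilistic explanation
# schema "Most x are y; a is x => it is probable that a is y".
# The statistical premiss is an observed frequency; the conclusion
# carries a qualifier, and is never a deductive consequence.

def probabilistic_explanation(population, is_x, is_y, a, threshold=0.5):
    """Return a qualified conclusion about a, given the regularity
    that most x in the population are y; None if the schema fails."""
    xs = [m for m in population if is_x(m)]
    if not xs or not is_x(a):
        return None  # the schema does not apply to a
    freq = sum(1 for m in xs if is_y(m)) / len(xs)
    if freq > threshold:
        return f"it is probable (p={freq:.2f}) that {a!r} is y"
    return None

# Usage: most (4 of 5) observed birds fly, and the robin is a bird,
# so it is probable - not certain - that the robin flies.
birds = ["robin", "sparrow", "gull", "crow", "penguin"]
flies = {"robin", "sparrow", "gull", "crow"}
print(probabilistic_explanation(
    birds, lambda m: True, lambda m: m in flies, "robin"))
```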
<p>These two types are the models of scientific explanation that Nagel describes; but he also describes two other categories. These are explanation types which appear in natural conversation, and in law and the humanities.</p>
<ol>
<li>
<ol>
<li>
<ol>
<li>Functional (teleological) explanations</li>
</ol></li>
</ol></li>
</ol>
<p>Functional (teleological) explanations are explanations which refer to the supposed purpose of the explicandum:</p>
<p>“It is characteristic of functional explanations that they employ such typical locutions as <em>in order that</em>, <em>for the sake of</em> and the like.” (p 24)</p>
<p>There are two sub-classes of functional explanation. One is the explanation of a particular instance or occurrence:</p>
<p>I caught hold of the branch so that I would not fall.</p>
<p>whereas the other explains something which occurs in all instances of a particular class:</p>
<p>Birds have wings so that they can fly.</p>
<ol>
<li>
<ol>
<li>
<ol>
<li>Genetic explanations</li>
</ol></li>
</ol></li>
</ol>
<p>Genetic explanations seek to explain why something is the way it is by explaining how it came to be this way:</p>
<p>My arm is broken because I fell out of a tree</p>
<p>Penguins have wings because they evolved from birds which could fly</p>
<p>It is clear that these are categories where the explanations are neither necessary nor sufficient causes of the explicanda; you don't have to fall out of a tree to break your arm, and you can break it without falling out. Likewise, there may have been other ways of not falling than catching hold of a branch, and catching hold of a branch may not prevent a fall.</p>
<p>But there are statistical probabilities (not necessarily high) that catching the branch will prevent the fall, and that a fall will lead to a broken arm.</p>
<p>“It is therefore a reasonable conclusion that genetic explanations are by and large probabilistic.” (p 26)</p>
<ol>
<li>
<ol>
<li>The radical critics</li>
</ol></li>
</ol>
<p>These conventional arguments in the philosophy of science were dramatically disturbed in the 1960s by radical philosophers like Thomas Kuhn [Kuhn, 62] and Paul Feyerabend [Feyerabend, 78; and passim]. Their attack was not directed specifically at accounts of explanation, but at the empiricist and realist views of science as such - which is to say the view implicit in both that we can directly access the external world in order to gain knowledge which would support a theory.</p>
<p>The damage to the account of explanation was almost incidental in this process. It was fatal, however; the consensual view of explanation did not survive. A number of accounts of explanation have developed since then; disappointingly, these all seem to be timid and conservative, attempting to preserve a view of science which is no longer tenable. Following (but not, as far as I am aware, prior to) the radicals' onslaught, writers on the philosophy of science developed a formidable arsenal of objections to the deductive-nomological account. I cite here a few drawn from [Korner 75]:</p>
<p>Achinstein, in his note 'The Object of Explanation', gives some standard counter-examples:</p>
<p>"1] We see a fire engine which is black and demand an explanation. Here is one that satisfies the D-N model:</p>
<p>This fire engine is the same colour as that crow.</p>
<p>All crows are black.</p>
<p>=&gt;This fire engine is black.</p>
<p>2] A bridge collapses and we demand an explanation. Here is one that satisfies the D-N model:</p>
<p>An engineer with years of training and experience examined the bridge and said it would collapse.</p>
<p>Whenever an engineer with years of training and experience examines a bridge and says it will collapse, it does.</p>
<p>=&gt; The bridge collapsed.</p>
<p>These are unacceptable as explanations …. " (page 35)</p>
<p>Hesse, in her rejoinder to Achinstein, writes:</p>
<p>“… there are by now at least three widely accepted general consequences of the ensuing debate which are enough to show that this model is by no means to be taken as the standard model of explanation.</p>
<p>These are:</p>
<p>(i) It has grave shortcomings as an explication of the structure of explanation in natural science itself. (ii) The question of whether it usefully applies to explanation in history, psychology, and the social sciences is a matter of current controversy. (iii) The question of whether <em>any</em> model applying to the natural sciences is adequate also for these human sciences is also highly controversial." (page 46, Hesse's emphasis)</p>
<p>and finally, Salmon, in a piece entitled Theoretical Explanation, argues:</p>
<p>"… contra Hempel and many others - that an explanation [of a particular event] is not an argument to the effect that the event to be explained was to be expected by reason of certain explanatory facts</p>
<p>In addition, I have claimed that the so-called deductive-nomological model of explanation of particular events is incorrect. It is not merely that there are explanandum events which seem explainable only inductively or statistically… There are also cases such as the man who takes his wife's birth control pills and avoids pregnancy in which an obviously defective explanation fulfills the conditions for deductive-nomological explanation. (pp 118-119; the embedded quote is from Hempel, Explanation in Science and in History, in Colodny, ed, Frontiers of Science and Philosophy, University of Pittsburgh Press, Pittsburgh, 1962. The italics are Salmon's. This birth control example is also mentioned in for example van Fraassen 1980)</p>
<p>[Here insert some detailed analysis of Feyerabend's position, and passing reference to Lakatos and Kuhn]</p>
<ol>
<li>
<ol>
<li>Achinstein</li>
</ol></li>
</ol>
<p>One of the first attempts to recover an account of explanation from this mess is that of Achinstein [Achinstein 83]. Achinstein distinguishes between the act of explaining, which is an utterance, and the explanation, which is the product of the act. This act must take place in an appropriate context.</p>
<p>His project is to answer three questions:</p>
<ol>
<li>What is an explaining act?</li>
<li>What is the product of an explaining act?</li>
<li>How should explanations be evaluated?</li>
</ol>
<!-- -->
<ol>
<li>
<ol>
<li>
<ol>
<li>Explaining acts</li>
</ol></li>
</ol></li>
</ol>
<p>Achinstein claims that explaining acts have a period, and a completion point.</p>
<p>John was explaining why x</p>
<p>John explained x</p>
<p>The latter form implies a completion has occurred.</p>
<p>An explanation has not occurred until the act of explanation has occurred. For John to have explained why x, it is not sufficient that he knew why x.</p>
<p>Explaining is illocutionary [see Austin, <em>How to Do Things with Words</em>]; it is done in an appropriate context. Out of such a context, the same statement would be an equivalent perlocutionary act: enlightening, getting &lt;the auditor&gt; to understand.</p>
<p>The intention of an explaining act must be to engender understanding of the explicanda. Achinstein does not state, but may be taken to imply, that for such an act to take place there must (at any rate in the mind of the explainer) be some explainee or auditor, in whose mind understanding is to be engendered[^1].</p>
<p>"The first condition expresses what I take to be a fundamental relationship between explaining and understanding. It is that S explains q by uttering u only if</p>
<p>(1) S utters u with the intention that his utterance of u render q understandable."(Page ??) </p>
<p>Under this account it is impossible to explain something by accident: a rather over-strict condition, I think.</p>
<p>Achinstein also claims that it is necessary that the utterance produced by the explaining act must be (at least believed to be) a correct account:</p>
<p>"(2) S believes that u expresses a proposition that is a correct answer to Q…</p>
<p>Often people will present hints, clues, or instructions which do not themselves answer the questions… Some hints, no doubt, border on being answers to the question. But in those cases where they do not, it is not completely appropriate to speak of explaining."(p 17)</p>
<p>Thus explanation by analogy is no explanation; again, this seems a very strong condition. Yet Achinstein goes on to make this even stronger by the assertion that the proposition expressed by u is itself a complete answer to Q:</p>
<p>“(3) S utters u with the intention that his utterance render q understandable by producing the knowledge, of the proposition expressed by u, that it is a correct answer to Q.”(p 18)</p>
<p>Achinstein goes on to say that explain can be used in two senses: a loose sense, in which any situation fulfilling the above contains an explanation; and a more restricted sense, which will “cover only correct explainings”. He does not describe how these may be differentiated, however.</p>
<ol>
<li>
<ol>
<li>
<ol>
<li>Understanding</li>
</ol></li>
</ol></li>
</ol>
<p>If explaining is to be seen as an act directed at engendering understanding, some account of what is meant by understanding must be supplied. Achinstein asserts that:</p>
<p>"One understands q only if one knows a correct answer to Q which one knows to be correct (sic)… we can say that a necessary condition for the truth of sentences of the form 'A understands q' is</p>
<p>(1) (∃x)(A knows of x that it is a correct answer to Q)" (pp 23-4)</p>
<p>Achinstein's essential problem is quite simple to express: he wants to say that one doesn't understand something until one not only knows a proposition which expresses the reason for it, and knows that this proposition does in fact express the correct reason, but also has internalised this proposition.</p>
<ol>
<li>
<ol>
<li>
<ol>
<li>
<ol>
<li>Content-giving Propositions</li>
</ol></li>
</ol></li>
</ol></li>
</ol>
<p>Achinstein identifies a class of nouns which he describes as content-giving, such as explanation, meaning, fact, reason. He then defines a class of propositions, constructed using such nouns and the verb to be, which he calls content-giving propositions. These take roughly the form:</p>
<p>the &lt;cg-noun&gt; {that | of | for} &lt;proposition x&gt; is &lt;proposition y&gt;.</p>
<p>Achinstein now expands his definition of understanding to:</p>
<p>“(∃p)(A knows of p that it is a correct answer to Q, and p is a complete content-giving proposition with respect to Q)” (p 42) .</p>
<p>Achinstein examines a number of accounts of the epistemic nature of an explanation, largely to use them as Aunt Sallies, before presenting his preferred account. He examines first explanation considered as sentence and as proposition.</p>
<p>These are rejected for reasons which appear to me quite strange. Achinstein has trouble with the idea that two sentences can be differently constructed but have the same meaning. E.g.:</p>
<p>“…since sentence (1) ≠ sentence (2), the explanation of Bill's stomach ache given by Dr. Smith ≠ the explanation of Bill's stomach ache given by Dr. Robinson, which seems unsatisfactory. Intuitively, both doctors have given the same explanation…”(p 76)</p>
<p>What Achinstein dearly wants is some Leibnitzian language in which he can express the semantic content of the sentences. Not having one, it is not sufficient for him to posit that such a thing must exist; so he claims that behind every sentence there is a (presumably unique) proposition, in English. Thus:</p>
<p>“Since sentences (1) and (2) express the same proposition we can conclude that the explanations given by the doctors are the same.”(p 76)</p>
<p>For evidence that the proposition is considered to be unique, see the bizarre argument presented as the 'Locutionary Force Problem' on p 77. In this (by my reading) he claims that:</p>
<p>if the product of an explanation is a proposition, and it is possible to produce, as separate acts of explanation and of criticism, utterances which map onto the same proposition, then that proposition is at once a criticism and an explanation, which is absurd. Therefore the product of an explanation cannot be a proposition.</p>
<p>I doubt whether this argument has meaning, let alone force. In this and other arguments, Achinstein seems to confound operations which would be legitimate on some formal grammar with what it is possible to do with natural language.</p>
<p>His view that propositions are unique is made more difficult to maintain by his apparent identification of the propositions themselves with the sequence of symbols used to represent them. For a statement of the view that sequences of symbols are efficient (i.e. that they may have many differing meanings), see e.g. [Barwise and Perry 86, p 52].</p>
<p>He also has trouble with emphasis shifts in propositions which are represented by the same sequence of word-tokens, which he resolves by the concept of e-sentences - sentences with additional flagging to indicate the central phrase, so that:</p>
<p>"(7) the e-sentence 'Bill ate spoiled meat on Tuesday' is not identical with (8) the e-sentence 'Bill ate spoiled meat on Tuesday'" (p 80)</p>
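<p>The distinction can be modelled as a pair: the word-tokens plus a flag marking the emphasised phrase. This is my own sketch, not Achinstein's notation, and the emphasis positions are hypothetical (the typography marking the stressed phrases in (7) and (8) has not survived transcription):</p>

```python
# A sketch (mine, not Achinstein's) of an 'e-sentence': a sentence
# paired with a flag marking the emphasised (central) phrase. Two
# e-sentences may share the same word-tokens and yet not be identical.

from dataclasses import dataclass

@dataclass(frozen=True)
class ESentence:
    tokens: tuple     # the sequence of word-tokens
    emphasis: tuple   # indices of the emphasised phrase (hypothetical)

words = ("Bill", "ate", "spoiled", "meat", "on", "Tuesday")
e7 = ESentence(words, emphasis=(2, 3))  # stress on "spoiled meat"
e8 = ESentence(words, emphasis=(5,))    # stress on "Tuesday"

assert e7.tokens == e8.tokens  # the same sequence of symbols...
assert e7 != e8                # ...but not the same e-sentence
```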
<p>He next considers the deductive-nomological view - the view that an explanation is an argument.</p>
<p>Achinstein argues that the form of the deductive-nomological explanation can be seen as an ordered pair &lt;conjunction of premises, conclusion&gt;. This view is dismissed for reasons analogous to those above.</p>
<p>Explanation as ordered pair</p>
<p>Having considered, and dismissed, the preceding accounts of explanation, Achinstein now proposes that explanation should be considered to be an ordered pair &lt;x, y&gt; s.t.:</p>
<p>"The explanation of q given by S denotes (x; y) if and only if</p>
<p>(i) Q is a content-question;</p>
<p>(ii) x is a complete content-giving proposition with respect to Q;</p>
<p>(iii) y = explaining q;</p>
<p>(iv) (∃a)(∃u)[a is an act in which S explained q by uttering u -&gt; (r)(r is associated with a; r=x)]. (i.e., x is the one and only proposition associated with every act in which S explained q by uttering something.)" (p 87)</p>
<p>What this appears to boil down to is this: Achinstein believes any given concatenation of symbols representing a proposition to be identical with that proposition; and that any proposition is unique. (In fact, Achinstein's claim is even stronger than this, for he believes that there is some proposition underneath each sentence; thus, potentially, many sentences may map onto the same proposition, and consequently all these sentences may be treated as being identical (!).)</p>
<p>Thus each and every occurrence of this sequence of symbols is the same thing. Yet he believes, as shown above, that it is impossible for (e.g.) an explanation to be the same thing as (e.g.) a criticism. So it is impossible for an explanation to be identical with an occurrence of a sequence of symbols.</p>
<p>Simon Brooke: Notes for a thesis entitled Mechanised Inference and Explanation (draught).</p>
<p>Thus an ordered pair must be constructed, in which the symbol-sequence is linked with another symbol sequence which represents a reference to the question which was asked.</p>
<p>Note that q, the question, is also a sequence of symbols; so that a pair:</p>
<p>&lt;[all plants are green]; [answering: “why is this grass green”]&gt;</p>
<p>is unique, and is always the same explanation. Therefore (since Achinstein's schema makes no mention of an auditor) it is always either a good explanation or a bad one, regardless of whether the question was asked by a three year old child or a plant physiologist.</p>
<p>I would like to return the reader's attention to Achinstein's claim that:</p>
<p>"One understands q only if one knows a correct answer to Q which one knows to be <em>correct</em>" [p 23, my emphasis]</p>
<p>He goes on to clarify that, by this, he implies "…a de re sense of knowing."</p>
<p>The use of 'correct' here is the problem. It appears to imply that one knows that the answer which one knows maps in some unproblematic way onto something real in the external world. Thus Achinstein, too, falls into the trap sprung by the radicals. In the absence of some definition of what is to be understood by 'correct', these definitions are simply meaningless.</p>
<p>This account does not address many of the facets of common sense explanation. It has nothing to say about the amount of detail contained in an explanation. It has nothing to say about the need to express an explanation in terms of an account of the world which is accessible to the auditor. It fails to account for the possibility of explanation by analogy, or of unintentional explanation.</p>
<p>It appears that Achinstein's motivation in producing a new account has less to do with addressing these real world problems than with overcoming such philosophical puzzles as the Paradox of the Ravens; so his account takes us no nearer to providing a model which will support the construction of better common sense explanations.</p>
<p>van Fraassen</p>
<p>Another philosopher who has attracted attention with his work on explanation in recent years is Bas van Fraassen. His work seems directed, however, not primarily at providing better accounts of explanation, so much as at defending the empirical view of science from the realist critics of it - surely, at this late date, a redundant activity. The work is marred, in my opinion, by intellectually dishonest treatment of the radicals, to whom van Fraassen has no honest response.</p>
<p>van Fraassen's response to the radicals</p>
<p>van Fraassen published his <em>The Scientific Image</em> in 1980, in the age of Feyerabend and Kuhn. He mentions both these authors twice, and is clearly familiar with their theses, describing Feyerabend's work as 'well known' (page 14), and paraphrasing one of Feyerabend's central points (although without acknowledging it):</p>
<p>"… one theory says that there are electrons, and the other says that there may not be. Even if the observable phenomena are as Rutherford says, the unobservable may be different. However, the positivists would say, if you argue that way, you automatically become a prey to scepticism. You will have to admit that there are possibilities that you cannot prove or disprove by experiment, and so you will have to say we cannot know what the world is like. Worse; you will have no reason to reject any number of outlandish possibilities; demons, witchcraft, hidden powers contributing to fantastic ends." (page 35, my emphasis)</p>
<p>Well yes, precisely. And although it is possible that the positivists would say it, Feyerabend did say it, repeatedly; and the choice of words seems to me to betray a consciousness of this [see for example Feyerabend, 81, vol 1, foot of page 196 - which other evidence (below) shows us van Fraassen was familiar with]. But the only time van Fraassen addresses an argument of Feyerabend's directly, he dismisses it as '…a totally false issue…' (page 93). Let us look a little at the context for this.</p>
<p>van Fraassen is discussing the centrality of explanation to science, citing - and accepting - Nagel's strong claim [Nagel 61, page 4], and using incidentally the same passage from Nagel as Feyerabend had used in his earlier critique of this work [Feyerabend 81, page 52; this paper originally published in BJPS vol 16, 1964] to illustrate this. He claims that Nagel's view does not entail realism, but will equally support an empiricist view. He now claims that Feyerabend has advanced the '..totally false..' argument that, in his paraphrase:</p>
<p>'…only Realism is a philosophy that stimulates scientific enquiry; anti-realism hampers it.' [page 93]</p>
<p>This is a position that Feyerabend advanced? Really? Feyerabend, whose central thesis might be summed up in his phrase "The only principle that does not inhibit progress is: anything goes." [Feyerabend 75, p 10: F's emphasis]. van Fraassen refers us to Feyerabend's paper [in Feyerabend 81 vol 1]. What Feyerabend in fact says (forgive me if I quote at length) is:</p>
<p>"To sum up: the issue between realism and instrumentalism has many facets….. There are arguments for instrumentalism which concern specific theories such as the quantum theory or the heliocentric hypothesis and which are based on specific facts and well confirmed theories. It was shown that to demand realism in these cases amounts to demanding support for implausible conjectures which possess no independent empirical support and which are inconsistent with facts and well confirmed theories. It was also shown that this is a plausible demand which immediately follows from the principle of testability. Hence realism is preferable to instrumentalism even in these most difficult cases." [Feyerabend 81 vol 1 pp 201-2; Feyerabend's emphasis]</p>
<p>Realism is preferable to instrumentalism. That is a very different (and far weaker) claim than 'only Realism stimulates scientific enquiry'. So the 'totally false issue' is of van Fraassen's own devising, and has nothing to do with Feyerabend.</p>
<p>I have gone on at some length about van Fraassen's slight (and slighting) mention of Feyerabend. Why? Well, because van Fraassen, continuing the now moribund debate between realism and empiricism in order to defend the latter, addresses Feyerabend's criticisms of neither doctrine. This seems an extraordinary omission.</p>
<p>&lt;Here insert some stuff about van F's account of explanation…&gt;</p>
<p>Toulmin</p>
<p>Turning aside for the moment from those philosophers who have made a study of explanation per se, there is another whose work is currently attracting fashionable attention in Expert Systems (and especially mechanised explanation) circles; one must make passing reference to Toulmin, and, unless one chooses to use his argument schema, one must produce a very good argument for not doing so. Therefore it is important to know what he actually argued.</p>
<p>Toulmin's programme was to replace the syllogistic with a new logical schema of his own. He did not address symbolic logic except by dismissing it as irrelevant to the study of real world arguments:</p>
<p>"We shall have to replace mathematically-idealised logical relations - timeless context-free relations between either statements or propositions - by relations which in practical fact are no more than the statements to which they relate. This is not to say that the elaborate mathematical systems which constitute symbolic logic must now be thrown away; but only that people with intellectual capital invested in them should retain no illusions about the extent of their relevance to practical arguments." (Toulmin 58, p 185)</p>
<p>or again:</p>
<p>"…the question needs to be pressed, whether this branch of mathematics (propositional calculus) is entitled to the name 'logical theory'. If we give it this name, we imply that the propositional calculus plays a part in the assessment of actual arguments comparable to that played by physical theory in explaining actual physical phenomena. But this we have seen reason to doubt: this branch of mathematics does not form the theoretical part of logic… By now, the mathematician's logic has become a frozen calculus, having no functional connection with the canons for assessing the strength and cogency of arguments." (Ibid, p 186 - and he goes on in this vein!)</p>
<p>These contentions seem polemical, even propagandistic, in tone; they are certainly not, in Toulmin's own terminology, candid. They advance scant evidence in support of their allegations. They seem out of place in a work whose matter is the analysis of argument; and they lie especially uneasy with Toulmin's claim to be seeking to make argument more scrutable.</p>
<p>He goes on to attack the concept of 'logical form' and formal validity in argument:</p>
<p>"…the suggestion that validity is to be explained in terms of formal properties, in any geometric sense, loses its plausibility." (Ibid, p 120)</p>
<p>He argues that the conception that arguments in general could or should have a logical form arises out of a false start in the study of logic:</p>
<p>"The development of logical theory… began historically with the study of a rather special class of arguments - namely, the class of unequivocal, analytic, formally valid arguments with a universal statement as major premiss. Arguments in this class are exceptional in four different ways, which together make them a bad example for general study." (Ibid, p 144)</p>
<p>Clearly, if arguments have no logical form, it is not possible to advance a decision process for them, and Toulmin does not attempt to do so. Rather, the burden of his argument is that decision processes are (for real world arguments) impossible in principle.</p>
<p>Thus those practitioners in the field of Expert Systems who advance Toulmin as an authority must face up to this corollary: if Toulmin's arguments are valid, then there cannot ever be a successful Expert System based on any technology we have now available, as all current Expert Systems are based on some logical formalism (even if, as in the case of production systems, this formalism is crude); and all, certainly, claim to be decision processes for arguments in their domain. As such, according to Toulmin, they are the modern equivalents of the Philosopher's stone - they cannot exist.</p>
<p>He was, to be fair to him, writing in the year that LISP was being developed, when the possibility of mechanical inference was dreamed of only by visionaries like Turing; nevertheless the cavalier manner in which he dismisses all of the 19th and 20th century advances in logic significantly weakens his claims to be taken seriously in this field.</p>
<p>But if Toulmin cannot be taken seriously as a logician, is there any residual value in his work on argument? I think it is possible that there is, because the argument schema with which he hoped to replace both the traditional syllogism, and the formalisms of modern logic, does indeed capture the content of normal conversational discussion in an easily manageable form. Had Toulmin advanced this and this only, I think he might have been taken more seriously; and I think there is merit in the current revival of interest in it.</p>
<p>the Schema</p>
<p>Toulmin's concern is with arguments as they are presented informally, in conversation or writing, and the relation between the form of such informal arguments and the forms recognised by formal logic.</p>
<p>He questions whether the aristotelian standard form of an argument is candid; that is, whether it is presented in such a way as to make its merits as argument most manifest. He suggests that the analysis into only three elements, major premiss, minor premiss, conclusion, may be artificially simplified:</p>
<p>"Simplicity is of course a merit, but may it not in this case have been bought too dearly? Can we properly classify all the elements in our arguments under the three headings, major premiss, minor premiss and conclusion, or are these categories misleadingly few in number? Is there even enough similarity between major and minor premisses for them usefully to be yoked together by the single name…?"</p>
<p>He draws on the practice of jurisprudence to find alternative schemae; and synthesises one such comprising the following elements: a statement of some assertion, which implicitly carries with it a <em>claim</em> as to the truth of the assertion; and <em>data</em>, information which is consensual at the time of argument (or is supported by some further argument), which tends to support that claim; some inference rule, a <em>warrant</em>, which will allow the argument to move from the data to the claim (for example, 'all as are bs'), again optionally supported by some further argument or <em>backing</em>; an optional <em>qualifier</em> (e.g. 'probably', 'I believe'); and, implicit in the qualifier, the possibility of a <em>rebuttal</em>:</p>
<pre>
D ————————————————&gt; So, Q, C
         |                 |
       Since            Unless
         W                 R
         |
   on account of
         B
</pre>
<p>(after Toulmin, p 104: my emphasis of optional elements)</p>
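<p>The schema lends itself naturally to a record representation. The following is my own sketch, not anything Toulmin gives, though the field names follow his terminology and the worked example is his famous Bermuda case (Toulmin 58); allowing data and backing to be further Arguments is what permits argument structures to be daisy-chained:</p>

```python
# A sketch (mine, not Toulmin's) of the argument schema as a record
# type: claim and data are mandatory; warrant, backing, qualifier and
# rebuttal are optional. Letting data or backing be a further Argument
# allows arguments to be daisy-chained into nested structures.

from dataclasses import dataclass
from typing import Optional, Union

@dataclass
class Argument:
    claim: str
    data: Union[str, "Argument"]
    warrant: Optional[str] = None
    backing: Union[str, "Argument", None] = None
    qualifier: Optional[str] = None
    rebuttal: Optional[str] = None

# Toulmin's own worked example, rendered in this sketch:
harry = Argument(
    claim="Harry is a British subject",
    data="Harry was born in Bermuda",
    warrant="a man born in Bermuda will generally be a British subject",
    backing="the relevant statutes and legal provisions",
    qualifier="presumably",
    rebuttal="unless both his parents were aliens")
```

Because the `data` and `backing` fields may themselves hold `Argument` records, a large argument can be folded up and then unfolded to any chosen level of detail, which is the property exploited below.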
<p>In conversation, Toulmin argues, it may be natural simply to say '&lt;data&gt; so &lt;claim&gt;'; to say '&lt;claim&gt; because &lt;warrant&gt; because &lt;data&gt;' "…strikes us as cumbrous and artificial, for it puts in an extra step which is trivial and unnecessary".</p>
<p>Toulmin sets out to validate this schema by comparing it with the syllogism, and asking "What corresponds in the syllogism to our distinction between data, warrant, and backing?" He devotes special attention to arguments of the forms 'Almost all As are Bs' and 'Scarcely any As are Bs' - forms which are, of course, of the utmost importance in default logics, but which are beyond the scope of the syllogistic. He shows that such arguments are representable using his schema without difficulty, and that the traditional syllogism forms are also so representable.</p>
<p>But beyond this he observes that the major premiss of a syllogism plays a dual role with hegemonistic implications: it is at once an inference step and an assertion of some piece of information. Thus one may challenge it in its role as warrant, on the basis that it is not relevant to the case, and as backing, on the grounds that it is not true. Toulmin's schema, by separating these roles, makes clearer from what grounds a counter argument can be launched.</p>
<p>While Toulmin's ambitious claims for his schema as the authoritative representation of argument - and his audacious dismissal of modern logic - seem unworthy of attention, his representation schema does offer some merits in the representation of real world arguments, especially those which appeal to default reasoning. Furthermore, the fact that these argument structures can be daisy-chained offers a convenient way to pack up large arguments into a structure which can be unfolded to any given level of detail, and this property makes the schema an interesting candidate argument representation as we look for ways to control the amount of detail revealed to the user of an Expert System.</p>
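<p>To make the daisy-chaining and unfolding properties concrete, the schema can be sketched as a recursive data structure. This is my own illustration, not Toulmin's: all the names are invented, and the Harry example is adapted from Toulmin's own.</p>

```python
from dataclasses import dataclass
from typing import Optional, Union

@dataclass
class Argument:
    """One step in Toulmin's schema; field names are my own labels."""
    claim: str
    data: "Union[str, Argument]"     # a datum may itself be an argument: daisy-chaining
    warrant: str
    backing: Optional[str] = None    # optional support for the warrant
    qualifier: Optional[str] = None  # e.g. "probably", "I believe"
    rebuttal: Optional[str] = None   # the "unless" condition

def unfold(arg: Argument, depth: int) -> str:
    """Render an argument, expanding chained sub-arguments only to `depth`."""
    datum = arg.data
    if isinstance(datum, Argument):
        # below the requested depth, pack the sub-argument up as its bare claim
        datum = unfold(datum, depth - 1) if depth > 0 else f"<{datum.claim}>"
    q = f", {arg.qualifier}," if arg.qualifier else ""
    return f"{datum} so{q} {arg.claim}"

inner = Argument(claim="Harry is a British subject",
                 data="Harry was born in Bermuda",
                 warrant="a man born in Bermuda will generally be a British subject")
outer = Argument(claim="Harry may vote in British elections",
                 data=inner,
                 warrant="British subjects may generally vote in British elections",
                 qualifier="presumably")

print(unfold(outer, 0))  # collapsed to one step
print(unfold(outer, 1))  # one level of the chain unfolded
```

<p>The point of the sketch is only that one structure serves both the packed and the unfolded presentations, which is exactly the property wanted for controlling detail in an explanation.</p>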
<p>His contention that argument can be hegemonic is one which I shall want to</p>
<p>examine further.</p>
<p>Towards a Situation Semantic account of Explanation</p>
<p>Part of Barwise and Perry's programme in constructing Situation Semantics was to elucidate the problem of the transfer of knowledge from one person to another, and this is, of course, one of the central problems in an account of explanation.</p>
<p>However, they give no account of explanation as such. What follows is my guess at what their account would look like, had they formally stated it. This should not be taken too seriously, as their formalism is rich and complex, and I have by no means mastered it. Certainly, any howlers in what follows are all my own work.</p>
<p>My first cut at the problem looked like this. An explanation was to be that</p>
<p>which happened in a situation E defined:</p>
<p>° E := at jj: understands, _a, gg no</p>
<p>understands, p_, _c_; yes</p>
<p>enquirlng, g; yes</p>
<p>addressing, _a_, _b_; yes</p>
<p>S¤Yl¤9. Q. 9; l/GS</p>
<p>subject, g, g; yes</p>
<p>atlz: responding, bi, g; yes</p>
<p>addressing, _b_, a_; yes</p>
<p>\$¤Yl¤9. 9.. ll; VSS</p>
<p>subject, _u, jg; yes</p>
<p>atj3: understands, gd, g; yes</p>
<p>understands, p_, _c_; yes `</p>
<p>I1 `&lt; lz &lt;Is</p>
<p>»» where: a, b are some actors; c is some concept; q, u are some utterances. "</p>
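<p>The definition of E transcribes fairly directly into code. The sketch below (purely illustrative; only the relation names and polarities are taken from the definition) models a situation as a set of facts, each a relation with its arguments, located at a time and carrying a polarity:</p>

```python
# A situation is modelled here as a set of (location, relation, args, polarity) facts.
E = {
    ("l1", "understands", ("a", "c"), False),
    ("l1", "understands", ("b", "c"), True),
    ("l1", "enquiring",   ("a",),     True),
    ("l1", "addressing",  ("a", "b"), True),
    ("l1", "saying",      ("a", "q"), True),
    ("l1", "subject",     ("q", "c"), True),
    ("l2", "responding",  ("b", "q"), True),
    ("l2", "addressing",  ("b", "a"), True),
    ("l2", "saying",      ("b", "u"), True),
    ("l2", "subject",     ("u", "c"), True),
    ("l3", "understands", ("a", "c"), True),
    ("l3", "understands", ("b", "c"), True),
}

def holds(situation, loc, rel, args):
    """Polarity of rel(args) at location loc, or None if unresolved there."""
    for (l, r, a, pol) in situation:
        if (l, r, a) == (loc, rel, args):
            return pol
    return None

# The transfer of understanding: a does not understand c at l1, but does at l3.
assert holds(E, "l1", "understands", ("a", "c")) is False
assert holds(E, "l3", "understands", ("a", "c")) is True
```

<p>Note that this representation captures only what is resolved in the situation; everything else is simply absent, which is faithful to the partiality of situations.</p>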
<p>This begs a number of questions. Firstly, what relationship is meant by understands, x, y? This question is, of course, the central one for the whole programme of Situation Semantics; when it can be answered, the programme will have succeeded. Secondly, how can we represent the (presumed) causal dependence of the understanding at l3 on the responding at l2? Thirdly, what implications (if any) does this have for the semantic content of u?</p>
<p>However, this definition does have some points of value. Firstly, an explanation is located in a situation with at least two roles (which, incidentally, cannot be taken by the same individual); secondly, a transfer of understanding takes place. Neither of these points is met by traditional accounts.</p>
<p>So what we need to add is something like this. The sage's response, u, must be seen as a list of statements [s1, s2, … sn], such that, for any statement sj in u, sj is understandable in terms of [&lt;student's state of knowledge at l1&gt; + s1 + … + sj-1], and also that c, the concept originally asked about, is understandable in terms of [&lt;student's state of knowledge at l1&gt; + s1 + … + sn]. This (fortunately) accords with Barwise and Perry's understanding:</p>
<p>"An utterance u of a sentence l/I gives us not just one discourse _</p>
<p>situation d for 1/J that is part of u, but also discourse situations d G for</p>
<p>every constituent expression 0 of ltr, discourse situations that are also</p>
<p>parts of u… Thus, part of what the utterance gives us is the set</p>
<p>{d G : cz is zlr or a constituent of lp}</p>
<p>of discourse situations." [Ibid p 123]</p>
<p>This is inadequate, because (among other things) it fails to represent the</p>
<p>negotiation which is inherent in the construction of many real world</p>
<p>explanations.</p>
<p>However, it appears to be a step along a potentially interesting path, and</p>
<p>one which I intend to pursue.</p>
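<p>The incremental condition on the sage's response can be stated in code. What "understandable" means is, as noted above, the unanswered central question; the crude stand-in used here - a statement is understandable if every term it uses is already known - is my own assumption, as are all the names:</p>

```python
def understandable(statement, knowledge):
    """Crude placeholder for the 'understands' relation: a statement is
    understandable iff every term it uses is already in the knowledge set."""
    return statement["uses"] <= knowledge

def explains(response, concept, knowledge_at_l1):
    """Check the condition on the sage's response: each statement s_j must be
    understandable given the student's knowledge at l1 plus s_1 .. s_j-1, and
    the whole list must render the original concept understandable."""
    knowledge = set(knowledge_at_l1)
    for s in response:
        if not understandable(s, knowledge):
            return False
        knowledge |= s["introduces"]   # each statement extends what is known
    return concept <= knowledge

# Invented example: an explanation fails if it presumes a term never introduced.
u = [{"uses": {"sun"},               "introduces": {"rotation"}},
     {"uses": {"rotation", "earth"}, "introduces": {"day"}}]

print(explains(u, {"day"}, {"sun"}))           # "earth" never becomes known
print(explains(u, {"day"}, {"sun", "earth"}))  # each step builds on the last
```

<p>Even this caricature exhibits the property wanted: the ordering of the statements matters, because each is licensed only by what precedes it.</p>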
<p>A lemma concerning the nature of perception</p>
<p>The major argument which I wish to develop in this chapter is based on a very important - and contentious - assumption: there may (or may not) be such a thing as a real world, but its existence is largely irrelevant, as if it does exist, we are unable to perceive it directly. So before going on to advance this major argument, I will attempt to support this lemma.</p>
<p>In van Fraassen's words, "the human organism is, from the point of view of physics, a certain kind of measuring apparatus. As such, it has certain inherent limitations…" [van Fraassen 80, p 17] Precisely: and as van Fraassen observes, our perceptions of the world are conditioned by the nature of this apparatus, and by these limitations, which, as he goes on to claim, "…will be described in the final physics and biology." [ibid] But how can this be? We are to investigate the nature of these instruments, using, as analytic tools, the instruments themselves.</p>
<p>Take for example the human eye. We know what the eye looks like; we've seen many of them: with our eyes. We know their physical nature, because biologists have dissected them, and using their eyes, have made careful diagrams of what they have seen, which we, in turn, can examine using our eyes.</p>
<p>So, if our theories about the nature of the eye are right, and it does indeed project a two dimensional image of a three dimensional world (supposing such exists) onto our retinas, then our theories about the nature of the eye are right.</p>
<p>This argument is not as far fetched as it sounds: after all, the memory of a computer [assuming, for the moment, that such things exist] is strictly one dimensional - a vector of cells. Yet computer-aided design software, for example, or flight simulator software, renders an image from this one-dimensional environment onto a two dimensional screen in such a way that we interpret it in three dimensions.</p>
<p>If such transformations between a one dimensional reality and a three dimensional</p>
<p>appearance are possible in a machine, why not in our own optical systems?1</p>
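<p>The point about rendering from one-dimensional memory can be made concrete. In this sketch (illustrative only, with invented names), a "screen" is a flat vector of cells, addressed two-dimensionally by nothing more than index arithmetic, exactly as a framebuffer is:</p>

```python
# A "screen" is a flat vector of cells: computer memory is one-dimensional.
WIDTH, HEIGHT = 8, 4
framebuffer = [" "] * (WIDTH * HEIGHT)

def plot(x, y, ch):
    """Address a two-dimensional point inside the one-dimensional vector."""
    framebuffer[y * WIDTH + x] = ch

plot(2, 1, "*")
plot(5, 3, "*")

# Recover the two-dimensional appearance by slicing the vector into rows.
rows = ["".join(framebuffer[r * WIDTH:(r + 1) * WIDTH]) for r in range(HEIGHT)]
print("\n".join(rows))
```

<p>The two-dimensional "picture" exists only in the interpretation imposed on the flat vector; the underlying reality of the representation has a quite different dimensionality, which is the point of the analogy.</p>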
<p>Furthermore, our theories about the nature of the world are conditioned by the languages in which we express them. Different languages allow for the description of different theories about reality. Again, different communities (or even individuals) using what they believe to be the same language (say, for example, English) may attach sufficiently different interpretations to the same linguistic symbols and constructs that communication fails, without either of the parties being aware that it has failed. This point, about the incommensurability of (some) theories, has been made many times [see eg Kuhn, Feyerabend]; but some of the consequences require exploration.</p>
<p>An argument concerning the nature of explanation</p>
<p>Firstly, if we cannot access reality except through the medium of theory, then Nagel's claim that an explanation is a mapping from a statement to reality must fall. It becomes clear that explanation can at best be a mapping from a statement to a theory. Is this a good account of the nature of explanation?</p>
<p>Well, let us consider what, in common sense, we think of as a good explanation. We feel we have received a good explanation if we understand both the explanation and the explicandum; if it makes sense: if, in fact, we do not perceive it as grossly inconsistent with the body of belief or knowledge which we already hold.</p>
<p>For example, most modern Western people would not think that the statement:</p>
<p>the God Ra drives his chariot daily across the rainbow bridge</p>
<p>was a good explanation of why the sun rises and sets; we do not believe that there is a God Ra, nor that what we perceive as the sun is identical with this being, nor that a rainbow is sufficiently rigid and strong to bear the weight of such a being,</p>
<p>1 This argument that our perception of a real world does not prove its existence is not new, of course. Here is a classic statement of a similar argument from Berkeley's First Dialogue of Hylas and Philonous:</p>
<p>Hyl.: Do we not perceive the stars and moon, for example, to be a great way off? Is not this, I say, manifest to the senses?</p>
<p>Phil.: Do you not in a dream too perceive those or like objects?</p>
<p>Hyl.: I do.</p>
<p>Phil.: And have they not then the same appearance of distance?</p>
<p>Hyl.: They have.</p>
<p>Phil.: But you do not thence conclude the apparitions in a dream to be without the mind?</p>
<p>Hyl.: By no means.</p>
<p>Phil.: You ought not therefore to conclude that sensible objects are without the mind, from their appearance or manner wherein they are perceived.</p>
<p>Hyl.: I acknowledge it.</p>
<p>should it exist. But to a first kingdom Egyptian, it may have been an excellent explanation. Similarly, that first kingdom Egyptian would not accept the statement:</p>
<p>the Earth rotates on its axis once every day</p>
<p>as a good explanation for the occurrence of the same phenomenon. He would know perfectly well that the earth, being fixed, could not rotate on its axis. And even if it could, why should this influence the daily ritual of Ra?</p>
<p>We consider an explanation good if it maps a statement about the explicandum onto the theory, or body of belief, which we currently hold.1</p>
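<p>This account can be caricatured in a few lines of code. The crude test below - an explanation is judged good if it presupposes nothing the hearer positively disbelieves - is my own simplification of the claim, with invented data mirroring the Ra example:</p>

```python
def good_explanation(explanation, beliefs):
    """An explanation is judged good iff it does not conflict with the
    hearer's current body of belief: here, a crude consistency check
    that it presupposes nothing the hearer positively disbelieves."""
    return not (explanation["presupposes"] & beliefs["rejected"])

ra       = {"presupposes": {"the god Ra exists", "a rainbow can bear a chariot"}}
rotation = {"presupposes": {"the earth rotates"}}

modern   = {"rejected": {"the god Ra exists", "a rainbow can bear a chariot"}}
egyptian = {"rejected": {"the earth rotates"}}

print(good_explanation(ra, modern))         # False: conflicts with modern belief
print(good_explanation(rotation, egyptian)) # False: conflicts with Egyptian belief
```

<p>The same explanation comes out good against one body of belief and bad against another, which is exactly the relativity the argument claims.</p>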
<p>[Main text of argument. In the tradition of philosophy of science I intend to</p>
<p>draw on examples from two genuine debates, drawn in this instance from the</p>
<p>development of the theory of evolution. These debates are</p>
<p>The debate between Huxley and Kropotkin over whether co-operation or</p>
<p>competition was the more important factor in the survival of species.</p>
<p>Kropotkin, a leading Anarchist, sought to show that human beings (among other animals) were inherently co-operative, and (implied conclusion) would get along fine in the absence of government. Huxley, a Tory, sought to show that, on the contrary, competition (and, implicitly, capitalism), red in tooth and claw, was natural.</p>
<p>The debate between Bateson and Kammerer over whether acquired characteristics were inherited.</p>
<p>Kammerer, then the only scientist capable of breeding many species of amphibian in captivity, showed in a series of experiments that characteristics acquired by parents were inherited by their offspring.</p>
<p>Bateson, in a series of increasingly virulent attacks, ultimately claimed that</p>
<p>these experiments were fraudulent. As no-one else was even capable of</p>
<p>breeding the creatures involved, they could not be repeated.</p>
<p>Kammerer was a communist, and the implicit argument behind his work was that human beings were perfectible; that some parts of the benefits of humane education and culture would be transmitted. Bateson was again a Tory, though not as politically committed as the other figures discussed.</p>
<p>In these debates it is clear that the protagonists sought to explain a phenomenon - in this case evolution - in terms of theories which supported their own views of the world. The act of explanation was clearly being used as a polemic act, to try to persuade the explainee of the correctness of the explainer's ideological stance.]</p>
<p>1 I am grateful to Vernon Pratt for helping me clarify the consequences that this doctrine has for the concepts of theory and belief. If it is the case that there is no access to a real world, then all statements about the nature of the world are of equal - undifferentiable - validity (except in so far as some aesthetic criteria may be applied to them). It remains possible to differentiate between a belief - an unsupported statement about the nature of the world - and a theory: a statement that the world has some property as a consequence of certain other properties which it may have. However, it seems to me that this distinction is of little practical importance.</p>
<p>References:</p>
<p>Philosophy</p>
<p>Barwise, Situations and Attitudes (?)</p>
<p>Feyerabend, Against Method</p>
<p>van Fraassen, B C: The Scientific Image: Clarendon Press, Oxford, 1980</p>
<p>Garfinkel, Forms of Explanation</p>
<p>Nagel, The Structure of Science</p>
<p>Popper, K: Conjectures and Refutations</p>
<p>Toulmin, The Uses of Argument</p>
<p>Linguistics</p>
<p>Sperber, Relevance</p>
<p>Psychology</p>
<p>Antaki, Lay Explanations of Behaviour</p>
<p>Craik, The Nature of Explanation</p>
<p>Draper, S W: A User Centred Concept of Explanation: Alvey Exp SIG 2</p>
<p>O'Malley, C: Applying Studies of Natural Dialogue to Human Computer Interaction: Alvey Exp SIG 2</p>
<p>Artificial Intelligence</p>
<p>Goguen, Reasoning and Natural Explanation</p>
<p>[^1]: Later (p 19), Achinstein refers to the audience. By contrast, he cites (p 20) an alternative formulation by R J Matthews in which the audience is explicitly represented.</p></div></div></div></body></html>

6
docs/codox/Errata.html Normal file
View file

@ -0,0 +1,6 @@
<!DOCTYPE html PUBLIC ""
"">
<html><head><meta charset="UTF-8" /><title>Errata</title><link rel="stylesheet" type="text/css" href="css/default.css" /><link rel="stylesheet" type="text/css" href="css/highlight.css" /><script type="text/javascript" src="js/highlight.min.js"></script><script type="text/javascript" src="js/jquery.min.js"></script><script type="text/javascript" src="js/page_effects.js"></script><script>hljs.initHighlightingOnLoad();</script></head><body><div id="header"><h2>Generated by <a href="https://github.com/weavejester/codox">Codox</a></h2><h1><a href="index.html"><span class="project-title"><span class="project-name">Wildwood</span> <span class="project-version">0.1.0-SNAPSHOT</span></span></a></h1></div><div class="sidebar primary"><h3 class="no-link"><span class="inner">Project</span></h3><ul class="index-link"><li class="depth-1 "><a href="index.html"><div class="inner">Index</div></a></li></ul><h3 class="no-link"><span class="inner">Topics</span></h3><ul><li class="depth-1 "><a href="AgainstTruth.html"><div class="inner"><span>Against Truth</span></div></a></li><li class="depth-1 "><a href="Analysis.html"><div class="inner"><span></span></div></a></li><li class="depth-1 current"><a href="Errata.html"><div class="inner"><span>Errata</span></div></a></li><li class="depth-1 "><a href="History.html"><div class="inner"><span>History</span></div></a></li><li class="depth-1 "><a href="Manifesto.html"><div class="inner"><span></span></div></a></li><li class="depth-1 "><a href="PredicateSubtext.html"><div class="inner"><span></span></div></a></li><li class="depth-1 "><a href="TheProblem.html"><div class="inner"><span>The Problem</span></div></a></li><li class="depth-1 "><a href="intro.html"><div class="inner"><span># Introduction to Wildwood</span></div></a></li></ul><h3 class="no-link"><span class="inner">Namespaces</span></h3><ul><li class="depth-1 "><a href="wildwood.core.html"><div class="inner"><span>wildwood.core</span></div></a></li></ul></div><div class="document" 
id="content"><div class="doc"><div class="markdown"><h1><a href="#errata" name="errata"></a>Errata</h1>
<ol>
<li>On title page: the claim that Zaphod Beeblebrox said “Hey, what IS truth, man?” in the printed text of Douglas Adams' Hitchhiker's Guide to the Galaxy is false.</li>
</ol></div></div></div></body></html>

469
docs/codox/History.html Normal file
View file

@ -0,0 +1,469 @@
<!DOCTYPE html PUBLIC ""
"">
<html><head><meta charset="UTF-8" /><title>History</title><link rel="stylesheet" type="text/css" href="css/default.css" /><link rel="stylesheet" type="text/css" href="css/highlight.css" /><script type="text/javascript" src="js/highlight.min.js"></script><script type="text/javascript" src="js/jquery.min.js"></script><script type="text/javascript" src="js/page_effects.js"></script><script>hljs.initHighlightingOnLoad();</script></head><body><div id="header"><h2>Generated by <a href="https://github.com/weavejester/codox">Codox</a></h2><h1><a href="index.html"><span class="project-title"><span class="project-name">Wildwood</span> <span class="project-version">0.1.0-SNAPSHOT</span></span></a></h1></div><div class="sidebar primary"><h3 class="no-link"><span class="inner">Project</span></h3><ul class="index-link"><li class="depth-1 "><a href="index.html"><div class="inner">Index</div></a></li></ul><h3 class="no-link"><span class="inner">Topics</span></h3><ul><li class="depth-1 "><a href="AgainstTruth.html"><div class="inner"><span>Against Truth</span></div></a></li><li class="depth-1 "><a href="Analysis.html"><div class="inner"><span></span></div></a></li><li class="depth-1 "><a href="Errata.html"><div class="inner"><span>Errata</span></div></a></li><li class="depth-1 current"><a href="History.html"><div class="inner"><span>History</span></div></a></li><li class="depth-1 "><a href="Manifesto.html"><div class="inner"><span></span></div></a></li><li class="depth-1 "><a href="PredicateSubtext.html"><div class="inner"><span></span></div></a></li><li class="depth-1 "><a href="TheProblem.html"><div class="inner"><span>The Problem</span></div></a></li><li class="depth-1 "><a href="intro.html"><div class="inner"><span># Introduction to Wildwood</span></div></a></li></ul><h3 class="no-link"><span class="inner">Namespaces</span></h3><ul><li class="depth-1 "><a href="wildwood.core.html"><div class="inner"><span>wildwood.core</span></div></a></li></ul></div><div class="document" 
id="content"><div class="doc"><div class="markdown"><h1><a href="#history" name="history"></a>History</h1>
<h2><a href="#history-introduction" name="history-introduction"></a>History: Introduction</h2>
<p>The object of this chapter is to describe and discuss the development of Expert System explanations from the beginning to the most recent systems. The argument which I will try to advance is that development has been continuously driven by the perceived inadequacy of the explanations given; and that, while many ad hoc, and some principled, approaches have been tried, no really adequate explanation system has emerged. Further, I will claim that, as some of the later and more principled explanation systems accurately model the accounts of explanation advanced in current philosophy, the philosophical understanding of explanation is itself inadequate.</p>
<h2><a href="#family-tree-of-systems-discussed" name="family-tree-of-systems-discussed"></a>Family Tree of Systems discussed</h2>
<p>(diagram here)</p>
<p>Chronology relates to publication, and not to implementation. Links are shown where system designers acknowledge influence, or where family resemblance between systems is extremely obvious. In a small field like this, it is reasonably (but not absolutely) safe to assume that major practitioners are up to date with the current literature.</p>
<p>Contrary to the current view, expressed by such authors as Weiner:</p>
<p>“… (Expert) systems include some mechanism for giving explanations, since their credibility depends on the user's ability to follow their reasoning, thereby verifying that an answer is correct.” [Weiner, 80]</p>
<p>This view might be paraphrased as saying that an explanation generator is an intrinsic and essential part of an expert system. By contrast, the first thing that I intend to argue is that:</p>
<h2><a href="#the-earliest-systems-contained-no-explanation-facilities" name="the-earliest-systems-contained-no-explanation-facilities"></a>The earliest systems contained no explanation facilities</h2>
<p>Two of the famous early expert systems, Internist [Pople 77] and Macsyma [Martin &amp; Fateman 71] did not have anything approaching an explanation system and made no claims to have one. Consequently, these will not be discussed at any length here. One other, Dendral, had a command EXPLAIN; and the last, MYCIN, is famous for its explanations. To maintain my claim that neither of these systems had, in their original conception, what we would recognise as an explanation system, we will examine them in detail.</p>
<h2><a href="#dendral" name="dendral"></a>Dendral</h2>
<h3><a href="#general-description-of-the-system" name="general-description-of-the-system"></a>General description of the system</h3>
<p>Dendral is one of the earliest programmes which are conventionally included in the history of Expert Systems. As Jackson says:</p>
<blockquote>
<p>“DENDRAL can be seen as a kind of stepping stone between the older, general-purpose problem solving programs and more recent approaches involving the explicit representation of domain knowledge.” [Jackson 86 p 19]</p>
</blockquote>
<p>The system is designed to deduce the molecular structure of an organic compound from mass-spectrum analysis. It differs from the modern, post MYCIN, conception of an expert system - or indeed even the weaker conception of a knowledge based system- in a number of ways.</p>
<p>Firstly, it operates in batch-mode - that is, when the system is started, it prompts the user for input, and then goes away and analyses this without further interaction. When this is completed, it outputs a report.</p>
<p>Secondly, the program explicitly implements an algorithm, which is described [Buchanan et al 69, section 7].</p>
<p>Most significantly for the purpose of the current argument, although an attempt is made to produce information from which a justification of the conclusion could be reconstructed (by printing out the states of some internal variables at the end of the run), and although the function which causes the state of the variables to be printed is called EXPLAIN, there is no explanation facility as currently understood. This lack is partially made good by a speak option, which causes information about the current hypothesis to be printed out at each stage in the inference process.</p>
<h3><a href="#example-output-" name="example-output-"></a>Example output:</h3>
<pre><code>(EXPLAIN (QUOTE C8H160) s:09046 (QUOTE TEST1) (QUOTE JULY8)) *GOODLIST= (*ETHYL-KETONE 3*)
*BADLIST= (*c-2-ALCOHOL* *PRIMARY-ALCOHOL* *ETHYL-ETHER2* *METHYL-ETHER2* *ETHER2* *ALDEHYDE* *ALCOHOL* *ISO-PROPYL KETONE3* *N-PROPYL-KETONE3* *METHYL-KETONE 3*)
(JULY-4-1968 VERSION) c2*ETHYL-KETONE 3*H8 MOLECULES NO DOUBLE BOND EQUIVS
CH2..CH2.c3H7 c=.0 C2H5, CH2..CH..CH3 C2H5c=.0 C215 CH2..CH2.CH..CH3 CH3 c=.0 C2H5.
DONE
</code></pre>
<blockquote>
<p>{from op. cit. table 10, p 250}</p>
</blockquote>
<h3><a href="#dendral-as-an-expert-system" name="dendral-as-an-expert-system"></a>DENDRAL as an Expert System</h3>
<p>So why should DENDRAL be considered an Expert System? The programme consists of two major components, a structure generator and an evaluation function. Both of these incorporate inference mechanisms, supported by explicit representations of knowledge.</p>
<h3><a href="#the-generate-stage" name="the-generate-stage"></a>The Generate stage</h3>
<p>The input data gives approximate information about the relative quantities of different ion-masses in the compound, and consequently roughly suggests the proportions of elements present. The structure generator generates compounds compatible with the analysis data, by exploiting knowledge about possible and impossible atomic bonds. This knowledge appears to be held essentially as patterns, against which generated patterns are matched. Two primary collections of patterns are maintained, a badlist and a goodlist. The badlist comprises, initially, those primitive compounds which cannot exist in nature; those compounds which are ruled out by features of the input data are added.</p>
<h3><a href="#the-test-stage" name="the-test-stage"></a>The Test stage</h3>
<p>The evaluation function takes structures passed by the generator, and uses a predictor to calculate what the spectrum to be expected from this structure would be. It then compares this against the spectrum originally entered, and scores it for similarity.</p>
<p>The predictor uses some form of a rule engine. My caution in that statement derives from the extremely technical nature of the passage in [Buchanan et al 69, section 4], and the fact that no actual examples of rules are given. These rules determine the way in which a compound is likely to break down under conditions inside the spectrometer, and what new compounds in what proportion will be the products of these breakdowns; generally the form of the rule appears to be a pair:</p>
<pre><code>(&lt;compound-specification&gt; · &lt;product-specification&gt;)
</code></pre>
<p>where &lt;compound-specification&gt; is a description of a compound or class of compounds, and &lt;product-specification&gt; may be a list of compound specifications with information about their proportions, or may, where it is uncertain what the precise products would be, or no further decomposition is likely, be spectrum fragments. The spectrum fragments which form the nodes of the decomposition graph are then summed to generate a predicted spectrum.</p>
<p>So it appears that two main inference mechanisms are employed in DENDRAL: a simple pattern matcher helps to generate hypotheses, and a more sophisticated forward chaining mechanism supports the test stage, eliminating the impossible ones.</p>
<h3><a href="#summary" name="summary"></a>Summary</h3>
<p>It is clear from the above that DENDRAL is an Intelligent knowledge Based System; but the absence of any high-level explanation or justification system, or any method of exploring the inference interactively with the machine, make it very different from what we now think of as an expert system.</p>
<p>Despite this, DENDRAL has a very direct relevance to the present project: as [Buchanan &amp; Feigenbaum 78] report:</p>
<blockquote>
<p>“Another concern has been to exploit the AI methodology to understand better some fundamental questions in the philosophy of science, for example the processes by which explanatory hypotheses are discovered or judged adequate.” (Buchanan &amp; Feigenbaum 78, p 5)</p>
</blockquote>
<p>Thus DENDRAL set out to contribute to exactly the same debate that I am addressing.</p>
<p>Interestingly, although later developments of the DENDRAL family included interactive editing of hypotheses, and although Buchanan was involved in the MYCIN project in the interim, no explanation facility had been added to the system by 1978, the date of the later of these two papers. This may be seen as providing some very tenuous support for one or other of two hypotheses of mine:</p>
<ol>
<li>
<p>It was Davis, with TEIRESIAS, who developed what we now think of as MYCIN's explanation facility;</p>
<li>
<p>It is extremely difficult to add explanation post facto, if the knowledge representation and inference mechanism have not been designed to support it.</p></li>
</ol>
<h2><a href="#mycin" name="mycin"></a>Mycin</h2>
<p>Mycin [Davis et al, 77] is perhaps the program most often seen as the starting point for expert system research, and is certainly the first system which is remembered for its explanation facilities.</p>
<h3><a href="#explanation-facilities" name="explanation-facilities"></a>Explanation Facilities</h3>
<p>What isn't so frequently remembered is that MYCIN itself, the consulting programme, didn't have any explanation facilities. These were provided by a separate front end module, TEIRESIAS, which was intended as a knowledge engineer's tool. The point here is that the MYCIN project did not (as I understand it) expect end users to use - or need - explanations. Rather, the explanation facility was conceived as a high level debugging trace to help the knowledge engineer, presumed to be an “experienced programmer”, with knowledge of the problem domain, to discover what is going on inside the system. Consequently:</p>
<blockquote>
<p>“The fundamental goal of an explanation facility is to enable a program to display a comprehensible account of the motivation for all its actions.” [Davis &amp; Lenat, 82] my emphasis.</p>
</blockquote>
<p>The explanation tells why the machine carried out an action, not why it believes a proposition. This is justified on the grounds that:</p>
<blockquote>
<p>“We assume … that a recap of program actions can be an effective explanation as long as the correct level of detail is chosen.”</p>
<p>“With a program that does symbolic reasoning, recapping offers an easily understood explanation.” [ibid]</p>
</blockquote>
<p>This understanding of the explanation as simply a development of high level trace facilities is confirmed by the fact that the fragments chosen for concatenation to form the explanation are constructed by applying fixed templates to the rules. It is a (perhaps the) classic sweetened backtrace.</p>
<p>Rules were assumed to be comprehensible in themselves because they had been formulated by human experts attempting to formalise their own knowledge of the domain. As such the rules were expected to:</p>
<blockquote>
<p>“…embody accepted patterns of human reasoning, implying that they should be relatively easy to understand, especially for those familiar with the domain. … They also attack the problem at what has been judged (by the expert) to be an appropriate level of detail.”</p>
</blockquote>
<p>The explanation is also helped by the very high level language in which the rules are expressed, with its stylised code and small number of primitive operations, which together made it easy to apply templates to transform executable code into legible English.</p>
<h4><a href="#the-why-question" name="the-why-question"></a>The WHY question</h4>
<p>The WHY question had two different semantics, depending on the mode that MYCIN was in when the question is asked. MYCIN is generally thought of (as it is) as an inference mechanism, so that the literature generally refers to MYCIN working in this mode. But before starting on an inference process on a new case, MYCIN asks a series of questions to allow it to set up the primary attributes of the objects it is instantiating. This is more akin to form filling than to rule driven behaviour.</p>
<p>In the more familiar inference mode, TEIRESIAS' response to a WHY question was to justify its action in asking the current question, by printing the rule currently being executed, in a special template. WHY is thus an immediate-mode command; it is not clear from the material whether it was possible to ask WHY of any question other than the current one.</p>
<p>However, WHY queries can be repeated, to climb back up the inference trace one rule at a time.</p>
<p>In the form filling mode referred to above, WHY queries are simply responded to by printing out some canned text associated with the primary attribute being requested.</p>
<h4><a href="#the-how-question" name="the-how-question"></a>The HOW question</h4>
<p>The HOW query, by contrast, operates on a history list, and requires, as argument, a statement number. The response given is (again templated) a print out of the rule whose test part is given in the numbered statement. Thus there are two quite different semantics to HOW. HOW of a rule which has been evaluated will give what is in some sense a justification (by modus ponens) for belief in the statement - in this sense it might be rendered “how do you know that…”. HOW of a rule which has yet to be evaluated gives procedural information about how to find the truth value of the statement, and might be rendered “how would you find out whether…”. These different semantics are to some extent signalled by the use of different templates.</p>
<p>Some numbered statements, e.g. (5.0) below, do not appear to be the test parts of any rule. It is not made clear what the effect of asking WHY [5.0] would be.</p>
<h4><a href="#example-user-input-in-bold" name="example-user-input-in-bold"></a>Example, user input in bold:</h4>
<pre><code>Where is the suspected portal of entry of organism-1 into this sterile site?
** WHY 
[I.e., why is it important to determine the suspected portal of entry of organism-1 into this sterile site?]
[3.0] ...in order to find out about an organism or class of organisms for which therapy should cover.
It has already been established that
[3.1] The site of the culture is blood [3.2] The stain of the organism is gram positive
Therefore if:
[3.3] The portal of entry of the organism is G.I.
[3.4] [Abdomen is the locus of infection, or
Pelvis is the locus of infection]
Then:
There is strongly suggestive evidence (.9) that Enterobacteriacea is the class of organisms for which therapy should cover.
[Rule 095]
** HOW [3.4]
[I.e., how were you trying to determine that the Abdomen is the locus of infection, or that the pelvis is the locus of infection]
[4.0] At that point Rule 021 was being used.
**HOW [4.0] 
[I.e., how was Rule 021 used?]
[5.0] It has already been established that
[5.1] the culture is recent
Therefore if:
[5.2] There is therapeutically significant disease associated with the occurrence of this organism
Then
It is definite (1.0) that the site of the culture is the locus of infection in the patient.
</code></pre>
<blockquote>
<p>{Taken from Barr &amp; Feigenbaum, vol ii pp 96-97; similar, but more extensive, examples may be found in Davis &amp; Lenat, pp 265-285. More surprising examples appear in Davis et al. pp 35-37}</p>
</blockquote>
<h3><a href="#relevance-filtering" name="relevance-filtering"></a>Relevance filtering</h3>
<p>Another feature of MYCIN (or rather, of TEIRESIAS) which is often forgotten is the advanced relevance filtering, which has rarely been equalled by imitators.</p>
<p>Briefly the problem which gives rise to the need for filtering is this. An explanation (at least one given in the form of a syntactically sugared inference trace) which gives all the steps used to reach a goal will, in real applications, tend to be too long and complex for the user to understand. The critical information will be lost in a mass of trivial detail. This problem does not, of course, arise with toy systems, where the knowledge base is not large enough for extended chains of inference to develop. As Davis writes:</p>
<blockquote>
<p>“In an explanation we must not carry reasoning too far back, or the length of our argument will cause obscurity; neither must we put in all the steps which lead to our conclusion, or we shall waste words by saying what is manifest.”</p>
<p>“Depending on the individual user, it might be best to show all steps in a reasoning chain, to omit those that are definitional or trivial, or, for the most sophisticated user, to display only the highlights.” [Davis &amp; Lenat]</p>
</blockquote>
<p>Later, we find Weiner writing:</p>
<blockquote>
<p>“If an explanation is to be understood, it must not appear complex.” [Weiner, 80]</p>
</blockquote>
<p>TEIRESIAS' relevance filter was based on the certainty factor of inference steps. A function of this was used as a measure of the significance of an inference step, with inferences having a CF of 1 (true in every case) being considered to have no contribution to make to explanation, and lower certainty factors having higher indices on a logarithmic scale. This was seen as a “…clearly imperfect…” solution, but provided:</p>
<blockquote>
<p>“…. a dial with which the user can adjust the level of detail in the explanations. Absolute settings are less important than the ability to make relative adjustments.”</p>
</blockquote>
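<p>The idea can be sketched as follows; the precise function TEIRESIAS applied to certainty factors is not given in the text, so the logarithmic index and the sample steps below are invented for illustration:</p>

```python
import math

# Sketch of a certainty-factor based relevance filter: a step with
# CF = 1.0 scores a detail index of 0 (contributes nothing to the
# explanation); less certain steps score progressively higher on a
# logarithmic scale. The sample steps are invented.

def detail_index(cf):
    """Map a certainty factor onto a logarithmic detail index."""
    return -math.log10(cf) if cf > 0 else float("inf")

def significant_steps(steps, dial=0.0):
    """Keep only the steps whose detail index exceeds the dial setting."""
    return [text for text, cf in steps if detail_index(cf) > dial]

steps = [("the culture site is the locus of infection", 1.0),
         ("Enterobacteriaceae should be covered", 0.9),
         ("the portal of entry is G.I.", 0.2)]

print(significant_steps(steps))
```

<p>Turning the dial up suppresses all but the least certain - and so, on this measure, the most informative - steps; as the quotation says, relative adjustment matters more than the absolute setting.</p>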
<p>This index of explanation abstraction could be used as an optional argument to the WHY query, as in the example:</p>
<pre><code>** WHY 4
We are trying to find out whether the organism has been observed in significant numbers, in order to determine an organism or class of organisms for which therapy should cover.
</code></pre>
<blockquote>
<p>{taken from Davis and Lenat pp 269 - 270. See also ibid pp 265 - 266 for the low level version of this reply. This is a very impressive feature which has been quite neglected in the general literature.}</p>
</blockquote>
<p>This feature is further extended with an EXPLAIN command, which can be used to go over the same ground as an immediately previous WHY command, but at a different level of detail. Thus if the user tried, for example, WHY 10 and got back an answer that was too sketchy, it would be possible to try EXPLAIN 3 to bring the level down. Whether the reverse would be possible - to try EXPLAIN 10 after WHY 3 - is not made clear, but it appears not, as “…the EXPLAIN command will cover the same starting and ending points in the reasoning…”.</p>
<h3><a href="#limitations-of-the-explanation-system" name="limitations-of-the-explanation-system"></a>Limitations of the explanation system</h3>
<p>Parts of the MYCIN expertise (those parts concerned with the selection of drugs are mentioned) were not encoded as rules but were coded in LISP directly. This expertise could not be explained by TEIRESIAS.</p>
<p>Other limitations of the system recognised by Davis &amp; Lenat include the limited semantics of the question commands (i.e. the user was tightly constrained in what could be asked), the fact that beyond the level at which knowledge was primitive in the system, further explanations could not be given, and the lack of any user model, which might help remove unneeded detail from explanations. Furthermore, they observe that “… the system should be able to describe its actions at different conceptual levels…”.</p>
<h3><a href="#conclusion" name="conclusion"></a>Conclusion</h3>
<p>TEIRESIAS is one of the earliest examples of Expert Systems explanation. It is significant that explanation was not seen as being a critical or integral part of MYCIN, but was provided in a separate programme initially intended only as an aid to knowledge engineers and not as part of the consulting system. In this context it is not surprising that it should have developed out of the high level backtrace facilities familiar from LISP programming environments.</p>
<p>Despite the fact that it was a very early attack on the problem, and is in essence simply a syntactically sugared backtrace, TEIRESIAS is highly sophisticated in a number of ways; notably in the provision of an effective (if crude) measure of detail, which allowed for remarkably successful abstraction of high-level explanations from the inference trace.</p>
<p>MYCIN/TEIRESIAS was undoubtedly a revolutionary system in many ways, and it has spawned many derivative systems. But it was less revolutionary than it appeared to be. The explanation facilities, which are made so much of in the literature, are not able to give declarative reasons for belief in propositions. They were not designed to, being conceptually merely very high level trace facilities for the knowledge engineer.</p>
<p>The fact that users eagerly accepted the facilities MYCIN/TEIRESIAS provided indicated that there was a demand for explanation systems.</p>
<h1><a href="#a-wide-variety-of-technical-fixes-have-been-experimented-with" name="a-wide-variety-of-technical-fixes-have-been-experimented-with"></a>A wide variety of technical fixes have been experimented with</h1>
<p>A very wide range of approaches to the problem of providing a high level account of a system's beliefs or actions have been tried. One of the interesting avenues has been that followed by William Swartout, in attempting to provide explanations of what conventional procedural programmes are doing.</p>
<h2><a href="#digitalis-therapy-advisor" name="digitalis-therapy-advisor"></a>Digitalis Therapy Advisor</h2>
<p>This is yet another medical expert system - this time dealing with the administration of digitalis to heart attack sufferers.</p>
<p>The knowledge base maintains a model of the patient's individual response to digitalis - a highly toxic drug to which people have widely varying responses - as well as general information about its properties and administration.</p>
<p>Digitalis Therapy Advisor was written in, and clearly developed with, a prototype self-documenting language called OWL 1. This is clearly a LISP-like language, which incorporates a parser that can translate programme statements into a high-level procedural account. This parser can be applied not only to pieces of code, to show what the programme would do to execute them, but also to items on the history list, to show what it has done.</p>
<h3><a href="#explanation-by-translation-of-programming-language" name="explanation-by-translation-of-programming-language"></a>Explanation by translation of programming language</h3>
<p>Explanation was a key goal in the design of the advisor programme, and its implementation largely exploited this feature of the underlying language. Broadly, English-language templates are associated with each primitive procedure of the language, into which the arguments passed, and on completion the results returned, can be spliced. However the programmer, when writing a new procedure, does not need to supply templates, as the system is able to splice together the templates belonging to the primitives. As the OWL interpreter runs, it builds up an event structure, or trace, of the procedures it has called and what their arguments were:</p>
<blockquote>
<p>“The system is self-documenting in the sense that it can produce English explanations of the procedures it uses and the actions it takes directly from the code it executes. Most of its explanations are produced in this manner, although a few types of explanation are canned phrases… The explanations are designed to be understood by a physician with no programming experience.”</p>
</blockquote>
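<p>The scheme can be sketched as follows; the primitives, templates and trace structure here are invented for illustration, and are not the actual OWL 1 ones:</p>

```python
# Sketch of the self-documenting idea: each primitive operation carries
# an English template, the interpreter records an event trace as it
# runs, and the explanation is produced by splicing arguments and
# results into the templates along the trace.

TEMPLATES = {
    "ask":    "I ASKED THE USER THE {thing}. THE USER RESPONDED THAT {result}.",
    "add":    "I ADDED {item} TO THE {set}.",
    "remove": "I REMOVED {item} FROM THE {set}.",
}

trace = []

def run(op, **args):
    # ...perform the real operation here, then record the event...
    trace.append((op, args))

def explain():
    """Splice the recorded arguments into each primitive's template."""
    return "\n".join(f"{i}: " + TEMPLATES[op].format(**args)
                     for i, (op, args) in enumerate(trace, 1))

run("ask", thing="STATUS OF MYXEDEMA", result="MYXEDEMA WAS PRESENT")
run("add", item="MYXEDEMA", set="PRESENT AND CORRECTABLE CONDITIONS")
print(explain())
```

<p>Because every procedure bottoms out in primitives, templates for new procedures come for free - at the cost of the relentless step-by-step detail visible in the sample explanation below.</p>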
<p>The limitations of this approach are acknowledged, and the work-around used is described:</p>
<blockquote>
<p>"When writing a computer program, it is sometimes necessary to use methods that are totally foreign to users of the system. This may be because the methods employed by humans are unknown, (or) too inefficient… Whenever this situation occurs, it will not be possible to provide explanations by merely translating the code of the program into English…</p>
<p>To deal with this problem … we have attached English comments to the OWL code… When the … method is explained, the comments are displayed along with the translated OWL code."</p>
</blockquote>
<h3><a href="#sample-explanation" name="sample-explanation"></a>Sample Explanation</h3>
<pre><code>DURING THE SESSION ON 9/21/76 AT 11:10, I CHECKED SENSITIVITY DUE TO THYROID-FUNCTION BY EXECUTING THE FOLLOWING STEPS:
1: I ASKED THE USER THE STATUS OF MYXEDEMA. THE USER RESPONDED THAT MYXEDEMA WAS PRESENT.
2: SINCE THE STATUS OF MYXEDEMA WAS PRESENT I DID THE FOLLOWING
2.1 I ADDED MYXEDEMA TO THE PRESENT AND CORRECTABLE CONDITIONS. THE PRESENT AND CORRECTABLE CONDITIONS THEN BECAME MYXEDEMA.
2.2 I REMOVED MYXEDEMA FROM THE DEGRADABLE CONDITIONS. THE DEGRADABLE CONDITIONS THEN BECAME HYPOKALEMIA, HYPOXEMIA, CARDIOMYOPATHIESMI, AND POTENTIAL POTASSIUM LOSS DUE TO DIURETICS
</code></pre>
<blockquote>
<p>{And so on ad nauseam. Taken from Swartout 77, page 822}</p>
</blockquote>
<h3><a href="#explanation-discussion" name="explanation-discussion"></a>Explanation: Discussion</h3>
<p>Essentially this is a syntactic sugaring system, which provides for splicing text fragments into the output where necessary. Clearly, as the methods which are executed are procedural, the explanation given is a procedural explanation - an explanation of why things were done, and not of why things were believed. It appears, for example, that we cannot ask why the various actions were carried out - i.e. what the system was attempting to achieve, as in a MYCIN WHY question - nor why specific things are believed: for example, why hypoxemia is one of the degradable conditions.</p>
<p>Instead of dividing the system into an inference engine and a knowledge base, the knowledge is hard-wired into OWL1 methods (procedures). This approach appears more applicable to domains where an algorithm is available, than to the more classic Expert System domains.</p>
<h2><a href="#xplain" name="xplain"></a>XPLAIN</h2>
<h3><a href="#general-description" name="general-description"></a>General Description</h3>
<p>Swartout's next system, XPLAIN, grew out of the work on Digitalis Therapy Advisor, and parallels its development in that an objective in the design was the justification of programme actions.</p>
<p>The explanation system follows DTA in that explanations were based on expansions of the actual executable code - a sophisticated variant of syntactic sugaring. Swartout argues against the use of canned text:</p>
<blockquote>
<p>“There are several problems with the canned text approach. The fact that the program code and the text strings that explain that code can be changed independently makes it difficult to maintain consistency between what the program does and what it claims to do. Another problem is that all questions must be anticipated in advance… Finally, the system has no conceptual model of what it is saying… Thus, it is difficult to use this approach to provide more advanced sorts of explanations…” [Swartout 83, p 291]</p>
</blockquote>
<p>But now he explores the limitations of the approach used in DTA, mentioning, in addition to those problems noted in his earlier paper [Swartout 77], that redundant and irrelevant information is included in a mechanical expansion of code:</p>
<blockquote>
<p>“The fact that every operation must be explicitly spelled out sometimes forces the programmer to program operations which the physician would perform without thinking…. (e.g.) steps which are involved more with record keeping than with medical reasoning… Since they appear in the code, they are described in the explanation routines, although they are more likely to confuse a physician user than enlighten him. An additional problem is that it is difficult to get an overview of what is really going on…” (ibid, p 293)</p>
</blockquote>
<p>Once again, we see that the inclusion of irrelevant material can mask the important points from the human reader.</p>
<p>Swartout's solution to his rejection both of canned text and of syntactic sugar is to have the computer generate the expert system - called the performance program - and simultaneously, from the same knowledge base, generate a refinement structure which records the transformations from the input information to the performance program. This is exploited in the later construction of explanations. XPLAIN itself is thus an automatic programmer, whose purpose is to write such a system.</p>
<p>The refinement structure is a tree of goals, each being a subgoal of the one above it, at a less abstract level. Each subgoal is successively refined until a primitive of the underlying programming language is produced. The performance program is thus found at the leaves of the refinement structure.</p>
<p>In addition to the refinement structure, knowledge is held in a domain model and a collection of domain principles.</p>
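<p>The refinement structure itself can be sketched as a goal tree; the goals below are loosely paraphrased from the digitalis example, and the representation is invented for illustration:</p>

```python
# Sketch of a refinement structure: each goal maps to less abstract
# subgoals; goals with no entry are primitives of the underlying
# language, and together the leaves form the performance program.

refinement = {
    "anticipate digitalis toxicity": ["check serum potassium",
                                      "check serum calcium"],
    "check serum potassium": ["ask(serum-potassium)",
                              "if low: reduce-dose"],
    "check serum calcium": ["ask(serum-calcium)",
                            "if high: reduce-dose"],
}

def leaves(goal):
    """Collect the primitives reached by successive refinement of a goal."""
    subs = refinement.get(goal)
    if subs is None:
        return [goal]          # a primitive: part of the performance program
    return [leaf for s in subs for leaf in leaves(s)]

print(leaves("anticipate digitalis toxicity"))
```

<p>An explanation of why the performance program does something can then be produced by walking back up from a leaf through the more abstract goals that motivated it.</p>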
<h3><a href="#generating-explanations" name="generating-explanations"></a>Generating explanations</h3>
<p>The explanation system which exploits this structure is constructed in two modules: a phrase generator and an answer generator. The phrase generator's task is to compile sections of the refinement structure into English fragments; the answer generator first chooses which sections of the refinement structure to compile, and then concatenates the generated phrases to produce the answer. In addition to generating phrases from the refinement structure, the explanation generator has access to the domain model and principles.</p>
<p>It applies relevance filtering by maintaining</p>
<blockquote>
<p>“…knowledge of the state of the program execution, knowledge of what has already been said, and knowledge of what the user is likely to be interested in. Other issues the answer generators confront include deciding whether to omit information the user can be presumed to know from the explanation and determining whether analogies can be suggested to previous explanations." [ibid, p 306]</p>
</blockquote>
<p>This is a much more sophisticated conception of what it is relevant to tell the user than we have yet seen.</p>
<h3><a href="#sample-explanation-user-input-in-bold" name="sample-explanation-user-input-in-bold"></a>Sample Explanation, user input in bold</h3>
<pre><code>Please enter the value of serum potassium:
**why?**
The system is anticipating digitalis toxicity. Decreased serum potassium causes increased automaticity, which may cause a change of ventricular fibrillation. Increased digitalis also causes increased automaticity. Thus, if the system observes decreased serum potassium, it reduces the dose of digitalis due to decreased serum potassium.
Please enter the value of serum potassium:
3.7
Please enter the value of serum calcium:
**why?** 
The system is anticipating digitalis toxicity. Increased serum calcium also causes increased automaticity. Thus, (as with decreased serum potassium) if the system observes increased serum calcium, it reduces the dose of digitalis due to increased serum calcium
</code></pre>
<blockquote>
<p>{after Swartout 83, p 294} </p>
</blockquote>
<p>Points to note are that, unlike the DTA example, this explanation tells us why the system is doing what it is, rather than how it is doing it. It now resembles, conceptually, the MYCIN type WHY explanation, except that the text is noticeably more fluid. It is also worth noticing how the generator of the second explanation fragment has taken into account the information given in the earlier one. The procedural explanations, as produced by DTA, are still available. However, the system still cannot produce an account of why it holds a particular belief.</p>
<h2><a href="#apes" name="apes"></a>APES</h2>
<p>Another interest which was developing in the Artificial Intelligence community at the same time as Expert Systems was Logic Programming: implementing restricted subsets of first order predicate calculus as programming languages. The most significant of these languages was PROLOG. It was inevitable that these two strands would come together, and one of the first signs of this was APES - A PROLOG Expert System - developed by Peter Hammond in the early 80s.</p>
<p>Hammond and Sergot discuss the motivation for writing an expert system in PROLOG: they show the structural similarity between the production rules used in the MYCIN family of systems and Horn clauses, note that Horn clauses offer greater expressive power, and claim that this will assist in the construction of:</p>
<blockquote>
<p>“… knowledge bases which are more flexible, more useful, and more elegant than would be possible with less powerful languages.” [Hammond &amp; Sergot 83 p 95]</p>
</blockquote>
<p>The inference engine is constructed as a meta-interpreter in PROLOG, similar in concept to Walker's Syllog [Walker et al, 87]. The explanation mechanism is a syntactic sugaring of the rule trace, clearly modelled closely on MYCIN or some derivative.</p>
<p>Explanation fragments are generated by applying English-language templates, written by the knowledge engineer, to the rules, which are themselves written in a strict Horn-clause form.</p>
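<p>A minimal sketch of this kind of template application follows, using an invented clause representation in which capitalised tokens stand for the variables that appear as angle-bracketed slots in the APES output:</p>

```python
# Sketch of sugaring a Horn clause into the "X if A and B and C" form
# seen in the APES example below. The representation is invented.

rule = {
    "head": ("Person", "should-take", "Drug"),
    "body": [("Person", "complains-of", "Symptom"),
             ("Drug", "suppresses", "Symptom"),
             ("not", "Drug", "is-unsuitable-for", "Person")],
}

def render(clause):
    """Apply a fixed template: head, then 'if', then the body literals
    joined with 'and'."""
    head = " ".join(clause["head"])
    body = " and\n  ".join(" ".join(lit) for lit in clause["body"])
    return f"{head} if\n  {body}"

print(render(rule))
```

<p>Binding the variables to the values used in a particular proof then yields the instantiated lines of the HOW answer.</p>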
<h3><a href="#example-explanation-user-input-in-bold" name="example-explanation-user-input-in-bold"></a>Example explanation, user input in bold</h3>
<pre><code>Is it true that Peter suffers-from peptic-ulcer ?
**why?** 
aspirin aggravates peptic-ulcer
if Peter suffers-from peptic-ulcer
then aspirin is-unsuitable-for Peter
Peter complains-of pain
aspirin suppresses pain
if not aspirin is unsuitable-for Peter
then Peter should-take aspirin
Is it true that Peter suffers-from peptic-ulcer ?
**yes**
Is it true that Peter suffers-from impaired-liver function ?
**no**
==&gt; Peter should-take lomotil.
**how**
To show Peter should-take lomotil I used the rule
&lt;person&gt; should-take &lt;drug&gt; if
&lt;person&gt; complains-of &lt;symptom&gt; and
&lt;drug&gt; suppresses &lt;symptom&gt; and
not &lt;drug&gt; is-unsuitable-for &lt;person&gt;
You said Peter complains-of diarrhoea
I know lomotil suppresses diarrhoea
I can show not lomotil is-unsuitable-for Peter.
</code></pre>
<h3><a href="#discussion" name="discussion"></a>Discussion</h3>
<p>Although PROLOG is a declarative language, and it would seem natural to provide it with a declarative explanation facility, the implementers of APES seem to have been more concerned to demonstrate that existing Expert System functionality could be implemented in PROLOG than to consider what functionality was actually desirable. Thus they provide a system which is similar to but actually cruder than MYCIN - there is, for example, no relevance filtering.</p>
<p>So this must be seen as a toy system, whose only real interest is that it demonstrates that it may be possible to build an explanation system in Prolog. It does not demonstrate that a good explanation system can be built, and it would not effectively handle a knowledge base of any size.</p>
<h2><a href="#syllog" name="syllog"></a>Syllog</h2>
<p>Syllog, like APES, is an attempt to fuse Expert Systems and logic programming. In some senses it is, as I hope to show, a better thought out and better engineered attempt than APES; and this is reflected in the fact that Syllog has been employed by IBM in a number of experimental, but significant, applications (Syllog was developed by Adrian Walker of IBM's Thomas J Watson Research Centre).</p>
<p>Syllog is a rule based system and, like APES, its rules are technically Horn clauses - but they are expressed in a high-level rule language, which makes them easier to understand, and are termed 'syllogisms' by Walker - even though they clearly aren't.</p>
<p>What makes Syllog interesting from the present viewpoint is its explanation system which, although lacking interesting capabilities like relevance filtering, gives explanations that are declarative. The technique of explanation generation is also quite different from preceding systems, in that the rule is (conceptually, at any rate) compiled into the explanation, in something like the way that a conventional language compiler works. The system compiles reasonable English with remarkably little knowledge of the language, and indeed is very simply adapted to work in other natural languages.</p>
<h3><a href="#sample-explanation-1-" name="sample-explanation-1-"></a>Sample explanation (1):</h3>
<p>This sample explains planning a flight from John F Kennedy airport, New York, to San Francisco. It is, essentially, a pretty-printed execution trace, without syntactic sugar.</p>
<pre><code>FLY ( JFK, SFO, 9.30, 15.30)
OK ( JFK, SFO, AMERIC, 9.30,10.0, 15.25, 15.30)
FLIGHT ( AMERIC, 183, JFK, CHI, 10.0, 11.24)
BEFORE( 9.30, 10.0)
LT( 9, 10)
CONNECTION ( CHI, AMERIC, UNITED, 11.24, 11.44)
ADD( 11.24, 0.20, 11.44)
SUM( 11, 0, 11)
SUM( 24, 20, 44)
LT ( 44, 60)
OK( CHI, SFO, UNITED, 11.44, 13.5,15.30)
FLIGHT ( UNITED, 121, CHI, SFO, 13.5, 15.25)
BEFORE ( 11.44, 13.5)
LT ( 11, 13)
BEFORE ( 15.25, 15.30)
EQ ( 15, 15)
LE ( 25, 30)
</code></pre>
<p> {from [Walker 82] page 9}</p>
<h3><a href="#sample-explanation-2-" name="sample-explanation-2-"></a>Sample explanation (2):</h3>
<pre><code>We shall set up testers for 18719 of part chip2 in quarter 3
Yes, that's true
Because...
we shall set up testers for 2273 of card1 in quarter 3
card1 has 7 of immediate part chip2
2273 * 7 = 15911
the expected yield of cardi is 85% based on past experience
15911 divided by 85 (normalized and rounded up) is 18719 
we shall set up testers for 18719 of part chip2 in quarter 3
we plan to ship 1000 of box1 in quarter 3
box1 has 2 of the immediate part card1
the expected yield of card1 is 88%, based on past experience
1000 * 2 = 2000
2000 divided by 88 (normalised and rounded up) is 2273 
we shall set up testers for 2273 of card1 in quarter 3
</code></pre>
<p>{after [Walker et al 87], p 244}</p>
<p>These two explanations look superficially very different, but a careful reading will show that the form of the later (published 1987) explanation is simply a - very competent - syntactic sugaring of exactly the same semantic form as that of the earlier explanation.</p>
<p>Note that (in the later version) the user has to give the system exactly the proposition to be explained. This is supported by a menu system which allows the user to browse through - and pick from - templates for all the statements the system knows about. Once the user has picked a template, further menus help with filling in the blanks.</p>
<p>The slightly weird arithmetic is, as they say, sic: otherwise we see a clearly expressed declarative statement of why just this number of testers are needed. We also see that without relevance filtering, this arrangement is only suitable for relatively shallow search spaces.</p>
<p>To be fair, there is something that serves in place of a relevance filter: the top few nodes of the proof tree constructed by the inference engine are compiled into explanation fragments, which are placed on the screen; this proceeds until the screen is filled. Because (as we argued in [Mott &amp; Brooke 87] - although we were discussing a single path selected from the tree) an inference mechanism chaining backwards will generate a proof from the general to the particular, it can be assumed that a general statement of the explanation will be given first, with what follows being more and more tightly focussed detail. So that what is immediately presented on the screen is likely to be the most important - and perhaps the most relevant - part of the proof.</p>
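<p>That screen-filling behaviour can be sketched as a breadth-first compilation of the proof tree; the tree below is paraphrased from the testers example, and the representation and screen size are invented for illustration:</p>

```python
from collections import deque

# Sketch of compiling the top nodes of a proof tree into explanation
# lines, breadth-first from the root, stopping once the screen fills,
# so that the most general statements come out first.

def compile_explanation(tree, screen_lines=4):
    out, queue = [], deque([tree])
    while queue and len(out) < screen_lines:
        node = queue.popleft()
        out.append(node["text"])
        queue.extend(node.get("children", []))
    return out

proof = {"text": "we shall set up testers for 18719 of part chip2 in quarter 3",
         "children": [
             {"text": "we shall set up testers for 2273 of card1 in quarter 3",
              "children": [{"text": "we plan to ship 1000 of box1 in quarter 3"}]},
             {"text": "the expected yield of card1 is 85%"}]}

print(compile_explanation(proof, screen_lines=3))
```

<p>Because the root of a backward-chained proof is the most general statement, truncating the traversal when the screen fills naturally favours the most important material.</p>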
<p>So, once again, this system should be classified as somewhat ad hoc - an explanation system constructed without a great deal of thought about what explanation is. However, the explanation constructed now conforms to the deductive-nomological account of explanation, rather than (to use Nagel's terminology) the genetic form. So we have arrived at last at the classic explanation form of the Philosophy of Science.</p>
<h2><a href="#arboretum" name="arboretum"></a>Arboretum</h2>
<h3><a href="#general-description" name="general-description"></a>General description</h3>
<p>Arboretum is more completely described in a later chapter, so I will not go into any great detail here. The system was built to demonstrate a decision procedure for a novel non-monotonic logic developed by Peter Mott. The other major innovation of the system was the graphical presentation of rules and of inference traces: this feature has been seen by others as a form of explanation, but is not my central interest here. The generation of textual explanation was not part of the original conception but was added in an ad-hoc manner during implementation.</p>
<p>The explanation system, as we wrote, depended on:</p>
<blockquote>
<p>“… the fact that DTrees (the knowledge representation used) are structured through exceptions from the very general to the more abstruse and particular; and that, in consequence, any path through a rule structure follows a coherent argument, again from the general to the particular. ” [Mott &amp; Brooke 87, p 110]</p>
</blockquote>
<p>This allowed us to attach an explanation fragment to each node, knowing that each implied a unique conclusion for the structure in which it appeared. We used fragments of canned text, because we found this allowed us to produce more fluid explanations, but as we noted:</p>
<blockquote>
<p>“… there is no reason why the system should not be modified to generate explanation fragments itself, for example by using a text macro similar to: &lt;feature of root-node&gt; was found to be &lt;colour of stick-node&gt; because &lt;feature of stick-node&gt; was true.” [Ibid, p 111]</p>
</blockquote>
<h3><a href="#relevence-filtering" name="relevence-filtering"></a>Relevance filtering</h3>
<p>The most interesting feature of this explanation system was that, fortuitously, the evaluation process enabled us to extract precisely that clause in each rule which was relevant to the eventual outcome. We also developed a neat heuristic to the effect that, when generating a 'no' explanation, we should:</p>
<blockquote>
<p>“… concatenate the explanation fragments from the deepest sticking node in each successive tree on the search path. The reason is that this represents the nearest that the claimant got to succeeding in the claim… In the case of a yes decision we chose the opposite approach and select the shallowest sticking node available… it is not relevent to describe how a long and tortuous inference path finally delivered yes when a much shorter, less involved one, did so too.” [Ibid]</p>
</blockquote>
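<p>The heuristic can be sketched as follows, representing each tree on the search path simply as a list of (depth, fragment) sticking nodes; the representation and the fragments are invented for illustration:</p>

```python
# Sketch of the sticking-node heuristic quoted above: for a 'no'
# decision, concatenate the fragment of the deepest sticking node in
# each successive tree (the nearest the claimant got to succeeding);
# for 'yes', just give the shallowest sticking node available.

def explanation(search_path, outcome):
    if outcome == "no":
        return [max(tree, key=lambda n: n[0])[1] for tree in search_path]
    candidates = [node for tree in search_path for node in tree]
    return [min(candidates, key=lambda n: n[0])[1]]

path = [[(1, "you are capable of work"),
         (3, "there was no evidence of contact with the disease")],
        [(2, "you are not a known carrier")]]

print(explanation(path, "no"))
```

<p>The asymmetry is the point: a refusal deserves the fullest account of how close the claim came, while a success needs only the shortest route to yes.</p>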
<h3><a href="#sample-explanation" name="sample-explanation"></a>Sample explanation</h3>
<p>The application here is to the adjudication of claims to health insurance benefits. The system would be used by the adjudication officer, and the explanation would be sent to the claimant.</p>
<pre><code>Dear [Name of Claimant]
You are capable of work and there are no special circumstances permitting you to be deemed incapable of work. Although you provided a valid certificate of explanation, this is insufficient unless either there is evidence of contact with the disease or you are a known carrier thereof.
Yours Sincerely
[your name]
</code></pre>
<p>TODO: this is not a very good Arboretum explanation; I know we did better ones on Widows benefit. Check whether I can find a surviving good one, and substitute it.</p>
<h3><a href="#discussion" name="discussion"></a>Discussion</h3>
<p>It will be seen that this is a short, clear, declarative statement in seemingly natural English, which covers all (and only) the relevant points of a complex case. To be fair, the system does not always do this well, but most of its explanations are of this quality.</p>
<h1><a href="#attempts-at-more-principled-approaches" name="attempts-at-more-principled-approaches"></a>Attempts at more principled approaches</h1>
<p>After a long series of systems, such as those just described, in which the approach taken to explanation generation was essentially one of ad hoc mechanisms and technical fixes, systems began to emerge in the late 1970s which took a more principled approach to the problem. One of the first of these was BLAH.</p>
<h2><a href="#blah" name="blah"></a>BLAH</h2>
<h3><a href="#general-description" name="general-description"></a>General description</h3>
<p>This system sought to address issues of explanation structuring and complexity. Like XPLAIN, it reduced detail by maintaining a model of what the user could be expected to know. However, its design was based on studies of human explanation behaviour, described in [Goguen et al., 83] and in [Wiener, 1979].</p>
<p>This system is also interesting in that for the first time we see declarative explanations:</p>
<blockquote>
<p>“The third type of question (supported by BLAH) is a request to BLAH to explain why some assertion, already in the knowledge base, is believed.” [Weiner 80, p 20]</p>
</blockquote>
<p>The inference mechanism used was written in AMORD [de Kleer et al., 78] with a truth maintenance sub-system described in [Doyle, 78]. Essentially this appears to be a production system.</p>
<p>The knowledge base contains assertions, each of which is supported by a list of other assertions which tend to justify belief in it, and optionally, a list of assertions which tend to question such belief. Justifications are based on a set of rules: PREMISE, STATEMENT/REASON, REASON/STATEMENT, IF/THEN, THEN/IF, AND, OR, GENERAL/SPECIFIC, EXAMPLES, and ALTERNATIVES; these are claimed to derive from justifications used by subjects in the studies of natural explanation. Each rule has associated with it a series of alternative templates into which the predicates and instantiated variables can be patched.</p>
<p>Two parallel views of this knowledge base are maintained: a system's view and a user's view.</p>
<blockquote>
<p>“… When a user poses a question to BLAH, BLAH uses the knowledge in the system's view to reason about it; and when BLAH generates an explanation, it uses knowledge in the user's view to determine (by reasoning) what information the user already knows, so that it can be deleted from the explanation.”</p>
</blockquote>
<p>The system's view is built by the knowledge engineer; information given by the user is added to the user's view, and information generated by the inference process is added to both.</p>
<p>The knowledge base is also segmented into partitions based on category; and further divided into separate hypothetical worlds; these last being used, presumably, by the truth maintenance system.</p>
<p>The inference process generates a tree having at its terminals instantiated statements about the case, and at its non-terminals justification types, drawn from those listed above. This structure is passed to the explanation generator, which generates text by applying templates which are associated with the justification types. These templates, as well as English-ifying the system's statements, have the power to reorder the nodes of the tree below them, for example by converting an IF/THEN justification type to a THEN/IF. The reorderings are intended to improve the explanation structure.</p>
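A minimal sketch of how such a template mechanism might work follows. The template strings, the tuple representation of the justification tree, and the unconditional IF/THEN to THEN/IF reordering are all illustrative assumptions: Weiner describes the behaviour only in outline, and BLAH itself was written in AMORD, not Python.

```python
# Speculative sketch of BLAH-style justification templates (Weiner 80).
# Only the rule names come from the paper; the template texts are invented.

TEMPLATES = {
    "IF/THEN": "if {0} then {1}",
    "THEN/IF": "{1} because {0}",
    "AND":     "{0}, and {1}",
}

def render(justification):
    """Render a (rule, part, part) tree; leaves are plain strings."""
    if isinstance(justification, str):
        return justification
    rule, *parts = justification
    if rule == "IF/THEN":   # reorder to improve explanation structure
        rule = "THEN/IF"
    return TEMPLATES[rule].format(*[render(p) for p in parts])

tree = ("IF/THEN",
        ("AND", "Peter makes less than 750 dollars", "Peter is under 19"),
        "Peter is a dependent of Harry's")
print(render(tree))
# -> Peter is a dependent of Harry's because Peter makes less than 750 dollars, and Peter is under 19
```

On this reading, a single tree can yield differently structured prose simply by swapping which template is attached to each justification type.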
<p>However, before applying the templates, it prunes the tree by removing all those statements which the user is presumed to know (those which can be derived from the user's view of the knowledge base) and which have no dependents, using a bottom-up, right-to-left search; it then further prunes the tree by removing sub-trees which are considered to contain detail.</p>
<p>The primary measure of detail used is a function of the depth of the explanation tree, but trees are also pruned for complexity if any node has more than two dependents.</p>
<p>Where complexity pruning has been used, explanations generated from the excised sub-trees are successively appended to the original explanation. A meaningless interjection (“uh”) apparently culled from the study of human explanation is used as a marker that this has been done!</p>
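The two-stage pruning just described might be sketched as follows. The node class, the depth threshold, and the way excised sub-trees are returned for later rendering are all assumptions reconstructed from the prose description, not BLAH's actual mechanism.

```python
# Speculative reconstruction of BLAH-style tree pruning (Weiner 80).
# Thresholds and data shapes are assumptions made for illustration.

class Node:
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []

MAX_DEPTH = 3       # detail pruning: assumed threshold on tree depth
MAX_CHILDREN = 2    # complexity pruning: "more than two dependents"

def prune(node, depth=0):
    """Return (pruned_tree, excised_subtrees).

    Detail pruning silently drops sub-trees beyond a depth threshold;
    complexity pruning excises dependents beyond the first two, which
    are later appended to the explanation (marked with "uh")."""
    kept, excised = [], []
    for child in node.children:
        if depth + 1 >= MAX_DEPTH:
            continue                      # too much detail: drop silently
        sub, sub_excised = prune(child, depth + 1)
        kept.append(sub)
        excised.extend(sub_excised)
    if len(kept) > MAX_CHILDREN:          # complexity: split the tree
        excised = kept[MAX_CHILDREN:] + excised
        kept = kept[:MAX_CHILDREN]
    return Node(node.label, kept), excised

tree = Node("dependent", [Node("income"), Node("age"), Node("support")])
pruned, excised = prune(tree)
print([c.label for c in pruned.children])   # -> ['income', 'age']
print([n.label for n in excised])           # -> ['support']
```

On this reading, the generator would render the pruned tree first, then each excised sub-tree in turn, each prefixed with the “uh” marker.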
<p>The length of the explanation is thus clearly a function of the size (number of nodes) of the explanation tree, but with the rider that splitting the tree in order to improve the explanation structure will actually LENGTHEN the explanation. Wiener claims this as a benefit:</p>
<blockquote>
<p>“As we see in (9), by copying a node from one tree to another we cause the text associated with that node to be repeated in the explanation. As [Halliday and Hassan 76] point out, repetition is one factor which influences the view that sentences, although separate, are tied together to form a unified text.”</p>
</blockquote>
<p>BLAH provided three top level facilities to the user. These were of the form:</p>
<pre><code>(SHOW &lt;assertion&gt;)
-&gt; &lt;assertion&gt;
(CHOICE &lt;assertion1&gt;&lt;assertion2&gt;{&lt;category partition&gt;})
-&gt; (I CHOSE &lt;assertionX&gt;) (NOT (I CHOSE &lt;assertionY&gt;))
(EXPLAIN &lt;explanation&gt;)
-&gt; &lt;explanation&gt;
</code></pre>
<p>Although these are all LISP like in form (indeed the assertions themselves are in the form of lists), it is not clear whether the user had the option of entering:</p>
<pre><code>(EXPLAIN (SHOW &lt;assertion&gt;))
</code></pre>
<h3><a href="#example-explanation-" name="example-explanation-"></a>Example explanation:</h3>
<pre><code>Well, Peter makes less than 750 dollars, and Peter is under 19, and Harry supports Peter so Peter is a dependent of Harry's. Uh Peter makes less than 750 dollars because Peter does not work, and Peter is a dependent of Harry's because Harry provides more than one half of Peter's support.
</code></pre>
<p>I should explain that the application is to the US Federal Income Tax system. This explanation does indeed capture something of the flavour of a natural spoken explanation. Furthermore, it is clearly declarative rather than procedural. However, personally, I find its style rather too informal for textual presentation. I particularly dislike the meaningless “Uh” which is used to tag the supporting point.</p>
<h3><a href="#discussion" name="discussion"></a>Discussion</h3>
<p>With this system we can begin to construct a model of what the designers have meant by explanation, and relate it to the philosophical work to be described in the following chapter. The form of the explanation is essentially the deductive-nomological explanation, as described by Hempel, but there are subtleties. The deductive-nomological form essentially requires that the explanation must be given in terms of things which can be verified by reference to the world; we will discuss the meaning of this later. But BLAH's explanations are given simply in terms of things which BLAH knows that the user knows, making the assumption that the user can supply the rest of the argument.</p>
<h2><a href="#attending" name="attending"></a>ATTENDING</h2>
<h3><a href="#general-description" name="general-description"></a>General Description</h3>
<p>ATTENDING takes a radically novel approach to the problem of assisting decision making in complex domains: it works by inviting the user to describe a case, and then to describe the proposed course of action. The machine reviews the proposals, and produces a critique. The critique is generated by fragment concatenation, and appears to be of high quality, with very natural-seeming English. The application described [Miller, 84] is medical, considering plans for the anaesthetisation of patients requiring surgery.</p>
<h3><a href="#explanation-system" name="explanation-system"></a>Explanation System</h3>
<p>The explanation system is provided with limited ability to prevent repetitiousness by allowing the fragment concatenator to follow alternative routes at points in the knowledge base, thus allowing for differently worded explanations of the same inference.</p>
<h3><a href="#interaction-style" name="interaction-style"></a>Interaction Style</h3>
<p>The input methods are crude in the extreme, however, with the user being presented with fairly brief menus of options to describe the case being handled. Thus courses of action not foreseen by the knowledge engineer cannot be described and, consequently, cannot be criticised.</p>
<h3><a href="#inference-mechanism" name="inference-mechanism"></a>Inference mechanism</h3>
<p>The explanation generator is based on the knowledge representation chosen, which is a variant of the Augmented Transition Network and is called an Augmented Decision Network. The nodes of this network are states which the patient may be in. These states are joined by arcs, labelled with actions which may move the patient from the initial to the consequent state of the arc. Each arc also holds a list of risks and benefits consequent on the action. Where a choice of arc exists between two nodes, the arc whose total risks score least will be preferred; where more than one arc has no risks associated with it, the arc whose total benefits score most will be preferred. Fragments (it is interesting to note that the author uses the word) are stored on arcs of further transition nets, which are themselves expansions of the arcs in the decision net, and the explanation generator chooses a path through this net collecting and concatenating fragments along the way.</p>
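The arc-preference rule just described (least total risk; among risk-free arcs, greatest total benefit) can be sketched as below. The dictionary representation of an arc and the action names are invented for illustration; Miller does not specify the data format at this level.

```python
# Speculative sketch of ATTENDING-style arc preference (Miller 84).
# The arc representation is an assumption made for illustration.

def prefer_arc(arcs):
    """Choose among parallel arcs joining the same pair of states.

    Prefer the arc whose total risks score least; where more than one
    arc carries no risk at all, prefer the greatest total benefit."""
    risk_free = [a for a in arcs if not a["risks"]]
    if len(risk_free) > 1:
        return max(risk_free, key=lambda a: sum(a["benefits"]))
    return min(arcs, key=lambda a: sum(a["risks"]))

arcs = [
    {"action": "mask induction", "risks": [2, 1], "benefits": [3]},
    {"action": "iv induction",   "risks": [1],    "benefits": [4]},
]
print(prefer_arc(arcs)["action"])  # -> iv induction
```

A critique could then be generated by comparing the user's proposed arc against the preferred one at each state along the path.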
<h3><a href="#redundancy-filtering" name="redundancy-filtering"></a>Redundancy filtering</h3>
<p>The concatenator maintains lists of topics mentioned at sentence, paragraph, and text level, and uses these to prevent redundancy. Where a topic is mentioned a second or subsequent time, a template is substituted for the reference. Thus it is clear that the fragments are more complex than just strings; they must also have some information about their content, in machine-handleable form.</p>
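This topic tracking could be sketched like the following; the fragment shape (a topic tag plus full and repeat-mention texts) is an assumption, since Miller does not give the data format, and Miller's three tracking levels (sentence, paragraph, text) are collapsed into one for brevity.

```python
# Speculative sketch of ATTENDING-style redundancy filtering (Miller 84).
# Fragment representation is an assumption; only one topic level is kept.

def concatenate(fragments):
    """Concatenate fragments, substituting a template for repeat mentions."""
    mentioned = set()   # topics already mentioned in this text
    out = []
    for topic, full, repeat in fragments:
        if topic in mentioned:
            out.append(repeat)          # topic seen: use the short template
        else:
            out.append(full)
            mentioned.add(topic)
    return " ".join(out)

text = concatenate([
    ("hepatitis-risk", "Halothane carries a risk of hepatitis.",
     "As already noted, hepatitis is a risk."),
    ("hepatitis-risk", "Halothane carries a risk of hepatitis.",
     "As already noted, hepatitis is a risk."),
])
print(text)  # -> Halothane carries a risk of hepatitis. As already noted, hepatitis is a risk.
```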
<h3><a href="#explanation-discussion" name="explanation-discussion"></a>Explanation: Discussion</h3>
<p>This principled approach to explanation generation is seen as more sophisticated than the if x and y and z then print “this is an explanation” school of explanation generation:</p>
<blockquote>
<p>"Many systems which produce prose output use a fairly ad-hoc approach. Sentences and sentence fragments are stored in the machine as Canned Text. The control of the generation of this canned text is embedded in the procedural logic, often in an ad-hoc way.</p>
<p>This approach can work well if the system's discussion is straightforward and predictable. If complex analysis is attempted, however, and the system designer wants flexibility for the discussion to vary depending on the particulars of the content, then this approach can become quite unwieldy.</p>
<p>There are a number of drawbacks. 1] the programming of the discussion itself becomes difficult. 2] Any major revision of the prose output may involve substantial reprogramming. 3] The logic that generates the prose expression may become hopelessly interwoven with the logic that determines and organises the content of the material to be discussed." p 56</p>
</blockquote>
<p>The strategy used is described as less ambitious than schemes which involve constructing explanations from semantic information generated by an inference mechanism. This is seen to be a research problem in itself.</p>
<blockquote>
<p>“Attending has set itself an intermediate goal: developing a flexible formalism to facilitate the generation of polished prose. Although the PROSENET approach is clearly closer in spirit to canned text generation than to sophisticated language generation it does allow the system designer great flexibility to manipulate, massage, and refine the system's prose output, independent of the rest of the system's analysis.” [Miller 84, p 77] (Miller's emphasis)</p>
<p>“From the standpoint of computer science, critiquing can be perceived as a mode of explanation which lets a system structure its advice around the particular concerns of the user in a direct and natural way.” [Miller 84, p 74] (Miller's emphasis).</p>
<p>“…critiquing allows the physician to be the final decision maker. The computer is never forced to commit itself to one approach or another.” [Ibid]</p>
</blockquote>
<p>[Waah! I forgot to copy a sample explanation!]</p>
<p>[Here insert all the analysis and discussion for this chapter…]</p>
<h2><a href="#models-of-explanation" name="models-of-explanation"></a>Models of Explanation</h2>
<h2><a href="#developing-relevance" name="developing-relevance"></a>Developing relevance</h2>
<p>"Just as Thompsons lookup program displayed exasperating shallowness, so total lookahead has its own mentality which from the point of view of the human questioner could be described as impenetrably deep. While the response of lookup is instantaneous, lookahead ruminates through combinatorially vast ramifications while constructing its forward tree of possibilities. Long rumination before each reply is not of course in itself a guarantee of mental depth. But when asked how it selected its move, lookahead is able to make an exceptionally profound response by disgorging the complete analysis tree. This comprises not only a complete strategy but at the same time… a complete justification of the strategy. Could anyone wish for a more profound response?</p>
<p>“On the contrary, mortal minds are overwhelmed by so much reactive detail. Reporting on the Three Mile Island nuclear plant accident the Malone committee stated that ‘… the operator was bombarded with displays, warning lights, print-outs and so on to the point where detection of any error condition and the assessment of the right action to correct the condition was impossible’. So lookahead, with a quite opposite mentality from lookup, has its own reasons for inability to interact helpfully with a human.” [Michie, 83; Michie's emphasis]</p>
<p>[look out and refer to recent work by Shiela Hughes and Allison Kidd]</p>
<h2><a href="#endnotes" name="endnotes"></a>Endnotes</h2>
<p>1 This should not be understood too literally, I think. The conceptual distinction between algorithmic and heuristic programmes had not developed at the time DENDRAL was first developed. The algorithm simply provides a method of generating all the possible combinations of compounds in a fixed sequence, and thus supports only part of the generate stage.</p>
<p>2 This assertion will probably be seen as contentious. I take as evidence the following: the assertion [Davis and Lenat 1982, p 276] that ‘… the current performance program (is) MYCIN’, together with the diagram [ibid., p 243, figure 2-3] which clearly shows that the explanation module is outside the performance program. To support my argument that the explanation mechanism described in [Davis et al. 1977] - the MYCIN paper - is in fact the TEIRESIAS explanation module, compare e.g. the discussion of information metrics [Davis and Lenat, p 269] with [Davis et al, p 36]; and the sample explanations given in the two sources.</p>
<p>3 MYCIN/TEIRESIAS used “certainty factors” (not to be confused with formal indices of probability) to express its confidence in steps of reasoning. These were entered by the Knowledge Engineer for the individual rules, and manipulated arithmetically by the inference mechanism. They ranged in value from -1 (certainly false) through 0 (no confidence at all in the reasoning step) to 1 (certainty).</p>
<h2><a href="#references" name="references"></a>References</h2>
<p>Barr, A &amp; Feigenbaum, E A: The Handbook of Artificial Intelligence, Pitman, 82, especially articles VII B, TEIRESIAS, and VIII B1, MYCIN</p>
<p>Brooke, S: Interactive Graphical Representation of Knowledge: in Proceedings of the Alvey KBS Club SIG on Explanation second workshop, 87</p>
<p>Buchanan, B, Sutherland, G, &amp; Feigenbaum, EA; Heuristic Dendral: a program for generating explanatory hypotheses in organic chemistry: in Meltzer &amp; Michie, eds, Machine Intelligence 4: Edinburgh University Press, 1969;</p>
<p>Buchanan, BG &amp; Feigenbaum, EA: Dendral and Meta-Dendral: Their Applications Dimension: in Artificial Intelligence 11, 1978</p>
<p>Davis, R, Buchanan, B and Shortliffe, E: Production Rules as a Representation for a Knowledge-Based Consultation Program: in Artificial Intelligence 8, 1977</p>
<p>Davis, R &amp; Lenat, D: Knowledge-based systems in Artificial Intelligence: McGraw-Hill, 1982, especially part 2 chap 3</p>
<p>Hammond, P, &amp; Sergot, M: A PROLOG Shell for Logic Based Expert Systems: in Proceedings of Expert Systems 83: BCS</p>
<p>Martin, WA &amp; Fateman, RJ: The Macsyma System: in Proceedings of the 2nd Symposium on Symbolic and Algebraic Manipulation: ACM: Los Angeles 1971</p>
<p>Michie, D: Game playing programs and the conceptual interface: in Bramer, MA (ed): Computer Game Playing theory and practice: Ellis Horwood, Chichester, 1983</p>
<p>Miller, Perry L: A Critiquing Approach to Expert Computer Advice: ATTENDING: Pitman Research Notes in Artificial Intelligence 1, London, 1984</p>
<p>Mott, P &amp; Brooke, S: A Graphical Inference Mechanism: in Expert Systems iv, 2, May 87</p>
<p>Pople, H E: The Formation of Composite Hypotheses in Diagnostic Problem Solving - an Exercise in Synthetic Reasoning in Papers presented at the 5th International Joint Conference on Artificial Intelligence, MIT, 1977</p>
<p>Swartout, W: A Digitalis Therapy Advisor with Explanations: in Proceedings of the 5th International Joint Conference on Artificial Intelligence, MIT, 1977</p>
<p>Swartout, W R: XPLAIN: a System for Creating and Explaining Expert Consulting Programs: in Artificial Intelligence 21, 1983</p>
<p>Walker, A: Automatic Generation of Explanations of Results from Knowledge Bases: Research Report RJ3481, IBM Research Laboratory, San Jose, California, 1982</p>
<p>Walker, A et al, Knowledge Systems and Prolog, Addison-Wesley, Reading (Mass.) 1987</p>
<p>Weiner, J L: BLAH, a system which explains its reasoning: in Artificial Intelligence 15, 1980</p></div></div></div></body></html>

16
docs/codox/Manifesto.html Normal file
View file

@ -0,0 +1,16 @@
<!DOCTYPE html PUBLIC ""
"">
<html><head><meta charset="UTF-8" /><title></title><link rel="stylesheet" type="text/css" href="css/default.css" /><link rel="stylesheet" type="text/css" href="css/highlight.css" /><script type="text/javascript" src="js/highlight.min.js"></script><script type="text/javascript" src="js/jquery.min.js"></script><script type="text/javascript" src="js/page_effects.js"></script><script>hljs.initHighlightingOnLoad();</script></head><body><div id="header"><h2>Generated by <a href="https://github.com/weavejester/codox">Codox</a></h2><h1><a href="index.html"><span class="project-title"><span class="project-name">Wildwood</span> <span class="project-version">0.1.0-SNAPSHOT</span></span></a></h1></div><div class="sidebar primary"><h3 class="no-link"><span class="inner">Project</span></h3><ul class="index-link"><li class="depth-1 "><a href="index.html"><div class="inner">Index</div></a></li></ul><h3 class="no-link"><span class="inner">Topics</span></h3><ul><li class="depth-1 "><a href="AgainstTruth.html"><div class="inner"><span>Against Truth</span></div></a></li><li class="depth-1 "><a href="Analysis.html"><div class="inner"><span></span></div></a></li><li class="depth-1 "><a href="Errata.html"><div class="inner"><span>Errata</span></div></a></li><li class="depth-1 "><a href="History.html"><div class="inner"><span>History</span></div></a></li><li class="depth-1 current"><a href="Manifesto.html"><div class="inner"><span></span></div></a></li><li class="depth-1 "><a href="PredicateSubtext.html"><div class="inner"><span></span></div></a></li><li class="depth-1 "><a href="TheProblem.html"><div class="inner"><span>The Problem</span></div></a></li><li class="depth-1 "><a href="intro.html"><div class="inner"><span># Introduction to Wildwood</span></div></a></li></ul><h3 class="no-link"><span class="inner">Namespaces</span></h3><ul><li class="depth-1 "><a href="wildwood.core.html"><div class="inner"><span>wildwood.core</span></div></a></li></ul></div><div class="document" 
id="content"><div class="doc"><div class="markdown"><h1><a href="#manifesto" name="manifesto"></a>Manifesto</h1>
<p>Machine inference (automated reasoning, the core of what gets called Artificial Intelligence) has ab initio been based on the assumption that the purpose of reasoning was to preserve truth. It is because this assumption is false that the project has thus far failed to bear fruit, that Alan Turing's eponymous test has yet to be passed.</p>
<p>Of course it is possible to build machines which, within the constraints of finite store, can accurately compute theorems of first order predicate calculus ad nauseam; but such machines do not display behaviour which is convincingly intelligent. They are cold and mechanical; we do not recognise ourselves in them. Like the Girl in the Fireplace's beautiful clocks, they are precisely inhuman.</p>
<p>As Turing's test itself shows, intelligence is a hegemonic term, a term laden with implicit propaganda. A machine is intelligent if it can persuade a person that it is a person. By intelligent we don't mean capable of perfect reasoning. We mean like us; and in meaning like us we are smuggling under the covers, as semantic baggage, the claim that we ourselves are intelligent.</p>
<p>I might argue that perfect reasoning has little utility in a messy world, that to cope with the messiness of a messy world one needs messy reasoning. I shall not do so: the core of my argument is not that there is principle and value in the mode of reasoning that I propose, but precisely that it is ruthlessly unprincipled.</p>
<p>In this thesis I shall argue that the purpose of real world argument is not to preserve truth but to achieve hegemony: not to enlighten but to persuade, not to inform but to convince. This thesis succeeds not if in some arid, clockwork, mechanical sense I am right, but if, having read it, you believe that I am.</p>
<h2><a href="#on-inference-and-explanation" name="on-inference-and-explanation"></a>On inference and explanation</h2>
<p>I wrote the first draft of this thesis thirty-two years ago. In that draft I was concerned with the very poor explanations that mechanised inference systems were able to provide for their reasons for coming to the conclusions they did, with their unpersuasiveness. There was a mismatch, an impedance, between machine intelligence and human intelligence. Then, I did not see this as the problem. Rather I thought that the problem was to provide better explanation systems as a way to buffer that impedance. I wrote then:</p>
<blockquote>
<p>This document deals only with explanation. Issues relating to inference and especially to truth maintenance will undoubtedly be raised as it progresses, but such hares will resolutely not be followed.</p>
</blockquote>
<p>In this I was wrong. The problem was not explanation; the problem was inference. The problem was, specifically, that human accounts of inference since Aristotle have been hegemonistic and self serving, so that when we started to try to automate inference we tried to automate not what we do but what we claim we do. We've succeeded. And having succeeded, we've looked at it and said, no, that is not intelligence.</p>
<p>It is not intelligence because it is not like us. It is clockwork, inhuman, precise. It does things, let us admit this covertly in dark corners, that we cannot do. But it does not do things we can do: it does not convince. It does not persuade. It does not explain.</p>
<p>I shall do these things, and in doing them I shall provide an account of how these things are done in order that we can build machines that can do them. In doing this, I shall argue that truth does not matter; that it is a tool to be used, not an end to achieve. I shall argue that reason is profoundly unreasonable. The end to achieve, in argument as in so much other human behaviour, is not truth but dominance, dominance achieved by hegemony. In the end you will acknowledge that I am right; you will acknowledge it because I am right. I am right not because in some abstract sense what I say is true, but because you acknowledge it.</p></div></div></div></body></html>

View file

@ -0,0 +1,29 @@
<!DOCTYPE html PUBLIC ""
"">
<html><head><meta charset="UTF-8" /><title></title><link rel="stylesheet" type="text/css" href="css/default.css" /><link rel="stylesheet" type="text/css" href="css/highlight.css" /><script type="text/javascript" src="js/highlight.min.js"></script><script type="text/javascript" src="js/jquery.min.js"></script><script type="text/javascript" src="js/page_effects.js"></script><script>hljs.initHighlightingOnLoad();</script></head><body><div id="header"><h2>Generated by <a href="https://github.com/weavejester/codox">Codox</a></h2><h1><a href="index.html"><span class="project-title"><span class="project-name">Wildwood</span> <span class="project-version">0.1.0-SNAPSHOT</span></span></a></h1></div><div class="sidebar primary"><h3 class="no-link"><span class="inner">Project</span></h3><ul class="index-link"><li class="depth-1 "><a href="index.html"><div class="inner">Index</div></a></li></ul><h3 class="no-link"><span class="inner">Topics</span></h3><ul><li class="depth-1 "><a href="AgainstTruth.html"><div class="inner"><span>Against Truth</span></div></a></li><li class="depth-1 "><a href="Analysis.html"><div class="inner"><span></span></div></a></li><li class="depth-1 "><a href="Errata.html"><div class="inner"><span>Errata</span></div></a></li><li class="depth-1 "><a href="History.html"><div class="inner"><span>History</span></div></a></li><li class="depth-1 "><a href="Manifesto.html"><div class="inner"><span></span></div></a></li><li class="depth-1 current"><a href="PredicateSubtext.html"><div class="inner"><span></span></div></a></li><li class="depth-1 "><a href="TheProblem.html"><div class="inner"><span>The Problem</span></div></a></li><li class="depth-1 "><a href="intro.html"><div class="inner"><span># Introduction to Wildwood</span></div></a></li></ul><h3 class="no-link"><span class="inner">Namespaces</span></h3><ul><li class="depth-1 "><a href="wildwood.core.html"><div class="inner"><span>wildwood.core</span></div></a></li></ul></div><div class="document" 
id="content"><div class="doc"><div class="markdown"><h2><a href="#on-the-subtext-of-a-predicate" name="on-the-subtext-of-a-predicate"></a>On the subtext of a predicate</h2>
<p>Predicates are not atomic. They do not come single spies, but freighted with battalions of inferable subtexts. Suppose Anthony says</p>
<p>Brutus killed Caesar in Rome during the ides of March</p>
<p>I learn more than just that Brutus killed Caesar in Rome during the ides of March. I also learn that</p>
<ul>
<li>Brutus is a killer</li>
<li>Caesar has been killed</li>
<li>Rome is a place where killings happen</li>
<li>The ides of March are a time to be extra cautious</li>
</ul>
<p>Suppose Drusilla now says</p>
<p>E killed Caesar in Rome during the ides of March</p>
<p>this casts doubt on Anthony's primary claim, and on the belief that Brutus is a killer; but it reinforces the beliefs that</p>
<ul>
<li>Caesar has been killed</li>
<li>Rome is a place where killings happen</li>
<li>The ides of March are a time to be extra cautious.</li>
</ul>
<p>If Falco then says</p>
<p>No, I heard from Gaius that it happened in April</p>
<p>the beliefs that</p>
<ul>
<li>Caesar has been killed</li>
<li>Rome is a place where killings happen</li>
</ul>
<p>are still further strengthened.</p>
<p>In proposing a formalism to express predicates, we need to consider how it allows this freight to be unpacked.</p></div></div></div></body></html>

View file

@ -0,0 +1,8 @@
<!DOCTYPE html PUBLIC ""
"">
<html><head><meta charset="UTF-8" /><title>The Problem</title><link rel="stylesheet" type="text/css" href="css/default.css" /><link rel="stylesheet" type="text/css" href="css/highlight.css" /><script type="text/javascript" src="js/highlight.min.js"></script><script type="text/javascript" src="js/jquery.min.js"></script><script type="text/javascript" src="js/page_effects.js"></script><script>hljs.initHighlightingOnLoad();</script></head><body><div id="header"><h2>Generated by <a href="https://github.com/weavejester/codox">Codox</a></h2><h1><a href="index.html"><span class="project-title"><span class="project-name">Wildwood</span> <span class="project-version">0.1.0-SNAPSHOT</span></span></a></h1></div><div class="sidebar primary"><h3 class="no-link"><span class="inner">Project</span></h3><ul class="index-link"><li class="depth-1 "><a href="index.html"><div class="inner">Index</div></a></li></ul><h3 class="no-link"><span class="inner">Topics</span></h3><ul><li class="depth-1 "><a href="AgainstTruth.html"><div class="inner"><span>Against Truth</span></div></a></li><li class="depth-1 "><a href="Analysis.html"><div class="inner"><span></span></div></a></li><li class="depth-1 "><a href="Errata.html"><div class="inner"><span>Errata</span></div></a></li><li class="depth-1 "><a href="History.html"><div class="inner"><span>History</span></div></a></li><li class="depth-1 "><a href="Manifesto.html"><div class="inner"><span></span></div></a></li><li class="depth-1 "><a href="PredicateSubtext.html"><div class="inner"><span></span></div></a></li><li class="depth-1 current"><a href="TheProblem.html"><div class="inner"><span>The Problem</span></div></a></li><li class="depth-1 "><a href="intro.html"><div class="inner"><span># Introduction to Wildwood</span></div></a></li></ul><h3 class="no-link"><span class="inner">Namespaces</span></h3><ul><li class="depth-1 "><a href="wildwood.core.html"><div class="inner"><span>wildwood.core</span></div></a></li></ul></div><div class="document" 
id="content"><div class="doc"><div class="markdown"><h1><a href="#the-problem" name="the-problem"></a>The Problem</h1>
<p>In this chapter talk about the perceived need for expert system explanations. Advance:</p>
<p>the arguments used by expert systems designers, saying why explanations are needed;</p>
<p>the arguments used by critics which claim that the explanations given are not good enough.</p>
<h3><a href="#references" name="references"></a>References</h3>
<p>{pretty much the same as for History - see below}</p></div></div></div></body></html>

View file

@ -0,0 +1,9 @@
<!DOCTYPE html PUBLIC ""
"">
<html><head><meta charset="UTF-8" /><title>Against Truth</title><link rel="stylesheet" type="text/css" href="css/default.css" /><link rel="stylesheet" type="text/css" href="css/highlight.css" /><script type="text/javascript" src="js/highlight.min.js"></script><script type="text/javascript" src="js/jquery.min.js"></script><script type="text/javascript" src="js/page_effects.js"></script><script>hljs.initHighlightingOnLoad();</script></head><body><div id="header"><h2>Generated by <a href="https://github.com/weavejester/codox">Codox</a></h2><h1><a href="index.html"><span class="project-title"><span class="project-name">Wildwood</span> <span class="project-version">0.1.0-SNAPSHOT</span></span></a></h1></div><div class="sidebar primary"><h3 class="no-link"><span class="inner">Project</span></h3><ul class="index-link"><li class="depth-1 "><a href="index.html"><div class="inner">Index</div></a></li></ul><h3 class="no-link"><span class="inner">Topics</span></h3><ul><li class="depth-1 "><a href="Analysis.html"><div class="inner"><span></span></div></a></li><li class="depth-1 "><a href="History.html"><div class="inner"><span>The Problem</span></div></a></li><li class="depth-1 "><a href="Manifesto.html"><div class="inner"><span>## On inference and explanation</span></div></a></li><li class="depth-1 "><a href="PredicateSubtext.html"><div class="inner"><span></span></div></a></li><li class="depth-1 current"><a href="against_truth.html"><div class="inner"><span>Against Truth</span></div></a></li><li class="depth-1 "><a href="intro.html"><div class="inner"><span># Introduction to Wildwood</span></div></a></li></ul><h3 class="no-link"><span class="inner">Namespaces</span></h3><ul><li class="depth-1 "><a href="wildwood.core.html"><div class="inner"><span>wildwood.core</span></div></a></li></ul></div><div class="document" id="content"><div class="doc"><div class="markdown"><h1><a href="#against-truth" name="against-truth"></a>Against Truth</h1>
<h2><a href="#contents" name="contents"></a>Contents</h2>
<ol>
<li><a href="Manifesto.html">Manifesto</a></li>
<li><a href="History.html">History</a></li>
<li><a href="Analysis.html">Analysis</a></li>
</ol></div></div></div></body></html>

551
docs/codox/css/default.css Normal file
View file

@ -0,0 +1,551 @@
body {
font-family: Helvetica, Arial, sans-serif;
font-size: 15px;
}
pre, code {
font-family: Monaco, DejaVu Sans Mono, Consolas, monospace;
font-size: 9pt;
margin: 15px 0;
}
h1 {
font-weight: normal;
font-size: 29px;
margin: 10px 0 2px 0;
padding: 0;
}
h2 {
font-weight: normal;
font-size: 25px;
}
h5.license {
margin: 9px 0 22px 0;
color: #555;
font-weight: normal;
font-size: 12px;
font-style: italic;
}
.document h1, .namespace-index h1 {
font-size: 32px;
margin-top: 12px;
}
#header, #content, .sidebar {
position: fixed;
}
#header {
top: 0;
left: 0;
right: 0;
height: 22px;
color: #f5f5f5;
padding: 5px 7px;
}
#content {
top: 32px;
right: 0;
bottom: 0;
overflow: auto;
background: #fff;
color: #333;
padding: 0 18px;
}
.sidebar {
position: fixed;
top: 32px;
bottom: 0;
overflow: auto;
}
.sidebar.primary {
background: #e2e2e2;
border-right: solid 1px #cccccc;
left: 0;
width: 250px;
}
.sidebar.secondary {
background: #f2f2f2;
border-right: solid 1px #d7d7d7;
left: 251px;
width: 200px;
}
#content.namespace-index, #content.document {
left: 251px;
}
#content.namespace-docs {
left: 452px;
}
#content.document {
padding-bottom: 10%;
}
#header {
background: #3f3f3f;
box-shadow: 0 0 8px rgba(0, 0, 0, 0.4);
z-index: 100;
}
#header h1 {
margin: 0;
padding: 0;
font-size: 18px;
font-weight: lighter;
text-shadow: -1px -1px 0px #333;
}
#header h1 .project-version {
font-weight: normal;
}
.project-version {
padding-left: 0.15em;
}
#header a, .sidebar a {
display: block;
text-decoration: none;
}
#header a {
color: #f5f5f5;
}
.sidebar a {
color: #333;
}
#header h2 {
float: right;
font-size: 9pt;
font-weight: normal;
margin: 4px 3px;
padding: 0;
color: #bbb;
}
#header h2 a {
display: inline;
}
.sidebar h3 {
margin: 0;
padding: 10px 13px 0 13px;
font-size: 19px;
font-weight: lighter;
}
.sidebar h3 a {
color: #444;
}
.sidebar h3.no-link {
color: #636363;
}
.sidebar ul {
padding: 7px 0 6px 0;
margin: 0;
}
.sidebar ul.index-link {
padding-bottom: 4px;
}
.sidebar li {
display: block;
vertical-align: middle;
}
.sidebar li a, .sidebar li .no-link {
border-left: 3px solid transparent;
padding: 0 10px;
white-space: nowrap;
}
.sidebar li .no-link {
display: block;
color: #777;
font-style: italic;
}
.sidebar li .inner {
display: inline-block;
padding-top: 7px;
height: 24px;
}
.sidebar li a, .sidebar li .tree {
height: 31px;
}
.depth-1 .inner { padding-left: 2px; }
.depth-2 .inner { padding-left: 6px; }
.depth-3 .inner { padding-left: 20px; }
.depth-4 .inner { padding-left: 34px; }
.depth-5 .inner { padding-left: 48px; }
.depth-6 .inner { padding-left: 62px; }
.sidebar li .tree {
display: block;
float: left;
position: relative;
top: -10px;
margin: 0 4px 0 0;
padding: 0;
}
.sidebar li.depth-1 .tree {
display: none;
}
.sidebar li .tree .top, .sidebar li .tree .bottom {
display: block;
margin: 0;
padding: 0;
width: 7px;
}
.sidebar li .tree .top {
border-left: 1px solid #aaa;
border-bottom: 1px solid #aaa;
height: 19px;
}
.sidebar li .tree .bottom {
height: 22px;
}
.sidebar li.branch .tree .bottom {
border-left: 1px solid #aaa;
}
.sidebar.primary li.current a {
border-left: 3px solid #a33;
color: #a33;
}
.sidebar.secondary li.current a {
border-left: 3px solid #33a;
color: #33a;
}
.namespace-index h2 {
margin: 30px 0 0 0;
}
.namespace-index h3 {
font-size: 16px;
font-weight: bold;
margin-bottom: 0;
}
.namespace-index .topics {
padding-left: 30px;
margin: 11px 0 0 0;
}
.namespace-index .topics li {
padding: 5px 0;
}
.namespace-docs h3 {
font-size: 18px;
font-weight: bold;
}
.public h3 {
margin: 0;
float: left;
}
.usage {
clear: both;
}
.public {
margin: 0;
border-top: 1px solid #e0e0e0;
padding-top: 14px;
padding-bottom: 6px;
}
.public:last-child {
margin-bottom: 20%;
}
.members .public:last-child {
margin-bottom: 0;
}
.members {
margin: 15px 0;
}
.members h4 {
color: #555;
font-weight: normal;
font-variant: small-caps;
margin: 0 0 5px 0;
}
.members .inner {
padding-top: 5px;
padding-left: 12px;
margin-top: 2px;
margin-left: 7px;
border-left: 1px solid #bbb;
}
#content .members .inner h3 {
font-size: 12pt;
}
.members .public {
border-top: none;
margin-top: 0;
padding-top: 6px;
padding-bottom: 0;
}
.members .public:first-child {
padding-top: 0;
}
h4.type,
h4.dynamic,
h4.added,
h4.deprecated {
float: left;
margin: 3px 10px 15px 0;
font-size: 15px;
font-weight: bold;
font-variant: small-caps;
}
.public h4.type,
.public h4.dynamic,
.public h4.added,
.public h4.deprecated {
font-size: 13px;
font-weight: bold;
margin: 3px 0 0 10px;
}
.members h4.type,
.members h4.added,
.members h4.deprecated {
margin-top: 1px;
}
h4.type {
color: #717171;
}
h4.dynamic {
color: #9933aa;
}
h4.added {
color: #508820;
}
h4.deprecated {
color: #880000;
}
.namespace {
margin-bottom: 30px;
}
.namespace:last-child {
margin-bottom: 10%;
}
.index {
padding: 0;
font-size: 80%;
margin: 15px 0;
line-height: 16px;
}
.index * {
display: inline;
}
.index p {
padding-right: 3px;
}
.index li {
padding-right: 5px;
}
.index ul {
padding-left: 0;
}
.type-sig {
clear: both;
color: #088;
}
.type-sig pre {
padding-top: 10px;
margin: 0;
}
.usage code {
display: block;
color: #008;
margin: 2px 0;
}
.usage code:first-child {
padding-top: 10px;
}
p {
margin: 15px 0;
}
.public p:first-child, .public pre.plaintext {
margin-top: 12px;
}
.doc {
margin: 0 0 26px 0;
clear: both;
}
.public .doc {
margin: 0;
}
.namespace-index .doc {
margin-bottom: 20px;
}
.namespace-index .namespace .doc {
margin-bottom: 10px;
}
.markdown p, .markdown li, .markdown dt, .markdown dd, .markdown td {
line-height: 22px;
}
.markdown li {
padding: 2px 0;
}
.markdown h2 {
font-weight: normal;
font-size: 25px;
margin: 30px 0 10px 0;
}
.markdown h3 {
font-weight: normal;
font-size: 20px;
margin: 30px 0 0 0;
}
.markdown h4 {
font-size: 15px;
margin: 22px 0 -4px 0;
}
.doc, .public, .namespace .index {
max-width: 680px;
overflow-x: visible;
}
.markdown pre > code {
display: block;
padding: 10px;
}
.markdown pre > code, .src-link a {
border: 1px solid #e4e4e4;
border-radius: 2px;
}
.markdown code:not(.hljs), .src-link a {
background: #f6f6f6;
}
pre.deps {
display: inline-block;
margin: 0 10px;
border: 1px solid #e4e4e4;
border-radius: 2px;
padding: 10px;
background-color: #f6f6f6;
}
.markdown hr {
border-style: solid;
border-top: none;
color: #ccc;
}
.doc ul, .doc ol {
padding-left: 30px;
}
.doc table {
border-collapse: collapse;
margin: 0 10px;
}
.doc table td, .doc table th {
border: 1px solid #dddddd;
padding: 4px 6px;
}
.doc table th {
background: #f2f2f2;
}
.doc dl {
margin: 0 10px 20px 10px;
}
.doc dl dt {
font-weight: bold;
margin: 0;
padding: 3px 0;
border-bottom: 1px solid #ddd;
}
.doc dl dd {
padding: 5px 0;
margin: 0 0 5px 10px;
}
.doc abbr {
border-bottom: 1px dotted #333;
font-variant: none;
cursor: help;
}
.src-link {
margin-bottom: 15px;
}
.src-link a {
font-size: 70%;
padding: 1px 4px;
text-decoration: none;
color: #5555bb;
}

97
docs/codox/css/highlight.css Normal file
View file

@ -0,0 +1,97 @@
/*
github.com style (c) Vasily Polovnyov <vast@whiteants.net>
*/
.hljs {
display: block;
overflow-x: auto;
padding: 0.5em;
color: #333;
background: #f8f8f8;
}
.hljs-comment,
.hljs-quote {
color: #998;
font-style: italic;
}
.hljs-keyword,
.hljs-selector-tag,
.hljs-subst {
color: #333;
font-weight: bold;
}
.hljs-number,
.hljs-literal,
.hljs-variable,
.hljs-template-variable,
.hljs-tag .hljs-attr {
color: #008080;
}
.hljs-string,
.hljs-doctag {
color: #d14;
}
.hljs-title,
.hljs-section,
.hljs-selector-id {
color: #900;
font-weight: bold;
}
.hljs-subst {
font-weight: normal;
}
.hljs-type,
.hljs-class .hljs-title {
color: #458;
font-weight: bold;
}
.hljs-tag,
.hljs-name,
.hljs-attribute {
color: #000080;
font-weight: normal;
}
.hljs-regexp,
.hljs-link {
color: #009926;
}
.hljs-symbol,
.hljs-bullet {
color: #990073;
}
.hljs-built_in,
.hljs-builtin-name {
color: #0086b3;
}
.hljs-meta {
color: #999;
font-weight: bold;
}
.hljs-deletion {
background: #fdd;
}
.hljs-addition {
background: #dfd;
}
.hljs-emphasis {
font-style: italic;
}
.hljs-strong {
font-weight: bold;
}

3
docs/codox/index.html Normal file
View file

@ -0,0 +1,3 @@
<!DOCTYPE html PUBLIC ""
"">
<html><head><meta charset="UTF-8" /><title>Wildwood 0.1.0-SNAPSHOT</title><link rel="stylesheet" type="text/css" href="css/default.css" /><link rel="stylesheet" type="text/css" href="css/highlight.css" /><script type="text/javascript" src="js/highlight.min.js"></script><script type="text/javascript" src="js/jquery.min.js"></script><script type="text/javascript" src="js/page_effects.js"></script><script>hljs.initHighlightingOnLoad();</script></head><body><div id="header"><h2>Generated by <a href="https://github.com/weavejester/codox">Codox</a></h2><h1><a href="index.html"><span class="project-title"><span class="project-name">Wildwood</span> <span class="project-version">0.1.0-SNAPSHOT</span></span></a></h1></div><div class="sidebar primary"><h3 class="no-link"><span class="inner">Project</span></h3><ul class="index-link"><li class="depth-1 current"><a href="index.html"><div class="inner">Index</div></a></li></ul><h3 class="no-link"><span class="inner">Topics</span></h3><ul><li class="depth-1 "><a href="AgainstTruth.html"><div class="inner"><span>Against Truth</span></div></a></li><li class="depth-1 "><a href="Analysis.html"><div class="inner"><span></span></div></a></li><li class="depth-1 "><a href="Errata.html"><div class="inner"><span>Errata</span></div></a></li><li class="depth-1 "><a href="History.html"><div class="inner"><span>History</span></div></a></li><li class="depth-1 "><a href="Manifesto.html"><div class="inner"><span></span></div></a></li><li class="depth-1 "><a href="PredicateSubtext.html"><div class="inner"><span></span></div></a></li><li class="depth-1 "><a href="TheProblem.html"><div class="inner"><span>The Problem</span></div></a></li><li class="depth-1 "><a href="intro.html"><div class="inner"><span># Introduction to Wildwood</span></div></a></li></ul><h3 class="no-link"><span class="inner">Namespaces</span></h3><ul><li class="depth-1 "><a href="wildwood.core.html"><div class="inner"><span>wildwood.core</span></div></a></li></ul></div><div 
class="namespace-index" id="content"><h1><span class="project-title"><span class="project-name">Wildwood</span> <span class="project-version">0.1.0-SNAPSHOT</span></span></h1><h5 class="license">Released under the <a href="https://www.eclipse.org/legal/epl-2.0/">EPL-2.0 OR GPL-2.0-or-later WITH Classpath-exception-2.0</a></h5><div class="doc"><p>A general inference library using a game theoretic inference mechanism.</p></div><h2>Installation</h2><p>To install, add the following dependency to your project or build file:</p><pre class="deps">[wildwood "0.1.0-SNAPSHOT"]</pre><h2>Topics</h2><ul class="topics"><li><a href="AgainstTruth.html">Against Truth</a></li><li><a href="Analysis.html"></a></li><li><a href="Errata.html">Errata</a></li><li><a href="History.html">History</a></li><li><a href="Manifesto.html"></a></li><li><a href="PredicateSubtext.html"></a></li><li><a href="TheProblem.html">The Problem</a></li><li><a href="intro.html"># Introduction to Wildwood</a></li></ul><h2>Namespaces</h2><div class="namespace"><h3><a href="wildwood.core.html">wildwood.core</a></h3><div class="doc"><div class="markdown"><p><strong>TODO</strong>: write docs</p></div></div><div class="index"><p>Public variables and functions:</p><ul><li> <a href="wildwood.core.html#var-foo">foo</a> </li></ul></div></div></div></body></html>

14
docs/codox/intro.html Normal file
View file

@ -0,0 +1,14 @@
<!DOCTYPE html PUBLIC ""
"">
<html><head><meta charset="UTF-8" /><title># Introduction to Wildwood</title><link rel="stylesheet" type="text/css" href="css/default.css" /><link rel="stylesheet" type="text/css" href="css/highlight.css" /><script type="text/javascript" src="js/highlight.min.js"></script><script type="text/javascript" src="js/jquery.min.js"></script><script type="text/javascript" src="js/page_effects.js"></script><script>hljs.initHighlightingOnLoad();</script></head><body><div id="header"><h2>Generated by <a href="https://github.com/weavejester/codox">Codox</a></h2><h1><a href="index.html"><span class="project-title"><span class="project-name">Wildwood</span> <span class="project-version">0.1.0-SNAPSHOT</span></span></a></h1></div><div class="sidebar primary"><h3 class="no-link"><span class="inner">Project</span></h3><ul class="index-link"><li class="depth-1 "><a href="index.html"><div class="inner">Index</div></a></li></ul><h3 class="no-link"><span class="inner">Topics</span></h3><ul><li class="depth-1 "><a href="AgainstTruth.html"><div class="inner"><span>Against Truth</span></div></a></li><li class="depth-1 "><a href="Analysis.html"><div class="inner"><span></span></div></a></li><li class="depth-1 "><a href="Errata.html"><div class="inner"><span>Errata</span></div></a></li><li class="depth-1 "><a href="History.html"><div class="inner"><span>History</span></div></a></li><li class="depth-1 "><a href="Manifesto.html"><div class="inner"><span></span></div></a></li><li class="depth-1 "><a href="PredicateSubtext.html"><div class="inner"><span></span></div></a></li><li class="depth-1 "><a href="TheProblem.html"><div class="inner"><span>The Problem</span></div></a></li><li class="depth-1 current"><a href="intro.html"><div class="inner"><span># Introduction to Wildwood</span></div></a></li></ul><h3 class="no-link"><span class="inner">Namespaces</span></h3><ul><li class="depth-1 "><a href="wildwood.core.html"><div class="inner"><span>wildwood.core</span></div></a></li></ul></div><div 
class="document" id="content"><div class="doc"><div class="markdown"><h2><a href="#introduction-to-wildwood" name="introduction-to-wildwood"></a>Introduction to Wildwood</h2>
<p>I started building Wildwood nearly forty years ago on InterLisp-D workstations. Then, because of changing academic projects, I lost access to those machines, and the project was effectively abandoned. But I’ve kept thinking about it; it has cool ideas.</p>
<h3><a href="#explicable-inference" name="explicable-inference"></a>Explicable inference</h3>
<p>Wildwood was a follow-on from ideas developed in Arboretum, an inference system based on a novel propositional logic using defaults. Arboretum was documented in our paper:</p>
<p><a href="https://onlinelibrary.wiley.com/doi/epdf/10.1111/j.1468-0394.1987.tb00133.x">Mott, P &amp; Brooke, S: A graphical inference mechanism : Expert Systems Volume 4, Issue 2, May 1987, Pages 106-117</a></p>
<p>Two things were key about this system: first, we had a systematic mechanism for eliciting knowledge from domain experts into visual representations which those experts found easy to validate; and second, the system could easily generate high-quality natural language explanations of its decisions, which could be understood (and therefore challenged) by ordinary people.</p>
<p>This explicability was, I felt, a key value. Wildwood, while being able to infer over much broader and more messy domains, should be at least as transparent and easy to understand as Arboretum.</p>
<h3><a href="#game-theoretic-reasoning" name="game-theoretic-reasoning"></a>Game theoretic reasoning</h3>
<p>The insight central to the design of Wildwood is that human argument does not seek to preserve truth; it seeks to be hegemonic: to persuade the auditor to adopt the advocate’s position.</p>
<p>Consequently, an inference process should be a set of at least two arguing processes, each of which takes a different initial view and seeks to defend it using a system of legal moves.</p>
<h3><a href="#against-truth" name="against-truth"></a>Against truth</h3>
<p>Wildwood was originally intended to be a part of my (unfinished) thesis, <a href="AgainstTruth.html">Against Truth</a>, which is included in this archive for your amusement.</p></div></div></div></body></html>
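The two-advocate design described in the introduction above can be caricatured in a few lines of Clojure. This is a hypothetical sketch only, not part of this commit: the namespace `wildwood.sketch`, the map-of-rebuttals knowledge base, and the function names are all invented for illustration of the idea of alternating legal moves.

```clojure
;; Hypothetical sketch, not part of this commit: all names below are
;; invented to illustrate the two-advocate idea from the introduction.
(ns wildwood.sketch)

(defn legal-moves
  "Return the counter-claims which `kb` offers against `claim`."
  [kb claim]
  (get kb claim []))

(defn argue
  "Alternate moves between two advocates, starting from `claim`.
  The side left without a legal reply loses, so the last claim
  standing prevails."
  [kb claim]
  (loop [current claim]
    (let [replies (legal-moves kb current)]
      (if (empty? replies)
        current
        (recur (first replies))))))

;; Toy knowledge base: each claim maps to the claims that rebut it.
(def kb {:socrates-immortal [:socrates-is-a-man]
         :socrates-is-a-man []})

(argue kb :socrates-immortal) ;; => :socrates-is-a-man
```

A real implementation would of course need richer moves than simple rebuttal (presenting evidence, distinguishing cases, conceding), and would record the exchange so that the winning advocate can replay it as an explanation.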

2
docs/codox/js/highlight.min.js vendored Normal file

File diff suppressed because one or more lines are too long

4
docs/codox/js/jquery.min.js vendored Normal file

File diff suppressed because one or more lines are too long

112
docs/codox/js/page_effects.js Normal file
View file

@ -0,0 +1,112 @@
function visibleInParent(element) {
var position = $(element).position().top
return position > -50 && position < ($(element).offsetParent().height() - 50)
}
function hasFragment(link, fragment) {
return $(link).attr("href").indexOf("#" + fragment) != -1
}
function findLinkByFragment(elements, fragment) {
return $(elements).filter(function(i, e) { return hasFragment(e, fragment)}).first()
}
function scrollToCurrentVarLink(elements) {
var elements = $(elements);
var parent = elements.offsetParent();
if (elements.length == 0) return;
var top = elements.first().position().top;
var bottom = elements.last().position().top + elements.last().height();
if (top >= 0 && bottom <= parent.height()) return;
if (top < 0) {
parent.scrollTop(parent.scrollTop() + top);
}
else if (bottom > parent.height()) {
parent.scrollTop(parent.scrollTop() + bottom - parent.height());
}
}
function setCurrentVarLink() {
$('.secondary a').parent().removeClass('current')
$('.anchor').
filter(function(index) { return visibleInParent(this) }).
each(function(index, element) {
findLinkByFragment(".secondary a", element.id).
parent().
addClass('current')
});
scrollToCurrentVarLink('.secondary .current');
}
var hasStorage = (function() { try { return localStorage.getItem } catch(e) {} }())
function scrollPositionId(element) {
var directory = window.location.href.replace(/[^\/]+\.html$/, '')
return 'scroll::' + $(element).attr('id') + '::' + directory
}
function storeScrollPosition(element) {
if (!hasStorage) return;
localStorage.setItem(scrollPositionId(element) + "::x", $(element).scrollLeft())
localStorage.setItem(scrollPositionId(element) + "::y", $(element).scrollTop())
}
function recallScrollPosition(element) {
if (!hasStorage) return;
$(element).scrollLeft(localStorage.getItem(scrollPositionId(element) + "::x"))
$(element).scrollTop(localStorage.getItem(scrollPositionId(element) + "::y"))
}
function persistScrollPosition(element) {
recallScrollPosition(element)
$(element).scroll(function() { storeScrollPosition(element) })
}
function sidebarContentWidth(element) {
var widths = $(element).find('.inner').map(function() { return $(this).innerWidth() })
return Math.max.apply(Math, widths)
}
function calculateSize(width, snap, margin, minimum) {
if (width == 0) {
return 0
}
else {
return Math.max(minimum, (Math.ceil(width / snap) * snap) + (margin * 2))
}
}
function resizeSidebars() {
var primaryWidth = sidebarContentWidth('.primary')
var secondaryWidth = 0
if ($('.secondary').length != 0) {
secondaryWidth = sidebarContentWidth('.secondary')
}
// snap to grid
primaryWidth = calculateSize(primaryWidth, 32, 13, 160)
secondaryWidth = calculateSize(secondaryWidth, 32, 13, 160)
$('.primary').css('width', primaryWidth)
$('.secondary').css('width', secondaryWidth).css('left', primaryWidth + 1)
if (secondaryWidth > 0) {
$('#content').css('left', primaryWidth + secondaryWidth + 2)
}
else {
$('#content').css('left', primaryWidth + 1)
}
}
$(window).ready(resizeSidebars)
$(window).ready(setCurrentVarLink)
$(window).ready(function() { persistScrollPosition('.primary')})
$(window).ready(function() {
$('#content').scroll(setCurrentVarLink)
$(window).resize(setCurrentVarLink)
})

3
docs/codox/wildwood.core.html Normal file
View file

@ -0,0 +1,3 @@
<!DOCTYPE html PUBLIC ""
"">
<html><head><meta charset="UTF-8" /><title>wildwood.core documentation</title><link rel="stylesheet" type="text/css" href="css/default.css" /><link rel="stylesheet" type="text/css" href="css/highlight.css" /><script type="text/javascript" src="js/highlight.min.js"></script><script type="text/javascript" src="js/jquery.min.js"></script><script type="text/javascript" src="js/page_effects.js"></script><script>hljs.initHighlightingOnLoad();</script></head><body><div id="header"><h2>Generated by <a href="https://github.com/weavejester/codox">Codox</a></h2><h1><a href="index.html"><span class="project-title"><span class="project-name">Wildwood</span> <span class="project-version">0.1.0-SNAPSHOT</span></span></a></h1></div><div class="sidebar primary"><h3 class="no-link"><span class="inner">Project</span></h3><ul class="index-link"><li class="depth-1 "><a href="index.html"><div class="inner">Index</div></a></li></ul><h3 class="no-link"><span class="inner">Topics</span></h3><ul><li class="depth-1 "><a href="AgainstTruth.html"><div class="inner"><span>Against Truth</span></div></a></li><li class="depth-1 "><a href="Analysis.html"><div class="inner"><span></span></div></a></li><li class="depth-1 "><a href="Errata.html"><div class="inner"><span>Errata</span></div></a></li><li class="depth-1 "><a href="History.html"><div class="inner"><span>History</span></div></a></li><li class="depth-1 "><a href="Manifesto.html"><div class="inner"><span></span></div></a></li><li class="depth-1 "><a href="PredicateSubtext.html"><div class="inner"><span></span></div></a></li><li class="depth-1 "><a href="TheProblem.html"><div class="inner"><span>The Problem</span></div></a></li><li class="depth-1 "><a href="intro.html"><div class="inner"><span># Introduction to Wildwood</span></div></a></li></ul><h3 class="no-link"><span class="inner">Namespaces</span></h3><ul><li class="depth-1 current"><a href="wildwood.core.html"><div class="inner"><span>wildwood.core</span></div></a></li></ul></div><div 
class="sidebar secondary"><h3><a href="#top"><span class="inner">Public Vars</span></a></h3><ul><li class="depth-1"><a href="wildwood.core.html#var-foo"><div class="inner"><span>foo</span></div></a></li></ul></div><div class="namespace-docs" id="content"><h1 class="anchor" id="top">wildwood.core</h1><div class="doc"><div class="markdown"><p><strong>TODO</strong>: write docs</p></div></div><div class="public anchor" id="var-foo"><h3>foo</h3><div class="usage"><code>(foo x)</code></div><div class="doc"><div class="markdown"><p>I don’t do a whole lot.</p></div></div><div class="src-link"><a href="https://github.com/simon-brooke/the-great-game/blob/master/src/wildwood/core.clj#L3">view source</a></div></div></div></body></html>

17
project.clj Normal file
View file

@ -0,0 +1,17 @@
(defproject wildwood "0.1.0-SNAPSHOT"
:description "A general inference library using a game-theoretic inference mechanism."
:url "http://example.com/FIXME"
:license {:name "EPL-2.0 OR GPL-2.0-or-later WITH Classpath-exception-2.0"
:url "https://www.eclipse.org/legal/epl-2.0/"}
:dependencies [[org.clojure/clojure "1.8.0"]
[org.clojure/math.numeric-tower "0.0.4"]
[com.taoensso/timbre "4.10.0"]]
:codox {:metadata {:doc "**TODO**: write docs"
:doc/format :markdown}
:output-path "docs/codox"
:source-uri "https://github.com/simon-brooke/the-great-game/blob/master/{filepath}#L{line}"}
:plugins [[lein-cloverage "1.1.1"]
[lein-codox "0.10.7"]
[lein-cucumber "1.0.2"]
[lein-gorilla "0.4.0"]]
:repl-options {:init-ns wildwood.core})

6
src/wildwood/core.clj Normal file
View file

@ -0,0 +1,6 @@
(ns wildwood.core)
(defn foo
"I don't do a whole lot."
[x]
(println x "Hello, World!"))

7
test/wildwood/core_test.clj Normal file
View file

@ -0,0 +1,7 @@
(ns wildwood.core-test
(:require [clojure.test :refer :all]
[wildwood.core :refer :all]))
(deftest a-test
(testing "FIXME, I fail."
(is (= 0 1))))