diff --git a/doc/Experience.md b/doc/Experience.md
new file mode 100644
index 0000000..80fb1a6
--- /dev/null
+++ b/doc/Experience.md
@@ -0,0 +1,3 @@
+# Experience
+
+{Not yet written. To cover an evaluation of the Clojure Wildwood library, when it works, and what I can learn from it going forward}
diff --git a/doc/History.md b/doc/History.md
index de33343..d7c6f8b 100644
--- a/doc/History.md
+++ b/doc/History.md
@@ -4,9 +4,11 @@
 The object of this chapter is to describe and discuss the development of Expert System explanations from the beginning to the most recent systems. The argument which I will try to advance is that development has been continuously driven by the perceived inadequacy of the explanations given; and that, while many ad hoc, and some principled, approaches have been tried, no really adequate explanation system has emerged. Further, I will claim that, as some of the later and more principled explanation systems accurately model the accounts of explanation advanced in current philosophy, the philosophical understanding of explanation is itself inadequate.
 
+{I ought to add to this chapter to give some overview of what's happened since 1990, and look at explanations of neural network decisions, because that will help in later parts/chapters of Part One}
+
 ## Family Tree of Systems discussed
 
-(diagram here)|
+![Family tree](../img/family-tree.svg)
 
 Chronology relates to publication, and not to implementation. Links are shown where system designers acknowledge influence, or where family resemblance between systems is extremely obvious. In a small field like this, it is reasonably (but not absolutely) safe to assume that major practitioners are up to date with the current literature.
@@ -123,7 +125,7 @@ The HOW query, by contrast, operates on a history list, and requires, as argume
 Some numbered statements, eg (5.0) below, do not appear to be 'test parts' of any rule. It is not made clear what the effect of asking 'WHY [5.0]' would be.
 
-####= Example, user input in bold:
+##### Example, user input prefixed with '**' prompt:
 
 Where is the suspected portal of entry of organism-1 into this sterile site?
@@ -157,7 +159,7 @@ Some numbered statements, eg (5.0) below, do not appear to be 'test parts' of a
 [4.0] At that point Rule 021 was being used.
 
- **HOW [4.0]
+ ** HOW [4.0]
 
 [I.e., how was Rule 021 used?]
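To make the two readings of HOW concrete, here is a minimal Clojure sketch of a consultation history list and a HOW handler. Everything in it — the `history` map, the rule contents, and the phrasing of the templates — is invented for illustration; it is not MYCIN's data structure, and it is not part of the Wildwood codebase.

```clojure
(ns history-sketch
  (:require [clojure.string :as str]))

;; Hypothetical history list: statement number -> the rule whose test part
;; that statement is, whether the rule has been evaluated yet, and its
;; premises and conclusion. All contents are invented placeholders.
(def history
  {4.0 {:rule       "RULE021"
        :evaluated? true
        :premises   ["the site of the culture is one of the sterile sites"
                     "the stain of the organism is gram-negative"]
        :conclusion "there is evidence that the organism is of class Enterobacteriaceae"}
   5.0 {:rule       "RULE163"
        :evaluated? false
        :premises   ["the patient is a compromised host"]
        :conclusion "the portal of entry of the organism is the GI tract"}})

(defn how
  "HOW [n]: a modus-ponens style justification if the rule has already been
   evaluated; procedural advice ('how would you find out') if it has not."
  [n]
  (let [{:keys [rule evaluated? premises conclusion]} (get history n)]
    (if evaluated?
      (str "I knew that " conclusion " because " rule " succeeded: "
           (str/join ", and " premises) ".")
      (str "To find out whether " conclusion ", " rule
           " would first establish whether " (str/join ", and " premises) "."))))

;; (how 4.0) ;; => a justification: "how do you know that..."
;; (how 5.0) ;; => a procedure: "how would you find out whether..."
```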
@@ -608,7 +610,7 @@ The strategy used is described as less ambitious than schemes which involve con
 Barr, A & Feigenbaum, E A: The Handbook of Artificial Intelligence, Pitman, 82, especially articles VII B, TEIRESIAS, and VIII B1, MYCIN
 
-Brooke, S: Interactive Graphical Representation of Knowledge: in Proceedings of the Alvey KBS Club SIG on Explanation second workshop, 87
+Brooke, S: Interactive Graphical Representation of Knowledge: in Proceedings of the Alvey KBS Club SIG on Explanation second workshop, 87 {have this}
 
 Buchanan, B, Sutherland, G, & Feigenbaum, EA; Heuristic Dendral: a program for generating explanatory hypotheses in organic chemistry: in Meltzer & Michie, eds, Machine Intelligence 4: Edinburgh University Press, 1969;
@@ -630,7 +632,7 @@ Mott, P & Brooke, S: A Graphical Inference Mechanism: in Expert Systems iv,
 Pople, H E: The Formation of Composite Hypotheses in Diagnostic Problem Solving - an Exercise in Synthetic Reasoning in Papers presented at the 5th International Joint Conference on Artificial Intelligence, MIT, 1977
 
-Swartout, W: A Digitalis Therapy Advisor with Explanations: in Proceedings of the 5th International Joint Conference on Artificial Intelligence, MIT, 1977
+Swartout, W: A Digitalis Therapy Advisor with Explanations: in Proceedings of the 5th International Joint Conference on Artificial Intelligence, MIT, 1977 {have this}
 
 Swartout, W R: XPLAIN: a System for Creating and Explaining Expert Consulting Programs: in Artificial Intelligence 21, 1983
diff --git a/doc/Implementing.md b/doc/Implementing.md
new file mode 100644
index 0000000..2a46ca4
--- /dev/null
+++ b/doc/Implementing.md
@@ -0,0 +1,3 @@
+# Implementing
+
+{not yet written. To cover the actual structure of the Clojure Wildwood library, as I do it}
diff --git a/doc/Manifesto.md b/doc/Manifesto.md
index 9463126..15ed573 100644
--- a/doc/Manifesto.md
+++ b/doc/Manifesto.md
@@ -1,5 +1,4 @@
-Manifesto
-=========
+# Manifesto
 
 Machine inference – automated reasoning, the core of what gets called Artificial Intelligence – has ab initio been based on the assumption
@@ -33,8 +32,7 @@ persuade, not to inform but to convince.
 This thesis succeeds not if in some arid, clockwork, mechanical sense I am right, but if, having read it, you believe that I am.
 
-On inference and explanation
-----------------------------
+## On inference and explanation
 
 I wrote the first draft of this thesis thirty two years ago. In that draft I was concerned with the very poor explanations that mechanised
diff --git a/doc/PredicateSubtext.md b/doc/PredicateSubtext.md
index da54cde..045ae38 100644
--- a/doc/PredicateSubtext.md
+++ b/doc/PredicateSubtext.md
@@ -1,10 +1,9 @@
-On the subtext of a predicate
------------------------------
+# On the subtext of a predicate
 
 Predicates are not atomic. They do not come single spies, but freighted with battalions of inferable subtexts. Suppose Anthony says
 
-Brutus killed Caesar in Rome during the ides of March
+    Brutus killed Caesar in Rome during the ides of March
 
 I learn more than just that 'Brutus killed Caesar in Rome during the ides of March'. I also learn that
@@ -16,7 +15,7 @@ I also learn that
 Suppose Drusilla now says
 
-E killed Caesar in Rome during the ides of March
+    Longus killed Caesar in Rome during the ides of March
 
 this casts doubt on Anthony's primary claim, and on the belief that Brutus is a killer; but it reinforces the beliefs that
@@ -27,7 +26,7 @@
 If Falco then says
 
-No, I heard from Gaius that it happened in April
+    No, I heard from Gaius that it happened in April
 
 the beliefs that
diff --git a/doc/Reimagining.md b/doc/Reimagining.md
new file mode 100644
index 0000000..768e9bc
--- /dev/null
+++ b/doc/Reimagining.md
@@ -0,0 +1,3 @@
+# Reimagining
+
+{not yet written. To cover development of the Clojure Wildwood library, and the thinking and design which develops as I do it}
diff --git a/doc/intro.md b/doc/intro.md
index b9562a8..deed9fa 100644
--- a/doc/intro.md
+++ b/doc/intro.md
@@ -1,11 +1,11 @@
-## Introduction to Wildwood
+# Introduction to Wildwood
 
 I started building Wildwood nearly forty years ago on InterLisp-D workstations. Then, because of changing academic projects, I lost access to those machines, and the project was effectively abandoned. But, I've kept thinking about it; it has cool ideas.
 
-### Explicable inference
+## Explicable inference
 
 Wildwood was a follow on from ideas developed in Arboretum, an inference system based on a novel propositional logic using defaults. Arboretum was documented in
@@ -24,7 +24,7 @@
 This explicability was, I felt, a key value. Wildwood, while being able to infer over much broader and more messy domains, should be at least as transparent and easy to understand as Arboretum.
 
-### Game theoretic reasoning
+## Game theoretic reasoning
 
 The insight which is central to the design of Wildwood is that human argument does not seek to preserve truth, it seeks to be hegemonic: to persuade the
@@ -34,7 +34,7 @@
 Consequently, an inference process should be a set of at least two arguing processes, each of whom takes a different initial view and seeks to defend it using a system of legal moves.
 
-### Against truth
+## Against truth
 
 Wildwood was originally intended to be a part of my (unfinished) thesis, [Against Truth](AgainstTruth.html), which is included in this archive for
diff --git a/docs/codox/AgainstTruth.html b/docs/codox/AgainstTruth.html
index 76cb571..c3cd7cf 100644
--- a/docs/codox/AgainstTruth.html
+++ b/docs/codox/AgainstTruth.html
@@ -1,6 +1,6 @@
-Against Truth

Against Truth

+Against Truth

Against Truth

Simon Brooke

Hey, what IS truth, man? [Beeblebrox, quoted in [Adams, 1978]]

diff --git a/docs/codox/Analysis.html b/docs/codox/Analysis.html index b95b3b7..b0474d7 100644 --- a/docs/codox/Analysis.html +++ b/docs/codox/Analysis.html @@ -1,6 +1,6 @@ -Analysis

Analysis

+Analysis

Analysis

Accounts from the Philosophy of Science

(Towards another chapter. What I want to do is:

    @@ -151,7 +151,7 @@ a is x

    The latter form implies a completion has occurred.

    An explanation has not occurred until the act of explanation has occurred. For John to have explained why x, it is not sufficient that he knew why x.

‘Explaining’ is illocutionary [see Austin, How to Do Things with Words]; it is done in an appropriate context. Out of such a context, the same statement would be an equivalent perlocutionary act: ‘enlightening’, ‘getting <the auditor> to understand’.

    -

    The intention of an explaining act must be to engender understanding of the explicanda. Achinstein does not state, but may be taken to imply, that for such an act to take place there must (at any rate in the mind of the explainer) be some explainee or auditor, in whose mind understanding is to be engendered[^1].

    +

    The intention of an explaining act must be to engender understanding of the explicanda. Achinstein does not state, but may be taken to imply, that for such an act to take place there must (at any rate in the mind of the explainer) be some explainee or auditor, in whose mind understanding is to be engendered[Achinstein 1].

    "The first condition expresses what I take to be a fundamental relationship between explaining and understanding. It is that S explains q by uttering u only if

      @@ -173,7 +173,7 @@ a is x

      If explaining is to be seen as an act directed at engendering understanding, some account of what is meant by ‘understanding’ must be supplied. Achinstein asserts that:

      "One understands q only if one knows a correct answer to Q which one knows to be correct (sic)… we can say that a necessary condition for the truth of sentences of the form ‘At understands q’ is

      -

      (∀x)(A knows of x that it is a correct answer to Q)" (p 23 - 4)

      +

      “(∀x)(A knows of x that it is a correct answer to Q)” (p 23 - 4)

      Achinstein’s essential problem is quite simple to express: he wants to say that one doesn’t understand something until one not only knows a proposition which expresses the reason for it, and knows that this proposition does in fact express the ‘correct’ reason, but also has internalised this proposition.

      ‘Content-giving Propositions’

      @@ -347,7 +347,7 @@ a is x

      Linguistics

      Sperber, Relevance {have this}

      Psychology

      -

      Antaki, C: Lay Explanations of Behaviour

      +

Antaki, C., (1989) ‘Lay explanations of behaviour: two psychological cultures’, in Expert Knowledge and Explanation: The Knowledge-language Interface, Ellis, C. (ed), Ellis Horwood, London, pp 42-60.

      Antaki, C: Analysing Everyday Explanation

      Craik, The Nature of Explanation

      Draper, S W: A User Centred Concept of Explanation: Alvey Exp SIG 2

      @@ -355,4 +355,4 @@ a is x

      Artificial Intelligence

A Goguen, Reasoning and Natural Explanation


      -

      [^1]: Later (p 19), Achinstein refers to ‘the audience’. By contrast, hecites (p 20) an alternative formulation by RJ Mattews in which the audience is explicitly represented. [1][OnHylasAndPhilonus.html] Statement of this argument from Berkley’s ‘Three Dialogues of Hylas and Philonous’

\ No newline at end of file +

[Achinstein 1]: Later (p 19), Achinstein refers to ‘the audience’. By contrast, he cites (p 20) an alternative formulation by RJ Matthews in which the audience is explicitly represented. [1][OnHylasAndPhilonus.html] Statement of this argument from Berkeley’s ‘Three Dialogues of Hylas and Philonous’

\ No newline at end of file diff --git a/docs/codox/Arboretum.html b/docs/codox/Arboretum.html index 4a8536d..0c2daca 100644 --- a/docs/codox/Arboretum.html +++ b/docs/codox/Arboretum.html @@ -1,6 +1,6 @@ -Arboretum

Arboretum

+Arboretum

Arboretum

TODO: To be scanned from chapter iv of the 21st June 1988 draft.

Arboretum screen view showing sample explanations

Arboretum screen showing a number of generated explanations. This picture was scanned from a 32 year old acetate slide, apologies for quality

\ No newline at end of file diff --git a/docs/codox/Conception.html b/docs/codox/Conception.html index 06934ca..c67c0c1 100644 --- a/docs/codox/Conception.html +++ b/docs/codox/Conception.html @@ -1,4 +1,4 @@ -Conception

Conception

+Conception

Conception

TODO: To be scanned from chapter v of the 21st June 1988 draft.

\ No newline at end of file diff --git a/docs/codox/Errata.html b/docs/codox/Errata.html index 5968a57..5d6bcf8 100644 --- a/docs/codox/Errata.html +++ b/docs/codox/Errata.html @@ -1,6 +1,6 @@ -Errata

Errata

+Errata

Errata

  1. On title page: the claim that Zaphod Beeblebrox is quoted as saying ‘Hey, what IS truth, man?’ in the printed text of Douglas Adams’ ‘Hitchhiker’s Guide to the Galaxy’ is false.
\ No newline at end of file diff --git a/docs/codox/Experience.html b/docs/codox/Experience.html new file mode 100644 index 0000000..ea64ad5 --- /dev/null +++ b/docs/codox/Experience.html @@ -0,0 +1,4 @@ + +Experience

Experience

+

{Not yet written. To cover an evaluation of the Clojure Wildwood library, when it works, and what I can learn from it going forward}

\ No newline at end of file diff --git a/docs/codox/History.html b/docs/codox/History.html index f4493ca..09bcf95 100644 --- a/docs/codox/History.html +++ b/docs/codox/History.html @@ -1,10 +1,11 @@ -History

History

+History

History

History: Introduction

The object of this chapter is to describe and discuss the development of Expert System explanations from the beginning to the most recent systems. The argument which I will try to advance is that development has been continuously driven by the perceived inadequacy of the explanations given; and that, while many ad hoc, and some principled, approaches have been tried, no really adequate explanation system has emerged. Further, I will claim that, as some of the later and more principled explanation systems accurately model the accounts of explanation advanced in current philosophy, the philosophical understanding of explanation is itself inadequate.

+

{I ought to add to this chapter to give some overview of what’s happened since 1990, and look at explanations of neural network decisions, because that will help in later parts/chapters of Part One}

Family Tree of Systems discussed

-

(diagram here)|

+

Family tree

Chronology relates to publication, and not to implementation. Links are shown where system designers acknowledge influence, or where family resemblance between systems is extremely obvious. In a small field like this, it is reasonably (but not absolutely) safe to assume that major practitioners are up to date with the current literature.

Contrary to the current view, expressed by such authors as Weiner:

“… (Expert) systems include some mechanism for giving explanations, since their credibility depends on the user’s ability to follow their reasoning, thereby verifying that an answer is correct.” [Weiner, 80]

@@ -85,7 +86,7 @@ DONE

The HOW question

The HOW query, by contrast, operates on a history list, and requires, as argument, a statement number. The response given is (again templated) a print out of the rule whose ‘test part’ is given in the numbered statement. Thus there are two quite different semantics to HOW. HOW of a rule which has been evaluated will give what is in some sense a justification (by modus ponens) for belief in the statement - in this sense it might be rendered “how do you know that…”. HOW of a rule which has yet to be evaluated gives procedural information about how to find the truth value of the statement, and might be rendered “how would you find out whether…”. These different semantics are to some extent signalled by the use of different templates.

Some numbered statements, eg (5.0) below, do not appear to be ‘test parts’ of any rule. It is not made clear what the effect of asking ‘WHY [5.0]’ would be.

-

####= Example, user input in bold:

+
Example, user input prefixed with ’**’ prompt:
Where is the suspected portal  of entry of organism-1 into this sterile site?
 
 ** WHY 
@@ -118,7 +119,7 @@ There is strongly suggestive  evidence (.9) that Enterobacteriacea is the class
 
 [4.0] At that point Rule 021  was being used.
 
-**HOW [4.0] 
+** HOW [4.0] 
 
 [I.e., how was Rule 021  used?]
 
@@ -458,7 +459,7 @@ of Peter's  support.
 

3 MYCIN/TEIRESIAS used “certainty factors” (not to be confused with formal indices of probability) to express its confidence in steps of reasoning. These were entered by the Knowledge Engineer for the individual rules, and manipulated arithmetically by the inference mechanism. They ranged in value from -1 (certainly false) through 0 (no confidence at all in the reasoning step) to 1 (certainty). 
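As a concrete illustration of the arithmetic, here is the combination rule usually attributed to MYCIN in the secondary literature; the Clojure below is my own sketch, not code from MYCIN or from Wildwood.

```clojure
;; Sketch of the certainty-factor combination rule commonly attributed to
;; MYCIN: merging two CFs, each in [-1, 1], bearing on the same hypothesis.
;; -1 = certainly false, 0 = no confidence either way, 1 = certainty.
(defn combine-cf [x y]
  (cond
    (and (pos? x) (pos? y)) (+ x (* y (- 1 x)))
    (and (neg? x) (neg? y)) (+ x (* y (+ 1 x)))
    ;; mixed signs; undefined when one CF is 1 and the other -1
    :else (/ (+ x y)
             (- 1 (min (Math/abs (double x)) (Math/abs (double y)))))))

;; (combine-cf 0.9 0.6)  ;; => 0.96    two confirming pieces of evidence
;; (combine-cf 0.9 -0.4) ;; => 0.833…  conflicting evidence partly cancels
```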

References

Barr, A & Feigenbaum, E A: The Handbook of Artificial Intelligence, Pitman, 82, especially articles VII B, TEIRESIAS, and VIII B1, MYCIN

-

Brooke, S: Interactive Graphical Representation of Knowledge: in Proceedings of the Alvey KBS Club SIG on Explanation second workshop, 87

+

Brooke, S: Interactive Graphical Representation of Knowledge: in Proceedings of the Alvey KBS Club SIG on Explanation second workshop, 87 {have this}

Buchanan, B, Sutherland, G, & Feigenbaum, EA; Heuristic Dendral: a program for generating explanatory hypotheses in organic chemistry: in Meltzer & Michie, eds, Machine Intelligence 4: Edinburgh University Press, 1969;

Buchanan, BG & Feigenbaum, EA: Dendral and Meta-Dendral: Their Applications Dimension: in Artificial Intelligence 11, 1978

Davis, R, Buchanan, B and Shortliffe, E: Production Rules as a Representation for a Knowledge-Based Consultation Program: in Artificial Intelligence 8, 1977

@@ -469,7 +470,7 @@ of Peter's support.

Miller, Perry L: A Critiquing Approach to Expert Computer Advice: ATTENDING: Pitman Research Notes in Artificial Intelligence 1, London, 1984

Mott, P & Brooke, S: A Graphical Inference Mechanism: in Expert Systems iv, 2, May 87

Pople, H E: The Formation of Composite Hypotheses in Diagnostic Problem Solving - an Exercise in Synthetic Reasoning in Papers presented at the 5th International Joint Conference on Artificial Intelligence, MIT, 1977

-

Swartout, W: A Digitalis Therapy Advisor with Explanations: in Proceedings of the 5th International Joint Conference on Artificial Intelligence, MIT, 1977

+

Swartout, W: A Digitalis Therapy Advisor with Explanations: in Proceedings of the 5th International Joint Conference on Artificial Intelligence, MIT, 1977 {have this}

Swartout, W R: XPLAIN: a System for Creating and Explaining Expert Consulting Programs: in Artificial Intelligence 21, 1983

Walker, A: Automatic Generation of Explanations of Results from Knowledge Bases: Research Report RJ3481, IBM Research Laboratory, San Jose, California, 1982

diff --git a/docs/codox/Implementing.html b/docs/codox/Implementing.html new file mode 100644 index 0000000..787a1d0 --- /dev/null +++ b/docs/codox/Implementing.html @@ -0,0 +1,4 @@ + +Implementing

Implementing

+

{not yet written. To cover the actual structure of the Clojure Wildwood library, as I do it}

\ No newline at end of file diff --git a/docs/codox/Manifesto.html b/docs/codox/Manifesto.html index 5e8820c..035482e 100644 --- a/docs/codox/Manifesto.html +++ b/docs/codox/Manifesto.html @@ -1,6 +1,6 @@ -

Manifesto

+Manifesto

Manifesto

Machine inference – automated reasoning, the core of what gets called Artificial Intelligence – has ab initio been based on the assumption that the purpose of reasoning was to preserve truth. It is because this assumption is false that the project has thus far failed to bear fruit, that Alan Turing’s eponymous test has yet to be passed.

Of course it is possible to build machines which, within the constraints of finite store, can accurately compute theora of first order predicate calculus ad nauseam but such machines do not display behaviour which is convincingly intelligent. They are cold and mechanical; we do not recognise ourselves in them. Like the Girl in the Fireplace’s beautiful clocks, they are precisely inhuman.

As Turing’s test itself shows, intelligence is a hegemonic term, a term laden with implicit propaganda. A machine is ‘intelligent’ if it can persuade a person that it is a person. By ‘intelligent’ we don’t mean ‘capable of perfect reasoning’. We mean ‘like us’; and in meaning ‘like us’ we are smuggling under the covers, as semantic baggage, the claim that we ourselves are intelligent.

diff --git a/docs/codox/OnHylasAndPhilonus.html b/docs/codox/OnHylasAndPhilonus.html index 1a11640..f9df41a 100644 --- a/docs/codox/OnHylasAndPhilonus.html +++ b/docs/codox/OnHylasAndPhilonus.html @@ -1,6 +1,6 @@ -On the First Dialogue of Hylas and Philonous

On the First Dialogue of Hylas and Philonous

+On the First Dialogue of Hylas and Philonous

On the First Dialogue of Hylas and Philonous

The argument that our perception of a ‘real world’ does not prove its existence is not new, of course. Here is a classic statement of a similar argument from Berkeley’s First Dialogue of Hylas and Philonous:

Hyl.: Do we not perceive the stars and moon, for example, to be a great way off? Is not this, I say, manifest to the senses?

diff --git a/docs/codox/PredicateSubtext.html b/docs/codox/PredicateSubtext.html index 7707cb6..4ba25fc 100644 --- a/docs/codox/PredicateSubtext.html +++ b/docs/codox/PredicateSubtext.html @@ -1,8 +1,9 @@ -

On the subtext of a predicate

+On the subtext of a predicate

On the subtext of a predicate

Predicates are not atomic. They do not come single spies, but freighted with battalions of inferable subtexts. Suppose Anthony says

-

Brutus killed Caesar in Rome during the ides of March

+
Brutus killed Caesar in Rome during the ides of March
+

I learn more than just that ‘Brutus killed Caesar in Rome during the ides of March’. I also learn that

  • Brutus is a killer
  • @@ -11,7 +12,8 @@
  • The ides of March are a time to be extra cautious

Suppose Drusilla now says

-

E killed Caesar in Rome during the ides of March

+
Longus killed Caesar in Rome during the ides of March
+

this casts doubt on Anthony’s primary claim, and on the belief that Brutus is a killer; but it reinforces the beliefs that

  • Caesar has been killed
  • @@ -19,7 +21,8 @@
  • The ides of March are a time to be extra cautious.

If Falco then says

-

No, I heard from Gaius that it happened in April

+
No, I heard from Gaius that it happened in April
+

the beliefs that