
# Manifesto

Machine inference, automated reasoning, the core of what gets called Artificial Intelligence, has ab initio been based on the assumption that the purpose of reasoning is to preserve truth. It is because this assumption is false that the project has thus far failed to bear fruit, and that Alan Turing's eponymous test has yet to be passed.

Of course it is possible to build machines which, within the constraints of finite store, can accurately compute theorems of first-order predicate calculus ad nauseam, but such machines do not display behaviour which is convincingly intelligent. They are cold and mechanical; we do not recognise ourselves in them. Like the beautiful clocks of *The Girl in the Fireplace*, they are precisely inhuman.

As Turing's test itself shows, intelligence is a hegemonic term, a term laden with implicit propaganda. A machine is 'intelligent' if it can persuade a person that it is a person. By 'intelligent' we don't mean 'capable of perfect reasoning'. We mean 'like us'; and in meaning 'like us' we are smuggling under the covers, as semantic baggage, the claim that we ourselves are intelligent.

I might argue that perfect reasoning has little utility in a messy world, that to cope with the messiness of a messy world one needs messy reasoning. I shall not do so: the core of my argument is not that there is principle and value in the mode of reasoning that I propose, but precisely that it is ruthlessly unprincipled.

In this thesis I shall argue that the purpose of real world argument is not to preserve truth but to achieve hegemony: not to enlighten but to persuade, not to inform but to convince. This thesis succeeds not if in some arid, clockwork, mechanical sense I am right, but if, having read it, you believe that I am.

## On inference and explanation

I wrote the first draft of this thesis thirty-two years ago. In that draft I was concerned with the very poor explanations that mechanised inference systems were able to give of how they had come to the conclusions they did, and with their consequent unpersuasiveness. There was a mismatch, an impedance, between machine intelligence and human intelligence. Then, I did not see this as the problem. Rather, I thought that the problem was to provide better explanation systems as a way to buffer that impedance. I wrote then:

> This document deals only with explanation. Issues relating to inference and especially to truth maintenance will undoubtedly be raised as it progresses, but such hares will resolutely not be followed.

In this I was wrong. The problem was not explanation; the problem was inference. The problem was, specifically, that human accounts of inference since Aristotle have been hegemonistic and self-serving, so that when we started to try to automate inference we tried to automate not what we do but what we claim we do. We've succeeded. And having succeeded, we've looked at it and said, 'no, that is not intelligence'.

It is not intelligence because it is not like us. It is clockwork, inhuman, precise. It does things, let us admit this covertly in dark corners, that we cannot do. But it does not do things we can do: it does not convince. It does not persuade. It does not explain.

I shall do these things, and in doing them I shall provide an account of how these things are done in order that we can build machines that can do them. In doing this, I shall argue that truth does not matter; that it is a tool to be used, not an end to achieve. I shall argue that reason is profoundly unreasonable. The end to achieve, in argument as in so much other human behaviour, is not truth but dominance, dominance achieved by hegemony. In the end you will acknowledge that I am right; you will acknowledge it because I am right. I am right not because in some abstract sense what I say is true, but because you acknowledge it.