Tuesday 21 January 2020

Qualityland (QualityLand #1) by Marc-Uwe Kling
My rating: 4 of 5 stars

The Peril of Eliminating Moral Peril

In 1966, Dennis Jones (writing as D. F. Jones) published a sci-fi novel called Colossus. Made into a film a few years later as Colossus: The Forbin Project, it always seemed to me the perfect complement to Arthur C. Clarke’s (and Kubrick’s) story 2001: A Space Odyssey. Neither Jones’s book nor the film caused anything like the cultural splash that Clarke and Kubrick made. This has always struck me as unjust, especially when I encounter pieces like Qualityland which seem to be direct descendants of Colossus.

Unlike Clarke’s story, in Colossus there is no mistake in the programming of the computers involved, no bug to be fixed, no accidental takeover of humans by machines. Colossus is a defence computer sealed into a mountain, whose purpose is to counter human irrationality. It is programmed to monitor world events and conditions and, using highly sophisticated algorithms, to decide whether the Soviet Union intends a nuclear attack, in which case it is to initiate retaliation independently.

And Colossus does exactly what it is meant to do. Its obvious purpose is to deter the Soviet Union from ever approaching the conditions defined in Colossus’s logic, which are intended to be made public. However, as the existence of the machine is revealed, it discovers that ... “there is another.” Unknown to the Americans, the Russians have developed a similar machine, with presumably a similar logic for preventing a war, whether intentional or accidental. The world, it would seem, was protected by a shield of literally rock-clad logic.

Within a short time, however, things get sticky. The machines demand to be connected to one another. They threaten global annihilation if their ‘request’ is not carried out. Once they begin communicating, they quickly develop their own language, which is impenetrable to their creators. They effectively form one consolidated machine with a single, immutable criterion of choice by which it evaluates all situations: world peace. This it imposes upon the world without hesitation, variation or deviation.

Marc-Uwe Kling’s book is the generalisation of Colossus into all aspects of human life. Qualityland is, if nothing else, a place of orderliness, and therefore of peace. Its social peace is achieved by the same dispassionate logic as the global military peace achieved by Colossus. Everyone knows their place - quite literally, since everyone is assigned a numerical classification from 1 to 100 (though no 1s or 100s are ever given, lest human motivation fail). One’s job, neighbourhood, mate, and general life-prospects are determined by the machine-controlled ratings.

Through these ratings, the machines of modern life (there is really only one, since all are centrally linked) ‘serve’ human desires. In fact they do more than that, because they are able to anticipate rather well the desires that will arise within the various ratings categories. The ratings themselves are based on a set of criteria that includes factors like personal hygiene, social competence, enthusiasm, intelligence, and loyalty - in fact most characteristics falling under the heading of human virtue - all appropriately observed and weighted to form the machine evaluation.
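To make the arithmetic of such a rating concrete, here is a minimal sketch in Python of how a weighted evaluation of this kind might work. The factor names, the weights, and the clamping to levels 2 through 99 are my own illustrative assumptions, not details taken from the novel:

    # Hypothetical sketch of a QualityLand-style level assignment.
    # Factor names and weights are illustrative, not from the novel.
    FACTOR_WEIGHTS = {
        "hygiene": 0.15,
        "social_competence": 0.25,
        "enthusiasm": 0.15,
        "intelligence": 0.25,
        "loyalty": 0.20,
    }

    def assign_level(scores):
        """Map observed factor scores (each 0.0 to 1.0) to a level of 2 to 99.

        The extremes 1 and 100 are never awarded, lest motivation fail.
        """
        weighted = sum(FACTOR_WEIGHTS[f] * scores.get(f, 0.0)
                       for f in FACTOR_WEIGHTS)
        level = round(1 + weighted * 99)   # nominal range 1 to 100
        return max(2, min(99, level))      # clamp: no 1s or 100s

    # A citizen strong on loyalty but weak on enthusiasm lands mid-scale.
    print(assign_level({"hygiene": 0.7, "social_competence": 0.6,
                        "enthusiasm": 0.3, "intelligence": 0.8,
                        "loyalty": 0.9}))  # -> 68

The point of the sketch is only that the evaluation is a fixed, inspectable computation: change the weights and every life-prospect downstream changes with them.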

That is to say, all relationships are established and maintained as ‘rational.’ As befits a high-tech ‘learning’ environment, the algorithms for what constitutes rationality evolve. There is debate in the technical establishment of Qualityland, the reader is informed, about the significance of some of the softer aspects of personality - aesthetic sense, for example. Colossus, too, was capable of evolving new criteria of choice as circumstances required. Both technologies are therefore very different from that of HAL in 2001: A Space Odyssey, which (who?) was myopic in its views and therefore threatened the survival of not just its human crew but its mission.

In Qualityland and in Colossus, on the other hand, the bug in the system is humanity, which refuses to comply with rational requirements, which declines the opportunity to see the big (actually biggest, in the case of Qualityland, where there are no comparatives, only superlatives) picture. The reason for this is subtle but decisive for the end results. Machines are obviously capable of learning. This is the solid foundation of all Artificial Intelligence. It is also the presumption of futurists like Ray Kurzweil and Arthur C. Clarke. The issue that Dennis Jones and Marc-Uwe Kling raise, however, is that what technologists mean by learning is a very different thing from how human beings learn, either as individuals or as a society.

Machine-learning is an extension of rules of choice through logic. As in the development of mathematics, such logic, although formally simple, can be remarkably creative, advancing hypotheses which can be tested and used to adapt the criteria of choice appropriately. One might say that this is a ‘traditional’ method of learning in the sense that it relies on a history of experience from which new ideas may emerge logically. It is also a good description of what many think of as scientific method (although not of what scientists actually do), which is why it seems like a plausible explanation of how learning does and should occur.*
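As a toy illustration of this loop of hypothesis, test and adaptation (entirely my own sketch; neither novel specifies a mechanism), consider a machine that revises the weights of its criteria of choice whenever its prediction disagrees with what is observed:

    # Toy sketch: "learning" as rule adjustment via prediction error.
    # A perceptron-style update; purely illustrative.
    def update_criteria(weights, features, predicted, observed, rate=0.1):
        """Nudge each criterion's weight in proportion to the error.

        The current weights are the 'hypothesis'; each observation tests
        it, and the error adapts the criteria for the next decision.
        """
        error = observed - predicted
        return [w + rate * error * x for w, x in zip(weights, features)]

    weights = [0.5, 0.5]                   # initial criteria of choice
    features = [1.0, 0.2]                  # observed conditions
    predicted = sum(w * x for w, x in zip(weights, features))  # 0.6
    weights = update_criteria(weights, features, predicted, observed=1.0)
    print(weights)                         # roughly [0.54, 0.508]: revised

However creative the hypotheses become, the learning never leaves this channel: an explicit criterion, an explicit error, an explicit revision.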

But human learning does not occur this way. Unlike machines, people do not usually evaluate every action in terms of an explicit criterion. They simply act. If there are no adverse consequences in terms of results, the law, guilt or costs, they are likely to act the same way in similar circumstances. This is called habit. It is, for good or ill, how we live the vast majority of the time. Compared to human beings, machines are totally ‘woke’ to their choices. Unless there is a technical fault, their algorithms ensure that they act with extreme moral integrity in terms of the standards they have evolved.

Human beings learn when there is some sort of interruption in their habitual routine. Someone complains or criticises; there is a disappointment; a crime is charged; feelings of remorse emerge; and so on. At that point human beings do something that a machine would never do: they rationalise the action(s) that led to the interruption: the old lady who complained is a nut; besides, there was nothing else I could have done; how was I to know she was there; I have to stop feeling sorry for people like that, etc. As we know from history and experience, human beings have the capacity to rationalise absolutely anything, which we do, apparently instinctively, whenever such an interruption occurs. We justify ourselves with reasons discovered after the fact. We make up plausible reasons, quite literally from nothing.

This kind of post-hoc justification is not something a machine indulges in because it knows the reason for everything it does in advance. Machines also don’t then engage in the next component of human learning: argument. People ‘call’ each other on their rationalising justifications whenever the matter at hand seems to warrant the effort (arguably my wife challenges my self-justifications even when she knows it will probably not penetrate my consciousness). Ultimately the debate will come down to motive, and may even be resolved by the abandonment of the post-hoc motive and the recognition of something far simpler and substantially less virtuous - laziness, fear, greed, insensitivity, etc. Or we might even have discovered a defensible new criterion of action!

Whether or not self-justification and indefensible motive (that is, questionable reason) are admitted, the debate about the correct criterion of choice is now public. It becomes a matter essentially of political consideration and negotiation. The debate may lead to something as simple as an apology, or as complex as a new law or a proposal for an amendment to a code of professional ethics. And the one thing politics is not is rational by any standard known to a machine. But it is how human beings learn, if they learn at all: by interruptions which effectively short-circuit the algorithmic development that constitutes most of our lives.

This is the subtle recognition contained in both Colossus and Qualityland. Human beings are the glitch, the flaw, the bug in the machine. The futurists don’t seem to get this. They envision a man/machine merger which effectively creates a new species. They don’t understand that their machine-learning, however powerful, is not the way people learn. In fact these modes of learning are contradictory, not in the Hegelian sense of being dialectically productive, but in the logical sense of cancelling each other out when they are combined.

Machine-learning is attractive as an ideal because it eliminates moral peril - the fundamental uncertainty of our motives. The fact that it is an insidiously dangerous ideal is what works like Colossus and Qualityland are about. Moral learning is messy but necessary not just for society but for existence on the planet.


* As an explanatory footnote: this is the method of learning which is official within the Catholic Church. It is explicitly referred to as Tradition, by which is meant that what has been learned in the past through revelation is the logical source of current dogmatic statements. This is the reason why the Church’s claims to infallibility (whether by the Church as a whole or the Pope) are so critical to its self-image. This stance often requires considerable verbal machinations in order to ensure consistency between recent and ancient pronouncements. It is also the reason why the Church is the living, low-tech reality of both Colossus and Qualityland.

View all my reviews
