4 May 2017
Artificial Intelligence. Part (3)
The conscious electron (or photon, or quantum dot)
In a note drafted in 1842, Ada Lovelace, Lord Byron’s daughter and an accomplished mathematician, wrote what is probably the first text regarding potential consciousness in computers: “It is desirable to guard against the possibility of exaggerated ideas that might arise as to the powers of the Analytical Engine. ... The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths. Its province is to assist us in making available what we are already acquainted with”. She was a close colleague of Charles Babbage and was writing about his “Analytical Engine”, a mechanical general-purpose computer which he first described in 1837 and which was never completed in his lifetime; it was only in 1991 that a working version of his earlier Difference Engine No. 2 was built, showing that his designs worked as intended. She is regarded by many as the first computer programmer.
Much has changed since that time, but it is the development of computers with superhuman powers that has moved this issue from being an improbable curiosity to the forefront of IT development. Human inventiveness in the past has given us technologies that have enhanced our powers, but decisions have always remained in the human domain. By “superhuman” here I mean being able to decide better than human beings. Ada Lovelace understood the limitations of the Analytical Engine, but technology has moved on since then: from tic-tac-toe in the 1950s to Go recently; in microengineering, medical microsurgery and diagnostics, genetics, pharmacological research, financial markets, battlefield tactics, process and logistics management, navigation, astronomy, mathematics, linguistic research and many more functions, computers have been either superior to humans or co-creators in the decision-making process. Not least of these abilities is, of course, the use of computers to design, test and manufacture computers themselves. None of these skills, either individually or combined, could ever be called conscious, and although the list is growing, there is no reason why “intelligence” cannot grow indefinitely with no evidence of consciousness.
Outside of science fiction, I have found no evidence that anyone actually wants consciousness in computers, so why are so many IT workers concerned? The answer is that there is also a reasonable possibility that consciousness may occur. For the purposes of this text let’s make a few assumptions. First, we’ll define consciousness as: “an entity aware of its own existence, and of its relationship to the Universe outside of itself”, and we’ll also assume that we live in a Universe where the evolution of consciousness is possible subject only to the laws (known and unknown) of Nature, with no Divine or spiritual input necessary. What are the main objections to consciousness in computers?
(i) You cannot have “consciousness” without biological “life”...
The obvious question is “Why not?” Just because this is our only experience to date on this Planet does not mean that an alternative is improbable or impossible. Exobiology is a serious area of study, and there is no reason why something like “exoconsciousness” cannot also be a possibility. Leaving aside any discussion of panpsychism, could our definition of consciousness include a viable grass seed? It is clearly alive, yet it would be hard to speak of any consciousness it may have, although it responds to its surroundings. Logically could we not turn this around, and have something not living, and yet conscious? Let’s just be open to this possibility.
(ii) Consciousness is the result of evolution lasting a billion years or more – how can a technology which is less than eighty years old even achieve parity with human consciousness in such a short period of time, let alone be seen as some kind of a threat?...
The possibility of future consciousness in AI does not exist in vacuo – it is the product of the biological evolution of human beings, the evolution of the technologies and knowledge that human beings have mastered, and the future projections that that knowledge enables us to make – it will be, in a real sense, a logical continuation of that biological evolution, albeit not necessarily a desirable one. The main concern, as previously mentioned, is the speed at which this may take place – evolution in Nature tends to be arithmetic in its processes, while the development of AI is geometric – an exponential curve on a graph that is potentially unbounded, possibly quickly extending beyond human control or even understanding.
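The difference between the two growth rates is easy to underestimate. A minimal sketch, with entirely arbitrary starting values, step sizes and doubling ratio chosen only for illustration:

```python
# Illustrative sketch (arbitrary numbers): "arithmetic" (linear) growth,
# as in biological evolution, versus "geometric" (exponential) growth,
# as in the development of computing power.

def arithmetic_growth(start, step, generations):
    """Capability increases by a fixed increment each generation."""
    return [start + step * g for g in range(generations)]

def geometric_growth(start, ratio, generations):
    """Capability multiplies by a fixed ratio each generation."""
    return [start * ratio ** g for g in range(generations)]

linear = arithmetic_growth(1, 10, 20)       # +10 per generation
exponential = geometric_growth(1, 2, 20)    # doubling each generation

# The exponential curve starts more slowly, but once it overtakes the
# linear one the gap widens without limit.
crossover = next(g for g in range(20) if exponential[g] > linear[g])
print(crossover, linear[-1], exponential[-1])  # → 6 191 524288
```

After only twenty doublings the “geometric” process is already several thousand times ahead of the “arithmetic” one, which is the essence of the concern about speed.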
(iii) Consciousness evolved as a result of social development, predator prey relationships, a fear of death or injury, hunger, thirst, pain, the instinct for sex and progeny, and the battle for survival in a constantly changing, competitive and often dangerous environment – a computer will have none of these stimuli...
Probably all true, but AI will essentially have all these experiences behind it - available in a database. We do not need to create a global epidemic, run for our lives, fight a war or destroy our environment to know what these events are like – we already have the records and experience available to us.
(iv) Full consciousness as we understand it in Man could only arise with learning and feedback loops, with some or all of the five senses, touch, smell, hearing, taste and sight...
All true - consciousness evolved biologically through hundreds of millions (billions?) of years using these tools to get to where we are today, but the database of knowledge a computer will have available in the near future will include much of biology’s and humanity’s experience – it will already have senses provided for it. Is the attribute of senses needed for consciousness? Not necessarily - a severely physically handicapped baby with full brain function will still be conscious by any definition, and it is the development of its intelligence that is most likely to be impaired by its sensory handicaps, rather than its consciousness.
(v) Mobility and the ability to physically react to their surroundings has been a key stimulus in the development of consciousness of all fauna - a black box sitting on a desk will not have this ability nor experience...
True again, but this is irrelevant. A bacterium using its flagellum or a single mammalian sperm are both mobile in a fluid medium and can react to their surroundings, but they are not conscious in any meaningful sense. Nor is a self-driving car, even though it can potentially be a safer and better driver than any human. Available databases will be the foundation of consciousness in computers, and the sequences of biological evolution need not, and probably will not, be mimicked exactly by the future evolution of AI.
(vi) The ability to feel – emotional pain, sadness, loss, love, pleasure, exaltation and longing and a sense of morality are an essential part of the development of consciousness in higher forms of life – no machine will be able to do this...
Perhaps so, but these are rather the qualities we expect from higher fauna – consciousness need not have any or all of these characteristics. A sociopathic human may have no scruples about hurting others – we could condemn his ethics, but no-one could doubt his consciousness. A relevant question is whether we want conscious AI to have these characteristics – do we want air traffic control systems, self-driving cars, robotic surgeons or other life-critical skilled machinery to have emotional problems or visions of the sublime? If and when artificial consciousness does arrive, no-one doubts it will be different to that of humans – some AI theorists call it “alien”, which is misleading, since the only experience in its databases will be of human origin or based on the life-forms of this Planet.
(vii) There is no universally accepted definition of consciousness. After many years of work the psychologist Stuart Sutherland wrote in the 1989 edition of the Macmillan Dictionary of Psychology: “Consciousness is the having of perceptions, thoughts, and feelings; awareness. The term is impossible to define except in terms that are unintelligible without a grasp of what consciousness means. Many fall into the trap of equating consciousness with self-consciousness—to be conscious it is only necessary to be aware of the external world. Consciousness is a fascinating but elusive phenomenon: it is impossible to specify what it is, what it does, or why it has evolved. Nothing worth reading has been written on it”. We may or may not agree with his statement, but given the difficulties we have in understanding our own consciousness, or reaching a consensus on what it is, why do we even bother to think it is remotely possible in machines?...
What is interesting about Stuart Sutherland’s definition is the implication that consciousness somehow evolved by accident, as a result of the increasing complexity of organisms undergoing evolution. This is the same concern voiced by AI theorists who warn that a similar event may happen due to the rapid development of computing power. It is true that psychology and the behavioural sciences, neurology and philosophical enquiry have not provided us with any adequate solutions - it is the advances in computer science in the twenty-eight years since that text was written that are most likely to provide some of the answers. What is the current state of research? Among the best-known projects are the EU’s Human Brain Project, the China Brain Project, the Japanese Brain/MINDS project, the American BRAIN Initiative, the Human Connectome Project, the Genes to Cognition Project, and arguably the most interesting and best-funded, Elon Musk’s Neuralink project, which is humorously and well explained in a “Wait But Why” article entitled “Neuralink and the Brain’s Magical Future”.
All of these studies are based on neuroscience and computing technology – an “engineering” approach which treats the mammalian brain in a “reductionist” way, as more-or-less a biological computer.
(viii) The human brain/mind/body complex is not just a number-crunching machine on a rational level – it is also “non-rational” which means it can provide us with immediate information through our nervous and hormonal system without our conscious thought. Call it bioresonance, instinct, non-rational behavior, or whatever, it is an essential part of being alive – of surviving, choosing a mate, judging what is safe and what is not. It is vital in creation, inspiration, abstract thought and imagination. The neuroscience/AI approaches listed in (vii) above will provide results that will be misleading, inadequate and an abomination...
Our definitions of “consciousness” are imprecise, which leads to the conflation of “being alive”, “being human” and “being conscious”. The “reductionist” approach of all of these projects is far from an abomination – neurology has benefited greatly from advances in the computerization of diagnostic tools, and AI development benefits from multi-disciplinary studies of an analogous system, the human brain. There is no expectation of a final definition of what consciousness is, but hopefully we will be closer to an answer.
Conclusion
Most AI theorists believe it likely that within fifty years we will have a conscious computer. No-one knows whether it will be a “good” thing or a “bad” thing, but most believe that its coming will change humanity profoundly. No-one knows at present how it may be controlled, or if that will be possible at all, but all agree that we should start talking about it now. Sam Harris compared it to an alien visitation – what would humanity do if it received a signal that in fifty years an alien spaceship would arrive on Earth? What should humanity do now, knowing that AI is coming?