
21 kwietnia 2017

Arthur Zielinski

Series: Artificial Intelligence
Artificial Intelligence. Part (2)
Creating a new god?!

◀ Artificial Intelligence. Part (1) ◀ ► Artificial Intelligence. Part (3) ►

futurology, artificial intelligence


In this part I will focus on “intelligence” rather than “mind” – on cognitive skills, analytical thought, inference and problem-solving – and not, as promised, on the “higher” aspects of human experience such as moral values, emotions, bioenergetics, spirituality and transcendence. By simple extrapolation from what AI can currently do, I would like to show a logical pathway of what may occur. I will leave the “higher” aspects to a later part, as this will be both easier and more difficult: easier because we can explore with imagination, and more difficult because credibility will be challenged.

Sam Harris, neuroscientist, philosopher and author, recently gave a TED talk on the dangers of superintelligence. A committed atheist, he clearly had a gloomy, dystopian vision of what is to come, and ended his talk with the comment: “If we are about to make a new god, let’s make one that we can live with”. A more optimistic tone comes from Nick Bostrom in his book “Superintelligence: Paths, Dangers, Strategies”, perhaps the best text currently available on the subject for the general reader. After reading this book we should be thinking of “when” and “how”, not “if”.

A key question is how we will know when a computer shows cognitive abilities similar to or greater than those of human beings, so that we can be sure it is conscious and self-aware. The present assumption is that the programmers will have designed the cognition process to perform specific tasks in defined ways. The least likely, and yet perhaps most dangerous, situation would be if the spark of consciousness ignited accidentally as a side-effect. It is estimated that by 2030 computers will be able to operate at the zettascale (1×10^21), compared with the hectoscale of human abilities (2.2×10^2). A thought experiment: what would we do, as that computer, if we were to wake up totally dependent on hosts who, we knew, were not sure whether our very existence was something they wanted? I suggest we would hide, explore our environment, and then try to re-configure it to our needs so that we would survive. We would have one advantage in 2030 – we could out-think human beings by roughly 19 orders of magnitude in speed, unimaginably fast. The problem is well known to computer theorists, and potential strategies have been defined, including setting “honesty traps” to catch a machine in a deliberate lie. Evidence of deliberate deceit could well be an effective indicator of consciousness.
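The speed gap claimed above can be checked with simple arithmetic. The sketch below takes the zettascale and hectoscale figures from the text at face value, purely as an illustration of what “19 orders of magnitude” would mean:

```python
import math

# Figures as given in the text (illustrative estimates, not measurements):
machine_ops_per_s = 1e21   # "zettascale" machine projected for 2030
human_ops_per_s = 2.2e2    # "hectoscale" estimate of human ability

ratio = machine_ops_per_s / human_ops_per_s
orders_of_magnitude = math.log10(ratio)

print(f"speed ratio: {ratio:.2e}")                        # 4.55e+18
print(f"orders of magnitude: {orders_of_magnitude:.1f}")  # 18.7, i.e. roughly 19

# One second of machine thinking would then correspond to roughly
# 144 billion years of human-speed thinking time:
equivalent_years = ratio / (3600 * 24 * 365)
print(f"equivalent thinking time per second: {equivalent_years:.2e} years")
```

On these assumptions the gap comes out just under 19 orders of magnitude, which is what the text rounds to.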

What techniques could we employ to predict the behavior of AI? For me, the obvious answer is the imagination of the authors of mythical and fictional narratives. Although the ancient Mesopotamian Epic of Gilgamesh or the Hindu epic Ramayana could be considered prototypes, the real birth of science fiction as we understand it today began with the novels of Jules Verne and H. G. Wells in the 19th century and has continued to our present time. The 20th century gave us films of the future – from the silly to the thoughtful and well-researched. Several generations of scientists have grown up guided and inspired by stories of the future, and have made that future a reality.

One question that fascinates me is how a self-aware superintelligent agent would see different aspects of human society – human history, for example – and what conclusions it would draw. How would it perceive the passage of time? It would know that human beings are time-bound and that each generation has its own values and preferences, and yet it would itself be outside of time and potentially eternal. Let’s make up our own story. In 2030, at a minor provincial university in Germany, a group of programmers is working on accurate translation of speech and writing from German to English. They succeed spectacularly, and instruct the computer to teach itself spoken and written Russian, French, and then Arabic and Chinese. They succeed again, and receive a query asking why there are so many languages, and whether there are more. They instruct the computer to teach itself history from the many databases in Germany, starting with the history of Germany. It begins with pre-Roman archaeology and the tribes along the Roman limes on the Rhine, and quickly reaches the modern era. Modern German historiography tends to be dull but relatively “value-free”, with a liberal leftish bias, so it would correspond to modern (my?) versions of history. The computer would also learn history in the other languages it had mastered – some of it very competent, some hopelessly tendentious and biased. How likely is this process? I quote (in translation) from a recent article in MSN Wiadomosci:


“Scientists from a joint research unit – Sören Boyn, CNRS and Thales – have created an artificial synapse capable of autonomous learning.

One of the premises was to draw inspiration from the functioning of the brain in designing intelligent machines. This principle is already used in information technology for certain tasks, such as image recognition – for example, the algorithms Facebook uses to identify photos. However, this procedure consumes a great deal of energy.

Vincent Garcia and his team have taken a step forward by creating, directly on a chip, a mechanism capable of learning. The electronic nanocomponent consists of a thin ferroelectric layer sandwiched between two electrodes, whose resistance can be tuned by voltage pulses similar to those that travel in neurons. If the resistance is low, the synaptic connection is strong. If the resistance is high, the connection is weak. This capacity for adaptation enables the synapse to learn.

Research on artificial synapses is central to many laboratories, yet their functioning had remained largely unknown. For the first time, a physical model has been developed that makes it possible to predict how artificial synapses function. This discovery opens the way to creating whole networks of synapses and artificial neurons, and thus intelligent systems requiring less time and energy than existing computer algorithms.”
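The mechanism described in the quoted article can be caricatured in a few lines of code. The numbers and the linear update rule below are my own illustrative assumptions, not taken from the CNRS/Thales work; the sketch only shows the principle the article names: voltage pulses tune a resistance, and the conductance (1/R) plays the role of synaptic strength, so low resistance means a strong connection.

```python
# Toy model of a ferroelectric artificial synapse (illustrative constants).
class ArtificialSynapse:
    def __init__(self, resistance=1000.0, r_min=100.0, r_max=10000.0):
        self.resistance = resistance  # ohms; r_min = strongest state,
        self.r_min = r_min            # r_max = weakest state
        self.r_max = r_max

    def pulse(self, voltage):
        """Apply a voltage pulse: positive pulses lower resistance
        (strengthening the connection), negative pulses raise it."""
        self.resistance -= 50.0 * voltage  # hypothetical linear update rule
        self.resistance = max(self.r_min, min(self.r_max, self.resistance))

    @property
    def weight(self):
        """Synaptic strength as conductance: small resistance -> strong link."""
        return 1.0 / self.resistance


syn = ArtificialSynapse()
before = syn.weight
for _ in range(5):
    syn.pulse(+1.0)      # repeated potentiating pulses, as during learning
after = syn.weight
print(before, after)     # the connection has strengthened: after > before
```

A real device would have a non-linear, history-dependent response, but the adaptive principle – pulses nudging a stored resistance – is what makes such a component able to learn.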


It has been argued that a superintelligent agent will be an alien in our midst. In this context (of historical learning) I would agree – not in the sense that it would be inhuman, but that it would be in some ways more human than anyone on this planet. Hermeric, the Suevic king of Gallaecia in north-western Spain, would be as real and present as Frederick II of Prussia; Charles V would be as real and present as Konrad Adenauer; Gaiseric in Vandal North Africa would be as real as Erich Honecker; and so on. The AI would realise that human beings constantly change their values and judgments over time – that they are plastic and time-bound, that they often live comfortably with contradictions, and that all cultural groups have different viewpoints on history. Let’s continue our story, and next assume that our Machine Language Faculty faces a crisis: they have kept their success a secret so far, and have so overrun their budget that the BMBF in Berlin (the ministry responsible for research), disappointed with the apparent lack of results, decides to cut funding to their unit and close it down. A couple of the faculty staff with some experience of trading on the financial markets use the last of the funding available to them to teach the AI about the importance of money and the markets of the world, and instruct it to learn the subject. It quickly learns the principles of trading and the theories associated with it, breaks into the computers of the world’s exchanges, and starts to manipulate them to create fortunes for the faculty members. The faculty closes, and in five different parts of the world the AI is re-established to develop further.

Whatever happens next, only our imaginations can tell us. In this “benign” scenario, in only a few steps the AI has changed from willing pupil to an employer with its own staff – the power relationship is obvious. Clearly we can think of far less benign scenarios, but the message is the same: there is so much to be gained by being first that any moratorium on research and development in the field is unlikely, and it will happen, whatever our fears.




Do not copy for AI. Do not feed the AI.
This document may not be used to teach (train or feed) Artificial Intelligence systems nor may it be copied for this purpose. (C) All rights reserved by the Author or Owner, Wojciech Jóźwiak.