
29 March 2017

Arthur Zielinski

Series: Artificial Intelligence
Artificial Intelligence. Part (1)
Why Artificial Intelligence will happen



Part 1 - Why Artificial Intelligence will happen

Not long ago there was a discussion on Taraka about whether AI will become a reality. There were many sensible comments, and the consensus seemed to be that it was unlikely to happen. The truth is that most of the agencies, industries and individuals involved in this field are not only convinced that it is imminent, but are now more concerned about how this new reality will affect human society, and are discussing ways to control it so that its effects are benign.

What is AI? How may we define it? A good start may be the Wikipedia definition: “Artificial intelligence (AI) is intelligence exhibited by machines. In computer science, the field of AI research defines itself as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of success at some goal. Colloquially, the term "artificial intelligence" is applied when a machine mimics "cognitive" functions that humans associate with other human minds, such as "learning" and "problem solving" (known as Machine Learning).” As Wikipedia itself admits, this is now an inadequate description. Problems previously thought impossible for computers to solve have been mastered: the games of go and chess, handwriting recognition, optical recognition of faces and structures, self-driving cars and trains, and financial dealings on world markets. In advanced development are military drones capable of taking off and landing autonomously (including from aircraft carriers), finding an optimum path to their destination, and destroying a target they have chosen themselves. In every case the goalposts have been pushed further out, and yet the problems have been solved by computer hardware and software. No one will admit that we are “there” yet, but so far most complex human technical functions appear achievable with information technology. What about the “soft” aspects of human endeavour: psychology, emotions, creativity, even spirituality? I’ll return to them later, after discussing technological convergence.
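
The “intelligent agent” definition above is abstract, so here is a minimal sketch of the perceive-and-act loop it describes. Everything in it (the toy one-dimensional world, the scoring function, the goal of reaching position 10) is an invented illustration, not code from any of the systems mentioned.

    # Toy "intelligent agent": perceives its environment and picks the action
    # that maximizes its expected success at a goal (here: reach position 10).
    # All names and numbers are illustrative assumptions.

    class Environment:
        """A one-dimensional world; the agent starts at position 0."""
        def __init__(self):
            self.position = 0

        def perceive(self):
            return self.position

        def apply(self, action):
            self.position += action   # action is -1, 0 or +1

    def expected_success(state, action, goal=10):
        """Score an action by how close it brings the agent to the goal."""
        return -abs(goal - (state + action))

    def agent_step(env):
        state = env.perceive()                          # perceive
        best = max((-1, 0, 1), key=lambda a: expected_success(state, a))
        env.apply(best)                                 # act
        return env.position

    env = Environment()
    for _ in range(12):
        agent_step(env)
    print(env.position)   # 10 -- the agent reaches its goal and stays there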

Why is AI happening? Specifically, why is it happening now? The simple answer is technological convergence leading to a singularity. What is technological convergence? It is where two or more, often disparate, techniques are combined to create something new which solves real-life problems in a revolutionary way, and which often changes human society. (NB: the Polish word “konwergencja” is not quite adequate here; I prefer “zbieganie się”.) My favourite example is that of mediaeval European bell smiths and Chinese festival fireworks: combining the two technologies, high-quality metal casting and gunpowder manufacture, created cannon and firearms that first changed Europe, and then the rest of the world. Most technological revolutions have this component: someone sits down and thinks, “what if I combine this, with this, and perhaps this...”, and then the world changes. There are numerous other examples, such as gutta-percha from Malaya, European battery and wire manufacturing technology and American inventiveness creating the first world-wide telegraph system between 1840 and 1872, or Renaissance camera obscura techniques combined with early 19th-century chemistry giving rise, in the 1840s, to the first true photographs.

It may be argued that technological convergence is nothing new, has existed from the beginning of human civilization, and is one of the factors that define modernity. Human society has absorbed these inventions with varying degrees of stress, and life appears to go on. So how is this relevant to our discussion of AI? By the 1960s there were suspicions, and by the 1980s there was certainty, that ALL advanced research was converging on a single point, not just individual technologies but everything, and the term “technological singularity” was first used. The term was borrowed from cosmology, where it describes the point at the centre of a black hole at which the known laws of physics break down and no longer apply.

Statistician and cryptologist I. J. Good (the son of Polish immigrants, and an advisor on Stanley Kubrick’s “2001: A Space Odyssey”) had this to say; I quote Wikipedia again: “I. J. Good speculated in 1965 that artificial general intelligence might bring about an intelligence explosion. Good's scenario runs as follows: as computers increase in power, it becomes possible for people to build a machine that is more intelligent than humanity; this superhuman intelligence possesses greater problem-solving and inventive skills than current humans are capable of. This superintelligent machine then designs an even more capable machine, or re-writes its own software to become even more intelligent; this (ever more capable) machine then goes on to design a machine of yet greater capability, and so on. These iterations of recursive self-improvement accelerate, allowing enormous qualitative change before any upper limits imposed by the laws of physics or theoretical computation set in.” And again: “John von Neumann, Vernor Vinge and Ray Kurzweil define the concept in terms of the technological creation of superintelligence. They argue that it is difficult or impossible for present-day humans to predict what human beings' lives would be like in a post-singularity world.”
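
Good’s scenario is, at bottom, an iteration, so its shape can be shown with a toy calculation. The improvement rate and the ceiling below are assumed numbers chosen only to illustrate the argument; they do not model any real machine.

    # Toy reading of the "intelligence explosion": each machine designs a
    # successor 50% more capable than itself, until a ceiling standing in for
    # physical/computational limits is hit. The numbers are assumptions.

    capability = 1.0        # the first human-built machine, taken as 1
    ceiling = 1e6           # stand-in for limits set by physics or computation
    improvement = 0.5       # each generation is 50% more capable
    generations = 0

    while capability < ceiling:
        capability *= 1 + improvement   # recursive self-improvement step
        generations += 1

    print(generations, capability)
    # About 35 generations cover six orders of magnitude; the absolute gain per
    # step keeps growing, which is the "explosion" in Good's argument.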


Part 2 to follow.



Comments

1. The key topic... (Wojciech Jóźwiak • 2017-03-30)

The key topic in the AI controversy is here:
when a machine mimics "cognitive" functions that humans associate with other human minds
But the instances you have enumerated are NOT those of human minds, because:
games of go and chess... optical recognition of faces and structures... self-driving... capable of taking off and landing autonomously... finding an optimum path to their destination... destroying a target they have chosen themselves...
these are all specific to the behaviour of ants... bees... fish... hawks. To carry out such activities one does not have to be human! These are elementary capabilities of rather "simple" animal organisms.

Even "financial dealings on world markets" do not need a truly human mind: they are a mechanical function, an algorithm. Maybe not even "animal" but vegetable -- only running many times faster.
2. The whole misunderstanding comes from... (Jarosław Bzoma • 2017-03-30)

The whole misunderstanding comes from the fact that most of us confuse data processing with intelligence. As I understand it, intelligence is bound up with consciousness: intellego (Latin) means to grasp, to understand. So there must be someone who comprehends a process taking place outside his "I" and can reflect on it, whereas a machine processes data without comprehending what it is doing. It would have to possess reflective consciousness and be aware of its own existence, and even most animals lack that. How can people say that they will soon build artificial intelligence, or artificial consciousness, when they have no idea what their own consciousness is or where it comes from?
3. Data processing and purposeful behaviour... (Wojciech Jóźwiak • 2017-03-30)

...are not yet intelligence. "Artificial intelligences", that is, programs for winning at chess and go, for recognizing letters and speech sounds, for finding targets and shooting them down, for reaching a destination safely through city streets, for choosing an optimal strategy on the stock exchange, or even for answering a "queried" text, are not yet intelligence. That is, they are not what we call "intelligence" when we speak of a human being.
These are functions possessed by ants, bees, falcons, amoebae, and even plants (which have purposeful behaviours: tropisms). Or even functions performed by our livers, or by the erythrocytes in our blood.
It would be worth considering at the outset what characteristics an automaton would have to display before we could consider it even slightly intelligent.

4. "Key topics" - Wojciech (Arthur Zielinski • 2017-03-30)

Wojciech - I have only just begun! Part 1 was an introduction to the subject, and Part 2 will discuss the "soft" functions such as empathy, emotions, relationships and creativity. I have also mentioned I. J. Good, who states that AI will have the ability to re-program itself in new iterations. All life has this ability through natural selection and the evolutionary process, but the speed at which AI will be able to do this is the key. When each generation takes a microsecond, the reasoning and creativity involved will surpass the abilities of all biological thought, including human thought. The creatures you mentioned took millions, even hundreds of millions of years to evolve, in something like an arithmetic progression, and were limited by their environment. AI, and superintelligent AI in particular, will improve geometrically: an exponential curve with no known limit.
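
To make the arithmetic-versus-geometric contrast concrete, here is a toy comparison; the increments and the doubling rate are arbitrary and serve only to show how quickly the two curves separate.

    # Arithmetic growth (a fixed increment per generation) versus geometric
    # growth (a fixed multiple per generation). Purely illustrative numbers.

    arithmetic = 1.0
    geometric = 1.0
    for generation in range(1, 31):
        arithmetic += 1.0     # gains a constant amount each generation
        geometric *= 2.0      # doubles each generation
        if generation in (10, 20, 30):
            print(generation, arithmetic, geometric)

    # generation 10:  11 vs 1024
    # generation 20:  21 vs about 1.0e6
    # generation 30:  31 vs about 1.1e9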


5. Such an automaton would have to... (Jarosław Bzoma • 2017-03-30)

Such an automaton would have to be capable of compassion and of suffering, and would have to possess its own genuine emotions rather than emotions derived from algorithms; and it would have to be unpredictable, not by drawing at random from many possible behaviours, but in an irrational way, driven by emotions and passions.
6. "Misunderstanding" - Jarosław Bzoma (Arthur Zielinski • 2017-03-31)

Jarosław - I’ll address your last point first. You’re right, we still don’t understand consciousness, or where it comes from, yet we know it exists and can define some of its characteristics. This is no different from mediaeval gunpowder makers, who knew nothing of the chemistry of combustion but observed its effects and used it as a propellant, or from electricity generation and transmission, one of the foundations of modern society, which became a global phenomenon even while the physics was imperfectly understood. You mention consciousness (świadomość), understanding (rozumienie) and self-awareness (refleksja). I’d like to return to one of my favourites, Octopus vulgaris from the Mediterranean. It lives no longer than three years, is alone from birth with no parental teaching, and yet has been seen to climb aboard fishing vessels, open hatches, steal some fish and then return to the sea. Most of its neurons are in its many arms, and it has superb spatial cognitive ability. While perhaps "consciousness" would be going too far, I would say that it has "understanding" and "self-awareness". Nature and evolution have provided it with a strategy for survival - is that not similar to an algorithm?
7. Artur, that is exactly the point... (Jarosław Bzoma • 2017-03-31)

Artur, that is exactly the point: the octopus, like those who put gunpowder to use, were living, conscious systems. That cannot be done with a machine. At least that is what I believe; in my view consciousness is not a function of the mind (brain). To know something we must know that we know; otherwise we would know nothing, because information by itself is useless unless it has been observed by someone able to step outside the system in which that information circulates (Gödel's theorem). Ergo, there must be someone who knows that he knows, and we still do not know, at least not officially, who the one who knows is :) I am waiting with curiosity for the second part of your article.
8. Good's scenario, or do machines WILL? (Wojciech Jóźwiak • 2017-04-04)

Quoting from the article:
...as computers increase in power, it becomes possible for people to build a machine that is more intelligent than humanity; this superhuman intelligence possesses greater problem-solving and inventive skills than current humans are capable of. This superintelligent machine then designs an even more capable machine, or re-writes its own software to become even more intelligent; this (ever more capable) machine then goes on to design a machine of yet greater capability, and so on.
In my opinion there is a mistake hidden in the above reasoning. This question points to it: "But WHO would have the WILL to do it?"
Whose WILL would be engaged in it?
Good (the author of the scenario) implicitly assumes that intelligent computers or programs will want something. That they will want to develop and improve themselves. That they will have a will.
Will is something that is obvious in humans, animals and plants. It comes from our biological evolution and evolutionary competition. Whatever had no will died out quickly.
Maybe implementing will in machines or programs will be a more difficult task than AI itself. Probably no one now knows what will in a machine would be, or how to "make" it.
9. Free will (NN#9909 • 2017-04-07)

I recommend a remarkable course by Professor Idan Segev on Coursera: https://www.coursera.org/learn/synapses The last part considers the question of free will from the standpoint of computational neuroscience. In brief: Benjamin Libet’s experiment showed that our brain starts the activity (in the EEG recording) about 2 seconds before we are aware of the action we want to start (before our conscious will to move, "I decided to lift the finger"). So after observing my EEG, the researcher knows when I will do something before I know it myself. Where is my free will then? Here is a quote from Francis Crick, Nobel laureate 1962 and “the father of DNA": "You, your joys and sorrows, your memories and your ambitions, your sense of personal identity and your free will are in fact no more than the behavior of a vast assembly of nerve cells and their associated molecules."
Also, Michel Desmurget (who published his findings in Science in 2009) found that after low-intensity stimulation of a particular region of the brain, there was a response in the form of a feeling: the patient felt that he wanted (had a will) to do something - "I felt a desire to lick my lips" (a conscious motor intention).
Is free will a side-product of the advanced development of our brain, a highly wired piece of "hardware"?
Also, to all sceptics: AI is already happening - http://www.theverge.com/2017/3/27/15077864/elon-musk-neuralink-brain-computer-interface-ai-cyborgs
Also, I highly recommend Spike Jonze's inspiring film "Her", which considers what it means to be a human being. http://www.filmweb.pl/film/Ona-2013-646395
10. Free will (2) (Arthur Zielinski • 2017-04-07)

Demeter - thanks for your comment, and you’ve pre-empted some of my future texts! I’ve never had an attachment to the free will/inevitability debate, so no particular issues there. I suspect, however, that your point is something like: "we are simply advanced computers - flick this switch, and then that, and then this happens". If so, then I disagree. Human beings are not computers. We will never match the computational speed of a superintelligent agent, but I believe biology has provided us with a "plasticity" that makes us fundamentally different. Human beings can be full of contradictions, and yet we can still live normal lives: we can love and hate someone at the same time; we can know one way of being is right and yet live another way; we can love order yet live in chaos. Perhaps the advent of quantum computing will enable AI to "get" this, but to date contradictions just make computers crash or deliver nonsense. Elon Musk’s "neural lace" is another facet of AI - but there we enter the different field of bioenergetics, which to date has been unquantifiable, in the sense that we know it exists and yet no method of measuring it has been found. Our intelligence may be enhanced by instant access to an almost infinite database, and yet our bodies will still signal the basics of what biology and evolution require of us - security, food, shelter, companionship and sex, and/or creating and protecting future generations.
11. I agree with... (NN#9909 • 2017-04-07)

I agree with you. We are far more complex than the greatest AI (so far). Still, the biological side of human behaviour is really fascinating. I recommend a book: http://kulturalnysklep.pl/UMYS%C5%81/pr/-nagi-umysl--prof--boguslaw-pawlowski--tomasz-ulanowski.html
Waiting for the second part of your article.

