One of Turing's main arguments for the Turing Test is put forward as follows:
"According to the most extreme form of this view the only way by which one could be sure that a machine thinks is to be the machine and to feel oneself thinking. One could then describe these feelings to the world, but of course no one would be justified in taking any notice. Likewise according to this view the only way to know that a man thinks is to be that particular man. It is in fact the solipsist point of view. It may be the most logical view to hold but it makes communication of ideas difficult. A is liable to believe 'A thinks but B does not' whilst B believes 'B thinks but A does not'. Instead of arguing continually over this point it is usual to have the polite convention that everyone thinks." (1950)
Art
It seems that Turing makes no distinction between the intent behind an action and the action itself. Take art, for example: can we say that something is art if the author of the piece had no intent grounded in emotion?
A quote from John Lennon can help bring to light what we are trying to convey here:
"My role in society, or any artist's or poet's role, is to try and express what we all feel. Not to tell people how to feel. Not as a preacher, not as a leader, but as a reflection of us all."
What if computers could make art based on emotions? In that case, what the computer produces is not a reflection of itself but a reflection of the subject, based on the emotional cues it has received from that person. It would then be possible to have a work of art created by a machine and grounded in emotion, but we would still be stopping short of what we humans do when we create art: what is lacking in the machine is 'the spirit with which we are moved by art'.
Perhaps what we need is a model based on art created to convey meanings that cannot be expressed in words. A computer whose symbol system is based on sensory cues (not only words) would produce its own point of view, its own synthesis of the world impressed upon its senses. In that case, it too would have a hard time putting what it is trying to convey succinctly into words (perhaps it would even have to coin new words if it tried to express everything verbally).
What we might be looking for is not only a system that can make and understand metaphors using that symbol system, but also one that is not meta-goal oriented: a machine that does not inherently have one goal taking priority over all the others (or one that makes its own meta-goals, although that could lead to a Paperclip Universe).
The fact remains that the general consensus seems to be that a computer that cannot make art cannot be fully conscious, let alone pass for a human. Art is deeply ingrained in our way of being (I'm particularly a fan of Goffman's dramaturgical model).
Neural Networks
It seems that much progress has been made in trying to reproduce the workings of the brain in a neural network, so as to give rise to consciousness.
The human brain takes care of many more things than simply the mind or its intelligence. Jeff Hawkins, the founder of Palm Computing, has delved into the origins of human intelligence in his book 'On Intelligence'. He advances the idea that what is needed to create an intelligent being like a human is a neocortex, the thin outer layer of our brain. Of course, the rest of the brain is needed to support the neocortex, but once we learn the algorithms behind the model, we can learn how to build one without the biological support. After all, if it were built in a virtual environment, there is no reason for the created mind to obey the same constraining laws we have in our own mortal state.
Jeff Hawkins sets up the neocortex as a system of feedback in which neurons receive input from the sensory organs and are able to "call back" those inputs through a series of hierarchically ordered layers, which we perceive as our imagination and thought. This gives rise to self-reflection and a self-reliant process of possible reorganization of these layers. In order to reproduce this effect, we would not need to recreate the thirty billion cells of our neocortex and their interactions. We could simply recreate the hierarchical system that the neocortex implements.
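To make the idea concrete, here is a toy sketch of that hierarchical memory idea — emphatically not Hawkins' actual algorithm, just an illustration under simplifying assumptions. Each layer memorizes short input sequences, replaces each recognized sequence with a single stable "name", and passes those names up, so higher layers see slower, more abstract patterns; "calling back" a name unfolds it downward into the stored detail, standing in for the feedback direction (imagination).

```python
# Toy hierarchical memory sketch (illustrative only, not Hawkins' model).
# Each layer chunks its input stream, assigns a stable "name" (integer)
# to each newly seen chunk, and can later unfold names back into chunks.

class Layer:
    def __init__(self, chunk):
        self.chunk = chunk      # how many inputs form one pattern
        self.names = {}         # pattern -> name (stable token)
        self.patterns = []      # name -> pattern (for top-down recall)

    def up(self, seq):
        """Bottom-up: compress a sequence of inputs into names."""
        out = []
        for i in range(0, len(seq) - self.chunk + 1, self.chunk):
            pat = tuple(seq[i:i + self.chunk])
            if pat not in self.names:          # learn a new pattern
                self.names[pat] = len(self.patterns)
                self.patterns.append(pat)
            out.append(self.names[pat])
        return out

    def down(self, names):
        """Top-down: unfold names back into the stored inputs."""
        seq = []
        for n in names:
            seq.extend(self.patterns[n])
        return seq


class Hierarchy:
    def __init__(self, chunk_sizes):
        self.layers = [Layer(c) for c in chunk_sizes]

    def perceive(self, seq):
        for layer in self.layers:              # raw input -> abstraction
            seq = layer.up(seq)
        return seq

    def imagine(self, names):
        for layer in reversed(self.layers):    # abstraction -> detail
            names = layer.down(names)
        return names


h = Hierarchy([2, 2])
top = h.perceive(list("abababab"))   # two layers compress the repetition
print(top)                           # a short, abstract representation
print(h.imagine(top))                # "calling back" recovers the input
```

The point of the sketch is only the two directions of flow: repetition at the bottom becomes a few tokens at the top, and the top-down pass reconstructs the detail from memory, which is the loose analogue of imagination in the description above.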
Works Cited
1. Turing, Alan. Computing Machinery and Intelligence. 1950
http://www.loebner.net/Prizef/TuringArticle.html
2. The computer that paints emotions - London blog - on Nature Network (at network.nature.com)
http://network.nature.com/hubs/london/blog/2008/01/25/the-computer-that-paints-emotions
3. Computer understanding of conventional metaphoric language - CiteSeerX (at citeseerx.ist.psu.edu)
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.30.7354
4. Dramaturgy (sociology) - Wikipedia, the free encyclopedia
http://en.wikipedia.org/wiki/Dramaturgy_(sociology)
5. Kogs' Happy Blog: The Philosophical Implications of the Mind Modeled as a Machine
http://kogsworth.blogspot.com/2009/01/philosophical-implications-of-mind.html
6. Liquid state machine - Wikipedia, the free encyclopedia
http://en.wikipedia.org/wiki/Liquid_state_machine