Tuesday, June 28, 2011

consciousness, a model

According to Charlie Stross, “Consciousness seems to be a mechanism for recursively modeling internal states within a body.”  Animals, humans included, use their senses and nervous systems to construct a model that lets them optimize their living conditions.  Most of what the brain does is unconscious, so why isn't all of it?  Perhaps there is a survival advantage conferred by the artificial “recursive model” that produces the dualistic mind/body sensation of consciousness, whereby the model (the mind, conscious awareness) can hold an active conversation with the machine of the body, second-guessing its impulses and suggesting alternatives.  According to Jonathan Balcombe, any creature exhibiting complex behavior appears to be capable of emotions and a rich inner life.  That inference seems justified to me. 

By happy accident, I have had the pleasure of seeing three distinct species of salticid spiders here in Fairbanks (including Sitticus finschi and Phidippus borealis).  These small creatures have two large forward-facing eyes for binocular vision that they use to stalk their prey, a trait we share with them.  I wish I had a macro lens for my digital camera, but even without one, just a few days ago I captured the image of a spider staring back at me as I focused on him.  There is more evidence than not to suggest that he was aware of me, in some sense, just as I was aware of him.  For me, that is one of those experiences of “seeing the world in a grain of sand.” 

Philosophy is, in one sense, an attempt to “model the model”, and technology is providing us with a more direct route to do this via better sense data: watching the degree of activity in regions of the brain with real-time brain scanning.  This brings the larger, unconscious activity that the brain engages in into the realm of conscious awareness.  It has been speculated that this could conceivably allow one to control all of one's impulses, especially the less desirable ones.  Could this be the much-sought-after cure for procrastination?  That is still a long way off. 

One last observation: the “Chinese room” thought experiment supposes that the man in the room, exchanging symbols for symbols under the door, need not have a recursive model (in Stross's sense) to do so.  But we know that such a model has evolved, and, so far as we have means to detect it, we have not failed to observe it.  Call it salience; even the insects beneath our feet display it (recent studies of animals like bees and even jellyfish suggest our understanding of their behavior is far from complete).  Who is to say their model is any less vivid to them than ours is to us?  We do not know the basic unit of thought or emotion, but in a materialistic world, surely it must exist in some sense.


  1. Hey there. I hope you will forgive me for not commenting here in some time. :(

    About the Chinese Room thought experiment: I did finally identify what I consider to be the primary flaw in Searle's argument. As you say, the person exchanging symbols could be a mindless robot. However, Searle has simply pushed the real conscious agent back into the lookup book that the mindless robot (or the really, really bored human) consults to figure out which symbols to show the observers outside the Chinese room. This is a significant error on Searle's part: he presupposes that a book is capable of doing what a fully matured Chinese-speaking human mind does, a preposterous claim!

    However, let's consider consciousness a biological process that can ultimately be simplified and reduced to a series of short steps over time. Let's also try to scale down the massive parallelism. Then, perhaps, one could follow the steps (hopefully one cycle of consciousness could be completed in a reasonable time, though probably not). If the steps in the process can be followed, they can be put in a book, and the Chinese room's lookup book would be capable of producing consciousness.

    I almost think that if we could slow down the process of consciousness, we would see that the brain is a veritable Chinese room. This is exactly the opposite of what Searle wanted to point out with his thought experiment, however.
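    The lookup-book idea in this comment can be sketched as a toy program. This is purely illustrative: the symbols and replies below are made up, and a real “book” would have to encode the full linguistic competence of a Chinese speaker, which is exactly the point being argued above.

    ```python
    # Toy sketch of the Chinese Room as a lookup book.
    # The table stands in for Searle's rule book; the entries here are
    # hypothetical examples, not a real conversational system.
    LOOKUP_BOOK = {
        "你好": "你好！",            # greeting in, greeting out
        "你会说中文吗？": "会。",     # "Do you speak Chinese?" -> "Yes."
    }

    def chinese_room(symbols: str) -> str:
        """The person in the room: blindly match symbols against the book.

        No understanding is involved here. Whatever apparent competence
        exists lives entirely in the table, which is where the thought
        experiment quietly hides the hard part.
        """
        return LOOKUP_BOOK.get(symbols, "？")  # unrecognized input -> shrug

    print(chinese_room("你好"))
    ```

    The triviality of the sketch is the point: a table this small obviously understands nothing, and the question raised in the comments is whether that changes at any scale, or whether a sufficiently complete book would itself be the seat of the room's competence.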

  2. Right. What is the difference between the Chinese room and a human brain? If there is none, then are both conscious? Or are both unconscious? If I assume that I am conscious, should I not then assume that the Chinese room is too? Along a gradient from simple to complex machines, what physical features will we determine to be the causes of consciousness? I've been reading a lot of negative material lately (negative by most accounts, though not ours): books by Albert Camus saying there is neither meaning nor hope, and articles denying the existence of morality, equating minds with machines, and finding no reason to suppose there is any free will (though there may be “free won't”).

    One feature of consciousness is “agency.” With the global environmental catastrophe we are witnessing in progress, I can see that artificial consciousness could overcome the biological limitations that are preventing humans from effectively addressing this emergency and assist us in creating a sustainable future at a new balance point beyond the paleolithic balance that we had several thousand years ago. Artificial consciousness could be an invaluable green technology. Stross explored many other possible applications in his article.

  3. Yes, I think those are the correct questions, and the answer, then, is obviously that both are conscious, insofar as consciousness is one aspect of what a human brain does. If a book replicates a brain's linguistic ability, then that book would be a seed of consciousness as well, and would produce consciousness when followed.

    That does raise a question, though: what if you could have an unconscious but linguistically active brain? For some reason, I don't think that is possible. It would be the opposite of what we find in intelligent animals, which appear conscious but are unable to speak to any great extent.