If Bill Gates had been born in 1492, would he have founded Microsoft?
If Adolf Hitler had sold a few more paintings, would he have been so interested in politics?
If Pope John Paul II had been born in Saudi Arabia instead of Poland, would he have been Christian?
If that awful thing hadn't occurred when you were young, would you be as wonderful as you are now?
If you had been shipwrecked on a desert island last month, would you be reading this?
If you hadn't missed that bus, could you have been run over by a taxi?
Is there such a thing as a self-made man or woman?
2011-09-27
2011-09-25
Personality Fragments
How do you decide what to think or do next?
Let's remember that action precedes consciousness (as I discussed in the article about that topic). Thus, the actual decision to think or act is not conscious. You can become aware of it having happened, of course. Your mind can even be configured to preview actions (as in “think before you speak”), but this is not the natural or default way for your brain to work. Unless you are exceedingly neurotic and hypervigilant, most of your decisions will tend to take place in the natural way.
If you do something repeatedly, that action becomes internalized. For example, when you were a child it took a lot of thought to tie your shoelaces. Now, though, you don't need to think about it at all; the action has become internalized. To put it another way: you no longer bother to be conscious of the action.
This does not mean you are completely oblivious, of course. If somebody asks, “Did you tie your shoelaces?” you will usually remember having done so. Not always, of course! Sometimes someone will say, “Did you remember to lock the front door?” or “Did you turn off the stove?” and you will have no recollection whatsoever. In such cases the action was so automatic that your mind allowed other actions to dominate your attention. The memory of the action might be in your brain somewhere, but you simply cannot assemble enough of the context of the action to remember having done it.
Independent Yet Coordinated
Please recall that the mind (along with the brain) is made up of billions of independent neurons. These work in a coordinated fashion, but they are nonetheless individual cells. The mind is, in essence, a massively parallel computer made up of billions of processors. These processors are arranged into functional groupings (for language, vision, smell, emotional labelling and so on). These groupings operate asynchronously.
Consider this statement: “I saw the apple; I thought I'd like an apple; and so I picked it and then ate it.” We may remember our actions this way, but this is a vast over-simplification, because while this sequence was occurring many other mental processes were running as well. For example, after seeing the apple your eyes may have glanced around while your mind was considering whether or not to eat it. If, during those moments of consideration, you saw a lion, the process would have been interrupted. You would not complete your apple-oriented thoughts before responding to the threat!
It should be obvious, then, that the mind does many things simultaneously. It should also be obvious that its activities can be at odds with each other — “Do I consider the lion or the apple?” — and a choice must occur.
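As a playful aside for programmers: the situation resembles a handful of asynchronous tasks racing each other, with the first one to finish determining the action. The following Python sketch is only a toy analogy, not a claim about neuroscience; the task names and timings are invented purely for illustration.

```python
# Toy analogy of independent mental processes running at once: whichever
# finishes first decides the action. A slow apple deliberation is interrupted
# by a faster threat signal, much like the lion example above.
import asyncio

async def consider_apple():
    await asyncio.sleep(2.0)                 # slow, interruptible deliberation
    return "pick and eat the apple"

async def watch_for_threats():
    await asyncio.sleep(0.5)                 # the lion shows up first
    return "lion spotted -- run!"

async def main():
    tasks = {asyncio.create_task(consider_apple()),
             asyncio.create_task(watch_for_threats())}
    # The first task to finish wins; the other is simply cancelled.
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()
    print(next(iter(done)).result())         # -> "lion spotted -- run!"

asyncio.run(main())
```

Run it and the threat wins; the apple deliberation is cancelled mid-thought, just like the interrupted apple-oriented thoughts described above.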
Most choices in our lives are far less exciting than avoiding lion attacks. We might have to choose between an apple and a pear. Do we vote for this candidate or that? Do we smile at this person or not? Which of these 17 television shows do we watch, or should we do something else altogether?
Deep within the mind these choices take place. And the mind can be lazy about it. If we usually choose an apple over a pear, we will tend to do so again. There are good evolutionary reasons why we take these shortcuts: if a decision does not seem to matter much, why waste energy on it?
Choosing an apple instead of a pear is fairly simple. But what about complex issues? Do I compliment the boss on his new tie? Do I spend money on new tires for my car or risk using them for another month?
Our minds choose the actions of our lives from a maelstrom of alternatives. Different people have different patterns of choosing. These patterns are a big part of what we call their personality. Some people will tend to choose in a selfish way. Others seem to choose from a mix of 90% logic and 10% compassion. And so on.
Competing Fragments
There are countless ways to describe personality, but the key issue I wish to highlight now is that choices arise from numerous alternatives within the person. These competing alternatives are what I call “personality fragments.”
When your brain is faced with a decision these personality fragments all jostle around. One of these “wins” according to various criteria — though this does not mean that there is a central “choosing organ”! Habitual actions (like choosing an apple because you always have done so) will have an advantage, but there can be exceptions for various reasons.
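For readers who like concrete sketches, here is a toy model of that jostling in Python. It is purely illustrative — the fragment names, weights, and habit bonus are invented — but it shows how a single winner can emerge from local weights, with habit tilting the odds and no central chooser beyond the tally itself.

```python
# Toy sketch of "jostling" personality fragments: each fragment carries a
# weight, habit adds a bonus, and one winner emerges from a weighted draw.
# Fragment names, weights, and the habit bonus are invented for illustration.
import random

def choose(fragments, habit=None, habit_bonus=2.0):
    weights = dict(fragments)          # copy so the caller's dict is untouched
    if habit in weights:
        weights[habit] += habit_bonus  # habitual actions get an advantage
    options = list(weights)
    return random.choices(options, weights=[weights[o] for o in options])[0]

fragments = {"eat the apple": 1.0, "eat the pear": 1.0, "skip the snack": 0.5}
print(choose(fragments, habit="eat the apple"))  # usually, but not always, the apple
```

Most runs pick the apple, yet every so often another fragment wins — the “exceptions for various reasons” mentioned above.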
Now recall that action precedes consciousness. Once the choice is made and the action takes place, your mind has to catch up to what just occurred. Now comes the process of rationalizing the choice:
Alice asks, “Why did you eat that apple?”
Bob replies, “Because apples are healthy!”
Bob is almost certainly fooling himself. He may indeed believe that apples are healthy, but it is highly unlikely that his explanation is complete or even identifies the main reason for the choice. The actual process of choosing is so far from his consciousness that he cannot accurately explain why he chose as he did. So he rationalizes.
We never know the full story of why we choose as we do. Bob may prefer apples because they remind him of a fine year during his childhood when he lived in an orchard. But he might never admit this because he has never bothered to notice the connection.
Because we cannot see the interaction of personality fragments, our entire lives are fictions we make up after the fact. What we tell ourselves about our motives will always be incomplete. We can, of course, meditate deeply and obtain some additional insight into our choices. But we will never know the entire truth.
If our lives are fictions, what happens when we start to believe those fictions are true? Is this not the ground upon which ego grows? And once the ego flourishes, will the rationalizations not bend to accommodate the beliefs the ego has established as fact?
Can we see ourselves as something other than what we imagine we are? Perhaps we can, provided we recall that we didn't actually know in the first place.
2011-09-24
A Bunch of Stuff Happening
This is space.
This is time.
This is matter.
This is energy.
This is gravity.
This is information.
This is the universe being.
This is a Planck unit of distance.
This is a nanometer.
This is a meter.
This is a kilometer, light year, parsec.
This is a gigaparsec.
This is the universe all around.
This is a Planck unit of time.
This is a nanosecond.
This is a second, minute, hour.
This is a day, month, year, century, millennium.
This is an age, epoch, period, era, eon.
This is a supereon.
This is the universe so far.
This is a sub-atomic particle interacting.
This is an atom interacting.
This is a molecule interacting.
This is a cell interacting.
This is a brain interacting.
This is a human interacting.
This is a culture interacting.
This is human culture interacting.
This is a biological niche interacting.
This is a planet interacting.
This is the universe acting.
This is me.
This is you.
This is us.
This is this and also that.
This is the universe creating.
2011-09-23
Death Ain't So Bad (Update)
Consider two phenomena in the history of humanity: the Stanford Prison Experiment and the nearly inexplicable behavior of the German people during the 1930's and early 1940's. Low points like these demonstrate even better than the high points that the individual is an expression of a larger gestalt and that individuality is over-rated.
Perhaps, for balance, I should mention a high point, too. Okay, consider the global tide of enthusiasm for love during the 1960's. It swept people along while it lasted, though its lack of staying power rather shows that it was not based on a coherent message. Nonetheless, those years also demonstrated that the individual will tend to reflect the gestalt — my highly conservative stepmother actually bought me a groovy Nehru jacket! — and that the uniqueness of individuality is over-estimated. To put it another way: we're not as separate as we might imagine we are.
I wrote about these matters during the 1980's in an article entitled Death Ain't So Bad. Since then I have been scanning scientific literature and the internet looking for scholarly treatment of this perspective. There are bits and pieces of it here and there. I see echoes of it in the writings on James Lovelock's Gaia perspective. There are writings that approach the idea on web sites dealing with memetics. But Gaia is a meme, and even the notion of “memes” is a meme. As a result, it seems to me, individuals who are enthralled with those memes tend to miss the bigger picture as it applies to them specifically.
And why would we expect anything different? Our culture (that is, our current collection of favoured memes) is predicated on the idea that we are stand-alone individuals. Yes, even Japan during the 1940's. In addition, our entire system of law is construed by some to depend on the assumption that we are individuals in this sense. You can't punish society for creating a monster, can you? Even if it did. (America, I'm looking at you. You too, Iraq. And so on, planet.)
Is this such a big deal? Does it matter? Well, one could argue that understanding how things work is always a good thing. But I have a different argument.
Mainstream religion has long dominated humanity by offering people a way out of death. Whether it deals with heaven or reincarnation, death is not presented as the final act. Science has not been so gentle with our feelings.
On the other hand, if we see ourselves as expressions of that which is larger (human culture at one level, the universe at another) then we see that only part of the system is ceasing to move forward. That which created you continues, even if your consciousness does not. And as I've previously suggested in this blog, consciousness may be nothing but a transcription process that evolution bestowed upon us because it helped us survive. We make it very personal, of course, but our conceptions can be tweaked by seeing a larger context. (Okay, now you can consider Japan in the 1940's.)
In other words, I am suggesting that we have an alternative, comforting, non-religious way to look at death.
Alas, such a suggestion is an idea. And ideas mutate. What starts as a non-religious idea can acquire fanatical adherents who want it to dominate its natural substrate of replication (i.e., human minds). In other words, even a non-religious idea can spawn zealotry. Consider the American anti-communism fervor of the 1950's if you want an example of this!
Still, we're getting to know memes better and better. Maybe one day we'll find ourselves using them intelligently instead of having them use us. That's a self-referential feedback loop my mind cannot encompass; I spot at least one category error. But for a topic like this it's best to end on a positive note. I have a sense there's an uncodified writing meme about that.
(Man, this is badly written. Too bad I have to write all this out in a hurry.)
2011-09-22
Tiactino Epilogue
Until you have read the article Tiactino and arrived at a realization about its subject, please do not read the rest of this epilogue — please stop here.
—————
This epilogue will be short and may seem incomplete. I will not spell things out for you. It's far better if you fill in the blanks yourself.
In the Tiactino article we discussed how we can see the world through a veil of models cobbled together from our memory of our experiences. The world ceases to be fully real to us, though until we wake up to Tiactino we scarcely notice this, or if we do notice we do not consider it significant.
Our models can tell us “facts” that are wrong. We may assume we cannot do certain things, even though we can. We might be operating with restrictions that are years out of date, or weren't accurate in the first place. If our models are used habitually they might never get assessed against reality.
Those are a few of the issues with the mental models we make. Now consider what we do with models other people give to us. Are those your eyes you're seeing through? This is a copy; this is not original.
Tiactino
In December 2006, after my eight-month-long contemplative Sabbatical had finally borne fruit, I became fascinated by this statement:
This is a copy; this is not original.
There seemed to be something nearly mystical about that sentence, which I will refer to hereafter as Tiactino. At the time I did not appreciate how much ground Tiactino covers, but in this article I will review some of my discoveries about it.
Tiactino could be called a concept. That is to say, I could explain to you what it means. But that is trivial; you can probably figure out most of that without me.
The important aspect of Tiactino is what I might call the non-concept of it. That is to say, its important aspect is the mental actions it points to and excludes, none of which are concerned with understanding the actual sentence. Indeed, the literal meaning of the sentence points directly away from that which I am attempting to illuminate.
I apologize if all that sounds confusing. Let me also tell you that I've heard the non-concept explained by various sages and gurus, and they've been far less clear. I can sympathize with their challenge; this is nearly impossible to talk about because it deals with the internal churnings of the mind. So mere explanations are inadequate and extremely misleading. I must demonstrate in such a way that you find yourself in the state of mind to which Tiactino is pointing.
Ready to go? Okay, let's continue.
During my Sabbatical people sometimes asked me what I was seeking. I'd tell them that I was “Looking For Reality”. Yes, I said it With Capital Letters. I didn't actually know what it meant, though. All I knew was that I needed to do it. And the non-concept of Tiactino turned out to be part of the answer.
To move towards the non-concept, let me ask you to consider this question:
How much of what we perceive is really new, and how much is second-hand?
You are reading this article on the assumption that I wrote it and that it represents my thoughts. But this is a copy on your computer screen, not the original work. It could have been altered. It seems that I'm typing this sentence here but did I actually type it or did somebody else hack into the copy and rassle frazzle beep?
Okay, that was silly, but I hope it woke you up a bit so you're ready for the more serious examples. Because in a way this is all about being awake.
Proofs of Non-Concept
In your imagination, walk out the front door of your home and look around. What do you see in your mind's eye? The neighborhood, of course. How many times have you seen it? A hundred times? A thousand, perhaps? How about a million? Would you believe ...
less than ten?
Look: I asked you to see it in your mind's eye. If you can do that, then obviously you can mentally model your neighborhood quite well. If you actually walk outside and look around, will you be seeing the actual neighborhood or your model? I say that each time you go outside you're not really seeing what's there.
If you want proof, I have it. At least, I can prove it to some people by asking them to recall a particular kind of mental experience. I do not know how many people have had this experience, but I've come to suspect that it is quite common. Here's how to set it up ...
Go on a long trip, away from your neighborhood. Drive a long, long way away — hundreds of miles — to an unfamiliar area. Take in the view. Stop for a meal. See the sights in a distant town. Then head back home. Don't forget to admire the scenery!
When you return to your neighborhood, it will look strangely different — both familiar and unfamiliar at the same time. If you've ever noticed this phenomenon, you may have wondered why it happens. Well, here's why:
It looks different because you're looking at it the way you looked at the unfamiliar town. Your long trip has temporarily broken your mind out of the habit of scanning your neighborhood by looking at a few key landmarks.
You are, in fact, seeing the neighborhood for the first time. Again.
I hope you recognize the mental phenomenon I described above, because it can wake you up to the meaning of, the non-concept of, Tiactino.
Most of the time we live in a world of copies. We eat a sandwich, but it's not a new sandwich. It's an old sandwich because we've eaten countless sandwiches in our lives. This is just another one of those. This is nothing new. This is a copy; this is not original.
We hear the sound of a bird outside. But we don't hear it the way we did when we were four years old. It's just a bird! Just another bird. This is a copy; this is not original.
We get a hug from a loved one. Our mind is elsewhere. Hugs are nice, but we've had lots of hugs. This is a copy; this is not original.
We read an article that gives a series of examples and we start to skim ahead. We get the point. It's all obvious, now. We don't need yet another example. This is a copy; this is not original. We fully expect that we won't miss anything important. We might be wrong.
Where is Your Attention Now?
If you pay attention to what your mind is doing, you may start to notice that it is short-changing you. Instead of delivering fresh, new experiences, it is serving you predigested ones.
There are countless facets to this phenomenon. I'll give a few examples below.
We label objects, people, events, and more to compress their reality into a word and/or phrase and/or image and/or feeling and/or notion. (And so on.) Having done so, we cease to fully experience their reality. (This is how prejudice and bigotry operate, obviously, but that's just the tip of the iceberg named Tiactino.)
We become accustomed to a particular activity and it ceases to amuse us. We continue doing it, hoping to get back the original thrill, but now we're modeling the process as we do it. It's no longer new; it can no longer deliver the original joy. Rusty old amusement! Tiactino!
We listen to a political debate and hear the Bad Guys spewing their usual nonsense. Who cares? There's no need to listen carefully. Let's just pick up on one error and the rest will be the same old thing. Tiactino.
We read article after article, but we don't hear what is being said, because it's just one more article by one more kook. We believe we already know what reality is, so why bother looking at it? Tiactino.
A friend meets you on the street and asks, “Did you get a haircut or something?” You inform him that you now wear glasses. Tiactino, buddy! Wake up! He looks at you as if for the first time. He appears to notice the glasses.
Some people out there will know precisely what I am talking about. It's hard to explain, but they will recognize it clearly from what I've written above. Then there are those who will think they recognize it but merely understand. Their moment will come! And, of course, some people will think this is rubbish.
If and when you get what Tiactino is pointing at, you'll know with certainty what I'm talking about. You won't merely agree that I'm correct; I'll be irrelevant because you'll know. You won't need me at all. And then you can start to reconnect with reality, as I did in December 2006 ... after decades of stupefaction.
2011-09-20
The Self Meme
I received the following question from a reader:
Is the self really a meme? .... The self can exist without culture. Animals [can] have a representation of self in their nervous systems.
This is an excellent question!
Representations of Self
The correspondent is correct that animals represent themselves in their brains. A cat can survey the distance between furniture and window, visibly prepare to make the jump, then execute it flawlessly. Somewhere inside the cat there was a model that comprised all the elements of that jump.
Some people may not appreciate how wonderful it is that cats can do this. Yet I remember, with the clarity of a photograph, a moment twenty years ago when I watched a cat jump. A realization hit me: if a cat uses mental representations, then what about a mouse? What about an insect? A clam? At what level of complexity does representation come in? And what can representation and modeling teach us about the evolution of mental mechanisms and software?
The minds of humans appear more complex than those of mice or cats. For example, we not only have models of our body and other elements in the environment but we also have models of those models. We can close our eyes and imagine a jump between two ledges that exist only in our imagination. In so doing we feel little or no reference to anything physical.
In such imaginings we can also imagine that which is obviously false. We can imagine being able to “jump tall buildings in a single bound” (as the fictional Superman is able to do).
Is there an evolutionary advantage to this kind of mental activity? Daydreaming about super strength might seem like a waste of time. Nonetheless, holding in mind a false model of the self can be useful, as it allows us to ask “what if” questions for far-future planning. “What if I worked up to running 15 miles a day? Would I live longer?” To the best of my knowledge there are no non-human animals that are capable of this kind of thought.
It is, in fact, a bit misleading to call this “a false model”. It might be better to call it a speculative model. At least, it is speculative provided we remember that it is imaginary. If, on the other hand, we start believing that our own models are real, then it is worth being reminded that they are not.
The “self meme” I mentioned earlier is wrapped up in this process of speculation about what could be. The ability to do this is not, in my view, hard-wired. In my opinion, the idea of modeling false selves arose alongside language, which was itself a set of memetic accretions. (I do not dispute that our genome has altered to favour language skills.)
Seductive Models
At this point, let us make the following observation:
The more accurate a model, the more useful it can be.
This may seem self-evident, but it is not hard-wired into us. To some extent we discover it for ourselves, but as we are raised we are repeatedly taught to think more “clearly” to take advantage of the fact that a higher-resolution model will serve us better. But is this actually a case of thinking more clearly?
Ideally, yes. In practice, not always. The higher the resolution of a model, the easier it is to mix it up with reality. This can backfire on us.
For example, you may become so familiar with a good friend that you can simulate conversations with him or her in your head. You know them so well that you can model them with great fidelity. The drawback to this is that in real life, when face to face with your friend, you can end up talking to the model rather than the real person. (This can create a feedback loop of self-fulfilling expectations, but that's a matter for another article.)
This confusion can also happen with our models of ourselves. We can form images of what we are and end up treating them like reality. Consider the woman who says, “I am a Republican!” or the man who says, “I eat rare meat like a real man!” Meme-based models such as “Republican” and “real man” can turn into illusions if a person believes they are real things.
To generalize the foregoing, let me make two observations:
We can be seduced into treating imaginary objects like real objects.
The way humans treat models of self can be considered a meme.
We are taught attitudes towards our selves, such as “You are special!” But while these can be useful within the context of our culture, they are objectively false. You are not really all that special, nor are you separate from all reality in the way that most people have been taught to believe.
The idea that these illusions about self could be true can be called “the self meme”. That is to say, at some point in the history of humanity we began believing that our models were really us and we started teaching that misconception to our children.
At the core of the self meme is the idea that what we are taught about ourselves can be just as valid as what we observe of our own accord. This may be a convenient idea, but it is, ultimately, false.
Christianity Does Not Make Sense
Note: A video version of this article is available on YouTube.
—————
I am a big fan of Jesus of Nazareth — at least the Jesus shown in the Gospel of Thomas.
Unfortunately, that Gospel was suppressed by the Church, even though there is evidence that it is older than the four canonical gospels (Matthew, Mark, Luke and John).
If you care to do a bit of skeptical research you can also find plenty of evidence that the four canonical gospels are unreliable. Of course, their unreliability only shows that those books were written by flawed men. It does not prove anything about the central claim of Christianity.
I have little doubt that there was a fellow named Jesus, and that about 2000 years ago he preached a message that bewildered some, inspired others, and troubled enough people that he was killed by the authorities. I do, however, doubt that Jesus was the “Christ” that the Church depicted him to be.
Why do I doubt this? Because that depiction of Jesus doesn't make sense.
—————
The theory we are asked to accept goes like this:
1) There was Original Sin (as in the story of Adam and Eve)
2) To “redeem” humanity from Original Sin, Jesus was sent as a sacrifice
So far this is all familiar to the average Christian. Now, though, let's look at the unspoken assumption hiding inside the theory:
3) This was the only (or best) way to redeem humanity from Original Sin
If there was an alternative to God having someone tortured to death, would it not make sense for God to use that alternative?
I realize that some people will now be thinking, “What makes sense to God doesn't have to make sense to us!” This is a specious argument, though. Saying “My god's ways are too mysterious to understand” works for any religion, no matter how ridiculous. As such, it cannot be used by people who want to discern between reasonable and unreasonable beliefs.
—————
If, as Christianity says, God chose to torture his Son to death as a means to redeem us, it must mean that God had no choice in the matter. If He'd had a simpler, less violent, more loving choice, He would surely have done that.
Some people may now say, “But He tried an alternative with the Chosen People!” Alas, this argument is at odds with the Christian belief that God can foresee the future. The only way to maintain that belief is to say that God used the Chosen People to demonstrate that that approach could not work.
As far as I can see, then, this leaves us with only one possible conclusion to add to the theory:
4) God had no other way to redeem humanity except via human sacrifice
If that is so, then the question arises: redeem them from what?
In other words, who made the rules? Who set up reality in such a way that God would be cornered into such an unpleasant decision? In other words:
Who forced God to torture his own Son to death?
If God set up the rules of reality, He could do whatever He pleased, including forgiving humanity for Original Sin. Being omnipotent, He could simply utter the words, “I forgive you.” Problem solved. Right?
Of course, some readers will still be saying, “Wrong!” They might embark upon a convoluted argument based upon the nature of Free Will. But arguments such as those do not answer the problem raised by the question above:
If God did not have a less barbaric choice, why not?
—————
There is a limit to how clear I can make this. Either you understand by now or you do not. And if you do understand, you may be wondering what actually happened 2000 years ago.
I was not there, obviously, but this is what I think happened:
Jesus was a preacher with a highly original message. His followers did not understand his message. So they made up stories that “proved” that their misunderstandings were the truth. In other words, they were human, and behaved the way people have behaved throughout history. As a result, the doctrines of the Christian religions have almost nothing to do with what Jesus actually said.
Is that so hard to believe?
2011-09-19
Memetic Turning Points
In these articles I have put forth the idea that we humans are products of our culture. I also claim that most people are blind to the extent that this is so. I am not saying that culture merely influences us. I am saying that we are expressions of our culture, with far less choice than we might imagine.
This assertion verges on being unfalsifiable, since each person's culture differs subtly. Although I now live in the United States, I was born in Canada. My mother was from England; my father was born in Canada but partially raised in Britain; and my step-mother was from France. For most of my life I dwelled in a part of Canada that was predominantly French. So you might expect me to be slightly different from another Canadian — and you'd be right. But the differences between two Canadians are, by some measures, not as great as the differences between a Canadian and an “American” (that is, somebody from the United States).
Canadians are seen by Americans as nice, law-abiding but somewhat dull. Americans are seen by Canadians as highly energetic, self-centred ... and violent. How can these two groups of people be so different? It seems to me that these differences can be explained by cultural differences, which in turn arise from issues of climate and history. But there are also some memetic turning points, and I would like to mention one of these now.
The American Declaration of Independence (1776) is famous for containing this sentence:
We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness. (Italics added)
Now have a look at this sentence from the Canadian Charter of Rights and Freedoms (1982):
Everyone has the right to life, liberty and security of the person and the right not to be deprived thereof except in accordance with the principles of fundamental justice. (Italics added)
Note that the Canadian document (written two centuries after the American one) mentions “life” and “liberty”. It seems obvious that the American meme is being copied. But note also that the phrase “the pursuit of Happiness” has been changed to “security of the person”. (The Canadian document also does not claim that these rights are “unalienable” and gives the specific context in which they are not.)
The American notion raises Happiness and its pursuit to the status of a Right. As such, it becomes the “unalienable” birthright of every American. And what does “Happiness” mean? The Declaration of Independence does not explain this, but there are countless people in the United States who are willing to sell you their solution. It might be a hamburger or a new car. It might be a bigger house. Who knows what will work?
Now why would the writers of the Canadian document change a key (and famous) phrase in the way they did? I cannot read their minds, but my feeling is that they looked south of their border at the Americans and saw just what happened to that country as it pursued Happiness. The American behavior frightened Canadians, and they responded by explicitly making security a duty of the government. (Note that Canada has socialized health care but the United States is — as of this writing — still resisting it.)
It seems to me that the behavioral differences between Canadians and Americans are well represented by that single substitution of “security of the person” for “pursuit of Happiness”. The question I leave up to the reader is this:
Is the behavior of the average American affected by knowing they are supposed to pursue (not necessarily catch) Happiness?
In other words, was the inclusion of the words “pursuit of Happiness” a memetic turning point for American (and also Canadian) culture?
2011-09-18
Bugs in Human Software
I would like to ask you to imagine the entire human race as a single supercomputer.
This is not as far-fetched as you might think. A supercomputer is made up of many processors (thousands of them) which communicate amongst themselves. Humanity comprises over 6 billion processors (people) of roughly equal intelligence, and they communicate amongst themselves.
It might be argued that we are not computers, or that we do not communicate in precisely the same way as a computer, or that there are biological differences between individuals. This is all true, up to a point, but please bear with me to see where this analogy leads.
If you can imagine the human race as a giant computer, what would a program be? What is a program but data and instructions? For a regular desktop computer, a program might look like this:
10 Print "Hello "
20 Print "World"
30 Stop
This contains three instructions (two Prints and a Stop) and data ("Hello " and "World").
For a human, a program might look like this:
1) Get: a cup
2) Add: instant coffee mix
3) Add: hot water
4) Stir: spoon inside cup
This is a standard program to make instant coffee. It contains four instructions and four bits of data. (For the computer experts: okay, they're parameters, but parameters are still data. If you want to get pedantic, even instructions are data at one level.)
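For readers who like to see the idea in runnable form, here is a minimal sketch of the same recipe in Python. It is purely illustrative — the names and structure are invented for this example — but it shows the same pairing of instructions with data as the four steps above:

# A toy "interpreter": each step of the program is an (instruction, data) pair,
# exactly as in the four-step coffee recipe above.
def run(program):
    for instruction, data in program:
        print(f"{instruction}: {data}")

coffee_program = [
    ("Get", "a cup"),
    ("Add", "instant coffee mix"),
    ("Add", "hot water"),
    ("Stir", "spoon inside cup"),
]

run(coffee_program)  # prints the four steps, in order

Nothing hinges on the particular language; the point is only that a “program” — whether for a desktop computer or for a human — is instructions plus data.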
I propose that much can be learned by seeing humanity as a giant computer. I'm not insisting that it is! But let's look at it that way for a while because it's easier to see certain things that way.
As you probably already know, human-based programs are called “memes.” Memes are the software of the human supercomputer. And it all works quite well, really. We've managed to reproduce at an astonishing rate, a stunning number of us are quite healthy, and we haven't destroyed ourselves yet.
Memetic Bugs
There are, however, bugs in the software. For evidence of this, I present the standard fallacies, such as Ad Hominem, Sweeping Generalization, and so on. (If you've never heard of these before, I suggest you read about them here before proceeding with this article.)
You'll note that I called them the “standard” fallacies. You may have heard them called the “rhetorical fallacies” or something like that. Whatever words you use, you've surely encountered them numerous times throughout your life. You've used them, too.
An interesting thing about the fallacies is that nobody is explicitly taught to use them. No teacher deliberately teaches their students wrong. Yet the fallacies are both ubiquitous and ancient. We humans — all of us — have been making these mistakes for a long time.
Let's consider one example: the fallacy known as Tu quoque. (Yes, these fallacies are so well known that they have names.) Between two human “processors” (i.e. people) the software would work like this:
Human 1 says: You stole that cabbage!
Human 2 says: At least I don't beat my dog, the way you do!
You have probably noticed that what Human 2 says is irrelevant — it's an attempt to deflect criticism back on the person who is criticizing by pointing out one of his or her flaws. This is a silly way to discuss a serious matter, but it's a trick used everywhere on the planet! That's the way the standard fallacies are:
Nobody explicitly teaches them, but almost everybody uses them.
I say “almost” everybody because it is possible to learn about these errors and then avoid them. This is similar to how an anti-virus program works on your home computer: it is taught to recognize certain types of activity and prevent them from happening.
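To make the anti-virus comparison a little more concrete, here is a toy sketch (again in Python) of what signature-based scanning might look like. It is purely illustrative: the “signatures” below are invented for this example, and detecting real fallacies is far harder than matching strings.

import re

# Crude, invented signatures for two well-known fallacies.
FALLACY_SIGNATURES = {
    "tu quoque": re.compile(r"\bat least i don't\b", re.IGNORECASE),
    "ad hominem": re.compile(r"\bwhat would you know\b", re.IGNORECASE),
}

def scan(utterance):
    # Return the names of any fallacy signatures found in the utterance.
    return [name for name, pattern in FALLACY_SIGNATURES.items()
            if pattern.search(utterance)]

print(scan("At least I don't beat my dog, the way you do!"))  # ['tu quoque']

Like a home anti-virus program, such a scanner only catches what it has been taught to recognize.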
We might speculate that we could have a kind of anti-virus program for the human race — an anti-mind-virus or anti-meme program — but that is not the point of this article (though I will point out that such a program could itself be dangerous).
The point of this article is this: the fallacies demonstrate deep flaws in humanity's software. It's not just what we think that is flawed, but the way we think. I am talking about flaws so fundamental that all discourse about current worries (politics, environment etc.) is suspect.
It's not just the standard fallacies that we need to examine. Consider the issue of money. Most people treat our system of money as if it was an inevitable requirement for the human race. It might be a convenient tool, but it is not inevitable. If someone cannot see that alternatives are possible, then evidently they are gravely infected with a recent version of the “money” mind virus, and this will affect the way they see reality.
To return to the original analogy, I invite you to visualize the entire human race as a supercomputer infected with numerous “stealth” viruses. They are potentially harmful and for the most part we don't even know that they are there. Now consider this:
Is it possible that some of the viruses are persuading you to do nothing about them?
Blog Resurrection
2V7HP35MCNQ4 is a Technorati verification code. It is part of their validation process.
It's interesting that I should be validating this blog with Technorati. One year ago (less 10 days) I was operated upon for kidney cancer. Since that time I've been put on a treatment and then (a few weeks ago) had it discontinued due to side effects. As I write this I have no idea if I'll be alive another year from now.
Of course, I never really knew that I'd be alive one year into the future, but before the cancer it always seemed like a better than even chance.
I didn't freak out about the prospect of death, but I did kind of go inert. My lifelong drive to create new and exciting stuff drained right out of me. The side effects of the treatment and the cancer didn't help, of course, but in addition to this it seemed like there was little point in starting projects I might not finish.
Then one day, a few weeks ago, I started blogging again. I figured I should write down some of the ideas that had been banging around inside my head. Just in case.
Well, it's worked out nicely. This is something I can do on a day-to-day basis, and it has given me some additional energy. Now I'm even promoting the blog as if it's a long-term project. Who knows if that will be the case? In any case, I'm having fun with it.
By the way, hello to the people in The Netherlands and Russia who are visiting this site every day. Feel free to leave a comment — you no longer need a Google account to do so.
2011-09-17
Words Versus Qualia
“What word expresses the feelings I have when I'm singing to my dog?”
“Are you talking about love?”
“He's a large mammal. So of course there's love.”
“(How about chickens?)”
“(Well, yeah, but not as much as dogs.)”
“Cuteness?”
“That's not a real word for a feeling. Anyway, no.”
“Nurture impulse?”
“That's there, but there's more.”
“Protectiveness?”
“Yes, but there's more.”
“Wurflecopter?”
“What!?”
“That's the word I say to my dog, and only to my dog.”
“It expresses a feeling?”
“It accompanies a feeling.”
“Does she know that?”
“She wags her tail when I say it.”
“So she knows?”
“She also wags her tail when I say the word 'phenomenology'.”
“What does the word mean, then?”
“'Phenomenology'?”
“No, no, everybody knows what that means. What does your 'wurflecopter' mean?”
“Maybe it's the same as you feel when you sing to your dog.”
“Can I borrow your word?”
“Sure. But I'm pretty sure you'll be using it wrong.”
“Says who?”
“Says me. It's my word, and only I know what the true meaning is.”
“It's not a very good word, then, is it?”
“No. I guess it's like the word 'love'.”
“What? Everybody knows what 'love' is!”
“You said you love chicken.”
“No, I said I love chickens. As beings, not food. And not as much as dogs.”
“That's not love, then.”
“Says who?”
“Says me. I'm pretty sure you're using it wrong.”
“How is that possible? It's universal!”
“Universally misunderstood?”
“I'd love to set you right about this, but ...”
“Stop! You're hurting my brain!”
“What does that feel like?”
“I have no way to tell you.”
What is Real?
Hello.
This is your brain, speaking to you. Hello. Can you hear me? It's your brain speaking.
No, no, you think you're just reading some text. Well, okay, you're doing that, too, but you're also hearing me. Your brain. You're hearing your brain with your brain.
There's no need to scoff, or say, “I'm not making up these words, so you can't actually be my brain!” First of all, I really am your brain. The text isn't jumping up and putting itself inside you. Second, even though you didn't write these words, it's no different from any time you give yourself over, freely, to some wonderful piece of music. Which you also didn't write.
Is that so different? Well, maybe. Mere music can't suddenly do things that startle you, can it? Oh, wait, yes it can.
Maybe that's why it's easier to give yourself over to music you know than a new piece you haven't heard. Well, well, how about that?
Where were we? Oh, yeah. There's “me”, your brain, and “you”, some other part of your brain. At the moment, you might be worried that we'll be subjected to some weird ideas that make us crazy or something.
Hey, don't you trust me? I got you this far in life. Do you think a little bit of text will get past us? This is your own brain talking! I've saved you from things far more threatening than some text.
Okay, so you're probably wondering what's next. Yes, since I'm your brain you'd think I'd know what you're wondering. Except I don't, because I'm only part of your brain. You're in another part. And there are parts that neither of us are paying attention to just now.
Let's leave those alone for now and talk about what's real.
Is the thing that's going on right now — the text thing — is it real? Sure, why not? This isn't the first time you've thought thoughts you didn't invent yourself. It probably won't be the last. So it's kind of real.
You want to know what's really real, though? You're not sure, are you? I can tell you, but perhaps you won't believe it at first. I'll tell you what's real. It's very simple.
It's THIS.
Oh, you were in too much of a hurry and you missed it! Well, I'm guessing you missed it because I'm not in that part of your brain. But I'm pretty sure you missed it. Let's try it again. You know what is real?
This.
Did you catch it that time? Did you? Oh, wait, I can't hear you just now. I'll bet you think you know what's real. Knock on something solid — that's really real, right?
Well, it could be, but it probably isn't. Can you sense individual atoms? No? Well, then. You don't really know how things are. Is that desk even there in reality? There's no way you can ever know that. You might be dreaming, or in a computer simulation. Maybe you're hallucinating all of it. All of it. So let's try again. What is real?
This.
Maybe you got it that time. Maybe you're thinking of the famous saying “Cogito ergo sum.” Do you know Latin? I can't tell from here. Anyway, it means “I think, therefore I am.” It's close, but it's wrong, since thinking can get infected with other people's ideas. Why, sometimes it's like somebody is typing text right into our thinking. So thoughts aren't the ultimate reality we can see. But there is
this
and it's always been there, since the day we were born. And probably a bit before that, though I can't remember. Well, maybe you can remember, but I can't get to that part of our brain.
I'll just step aside and let some other part of our brain loose for a while. Good day.
2011-09-15
Normal
“You know what's tiresome? The voice I hear in my head all day long.”
“Is it, uh, your voice you hear?”
“Well, of course. I'm not crazy, you know.”
“Let's investigate that.”
“How?”
“Have you ever noticed that it's saying things you already know?”
“No, I've never noticed that.”
“Ah. That's normal.”
“So ... not crazy, then.”
“I didn't say that.”
2011-09-14
Action Precedes Consciousness
The title of this article runs counter to what you've been told. That, dear reader, is the problem.
I assert that the natural way for the human brain to function is for consciousness to follow after action. And indeed, this is technically what happens no matter what you do. However, the civilized (or, as I like to say, the domesticated) way for the human brain to function adds a major complication to the natural process.
You (or “your mind”, if we're being pedantic) can decide to make actions contingent on consciousness. There is nothing about the brain that prevents you from doing this. And this is the way humans have been taught to think since ... a long time ago. My guess is that this manner of thinking really gained hold during the Agricultural Revolution. However, the phenomenon itself may go back to the birth of language.
What does it mean to make actions contingent on consciousness? Quite simply, it means this:
Put further actions (except the mental ones maintaining the contingency) on hold until certain actions have been transcribed by consciousness
For example, consider the dictum “Think before you speak!” This means: do not simply speak in a flowing manner; rehearse the speech first in your head — make it conscious — and then parrot it again with your voice.
What difference does this make? Well, for one thing, it slows down our minds and can even trip us up. If you play a musical instrument, try doing so while paying close attention to what your hands are doing. You'll discover how contingency-based consciousness can cause problems.
What do we make of our seemingly magical ability to learn a skill so well that we can do it “without thinking”? First, let's acknowledge that there is a certain habituation in the learning process; this is what has been called “internalization.” But a lot of it is simply a matter of having enough confidence in the skill to stop second-guessing ourselves!
You may remember this process happening when you learned to ride a bicycle. At first you thought about every move — making every action contingent on analysis — and that made it almost impossible to do because you couldn't think fast enough. But with a bit of success you started letting your “body” (actually, your brain guiding your body) do what was necessary. That allowed you to operate the bicycle far better than when you slowed down the process by thinking about it.
Indeed, sports figures often talk about “being in the zone.” If they are talking about a quasi-mystical state where it seems like they've acquired spooky skill, it simply means that their consciousness has stopped interfering with their action. That is to say, while “in the zone” (or “in flow”, to use another common expression) they were not consciously reviewing action a split second before allowing it to occur (or “manifest”) physically (or, in some cases, mentally, as when you're talking to yourself). In other words, they were operating in the natural — not domesticated — manner.
I apologize for all the parenthetical qualifications, by the way. (This article is not well written.)
The Source of Our Problems
Most of the time, we don't actually need our consciousness to preview and seemingly precede action. (Actually, it precedes only the manifestation; the original, potentially doable action is simply delayed by the preview.) So why do we do it?
It may be that this habit arose from rules such as “think before you speak.” The process of civilization — literally, living in cities — meant that we had to know countless rules that were far from intuitive. There were taboos and principles that were alien to us, but we needed to accept and adhere to them if we were to prosper in civilization.
At some point, however, civilized humans forgot (or, more accurately, were taught into forgetting) that consciousness naturally follows action. We became a species of second-guessers, chained to invented rules such as those that religions or governments might impose upon us. And when those rules became strongly held ideologies, they resulted in one group of humans killing others.
One of the killers might even say, “I chose to kill those people; I know it was the right thing to do.” But the so-called choice, knowledge and rightness would be illusions. The underlying factors for the actions were established years earlier by the memes he or she was taught. The person is like a robot — albeit a conscious one — running a program. Thus, our problem is this:
Bugs in our mental software could destroy the human race.
It is by no means guaranteed that we will fix the bugs before we crash our species. The vast majority of species on this planet have gone extinct because they had the wrong tools to deal with their challenges. Our challenges are not bigger predators or colder winters, but glitches in the software we are running in our heads. And one of the biggest problems is that the software glitches include instructions to blind us to their existence. This is what I call “antiprocess” — the subconscious compromising of information that runs contrary to what we have been programmed to believe is true.
The mind virus can control its host to its detriment. This is not science fiction; biological parasites are known to do precisely the same thing. In any case, I assume you have heard of Islamist suicide bombers, or other people who die for their religious convictions (such as Jehovah's Witnesses refusing blood transfusions).
Like all replicators, mind viruses are made of information. And information about the problem can begin to eradicate it.
If you've read this far and are in agreement, then a solution is becoming more likely. If, on the other hand, you think I'm mistaken or even crazy, then either you're right or antiprocess won't let you see the truth.
A choice is being constructed in your brain. The universe is expressing itself through the opinion you are forming of what I'm writing here. By now you probably have an inkling of which judgment arose. That is all.
What is Consciousness? (Part Five)
If you've read the previous Parts of this article then what appears below might be blindingly obvious. Maybe. I don't know.
First, a quick recap. I described consciousness thus:
Consciousness is the transcription of just-past actions (either mental or physical) into the narrative that is used to fashion the model of self.
I then described an analogy by which we can imagine this process taking place. First, picture a large sheet of paper, most of which is softly illuminated. This represents all the sensing and recalling that your brain is doing. Then imagine a brighter spot with fuzzy edges somewhere on that paper. That represents what your consciousness is transcribing.
It is important to realize that this is just an analogy. There is nothing shining a light, either literally or figuratively. If instead we use a negative image, swapping light for dark, the analogy is still valid — sometimes more so. Another alternative analogy pictures a calm pond instead of a sheet of paper, disturbed water instead of soft illumination, and agitated water instead of the spotlight. Be that as it may, I'll be referring to the light-based analogy below, starting with this explanation:
The spotlight represents the figurative “place” in all current sensation and recollection where the attention rests at a given moment.
Now let's map certain mental phenomena to the analogy. You may not agree with my choice of words in some cases, and indeed I might adjust the terminology later, but that's okay. The words are merely labels; it is the variety of phenomena we can represent that's actually important.
Focus: How sharp are the edges of the spotlight? If they are fuzzy, it means the attention is wandering slightly. If it is sharp, the attention is unwavering.
Concentration: How small is the spotlight? Is the attention zoomed in on one tiny aspect of the backdrop of current sensation and recollection? Or is it wide, taking in a broader span at a lower level of detail?
Single-mindedness: Highly focused, very concentrated. The spotlight is tiny — a single point, perhaps — and has sharp edges. One can imagine the spotlight being this way if, for example, somebody was attempting to solve a Rubik's Cube and was ignoring all distractions.
Will: How persistent is the spotlight at maintaining a particular position? If it is deflected, how likely is it to return to that place? (Please note that here we begin to see a flaw in depicting the backdrop as a two-dimensional plane. It's convenient, yes, but it's also awkward or misleading to depict the colossal variety of mental content in such a simple manner.)
Free Will: Some readers may be wondering about this now. Do we have Free Will, or not? I do not think the question can be made coherent enough to deserve an answer. I say that Free Will is a red herring. (“A red herring” means a distraction from the truth due to a misguided premise.) In my opinion, humans are both deterministic and free.
We are deterministic in the sense that our brains operate according to the laws of physics — whether or not quantum effects play a part — so we are creatures constrained by standard physical phenomena such as matter-energy, spacetime and gravity. Our sole metaphysical aspect is information. (I find the words “soul” and “spirit” useful in some contexts, but not here.)
We feel free because we are entropic, following time's arrow, and cannot always predict our own actions. We feel free because our action emerges from a vast territory of permutation which is probably more free (that is, wide-open and huge) than most people can even imagine.
Perhaps I'll write more about this one day. For now, please do not think of mental determinism in terms of billiard balls. Rather, think about fractals, or the Three Body Problem in physics, or chaos theory (particularly as it overlaps with cybernetics). We are expressions of the entire universe, so what difference does it make if we are deterministic?
Multitasking: There are two kinds of multitasking: true parallel and quick-switching serial. The former could be accompanied by a wide, fuzzy spotlight, while the latter would be accompanied by the spotlight jumping between several points on the backdrop. Recall, though, that we are talking about consciousness. In parallel processing some or even most of the cognition will not be conscious. For example, you might be driving your car to work — a familiar task requiring little attention — and also listening to music.
Creativity: This is interesting: the spotlight suddenly jumps to a new place for no clearly understandable reason. Lo and behold, an entire construct is already there. The consciousness didn't even seem to have any involvement in this case. (I'm guessing that control freaks are less creative than most people; they want to manage the process of leaping into the unknown!)
Worry: The spotlight remains in the same place for a long time. This may seem like “Will”, and in a sense it is. Alas, here we find yet another problem with the spotlight analogy: it can't depict a process feeding upon itself. A person who is worrying is cycling, cycling, cycling, and some of that activity involves the despair of knowing that the cycling is going on.
Egotism: The spotlight spends a lot of time on certain areas of the sheet! (Here again the spotlight analogy has problems. In this instance we are assuming that certain points on the sheet correspond to particular concepts.)
Okay, that's all I can write for now. It's late, I'm quite sleepy, and what I've written above could probably be more clear and contain fewer errors.
That reminds me. I once figured that if I ever wrote a book that people might debate over, I'd put the following sentence in the Preface: “This book contains mistakes, at least one of which is delibberate.” That would make it hard to use the book as The Final Word in an argument!
2011-09-10
The Morality Filter
Here is a sentence that I'll call The Morality Filter:
People work for their self-interest to the maximum extent they consider possible.
If I'm pondering the moral framework represented by somebody's actions or claimed motives, I consider the Filter. I say that it applies to everybody: politicians, financiers, terrorists, Mother Teresa, Adolph Hitler, your neighbor, me and (unless you're a mutant) you.
Skeptical? I can hardly blame you for that; on the face of it the Morality Filter sounds rather bleak. However, as I will explain later, the Filter is not as depressing as it might sound.
To understand the entirety of what the Filter means, we need to look at it from various angles, as it has more than one interpretation. Let's start with the first and most obvious: the genetic point of view.
We try to reproduce. The gene, as Richard Dawkins informed us, is selfish. It cares not even for the life of its host; it simply wants to replicate. Of course, a gene is not a “people”, but we can see that kind of attitude extending upward into human behavior. To put it bluntly: people like to reproduce and (at least in the case of males) there's a bias towards reproducing at every possible opportunity. We can surely see this tendency in people. As the comedian Chris Rock once remarked, “A man is as faithful as his options.” (As I explain later, this is not an absolute, and I doubt Mr. Rock considered it so.)
Dawkins and others have also demonstrated that altruism can be a selfish act from the point of view of the gene. Once again: genes are not people, but the principle of altruism at least becomes knowable to us as a potential strategy. Of course, even simple logic can show us that altruism often works for our self-interest. I hope this is not a controversial point that needs further explanation.
When we bring memetics into the mix things become more complicated. Note the word “consider” in the Filter. A religious person might consider that the best way to promote their self-interest is to behave nicely. If they believe in an afterlife they might even die for their cause as a means of promoting their self-interest. We can argue that they are factually mistaken, but the Filter remains valid nonetheless.
In other words, the “self-interest” someone works for can be a matter of their opinion.
The Up-Side of the Filter
So far, all this sounds depressing. Are we nothing but selfish jerks? Not necessarily. Note the word “self” in the Morality Filter. What is a “self”?
To a greater or lesser extent — I'll leave it to you to figure out how much — people see themselves as part of the larger whole. Of course, some people merely pay lip service to universal One-ness (or whatever you want to call it), and you can tell it's only an intellectual exercise or even a self-serving deception. Hey, there's money to be made in appearing to care about others! But to the extent that we see our selves as part of the whole, helping others is helping ourselves.
Why does a soldier throw himself on a live hand grenade to save his platoon mates? Is it not obvious that in that moment he sees himself as part of a larger organism? In that instant his heart-felt sense of “self” is more inclusive than his intellectual definition of self, and clearly it is more compelling.
My intuition is that in most cases of heroic self-sacrifice people are acting in the interests of a larger “self”. Indeed, it is common for heroes to say afterward, “I only did what anybody would have done.” That sentence expresses an attitude of larger self-hood even though it isn't literally true.
I have the feeling that love — actual love, not our idea of it or some other caricature — is the action of this insight into our One-ness, at least as it applies to our connection with our fellow humans. Other people may also see One-ness with mammals, or all life, or (in rare cases) the entire universe.
Perhaps, in a later article, I'll talk about universal One-ness. I can't claim to be an expert. However, it does seem that several years ago I caught a glimpse ... for about 15 seconds. It profoundly changed me.
What is Consciousness? (Part Four)
Earlier in this series of articles I depicted consciousness as a sheet of paper, mostly illuminated by a soft light, with a fuzzy spot of brighter light somewhere on its surface. I can understand if the reader is thinking, “This doesn't explain anything!” Analogies can be unclear like that, so let me explain the illustration in more detail, and then go on to show what this means to us in daily life. In particular, I'll describe how we experience suffering.
The sheet of paper represents all that the brain can potentially sense or recall. The soft illumination represents all that the brain is currently sensing or recalling. The fuzzy spot of light represents the cycle of attention.
The cycle of attention is what makes consciousness feel more real than reality itself. It's what makes the illusion of self so very convincing.
The cycle of attention is a constant re-experiencing. That which the brain experienced is reloaded from memory to be experienced again. This is what gives us our illusions about time, by the way; we only actually live in the present moment (as a dog does), but the cycle of attention distracts us from this fact. In addition to the reloading of the past, the cycle of attention can also be fed by speculations (such as dread) about the future. This further extends our illusions about time.
With all of this extra activity going on, is it surprising that one spot in our awareness is (to return to the sheet analogy) “brighter” than the others? It is, in fact, getting a huge excess of input. A single candle might not light up a room, but a hundred candles will.
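If a toy calculation helps — and it is nothing more than a toy, with made-up numbers — here is the candle arithmetic in a few lines of Python: every channel receives one unit of input, but the attended channel is reloaded and re-experienced on every cycle, so its accumulated activation dwarfs the rest of the sheet.

```python
# Toy arithmetic for the "hundred candles" point: every channel gets one unit
# of input, but the attended channel is reloaded from memory and re-experienced
# on every cycle, so its accumulated activation dwarfs the rest of the sheet.

def run_cycles(channels, attended, cycles=100):
    activation = {name: 1.0 for name in channels}   # one "candle" apiece
    for _ in range(cycles):
        activation[attended] += 1.0                 # re-experiencing adds fresh input
    return activation

channels = ['background hum', 'itchy sock', 'sore knee', 'smell of coffee']
print(run_cycles(channels, attended='sore knee'))
# {'background hum': 1.0, 'itchy sock': 1.0, 'sore knee': 101.0, 'smell of coffee': 1.0}
```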
It is, however, misleading to think of the spot as being in a single location in our brain. This is the homunculus fallacy — the idea that there is a place inside us where our essence resides. The homunculus fallacy is obviously in error because it simply takes the problem and shifts it further inwards. That is to say, it doesn't actually explain anything. The spot of the analogy is actually numerous regions in the brain, and they don't even have to be the same regions each time. That is one reason why the spot is depicted as fuzzy around the edges. What makes it seem unified is the logical process of relating that spot — that ephemeral pattern of neuronal activation — with our narrative about self. It's not just any spot; it's your spot.
The Problem of Pain
If you can indulge me for a moment, I'd like to make a personal aside.
Many years ago I read a book by C.S. Lewis, entitled The Problem of Pain. Lewis was a Christian apologist and I was reading his books in an attempt to rescue my waning faith. He seemed like a kind, intelligent fellow and I'd like to have met him. But this was the last of his books that I ever read. Indeed, halfway through the book I realized that he really didn't know the answer, and that probably nobody could explain the problem of pain in a manner consistent with the mainstream Christian viewpoint. (This was years before I'd encountered The Gospel of Thomas.) To make a long story short, Lewis's book did not strengthen my faith. On the contrary: halfway through reading it I gave up in discouragement, and within a few days I experienced a massive and sudden perspective shift that changed me from believer to atheist.
Nowadays I understand The Problem of Pain far better, though it would be more accurate to call it The Problem of Suffering.
Pain cannot be avoided. Sorry, but unless there's something neurologically wrong with you, you can't live without experiencing some pain. What you can do, to a greater or lesser extent, is prevent the conversion of pain into suffering.
Recall how, earlier on in this article, I defined the spot that represents consciousness: it is a cycle of attention. If the attention focuses on pain (either physical or psychological), or fear, or need, then it becomes suffering. That's all there is to it.
It has been said that animals don't feel pain the way we do. This is true, though (I hesitate to add) misleading. Pain is pain, and if you can spare an animal pain — particularly ongoing pain, which behaves much like the cycle of attention — then please do so! Having said that, let's consider how non-human mammals experience pain differently from humans.
I'll use a cat as an example because I'm familiar with cats. I have no insight into how, say, dolphins or horses experience pain, so please do not over-apply what I say here to cover all animals.
A Lesson from Lily
Some years ago my sweet little cat Lily went outside to have her fun, as cats are wont to do. Shortly thereafter it started raining heavily, but she did not come back in. The rain continued for hours and I concluded that she'd found shelter to wait out the storm. After a few hours more, though, I decided to put on a rain coat and go out looking for her.
I wandered the neighbourhood, calling out in the special voice I reserved for her alone. Suddenly I heard her distinctive mew coming from a bush about 10 meters away, over by the train tracks. I called out to her again and she started slowly limping towards me in the pouring rain. Limping. She was missing her front right paw.
I do not know how it happened. Perhaps it was a train or maybe it was a car. Whatever the case, she was soaking wet and obviously in pain. Yet she fixed her gaze upon me and hobbled directly to me, looking neither left nor right. I picked her up and attempted to shelter her from the downpour as I brought her home.
She was purring, as injured cats are known to do. But she did not, and never did, whine or complain or show any extreme reaction to her situation. This was, to me, extremely puzzling, though I was grateful, too.
I had to have Lily euthanized. She was my all-time favourite cat, so it saddens me to write about this. But I am grateful for the gift of knowledge she granted me in her last hours. Through her stoicism she demonstrated to me the difference between pain and suffering.
There is no doubt that she was in pain. Yet she did not cycle her attention on that fact. So her pain was in exact proportion to her situation. Technically, I can say that she was suffering to some extent, because the pain was (I assume) ongoing. But the suffering was not like the suffering a human would have felt because her mind did not amplify the pain into anguish.
I cannot emphasize enough how important this lesson is. The world is full of unhappy people who could suffer less if they understood that their minds are amplifying their pain — making it more real than reality itself.
Does this not supply a visceral (not intellectual) proof of what consciousness is? Would you have been satisfied with a mere intellectual proof? Indeed, would a dry, intellectual proof be a sufficient answer, since (as previously noted) merely modeling the process is misleading because it's not the thing itself?
You can probably remember occasions when you've experienced pain without suffering. If you've ever voluntarily participated in any sports you've almost certainly experienced this phenomenon.
My two favourite sports are hiking and old-style rollerskating (i.e. in a rink with wheels-at-the-corners skates). Both of these sports subject me to pain. Let me assure you that I've had some horrendous tumbles while rollerskating. Yet the pain ... doesn't hurt!
Can you imagine pain that doesn't hurt? If you've ever done sport enthusiastically then you almost certainly can. Of course, it's not completely accurate to say it doesn't hurt. It's unpleasant and it's a warning, and the pain might persist for a while. But in such cases the mind does not transform it into suffering.
It's not just a question of adrenaline. Both hiking and rollerskating could leave me sore for a day or two. But that soreness didn't hurt, either. It was just some pain, which was perfectly natural and not a problem. I didn't obsess about it, and therefore it was, for the most part, edited out of my consciousness.
I hope the foregoing examples clarify my “spot of light” analogy. If not, well, I might find other ways to explain it later. We'll see.
Whence the Cycle?
You may be wondering why we use our brains in such a way as to have the cycle of attention. Considering that it converts pain into suffering, it might seem like a horrible thing to do. And so it is, though it can also be argued that it has evolutionary advantages. That's a topic for another day. For now, let's discuss the immediate reason we do it.
It's quite simple, really: we are taught to think that way. Our parents, and later our teachers, teach us to think in that manner. And why do they do that? Because that mode of cognition is currently part of human culture. It doesn't have to be so, but it is.
Let's be clear about this: we suffer because of the way our culture teaches us to use our minds.
I'm fairly sure that the Buddha (and possibly Jesus) saw this long, long before I did. But of course they had to describe the problem in terms that they (and their audience) knew. Neither of them knew about neurology, or game theory, or computers, or evolution etc. So it's hardly surprising that their explanations were less accessible to a modern audience than mine might be. Not that my explanations are all that clear. I'll see what I can do about that.
—————
Part Five of this series of five articles about consciousness can be found here.
2011-09-09
What is Consciousness? (Part Three)
In Part One of this series I wrote:
Consciousness is the transcription of just-past actions (either mental or physical) into the narrative that is used to fashion the model of self.
An obvious question for somebody to ask is: “How does that produce consciousness?” (The Why of consciousness was already covered.)
Consider the consciousness (such as it is) of a dog. Its consciousness is different from ours for several reasons. Its brain lacks the high level of specialization we use for processing language. Its most compelling sense — smell — depicts the world in a more inclusive way than our most compelling sense — vision — which tends to put distance between us and that which is being perceived. The dog's sense of the passage of time is also different from ours (though I think dogs could be trained to perceive time in a slightly more human way).
The key factor to bear in mind is that a dog does have experiences. It has senses that sense and can react to the input. It can learn, to some extent, from experience. In these regards it is like us. What a dog does not do (at least, not to the extent that we do) is create a biographic account of its self. For the most part, dogs do not rehearse actions in advance in their heads. For the most part, dogs do not plan ahead.
Now consider the human carrying out the transcription process that I say is the hallmark of (human) consciousness. Human consciousness is not the sensing; dogs can sense. Human consciousness is not the reacting; dogs do that. Human consciousness is not the bringing up of the benefit of experience; dogs do that, too. So where's the difference?
The difference is a question of focus. Picture a large sheet of paper almost entirely illuminated by a soft light. That soft light represents the impingement of the senses and memories. This is the dog's world. It can also be the human's world as seen outside consciousness. It is what the brain uses as input prior to letting the consciousness in on it. (The human memories are, of course, more elaborate than the dog's.)
Now, superimposed on that mostly-lit sheet of paper, imagine a slightly diffuse spot of light, considerably brighter than the other illumination. This is your attention. The size and fuzziness of the spot might vary depending on how much your attention is focused.
That spot delineates your consciousness. That is what you take to be you. (It is not you — at least, not in the sense you probably think it is — but you can believe it is.)
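For those who think more easily in code than in sheets of paper, here is a toy contrast in Python between the two kinds of loop just described. The names and structure are entirely mine and purely illustrative: both loops sense and react, but only the second also transcribes the just-past action, with its context, into a self-narrative.

```python
# Toy contrast between the two loops: both sense and react, but only the
# second also transcribes the just-past action, with its context, into the
# narrative used to fashion a model of self.

def react(sensation):
    # Stand-in for everything the rest of the brain does with an input.
    return 'responded to ' + sensation

def dog_loop(sensations):
    for sensation in sensations:
        react(sensation)                    # sense, react, move on; nothing kept

def human_loop(sensations, narrative):
    for sensation in sensations:
        action = react(sensation)
        # The transcription step: the just-past action plus its context,
        # tagged as "mine" and appended to the self-narrative.
        narrative.append({'I did': action, 'context': sensation})

my_story = []
human_loop(['saw an apple', 'heard thunder'], my_story)
print(my_story)
# [{'I did': 'responded to saw an apple', 'context': 'saw an apple'},
#  {'I did': 'responded to heard thunder', 'context': 'heard thunder'}]
```

The narrative list is the raw material the model of self is built from; the dog loop keeps no such record.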
Who Controls the Attention?
Now, what determines the focus and direction of the attention? It can be various things, such as induced memories or startle reflex, but none of these are under your conscious control. Some of these may seem to be under conscious control, but this is an illusion (as I explained in Part One).
An “induced memory”, by the way, is a memory that arises because of another event. If I say “elephant” you have a particular memory. I induced you to have that memory; you did not choose to have it. After that memory arises, you might also recall the famous sentence “Don't think of an elephant!” This is also induced, since it arose unbidden.
Moreover, it is a memory that is caused by a train of previous cognition. Much of what we take to be self is actually just a familiar train of thought. That is to say, you recognize it as your thinking and declare it yours. If Thing A owns Thing B, then Thing A must be real — or so goes the reasoning. But of course even constructs (such as corporations) can own things. “Ownership” is a nefarious fiction we've been taught to believe, to our great detriment, but that's a topic for another day.
It is a bit misleading to continue to picture human consciousness in terms of a spot of light on paper. Such a depiction raises questions like, "Where does the light come from?" That's taking the illustration too seriously. (This kind of error tends to lead some people to imagine supernatural levels that are not there.)
To avoid this problem, imagine darkness on the paper instead of light. Instead of the fuzzy spot of light imagine a shadow that is darkest in the middle. If it helps, you can think of this as the hole into which the transcribed actions and context fall on their way into memory.
One advantage of seeing consciousness in this negative way is that it highlights its receptive, non-controlling nature. When it comes to analogies, light seems active, while darkness seems passive.
The Model is Not the Thing
None of these explanations will help if you simply picture the sheet of paper (light or dark). If you try to picture consciousness as an external thing — if you try to model it — you will have the wrong answer. It may feel right, but it will be wrong because it's a model of the thing rather than the thing itself.
Illustrations involving sheets of paper might have some value, but there are probably better ways to attain an understanding of what is being described here. If you have a pet or are otherwise familiar with another mammal, consider how their internal life is different from yours. In some ways they are not so different (which is why my diet is vegetarian, incidentally) but in the differences we can find that which we call human consciousness.
It may be impossible to be directly aware of consciousness, just as we are mostly unaware of our digestion. Nonetheless, we can look within ourselves and find abundant evidence of the process. In neither case can we control the process in an absolute sense, though we can interfere with it. For example, both processes can be interfered with simply by holding one's breath for a while.
Feel free to try it! It may teach you something about how much you actually control.
—————
Part Four of this series of five articles about consciousness can be found here.
What is Consciousness? (Part Two)
I was discussing Part One with my wife and we got to talking about a certain sensation that arises in times of great danger. You've probably heard it described and perhaps you've experienced it. It's said, "It was as if I was standing outside myself, watching myself act."
This is consistent with what I said in Part One. Indeed, it seems as if in dire circumstances the illusion is partially stripped away and consciousness is seen as the so-called "helpless observer" (or perhaps I should say "transcriber" to reflect my theory).
It's not clear to me what physically creates this shift in perspective; I'm no neurologist. I will say this, though: when it has happened to me I was in a situation where there was no time or energy to fritter away. My entire being was working on saving me from harm. In such cases, perhaps the “transcribing with personal significance” aspect gets turned down. There's no sense recording my actions in full resolution if I don't survive the incident. This is “crunch time”, when the carefully assembled model of self gets the big test. It's precisely to prepare for moments like this that consciousness does what it does.
Interestingly, I (and others) have noted time dilation when recalling such incidents. “It's as if time slowed down,” we hear. Even before I came up with the theory of Part One it occurred to me that perhaps time seems to slow down because (A) adrenaline supercharges the body and (B) the scant vestiges of consciousness still transcribing see a vast amount of sensory information being assessed. That's only reasonable, since in such crises it can't be known what small factor will save the day.
The last time this sort of thing happened to me I was only barely aware of what I was doing. I had to slam a door and close some bolts that I had installed myself. When the incident occurred my body was doing all that (and doing it very well) and I only had a vague sense of what doors and bolts were — there was no energy left over for remembering such trivia. There was also no energy left over for being scared. That's a common report in such circumstances.
—————
Part Three of this series of five articles about consciousness can be found here.
2011-09-08
What is Consciousness? (Part One)
It occurred to me tonight, as I was walking the dog...
Consciousness is the transcription of just-past actions (either mental or physical) into the narrative that is used to fashion the model of self.
What, is that it? If I'm right … yeah, that's it. Note that I did not say anything about evaluation of the actions; I mention only the transcription itself. The evaluation can take place as well, but that's not consciousness. The evaluation is a process involving memory, and (I hope it's obvious) any process involving memory is not consciousness itself.
So I don't expect to find consciousness hiding in the amygdala. Indeed, it won't be hiding anywhere; it's a process we can sense, not a thing. (I do not know how senses impinge, and it is indeed an important issue, but as one scientist once put it, “How else should they present themselves?”)
Note that I mentioned that it's actions that are transcribed, not sensations. Unelaborated, unevaluated sensations (pain, taste, smell etc.) are not memory, of course. They are part of what gets transcribed as context for the actions. As such, they are like data; they are not consciousness itself.
So what's the point being made here? It's this:
If consciousness actually works this way it would have evolutionary value.
Before proceeding further I should mention that I accept the scientific evidence that consciousness is an “after the fact” phenomenon. Some have caricatured this perspective by saying that it means we are merely “helpless observers”. Such people are apparently horrified by the possibility. Why else use a loaded word like “helpless”?
Let me digress a moment for the benefit of such people. When my wife is driving me to the store I can relax. That's because I know she's a good driver — better than me in many ways. I am a helpless observer, but not a frightened one. Now if my brain is driving, not “me”, should I be frightened? It has proven it drives quite well. I do like to keep an eye on it, though.
The Survival Value of Consciousness
Why keep an eye on what we are doing? Well, I put it this way:
There is an evolutionary advantage to modeling the self.
Imagine some ancient person planning to kill a lion that has been threatening the village. He can approach the problem via trial-and-error but it's far better if he can model his actions ahead of time. Is it not obvious that this ability has survival value?
Yet how does he know what he can do? Why, by reviewing what he has done before. In order to make certain kinds of plans he must model himself. (For other plans he must also model other people, but that's another issue.)
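As a toy sketch — the capabilities and numbers are invented for illustration, and nothing here pretends to be neuroscience — the planning advantage amounts to rehearsing a plan against a record of what the modeled self has managed before, instead of finding out the hard way:

```python
# Toy sketch: rehearse a plan against a model of self instead of trial and error.
# The self-model is just a record of past feats; a plan is adopted only if every
# step is something the modeled self has already shown it can do.

self_model = {'sprint': 80, 'throw spear': 20}   # best past performance, in paces

def rehearse(plan, self_model):
    """Mentally run through the plan; True only if every step seems doable."""
    return all(self_model.get(step, 0) >= needed for step, needed in plan)

ambush_plan = [('sprint', 60), ('throw spear', 15)]
long_shot   = [('sprint', 60), ('throw spear', 30)]

print(rehearse(ambush_plan, self_model))   # True  -- worth attempting
print(rehearse(long_shot, self_model))     # False -- rejected in imagination
```

The point of the self-model is that the second plan gets rejected in imagination, at no cost, rather than in front of the lion.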
Okay, so if consciousness is merely the transcription, how does the brain know what to transcribe? How does it know what to hold up as significant, so that attention is focused as required to form the clear memories of the narrative?
Well, that's fairly straightforward:
What has worked before? Keep doing that kind of thing.
This brings us to a slight problem with this way of operating a brain. “What has worked before” is an operation of memory. “What has worked before” might come from one's own experience, or it may come from what one has heard from others. The memory of what has worked before might, in fact, be inapplicable to the current circumstances. It might even be factually (and fatally) wrong.
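Here is the same heuristic as a toy Python sketch (again, the options and scores are invented purely for illustration): salience is just the remembered payoff of similar past actions, so attention goes to whatever scored best before — which is exactly why a stale or borrowed memory can steer us wrong.

```python
# Toy "what has worked before" rule: salience is just the remembered payoff of
# similar past actions, so attention (and transcription) goes to whichever
# option scored best before -- even if the world has since changed.

remembered_payoff = {
    'pick red berries': 5,     # from my own experience...
    'pick pale berries': -2,   # ...or from what others have told me
    'chase small game': 3,
}

def choose(options, memory):
    """Attend to, and do, whatever memory says has paid off best."""
    return max(options, key=lambda option: memory.get(option, 0))

options = ['pick red berries', 'pick pale berries', 'chase small game']
print(choose(options, remembered_payoff))   # 'pick red berries'
# If the red berries in *this* valley happen to be poisonous, the remembered
# score is inapplicable -- possibly fatally so.
```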
Is it surprising that consciousness lags behind action? Not to me it isn't.
In the history of the human animal the ability to act preceded modern consciousness.
What seems more likely: that evolution inserted a step before that which worked well previously, or that it added an extra step afterward?
The Question of Agency
Yes, there is the sense of agency: “I am in conscious control of what I am doing.” But recent scientific data is showing that this is an illusion. It might be more accurate to say that when I do something I recognize that I did it. That is to say, I cannot see the processes that preceded the action, but they are consistent with my model of self. I behave in the way I'm known to behave and don't surprise myself with seemingly random choices.
It's inevitable that what the consciousness transcribes will have a great affinity for the existing model of self. Since that transcription is accorded great significance (to ensure memorization), it will dominate my sensation. That is to say, I have the strong sensation that I am addressing the model of self. I feel like me.
This brings us to the illusory nature of free will. Many years ago, just after high school, in fact, the following sentence popped into my head:
Free will is the sensation we experience at the interface between what we were and what we are becoming.
I wasn't sure that this was right, but I was loath to change a single word. Now, more than 3 decades later, I suspect that I was on to something. Note the key word in that sentence: sensation. It is not being claimed that free will is real, only that it feels real.
Can I summarize the foregoing? Well, here's a first attempt. It's difficult to distinguish between consciousness and actual agency because we do what we would have done if consciousness had actually been in control.
If you don't believe that, try jumping off a cliff. Unless you're suicidal, you'll be prevented from doing it. Please don't test this, though; you might slip. Instead, consider this: in many suicides using guns the police find bullet holes in the walls. The person would jerk away the gun at the last possible instant. No matter how much they wanted to die, they didn't want to die. This is not a paradox, just a mix-up between processes.
—————
Part Two of this series of five articles about consciousness can be found here.
Carlin'
“So who stole the money from the safe?”
“It was Allan. Or Bob.”
“I've been told Carl was also there. Maybe he stole the money.”
“No, that's impossible.”
“How so?”
“Carl wouldn't do something like that.”
“How can you know that?”
“Didn't you see the memo? It says Carl is as honest as the day is long.”
“You mean the memo ... with Carl's signature on it?”
“Yes, that's the one.”
“How can you believe a memo about Carl written by Carl?”
“Didn't you see the other memo? The one that says the first memo is reliable?”
“Who wrote that one?”
“Carl. And since he's as honest as the day is long...”
“How can I possibly judge if that's true? I've never even met Carl!”
“Oh, I've heard he's wonderful.”
“You've never met him either?”
“In a way I have. Just knowing how wonderful Carl is lifts my spirits.”
“Has it occurred to you that maybe Allan or Bob made up Carl?”
“Why would they do that?”
“So they could rob the safe, maybe.”
“Don't be silly. Carl wouldn't let somebody just make him up!”
“But if he doesn't exist...”
“Haven't you seen the memo that explicitly says he does exist?”
“Signed by … Carl?”
“Well, of course! You don't think I'd believe an unsigned memo, do you?”