Where God comes from
Apr. 27th, 2006 11:23 am
A recent post on cvirtue's journal, as well as a conversation with kestrell last night, got me thinking again about one of the Important Things I Know. At least, I think it's Important, and I think very few other people know about it. I'm not sure if I've mentioned it here before or not, but it's worth bringing up occasionally anyways, in order to let other people know about it.
I think I know where God comes from. Lots of people have explanations for the social/evolutionary reasons why organized religions prosper once founded, but few have tried to explain where they come from in the first place. Most religions are ultimately sourced in one or more people having what we call a "religious experience". This phenomenon is not well understood. Why do some people seem to have direct experience of something so utterly unlike everyday existence? Why are those experiences so similar in broad strokes, and yet so different in the specific details? And why do so many of those who experience them go on to *found* religions? I think I know.
I'm going to start with what may seem like an irrelevant bit of Computer Science history: The Halting Problem. One of the problems that computer programs have always been prone to is the "infinite loop", where a program runs out of control and never stops to produce a result. Early computer researchers thought that they might be able to avoid this situation by making a "supervisor" program that watched other programs while they ran, which could detect the presence of any infinite loops, and would terminate them before they got out of control. Unfortunately, it turned out that the creation of a perfect "supervisor" program like that is logically *impossible*. There is no general method that can tell, ahead of time, whether an arbitrary computer program will halt. This is taught to all novice CS students, and is called The Halting Problem.
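(For the curious, here's a minimal sketch of *why* the perfect supervisor is impossible, in the spirit of Turing's original argument. The `halts()` predictor below is purely hypothetical; everything else is ordinary Python used only as illustration.)

```python
# Suppose, hypothetically, someone hands us a perfect predictor:
#   halts(program, data) -> True if program(data) eventually stops, else False.

def paradox(program):
    """Do the opposite of whatever halts() predicts about running
    `program` on its own source."""
    if halts(program, program):   # predicted to halt?
        while True:               # ...then loop forever instead
            pass
    else:
        return                    # predicted to loop forever? ...then halt at once

# Now ask: does paradox(paradox) halt?
# If halts() answers "yes", paradox loops forever; if it answers "no",
# paradox halts immediately. Either way the predictor is wrong,
# so no perfect halts() can exist.
```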
When I first heard about the Halting Problem, as a student, I had a typically rebellious attitude towards it. "OK, so you can't make a *perfect* supervisor program. But you should be able to make one that is arbitrarily *close* to perfect. You could make one that is good enough for 99.9% of cases that actually happened."
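To make that student-era intuition concrete, here's a rough sketch (my own illustration, not anything from a real system) of a "good enough" supervisor: run the untrusted work in a separate process and kill it if it blows past a time budget. It can't *prove* anything about halting, but it catches the runaway cases that actually matter.

```python
import multiprocessing

def run_with_watchdog(target, args=(), budget_sec=2.0):
    """Run `target` in its own process; kill it if it exceeds the time budget.
    An imperfect supervisor: some slow-but-finite programs get killed unfairly,
    but every runaway loop gets stopped."""
    worker = multiprocessing.Process(target=target, args=args)
    worker.start()
    worker.join(budget_sec)
    if worker.is_alive():
        worker.terminate()            # give up and assume "infinite loop"
        worker.join()
        return "killed: suspected infinite loop"
    return "finished normally"

def suspicious():
    while True:   # an obvious runaway loop
        pass

if __name__ == "__main__":
    print(run_with_watchdog(suspicious, budget_sec=1.0))   # -> killed: suspected infinite loop
```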
Later on, I came to believe the Strong AI Hypothesis. This is the idea that the human mind is "computable". That is, there's nothing that goes on inside your head that couldn't (in principle at least) be duplicated by a sufficiently powerful and well-programmed computer program.
If the mind *is* like a program, then the mind must be subject to the same sorts of problems that programs face. Such as, for example, infinite loops. Obviously, falling into an infinite loop would effectively end the life of that mind. This suggests that there would be *huge* evolutionary pressure to come up with a solution, like the aforementioned "supervisor" program. As previously discussed, no *perfect* solution is possible. But evolution doesn't *need* perfection. It just needs "good enough to improve my reproductive fitness". And that much is certainly achievable.
Backing away from the CS perspective, let's examine things from a human perspective for a bit. What would an infinite loop *feel* like, subjectively? Well, it would be a series of repetitious thoughts, circling back upon themselves. I think it would feel a lot like what many people refer to as "existential angst". Many of you may have experienced this. You start by asking "Why am I here?" If you come up with an answer X, you then typically ask "OK, so why X?" Eventually, most people get into a loop like "X must cause Y, but Y must cause X, but X must cause Y, but but but..."
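If you like, the "X because Y, Y because X" spiral has a direct programming analogue: mutual recursion with no base case. The snippet below is just that analogy made literal (and, conveniently, Python's own stack limit ends up playing the role of the crude first-level "supervisor" discussed next).

```python
import sys

def why_x():
    return why_y()    # "Why X? Because Y."

def why_y():
    return why_x()    # "Why Y? Because X."

try:
    why_x()           # in principle this circles forever...
except RecursionError:
    # ...but the interpreter's stack limit steps in and aborts the loop:
    # a crude built-in supervisor that says "enough, stop asking".
    print("loop aborted at recursion limit", sys.getrecursionlimit())
```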
Many people have that sort of experience at times in their lives. Most of them manage to stop thinking those thoughts in some fairly straightforward way. A relatively simple internal "supervisor" function suffices to get them refocused on the daily business of living. But there are a *few* people who (for whatever reason) refuse to give up asking questions until they get an answer. The first-level "supervisor" can't derail them, and they keep looping. So there's pressure to evolve a more complex, higher-level "supervisor".
Let's follow a typical example of such a person. He keeps getting distracted by everyday life, but he doesn't want to be distracted, he wants to follow the chain of reasoning to its (unfortunately nonexistent) end. He decides to head out into a (literal or figurative) desert, away from other people, where he can think uninterrupted. Once he's alone, he thinks, and he thinks, and he thinks, for several days. Eventually, he has a vision! A supernatural authority figure appears before him. This vision fills him with happiness and satisfaction. It reveals the secrets of the universe to him. Later, after the vision has faded, he may not be able to coherently explain these revelations to others, though he usually tries. BUT (and this is the critical point) he remains FIRMLY convinced that these revelations are true, and that there is no longer any need to ask further bothersome questions (the kind that tend to spiral into infinite loops).
The supernatural authority figure is the subjective experience of what I believe to be a high-level meta-program designed to abort infinite loops. "God" is just one aspect of the arbitrarily-good "supervisor" program that I described above. And it *is* very effective. The subject is not only booted out of his current loop, but put in a state where he is much less likely to ever enter a new one. Moreover, he often has *greatly* increased reproductive fitness after the experience. Many such people gain huge respect from their communities, in their new role as "prophet", or "enlightened one", or whatever. Many of them found entire new religions, because they are so convinced that they have a perfect understanding of everything.
Curiously, believing this does not prevent me from having spiritual beliefs. Alan Moore said it well: "Now, the rationalist view of all magical encounters is probably that all apparent entities are in fact externalised projections of parts of the self. I have no big argument with that, except that I'd hold the converse to be true as well: we are at the same time externalised projections of them."