The So-called “Hard Problem”

The so-called “hard problem” is the problem of consciousness: explaining how subjective experience arises. I call it the so-called problem because it is not a real problem at all. Easy problems are easy because all that is required for their solution is to specify a mechanism that can perform the function in question.

There are two assumptions that create the problem:

The first assumption is that a human being’s consciousness and self-awareness are somehow magical or special.

The second is that an object (real or imagined) leading to experience can be separated from the observer.

Let’s tackle the first issue. There is nothing special about consciousness or a human being’s self-awareness. Small planarian flatworms move away from light. Immune cells can differentiate self from other to a certain extent, and so can a human being. Therein lies the rudimentary form of awareness and self-awareness. It came about through natural selection as a favorable survival mechanism.

The reason human awareness is considered special is that its proponents get caught up in dichotomies that create a false view. The dichotomy of mind and body, the dichotomy of self and society, and the dichotomy of observer and observed are the major assumptions on which they rely, and thus they fundamentally fail to grasp the nature of consciousness. As long as these artificial walls are erected, the nature of consciousness will remain obscured.

Let’s consider how to design a sentient and conscious computer, to change the hard problem into an easy problem. First, the computer must have innate desires and tendencies; second, it must be able to write, modify, and execute programs to obtain those desires. The sum of these two would lead it to do some significant things, but it would hardly make it conscious. It also needs the ability to judge and respond to whether or not its desires are fulfilled, so it needs some mechanism to evaluate its experience. There lie the seeds of awareness. It doesn’t have to be complex: a baby cries when it is hungry, but when the stomach tells the brain that it is full, the baby stops, and the brain promotes a feeling of well-being. We need feelings for our computer, and designing these is the hard part. The feelings that arose over millions of years of evolution are feedback mechanisms for the brain; they aid survival, but only under the conditions prevalent in the organism’s environment. So the only way to design a conscious computer is to design it in conjunction with (or better yet, as a derivative of) a variably rich environment.
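As a purely illustrative sketch (every name and formula below is hypothetical, not a real design), the three ingredients just described, an innate drive, actions that pursue it, and a feedback mechanism that evaluates whether it is fulfilled, could be arranged as a simple loop:

```python
class ProtoAgent:
    """Toy sketch of the three ingredients: an innate drive, actions
    that pursue it, and a feedback signal that evaluates the outcome.
    All names and formulas are hypothetical illustrations."""

    def __init__(self):
        self.hunger = 1.0       # innate drive: 1.0 = starving, 0.0 = satisfied
        self.wellbeing = 0.0    # crude stand-in for a "feeling" of well-being

    def act(self, environment):
        # Pursue the drive: consume whatever food the environment offers.
        eaten = min(environment.get("food", 0.0), self.hunger)
        self.hunger -= eaten
        return eaten

    def evaluate(self):
        # Feedback: like a full stomach telling the brain to stop crying.
        self.wellbeing = 1.0 - self.hunger
        return self.wellbeing


agent = ProtoAgent()
agent.act({"food": 0.7})
print(agent.evaluate())  # well-being rises as the drive is satisfied
```

The point of the sketch is only that evaluation closes the loop: without `evaluate`, the agent acts on its desires but never registers whether they were met.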

We are not there yet. So far we have a computer searching for things in an environment where certain things are good for its survival, with a feedback mechanism that tells it what is good and what is not. Two more extremely important components for awareness of a higher order are a society and a memory. We need models and teachers for our young computer. It should also have a memory that recreates its contents (remakes them) when a similar stimulus is encountered.

The reason for models is that it is culture which creates the mind and the individual’s identity; otherwise the question of awareness doesn’t arise. The teachers of this young computer must, over a period, show it how it should and shouldn’t act, and create positive and negative feedback. Tied to this, the computer should also be able to re-experience a stored memory or memories when stimuli and organism states create a link to that memory. When this happens is determined by a series of probabilities: imperfect sensory data creating an excitation in an imperfect memory, while each part of the system affects the other. This is part of the reason it is hard to maintain concentration on a task or stay in the present; but if the combination of processing power and enjoyment of (or interest in) the task is at the right ratio, it should suppress the ability of memories to surface, and focus can be achieved.
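The probabilistic surfacing of memories described above can be put in toy form. The similarity measure and the suppression formula here are assumptions invented purely for illustration, not claims about how a brain actually computes:

```python
def similarity(stimulus, memory):
    """Crude overlap between two feature sets (imperfect on both sides)."""
    if not stimulus or not memory:
        return 0.0
    return len(stimulus & memory) / len(stimulus | memory)


def surfacing_probability(stimulus, memory, interest, capacity):
    """Chance that a stored memory is re-excited by a stimulus.
    Focus (interest times processing capacity) suppresses surfacing,
    mirroring the essay's point about concentration. The formula is
    a hypothetical illustration."""
    focus = interest * capacity
    return similarity(stimulus, memory) * (1.0 - focus)


memories = [{"milk", "warmth"}, {"light", "noise"}]
stimulus = {"milk", "bottle"}
for memory in memories:
    print(surfacing_probability(stimulus, memory, interest=0.8, capacity=0.9))
```

With high interest and capacity, even a related memory surfaces only rarely, and an unrelated one not at all; drop the focus term toward zero and related memories intrude freely.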

A baby, when denied its mother’s milk, starts to fathom that what it needs is something it can’t always get instantaneously. Later, with reinforcement, it starts to understand separate selves, and with language it gains a name for the otherness.

One last component, which should arise naturally, is the indeterminacy of the exact result. The computer would remember imperfectly how it felt but would be unable to quantify it, because the variables are too numerous; that is, the self and the environment are too rich. So the results of its interactions would be stored in some truncated fashion, preserving the essential features of the interactions: the focus is remembered but not necessarily all the details. The result should have a bearing on its future conduct and also on its similar memories of the past.
If a computer doesn’t have these abilities, it might do a clever impersonation of awareness, but it won’t have awareness of the human order.
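The truncated-storage idea, keep the focus and the essential features, drop the rest, and remember the feeling only roughly, can also be sketched. The salience scores and field names below are hypothetical stand-ins:

```python
def store_memory(interaction, max_features=3):
    """Store an interaction in truncated form: the focus survives,
    only the most salient features are kept, and the felt valence
    is remembered imprecisely. The salience scoring is a made-up
    stand-in for whatever the organism actually does."""
    ranked = sorted(interaction["features"].items(),
                    key=lambda pair: pair[1], reverse=True)
    return {
        "focus": interaction["focus"],
        "features": dict(ranked[:max_features]),
        "felt_valence": round(interaction["valence"], 1),  # imperfect recall
    }


interaction = {
    "focus": "unfamiliar scent",
    "valence": 0.62,
    "features": {"scent": 0.9, "room": 0.4, "time_of_day": 0.2, "clothing": 0.1},
}
print(store_memory(interaction))
```

The low-salience detail (`clothing`) is simply gone; any later recall can only work with what survived the truncation, which is one way the indeterminacy of the exact result arises.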

Let’s take a quick look at the second assumption: that qualia are somehow separately bound up within the mind, or exist as special, unexplainable states. The reality is that no experience, even the ones within our mind (which could not have arisen had there been nothing outside), could exist without the environment in which we exist. The only qualia that might exist have to be hard-wired responses to very particular stimuli. This doesn’t mean they can’t differ from person to person. In fact, since they are parts of a particular anatomical construction and are affected by other regulating biological agents (serotonin, dopamine, etc.), by memory, and by external conditions, it is safe to say that it is nigh impossible to experience the same qualia twice. The only reason one can compare one experience to another, within oneself or with someone else, is the similarity of environment and organism. Even then we have to generalize.

The dichotomy of organism and environment is what makes physicalists unable to make sense of consciousness. Their philosophical position is that everything which exists is no more extensive than its physical properties. The idea is reasonable, but it is a kind of reductionism that must be used with caution when we deal with consciousness. Consciousness is essentially a mechanism for testing, creating, and maintaining relationships between the environment and the organism. The essential premise of the physicalists is correct, but their notion that consciousness is the physical property of an isolated object is flawed.

Therefore, I suggest that consciousness is not a property merely of the organism. It is a property of an organism’s complex mental and biological states interacting with a complex external environment. Is it ever quantifiable? Not very likely, unless we can monitor each memory of a human being and see how much, and in what transformed form, it comes up when encountering new or old stimuli, and how it is affected by the organism’s existing state and the exact components of the interaction and the environment. That is not a simple task, and at least with our current technology the order of complexity is too high. But perhaps in a far-off future it will be possible, and then the mystery of why someone felt flushed when they smelled an unfamiliar scent won’t be a mystery.

-Navjot Sandhu

Side note: [What about philosophical zombies? Philosophical zombies are postulated as beings that have behavior and anatomy identical to aware beings but are not aware; they are meant to refute the physicalist position that, given identical properties, an identical object would result. The reason P-zombies can’t exist, at least naturally, is the requirement that they behave identically to aware beings. The most obvious problem is that this requires a P-zombie to know how to respond to unfamiliar phenomena in a manner identical to how an aware being would respond. To do such a thing, a P-zombie would have to have a catalogue of all responses built in, and would actually have to know all possible situations, because, unlike aware beings, it would not be able to improvise: since it cannot know new things (awareness), it can only pretend to improvise. In essence it would have to be an omniscient automaton, a being only magically possible, and it would certainly be impossible to fit that capability within an identical human biological structure.]

This entry was posted in Philosophy, Science.
