An excerpt from the foundational text of neurophenomenology: Varela, Thompson, and Rosch’s The Embodied Mind

cognitive science, Eleanor Rosch, embodiment, Evan Thompson, Francisco Varela, history of neurophenomenology, introspection, neurophenomenology, The Embodied Mind

It is very, very gratifying to see interest in neurophenomenology increasing. Welcome! Exciting things are happening. If you feel like you could make a contribution to the field, do it! We are still in the early phase, though probably at the “end of the beginning”. In 1996 you could find about three references online (in mid-2013, Google shows 35,400 results).

Around then I got a copy of neurophenomenologist/cognitive scientist Francisco Varela, philosopher Evan Thompson, and cognitive psychologist Eleanor Rosch’s The Embodied Mind: Cognitive Science and Human Experience. To this day I am struck by the lucidity of the writing, the patient willingness to explore the virtues of opposing viewpoints, and especially the depth of the challenge to mainstream cognitive neuroscience and psychology. The basic idea is that the science of cognition and the brain needs to somehow reckon with human experience, in all its phenomenological, fleshy, ecologically situated complexity. The science of human cognition requires an account of how life seems to us, how it feels, what it means. Not doing so amounts to a shortcut, though an understandable one, given the difficulties routinely encountered. The authors painstakingly present the case for why failing to include the role of the evolutionarily developed phenomenological body and the meaningful, experiential, existential dimensions will hamper scientific accounts of cognition and the brain. Varela, Thompson, and Rosch present a radical challenge to the idea that the mind is best modeled based on data and measurements taken only from the outside, or purely objectively. Cognitive neuroscience describes cognition and consciousness as machinery emerging from the hardware of the brain; Varela, Thompson, and Rosch carefully explore the benefits of this view, but opt for a radical alternative. I am convinced it is the foundational and definitive work in neurophenomenology. Interestingly, Daniel Dennett, a staunch defender of cognitivist orthodoxy, had substantive criticisms but went on to say: “the authors find many new ways of putting together old points that we knew were true but didn’t know what to do with, and that in itself is a major contribution to our understanding of cognitive science.”

The term “neurophenomenology” does not appear in this book. As I have mentioned elsewhere, the term emerges around 1990 from the work of Charles Laughlin (though there seems to be one mention in a hard-to-find publication from 1988). I had considered directly contacting Varela around 1996 to convince him of the helpfulness of the term “neurophenomenology”, and I must admit to an utter dopamine blast of pleasure when around that time I found his 1996 paper “Neurophenomenology: A Methodological Remedy for the Hard Problem”. Kismet! It was an exciting time, and helped push me towards doing a PhD on one narrow aspect of clinical neurophenomenology: modeling how accurate patients are at reporting on their cardiac rhythm states, and how the brain enables both knowledge of and mistaken beliefs about heartbeats.

With new people showing an interest in and perhaps coming into this field, we might as well make sure to examine the core text: The Embodied Mind. There is a copy online and here is an excerpt, but I highly recommend getting a physical copy.
Here is a section entitled The Retreat into Natural Selection, from Chapter 8: Enaction: Embodied Cognition (for what it’s worth, in my last class as a PhD student, I had my colleagues in David Buss‘ Evolutionary Psychology seminar read and discuss this chapter, and they hated it!).

“In preparation for the next chapter, we now wish to take note of a prevalent view within cognitive science, one which constitutes a challenge to the view of cognition that we have presented so far. Consider, then, the following response to our discussion: “I am willing to grant that you have shown that cognition is not simply a matter of representation but depends on our embodied capacities for action. I am also willing to grant that both our perception and categorization of, say, color, are inseparable from our perceptually guided activity and that they are enacted by our history of structural coupling. Nevertheless, this history is not the result of just any pattern of coupling; it is largely the result of biological evolution and its mechanism of natural selection. Therefore our perception and cognition have survival value, and so they must provide us with some more or less optimal fit to the world. Thus, to use color once more as an example, it is this optimal fit between us and the world that explains why we see the colors we do.”

We do not mean to attribute this view to any particular theory within cognitive science. On the contrary, this view can be found virtually anywhere within the field: in vision research, it is common both to the computational theory of Marr and Poggio and to the “direct theory” of J. J. Gibson and his followers. It is prevalent in virtually every aspect of the philosophical project of “naturalized epistemology.” It is even voiced by those who insist on an embodied and experientialist approach to cognition. For this reason, this view can be said to constitute the “received view” within cognitive science of the evolutionary basis for cognition. We cannot ignore, then, this retreat into natural selection.

Let us begin, once again, with our now familiar case study of color. The cooperative neuronal operations underlying our perception of color have resulted from the long biological evolution of the primate group. As we have seen, these operations partly determine the basic color categories that are common to all humans. The prevalence of these categories might lead us to suppose that they are optimal in some evolutionary sense, even though they do not reflect some pregiven world. This conclusion, however, would be considerably unwarranted. We can safely conclude that since our biological lineage has continued, our color categories are viable or effective. Other species, however, have evolved different perceived worlds of color on the basis of different cooperative neuronal operations. Indeed, it is fair to say that the neuronal processes underlying human color perception are rather peculiar to the primate group. Most vertebrates (fishes, amphibians, and birds) have quite different and intricate color vision mechanisms. Insects have evolved radically different constitutions associated with their compound eyes.

One of the most interesting ways to pursue this comparative investigation is through a comparison of the dimensionalities of color vision. Our color vision is trichromatic: as we have seen, our visual system comprises three types of photoreceptors cross-connected to three color channels.
Therefore, three dimensions are needed to represent our color vision, that is, the kinds of color distinctions that we can make. Trichromacy is certainly not unique to humans; indeed, it would appear that virtually every animal class contains some species with trichromatic vision. More interesting, however, is that some animals are dichromats, others are tetrachromats, and some may even be pentachromats. (Dichromats include squirrels, rabbits, tree shrews, some fishes, possibly cats, and some New World monkeys; tetrachromats include fishes that live close to the surface of the water like goldfish, and diurnal birds like the pigeon and the duck; diurnal birds may even be pentachromats).  Whereas two dimensions are needed to represent dichromatic vision, four are needed for tetrachromatic vision (see figure 8.6), and five for pentachromatic vision. Particularly interesting are tetrachromatic (perhaps pentachromatic) birds, for their underlying neuronal operations appear to differ dramatically from ours.


Figure 8.6 Tetrachromatic vs. trichromatic mechanisms are illustrated here on the basis of the different retinal pigments present in various animals. From Neumeyer, Das Farbensehen des Goldfisches.

When people hear of this evidence for tetrachromacy, they respond by asking, “What are the other colors that these animals see?” This question is understandable but naive if it is taken to suggest that tetrachromats are simply better at seeing the colors we see. It must be remembered, though, that a four-dimensional color space is fundamentally different from a three-dimensional one: strictly speaking, the two color spaces are incommensurable, for there is no way to map the kinds of distinctions available in four dimensions into the kinds of distinctions available in three dimensions without remainder. We can, of course, obtain some analogical insights into what such higher dimensional color spaces might be like. We could imagine, for example, that our color space contains an additional temporal dimension. In this analogy, colors would flicker to different degrees in proportion to the fourth dimension. Thus to use the term pink, for example, as a designator in such a four-dimensional color space would be insufficient to pick out a single color: one would have to say rapid-pink, etc. If it turns out that the color space of diurnal birds is pentachromatic (which is indeed possible), then we are simply at a loss to envision what their color experience could be like.

It should now be apparent, then, that the vastly different histories of structural coupling for birds, fishes, insects, and primates have enacted or brought forth different perceived worlds of color. Therefore, our perceived world of color should not be considered to be the optimal “solution” to some evolutionarily posed “problem.” Our perceived world of color is, rather, a result of one possible and viable phylogenic pathway among many others realized in the evolutionary history of living beings.

Again, the response on behalf of the “received view” of evolution in cognitive science will be, “Very well, let us grant that color as an attribute of our perceived world cannot be explained simply by invoking some optimal fit, since there is such a rich diversity of perceived worlds of color. Thus the diverse neuronal mechanisms underlying color perception are not different solutions to the same evolutionarily posed problem. But all that follows is that our analysis must be made more precise. These various perceived worlds of color reflect various forms of adaptation to diverse ecological niches. Each animal group optimally exploits different regularities of the world. It is still a matter of optimal fit with the world; it is just that each animal group has its own optimal fit.”

This response is a still more refined form of the evolutionary argument. Although optimizations are considered to differ according to the species in question, the view remains that perceptual and cognitive tasks involve some form of optimal adaptation to the world. This view represents a sophisticated neorealism, which has the notion of optimization as its central explanatory tool. We cannot proceed further, then, without examining more closely this idea in the context of evolutionary explanations. We cannot attempt to summarize the state of the art of evolutionary biology today, but we do need to explore some of its classical foundations and their modern alternatives.”
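As an aside for the quantitatively inclined, the “without remainder” point is, at bottom, linear algebra: any mapping from a four-dimensional receptor space down to a three-dimensional one has to collapse some distinctions. Here is a minimal sketch of that point in Python; the matrix and stimulus values are invented for illustration, not real photoreceptor sensitivities:

```python
import numpy as np

# Hypothetical linear map from a 4-receptor (tetrachromatic) response space
# down to a 3-receptor (trichromatic) one. Any such 4->3 linear map has a
# nontrivial null space, so some tetrachromatic distinctions must vanish.
P = np.array([
    [1.0, 0.0, 0.0, 0.5],
    [0.0, 1.0, 0.0, 0.5],
    [0.0, 0.0, 1.0, 0.5],
])

# A direction the projection cannot "see": a vector in the null space of P.
null_dir = np.array([-0.5, -0.5, -0.5, 1.0])
assert np.allclose(P @ null_dir, 0.0)

# Two stimuli that differ only along that direction...
a = np.array([0.4, 0.3, 0.2, 0.1])
b = a + 0.05 * null_dir

# ...are distinct in the 4-D space but identical after projection to 3-D.
print("4-D responses differ:   ", not np.allclose(a, b))      # True
print("3-D responses identical:", np.allclose(P @ a, P @ b))  # True
```

None of this says anything about what tetrachromatic experience is like, which is exactly Varela, Thompson, and Rosch’s point: the spaces are incommensurable, not merely bigger or smaller.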

History of the development of neurophenomenology pt.II-cognitivism, neurology, and psychology

cognitive science, Francisco Varela, medicine, neurophenomenology

(Part I is here, and part III is here)

In certain respects, the view that embodied experience is crucial to understanding the mind and brain reached a nadir in the period after World War II, at least within psychology. Behaviorism had redefined psychology as an “objective” science with no need to refer to consciousness or phenomenology. Phenomenological research continued among the German gestalt psychologists, but it was not until after the war that clinically oriented humanistic psychology explicitly articulated the need for more holistic, “person-centric” perspectives emphasizing existential concerns: the search for meaning, the experience of health and illness, emotions, and consciousness.

While many philosophers in Europe continued to develop phenomenology, analytic philosophy was increasingly concerned with logical positivism, which held that many traditional problems could be resolved through formal logic, and that those not approachable in this way were suspect. Formal logic reached an apotheosis of sorts with the advent of computers, a class of systems having internal memory storage and symbolic-logical operations, and with them came a number of seminal figures whose work transformed models of mind and brain. In particular, Norbert Wiener‘s (1894-1964) meta-discipline of cybernetics, Claude Shannon‘s (1916-2001) information theory, Alan Turing‘s (1912-1954) and John von Neumann‘s (1903-1957) canonical work on computation, and Jean Piaget‘s (1896-1980) theories of the stages by which infants and small children develop perception and language all resulted in an explosion of new perspectives on cognition, language, memory, perception, and problem-solving. By the late 1950’s the overlapping field(s) of artificial intelligence (AI) and cognitive science had gotten the attention of researchers in psychology, linguistics, philosophy of mind, neuroscience, anthropology, therapy, and organizational management. Herbert Simon (1916-2001) modeled human problem solving in the face of uncertain information, and along with Allen Newell (1927-1992) developed automated theorem-provers and chess-playing programs. Noam Chomsky‘s investigation of the symbolic-logical rules underlying grammar and syntax generated an attack on behaviorist theories of language as environmentally conditioned behavior, the flaws of which dramatically came to a head in the North Texas Symposium on Language in 1959.

While the door to explaining psychological phenomena in terms of mental categories and concepts had been re-opened, these new models generally formulated explanations in terms of impersonal information-processing and rule-based symbolic-logical theories of non-conscious aspects of the mind. These new “cognitivists” had absorbed certain scruples from the behaviorists, and typically disdained concepts such as “consciousness” in their models of mental processes. Cognitivism remained “system-centric”, not person- or body-centered, and focused on reducing mental activity to computational, information-processing, and representational processes. There was a general lack of interest in using first-person, introspection-based methods such as those of William James or Edmund Husserl, though cognitivism and behaviorism alike asked subjects for verbal reports within experiments.

However, clinical neurologists continued to advance an approach to psychological and cognitive phenomena that reflected a richer and broader understanding of the mind. The First and Second World Wars provided a huge pool of subjects with specific localized lesions and corresponding deficits in memory, speech, motion, etc. The Russian neurologist Alexander Luria (1902-1977) spent about 30 years with a patient, the soldier Zazetsky, who sustained a bullet wound to his left occipito-parietal cortex. Zazetsky’s struggle to use journal writing to cope with being unable to remember new events is described in The Man With a Shattered World (1972) as a fight “to live, not merely exist.” Zazetsky wrote: “I’m in a fog all the time…. All that flashes through my mind are images…hazy visions that suddenly appear and disappear.”


Taking the long view of the development of a science of the mind, the praxis-driven demands of the clinic somewhat balanced the behaviorist and cognitivist disavowal of consciousness as a research topic. Focusing on the struggle of a brain-injured patient to live meaningfully meant that at least a small part of the ever-more fragmented field of psychology overtly or implicitly emphasized embodied and conscious aspects of cognition. It should be emphasized that a division of labor was in effect: clinicians dealt with people, while cognitive scientists dealt with systems. As neurologists and psychologists published case studies, the more theoretically minded extrapolated from these reports to highlight an understanding of human mental functioning that did not exclude consciousness and the existential, personal, meaningful dimensions of experience that are grounded in the lived body.

Across the ocean, in France, while structuralism began to dominate intellectual life after World War II, developments in phenomenological research continued apace. Most importantly, the philosopher Maurice Merleau-Ponty (1908-1961) analyzed and critiqued the phenomenology of the philosopher/mathematician Edmund Husserl. Foregoing Husserl’s hugely ambitious project of providing the most rigorous epistemological foundation possible for science and philosophy through investigations into experience, Merleau-Ponty attempted to reintegrate the penetrating Husserlian observation and analysis of conscious phenomena into an account of how consciousness is grounded and lived out bodily.

Phenomenologist Maurice Merleau-Ponty


This change of emphasis allowed a bridge towards grasping how the lived body is related to the objectively described physical body of physiology, behaviorism, and brain science. Works such as The Structure of Behavior and The Phenomenology of Perception offer tantalizing hints that, had Merleau-Ponty lived a longer life, neurophenomenology might have emerged decades before the 1990’s. Merleau-Ponty articulated a post-Cartesian view of the mind that subverted the subject-object split. He used the notion of co-constitutionality to grapple with the enigmatic coupling and engagement of the embodied mind with the world. Two quotes from The Phenomenology of Perception are apropos:

“Inside and outside are inseparable” (pg. 407)

“Insofar as I have hands, feet, a body, I sustain around me intentions which are not dependent on my decisions and which affect my surroundings in a way that I do not choose” (pg. 440)


The psychiatrists and psychologists who attempted to apply the insights of Husserl, Martin Heidegger (1889-1976), and especially Merleau-Ponty developed what became known as phenomenological psychology. Heidegger gave lectures to physicians about ontology, while Ludwig Binswanger (1881-1966) and Medard Boss (1903-1990) attempted to apply his analysis of Dasein (“being-there”) to clinical contexts. Phenomenological psychology showed a pronounced clinical influence from a key synthesizer of the neurological and phenomenological research traditions: the neuropsychiatrist Erwin Straus (1891-1975), who was possibly the first neurophenomenologist.

Erwin Straus, MD: the first neurophenomenologist?


He is quoted in Man, Time, and World: Two Contributions to Anthropological Psychology (1982) as stating:

The physiologist, who in the everyday world relates behavior and brain, actually makes three kinds of things into objects of his reflection: behavior, the brain as macroscopic formation, and the brain in its microscopic structure and biophysical processes. From the whole - the living organism - the inquiry descends to the parts: first of all to an organ - the brain - and finally to its histological elements. Statements concerning the elementary processes acquire their proper sense only in reference back to the original whole.

Probably the best known exponent of a phenomenological approach to clinical psychology and psychiatry was R. D. Laing (1927-1989), who wrote a classic case-study analysis of the experience of schizoids in The Divided Self: An Existential Study in Sanity and Madness (first published in 1960). In it he describes one patient:

“Julie’s self-being had become so fragmented that she could best be described as living a death-in-life existence in a state approaching chaotic nonentity.

In Julie’s case, the chaos and lack of being an identity were not complete. But in being with her one had for long periods that uncanny ‘praecox feeling’ described by the German clinicians, i.e. of being in the presence of another human being and yet feeling that there was no one there. Even when one felt that what was being said was an expression of someone, the fragment of a self behind the words or actions was not Julie. There might be someone addressing us, but in listening to a schizophrenic, it is very difficult to know ‘who’ is talking, and it is just as difficult to know ‘whom’ one is addressing.”

In the 1970’s and early 1980’s, neurologists like Oliver Sacks continued in the neuropsychological tradition of Luria, and documented the existential struggles of patients with brain disorders. In 1985 he produced an eminently readable, phenomenologically rich classic of neuropsychology: The Man Who Mistook His Wife for a Hat. He wrote persuasively that while there are indeed computer-like aspects of the brain, the cognitive, computationalist or information-processing model nonetheless does not address the full spectrum of human psychological reality (pg. 20):

“But our mental processes, which constitute our being and life, are not just abstract and mechanical, but personal, as well.”


The European-flavored, humanistic field of phenomenological psychology (also called existential-phenomenological psychology) offers an alternative for researchers dissatisfied with mechanistic cognitivism, behaviorism, and physiological psychology. However, as far as I can tell, after the passing of Erwin Straus, phenomenological psychology has had little or no interest in cognitive neuroscience. The major exception I can find was in 1981, when the phenomenological/biophysiological psychologist Donald Moss and the cognitive neuroscientist Karl Pribram each wrote fascinating essays comparing brain science and phenomenology in the collection The Metaphors of Consciousness (Valle and von Eckartsberg, Eds). This is of historical interest as an early instance of an explicit dialogue between neuroscience and existential-phenomenology.


Pribram’s essay “Behaviorism, Phenomenology, and Holism in Psychology” pointed to the need for a broader, phenomenologically and neurobiologically informed approach to psychology (pg. 142):

“But there are limits to understanding achieved solely through the observation and experimental analysis of behavior. These limits are especially apparent when problems other than overt behavior are addressed, problems related to thought or to decisional processes, to appetitive and other motivational mechanisms, to emotions and feelings, and even to images and perception.”

and (pg. 146):

“Existential-phenomenological psychology has not, up to now, been very clear in its methods. I suggest that multidimensional analyses (factor analysis, principal components analysis, stepwise discriminant analysis) might serve well as tools to investigate the structure of experience-in-the-world.”
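To make Pribram’s suggestion a bit more concrete, here is a minimal sketch of one of the multidimensional analyses he names (principal components analysis), run on made-up questionnaire data; the items, subjects, and ratings are invented purely for illustration:

```python
import numpy as np

# Hypothetical ratings: rows are subjects, columns are experience items
# (e.g. "body felt heavy", "time dragged", "felt alert", ...), on a 1-7 scale.
ratings = np.array([
    [6, 5, 2, 6, 5],
    [5, 6, 1, 7, 6],
    [2, 2, 6, 1, 2],
    [3, 2, 5, 2, 3],
    [6, 6, 2, 5, 6],
    [1, 2, 7, 2, 1],
], dtype=float)

# Principal components analysis via the SVD of the mean-centered data.
centered = ratings - ratings.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)

explained = (s ** 2) / (s ** 2).sum()
print("variance explained per component:", np.round(explained, 2))
print("loadings of the first component: ", np.round(Vt[0], 2))
# A dominant first component would suggest a single dimension structuring
# these (invented) reports; whether that structure means anything is, of
# course, the phenomenological question.
```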

Moss lucidly analyzed the similarities and divergences between neuroscience and existential-phenomenology in an essay entitled “Phenomenology and Neuropsychology” (pg. 159):

“Pribram points to the role of the brain processes in “constructing” the world as perceived. Yet existential-phenomenology has also emphasized the “constituting functions” of the ego (Husserl), the constitutive role of the lived body (Merleau-Ponty), and the role of the human body and upright posture in articulating the world of sensory experience (Straus). Thus, neither school of thought naively recognizes a reality per se unaffected by the presence and condition of the organism.”

Such exchanges occurred on the margins of mind-science. By the 1960’s, the largely cold-war funded research program of Artificial Intelligence (AI) and growing interest in cognitive or information-processing approaches to problems in psychology and related fields had produced a “cognitive revolution”. Some brave cognitivists even made use of introspective techniques: Herbert Simon asked his subjects to verbally report on how they solved logic puzzles, much to the chagrin of the remaining orthodox behaviorists. The renewal of mentalistic language and the willingness to use introspective and verbal-report data was a robust challenge to the behaviorists, but over time a rapprochement ensued.

But what really allowed the scientific study of consciousness and experience to re-emerge was the success of theoretical and laboratory neuroscience. EEG data had been produced for years with good temporal but limited spatial resolution, but in the 1970’s and 1980’s an alphabet soup of new imaging technologies (CAT, PET, MRI, and more recently MEG) allowed neuroscientists to better “peek inside” the living brains of subjects in experiments. Progress in molecular biology, genomics, and biophysics in the postwar West allowed curious researchers to formulate models of emotions in chemical terms, such as the finding of endogenous opiates (or endorphins) and their receptors in the brain. The finding that nerve fibers connect with the organs of the immune system helped ground theories of the effect of emotions and beliefs on health, leading to the interdiscipline of psychoneuroimmunology. A growing industry synthesizing pharmaceutical products based on the molecular structure of receptor proteins has driven the growth of neuropharmacology and neuropsychopharmacology.

Some brain researchers looking for theoretical models of the mind found the information-processing/computationalist approach of the cognitivists limiting for understanding emotions and experience. Cognitive science itself had been shaken, moving from its early (late 1950’s-early 1960’s) successes to the gradual realization that many aspects of mind are not easily characterized as formal-logical, rule-based systems, as had been predicted by the phenomenologically-informed philosopher Hubert Dreyfus (1972) in What Computers Can’t Do, where he argued that rule-based, symbolic-logical, representationalist models of mind and language fail to deal with the radically embodied nature of cognition. Dreyfus’s argument was hotly rejected by prominent AI researchers, but later influenced Terry Winograd, among others.


For the most part, the insights of clinical neurologists and phenomenological psychologists were ignored in postwar cognitive science, which had a great overlap with computer science and Artificial Intelligence (AI). Indeed, cognitivists and AI engineers might profess agnosticism about the neurobiology of the mind, viewing brain “hardware” as the domain of other specialists. In the late 1950’s and through the 1960’s, cognitive science and Artificial Intelligence seemed to have revolutionary new insights. AI as engineering of useful artifacts overlapped with AI as cognitive modeling. An early era of exciting optimism eventually gave way to slow progress on “general purpose” problem solving. The limitations of the symbolic-logical, information-processing, and computationalist approach led others to develop the hybrid field of cognitive neuroscience.

Sometimes there were interesting discrepancies between abstract cognitive models and brain data: onetime “pure” cognitivist Stephen Kosslyn performed neuroimaging experiments on subjects who were asked to rotate mental objects. According to John McCrone’s report of Kosslyn’s work in Going Inside: A Tour ’Round a Single Moment of Consciousness, the resulting pattern of distributed activity across disparate brain regions was difficult to reconcile with the neat schematic Kosslyn had developed as an abstract cognitive model possessing a few modules for accomplishing aspects of the rotation operation. This lends credence to those who propose that cognitive science must be much more thoroughly integrated with the “gory details” of neuroscience, with the neural networks/connectionist camp serving as a conceptual bridge from brain to symbols and representations. Over time, the lack of interest in biology and the “implementation agnosticism” of some computationalist cognitive scientists has given way to modern cognitive neuroscience. A movement in the 1980’s to reform cognitive science and artificial intelligence along biologically-inspired and “subsymbolic” lines, known as connectionism, artificial neural networks, and parallel distributed processing, divides cognitivism to this day.

A pathbreaking (and for some, puzzling*) book appeared in the second half of the 1980’s that seemed to point the way to a synthesis of neurobiology, cognitivism, computer science, and phenomenology: Understanding Computers and Cognition: A New Foundation for Design by the AI and language-processing expert Terry Winograd and Fernando Flores:


The book proposed a phenomenologically-grounded understanding of how people in real-world environments use the systems that software designers build. It took inspiration from Humberto Maturana and Francisco Varela‘s idea of autopoiesis, a cybernetics-inspired, dynamical theory of organisms self-organizing their own structure by regenerating parts and by being coupled to their environment, until death. The brains of creatures do not represent features (such as colors) of objects external to them, as cognitivists typically assume. Rather, each ecologically-situated animal brings forth or co-constitutes a perceived world through evolutionarily-selected sensorimotor systems. Autopoiesis is a sort of post-Cartesian biology, and Maturana and Varela described it in 1981 as:

“a network of processes of production (transformation and destruction) of components which: (i) through their interactions and transformations continuously regenerate and realize the network of processes (relations) that produced them; and (ii) constitute it (the machine) as a concrete unity in space in which they (the components) exist by specifying the topological domain of its realization as such a network.”

While a cognitivist might recognize a consonance with cybernetics here, abandoning representationalism is very difficult for some. What other bridging concepts are there to relate brain and mind events? This is still an open issue.

As it turned out, a sophisticated alternative to cognitivism was on the way: Walter Freeman, Francisco Varela, and others have offered a post-representationalist approach to consciousness, cognition, and the brain based in dynamical systems theory. The undercurrents of dissatisfaction with understanding the mind as information-processing, rule-based symbolic-logical procedures, and “computations over representations” emerged in the 1990’s as embodied cognitive science and neurophenomenology.

(Part I is here, and part III is here)

* When asked about Understanding Computers and Cognition, a doctoral student in psychology I knew could only shake his head, raise his eyebrows, and say “that’s a weird book”.

How accurate are people at knowing what is happening inside their bodies?

cognitive science, embodiment, interoception, introspection, neurophenomenology, symptom reports, visceral perception

Were people utterly inaccurate at judging their body state and reporting on it, clinical medicine would be deprived of a critical tool. Evidence has accumulated that in certain circumstances, some people are able to access information about the physiological processes inside their bodies, and to report on them. Experiments seem to demonstrate that some people are relatively accurate perceivers of symptoms or physiological state (Jones and Hollandsworth, 1981; Adam, 1998), and that subjects can be ranked as good or poor estimators of internal state, for instance as perceivers of their own heart rate (Schandry, 1981).
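To give a concrete sense of what ranking subjects as good or poor heartbeat perceivers involves, here is a minimal sketch of the kind of scoring used in heartbeat-counting paradigms in the tradition of Schandry; the exact formula and thresholds vary across studies, and the numbers below are invented, so treat this as illustrative rather than as the published procedure:

```python
def heartbeat_accuracy(recorded_counts, reported_counts):
    """Mean accuracy across counting intervals: 1.0 means perfect agreement,
    values near 0 mean the reports bear little relation to the ECG counts.
    (One common formulation; published scoring rules differ in detail.)"""
    scores = [1.0 - abs(recorded - reported) / recorded
              for recorded, reported in zip(recorded_counts, reported_counts)]
    return sum(scores) / len(scores)

# Hypothetical data: heartbeats per interval from the ECG vs. what the
# subject silently counted during the same intervals.
ecg_counts = [38, 52, 67]
subject_counts = [35, 50, 60]

score = heartbeat_accuracy(ecg_counts, subject_counts)
print(f"interoceptive accuracy: {score:.2f}")  # about 0.93 for this subject
```

Ranking a sample of subjects by such a score is one simple way of operationalizing “good” versus “poor” perceivers of internal state.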

When we are actually aware of specific processes inside our bodies and can state this verbally, it would seem that in some fashion unconscious information (or unconscious “information”) has generated, or been transformed into, knowledge. However, there is contradictory evidence about the accuracy of symptom perception: how good people really are at perceiving various physiological states, and how accurate symptom reports or other verbal reports actually are. Many studies have yielded data consistent with the idea that people are not particularly good at accurately reporting on their symptoms or physiological states (Pennebaker, 1982). It is worth pointing out that the assertion that people are generally inaccurate in knowing about physiological processes in their bodies reformulates the principle that humans lack epistemological privilege concerning introspective or verbally reported data. In considering whether we are likely to be in error when we report on the contents of our minds, it is critical to appreciate the persuasive interpretation of experiments written up in papers such as “Telling More than We Can Know” by the psychologists Richard Nisbett and Timothy Wilson (1977), which seems to show how introspection-based retroactive judgments can be in error. This category of research typically features subjects placed in circumstances where their choices are influenced by variables controlled by experimenters, and who give explanations for their choices that display incorrect “folk psychological” constructions. Nisbett and Wilson’s analysis can properly be interpreted as casting doubt on the ability of people to know the causes of their behavior and their “higher order” information-processing, and can be summed up with their statement that people may possess “little ability to report accurately on their cognitive processes” (p. 246).

However, I assert that this valuable critique of retrospective judgments has been improperly extrapolated to support a broad skepticism about introspection, what I shall call the “received view” or the “overly skeptical view”, which I might sum up as the belief that introspective data should generally be regarded with skepticism. As has been noted by careful researchers on introspection (Schwitzgebel, 2006), this more general rejection of introspection certainly goes beyond what Nisbett and Wilson argued: while they do indeed assert that the evidence of numerous studies shows people are poor at using verbal-report-based introspection to access the cognitive processes behind their judging and deciding, they do not support a general disdain for introspective data. Rather than arguing that introspective reports should simply be discredited, they state that while people do not have introspective access to their cognitive processes, they do have such access to the contents of their cognitions. For instance, Nisbett and Wilson (pg. 255) state that introspection can yield knowledge about cognitive content; the everyday person:

“…knows what his current sensations are and what almost all psychologists and philosophers would assert to be “knowledge” at least quantitatively superior to that of observers concerning his emotions, evaluations, and plans”

Furthermore, the “received view” that introspective reports are to be generally regarded with suspicion is in tension with the clinical use of patient introspection, as well as the high accuracy ratings sometimes displayed in experiments where subjects are asked to evaluate their own physiology. Therefore, while showing appropriate regard for data suggesting limits on introspective access to cognitive information (indeed I will suggest that models of body-knowledge should account for this data), I will nonetheless highlight certain clinical and experimental data that support the following assertion, which  contradicts the view that introspective data should be generally regarded with skepticism:

There exist cognitive processes that allow people to access internal body-state or physiological information in a way that enables fairly, or even highly, accurate verbal reports.  Insofar as this is true, people evidently have some degree of epistemologically privileged access to internal body state or interoceptive information. This relative privilege allows for knowledge of the body, as distinct from mere beliefs.

However, if this is true, some accounting of the degree to which it is true, under which mitigating conditions, and with what reference to underlying cognitive and neurophysiological mechanisms, would be necessary. For that matter, even if true, demarcating the explanatory power of this principle relative to data adequately explained by the “received view” or “overly skeptical view” is of critical importance. It may be that only special or rare abilities are at issue here, and that the people who have privileged access to their internal physiological information are outliers.

A critical look at the information-processing theories used to explain body-knowledge

Uncategorized

The psychologist Raymond Gibbs (2006) in Embodiment and Cognitive Science asks (pg. 28) “What underlies people’s abilities to move as they do and have any awareness of their bodies?”


The conventional answer given by psychology, medicine, and cognitive neuroscience is that physiological and cognitive systems do so using information-processing. Gibbs cites the work of Bermudez, Marcel, and Eilan (1995), who list a series of internal physiological information sources that enable motor activity and somatic perception (pg. 13):

“(a) Information about pressure, temperature, and friction from receptors on the skin and beneath the surface.

(b) Information about the relative state of body signals from receptors in the joints, some sensitive to static position, some to dynamic information.

(c) Information about balance and posture from the vestibular system in the inner ear and the head/trunk dispositional system and information from pressure on any parts of the body that might be in contact with gravity-resisting surfaces.

(d) Information from skin stretch and bodily disposition and volume.

(e) Information from receptors in the internal organs about nutritional and other states relevant to homeostasis and well-being.

(f) Information about effort and muscular fatigue from muscles.

(g) Information about general fatigue from cerebral systems sensitive to blood composition”

These formulations follow the contemporary scientific trend of explaining systems, processes, entities, and relationships in terms of computation and information (and, often enough, representation). It is relatively rare in such contexts to encounter authors worrying overmuch about the meaning of the term “information”, though there are attempts to effectively operationally define it via metrics, i.e. to quantify the information content of a system via Shannon-style or other measurements (Gardner, 1985). Perhaps “information” is indispensable as a term of convenience, but a critical reader should ask what the term means in context, when the term is used as a heuristic, what value the term adds, whether ambiguity is lessened or increased, and whether it would be better to emphasize the provisionality of information concepts.

Probably it would be better to write “information” in scare quotes much of the time, but this becomes stylistically cumbersome very quickly.
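For what a Shannon-style measurement actually amounts to in practice, here is a minimal sketch: the entropy, in bits, of a distribution over discrete states. The states and probabilities are invented, and notice how little the number, by itself, tells you about what those states mean to anyone:

```python
import math

def shannon_entropy(probabilities):
    """Entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Hypothetical distribution over four discriminable internal states.
p = [0.5, 0.25, 0.125, 0.125]
print(f"{shannon_entropy(p):.2f} bits")  # 1.75 bits
```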

In any event, current theories of psychophysiological processing, clinical studies of symptom-perception accuracy, cognitive models of the mechanisms behind explicit, verbally statable knowledge, and other theories conceptualize our knowledge of our bodies in terms of information-processing, though the connection between “knowledge” and “information” is often not made explicit.

How are we to understand the meaning(s) of the term “information” used to explain how and how well people know their own mental and physiological states? What is the relationship of “information” in the sense of physiological or biological systems to consciously reportable sensation, such that a person is getting information about their body state? Are there many kinds of “information” involved in these models of internal state perception or “body cognition” found in clinical neurology, medicine, experimental psychology, and theoretical cognitive neuroscience? Or is there but one type of “information”, with different qualities or aspects that are described or measured in different ways? What metrics or formalisms are most appropriate for each type?

Information-processing, computational, cognitivist, and representational formulations may privilege an objective, “system-centric” meaning of information, performing an implicit reduction of information in the experiential/phenomenological sense without calling attention to the reduction. It may be useful, or even true, that quantities can be mapped onto qualities, following the accepted principle in philosophy of science that the ontology (known elements, entities, processes, features, properties, and their relations) of one domain can be reduced to that of another, with properties from one domain or level of scientific description reduced to more fundamental explanatory ones in another (Churchland, 1989). In this view, the mechanistic and reductionistic work that the ontology of genes and DNA (and possibly information-processing theory) does in explaining cell biology will hold for the relationship between brain and cognition. Indeed, a standard view in psychology and neuroscience (and to an extent philosophy of mind) holds that concepts from cognitive neuroscience, computer science, and information-processing theory can mechanistically explain particular aspects of bodily awareness, and that at some point lawlike or nomological generalizations will become apparent, effectively performing a reduction.

I maintain that we should be careful in using an overdetermined term such as “information” to prematurely unite concepts from different domains. Traditionally, the philosophy of science has had a role in calling attention to instances where concepts from one domain are used to explain another without careful explication of the move being made (what one might call “stealing a base”), but unfortunately, the specialized nomenclature of philosophy leaves some scientists and clinicians alienated from such discourse.

Stretch your imagination, take the long view, remember Thomas Kuhn: do you think people in 100 years will be explaining body-knowledge as an information-processing system?

more on the status of introspection in psychology and in neuroscience

cognitive science, introspection

An index of the status of introspection within psychology comes from Medin, Markman, and Ross (2004) in the textbook Cognitive Psychology, which notes (pg.20) that:

Although introspection is not an infallible window to the mind, psychological research is leading to principles that suggest when verbal reports are likely to accurately reflect thinking

These perspectives all can be said to implicitly or explicitly challenge what I shall call the “received view” or the “overly skeptical view”, which is an interpretation of the Nisbett and Wilson work that goes beyond what those authors’ famous paper actually said. While it is the case that the “Telling More than We Can Know” paper argued persuasively that introspection-based reports of subjects asked to retrospect on the causes of their behavior are generally not accurate, the authors made a point of not dismissing the value of introspection and verbal reporting on the contents of cognition one is aware of, such as sensation or perception and “private facts”. But the “received view” of their research all too often neglects or ignores the authors’ more nuanced and balanced view of introspection, as well as that of other cognitive scientists who carefully investigated the issues involved, such as Anders Ericsson and Herbert Simon (1993). This is an important concept: see Eric Schwitzgebel’s excellent take on the “Nisbett-Wilson myth”.

What is the most important concept to take away from the controversies about introspection? Probably it is that insofar as researchers want to be able to take advantage of all possible tools and data sources to make sense of the complex, enigmatic processes characterizing body knowledge, they should follow the example set by many physicians and some experimentalists, and be willing to get data by asking subjects or patients for their observations on body state. But here I will go one step further, and assert that the accuracy, or lack of accuracy, of verbal-report data relative to other data can serve as the very thing to be explained by a comprehensive and robust model of personal or self-reportable knowledge of the body. Doing so would require experiments where verbally reported data might be compared to, and possibly integrated with, data from external sources, such as brain measurement: “neurophenomenology” in operationalized form.

One such effort came from a trio of researchers interested in assessing whether introspective data on pain had measurable neural correlates (Coghill, McHaffie, Yen, 2003, pg. 8538):

Using psychophysical ratings to define pain sensitivity and functional magnetic resonance imaging to assess brain activity, we found that highly sensitive individuals exhibited more frequent and more robust pain-induced activation of the primary somatosensory cortex, anterior cingulate cortex, and prefrontal cortex than did insensitive individuals. By identifying objective neural correlates of subjective differences, these findings validate the utility of introspection and subjective reporting as a means of communicating a first-person experience

This forward-looking research in effect turns behaviorism on its head: instead of verbal reports being rejected or at best tolerated within the overall context of strict objectivity, the very phenomenon the model seeks to explain is “subjective”!
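In operational terms, the simplest version of this move is to treat subjective ratings as a variable like any other and ask whether they covary with a brain measure. A toy sketch with invented numbers follows; this is not the Coghill analysis, which used psychophysical ratings and whole fMRI activation maps, just an illustration of the logic:

```python
import numpy as np

# Hypothetical per-subject data: mean pain intensity ratings (0-10 scale)
# and a summary activation value for a region of interest (arbitrary units).
pain_ratings = [2.1, 3.4, 4.0, 5.2, 6.3, 7.1, 7.8, 8.5]
roi_activation = [0.11, 0.18, 0.16, 0.27, 0.30, 0.33, 0.41, 0.44]

# Pearson correlation between first-person reports and the brain measure:
# a strong correlation is one way of showing that introspective reports
# track something measurable from the outside.
r = np.corrcoef(pain_ratings, roi_activation)[0, 1]
print(f"rating-activation correlation: r = {r:.2f}")
```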

neurophenomenology and body alienation in cognitive science

Uncategorized

It is worth exploring the historical, Cartesian “body alienation”, or default privileging of depersonalized, disembodied, system-centric theories of mind, that characterizes much or most of the fields grouped under the labels of cognitive science and neuroscience. Psychologists, linguists, philosophers, and neuroscientists have spent decades using computational and information-processing metaphors and models to explain behavior, problem-solving, memory, syntax, and other phenomena. The need to construct a theoretical bridge from brain-biology to mind-science led to the deployment of high-level abstractions such as “symbol,” “algorithm,” “information,” and “representation.” A standard view would be that recognizing features of the world, generating language, rational problem solving, recognizing patterns, and other cognitive operations are enabled by “computations over mental representations.” This may be a useful construct, and the notion of representation need not be solely of the symbolic and rule-based sort that appeared in the 1950’s and 1960’s, but the idea that cognitive scientists, linguists, and psychologists can be “implementation agnostic” and unconcerned with the biological details underlying the mind seems very dated.

Neurobiologists and experimental psychologists took note of the interdisciplinary cybernetics movement in the post-war period, and also adopted such (arguably under-defined) notions to make sense of otherwise low-level systems and phenomena (Werner, 2005). The multi-disciplinary cognitive science research program used core ideas of computation, representation, and information-processing to model a variety of mental systems and phenomena, but the behaviorist influence remained, typically constraining research to the more-readily modeled “objective” aspects of mind.

The theoretical and methodological difficulties associated with modeling emotions, conscious awareness, perception, and body knowledge disorders provided openings for a revised cognitive science and psychology that has come to be called neurophenomenology, or enactive cognitive neuroscience, which emphasizes:

-in contrast to representationalist theories, human cognition does not so much represent features of the outside world as enable the mind to enact, co-constitute, or co-construct an experienced, perceived environment (Varela, Thompson, and Rosch, 1991; Merleau-Ponty, 1962) via evolutionarily-selected sensorimotor systems (Lewontin, 1983)

-unlike many traditional cognitive models, which look at mental activity “objectively”, from a Cartesian outside standpoint, as a system, the notion of embodiment in cognitive science emphasizes that mental life is grounded in the lived body, which is to say that cognition has aspects of which we are personally aware and which we experience consciously and bodily (or, better, that are phenomenologically lived and felt)

-a recognition that verbal reports from people may very well be a critical and necessary source of data and insight for understanding the conscious, embodied character of mental and neural activity.

-an insistence that data developed from traditional, externalistic, “system-centric” ideas and models of “mind” or “mental activity” or “cognition” in psychology, cognitive science, neuroscience, and other disciplines such as informatics and human-computer interaction need to be re-examined in the light of the above concepts.

A succinct description of how mind and brain are understood in the embodied cognition mode of thinking was offered by Varela (1999, pp. 71-89):

We tend to think that the mind is in the brain, in the head, but the fact is that the environment also includes the rest of the organism; includes the fact that the brain is intimately connected to all of the muscles, the skeletal system, the guts, and the immune system, the hormonal balances and so on and so on. It makes the whole thing into an extremely tight unity. In other words, the organism as a meshwork of entirely co-determining elements makes it so that our minds are, literally, inseparable, not only from the external environment, but also from what Claude Bernard already called the milieu intérieur, the fact that we have not only a brain but an entire body