Evan Thompson on core theories of neurophenomenology and time-consciousness

cognitive science, Evan Thompson, Francisco Varela, neurophenomenology

Evan Thompson, one of the authors of 1991’s The Embodied Mind: Cognitive Science and Human Experience, authored in 2007 a sweeping, dare I say magisterial, account of how science and philosophy should understand consciousness, embodiment, evolution, and neuroscience. Mind in Life: Biology, Phenomenology, and the Sciences of Mind is a work of serious ambition by a gentle-seeming man of hugely impressive learning:

“How is life related to the mind? The question has long confounded philosophers and scientists, and it is this so-called explanatory gap between biological life and consciousness that Evan Thompson explores in Mind in Life.

Thompson draws upon sources as diverse as molecular biology, evolutionary theory, artificial life, complex systems theory, neuroscience, psychology, Continental Phenomenology, and analytic philosophy to argue that mind and life are more continuous than has previously been accepted, and that current explanations do not adequately address the myriad facets of the biology and phenomenology of mind. Where there is life, Thompson argues, there is mind: life and mind share common principles of self-organization, and the self-organizing features of mind are an enriched version of the self-organizing features of life. Rather than trying to close the explanatory gap, Thompson marshals philosophical and scientific analyses to bring unprecedented insight to the nature of life and consciousness. This synthesis of phenomenology and biology helps make Mind in Life a vital and long-awaited addition to his landmark volume The Embodied Mind: Cognitive Science and Human Experience (coauthored with Eleanor Rosch and Francisco Varela).

Endlessly interesting and accessible, Mind in Life is a groundbreaking addition to the fields of the theory of the mind, life science, and phenomenology.”

Due to its scope, scale, and depth, I think many of us are taking time to absorb the riches it contains. Below is an excerpt of what Thompson has to say about neurophenomenology, time-consciousness, and dynamics:


“In recent years, scientists and philosophers interested in the temporal dynamics of consciousness have rediscovered Husserl’s analyses of time-consciousness (Lloyd 2002, 2003; van Gelder 1999b; Varela 1999). Varela in particular puts these analyses to use in his neurophenomenological approach to consciousness and offers a neurophenomenological account of time-consciousness as “an acid test of the entire neurophenomenological enterprise” (Varela 1999, p. 267). Varela formulates the “working hypothesis” of neurophenomenology in the following way: “Phenomenological accounts of the structure of experience and their counterparts in cognitive science relate to each other through reciprocal constraints” (1996, p. 343). By “reciprocal constraints” he means that phenomenological analyses can help guide and shape the scientific investigation of consciousness, and that scientific findings can in turn help guide and shape the phenomenological investigations. A crucial feature of this approach is that dynamic systems theory is supposed to mediate between phenomenology and neuroscience. Neurophenomenology thus comprises three main elements (see Figure 11.2): (1) phenomenological accounts of the structure of experience; (2) formal dynamical models of these structural invariants; and (3) realizations of these models in biological systems. Given that time-consciousness is supposed to be an acid test of the neurophenomenological enterprise, we need to see whether phenomenological accounts of the structure of time-consciousness and neurodynamical accounts of the brain processes relevant to consciousness can be related to each other in a mutually illuminating way. This task is precisely the one Varela undertakes in his neurophenomenology of time-consciousness and in his experimental research on the neurodynamics of consciousness.

Varela’s strategy is to find a common structural level of description that captures the dynamics of both the impressional-retentional-protentional flow of time-consciousness and the large-scale neural processes thought to be associated with consciousness. We have already seen how the flow of time-consciousness is self-constituting. What we now need to examine is how this self-constituting flow is supposed to be structurally mirrored at the biological level by the self-organizing dynamics of large-scale neural activity.

Figure 11.2-Neurophenomenology


There is now little doubt in cognitive science that cognitive acts, such as the visual recognition of a face, require the rapid and transient coordination of many functionally distinct and widely distributed brain regions. Neuroscientists also increasingly believe that moment-to-moment, transitive (object-directed) consciousness is associated with dynamic, large-scale neural activity rather than any single brain region or structure (Cosmelli, Lachaux, and Thompson 2007). Hence, any model of the neural basis of mental activity, including consciousness, must account for how large-scale neural activities can operate in an integrated or coherent way from moment to moment.

This problem is known as the large-scale integration problem (Varela et al. 2001). According to dynamical neuroscience, the key variable for understanding large-scale integration is not so much the activity of the individual neural components, but rather the nature of the dynamic links among them. The neural counterparts of mental activity are thus investigated at the level of collective variables that describe emergent and changing patterns of large-scale integration. One recent approach to defining these collective variables is to measure transient patterns of synchronous oscillations between different populations of neurons (Engel, Fries, and Singer 2001; Varela et al. 2001). According to Varela (1995, 1999), these synchrony patterns define a temporal frame of momentary and transient neural integration that corresponds to the duration of the present moment of experience.
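One established way of quantifying the transient synchrony patterns described above is the phase-locking value between two signals, a measure developed by Lachaux, Rodriguez, Martinerie, and Varela for detecting phase synchrony in brain recordings. Here is a minimal sketch, assuming simulated oscillatory signals in place of real neural data; the 40 Hz frequency and noise levels are illustrative choices of mine:

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """Phase-locking value: near 1 means a stable phase relation
    between the two signals; near 0 means drifting relative phase."""
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return float(np.abs(np.mean(np.exp(1j * phase_diff))))

# Toy "neural populations": 40 Hz gamma-band oscillations, one pair
# phase-locked at a fixed lag, the other with a drifting phase.
rng = np.random.default_rng(0)
t = np.arange(0.0, 1.0, 0.001)  # 1 s sampled at 1 kHz
a = np.sin(2 * np.pi * 40 * t) + 0.1 * rng.standard_normal(t.size)
b = np.sin(2 * np.pi * 40 * t + 0.5) + 0.1 * rng.standard_normal(t.size)
drift = np.cumsum(0.3 * rng.standard_normal(t.size))
c = np.sin(2 * np.pi * 40 * t + drift)

print(phase_locking_value(a, b))  # high: stable phase lag
print(phase_locking_value(a, c))  # much lower: drifting phase
```

Computed in short sliding windows across many electrode pairs, this kind of measure is what lets "transient patterns of synchronous oscillations" serve as the collective variables discussed below.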

Varela presents the reasoning behind this view in the form of three connected, but logically independent, working hypotheses (1994; 1999, pp. 274-277):

Hypothesis I: For every cognitive act, there is a singular, specific neural assembly that underlies its emergence and operation.

According to this hypothesis, the emergence of any cognitive act requires the rapid coordination of many different capacities (attention, perception, memory, motivation, and so on) and the widely distributed neural systems subserving them. The neurophysiological substrate for this large-scale coordination is assumed to be a neural assembly, which can be defined as a distributed subset of neurons with strong reciprocal connections.

In the context of large-scale integration, a dynamic neural assembly engages vast and disparate regions of the brain. There are reciprocal connections within the same cortical area or between areas at the same level of the network; there are also reciprocal connections that link different levels in different brain regions. Because of these strong interconnections across widely distributed areas, a large-scale neural assembly can be activated or ignited from any of its smaller subsets, whether sensorimotor or internal. These assemblies have a transient, dynamic existence that spans the time required to accomplish an elementary cognitive act and for neural activity to propagate through the assembly.

Various empirical and theoretical considerations suggest that the time-scale of such neurocognitive activity—whether it be a perception/action state (such as an eye or head movement), passing thought or memory, or emotional appraisal—is in the range of a fraction of a second, roughly 250-500 milliseconds or more (see Dennett and Kinsbourne 1992; Pöppel 1988). Varela (1999) calls this scale of duration the “1 scale” of large-scale integration and he distinguishes it from the “1/10 scale” of elementary sensorimotor and neural events (10-100 milliseconds), and the “10 scale” of descriptive-narrative assessments involving memory. During successive time intervals at the 1/10 and 1 scales, there is competition between different neural assemblies: when a neural assembly is ignited from one or more of its smaller subsets, it either reaches coherence or is swamped by the competing activations of other overlapping assemblies. If the assembly holds together after its activation, then one can assume it has a transitory efficacy.”
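The competition-and-ignition dynamic in that last paragraph can be caricatured with a toy winner-take-all rate model. To be clear, this is my own illustration, not Varela's actual mathematics: two mutually inhibiting "assemblies" receive ignition inputs, and the one ignited more strongly reaches a coherent high rate while the other is swamped. All parameter values are arbitrary choices for the sketch:

```python
def compete(input_a, input_b, steps=500, dt=0.001):
    """Toy winner-take-all: two 'assemblies' with self-excitation and
    mutual inhibition. Returns final activities; the more strongly
    ignited assembly stabilizes while the other is suppressed."""
    a = b = 0.0
    tau, w_self, w_inhib = 0.05, 0.8, 2.0  # arbitrary illustrative values
    for _ in range(steps):
        # rectified drive: ignition input + self-excitation - inhibition
        drive_a = max(0.0, input_a + w_self * a - w_inhib * b)
        drive_b = max(0.0, input_b + w_self * b - w_inhib * a)
        a += dt * (-a + drive_a) / tau
        b += dt * (-b + drive_b) / tau
    return a, b

a, b = compete(1.0, 0.8)  # assembly A is ignited slightly more strongly
print(a, b)               # A reaches a high rate; B is swamped toward 0
```

Even a small difference in ignition strength is amplified by the mutual inhibition, which is the flavor of the "reaches coherence or is swamped" alternative in the excerpt.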


Critical Neuroscience-Neurophenomenology in Psychiatry by Laurence Kirmayer, MD

clinical neurophenomenology, disease classification, history of neurophenomenology, medicine, psychiatry, psychology, symptom reports

Here is a thought-provoking lecture on YouTube which investigates psychiatry’s problematic foundations, especially in terms of the influence of culture, individual differences, and neurodiversity.

http://www.youtube.com/watch?v=PsubfDIKgUw

Laurence Kirmayer is an MD at McGill University, and he has a lot to say about using clinical neurophenomenology to explore some very murky but important issues in psychiatry. There really are problems that make psychiatry different from the rest of medicine: however necessary the reductionistic-biological medical model nearly ubiquitous everywhere else may be, it is not sufficient. I’m very glad Kirmayer is bringing up Daniel Dennett and his work on heterophenomenological methods in the clinical context as well, not because it’s the end-all be-all, but because it orients what have historically been difficult and controversial debates in an accessible, reasonably pragmatic way. He is also doing good work in looking at how psychiatry gets its norms, methods, and foundational orientations, prompting him to call for phenomenological investigations in psychiatry. What a timely effort! I can’t help but feel the DSM-V was panned even before it was published in 2013 (and not just by angry people with Asperger’s or by Scientologists) because this phase of psychiatry may be running out of steam. The mapping from biological mechanisms to the myriad ways individual people in various cultures live out their emotional pain and existential struggles isn’t good enough. The ontology or foundational ideas about a psychiatric patient must reference existential reality: the meaning of embodiment and how one’s experience brings forth a lived world. The ontology of neuroscience, by contrast, is based on genes, proteins, signals, action potentials, circuits, modules, information processing, and maybe even dynamical systems. Current psychiatry seems to me to be inadequately addressing the foundational problem of how to map these domains.
All the genome-wide association studies, connectome diagrams, and brain imaging data in the world aren’t enough to create diagnostic categories that cluster the lived meaning, experiences, and embodiment of similar bipolar or schizophrenic patients together, and those of dissimilar patients apart. There really is a lot of applied work needing to be done on how to model the cognition of patients whose disorders manifest as disturbances of body cognition or existential crises (here’s my version, dealing with heartbeat perception). Moreover, foundational investigations into the ontology of psychiatry may very well provide a needed stimulus to get psychiatry out of its current funk betwixt and between medical humanism as a healing art and bio-reductionistic techno-medicine. Overall I am convinced clinical neurophenomenology is a vital and largely new area, despite the pioneering efforts of the neuropsychiatrist Erwin Straus and the more recent work of neurologists such as Oliver Sacks and Antonio Damasio. The lodestar of clinical neurophenomenology seems to me to be Varela’s idea of a mutual constraining and mapping between data from lived, embodied phenomenology and theories based on cognition and neuroscience. There is more about Kirmayer at http://www.mcgill.ca/trauma-globalhealth/people/canada/kirmayer/

Is pain where you feel it in the body, or in the brain? Neurophenomenology and the spatial aspect of nociception

body knowledge, clinical neurophenomenology, embodiment, interoception, introspection, introspective accuracy, medicine, pain, physiology, symptom report accuracy, symptom reports, visceral perception

Pain is interesting, salient, mysterious. It may feel like it is in one specific place in or on the body. It may feel diffuse, with gradations, or it may seem referred from one area to another. What is happening in the brain and in the body as these spatial aspects of pain are experienced? How much of the causation of pain occurs where we feel it, and how much occurs in the brain? Below is a series of probes and thinking aloud about where pain is, with speculations to stimulate my thinking and yours. I’m not a “pain expert”, nor a bodyworker who heals clients, nor a physiologist with a specialization in nociception, but a cognitive scientist, with clinical psychology training, interested in body phenomenology and the brain. Please do post this essay to Facebook, share it, critique, respond, and comment (and it would be helpful to know if your background is in philosophy, neuroscience, bodywork, psychology, medicine, or whether you are a student wanting to enter one of these or another field, etc.). Pain should be looked at from multiple angles, with theoretical problems emphasized alongside clinical praxis, and with reductionistic accounts from neurophysiology juxtaposed against descriptions of the embodied phenomenology and existential structures. As I have mentioned elsewhere, it is still early in the history of neurophenomenology…let a thousand flowers bloom when looking at pain. We need data, observations, insights, and theories from both the experience side as well as the brain side. Francisco Varela aptly described how phenomenology and cognitive neuroscience should relate:

“The key point here is that by emphasizing a codetermination of both accounts one can explore the bridges, challenges, insights, and contradictions between them. Both domains of phenomena have equal status in demanding full attention and respect for their specificity.”

We all know what pain is phenomenologically, what it feels like, but how to define it? The International Association for the Study of Pain offers this definition: “an unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms of such damage.” Of particular interest to neurophenomenology and embodied cognitive science is their claim that “activity induced in the nociceptors and nociceptive pathways by a noxious stimulus is not pain, which is always a psychological state.” It is good that they do not try to reduce the experience of pain to the strictly physiological dimension, but I wonder how Merleau-Ponty, with his non-dualistic ontology of the flesh, would have responded. Pain seems to transgress the border of mind and body categories, does it not? I am slowly biting off chunks of the work on pain at the Stanford Encyclopedia of Philosophy. Lots of provocative angles, including this one:

“there appear to be reasons both for thinking that pains (along with other similar bodily sensations) are physical objects or conditions that we perceive in body parts, and for thinking that they are not. This paradox is one of the main reasons why philosophers are especially interested in pain.”

Right now I am particularly interested in the spatial aspect of where pain seems to be, what I might label the spatial phenomenology of nociception. When I introspect on aching parts of my feet, it seems as if the pain occupies a volume of space. Using manual pressure I can find places on my feet that are not sore, right next to areas that are slightly sore, which are in turn near focal areas of highest pain. It seems as if the pain is locatable “down there” in my body, and yet what we know about the nociceptive neural networks suggests the phenomenology is produced by complex interactions between flesh, nearby peripheral nerves, and the central nervous system, with neurodynamics in the latter playing an especially large role. A way of probing this would be to examine the idea that the pain experience is the experiential correlate of bodily harm, a sort of map relating sensations to a corresponding nerve activated by damage to tissue. So, is the place in my body where I feel pain just the same as where the damage or strain is? Or is pain caused by pain-receptive nerves registering what is happening around them, via hormonal and electrical signals? Or is pain actually the nerve itself being “trapped” or damaged, yet felt as hurting within a volume of otherwise undamaged tissue? Could the seeming volume of experienced pain-space be a partial illusion, produced by cognizing the tissue damage as some place near or overlapping with, yet not spatially identical to, where the “actual” damage is, in other words a case of existential-physiological discrepancy? One scenario could be, roughly, that pain “is” or “is made of” nerves getting signals about damage to tissue; another would be that pain “is” the nerves themselves being damaged or sustaining stress or injury. Maybe pain involves both? Maybe some pain is one, and some the other? As for remembering how my heel pain started, it’s not so easy, but I love to walk an hour or two a day, and have done so for many years.
I recall more than ten years ago playing football in the park, wearing what must have been the wrong sort of shoes, and upon waking the next day, having pretty serious pain in my heel. Here are some graphics that, intuitively, seem to map on to the areas where I perceive the pain to be most focal:

from bestfootdoc.com


from setup.tristatehand.com


from plantar-fasciitis-elrofeet.com


If I palpate my heel, I become aware of a phenomenologically complex, rich blend of pleasure and pain. I crave the sensation of pressure there, but it can be an endurance test when it happens. Does the sensation of pressure that I want reflect some body knowledge, some intuitive sense of what intervention will help my body heal? How could this be verified or falsified? It is not easy to describe the raw qualia of pain, actually. I can describe it as achy and moderately distressing when I walk around, and sharp upon palpation. Direct and forceful pressure on the heel area will make me wince, catch my breath, want to gasp or make sounds of pain/pleasure, and in general puts me in a state of heightened activation. But I love it when I can get a therapist to squeeze on it, producing what I call “pain-pleasure”:

from indyheelpaincenter.com


The diagram below helps me map the sensations to the neuroanatomy. We need to do more of this sort of thing. This kind of representation seems to me a new area for clinical neurophenomenological research (indeed, clinical neurophenomenology in general needs much more work; searching for those terms just leads back to my site, but see the Case History section in Shaun Gallagher’s How the Body Shapes the Mind).

from reconstructivefootcaredoc.com


What is producing the pain-qualia, the particular feeling? Without going too far into the varying differential diagnoses, it is commonly attributed to plantar fasciitis. On that account, the pain would be due to nociceptive nerve fibers being activated when the tough, fibrous fascia that attaches to the calcaneus (heel bone) is strained or sustains small tears, and/or when local nerves are compressed or trapped. A 2012 article in Lower Extremity Review states that “evidence suggests plantar fasciitis is a noninflammatory degenerative condition in the plantar fascia caused by repetitive microtears at the medial tubercle of the calcaneus.” There are quite a few opinions out there about the role of bony calcium buildups, strain from leg muscles, specific trapped nerves, and so forth, and it would be interesting to find out how different aspects of reported pain qualia map on to these. Below you can see the sheetlike fascia fiber, the posterior tibial nerve, and its branches that enable local sensations:

from aafp.org


Next: fascia and the innervation of the heel, from below:

from mollyjudge.com


Another view of the heel and innervation:

from mollyjudge.com


Below is a representation of the fascia under the skin:

from drwolgin.com


There is a very graphic, under-the-skin, maybe not-SFW surgeon’s-eye perspective on these structures available here. Heel pain turns out to be very common, and is evidently one of the most frequently reported medical issues. Searching online for heel pain mapping brings up a representation purportedly of 2666 patients describing where they feel heel pain: heel pain mapping. I can’t find where this comes from originally and can’t speak to the methodology, rigor, or quality of the study, but the supposed data are interesting, as is the implicit idea of spatial qualia mapping: the correspondence of experienced pain to a volume of space in the body. It also represents quite well where the pain I feel is located. The focal area seems to be where the fascia fibers attach to the calcaneus, an area that bears a lot of weight, does a lot of work, and is prone to overuse.

So, where is the pain? Is it in the heel or the brain? Is it in the tissue, the nerve, or both? Is there a volume of flesh that contains the pain? I am going to have to think about these more, and welcome your input. What about the central nervous system that processes nociceptive afferents coming from the body? A good model of pain neurophenomenology should involve a number of cortical and subcortical areas that comprise the nociceptive neural network:

- primary somatosensory cortex (S1) and secondary somatosensory cortex (S2)
- insula
- anterior cingulate cortex (ACC)
- prefrontal cortex (PFC)
- thalamus

Here are some representations of the pain pathways, or the nociceptive neural network:

from Moisset and Bouhassira (2007) "Brain imaging of neuropathic pain"


from Moisset et al. (2009)



from Tracey and Mantyh (2012)


Broadly speaking, pain seems to be generated by tissue damage, inflammation, compromised tissue integrity, stress on localized regions, and so forth being processed by peripheral afferent pain pathways in the body, then by phylogenetically ancient subcortical structures, and then by the aforementioned cortical regions of the nociceptive neural network. As I have mentioned many times, a robust account of how various regions of the brain communicate such that a person experiences qualia or sensory phenomenology will need to reference neurodynamics, which integrates ideas from the physics of self-organization, complexity, chaos, and non-linear dynamics into biology. It is gradually becoming apparent to many if not most workers in the cognitive neurosciences that there is a host of mechanisms regions of the brain use to send signals, and many of these are as time-dependent as they are space-dependent. Michael Cohen puts it thusly: “The way we as cognitive neuroscientists typically link dynamics of the brain to dynamics of behavior is by correlating increases or decreases of some measure of brain activity with the cognitive or emotional state we hope the subject is experiencing at the time. The primary dependent measure in the majority of these studies is whether the average amount of activity – measured through spiking, event-related-potential or -field component amplitude, blood flow response, light scatter, etc. – in a region of the brain goes up or down. In this approach, the aim is to reduce this complex and enigmatic neural information processing system to two dimensions: Space and activation (up/down). The implicit assumption is that cognitive processes can be localized to specific regions of the brain, can be measured by an increase in average activity levels, and in different experimental conditions, either operate or do not. It is naïve to think that these two dimensions are sufficient for characterizing neurocognitive function.
The range and flexibility of cognitive, emotional, perceptual, and other mental processes is huge, and the scale of typical functional localization claims – on the order of several cubic centimeters – is large compared to the number of cells with unique physiological, neurochemical, morphological, and connectional properties contained in each MRI voxel. Further, there are no one-to-one mappings between cognitive processes and brain regions: Different cognitive processes can activate the same brain region, and activation of several brain regions can be associated with single cognitive processes. In the analogy of Plato’s cave, our current approach to understanding the biological foundations of cognition is like looking at shadows cast on a region of the wall of the cave without observing how they change dynamically over time.”

But what of the original question? Is pain where you feel it in the body, or in the brain? It seems to me the answer must be both. The experience of pain being localized there, or a little to the left, is a product of local tissue signals and receptor activation, which produces peripheral afferent nerve firing, which gets processed by spinal afferent neurodynamics, brainstem activation, thalamic gating, and then somatosensory, insular, anterior cingulate, and prefrontal cortical regions. Yet the real model of pain, one that invokes mechanisms and causes, remains elusive. And a good model of pain must account for the possibility of pain without suffering as well! For now, what I can offer are probes to get us speculating, thinking critically, and eventually building a clinical neurophenomenology of pain. If that interests you, by all means get involved.

A complex mapping of the interior sense: why Damasio’s theory of embodied cognition focuses on the brainstem and viscera

body knowledge, clinical neurophenomenology, consciousness, embodiment, interoception, physiology, visceral perception

If, like me, you are interested in the biological dimensions of cognition, consciousness, and phenomenology, you tend to study the cortex. Attention, decision-making, having a sense of self, perception, visual awareness, and many other key higher mental processes are modeled with data from cortical measurements, and especially with recent neurodynamics and computational neuroscience, there are increasingly sophisticated theories about the underlying mechanisms. But the cortex possibly gets too much attention compared to the rest of the brain and body. This is partially because it is indeed what makes us human, but also for practical reasons: much of what we know about the brain comes from EEG research, which with human subjects is usually limited to scalp-based cortical signal acquisition. Dig a little deeper, say, when learning about emotions, and the student or scholar gets at least a cursory introduction to the sub-cortical, emotion-regulating structures of the limbic system such as the hippocampus, thalamus, and hypothalamus. Our sense of salience, that things matter, our bodily sensations, our emotions and drives are associated especially with these sub-cortical structures. The study of how the brain processes emotions and bodily sensations has pointed some psychologists, neuroscientists, physicians, and philosophers in recent decades towards the idea of the experienced, phenomenologically lived body as the basis of consciousness and the self (or “self”). The growing sense among some of us of the limitations of traditional computationalist/cognitive theories has led to the idea that mind and brain must be understood as “enactive”: via evolutionarily and environmentally situated, physiological, embodied processes that “bring forth an experienced world”. Whatever cognition is, more than a few of us cognitive scientists nowadays think of it as somehow based in temporally ongoing, fleshy, existentially meaningful conscious life, or phenomenology.
The mechanics of embodied mind, and the embodied basis of phenomenology, are very poorly understood by cognitive neuroscience, psychology, and medicine. Only very recently has there been a revival of interest in how visceral body states get processed by the peripheral nervous system and subsequently transformed by the central nervous system. One could understand this as a subfield of neurophenomenology: how the brain and body enable “the bodily feelings I have now”, the experiential phenomenology of the internal body (if you have poked around this site you may have seen my own 2011 dissertation work was on this very subject). The rise of interest in embodied cognition has been hugely advanced through the work of the neurologist Antonio Damasio. This pioneer of embodied cognitive neuroscience has been focusing the attention of the psychology and cognitive neuroscience communities on the brainstem as the basis of consciousness. The brainstem is not the usual topic when scientists bravely try to model conscious aspects of cognition, and my sense is that in the public mind it tends to register mostly when someone famous has medical problems as a result of brainstem damage causing loss of core visceral regulatory processes, as with the death of Michael Jackson. In Damasio’s account, the brainstem is not merely a bit player in the grand drama of how the body produces consciousness, but plays a starring role. Damasio (2010) states in Self Comes to Mind: Constructing the Conscious Brain, “I believe that the mind is not made by the cerebral cortex alone. Its first manifestations arise in the brain stem” (p. 75).

Courtesy Wikimedia Commons


Above is the brainstem, in red. It receives inputs from the spinal cord, called “afferents”, that deliver signals from sensory nerves distributed throughout the body. These sensory nerves are affected by the homeostatic and visceral states of organs, such as our heart beating, the fullness of our bladder, blood sugar levels, the gas exchange in our lungs, and so forth. How much of the time these nerve signals resulting from visceral processes enter into consciousness is a murky business. Cognitive scientists and others researching consciousness have not particularly referenced interior body psychophysiology and internal body-sensation in most theories about consciousness, but the work of Damasio is changing that. In general, research on the perception of visceral states or “inner psychophysics” goes in and out of scientific fashion, according to Gyorgy Adam’s wonderful overview of many decades of research, Visceral Perception: Understanding Internal Cognition.

Courtesy of Wikimedia Commons


Below is an MRI of a beating heart and other visceral organs, with the spine visible. Afferent nerves “encode” information (or “information”) about the homeostatic and other dynamics of these systems, and send signals to the spinal cord and brain.

Courtesy Wikimedia Commons


While the idea that the brainstem produces the first manifestations of consciousness may seem radical, Damasio cites experimental evidence that perception of visceral states is mediated by the brainstem’s nucleus of the solitary tract and the pons. This gives us a window into understanding interoception: the awareness of our visceral organs and internal body (though I think interoception also refers to unconscious signals from bodily organs affecting the brain and possibly influencing unconscious cognition). Building on generations of basic research on the neurophysiology of visceral perception, Damasio (2010) defines interoception as a “complex mapping of the interior sense” (p. 97). He emphasizes that this occurs through an interoceptive network involving significant processing by the brainstem, which, unlike a mere relay, receives, processes, and integrates afferents from the visceral organs, and in turn projects to the thalamus. Through the work of Damasio, Bud Craig, and others, we can model cognition as based in a central nervous system which filters and transforms signals from the lifegiving organs of the body. Building on their contributions, here is how I understand the neurophenomenology of visceral perception to work: our body organs respond to existential life needs; our brainstem gets signals from the body organs, actively filters and transforms those signals, and in turn projects to the thalamus. Thalamo-cortical fibers then make synapses with neurons in the insula, cingulum, and somatosensory and orbitofrontal cortices, regions implicated in interoceptive activity and cognitive processes handling internal body information. These areas all contribute both to consciousness in the foundational sense Damasio is investigating, and also to the specific awareness of our emotions and of our interior bodily sensations, such as feeling hungry.
What goes on at the final stage, when cortical regions take the transformed bodily signal from the brainstem and thalamo-cortical processing and somehow produce changes in consciousness? That gets into very complicated territory, and nowadays some of our most progressive thinkers use ideas and mechanisms from physics, such as Walter Freeman’s work on cortical neurodynamics, or that of Varela and colleagues. As of 2013 the dynamical aspects of interoception do not seem to be on many people’s radar besides mine and maybe a few others. What does it all mean for theories about the mind? If we accept that thalamic relay nuclei, activated by bodily interoceptive inputs processed in the brainstem, engage in further processing and subsequently synapse onto (probably) dynamically interactive interoceptive centers in the insula and orbitofrontal, sensory, and cingulate cortical regions, what do we then understand about consciousness and cognition? As far as I can tell, the heart of Damasio’s theory is that visceral and homeostatic body states are “mapped” onto the brain via the brainstem, and this mapping is what consciousness and the sense of self are “made of”. As organisms needing to engage in the right sort of behavior to survive, we depend on our sensory and visceral organs functioning appropriately. Our minds are thus built out of an evolutionarily developed machinery of life preservation. Put another way: the interior chemical milieu in our viscera affects nerve signals into the brainstem, and brainstem-mediated afferent signals tell our brain and mind about the state of our organs by projecting to the “gateway of the cortex”: the thalamus. A series of cortical regions process the thalamus-gated body-signals, some of which are cognitively and phenomenologically processed by a person as more emotionally and behaviorally salient, such as signals associated with food and thirst, pain and sex, and fighting or fleeing.
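As a purely illustrative analogy (not a physiological model), the relay chain just described can be sketched as a staged pipeline: visceral signals are filtered and transformed by the brainstem, gated by the thalamus, and ranked for salience by cortical interoceptive regions. Every name, number, and threshold below is invented for illustration only.

```python
# Hypothetical sketch of the interoceptive relay chain as a pipeline.
# All values and thresholds are invented; this is an analogy, not a model.

def brainstem(visceral_signals):
    # Not a passive relay: filters and transforms afferent input.
    return {organ: level * 0.8 + 0.1 for organ, level in visceral_signals.items()}

def thalamus(signals, threshold=0.3):
    # "Gateway of the cortex": only sufficiently strong signals pass.
    return {organ: v for organ, v in signals.items() if v > threshold}

def cortex(gated):
    # Insula / cingulate / somatosensory stage: rank signals by salience.
    return sorted(gated, key=gated.get, reverse=True)

viscera = {"stomach": 0.9, "heart": 0.2, "gut": 0.5}
salient = cortex(thalamus(brainstem(viscera)))  # most salient first
```

The point of the sketch is only structural: each stage transforms rather than merely relays, and what reaches awareness is a filtered, ranked subset of what the organs send.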
Damasio, never one to shy away from big ideas and bold claims, sums up the state of his thinking in a 2010 interview:

Feelings, especially the kind that I call primordial feelings, portray the state of the body in our own brain. They serve notice that there is life inside the organism and they inform the brain (and its mind, of course), of whether such life is in balance or not. That feeling is the foundation of the edifice we call conscious mind. When the machinery that builds that foundation is disrupted by disease, the whole edifice collapses. Imagine pulling out the ground floor of a high-rise building and you get the picture. That is, by the way, precisely what happens in certain cases of coma or vegetative state. Now, where in the brain is that “feel-making” machinery? It is located in the brain stem and it enjoys a privileged situation. It is part of the brain, of course, but it is so closely interconnected with the body that it is best seen as fused with the body. I suspect that one reason why our thoughts are felt comes from that obligatory fusion of body and brain at brain stem level.

Can we not agree that this is a profound way of thinking about the human condition?

An excerpt from the foundational text of neurophenomenology: Varela, Thompson, and Rosch’s The Embodied Mind

cognitive science, Eleanor Rosch, embodiment, Evan Thompson, Francisco Varela, history of neurophenomenology, introspection, neurophenomenology, The Embodied Mind

It is very, very gratifying to see interest in neurophenomenology increasing. Welcome! Exciting things are happening. If you feel like you could make a contribution to the field, do it! We are still in the early phase, though probably at the “end of the beginning”. In 1996 you could find about three references online (in mid 2013, Google shows 35,400 results). Around then I got a copy of neurophenomenologist/cognitive scientist Francisco Varela, philosopher Evan Thompson, and cognitive psychologist Eleanor Rosch’s The Embodied Mind: Cognitive Science and Human Experience. To this day I am struck by the lucidity of the writing, the patient willingness to explore the virtues of opposing viewpoints, and especially the depth of the challenge to mainstream cognitive neuroscience and psychology. The basic idea is that the science of cognition and the brain needs to somehow reckon with human experience, in all its phenomenological, fleshy, ecologically situated complexity. The science of human cognition requires an account of how life seems to us, how it feels, what it means. Not doing so amounts to a shortcut, though an understandable one, given the difficulties routinely encountered. The authors painstakingly present the case for why failing to include the role of the evolutionarily developed phenomenological body and the meaningful, experiential, existential dimensions will hamper scientific accounts of cognition and the brain. Varela, Thompson and Rosch present a radical challenge to the idea that the mind is best modeled based on data and measurements only from the outside, or purely objectively. Cognitive neuroscience describes cognition and consciousness as machinery emerging from the hardware of the brain; Varela, Thompson and Rosch carefully explore the benefits of this view, but opt for a radical alternative. I am convinced it is the foundational and definitive work in neurophenomenology.
Interestingly, Daniel Dennett, a staunch defender of cognitivist orthodoxy, had substantive criticism but went on to say: “the authors find many new ways of putting together old points that we knew were true but didn’t know what to do with, and that in itself is a major contribution to our understanding of cognitive science.” The term “neurophenomenology” does not appear in this book. As I have mentioned elsewhere, the term emerges around 1990 from the work of Charles Laughlin (though there seems to be one mention in a hard-to-find publication from 1988). I had considered directly contacting Varela around 1996 to convince him of the helpfulness of the term “neurophenomenology”, and I must admit to an utter dopamine blast of pleasure when around that time I found his 1996 paper “Neurophenomenology: A Methodological Remedy for the Hard Problem“. Kismet! It was an exciting time, and helped push me towards doing a PhD on one narrow aspect of clinical neurophenomenology: modeling how accurate patients are at reporting on their cardiac rhythm states, and how the brain enables both knowledge and mistaken beliefs about heartbeats. With new people showing an interest in and perhaps coming into this field, we might as well make sure to examine the core text: The Embodied Mind. There is a copy online and here is an excerpt, but I highly recommend getting a physical copy. Here is a section entitled The Retreat into Natural Selection, from Chapter 8: Enaction: Embodied Cognition (for what it’s worth, in my last class as a PhD student, I had my colleagues in David Buss‘ Evolutionary Psychology seminar read and discuss this chapter, and they hated it!). “In preparation for the next chapter, we now wish to take note of a prevalent view within cognitive science, one which constitutes a challenge to the view of cognition that we have presented so far.
Consider, then, the following response to our discussion: “I am willing to grant that you have shown that cognition is not simply a matter of representation but depends on our embodied capacities for action. I am also willing to grant that both our perception and categorization of, say, color, are inseparable from our perceptually guided activity and that they are enacted by our history of structural coupling. Nevertheless, this history is not the result of just any pattern of coupling; it is largely the result of biological evolution and its mechanism of natural selection. Therefore our perception and cognition have survival value, and so they must provide us with some more or less optimal fit to the world. Thus, to use color once more as an example, it is this optimal fit between us and the world that explains why we see the colors we do.” We do not mean to attribute this view to any particular theory within cognitive science. On the contrary, this view can be found virtually anywhere within the field: in vision research, it is common both to the computational theory of Marr and Poggio and to the “direct theory” of J. J. Gibson and his followers.  It is prevalent in virtually every aspect of the philosophical project of “naturalized epistemology.”  It is even voiced by those who insist on an embodied and experientialist approach to cognition. For this reason, this view can be said to constitute the “received view” within cognitive science of the evolutionary basis for cognition. We cannot ignore, then, this retreat into natural selection. Let us begin, once again, with our now familiar case study of color. The cooperative neuronal operations underlying our perception of color have resulted from the long biological evolution of the primate group. As we have seen, these operations partly determine the basic color categories that are common to all humans. 
The prevalence of these categories might lead us to suppose that they are optimal in some evolutionary sense, even though they do not reflect some pregiven world. This conclusion, however, would be considerably unwarranted. We can safely conclude that since our biological lineage has continued, our color categories are viable or effective. Other species, however, have evolved different perceived worlds of color on the basis of different cooperative neuronal operations. Indeed, it is fair to say that the neuronal processes underlying human color perception are rather peculiar to the primate group. Most vertebrates (fishes, amphibians, and birds) have quite different and intricate color vision mechanisms. Insects have evolved radically different constitutions associated with their compound eyes. One of the most interesting ways to pursue this comparative investigation is through a comparison of the dimensionalities of color vision. Our color vision is trichromatic: as we have seen, our visual system comprises three types of photoreceptors cross-connected to three color channels. Therefore, three dimensions are needed to represent our color vision, that is, the kinds of color distinctions that we can make. Trichromacy is certainly not unique to humans; indeed, it would appear that virtually every animal class contains some species with trichromatic vision. More interesting, however, is that some animals are dichromats, others are tetrachromats, and some may even be pentachromats. (Dichromats include squirrels, rabbits, tree shrews, some fishes, possibly cats, and some New World monkeys; tetrachromats include fishes that live close to the surface of the water like goldfish, and diurnal birds like the pigeon and the duck; diurnal birds may even be pentachromats).  Whereas two dimensions are needed to represent dichromatic vision, four are needed for tetrachromatic vision (see figure 8.6), and five for pentachromatic vision. 
Particularly interesting are tetrachromatic (perhaps pentachromatic) birds, for their underlying neuronal operations appear to differ dramatically from ours.


Figure 8.6 Tetrachromatic vs. trichromatic mechanisms are illustrated here on the basis of the different retinal pigments present in various animals. From Neumeyer, Das Farbensehen des Goldfisches. When people hear of this evidence for tetrachromacy, they respond by asking, “What are the other colors that these animals see?” This question is understandable but naive if it is taken to suggest that tetrachromats are simply better at seeing the colors we see. It must be remembered, though, that a four-dimensional color space is fundamentally different from a three-dimensional one: strictly speaking, the two color spaces are incommensurable, for there is no way to map the kinds of distinctions available in four dimensions into the kinds of distinctions available in three dimensions without remainder. We can, of course, obtain some analogical insights into what such higher dimensional color spaces might be like. We could imagine, for example, that our color space contains an additional temporal dimension. In this analogy, colors would flicker to different degrees in proportion to the fourth dimension. Thus to use the term pink, for example, as a designator in such a four-dimensional color space would be insufficient to pick out a single color: one would have to say rapid-pink, etc. If it turns out that the color space of diurnal birds is pentachromatic (which is indeed possible), then we are simply at a loss to envision what their color experience could be like. It should now be apparent, then, that the vastly different histories of structural coupling for birds, fishes, insects, and primates have enacted or brought forth different perceived worlds of color. Therefore, our perceived world of color should not be considered to be the optimal “solution” to some evolutionarily posed “problem.” Our perceived world of color is, rather, a result of one possible and viable phylogenic pathway among many others realized in the evolutionary history of living beings.
Again, the response on the behalf of the “received view” of evolution in cognitive science will be, “Very well, let us grant that color as an attribute of our perceived world cannot be explained simply by invoking some optimal fit, since there is such a rich diversity of perceived worlds of color. Thus the diverse neuronal mechanisms underlying color perception are not different solutions to the same evolutionarily posed problem. But all that follows is that our analysis must be made more precise. These various perceived worlds of color reflect various forms of adaptation to diverse ecological niches. Each animal group optimally exploits different regularities of the world. It is still a matter of optimal fit with the world; it is just that each animal group has its own optimal fit.” This response is a still more refined form of the evolutionary argument. Although optimizations are considered to differ according to the species in question, the view remains that perceptual and cognitive tasks involve some form of optimal adaptation to the world. This view represents a sophisticated neorealism, which has the notion of optimization as its central explanatory tool. We cannot proceed further, then, without examining more closely this idea in the context of evolutionary explanations. We cannot attempt to summarize the state of the art of evolutionary biology today, but we do need to explore some of its classical foundations and their modern alternatives.

Plasticity: why the brain is not like computer hardware

Uncategorized

Brain cells and their connections are not static. The brain grows new cells, these form ever-changing networks, which are to a large extent the physical basis of our psychological and emotional lives. Learning and memory take place because of dynamic alterations in the strength of the connections between cells, the grouping of cells into networks and the connections between networks.

In one sense, biologists and psychologists have known about these changes for a long time, and the label “neuroplasticity” is just a new name for old ideas.

However, there is a change in emphasis that the term points to. In particular, evidence that adult brains grow brand new nerve cells was very much in doubt just ten or twenty years ago.

This and other developments have forced the textbooks to be rewritten. Scientists are also learning more details about changes that take place in the brain in response to healthy and harmful stimuli. Together, these new findings give us an updated picture of the brain as made of dynamic, “self organizing” sets of networks that grow new components and which change in response to stress, damage, learning and novelty.

Are we wired, like microchips, or are we made of something more complex?

Growth of a neuron over time from Wikimedia Commons

Unlike fixed, rigid computer electronics, brains are dynamic, can recover in response to trauma, and show the ability to change structure without losing function (“plasticity”).

A single mote of dust can ruin manufacture of a microprocessor. Circuits in electronic devices eventually fail and do not regenerate; they are replaced if hardware is to function properly. Yet the human brain, presented with stress or damage, can display amazing powers of self-organization, dynamism, response to change at different scales and the ability to “re-wire” itself.

Except that the brain has no wires. It is vastly more complex than any computer ever designed, and is the most complex system known to exist in nature.

Nowadays, one may still encounter the idea that the brain is “hard wired” to engage in language, face recognition, memory and so forth. Some psychologists, neuroscientists and cognitive scientists emphasize that these functions are innate and instinctual. This is not so much wrong as slightly misleading. We do have instincts and we did evolve to be good at certain tasks, but our brains are not fixed and unchanging the way a microchip is.

Where did the idea that the brain is “hard wired” come from?

Around the time of World War II and its immediate aftermath, when brilliant early computer scientists coaxed their whirring vacuum-tube machinery to solve complex mathematical problems, the idea that the brain was a sort of computer took hold. The finding that neurons are electrical and generate all-or-nothing, binary-like bursts of current made the brain seem like nature’s version of an electronic computing device.

Over the decades, a different emphasis on how the brain works has emerged, especially in recent years. One still reads about humans being “hard wired” by our genes to do this and that, but an increasing number of researchers use the term “neuroplasticity” to refer to the way the brain responds to change.

Dr. Jill Kays and colleagues explain the shift in thinking that has occurred among scientists and clinicians:

“Until fairly recently, the adult brain was considered largely fixed and stable. Although it was accepted that changes occurred in the context of learning and memory, the general consensus was that major processes essential to normal brain development (e.g., generation of new neurons, neuron migration, pruning) ceased once full development was reached.”

Defining neuroplasticity

The brain’s 100 billion neurons grow into each other like bushes sharing the same space. The branches of the neurons connect with each other to form networks. Cells reach out to each other via electrochemical “synapses”, which are microscopic, fluid-filled connective zones.

A family of hormones called nerve growth factors is periodically released; these bind to specialized protein structures called receptors. Growth factors enable nerve cells to persist and to respond dynamically to the changing chemical environment of human physiology. They are only one of a series of signaling molecules released in response to new situations, emotionally significant experiences, stressful contexts, brain damage and more. Nothing of the sort occurs with microprocessors.

Like making a new friend that you start hearing from more and more, synaptic connections can grow stronger. Yet they also may weaken, if not used enough. The strengthening and weakening of connections over time, which form the cellular basis of learning and memory, show the brain to be quite unlike a microchip, which is utterly rigid and has fixed circuits.
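The use-it-or-lose-it dynamic described above can be caricatured in a few lines. This is a toy Hebbian-style update, not a model of real synapses; the learning and decay rates are arbitrary inventions.

```python
# Toy Hebbian-style plasticity: coincident activity strengthens a
# connection weight, disuse lets it decay. All rates are arbitrary.

def update_weight(w, pre, post, lr=0.1, decay=0.01):
    w += lr * pre * post   # strengthen when pre- and post-synaptic cells co-fire
    w -= decay * w         # passive weakening when the connection sits idle
    return w

w = 0.5
for _ in range(20):                          # repeated co-activation...
    w = update_weight(w, pre=1.0, post=1.0)
strong = w                                   # ...strengthens the connection
for _ in range(200):                         # long disuse...
    w = update_weight(w, pre=0.0, post=0.0)
weak = w                                     # ...lets it fade
```

Nothing in a microchip behaves this way: here the "circuit" itself changes as a function of its own traffic.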

One of the difficulties with the term “plasticity” is finding a workable definition. Dr. Christopher Shaw and colleagues in Toward a Theory of Neuroplasticity define it as “induced change in some property of the nervous system that results in a corresponding change in function and/or behavior”.

Plasticity refers to the range of changes that brain cells can generate as their physiochemical environments shift. However, neurons cannot handle stress beyond a certain point. For instance, traumatic brain injury can shear the long cable-like axons of brain cells, producing serious neurological damage, coma or even death.

The clinical relevance of neuroplasticity

In previous eras of neurology, psychology and medicine more broadly, it was believed that adults did not develop new neurons. This may seem a technical point of interest primarily to anatomists or physiologists, but it was consistent with a certain emphasis on how development is constrained by genetics, inheritance, and the pre-determined “hard wiring” of the brain.

Overall, the emerging view is one in which the brain is understood to self-organize based on genetic instructions, to grow new cells, to adapt to new conditions, and to respond to damage by a sort of re-routing, compensating for damage in one area through other networks taking over that function. A computer must be programmed by someone external to itself, but brains carry genetic instructions that allow them to “self-organize”, to grow and to make new connections.

Stress and trauma, as well as learning and novelty, can affect brain structure and function. Stress hormones such as cortisol are critical for the “fight or flight” response. However, they can alter the way the brain’s cortex and hippocampus work together, and subvert the ability to form new memories. This is a form of plasticity that clinicians would like to be able to prevent, and medications may in fact help protect these vital connections.

A stroke can deprive nerve cells of oxygen and kill them. Areas of “necrosis” (dead tissue) from a stroke or a blow to the head will tend to compromise mental functioning. Yet the plasticity of the brain can allow people to heal by recruiting other populations of cells to handle these processes. New networks can form to compensate for the loss of the old.

read the rest here

Embodied cognition and overeating: a challenge for neurophenomenology and the public health system

clinical neurophenomenology, medicine, physiology, psychiatry, psychology, visceral perception

Many people struggle with overeating. Body image issues influence people’s sense of worth and personal dignity. There is great popular interest in understanding why it can be so difficult to get this aspect of our lives right, and scientists and doctors try to educate the public about the current state of the science. Modern scientific medicine has developed a way of thinking about disease and health that is holistic, inclusive, and integrative. There is a growing recognition that in the case of overeating, the list of causes is long, including attitudes about food coming from one’s childhood and upbringing, levels of energy expenditure, genes and hormones that regulate metabolism, neural networks in the brain activated when food is smelled, and so forth. Rather than emphasize any one cause, many scientists look at overeating in terms of a network of interacting systems affecting and affected by physiology, thinking, emotions, feelings, and behavior. This counters a historical tendency among hunger scientists to try to isolate a few particular hormones and signaling systems as the cause of overeating. That approach of reducing the complex to the simple has an amazing track record in the history of science, responsible for much of the modern world’s technical achievements. Scientific medicine attempts to understand illness and disease using this powerful “reductionistic” approach, which has produced countless innovations and therapies. Yet some systems in nature defy an overly mechanical understanding. The human body is not a car with faulty parts that can be identified as the cause of performance failures. Hunger can be understood as a bio-psycho-social product of body chemistry, psychological states and environmental context. Overeating involves a person’s experience of craving and of not being sated as much as the physiological signaling of brain chemicals like dopamine or hormones such as ghrelin and leptin.
Overeating is not just a system for science to investigate; it is a feeling involving thoughts, emotions and attitudes, as well as a behavior.

Overeating involves disordered chemical signaling systems in the brain and body

There does appear to be a fantastically intricate series of feedback loops and chemical signals at work when people get hungry and then eat but are not sated. Feeling “full” or satisfied is actually a complicated business in which glucose (blood sugar) levels, brain chemicals such as dopamine, and a suite of hormones produce a network of changes that register in the mind as the feeling of wanting more food, or not. Blood glucose levels are regulated by insulin, but some people have a disorder in which the pancreas fails to produce sufficient insulin (Type 1 diabetes). Or the cells may not respond to insulin correctly (Type 2 diabetes). Failure to metabolize blood sugar properly can influence the formation of adipose tissue (fat). The “metabolic syndrome” of disordered levels of blood sugar and of hormones secreted by fat can leave a person who has eaten plenty still craving more, or unsatisfied. Fat cells secrete a protein known as leptin that acts as a signaling molecule. In healthy people, this hormone acts to inhibit appetite. One cause of obesity is a failure to produce the right amounts of leptin, but sometimes the problem is more a failure to respond to proper leptin levels. In the healthy, leptin works in concert with another hormone named ghrelin, which is secreted as a person becomes hungry. After eating, ghrelin levels decline in a person whose appetite, metabolism, and levels of fat tissue are regulated normally. Evidently it does not take much to knock these signaling systems out of balance. Obesity, diabetes and overeating disorders are at record levels.
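The leptin/ghrelin interplay sketched above can be caricatured as a feedback loop. This is entirely schematic: the update rules and numbers are invented, and real appetite regulation involves many more signals than two.

```python
# Toy feedback loop: ghrelin rises with fasting and falls after a meal,
# while leptin (held constant here, tracking fat stores) inhibits appetite.
# All numbers and rules are invented for illustration.

def appetite(ghrelin, leptin):
    # Drive to eat grows with ghrelin and is inhibited by leptin.
    return max(0.0, ghrelin - leptin)

def step(ghrelin, ate):
    if ate:
        return ghrelin * 0.5        # a meal suppresses ghrelin
    return min(1.0, ghrelin + 0.1)  # fasting lets ghrelin climb

ghrelin, leptin = 0.2, 0.3
drives = []
for hour in range(8):
    drive = appetite(ghrelin, leptin)
    drives.append(drive)
    ghrelin = step(ghrelin, ate=(drive > 0.5))  # eat once the drive is strong
```

In this caricature, blunting the leptin term (as in leptin resistance) leaves the ghrelin signal unopposed, which is one loose way to picture how a person who has eaten plenty can still feel unsated.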
Stress; problems with work, romance and family life; the experience of loss and grieving; and aging all change our metabolisms and leave us vulnerable to craving more than we need. Humans did not evolve in environments with triple bacon cheeseburgers and Super Big Gulps easily available, and the presence of such energy-dense, calorie-rich stimuli in our modern settings triggers our minds to crave what very few of us need.

Treatments for overeating and binge-eating disorder

Many people coping with eating disorders alternate between periods of fad diets and binging, or between intermittent exercise and being sedentary. Over time, such sporadic efforts can easily lead to more weight gain. There are other options available, however. There is some evidence that the drug topiramate, also known for its anticonvulsant properties, can work as a treatment for overeating and binge-eating disorders: it dampens the activity of central nervous system nerve cells, and some people seem better able to manage their unhealthy cravings for food after it is administered. Denise Wilfley, PhD, is quoted in Psychiatry Online as reporting that “ample research has demonstrated that cognitive-behavioral therapy and interpersonal therapy can counter binge eating and lead to long-term weight loss”, though the benefits are modest. Empowered consumers and patients should not expect topiramate, a talk therapy, counseling or other potential remedies to be a “magic bullet” that cures the desire to binge eat. These therapies typically deliver marginal improvements for most, though some may benefit more. A cost-benefit analysis is appropriate before trying any potential remedy. There is a complex relationship between the experience of hunger and its physiological basis.
Science is still establishing some of the core principles that govern how genes, upbringing, diet, stress, attitudes, choices, brain hormones, blood sugar and environmental variables interact to affect the urge to keep eating. There are therapeutic options available for those who poorly manage the urge to overeat. Medication and/or talk therapies may provide benefits, though individuals coping with the urge to binge eat should expect modest gains in most cases. People managing this problem have considerably more resources than even ten years ago. While basic science moves forward slowly, there is ever more information available on how to recognize, understand and manage this problem. The neuroscience of perceiving internal body states is proceeding incrementally. The genes that regulate metabolic chemical pathways, and the networks of signaling molecules that activate and deactivate those genes, are being discovered. Fat may eventually be understood as something like an organ that secretes molecules to regulate its own state. More here

Can drugs prevent dementia by altering its neurological correlates?

Uncategorized

Alzheimer’s disease will affect more and more people as the population ages. This disorder slowly robs the old and the not-so-old of their wits and memories, and there is no cure.

Evidence has accumulated that some people inherit genes that increase their risk for the condition. A project is underway to stop the dementia before it starts, by giving drugs to those with genes that are a risk factor for Alzheimer’s.

Despite decades of efforts, there is not a clear understanding of the neurological cause of Alzheimer’s. After many millions of dollars spent, countless studies and the efforts of some of the world’s best scientists, we can treat the symptoms and alleviate some of the suffering, but not heal the patient.

There has been great interest among scientists in preventing this disease. Now, a never-before attempted project is underway to use genetics to find those at risk for the disorder, and then to give them a drug called crenezumab.

It is possible that crenezumab, administered to healthy people with family histories of the disease, may prevent dementia symptoms from occurring.

Can crenezumab cure Alzheimer’s?

Scientists are particularly interested in individuals who aren’t yet showing symptoms of the disorder, but who have genes or family histories that predispose them to early-onset Alzheimer’s.

An extensive, multi-year, interdisciplinary initiative to measure the possible preventive benefits of crenezumab is being undertaken by teams of researchers associated with the pharmaceutical firm Genentech, the Banner Alzheimer’s Institute and the National Institutes of Health.

The effort will administer crenezumab to 300 individuals who are members of families identified as carrying genes connected to early-onset Alzheimer’s. The goal is to find out, as stated in a Genentech news release, “if we intervene before cognitive function deteriorates, can we prevent the disease?”

If the answer is yes, it could be a game-changer for those who carry genes increasing risk for Alzheimer’s disease. Family histories and gene sequencing could give individuals crucial information about their risk status, and those who are at higher risk could take crenezumab or another beta amyloid-interacting drug.

Plaques, tangles and the biology of dementia

Scientists have examined the atrophied brains of the severely afflicted and found plaques and tangles. Researchers found these are associated with a complex, sticky material called beta amyloid that seems to be more present in the brains of those with the severest symptoms than in normal adults’ brains.

It’s not quite clear whether the plaques and tangles this material forms are the sole or primary cause of the memory problems and other mental deficits seen with Alzheimer’s patients. Alternatively, the material might be the result of nerve cells coping with the disease, and thus an effect, not a cause.

Scientists have slowly been discovering certain compounds that interact with the plaques and tangles in potentially therapeutic ways. Laboratory science will have real world medical benefits if crenezumab prevents the formation of the plaques and tangles, and slows or prevents dementia symptoms.

Family history and genes offer clues to Alzheimer’s

Researchers have looked at different varieties of Alzheimer’s disease. Intriguingly, a minority of patients aged 30 to 60 have an early-onset variant. This subtype is more often inherited and is called “familial Alzheimer’s disease.”

Overall, a number of genes on chromosomes connected to the disorder have been discovered, including for the more common adult-onset Alzheimer’s type that generally afflicts those over 60. Family histories and DNA sequencing have added to what is still a very partial understanding of this condition, and researchers hope this will continue as more related genes are found.

Currently, genetic tests can detect genes that affect the likelihood of developing the memory loss and other cognitive problems characteristic of the disorder. However, the National Institutes of Health states, “It is unlikely that genetic testing will ever be able to predict the disease with 100 percent accuracy because too many other factors may influence its development and progression.”

Science may never have a completely accurate model of all such factors, but it will be very significant if crenezumab prevents or slows the onset of dementia symptoms for those whom genetic tests reveal to be at risk. Those families in Colombia who show an unusual incidence of early-onset Alzheimer’s disease may be helping to push scientific knowledge into the new era of genetic medicine.

Are we near the turning point in the war against Alzheimer’s?

The current study will focus on those whom family medical studies and genetic tests identify as likely to develop early-onset Alzheimer’s. Depending on the results, the much greater population whose symptoms appear in late middle age or later may also benefit.

Although the researchers are hopeful about where this work will lead, there are no guarantees. Many medications that have interesting and novel properties in the lab turn out to have limited clinical benefits. Likewise, other drugs may work well therapeutically, but also cause serious and debilitating side effects.

The new initiative is still in its early stages, and crenezumab is not yet available to the public for combating Alzheimer’s. The MDs, PhDs, and others associated with the Genentech, Banner Alzheimer’s Institute, and National Institutes of Health teams are being cautious about expectations for this initiative.

Read the rest here.

Neurophenomenology conference in the UK during September: call for papers

Uncategorized

The Consciousness and Experiential Psychology Section of the British Psychological Society

Annual Conference
University of Bristol, 15th & 16th September 2012

Deadline for submissions: 1st May 2012

*** Conference Registration is Now Open – Early Registration Until June 30th ***

Neurophenomenology

Abstract
Standard approaches to understanding consciousness have found their progress interrupted by the explanatory gap purported to exist between the qualitative nature of experience and the quantitative nature of science. Whether it’s third-person scientific methods, which do not easily transfer from observation of the physical to the first-person nature of experience, or phenomenological study, which focuses on the analysis of experience whilst bracketing off theory, there would appear to be an insurmountable difficulty.

These problems are apparently avoided by the neurophenomenological method as first suggested by Francisco Varela (1996). Neurophenomenology operates by investigating the structural parallels between experience, as investigated by the phenomenological method, and the activity of biological systems, as investigated empirically with a particular emphasis on the insights of dynamical systems theory.

The conference will examine the aims and practices of neurophenomenology in an attempt to gauge its success at eradicating the explanatory gap. Our concerns include two core strands:

  • A consideration of neurophenomenology’s radical method, which requires subjects be trained in the practice of epoché and phenomenological reduction

In this strand, we aim to address: the possibility of performing a successful and complete suspension of theories and beliefs about experience; the ability of participants and experimenters to develop open questions which disclose stable experiential invariants; the construction of valid methods of intersubjective corroboration.

  • An evaluation of neurophenomenology’s approach to dynamical systems theory

Addressing questions such as: How should biological systems best be studied, in order to elucidate structural parallels with phenomenal experience?  What can dynamical systems theory contribute to such a study? How, and to what degree, can formal models ever capture experiential structure?

As ever, we aim to hold a conference accessible to all with a broad interest in the academic study of conscious experience. We invite submissions from the full range of academic disciplines with an interest in neurophenomenology, including psychology, philosophy (both analytic and continental), biology, dynamical systems theory and neuroscience.

Keynote Speakers
Prof. Michel Bitbol – Director of Research, Centre National de la Recherche Scientifique (CNRS), at the Centre de Recherche en Epistémologie Appliquée (CREA), Ecole Polytechnique, Paris

Prof. Natalie Depraz – Professor, Department of Philosophy, University of Rouen; Associated researcher, CREA, Ecole Polytechnique/CNRS, Paris

Dr. Claire Petitmengin – Senior Lecturer, Department of Languages and Human Sciences, Institut Télécom, Evry, Essonne; Associated researcher, CREA, Ecole Polytechnique/CNRS, Paris

Dr. Elena Antonova – Lecturer, Institute of Psychiatry, King’s College, London

Location
Wills Hall, University of Bristol
(Accommodation and meals will be available at the conference venue on the 14th, 15th and 16th September, and can be booked via the conference website.)

Submission
Submission of work for presentation in paper or poster format (please specify if you have a preference for poster only) is now open. Non-keynote paper presentations will be 30-minute slots, approximately 20 minutes for the talk and 10 for questions.

Abstracts of up to 300 words should be sent to Dr. Michael Beaton (mjsbeaton@gmail.com) by the 1st May. Please include name and affiliation for all authors.

Key Dates
Submission deadline: 1st May 2012
Responses by: 31st May 2012
Early Registration Deadline: 30th June 2012
Conference: 15th & 16th September 2012

Website
For up-to-date conference information: http://cep.bps.org.uk/
Conference registration pages: http://cep.yolasite.com/

Looks like Autism and neurodevelopmental disorders are going to be re-classified in the new DSM-V

Uncategorized

Some of the work I did for my dissertation dealt with “nosology”, the categorization and classification of symptoms, signs, syndromes, and diseases. I took a class in neuropsychology with David Tucker, an excellent teacher and clinician who got my interest in this subject going. Clinical neuropsychologists confront the problem of how complex an individual’s experience is, and diagnostic criteria may not capture this very well.

A minor theme of my dissertation was the particular issue of knowledge representation for cardiac “body knowledge” or “body cognition” disorders as compared to autism. Psychiatrists, neurologists, pediatricians, psychologists, and other clinicians wrestle with how different one autistic patient is from another. The new classification for autistic spectrum disorder coming out in the 2013 DSM-V will rework how autism is defined, hopefully leading to better diagnoses. I write about this issue for DailyRX:

“Much discussion has centered on exactly who should be considered autistic, based on which diagnostic rules doctors should use. Dialogue among clinicians, scientists, and patient advocates has focused on the proposed reworked definitions to be published in the American Psychiatric Association’s Fifth Edition of the Diagnostic and Statistical Manual of Mental Disorders in mid-2013.

Currently, the 4th edition of the DSM categorizes autism, Asperger’s disorder, childhood disintegrative disorder, and “pervasive developmental disorder not otherwise specified” as separate conditions.

If the proposed changes are indeed ratified and published, the larger category of “autistic spectrum disorder” will be used to categorize individual experience and behavior, ranging from mildly to severely impaired functioning.”