Walter Freeman (http://sulcus.berkeley.edu/) is very prominent within the neurodynamics world, but is perhaps not as well known to the neurophenomenology and embodied cognition communities as he should be. This is possibly because of the forbiddingly technical nature of the physics concepts he employs. He told me at a conference some years ago that his views were very close to those of Francisco Varela, who was himself a dynamical systems neuroscientist. He is quite possibly the world’s foremost expert on modeling cognitive neurodynamics with EEG. I am examining his work again as I am in the process of designing an EEG study. Our Science Club in Austin has been wrestling with his paper “Metastability, Instability, and State Transitions in Neocortex” (Freeman and Holmes, 2005), where he presents a “globalist” alternative to researchers who focus on “modules lighting up”:
“Humans observe and grasp complex events and situations by means of expectations that have the form of theories. A theory determines the techniques of observation, which in turn shape what is observed and understood. The classic case in physics is the wave–particle duality, in which the choice of one or two slits determines the outcome of the observation. A similar situation holds for the classic debates among proponents of competing theories about neocortical dynamics: localization vs. mass action. In one view, cortex is a collection of modules like a piano keyboard, each with its structure, signal, and contribution to behavior. In the other view, the neocortex is a continuous sheet of neuropil in each cerebral hemisphere, which embeds specialized architectures that were induced by axon tips arriving from extracortical sources during embryological development. Cooperative domains of varying size emerge within each hemisphere during behavior that includes the specialized architectures.
Observers of both kinds use electroencephalograms (EEGs) and units to test their models. Localizationists (e.g. Calvin, 1996; Houk, 2001; Llinás & Ribary, 1993; Makeig et al., 2002; Singer & Gray, 1995) analogize the neocortex to a cocktail party with standing speakers; each module gives a signal that, when activated like a voice in a room, by volume conduction occupies the whole head and overlaps other signals. On the assumption of stationarity, the signals can be separated by independent components analysis (ICA) of multichannel EEG recordings. Globalists (e.g. Amit, 1989; Basar, 1998; Freeman, 2000) analogize neocortex to a planetary surface, the storms of which are generated by intrinsic dynamics and modified by the structural features of the surface.”
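
To make the localizationist pipeline concrete for myself, here is a toy sketch of ICA unmixing under the stationarity assumption Freeman describes. The synthetic sources, the 3x3 mixing matrix, and all the parameters are my own stand-ins for illustration, not anything from the paper.

# Toy ICA demo: recover "module" signals from an instantaneous linear mixture.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2000)

# Three hypothetical "module" sources: a 10 Hz oscillation, a sawtooth, noise.
s1 = np.sin(2 * np.pi * 10 * t)
s2 = 2 * (t % 1) - 1
s3 = 0.5 * rng.standard_normal(t.size)
S = np.c_[s1, s2, s3]                      # (n_samples, n_sources)

# Volume conduction modeled as a fixed linear mixture at three "electrodes".
A = rng.uniform(-1, 1, size=(3, 3))
X = S @ A.T                                # (n_samples, n_channels)

# ICA recovers the sources up to permutation and scale -- but only if the
# mixture really is stationary, which is exactly the contested assumption.
ica = FastICA(n_components=3, random_state=0)
S_hat = ica.fit_transform(X)
print(S_hat.shape)                         # (2000, 3)

Freeman continues: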
“These analogies throw into sharp relief the contrasting assumptions and inferences on which the two theories are based. Further, they justify the different methods by which the EEGs are processed, so that after the processing the two forms of the postprocessed EEG data differ dramatically, each legitimately in support of the parent theory. This is why any description of a brain theory should be prefaced by a review of the methods used to get the data that supports the theory.
Raw EEG data must be preprocessed prior to measurement. Here six decisions are summarized that have to be made by localizationists and globalists before they acquire EEG data. The choices are diametrically opposed (Freeman, Burke, & Holmes, 2003; Freeman & Holmes, 2005).
(i) According to localizationists, specified behaviors require activation of selected cortical modules that give signals at specific stages of the behaviors and are otherwise silent. The background EEG is incompatible with this expectation, so they adopt the theory established years ago by Bullock (1969) and Elul (1972) that background EEG is dendritic noise, which is so smoothed by volume conduction, particularly at the scalp, that it has no identifiable spatiotemporal structure. They use time ensemble averaging (TEA) to attenuate the noise in proportion to the square root of the number of repeated stimuli that activate the modules, and to extract the expected signals as event related potentials (ERPs). Globalists view the background activity as the necessary pre-condition for execution of the specified behavior. That activity is modified by conditioned stimuli in differing ways in various areas of neocortex. The induced modifications are not time-locked to triggering stimuli, so that TEA cannot be used. Instead, spatial ensemble averaging (SEA) is used to extract reference values for sets of phase and amplitude values from multiple EEGs.”
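
The TEA/SEA distinction is the crux for my own study design, so here is a minimal sketch of the two averaging schemes on fake data; the array shapes and trial counts are arbitrary choices of mine.

# Contrast of time ensemble averaging (TEA) and spatial ensemble averaging (SEA).
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_channels, n_times = 50, 64, 500
eeg = rng.standard_normal((n_trials, n_channels, n_times))  # stand-in recordings

# TEA: average over repeated trials, per channel. Non-time-locked activity
# shrinks roughly as 1/sqrt(n_trials), leaving the time-locked ERP.
erp = eeg.mean(axis=0)                     # (n_channels, n_times)

# SEA: average over channels at each moment, giving a spatial reference
# against which per-channel phase and amplitude deviations can be measured
# on single trials, with no time-locking required.
sea_ref = eeg.mean(axis=1)                 # (n_trials, n_times)
print(erp.shape, sea_ref.shape)

Back to the paper: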
“(ii) The sensor of choice for localization is the depth microelectrode, because the size of the tip determines the acuity of spatial resolution. For globalization the spatial resolution is determined by the interelectrode distances, so the electrode face, to minimize noise, should be as large as possible without touching neighbor electrodes.
(iii) Both observers use as many electrodes as possible. Localizationists space their electrodes as far apart as possible to sample from as many modules as they can. Globalists space them closely to avoid spatial aliasing and undersampling of spatial patterns of cortical activity.”
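
The aliasing point is ordinary sampling theory applied across the scalp rather than across time: an interelectrode spacing d resolves spatial wavelengths no shorter than 2d. A quick back-of-envelope check, with spacings I picked purely for illustration:

# Spatial Nyquist limits for a few electrode spacings (illustrative numbers).
for d_mm in (30.0, 10.0, 3.0):             # coarse scalp grid -> dense array
    f_max = 1.0 / (2.0 * d_mm)             # highest resolvable spatial frequency
    print(f"spacing {d_mm:4.1f} mm -> shortest wavelength {2 * d_mm:.0f} mm "
          f"({f_max:.3f} cycles/mm)")

Continuing: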
“(iv) Localizationists sharpen the spatial focus of the signals by high-pass spatial filters such as the Laplacian to correct the smoothing by volume conduction. Globalists use low-pass spatial filters to attenuate contributions that are unique to individual electrodes and enhance the sampling of synchronized field potential activity.”
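
Here is how I picture the two opposed spatial filters, sketched on a toy 8x8 electrode grid; the kernels and grid size are mine, chosen only to show the contrast.

# High-pass (surface Laplacian) vs. low-pass (neighborhood mean) spatial filters.
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(2)
v = rng.standard_normal((8, 8))            # one time-slice of grid potentials

# Localizationist choice: discrete Laplacian sharpens local sources by
# subtracting each electrode's neighborhood.
laplacian = np.array([[0.0,  1.0, 0.0],
                      [1.0, -4.0, 1.0],
                      [0.0,  1.0, 0.0]])
v_sharp = convolve(v, laplacian, mode="nearest")

# Globalist choice: neighborhood averaging suppresses per-electrode noise
# and keeps the shared, synchronized field.
smooth = np.full((3, 3), 1.0 / 9.0)
v_smooth = convolve(v, smooth, mode="nearest")
print(v_sharp.std(), v_smooth.std())

Next: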
“(v) Narrow band-pass filters are favored by localizationists on the premise that modular signals are likely to be bursts at definite frequencies such as 40 Hz. Globalists prefer broad-band filters in expectation that oscillatory signals in EEGs are aperiodic (chaotic).”
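
In code, the two temporal filtering choices might look like this; the Butterworth design, filter order, and band edges are my illustrative assumptions, not Freeman’s specification.

# Narrow band around a presumed 40 Hz rhythm vs. a broad 1-100 Hz band.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 500.0                                 # assumed sampling rate in Hz
x = np.random.default_rng(3).standard_normal(int(2 * fs))  # stand-in channel

b, a = butter(4, [38.0, 42.0], btype="bandpass", fs=fs)    # localizationist
x_gamma = filtfilt(b, a, x)

b, a = butter(4, [1.0, 100.0], btype="bandpass", fs=fs)    # globalist
x_broad = filtfilt(b, a, x)
print(x_gamma.var(), x_broad.var())

And the last decision: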
“(vi) Signal sources are localized to modules by fitting equivalent dipoles to the filtered data in order to solve the inverse problem. Global signals are not confined to specific anatomical sites; they are localized not in the Euclidean space of the forebrain but in multidimensional N-space, where N is the number of available electrodes. These diametrically opposed choices in data processing lead to widely divergent EEG data, and the data lead to theories that are skew. The two theoretical positions are more complementary than conflicting.”
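
That last point, trajectories in N-space rather than sources in anatomical space, is the one I most want to carry into our study. Here is a minimal sketch of what it means operationally, with the data and the jump threshold entirely invented by me:

# Treat each time sample of an N-channel recording as a point in R^N and
# follow the trajectory; large jumps are candidate state transitions between
# quasi-stable spatial patterns.
import numpy as np

rng = np.random.default_rng(4)
n_channels, n_times = 64, 1000
eeg = rng.standard_normal((n_channels, n_times))   # stand-in multichannel EEG

points = eeg.T                             # (n_times, n_channels): path in N-space
steps = np.linalg.norm(np.diff(points, axis=0), axis=1)
jumps = np.flatnonzero(steps > steps.mean() + 2 * steps.std())
print(f"{jumps.size} candidate transitions out of {steps.size} steps")

Crude as these toys are, they help me see why the two camps’ preprocessing choices lead to such divergent data, and why Freeman can still call the positions complementary rather than conflicting.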