Thanks to popular media and fiction about science, we’ve all heard the term ‘brain waves’, which refers loosely to the frequencies of electrical oscillations in the brain detected at the scalp using EEG. These signals are spatially coarse (you may be able to tell left from right or front from back, but you won’t isolate the insula or the cuneus), but they carry rich temporal information (on the order of milliseconds). Per the modern dogma of neuroscience, nearly every aspect of human behavior or conscious experience can be reduced to a discrete set of electrochemical processes in the brain. This obviously includes visual perception. There is an enormous body of research on what happens differently in your brain in response to different variations of a stimulus, but not nearly as much work has looked at the opposite arrow of causality: what is going on in your brain that causes you to see the same stimulus in different ways?
For example, suppose there is a single neuron responsible for perceiving a circular disc. When a disc is flashed, this neuron fires, and you have a vivid perception of a circle (well, as vivid as a circle can be). Without this neuron, you would not perceive the disc, even if it were sitting blatantly in front of your eyes and they were accurately transmitting information to your visual brain. This is a fairly straightforward, though grossly simplified, account of how we often think about vision (in practice we usually consider enormous, brain-wide networks of neurons working in teams rather than single cells; but see Quiroga et al., 2005 for an observation that may suggest otherwise). But what if we presented this circle in a way that made it difficult to see, say by flashing it very dimly, so that you sometimes see the circle like normal and sometimes miss it completely? What is that neuron up to in the time before we show the circle that makes it more or less likely to fire, and thus leads (or not) to your perception of the circle?
A recent study by Mathewson and colleagues, published in the March 4, 2009 issue of the Journal of Neuroscience, used EEG to ask exactly this question: what is different about the global electrical activity in the brain when a hard-to-see flash is perceived compared to when it is not? The researchers used a phenomenon called metacontrast masking, in which a brief initial flash is rendered undetectable by a surrounding flash that follows it (here, a circular disc flashed for 12 ms, followed ~50 ms later by a ring around the disc presented for 24 ms). Subjects (whose brain activity was being recorded with a spidery EEG cap) reported whether or not they detected the first ‘target’ circle (‘catch’ trials, in which no target was presented, were included to make sure subjects were paying attention). This type of stimulus allowed the researchers to gather EEG data over roughly equal numbers of trials in which the target was detected and undetected.
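For concreteness, here is a toy sketch of the trial timeline described above, using the timings reported in the post (12 ms target, ~50 ms target-to-mask onset asynchrony, 24 ms mask). The variable names and the assumption of a fixed 50 ms asynchrony are mine, not the authors’.

```python
# Sketch of one metacontrast-masking trial, with timings from the post.
TARGET_DURATION_MS = 12   # brief circular disc (the 'target')
SOA_MS = 50               # target onset -> mask onset (approximate)
MASK_DURATION_MS = 24     # surrounding ring (the 'mask')

def trial_timeline(target_onset_ms=0):
    """Return (event, onset_ms, offset_ms) tuples for one trial."""
    mask_onset = target_onset_ms + SOA_MS
    return [
        ("target", target_onset_ms, target_onset_ms + TARGET_DURATION_MS),
        ("mask", mask_onset, mask_onset + MASK_DURATION_MS),
    ]

for event, on, off in trial_timeline():
    print(f"{event}: {on}-{off} ms")
```

The key point is that the mask arrives after the target has already disappeared, yet it still abolishes awareness of the target on many trials.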
The main finding was that a certain band of ‘brain waves’ (cortical oscillations) measured before the target was flashed differentiated trials in which the target was detected from those in which it was not. These oscillations, in the alpha band (a frequency range centered at 10 Hz, typically associated with decreased vigilance and alertness), when measured time-locked to the onset of a fixation cross that appeared onscreen before the target, showed different phases on detected and undetected trials. Since the alpha rhythm is rather slow, which often suggests a larger number of neurons firing in synchrony, the authors suggest that this phase difference reflects a varying cortical susceptibility to visual stimulation. When the target was presented at one peak of the oscillation, subjects were much more likely to report perceiving it; at other points in the rhythm, the visual cortex was less receptive to input.
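To make the phase idea concrete, here is a minimal simulation, not the authors’ analysis pipeline, of how pre-stimulus alpha phase can be estimated and related to detection. The 10 Hz rhythm, the noise level, and the toy rule that detection probability depends on alpha phase at target onset are all my assumptions; the phase estimate uses simple complex demodulation over a 500 ms pre-stimulus window.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 500          # sampling rate in Hz (assumed)
f_alpha = 10.0    # centre of the alpha band
n_trials = 400
t = np.arange(-0.5, 0.0, 1 / fs)   # 500 ms pre-stimulus window; target at t = 0

detected, phases = [], []
for _ in range(n_trials):
    phi = rng.uniform(0, 2 * np.pi)   # this trial's alpha phase at target onset
    eeg = np.cos(2 * np.pi * f_alpha * t + phi) + 0.5 * rng.standard_normal(t.size)
    # Toy stand-in for the reported effect: detection is more likely
    # near one phase of the 10 Hz cycle at target onset.
    p_detect = 0.5 + 0.4 * np.cos(phi)
    detected.append(rng.random() < p_detect)
    # Estimate the phase at t = 0 by complex demodulation over the window:
    demod = eeg * np.exp(-2j * np.pi * f_alpha * t)
    phases.append(np.angle(demod.mean()))

phases, detected = np.array(phases), np.array(detected)

def circ_mean(a):
    """Circular mean of a set of angles (radians)."""
    return np.angle(np.exp(1j * a).mean())

print("mean phase, detected:  ", circ_mean(phases[detected]))
print("mean phase, undetected:", circ_mean(phases[~detected]))
```

With the toy rule above, detected trials cluster around one phase and undetected trials around the opposite phase, which is the qualitative pattern the paper reports.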
Personally, I find this result quite fascinating. I’ve been working to see if there’s a critical error in the methods used or the analysis performed, but the authors appear to have been very careful in their work; by my best understanding, this finding is legit and important. Though I’ve mentioned throughout this post the possibility of using brain state to predict perceptual state before the stimulus is shown, I want to be clear that this is not what the authors of this study did. All the analysis and classification of alpha states was done offline, long after the subjects had gone home.
Now, what I’d really like to see is a group perform the predictive version of this experiment: keep a running estimate of the ongoing alpha phase and present hard-to-see stimuli at chosen phases of the oscillation. If this finding is robust, and computational power is plentiful enough, this experiment should be feasible and should yield a positive result. I’m really excited to see developments like this study, along with several others from the past year, concerning what’s happening neurally in the period before a stimulus is presented or a motor act is performed. In our efforts to piece together an explanation of human behavior (both subjective experience and objective action) in terms of neural events, this kind of work is just as important as understanding the effects of these behaviors (perception and action) on the brain.
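Here is a minimal sketch of what the real-time version could look like, assuming a clean 10 Hz rhythm in a short buffer of recent samples: estimate the current alpha phase and extrapolate forward to schedule the flash at the next peak of the oscillation. The simulated buffer, the sampling rate, and the demodulation-based phase estimate are all my assumptions; a real system would also need to handle filtering delays and a drifting alpha frequency.

```python
import numpy as np

fs = 500          # sampling rate in Hz (assumed)
f_alpha = 10.0    # alpha frequency to track
rng = np.random.default_rng(1)

# Pretend this is the last 300 ms of an ongoing recording.
true_phi = 1.2    # ground-truth phase at t = 0, unknown to the "monitor"
t = np.arange(-0.3, 0.0, 1 / fs)
buffer = np.cos(2 * np.pi * f_alpha * t + true_phi) + 0.3 * rng.standard_normal(t.size)

def next_peak_delay(buf, t, f=f_alpha):
    """Estimate the current alpha phase from the buffer and return the
    delay (in seconds) until the next peak of the oscillation."""
    # Complex demodulation gives the phase at t = 0 ("now"):
    phi = np.angle((buf * np.exp(-2j * np.pi * f * t)).mean())
    # cos(2*pi*f*t + phi) peaks when 2*pi*f*t + phi is a multiple of 2*pi:
    return ((-phi) % (2 * np.pi)) / (2 * np.pi * f)

delay = next_peak_delay(buffer, t)
print(f"trigger the flash in {1000 * delay:.1f} ms")
```

In a closed-loop experiment, this delay would be recomputed continuously and the stimulus presentation scheduled accordingly, sweeping across target phases over trials.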
What do you think? If you’re experienced with EEG experiments/analysis, I’d love to hear a more in-depth evaluation of the methods used in this paper. Leave a comment, or email me at neurotechnica on gmail.