On the Same Wavelength

We all have some idea of what it feels like to be on the same wavelength as someone—to feel charged by some tangible, electric connection with another human being. Psychobiologist Suzanne Dikker has ventured to delve deeper into this feeling to understand the neurological underpinnings of human social interaction.

Brainwaves + associated cognitive states.

Our brains are made up of billions of neurons, which communicate with one another through bursts of electrical activity. The sum of all those electrical impulses can actually be recorded on the surface of our scalps with an electroencephalogram [EEG], allowing researchers to trace and analyze patterns of electrical activity, or brainwaves. To peer into the minds of individuals engaged in paired interaction, Suzanne uses portable EEG headsets to analyze how their brainwaves behave as they communicate and how those patterns track with moments of meaningful social interaction. When we’re on the same wavelength, are our brainwaves actually moving more in sync? And what does that actually mean?

To investigate the neuroscience underlying the art of communication, Suzanne has married the tools and methods of cognitive neuroscience with those of neurofeedback art, collaborating with the likes of performance artist Marina Abramović and interactive media designer Matthias Oostrik to artfully weave aesthetics into each experimental design. These collaborative efforts have culminated in a series of interactive works that actually mirror the fun and frustration that go hand-in-hand with social interaction. These projects creatively crowd-source neuroscience through audience participation, inviting participants to become a part of the research as they engage with the art and with each other.

Below, enjoy ArtLab’s Q+A with Suzanne as she shares the details of her research and her insights into working at the intersection between science and art.

Your work with Marina Abramović was really the jumping off point for your current work, both in terms of investigating brainwave synchronicity and working at art-science intersection. So how did “Measuring the Magic of Mutual Gaze” and your collaboration with Marina come about in the first place?

The Sackler Family Foundation organizes this annual Art and Science: Insights into Consciousness workshop at The Watermill Center. So right after Marina Abramović did her “The Artist Is Present,” they had a meeting in the summer of 2010 and everyone was really intrigued by how connected the audience members felt to her—there were a lot of emotional reactions both from Marina and from the participants’ perspectives. So they thought maybe we could investigate this.

What Marina initially wanted was to take an fMRI machine and have her brain scanned, and then do this reverse classification so we could infer from the brain scan how she felt at a certain moment. But that really isn’t possible in the current state of affairs. So instead, we thought what might be interesting is to look at whether we can see correlated activity between people’s brains, and we can do this using EEG, which is portable.

So I set out to do this project—“Measuring the Magic of Mutual Gaze”—where we took portable EEG headsets and had people sit together making eye contact. I brought in a close friend, Matthias Oostrik, who’s an interactive media artist, to help develop the visuals so that the audience could also see something in the background. We took the EEG signal and split it into different frequency bands so you could see which frequency is dominant in each person’s brain by the frequency of the flickering. You can then compare the brainwaves from one person to the brainwaves of the other by which frequency is dominant in each brain. Whenever the frequencies overlap past a certain threshold, we see these waves that connect between the two brains. So that’s when there’s strong correlation or synchronization in activity.
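The pipeline Suzanne describes (split each EEG signal into frequency bands, find the dominant band per person, then compare the two) can be sketched in a few lines of Python. This is an illustrative reconstruction, not the installation’s actual code: the band boundaries and the simple same-dominant-band synchrony criterion are assumptions on my part.

```python
import numpy as np

# Conventional EEG band boundaries in Hz (an assumption; the
# installation's actual band definitions are not published here).
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(signal, fs):
    """Mean spectral power of one EEG channel within each frequency band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

def dominant_band(signal, fs):
    """Name of the band carrying the most power."""
    powers = band_powers(signal, fs)
    return max(powers, key=powers.get)

def in_sync(sig_a, sig_b, fs):
    """Two brains count as 'in sync' when the same band dominates both."""
    return dominant_band(sig_a, fs) == dominant_band(sig_b, fs)
```

A live installation would apply this to short sliding windows of the two streams and use a graded overlap threshold, as described above, rather than a binary match.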

Measuring the Magic of Mutual Gaze, 2011. Garage Center for Contemporary Culture. Photo by Maxim Lubimov

So you actually collected a bunch of data from “Measuring the Magic of Mutual Gaze.” What were some of your initial findings and can they actually give any hints into what’s happening in the brain when we intensely engage in eye contact with another person?

That’s actually the best data we’ve gotten, because people sat still for 30 minutes and did nothing except look into each other’s eyes. We found the most synchrony in alpha waves at right posterior sensors, which are the ones that best pick up brain activity associated with visual processes. Alpha-frequency waves are associated with concentration and focus. What was interesting was that Marina’s brain had much more alpha activity across the board compared to the other participants’ brains.

So you have your two brains connecting, but there has to be something that mediates that connection in the real world too, at least if you ask me. Marina might say: “Well, that’s some sort of telepathic transfer of energy.” Or she might not even think that that’s necessarily the most interesting question to ask. But we as scientists are trained to ask: What in the physical world mediates that connectivity? What actually drives this? So with the alpha synchronization, for example, maybe you’re better in tune with each other’s blinking rate and that resets alpha so you’re on the same wavelength. Is there another explanation that’s more feasible or plausible? So it’s maybe not so mystical, but it’s still interesting.

It’s kind of a new field to compare brain activity directly between people. But at the same time, in any experimental study, you’re never just looking at one brain. You have a group of subjects, and it’s the average correlated activity between all those people that we use to determine if a result is significant. So it’s not such a drastically new theoretical thing, but people very often present it as such in the literature.

The Compatibility Racer, Berlin 2013: “a competitive, interactive brain-robotics installation.” Photo by Kate Moxham

Once I realized this, I asked myself: “What is actually interesting to ask? Are there actually interesting questions where you do need two people in the same room doing something at the same time while you record brain activity from both of them?” And it’s actually kind of difficult. There’s one study that investigates face-to-face versus back-to-back communication. That’s one that sort of does it, but even that you could do by having different people watch somebody from the front and from the back and see if there are differences.

But what is really nice about this line of work, both from an outreach and educational perspective and artistic perspective, is that it’s something that people find very intuitive–this question of what it means to be on the same wavelength. Framing it like that, you can go into things like what brainwaves actually are and how we can trace them to see if they are actually in sync.

Could you talk more about your most recent projects? 

For our most recent installation, “The Mutual Wave Machine,” we’re again measuring brainwave synchrony. People sit inside this capsule, and a visualization grows and shrinks as synchrony increases and decreases, so you can actually see the neurofeedback. We also gave the participants a questionnaire about their empathetic predisposition to see if there was a relationship with synchrony.

With this project, we wanted to get at the dissociation between wanting to connect with someone and actually being able to connect with them. And then there’s this loneliness that you can sometimes feel when you’re in the presence of another human being, which is much worse than being alone. But sometimes you do connect, and it’s exhilarating! So that’s what we wanted to try to amplify here. We wanted people to feel a sense of frustration.

So you see these light patterns growing as you become more synchronized, and there’s also this real-time video image of yourself embedded in the noise. So you’re looking at this weird white-noise pattern, and you start engaging in eye contact with this noisy projection of yourself as you become more connected with this person. So that adds another layer of mirroring.

Mutual Wave Machine, 2014. Photo by Sandra Kaas

How has working with artists affected your approach to or appreciation of your research in particular and science in general?

Right now I’m more of a person who wants to grab things from around me and try to see how that can be translated into questions that can be relevant in the lab and for the field. So you start to get these ideas for your own projects and inspiration for the kinds of questions that you want to ask in your own research. I really enjoy that part, and I think that is true of any interdisciplinary interaction, or even just talking to friends who aren’t in your field.

I also find it really interesting and really challenging to make these projects into really hybrid projects. Yes, I’m a scientist, but I’m also the artist on these projects in the sense that I have an idea of what I want them to be visually. So I need to work with people who have more experience in those fields to try to help me and my collaborators answer those scientific questions. And in those projects, it’s not entirely clear who’s what. We’re making projects that can be placed into either category, which is ultimately something that I think is really great.

What I’ve found about the art-science interface and working with artists is that the quest is ultimately the same: how do you ask a question in an original way, such that people are still interested in finding the answer? But then in art projects, you can often leave it at that question, whereas in a science project you’re asked to provide an answer. Once I was on the radio and I said something like: “This is an ongoing investigation. We don’t know the answer, and that’s actually really exciting!” And my friend said that’s not the message you want to convey; that’s not exciting, because people just want to know the answer to a scientific question. But that’s not science. The questions can often be just as exciting as the answers, and that’s not at all portrayed in the media.

There are also some very basic questions that lie at the heart of science that you’re not asking anymore because they’ve become so engrained in your way of thinking. The things that are very basic for us are not necessarily so for others. So sometimes people from outside of the sciences ask those questions or they challenge those questions. And it’s very hard, but it’s very important. Sometimes I’ve even noticed that there’s a gap in my own scientific thinking when I try to explain things to people who aren’t in my field. So they’ll ask these questions and I’ll realize actually maybe I should go back to my little diagram because there’s maybe a step that I skipped, or maybe it’s not actually working the way I thought it was. And that’s just by translating it into these terms for a layperson. So I think it’s really important for the advancement of science to keep yourself rooted in the outside world.

The research and works presented above have been made possible through the collaborative efforts of an incredible team of scientists and artists, credited below.

Measuring the Magic of Mutual Gaze

Marina Abramovic, Suzanne Dikker & Matthias Oostrik, and participants of the Annual Watermill Art & Science: Insights into Consciousness Workshop

Compatibility Racer

Lauren Silbert, Jennifer Silbert, Suzanne Dikker & Matthias Oostrik, Oliver Hess, Amanda Parkes

Mutual Brainwaves Lab

Suzanne Dikker & Matthias Oostrik // Special thanks to Michael Caruso, Katia Tsvetkova, and Jennifer Silbert

Mutual Wave Machine

Suzanne Dikker & Matthias Oostrik, Peter Burr, Diederik Schoorl, Matthew Patterson Curry, Oliver Hess

Bluebrain: Documenting the Frontiers of Brain Research

In his 2009 TED Talk, Dr. Henry Markram publicly announced the Blue Brain Project, a ten-year initiative to reverse-engineer a fully functional, supercomputer-powered simulation of the human brain. Inspired by such an ambitious project, filmmaker Noah Hutton set about chronicling the progress of Markram’s [blue]brainchild in an independently produced documentary spanning the ten-year timeline—a true-life, brain-themed “will-they-won’t-they” story.

On January 28, 2013, the European Commission awarded Henry Markram and his team $1.3 billion for the Human Brain Project.

Since 2009, Bluebrain: the Film has evolved into a 14-year-long documentary-in-the-making. By 2013, the Blue Brain team joined forces with scientists across Europe, receiving $1.3 billion in funding awarded by the European Commission for the Human Brain Project: a unified effort to create the brain simulation under a renewed 10-year timeline. 2013 also marked the official US entry into the quest to understand the human brain, as Obama announced the BRAIN Initiative specifically geared towards developing new neuroscience technologies over the next decade.

To accommodate the growing breadth of brain research, Hutton has widened the film’s scope to include scientists from around the world as they attempt to tackle one of the greatest challenges facing science. Each year, he releases an installment of Bluebrain—an annual “state-of-the-union” for the latest from the field—providing a unique, real-time glimpse into the pace of progress + the process of scientific discovery. Below, you can watch the fourth installment of Bluebrain and enjoy ArtLab’s Q&A for more about the film + his insights into these groundbreaking brainy projects.

Could you talk about how you got started making the film and how the project has evolved since you began in 2009?

I graduated from college as a neuroscience major. At that point, I had already made my first feature-length documentary and knew that I wanted to make films. I was aware of the Blue Brain Project because I had seen Henry Markram’s TED Talk in 2009, when he laid out a ten-year timeline to create a simulation of the brain. It was exciting, but also controversial. And it just hit me that I had become so interested in neuroscience anyway, and I really liked the Seven Up! series where the filmmaker follows the lives of 14 people every seven years. So I thought it would be very interesting to do a longitudinal film about such a lofty goal of ten years to understand the human brain.

I took my first trip to Lausanne to interview Dr. Markram and show him my first film. After that first visit he emailed me and gave me exclusive access to make the film. So that was how it started. Originally I thought I was in it until 2020, but it became 14 years after they re-booted the project this year as the Human Brain Project. So now it’s until 2024/2025.

Initially I was going to focus on The Blue Brain Project and building these brain simulations. I thought it was philosophically interesting if a film would develop as the brain simulation would develop—two parallel entities growing and getting more complex. That is still part of my interest in making the film, but over the years, I’ve realized this is a much bigger topic and that there are a lot more people and countries—especially this year—throwing in their hats to create big consortiums and projects to really tackle this in our lifetime.

On April 2, 2013, the US officially entered the so-called race to understand the human brain when Obama announced the BRAIN Initiative. [Photo credit: AP Photo/Charles Dharapak]

So I started gradually widening the scope of the film, which really started last year when I began interviewing critics of brain simulation in general. Scientists working on the connectome, who have a fundamentally different approach from Markram’s, were interesting to talk to because they are openly critical of simulating the brain in order to understand it. It has been eye-opening for me to find these critics and talk to them: to be convinced by them, and then go back to Dr. Markram and be re-convinced by him that he is really doing it right. My needle is all over the place in terms of what I believe, because I don’t really know enough of the science to know. So whoever I’m with at the moment kind of convinces me.

You’ve been following the evolution of this quest to understand the human brain for four years now. As a documentarian, do you have any ideas at this point of the ideal story you’d like to tell looking ten years down the line?

There are several ideal stories. One of them is that Markram succeeds in his quest and he creates a simulation of a full human brain that can be the first real example of artificial intelligence. That of course is what he set out to do with The Blue Brain Project and it would be this unbelievable moment. So that’s definitely a possible narrative, which would be wild and great for the film. 

On the other hand, it would be just as interesting—and I think just as compelling—if all this hard work happened over these fourteen years of the film, and we’re just not that much closer to creating this simulation. Or we’ve done a big part of it, but it’s not behaving like a human. So failure, as the goal was originally defined, would show where we really are in understanding the brain. And it would also show, philosophically, that we might be much further from that level of understanding than these big projects suggest in how they define their goals and their timelines.

While it ushered us into the genomics age, the decade-long Human Genome Project did little to show us which genes make us human or to deliver cures for common diseases. And a number of parallels have been drawn between the HGP and these brain-focused initiatives. So do you have any insights into what makes these projects different and what we might ultimately learn from them?

The biggest thing I’ve learned in the last two years is that there are fundamentally two different approaches to how we’re going to understand the question of the brain right now, which are nicely represented by the European project on the one hand, and by many of the people involved in the US BRAIN Initiative on the other. The European approach, led by Markram, is reverse engineering a simulation of the human brain that will be able to predict the results of experiments we haven’t even done yet. Markram thinks we can actually learn about most of the brain through this simulation, as opposed to collecting every piece of data in the lab. That is opposed to scientists in the connectome crowd, like Sebastian Seung and Jeff Lichtman, who really believe we don’t know enough yet to predict unknown results. Instead, we have to trace every connection in tissue and collect much more data to understand emergent principles before we go about building a model of the brain.

Some of Markram’s critics argue that before we can understand the brain, we must first create a map—the connectome—of every neural connection in the brain. [Image credit: V.J. Wedeen + L.L. Wald, Martinos Center for Biomedical Imaging at MGH]

So it’s hard to generalize what everyone thinks we’re going to get out of this. Henry Markram would say that once you have a functional simulation that’s accurate—that you’ve tested at every step and it behaves just like a real biological system—then you can simulate diseases on the model and model different kinds of treatments. It would be this amazing telescope—a predictive, diagnostic tool—that can look at any potential brain disease or malfunction and come up with a set of solutions. The US approach seems much more geared towards continuing basic science and the way we already go about solving problems—identifying genes and markers of disease and figuring out in animal models how to treat them.

So those are two different timelines and two different ways of approaching a problem. I’m just so interested that that is the case right now—that we don’t know which way is the best way to move forward in understanding the brain. We are just fundamentally divided between both camps in a sense, even though people from both camps will tell you that they are going to work together very nicely. And in fact, they probably will. That’s the most hopeful scenario—that the brain simulation will be getting all this rich data from the US and the US can use the simulations to tell them which parts of the brain we need to study more. So there is hope for massive scale collaboration that speeds up progress, but we will have to overcome some of the tensions in the field, which of course exist.

As you’ve been expanding the scope of the film, have you considered including the perspectives of nonscientists who have been following these projects?

I want to talk to people who have a firm footing in—or have at least checked out—science and materialism. I’m skeptical of talking to people who reject brain science completely. At the same time, there are philosophers who I think would add good context to the film; philosophers who draw upon neuroscience and are firmly grounded in brain science as a basis for their theorizing, like Paul and Patricia Churchland.

I do want to include other people and perspectives as we go on. The cool thing about filming each year is that—now that I feel I’ve covered the beginning of the project, the critics, and this big year of re-launching—every year I have the opportunity to shift the focus of the piece onto a different part of the story. Whatever is most interesting that year, I can check out. Because it is such a long-term project, if there is nothing major to report on in the development of the science, one year I might just talk to philosophers and have it be a philosophical year.

How has your relationship with Henry Markram changed over the course of the four years you’ve been filming?

When I interviewed him for the first time, I was so nervous. There’s this setting on the camera called “gain”—so if you’re in a really dark room, you pump up the gain and it makes things really grainy, but you can see the room. And I was so nervous that in setting up the camera, I had pumped the gain up to the maximum, and that was how I filmed the first interview. It was a disaster. Luckily the footage was salvageable, but my point is that I went from that to now being super comfortable—I consider him a friend now. It has been a very nice evolution of feeling more and more comfortable.

Navigating a relationship with a subject, especially where they’re watching a little film about themselves each year that you make, is very delicate. That’s such a personal thing to have your image put out into the world. The relationship on that level of the subject to the filmmaker is very dynamic. I’m sensitive to not making them feel uncomfortable—I want them to feel good about what’s out there. But at the same time I need to be independent and objective. It’s really interesting to balance that. There’s no one way to do it, and I have to feel it out every year.

To watch Bluebrain from the beginning + stay tuned for more from the project, be sure to visit the official website. And for Noah’s insights on the intersection between neuroscience + art, check out The Beautiful Brain.

The Orpheus Variations: Deconstructing Memory Through Myth

A reimagining of a beloved myth as an exploration of memory + the past, The Orpheus Variations is the perfect marriage of form + content, creating a wholly unique + engaging theater-going experience. The piece—collaboratively created by the Deconstructive Theatre Project—tells the story of a man confronting his present reality in the wake of his wife’s disappearance, innovatively interspersing a series of vignettes spanning Orpheus and Eurydice’s shared past. Perfectly encapsulating the semi-deluded state many of us enter when overcome with grief, Orpheus’s idyllic memories become inextricably intertwined with his present reality as the audience is transported into a quasi-dream state.

Robert Kitchens [Orpheus] in The Orpheus Variations. Photo by Mitch Dean.

The Orpheus Variations succeeds beautifully in conveying the tragedy of this tale by artfully mimicking how the brain synthesizes the infinite raw sensory stimuli of everyday experience to construct our own personal versions of reality. In keeping with the company’s name, the cast members deconstruct moments from Orpheus and Eurydice’s time together into their constituent sensory elements. In a rather remarkable technological achievement, sound, lighting, and live action are seamlessly stitched together in real time to create a stunning filmic narrative projected above the ensemble members as they carefully construct, enact, and film each scene below. The chaotic on-stage action is counterbalanced by an overarching voice narration–a sort of lyrical inner monologue–underscored by evocative live music, which, together with the live-fed film, mirrors the narrative the brain imposes on experience. Thus, audience members are at once invited to use their own brains to patch together the fragments that make up the characters’ past, and to watch Orpheus + Eurydice’s own cohesive interpretations of those very same moments played out on film, illuminating what these characters have subconsciously chosen, from all of the same sensory information, to compose their own versions of reality.

By creating a piece that is reflective of the brain’s inner workings, the DTP has provided the viewer with the unique opportunity to create their own journey within the overarching theatrical journey unfurling on stage. The piece reveals how truly subjective our own experiences are—how reality is merely an imperfectly generated construct, often vulnerable to our own personal expectations + narratives. As I observed [and was subjected to] how experience + memory are constructed, I became viscerally aware of my own brain’s inherent deceit + trickery. With this frame of mind, the real tragedy of the story [at least for me] became all the more deeply felt—that while desperately in love with each other, these two characters inhabited two distinct yet intimately interwoven realities, which ultimately drove them apart.

Robert Kitchens [Orpheus] & Amanda Dieli [Eurydice] in The Orpheus Variations. Photo by Mitch Dean.

The form of the piece has the added effect of bending space-time, as the audience witnesses past, present, and future unfolding on stage all at once until they are virtually indistinguishable. This time-bending effect is jarringly reminiscent of the inconsequence of time during periods of mourning—how in times of desperation and intense grief we can slip into a world of revised // idealized memory, clinging to the hope that it can somehow become our own present + perfect version of reality. The Orpheus Variations thus reveals the ultimate corruption that comes with such motivated recollection.

All of the above said, I must now confess that what I find most exciting about The Orpheus Variations—both in watching the performance and from my earlier conversation with writer + director Adam J. Thompson—is that it is quite literally impossible for any two audience members to experience the piece in the same way. By design, each performance contains an infinite number of possible journeys–from film to construction of film to individual actors and back again–creating an intentionally subjective, wholly viewer-dependent theater-going experience. Once the curtains went down, I sat down with Adam and the DTP team to moderate a talkback in which several of the audience members shared their own personal journeys through the piece, each differing from the next by varying degrees. Given the same inputs—the same film, actors, sound, lighting, narration—every one of us in the theater followed a completely different trajectory to take away a deeply personal experience, in a rather elegant reinforcement of the piece’s formal theme.

By simultaneously deconstructing both the mind’s interpretation of reality and the theater-making process, The Orpheus Variations has masterfully reinvented what live theater can mean for the individual, while illuminating the utter complexity + subjectivity of our own consciousness.

Performed at HERE Arts Center, The Orpheus Variations was conceived and directed by Adam J. Thompson and developed collaboratively by the members of the Deconstructive Theatre Project. For more information about the company please visit their website + Like them on Facebook.

Out on a Limb

B.B. King + “Lucille.” Eric Clapton + “Blackie.” The bond between musician and instrument is sacred.1 If “music is the mediator between the spiritual and the sensual life” [Beethoven], then musical instruments are the mediator of that mediator: the vehicle through which the spiritual can be heard. But in order to fluently and fluidly communicate this inner state, all technical details—the fingering of a note, the plucking of a string—must become as second nature as the most mundane everyday task, done without a moment’s thought, so that all that is left is a deep and intense focus on musicality.

astor + pollux. illustration by shawn feeney.

Musicians often describe their instruments as an extension of themselves. To be great, you must become one with your instrument. Incredibly, this oneness can actually be achieved: our brains can come to treat a musical instrument as a physical extension of the body!

To pilot our bodies through the motions required for everyday tasks, the brain builds an organized model of the body—the body schema. To devise this internal schematic, the brain dynamically integrates tactile and visual information from our sensory receptors with the body’s innate sense of its position in space [a la proprioception]. These sensory syntheses allow us to judge distances, manipulate objects, and approach those around us. To move through and interact with the world.

Amazingly, our brains can even incorporate inanimate objects—particularly tools—into this mental plan.2 Tools essentially become internalized as temporary extensions of the body, though it is important to note that this incorporation is only ever partial and temporary; the loss of a tool is never felt in the same way as the loss of a limb.


jawharp. illustration by shawn feeney

The musical instrument is just like any other tool, except that instead of being used to carry out some traditionally practical task, it is used as a device for expression. And so, like any other tool, it too becomes integrated into the body schema upon use—a natural [and literal] extension of the expressive self.3

Of course, the sort of re-organization required to extend the body schema to the musician’s instrument takes time. [remember, practice makes perfect.] The average person requires more training with an instrument than she would with, say, a fork or a hammer. Still, there do exist those rare prodigies—the Mozarts and Yo-Yo Mas—for whom musicality comes naturally. Though I am by no means an expert in neurophysiology or music performance theory, here are two possible explanations I’ll put out there as a fun thought experiment:

one: Perhaps for these few, their body modeling is more plastic. Instead of treating the instrument as a bodily extension only after intensive training, their brains may be more readily and rapidly accepting of the instrument into the body schema.

The violin came naturally, and it really fit me. When I picked it up [and placed it under my chin], it looked like it had grown there. It just fit. ∇Δ Miriam Burns

two: There is still the possibility that the prodigy’s brain sees her instrument as a sort of extra limb. The vast majority of amputees feel bodily sensations in their phantom limbs. These sensations actually allow amputees to successfully incorporate prostheses into their body schema because their minds have not fully registered the loss. Perhaps, then, a non-bodily object like an instrument could become instantly incorporated into the body schema in a similar way. Rather than acting as a mere bodily extension, the instrument may be filling a void in the prodigy’s body schema, behaving just as a prosthesis would for an amputee.

This last hypothesis is admittedly more far-fetched. However, considering how little we actually understand the brain, particularly the unconscious brain, not much is out of the realm of possibility: “The brain is the last and grandest biological frontier, the most complex thing we have yet discovered in our universe… the brain boggles the mind” [James Watson].

Doped up on Music

The most basic human instincts to eat and procreate are actually behaviors reinforced by a neurochemical reward system hard-wired into our brains. Because we never do something for nothing, our brains have evolved to release a chemical called dopamine—a neurotransmitter—as a reward. Unlike sustenance and reproduction, music has no obvious evolutionary advantage, yet it activates dopamine release in much the same way.

Dopamine is what tells our body that it has been rewarded. Dopamine release in the pleasure center of our brain—the nucleus accumbens—is responsible for those feelings of pleasure and satisfaction that keep us asking for more. When we listen to a song for the very first time, dopamine may be released at certain aurally attractive moments, telling us that we like what we’re hearing. For some reason, our brains are rewarding us for listening to music.

Dopamine release also occurs in the caudate nucleus, which plays an important role in learning stimulus-reward associations, allowing us to predict when a reward is coming. The more we listen to a particular song, the better we can anticipate when the emotional payoffs will occur. We begin tuning into the sequences of tones and rhythms that lead us to that dopaminergic climax. It is this anticipation—this neurochemical reinforcement—that builds our desire to listen to that song. That makes us feel like we *need* to hear it.
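This learned anticipation can be sketched in code. Below is a toy temporal-difference model [a textbook reinforcement-learning formalism, not anything taken from the dopamine studies themselves]: the reward arrives only at the song’s payoff, yet with repeated listens, value creeps backward to the moments that predict it.

```python
# Toy temporal-difference model of reward anticipation.
# States are successive moments in a song; a "payoff" [reward = 1]
# arrives only at the final moment. With repeated listens, learned
# value creeps backward to the notes that predict the payoff.

def listen(values, reward, alpha=0.5, gamma=0.9):
    """One pass through the song, updating state values via TD(0)."""
    for t in range(len(values) - 1):
        td_error = gamma * values[t + 1] - values[t]  # no reward mid-song
        values[t] += alpha * td_error
    # the final moment receives the actual reward
    values[-1] += alpha * (reward - values[-1])
    return values

values = [0.0] * 5          # five moments leading up to the hook
for _ in range(50):         # fifty listens
    listen(values, reward=1.0)

# values now rise monotonically toward the hook: anticipation.
```

After enough listens, even the opening moments carry substantial value, which is the formal analogue of “needing” to hear the song through to its climax.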

But what is it about music that taps into this adaptive reward-based-learning neural network? The archaeological record suggests that music has been with humans since prehistory, which is not particularly surprising given that animals like birds and whales use song to communicate.* Music that really resonates with us has this natural ability to evoke and even enhance our emotions. Perhaps music has the power to somehow manipulate our neural circuitry, using discordance // resolution, prediction // surprise, anticipation // delay to make our brain think it’s actually experiencing some physical—as opposed to aesthetic + abstract—phenomenon.

The most affective music may in some way be reflecting patterns found in nature, appealing to some instinctual response in us. Or it may mimic the rhythm of some past resonant interaction that has been pre-programmed into our brains, reactivating the experience by going through the dopaminergic motions. Or it may even break with established patterns, toying with our brains by building up to nothing or delaying the delivery of that much-expected, long-awaited hook.

Interestingly, scientists have found that pop songs have become far less dynamic over the last 50 years. Perhaps the likes of Justin Bieber and Carly Rae Jepsen [or the engineers behind their contagious beats] have finally found *the* secret to co-opting our neural circuitry with their viral hits, ushering pop music-making into an almost scientific era. As mass-music-production has become the industry standard, these engineers may have discovered some near-universal [haters gon’ hate] motif that has allowed them to distill pop music-making into a very precise formula designed to appeal to the masses—a sort of unified theory of pop-music synthesis.

Thousands of years of musical evolution culminating in ‘Call Me Maybe’. For better or worse? Let your doped up [or not] brain be the judge.

*Male humpback whales use song to herd fish for feeding and birds use song in courtship rituals. Song for food + sex. Whether our brain’s relationship to music can be traced back to our distant animal cousins is unclear to me, however.

The Good Plans of Wise Wizards

As a major Tolkien fan [read: nerd], I wanted to write a post honoring the release of The Hobbit: An Unexpected Journey. Living with two filmmakers, I’ve been privy to a lot of criticism surrounding The Hobbit’s *48 frames per second* frame rate.

Frame rate refers to how many frames, or consecutive images, are shown in a given second, while shutter speed relates to the length of time each frame is exposed for. The higher the shutter speed, the shorter the exposure time—the length of time the shutter is open. Since the 1920s, movies have been shot at 24 fps and at a shutter speed of 48 [1/48-second exposure time]. This convention is responsible for the motion blur and choppy cadence that have become a part of the cinematic language we have grown up with. Nonetheless, with the advent of new digital film-making technologies, directors like James Cameron and Peter Jackson have begun pushing for the use of high frame rates (HFR) to enhance the movie-watching experience. Double your frame rate, double your fun.

Our brains actually only need a frame rate as low as 14 fps to piece the images together enough to perceive continuous motion. Of course, the more frames we have, the smoother the motion, which goes back to the impetus behind using higher frame rates in film. The Hobbit, however, which was shot at 48 fps with a shutter speed of 60 [1/60-second exposure time], has been criticized for this smoothness, which is curious considering Hollywood is a place where more is typically… more. Critics have said it just looks too real—that this hyper-reality takes the viewer outside of the movie-watching experience.

The retinal cells of the human eye respond to light [the rods] and color [the cones], releasing chemical messages to the optic nerve. These messages are translated into nerve impulses that then travel up to the brain, where they are pieced together and interpreted as images. Because the world is theoretically in constant motion, our eyes are well-accustomed to handling an infinite frame rate, which is not to say that they record everything they encounter. Because our retinal cells cannot send out signals at an infinite rate, the retina only records a subset of those infinite frames. Though I could not find a consensus, our eyes see somewhere between 100 and 500 fps.* Even then, only a subset of that subset of frames actually gets interpreted by the brain. You can think of the whole process as a sort of information overload for the brain, which only needs a small chunk of that information to actually make sense of it.

Because our visual processing systems can only handle a sub-infinite frame rate, the world our eyes perceive has a natural degree of blurriness. With this in mind, not having seen The Hobbit yet, I would already agree that the frame rate is likely problematic, but not because it makes the movie look ‘too real’. 48 fps is still well below the 100-500 fps frame rate at which we see the real world. However, the crucial difference is that, for technical reasons, they shot the film at a higher shutter speed, reducing the exposure time for each frame. Because the camera is capturing an image for a shorter length of time, it records less blur, making for a much crisper picture [see here for a great example of these effects]. Consequently, The Hobbit-watcher’s eye is experiencing fewer frames per second than in real life, but each one is rendered at much higher clarity, giving our brains more time to take in all the details that we ordinarily cannot see.
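The exposure arithmetic here is easy to check. This quick sketch [standard film math, using only the figures quoted above] compares the two conventions via shutter angle [the fraction of each frame the shutter stays open, expressed in degrees]:

```python
# Compare per-frame exposure under the classic 24 fps convention
# and The Hobbit's 48 fps setup, using the figures quoted above.

def exposure_stats(fps, shutter_speed):
    """shutter_speed is the denominator: 48 means a 1/48-second exposure."""
    frame_duration = 1.0 / fps
    exposure_time = 1.0 / shutter_speed
    # Shutter angle: fraction of each frame the shutter is open,
    # in degrees [360 degrees = open for the entire frame].
    shutter_angle = 360.0 * exposure_time / frame_duration
    return exposure_time, shutter_angle

classic = exposure_stats(fps=24, shutter_speed=48)  # 1/48 s exposure, 180 degrees
hobbit = exposure_stats(fps=48, shutter_speed=60)   # 1/60 s exposure, 288 degrees
```

Each Hobbit frame is exposed for 1/60 s instead of 1/48 s, roughly 20% less motion blur per frame, while twice as many frames reach the eye each second.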

This clarity and astonishing detail make for a viewing experience that is nothing at all like our reality. The Hobbit thus uses a visual language completely different from the one we are accustomed to, both in reality and in film, making for a different kind of unreality. Of course, HFR technology is still in its infancy. As more pro-HFR filmmakers continue to experiment, we can hopefully arrive at a more palatable and less dizzying aesthetic that is better adapted to our visual processing systems.

Meanwhile, hope remains while fans are true…


* When it comes to neuroscience // neurophysiology, we have a literal blind spot because we are using our brains to understand something about our brains. From my reading [and this is by no means my field of specialty] there seems to actually be very little consensus about what is going on with our visual perception, at least with how it relates to the content covered in this post.

Smell Check

My name is Maryam and I’m a congenital anosmic. I was born this way. A rare mutant with a lifelong inability to smell.

Anosmia literally means ‘without smell’. While I most certainly do have a nose [my grandmother would even say it’s impressively large], it is incapable of telling my brain that it’s sensing anything. When the typical person smells, what their nose is detecting is actually a series of tiny odor molecules in the air. Different odor molecules have characteristic shapes, which are recognized by odorant receptors on our olfactory neurons. On recognition, these receptors bind the odor, which initiates a series of changes in the neuron. This neuron then *fires*, setting off a chain of events—a signaling cascade—that relays the presence of a particular smell up to the brain.

The average person can bind and distinguish up to 10,000 different odor molecules. [Which is a whole heck of a lot considering humans have a relatively poor sense of smell!] I, on the other hand, have a genetic mutation—a typo in my olfactory neurons’ assembly instructions—that leaves me unable to detect a single scent. While it’s likely that my nose’s odorant receptors can still recognize and bind odors*, this smelly message gets lost because some link in the signaling chain to my brain is defunct.
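That defunct-link idea can be captured in a toy model [purely illustrative, not actual olfactory biochemistry]: the brain only hears about a smell if every relay in the cascade works.

```python
# Toy model of an olfactory signaling cascade: the message reaches
# the brain only if every relay in the chain works. One defunct
# link [as in congenital anosmia] silences the whole pathway.

def relay(signal, chain):
    """Pass a signal through a chain of relays, each True [working]
    or False [defunct]. Returns whether the brain receives it."""
    for link_works in chain:
        if not link_works:
            return False   # message lost mid-cascade
    return signal

typical = [True, True, True, True]    # receptor -> ... -> brain
anosmic = [True, False, True, True]   # one broken link downstream

relay(True, typical)   # True: smell perceived
relay(True, anosmic)   # False: the nose binds the odor, the brain never hears
```

Note that the anosmic chain’s first link still works, which is why the odorant receptors upstream may well be binding odors just fine.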

Unfortunate as that may sound, as a New Yorker I must say that I’d consider my deficiency more of a blessing than a curse. I shrug obliviously as my friends complain that a Bushwick street corner smells like pee. I don’t faint when Sparky the Dog passes gas in a closely quartered Lower East Side apartment. I even get the last seat on the train that no one wants just because it’s next to some [allegedly] super smelly person.

Of course, there are downsides too. For instance, one night, some friends and I were riding the subway home from dinner. The train that came was beyond crowded except for one car. I marched into that car, happy as a clam to find a seat—nay, a whole bench!—for myself. That is, until my friends followed me in and started choking on the stench. At first I panicked, thinking the smell was me [my friends are constantly assuring me that their p-u’s are never for me]. But then I saw the lone man in the car throwing up in his jacket. If not for my friends’ good scents, I would have unknowingly lounged in the smell of vomit for the next 20 minutes.


The biggest downside, however, is that my nose is deaf to the inaudible // invisible // intangible language of odors. Animals silently communicate with one another through the smells they give off, and humans are no exception. Our brains have the capacity to translate olfactory stimuli into a behavioral response. We transmit emotion through scent—the stink of fear is contagious. We recognize our kin through their signature smell—infants sleep better with just the scent of their mothers nearby. And, most famously, we choose our mates by their fragrance. In a blind study, ovulating women preferred the scent of more symmetrical men based only on the way their slept-in T-shirts smelled. On the flip side, researchers have found that men tip strippers better while the strippers are ovulating!

Anosmics like myself are thought to be indifferent to these behavior-inducing odors. We’ve been accused of being more socially awkward and less confident than the average smelling human because we cannot pick up on these intangible olfactory cues. Reduced scent perception has even been implicated as a marker for psychopathy! As a non-smelling, well-functioning [albeit super nerdy] individual, I wonder if that’s truly the case. If a blind // deaf person makes up for the loss of one sense by heightening the others, who is to say that my other senses aren’t compensating in a similar way? That perhaps anosmics make up for what we can’t smell by being hypersensitive to a person’s tonal inflections or slight changes in facial + body language?

I doubt we’re all that inscentsitive… we just have a different way of smelling the world. Perhaps there is even a way to reveal the hidden world of scent to the unsmelling. A deaf person can ‘hear’ music by feeling its rhythms and melodies. Scientists have found ways to enable a blind person to ‘see’ by translating images into sound waves. Perhaps, then, there is still hope for the inscentient. A way to manipulate the anosmic’s brain—mimicking a smell to evoke a response—to give us a whiff of how smell looks // feels // tastes // sounds.


The Scent of Light. Shanghai’s super nature design attempts to evoke scent using ethereal light design.

* Humans have 900+ genes coding for smell receptors, so it’s highly unlikely that every one of these genes is defective in me.

Reading Brainbow

Scientists are storytellers. Biographers of the universe’s constituent components. All of our hypotheses // experiments // theories are aimed at painting a cohesive picture of some phenomenon. At going back and further complicating this picture, seeking to reveal all of its inherent nuances and caveats.

[As their name implies] neuroscientists are constantly pursuing the story of the nervous system, seeking to tell the nerve cell’s tale. Our body communicates with itself // with others // with the environment around us via an interconnected and [overwhelmingly] complex network of neurons. All the information transfer required for sentient life occurs along these neuronal tracks.

Tamily Weissman

In an effort to understand how our neurons are connected to one another—to map the informational highways that run through our bodies—Harvard’s Dr. Jeff Lichtman and Dr. Joshua Sanes developed The Brainbow. Believe it or not, the vividly colored image above is no Jackson Pollock! Instead, it is actually a photograph of a mouse’s hippocampus—the part of the brain responsible for spatial navigation and memory—generated by the brainbow technique. What’s more, if we were to take a snapshot of our own human brains using this same method, it would turn out to look very much like the mouse’s above.

the dentate gyrus - the memory making part of our brains. lichtman + sanes 2007.

Just as a monitor uses red // green // blue to produce the myriad colors we see flashing across our television screens, Dr. Lichtman + Dr. Sanes’s brainbow uses orange // green // red // cyan *fluorescent proteins* to produce brilliantly colored images of the brain’s many connections. Its connectome. Each fluorescent protein is coded for by a different gene, with different combinations of these genes expressed in a particular neuron to label the neuronal cell any one of roughly one hundred [!] different colors.
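Those roughly one hundred colors are a matter of combinatorics. If each neuron carries several copies of the brainbow construct and each copy randomly locks in one fluorescent protein, the neuron’s hue is the *ratio* of proteins across its copies, and the number of distinct ratios is a stars-and-bars count [the copy numbers below are illustrative, not figures from the original study]:

```python
from math import comb

# Each copy of the construct randomly settles on one fluorescent protein;
# a neuron's hue is the ratio of proteins across its copies. The number
# of distinct ratios with c copies and k proteins is the number of
# multisets: C(c + k - 1, k - 1)  [stars and bars].

def distinct_hues(copies, proteins):
    return comb(copies + proteins - 1, proteins - 1)

distinct_hues(8, 3)   # 45 distinct three-protein ratios
distinct_hues(1, 3)   # 3: a single copy gives only the pure colors
```

A modest handful of genome-inserted copies is thus enough to push the palette toward the ~100 distinguishable colors quoted above.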

These distinct hues can be detected and traced by a computer so that we may follow a given neuron down its individual color-paved path. By chasing the brainbow, we have the potential to follow this cell as it develops over time. We can track what other cells this neuron talks to. We can observe how different stimuli modulate this cell’s behavior. We can better understand how this cell passes along or receives a given biological message.

As we zoom out, we can begin to trace the neural circuitry of the brain as a whole. With brainbow technology, neuroscientists are now working to construct three-dimensional models [shown in video below] of all the connections in the brain by stacking together fluorescent images of thin sections of the brain. Compiling these glowing neural snapshots has begun to untangle and illuminate the mysteries behind how we are wired.
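The stacking step itself is conceptually simple: each imaged section is a 2D grid of colored pixels, and aligning the sections along a third axis yields a volume through which individual colored neurons can be traced. A minimal sketch with NumPy [array sizes are illustrative]:

```python
import numpy as np

# Each thin brain section is imaged as a 2D color frame [height x width x RGB].
# Stacking the aligned sections along a new depth axis yields a 3D volume.
sections = [np.random.rand(64, 64, 3) for _ in range(20)]  # 20 imaged slices
volume = np.stack(sections, axis=0)

volume.shape   # (20, 64, 64, 3): depth, height, width, color channel
```

Tracing a neuron then amounts to following voxels of the same hue from one depth slice to the next.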

Welcome to ArtLab

Oversimplification is the kryptonite of any scientific idea, oftentimes turning pop science into an elaborate game of telephone, carelessly paring away all the nuances and caveats that make the idea so impactful in the first place. The lateralization of the brain, first studied by Michael Gazzaniga and Roger Wolcott Sperry in the 1960s, has been perhaps the biggest victim of bastardization by oversimplification.

The left brain//right brain divide has been pigeon-holing folks for decades now, neatly sorting us into the science-oriented versus the artistically-inclined. The rational male versus the emotional female. The *Spocks* versus the *Kirks*. The practical, ordered, and scientific world is the territory of the left brain, while the imaginative, aesthetic, artistic world is the right brain’s domain…

… The problem with such a black-and-white picture of the brain is that it doesn’t account for all the grey in your grey matter. Sure, neuroscientists agree that the right hemisphere sees the bigger, interconnected picture, and that the left hemisphere picks out details and organizes information to create a sort of rule-bound world. However, regardless of whether math or science or business or literature or philosophy is your jam, you likely rely heavily on both your left and right brain.

As a molecular biologist, I deal almost exclusively in the microscopic, “hidden” world. The world that belongs to the right side of my brain. Of course I spend most of my days making observations, honing in on details and organizing them in my lab notebook searching for patterns in the data. But, what I depend on while devising my experiments and what I rely on while telling the story of these microscopic molecules is all the right-brain power I can muster.

Scientists are in constant search of patterns inherent not just in the data in front of us, but patterns that can be applied broadly to the natural world. We consider the information gathered from the observable world, and extrapolate it to a model through right-brained induction. More importantly, we must be able to weigh the evidence and see what fits into our existing models and what doesn’t, which is a task our think-inside-the-box, rule-bound left brains cannot do. If not for our right brains, we might to this day still believe that the sun revolves around the earth! We might never have transitioned from Newtonian physics to the theory of relativity!

Likewise, artists cannot operate solely with their right hemispheres. Sure, our right brains give us a whole sensual picture of the world. And maybe artists are slightly better in touch with their right brains compared to their scientific/mathematical counterparts. But the fact remains that artists depend on their left brains for the detail, the focusing, the ability to convey meaning through language, be it written or musical or moving.

The left brain is what allows the photographer to hone in on a particular moment in time that is relevant or impactful or just downright gorgeous. The left brain is what releases all the insight and emotion and imagery floating around in the writer’s right brain onto the page through language. The left brain is what gives the painter the ability to capture the details of her subject to get the shading just so.

With all this said, something I have been struggling to grasp for quite some time now is why so many scientists and so many artists feel that we belong to two separate worlds. It’s obviously not as simple as “well, scientists and artists exist in two fundamentally different brain spaces”, because they don’t. Some of the most creative people I’ve met are scientists, and some of the most methodical people I’ve met would count themselves artists. We even deal in the same mediums. Open any scientific journal and you’ll find some of the most stunning images you’ve ever seen. Scientists deal in movies, images, color, sound… We all speak the same language, so why aren’t we talking? I have started this blog as a dare to myself to step outside the Ivory Tower and actually venture to talk about what it is we do up here using the language of art. The language of the so-called right brain.

Welcome to ArtLab.

* Photo taken from Iain McGilchrist’s TED talk “The Divided Brain”