Bluebrain: Documenting the Frontiers of Brain Research

In his 2009 TED Talk, Dr. Henry Markram publicly announced the Blue Brain Project, a ten-year initiative to reverse-engineer a fully functional, supercomputer-powered simulation of the human brain. Inspired by such an ambitious project, filmmaker Noah Hutton set about chronicling the progress of Markram’s [blue]brainchild in an independently produced documentary spanning the ten-year timeline—a true-life brain-themed “will-they-won’t-they” story.

On January 28, 2013, the European Commission awarded Henry Markram and his team $1.3 billion for the Human Brain Project.

Since 2009, Bluebrain: the Film has evolved into a 14-year-long documentary-in-the-making. By 2013, the Blue Brain team joined forces with scientists across Europe, receiving $1.3 billion in funding awarded by the European Commission for the Human Brain Project: a unified effort to create the brain simulation under a renewed ten-year timeline. 2013 also marked the official US entry into the quest to understand the human brain, as President Obama announced the BRAIN Initiative, specifically geared towards developing new neuroscience technologies over the next decade.

To accommodate the growing breadth of brain research, Hutton has widened the film’s scope to include scientists from around the world as they attempt to tackle one of the greatest challenges facing science. Each year, he releases an installment of Bluebrain—an annual “state-of-the-union” for the latest from the field—providing a unique, real-time glimpse into the pace of progress + the process of scientific discovery. Below, you can watch the fourth installment of Bluebrain and enjoy ArtLab’s Q&A for more about the film + his insights into these groundbreaking brainy projects.

Could you talk about how you got started making the film and how the project has evolved since you began in 2009?

I graduated from college as a neuroscience major. At that point, I had already made my first feature-length documentary and knew that I wanted to make films. I was aware of the Blue Brain Project because I had seen Henry Markram’s TED Talk in 2009, when he laid out a ten-year timeline to create a simulation of the brain. It was exciting, but also controversial. And it just hit me that I had become so interested in neuroscience anyway, and I really liked the Seven Up! series where the filmmaker follows the lives of 14 people every seven years. So I thought it would be very interesting to do a longitudinal film about such a lofty goal of ten years to understand the human brain.

I took my first trip to Lausanne to interview Dr. Markram and show him my first film. After that first visit he emailed me and gave me exclusive access to make the film. So that was how it started. Originally I thought I was in it until 2020, but it became 14 years after they re-booted the project this year as the Human Brain Project. So now it’s until 2024/2025.

Initially I was going to focus on The Blue Brain Project and building these brain simulations. I thought it was philosophically interesting if a film would develop as the brain simulation would develop—two parallel entities growing and getting more complex. That is still part of my interest in making the film, but over the years, I’ve realized this is a much bigger topic and that there are a lot more people and countries—especially this year—throwing in their hats to create big consortiums and projects to really tackle this in our lifetime.

On April 2, 2013, the US officially entered the so-called race to understand the human brain when Obama announced the BRAIN Initiative. [Photo credit: AP Photo/Charles Dharapak]

So I started gradually widening the scope of the film, which really started last year when I began interviewing critics of brain simulation in general. Scientists working on the connectome, who have a fundamentally different approach from Markram’s, were interesting to talk to because they are openly critical of simulating the brain to understand it. It has been eye-opening for me to find these critics and talk to them. To be convinced by them and then go back to Dr. Markram and be re-convinced by him that he is really doing it right. My needle is all over the place in terms of what I believe because I don’t really know enough of the science to know. So whoever I’m with at the moment kind of convinces me.

You’ve been following the evolution of this quest to understand the human brain for four years now. As a documentarian, do you have any ideas at this point of the ideal story you’d like to tell looking ten years down the line?

There are several ideal stories. One of them is that Markram succeeds in his quest and he creates a simulation of a full human brain that can be the first real example of artificial intelligence. That of course is what he set out to do with The Blue Brain Project and it would be this unbelievable moment. So that’s definitely a possible narrative, which would be wild and great for the film. 

On the other hand, it would be just as interesting—and I think just as compelling—if all this hard work happened over these fourteen years of the film, and we’re just not that much closer to creating this simulation. Or we’ve done a big part of it, but it’s not behaving like a human. So failure in how the goal was originally defined would show where we really are in understanding the brain. And it would also show, philosophically, that these big projects might be much further from that level of understanding than their goals and timelines suggest.

While it ushered us into the genomics age, the decade-long Human Genome Project did little to show us which genes make us human or to deliver cures for common diseases. And a number of parallels have been drawn between the HGP and these brain-focused initiatives. So do you have any insights into what makes these projects different and what we might ultimately learn from them?

The biggest thing I’ve learned in the last two years is that there are fundamentally two different approaches to how we’re going to understand the question of the brain right now, which are nicely represented by the European project on the one hand, and by many of the people involved in the US BRAIN Initiative on the other. The European approach, led by Markram, is reverse engineering a simulation of the human brain that will be able to predict the results of experiments we haven’t even done yet. Markram thinks we can actually learn about most of the brain through this simulation, as opposed to collecting every piece of data in the lab. That is opposed to scientists in the connectome crowd, like Sebastian Seung and Jeff Lichtman, who really believe we don’t know enough yet to predict unknown results. Instead, we have to trace every connection in tissue and collect much more data to understand emergent principles before we go about building a model of the brain.

Some of Markram’s critics argue that before we can understand the brain, we must first create a map—the connectome—of every neural connection in the brain. [Image credit: V.J. Wedeen + L.L. Wald, Martinos Center for Biomedical Imaging at MGH]

So it’s hard to generalize what everyone thinks we’re going to get out of this. Henry Markram would say once you have a functional simulation that’s accurate—that you’ve tested at every step and it behaves just like a real biological system—then you can simulate diseases on the model and model different kinds of treatments. It would be this amazing telescope—a predictive, diagnostic tool—that can look at any potential brain disease or malfunction and come up with a set of solutions. The US approach seems much more geared towards continuing basic science and the way that we already go about solving problems—identifying genes and markers of disease and figuring out on animal models how to treat them.

So those are two different timelines and two different ways of approaching a problem. I’m just so interested that that is the case right now—that we don’t know which way is the best way to move forward in understanding the brain. We are just fundamentally divided between both camps in a sense, even though people from both camps will tell you that they are going to work together very nicely. And in fact, they probably will. That’s the most hopeful scenario—that the brain simulation will be getting all this rich data from the US and the US can use the simulations to tell them which parts of the brain we need to study more. So there is hope for massive scale collaboration that speeds up progress, but we will have to overcome some of the tensions in the field, which of course exist.

As you’ve been expanding the scope of the film, have you considered including the perspectives of nonscientists who have been following these projects?

I want to talk to people who have a firm footing in—or have at least checked out—science and materialism. I’m skeptical of talking to people who reject brain science completely. At the same time, there are philosophers who I think would add good context to the film; philosophers who draw upon neuroscience and are firmly grounded in brain science as a basis for their theorizing, like Paul and Patricia Churchland.

I do want to include other people and perspectives as we go on. The cool thing about filming each year is that—now that I feel I’ve covered the beginning of the project, the critics, and this big year of re-launching—every year I have the opportunity to shift the focus of the piece onto a different part of the story. Whatever is most interesting that year, I can check out. Because it is such a long-term project, if there is nothing major to report on in the development of the science, one year I might just talk to philosophers and have it be a philosophical year.

How has your relationship with Henry Markram changed over the course of the four years you’ve been filming?

When I interviewed him for the first time, I was so nervous. There’s this setting on the camera called “gain”—so if you’re in a really dark room, you pump up the gain and it makes things really grainy, but you can see the room. And I was so nervous that in setting up the camera, I had pumped the gain up to the maximum, and that was how I filmed the first interview. It was a disaster. Luckily the footage was salvageable, but my point is that I went from that to now being super comfortable—I consider him a friend now. It has been a very nice evolution of feeling more and more comfortable.

Navigating a relationship with a subject, especially where they’re watching a little film about themselves each year that you make, is very delicate. That’s such a personal thing to have your image put out into the world. The relationship on that level of the subject to the filmmaker is very dynamic. I’m sensitive to not making them feel uncomfortable—I want them to feel good about what’s out there. But at the same time I need to be independent and objective. It’s really interesting to balance that. There’s no one way to do it, and I have to feel it out every year.

To watch Bluebrain from the beginning + stay tuned for more from the project, be sure to visit the official website. And for Noah’s insights on the intersection between neuroscience + art, check out The Beautiful Brain.

Brain Stethoscope: Transforming Seizure into Song

Born of a “very Stanford-ish” collaboration between Chris Chafe, composer + professor of music research, and Josef Parvizi, epilepsy expert + professor of neurology, Brain Stethoscope epitomizes the potential for innovation that lies at the intersection between art + science. Part artistic experiment, part clinical // diagnostic tool, the Stanford duo’s creation offers an aural platform for data interpretation, translating traditionally visualized brain-activity patterns into music for the ears.

The brains behind Brain Stethoscope: Professor Chris Chafe + Dr. Josef Parvizi [Photo: Lea Suzuki, The Chronicle]

Inspired by a performance of the Kronos Quartet’s Sun Rings—which weaves NASA recordings from space into a series of sonic spacescapes for string—Parvizi approached Chafe with an idea to sonify seizure. Chafe, an expert in creating music from natural phenomena—a field known as sonification, or, as he prefers, musification—has developed music-synthesizing platforms to explore everything from Internet traffic properties to CO2 emissions from tomatoes ripening.

Because neurons communicate by firing out electrical messages, brain activity can be measured + recorded by placing a number of electrodes on a patient’s scalp. The resulting electroencephalogram [EEG] can be used to decipher + distinguish inner states of the brain, including the seizure episodes—or ictal states—that would be transformed into sound. Accordingly, in a pilot // proof-of-concept experiment, Parvizi + Chafe manually mined through gigabytes of EEG data—captured using over 100 scalp electrodes over the course of one week—to select the salient neurological moments corresponding to a seizure episode.
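The post doesn’t say exactly how Parvizi + Chafe screened those gigabytes of recordings, but the underlying idea—flagging stretches where the signal’s energy jumps well above baseline—is simple to sketch. Here is a hypothetical Python version (the function name, window size, and threshold are illustrative assumptions, not their actual method):

```python
import numpy as np

def flag_high_activity(eeg, sr, win_s=1.0, thresh=3.0):
    """Flag windows whose RMS amplitude exceeds `thresh` times the
    recording's median window RMS -- a crude stand-in for eyeballing
    gigabytes of EEG for candidate ictal segments."""
    w = int(sr * win_s)                    # samples per window
    n = len(eeg) // w                      # number of whole windows
    rms = np.sqrt(np.mean(np.reshape(eeg[: n * w], (n, w)) ** 2, axis=1))
    cutoff = thresh * np.median(rms)
    # Return (start_s, end_s) intervals of the flagged windows.
    return [(i * win_s, (i + 1) * win_s) for i in np.flatnonzero(rms > cutoff)]
```

Real seizure detection is far subtler (frequency-band features, artifact rejection, per-channel analysis), but even a crude energy screen like this narrows a week of data down to a handful of moments worth a clinician’s attention.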

To use brain data to drive composition of the seizure song, Chafe integrated the data coming from each electrode to control pitch, tone quality + loudness using a technique known as frequency modulation [FM] synthesis, discovered by John Chowning in the late 1960s. The tones themselves are distinctly human: “The first inclination I had was this is a human brain, a human subject. So I went after a synthesis of human voice as the carrier of the information.” Chafe laughs, “You could call it voices from inside your head, but it’s just that kind of humanness this music is trying to relate to.”
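FM synthesis itself is compact enough to sketch. Below is a minimal Python illustration in the spirit Chafe describes: each EEG sample sets the pitch and loudness of a short FM tone, with a modulator frequency locked to the carrier. The mapping, names, and parameter values here are my own assumptions for illustration, not Chafe’s actual design:

```python
import numpy as np

def fm_voice(eeg, sr=8000, dur_per_sample=0.01,
             base_pitch=220.0, pitch_span=220.0, mod_ratio=2.0, index=3.0):
    """Sonify a 1-D EEG trace with simple FM synthesis.

    Each EEG sample drives one short FM tone:
        y(t) = a * sin(2*pi*fc*t + I * sin(2*pi*fm*t))
    where the carrier fc follows the normalized EEG value
    and the modulator fm = mod_ratio * fc.
    """
    eeg = np.asarray(eeg, dtype=float)
    # Normalize to [0, 1] so the voltage range spans the pitch range.
    lo, hi = eeg.min(), eeg.max()
    norm = (eeg - lo) / (hi - lo) if hi > lo else np.zeros_like(eeg)

    n = int(sr * dur_per_sample)           # audio samples per EEG sample
    t = np.arange(n) / sr
    chunks = []
    for v in norm:
        fc = base_pitch + pitch_span * v   # pitch tracks EEG amplitude
        fm = mod_ratio * fc                # modulator locked to the carrier
        a = 0.2 + 0.8 * v                  # louder when activity is stronger
        chunks.append(a * np.sin(2 * np.pi * fc * t
                                 + index * np.sin(2 * np.pi * fm * t)))
    return np.concatenate(chunks)
```

The modulation index controls how bright and voice-like the timbre is; Chafe’s point about using a synthesized human voice as the “carrier of the information” corresponds to choosing carrier/modulator ratios and indices that produce vowel-like spectra.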

[Audio clip]

The result is remarkable. According to Chafe: “When Josef first heard the results he knew we were on the right track. It was his encouragement and his recognition of the potential that really started this project off.” A seizure is essentially an electrical storm in the brain—the result of neurons suddenly + uncontrollably firing signals to one another. And indeed, the chorus of electrically-driven voices composed + conducted by these seizing neurons beautifully mirrors the nervous maelstrom in the brain. But beyond simply echoing this chaotic neurological state, the seizure sounds actually aurally relay the seizure’s progression, conveying relevant neurophysiological information. Parvizi explains:

Around 0:20, the patient’s seizure starts in the right hemisphere, and the patient is talking and acting normally. Around 1:50, the left hemisphere starts seizing while the right is in a post-ictal state. The patient is mute and confused. At 2:20 both hemispheres are in the post-ictal state.

Though on first listen you may not discern their precise meaning, you need not be a trained professional to hear these transitions in brain activity—the calm before the chaos, the high-pitched chatter of the right-side seizure, the low yep-yepping of the left-side takeover, the final fatigue of the post-ictal state. Parvizi adds: “It’s very intuitive. You can easily distinguish the very slow, steady sound at the very end of the audio [the post-ictal state] versus the very asynchronous, chaotic, oscillatory sound of the seizure.”

Undergraduate researcher Michael Iorga + Professor Chafe: “This is a very Stanford-ish project—integration of students into the research and the cross-disciplinary connection between departments is something that happens a lot here. It’s really splendid.” [Photo: L.A. Cicero, Stanford News]

While certainly not destined for the Top 40 charts, the seizure sounds have their own unique musicality. Chafe admits: “Every time I try to go in and recompose the data, it sounds worse. The brain dynamics are in and of themselves just so compelling that I don’t want to touch them—it just has this intrinsic kind of musicality to it. And really my job is to bring that out so it’s appreciated.” That delicate balance between Chafe’s computer composition and the inherent musicality embedded in these neuronal messages is undoubtedly where this brain music derives its potent power—readily relaying relevant neurological information.

Science has a long-standing bias towards visual observation + data, placing particular emphasis on charts, graphical representations + images as the predominant form of scientific “knowing.” And, of course, with good reason. Figures are an easy + effective form of distributing information, with data visualization even gaining particular popularity in the pop-sci world. In fact, as a biologist, I was taught to read scientific articles by first going through the figures to decide if the data was even compelling enough to bother reading the actual text. But listening to data adds an additional, and rather powerful, layer to understanding. Chafe notes:

The fundamental similarity is definitely there. So the graphs we convert into sound are immediately going to have the same landmarks you recognize visually and aurally. The intrinsic difference is really related to real-time, and our sense of hearing is extremely real-time—it’s how we react and how we perceive things that are fast. Our visual system is slower in real-time but very good at understanding time records and capturing patterns that span minutes, centuries, or millennia.

There’s something immediate about data listening. By quite literally giving data a voice, we may gain an intuitive sense of what sounds “normal” and where we ought to listen more deeply. Chafe adds:

Music exists somewhere in the continuum where on one end, we have random nonmusical character—extremely unpredictable with all frequencies going all the time—and on the other sounds that are extraordinarily predictable, like a clock ticking. So if you figure music is somewhere in between—sometimes it has more pattern or less pattern or more invention or more predictability—that’s a lot like the natural systems that we’re often trying to understand.

Given the real-time + intuitive nature of listening to this seizure music, Parvizi + Chafe saw a unique opportunity to apply their experiment to develop a powerful diagnostic medical tool—what they refer to as a brain stethoscope. To do so, the team is currently working on engineering a real-time platform to translate the brain’s electrical activity into sound. Much like a traditional stethoscope, users of this device can move scalp electrodes around a patient’s head and tune into the brain’s dynamics. Importantly, the brain stethoscope can provide a rapid + user-friendly alternative to EEGs, which require sufficient time + training to interpret.

Dr. Parvizi + patient: “We are trying to make this instrument like any other stethoscope that could be used in a clinical setting.” [Photo: Lea Suzuki, The Chronicle]

Moreover, patients experiencing seizures don’t always present symptoms. As a result, the brain stethoscope may play an invaluable diagnostic role in raising the volume of these silent seizures—both in the hospital and at home. In an interview with Stanford News, Parvizi notes: “Someone – perhaps a mother caring for a child – who hasn’t received training in interpreting visual EEGs can hear the seizure rhythms and easily appreciate that there is a pathological brain phenomenon taking place.” But even more broadly, this technology can be utilized to better understand everyday brain dynamics outside of a clinical setting—perhaps even listening to what music the brain makes while listening to music!

Because this is ArtLab + because this article happens to mark ArtLab’s one-year anniversary, I’ll end by sharing this: After one year of exploring the intersection between art + science, it truly re-ignited my excitement when Parvizi shared: “I’m optimistic that with more scientists and artists collaborating together, we can discover fields that have never been charted before.”

Living in Three-D // Real-D

The most outrageous-seeming science fiction constructions have a rather amazing longstanding habit of becoming reality. It’s actually almost impossible [for me at least] to imagine that in the not-so-distant past space travel // robots // the Internet existed solely in the imaginations of sci-fi writers + consumers. Despite being a Millennial, well-versed + up-to-date in the latest-and-greatest innovations and gadgetry, I can’t help but have my mind utterly blown each time science fiction becomes fact. My latest obsession? 3D PRINTING.

3D printing blood vessel networks out of sugar using the RepRap at the University of Pennsylvania.

As its name suggests, 3D printing creates an object from a three-dimensional digital model, known as a CAD [Computer-Aided Design] file. To print a 3D product out of this virtual blueprint, the CAD file is sliced into a series of 2D cross-sections. Each slice is printed much like a page from a standard inkjet printer, except that instead of ink, 3D-printer cartridges deposit drops of materials like rubber, plastics, metals, and more; successive slices are then stacked and fused one on top of the other.
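The slicing step can be made concrete. If the CAD model is represented as a triangle mesh (the common STL format is exactly that), each layer’s outline is just the set of line segments where triangles cross a horizontal plane. A toy Python sketch, simplified to ignore edge cases such as triangles lying exactly in the slicing plane:

```python
def slice_triangle(tri, z):
    """Intersect one triangle (three (x, y, z) vertices) with the plane
    at height z; return the 2-D cross-section segment, or None."""
    pts = []
    for i in range(3):
        (x1, y1, z1), (x2, y2, z2) = tri[i], tri[(i + 1) % 3]
        if (z1 - z) * (z2 - z) < 0:        # edge crosses the slicing plane
            t = (z - z1) / (z2 - z1)       # interpolation parameter on edge
            pts.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return tuple(pts) if len(pts) == 2 else None

def slice_mesh(triangles, layer_height):
    """Slice a mesh into stacked 2-D contours, one list of segments per layer."""
    zs = [v[2] for tri in triangles for v in tri]
    layers = []
    z = min(zs) + layer_height / 2         # sample at mid-layer heights
    while z < max(zs):
        segs = [s for tri in triangles if (s := slice_triangle(tri, z))]
        layers.append((z, segs))
        z += layer_height
    return layers
```

A real slicer then chains these segments into closed loops, fills each loop with a toolpath, and emits machine instructions (e.g. G-code) for the printer, but the geometric core is this plane-intersection step.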

Because 3D objects are printed + stacked layer by layer from the ground up, 3D printing is often referred to as additive manufacturing to distinguish it from traditional manufacturing methods that build by cutting or drilling away parts + pieces to create a final product. By building up instead of paring back, 3D printing has blown open the doors for creating structures that were far too intricate + complex to fashion by machine or hand. [Just imagine if Michelangelo printed The David instead of chiseling away at a slab of marble for months and months!]

The ProtoHouse

The miniature 1:33 scale model of the fibrous ProtoHouse by Softkill Design, printed on the largest available 3D printer.

By making certain technical limitations obsolete, 3D printing is rapidly [and literally] reshaping what is possible in design. As designers and engineers experiment with materials and methods of manufacture, shapes + structures that once only existed deep within our wildest dreams are now becoming a reality [within reason*]. The ProtoHouse, designed by London-based design firm Softkill Design, is one such dream-turned-[pending-]reality. In an effort to build a structure using minimal materials to maximize efficiency, this fibrous architectural fabrication is fashioned after an algorithm designed to mimic bone growth.

A 1:33 scale model of the biologically inspired design was assembled in October 2012 out of 30 intricate 3D-printed pieces using highly flexible // lightweight bio-plastics without any adhesive material. With the ProtoHouse prototype in place, Softkill is in the process of scaling up the design to create a line of one-story, market-friendly homes that require only 24 hours for assembly, entering the race to build the first 3D-printed home. With such fantastical structures rapidly becoming a feasible reality, I can’t help but wonder how we will continue pushing the limits of our imagination as our dreamed up concoctions become the new normal.

Prosthetic design by San Francisco-based company Bespoke Innovations.

Amazingly, 3D printing technologies have already begun changing our relationship with that which is most sacred: our own bodies! By performing a body scan, San Francisco-based company Bespoke Innovations can 3D print prosthetic covers, known as fairings, “that perfectly mirror the sculptural symmetry and function of the wearer’s remaining limb.” In other words, the Bespoke team can essentially fabricate an artificial limb that looks like a real one. Nevertheless, even with this option available, several Bespoke clients choose to make a statement with their prostheses, turning them into custom-tailored beautiful works of art that reflect their personalities: “We envision a day when people are invited to participate in the creation of the products that have meaning to them on a fundamental level, a day when bodies are consulted directly in the creation of the products that enhance or complement them.”

Of course, beyond simply spurring on a design revolution, additive manufacturing stands to have a huge impact on all areas of our lives, from printing medications + edible wonders with the click of a button, to the considerably darker potential for copyright infringement + 3D-printed weapons. Though 3D printing was invented almost three decades ago, the idea has only recently entered the zeitgeist thanks to increased support from government funding and commercial startups, which has substantially dropped its cost. As 3D-printing capabilities continue to grow and evolve, so too will the discussion and debate surrounding the Pandora’s box that goes hand in hand with such a powerful technology. Though it certainly remains to be seen how far the so-called “3D printing revolution” will go in reshaping the metaphorical landscape of the future, I can’t help but remain [guardedly] optimistic about what 3D printing has in store!


* 3D printing technology is still very much in its infancy, and while it holds a great deal of potential, it also faces many limitations. The most important to note is that designing structures for 3D printing takes serious know-how. After all, 3D-printed products are still subject to all the restrictions of the physical and chemical world we live in. As a result, in addition to a strong foundation in CAD software, designers must also have a strong background in mechanical engineering, architecture, biology, chemistry, etc., depending on the product being manufactured.