This image could hang in a gallery, but it began as a sliver of one woman’s brain. In 2014, during surgery for epilepsy, a small piece of the patient’s cerebral cortex was removed. That cubic millimeter of tissue has allowed researchers from Harvard and Google to produce the most detailed wiring diagram of the human brain the world has ever seen.
Biologists and machine-learning experts spent 10 years developing an interactive map of the brain tissue, which contains about 57,000 cells and 150 million synapses. It shows cells wrapping around themselves, pairs of cells that appear mirrored, and egg-shaped “objects” that, according to the researchers, defy classification. The mind-bogglingly complex diagram is expected to advance scientific research, from understanding human neural circuits to developing potential treatments for disease.
“If we map things at very high resolution, see all the connections between different neurons, and analyze that on a large scale, we might be able to identify wiring rules,” said Daniel Berger, one of the project’s principal investigators and a specialist in connectomics, the science of how individual neurons connect to form functional networks. “From that, we might be able to create models that mechanistically explain how thinking works or memory is stored.”
Jeff Lichtman, a professor of molecular and cellular biology at Harvard, explains that researchers in his lab, led by Alex Shapson-Coe, created the brain map by taking subcellular pictures of the tissue using electron microscopy. The tissue from the 45-year-old woman’s brain was stained with heavy metals, which bind to lipid membranes in cells. This was done so that the cells would be visible under an electron microscope, since heavy metals scatter electrons.
The tissue was then embedded in resin so that it could be cut into very thin sections, just 34 nanometers thick (for comparison, a typical piece of paper is about 100,000 nanometers thick). This was done to make mapping easier, Berger says, by turning a 3D problem into a 2D one. The team then took electron microscope images of each 2D section, which amounted to a whopping 1.4 petabytes of data.
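A rough back-of-envelope sketch gives a feel for these numbers. The arithmetic below is illustrative only, derived from the two figures in the article (34-nanometer sections and about 1.4 petabytes of data); the slice count and per-section size are not figures from the study itself.

```python
# Illustrative arithmetic only: the 34 nm thickness and ~1.4 PB total
# come from the article; everything derived from them is an estimate.
mm_in_nm = 1_000_000           # 1 millimeter = 1,000,000 nanometers
section_thickness_nm = 34      # thickness of each tissue section

# How many 34 nm sections it would take to cut through 1 mm of tissue
num_sections = mm_in_nm // section_thickness_nm
print(num_sections)            # about 29,000 sections

# Average image data per section if the whole stack totals ~1.4 petabytes
total_bytes = 1.4e15
bytes_per_section = total_bytes / num_sections
print(f"~{bytes_per_section / 1e9:.0f} GB per section")
```

Even a single section, by this estimate, carries tens of gigabytes of image data, which is why the reconstruction required industrial-scale computing.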
When the Harvard researchers had these images, they did what many of us do when faced with a problem: They turned to Google. A team at the tech giant led by Viren Jain aligned the 2D images using machine-learning algorithms to produce 3D reconstructions using automatic segmentation, in which components within an image—different cell types, for example—are automatically differentiated and categorized. Some of the segmentation required what Lichtman called “ground-truth data,” with Berger (who worked closely with the Google team) manually redrawing some of the tissue to further inform the algorithms.
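To illustrate what “automatic segmentation” means at its simplest, here is a toy sketch of my own (not the team’s method, which relies on trained machine-learning models): grouping adjacent foreground pixels of a tiny binary image into separately labeled objects via flood fill.

```python
from collections import deque

def label_components(image):
    """Toy segmentation: assign each connected blob of foreground pixels
    (value 1) its own integer label using breadth-first flood fill.
    Real connectomics pipelines use learned models; this only illustrates
    the idea of turning raw pixels into distinct labeled objects."""
    rows, cols = len(image), len(image[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] == 1 and labels[r][c] == 0:
                current += 1               # start a new object
                labels[r][c] = current
                queue = deque([(r, c)])
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] == 1
                                and labels[ny][nx] == 0):
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels

# Two separate blobs of foreground pixels receive distinct labels
img = [
    [1, 1, 0, 0],
    [0, 1, 0, 1],
    [0, 0, 0, 1],
]
print(label_components(img))  # → [[1, 1, 0, 0], [0, 1, 0, 2], [0, 0, 0, 2]]
```

The manually redrawn “ground-truth data” Berger supplied plays the role that correct labels play here: it tells the learning algorithm what a properly segmented cell should look like.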
Digital technology, Berger explains, allowed him to see every cell in the tissue sample and to color-code each one according to its size. Traditional methods of imaging neurons, such as the Golgi stain, a chemical treatment used for more than a century, leave some elements of nervous tissue hidden.