Study: Schizophrenia may be 8 diseases
What we know — and psychiatrists have diagnosed for decades — as schizophrenia may really be eight separate diseases, research published in The American Journal of Psychiatry suggests.
Scientists at Washington University in St. Louis analyzed the DNA of more than 4,000 people with schizophrenia. They matched any gene variations they found in the DNA with study participants’ individual symptoms. In doing so, they found several “gene clusters” that appear to cause eight distinct classes of schizophrenia, according to a statement from the university.
"Complex diseases, such as schizophrenia, may be influenced by hundreds or thousands of genetic variants that interact with one another in complex ways," the study authors wrote in their introduction.
"Genes don’t operate by themselves," Dr. C. Robert Cloninger, one of the study’s senior authors, explained in the statement. "They function in concert much like an orchestra, and to understand how they’re working, you have to know not just who the members of the orchestra are but how they interact."
Schizophrenia is a chronic brain disorder that affects about 1% of the population, according to the American Psychiatric Association. Symptoms can vary from hallucinations to disordered speech to attention and decision-making problems.
Past studies done on twins and families have shown that about 80% of the risk for schizophrenia is inherited, the study authors say. A study published in July showed as many as 108 genes may be tied to the mental health disorder. But scientists have had trouble identifying specific genetic variations that put people at risk.
The Washington University researchers looked at instances where a single unit of DNA was altered, which is known as a single nucleotide polymorphism, or SNP. Then they identified 42 interactive SNP sets that significantly increased people’s risk of schizophrenia, according to the study.
In other words, if study participant Bob had Gene Cluster X, he was 70% more likely to have schizophrenia than study participant Fred who didn’t have that cluster of genes. In some cases, certain gene clusters were matched with close to a 100% increase in risk.
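That "X% more likely" framing is simply a relative-risk comparison between carriers and non-carriers of a gene cluster. A minimal sketch in Python, with invented prevalence numbers (the cluster name and both rates are hypothetical, chosen only to reproduce the 70% figure the article cites):

```python
def relative_risk_increase(risk_with, risk_without):
    """Percentage increase in disease risk for carriers of a gene
    cluster versus non-carriers."""
    return (risk_with / risk_without - 1) * 100

# Hypothetical rates: suppose 1.7% of carriers of "Cluster X" develop
# schizophrenia versus a 1.0% baseline among non-carriers.
increase = relative_risk_increase(0.017, 0.010)
print(f"Carriers are {increase:.0f}% more likely to have schizophrenia")
```

A doubling of risk (e.g. 2.0% versus 1.0%) would come out as the "close to a 100% increase" the article mentions for some clusters.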
"In the past, scientists had been looking for associations between individual genes and schizophrenia," co-author Dr. Dragan Svrakic said in the statement. "What was missing was the idea that these genes don’t act independently. They work in concert to disrupt the brain’s structure and function, and that results in the illness."
The idea that schizophrenia is not one single disorder is not really new, says Dr. Charles Raison, a professor of psychiatry at the University of Arizona. It’s similar to the way doctors use the term “breast cancer” to describe several different diseases that cause tumors in the breasts.
"Schizophrenia is probably 80 different diseases," Raison says. "All psychiatric conditions likely share this heterogeneity."
There are only so many ways that certain malfunctions in your genetic code can manifest, Raison says. There may be 10 separate gene mutations, but they might only express themselves as one or two symptoms. So what’s causing hallucinations in one person might be different than what’s causing them in another.
So why are scientists trying to separate out the different schizophrenia disorders? Two reasons, Raison says: to help predict who might get schizophrenia, and to help treat it more efficiently.
Take, for example, pleurisy, a condition in which the lining around your lungs becomes inflamed. Several things can cause pleurisy, including a viral infection, pneumonia or cancer. If you have a drug that treats pneumonia, it’s going to help only a certain percentage of patients with pleurisy. But if you know that your patient’s pleurisy is caused by cancer, you’ll find a different course of treatment.
The same could hold true for schizophrenia and other mental health conditions, Raison says.
"In psychiatry land we’re still stuck with pleurisy," he says. "They’re descriptions of symptoms, and we only have a vague idea of the underlying causes."
A new paper from researchers working in the UK and Germany dives into how much power the human brain consumes when performing various tasks — and sheds light on how humans might one day build similar computer-based artificial intelligences. Mapping biological systems isn’t as sexy as the giant discoveries that propel new products or capabilities, but that’s because it’s the final discovery — not the decades of painstaking work that lays the groundwork — that tends to receive all the media attention.
This paper — Power Consumption During Neuronal Computation — will run in an upcoming issue of IEEE’s magazine, “Engineering Intelligent Electronic Systems Based on Computational Neuroscience.” Here at ET, we’ve discussed the brain’s computational efficiency on more than one occasion. Put succinctly, the brain is more power efficient than our best supercomputers by orders of magnitude — and understanding its structure and function is absolutely vital.
When we think about compute clusters in the modern era, we think about vast arrays of homogeneous or nearly homogeneous systems. Sure, a supercomputer might combine two different types of processors — Intel Xeon + Nvidia Tesla, for example, or Intel Xeon + Xeon Phi — but as different as CPUs and GPUs are, they’re both still digital processors. The brain, it turns out, incorporates both digital and analog signaling, and the two methods are used in different ways. One potential reason is that the power efficiency of each method varies dramatically depending on how much bandwidth you need and how far the signal needs to travel.
The efficiency of the two systems depends on the signal-to-noise ratio (SNR) you need to maintain within the system.
One of the other differences between existing supercomputers and the brain is that neurons aren’t all the same size and they don’t all perform the same function. If you’ve done high school biology you may remember that neurons are broadly classified as motor neurons, sensory neurons, or interneurons. This type of grouping ignores the subtle differences between the various structures; depending on how you classify them, the actual number of different types of neurons in the brain is estimated at anywhere from several hundred to perhaps 10,000.
Compare that to a modern supercomputer that uses two or three (at the very most) CPU architectures to perform calculations and you’ll start to see the difference between our own efforts to reach exascale-level computing and simulate the brain, and the actual biological structure. If our models approximated the biological functions, you’d have clusters of ARM Cortex M0 processors tied to banks of 15-core Xeons which pushed data to Tesla GPUs, which were also tied to some Intel Quark processors with another trunk shifting work to a group of IBM Power8 cores — all working in perfect harmony. Just as modern CPUs have vastly different energy efficiencies, die sizes, and power consumption levels, we see exactly the same trends in neurons.
All three charts are interesting, but it’s the chart on the far right that intrigues me most. Relative efficiency is graphed along the vertical axis while the horizontal axis shows bits per second. Looking at it, you’ll notice that the neurons that are most efficient in terms of bits transferred per ATP molecule (ATP is a biological unit of energy, roughly the analogue of bits-per-watt in computing) are also among the slowest in terms of bits per second. The neurons that can transfer the most data in bits per second are also the least efficient.
Again, we see clear similarities between the design of modern microprocessors and the characteristics of biological organisms. That’s not to downplay the size of the gap or the dramatic improvements we’d have to make in order to offer similar levels of performance, but there’s no mystic sauce here — and analyzing the biological systems should give us better data on how to tweak semiconductor designs to approximate it.
Much of what we cover on ExtremeTech is cast in terms of the here-and-now. A better model of neuron energy consumption doesn’t really speak to any short-term goals — this won’t lead directly to a better microprocessor or a faster graphics card. It doesn’t solve the enormous problems we face in trying to shift conventional computing over to a model that more closely mimics the brain’s own function (neuromorphic design). But it does move us a critical step closer to the long-term goal of fully understanding (and possibly simulating) the brain. After all, you can’t simulate the function of an organ if you don’t understand how it signals or under which conditions it functions. [Read: A bionic prosthetic eye that speaks the language of your brain.]
Emulating a brain has at least one thing in common with emulating an instruction set in computing — the greater the gap between the two technologies, typically the larger the power cost to emulate it. The better we can analyze the brain, the better our chances of emulating one without needing industrial power stations to keep the lights on and the cooling running.
Symphony of a Seizure: Brain activity turned into sound waves. Chris Chafe and Josef Parvizi have developed a device that can turn patterns in brain activity into an audio portrait. Hear Parvizi walk you through the sound of a seizure. Audio production: Molly Bentley
1. A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?
2. If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets?
3. In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half the lake?
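These are the three items of the classic Cognitive Reflection Test, and each pits a snap answer against one that a little arithmetic confirms. A quick worked check in Python (the variable names are mine):

```python
# 1. Bat and ball: ball + (ball + 1.00) = 1.10, so 2 * ball = 0.10.
ball = (1.10 - 1.00) / 2          # $0.05, not the intuitive $0.10
bat = ball + 1.00

# 2. Widgets: each machine makes 1 widget per 5 minutes, so 100
#    machines make 100 widgets in the same 5 minutes, not 100 minutes.
minutes = 5

# 3. Lily pads: the patch doubles daily, so it is half-covered
#    exactly one day before it is fully covered: day 47, not day 24.
half_lake_day = 48 - 1

print(ball, bat, minutes, half_lake_day)
```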
Julie Boergers, Ph.D., a psychologist and sleep expert from the Bradley Hasbro Children’s Research Center, recently led a study linking later school start times to improved sleep and mood in teens. The article, titled “Later School Start Time is Associated with Improved Sleep and Daytime Functioning in Adolescents,” appears in the current issue of the Journal of Developmental & Behavioral Pediatrics.
"Sleep deprivation is epidemic among adolescents, with potentially serious impacts on mental and physical health, safety and learning. Early high school start times contribute to this problem," said Boergers. "Most teenagers undergo a biological shift to a later sleep-wake cycle, which can make early school start times particularly challenging. In this study, we looked at whether a relatively modest, temporary delay in school start time would change students’ sleep patterns, sleepiness, mood and caffeine use."
Boergers’ team administered the School Sleep Habits Survey to boarding students attending an independent high school both before and after their school start time was experimentally delayed from 8:00 to 8:25 a.m. during the winter term.
The delay in school start time was associated with a significant (29-minute) increase in sleep duration on school nights, with the percentage of students getting eight or more hours of sleep on a school night jumping from 18 to 44 percent. The research found that younger students and those sleeping less at the start of the study were the most likely to benefit from the schedule change. And once the earlier start time was reinstituted during the spring term, teens reverted to their original sleep levels.
Daytime sleepiness, depressed mood and caffeine use were all significantly reduced after the delay in school start time. The later school start time had no effect on the number of hours students spent doing homework, playing sports or engaging in extracurricular activities.
Boergers, who is also co-director of the Pediatric Sleep Disorders Clinic at Hasbro Children’s Hospital, said that these findings have important implications for public policy. “The results of this study add to a growing body of research demonstrating important health benefits of later school start times for adolescents,” she said. “If we more closely align school schedules with adolescents’ circadian rhythms and sleep needs, we will have students who are more alert, happier, better prepared to learn, and aren’t dependent on caffeine and energy drinks just to stay awake in class.”
Injuries to the head can leave victims susceptible to early death even years later through impaired judgement, a major analysis of survivors shows.
Those with a history of psychiatric disorders before the injury are most at risk of dying prematurely.
The study, in JAMA Psychiatry, of 40 years of data on more than two million people, showed that overall a brain injury trebled the risk.
Suicide and fatal injuries were among the commonest causes of early death.
More than one million people in Europe are taken to hospital with a traumatic brain injury each year.
The study, by researchers at the University of Oxford and the Karolinska Institute in Stockholm, looked at Swedish medical records between 1969 and 2009.
They followed patients who survived the initial six-month danger period after injury.
The data showed that without injury 0.2% of people were dying prematurely - before the age of 56.
However, the premature-death rate was three-fold higher in patients who had previously suffered traumatic brain injury.
In those who also had a psychiatric disorder the rate soared to 4%.
Dr Seena Fazel, one of the researchers in Oxford, said: “There are these subgroups with really high rates, and these are potentially treatable illnesses, so this is something we can do something about.”
Common causes of premature death among those who had suffered previous brain injury included suicide, being a victim of assault or suffering fatal injuries, for example in a car crash.
It is thought that the injury causes permanent damage to neural networks in the brain and can alter people’s judgement and ability to deal with new situations.
Prof Huw Williams, the co-director of the centre for clinical neuropsychology research at the University of Exeter, said: “The mortality rates are like a reverse-iceberg - they’re the most awful outcome, but the rates of depression and anxiety are huge in the brain injury population.
"People with head injury need monitoring all the time in case they become suicidal."
Dr Richard Greenwood, a consultant neurologist at Homerton Hospital in London, said post-mortem examinations showed 2% of people had evidence of brain injury. He added that his own children were not allowed to play rugby because of the risk to the brain.
There is bad news for anyone relying on last-minute exam cramming, as psychologists publish research showing that learning is much more effective when spaced out over stretches of time.
The study from Sheffield University examined how more than 850,000 people improved skills playing an online game.
It showed leaving a day between practice sessions was a much better way of gaining skills than continuous play.
Researcher Tom Stafford says this reflects how memories are stored.
Prof Stafford, a psychologist from the University of Sheffield, was able to analyse how people around the world improved when playing the Axon computer game.
He found a clear pattern showing that people were more successful when gaps were left between sessions of playing.
Leaving a day between sessions did not weaken performance, but strengthened it, says Prof Stafford.
This is because it makes better use of how the brain stores information, he says.
Cramming for long intense stretches ahead of a test might feel like more is being learned, says Prof Stafford, but this is illusory.
A better way of revising or learning is to plan over a much longer period, with substantial breaks between study sessions.
For instance, practising a skill for two hours and then taking a day-long break before practising for another two hours was more effective than practising continuously for four hours.
Prof Stafford, who analysed the data with Michael Dewar from The New York Times Research and Development Lab, says this study of such a big sample of online game players provides a useful template for understanding other types of learning.
It suggests that the volume of learning is less important than how that time is structured.
"The study suggests that learning can be improved. You can learn more efficiently or use the same practice time to learn to a higher level," says Prof Stafford.
Medical implants, complex interfaces between brain and machine or remotely controlled insects: Recent developments combining machines and organisms have great potentials, but also give rise to major ethical concerns. In a new review, KIT scientists discuss the state of the art of research, opportunities, and risks.
They are known from science fiction novels and films — technically modified organisms with extraordinary skills, so-called cyborgs. This name originates from the English term “cybernetic organism.” In fact, cyborgs that combine technical systems with living organisms are already reality. The KIT researchers Professor Christof M. Niemeyer and Dr. Stefan Giselbrecht of the Institute for Biological Interfaces 1 (IBG 1) and Dr. Bastian E. Rapp, Institute of Microstructure Technology (IMT), point out that this especially applies to medical implants.
In recent years, major progress has been made in medical implants: smart materials that react automatically to changing conditions, computer-aided design and fabrication based on magnetic resonance tomography datasets, and surface modifications for improved tissue integration. For successful tissue integration and the prevention of inflammatory reactions, special surface coatings have been developed, including at KIT under the multidisciplinary Helmholtz program “BioInterfaces.”
Progress in microelectronics and semiconductor technology has been the basis of electronic implants that control, restore or improve the functions of the human body, such as cardiac pacemakers, retina implants, hearing implants, or implants for deep brain stimulation in pain or Parkinson’s therapies. Currently, bioelectronic developments are being combined with robotic systems to design highly complex neuroprostheses. Scientists are working on brain-machine interfaces (BMIs) for direct physical contact with the brain. BMIs are used, among other things, to control prostheses and complex movements, such as gripping. Moreover, they are important tools in neuroscience, as they provide insight into the functioning of the brain. Apart from electric signals, substances released by implanted micro- and nanofluidic systems in a spatially or temporally controlled manner can be used for communication between technical devices and organisms.
BMIs are often considered data suppliers. However, they can also be used to feed signals into the brain, which is a highly controversial issue from an ethical point of view. “Implanted BMI that feed signals into nerves, muscles or directly into the brain are already used on a routine basis, e.g. in cardiac pacemakers or implants for deep brain stimulation,” Professor Christof M. Niemeyer, KIT, explains. “But these signals are neither planned to be used nor suited to control the entire organism — brains of most living organisms are far too complex.”
Brains of lower organisms, such as insects, are less complex. As soon as a signal is coupled in, a certain movement program, such as running or flying, is started. So-called biobots, i.e. large insects with implanted electronic and microfluidic control units, are used in a new generation of tools, such as small flying objects for monitoring and rescue missions. In addition, they are applied as model systems in neurosciences in order to understand basic relationships.
Electrically active medical implants intended for long-term use depend on a reliable power supply. Scientists are currently working on methods to harvest the patient’s own thermal, kinetic, electric or chemical energy.
In their review the KIT researchers sum up that developments combining technical devices with organisms have a fascinating potential. They may considerably improve the quality of life of many people in the medical sector in particular. However, ethical and social aspects always have to be taken into account.
Researchers at Johns Hopkins University have found another use for caffeine, the popular stimulant: memory enhancement.
Michael Yassa, an assistant professor of psychological and brain sciences at Johns Hopkins, and his team of scientists found that caffeine has a positive effect on our long-term memory. Their research, published by the journal Nature Neuroscience, shows that caffeine enhances certain memories at least up to 24 hours after it is consumed.
"We’ve always known that caffeine has cognitive-enhancing effects, but its particular effects on strengthening memories and making them resistant to forgetting has never been examined in detail in humans," said Yassa, senior author of the paper. "We report for the first time a specific effect of caffeine on reducing forgetting over 24 hours."
The Johns Hopkins researchers conducted a double-blind trial in which participants who did not regularly eat or drink caffeinated products received either a placebo or a 200-milligram caffeine tablet five minutes after studying a series of images. Salivary samples were taken from the participants before they took the tablets to measure their caffeine levels. Samples were taken again one, three, and 24 hours afterwards.
The next day, both groups were tested on their ability to recognize images from the previous day’s study session. On the test, some of the visuals were the same as those from the day before, some were new additions, and some were similar but not the same.
More members of the caffeine group were able to correctly identify the new images as “similar” to previously viewed images rather than erroneously citing them as the same.
The brain’s ability to recognize the difference between two similar but not identical items, called pattern separation, reflects a deeper level of memory retention, the researchers said.
"If we used a standard recognition memory task without these tricky similar items, we would have found no effect of caffeine," Yassa said. "However, using these items requires the brain to make a more difficult discrimination—what we call pattern separation, which seems to be the process that is enhanced by caffeine in our case."
The memory center in the human brain is the hippocampus, a seahorse-shaped area in the medial temporal lobe of the brain. The hippocampus is the switchbox for all short- and long-term memories. Most research done on memory—the effects of concussions in athletes, of war-related head injuries, and of dementia in the aging population—focuses on this area of the brain.
Until now, caffeine’s effects on long-term memory had not been examined in detail. Of the few studies done, the general consensus was that caffeine has little or no effect on long-term memory retention.
The research is different from prior experiments because the subjects took the caffeine tablets only after they had viewed and attempted to memorize the images.
"Almost all prior studies administered caffeine before the study session, so if there is an enhancement, it’s not clear if it’s due to caffeine’s effects on attention, vigilance, focus, or other factors," Yassa said. "By administering caffeine after the experiment, we rule out all of these effects and make sure that if there is an enhancement, it’s due to memory and nothing else."
According to the U.S. Food and Drug Administration, 90 percent of people worldwide consume caffeine in one form or another. In the United States, 80 percent of adults consume caffeine every day. The average adult has an intake of about 200 milligrams—the same amount used in the Yassa study—or roughly one cup of strong coffee per day.
Yassa’s team completed the research at Johns Hopkins before his lab moved to the University of California, Irvine, at the start of this year.
"The next step for us is to figure out the brain mechanisms underlying this enhancement," Yassa said. "We can use brain-imaging techniques to address these questions. We also know that caffeine is associated with healthy longevity and may have some protective effects from cognitive decline like Alzheimer’s disease. These are certainly important questions for the future."
Deanna Barch talks fast, as if she doesn’t want to waste any time getting to the task at hand, which is substantial. She is one of the researchers here at Washington University working on the first interactive wiring diagram of the living, working human brain.
To build this diagram she and her colleagues are doing brain scans and cognitive, psychological, physical and genetic assessments of 1,200 volunteers. They are more than a third of the way through collecting information. Then comes the processing of data, incorporating it into a three-dimensional, interactive map of the healthy human brain showing structure and function, with detail to one and a half cubic millimeters, or less than 0.0001 cubic inches.
Dr. Barch is explaining the dimensions of the task, and the reasons for undertaking it, as she stands in a small room, where multiple monitors are set in front of a window that looks onto an adjoining room with an M.R.I. machine, in the psychology building. She asks a research assistant to bring up an image. “It’s all there,” she says, reassuring a reporter who has just emerged from the machine, and whose brain is on display.
And so it is, as far as the parts are concerned: cortex, amygdala, hippocampus and all the other regions and subregions, where memories, fear, speech and calculation occur. But this is just a first go-round. It is a static image, in black and white. There are hours of scans and tests yet to do, though the reporter is doing only a demonstration and not completing the full routine.
Each of the 1,200 subjects whose brain data will form the final database will spend a good 10 hours over two days being scanned and doing other tests. The scientists and technicians will then spend at least another 10 hours analyzing and storing each person’s data to build something that neuroscience does not yet have: a baseline database for structure and activity in a healthy brain that can be cross-referenced with personality traits, cognitive skills and genetics. And it will be online, in an interactive map available to all.
Dr. Helen Mayberg, a doctor and researcher at the Emory University School of Medicine, has used M.R.I. research to guide her development of a treatment for depression with deep brain stimulation, a technique that involves surgery to implant a pacemaker-like device in the brain. She is one of the many scientists who could use this sort of database to guide their research. With it, she said, she can ask “how is this really critical node connected” to other parts of the brain, information that will inform future research and surgery.
The database and brain map are a part of the Human Connectome Project, a roughly $40 million five-year effort supported by the National Institutes of Health. It consists of two consortiums: a collaboration among Harvard, Massachusetts General Hospital and U.C.L.A. to improve M.R.I. technology and the $30 million project Dr. Barch is part of, involving Washington University, the University of Minnesota and the University of Oxford.
Dr. Barch is a psychologist by training and inclination who has concentrated on neuroscience because of the desire to understand severe mental illness. Her role in the project has been in putting together the battery of cognitive and psychological tests that go along with the scans, and overseeing their administration. This is the information that will give depth and significance to the images.
She said the central question the data might help answer was, “How do differences between you and me, and how our brains are wired up, relate to differences in our behaviors, our thoughts, our emotions, our feelings, our experiences?”
And, she added, “Does that help us understand how disorders of connectivity, or disorders of wiring, contribute to or cause neurological problems and psychiatric problems?”
The Human Connectome Project is one of a growing number of large, collaborative information-gathering efforts that signal a new level of excitement in neuroscience, as rapid technological advances seem to be bringing the dream of figuring out the human brain into the realm of reality.
In Europe, the Human Brain Project has been promised $1 billion for computer modeling of the human brain. In the United States last year, President Obama announced an initiative to push brain research forward by concentrating first on developing new technologies. This so-called Grand Challenge has been promised $100 million of financing for the first year of what is anticipated to be a decade-long push. The money appears to be real, but it may come from existing budgets, and not from any increase for the federal agencies involved.
A vast amount of research is already going on — so much that the neuroscience landscape is almost as difficult to encompass as the brain itself. The National Institutes of Health alone spends $5.5 billion a year on neuroscience, much of it directed toward research on diseases like Parkinson’s and Alzheimer’s.
A variety of private institutes emphasize basic research that may not have any immediate payoff. For instance, at the Allen Institute for Brain Science in Seattle, Janelia Farm in Virginia, part of the Howard Hughes Medical Institute, and at numerous universities, researchers are trying to understand how neurons compute — what the brains of mice, flies and human beings do with their information. The Allen Institute is now spending $60 million a year and Janelia Farm about $30 million a year on brain research. The Kavli Foundation has committed $4 million a year for 10 years, and the Salk Institute in San Diego plans to spend a total of $28 million on new neuroscience research. And there are others in the U.S. and abroad.
To be sure, this is not the first time such a focus has been placed on brain research. The 1990s were anointed the decade of the brain by President George H. W. Bush. Strides were made, but many aspects of the brain have remained mysterious.
There is, however, a good reason for the current excitement, and that is accelerating technological change that the most sanguine of brain mappers compare to the growing ability to sequence DNA that led to the Human Genome Project.
Optogenetics is one new technique that has been transformative. It uses light to switch genetically modified neurons on and off in laboratory animals, letting researchers control activity in different parts of the brain. Powerful developments in microscopy have made possible movies of brain activity in living animals. A modified rabies virus can target one brain cell and mark every other cell that is connected to it.
“There is an explosion of new techniques,” said Dr. R. Clay Reid, a senior investigator at the Allen Institute, who recently moved there from Harvard Medical School. “And the end isn’t really in sight,” said Dr. Reid, who is taking advantage of just about every new technology imaginable in his quest to decipher the part of the mouse brain devoted to vision.
Charting the Brain
Of the many metaphors used for exploring and understanding the brain, mapping is probably the most durable, perhaps because maps are so familiar and understandable. “A century ago, brain maps were like 16th-century maps of the Earth’s surface,” said David Van Essen, who is in charge of the Connectome effort at Washington University, where Dr. Barch works. Much was unknown or mislabeled. “Now our characterizations are more like an 18th-century map.”
The continents, mountain ranges and rivers are getting more clearly defined. His hope, he said, is that the Human Connectome Project will be a step toward vaulting through the 19th and 20th centuries and reaching something more like Google Maps, which is interactive and has many layers.
Researchers may not be looking for the best sushi restaurants or how to get from one side of Los Angeles to the other while avoiding traffic, but they will eventually be looking for traffic flow, particularly popular routes for information, and matching traffic patterns to the tasks the brain is doing. They will also be asking how differences in the construction of the pathways that make up the brain’s roads relate to differences in behavior, intelligence, emotion and genetics.
The power of computers and mathematical tools devised for analyzing vast amounts of data made such maps possible. The gathering tool of choice at Washington University is an M.R.I. machine customized at the University of Minnesota.
An M.R.I. machine creates a magnetic field surrounding the body part to be scanned, and sends radio waves into the body. Unlike X-rays, which are known to pose some dangers, M.R.I. scans are considered to be safe. It is one of the few methods of noninvasive scanning that can survey a whole human brain.
There are a variety of ways to gather and interpret information in an M.R.I. machine, and different types of scans can show both basic structure and activity. When a volunteer is trying to solve a memory problem, the hippocampus, the amygdala and the prefrontal cortex are all going to be involved. An M.R.I. machine can also trace the brain's wiring, in a technique called diffusion imaging. In that kind of scan, the movement of water molecules along nerve fibers reveals the pathways between regions — which way the traffic can flow.
A Path to Research
For Dr. Barch, 48, another kind of interest in the human brain put her on the path to Washington University. “I always knew I wanted to be a psychologist,” she said — specifically, a school psychologist. But as an undergraduate at Northwestern, she excelled in an abnormal psychology class, and the professor recruited her to do research.
“When I graduated from college, I decided to become a case manager for the chronically mentally ill for a year to kind of suss out, ‘Do I want to do more clinical work or research?’ ” she said. “That was a great experience, but it really made me realize that research is the only way you’re going to have an impact on many lives, rather than sort of individual lives.”
She obtained her Ph.D. in clinical psychology at the University of Illinois at Urbana-Champaign, then did postdoctoral study in cognitive neuroscience at the University of Pittsburgh and Carnegie Mellon University. Her years in graduate school in the 1990s coincided with the development and use of the so-called functional M.R.I., which can show not just static structure, but the brain in action.
“I got into the field when functional imaging was just at its very beginning, so I was able to learn on the ground floor,” she said.
She moved to Washington University after her postdoctoral research partly because of the number of people there working on imaging, including Dr. Marcus E. Raichle, a pioneer in developing ways of watching the brain at work.
As a professor at Washington University and a leader of one of five teams there working on the Human Connectome Project, Dr. Barch focuses her research on the way individual differences in the brains of healthy people are related to differences in personality or thinking.
For instance, she said, people doing memory tasks in the M.R.I. machine may differ in competitiveness and commitment to doing well. That ought to show up in activity in the parts of the brain that involve emotion, like the amygdala. However, she points out that the object of the Connectome Project is not to find the answers to these questions, but to provide the database for others to try to do so.
The project at Washington University requires exhaustive scans of 1,200 healthy people, age 22 to 35, each of whom spends about four hours over two days lying in the noisy, claustrophobia-inducing cylinder of a customized M.R.I. machine. Sometimes they stare at one spot, curl their toes or move their fingers. They might play gambling games, or try memory tests that can flummox even the sharpest minds.
“In an ideal world, we would have enough tasks to activate every part of the brain,” she said. “We got pretty close. We’re not perfect, but pretty close.”
Over the two days, the research subjects spend another six hours taking other tests designed to measure intelligence, basic physical fitness, tasting ability and their emotional state.
The volunteers (and they are all volunteers, paid a flat $400 for their time and effort) can also be seen in street clothes, doing a kind of race around two traffic cones in the sunlit corridor of the glass-walled psychology building, with data collected on how quickly they complete the course.
Or they can be glimpsed padding down a hallway in their stocking feet from the M.R.I. machine to an office where a technician dabs their tongues with a swab dipped in a mystery liquid, then asks them to identify the intensity and quality of the taste.
In the same office, they type in answers to cognitive tests, and to a psychological survey, for which they are left in solitude because of the personal nature of some of the questions: how they feel about life, how often they are sad. The results are confidential, as are all the test results.
So far almost 500 subjects have gone through the full range of tests, which amounts to about 5,000 hours of work for Dr. Barch and others in the program.
So far, data has been released for 238 subjects, and it is available to everyone for free through a web-based database and software program called Workbench.
The sharing of data is characteristic of most of the new brain research efforts, and particularly important to Dr. Barch.
“The amount of time and energy we’re spending collecting this data, there’s no possible way any one research group could ever use it to the extent that justifies the cost,” she said. “But letting everybody use it — great!”
The Elusive Brain
No one expects the brain to yield its secrets quickly or easily. Neuroscientists are fond of deflecting hope even as they point to potential success. Science may come to understand neurons, brain regions and connections, make progress on Parkinson’s, Alzheimer’s or depression, and even decipher the code or codes the brain uses to send and store information. But, as any neuroscientist sooner or later cautions in discussing the prospects for breakthroughs, we are not going to “solve the brain” anytime soon — not going to explain consciousness, the self, the precise mechanisms that produce a poem.
Perhaps the greatest challenge is that the brain functions and can be viewed at so many levels, from a detail of a synapse to brain regions trillions of times larger. There are electrical impulses to study, biochemistry, physical structure, networks at every level and between levels. And there are more than 40,000 scientists worldwide trying to figure it out.
This is not a case of an elephant examined by 40,000 blindfolded experts, each of whom comes to a different conclusion about what it is they are touching. Everyone knows the object of study is the brain. The difficulty of comprehending the brain may be more aptly compared to a poem by Wallace Stevens, “Thirteen Ways of Looking at a Blackbird.”
Each way of looking, not looking, or just being in the presence of the blackbird reveals something about it, but only something. Each way of looking at the brain reveals ever more astonishing secrets, but the full and complete picture of the human brain is still out of reach.
There is no need, no intention and perhaps no chance, of ever “solving” a poet’s blackbird. It is hard to imagine a poet wanting such a thing. But science, by its nature, pursues synthesis, diagrams, maps — a grip on the mechanism of the thing. We may not solve the brain any time soon, but someday achieving such a solution, at least in scientific terms, is the fervent hope of neuroscience.
Some 30 minutes of meditation daily may improve symptoms of anxiety and depression, a new Johns Hopkins analysis of previously published research suggests.
“A lot of people use meditation, but it’s not a practice considered part of mainstream medical therapy for anything,” says Madhav Goyal, M.D., M.P.H., an assistant professor in the Division of General Internal Medicine at the Johns Hopkins University School of Medicine and leader of a study published online Jan. 6 in JAMA Internal Medicine. “But in our study, meditation appeared to provide as much relief from some anxiety and depression symptoms as what other studies have found from antidepressants.” The patients in the studies reviewed typically did not have full-blown anxiety or depression.
The researchers evaluated the degree to which those symptoms changed in people who had a variety of medical conditions, such as insomnia or fibromyalgia, although only a minority had been diagnosed with a mental illness.
Goyal and his colleagues found that so-called “mindfulness meditation” — a form of Buddhist self-awareness designed to focus precise, nonjudgmental attention on the moment at hand — also showed promise in alleviating some pain symptoms as well as stress. The findings held even as the researchers controlled for the possibility of the placebo effect, in which subjects in a study feel better even if they receive no active treatment because they perceive they are getting help for what ails them.
To conduct their review, the investigators focused on 47 clinical trials performed through June 2013 among 3,515 participants that involved meditation and various mental and physical health issues, including depression, anxiety, stress, insomnia, substance use, diabetes, heart disease, cancer and chronic pain. They found moderate evidence of improvement in symptoms of anxiety, depression and pain after participants underwent what was typically an eight-week training program in mindfulness meditation. They discovered low evidence of improvement in stress and quality of life. There was not enough information to determine whether other areas could be improved by meditation. In the studies that followed participants for six months, the improvements typically continued.
They also found no harm came from meditation.
Meditation, Goyal notes, has a long history in Eastern traditions, and it has been growing in popularity over the last 30 years in Western culture.
“A lot of people have this idea that meditation means sitting down and doing nothing,” Goyal says. “But that’s not true. Meditation is an active training of the mind to increase awareness, and different meditation programs approach this in different ways.”
Mindfulness meditation, the type that showed the most promise, is typically practiced for 30 to 40 minutes a day. It emphasizes acceptance of feelings and thoughts without judgment and relaxation of body and mind.
He cautions that the literature reviewed in the study contained potential weaknesses. Further studies are needed to clarify which outcomes are most affected by these meditation programs, as well as whether more meditation practice would have greater effects.
“Meditation programs appear to have an effect above and beyond the placebo,” Goyal says.