Sunday, April 20, 2014

Breaking habits and erasing memories

The summer day on Okinawa Island in Japan was so warm that those of us who were there to teach had given up on going to the beach in the afternoon and decided instead to try some early morning tennis.  On this particular morning, a distinguished scientist and friend had joined our group, and there he was, holding the ball and warming up for his serve.  As he tossed the ball up, he bent his body sideways and then back, twisted upwards, and finally had his racquet make contact with the ball, delivering a pretty good serve.

I stood there marveling at how he had learned to serve this way.  He said: “Well, I learned on my own, and despite lots of coaches who have tried to break it down and rebuild it, I haven’t been able to change it much.”




Memories, like that of how to hit a tennis serve, can become so persistent that the brain seems unable to change them.  On the surface, this may not appear that important, as the cost is merely looking a little silly and not being able to do something as efficiently as possible.  But what if you are traveling with family on a peaceful day and stop at a gas station, and suddenly the smell of petroleum brings back memories of combat, paralyzing you with fear?  What if you are watching a movie in which the hero is climbing the face of a rock, and when she reaches the peak and looks down, you find your knees shaking?  Does the brain have a mechanism in place to rebuild or even erase unwanted habits and fear-inducing memories?

Until about 15 years ago, it was generally assumed that when the brain learns something new, the newly acquired memory is initially in a labile state and can be readily changed, but after a short period of time (hours) it becomes ‘consolidated’, meaning that it becomes resistant to change.  For example, when rats were given a single pairing of a tone with a foot-shock, the next time they heard that tone they became scared and stopped moving.  If they were given a drug that disrupted the molecular pathways involved in consolidation (a protein synthesis inhibitor), the next day when they heard the tone they were not scared of it.  However, the drug had to be given soon after the animal’s first experience of the tone-shock pairing.  If it was given even a few hours after that first experience, it had little effect; the animal still feared the tone.  And so it seemed that once an emotional or fear-inducing memory was acquired, there was little that could be done to change it.

The basis for this idea was a century of work that had described how memories form.  Neurons communicate with each other via their synapses, tiny junctions through which one neuron sends messages to, and receives messages from, another neuron.  Eric Kandel, a Columbia University neuroscientist, had shown that short-term memories, things that last for a few minutes, are due to transient changes that make the synapse more efficient, but these changes are sustained only for a short period of time.  To make memories last, the changes at the synapse had to be sustained indefinitely, and this required the manufacture of new proteins.  If the initial experience was strong enough, with the passage of time these new proteins were made by the neuron and the memory was maintained, apparently becoming permanent.

But in the year 2000, Karim Nader, an Egyptian-born neuroscientist who was raised in Canada, made a discovery that completely overturned this idea.  Working in Joseph LeDoux’s laboratory at New York University, he took rats and gave them a single pairing of a tone with a foot-shock, and indeed, the next day he found that when they heard the tone, the rats froze in their tracks (rats express fear by ‘freezing’).  However, right after they heard this tone, he injected into their amygdala (a region of the brain involved in storing fearful memories) a drug that inhibits protein synthesis.  Amazingly, he found that a day later, when they heard the tone, their fear was reduced by half (measured by the time spent ‘freezing’).  Interestingly, if the drug was given without the reactivation of the memory (that is, if on day 2 the tone was not played), it had no effect.  And if the animal heard the tone but was given the drug 6 hours later, it still feared the tone the next day.  So the key idea was that the fear-inducing memory could be weakened if the drug was given right after the memory was reactivated, but not if the drug was given alone, or if the memory was reactivated without the drug.
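
The logic of these conditions can be summarized in a few lines of Python.  This is only a qualitative sketch of the outcomes described above (the function and its labels are mine, not the paper's analysis): fear is weakened only when reactivation and the drug coincide within the reconsolidation window.

```python
# Illustrative sketch only (not the experiment's analysis code): the
# qualitative outcomes of the Nader et al. (2000) conditions when the
# rats' fear of the tone is tested the following day.

def fear_on_test_day(memory_reactivated, drug_given, hours_after_reactivation=0):
    """Qualitative outcome of the day-3 tone test, given what happened on day 2."""
    if memory_reactivated and drug_given and hours_after_reactivation < 6:
        return "fear reduced (labile memory disrupted during reconsolidation)"
    if memory_reactivated and drug_given:
        return "fear intact (memory had already re-stabilized before the drug)"
    if drug_given and not memory_reactivated:
        return "fear intact (no reactivation, so nothing was made labile)"
    return "fear intact (reactivation alone does not erase the memory)"

for condition in [
    dict(memory_reactivated=True,  drug_given=True,  hours_after_reactivation=0),
    dict(memory_reactivated=True,  drug_given=True,  hours_after_reactivation=6),
    dict(memory_reactivated=False, drug_given=True),
    dict(memory_reactivated=True,  drug_given=False),
]:
    print(condition, "->", fear_on_test_day(**condition))
```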

Unfortunately, protein synthesis inhibitors cannot safely be given to humans, and so until recently it was unclear whether this new understanding could be applied to fear-inducing memories in people.  In 2009, Merel Kindt and colleagues in Amsterdam asked undergraduate students to look at a picture of a spider; a few seconds later a loud sound was played, followed by a mild shock to the hand.  When the students heard the loud sound, they had a startle reflex, producing an eye blink.  The students were also shown a picture of another spider, followed by another loud sound, but no shock to the hand.  So the students learned to fear the picture of the first spider, but not the second, and the amount of fear was measured by how strongly they reacted to the loud sound.  Indeed, the students feared the first spider more than the second.  The students returned on day 2, and Kindt showed them the picture of the first spider, but did not shock them.  Around the time of this reactivation, they were given a drug called propranolol, which is often used to prevent stage fright and works by inhibiting the actions of norepinephrine.  When the students returned the next day, they did not show fear of the spider.  Importantly, if the drug was given but the picture of the spider was not shown, the fear-inducing memory remained.

So it seems possible that in humans, certain fear-inducing memories can be weakened by a combination of reactivation of that memory and administration of certain drugs, such as propranolol.  Later work from the Kindt group showed that the key requirement is that during recall of the memory there must be a prediction error.  That is, during recall the brain appears to predict that a bad thing is about to happen (a shock); if it does not happen, and the drug is present, then the memory is weakened.  Both the prediction error and the presence of the drug seem to be required, as one without the other is much less effective.
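
One way to picture this finding is with a toy prediction-error rule in the spirit of Rescorla and Wagner.  The sketch below is my own illustration, not the model in the Sevenster et al. paper; the numbers and variable names are invented.  The memory becomes labile, and hence vulnerable to the drug, only when the outcome at recall differs from what the memory predicts.

```python
# Toy model of prediction-error-gated reconsolidation (my own illustration;
# the variable names and the factor of 0.5 are invented, not from the paper).

def recall_trial(fear_strength, expected_shock, shock_delivered, drug_present):
    """Return the fear strength after one recall trial."""
    prediction_error = shock_delivered - expected_shock  # surprise at recall
    memory_labile = prediction_error != 0                # no surprise, no lability
    if memory_labile and drug_present:
        return 0.5 * fear_strength   # drug disrupts restorage of the labile memory
    return fear_strength             # otherwise the memory is restored unchanged

fear = 1.0  # learned fear of the first spider picture (arbitrary units)
print(recall_trial(fear, expected_shock=1, shock_delivered=0, drug_present=True))   # weakened
print(recall_trial(fear, expected_shock=1, shock_delivered=1, drug_present=True))   # unchanged
print(recall_trial(fear, expected_shock=1, shock_delivered=0, drug_present=False))  # unchanged
```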

These approaches are now being studied for the treatment of PTSD.  In a recent study, propranolol was given to people who had been involved in a serious car accident.  Those people were less likely to develop PTSD symptoms over the following 3 months than people who were given a placebo.

Notice, however, that all of the successes so far have come from weakening newly formed memories.  What about old fear-inducing memories?  The news there is less clear; older memories may be less susceptible to change when they are reactivated.  Which brings me to one of my favorite quotes from Margaret Thatcher, who was quoting her father when she said:

Watch your thoughts for they become words.
Watch your words for they become actions.
Watch your actions for they become habits.
Watch your habits for they become your character.
And watch your character for it becomes your destiny.

References
Kindt M, Soeter M, Vervliet B (2009) Beyond extinction: erasing human fear responses and preventing the return of fear. Nature Neurosci 12:256-258.
Nader K, Schafe GE, LeDoux JE (2000) Fear memories require protein synthesis in the amygdala for reconsolidation after retrieval. Nature 406:722-726.
Sevenster D, Beckers T, Kindt M (2013) Prediction error governs pharmacologically induced amnesia for learned fear. Science 339:830-833.


Sunday, March 16, 2014

The puzzle of menopause

Human females appear to be unique in the animal kingdom in that they live far beyond the end of their fertility.  Typically, menopause occurs around the age of 50, and women can expect to live into their 80s.  In men, however, fertility continues to near the end of life: although there are clear declines affecting the endocrine system, testicular function, and the structure of sperm chromosomes, there appears to be no andropause; that is, men retain a significant probability of fertility, but women do not.  (In 1935, three physicians reported what may be the oldest American father on record, a 94-year-old North Carolina man who married a 27-year-old widow and fathered a child.)

In contrast to humans, in chimpanzees fertility continues in both females and males until near the end of life.  That is, whereas in women menopause is a mid-life event, in chimpanzee females it is a late-life event.  Why?

A genetic wall of death beyond the fertility years
In 1966, W.D. (Bill) Hamilton, a newly minted PhD in biology who would later be called "nature's oracle" for his mathematical reasoning, and whose work would lay the foundation for Richard Dawkins's "selfish gene", used a mathematical model of genetics to demonstrate that, from an evolutionary standpoint, genes that protect against disease and extend the lifespan beyond the age of fertility would not be maintained by natural selection.  Animals, therefore, should not live much longer than the end of their fertility.

Bill's argument went as follows: imagine four genes that are expressed in females and give immunity against some lethal disease, but each is expressed only during one particular period of life.  The first gene is expressed in the 1st year of life, the second in the 15th year, the third in the 30th year, and the fourth in the 45th.  Now imagine that fertility ends before the age of 45.  If so, the fourth gene confers much less advantage than the first three.  This model explained the fertility-age relationship in men, but it could not explain why women lose their fertility around the midpoint of their lives and yet live on.
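
Hamilton's argument can be made concrete with a small calculation.  The survival and fertility schedules in the sketch below are invented for illustration (this is not his actual model): the selective benefit of a gene that prevents death at a given age is the reproduction that still lies ahead at that age, which is zero once fertility has ended.

```python
# Toy version of Hamilton's argument (the schedules are made up, this is not
# his actual model): the selective benefit of preventing death at a given age
# is the expected reproduction that remains after that age.

FERTILITY_START, FERTILITY_END = 15, 45

def fertility(age):
    """Hypothetical fertility schedule: constant during the fertile years, zero otherwise."""
    return 1.0 if FERTILITY_START <= age < FERTILITY_END else 0.0

def benefit_of_protection_at(age, max_age=90):
    """Expected future reproduction gained if death at `age` is prevented."""
    return sum(fertility(a) for a in range(age, max_age))

for gene_age in (1, 15, 30, 45):
    print(f"gene expressed at age {gene_age:2d}: selective benefit = "
          f"{benefit_of_protection_at(gene_age):.0f}")
# The gene acting at 45 earns zero benefit: it sits behind the 'wall of death'.
```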

Evolutionary biologists have been puzzled by the fact that human females have escaped this “wall of death” that, at least theoretically, looms after menopause and appears to be present in many other animals.  Numerous theories have been offered.  Perhaps in the past human longevity was too short for females to experience menopause (defined as surviving for at least one year in good health beyond the last menstrual cycle), and so menopause is a byproduct of an increase in longevity unique to humans.  Perhaps by entering menopause, older mothers increased the survival probability of their children and grandchildren (the grandmother effect).  Perhaps reproductive aging was more severe than somatic aging, so that unlike other functions, which could proceed at less than perfect accuracy, reproduction in females could not, and therefore stopped once a threshold level of accuracy could no longer be met.

The jury is still out on whether any of these theories are supported by evolutionary data.  However, the most interesting new hypothesis proposes that women experience menopause at mid-life because of the behavior of men.

Male sexual preference may lead to female menopause
In 2007, Shripad Tuljapurkar and colleagues revisited Hamilton’s mathematical model of human evolution and, like Hamilton, assumed that there were genes that gave resistance to fatal diseases at a certain age of life.  Unlike Hamilton, they assumed these genes existed in both males and females.  They also added to Hamilton’s model a matrix representing mating preference.  In this matrix, $M_{i,j}$ represented the probability that a male of age $i$ mates with a female of age $j$ and, if both are fertile, produces an offspring.  They found that if there was a gene that gave resistance to a fatal disease at, say, the 45th year in women, and this gene did the same thing in men, then both men and women would benefit from it, because the older men would continue to be fertile and produce babies with the younger women.  The interesting idea was that selection would favor survival of both males and females as long as one of the two groups could reproduce with the fertile sub-population of the other group.
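
The role of the mating matrix can be illustrated with a toy calculation (my own simplification, not the Tuljapurkar model itself): if the matrix lets older males pair with younger, fertile females, then a survival gene acting at age 50 still buys offspring through the male line, even though it buys none through the female line.

```python
import numpy as np

# Toy illustration (my simplification, not the Tuljapurkar model): a mating
# matrix M[i, j] gives the probability that a male of age i pairs with a
# female of age j. The fertile ranges and the preference are assumptions.

MAX_AGE = 80
FEMALE_FERTILE_END, MALE_FERTILE_END = 45, 75

female_fertile = np.array([15 <= a < FEMALE_FERTILE_END for a in range(MAX_AGE)], float)
male_fertile   = np.array([15 <= a < MALE_FERTILE_END   for a in range(MAX_AGE)], float)

# Males of every adult age pair (uniformly) with females aged 15-34.
M = np.zeros((MAX_AGE, MAX_AGE))
M[15:, 15:35] = 1.0 / 20.0

def expected_offspring_male(age):
    """Expected offspring for a male of a given age under matrix M."""
    return male_fertile[age] * np.dot(M[age], female_fertile)

def expected_offspring_female(age):
    """Expected offspring for a female of a given age under matrix M."""
    return female_fertile[age] * M[:, age].sum()

# A gene that keeps a 50-year-old alive buys offspring through males (who can
# still pair with younger, fertile females) but none through females, so if it
# acts in both sexes it is maintained, and female survival past 45 rides along.
print("expected offspring, 50-year-old male:  ", expected_offspring_male(50))
print("expected offspring, 50-year-old female:", expected_offspring_female(50))
```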

But this idea was not entirely satisfactory, because the same model would predict that it would be even better if females could extend their fertile period and, like males, never experience menopause.  Sure, having one group remain fertile past the menopause age of the other would help both groups live longer, but why did natural selection produce menopause in females and not in males?  That is, what is the origin of female menopause in the first place?

In 2013, Richard Morton and colleagues used a similar mathematical model of genetic evolution, but started from the assumption that prolonged fertility was the ancestral state of both males and females.  That is, they assumed that in the distant past, neither males nor females experienced menopause.  They also assumed the existence of sex-specific, infertility-causing mutations in the genome that could produce menopause.  They then asked what conditions might lead to such mutations taking hold in females, but not in males.

They found that if males and females had no preference for the age of their partner, then infertility-causing mutations would not become sex-specific.  That is, if the age of the partner did not matter to a male or a female, as reflected in the matrix $M_{i,j}$, then both males and females would remain fertile into old age.  However, if males preferred younger females, then something interesting happened: female fertility declined without a loss of longevity, resulting in female menopause, while male menopause never occurred.  The interesting idea was that a male preference for mating with younger females would specifically limit fertility in females, producing menopause.
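
Using the same kind of toy mating matrix, one can see why a male preference for younger partners shelters late-acting female infertility mutations from selection while still punishing male ones.  Again, the numbers below are assumptions for illustration, not the Morton et al. model.

```python
import numpy as np

# Toy calculation with assumed numbers (not the Morton et al. model): when
# males prefer young partners, late-acting female infertility costs nothing,
# so mutations producing it are invisible to selection; the same mutations in
# males are still selected against.

MAX_AGE, ADULT = 80, 15

# Mating matrix M[i, j]: males of every adult age prefer females aged 15-34.
M = np.zeros((MAX_AGE, MAX_AGE))
M[ADULT:, ADULT:35] = 1.0 / 20.0

def cost_of_female_infertility_after(age):
    """Pairings (and hence offspring) a female loses if infertile from `age` on."""
    return M[:, age:].sum()

def cost_of_male_infertility_after(age):
    """Pairings a male loses if infertile from `age` on."""
    return M[age:, :].sum()

for a in (25, 45, 55):
    print(f"infertility from age {a}: cost to a female = "
          f"{cost_of_female_infertility_after(a):5.1f}, "
          f"cost to a male = {cost_of_male_infertility_after(a):5.1f}")
# Past the male-preferred ages, female infertility is cost-free (hence menopause
# can evolve), while male infertility at the same ages still carries a cost.
```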

An amazing prediction of this model is that evolution could have proceeded along a very different path: if females had shown a preference for mating with younger males, then fertility would have declined in older males, resulting in male menopause, while females maintained fertility into old age.


Ramajit Raghav, an Indian man who was reported to have fathered a child at 97 years of age.

References
R. Caspari and S.H. Lee (2004) Older age becomes common late in human evolution. Proceedings of the National Academy of Sciences 101:10895-10900.

W.D. Hamilton (1966) The moulding of senescence by natural selection. Journal of Theoretical Biology 12:12-45.

J.G. Herndon et al. (2012) Menopause occurs late in life in the captive chimpanzee (Pan troglodytes). Age 34:1145-1156.

R.A. Morton, J.R. Stone, R.S. Singh (2013) Mate choice and the origin of menopause. PLoS Computational Biology 9:e1003092.

F.I. Seymour, C. Duffy, and A. Koerner (1935) A case of authenticated fertility in a man aged 94. Journal of the American Medical Association 105:1423-1424.

S.D. Tuljapurkar, C.O. Puleston, and M.D. Gurven (2007) Why men matter: mating patterns drive evolution of human lifespan. PLoS ONE 2:e785.

Saturday, January 25, 2014

Why support curiosity-driven basic research?

A colleague who recently started as an assistant professor, with a new laboratory and bright young graduate students, seemed unusually stressed.  I pried, guessing that in the month of January the well of worry for most biomedical scientists is the looming deadline for submitting grant proposals to the National Institutes of Health.  With exasperation, he said: “The funding line is now less than 10%.  How do I keep my lab open?”

These days this is a common question, even at elite universities.  Each year, tens of thousands of biomedical scientists send a new R01 proposal to the NIH, competing for that small piece of the US budget that has been set aside to fund ‘curiosity-driven’ basic research --- research conducted by independent, often single investigators.  These proposals represent a most remarkable channel through which a small portion of the US budget is allocated: the government invites scientists, whose laboratories often house only a few students, to describe their ideas; the peers of those scientists then evaluate and rank these ideas, and the top 10% or so are funded.

In contrast to this curiosity-driven basic research is the ‘mission-driven’ research that the government funds, focused on themes like the Human Genome Project or the Brain Mapping Project: organized efforts to answer a specific question.  My young friend was facing the existential struggle faced by all small, independent laboratories: to pursue their own questions, rather than the ones that the government dictates.  This struggle has a surprisingly long history.

The day after the bomb

On Tuesday, August 7, 1945, the New York Times printed in giant letters: First atomic bomb dropped on Japan.  Below the headline were reports on speeches made by Truman and Churchill (“New age ushered”), and a report that when the bomb was first tested, it had vaporized a steel tower in the New Mexico desert.  [A small advertisement on page 2 touted a Manhattan bar that had just installed air conditioning, offering cool relief from the hot NY summer.]

 
But deep inside the newspaper, in the editorial section, there was a paragraph that more than any other foretold the struggle that was coming.  Not the struggle for liberty and the war against dictators and despots, but the struggle for funding of basic science in the United States.

In its editorial section, the NY Times used the success of the Manhattan Project to exemplify the merits of organized, mission-driven research “after the manner of industrial laboratories.”  It used the success of the bomb to lambast university professors who held that “fundamental research is based on curiosity”.  It concluded that the path forward was for the government to state the problem, and then solve it by “team work, by planning, by competent direction and not by a mere desire to satisfy curiosity.”


The Manhattan Project set a shining example.  Why not do the same for other important problems?  Why not a Manhattan Project to cure heart disease, or Parkinson’s disease?

The struggle to fund curiosity-driven research

Just two weeks before the bomb exploded, a report commissioned by President Roosevelt but, after his untimely death, delivered to President Truman had expressed a different view, one that championed curiosity-driven research.  In that July 1945 report, titled Science, The Endless Frontier, Vannevar Bush had written: “Basic science is performed without thought of practical ends, and basic research is the pacemaker of technological improvement.”

Vannevar Bush, dean of engineering at MIT from 1932 to 1938, convinced President Roosevelt to form the National Defense Research Committee to coordinate scientific research for national defense, and served as its chairman.  By 1941, the NDRC had become part of the Office of Scientific Research and Development (OSRD), which coordinated the Manhattan Project.  OSRD, under Bush’s directorship, did something revolutionary: scientists were allowed to be ‘chief investigators’ on projects related to the war effort.  Rather than working in a national lab or being employed by the government, they would stay at their universities, assemble their own staff, use their own laboratories, and make periodic reports to committees at OSRD.  James Conant, a member of one of these committees, would later write: “Bush’s invention insured that a great portion of the research on weapons would be carried out by men who were neither civil servants of the federal government nor soldiers.”  This idea fundamentally changed research in the US, decentralizing it, moving it away from industrial and government labs, and placing it at universities.

In 1944, as the war in Europe neared its end, Bush was called into Roosevelt’s office and there, the President asked him: “What’s going to happen to science after the war?” Bush replied: “It’s going to fall flat on its face.” The President replied: “What are we going to do about it?” 

In November 1944, this question was put down formally in a letter from Roosevelt to Bush and OSRD.  The letter asked four questions: 1) How would the US make its scientific achievements of the war years “known to the world” in order to “stimulate new enterprises, provide jobs, … and make possible great strides for the improvement of the national well-being”?  2) How would medical research be encouraged?  3) How could the government aid private and public research, and how should the two be interrelated?  And 4) how could the government discover and develop the talent for scientific research in America’s youth?

Bush believed that advances in fundamental science had paid off spectacularly, resulting in new weapons and new medicines.  “[He] believed that you had to stockpile basic knowledge that could be called upon ultimately for its practical applications, and that without basic knowledge, truly new technologies were unlikely to emerge.”

Bush’s ideas took hold, eventually leading to the establishment of the National Science Foundation and the NIH, and to the current mechanisms that fund basic science in the US.  But the question persisted: should scientists be allowed to define their own questions in basic science, or should the government organize them into teams that go after mission-driven problems?

From pond scum to the human brain

In 1979, in a Scientific American article, Francis Crick (co-discoverer of the structure of DNA) suggested that a fundamental problem in the brain sciences was to gain control over single neurons.  He speculated that if single neurons could be controlled, particularly in the mammalian brain, a critical barrier would be crossed toward understanding both the function of each region of the brain and the mechanisms needed to battle neurological disease.

Crick did not know it at the time, but basic scientists doing curiosity-driven research had already found the key piece of the puzzle in an unlikely place: pond scum.  There, in single-celled microbes, there was evidence that light-sensitive proteins regulate the flow of electric charge across the cell membrane (allowing the microbe to respond to light and move its flagella).  Thirty years later, building on these basic, seemingly useless results, Karl Deisseroth put the puzzle pieces together, showing how to use light to control single cells in the primate brain and producing a new field of neuroscience called optogenetics.

In a 2010 article summarizing the remarkable insights gained from this work, Karl Deisseroth reflected: "I have occasionally heard colleagues suggest that it would be more efficient to focus tens of thousands of scientists on one massive and urgent project at a time --- for example, Alzheimer’s disease --- rather than pursue more diverse explorations.  Yet the more directed and targeted research becomes, the more likely we are to slow overall progress, and the more certain it is that the distant and untraveled realms of nature, where truly disruptive ideas can arise, will be utterly cut off from our common scientific journey."

Sources
Jonathan R. Cole (2010) The Great American University: its rise to preeminence, its indispensable national role, why it must be protected. PublicAffairs.

Karl Deisseroth (2010) Controlling the brain with light. Scientific American, November, pages 49-55.

Thursday, December 19, 2013

Breathing Bangalore

In the suburbs of Bangalore, in one of the numerous buildings that house research and support facilities for nearly every major tech company in the world, scientists are working on understanding how you spread your attention when you navigate a web page.  A few of them had gathered in a conference room, listening as I described some of our work on how the brain controls movements of the eye.  

Gazing at the teak conference table, high-backed leather chairs, and sophisticated teleconferencing equipment, I considered the contrast: just a few streets away from this modern world where I was giving my talk, there were goats munching on a pile of refuse, and a small band of cows roaming happily against traffic.  A little farther away, in the center of the city, scientists and engineers were working on fundamental questions at the Indian Institute of Science, a major university on a beautiful wooded campus that housed, in addition to world-class laboratories, large families of monkeys, bands of wild dogs, and bats the size of crows, all living freely, and from all indications contentedly, alongside humans.

I think the most striking difference from anywhere else I have visited is that people here seem to have an exceptional respect for life --- life of any form.  Like most university campuses, this one has large, impressive trees that dot the landscape.  But here, the human roads do not prevail.  Indeed, in many places the road has a large tree in the middle of it, its trunk marked with a few reflectors, and the cars simply go around it.  At our university guest house, a sprawling hotel-like structure, there are a few places where the hallway turns at a strange angle.  Looking closer, I see that the building is bending around an old tree, and not the other way around.  This co-existence is also on display with the wildlife that lives alongside us.  The faculty housing is in a wooded area, where monkeys also raise their families.  One morning, as we ate breakfast at the guest house with the window open to let in the cool breeze, a family of macaque monkeys came to visit.  The mama monkey took a piece of papaya from a table, went over, and fed her babies.

The weather is mild and pleasant; it is a pleasure to step outside, feel the sun, and smell the trees.  But the university is an oasis.  The peace and quiet of the grounds are in stark contrast to the outside world.  As we step beyond the gates, we leave the "jungle book" and enter the human world, with its crushing traffic of cars, motorized three-wheel rickshaws, and scooters, all communicating in the machine-made language of horns.

India has myriad languages, but the common language, at least here in the south, is English.  The students tell me that they rely on English to talk to each other because each comes from a different part of India, each with its own languages, and English is the only common tongue.

The diversity of languages is complemented by a diversity of faiths.  In the mornings, I hear the Muslim call to prayer before sunrise, and a few hours later I pass the Hindu temple as I walk to the university conference center.  On the steps of the center there is a familiar scene: one of the wild dogs napping in the sun.

Tuesday, December 3, 2013

Ask vs. Axe

A few months back, my administrative assistant was offered a wonderful new job and as a consequence, the department hired a replacement.  The new assistant is a capable, hardworking young lady.  A few days ago I noticed that she tends to use the word /axe/, instead of /ask/.  

I had heard this usage a number of times in Baltimore, particularly among African-Americans.  I wondered: is this a mispronunciation, perhaps something like /nuclear/ vs. /nucular/?  A bit of research taught me that /axe/ has a long history in the English language, and is not a mispronunciation.

The Oxford dictionary notes that /ask/ is the descendant of /ascian/, which in Old English means to demand, to seek from.  The alternative form of /ascian/ is /axian/, or in short form, /axe/.  Oxford notes its use in Chaucer: "I axe, why the fyfte man Was nought housband to the Samaritan?" (Wife's Prologue 1386), and "a man that ... cometh for to axe him of mercy." (The Parson's Tale 1386)  The book The Complete Works of Geoffrey Chaucer includes 5 passages where the word /axing/ is used.  The word /axe/ also appeared in the first complete English translation of the Bible, produced in 1535 by Miles Coverdale, who wrote: "Axe and it shal be giuen you," and "he axed for wrytinge tables."

According to Random House, “In American English, the /axe/ pronunciation was originally dominant in New England. The popularity of this pronunciation faded in the North early in the 19th century as it became more common in the South. Today the pronunciation is perceived in the US as either Southern or African-American. /axe/ is still found frequently in the South, and is a characteristic of some speech communities as far north as New Jersey, Pennsylvania, Illinois and Iowa.”

So /axe/ is a regional pronunciation, somewhat similar to the regional variation between /idea/ and /idear/.

Saturday, September 21, 2013

A life without memories

When he walked into the room, he looked a decade younger than 71; his face handsome, with few wrinkles.  I pulled out a chair and asked him to sit in front of a robotic contraption that earlier that morning we had set up in an examination room at the Clinical Research Center at MIT.  He sat calmly and avoided touching the contraption.  I pulled the robotic arm toward him and asked him to hold its handle.  He grabbed the handle and started moving it around, keeping his gaze on the handle.  I asked him to look up at a monitor, where he saw a little cursor moving around as he moved the robot’s handle.  The computer displayed a target box, and he moved the cursor into the box, at which point the computer animated it, producing an explosion. 

A smile came to his face.  He said: “You know, when I was a kid I liked to go bird hunting.” Exuberantly, he described the birds that he hunted, the guns that he owned, and the woods around his childhood home.  He continued doing the task, reaching while holding the robotic arm and making little explosions.  About five minutes later he said: “You know, when I was a kid I liked to go bird hunting.”  The exuberance was unabated.  He had no idea that a few minutes earlier he had told me that same story.

Memory without awareness

The day before, with my two graduate students Kurt Thoroughman and Maurice Smith, I had packed the robot and computers into the back of my wife’s station wagon and driven up from Baltimore to meet and examine Henry Molaison.  Henry, or as he is known to the scientific world, ‘H.M.’, had suffered from debilitating seizures.  When he was 27, desperate for something that might help, he had agreed to an experimental procedure that surgically removed the hippocampus and amygdala from both sides of his brain.  The surgery was successful, greatly reducing his seizures, but it left him with a staggering deficit: an inability to form certain long-term memories.

Now, in the examination room, the robotic arm that Henry was moving started to produce forces, pushing his hand as he approached the target, making it so that he would mis-reach and not get those explosions.  But he kept practicing, and after a few minutes, a part of his brain that was not damaged learned how to generate the right motor commands so that his arm could compensate for those unusual forces.  Once again he got the explosions, and once again he excitedly told me of his childhood bird hunting days.

After about an hour of playing with the robotic arm, Henry left for lunch and an afternoon nap.  He returned about 4 hours later.  I said hello and asked him whether he remembered meeting me and playing with the robotic arm.  He said no.  I pushed the robot aside, showed him the exam chair, and asked him to sit down.  He sat down, but then something interesting happened: rather than avoiding the machine, as someone who had never done the task would, he grabbed its handle, brought it toward him, looked at the video monitor, and started to move the cursor toward the target.

He had no awareness that he had seen me before, or that he had played with this robotic arm only hours earlier.  Yet, that experience had left two kinds of memories in his brain:  the memory of how to use the tool, and the memory that associated the sight of the tool, and the act of moving it, with a rewarding outcome.

He was not aware of it, but the sight of the strange tool, the robotic arm, was sufficient for him to want to hold it and move it around so that he could chase targets and get explosions.  Would he have voluntarily reached for the robot if, while playing with it, he had not had a pleasurable experience, recalling those childhood memories?  Probably not.  In 1911, Édouard Claparède, a physician in Geneva, described an amnesic woman much like Henry.  Claparède had wanted to test her memory, so he played a small joke on her: when he reached out his hand to shake hers, he had hidden a small tack in his palm.  When the patient shook his hand, she felt the sharp tack.  The next day, when Claparède approached her, she could not recall having seen him before.  However, when he reached out to shake her hand, she pulled away, despite being unable to say why she did not want to shake his hand.

Henry could not remember the episode of having played with the robotic arm, but that act left a memory, associating the robot with a rewarding outcome.  The part of his brain that learned this value association was exhibiting its knowledge by reaching out to the robot and moving it in search of a target to explode. 

In addition to this value-action association, he also had a memory of how to use the contraption.  The forces that he had practiced to overcome had left a different kind of memory: much like picking up a Coke can that you expect to be full but is empty, Henry’s movements on this revisit showed the ‘after-effects’ of the earlier experience.  The robot was not producing any forces, but he moved it as if he expected it to produce those earlier forces.  When the forces were re-introduced, he moved the robot skillfully.
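
These after-effects are what a simple trial-by-trial view of motor adaptation predicts: the brain keeps an estimate of the forces it expects and updates that estimate from movement error, so when the forces are removed the lingering expectation pushes the arm the wrong way.  The sketch below is a generic toy version of this idea (the learning rate and field strength are invented), not the analysis from our study.

```python
# Generic toy model of trial-by-trial force-field adaptation (learning rate
# and field strength are invented; this is not the analysis from our study).

LEARNING_RATE = 0.2

def practice(trials, field_on, belief=0.0, field_strength=1.0):
    """Update the learner's estimate of the force field over a block of trials."""
    for _ in range(trials):
        actual = field_strength if field_on else 0.0
        error = actual - belief           # movement error on this trial
        belief += LEARNING_RATE * error   # adapt the expected force
    return belief

belief = practice(trials=30, field_on=True)   # the morning session with the field on
print("expected force after practice:", round(belief, 2))

# Hours later the robot produces no forces, but the retained expectation pushes
# the arm the other way; the size of that first error is the after-effect.
after_effect = 0.0 - belief
print("after-effect on the first force-free movement:", round(after_effect, 2))
```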

He did not have the ability to form memories of episodes of his life, but these two other intact forms of memory served him well.  For example, as he aged, he developed osteoporosis and required a walker to keep physically active.  With practice, he learned to use the walker skillfully.  Importantly, he would use the walker without being told to do so.  That is, he ‘knew’ that the walker was a useful tool that helped him get around.

Permanent present tense

What is it like to live a life with such a disability?  In a recent book titled “Permanent present tense”, Suzanne Corkin, a scientist who studied and cared for Henry for 46 years, describes his life in loving, exquisite detail.

In a photograph showing Henry with his mother at his 50th birthday, he recognized his mother, but not himself.  While attending his 35th high school reunion, he did not recognize anyone by sight or by name.  He could not recall any specific event in his life, even events from before his operation; for example, he could not recall a single specific Christmas gift that his father had given him.  He remembered some of the facts that he had learned before his operation, and the gist of events he had experienced, but he had no recollection of any specific episode.

Henry rarely spoke of being hungry or thirsty.  He never sought out food for himself; it was simply given to him by his caregivers.  In 1981, Corkin asked him to rate his hunger from 0 (famished) to 100 (absolutely full).  He consistently gave a rating of 50, whether he had just finished eating, or was about to eat.  One evening, Corkin played a small trick on him.  After Henry had finished eating and his tray had been taken away, the kitchen staff brought him another tray, with exactly the same meal.  Henry ate the second dinner, cleaning the plates, except for the salad.  He seemed unable to express a feeling of satiety.

When we were examining him, a caregiver mentioned that Henry rarely verbalized that anything might be wrong.  For example, if he had a toothache, he would rarely mention it.  Only by observing that he was deviating from his normal behavior during the day would the caregiver suspect that something was wrong.  The caregiver would then go through a list of possibilities to try to find out what the problem might be.

Corkin tested Henry’s ability to perceive pain by using a hairdryer to project a spot of heat onto his skin.  The heat was not intense enough to burn, but the idea was to test whether Henry could feel pain.  Corkin’s results showed that not only could Henry not discriminate normally between various levels of pain, he did not report any of the stimuli as painful, and he never withdrew his arm.  It is possible that the inability to perceive pain normally, or to know hunger or thirst, was related to his operation; perhaps it was associated with the removal of the amygdala, as Corkin suggests.

Henry lived a life without keeping memories of the events, and without pain.  His father had passed away, and his mother, who had taken care of him for much of his life, was in a nursing home.  He kept two notes in his wallet that he had written to himself: “Dad’s gone”, “Mom’s in nursing home—is in good health.”

Henry died at the age of 82. 

References
Corkin S. (2013) Permanent present tense: the unforgettable life of the amnesic patient H.M., Basic Books.
Shadmehr R, Brandt J, Corkin S (1998) Time dependent motor memory processes in amnesic subjects. Journal of Neurophysiology 80:1590-1597.

Sunday, August 25, 2013

Young neurons in an old brain

At the age of 88, my seemingly healthy and vigorous father suddenly died.  He had commanded an army, lived through a revolution, met kings and presidents, and through it all raised a family.  And then one night, all those memories, all those experiences, vanished.  Coming home from the funeral I realized that a library, stacked with history books, had burned to the ground. 

When we experience something, it can become a memory.  But what is this memory?  What is its neural substrate?  Can someday memories that we store in our brain be read out and stored in a machine?  Is there any hope that the library can be saved from the fire that consumes us as we die?

Standard model of memory

Our model of memory today is one of synaptic plasticity.  When we experience something, the neurons engaged by that experience produce electrical activity, and that activity can alter the strength of the synapses that connect them to other neurons.  The electrical activity can also result in the growth of new synapses.  Together, this altered strength of connectivity in an existing network of neurons is thought to be the basis of memory.  So in principle, if one could measure the strength of each synapse and model the functional properties of each neuron, one would have a representation that approximates the state of an individual’s brain.  A lifetime of memories and experiences would be contained within this representation.
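
A minimal way to see what "altered strength of connectivity" means is a Hebbian rule: synapses between co-active neurons are strengthened, and the resulting weight matrix can later complete a degraded version of the stored pattern.  The few lines below are a generic textbook-style sketch, not a model of any real circuit; the sizes and learning rate are arbitrary.

```python
import numpy as np

# Generic Hebbian sketch of 'memory as altered connection strengths'
# (textbook-style toy, not a model of a real circuit; numbers are arbitrary).

rng = np.random.default_rng(0)
n_neurons = 50
W = np.zeros((n_neurons, n_neurons))             # synaptic strengths, initially flat
experience = rng.choice([0.0, 1.0], n_neurons)   # activity pattern evoked by an experience

# Each exposure strengthens synapses between co-active neurons.
learning_rate = 0.1
for _ in range(10):
    W += learning_rate * np.outer(experience, experience)
np.fill_diagonal(W, 0.0)

# The 'memory' now lives in W: a degraded cue recalls the full pattern.
cue = experience * (rng.random(n_neurons) > 0.3)   # partial version of the experience
recall = (W @ cue > 0).astype(float)
print("fraction of the original pattern recovered:", np.mean(recall == experience))
```

Note that the sketch treats the neurons themselves as fixed nodes; only the weights ever change.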

The problem, unfortunately, is that this concept of memory relies on the assumption that neurons themselves are fixed nodes, whereas the connections (that is, the synapses) are the changing components through which memories are stored.  This assumption, as it turns out, is false.  New neurons are born every day, and the human brain, even in old age, adds and removes nodes in the network.

Finding a neuron’s birthday

Between 1955 and 1963, there were numerous above-ground tests of nuclear weapons.  With every explosion, the amount of the isotope 14C in the atmosphere rose.  In 1963, a treaty banned such tests, and since then the atmospheric level of 14C has declined, in part because of uptake by plants.  This uptake takes place as 14C in the atmosphere reacts with oxygen to form CO2, which is then taken up by plants during photosynthesis.

When we eat plants, or eat animals that feed on plants, the 14C is transferred to our bodies.  Once there, 14C can become part of the DNA of newborn cells: when a cell divides and copies its chromosomes, the copying process incorporates the 14C into the newly made genome.  As a result, by measuring the concentration of 14C in a cell’s DNA and comparing it to the historical record of atmospheric 14C levels, one can tell when that cell was born.
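
The dating logic itself amounts to a lookup: measure the 14C concentration in the genomic DNA, then find the year whose atmospheric level matches it.  The sketch below uses a small table of invented placeholder values in place of the real atmospheric bomb-curve, just to show the idea; in practice the match is sought within the person's own lifetime.

```python
# Sketch of the bomb-pulse dating logic. The atmospheric values below are
# invented placeholders standing in for the real measured 14C record; only
# the lookup idea matters here.

ATMOSPHERIC_14C = {   # year -> relative atmospheric 14C level (made-up numbers)
    1950: 1.00, 1955: 1.05, 1960: 1.25, 1963: 1.90, 1970: 1.55,
    1980: 1.30, 1990: 1.18, 2000: 1.10, 2010: 1.05,
}

def estimated_birth_year(dna_14c_level, earliest_year):
    """Year whose atmospheric 14C level best matches the DNA measurement,
    searched only within the person's own lifetime."""
    candidates = [y for y in ATMOSPHERIC_14C if y >= earliest_year]
    return min(candidates, key=lambda y: abs(ATMOSPHERIC_14C[y] - dna_14c_level))

# A neuron whose genomic DNA carries a 14C level of 1.2, in a person born in
# 1950, was most likely born when the atmosphere was last at that level:
print(estimated_birth_year(1.20, earliest_year=1950))
```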

Kirsty Spalding, Jonas Frisén, and their colleagues used this idea to find the birthday of neurons in the human brain.  In their study, they examined the brains of people who had died between 2000 and 2012.  These people had had their brains preserved during autopsy, and so their brains could be studied.

They focused their efforts on the neurons in the hippocampus, a region of the brain that is critical for the formation of new memories.  The hippocampus is the place where we form autobiographical memories, i.e., the kind of memories that describe places we have been, people we have met, and events that have taken place in our lives.  Spalding and colleagues asked: how old are the neurons in the hippocampus of a person who was 30 years old when she died?  You might guess that the neurons are probably close to 30 years old, but that assumes all neurons are born around the time of birth.  Strikingly, Spalding and colleagues found that the neurons were much younger than the person.

Neurons are much younger than the age of the person

The authors found that for a 20-year-old, the average age of neurons in the hippocampus was 18.  For a 40-year-old, the average age was 29.  For a 60-year-old, the average age was 37.  Remarkably, for an 80-year-old, the average age of hippocampal neurons was 40!

So the average neuron in the hippocampus of an 80-year-old has been around only long enough to experience the last 40 years.  It cannot ‘remember’ anything from the first half of that life, because it was not there to experience it.

There is, therefore, substantial neurogenesis in the hippocampus throughout life.  In fact, the rate of neurogenesis showed only a modest decline with aging.  The authors estimated that each day, about 0.004% of the neurons in the dentate gyrus of the hippocampus die and are replaced with new ones.
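
A back-of-envelope check, under the crude assumption that every neuron is equally likely to be replaced (the study itself models renewing and non-renewing fractions), shows that a turnover rate of this size is enough to produce average neuron ages in the reported range.

```python
import math

# Back-of-envelope check (crude assumption: every neuron is equally likely to
# be replaced; the study itself models renewing and non-renewing fractions).

DAILY_TURNOVER = 0.004 / 100     # 0.004% of neurons replaced per day
r = DAILY_TURNOVER * 365         # roughly 1.5% per year

def mean_neuron_age(person_age):
    """Mean cell age under constant exponential turnover at rate r per year."""
    return (1.0 - math.exp(-r * person_age)) / r

for age in (20, 40, 60, 80):
    print(f"person aged {age}: mean neuron age of about {mean_neuron_age(age):.0f} years")
# This gives roughly 17, 30, 40, and 47 years, the same ballpark as the
# reported averages, so a fraction of a percent per day is enough to make the
# average hippocampal neuron far younger than its owner.
```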

Now, it is possible that neurogenesis in the hippocampus is especially high, and other parts of the cerebral cortex may not have such a high turnover.  But the relative youth of the neurons in the hippocampus raises a fundamental question: what is memory, if neurons are eliminated and replaced on a daily basis?

Richard Feynman, the celebrated physicist, described the basic problem during a 1955 lecture to the National Academy of Sciences:

“The radioactive phosphorus content of the cerebrum of the rat decreases to one half in a period of two weeks.  Now what does that mean?  It means that phosphorus that is in the brain of the rat, and also in mine, and yours, is not the same phosphorus as it was two weeks ago.  It means the atoms that are in the brain are being replaced: the ones that were there before have gone away.  So what is this mind of ours: what are these atoms with consciousness?  Last week’s potatoes! They now can remember what was going on in my mind a year ago, a mind which has long ago been replaced.  To note that the thing I call my individuality is only a pattern or a dance… The atoms come into my brain, dance a dance, and then go out--- there are always new atoms, but always doing the same dance, remembering what the dance was yesterday.”

The problem in neuroscience is to understand how to read this dance.  If we could, then in principle it should be possible to record and preserve our experiences, so that when we die, the library will remain standing.

References
Spalding KL et al. (2013) Dynamics of hippocampal neurogenesis in adult humans. Cell 153:1219-1227.
Feynman RP (1988) What do you care what other people think? Further adventures of a curious character.  Bantam Books, page 244.