Friday, June 27, 2014

Effort of movements in Parkinson’s disease

The very terms that we use to describe the motor symptoms of Parkinson’s disease (PD) imply a subjective scaling of time and space: bradykinesia (slowness of movement), tachyphemia (cluttering of speech), and micrographia (smallness of handwriting).  Although these symptoms are stable features of the disease, a remarkable property of PD is that under some conditions the symptoms can spontaneously improve.

In 1965, R.S. Schwab and I. Zieper, two neurologists at the Massachusetts General Hospital, described the case of a 62-year-old male PD patient who exhibited severe tremor and severe rigidity and was totally dependent on his wife.  His wife would start her day by dressing him, laying out his breakfast, and making his lunch; she would then go to work, come back in the afternoon to make his dinner, and finally get him undressed and ready for bed.  One evening his wife had severe abdominal pain and had to be taken to the hospital for emergency surgery.  The next day she woke worried about her husband, and was surprised when the nurse told her that he had come to visit her.  He had dressed himself, made his own breakfast, and then taken a taxi to the hospital.  At the hospital his neurologist noticed him and upon examination found that he was able to walk 50% faster than in past examinations.  “All his motor tests were improved in spite of the presence of the same amount of rigidity and tremor that had been present before.”

A second case was another elderly male with advanced-stage PD and severe rigidity, confined to a wheelchair, unable to walk alone, living on the first floor of his home in Providence, RI.  A hurricane approached the city and his wife left to get some supplies from the drugstore.  “As a result of the storm the harbor overflowed 10 feet into the street.  The patient, sitting in his wheelchair, suddenly saw the door blown in and a wall of water entered the house.  Exactly how he did it is not clear, but he managed to get out of his wheelchair and climbed the steps to safety on the second floor where he was found several hours later by his wife, the waters having subsided. She found him seated in a chair as helpless as he was before.”

While these examples are anecdotal, there are other, more controlled instances in which PD patients show marked improvements in their movements.  One example is in the movements that are made during sleep.  Although healthy people are essentially paralyzed during REM sleep, people with PD sometimes experience REM sleep behavior disorder (RBD), in which they act out their dreams.  Valerie Cochen De Cock and her colleagues studied movements made during sleep by PD patients and reported that the movements were “surprisingly fast, ample, coordinated and symmetrical, without obvious signs of parkinsonism”.  They found one patient singing a song with a “strong and sonorous voice, a wide smile on his face” (he used to sing before his PD), another “declaiming political speeches with a loud voice” (he used to give speeches at the town council), another “shouting and getting hold of a heavy oak table and throwing it across the room”, and another “fighting with an invisible foil, with great agility” (apparently to save his lady-love from an attacking knight).

The mechanisms by which the brain of a Parkinsonian patient produces these feats remain a complete mystery.  But these observations do hint that the ability to make fairly normal movements lies latent in the PD brain.  Yet the movements are apparently unavailable for expression except under extraordinary circumstances.  Why?

Neuroeconomics of movements

Pietro Mazzoni, Anna Hristova, and John Krakauer studied this question by asking PD patients and healthy controls to reach to a target with their dominant (and more affected) arm.  Visual feedback of the hand was removed at reach onset, and at the end of each reach the volunteers were given feedback regarding the speed and accuracy of their movement.  Crucially, the trial had to be repeated if the speed was outside the requested range.  The authors found that for a given reach velocity, the endpoint accuracy of the movements made by the PD patients was similar to that of controls.  This again illustrated the latent abilities of the patients.  However, the patients required many more attempts in order to produce a reach that was as fast as the requested speed.  That is, the patients were capable of producing movements of normal speed and accuracy, but it took them more trials to become motivated to make the fast movements.  The authors proposed that under normal conditions, the patients lack the “motor motivation” that healthy people possess in generating their movements.

I have suggested that one way to view this result is to consider the possibility that in the brain, each movement reflects a balance between two factors: the reward that one expects to acquire at the end of the movement, and the effort (or motor cost) that will be spent in generating that movement (Shadmehr et al., Journal of Neuroscience, 2010).  The reward that we expect to acquire represents the subjective value of the movement.  For example, if you see a dear friend, the subjective value of the steps that you are about to take toward your friend is higher than if you were walking to greet someone you are not so fond of.  As a result, you will walk faster toward the dear friend.  (I have often thought that to examine how my brain currently values the people in my life, I should measure the speed at which I walk toward them.)

Indeed, humans and other animals tend to move faster toward things that they value more.  This was first illustrated by Okihide Hikosaka and his colleagues in the saccadic eye movements of monkeys.  In these experiments, thirsty monkeys were trained to move their eyes to a location in exchange for a reward (juice).  In some blocks of trials the juice volume was a little larger, and in some blocks it was a little smaller.  The peak velocity of the saccades was higher in blocks in which more juice was at stake.  That is, the monkeys’ eye movements were faster when the subjective value of the movement was higher.

In the real world we do not make saccadic eye movements in exchange for juice.  Rather, we move our eyes to place the part of the visual scene that we are interested in examining on our fovea.  Do we make faster saccades to things that we value more?  In humans, this idea was first illustrated by my former student Minnan Xu-Wilson.  She asked people to make a saccadic eye movement to spots of light, but after the saccade was completed she ‘rewarded’ them by showing them a picture of a face, an object, or simply a noisy picture.  She found that saccades made in anticipation of viewing a face were faster.

These experiments illustrate that one of the factors that influences the speed with which we move, that is, the vigor of our movements, is the subjective value of the reward that we expect to attain at the end of the movement.  The higher this expected value, the faster the movement.

The second factor is the subjective cost of the effort that is required to make the movement.  If the subjective value of the reward associated with two potential movements is the same, people pick the movement that requires less effort.  

Now suppose that we have to move a given distance.  How does the brain decide on the speed of the movement?  The faster we move to cover that distance, the greater the forces we have to produce.  If effort is related to force (perhaps because of the metabolic cost of generating force), then the subjective cost of effort will be higher for faster movements than for slower movements that cover the same distance.  So if we move more slowly, we will produce smaller forces with our muscles and incur a lower subjective cost of effort.

However, the slow movement will bring us to our goal later.  Time discounts reward.  That is, it is better to arrive at a valuable state sooner rather than later.  So the subjective value of the movement drops if we arrive later at the destination, making it better to move fast so we get to our goal sooner. 

In summary, the subjective cost of effort makes it better to move slowly so that we produce smaller forces, but the passage of time makes reward less valuable.  These two factors compete, and the movement that the brain produces appears to be the best possible compromise between them.  That is, the speed at which we move is the one that incurs the smallest possible effort (encouraging us to move slowly) while maximizing the subjective value of the reward we hope to attain (encouraging us to move fast).
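To make this tradeoff concrete, here is a minimal sketch in Python.  It is a toy model of my own for this post, not the exact cost function from our 2010 paper: the reward R is discounted hyperbolically with movement duration T, while the effort cost is assumed to grow as the duration shrinks, giving a net utility U(T) = R/(1 + g*T) - c/T.  The duration that maximizes this utility is the predicted movement time.

```python
# A toy vigor model (illustrative only, with made-up constants): utility of a
# movement of duration T is hyperbolically discounted reward minus an effort
# cost that grows as the movement gets faster (i.e., as T shrinks).
import numpy as np

def net_utility(T, reward=1.0, gamma=1.0, effort=0.05):
    # reward / (1 + gamma*T): subjective value of the goal, discounted by time
    # effort / T: subjective effort, larger for faster (shorter) movements
    return reward / (1.0 + gamma * T) - effort / T

T = np.linspace(0.05, 3.0, 2000)   # candidate movement durations, in seconds

for r in (1.0, 0.5):               # 0.5 mimics a reduced subjective value of reward
    T_opt = T[np.argmax(net_utility(T, reward=r))]
    print(f"reward = {r:.1f}: best duration ~ {T_opt:.2f} s")
```

With these made-up constants, halving the reward lengthens the utility-maximizing duration from about 0.3 s to about 0.5 s: the same arithmetic that picks a movement speed also predicts that devaluing reward slows the movement, a point that will matter below when we turn to dopamine.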

Dopamine disorders alter the neuroeconomics of movements

In Parkinson’s disease, some of the neurons in the substantia nigra, a nucleus in the basal ganglia, gradually degenerate and die.  These neurons provide dopamine to much of the brain, and in particular to the striatum, another region of the basal ganglia.  Dopamine appears to play a critical role in regulating the two factors that control movements: the subjective value of reward and the subjective cost of effort.

Over the course of the last two decades, John Salamone and his colleagues have been investigating the effects that loss of dopamine has on the behavior of rats.  When rats are offered a choice between pressing a lever a few times to obtain preferred food, versus freely eating a less preferred food, they choose to spend the effort and press the lever to get the preferred food, but only if the lever pressing requires modest effort.  But when a drug that acts as a dopamine antagonist is injected into their basal ganglia, the rats become less willing to press the lever and forgo the better food, settling for the less effortful choice.  On the other hand, if a drug that enhances the action of dopamine is injected, the animal becomes more willing to press the lever, even if it must press it many times in order to earn the better food.

Therefore, it appears that when dopamine’s actions are disrupted, the balance between the subjective value of reward and the cost of effort shifts.  Loss of dopamine shifts the balance by increasing the cost of effort and decreasing the value of reward, whereas an increase in dopamine shifts the balance by decreasing the cost of effort and increasing the subjective value of reward.

In this framework, loss of dopamine in PD shifts the neuroeconomics of movements toward those with smaller effort costs, which include movements that are slow.  This speculation does not explain why certain movements of the patients improve during REM sleep, but it does provide a framework for understanding the paradoxically fast and able movements that they exhibit under extraordinary circumstances: perhaps under these conditions, a greater proportion of the available dopamine is engaged, increasing the expected reward for the movement and countering the effort costs.

Sunday, April 20, 2014

Breaking habits and erasing memories

The summer day on Okinawa Island of Japan was so warm that those of us who were there to teach had given up on going to the beach in the afternoon and instead had decided to try some early morning tennis.  On this particular morning, a distinguished scientist and friend had joined our group, and there he was, holding the ball and warming up for his serve.  As he tossed the ball up, he bent his body sideways and then back, twisted upwards, and finally brought his racquet into contact with the ball, delivering a pretty good serve.

I stood there marveling at how he had learned to serve this way.  He said: “Well, I learned on my own, and despite lots of coaches who have tried to break it down and rebuild it, I haven’t been able to change it much.”




Memories, like that of how to hit a tennis serve, can become so persistent that the brain seems unable to change them.  On the surface, this may not appear very important, as the cost is merely looking a little silly and not being able to do something as efficiently as possible.  But what if you are traveling with family on a peaceful day and stop at a gas station, and suddenly the smell of petroleum brings back memories of combat, paralyzing you with fear?  What if you are watching a movie in which the hero is climbing the face of a rock, and when she reaches the peak, she stands and looks down, and you find your knees shaking?  Does the brain have a mechanism in place to rebuild or even erase unwanted habits and fear-inducing memories?

Until about 15 years ago, it was generally assumed that when the brain learns something new, the newly acquired memory is initially in a labile state and can be readily changed, but after a short period of time (hours), it becomes ‘consolidated’, meaning that it becomes resistant to change.  For example, when rats were given a single pairing of a tone with a foot-shock, the next time they heard that tone they became scared and stopped moving.  If they were given a drug that disrupted the molecular pathways involved in consolidation (a protein synthesis inhibitor), the next day when they heard the tone they were not scared of it.  However, the drug had to be given soon after the animal’s first experience of the tone-shock pairing.  If it was given even a few hours after the first experience, it did not have much of an effect; the animal still feared the tone.  And so it seemed that once an emotional or fear-inducing memory was acquired, there was little that could be done to change it.

The basis for this idea was a century of work that had described how memories form.  Neurons communicate with each other via their synapses, tiny junctions where one neuron sends and receives messages from another neuron.  Eric Kandel, a Columbia University neuroscientist, had shown that short-term memories, things that last for a few minutes, are due to transient changes that make the synapse more efficient, changes that are sustained only for a short period of time.  To make memories last, the changes at the synapse had to be sustained indefinitely, and this required the manufacture of new proteins.  If the initial experience was strong enough, with the passage of time these new proteins were made by the neuron and the memory was maintained, apparently becoming permanent.

But in the year 2000, Karim Nader, an Egyptian-born neuroscientist who was raised in Canada, made a discovery that completely overturned this idea.  He was working in Joseph LeDoux’s laboratory at New York University, where he took rats and gave them a single pairing of a tone with a foot-shock, and indeed, the next day he found that when they heard the tone, the rats froze in their tracks (rats express fear by ‘freezing’).  However, right after they heard this tone, he injected into their amygdala (a region of the brain involved in storing fearful memories) a drug that inhibits protein synthesis.  Amazingly, he found that a day later, when they heard the tone, their fear was reduced by half (as measured by the time spent ‘freezing’).  Interestingly, if the drug was given without the reactivation of the memory (that is, if on day 2 the tone was not played), it had no effect.  And if the animal heard the tone but was given the drug 6 hours later, it still feared the tone the next day.  So the key idea was that the fear-inducing memory could be weakened if the drug was given right after the memory was reactivated, but it could not be weakened if the drug was given alone, or if the memory was reactivated without the drug.

Unfortunately, protein synthesis inhibitors cannot safely be given to humans, and so until recently, it was unclear whether this new understanding could be applied to fear-inducing memories in people.  In 2009, Merel Kindt and colleagues in Amsterdam asked undergraduate students to look at a picture of a spider; a few seconds later a loud sound was played, followed by a mild shock to the hand.  When the students heard the loud sound, they had a startle reflex, producing an eye blink.  The students were also shown a picture of another spider, followed by another loud sound, but no shock to the hand.  So the students learned to fear the picture of the first spider, but not the second, with the amount of fear measured by how they reacted to the loud sound.  Indeed, the students feared the first spider more than the second.  The students returned on day 2, and Kindt showed them the picture of the first spider, but did not shock them.  Right after this, the experimenters gave them a drug called propranolol, which is often used to prevent stage fright and works by inhibiting the actions of norepinephrine.  When the students returned on the next day, they did not show fear of the spider.  Importantly, if the drug was given without showing the picture of the spider, the fear-inducing memory remained.

So it seems possible that in humans, certain fear-inducing memories can be weakened by a combination of reactivation of that memory and consumption of certain drugs like propranolol.  Later work from the Kindt group showed that the key step is that during recall of the memory, there must be a prediction error.  That is, during recall, the brain appears to predict that a bad thing is going to happen (a shock), and if it does not happen, and the drug is present, then the memory is weakened.  Both the prediction error and the presence of the drug seem to be required, as one without the other is much less effective. 

These approaches are now being studied for treatment of PTSD.  In a recent study, propranolol was given to people who were involved in a serious car accident.  Those people were less likely to develop PTSD symptoms in the following 3 months compared to people who were given placebo. 

Notice, however, that all the successes have been on weakening newly formed memories.  What about the old fear-inducing memories?  The news there is less clear.  Older memories may be less likely to be affected when they are reactivated.   Which brings me to one of my favorite quotes from Margaret Thatcher, who was quoting her father when she said:

Watch your thoughts for they become words.
Watch your words for they become actions.
Watch your actions for they become habits.
Watch your habits for they become your character.
And watch your character for it becomes your destiny.

References
Kindt M, Soeter M, Vervliet B (2009) Beyond extinction: erasing human fear responses and preventing the return of fear. Nature Neurosci 12:256-258.
Nader K, Schafe GE, LeDoux JE (2000) Fear memories require protein synthesis in the amygdala for reconsolidation after retrieval. Nature 406:722-726.
Sevenster D, Beckers T, Kindt M (2013) Prediction error governs pharmacologically induced amnesia for learned fear. Science 339:830-833.


Sunday, March 16, 2014

The puzzle of menopause

Human females appear to be unique in the animal kingdom in that they live far beyond the end of their fertile period.  Typically, menopause occurs around age 50, and women can expect to live well into their 80s.  In men, by contrast, fertility continues to near the end of life: although there are clear declines affecting the endocrine system, testicular function, and the structure of sperm chromosomes, there appears to be no andropause; men retain a significant probability of fertility, but women do not.  (In 1935, three physicians reported what may be the oldest American father on record, a 94-year-old North Carolina man who married a 27-year-old widow and fathered a child.)

In contrast to humans, in chimpanzees fertility continues in both females and males until near the end of life.  That is, whereas in women menopause is a mid-life event, in chimpanzee females it is a late-life event.  Why?

A genetic wall of death beyond the fertility years
In 1966, W.D. (Bill) Hamilton, a just-minted PhD student in biology, who would later be called "nature's oracle" for his mathematical reasoning, and whose work would lay the foundation for Richard Dawkins' "selfish gene", used a mathematical model of genetics to demonstrate that, from an evolutionary standpoint, genes that protect against disease and extend the lifespan beyond the age of fertility tend to be eliminated by natural selection, and so animals should not live much longer than the end of their fertility.

Bill's argument went as follows: imagine four genes that are expressed in females and give immunity against some lethal disease, but each is expressed only in one particular part of life.  The first gene is expressed in the 1st year of life, the second in the 15th year, the third in the 30th year, and the fourth in the 45th.  Now imagine that fertility ends before age 45.  If so, the fourth gene confers much less advantage than the first three, as the sketch below illustrates.  This model explained the fertility-age relationship in men, but could not explain why women lose their fertility at around the midpoint of their life.
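A back-of-the-envelope version of this argument fits in a few lines of Python.  This is a sketch under assumptions of my own (uniform fertility from age 15 through 44, no mortality schedule), not Hamilton's actual model: the selective value of a survival gene expressed at age a is taken to be proportional to the reproduction still expected beyond that age.

```python
# A sketch of Hamilton's argument with made-up numbers: a gene that keeps you
# alive at age a is only as valuable as the reproduction that lies beyond a.
import numpy as np

ages = np.arange(0, 80)
# assume uniform fertility between ages 15 and 44, zero elsewhere
fertility = np.where((ages >= 15) & (ages < 45), 1.0, 0.0)

def selection_weight(a):
    """Expected future reproduction for an individual surviving past age a."""
    return fertility[ages > a].sum()

for a in (1, 15, 30, 45):
    print(f"survival gene expressed at age {a:2d}: weight = {selection_weight(a):.0f}")
```

The weights fall from 30 at age 1 to 0 at age 45: the fourth gene is invisible to selection, so mutations that disable it can accumulate unchecked, and with them the "wall of death" beyond the fertile years.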

Evolutionary biologists have been puzzled by the fact that human females have escaped this “wall of death” that, at least theoretically, looms after menopause and appears to be present in many other animals.  Numerous theories have been offered.  Perhaps in the past human longevity was too short for females to experience menopause (defined as surviving for at least one year in good health beyond the last menstrual cycle), and so menopause is a byproduct of an increase in longevity unique to humans.  Perhaps by entering menopause, older mothers increased the survival probability of their children and grandchildren (the grandmother effect).  Perhaps reproductive aging was more severe than somatic aging: unlike other bodily functions, which can proceed at less than full accuracy, reproduction in females could not, and therefore it stopped once accuracy fell below some threshold.

The jury is still out on whether any of these theories are supported by evolutionary data.  However, the most interesting new hypothesis proposes that women experience menopause at mid-life because of behavior of men.

Male sexual preference may lead to female menopause
In 2007, Shripad Tuljapurkar and colleagues revisited Hamilton’s mathematical model of human evolution and, like Hamilton, assumed that there were genes that conferred resistance to fatal diseases at certain ages of life.  Unlike Hamilton, they assumed these genes existed in both males and females.  They added to Hamilton’s model a matrix representing mating preference.  In this matrix, $M_{i,j}$ represented the probability that a male of age $i$ mates with a female of age $j$ and, if both are fertile, produces an offspring.  They found that if there was a gene that conferred resistance to a fatal disease at, say, the 45th year in women, and this gene did the same thing in men, then both men and women would benefit from it, because the older men would continue to be fertile and produce babies with the younger women.  The interesting idea was that selection would favor survival of both males and females as long as one of the two groups could reproduce with the fertile sub-population of the other group.

But this idea was not entirely satisfactory because the same model would predict that it was better if females could extend their fertility period and like males, never experience menopause.  Sure, having one group live longer than the menopause age of another group would make both groups live longer, but why did natural selection produce menopause in females, but not males?  That is, what is the origin of female menopause in the first place?

In 2013, Richard Morton and colleagues used a similar mathematical model of genetic evolution, but started with the assumption that prolonged fertility was the ancestral state of both males and females.  That is, they assumed that at some distant past, neither males nor females experienced menopause.  They also assumed the existence of sex-specific, infertility-causing mutations in the genome that would produce menopause.  They then asked what conditions might lead to such mutations being expressed early in females, but not in males.

They found that if males and females had no preference for the age of their partner, then infertility-causing mutations would not become sex-specific.  That is, if the age of the partner did not matter to a male or a female, as reflected in the matrix $M_{i,j}$, then both males and females would remain fertile into old age.  However, if males preferred younger females, then something interesting happened: female fertility declined without a loss of longevity, resulting in female menopause, while male menopause never occurred.  The interesting idea was that a male preference for mating with younger females would specifically limit fertility in females, producing menopause.
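A crude sketch of this asymmetry, far simpler than the actual Morton model, can be built from the mating matrix alone.  Assume (purely for illustration) ten shared age classes and a preference in which males mate only with females their own age or younger; the row and column sums of $M_{i,j}$ then serve as a rough proxy for how strongly selection acts to keep each sex fertile at each age.

```python
# Toy illustration: with a "males prefer younger females" mating matrix,
# old-age fertility still pays off for males but becomes nearly neutral for
# females, so female-specific late-life infertility mutations can accumulate.
import numpy as np

ages = np.arange(20, 70, 5)        # shared age classes for both sexes

# M[i, j]: relative rate at which a male of age ages[i] mates with a female
# of age ages[j]; here males mate only with same-age or younger females.
# (Flip the inequality to model females preferring younger males instead.)
M = (ages[None, :] <= ages[:, None]).astype(float)
M /= M.sum()

male_pressure = M.sum(axis=1)      # mating opportunities for males of each age
female_pressure = M.sum(axis=0)    # mating opportunities for females of each age

for a, mp, fp in zip(ages, male_pressure, female_pressure):
    print(f"age {a}: male {mp:.3f}   female {fp:.3f}")
```

In this toy matrix a male’s mating opportunities grow with age while a female’s shrink toward zero, so a mutation that switches off fertility late in life costs males dearly but costs older females almost nothing.  Flipping the inequality reverses the two sexes, which is exactly the alternative evolutionary path described next.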

An amazing prediction of this model is that evolution could have proceeded in a very different path: if females had shown a preference for mating with younger males, then fertility would have declined in the older males, resulting in male menopause, while allowing females to maintain fertility into old age.


Ramajit Raghav, an Indian man who was reported to have fathered a child at 97 years of age.

References
R. Caspari and S.H. Lee (2004) Older age becomes common late in human evolution. Proceedings of the National Academy of Sciences 101:10895-10900.

W.D. Hamilton (1966) The moulding of senescence by natural selection. Journal of Theoretical Biology 12:12-45.

J.G. Herndon et al. (2012) Menopause occurs late in life in the captive chimpanzee (Pan troglodytes). Age 34:1145-1156.

R.A. Morton, J.R. Stone, R.S. Singh (2013) Mate choice and the origin of menopause. PLoS Computational Biology 9:e1003092.

F.I. Seymour, C. Duffy, and A. Koerner (1935) A case of authenticated fertility in a man aged 94. Journal of the American Medical Association 105:1423-1424.

S.D. Tuljapurkar, C.O. Puleston, and M.D. Gurven (2007) Why men matter: mating patterns drive evolution of human lifespan. PLoS ONE 2:e785.

Saturday, January 25, 2014

Why support curiosity driven basic research?

A colleague who recently started as an assistant professor, with a new laboratory and bright young graduate students, seemed unusually stressed.  I pried, guessing that in the month of January the well of worry for most biomedical scientists is the looming deadline for submission of grant proposals to the National Institutes of Health.  With exasperation, he said: “The funding line is now less than 10%.  How do I keep my lab open?”

These days this is a common question, even in elite universities.  Each year tens of thousands of biomedical scientists send new R01 proposals to the NIH, competing for that small piece of the US budget that has been set aside to fund ‘curiosity-driven’ basic research --- research conducted by independent, often single investigators.  These proposals represent a most remarkable channel through which a small portion of the US budget is allocated: the government allows scientists, whose laboratories often house only a few students, to describe their ideas, has the peers of those scientists evaluate and rank them, and funds the top 10% or so.

In contrast to this curiosity-driven basic research is the ‘mission-driven’ research that the government funds, focusing on themes like the Human Genome Project or the Brain Mapping Project, organized efforts to answer a specific question.  My young friend was facing the existential struggle of all small, independent laboratories: to pursue their own questions, rather than the ones that the government dictates.  This struggle has a surprisingly long history.

The day after the bomb

On Tuesday, August 7, 1945, the New York Times printed in giant letters: First atomic bomb dropped on Japan.  Below the headline were reports on speeches made by Truman and Churchill: “New age ushered”, and the report that when the bomb was first tested, it had vaporized a steel tower in the New Mexico desert.  [A small advertisement on page 2 touted a Manhattan bar that had just installed air conditioning, providing a cool relief from the hot NY summer.]

 
But deep inside the newspaper, in the editorial section, there was a paragraph that more than any other foretold the struggle that was coming.  Not the struggle for liberty and the war against dictators and despots, but the struggle for funding of basic science in the United States.

In its editorial section, the NY Times used the success of the Manhattan Project to exemplify the merits of organized, mission-driven research “after the manner of industrial laboratories.”  It used the success of the bomb to lambast university professors who held that “fundamental research is based on curiosity”.  It concluded that the path forward was for the government to state the problem, and then solve it by “team work, by planning, by competent direction and not by a mere desire to satisfy curiosity.”


The Manhattan project set a shining example.  Why not do the same for other important problems?  Why not a Manhattan project to cure heart disease, or Parkinson’s disease?

The struggle to fund curiosity driven research

Just two weeks before the bomb exploded, a report commissioned by President Roosevelt, but, after his untimely death, delivered to President Truman, had expressed a different view, one that championed curiosity-driven research.  In that July 1945 report, titled Science, The Endless Frontier, Vannevar Bush had written: “Basic science is performed without thought of practical ends, and basic research is the pacemaker of technological improvement.”

Vannevar Bush, dean of engineering at MIT from 1932 to 1938, convinced President Roosevelt to form the National Defense Research Committee to coordinate scientific research for national defense, and he served as its chairman.  By 1941, the NDRC had become part of the Office of Scientific Research and Development, which coordinated the Manhattan Project.  OSRD, under Bush’s directorship, did something revolutionary: scientists were allowed to be ‘chief investigators’ on projects related to the war effort.  Rather than working in a national lab, or being employed by the government, they would stay at their universities, assemble their own staff, use their own laboratories, and then make periodic reports to committees at OSRD.  James Conant, a member of one of these committees, would later write: “Bush’s invention insured that a great portion of the research on weapons would be carried out by men who were neither civil servants of the federal government nor soldiers.”  This idea fundamentally changed research in the US, decentralizing it, moving it away from industrial and government labs, and placing it at universities.

In 1944, as the war in Europe neared its end, Bush was called into Roosevelt’s office and there, the President asked him: “What’s going to happen to science after the war?” Bush replied: “It’s going to fall flat on its face.” The President replied: “What are we going to do about it?” 

In November 1944, this question was put down formally in a letter from Roosevelt to Bush and OSRD.  The letter asked four questions: 1) How would the US make its scientific achievements of the war years “known to the world” in order to “stimulate new enterprises, provide jobs, … and make possible great strides for the improvement of the national well-being”?  2) How would medical research be encouraged?  3) How could the government aid private and public research, and how should the two be interrelated?  4) How could the government discover and develop the talent for scientific research in America’s youth?

Bush believed that advances in fundamental science had paid off spectacularly, resulting in new weapons and new medicines.  “[He] believed that you had to stockpile basic knowledge that could be called upon ultimately for its practical applications, and that without basic knowledge, truly new technologies were unlikely to emerge.”

Bush’s ideas took hold, eventually leading to establishment of the National Science Foundation and the NIH, and the current mechanisms that fund basic science in the US.  But the question persisted: should scientists be allowed to define their own questions in basic science, or should the government organize them into teams that go after mission-driven problems?

From pond scum to the human brain

In 1979, in a Scientific American article, Francis Crick (co-discoverer of the structure of DNA) suggested that a fundamental problem in the brain sciences was to gain control over single neurons.  He speculated that if single neurons could be controlled, particularly in the mammalian brain, a critical barrier would be crossed toward understanding both the function of each region of the brain and the mechanisms necessary to battle neurological disease.

Crick did not know it at the time, but basic scientists, doing curiosity-driven research, had already found the key piece of the puzzle in an unlikely place: pond scum.  There, in single-cell microbes, there was evidence that light-sensitive proteins regulate the flow of electric charge across the cell membrane (allowing the microbe to respond to light and move its flagella).  Thirty years later, building on these basic, seemingly useless results, Karl Deisseroth put the puzzle pieces together, showing how to use light to control single cells in the mammalian brain and producing a new field of neuroscience called optogenetics.

In a 2010 article summarizing the remarkable insights gained from this work, Karl Deisseroth wrote: "I have occasionally heard colleagues suggest that it would be more efficient to focus tens of thousands of scientists on one massive and urgent project at a time --- for example, Alzheimer’s disease --- rather than pursue more diverse explorations.  Yet the more directed and targeted research becomes, the more likely we are to slow overall progress, and the more certain it is that the distant and untraveled realms of nature, where truly disruptive ideas can arise, will be utterly cut off from our common scientific journey."

Sources
Jonathan R. Cole (2010) The Great American University: its rise to preeminence, its indispensable national role, why it must be protected. PublicAffairs.

Karl Deisseroth (2010) Controlling the brain with light. Scientific American, November, pages 49-55.

Thursday, December 19, 2013

Breathing Bangalore

In the suburbs of Bangalore, in one of the numerous buildings that house research and support facilities for nearly every major tech company in the world, scientists are working on understanding how you spread your attention when you navigate a web page.  A few of them had gathered in a conference room, listening as I described some of our work on how the brain controls movements of the eye.  

Gazing at the teak conference table, high-back leather chairs, and sophisticated teleconferencing equipment, I considered the contrast: just a few streets away from this modern world where I was giving my talk, goats were munching on a pile of refuse, and a small band of cows roamed happily against traffic.  A little farther, in the center of the city, scientists and engineers were working on fundamental questions at the Indian Institute of Science, a major university on a beautiful wooded campus that housed, in addition to world-class laboratories, large families of monkeys, bands of wild dogs, and bats the size of crows, all living freely, and from all indications contentedly, alongside humans.

I think the most striking difference from anywhere else that I have visited is that people here seem to have an exceptional respect for life --- life of any form.  Like most university campuses, this one also has large, impressive trees that dot the landscape.  But here, the human roads do not prevail.  Indeed, in many places the road has a large tree in the middle of it, its trunk marked with a few reflectors, and the cars simply go around it.  At our university guest house, a sprawling hotel-like structure, there are a few places where the hallway turns at a strange angle.  Looking closer, I see that the building is bending around an old tree, and not the other way around.  This co-existence is on display with the wildlife that lives alongside us.  The faculty housing is in a wooded area, where monkeys also raise their families.  One morning, as we ate breakfast at the guest house, with the window open to let in the cool breeze, a family of macaque monkeys came to visit.  The mama-monkey took a piece of papaya from a table, and went over and fed her babies.

The weather is mild and pleasant; a pleasure to step outside and feel the sun and smell the trees.  But the university is an oasis.  The peace and quiet of the grounds are in stark contrast to the outside world.  As we step beyond the gates, we leave "jungle book" and enter the human world; with its crushing traffic of cars, motorized three-wheel rickshaws, and scooters, all communicating in the machine-made language of horns.

The human languages are myriad in India, but the main language, at least here in the south, is English.  The students tell me that they rely on English to talk to each other because each comes from a different part of India, with its own languages, and English is the only common tongue.  

The diversity of languages is complemented with the diversity of faiths.  In the mornings, I hear the Muslim call to prayer before sunrise, and then a few hours later, I see the Hindu temple as I walk to the university conference center.  On the steps of the center there is a familiar scene, one of the wild dogs napping in the sun.

Tuesday, December 3, 2013

Ask vs. Axe

A few months back, my administrative assistant was offered a wonderful new job and as a consequence, the department hired a replacement.  The new assistant is a capable, hardworking young lady.  A few days ago I noticed that she tends to use the word /axe/, instead of /ask/.  

I had heard this usage a number of times in Baltimore, particularly among African-Americans.  I wondered: is this a mispronunciation, perhaps something like /nuclear/ vs. /nucular/?  A bit of research taught me that /axe/ has a long history in the English language, and is not a mispronunciation.

The Oxford English Dictionary notes that /ask/ is the descendant of /ascian/, which in Old English means to demand, to seek from.  The alternative form of /ascian/ is /axian/, or in short form, /axe/.  Oxford notes its use in Chaucer: "I axe, why the fyfte man Was nought housband to the Samaritan?" (Wife's Prologue, 1386), and "a man that ... cometh for to axe him of mercy." (The Parson's Tale, 1386).  The book The Complete Works of Geoffrey Chaucer includes 5 passages where the word /axing/ is used.  The word /axe/ appeared in the first complete English translation of the Bible, made in 1535 by Miles Coverdale, who wrote: "Axe and it shal be giuen you," and "he axed for wrytinge tables."

According to Random House, “In American English, the /axe/ pronunciation was originally dominant in New England. The popularity of this pronunciation faded in the North early in the 19th century as it became more common in the South. Today the pronunciation is perceived in the US as either Southern or African-American. /axe/ is still found frequently in the South, and is a characteristic of some speech communities as far north as New Jersey, Pennsylvania, Illinois and Iowa.”

So /axe/ is a regional pronunciation, somewhat similar to the regional variation between /idea/ and /idear/.

Saturday, September 21, 2013

A life without memories

When he walked into the room, he looked a decade younger than 71; his face handsome, with few wrinkles.  I pulled out a chair and asked him to sit in front of a robotic contraption that earlier that morning we had set up in an examination room at the Clinical Research Center at MIT.  He sat calmly and avoided touching the contraption.  I pulled the robotic arm toward him and asked him to hold its handle.  He grabbed the handle and started moving it around, keeping his gaze on the handle.  I asked him to look up at a monitor, where he saw a little cursor moving around as he moved the robot’s handle.  The computer displayed a target box, and he moved the cursor into the box, at which point the computer animated it, producing an explosion. 

A smile came to his face.  He said: “You know, when I was a kid I liked to go bird hunting.” Exuberantly, he described the birds that he hunted, the guns that he owned, and the woods around his childhood home.  He continued doing the task, reaching while holding the robotic arm and making little explosions.  About five minutes later he said: “You know, when I was a kid I liked to go bird hunting.”  The exuberance was unabated.  He had no idea that a few minutes earlier he had told me that same story.

Memory without awareness

The day before, with my two graduate students Kurt Thoroughman and Maurice Smith, I had packed the robot and computers into the back of my wife’s station wagon and driven up from Baltimore to meet and examine Henry Molaison.  Henry, or as he is known to the scientific world, ‘H.M.’, had suffered from debilitating seizures.  When he was 27, desperate for something that might help, he had agreed to an experimental procedure that surgically removed the hippocampus and amygdala from both sides of his brain.  The surgery was successful, greatly reducing his seizures, but it left him with a staggering deficit: an inability to form certain long-term memories.

Now, in the examination room, the robotic arm that Henry was moving started to produce forces, pushing his hand as he approached the target, making him mis-reach and miss those explosions.  But he kept practicing, and after a few minutes, a part of his brain that was not damaged learned how to generate the right motor commands so that his arm could compensate for those unusual forces.  Once again he got the explosions, and once again he excitedly told me of his childhood bird-hunting days.

After about an hour of playing with the robotic arm, Henry left for lunch and an afternoon nap.  He returned about 4 hours later.  I said hello and asked him whether he remembered meeting me and playing with the robotic arm.  He said no.  I pushed the robot aside, showed him the exam chair, and asked him to sit down.  He sat down, but then something interesting happened: rather than avoiding the machine, the behavior of someone who has never done the task before, he grabbed its handle, brought it toward him, looked at the video monitor, and started to move the cursor toward the target.

He had no awareness that he had seen me before, or that he had played with this robotic arm only hours earlier.  Yet, that experience had left two kinds of memories in his brain:  the memory of how to use the tool, and the memory that associated the sight of the tool, and the act of moving it, with a rewarding outcome.

He was not aware of it, but the sight of the strange tool, the robotic arm, was sufficient for him to want to hold it and move it around so that he could chase targets and get explosions.  Would he have voluntarily reached for the robot if, while playing with it, he had not had a pleasurable experience, recalling those childhood memories?  Probably not.  In 1911, Édouard Claparède, a physician in Geneva, described an amnesic woman much like Henry.  Claparède wanted to test her memory, so he played a small joke on her: when he reached out his hand to shake hers, he hid a small tack in his palm.  When the patient shook his hand, she felt the sharp tack.  The next day, when Claparède approached her, she could not recall having seen him before.  However, when he reached out to shake her hand, she pulled away, despite being unable to say why she did not want to shake his hand.

Henry could not remember the episode of having played with the robotic arm, but that act left a memory, associating the robot with a rewarding outcome.  The part of his brain that learned this value association was exhibiting its knowledge by reaching out to the robot and moving it in search of a target to explode. 

In addition to this value-action association, he also had a memory of how to use the contraption.  The forces that he had practiced to overcome had left a different kind of memory: much like picking up a Coke can that you expect to be full but is empty, Henry’s movements on this revisit showed the ‘after-effects’ of the earlier experience.  The robot was not producing any forces, but he moved it as if he expected it to produce those earlier forces.  When the forces were re-introduced, he moved the robot skillfully.

He did not have the ability to form memories of episodes of his life, but these two other intact forms of memory served him well.  For example, as he aged, he developed osteoporosis and required a walker to keep physically active.  With practice, he learned to use the walker skillfully.  Importantly, he would use the walker without being told to do so.  That is, he ‘knew’ that the walker was a useful tool that helped him get around.

Permanent present tense

What is it like to live a life with such a disability?  In a recent book titled “Permanent Present Tense”, Suzanne Corkin, a scientist who studied and cared for Henry for 46 years, describes his life in loving, exquisite detail.

In a photograph showing Henry with his mother at his 50th birthday, he recognized his mother, but not himself.  While attending his 35th high school reunion, he did not recognize anyone by sight or by name.  He could not recall any specific event in his life, even events from before his operation.  For example, he could not recall a single specific Christmas gift that his father had given him.  He remembered some of the facts that he had learned before his operation, and the gist of experienced events, but he had no recollection of any specific episodes.

Henry rarely spoke of being hungry or thirsty.  He never sought out food for himself; it was simply given to him by his caregivers.  In 1981, Corkin asked him to rate his hunger from 0 (famished) to 100 (absolutely full).  He consistently gave a rating of 50, whether he had just finished eating, or was about to eat.  One evening, Corkin played a small trick on him.  After Henry had finished eating and his tray had been taken away, the kitchen staff brought him another tray, with exactly the same meal.  Henry ate the second dinner, cleaning the plates, except for the salad.  He seemed unable to express a feeling of satiety.

When we were examining him, a caregiver mentioned that Henry rarely verbalized that anything might be wrong.  For example, if he had a toothache, he would rarely mention it.  Only by observing that he was deviating from his normal behavior during the day would the caregiver suspect that something was wrong.  The caregiver would then go through a list of possibilities to find out what the problem might be.

Corkin tested Henry’s ability to perceive pain by using a hairdryer to project a spot of heat onto his skin.  The heat was not intense enough to burn the skin, but the idea was to test whether Henry could feel pain.  Corkin’s results showed that not only was Henry unable to discriminate normally among various levels of pain, he did not report any of the stimuli as painful, and he never withdrew his arm.  It is possible that this inability to perceive pain normally, or to know hunger or thirst, was related to his operation; perhaps it was associated with the removal of the amygdala, as Corkin suggests.

Henry lived a life without keeping memories of the events, and without pain.  His father had passed away, and his mother, who had taken care of him for much of his life, was in a nursing home.  He kept two notes in his wallet that he had written to himself: “Dad’s gone”, “Mom’s in nursing home—is in good health.”

Henry died at the age of 82. 

References
Corkin S. (2013) Permanent present tense: the unforgettable life of the amnesic patient H.M., Basic Books.
Shadmehr R, Brandt J, Corkin S (1998) Time dependent motor memory processes in amnesic subjects. Journal of Neurophysiology 80:1590-1597.