
Hard Science, Hard Choices

By Sandra Ackerman

Chapter Eight

In view of the millions of prescriptions that are written every year for drugs that act on the brain, it is startling to remember that there are no objective diagnostic tests for mental disorders. Currently, no blood test can tell an individual whether she is at risk for major depression or an anxiety disorder or whether his attentional problems are past a certain threshold. The Diagnostic and Statistical Manual of Mental Disorders (DSM), the diagnostic manual for psychiatry, suggests that various collections of symptoms each add up to a distinct psychiatric disorder, but the reality is that depression, attention deficit disorder, autism, and probably many other conditions are more like qualitative traits, which can exist to a lesser or greater degree. An individual may exhibit several symptoms, some of them very clear and others less so. With each case, psychiatrists are called upon to draw a line somewhere in the continuum of symptoms, distinguishing people who are in need of treatment from those who are not.


Sometimes, with advances in medical science and technology, the borders themselves are subject to change. This can happen when the risk of a certain disease is found to be significant in the presence of milder symptoms or signs than had previously been highlighted. An example, says Steven Hyman, provost of Harvard University and professor of neurobiology at Harvard Medical School, is the circulating blood level of LDL cholesterol. Epidemiology and clinical trials of cholesterol-lowering agents have led to substantial changes in the threshold for treatment. A change in the borderline for treatment can also come about when new research identifies a previously unnoticed red flag, as when it became clear a few years ago that not only high diastolic but high systolic blood pressure, on its own, constitutes a risk factor for stroke and might require treatment. Borders in the continuum between health and illness in the brain are perhaps particularly subjective, but in fact most of medicine is typified by diagnostic gray zones. Very few conditions exist in which one can simply sort patients into the sick and the nonsick.


Physicians today work in a world in which everyone, from specialist to generalist to layman, seems to have an opinion on many medical topics. One widespread opinion, unfortunately, is that interventions aimed at addressing mental disorders differ in some profound way from interventions that aim to address general medical disorders. This opinion is especially strong when the intervention ought to take place as soon as possible, but the risk hovers over the distant future. To take a hypothetical example, a teenager with a dangerously high level of cholesterol (say, around 300 mg/dL) would most likely receive a prescription for a statin immediately and for many years, even in the absence of any data on the long-term effects of statins on childhood development or in the decades afterward. However, the psychiatric equivalent – such as antidepressant drugs for a young teenager who shows two or three depressive symptoms and therefore carries a significant risk of clinical depression in the next few years – would be met with suspicion, at best. If in both cases the youngster is currently fairly healthy and the consequences of long-term drug treatment are unclear, why are the cases regarded so differently? The answer may lie partly in the categorical difference we persist in seeing between the brain and all the rest of our organs – a largely Western point of view, as mentioned in Part I.

Starting with Safety

The public-health community, whose primary goal is the prevention of large-scale disease, has endorsed the use of statins to combat heart disease. Still, some of the traditionalists worry about another kind of risk, a moral hazard: Will a number of the people who take statins shed their willpower and come to feel they can eschew exercise or eat anything they like without regard to its effects on their health?


Most of us might ignore the theoretical risk or else accept it for the sake of lowering our risk of heart disease. But this abstract point cannot be so easily waved away. Unexamined, it reappears in other aspects of our thinking about the proper role of medicine in our society – and might even bear some weight in the shaping of medical policy. The notion is that effortful engagement, whether psychotherapy, physical training, or another kind of arduous workout, carries a moral benefit, whereas taking a pill is morally slovenly and even harmful – a moral shortcut. (Gerard Klerman, longtime professor at Harvard Medical School, referred to this belief as "pharmacological Calvinism.") The brain itself makes no distinction between pharmacology and experience. In this teeming, continually changing network, every physical, mental, or spiritual exertion leaves its mark in the fine-tuning of synapses and neurotransmitter levels, as does every pharmacological intervention.


One reason that the use of statins as a preventative measure has become very common in this country is that we assume these drugs are both safe and very effective. In fact, numerous studies have shown exactly that, and it was only after they had done so that a consensus on statins began to form among the medical community and the public. If they had not been proven to be both effective and safe, "statins" might not now be a household word.


The necessity of demonstrating these two qualities may seem almost too obvious to mention, but Hyman points out that, as we should remember from experience with one type of antidepressant that carries a small but real risk of increasing suicidal behavior, we cannot take either quality for granted. Drugs that have become familiar may not be as safe as we used to think; then again, we should not alarm ourselves into withholding them from people who are really ill.


One problem in drug development often overlooked by consumers is that we have relatively little data so far from clinical trials, especially for children. In the trials of drugs aimed at treating a disease, most studies run for six months or less – even if the drug under study is intended for chronic treatment that may continue for years. Hence, a drug deemed safe is known to be so only over a certain period of time, explains Russell Katz, director of the Division of Neuropharmacological Drug Products at the U.S. Food and Drug Administration (FDA). After this period the drug may remain safe to use indefinitely, or may gradually come to carry more risk (of, say, deleterious side effects), or may even, under certain circumstances, become unsafe with long-term use. In the first stages of clinical trials, the term "efficacy," too, applies only within certain limits: the drug need not produce the intended effect 100 percent of the time or even 50 percent of the time. Especially in the case of life-altering disorders, a drug need only work better than a placebo – that is, better than no treatment at all. This point may not be emphasized in a drug advertisement, but it can usually be found somewhere in the small print of the ad or in the package insert, which is usually considered to be part of the label.


When drugs are used for people who have milder symptoms or are at a lower risk, or to enhance already normal abilities, standards of safety have not yet been developed, says Hyman. To date, the medical and pharmaceutical communities have not formed a consensus on the kinds of studies that would be required to develop standards for prevention in diseases that are not life-threatening. How large would the studies have to be, and how long would they have to run, in order to prove a drug safe and effective in people who are healthy to begin with? Would they have to be placebo-controlled and run for five years? Would they have to have ten thousand patients? We don't yet know the answers to these questions; indeed, we have only recently begun to discuss them.


What features should the FDA take into account when weighing whether or not to approve a drug that may be taken by healthy people to augment or improve some aspect of themselves? Again, safety is the most important consideration, says Hyman. A doctor about to prescribe a drug for someone who is not sick must be confident that the medication itself will not make the person sick.


The proposed labeling of a drug on its way to market receives as much scrutiny as the drug itself. In fact, according to Katz, drug approval is inextricably linked to labeling. The law requires "substantial evidence" – usually at least two well-controlled investigations, including clinical studies – demonstrating that the drug produces the effects that are claimed for it on the label. From the standpoint of legal requirements, developing treatments to enhance normal functions is permissible as long as the research is solid and the labeling describes exactly what the drug does and for whom it is intended.


Would we be willing to relax the safety standards for an enhancement drug that produces a very substantial effect, catapulting the user from, say, average intelligence to brilliance in one dose? We don't yet know the answer to this question, since no such drug exists. As of this writing, says Katz, the FDA has not received notice of any pharmaceutical company working along those lines. True, we already tolerate some treatments that cause considerable harm – we have only to think of amputation, radiation, or chemotherapy – but these measures are reserved for patients who are gravely ill, and they are known to convey far more benefit than harm. Whether healthy people will risk endangering their health for the sake of mental improvement remains to be seen.

Psychiatric Drugs for Children

Entering children into clinical trials of a new drug for mental disorders raises its own set of ethical questions. For one thing, all the data that scientists have up to now have come from trials of drug treatment for recognizable disorders. Little information has been gathered on early intervention for, or prevention of, say, the onset of depression or an anxiety disorder, and none at all on the enhancement of capacities, outside of very small laboratory-based studies that don't necessarily mirror real life. Therefore, when we give drugs to a child, especially one who may not be ill, we ought to be clear about whose interest is being served by the treatment. Is it in the interest of society, the child's school or parents, the child himself, or some combination? Hyman points out that these interests are often difficult to distinguish from one another.


Then there are children who have an accepted medical diagnosis, but are they diseased? Pediatric psychiatrists face this question frequently in their practice with a syndrome known as pediatric conduct disorder. This diagnosis, the most common reason for referral to a pediatric psychiatrist, has as its essential feature a persistent pattern of behavior in which the basic rights of others or major age-appropriate societal norms or rules are violated. In other words, this condition is diagnosed not by the patient's own health or well-being but by the effect that the patient's behavior has on other people. The criteria include using a weapon, setting fires, running away from home, and so on – all kinds of behavior that trouble other people for the most part, but not necessarily the patient. Another criterion is that this behavior interferes with the patient's functioning – but of course, if someone sets fire to a house, stands trial, and is sent to jail, that will indeed interfere with her functioning. Clearly, whether or not it is caused by a disease, such conduct is something that needs to be addressed, says Thomas Murray, president of the Hastings Center. Psychiatrists as well as lawyers and judges have a responsibility to think about the larger implications not only for that person but for all the others whom that person's life will touch.


The criterion of "interfering with the patient's normal functioning" merits a closer look. One might reasonably argue that the way our society responds to the troubled and troublemaking child, whether by isolating him or rushing to medicate him, interferes with his normal functioning almost as much as a latent psychiatric disorder could do. This may be especially true in cases in which the child shows a few warning signs but not a full-fledged disorder.


Pediatric conduct disorder, like a number of other pediatric disorders, presents special ethical issues because the subject is a child, whose brain is still developing and therefore more subject to influences from the environment than the brain of a mature adult. Drug treatment, behavior modification, or whatever else is used to manage the disorder in a fifteen-year-old is likely to affect her future profoundly.


Many children who do not receive a formal diagnosis are nevertheless struggling with psychiatric symptoms: for example, the eighth-grader who has lost his appetite, hardly sleeps, and feels guilty all the time, or the sixth-grader who is very bright but does poorly in school because her restlessness keeps her from listening to the teacher. All too often, says Hyman, these children drag themselves through years of underperformance in school, think little of themselves, and are rejected by most of their peers. Perhaps they feel, too, that their teachers or parents see them primarily as problems. They are likely to turn to other self-styled outcasts, from whom they may learn such maladaptive habits as avoiding school entirely or self-medicating with alcohol or illicit drugs. The luckiest among them will be caught, undergo medical examination, and finally receive a diagnosis for what has been hindering them all along. The saddest part of these stories is that even the best psychiatric help in the world cannot make up for lost years of scholastic and social development; they must start anew from where they are now. Just as surely as treatment with powerful drugs, experiences that result from a lack of treatment etch their marks in the brain. This fact alone may offer enough reason to promote early intervention to reduce the impact of psychiatric disorders in children.

Unfair Advantage in a Pill?

Some time ago, when the names "Ritalin" and "Prozac" still sounded strange to most Americans, a few people warned that these drugs, which altered the state of the brain, could one day be used to control large numbers of people for political ends. Fortunately, this potential has not been realized; instead, the opposite problem has developed. According to several studies and surveys, Ritalin and Prozac are used at highest rates among Caucasian populations. The issue now, according to Hyman, is: are we allowing the emergence of two classes, the chemical "haves" and the "have-nots"? Are we going to create a wealthy, comfortable class whose children will have not only the latest computers and special college-preparation courses but pharmacological advantages for competing in school?


This is not a trivial ethical issue for the kind of society we want to have. We need urgently to discuss such questions and come to an understanding. The drugs now in development have the stated goal of treating illness, but almost inevitably some will prove effective for enhancement and will be used by healthy people who can afford to pay for them. In our free society, many believe it is their duty to give their children every advantage. The medical establishment cannot effectively prohibit these uses or persuade all physicians to turn away such requests. We need, therefore, to address the ethical issues at stake so that we can set some guidelines to manage the ever-growing demand.


There is no question that the use of psychotropic medication has increased dramatically over the last ten to fifteen years. At the top of the list have come the stimulants, particularly methylphenidate (Ritalin), and amphetamines in general, which are used to treat the condition of attention-deficit/hyperactivity disorder (ADHD). ADHD is characterized by a level of inattention and/or hyperactivity that is abnormal for the child's stage of development, that usually becomes apparent in preschool or early elementary school years, and that impairs school performance, can lead to academic failure, raises the risk of delinquency and of accidents, and increases overall medical costs. The list of consequences does not mean each child diagnosed with ADHD is doomed to have these negative outcomes, but on average, a population of children with ADHD would have more of them than a comparable group without ADHD. The prevalence of ADHD in American children is estimated at 4 to 5 percent, and the disorder is more common in boys than in girls.


Medical treatment for ADHD frequently includes the use of stimulants, but exactly how this class of drugs works on ADHD has never been settled. Benedetto Vitiello, chief of the Child and Adolescent Treatment and Preventative Interventions Research Branch of the National Institute of Mental Health (NIMH), says it is unclear whether stimulants have a beneficial effect on educational achievement in addition to enhancing cognitive performance. There is plenty of evidence that these drugs increase alertness, attention, and goal-directed activity while reducing fatigue, appetite, and sleep. The evidence is less clear on the question of whether they increase knowledge, accelerate learning, or bring about better academic achievement in general. Experts agree that stimulants improve performance on tests; that is, whatever a person is doing, he or she will do better under the influence of stimulants.


According to a 2002 estimate by Ann Zuvekas and her colleagues at the George Washington University School of Public Health and Health Services, between 2 and 2.5 million children in the United States received a prescription for stimulant medication at least once that year. Large as this figure is, it may be an underestimate, because it is based on reports by parents. Although the study population was carefully selected to represent the nation as a whole, the results of these surveys can be subject to over- or under-reporting.


One recent survey attempted to identify the main factors that might raise a child's likelihood of using stimulants. As might be expected, age is a powerful predictor: children ages ten to twelve show the highest rate of use. Other factors include being a boy, living in a family with fewer children, and living in a community with a higher percentage of Caucasians. Rates of stimulant use among children vary widely with geography: higher in the South and Midwest, lower in the Northeast. The lowest rate of use is found in Washington, D.C. (1 percent), and the highest in Louisiana (about 6 percent).


An interesting study, from Adrian Angold and his colleagues at Duke University, concerns a survey taken in the late 1990s in western North Carolina. Not content with simply counting how many subjects were taking stimulants, they wanted to find out to what extent the children's symptoms, as reported by parents or teachers, actually were consistent with a diagnosis of ADHD.


The Duke scientists studied an epidemiological sample of about fifteen hundred children, ages nine to sixteen, for the period from 1992 to 1996, and found that about 3.4 percent of this number – about fifty children – met full criteria for ADHD (a figure that is in accord with most such surveys). Of these fifty or so, almost three-quarters had taken medication consistent with their diagnosis – that is, stimulants. Another group, representing about 2.7 percent of the study population, had some symptoms of ADHD but did not meet the full criteria for diagnosis; about a fourth of them used stimulants. The great majority of the study population, more than 93 percent, showed no symptoms suggestive of ADHD at all – yet, within this group, about 4.5 percent had used stimulants at least once in those four years.


Focusing in on this 4.5 percent yields a few additional insights, says Vitiello. It appears that, of a given one hundred children in the community who were treated with stimulants in the four years, about 34 percent had symptoms that met the full criteria for ADHD; about 9 percent had symptoms but not full-fledged ADHD; and about 57 percent had neither. However, about 84 percent of the children who had no ADHD symptoms were nevertheless rated by their teachers as impaired in school.
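
These proportions follow arithmetically from the figures in the preceding paragraph. The short Python sketch below is offered only as a back-of-envelope check: it combines each group's share of the study population with its reported rate of stimulant use, under the assumption that the three groups account for essentially all stimulant users in the sample; the variable names are purely illustrative.

```python
# Back-of-envelope check (illustrative only) of the Duke survey figures cited above.
# Assumption: the three groups below account for essentially all stimulant users
# in the sample of roughly fifteen hundred children.

groups = {
    # group name: (share of study population, fraction of that group using stimulants)
    "met full ADHD criteria": (0.034, 0.73),   # ~3.4% of children, ~3/4 treated
    "subthreshold symptoms":  (0.027, 0.25),   # ~2.7% of children, ~1/4 treated
    "no ADHD symptoms":       (0.939, 0.045),  # remaining ~93.9%, ~4.5% treated
}

# Each group's contribution to the overall fraction of children using stimulants
contributions = {name: pop * rate for name, (pop, rate) in groups.items()}
total_users = sum(contributions.values())  # about 0.074, i.e. ~7.4% of the sample

for name, share in contributions.items():
    print(f"{name}: {share / total_users:.0%} of stimulant users")
# Prints roughly 34%, 9%, and 57%, matching the breakdown quoted above.
```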


Another epidemiological study, called Monitoring the Future, was sponsored by the National Institute on Drug Abuse. This was a survey carried out on a representative sample of eighth-, tenth-, and twelfth-grade students. The responses (again, by self-report) indicated that some 4 percent of those queried had taken Ritalin without a doctor's order at least once in the preceding twelve months. Several factors stood out as predictive: being white, being in a higher grade, having low school grades, and having a history of substance abuse. Some geographical variation in these rates may indicate that this type of use was carried out not to improve school performance but for recreation.


In Vitiello's view, stimulants have a wide range of possible applications, from use to misuse to abuse. The favorable ratio of benefit to harm that has often been cited for stimulants has been found specifically for children who meet the full criteria for ADHD; no one knows just what the ratio is in other groups of stimulant users, such as children who have symptoms of ADHD without the full diagnosis. Yet treatment without a full diagnosis is not unheard-of in medical practice. In fact, most medications are used at times in subsyndromal cases or in cases with a different, though related, diagnosis – not just in psychiatry but, for instance, when antibiotics are prescribed for infections that may not be caused by bacteria.


Moving to the next point on the range of applications, stimulants may be used for treating children who do not have symptoms of ADHD but may have other learning disabilities, such as dyslexia, or even a level of aggression that keeps them from focusing on their schoolwork. There is some evidence that treatment with stimulants improves these children's performance; whether they actually learn more is still an open question. Another example of nontherapeutic use is the practice of giving stimulants to a healthy, nondisabled child to improve her performance in school.


The end point on the range of applications, taking stimulants to bring on euphoria, is clearly drug abuse. The two nontherapeutic uses that fall short may arguably constitute misuse; if so, it is a misuse being carried out millions of times every year. The arguments for and against the nontherapeutic use of stimulants each have their passionate advocates. However, the key element of this debate – solid evidence about the risks and benefits – is missing, because the necessary research has not been carried out in healthy children. Until this has been accomplished, no argument on either side can be entirely convincing.

Copyright 2006, Used with permission from Dana Press.
