Most scholars date the modern bioethics movement back to the 1960s, whether from the founding of the Hastings Center and Georgetown University’s Kennedy Institute of Ethics[1], or perhaps from a period beginning with Henry K. Beecher’s courageous bit of medical muckraking, detailing in the New England Journal of Medicine no fewer than 22 unethical medical experiments, and ending with the New Jersey Supreme Court’s decision to respect the request of Karen Ann Quinlan’s parents to turn off the respirator[2].

In retrospect, the 1960s in North America were a “perfect storm” of factors that combined to create a truly revolutionary change in the relationship of patients to doctors; of health professionals to each other; of professionals to laypeople[3]. Medicine itself had undergone revolutionary changes, beginning with the discovery of penicillin near the end of WWII. What medicine could do for (and to) patients exploded, from antibiotics to make surgery safer, to kidney dialysis and eventually organ transplants. As medicine did more, there was more to argue about, and what had been essentially private decisions became matters for public debate. A famous example is the article by Shana Alexander in Life[4], a mass market magazine, shedding journalistic daylight on the deliberations of the “God Committee,” in Seattle, which had the unenviable job of deciding whose life would be saved by dialysis when there were not enough machines to meet the demand.

New technologies allowed medicine to keep alive patients with poor quality of life or little chance of long-term survival. Patients, families, and their advocates began to push back against the automatic assumption that longer life meant better life[5]. Plays and movies such as Brian Clark’s “Whose Life Is It, Anyway?” brought the issues into popular discussion. The Hospice movement, begun in the 1960s by Dame Cicely Saunders in London, offered an alternative to the “do everything” mandate of conventional medicine. Elisabeth Kubler-Ross published On Death and Dying in 1969; this hugely influential book argued against the secrecy that isolated dying patients.

In a different arena, unethical medical experiments with human subjects became the object of public scandal. Although the field of bioethics “should have begun” as a response to revelations of the role played by physicians in Nazi atrocities, in fact most Americans thought of those atrocities as the work of “ideological lunatics”[6] and saw the Nuremberg Code as irrelevant and unnecessary for “civilized” people. It took the revelations of the Tuskegee experiment and other scandals to raise awareness among the public and in government, culminating in the first presidential commission on bioethics, and eventually in regulations governing the ethical conduct of research with human subjects.

For me, the 1960s are perfectly captured by the slogan on a button still rattling around in my desk drawer: Question Authority. Fortresses of hierarchy were crumbling, and there was no hierarchy more entrenched than the practice of medicine.

It was […] the individual physician who decided […] matters at the bedside or in the privacy of the hospital room, without formal discussions with patients, their families, or even their colleagues, and certainly without drawing the attention of journalists, judges, or professional philosophers[7].

Thomas Szasz slyly analogized the physician to the (pre-Vatican II) priest: he was male; wore special ritual garb; practiced with his back to his audience; spoke a sacred language (Latin) to which his flock was not privy; could not be questioned. Patients at the time were rarely permitted to see their hospital charts; prescriptions were written in Latin in notoriously bad handwriting[8]. But just as Vatican II blew fresh air into the Church (at least for a while), the 1960s challenged the singular authority of the individual doctor. A study in 1961 reported that 90% of physicians would not tell a cancer patient her diagnosis; by 1977, 97% said they would inform the patient, an extraordinary turnaround in less than a generation[9].

One of the greatest challenges to medical authority came from the women’s movement. Women had always constituted the majority of patients, in part because of conception, contraception, pregnancy and childbirth, in part because it was usually mothers who took their children to the doctor. Women often perceived doctors as “condescending, judgmental, paternalistic and non-informative”[10].

It is sometimes hard for my students to imagine a world in which the internet did not exist and all the different “Dummies” and “Idiots” guides to just about everything had not yet been published. In the 1960s, if you didn’t have a doctor in your family, it was extremely difficult to get information. The Boston Women’s Health Book Collective began to address the information gap with Our Bodies, Ourselves, in 1971 and later editions, explicitly giving women the tools to challenge doctors’ autocratic rule over such matters as childbirth and contraception. Movements such as Lamaze and other “natural childbirth” alternatives did their part by giving women an alternative to being passively anesthetized while the doctor “gave” them the baby, and by recruiting midwives as sources of information and support. Women began to reimagine the physician-patient relationship as one of more equal power. Rather than dutifully following doctor’s orders, Our Bodies, Ourselves urged women to “be alert to your responsibility in the relationship, just as you would in any other adult relationship where you are purchasing services”[11].

In fact, the rise of the consumer movement is an overlooked factor in the birth of bioethics. Consumers Union, which produced Consumer Reports, began in the U.S. in 1936, but really took off after WWII, when the “baby boom” resulted in an explosion of goods for sale and the advertising to tout them. With its independent testing laboratories and user surveys, CR was the place to go before buying a washing machine or a car. Ralph Nader’s Unsafe at Any Speed, a critique of the safety record of American automakers, was published in 1965. Although no one remarked on it at the time, the consumers’ movement, the women’s movement, and the bioethics movement came together in 1979 when Consumer Reports published a piece on amniocentesis, detailing its pros and cons, how it worked, and seventeen points to discuss with your genetic counselor[12]. Other articles in that issue included an evaluation of five different pancake mixes and a guide to the best spackling compounds. Suddenly, a sophisticated, even arcane, piece of health care was being treated like a vacuum cleaner or coffee maker, and women who made use of the article were consumers, empowered with information, rather than passive objects.

The bioethics movement promoted the patient as the physician’s partner, and as the ultimate decision maker. The concept of informed consent became paramount in both clinical and research settings. Acknowledging the patient’s right to make her own decisions, and to have access to the information necessary to make those decisions, was the hallmark of respect for autonomy[13].

Thus, the bioethics movement was born as a full-throated defense of patient empowerment. That word sounds tired now, but in the 1960s and 1970s, power was very much the issue. Respect for persons meant respect for the voluntary, informed choice of the competent patient, the better to support personal autonomy. The challenges to be overcome at the time were lack of information and a paternalistic medical profession.

Although the theoretical pendulum has swung a bit in the last couple of decades, no one has questioned the basic commitment to informed consent of competent patients in clinical settings. The threat to autonomy comes instead from opportunistic testing, combined with pressures of time and money, and extruded through the trend toward routinization that seems pervasive in medicine[14]. This is one of the major issues facing bioethics today, and one that will only grow larger in the next decade. I truly fear that this threat has already seriously eroded any semblance of informed consent in some of our most basic and common medical decision-making.

In what follows, I will explain what I mean by opportunistic testing and consider three different examples of how it threatens informed consent:

  • PSA Screening

  • Newborn Screening

  • Maternal blood tests for fetal anomalies

PSA Screening

I began to think about this issue a few years ago, in the kitchen of friends I will call Jack and Kate. We were putting the finishing touches on dinner when Jack told me, in a rather distraught voice, that he had just found out the results of a PSA test, and it was high. Hmmm, I said — what made you decide to take the test? He looked at me blankly — he hadn’t decided, hadn’t been given the choice — hadn’t even realized his doctor had ordered the test until he was given the results.

I am very fond of my friend, who was rather upset by all this, and it made me quite angry. I immediately got on the Web and reminded myself — and Jack and Kate — of what I already knew. Prostate specific antigen (PSA) is a protein produced by the prostate gland. A very small amount escapes into the bloodstream, which allows for simple testing with a blood sample. PSA can be used as a screening device for men not known to have prostate cancer or as a test to monitor men who have already been treated.

As H. Gilbert Welch writes, “Like all other efforts to diagnose disease early, cancer screening is a double-edged sword. It can produce benefit: providing the opportunity to intervene early can reduce the number of deaths from cancer. It can produce harm: overdiagnosis and overtreatment. And it can do both at the same time. So while a strong case can be made for cancer screening, there are good reasons to approach it cautiously”[15]. PSA screening is especially difficult to assess. On the one hand, prostate cancer is the second most common cause of cancer death in men. On the other hand, it turns out that most prostate cancer is “indolent,” causing no symptoms and no harm. Many more men die with prostate cancer than die of it. A number of studies looked for prostate cancer in men who had died of other causes and who were unaware that they had prostate cancer. 40% of men in their 40s, and a whopping 80% of men in their 70s, were found to have had nonsymptomatic prostate cancer[16]. The problem with cancer screening is that it cannot distinguish between nonprogressive or very slow-growing cancers, for which treatment is unnecessary, cancers that are so aggressive that treatment is pointless, and cancers for which treatment will make a difference. Meanwhile, treatment for prostate cancer is hardly harmless; substantial numbers of men who receive surgery or radiation for prostate cancer will experience irreversible impotence or incontinence, or both[17].

All the reputable websites essentially echoed this statement from the American Cancer Society (ACS):

The American Cancer Society recommends that men have a chance to make an informed decision with their health care provider about whether to be screened for prostate cancer. The decision should be made after getting information about the uncertainties, risks, and potential benefits of prostate cancer screening. Men should not be screened unless they have received this information[18].

The National Institutes of Health website advises that the value of PSA screening is “controversial” and recommends that men discuss with their doctors the reasons for and against having the test before making a decision[19].

Otis Webb Brawley, Director of Research at the ACS and “the public face of the cancer establishment,” has long been criticized for his very public skepticism about PSA testing and his own refusal to take the test[20]. (As an African-American male, Brawley might be considered an especially “bad” role model for screening advocates, because African-American males have higher rates of prostate cancer.)

All this was a few years ago, before the May 2012 recommendation by the independent United States Preventive Services Task Force (USPSTF) against routine screening for men of any age group.[21] Co-Chair Michael Lefevre explained that “for every 1,000 men treated for prostate cancer, five die of perioperative complications; 10-70 suffer significant complications but survive; and 200-300 suffer long-term problems, including urinary incontinence, impotence or both. That’s a lot of harm for a cancer that didn’t need to be treated in the first place”[22]. Dr. Richard Ablin, who discovered PSA in 1970, wrote in 2010 that “The test’s popularity has led to a hugely expensive, profit driven public health disaster”[23].

So… why was my friend given such a controversial test without his informed consent? Because the test is opportunistic. My friend was used to having his blood screened at each routine visit for, e.g., lipids, and the physician could piggyback the PSA test on top of the other tests, without getting extra blood or doing anything else that he would have to explain or get permission for. While I don’t condone that practice (anything but), it is easy to imagine the physician’s thought process — or perhaps that of the institution. To not offer PSA might lead to a lawsuit down the road. To offer it with an appropriate discussion would take a bit of time, at least 15-20 minutes[24]. To precede this routine medical visit with a pamphlet or DVD about PSA screening, say, a couple of weeks earlier, in an attempt to speed up the discussion in the office, would be smart, but would display a degree of planning rarely seen. The average office visit is 19.3 minutes, according to one study[25]. Better to just give the test to everyone, and save precious time by discussing it only when the results are problematic, which might be about 5% of the time.
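To make that calculus concrete, here is a minimal back-of-envelope sketch using only the figures just cited (a 15-20 minute discussion, a 19.3 minute average visit, problematic results perhaps 5% of the time). The 100-patient cohort and the 17.5-minute midpoint are my own assumptions, introduced purely for illustration, not figures from the studies cited.

```python
# Rough, hypothetical illustration of the time calculus described above.
# Discussion length (15-20 min), average visit (19.3 min), and the ~5%
# "problematic" result rate come from the text; the cohort size and the
# 17.5-minute midpoint are assumptions made only for illustration.

DISCUSSION_MIN = 17.5   # midpoint of the 15-20 minute consent discussion
VISIT_MIN = 19.3        # average office visit length cited above
PROBLEM_RATE = 0.05     # rough share of results requiring follow-up discussion
COHORT = 100            # hypothetical number of patients

# Option A: discuss the test with every patient before ordering it.
upfront = COHORT * DISCUSSION_MIN

# Option B: order the test for everyone, discuss only problematic results.
after_the_fact = COHORT * PROBLEM_RATE * DISCUSSION_MIN

print(f"Consent discussion for all: {upfront:.0f} min "
      f"(~{upfront / VISIT_MIN:.0f} office visits' worth of time)")
print(f"Discussion only when problematic: {after_the_fact:.0f} min "
      f"(~{after_the_fact / VISIT_MIN:.0f} office visits' worth of time)")
```

On those admittedly crude numbers, upfront consent would consume roughly twenty times as much clinician time as the silent, routinized approach, which is exactly the pressure that needs an ethical counterweight.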

In fact, when one considers the controversy swirling around this test, it is outrageous that so many patients are subjected to it without their knowledge or consent. It is difficult, however, to document what percentage of patients are given the opportunity to make an informed choice before undergoing PSA testing. Researchers in the United Kingdom reported in 2010 that only about a third of 106 men given a PSA test were aware of such basic facts as the goals of the test and the likelihood that it would lead to further testing, but the study did not address whether the men were even told that a PSA test had been ordered[26].

A 1999 study in the United States found that one third of patients at a primary care clinic were

“unaware that they had received a screening PSA test, and among patients who were aware of having the test done, fewer than half recalled having a discussion about the associated benefits and risks. […] We found that most men did not know that treatment of localized prostate cancer has not been shown to increase survival and can lead to impotence and incontinence. The results indicate that, in most cases, the process of verbal informed consent between patients and health care providers was either ineffective or not done”[27].

It appears that men who do undergo an informational process are significantly less likely to express interest in PSA testing than those who were not given that opportunity[28].

Newborn Screening

State-mandated newborn screening began in the 1960s by targeting phenylketonuria (PKU). In fact, in some places newborn screening is still referred to as “the PKU test.” In this genetic condition a baby is born without the ability to break down an amino acid called phenylalanine. On a normal diet, babies with PKU become irreversibly developmentally delayed, but if put on a strict diet that excludes phenylalanine, they can progress normally. Because early intervention (before symptoms become apparent) is crucial, and because we have an effective intervention, PKU remains the “poster child” of a successful newborn screening program.

The trend toward mandatory newborn screening began with PKU. Dr. Robert Guthrie developed the test for PKU with support from the National Association of Retarded Children. Guthrie and NARC members lobbied state governments, provided draft legislation, and were successful in making PKU screening mandatory in 48 states[29].

For some time after PKU screening began, other tests were added in a very frugal fashion; screening for each new condition required a whole different test and different lab equipment. Given the tremendous expense of testing all newborns in the state, and the relatively small number of children identified, each new screen had to surmount a rigorous test of its own in order to be adopted. That all changed with the invention of tandem mass spectrometry, which allows for “multiplex testing” on the same blood sample for many conditions at once. Mass spectrometry has allowed for unprecedented expansion of newborn screening[30]. The DNA chip, already in use in the private sector, will soon make possible “additional exponential expansion”[31] of newborn screening programs. Whereas mass spectrometry measures levels of various metabolites in the blood, the microchips will screen directly for the genetic basis of various disorders[32]. The National Human Genome Research Institute is currently working to reduce the cost of sequencing an entire human genome to $1,000 by 2014[33].

The explosive expansion of new conditions to be screened is controversial. When each condition had to be justified on its own, new screens were added sparingly. Now, there are many pressures to expand screening and no clear criteria for adding new conditions. One could think of this like the national census: once you have put in place a huge army of people to ask a million households to answer a list of questions, everyone with a cause or a research agenda wants to add a question to the list. Rachel Grob identifies a number of factors contributing to the rapid expansion of newborn screening, including “technological innovation, political opportunity, interstate rivalries, and competitive pressure on state programs from national laboratories”[34].

Advocacy groups, often propelled by families whose own child might have been saved from the consequences of a rare disease had timely screening been available, push hard to add “their” disease to the screening panel. Interestingly, parents of children with disorders for which there is currently no medical intervention, such as Fragile X, are equally enthusiastic about routine screening[35]. Ross and Waggoner point out that parent advocacy groups often base their arguments on personal experience (anecdotal evidence) rather than on scientific, peer-reviewed evidence. Advocacy groups often sidestep existing procedures for adding new tests by lobbying legislators directly. Further, advocacy groups’ focus appears to have shifted from “the public interest” to “their members’ interests”[36]. Ross and Waggoner attribute this shift at least in part to funding of advocacy groups by pharmaceutical companies:

“Although private individuals and charitable foundations were the historical source of funding for advocacy groups, today advocacy groups are often funded by pharmaceutical companies that have a vested interest in the promotion of the treatments that they are developing for the disorders represented by the advocates”[37].

Newborn screening expansion has engendered a great deal of discussion as well as controversy, although no amount of discussion seems to have any effect on the screening juggernaut. Underlying the debate is the fact that almost all newborn screening is done without the informed consent of the parents[38]. In many states, parents can theoretically refuse screening, but since they are in fact rarely told of it beforehand, or are told only in very vague terms, this right to refuse is meaningless. Only two states require parental consent, although 13 other states require that parents be “informed”[39]. A test given without parental consent can only ethically be defended on the grounds of potential benefit to children, backed up by strong evidence. PKU screening fulfills those requirements (assuming that the state follows up with parental education and makes sure there is access to the expensive diet), but screening for other conditions may not. Screening for cystic fibrosis, for example, has been controversial because not everyone agrees that there is a medical advantage to early, presymptomatic diagnosis. However, studies show that early diagnosis prevents malnutrition and improves children’s growth and cognitive function[40]. A different concern has been expressed about Fragile X syndrome, the most common form of inherited intellectual disability. Although Fragile X can be devastating, one third to half of all females with the mutation are intellectually normal. Identifying those children could cause unnecessary anxiety in parents or lead them to have mistakenly low expectations of what their daughters can achieve[41]. Unfortunately, even when parents are aware enough to exercise their right to consent to or refuse newborn screening, consent is an “all-or-nothing” process. Parents cannot choose which tests their newborns get. Thus, fear of missing something crucial drives parents to consent to everything and produces “broad pediatric health professional consensus to discourage parental refusals”[42].

Everyone agrees that screening ought to result in a demonstrable benefit, but there is disagreement on what kinds of “benefits” count[43]. There is no question that saving a child from the devastating effects of PKU is a wonderful benefit. But by what rationale should we screen for disorders for which there is no known medical intervention? Even if the infant itself does not benefit directly, one could argue that there are benefits to the family, or to society as a whole. Parents could benefit from having a heritable disease diagnosed early, before they embarked on another pregnancy. Parents (and arguably the child) could also benefit by being spared a “diagnostic odyssey” when the child does become symptomatic. Society could benefit from knowing more about the incidence of a disease. This, of course, is a research question, and normally we do not allow research on children without parental permission. In fact, Ross and Waggoner argue that many new screening initiatives should actually be considered research, and should be placed under the scrutiny of an Institutional Review Board for the Protection of Human Subjects, commonly known as an IRB.

Why is IRB approval so important? By placing such (medical) conditions under an IRB protocol, it acknowledges that there is much we do not know and that needs to be learned. It acknowledges that we need parents to be coadventurers as we diagnose their children with genetic or metabolic disorders of unknown significance. It also means that additional reviews will be necessary before these conditions become entrenched in newborn screening programs[44].

From some perspectives, there are few if any ethical limits on newborn screening. Duane Alexander, until recently Director of NIH’s National Institute of Child Health and Human Development, considers the principle that one should screen only for disorders for which a treatment exists to be an “outmoded dogma.” Alexander and others call for the development of multiplex screening that would screen newborns for “every medically significant genetic marker”[45]. Rather than demanding a rationale for adding a screen, every marker is presumptively screenable in the absence of a good reason to exclude it. The President’s Council on Bioethics rather dubiously terms this approach “newborn profiling”[46]. In the 1990s, a number of reports argued that the only justification for newborn screening was the possibility of substantial benefit to the child. In the twenty-first century, that perspective seems to be losing out to a wider and not well delineated notion of “benefit” and of appropriate beneficiaries[47].

In the era when newborn screening meant solely PKU, one could at least make the argument that no parent should risk a baby’s health by refusing the test, although that would not, in my opinion, obviate the need for parental consent. As time went on, however, the number of tests increased exponentially, and began to include conditions that were not responsive to treatment or that were collected for research purposes only. This is why I use the term opportunistic — you start out with a well-established test that the subject expects or is conditioned to or that has some sort of rationale, and piggy-back onto that one or more tests on the same sample. Of course, more tests should equal more need for consent, especially when the purpose shifts from clinical to research, but in fact all the pressures push in the other direction.

First, the actual incidence of finding the disease when one screens a general population is pretty low. One in 15,000 infants born in the US each year has PKU, for example. So, it’s one thing to push an informational folder into the hand of every distracted new parent, but should we really ask a health professional to spend 10 or 15 minutes explaining the advantages of screening for PKU to every new parent, when that discussion will prove largely irrelevant 14,999 times out of 15,000? With the relative ease of adding one more test onto the panel, informed consent becomes harder and harder to support. Grob argues that “this extraordinary substitution of state power for parental autonomy” can be summarized as a combination of economics; lack of logistical support, e.g., not enough genetic counselors; and fear that requiring parental consent would result in too many parental refusals that would imperil children’s health[48].
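A minimal sketch of that arithmetic, using only the figures above (one case per roughly 15,000 births and a 10-15 minute discussion, with the 12.5-minute midpoint my own assumption), makes the scale of the counseling burden plain:

```python
# Back-of-envelope sketch of the "14,999 times out of 15,000" point above.
# The incidence and the 10-15 minute discussion come from the text; the
# 12.5-minute midpoint is an assumption made for illustration only.

BIRTHS_PER_CASE = 15_000   # roughly one PKU case per 15,000 US births
COUNSEL_MIN = 12.5         # midpoint of a 10-15 minute pre-screening discussion

minutes_per_case = BIRTHS_PER_CASE * COUNSEL_MIN
hours_per_case = minutes_per_case / 60

print(f"Consent discussions per affected infant found: {BIRTHS_PER_CASE:,}")
print(f"Clinician time per affected infant found: {hours_per_case:,.0f} hours")
```

On that crude reckoning, roughly three thousand clinician-hours of conversation stand behind each case a consent requirement would touch, which is precisely the economic pressure Grob identifies.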

Oddly, the same information that is surrounded by the highest level of concern in the genetic counseling context is treated quite cavalierly in newborn screening. Screening not only identifies babies who are actually at risk for a genetic disease, such as cystic fibrosis; it also identifies, as an unintended consequence, babies who carry only one copy of the genetic mutation. When a disease is recessive, such as cystic fibrosis or sickle cell anemia, this information has no health implications for the child. But identifying a newborn as a carrier has other potentially serious implications. First, it is this carrier status that is likely to show up as a false positive, causing considerable anxiety for parents until further tests uncover the full story. This anxiety can persist even after the child is pronounced healthy, and influence how parents react to the child, perceiving it, for example, as more fragile[49]. Second, reporting children’s carrier status to parents is a serious breach of respect for the privacy and autonomy of the child when it becomes an adult. One’s carrier status is only relevant when one is making marital and reproductive decisions, decisions adults have the right to make on their own. It is for the person herself to decide whether or not she wants to share her carrier status with her parents as she chooses her mate and decides whether or not to have children[50]. Third, testing newborns inevitably means one is testing the parents as well. If a newborn has one copy of the cystic fibrosis mutation, then at least one of her parents is a carrier as well. The American Academy of Pediatrics says that newborn screening should not be used as a “surrogate” for parental testing[51], but it is hard to see how to avoid that consequence. Grob argues that “[t]he state’s delivery of unsolicited genetic risk information to women of child-bearing age is a real threat to reproductive autonomy, yet a sustained dialogue about this consequence of universal screening is sorely lacking amid the willy-nilly rush to expand state programmes”[52]. This genetic revelation may be inevitable, but at least parents should know beforehand that, through the mechanism of newborn screening, they are essentially being screened as well. Otherwise they are like my friend Jack, totally blindsided by a phone call telling him about results of a test he had never been aware he was taking.

Ross and Waggoner argue that “[t]he mandatory nature of newborn screening is anachronistic in that it is the only testing of children that is performed without parental permission and was made mandatory despite national recommendations […] in favor of parental permission”[53]. An anachronism, however, is a chronological inconsistency usually understood as a “throwback” to an earlier time. In this sense, mandatory testing of newborns without parental permission would be considered a throwback to days before the bioethics revolution that vested consent in the individual. I fear, however, that rather than being a throwback, newborn screening without permission is a dispiriting harbinger of the future.

Noninvasive Prenatal Diagnosis

Finally we come to noninvasive testing for Down Syndrome via cell-free fetal nucleic acids. Current standard practice for screening and testing pregnant women for fetal chromosomal abnormalities is a mix of noninvasive and invasive screening and tests, carried out throughout the first and second trimesters.[54] An array of screening tools provides each pregnant woman with an individual risk assessment, but is not diagnostic and will not detect all chromosomal abnormalities.[55] Invasive testing — chorionic villus sampling (CVS) or amniocentesis — is extremely accurate, but carries with it a small but significant risk of miscarriage. Ironically, while the risk of trisomy increases with maternal age, maternal age is also associated with lessened fertility, so it is precisely those women with the highest risk of chromosomal abnormality who can least afford to lose a wanted pregnancy. Invasive testing is also time-consuming and labor intensive. Therefore, a noninvasive, highly accurate test for Down Syndrome is the “holy grail” of prenatal diagnosis, or at least one holy grail.

In October 2011, the company Sequenom announced that it was releasing a test that detects 99% of Down Syndrome cases via cell-free fetal DNA in maternal blood, in the first trimester of pregnancy. The company said the test is aimed at the estimated 750,000 pregnancies at high risk for Down Syndrome annually in the U.S.[56], but as the cost comes down, as it inevitably will, there is no reason to reserve it only for pregnancies at high risk for Down Syndrome, since the test itself is risk-free. The diagnostics company Natera is currently engaged in NIH-funded trials to validate clinical use of its parallel diagnostic technology[57].

There are enormous emotional and ethical issues attached to prenatal diagnosis. An oft-heard statistic is that 90% of pregnancies diagnosed with Down Syndrome are terminated, but that is 90% of a population that had already agreed to go ahead with testing, presumably with at least some openness to termination. Other people choose not to test because they would not terminate. When the gold standard for detection of Down Syndrome is an invasive test, i.e., CVS or amniocentesis, informed consent is a sine qua non. The risk of miscarriage obviously makes consent crucial, as each woman will evaluate and balance the risks in an individual way. But, to state the obvious, it would be unthinkable to perform CVS or amniocentesis without consent because 1) it is invasive, and 2) it is a stand-alone procedure that cannot be piggybacked onto something else. If you are having an amniocentesis you know you are having an amniocentesis — or at least that something very different and very specific is happening to you. Thus, CVS and amniocentesis are the focus of thoughtful, often anguished decision-making. Weighing the risks of having a baby with Down Syndrome against the risks of losing a healthy fetus forces couples to think about Down Syndrome and what a child with Down would mean for their family.

I am not extolling anguish for its own sake — there’s enough anguish in the world already — and I think that a decisive test for Down Syndrome that is noninvasive and risk-free is a wonderful thing. And I think it is great that women will be able to focus on the question of testing for Down Syndrome, detached from issues of possible miscarriage. But… I worry that this will become another opportunistic test that will often be performed without informed consent. Pregnant women are used to giving blood at virtually every prenatal visit; the same logistical arguments that push PSA testing without a prior discussion could easily lead to the same outcome here. There is already evidence that women are given inadequate information preceding the routine “triple” or “quadruple” screens (performed on maternal blood) currently in use[58]. I fear that screening without informed consent will become testing without informed consent in a fairly seamless way.

Conclusion

There are surely many candidates for the most salient questions in bioethics as we enter the 21st century. I am certain critics will point out that, with many millions of Americans still without health insurance, not to mention the third-world countries in which malaria netting and clean water are in tragically short supply, fussing over whether those of us lucky enough to get adequate healthcare are being tested without our consent is pure solipsism. Nonetheless, as medical technology presses forward, it is individuals who have to make decisions about its use. To be tested without one’s knowledge and consent is a slap to one’s autonomy that threatens to return us to the dark ages of medical paternalism. Paternalism, however, at least stems from a desire to benefit its object. Routinized testing on uninformed, unconsenting persons stems from less benign forces: economic pressures, fear of lawsuits, and lack of respect for individual decision making.