Category Archives: Publication

Why Mental Illness-Focused Gun Control May Be More Harmful Than Helpful

This article was originally published here.

Dr. Sara Gorman examines the risks involved with mental illness-focused gun control.

In the aftermath of the mass shooting earlier this year in Newtown, Connecticut, debates have been raging in the U.S. about what steps to take to prevent such tragedies in the future. In particular, policy officials and the public alike have been pondering whether more stringent controls on potential gun buyers and gun owners with mental illness should be implemented and what these controls might look like. Shortly after the Newtown shootings, Senator Marco Rubio called for guns to be “kept out of the hands of the mentally ill.” In a more extreme statement, the National Rifle Association (NRA) suggested an “active national database of the mentally ill.” A recent study by researchers at the Johns Hopkins Bloomberg School of Public Health found striking similarities in the opinions of gun-owners and non-gun-owners about restricting the ability of people with mental illness to own guns: 85% of all respondents supported requiring states to report to national background-check systems people who are prohibited from owning guns because they have been involuntarily committed or declared mentally incompetent by a court. Most respondents, gun-owners and non-gun-owners alike, opposed allowing people with mental illness to own guns. Tight restrictions on potential gun owners with mental illness are, it seems, a rare arena in which gun-owners and non-gun-owners can agree.

There is no question that guns pose a potentially serious problem for people with mental illness. Some forms of mental illness can be associated with heightened potential for violence, but, more importantly, the risk of death by suicide among depressed persons with access to guns is much higher than among those without. Nevertheless, are gun control efforts that require the names of people with mental illness to be kept in a national database such a good idea?

The truth is, we have to be much more careful about gun control efforts that single out people with mental illness. There are two main reasons to approach such laws with a healthy dose of caution: first, gun control efforts focused on mental illness risk exacerbating public stigma about the supposed violence of mental disorders; second, gun laws that collect the names of people with mental illness in national databases may deter people from seeking the care they desperately need.

Ample evidence suggests that stigma and discrimination against people with mental illness are often correlated with the perception that people with mental illness are inherently violent. People who believe mental illness is associated with violence are more likely to condone forced legal action and coerced treatment of people with mental illness, and may feel that victimizing and bullying them is somehow justified. The idea that mental illness and violence are closely related is quite common: a 2006 national survey found that 60% of Americans believed people with schizophrenia were likely to act violently toward another individual. Yet research has repeatedly established that psychiatric disorders do not make people more likely to act violently. Gun laws targeting people with mental illness are likely to reinforce the perception that mental illness and violence go hand in hand, and stigma and discrimination are likely to worsen as a result.

Gun laws targeting people with mental illness may in some instances save lives; suicide deaths, or even suicide attempts, might be averted. On the other hand, in addition to perpetuating the stigmatizing belief that people with mental illness are dangerous, gun laws focused on people with mental illness may involve measures that deter people from seeking psychiatric care. If people fear that the government and other parties will have access to their confidential mental health information, they may be far more reluctant to seek help in the first place. In the end, this kind of deterrence could cause more harm than good, especially since increased stigma and discrimination themselves often lead to a decrease in help-seeking behavior.

It is true that the U.S. mental health system is in need of reform and that strategies to detect people in danger of hurting themselves or others are desperately needed. Even so, it is difficult even for mental health professionals to predict the future violence potential of their patients. It is therefore not only misguided but potentially harmful to focus gun control efforts on people with psychiatric disorders, and such efforts will probably make very little difference in the U.S. homicide rate. It would be more worthwhile to focus gun control efforts not on mental illness per se but on alcohol abuse, where the association with gun violence is convincing. In Pennsylvania, for example, people convicted of more than three drunk-driving offenses may not purchase a gun. Keeping guns out of bars and other drinking establishments is also probably a wise move, and background checks for domestic violence are a useful measure for reducing gun violence in the home. As the U.S. reconsiders gun control legislation, it is important to recognize that some measures might do more harm than good. Paying closer attention to scientific evidence and remaining focused on the most effective strategies for reaching those most likely to commit violent acts must be the strategy going forward.


Filed under Publication

Should women who have undergone FGM be granted asylum in the U.S. on medical grounds?

This article was originally published here.

Since 1994, when a Nigerian woman and her two daughters were granted asylum in the U.S. based on fear of female genital mutilation (FGM) in their native country, the legal community has been avidly debating the question of whether FGM should be considered grounds for asylum. A 1996 case, In re Kasinga, established a precedent for granting asylum to women based on a well-founded fear of persecution in the form of FGM.

Today, however, the question remains controversial. There is no standard definition of “persecution,” a fear of which asylum seekers must demonstrate, and although “membership in a particular social group” may help an individual gain asylum, this category has not been officially extended to women subject to gender-based injustices. In addition, the Kasinga precedent does not apply to women who have already undergone FGM, who are almost never granted asylum, on the logic that they have no real grounds to fear further persecution. In other words, since the persecution has already occurred, there is no requirement to protect these women under the Refugee Act of 1980.

Although the law does not generally allow it, many legal scholars advocate granting asylum to women who have already undergone FGM. Whether granting such asylum on clinical grounds is legitimate, however, remains an open question. It is our belief that clinicians have a significant role to play in asserting that women who have undergone FGM have substantial grounds to be granted asylum in the U.S. and other Western countries.

There is abundant evidence that female genital mutilation results in both short-term and long-term obstetric and gynecologic damage. A World Health Organization (WHO) prospective study in six African countries found that women with FGM were more likely than women without FGM to experience postpartum hemorrhage, extended maternal hospital stays, stillbirth or early neonatal death, and low-birthweight infants. Several studies have established that the risk of HIV transmission is increased for women with FGM; the practice may thus be contributing to the ongoing HIV epidemic in sub-Saharan Africa. In the case of infibulation, in which the vaginal orifice is narrowed by cutting and apposing the labia minora and/or labia majora, complications include dysmenorrhea, stagnation of blood in the uterus or vagina, chronic pelvic infection, repeated urinary tract infections, chronic vaginitis, and dysuria.

These medical complications have frequently been cited as reasons to grant asylum to young women who fear future FGM if returned to their native countries. But what about women who have already undergone FGM? There are two essential reasons to ensure that these women are not returned to countries where FGM took place and where the practice is still sanctioned. First, deinfibulation, a surgical procedure performed in the United States and other Western countries, can reverse some of the anatomical damage done by female circumcision and reduce the risk of medical and obstetric complications. Women are unlikely to be able to access this procedure in their native countries. Moreover, those who do undergo reparative deinfibulation risk ostracism, stigmatization, discrimination, and even re-infibulation in their home countries. Given that clinicians have an ethical duty to recommend the reparative procedure for patients who are victims of FGM, there would seem to be an affirmative duty to ensure that these patients are not returned to an environment that would undo the benefits of surgical repair.

Second, it is essential to examine the psychological consequences of FGM in order to understand the risks of forced repatriation. Although it has often been assumed that FGM causes increased rates of mental disorders, especially trauma responses, measuring the adverse psychological effects of FGM is difficult, in part because of the pride some women feel in having taken part in an ancient cultural tradition; reactions to FGM can include feelings of pride, beauty, cleanliness, and faithfulness and respect to tradition. Despite these positive associations, several recent studies have found that women with FGM may be more prone to psychiatric disorders than women without FGM. In one study, circumcised women in Senegal showed significantly higher rates of PTSD, other psychiatric illnesses, and memory problems than uncircumcised women. In a more recent study of 4,800 pregnant women, 38% of whom had undergone FGM, 80% of those circumcised continued to have flashbacks to the FGM event, 58% had some form of affective disorder, 38% had anxiety disorders, and 30% had PTSD. Evidence has thus begun to accumulate that FGM does indeed have a significant impact on women’s mental health. A hallmark of PTSD is pathological “re-experiencing” of the original trauma and avoidance of cues related to the traumatic event. Returning a woman who has undergone FGM to the scene of her trauma clearly engenders substantial risk of worsening trauma-related psychiatric illness. This is especially unacceptable given that treatment for the psychological and psychiatric consequences of FGM is unlikely to be readily available in cultures that encourage the practice in the first place.

Sending women with psychological disorders resulting from FGM back to their native countries is akin to denying them a basic human right: healthcare. In cultures in which FGM is considered a point of pride, women are unlikely to be able to obtain mental healthcare for the psychological consequences of their circumcision experience. While it is absolutely essential that Western physicians not stigmatize these women, some of whom may value and take pride in their circumcision, it is equally important that the medical field take a more vocal stand on the reality of the psychological fallout from FGM. A heated legal debate about whether women who have already undergone FGM should be granted asylum continues. Clinicians should add their voices to this debate, armed with clinical evidence that FGM has serious consequences for physical and mental health.


Why are we so afraid of vaccines?

Article originally published here

Despite the official retraction of the 1998 Lancet study that suggested a connection between vaccines and autism, as of 2010, 1 in 4 U.S. parents still believed that vaccines cause autism.

This belief is often cited as one cause of rising rates of “philosophical exemptions” from vaccines among parents in the U.S. Twenty states allow philosophical exemptions for those who object to vaccination on the basis of personal, moral, or other beliefs. In recent years, rates of philosophical exemptions have increased, vaccination rates among young children have decreased, and resulting outbreaks of infectious disease among children have been observed in places such as California and Washington. In California, the rate of parents seeking philosophical exemptions rose from 0.5% in 1996 to 1.5% in 2007, and between 2008 and 2010 the number of kindergarteners attending schools in which 20 or more children were intentionally unvaccinated nearly doubled, from 1,937 to 3,675. Vaccination rates have also decreased across Europe, resulting in measles and rubella outbreaks in France, Spain, Italy, Germany, Switzerland, Romania, Belgium, Denmark, and Turkey.
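The growth implied by the California figures above can be checked with simple arithmetic; a minimal sketch using only the numbers reported in this paragraph:

```python
# Growth in California's philosophical-exemption figures, using the numbers
# cited above: exemption rates for 1996 and 2007, and kindergartener counts
# (children in schools with 20+ intentionally unvaccinated pupils) for 2008 and 2010.
rate_1996, rate_2007 = 0.5, 1.5          # percent of parents seeking exemptions
schools_2008, schools_2010 = 1937, 3675  # kindergarteners in high-exemption schools

rate_growth = rate_2007 / rate_1996      # exemption rate tripled over 11 years
school_growth = schools_2010 / schools_2008

print(f"Exemption rate grew {rate_growth:.1f}x between 1996 and 2007")
print(f"High-exemption-school enrollment grew {school_growth:.2f}x")  # ~1.90, "nearly doubled"
```

The second ratio comes out just under 2, consistent with the article's "nearly doubled."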

The current outbreak of about 1,000 cases of measles in Swansea, Wales is a jarring example of the serious effects of vaccine scares. A little over a decade ago, there were only a handful of measles cases in England and Wales, and the disease was considered effectively eliminated. Yet after Andrew Wakefield’s discredited 1998 study, measles vaccination rates plummeted, reaching their lowest levels in 2003-2004. There is evidence that the outbreak may be due in part to parents’ responses to media coverage: medical scare stories are known to affect health behavior generally, and reporting on MMR has suffered from a kind of “false balance,” conveying the sense that there is a legitimate and sizeable conflict in the medical community about the dangers of MMR when in reality those touting this “danger” represent a fringe minority.

It is easy to become mired in philosophical and ethical debates about who in these situations has the right to make these decisions. Should parents have the liberty to put their children and other children at risk of contracting often fatal vaccine-preventable diseases? Yet a more immediate question should be: what kinds of communication from doctors and public health officials could realistically assuage parents’ concerns about the risks associated with vaccination? In order to disabuse parents of unfounded notions about risks associated with vaccines, it is vital to understand how most people form perceptions of risk in the first place. Armed with a better understanding of public perceptions of risks associated with vaccination, doctors and public health officials can begin to craft communications strategies that specifically target these beliefs. In other words, we should be applying risk perception theory to the development of communications strategies to encourage vaccination of children.

In 1987, Paul Slovic published a landmark article in Science about how the public conceives of and responds to various risk factors. Slovic emphasized that lay people consistently understand risk differently than experts do. Experts tend to evaluate risk using quantitative measures such as morbidity and mortality rates. Yet the public may not understand risk this way. Qualitative risk characteristics, such as involuntary risks, or risks that originate from unknown or unfamiliar sources, may greatly influence the average person’s valuation of risk.

Risk perception theory may go a long way toward explaining why some parents still insist that vaccines cause disorders such as autism in the face of abundant evidence to the contrary. Research into risk perception indicates that vaccines are an excellent candidate for being perceived as high-risk. Several features of vaccines align with the characteristics most people consider high-risk: man-made risks are much more frightening than natural risks; risks seem more threatening if their benefits are not immediately obvious, and the benefits of vaccines against diseases such as measles and mumps are not obvious precisely because the illnesses associated with these viruses (but not the viruses themselves) have largely been eliminated by vaccination; and a risk imposed by another body (in this case the government) feels riskier than a voluntary one. Research has shown that risk perception is a central component of health behavior: if parents view vaccines as high-risk, they will often act on that belief and choose not to vaccinate their children.

An interesting and rarely addressed question about vaccine anxiety in the U.S. and Europe is how culture-bound these fears are. Can we find the same or similar fears of vaccines in low- and middle-income countries? Cross-cultural comparisons might help us understand the whole phenomenon better. Recent, tragic events have demonstrated resistance to foreign vaccine programs in Nigeria and Pakistan, spurred by the belief that vaccines were being used to sterilize Muslim children. In general, however, social resistance to vaccines and fear that vaccines cause illnesses such as autism are less common in low- and middle-income countries, in part because death from vaccine-preventable illness is more visible and the desire for vaccines is therefore more immediate. There is, however, some evidence that confusion about and fear of new vaccines, including doubts about their efficacy, exists in some low- and middle-income countries. The resistance to polio vaccination programs in Nigeria and Pakistan reflects a belief, also held by many parents in the U.S., that vaccines contain harmful materials and that government officials are either withholding this information or deliberately covering it up. At the same time, anti-vaccine sentiments, although widespread, are often bound by culture and may in some cases even serve as a proxy for other culturally based fears. These fears, heterogeneous as they are, are often constructed from local socially and politically informed concepts of risk rather than from close analysis of the actual risk data. This is an instance in which understanding how individuals assess risk might be more helpful than presenting the actual evidence on vaccine risks over and over to a skeptical population. Yet even cross-cultural perspectives indicate that there is something fundamental about vaccines that can stir diverse fears in people. Although the content of these fears might differ, I would argue that the fundamental cause is the same: vaccines, as man-made, unfamiliar substances injected into the body, are a classic candidate for high risk perception.

Understanding where persistent fears of vaccination originate is the first step in effectively dispelling them. Perhaps reminding people of other man-made inventions with crucial benefits would help assuage fears of the “unnatural” vaccine. Whether this particular strategy would work is an empirical question that merits urgent scientific inquiry. Isolating the precise elements that constitute irrational fears of vaccination is a vital component of designing effective public health campaigns to encourage parents to immunize their children against devastating illnesses.


The importance of improving mental health research in developing countries

Originally published at The Pump Handle

In response to the realization that between 16% and 49% of people in the world have psychiatric or neurological disorders, and that most of these individuals live in low- and middle-income countries, the World Health Organization (WHO) launched the Mental Health Gap Action Programme in 2008 to provide services for priority mental health disorders. This focus on services is essential, but the WHO ran into a significant problem when confronting mental health disorders in the developing world: a lack of research made it difficult to determine which mental health disorders should be prioritized and how best to reach individuals in need of care.

In 2011, the WHO embarked on a report entitled “No health without research.” The release of the report was recently postponed, but the problem it identifies remains no less dire: to improve health systems in low- and middle-income countries, support for more research in epidemiology, healthcare policy, and healthcare delivery within these countries is essential.

Over the course of the past year and a half, PLoS Medicine has published a series of papers corresponding with this theme. In one paper, M. Taghi Yasamy and colleagues emphasize the importance of scaling up resources for mental health research in particular. This research, they explain, will help policymakers determine directions for improving policy and delivery of mental healthcare. Advancing this research will be challenging, though, because good governance for mental health research in developing countries is lacking.

Some of the most immediate problems with mental health research in developing countries are financial. Most developing countries lack institutions like the National Institute of Mental Health (NIMH) to help fund and structure research. Physicians and mental health professionals often have no incentive to conduct research because providing other health services is much more lucrative. In some cases, as in many countries in Latin America, researchers must fund their own research and experience no financial gain as a result of conducting research.

Yet financial reasons are not the only reasons for the lack of mental health research in developing countries. Restructuring medical education could go a long way toward preparing physicians to participate in research. While research is valued as a key part of medical education and career success in the United States, it is not a determining factor in obtaining a residency or achieving academic success in low-income countries, so many physicians-in-training have little incentive to contribute to research initiatives. Making research a fundamental part of success in medical training could help make universities in low- and middle-income countries the research centers they are in high-income countries.

Even when clinicians and scientists in low- and middle-income countries are able to conduct mental health research, they often find it difficult to publish their findings in prestigious, widely circulated international medical journals. Researchers from developing countries often struggle to meet the requirements of indexed journals because of a lack of access to information, a lack of guidance in research design and statistical analysis, and difficulty communicating in foreign languages. They often work in research centers or universities that are not considered “prestigious” on an international scale and may not attract the attention of international journals, whose editors may be more inclined to take seriously submissions from authors at big-name universities. Another serious obstacle is language, since most top journals publish in English; procuring better translation services for scientists in developing countries could be key to overcoming the dearth of publications from these parts of the world.

Policymakers and providers in developing countries may also struggle to learn about findings published in expensive journals for which their institutions cannot afford subscriptions. Open access policies represent one way to alleviate some of the problems mental health researchers in developing countries confront. Free access to a wider body of research published in highly-regarded journals could vastly improve mental health research in developing countries and help researchers attract the attention of these high-level journals.

Mental health interventions that truly help communities in low- and middle-income countries cannot succeed if data on epidemiology of mental disorders, current problems in the delivery of healthcare services, and evidence-based solutions are not available. A survey of mental health research priorities in low- and middle-income countries in 2009 found that stakeholders and researchers ranked three types of research as most important: epidemiological studies of burden and risk factors, health systems research, and social sciences research. Researchers and stakeholders agreed that attending to the growing problems of depression, anxiety, and substance abuse disorders, among other frequently occurring mental disorders, was dependent on procuring better resources for research.

Improving service gaps in mental healthcare is vital, especially in light of a growing epidemic of mental illness globally. But this work cannot be done without more research to identify the problems and evidence-based solutions that will help bring mental healthcare to all those in need.


Economic development and mortality: Samuel Preston’s 1975 classic

Originally published at The Pump Handle

In the late 1940s and 1950s, it became increasingly evident that mortality rates were falling rapidly worldwide, including in the developing world. In a 1965 analysis, economics professor George J. Stolnitz surmised that survival in the “underdeveloped world” was on the rise in part due to a decline in “economic misery” in these regions. But in 1975, Samuel Preston published a paper that changed the course of thought on the relationship between mortality and economic development.

In the Population Studies article “The changing relation between mortality and level of economic development,” Preston re-examined the relationship between mortality and economic development, producing a scatter diagram of relations between life expectancy and national income per head for nations in the 1900s, 1930s, and 1960s that has become one of the most important illustrations in population sciences. The diagram shows that life expectancy rose substantially in these decades no matter what the income level. Preston concluded that if income had been the only determining factor in life expectancy, observed income increases would have produced a gain in life expectancy of 2.5 years between 1938 and 1963, rather than the actual gain of 12.2 years. Preston further concluded that “factors exogenous to a country’s current level of income probably account for 75-90% of the growth in life expectancy for the world as a whole between the 1930s and the 1960s” and that income growth accounts for only 10-25% of this gain in life expectancy.
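Preston's decomposition can be reproduced from the two figures he reports; a minimal arithmetic sketch (his published 75-90% and 10-25% ranges reflect uncertainty around these point estimates):

```python
# Reproducing Preston's decomposition of the 1938-1963 life-expectancy gain,
# using the figures cited above.
actual_gain = 12.2       # years of life expectancy actually gained, 1938-1963
income_only_gain = 2.5   # years attributable to income growth alone

exogenous_share = (actual_gain - income_only_gain) / actual_gain
income_share = income_only_gain / actual_gain

print(f"Exogenous factors: {exogenous_share:.0%}")  # ~80%, within Preston's 75-90% range
print(f"Income growth:     {income_share:.0%}")     # ~20%, within his 10-25% range
```

The point estimate of roughly 80% exogenous sits squarely inside Preston's stated 75-90% band, which is what makes the "income is not the main driver" conclusion so striking.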

Preston’s next main task was to consider what these “exogenous factors” might be. He proposed that a number of factors besides a nation’s income level had contributed to mortality trends in both more and less developed countries over the previous quarter century. These factors were not necessarily developed in the country that enjoyed the increase in lifespan; rather, they were imported, and are therefore, according to Preston, less dependent on endogenous wealth. The exact nature of these exogenous factors differed with the level of development of the nation in question: Preston identified vaccines, antibiotics, and sulphonamides in more developed areas, and insect control, sanitation, health education, and maternal and child health services in less developed areas, as the main contributors to increased life expectancy.

Preston’s paper continues to provide guidance in development theory and economics today. But there was and continues to be considerable resistance to Preston’s theory, mostly from economists. Shortly after Preston’s article appeared, Thomas McKeown published two books that argued essentially the opposite: that mortality patterns have everything to do with economic growth and standards of living. Pritchett and Summers argued in 1996 that national income growth naturally feeds into better education and health services, which in turn contribute to higher life expectancy.

How well does Preston’s analysis hold up today? For one thing, Preston did not foresee the seemingly intimate connection between development and the recent rapid increase in the incidence and prevalence of some chronic diseases. As developing nations urbanize and become more affluent, noncommunicable diseases such as cancer and heart disease, many secondary to “lifestyle” factors like obesity and lack of physical exercise, are now on the rise, with the potential to lower life expectancy significantly. So is wealthier healthier, to use the words of Pritchett and Summers? Not necessarily, as we are increasingly seeing.

Why is it so important to work out the relationship between health and wealth? If we assume that improvements in healthcare systems grow naturally out of increased wealth, then developing countries should focus primarily on economic growth in order to improve their healthcare. This must be true to a certain extent, but, as Preston is quick to point out, other factors affect the health of a nation, and it is not safe to assume that economic growth will automatically lead to improved life expectancy. Preston’s analysis suggests instead that health systems strengthening and biological innovation must take place alongside economic growth to ensure better health. Whether or not we completely agree with Preston’s assertion that wealthier is not necessarily healthier, it is certainly the case that his landmark article stimulated an essential conversation about the relationship between economic development and mortality that continues avidly to the present day.


Is Disease Eradication Always the Best Path?

Originally published at PLoS Speaking of Medicine

There is no question that the eradication of smallpox, a devastating illness costing millions of lives, was one of the greatest achievements of 20th-century medicine. The disease was triumphantly declared eradicated by the World Health Assembly in 1980. Smallpox eradication required extremely focused surveillance as well as the use of a strategy called “ring vaccination,” in which anyone who could have been exposed to a smallpox patient was vaccinated immediately. Why was smallpox eradication possible? For one thing, smallpox is easily and quickly recognized because of the hallmark rash associated with the illness. Second, smallpox can be transmitted only by humans. The lack of an animal reservoir makes controlling the illness much simpler.

The success of smallpox eradication has prompted persistent calls to eradicate other infectious diseases in the years since 1980. Unfortunately, eradication can be difficult or even impossible for many infectious diseases, and it is crucial to consider the features of each illness before concluding that pursuing eradication is the best approach. In the first place, it is important to be clear about what “eradication” means. Eradication refers to deliberate efforts to reduce the worldwide incidence of an infectious disease to zero. It is not the same as extinction, the complete destruction of all disease pathogens or of the vectors that transmit the disease. Elimination, a third concept, refers to the complete absence of a disease in a certain population at a certain point in time. Disease eradication is therefore one particular strategy for dealing with infectious diseases; other options exist that in some circumstances may be more desirable.

Can the pursuit of disease eradication ever be detrimental? It could be in the case of certain diseases that do not lend themselves easily to total eradication. A claim of eradication logically ends prophylactic efforts, reduces efforts to train health workers to recognize and treat the eradicated disease, and halts research on the disease and its causes. When eradication campaigns show some measure of success, financial support for the control of that illness plummets dramatically. Wide dissemination of information about eradication efforts without the certification of success can therefore prove detrimental. In these cases, complacency may prematurely replace much needed vigilance. If there is a reasonable chance of recurrence of the disease or if lifelong immunity against the disease is impossible, then attempting eradication may prove disastrous because infrastructure to control the disease would be lacking in the event of resurgence. Tracking down the remaining cases of an illness on the brink of eradication can be incredibly costly and divert government money in resource-poor nations from more pressing needs.

Another potential problem with disease eradication efforts is that, as a vertical approach, they may drain resources from horizontal approaches, such as capacity building and health-system strengthening. Some advocate a more “diagonal” approach that uses disease-specific interventions to drive improvements in the overall health system. Still others have argued that vertical approaches that treat one disease at a time may divert resources from primary healthcare and foster imbalances in local healthcare services. Vertical schemes may also produce a disproportionate dependence on international NGOs that can weaken local healthcare systems.

Malaria offers an excellent example of a case in which debate rages about whether eradication efforts would be successful. There are four species of single-cell parasite that cause malaria in humans, the most common of which are P. falciparum and P. vivax: P. falciparum is the most deadly and P. vivax the most widespread. These two species make it difficult to engineer a single, foolproof vaccine. Vaccine development is further complicated by the parasites’ ability to mutate, so that even contracting malaria does not confer lifelong immunity. Furthermore, malaria involves an animal vector (mosquitoes). It would clearly be a huge challenge, and perhaps impossible, to wipe out malaria completely. Beginning in 1955, there was a global attempt to eradicate malaria after it was realized that spraying houses with DDT was a cheap and effective way of killing mosquitoes. The initiative succeeded in eliminating malaria in nations with temperate climates and seasonal malaria transmission. Yet some nations, such as India and Sri Lanka, saw sharp reductions in malaria cases only to see sharp increases once efforts ceased. The project was abandoned in the face of widespread drug resistance, resistance to available insecticides, and unsustainable funding from donor countries. The experience of India and Sri Lanka demonstrates some of the negative effects of eradication campaigns that are not carried to fruition: the abandoned vector-control efforts led to the emergence of severe, resistant strains that were much harder to treat.

Recently, discussions of malaria eradication have begun again. At the moment, there is considerable political will and funding for malaria eradication efforts from agencies such as the Gates Foundation. The Malaria Eradication Research Agenda Initiative, in part funded by the Gates Foundation, has resulted in substantial progress in identifying what needs to be done to achieve eradication. Even so, proponents of malaria eradication admit that this goal would take at least 40 years to achieve. It is not clear how long current political will and funding will last. There are concerns that political will might wither in the face of the estimated $5 billion annual cost to sustain eradication efforts.

Disease eradication can clearly be an incredibly important public health triumph, as seen in the case of smallpox. But when should the strategy be employed and when is it best to avoid risks associated with eradication efforts that might fail? Numerous scientific, social, and economic factors surrounding the disease in question must be taken into consideration. Can the microbe associated with the disease persist and multiply in nonhuman species? Does natural disease or immunization confer lifelong immunity or could reinfection potentially occur? Is surveillance of the disease relatively straightforward or do long incubation periods and latent infection make it difficult to detect every last case of the illness? Are interventions associated with eradication of the disease, including quarantine, acceptable to communities globally? Does the net benefit of eradication outweigh the costs of eradication efforts? Proposals for disease eradication must be carefully weighed against potential risk. Rather than being presented as visionary, idealistic goals, disease eradication programs must be clearly situated in the context of the biological and economic aspects of the specific disease and the challenges it presents.

Leave a comment

Filed under Publication

Preventive care in medicine: Dugald Baird’s 1952 obstetrics analysis

Originally published at The Pump Handle

How much of a patient’s social context should physicians take into account? Is an examination of social factors contributing to disease part of the physician’s job description, or is the practice of medicine more strictly confined to treatment rather than prevention? In what ways should the physician incorporate public health, specifically prevention, into the practice of medicine?

These are the questions at the heart of Dugald Baird’s 1952 paper in The New England Journal of Medicine, “Preventive Medicine in Obstetrics.” The paper originated in 1951 as a Cutter Lecture, named after John Clarence Cutter, a 19th-century physician and professor of physiology and anatomy who allocated half of the net income of his estate to the founding of an annual lecture on preventive medicine. Baird was the first obstetrician to deliver a Cutter Lecture. His paper draws much-needed attention to the role of socioeconomic factors in pregnancy outcomes.

Baird begins by describing the Registrar General’s reports in Britain, which divide the population into five social classes. Social Class I comprises highly paid professionals while Social Class V encompasses the “unskilled manual laborers.” In between are the “skilled craftsmen and lower-salaried professional and clerical groups”; the categorization recognizes that job prestige as well as income is important in social class. Baird proceeds to present data on maternal and child health and mortality according to social group as classified by the Registrar General’s system. He makes several essential observations: social class makes relatively little difference in the stillbirth rate, but mortality rates in the first year of life are lowest for the highest social class (Social Class I) and highest for the lowest social class (Social Class V). Social inequality is thus felt most keenly in cases of infant death from infection, which Baird calls “a very delicate index of environmental conditions.”

Baird goes on to analyze data on stillbirths and child mortality from the city of Aberdeen, Scotland, which he chose because the number of annual primigravida (first-pregnancy) deliveries at the time was relatively small and therefore manageable from an analytic standpoint, and because the population in the early 1950s was relatively “uniform.” Comparing births in a public hospital with those in a private facility (called a “nursing home” in the paper, though not in the sense generally understood in the U.S. today), Baird found that many more premature and underweight babies died in the public hospital than in the private nursing home, even though only the former had medical facilities for the care of sick newborns. The difference could not, therefore, be explained by the quality of medical care in the two facilities.

Baird concludes that this discrepancy must have something to do with the health of the mothers. Upon closer examination, he recognizes that the mothers in the private nursing home are not only healthier but also consistently taller than the mothers in the public facility. According to Baird, the difference in height must stem from environmental conditions such as nutrition, a reasonable conclusion although Baird did not in fact have data on ethnicity or other factors that might also have contributed. As the environment deteriorates, the percentage of short women increases. Baird notes that height affects the size and shape of the pelvis, and that caesarean section is more common in shorter women than in taller women. Baird began classifying patients in the hospital into one of five physical and functional grades. Women with poorer “physical grades,” who also tended to be shorter, had higher fetal mortality rates. He also observes that most women under the age of 20 had low physical grades, stunted growth, and lower socioeconomic backgrounds. Baird spends some time examining the effects of age on childbearing, looking at women aged 15-19, 20-24, 25-29, 30-34, and over 35. He found that the most significant causes of fetal death in the youngest age group (15-19) were toxemia, fetal deformity, and prematurity, while fetal deaths in women aged 30-34 tended more frequently to be due to birth trauma and unexplained intrauterine death. The incidence of forceps delivery and caesarean section rose sharply with age, and labor lasting over 48 hours was much more common among the older age groups.

In a turn that was unusual at the time, Baird considers the emotional stress associated with difficult childbirth and quotes a letter from a woman who decided not to have any more children after the “terrible ordeal” of giving birth to her first child. This close consideration of the patient’s whole experience is a testament to Baird’s concern with the patient’s entire context, including socioeconomic status.

Baird concludes by making a series of recommendations for remedying social inequalities in birth outcomes, some of which make perfect sense and some of which now strike us as outrageously dated. An example of the latter is his suggestion that “the removal of barriers to early marriage” would improve birth outcomes among young women. In fact, we now know that early marriage can have a negative impact on women’s sexual health, sometimes increasing incidence of HIV/AIDS.

Despite the occasional “datedness” of Baird’s paper, his analysis is not only a public health classic in its attempt to bring a social perspective back into the practice of medicine but also a source of lessons that are still crucial today. Baird’s paper reminds us that gender is often at the very center of health inequities, and that maternal and infant mortality constitute a major area in which socioeconomic inequalities directly and visibly affect health outcomes. While maternal and infant mortality rates are not high in the developed world, they remain serious health problems in developing countries. Infant mortality in particular is a useful indicator of socioeconomic development. Most importantly, Baird’s paper, written in an age when the medical field was coming to rely increasingly on biology and technology, reminds us that medicine has much to gain from paying attention to the social factors that have a crucial impact on health.

Leave a comment

Filed under Publication

How do we perceive risk?: Paul Slovic’s landmark analysis

Originally published at The Pump Handle

In the 1960s, a rapid rise in nuclear technologies aroused unexpected panic in the public. Despite repeated affirmations from the scientific community that these technologies were indeed safe, the public feared both long-term dangers to the environment as well as immediate radioactive disasters. The disjunction between the scientific evidence about and public perception of these risks prompted scientists and social scientists to begin research on a crucial question: how do people formulate and respond to notions of risk?

Early research on risk perception assumed that people assess risk in a rational manner, weighing information before making a decision. On this view, providing people with more information should alter their perceptions of risk. Subsequent research has demonstrated that information alone will not assuage people’s irrational fears and sometimes outlandish ideas about what is truly risky. The psychological approach to risk perception theory, championed by psychologist Paul Slovic, examines the particular heuristics and biases people employ to interpret the amount of risk in their environment.

In a classic review article published in Science in 1987, Slovic summarized various social and cultural factors that lead to inconsistent evaluations of risk in the general public. Slovic emphasizes the essential way in which experts’ and laypeople’s views of risk differ. Experts judge risk in terms of quantitative assessments of morbidity and mortality. Yet most people’s perception of risk is far more complex, involving numerous psychological and cognitive processes. Slovic’s review demonstrates the complexity of the general public’s assessment of risk through its cogent appraisal of decades of research on risk perception theory.

Slovic’s article focuses its attention on one particular type of risk perception research, the “psychometric paradigm.” This paradigm, formulated largely in response to the early work of Chauncey Starr, attempts to quantify perceived risk using psychophysical scaling and multivariate analysis. The psychometric approach thus creates a kind of taxonomy of hazards that can be used to predict people’s responses to new risks.

Perhaps more important than quantifying people’s responses to various risks is identifying the qualitative characteristics that lead to specific valuations of risk. Slovic masterfully summarizes the key qualitative characteristics that result in judgments that a certain activity is risky or not. People tend to be intolerant of risks that they perceive as uncontrollable, catastrophic in potential, fatal in their consequences, or inequitable in their distribution of risks and benefits. Slovic notes that nuclear weapons and nuclear power score high on all of these characteristics. Also poorly tolerated are risks that are unknown, new, and delayed in their manifestation of harm; these factors tend to characterize chemical technologies in public opinion. The higher a hazard scores on these factors, the higher its perceived risk and the more people want to see the risk reduced, leading to calls for stricter regulation. Slovic ends his review with a nod toward sociological and anthropological studies of risk, noting that anxiety about risk may in some cases be a proxy for other social concerns. Many perceptions of risk are, of course, also socially and culturally informed.
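The logic of this factor-based account can be illustrated with a toy calculation. The sketch below rates a few hazards on the two broad factors Slovic describes, "dread" (uncontrollable, catastrophic, fatal, inequitable) and "unknownness" (unobservable, new, delayed in harm), and ranks them by a combined perceived-risk score. The hazards, ratings, and weights are all invented for illustration; they are not Slovic's data, though the heavier weight on dread echoes his finding that it is the stronger predictor of lay risk judgments.

```python
# Toy illustration of the psychometric paradigm: hazards rated on
# "dread" and "unknown" factors (0-1 scales); a higher combined score
# suggests higher perceived risk and stronger demand for regulation.
# All ratings and weights below are hypothetical.

hazards = {
    # name: (dread, unknown)
    "nuclear power":       (0.9, 0.8),
    "pesticides":          (0.6, 0.7),
    "driving automobiles": (0.3, 0.1),
    "swimming":            (0.2, 0.1),
}

def perceived_risk(dread, unknown, w_dread=0.6, w_unknown=0.4):
    """Weighted combination of the two factor scores; dread is
    weighted more heavily in this sketch."""
    return w_dread * dread + w_unknown * unknown

# Rank hazards from most to least dreaded in this toy model.
ranked = sorted(hazards, key=lambda h: perceived_risk(*hazards[h]), reverse=True)
for name in ranked:
    print(f"{name}: {perceived_risk(*hazards[name]):.2f}")
```

Note how driving lands near the bottom despite causing far more actual deaths than nuclear power: familiarity and controllability, not mortality statistics, drive the score.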

Slovic’s analysis goes a long way in explaining why people persist in extreme fears of nuclear energy while being relatively unafraid of driving automobiles, even though the latter has caused many more deaths than the former. The fact that there are so many automobile accidents enables the public to feel that it is capable of assessing the risk. In other words, the risk seems familiar and knowable. There is also a low level of media coverage of automobile accidents, and this coverage never depicts future or unknown events resulting from an accident. On the other hand, nuclear energy represents an unknown risk, one that cannot be readily analyzed by the public due to a relative lack of information. Nuclear accidents evoke widespread media coverage and warnings about possible future catastrophes. In this case, a lower risk phenomenon (nuclear energy) actually induces much more fear than a higher risk activity (driving an automobile).

Importantly, Slovic correctly predicted 25 years ago that DNA experiments would someday become controversial and frighten the public. Today, although biologists insist that genetically modified crops pose no risk to human health, many members of the public fear that such crops will cause cancer and birth defects. The effects of genetically modified crops on ecosystems may be a legitimate cause for concern, but fears about their supposed ill effects on human health are scientifically baseless. These crops grow under adverse circumstances and resist infection and destruction by insects, and therefore have the potential to dramatically improve nutritional status in areas of the world tormented by hunger. Yet the unfamiliarity of the phenomenon and its delayed benefits make it a good candidate for inducing public fear and skepticism.

There is a subtle yet passionate plea beneath the surface of Slovic’s review. The article calls for assessments of risk to be more accepting of the role of emotions and cognition in public conceptions of danger. Rather than simply disseminating more and more information about, for example, the safety of nuclear power, experts should be attentive to and sensitive about the public’s broad conception of risk. The goal of this research is a vital one: to aid policy-makers by improving interaction with the public, by better directing educational efforts, and by predicting public responses to new technologies. In the end, Slovic argues that risk management is a two-way street: just as the public should take experts’ assessments of risk into account, so should experts respect the various factors, from cultural to emotional, that result in the public’s perception of risk.

Leave a comment

Filed under Publication

The infelicities of quarantine

Originally published at PLoS Speaking of Medicine

In 2009, as panic struck global health systems confronted with the H1N1 flu pandemic, health officials worldwide immediately invoked a familiar strategy: quarantine. In Hong Kong, 300 hotel guests were quarantined in their hotel for at least a week after one guest came down with H1N1. Such extreme measures raise important questions about quarantine. How do we regulate quarantine in practice? How do we prevent this public health measure from trampling civil liberties?

Quarantine as a method of containing infectious disease might be as old as the ancient Greeks, who implemented strategies to “avoid the contagious.” Our oldest and most concrete evidence of quarantine comes from Venice circa 1374, when, fearing the plague, the city enacted a forty-day quarantine for arriving ships, during which passengers had to remain at the port and could not enter the city. In 1893, the United States enacted the National Quarantine Act, which created a national system of quarantine and permitted state-run regulations, including routine inspection of immigrant ships and cargoes.

“Quarantine” must be differentiated from “isolation.” While isolation refers to the separation of people already infected with a particular contagious disease, quarantine is the separation of people who have been exposed to a certain illness but are not yet known to be infected. Quarantine is particularly important when a disease can be transmitted even before the individual shows signs of illness. Although quarantine’s origins are ancient, it is still a widely used intervention. For example, the U.S. is authorized to quarantine individuals exposed to the following infectious diseases: cholera, diphtheria, infectious tuberculosis, plague, smallpox, yellow fever, viral hemorrhagic fevers, SARS, and pandemic influenza. Federal authorities may quarantine individuals at U.S. ports of entry.

The history of quarantine is intimately intertwined with xenophobia. There is no question that quarantine has been frequently abused, serving as a proxy for discrimination against minorities. This was especially true in late nineteenth- and early twentieth-century America, coinciding with large numbers of new immigrants entering the country. A perfect example of the enmeshed history of quarantine abuse and xenophobia occurred in 1900 in San Francisco. After an autopsy of a deceased Chinese man found bacteria suspected to cause bubonic plague, the city of San Francisco restricted all Chinese residents from traveling outside of the city without evidence that they had been vaccinated against the plague. In 1894, confronted with a smallpox epidemic, Milwaukee forcibly quarantined immigrants and poor residents of the city in a local hospital. In these cases, quarantine served as a method of containing and controlling ethnic minorities and immigrants whose surging presence in the U.S. was mistrusted.

A more recent example stems from the beginning of the AIDS epidemic in the early 1980s. In 1986, Cuba began universal HIV testing and quarantined everyone who tested positive for HIV. In 1985, officials in the state of Texas contemplated adding AIDS to the list of quarantinable diseases. These strategies were conceived amid panic and uncertainty about the mode of transmission of HIV/AIDS. In retrospect, we know that instituting quarantine for HIV would have been not only ineffective but also a severe violation of individual liberties. Early in the AIDS epidemic, some individuals even called for the mass quarantine of gay men, showing how quarantine can be wielded as a weapon against particular groups, such as immigrants and homosexuals. Because of their extreme nature and their recourse to arguments about protecting public safety, quarantine laws are especially prone to abuse of the sort witnessed in these cases.

How can we prevent quarantine laws from being abused? For one thing, these laws must be as specific as possible. How long can someone be quarantined before being permitted to appeal to the justice system? In what kinds of facilities should quarantined individuals be kept? The answer to this question would depend on the illness, type of exposure, and risk of contracting the disease, but in general, places of quarantine should never include correctional facilities. How are quarantined individuals monitored? How long can they be kept in quarantined conditions without symptoms before it is determined that they pose no public health risk? Quarantine laws should be sufficiently flexible to be amended according to updated knowledge about modes of transmission in the case of new or emerging infectious diseases. Quarantine measures should not be one-size-fits-all but modified according to scientific evidence relating to the disease in question. Transparency in all government communications about quarantine regulations must be standard in all cases. Most importantly, science should determine when to utilize quarantine. In order to quarantine an individual, the mode of transmission must be known, transmission must be documented to be human to human, the illness must be highly contagious, and the duration of the asymptomatic incubation period must be known. Without these scientific guidelines, quarantine may be subject to serious and unjust abuse.
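The scientific preconditions named above amount to a checklist, every item of which must be satisfied before quarantine is defensible. A minimal sketch of that logic, with hypothetical disease profiles invented for illustration:

```python
# Sketch of the four scientific preconditions for quarantine named in
# the text; a disease must satisfy all of them before quarantine is
# scientifically justified. The example profiles are hypothetical.

from dataclasses import dataclass

@dataclass
class DiseaseProfile:
    transmission_mode_known: bool      # is the mode of transmission known?
    human_to_human_documented: bool    # is human-to-human spread documented?
    highly_contagious: bool            # is the illness highly contagious?
    incubation_period_known: bool      # is the asymptomatic incubation period known?

def quarantine_justified(d: DiseaseProfile) -> bool:
    """All four criteria must hold; any single unknown rules it out."""
    return (d.transmission_mode_known
            and d.human_to_human_documented
            and d.highly_contagious
            and d.incubation_period_known)

# A SARS-like profile meets every criterion...
sars_like = DiseaseProfile(True, True, True, True)
# ...whereas a newly emerged illness with an unknown incubation
# period does not, so quarantine would be premature.
novel_illness = DiseaseProfile(True, True, True, False)

print(quarantine_justified(sars_like))     # True
print(quarantine_justified(novel_illness)) # False
```

The conjunctive structure is the point: unlike a weighted score, a checklist of this kind offers no way to trade certainty on one criterion against ignorance on another.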

In the case of infectious diseases with long incubation periods, quarantine laws can be an effective means of containing possible epidemics. Similarly, in cases in which isolation alone is not effective in containing an infectious disease outbreak, quarantine might be useful. In the case of the 2003 SARS outbreak, measures that quarantined individuals with definitive exposure to SARS were effective in preventing further infections, although mass quarantines, such as the one implemented in Toronto, were relatively ineffective. Quarantine can become a serious encroachment on civil rights, but there are intelligent ways of regulating these laws to prevent such damaging outcomes. It is important not to confuse quarantine per se with the abuse of quarantine. At the same time, when quarantine has the capacity to marginalize certain populations and perpetuate unwarranted fear of foreigners, scientific certainty is essential before implementation.

Leave a comment

Filed under Publication

What can we learn from disease stigma’s long history?

Originally published at PLoS Speaking of Medicine

Although tremendous strides in fighting stigma and discrimination against people with HIV/AIDS have been made since the beginning of the epidemic, cases of extreme discrimination still find their way into the US court system regularly. Just this year, a man in Pennsylvania was denied a job as a nurse’s assistant when he revealed his HIV status to his employer. Even more appallingly, HIV-positive individuals in the Alabama and South Carolina prison systems are isolated from other prisoners, regularly kept in solitary confinement, and often given special armbands to denote their HIV-positive status. On a global level, HIV stigma can impede access to testing and healthcare, delaying diagnosis and treatment and substantially diminishing quality of life. Legal recourse often rights these wrongs for the individual, but this kind of discrimination spreads the very false beliefs about transmission that drive stigma. In the U.S., as of 2009, one in five Americans believed that HIV could be spread by sharing a drinking glass, swimming in a pool with someone who is HIV-positive, or touching a toilet seat.

Discrimination against people with HIV/AIDS is probably the most prominent form of disease stigma in the late 20th and early 21st centuries. But disease stigma has an incredibly long history, one that spans back to the medieval period’s panic over leprosy. Strikingly, in nearly every stage of history in reference to almost every major disease outbreak, one stigmatizing theme is constant: disease outbreaks are blamed on a “low” or “immoral” class of people who must be quarantined and removed as a threat to society. These “low” and “immoral” people are often identified as outsiders, on the fringes of society, including foreigners, immigrants, racial minorities, and people of low socioeconomic status.

Emerging infectious diseases in their early stages, especially when modes of transmission are unknown, are particularly likely to attract stigma. Consider the case of polio in America. In the early days of the polio epidemic, although polio struck poor and rich alike, public health officials cited poverty and a “dirty” urban environment as major drivers of the epidemic. The early response to polio was therefore often to quarantine low-income urban dwellers with the disease.

The 1892 outbreaks of typhus fever and cholera in New York City are two other good examples. Both outbreaks were blamed on Jewish immigrants from Eastern Europe. Upon arriving in New York, Jewish immigrants, healthy and sick alike, were quarantined in unsanitary conditions on North Brother Island at the command of the New York City Department of Health. Although it is important to take infectious disease control seriously, these measures stigmatized an entire group of immigrants instead of applying control measures based on sound scientific principles. This “us” versus “them” dynamic is common to stigma in general and shows how disease stigma can serve as a proxy for other fears, especially xenophobia and a general fear of outsiders.

The fear of the diseased outsider is still pervasive. Until 2009, for instance, HIV-positive individuals were not allowed to enter the United States. The lifting of the travel ban allowed for the 2012 International AIDS Conference to be held in the United States for the first time in over 20 years. The connection between foreign “invasion” and disease “invasion” had become so ingrained that an illness that presented no threat of transmission through casual contact became a barrier to travel.

What can we learn from this history? Stigma and discrimination remain serious barriers to care for people with HIV/AIDS and tuberculosis, among other illnesses. Figuring out ways to reduce this stigma should be seen as part and parcel of medical care. Recognizing disease stigma’s long history can give us insight into how exactly stigmatizing attitudes are formed and how they are disbanded. Instead of simply blaming the ignorance of people espousing stigmatizing attitudes about certain diseases, we should try to understand precisely how these attitudes are formed so that we can intervene in their dissemination.

We should also be looking to history to see what sorts of interventions against stigma may have worked in the past. How are stigmatizing attitudes relinquished? Is education the key, and if so, what is the most effective way of disseminating this kind of knowledge? How should media sources depict epidemiological data without stirring fear of certain ethnic, racial, or socioeconomic groups in which incidence of a certain disease might be increasing? How can public health experts and clinicians be sure not to inadvertently place blame on those afflicted with particular illnesses? Ongoing research into stigma should evaluate what has worked in the past. This might give us some clues about what might work now to reduce devastating discrimination that keeps people from getting the care they need.

Leave a comment

Filed under Publication