Category Archives: Publication

Why are we so afraid of vaccines?

Article originally published here

Despite the official retraction of the 1998 Lancet study that suggested a connection between vaccines and autism, as of 2010, 1 in 4 U.S. parents still believed that vaccines cause autism.

This belief is often cited as part of the cause of rising rates of "philosophical exemptions" from vaccination among U.S. parents. Twenty states allow such exemptions for those who object to vaccination on the basis of personal, moral, or other beliefs. In recent years, rates of philosophical exemptions have increased, vaccination rates among young children have decreased, and the resulting infectious disease outbreaks among children have been observed in places such as California and Washington. In California, the rate of parents seeking philosophical exemptions rose from 0.5% in 1996 to 1.5% in 2007, and the number of kindergartners attending schools in which 20 or more children were intentionally unvaccinated nearly doubled between 2008 and 2010, from 1,937 to 3,675. Vaccination rates have also decreased across Europe, resulting in measles and rubella outbreaks in France, Spain, Italy, Germany, Switzerland, Romania, Belgium, Denmark, and Turkey.

The current outbreak of about 1,000 cases of measles in Swansea, Wales, is a jarring example of the serious effects of vaccine scares. A little over a decade ago, there were only a handful of measles cases in England and Wales, and the disease was considered effectively eliminated. Yet after Andrew Wakefield's discredited study appeared in 1998, measles vaccination rates plummeted, reaching their lowest levels in 2003-2004. There is evidence that the outbreak may be due in part to parents' responses to media reporting. Medical scare stories are known to affect health behavior in general, and the reporting on MMR has suffered from a kind of "false balance," conveying the sense that there is a legitimate and sizeable conflict in the medical community about the dangers of the vaccine when in reality those touting this "danger" represent a fringe minority.

It is easy to become mired in philosophical and ethical debates about who in these situations has the right to make these decisions. Should parents have the liberty to put their own and other children at risk of contracting potentially fatal vaccine-preventable diseases? Yet a more immediate question should be: what kinds of communication from doctors and public health officials could realistically assuage parents' concerns about the risks associated with vaccination? In order to disabuse parents of unfounded notions about the risks associated with vaccines, it is vital to understand how most people form perceptions of risk in the first place. Armed with a better understanding of public perceptions of the risks associated with vaccination, doctors and public health officials can begin to craft communications strategies that specifically target these beliefs. In other words, we should be applying risk perception theory to the development of communications strategies to encourage vaccination of children.

In 1987, Paul Slovic published a landmark article in Science about how the public conceives of and responds to various risk factors. Slovic emphasized that laypeople consistently understand risk differently than experts do. Experts tend to evaluate risk using quantitative measures such as morbidity and mortality rates, but the public may not understand risk this way. Qualitative characteristics of a risk, such as whether it is involuntary or originates from an unknown or unfamiliar source, may greatly influence the average person's valuation of it.

Risk perception theory may go a long way toward explaining why some parents still insist that vaccines cause disorders such as autism in the face of abundant evidence to the contrary. Research into risk perception indicates that vaccines are an excellent candidate for being perceived as high-risk. Several features of vaccines align with characteristics most people consider high-risk: man-made risks are much more frightening than natural risks; risks seem more threatening if their benefits are not immediately obvious, and the benefits of vaccines against diseases such as measles and mumps are not immediately obvious precisely because the illnesses caused by these viruses, though not the viruses themselves, have largely been eliminated by vaccination; and a risk imposed by another body (in this case the government) feels riskier than a risk taken voluntarily. Research has shown that risk perception forms a central component of health behavior. This means that if parents view vaccines as high-risk, they will often behave in accordance with these beliefs and choose not to vaccinate their children.

An interesting and infrequently addressed question about vaccine anxiety in the U.S. and Europe is how culture-bound these fears are. Can we find the same or similar fears of vaccines in low- and middle-income countries? Cross-cultural comparisons might aid us in understanding the entire phenomenon better. Recent, tragic events have demonstrated resistance to foreign vaccine programs in Nigeria and Pakistan, spurred by the belief that vaccines were being used to sterilize Muslim children. In general, however, social resistance to vaccines and fear that vaccines cause illnesses such as autism are less common in low- and middle-income countries, in part because death from vaccine-preventable illnesses is more visible and the desire for vaccines is therefore more immediate. There is, however, some evidence that confusion about and fear of new vaccines, including doubts about their efficacy, do exist in some low- and middle-income countries. The examples of resistance to polio vaccination programs in Nigeria and Pakistan demonstrate a general belief, also held by many parents in the U.S., that vaccines contain harmful materials and that government officials are either not being appropriately forthcoming with this information or are deliberately covering it up. At the same time, anti-vaccine sentiments, although widespread, are often bound by culture and may in some cases even serve as a proxy for other culturally based fears. These fears, heterogeneous as they are, are often constructed from local socially and politically informed concepts of risk rather than from close analysis of the actual risk data. This is an instance in which understanding how individuals conduct risk analysis might be more helpful than presenting the actual evidence on the risks of vaccines over and over to a skeptical population. Yet even cross-cultural perspectives indicate that there is something fundamental about vaccines that can stir fears of diverse kinds. Although the content of these fears might differ, I would argue that the fundamental cause is the same: vaccines, as man-made, unfamiliar substances injected into the body, are a classic candidate for high risk perception.

Understanding where persistent fears of vaccination originate is the first step toward effectively dispelling them. Perhaps reminding people of other man-made inventions with crucial benefits would help assuage fears of the "unnatural" vaccine. Whether this particular strategy would help is an empirical question that merits urgent scientific inquiry. Isolating the precise elements that constitute irrational fears of vaccination is a vital component of designing effective public health campaigns to encourage parents to immunize their children against devastating illnesses.

The importance of improving mental health research in developing countries

Originally published at The Pump Handle

In response to the realization that between 16% and 49% of people in the world have psychiatric and neurological disorders, and that most of these individuals live in low- and middle-income countries, the World Health Organization (WHO) launched the Mental Health Gap Action Programme in 2008 to provide services for priority mental health disorders. This focus on services is essential, but the WHO ran into a significant problem when confronting mental health disorders in the developing world: a lack of research made it difficult to determine which mental health disorders should be prioritized and how best to reach individuals in need of care.

In 2011, the WHO embarked on a report entitled "No health without research." The release of the report was recently postponed, but the problem it identifies remains no less dire. In order to improve health systems in low- and middle-income countries, support for more research in epidemiology, healthcare policy, and healthcare delivery within these countries is essential.

Over the course of the past year and a half, PLoS Medicine has published a series of papers on this theme. In one paper, M. Taghi Yasamy and colleagues emphasize the importance of scaling up resources for mental health research in particular. This research, they explain, will help policymakers determine directions for improving policy and the delivery of mental healthcare. Advancing this research will be challenging, though, because good governance for mental health research is lacking in developing countries.

Some of the most immediate problems with mental health research in developing countries are financial. Most developing countries lack institutions like the National Institute of Mental Health (NIMH) to help fund and structure research. Physicians and mental health professionals often have no incentive to conduct research because providing other health services is much more lucrative. In some cases, as in many countries in Latin America, researchers must fund their own research and experience no financial gain as a result of conducting research.

Yet finances are not the only reason for the lack of mental health research in developing countries. Restructuring medical education could go a long way toward preparing physicians to participate in research. While research is valued as a key part of medical education and career advancement in the United States, it is not a determining factor for obtaining a residency position or achieving academic success in low-income countries. Many physicians-in-training thus have little incentive to contribute to research initiatives. Making research a fundamental part of success in medical training could help make universities in low- and middle-income countries the research centers they are in high-income countries.

Even when clinicians and scientists in low- and middle-income countries are able to conduct mental health research, they often find it difficult to publish their findings in prestigious, widely circulating international medical journals. Researchers from developing countries often struggle to meet the requirements of indexed journals because of lack of access to information, lack of guidance in research design and statistical analysis, and difficulty communicating in foreign languages. Researchers in developing countries often work in research centers or universities that are not considered “prestigious” on an international scale and may not garner the attention of international journals. Editors may be more likely to give serious consideration to submissions from authors at big-name universities. Another serious problem with publication of research from developing countries in prestigious medical and scientific journals is the language barrier, with most top journals being English-language. Procuring better translation services for scientists in developing countries could be key in overcoming the dearth of publications from these areas of the world.

Policymakers and providers in developing countries may also struggle to learn about findings published in expensive journals for which their institutions cannot afford subscriptions. Open access policies represent one way to alleviate some of the problems mental health researchers in developing countries confront. Free access to a wider body of research published in highly-regarded journals could vastly improve mental health research in developing countries and help researchers attract the attention of these high-level journals.

Mental health interventions that truly help communities in low- and middle-income countries cannot succeed if data on the epidemiology of mental disorders, current problems in the delivery of healthcare services, and evidence-based solutions are not available. A 2009 survey of mental health research priorities in low- and middle-income countries found that stakeholders and researchers ranked three types of research as most important: epidemiological studies of burden and risk factors, health systems research, and social sciences research. Researchers and stakeholders agreed that addressing the growing problems of depression, anxiety, and substance abuse disorders, among other frequently occurring mental disorders, depends on procuring better resources for research.

Improving service gaps in mental healthcare is vital, especially in light of a growing epidemic of mental illness globally. But this work cannot be done without more research to identify the problems and evidence-based solutions that will help bring mental healthcare to all those in need.

Economic development and mortality: Samuel Preston’s 1975 classic

Originally published at The Pump Handle

In the late 1940s and 1950s, it became increasingly evident that mortality rates were falling rapidly worldwide, including in the developing world. In a 1965 analysis, economics professor George J. Stolnitz surmised that survival in the “underdeveloped world” was on the rise in part due to a decline in “economic misery” in these regions. But in 1975, Samuel Preston published a paper that changed the course of thought on the relationship between mortality and economic development.

In the Population Studies article “The changing relation between mortality and level of economic development,” Preston re-examined the relationship between mortality and economic development, producing a scatter diagram of relations between life expectancy and national income per head for nations in the 1900s, 1930s, and 1960s that has become one of the most important illustrations in population sciences. The diagram shows that life expectancy rose substantially in these decades no matter what the income level. Preston concluded that if income had been the only determining factor in life expectancy, observed income increases would have produced a gain in life expectancy of 2.5 years between 1938 and 1963, rather than the actual gain of 12.2 years. Preston further concluded that “factors exogenous to a country’s current level of income probably account for 75-90% of the growth in life expectancy for the world as a whole between the 1930s and the 1960s” and that income growth accounts for only 10-25% of this gain in life expectancy.
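
A quick way to see where Preston's 75-90% figure comes from is to compare the income-attributable gain with the total gain he reports. The sketch below is a back-of-the-envelope check using only the two figures quoted above (2.5 years and 12.2 years); it is illustrative arithmetic, not a reproduction of Preston's actual decomposition.

```python
# Back-of-the-envelope check of Preston's decomposition, using only the two
# figures quoted in the paragraph above (not Preston's underlying data).
income_attributable_gain = 2.5   # years of gain predicted from income growth alone, 1938-1963
actual_gain = 12.2               # years of life expectancy actually gained, 1938-1963

income_share = income_attributable_gain / actual_gain   # ~0.20
exogenous_share = 1 - income_share                       # ~0.80

print(f"Share attributable to income growth: {income_share:.0%}")
print(f"Share attributable to exogenous factors: {exogenous_share:.0%}")
# Roughly 20% and 80%, consistent with Preston's 10-25% and 75-90% ranges.
```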

Preston’s next main task was to contemplate what these “exogenous factors” might be. Preston proposed that a number of factors aside from a nation’s level of income had contributed to mortality trends in more developed as well as in less developed countries over the previous quarter century.  These factors were not necessarily developed in the country that enjoyed an increase in lifespan but rather were imported and therefore are, according to Preston, less dependent on endogenous wealth. The exact nature of these “exogenous” factors differed according to the level of development of the nation in question.  Preston identified vaccines, antibiotics, and sulphonamides in more developed areas and insect control, sanitation, health education, and maternal and child health services in less developed areas as the main factors that contributed to increased life expectancy.

Preston’s paper continues to provide guidance in development theory and economics today. But there was and continues to be considerable resistance to Preston’s theory, mostly from economists. Shortly after Preston’s article appeared, Thomas McKeown published two books that argued essentially the opposite: that mortality patterns have everything to do with economic growth and standards of living. Pritchett and Summers argued in 1996 that national income growth naturally feeds into better education and health services, which in turn contribute to higher life expectancy.

How well does Preston’s analysis hold up today? For one thing, Preston did not foresee the seemingly intimate connection between development and the recent rapid rise in the incidence and prevalence of some chronic diseases. As developing nations urbanize and become more affluent, noncommunicable diseases such as cancer and heart disease, many secondary to “lifestyle” factors like obesity and lack of physical exercise, are on the rise, with the potential to lower life expectancy significantly. So is wealthier healthier, to use the words of Pritchett and Summers? Not necessarily, as we are increasingly seeing.

Why is it so important to work out the relationship between health and wealth? If we assume that improvements in healthcare systems grow naturally out of increased wealth, then developing countries should focus primarily on economic growth in order to improve their healthcare. This must be true to a certain extent, but, as Preston is quick to point out, there are other factors that affect the health of a nation, and it is not sufficient to assume that economic growth will automatically lead to improved life expectancy. Preston’s analysis emphasizes instead that health systems strengthening and biomedical innovation must take place alongside economic growth to ensure better health. Whether or not we can completely agree with Preston’s assertion that wealthier is not necessarily healthier, it is certainly the case that his landmark article stimulated an essential conversation about the relationship between economic development and mortality that continues avidly to the present day.

Is Disease Eradication Always the Best Path?

Originally published at PLoS Speaking of Medicine

There is no question that the eradication of smallpox, a devastating illness costing millions of lives, was one of the greatest achievements of 20th-century medicine. The disease was triumphantly declared eradicated by the World Health Assembly in 1980. Smallpox eradication required extremely focused surveillance as well as the use of a strategy called “ring vaccination,” in which anyone who could have been exposed to a smallpox patient was vaccinated immediately. Why was smallpox eradication possible? For one thing, smallpox is easily and quickly recognized because of the hallmark rash associated with the illness. Second, smallpox can be transmitted only by humans. The lack of an animal reservoir makes controlling the illness much simpler.

The success of smallpox eradication campaigns has resulted in persistent calls to eradicate other infectious diseases in the years since 1980. Unfortunately, eradication can be difficult and even impossible for many infectious diseases, and it is crucial to consider the features of each illness in order to reach a sound conclusion about whether the pursuit of eradication is the best approach. In the first place, it is important to be clear about what “eradication” means. Eradication refers to deliberate efforts to reduce the worldwide incidence of an infectious disease to zero. It is not the same as extinction, the complete destruction of all disease pathogens or of the vectors that transmit the disease. Elimination, a third concept, refers to the complete absence of a disease in a certain population at a certain point in time. Disease eradication therefore specifies a particular strategy for dealing with infectious diseases; other options exist that in some circumstances may be more desirable.

Can the pursuit of disease eradication ever be detrimental? It could be in the case of certain diseases that do not lend themselves easily to total eradication. A claim of eradication logically ends prophylactic efforts, reduces efforts to train health workers to recognize and treat the eradicated disease, and halts research on the disease and its causes. When eradication campaigns show some measure of success, financial support for the control of that illness plummets dramatically. Wide dissemination of information about eradication efforts without the certification of success can therefore prove detrimental. In these cases, complacency may prematurely replace much needed vigilance. If there is a reasonable chance of recurrence of the disease or if lifelong immunity against the disease is impossible, then attempting eradication may prove disastrous because infrastructure to control the disease would be lacking in the event of resurgence. Tracking down the remaining cases of an illness on the brink of eradication can be incredibly costly and divert government money in resource-poor nations from more pressing needs.

Another potential problem with disease eradication efforts is that, as a vertical approach, they may drain resources from horizontal approaches such as capacity building and health system strengthening. Some advocate a more “diagonal” approach that uses disease-specific interventions to drive improvements in the overall health system. Still others have argued that vertical approaches that treat one disease at a time may divert resources from primary healthcare and foster imbalances in local healthcare services. Vertical schemes may also produce disproportionate dependence on international NGOs, which can result in the weakening of local healthcare systems.

Malaria offers an excellent example of a case in which debate rages about whether eradication efforts would succeed. Four species of single-celled parasite cause malaria, the most common of which are P. falciparum and P. vivax; P. falciparum is the most deadly and P. vivax the most prevalent. The differences between these two species make it difficult to engineer a single, foolproof vaccine. Further complicating vaccine development is the parasites’ ability to mutate, which means that even contracting malaria does not confer lifelong immunity. Furthermore, malaria involves an animal vector (mosquitoes). It would clearly be a huge challenge, and perhaps impossible, to wipe out malaria completely. Beginning in 1955, there was a global attempt to eradicate malaria after it was realized that spraying houses with DDT was a cheap and effective way of killing mosquitoes. The initiative succeeded in eliminating malaria in nations with temperate climates and seasonal malaria transmission. Yet some nations, such as India and Sri Lanka, saw sharp reductions in malaria cases only to see sharp increases when efforts ceased. The situation in India and Sri Lanka demonstrates some of the negative effects of eradication campaigns that are not carried to fruition. The project was abandoned in the face of widespread drug resistance, resistance to available insecticides, and unsustainable funding from donor countries. This failure was detrimental because the abandoned vector control efforts led to the emergence of severe, resistant strains that were much harder to treat.

Recently, discussions of malaria eradication have begun again. At the moment, there is considerable political will and funding for malaria eradication efforts from agencies such as the Gates Foundation. The Malaria Eradication Research Agenda Initiative, in part funded by the Gates Foundation, has resulted in substantial progress in identifying what needs to be done to achieve eradication. Even so, proponents of malaria eradication admit that this goal would take at least 40 years to achieve. It is not clear how long current political will and funding will last. There are concerns that political will might wither in the face of the estimated $5 billion annual cost to sustain eradication efforts.

Disease eradication can clearly be an incredibly important public health triumph, as seen in the case of smallpox. But when should the strategy be employed and when is it best to avoid risks associated with eradication efforts that might fail? Numerous scientific, social, and economic factors surrounding the disease in question must be taken into consideration. Can the microbe associated with the disease persist and multiply in nonhuman species? Does natural disease or immunization confer lifelong immunity or could reinfection potentially occur? Is surveillance of the disease relatively straightforward or do long incubation periods and latent infection make it difficult to detect every last case of the illness? Are interventions associated with eradication of the disease, including quarantine, acceptable to communities globally? Does the net benefit of eradication outweigh the costs of eradication efforts? Proposals for disease eradication must be carefully weighed against potential risk. Rather than being presented as visionary, idealistic goals, disease eradication programs must be clearly situated in the context of the biological and economic aspects of the specific disease and the challenges it presents.

Preventive care in medicine: Dugald Baird’s 1952 obstetrics analysis

Originally published at The Pump Handle

How much of a patient’s social context should physicians take into account? Is an examination of social factors contributing to disease part of the physician’s job description, or is the practice of medicine more strictly confined to treatment rather than prevention? In what ways should the physician incorporate public health, specifically prevention, into the practice of medicine?

These are the questions at the heart of Dugald Baird’s 1952 paper in The New England Journal of Medicine, “Preventive Medicine in Obstetrics.” The paper originated in 1951 as a Cutter Lecture, named after John Clarence Cutter, a 19th-century physician and professor of physiology and anatomy. Cutter allocated half of the net income of his estate to the founding of an annual lecture on preventive medicine. Baird was the first obstetrician to deliver a Cutter Lecture. His paper draws much-needed attention to the role of socioeconomic factors in pregnancy outcomes.

Baird begins by describing the Registrar General’s reports in Britain, which divide the population into five social classes. Social Class I comprises highly paid professionals while Social Class V encompasses the “unskilled manual laborers.” In between are the “skilled craftsmen and lower-salaried professional and clerical groups”; the categorization recognizes that job prestige as well as income is important in social class. Baird proceeds to present data on maternal and child health and mortality according to social group as classified by the Registrar General’s system. He makes several essential observations: social class makes relatively little difference in the stillbirth rate, but mortality rates in the first year of life are lowest for the highest social class (Social Class I) and highest for the lowest social class (Social Class V). Social inequality is thus felt most keenly in cases of infant death from infection, which Baird calls “a very delicate index of environmental conditions.”

Baird goes on to analyze data on stillbirths and child mortality from the city of Aberdeen, Scotland, which he chose because the number of annual primigravida (first pregnancy) deliveries there was relatively small and therefore analytically manageable, and because the population in the early 1950s was relatively “uniform.” Comparing births in a public hospital with those in a private facility (called a “nursing home” in the paper, though not in the sense generally understood in the U.S. today), Baird found that many more premature and underweight babies died in the public hospital than in the private nursing home, even though only the former had medical facilities for the care of sick newborns. The difference could not, therefore, be explained by the quality of medical care in the two facilities.

Baird concludes that this discrepancy must have something to do with the health of the mothers. Upon closer examination, he recognizes that the mothers in the private nursing home were not only healthier but also consistently taller than the mothers in the public facility. According to Baird, the difference in height must have to do with environmental conditions such as nutrition, a reasonable conclusion, although Baird did not in fact have data available on ethnicity or other factors that might also have contributed. As the environment deteriorates, the percentage of short women increases. Baird notes that height affects the size and shape of the pelvis, and that caesarean section is more common in shorter women than in taller women. Baird began classifying patients in the hospital into one of five physical and functional classes. Women with poorer “physical grades,” who also tended to be shorter, had higher fetal mortality rates. He also observes that most women under the age of 20 had low physical grades, stunted growth, and lower socioeconomic backgrounds. Baird spends some time examining the effects of age on childbearing, looking at women aged 15-19, 20-24, 25-29, 30-34, and over 35. He found that the most significant causes of fetal death in the youngest age group (15-19) were toxemia, fetal deformity, and prematurity, while fetal deaths in women aged 30-34 tended more often to be due to birth trauma and unexplained intrauterine death. The incidence of forceps delivery and caesarean section rose sharply with age, and labor lasting over 48 hours was much more common among the older age groups.

In a turn that was unusual at the time, Baird considers the emotional stress associated with difficult childbirth and quotes a letter from a woman who decided not to have any more children after the “terrible ordeal” of giving birth to her first child. This close consideration of the patient’s whole experience is a testament to Baird’s concern with the patient’s entire context, including socioeconomic status.

Baird concludes by making a series of recommendations for remedying social inequalities in birth outcomes, some of which make perfect sense and some of which now strike us as outrageously dated. An example of the latter is his suggestion that “the removal of barriers to early marriage” would improve birth outcomes among young women. In fact, we now know that early marriage can have a negative impact on women’s sexual health, sometimes increasing incidence of HIV/AIDS.

Despite the occasional “datedness” of Baird’s paper, his analysis is not only a public health classic in its attempt to bring a social perspective back into the practice of medicine; it also contains lessons that are still crucial today. Baird’s paper reminds us that gender is often at the very center of health inequities, and that maternal and infant mortality constitute a major area in which socioeconomic inequalities directly and visibly affect health outcomes. While maternal and infant mortality rates are no longer high in the developed world, they still constitute serious health problems in developing countries, and infant mortality in particular can serve as a useful indicator of socioeconomic development. Most importantly, Baird’s paper, written in an age when the medical field was relying increasingly on biology and technology, reminds us that medicine has much to gain from paying attention to the social factors that have a crucial impact on health.

How do we perceive risk?: Paul Slovic’s landmark analysis

Originally published at The Pump Handle

In the 1960s, a rapid rise in nuclear technologies aroused unexpected panic in the public. Despite repeated affirmations from the scientific community that these technologies were indeed safe, the public feared both long-term dangers to the environment as well as immediate radioactive disasters. The disjunction between the scientific evidence about and public perception of these risks prompted scientists and social scientists to begin research on a crucial question: how do people formulate and respond to notions of risk?

Early research on risk perception assumed that people assess risk in a rational manner, weighing information before making a decision. This approach assumes that providing people with more information will alter their perceptions of risk. Subsequent research has demonstrated that providing more information alone will not assuage people’s irrational fears and sometimes outlandish ideas about what is truly risky. The psychological approach to risk perception theory, championed by psychologist Paul Slovic, examines the particular heuristics and biases people invent to interpret the amount of risk in their environment.

In a classic review article published in Science in 1987, Slovic summarized various social and cultural factors that lead to inconsistent evaluations of risk in the general public. Slovic emphasizes the essential way in which experts’ and laypeople’s views of risk differ. Experts judge risk in terms of quantitative assessments of morbidity and mortality. Yet most people’s perception of risk is far more complex, involving numerous psychological and cognitive processes. Slovic’s review demonstrates the complexity of the general public’s assessment of risk through its cogent appraisal of decades of research on risk perception theory.

Slovic’s article focuses its attention on one particular type of risk perception research, the “psychometric paradigm.” This paradigm, formulated largely in response to the early work of Chauncey Starr, attempts to quantify perceived risk using psychophysical scaling and multivariate analysis. The psychometric approach thus creates a kind of taxonomy of hazards that can be used to predict people’s responses to new risks.
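
As a rough illustration of what the psychometric paradigm does in practice, the sketch below rates a handful of hazards on a few qualitative characteristics and then reduces the ratings to two underlying factors. The hazards, the 1-7 ratings, and the characteristic labels are invented for illustration, and principal component analysis stands in here for the factor-analytic methods used in the actual psychometric studies, which involved far larger sets of hazards, characteristics, and respondents.

```python
# Toy illustration of the psychometric approach: rate hazards on qualitative
# characteristics, then reduce the ratings to a small number of factors.
# All numbers below are invented for illustration.
import numpy as np
from sklearn.decomposition import PCA

hazards = ["nuclear power", "handguns", "pesticides", "bicycles", "x-rays"]
# Columns: dread, catastrophic potential, unknown to those exposed, delayed harm (1-7 scale)
ratings = np.array([
    [6.5, 6.8, 5.9, 6.2],   # nuclear power
    [6.0, 4.5, 2.1, 1.5],   # handguns
    [4.8, 4.2, 5.5, 5.8],   # pesticides
    [2.0, 1.5, 1.2, 1.1],   # bicycles
    [3.0, 2.5, 4.8, 5.0],   # x-rays
])

# Two components loosely corresponding to Slovic's "dread risk" and "unknown risk" axes
pca = PCA(n_components=2)
scores = pca.fit_transform(ratings)

for hazard, (factor1, factor2) in zip(hazards, scores):
    print(f"{hazard:14s} factor1={factor1:+.2f}  factor2={factor2:+.2f}")
```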

Perhaps more important than quantifying people’s responses to various risks is identifying the qualitative characteristics that lead to specific valuations of risk. Slovic masterfully summarizes the key qualitative characteristics that lead people to judge an activity as risky or not. People tend to be intolerant of risks that they perceive as uncontrollable, as having catastrophic potential or fatal consequences, or as bearing an inequitable distribution of risks and benefits. Slovic notes that nuclear weapons and nuclear power score high on all of these characteristics. The public is also intolerant of risks that are unknown, new, and delayed in their manifestation of harm, factors that tend to characterize chemical technologies in public opinion. The higher a hazard scores on these factors, the higher its perceived risk and the more people want to see the risk reduced, leading to calls for stricter regulation. Slovic ends his review with a nod toward sociological and anthropological studies of risk, noting that anxiety about risk may in some cases be a proxy for other social concerns. Many perceptions of risk are, of course, also socially and culturally informed.

Slovic’s analysis goes a long way in explaining why people persist in extreme fears of nuclear energy while being relatively unafraid of driving automobiles, even though the latter has caused many more deaths than the former. The fact that there are so many automobile accidents enables the public to feel that it is capable of assessing the risk. In other words, the risk seems familiar and knowable. There is also a low level of media coverage of automobile accidents, and this coverage never depicts future or unknown events resulting from an accident. On the other hand, nuclear energy represents an unknown risk, one that cannot be readily analyzed by the public due to a relative lack of information. Nuclear accidents evoke widespread media coverage and warnings about possible future catastrophes. In this case, a lower risk phenomenon (nuclear energy) actually induces much more fear than a higher risk activity (driving an automobile).

Importantly, Slovic correctly predicted 25 years ago that DNA experiments would someday become controversial and frighten the public. Although the effects of genetically modified crops on ecosystems may be a legitimate cause for concern, fears about the supposed ill effects of these crops on human health are scientifically baseless. Yet even though biologists insist that genetically modified crops pose no risk to human health, many members of the public fear that they will cause cancer and birth defects. Such crops can grow under adverse conditions and resist infection and destruction by insects in regions of the world tormented by hunger, and therefore have the potential to dramatically improve nutritional status in countries plagued by starvation and malnutrition. Yet the unfamiliarity of the technology and its delayed benefits make it a prime candidate for public fear and skepticism.

There is a subtle yet passionate plea beneath the surface of Slovic’s review. The article calls for assessments of risk to be more accepting of the role of emotions and cognition in public conceptions of danger. Rather than simply disseminating more and more information about, for example, the safety of nuclear power, experts should be attentive to and sensitive about the public’s broad conception of risk. The goal of this research is a vital one: to aid policy-makers by improving interaction with the public, by better directing educational efforts, and by predicting public responses to new technologies. In the end, Slovic argues that risk management is a two-way street: just as the public should take experts’ assessments of risk into account, so should experts respect the various factors, from cultural to emotional, that result in the public’s perception of risk.

The infelicities of quarantine

Originally published at PLoS Speaking of Medicine

In 2009, as panic struck global health systems confronted with the H1N1 flu epidemic, a familiar strategy was immediately invoked by health officials worldwide: quarantine. In Hong Kong, 300 hotel guests were quarantined in their hotel for at least a week after one guest came down with H1N1. Such measures are certainly extreme, but they do raise important questions about quarantine. How do we regulate quarantine in practice? How do we prevent this public health measure from squashing civil liberties?

Quarantine as a method of containing infectious disease might be as old as the ancient Greeks, who implemented strategies to “avoid the contagious.” Our oldest and most concrete evidence of quarantine comes from Venice circa 1374. Fearing the plague, the city enacted a forty-day quarantine for incoming ships, during which passengers had to remain at the port and could not enter the city. In 1893, the United States enacted the National Quarantine Act, which created a national system of quarantine and permitted state-run regulations, including routine inspection of immigrant ships and cargoes.

“Quarantine” must be differentiated from “isolation.” While isolation refers to the separation of people infected with a particular contagious disease, “quarantine” is the separation of people who have been exposed to a certain illness but are not yet infected. Quarantine is particularly important in cases in which a disease can be transmitted even before the individual shows signs of illness. Although quarantine’s origins are ancient, it is still a widely used intervention. For example, the U.S. is authorized to quarantine individuals with exposure to the following infectious diseases: cholera, diphtheria, infectious tuberculosis, plague, smallpox, yellow fever, viral hemorrhagic fevers, SARS, and flu. Federal authorities may quarantine individuals at U.S. ports of entry.

The history of quarantine is intimately intertwined with xenophobia. There is no question that quarantine has been frequently abused, serving as a proxy for discrimination against minorities. This was especially true in late nineteenth- and early twentieth-century America, coinciding with large numbers of new immigrants entering the country. A perfect example of the enmeshed history of quarantine abuse and xenophobia occurred in 1900 in San Francisco. After an autopsy of a deceased Chinese man found bacteria suspected to cause bubonic plague, the city of San Francisco restricted all Chinese residents from traveling outside of the city without evidence that they had been vaccinated against the plague. In 1894, confronted with a smallpox epidemic, Milwaukee forcibly quarantined immigrants and poor residents of the city in a local hospital. In these cases, quarantine served as a method of containing and controlling ethnic minorities and immigrants whose surging presence in the U.S. was mistrusted.

A more recent example stems from the beginning of the AIDS epidemic in the early 1980s. In 1986, Cuba began universal HIV testing. Quarantines were instituted for all people testing positive for HIV infection. In 1985, officials in the state of Texas contemplated adding AIDS to the list of quarantinable diseases. These strategies were considered in a state of panic and uncertainty about the mode of transmission of HIV/AIDS. In retrospect, we know that instituting quarantine for HIV would have been not only ineffective but also a severe violation of individual liberties. Early in the AIDS epidemic, some individuals even called for the mass quarantine of gay men, indicating how quarantine could be used as a weapon against certain groups, such as immigrants and homosexuals. Because of their extreme nature and their recourse to arguments about protecting public safety, quarantine laws are especially prone to abuse of the sort witnessed in these cases.

How can we prevent quarantine laws from being abused? For one thing, these laws must be as specific as possible. How long can someone be quarantined before being permitted to appeal to the justice system? In what kinds of facilities should quarantined individuals be kept? The answer to this question would depend on the illness, type of exposure, and risk of contracting the disease, but in general, places of quarantine should never include correctional facilities. How are quarantined individuals monitored? How long can they be kept in quarantined conditions without symptoms before it is determined that they pose no public health risk? Quarantine laws should be sufficiently flexible to be amended according to updated knowledge about modes of transmission in the case of new or emerging infectious diseases. Quarantine measures should not be one-size-fits-all but modified according to scientific evidence relating to the disease in question. Transparency in all government communications about quarantine regulations must be standard in all cases. Most importantly, science should determine when to utilize quarantine. In order to quarantine an individual, the mode of transmission must be known, transmission must be documented to be human to human, the illness must be highly contagious, and the duration of the asymptomatic incubation period must be known. Without these scientific guidelines, quarantine may be subject to serious and unjust abuse.

In the case of infectious diseases with long incubation periods, quarantine laws can be an effective means of containing possible epidemics. Similarly, in cases in which isolation alone is not effective in containing an infectious disease outbreak, quarantine might be useful. In the case of the 2003 SARS outbreak, measures that quarantined individuals with definitive exposure to SARS were effective in preventing further infections, although mass quarantines, such as the one implemented in Toronto, were relatively ineffective. Quarantine can become a serious encroachment on civil rights, but there are intelligent ways of regulating these laws to prevent such damaging outcomes. It is important not to confuse quarantine per se with the abuse of quarantine. At the same time, when quarantine has the capacity to marginalize certain populations and perpetuate unwarranted fear of foreigners, scientific certainty is essential before implementation.

What can we learn from disease stigma’s long history?

Originally published at PLoS Speaking of Medicine

Although tremendous strides in fighting stigma and discrimination against people with HIV/AIDS have been made since the beginning of the epidemic, cases of extreme discrimination still find their way into the U.S. court system regularly. Just this year, a man in Pennsylvania was denied a job as a nurse’s assistant when he revealed his HIV status to his employer. Even more appallingly, HIV-positive individuals in the Alabama and South Carolina prison systems are isolated from other prisoners, regularly kept in solitary confinement, and often given special armbands to denote their HIV-positive status. On a global level, HIV stigma can lead to difficulty accessing testing and healthcare, which almost certainly has a substantial impact on the quality of an individual’s life. Legal recourse often rights these wrongs for the individual, but this kind of discrimination spreads false beliefs about transmission, the very driver of stigma. In the U.S., as of 2009, one in five Americans believed that HIV could be spread by sharing a drinking glass, swimming in a pool with someone who is HIV-positive, or touching a toilet seat.

Discrimination against people with HIV/AIDS is probably the most prominent form of disease stigma of the late 20th and early 21st centuries. But disease stigma has an incredibly long history, one that stretches back to the medieval panic over leprosy. Strikingly, at nearly every stage of history and with almost every major disease outbreak, one stigmatizing theme is constant: outbreaks are blamed on a “low” or “immoral” class of people who must be quarantined and removed as a threat to society. These “low” and “immoral” people are often identified as outsiders on the fringes of society, including foreigners, immigrants, racial minorities, and people of low socioeconomic status.

Emerging infectious diseases in their early stages, particularly when modes of transmission are unknown, are especially likely to provoke stigma. Consider the case of polio in America. In the early days of the polio epidemic, although polio struck poor and rich alike, public health officials cited poverty and a “dirty” urban environment as major drivers of the epidemic. The early response to polio was therefore often to quarantine low-income urban dwellers with the disease.

The 1892 outbreaks of typhus fever and cholera in New York City are two other good examples. Both outbreaks were blamed on Jewish immigrants from Eastern Europe. Upon arriving in New York, Jewish immigrants, healthy and sick alike, were quarantined in unsanitary conditions on North Brother Island at the command of the New York City Department of Health. Although it is important to take infectious disease control seriously, these measures ended up stigmatizing an entire group of immigrants rather than controlling disease on the basis of sound scientific principles. This “us” versus “them” dynamic is common to stigma in general and indicates a way in which disease stigma can serve as a proxy for other types of fears, especially xenophobia and general fear of outsiders.

The fear of the diseased outsider is still pervasive. Until 2009, for instance, HIV-positive individuals were not allowed to enter the United States. The lifting of the travel ban allowed for the 2012 International AIDS Conference to be held in the United States for the first time in over 20 years. The connection between foreign “invasion” and disease “invasion” had become so ingrained that an illness that presented no threat of transmission through casual contact became a barrier to travel.

What can we learn from this history? Stigma and discrimination remain serious barriers to care for people with HIV/AIDS and tuberculosis, among other illnesses. Figuring out ways to reduce this stigma should be seen as part and parcel of medical care. Recognizing disease stigma’s long history can give us insight into how exactly stigmatizing attitudes are formed and how they are disbanded. Instead of simply blaming the ignorance of people espousing stigmatizing attitudes about certain diseases, we should try to understand precisely how these attitudes are formed so that we can intervene in their dissemination.

We should also be looking to history to see what sorts of interventions against stigma may have worked in the past. How are stigmatizing attitudes relinquished? Is education the key, and if so, what is the most effective way of disseminating this kind of knowledge? How should media sources depict epidemiological data without stirring fear of certain ethnic, racial, or socioeconomic groups in which incidence of a certain disease might be increasing? How can public health experts and clinicians be sure not to inadvertently place blame on those afflicted with particular illnesses? Ongoing research into stigma should evaluate what has worked in the past. This might give us some clues about what might work now to reduce devastating discrimination that keeps people from getting the care they need.

What “causes” disease?: Association vs. Causation and the Hill Criteria

Originally published at The Pump Handle

Does cigarette smoking cause cancer? Does eating specific foods or working in certain locations cause disease? Although we have determined beyond doubt that cigarette smoking causes cancer, questions of disease causality still challenge us because it is never a simple matter to distinguish mere association between two factors from an actual causal relationship between them. In an address to the Royal Society of Medicine in 1965, Sir Austin Bradford Hill attempted to codify the criteria for determining disease causality. Hill, an epidemiologist and statistician addressing an audience of occupational physicians, was primarily concerned with the relationships among sickness, injury, and the conditions of work. What hazards do particular occupations pose? How might the conditions of a specific occupation cause specific disease outcomes?

In an engaging and at times humorous address, Hill delineates nine criteria for determining causality. He is quick to add that none of these criteria can be used independently and that even as a whole they do not represent an absolute method of determining causality. Nevertheless, they represent crucial considerations in any deliberation about the causes of disease, considerations that still resonate half a century later.

The criteria, which Hill calls “viewpoints,” are as follows:

1. Strength. The association between the proposed cause and the effect must be strong. Hill uses the example of cigarette smoking here, noting that “prospective inquiries have shown that the death rate from cancer of the lung in cigarette smokers is nine to ten times the rate in non-smokers.” Even when the effects are small in absolute terms, a strong association can support causality. For example, during London’s 1854 cholera outbreak, John Snow observed that the death rate among customers supplied with polluted drinking water by the Southwark and Vauxhall Company was low in absolute terms (71 deaths in 10,000 houses). Yet in comparison with the death rate in houses supplied with the purer water of the Lambeth Company (5 in 10,000), the association became striking (see the sketch after this list). Even though the mechanism by which polluted water causes cholera (transmission of the bacterium Vibrio cholerae) was then still unknown, the strength of this association was sufficient in Snow’s mind to correctly assign a causal link.

2. Consistency. The effects must be repeatedly observed by different people, in different places, circumstances and times.

3. Specificity. Hill admits this is a weaker criterion, since diseases may have many causes and etiologies. Nevertheless, the specificity of the association, meaning how limited the association is to specific workers and sites and types of disease, must be taken into account in order to determine causality.

4. Temporality. Cause must precede effect.

5. Biological gradient. This criterion is also known as the dose-response curve. A good indicator of causality is whether, for example, death rates from cancer rise linearly with the number of cigarettes smoked. A small amount of exposure should result in a smaller effect. This is indeed the case; the more cigarettes a person smokes over a lifetime, the greater the risk of getting lung cancer.

6. Plausibility. The cause-and-effect relationship should be biologically plausible. It must not violate the known laws of science and biology.

7. Coherence. The cause-and-effect hypothesis should be in line with known facts and data about the biology and history of the disease in question.

8. Experiment. This would probably be the most important criterion if Hill had produced these “viewpoints” in 2012. Instead, Hill notes that “Occasionally it is possible to appeal to experimental, or semi-experimental, evidence.” An example of an informative experiment would be to take preventive action as a result of an observed association and see whether the preventive action actually reduces incidence of the disease.

9. Analogy. If one cause results in a specific effect then a similar cause can be said to result in a similar effect. Hill uses the example of thalidomide and rubella, noting that similar evidence with another drug and another viral disease in pregnancy might be accepted on analogy, even if the evidence is slighter.
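
To make the “strength” point concrete, a small calculation with the Snow figures quoted under the first viewpoint is sketched below. The relative-risk framing and the variable names are mine, added for illustration; the death rates are the ones given in the text.

```python
# Hill's "strength" viewpoint, illustrated with Snow's 1854 cholera figures
# (deaths per 10,000 houses, as quoted in viewpoint 1 above).
southwark_vauxhall_rate = 71 / 10_000   # houses supplied with polluted water
lambeth_rate = 5 / 10_000               # houses supplied with purer water

relative_risk = southwark_vauxhall_rate / lambeth_rate
print(f"Absolute risk in polluted-supply houses: {southwark_vauxhall_rate:.2%}")  # ~0.71%
print(f"Relative risk versus Lambeth-supplied houses: {relative_risk:.1f}x")      # ~14x
```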

The impact of Hill’s criteria has been enormous. They are still widely accepted in epidemiological research and have even spread beyond the scientific community. In this short yet captivating address, Hill managed to propose criteria that would constitute a crucial aspect of epidemiological research for decades to come. One wonders how Hill would respond to the plethora of reports published today claiming a cause-and-effect relationship between two factors based on an odds ratio of 1.2, with a statistically significant probability value of less than 0.05. While such an association may indeed be real, it is far smaller than those Hill discusses in his first criterion (“strength”). Hill does say, “We must not be too ready to dismiss a cause-and-effect hypothesis merely on the grounds that the observed association appears to be slight.” Yet he also wonders whether “the pendulum has not swung too far” in substituting statistical probability testing for biological common sense. Claims that environmental exposures, foods, chemicals, and types of stress cause a myriad of diseases pervade both the scientific and the popular literature today. In evaluating these claims, Hill’s sobering ideas, though 50 years old, still offer useful guidance.

Culturally Sensitive Psychiatric Care for Refugees: A Reassessment

Originally published at Health and Human Rights

In recent years, a welcome increase in attention to specific psychological problems among refugees has led to important new insights. Some of this research interest stems from experience with troops and veterans of the wars in Iraq and Afghanistan. New research has focused not only on the psychiatric effects of torture and human rights abuses, but also on the mental health consequences of victims’ subsequent forced migration. These consequences include the process of seeking asylum, isolation in a new country, and guilt and concern about leaving one’s native land.1, 2, 6

With a renewed focus on the special psychiatric needs of the refugee community, several essential areas of new research have emerged. Epidemiological research on the effects of torture and forced migration on refugee populations has proven helpful in identifying the burden of chronic mental illness among asylum seekers. But are refugees really getting the mental health care they need? Evidence from the Mental Health Commission of Canada suggests that there is a major gap between need and treatment. The report demonstrates that some of the reasons for this inadequacy are related to lack of awareness of available services and to socioeconomic barriers. Yet refugees also frequently cite perceived stigma and discrimination as a major barrier to care. This discrimination may not come directly from mental health providers but may be a perceived effect of a system that ignores refugees’ special needs. Such a system inevitably offers poorer treatment options to persons of different cultural backgrounds, including refugees, who may feel cultural barriers particularly acutely as a result of rapid resettlement.

It is quickly becoming apparent that in order to help refugees access treatment and use it successfully, mental health care modules must be adapted to the cultural diversity of the refugee client population.3, 4 Only approaches that incorporate recognition of such diversity have the potential to overcome the low rate of help-seeking behavior among refugees and the often inadequate quality of mental health care they receive.

The language barrier is an urgent problem. Feeling misunderstood by a health care provider is a major barrier to health-seeking behavior among refugee populations, and it may end up unjustly excluding refugees from treatment. In refugee mental health care, the language barrier between patient and physician is further complicated by cultural differences that inform the terms used for sadness, depression, anxiety, and even psychosis. Semantic equivalency can be achieved in some cases, but it may require extensive consultation with health care workers in the native country who are familiar with the illnesses and the words used to describe them.

There is also the question of whether diagnostic questions proposed by the DSM-IV, and its successor, the DSM-V, are phrased in a way that is meaningful in all languages and cultures. For example, in the Somali language, what Western medicine calls PTSD is associated with a form of madness termed waali, basically meaning “madness from trauma.”5 No such sense of “madness” is conveyed by the DSM-IV diagnostic markers for PTSD, which emphasize feelings of anxiety and sadness. The result is that refugees from Somalia may have profound PTSD but may not associate it with the version of the illness presented by the DSM-IV and DSM-V. Quests for semantic equivalency should not only take into account whether a word is translated correctly, but also whether the same concept of the illness exists in the other culture. Translations of diagnostic questions should therefore be culturally, as well as linguistically, informed.

Refugees are needlessly being excluded from the mental health care they desperately need and to which they are entitled. In many cases, this exclusion is the result of unintentional misunderstanding on the part of clinicians. More research should be conducted into how accurately Western notions of PTSD, depression, and other mental illnesses translate across cultures, and how well the treatments traditionally used in Western countries for these illnesses work across diverse cultural settings. Most importantly, mental health providers should always remain aware of cultural differences in order to provide the most sensitive, effective, and appropriate care to a population in great need.
