Category Archives: Publication

Economic development and mortality: Samuel Preston’s 1975 classic

Originally published at The Pump Handle

In the late 1940s and 1950s, it became increasingly evident that mortality rates were falling rapidly worldwide, including in the developing world. In a 1965 analysis, economics professor George J. Stolnitz surmised that survival in the “underdeveloped world” was on the rise in part due to a decline in “economic misery” in these regions. But in 1975, Samuel Preston published a paper that changed the course of thought on the relationship between mortality and economic development.

In the Population Studies article “The changing relation between mortality and level of economic development,” Preston re-examined the relationship between mortality and economic development, producing a scatter diagram of relations between life expectancy and national income per head for nations in the 1900s, 1930s, and 1960s that has become one of the most important illustrations in population sciences. The diagram shows that life expectancy rose substantially in these decades no matter what the income level. Preston concluded that if income had been the only determining factor in life expectancy, observed income increases would have produced a gain in life expectancy of 2.5 years between 1938 and 1963, rather than the actual gain of 12.2 years. Preston further concluded that “factors exogenous to a country’s current level of income probably account for 75-90% of the growth in life expectancy for the world as a whole between the 1930s and the 1960s” and that income growth accounts for only 10-25% of this gain in life expectancy.
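To see where Preston's percentages come from, here is a minimal back-of-the-envelope sketch in Python using only the two figures quoted above (a 2.5-year gain predicted from income growth alone versus the 12.2-year gain actually observed); the variable names are ours, not Preston's.

```python
# Back-of-the-envelope version of Preston's decomposition.
# Figures from the 1975 paper: income growth alone would have predicted
# a 2.5-year gain in life expectancy between 1938 and 1963, versus an
# actual observed gain of 12.2 years.
income_predicted_gain = 2.5   # years attributable to income growth alone
actual_gain = 12.2            # years actually gained worldwide

income_share = income_predicted_gain / actual_gain
exogenous_share = 1 - income_share

print(f"Share explained by income growth: {income_share:.0%}")   # ~20%
print(f"Share left to exogenous factors:  {exogenous_share:.0%}")  # ~80%
```

The resulting split, roughly 20% to income and 80% to everything else, falls squarely within the 10-25% and 75-90% ranges Preston reports.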

Preston’s next task was to consider what these “exogenous factors” might be. He proposed that a number of factors beyond a nation’s level of income had contributed to mortality trends in more developed as well as in less developed countries over the previous quarter century. These factors were not necessarily developed in the country that enjoyed the gain in lifespan but were often imported, and were therefore, in Preston’s view, less dependent on that country’s own wealth. Their exact nature differed according to the level of development of the nation in question: Preston identified vaccines, antibiotics, and sulphonamides as the main contributors to increased life expectancy in more developed areas, and insect control, sanitation, health education, and maternal and child health services in less developed areas.

Preston’s paper continues to provide guidance in development theory and economics today. But there was and continues to be considerable resistance to Preston’s theory, mostly from economists. Shortly after Preston’s article appeared, Thomas McKeown published two books that argued essentially the opposite: that mortality patterns have everything to do with economic growth and standards of living. Pritchett and Summers argued in 1996 that national income growth naturally feeds into better education and health services, which in turn contribute to higher life expectancy.

How well does Preston’s analysis hold up today? For one thing, Preston did not foresee the seemingly intimate connection between development and the recent rapid rise in the incidence and prevalence of some chronic diseases. As developing nations urbanize and become more affluent, noncommunicable diseases such as cancer and heart disease, many secondary to “lifestyle” factors like obesity and physical inactivity, are on the rise, with the potential to lower life expectancy significantly. So is wealthier healthier, to use the words of Pritchett and Summers? Not necessarily, as we are increasingly seeing.

Why is it so important to try to work out the relationship between health and wealth? If we assume that improvements in healthcare systems grow naturally out of increased wealth, then developing countries should focus primarily on economic growth in order to improve their healthcare. This must be true to a certain extent, but, as Preston is quick to point out, other factors affect the health of a nation, and it is not sufficient to assume that economic growth will automatically lead to improved life expectancy. Preston’s analysis emphasizes instead that health systems strengthening and biological innovation must take place alongside economic growth to ensure better health. Whether or not we can completely agree with Preston’s assertion that wealthier is not necessarily healthier, it is certainly the case that his landmark article stimulated an essential conversation about the relationship between economic development and mortality that continues avidly to the present day.


Is Disease Eradication Always the Best Path?

Originally published at PLoS Speaking of Medicine

There is no question that the eradication of smallpox, a devastating illness costing millions of lives, was one of the greatest achievements of 20th-century medicine. The disease was triumphantly declared eradicated by the World Health Assembly in 1980. Smallpox eradication required extremely focused surveillance as well as the use of a strategy called “ring vaccination,” in which anyone who could have been exposed to a smallpox patient was vaccinated immediately. Why was smallpox eradication possible? For one thing, smallpox is easily and quickly recognized because of the hallmark rash associated with the illness. Second, smallpox can be transmitted only by humans. The lack of an animal reservoir makes controlling the illness much simpler.

The success of smallpox eradication campaigns has resulted in persistent calls to eradicate other infectious diseases in the years since 1980. Unfortunately, disease eradication can be difficult and even impossible in the case of many infectious diseases, and it is crucial to consider the features of each illness in order to come to a proper conclusion about whether the pursuit of disease eradication is the best approach. In the first place, it is important to be clear about what “eradication” means. Eradication refers to deliberate efforts to reduce the worldwide incidence of an infectious disease to zero. It is not the same as extinction, the complete destruction of all disease pathogens or vectors that transmit the disease. Elimination, a third concept, refers to the complete absence of a disease in a defined population at a given point in time. Disease eradication therefore specifies a particular strategy for dealing with infectious diseases; other options exist that in some circumstances may be more desirable.

Can the pursuit of disease eradication ever be detrimental? It could be in the case of certain diseases that do not lend themselves easily to total eradication. A claim of eradication logically ends prophylactic efforts, reduces efforts to train health workers to recognize and treat the eradicated disease, and halts research on the disease and its causes. When eradication campaigns show some measure of success, financial support for the control of that illness plummets dramatically. Wide dissemination of information about eradication efforts without the certification of success can therefore prove detrimental. In these cases, complacency may prematurely replace much needed vigilance. If there is a reasonable chance of recurrence of the disease or if lifelong immunity against the disease is impossible, then attempting eradication may prove disastrous because infrastructure to control the disease would be lacking in the event of resurgence. Tracking down the remaining cases of an illness on the brink of eradication can be incredibly costly and divert government money in resource-poor nations from more pressing needs.

Another potential problem with disease eradication efforts is that, as a vertical approach, they may drain resources from horizontal approaches, such as capacity building and health system strengthening. Some advocate a more “diagonal” approach that uses disease-specific interventions to drive improvements in the overall health system. Still others have argued that vertical approaches that treat one disease at a time may divert resources from primary healthcare and foster imbalances in local healthcare services. Vertical schemes may also produce disproportionate dependence on international NGOs, which can weaken local healthcare systems.

Malaria offers an excellent example of a disease over which debate rages about whether eradication efforts could succeed. Four species of single-celled parasite cause malaria, the most common being P. falciparum, the deadliest, and P. vivax, the most prevalent. The existence of multiple species makes it difficult to engineer a single, fool-proof vaccine, and vaccine development is further complicated by the parasites’ ability to mutate, which is why even contracting malaria does not confer life-long immunity. Malaria also involves an animal vector (mosquitoes). Wiping out malaria completely would clearly be a huge challenge and perhaps even impossible. Beginning in 1955, there was a global attempt to eradicate malaria after it was realized that spraying houses with DDT was a cheap and effective way of killing mosquitoes. The initiative succeeded in eliminating malaria in nations with temperate climates and seasonal transmission. Yet other nations, such as India and Sri Lanka, saw sharp reductions in malaria cases only to experience sharp resurgences once efforts ceased. The project was ultimately abandoned in the face of widespread drug resistance, resistance to available insecticides, and unsustainable funding from donor countries. The experience of India and Sri Lanka demonstrates some of the negative effects of eradication campaigns that are not carried to fruition: the abandonment of vector control was followed by the emergence of severe, resistant strains that were much harder to treat.

Recently, discussions of malaria eradication have begun again. At the moment, there is considerable political will and funding for malaria eradication efforts from agencies such as the Gates Foundation. The Malaria Eradication Research Agenda Initiative, in part funded by the Gates Foundation, has resulted in substantial progress in identifying what needs to be done to achieve eradication. Even so, proponents of malaria eradication admit that this goal would take at least 40 years to achieve. It is not clear how long current political will and funding will last. There are concerns that political will might wither in the face of the estimated $5 billion annual cost to sustain eradication efforts.

Disease eradication can clearly be an incredibly important public health triumph, as seen in the case of smallpox. But when should the strategy be employed and when is it best to avoid risks associated with eradication efforts that might fail? Numerous scientific, social, and economic factors surrounding the disease in question must be taken into consideration. Can the microbe associated with the disease persist and multiply in nonhuman species? Does natural disease or immunization confer lifelong immunity or could reinfection potentially occur? Is surveillance of the disease relatively straightforward or do long incubation periods and latent infection make it difficult to detect every last case of the illness? Are interventions associated with eradication of the disease, including quarantine, acceptable to communities globally? Does the net benefit of eradication outweigh the costs of eradication efforts? Proposals for disease eradication must be carefully weighed against potential risk. Rather than being presented as visionary, idealistic goals, disease eradication programs must be clearly situated in the context of the biological and economic aspects of the specific disease and the challenges it presents.


Preventive care in medicine: Dugald Baird’s 1952 obstetrics analysis

Originally published at The Pump Handle

How much of a patient’s social context should physicians take into account? Is an examination of social factors contributing to disease part of the physician’s job description, or is the practice of medicine more strictly confined to treatment rather than prevention? In what ways should the physician incorporate public health, specifically prevention, into the practice of medicine?

These are the questions at the heart of Dugald Baird’s 1952 paper in The New England Journal of Medicine, “Preventive Medicine in Obstetrics.” The paper originated in 1951 as a Cutter Lecture, named after John Clarence Cutter, a 19th-century medical doctor and professor of physiology and anatomy. Cutter allocated half of the net income of his estate to the founding of an annual lecture on preventive medicine. Baird was the first obstetrician to deliver a Cutter Lecture, and his paper draws much-needed attention to the role of socioeconomic factors in pregnancy outcomes.

Baird begins by describing the Registrar General’s reports in Britain, which divide the population into five social classes. Social Class I comprises highly paid professionals while Social Class V encompasses the “unskilled manual laborers.” In between are the “skilled craftsmen and lower-salaried professional and clerical groups”; the categorization recognizes that job prestige as well as income is important in social class. Baird proceeds to present data on maternal and child health and mortality according to social group as classified by the Registrar General’s system. He makes several essential observations: social class makes relatively little difference in the stillbirth rate, but mortality rates in the first year of life are lowest for the highest social class (Social Class I) and highest for the lowest social class (Social Class V). Social inequality is thus felt most keenly in cases of infant death from infection, which Baird calls “a very delicate index of environmental conditions.”

Baird goes on to analyze data on stillbirths and child mortality from the city of Aberdeen, Scotland, which he chose because the number of annual primigravida (first-pregnancy) deliveries there was small enough to be analytically manageable and because the population in the early 1950s was relatively “uniform.” When births in a public hospital were compared with those in a private facility (called a “nursing home” in the paper, though not in the sense generally understood in the U.S. today), many more premature and underweight babies died in the public hospital than in the private nursing home, even though only the former had medical facilities for the care of sick newborns. The difference could not, therefore, be explained by the quality of medical care in the two facilities.

Baird concludes that this discrepancy must have something to do with the health of the mothers. Upon closer examination, he recognizes that the mothers in the private nursing home are not only healthier but also consistently taller than the mothers in the public facility. According to Baird, the difference in height must have to do with environmental conditions such as nutrition, a reasonable conclusion even though he did not have data on ethnicity or other factors that might also have contributed. As the environment deteriorates, the percentage of short women increases. Baird notes that height affects the size and shape of the pelvis, and that caesarean section is more common in shorter women than in taller women. He began classifying patients in the hospital into one of five physical and functional classes, and women with poorer “physical grades,” who also tended to be shorter, had higher fetal mortality rates. He also observed that most women under the age of 20 had low physical grades, stunted growth, and lower socioeconomic backgrounds. Baird spends some time examining the effects of age on childbearing, looking at women aged 15-19, 20-24, 25-29, 30-34, and over 35. He found that the most significant causes of fetal death in the youngest age group (15-19) were toxemia, fetal deformity, and prematurity, whereas fetal deaths in women aged 30-34 were more frequently due to birth trauma and unexplained intrauterine death. The incidence of forceps delivery and caesarean section rose sharply with age, and labor lasting over 48 hours was much more common among the older age groups.

In a turn that was unusual at the time, Baird considers the emotional stress associated with difficult childbirth and quotes a letter from a woman who decided not to have any more children after the “terrible ordeal” of giving birth to her first child. This close consideration of the patient’s whole experience is a testament to Baird’s concern with the patient’s entire context, including socioeconomic status.

Baird concludes by making a series of recommendations for remedying social inequalities in birth outcomes, some of which make perfect sense and some of which now strike us as outrageously dated. An example of the latter is his suggestion that “the removal of barriers to early marriage” would improve birth outcomes among young women. In fact, we now know that early marriage can have a negative impact on women’s sexual health, sometimes increasing incidence of HIV/AIDS.

Despite the occasional “datedness” of Baird’s paper, his analysis is not only a public health classic in its attempt to bring a social perspective back into the practice of medicine but also a source of lessons that remain crucial today. Baird’s paper reminds us that gender is often at the very center of health inequities, and that maternal and infant mortality constitute a major area in which socioeconomic inequalities directly and visibly affect health outcomes. While maternal and infant mortality rates are not high in the developed world, they remain serious health problems in developing countries, and infant mortality in particular is a useful indicator of socioeconomic development. Most importantly, Baird’s paper, written in an age when the medical field was beginning to rely increasingly on biology and technology, reminds us that medicine has much to gain from paying attention to the social factors that have a crucial impact on health.


How do we perceive risk?: Paul Slovic’s landmark analysis

Originally published at The Pump Handle

In the 1960s, a rapid rise in nuclear technologies aroused unexpected panic in the public. Despite repeated affirmations from the scientific community that these technologies were indeed safe, the public feared both long-term dangers to the environment as well as immediate radioactive disasters. The disjunction between the scientific evidence about and public perception of these risks prompted scientists and social scientists to begin research on a crucial question: how do people formulate and respond to notions of risk?

Early research on risk perception assumed that people assess risk in a rational manner, weighing information before making a decision, and therefore that providing people with more information will alter their perceptions of risk. Subsequent research has demonstrated that more information alone will not assuage people’s irrational fears or correct their sometimes outlandish ideas about what is truly risky. The psychological approach to risk perception theory, championed by psychologist Paul Slovic, examines the particular heuristics and biases people rely on to interpret the amount of risk in their environment.

In a classic review article published in Science in 1987, Slovic summarized various social and cultural factors that lead to inconsistent evaluations of risk in the general public. Slovic emphasizes the essential way in which experts’ and laypeople’s views of risk differ. Experts judge risk in terms of quantitative assessments of morbidity and mortality. Yet most people’s perception of risk is far more complex, involving numerous psychological and cognitive processes. Slovic’s review demonstrates the complexity of the general public’s assessment of risk through its cogent appraisal of decades of research on risk perception theory.

Slovic’s article focuses its attention on one particular type of risk perception research, the “psychometric paradigm.” This paradigm, formulated largely in response to the early work of Chauncey Starr, attempts to quantify perceived risk using psychophysical scaling and multivariate analysis. The psychometric approach thus creates a kind of taxonomy of hazards that can be used to predict people’s responses to new risks.

Perhaps more important than quantifying people’s responses to various risks is identifying the qualitative characteristics that lead to specific valuations of risk. Slovic masterfully summarizes the key characteristics that lead people to judge an activity as risky or not. People tend to be intolerant of risks that they perceive as uncontrollable, as having catastrophic potential or fatal consequences, or as distributing risks and benefits inequitably. Slovic notes that nuclear weapons and nuclear power score high on all of these characteristics. The public is also intolerant of risks that are unknown, new, and delayed in their manifestation of harm, factors that, in public opinion, tend to characterize chemical technologies. The higher a hazard scores on these factors, the higher its perceived risk and the more people want to see the risk reduced, leading to calls for stricter regulation. Slovic ends his review with a nod toward sociological and anthropological studies of risk, noting that anxiety about risk may in some cases be a proxy for other social concerns; many perceptions of risk are, of course, also socially and culturally informed.
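As a rough, purely illustrative sketch of the psychometric idea (not Slovic’s actual instrument or data), the snippet below rates a few hazards on “dread”-type and “unknown”-type characteristics and collapses the ratings into the two composite dimensions the paradigm is known for, which Slovic labels “dread risk” and “unknown risk.” Every number and hazard entry here is hypothetical.

```python
# A toy sketch of the psychometric paradigm: hazards are rated on
# qualitative characteristics, and the ratings are collapsed into two
# composite factors ("dread risk" and "unknown risk").
# All ratings below are hypothetical and for illustration only.
hazards = {
    # "dread": uncontrollable, catastrophic, fatal, inequitable (0-10 each)
    # "unknown": unobservable, new, delayed harm (0-10 each)
    "nuclear power": {"dread": [9, 9, 9, 8], "unknown": [8, 7, 9]},
    "driving a car": {"dread": [4, 2, 6, 3], "unknown": [1, 1, 2]},
    "pesticides":    {"dread": [6, 5, 5, 6], "unknown": [7, 6, 8]},
}

def factor_scores(ratings):
    """Average the ratings within each characteristic group."""
    return {factor: sum(vals) / len(vals) for factor, vals in ratings.items()}

for name, ratings in hazards.items():
    scores = factor_scores(ratings)
    print(f"{name:15s} dread={scores['dread']:.1f}  unknown={scores['unknown']:.1f}")
```

In the research Slovic reviews, the composites emerge from factor analysis over many hazards and characteristics rather than simple averages, but the ranking logic is the same: hazards that score high on both dimensions are the ones the public perceives as riskiest and wants most tightly regulated.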

Slovic’s analysis goes a long way in explaining why people persist in extreme fears of nuclear energy while being relatively unafraid of driving automobiles, even though the latter has caused many more deaths than the former. The fact that there are so many automobile accidents enables the public to feel that it is capable of assessing the risk. In other words, the risk seems familiar and knowable. There is also a low level of media coverage of automobile accidents, and this coverage never depicts future or unknown events resulting from an accident. On the other hand, nuclear energy represents an unknown risk, one that cannot be readily analyzed by the public due to a relative lack of information. Nuclear accidents evoke widespread media coverage and warnings about possible future catastrophes. In this case, a lower risk phenomenon (nuclear energy) actually induces much more fear than a higher risk activity (driving an automobile).

Importantly, Slovic correctly predicted 25 years ago that DNA experiments would someday become controversial and frighten the public. Although the effects of genetically modified crops on ecosystems may be a legitimate cause for concern, fears about their supposed ill effects on human health are scientifically baseless. Yet today, although biologists insist that genetically modified crops pose no risk to human health, many members of the public fear that they will cause cancer and birth defects. Such crops can grow under adverse conditions and resist infection and destruction by insects, and therefore have the potential to dramatically improve nutritional status in countries plagued by starvation and malnutrition. Yet the unfamiliarity of the technology and the delayed nature of its benefits make it a prime candidate for public fear and skepticism.

There is a subtle yet passionate plea beneath the surface of Slovic’s review. The article calls for assessments of risk to be more accepting of the role of emotions and cognition in public conceptions of danger. Rather than simply disseminating more and more information about, for example, the safety of nuclear power, experts should be attentive and sensitive to the public’s broader conception of risk. The goal of this research is a vital one: to aid policy-makers by improving interaction with the public, by better directing educational efforts, and by predicting public responses to new technologies. In the end, Slovic argues that risk management is a two-way street: just as the public should take experts’ assessments of risk into account, so should experts respect the various factors, from cultural to emotional, that shape the public’s perception of risk.


The infelicities of quarantine

Originally published at PLoS Speaking of Medicine

In 2009, as panic struck global health systems confronted with the H1N1 flu epidemic, health officials worldwide immediately invoked a familiar strategy: quarantine. In Hong Kong, 300 hotel guests were quarantined in their hotel for at least a week after one guest came down with H1N1. Such measures are certainly extreme, and they raise important questions about quarantine. How do we regulate quarantine in practice? How do we prevent this public health measure from trampling civil liberties?

Quarantine as a method of containing infectious disease may be as old as the ancient Greeks, who implemented strategies to “avoid the contagious.” Our oldest and most concrete evidence of quarantine comes from Venice circa 1374: fearing the plague, the city enacted a forty-day quarantine for entering ships, during which passengers had to remain at the port and could not enter the city. In 1893, the United States enacted the National Quarantine Act, which created a national system of quarantine and permitted state-run regulations, including routine inspection of immigrant ships and cargoes.

“Quarantine” must be differentiated from “isolation.” While isolation refers to the separation of people infected with a particular contagious disease, quarantine is the separation of people who have been exposed to a certain illness but are not yet known to be infected. Quarantine is particularly important when a disease can be transmitted before the individual shows signs of illness. Although quarantine’s origins are ancient, it is still a widely used intervention. In the U.S., for example, federal authorities are authorized to quarantine individuals exposed to the following infectious diseases: cholera, diphtheria, infectious tuberculosis, plague, smallpox, yellow fever, viral hemorrhagic fevers, SARS, and flu, and they may quarantine individuals at U.S. ports of entry.

The history of quarantine is intimately intertwined with xenophobia. There is no question that quarantine has been frequently abused, serving as a proxy for discrimination against minorities. This was especially true in late nineteenth- and early twentieth-century America, coinciding with large numbers of new immigrants entering the country. A perfect example of the enmeshed history of quarantine abuse and xenophobia occurred in 1900 in San Francisco. After an autopsy of a deceased Chinese man found bacteria suspected to cause bubonic plague, the city of San Francisco restricted all Chinese residents from traveling outside of the city without evidence that they had been vaccinated against the plague. In 1894, confronted with a smallpox epidemic, Milwaukee forcibly quarantined immigrants and poor residents of the city in a local hospital. In these cases, quarantine served as a method of containing and controlling ethnic minorities and immigrants whose surging presence in the U.S. was mistrusted.

A more recent example stems from the beginning of the AIDS epidemic in the early 1980s. In 1986, Cuba began universal HIV testing. Quarantines were instituted for all people testing positive for HIV infection. In 1985, officials in the state of Texas contemplated adding AIDS to the list of quarantinable diseases. These strategies were considered in a state of panic and uncertainty about the mode of transmission of HIV/AIDS. In retrospect, we know that instituting quarantine for HIV would have been not only ineffective but also a severe violation of individual liberties. Early in the AIDS epidemic, some individuals even called for the mass quarantine of gay men, indicating how quarantine could be used as a weapon against certain groups, such as immigrants and homosexuals. Because of their extreme nature and their recourse to arguments about protecting public safety, quarantine laws are especially prone to abuse of the sort witnessed in these cases.

How can we prevent quarantine laws from being abused? For one thing, these laws must be as specific as possible. How long can someone be quarantined before being permitted to appeal to the justice system? In what kinds of facilities should quarantined individuals be kept? The answer to this question would depend on the illness, type of exposure, and risk of contracting the disease, but in general, places of quarantine should never include correctional facilities. How are quarantined individuals monitored? How long can they be kept in quarantined conditions without symptoms before it is determined that they pose no public health risk? Quarantine laws should be sufficiently flexible to be amended according to updated knowledge about modes of transmission in the case of new or emerging infectious diseases. Quarantine measures should not be one-size-fits-all but modified according to scientific evidence relating to the disease in question. Transparency in all government communications about quarantine regulations must be standard in all cases. Most importantly, science should determine when to utilize quarantine. In order to quarantine an individual, the mode of transmission must be known, transmission must be documented to be human to human, the illness must be highly contagious, and the duration of the asymptomatic incubation period must be known. Without these scientific guidelines, quarantine may be subject to serious and unjust abuse.
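Read literally, the four scientific preconditions above amount to a conjunction of yes/no questions. The sketch below simply makes that explicit; the class, field names, and example profile are hypothetical illustrations, not an actual public health decision tool.

```python
# A toy checklist reflecting the four scientific preconditions named above.
# The dataclass, field names, and example values are hypothetical.
from dataclasses import dataclass

@dataclass
class DiseaseProfile:
    transmission_mode_known: bool        # is the mode of transmission known?
    human_to_human_documented: bool      # is human-to-human spread documented?
    highly_contagious: bool              # is the illness highly contagious?
    incubation_period_known: bool        # is the asymptomatic incubation period known?

def quarantine_scientifically_defensible(d: DiseaseProfile) -> bool:
    """All four preconditions must hold before quarantine is even considered."""
    return (d.transmission_mode_known
            and d.human_to_human_documented
            and d.highly_contagious
            and d.incubation_period_known)

# Example: early-epidemic uncertainty about transmission rules quarantine out.
early_epidemic_unknowns = DiseaseProfile(
    transmission_mode_known=False,
    human_to_human_documented=True,
    highly_contagious=False,
    incubation_period_known=False,
)
print(quarantine_scientifically_defensible(early_epidemic_unknowns))  # False
```

A real decision would of course weigh far more than four booleans, including the civil liberties concerns discussed throughout this piece; the point is only that the scientific threshold comes first.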

In the case of infectious diseases with long incubation periods, quarantine laws can be an effective means of containing possible epidemics. Similarly, in cases in which isolation alone is not effective in containing an infectious disease outbreak, quarantine may be useful. During the 2003 SARS outbreak, measures that quarantined individuals with definitive exposure to SARS were effective in preventing further infections, although mass quarantines, such as the one implemented in Toronto, were relatively ineffective. Quarantine can become a serious encroachment on civil rights, but there are intelligent ways of regulating these laws to prevent such damaging outcomes. It is important not to confuse quarantine per se with the abuse of quarantine. At the same time, because quarantine has the capacity to marginalize certain populations and perpetuate unwarranted fear of foreigners, scientific certainty is essential before it is implemented.


What can we learn from disease stigma’s long history?

Originally published at PLoS Speaking of Medicine

Although tremendous strides in fighting stigma and discrimination against people with HIV/AIDS have been made since the beginning of the epidemic, cases of extreme discrimination still find their way into the US court system regularly. Just this year, a man in Pennsylvania was denied a job as a nurse’s assistant when he revealed his HIV status to his employer. Even more appallingly, HIV-positive individuals in the Alabama and South Carolina prison systems are isolated from other prisoners, regularly kept in solitary confinement, and often given special armbands to denote their HIV-positive status. On a global level, HIV stigma can make it difficult to access testing and healthcare, which almost certainly has a substantial impact on the quality of an individual’s life. Legal recourse often rights these wrongs for the individual, but this kind of discrimination spreads false beliefs about transmission, the very driver of stigma. As of 2009, one in five Americans believed that HIV could be spread by sharing a drinking glass, swimming in a pool with someone who is HIV-positive, or touching a toilet seat.

Discrimination against people with HIV/AIDS is probably the most prominent form of disease stigma of the late 20th and early 21st centuries. But disease stigma has an incredibly long history, one that stretches back to the medieval panic over leprosy. Strikingly, at nearly every stage of history and in connection with almost every major disease outbreak, one stigmatizing theme is constant: outbreaks are blamed on a “low” or “immoral” class of people who must be quarantined and removed as a threat to society. These “low” and “immoral” people are often identified as outsiders on the fringes of society, including foreigners, immigrants, racial minorities, and people of low socioeconomic status.

Emerging infectious diseases in their early stages, particularly when modes of transmission are unknown, are especially prone to attracting stigma. Consider the case of polio in America. In the early days of the polio epidemic, although polio struck poor and rich alike, public health officials cited poverty and a “dirty” urban environment as major drivers of the epidemic. The early response to polio was therefore often to quarantine low-income urban dwellers with the disease.

The 1892 outbreaks of typhus fever and cholera in New York City are two other good examples. Both outbreaks were blamed on Jewish immigrants from Eastern Europe. Upon arriving in New York, Jewish immigrants, healthy and sick alike, were quarantined in unsanitary conditions on North Brother Island at the command of the New York City Department of Health. Although it is important to take infectious disease control seriously, this response ended up stigmatizing an entire group of immigrants rather than pursuing control measures based on sound scientific principles. This “us” versus “them” dynamic is common to stigma in general and shows how disease stigma can serve as a proxy for other kinds of fear, especially xenophobia and a general fear of outsiders.

The fear of the diseased outsider is still pervasive. Until 2009, for instance, HIV-positive individuals were not allowed to enter the United States. The lifting of the travel ban allowed for the 2012 International AIDS Conference to be held in the United States for the first time in over 20 years. The connection between foreign “invasion” and disease “invasion” had become so ingrained that an illness that presented no threat of transmission through casual contact became a barrier to travel.

What can we learn from this history? Stigma and discrimination remain serious barriers to care for people with HIV/AIDS and tuberculosis, among other illnesses. Figuring out ways to reduce this stigma should be seen as part and parcel of medical care. Recognizing disease stigma’s long history can give us insight into how exactly stigmatizing attitudes are formed and how they are disbanded. Instead of simply blaming the ignorance of people espousing stigmatizing attitudes about certain diseases, we should try to understand precisely how these attitudes are formed so that we can intervene in their dissemination.

We should also be looking to history to see what sorts of interventions against stigma may have worked in the past. How are stigmatizing attitudes relinquished? Is education the key, and if so, what is the most effective way of disseminating this kind of knowledge? How should media sources depict epidemiological data without stirring fear of certain ethnic, racial, or socioeconomic groups in which incidence of a certain disease might be increasing? How can public health experts and clinicians be sure not to inadvertently place blame on those afflicted with particular illnesses? Ongoing research into stigma should evaluate what has worked in the past. This might give us some clues about what might work now to reduce devastating discrimination that keeps people from getting the care they need.


What “causes” disease?: Association vs. Causation and the Hill Criteria

Originally published at The Pump Handle

Does cigarette smoking cause cancer? Does eating specific foods or working in certain locations cause disease? Although we have established beyond doubt that cigarette smoking causes cancer, questions of disease causality still challenge us, because it is never a simple matter to distinguish mere association between two factors from an actual causal relationship between them. In a 1965 address to the Royal Society of Medicine’s Section of Occupational Medicine, Sir Austin Bradford Hill attempted to codify the criteria for determining disease causality. A medical statistician speaking to occupational physicians, Hill was primarily concerned with the relationships among sickness, injury, and the conditions of work. What hazards do particular occupations pose? How might the conditions of a specific occupation cause specific disease outcomes?

In an engaging and at times humorous address, Hill delineates nine criteria for determining causality. He is quick to add that none of these criteria can be used independently and that even as a whole they do not represent an absolute method of determining causality. Nevertheless, they represent crucial considerations in any deliberation about the causes of disease, considerations that still resonate half a century later.

The criteria, which Hill calls “viewpoints,” are as follows:

1. Strength. The association between the projected cause and the effect must be strong. Hill uses the example of cigarette smoking here, noting that “prospective inquiries have shown that the death rate from cancer of the lung in cigarette smokers is nine to ten times the rate in non-smokers.” Even when the effects are small in absolute terms, a strong association can point to causality. For example, during London’s 1854 cholera outbreak, John Snow observed that the death rate among customers supplied with polluted drinking water by the Southwark and Vauxhall Company was low in absolute terms (71 deaths per 10,000 houses). Yet in comparison to the death rate in houses supplied with the purer water of the Lambeth Company (5 per 10,000), the association was striking. Even though the mechanism by which polluted water causes cholera (transmission of the bacterium Vibrio cholerae) was then still unknown, the strength of this association was sufficient for Snow to correctly infer a causal link. (A short calculation with these figures appears after this list.)

2. Consistency. The effects must be repeatedly observed by different people, in different places, circumstances and times.

3. Specificity. Hill admits this is a weaker criterion, since diseases may have many causes and etiologies. Nevertheless, the specificity of the association, meaning how limited the association is to specific workers and sites and types of disease, must be taken into account in order to determine causality.

4. Temporality. Cause must precede effect.

5. Biological gradient. This criterion is also known as the dose-response curve. A good indicator of causality is whether, for example, death rates from cancer rise linearly with the number of cigarettes smoked. A small amount of exposure should result in a smaller effect. This is indeed the case; the more cigarettes a person smokes over a lifetime, the greater the risk of getting lung cancer.

6. Plausibility. The cause-and-effect relationship should be biologically plausible. It must not violate the known laws of science and biology.

7. Coherence. The cause-and-effect hypothesis should be in line with known facts and data about the biology and history of the disease in question.

8. Experiment. This would probably be the most important criterion if Hill had produced these “viewpoints” in 2012. Instead, Hill notes that “Occasionally it is possible to appeal to experimental, or semi-experimental, evidence.” An example of an informative experiment would be to take preventive action as a result of an observed association and see whether the preventive action actually reduces incidence of the disease.

9. Analogy. If one cause is known to produce a specific effect, it becomes easier to accept that a similar cause produces a similar effect. Hill uses the example of thalidomide and rubella, noting that similar evidence concerning another drug or another viral disease in pregnancy might be accepted on analogy, even if that evidence is slighter.
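As promised under the first criterion, here is a quick calculation, using only the figures Hill himself cites (plus the odds ratio of 1.2 mentioned in the closing paragraph below), that makes the “strength” criterion concrete; the code is nothing more than arithmetic on those published numbers.

```python
# Arithmetic on the figures cited in Hill's address.
# Snow's 1854 data: cholera deaths per 10,000 houses, by water supplier.
southwark_vauxhall = 71 / 10_000   # polluted supply
lambeth = 5 / 10_000               # purer supply
snow_rate_ratio = southwark_vauxhall / lambeth
print(f"Cholera death-rate ratio (Snow, 1854): {snow_rate_ratio:.1f}")  # ~14

# Hill's smoking example: lung-cancer death rate nine to ten times higher in smokers.
smoker_vs_nonsmoker = (9, 10)
# The kind of effect size discussed in the closing paragraph of this piece.
typical_modern_odds_ratio = 1.2
print(f"Smoking rate ratio: {smoker_vs_nonsmoker[0]}-{smoker_vs_nonsmoker[1]}x; "
      f"compare a modern claim based on an odds ratio of {typical_modern_odds_ratio}")
```

The fourteen-fold and nine- to ten-fold ratios Hill discusses are an order of magnitude larger than the 1.2-fold associations that populate much of today’s literature, which is precisely the contrast the “strength” criterion asks us to keep in mind.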

The impact of Hill’s criteria has been enormous. They are still widely accepted in epidemiological research and have even spread beyond the scientific community. In this short yet captivating address, Hill managed to propose criteria that would constitute a crucial aspect of epidemiological research for decades to come. One wonders how Hill would respond to the plethora of reports published today claiming a cause and effect relationship between two factors based on an odds ratio of 1.2, with a statistically significant probability value of less than 0.05. While such an association may indeed be real, it is far smaller than those Hill discusses in his first criterion (“strength”). Hill does say, “We must not be too ready to dismiss a cause-and-effect hypothesis merely on the grounds that the observed association appears to be slight.” Yet he also wonders if “the pendulum has not swung too far” in substituting statistical probability testing for biological common sense. Claims that environmental exposures, food, chemicals, and types of stress cause a myriad of diseases pervade both scientific and popular literature today. In evaluating these issues, Hill’s sobering ideas, albeit 50 years old, are still useful guidance.
