It is part of the human condition to have implicit biases—and remain blissfully ignorant of them. Academic researchers, scientists, and clinicians are no exception; they are as marvelously flawed as everyone else. But cognitive bias itself is not the problem; the trouble begins when we deny that the problem exists. Indeed, our capacity for self-deception was beautifully captured in the title of a recent book addressing researchers’ self-justificatory strategies, Mistakes Were Made (But Not by Me).

Decades of research have demonstrated that cognitive biases are commonplace and very difficult to eradicate, and more recent studies suggest that disclosure of financial conflicts of interest may actually worsen bias. This is because bias is most often manifested in subtle ways unbeknownst to the researcher or clinician, and thus is usually implicit and unintentional. For example, although there was no research misconduct or fraud, re-evaluations of...

Indeed, recent neuroscience investigations demonstrate that effective decision-making involves not just cognitive centers but also emotional areas such as the hippocampus and amygdala. This interplay of cognitive-emotional processing allows conflicts of interest to affect decision-making in a way that is hidden from the person making the decision.

Despite these findings, many individuals are dismissive of the idea that researchers’ financial ties to industry are problematic. For example, in a recent essay in The Scientist, Thomas Stossel of Brigham & Women’s Hospital and Harvard Medical School asked, “How could unrestricted grants, ideal for research that follows up serendipitous findings, possibly be problematic? The money leads to better research that can benefit patients.” Many argue that subjectivity in the research process and the potential for bias can be eradicated by strict adherence to the scientific method and transparency about industry relationships. Together, scientists believe, these practices can guarantee evidence-based research that leads to the discovery and dissemination of “objective” scientific truths. The assumption is that the reporting of biased results is a “bad apple” problem—a few corrupt individuals engaging in research fraud. But what we have today is a bad barrel.

Some have begun to use the analytic framework of “institutional corruption” to bring attention to the fact that the trouble is not a few corrupt individuals harming an organization whose integrity is basically intact. Institutional corruption refers to systemic and usually legal—and often accepted and widely defended—practices that pull an organization or institution off course, undermine its mission and effectiveness, and weaken public trust. Although the entire field of biomedicine has come under scrutiny because of concerns about an improper dependence on industry, and all medical specialties have struggled with financial conflicts of interest, psychiatry has been particularly troubled, described by some as suffering a crisis of credibility.

This credibility crisis has played out most noticeably in the public controversy surrounding the latest revision of the Diagnostic and Statistical Manual of Mental Disorders (DSM). The DSM, often referred to as the “Bible” of mental disorders, is produced by the American Psychiatric Association (APA), a professional organization with a long history of industry ties. DSM-5, the revised edition scheduled for publication in May 2013, has already been criticized for “disease mongering,” or pathologizing normal behavior. Concerns have been raised that because the individuals responsible for revising existing disorders and adding new ones have strong, long-standing financial associations with the pharmaceutical companies that manufacture the drugs used to treat these disorders, the revision process may be compromised by undue industry influence.

Researchers, clinicians, and psychiatrists who served on the DSM-IV task force have pointed out that adding new disorders or lowering the diagnostic threshold of previously included disorders may create “false positives”: individuals incorrectly identified as having a mental disorder and prescribed psychotropic medication. For example, there was a heated debate about pathologizing the normal grieving process if DSM-5 eliminated the bereavement exclusion for major depressive disorder (MDD). The concern was that widening the diagnostic boundaries of depression to include grief as a “qualifying event,” thereby allowing a diagnosis of MDD just two weeks after the loss of a loved one, would falsely identify grieving individuals as depressed. Although it is not the APA’s intent to play handmaiden to industry, the reality is that such a change would result in more people being prescribed antidepressants following the loss of a loved one. In fact, psychiatrist Allen Frances, who chaired the DSM-IV task force, has noted that DSM-5 would be a “bonanza” for drug companies.

After receiving criticism about potential bias in the development of the DSM-IV, the APA required that DSM-5 panel members file financial disclosures. Additionally, during their tenure on the panels they were not allowed to receive more than $10,000 from pharmaceutical companies or hold more than $50,000 in pharmaceutical stock (unrestricted research grants were excluded from this policy). On the majority of diagnostic panels, however, most members continue to have financial ties to the pharmaceutical industry. Specifically, 67 percent of the 12-person panel for mood disorders, 83 percent of the 12-person panel for psychotic disorders, and all 7 members of the sleep/wake disorders panel (which now includes “Restless Legs Syndrome”) have ties to the pharmaceutical companies that manufacture the medications used to treat these disorders or to companies that service the pharmaceutical industry.

Clearly, the new disclosure policy has not been accompanied by any reduction in the financial conflicts of interest of DSM panel members. Moreover, Darrel Regier, speaking on behalf of the APA and in defense of DSM panel members with industry ties, told USA Today, “There’s this assumption that a tie with a company is evidence of bias. But these people can be objective.” However, as research has repeatedly shown, transparency alone cannot mitigate bias and is an insufficient solution for protecting the integrity of the revision process. Objectivity is not a product that can be easily secured by adherence to the scientific method; rather, there is a generic risk that a conflict of interest may result in implicit, unintentional bias. As Upton Sinclair observed, “It is difficult to get a man to understand something when his salary depends upon his not understanding it.”

Lisa Cosgrove is a clinical psychologist and associate professor at the University of Massachusetts, Boston, and a research lab fellow at the Edmond J. Safra Center for Ethics, Harvard University. Her current research agenda addresses ethical and medico-legal issues that arise in psychiatry because of financial conflicts of interest.
