I went to a keynote talk on suicide prevention recently where the presenter listed a grant he received to research suicide prevention as a potential conflict.  I thought that was strange. One, he was an invited speaker, so I’d expect him to be a grant-funded researcher on the topic he was invited to speak about.  Two, it was a government grant. Surely the money he had been given was not predicated on him reaching a particular outcome? (I think you could make a case that if he failed to show a positive result, it would be harder for him to win future grants, but that’s a problem with the way we reward research & beyond the scope here.)

Anyways, people seem to be confused about what financial conflicts are, and how & why they work. Headlines like “It’s silly to assume all research funded by corporations is bent” don’t help.  Does anyone assume that?

There are a number of ways trials can be misleading.  Many of these are apparent by, you know, reading the trial itself: inclusion and exclusion criteria, choice of comparators, surrogate outcomes, massaging the way the data is framed (like Merck did with Vioxx by changing reference points for risks vs harms), paying attention to p-values instead of effect sizes, etc, etc.

Sometimes it’s harder. This 2012 meta-analysis on sodium restriction in systolic heart failure is a good example – retracted because two of the studies it cited contained duplicated data. Had I looked closer at the paper itself, I might have noticed that one of its authors was an author on every study cited in the review, a warning sign for sure.  But that’s a level of scrutiny that goes beyond simply asking “how good is this study?” Rather, it’s “are these authors lying to me?”  There are also numerous examples of publication bias, or times when questions weren’t asked because the results were likely to be unprofitable (e.g. Pradaxa dosing in the elderly).

These kinds of cases are harder to deal with, because you can’t tell by looking how they are bent. This is similar to the market for lemons.  Some studies are peaches – their flaws are self-disclosing.  Others are lemons – they look like peaches because the authors are lying to us.  By definition, we can’t know which is which. The rational response here is to reduce our trust in medical science across the board.  If it turns out that we’re finding lemons disproportionately among literature funded by industry, that’d be a cause for concern about industry-funded research in particular (uh, obviously).

I don’t know of any robust & reliable data showing that’s the case; a lot of the evidence is either anecdotal or circumstantial. By its nature (because pharma is often the only one doing the kinds of studies it does), there are no good control groups. And in some ways pharma behaves better than its peers in academia.

But even so, there are a couple good reasons to be concerned about financial conflicts in industry-funded trials.

(1) The social responsibility of a business is to increase its profits, as they say. I think it’s pretty clear that misleading the public about drug efficacy and harms can be profitable.  The LA Times’ recent article on OxyContin, for instance, points out that the drug has made $31 billion in profit, and Purdue executives were fined $635 million after fraud convictions.  There’s a long-run economic disadvantage to being unreliable, which is that if it goes unchecked, nobody will trust anything anymore. But that still affords a fair amount of leeway.

(2) Studies cost money.  So it turns out that groups with a lot of money are going to have more say in what studies are done, and how they are done. We should ask ourselves: why are they willing to spend that money, and are they interested in seeing particular kinds of results?

So anyways, there’s a fairly robust theoretical case for why pharma money might bias results, and in which direction.  But this also suggests several solutions: increase the punishments for true acts of fraud (definition intentionally omitted) so that the incentives are reduced; more transparency with data and rules for its interpretation; alternative funding models for clinical trials. (I’ll stop short of exploring how any of this would work; if it were easy it would probably already be happening.) Financial conflict disclosures are helpful not because they are a scarlet letter of shame and stigma, but because those conflicts are not otherwise apparent, and there is reason to think we should care about them. Most of the real debate should be about how much we should care; I agree we need better data on this.