More on conflict of interest — May 18, 2016

I went to a keynote talk on suicide prevention recently where the presenter listed a grant he received to research suicide prevention as a potential conflict. I thought that was strange. One, he was an invited speaker, so I’d expect him to be a grant-funded researcher on the topic he was invited to speak about. Two, it was a government grant. Surely the money he had been given was not predicated on his reaching a particular outcome? (I think you could make the case that if he failed to show a positive result, it would be harder for him to win future grants, but that’s a problem with the way we reward research & beyond the scope here.)

Anyways, people seem to be confused about what financial conflicts are, and how & why they work. Headlines like “It’s silly to assume all research funded by corporations is bent” don’t help.  Does anyone assume that?

There are a number of ways trials can be misleading. Many of these are apparent by, you know, reading the trial itself: inclusion and exclusion criteria, choice of comparators, surrogate outcomes, massaging the way the data is framed (as Merck did with Vioxx by changing reference points for risks vs. harms), paying attention to p-values instead of effect sizes, etc., etc.

Sometimes it’s harder. This 2012 meta-analysis on sodium restriction in systolic heart failure is a good example – retracted because two of the studies it cited contained duplicated data. Had I looked more closely at the paper itself, I might have noticed that one of its authors was an author on every study cited in the review, a warning sign for sure. But that’s a level of scrutiny that goes beyond simply asking “how good is this study?” Rather, it’s “are these authors lying to me?” There are also numerous examples of publication bias, or times when questions weren’t asked because the results were likely to be unprofitable (Pradaxa dosing in the elderly, e.g.).

These kinds of cases are harder to deal with, because you can’t tell by looking how they are bent. This is similar to the market for lemons. Some studies are peaches – their flaws are self-disclosing. Others are lemons – they look like peaches because the authors are lying to us. By definition, we can’t know which is which. The rational response here is to reduce our trust in medical science across the board. If it turns out that we’re finding lemons disproportionately among literature funded by industry, that’d be a cause for concern about industry-funded research in particular (uh, obviously).
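The arithmetic behind that across-the-board discount is simple. A minimal sketch – the lemon rates here are made-up numbers, purely for illustration:

```python
def expected_reliability(p_lemon: float) -> float:
    """Expected reliability of a randomly chosen study from a pool in
    which a fraction p_lemon are lemons (undetectably flawed) and the
    rest are peaches (flaws self-disclosing, so fully vettable).

    Because a lemon looks exactly like a peach, the discount falls
    uniformly on every study in the pool, honest ones included."""
    return 1.0 - p_lemon

# Hypothetical lemon rates, for illustration only:
any_study = expected_reliability(0.10)       # trust in a random study: 0.9
industry_study = expected_reliability(0.25)  # a pool with more lemons earns less trust
```

If evidence ever showed that industry-funded literature carries a higher lemon rate, the rational discount on that pool would be correspondingly steeper – which is the point about apportioning concern to particular funding sources.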

I don’t know of any robust & reliable data that that’s the case; a lot of the evidence is either anecdotal or circumstantial. By its nature (because pharma is often the only one doing the kinds of studies it does), there are no good control groups. And in some ways pharma behaves better than peers in academia.

But even so, there are a couple good reasons to be concerned about financial conflicts in industry-funded trials.

(1) The social responsibility of a business is to increase its profits, as they say. I think it’s pretty clear that misleading the public about drug efficacy and harms can be profitable. The LA Times’ recent article on OxyContin, for instance, points out that the drug has made $31 billion in profit, and Purdue executives were fined $635 million after fraud convictions. There’s a long-run economic disadvantage to being unreliable, which is that, if it goes unchecked, nobody will trust anything anymore. But that still affords a fair amount of leeway.

(2) Studies cost money.  So it turns out that groups with a lot of money are going to have more say in what studies are done, and how they are done. We should ask ourselves: why are they willing to spend that money, and are they interested in seeing particular kinds of results?

So anyways, there’s a fairly robust theoretical construct for why pharma money might bias results, and in what direction. But this also presents us with several solutions: increase the punishments for true acts of fraud (definition intentionally omitted) so that the incentives are reduced; more transparency with data and rules for its interpretation; alternative funding models for clinical trials. (I’ll stop short of exploring how any of this would work; if it were easy it would probably already be happening.) Financial conflict disclosures are helpful not because they are a scarlet letter of shame and stigma, but because those conflicts are not apparent otherwise, and there is reason to think we should care about them. Most of the real debate should be about how much we should care; I agree we need better data on this.

 

 

Links — May 15, 2016

  1. “The evidence has increasingly suggested that fear-based campaigns ‘work.’ Emotionally charged public health messages have, as a consequence, become more commonplace. We conclude that an ethics of public health, which prioritises population well-being, as contrasted with the contemporary focus of bioethics on autonomy, provides a moral warrant for ensuring that populations understand health risk ‘in their guts.’”
  2. (h/t @cbpolis) “I expect that some of the best work in the future for population, reproductive, and sexual health will be accomplished by scientists with very different underlying assumptions or ideologies who find ways to work together. A contemporary philosopher has suggested a paradigm of “oppositional collaboration” for areas with high ideological polarization, in which bioethicists (or scientists) with fundamentally opposed viewpoints work together to generate data that they all agree is as objective as possible for the relevant questions. This does not necessarily result in a change in values or agreement of the respective colleagues, but it can result in more accurate data and increased understanding and respect, extremely valuable outcomes.”

  3. “Whole-body hyperthermia holds promise as a safe, rapid-acting, antidepressant modality with a prolonged therapeutic benefit.”
Links. — May 13, 2016

  1. Research-ethics-big-data hot take: this is bad. “A student and a co-researcher have publicly released a dataset on nearly 70,000 users of the dating site OkCupid, including their sexual turn-ons, orientation, usernames and more. And critics say it may be possible to work out users’ real identities from the published data.”
  2. Clinical-ethics-euthanizing-the-mentally-ill-in-a-van hot take: so is this. “Twenty-seven percent (n = 18) of patients received the procedure from physicians new to them, 14 of whom were physicians from the End-of-Life Clinic, a mobile euthanasia clinic. Consultation with other physicians was extensive, but 11% (n = 7) of cases had no independent psychiatric input, and 24% (n = 16) of cases involved disagreement among consultants.”
  3. Meh. Clinical outcomes and markets can aggregate information from complex systems. But markets fail, polls often beat prediction markets, and I don’t think this analogy actually says anything that couldn’t be said without reference to Hayek. No knock on Hayek.
  4. Who indeed, Stuart, who indeed? [screenshot]
Morning Links — May 11, 2016
Hating on Haidt —

I’m going to talk about Beyonce.

But first some thoughts on Jonathan Haidt/Lee Jussim’s WSJ piece on race on campus. Haidt and Jussim have founded “The Heterodox Academy,” with the intention of increasing the diversity of viewpoints in academia.  But their commitment seems a little thin.

The WSJ article has two main thrusts: (1) increasing minority enrollment through affirmative action will reinforce harmful racial stereotypes, and (2) training people to be more sensitive to things like “microaggressions” will backfire and, if anything, increase racial tension. I have questions about both.

***

Haidt has written about microaggressions before, and I think it’s fair to lump him in with the group of people who don’t take the idea particularly seriously. By this I mean he does not think the concept is serious or worthy of respectful engagement, and his discussion of it is unserious. On his blog and in the WSJ he makes the point that something as innocuous as asking “where are you from?” can be interpreted as aggressive. He gives no context for why someone might see the question this way. And it seems to me he’s chosen it as an example not to give his readers a balanced or nuanced understanding of what people who talk about microaggressions are talking about, but to emphasize that the concept is ridiculous on its face and exists only to manufacture outrage at the cost of reasonable and productive conversation.

But it is not that hard to imagine how the question “where are you from?” can be seen as an act of aggression or intolerance. Like, if College Humor can do it …

In the blog post linked above, Haidt worries that talking about microaggressions signals a shift towards a culture of victimhood, and away from one of dignity, where “dignity is inherent and cannot be alienated by others.” But, obviously, that worry assumes we have a culture in which dignity is inherent to begin with. Marginalized communities argue that we don’t, and microaggressions are both symptomatic and emblematic of that.

***

Haidt and Jussim make the point that racial differences in SAT scores mirror and predict differences in college grades and graduation rates. They argue this achievement gap means that increasing minority enrollment will lead to self-segregation and reinforced stereotypes about the capacity for achievement, etc. Brian Earp has an interesting article on the racial achievement gap and academic self-concept that’s worth a read. I’m sure it’s just a scratch on the surface of an entire field of study, and I’m venturing into intellectual territory that’s largely foreign to me. But Earp makes the point convincingly that what schools and tests and grades measure is in large part the ability to meet the rules and expectations of a culture that values certain ways of being, communicating, and thinking over others.

Academic performance is not some neutral and Platonic measure of a person. So one goal of increasing the number of black students admitted to a college & black faculty hires would be to engender support for constructing new and better measures.

***

Ok, here goes. White men: you can’t and shouldn’t watch Lemonade without thinking about a world of experience that isn’t yours. These worlds exist, and they are inhabited by people with things to say. One way to encourage a diversity of viewpoints is to listen to them.

 

bookmarking for a later date — May 10, 2016
4 Links and a thought. —

  1. Trisha Greenhalgh on implementing scientific knowledge as practice.
  2. Mapping coincidences, and a reminder to myself to read more Spiegelhalter.
  3. Value-based purchasing & pay doesn’t improve mortality.
  4. There is no 4

It’s National Nurse Appreciation Week, so let me say: I should be reading the nursing literature more.  Here’s a good example. (OK, it’s not from a nursing journal, but there are similar studies that are).

The structure of a hospital’s medical ward is a complex hierarchy, and the basic process of fighting disease and caring for patients (two distinct tasks that are often conflated) can be broken down into a crude military metaphor: strategy, tactics, and execution. I’m not sure the roles of medical professionals can be analogized to the military as neatly, but in general attending doctors tend to be involved with strategy, and interns, medical students, and nurses with various aspects of execution. Tactics (and residents) fall somewhere in between.

This largely makes sense because everything we do is complicated and time-consuming and requires practice and expertise, and the basic reality of modern medical practice is that it’s all made more difficult by the demands of documentation, billing, and interfacing with an unwieldy electronic medical record. The upshot is that there are order-givers and order-takers, all of whom are stressed and busy. Often, decisions need to be made quickly, and even in non-emergencies there may not be time or desire to explain why something is being done the way it’s being done. “That’s why it’s called an order.”

It’s hard, in general, to know what the right thing is to do.  A lot of medical ethics training for physicians (at least mine, as far as I can recall) has dealt with this problem – theories of right and wrong, principles of bioethics. Some discussion of landmark legal cases.

But it’s harder still to do the right thing when you’ve been asked by someone in a position of authority to do something wrong. Theoretical knowledge is not enough. Goodness isn’t an intrinsic character trait or a virtue to be built by knowledge and reflection alone. It’s a skill to be practiced.

All I Really Need to Know I Learned from Montaigne. — May 9, 2016

This is a medical blog. But if I envision myself to have a guiding philosophy, it’s this, from “On Physiognomy”: [screenshot of the Montaigne passage]

A lot of what’s controversial in medicine, science, politics, and ethics is debated by appeals to competing authorities. It’s a mistake to say that science alone will resolve those debates.

A piece by Lisa Rosenbaum in the New England Journal this week on bias and conflict of interest policies misses this point.

If a pharmaceutical company funds a trial for their drug and that trial is successful there are two different questions to be asked. The first is, based on the data presented, am I convinced that this successful trial implies my patients would experience similar success if I were to prescribe it to them? The second is, do I think this successful trial is true?

Here’s a quote from the Journal: [screenshot of the quotation]

But you can’t judge a study solely on its merits; it will always be judged in the context of what is known.  To an extent, this is something that Bayes’ theorem can try to quantify. But trust is important.  If oseltamivir was made to look more effective than it is because Roche was selective in the data it published, that will not be evident from what is published.  If we know that drug companies have been selective and not-forthcoming about these things, then that should influence our interpretation of the data they give us.
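To make the Bayesian point concrete, here is a minimal sketch. All the numbers (prior, power, false-positive rates) are made up for illustration; the point is only the direction of the update:

```python
def posterior(prior: float, power: float = 0.8, alpha: float = 0.05) -> float:
    """P(effect is real | positive trial), by Bayes' theorem.

    prior: belief that the drug works, before seeing the trial
    power: P(positive trial | effect is real)
    alpha: P(positive trial | no effect) -- the false-positive rate
    """
    return prior * power / (prior * power + (1 - prior) * alpha)

# Taking the trial at face value, with a nominal 5% false-positive rate:
face_value = posterior(prior=0.3)             # ~0.87

# If selective publication means "positive" results surface even when the
# drug does nothing, the effective false-positive rate is higher, and the
# very same trial should move you less:
distrusted = posterior(prior=0.3, alpha=0.3)  # ~0.53
```

The data in the published paper are identical in both cases; what changed is the background belief about how the evidence was generated. That is exactly why trust – and the disclosures that inform it – matters to interpretation.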

The call for transparency on financial conflicts of interest is only part of the program of so-called “pharmascolds.” Calls for data transparency & sharing, openness about outcome switching, and strict adherence to clinical trial registries are equally, if not more, important.

A published paper implicitly requests the reader take on faith that it includes everything needed for adequate interpretation of its meaning.  Conflict of interest statements don’t invalidate the data, but they do speak to how we should apportion our trust.