Tuesday, February 02, 2010

Crowdsourcing Vs. Science

I think the elevation of "crowdsourcing" to a legitimate method for determining drug benefits or risks in the real world is a Luddite-like, anti-science trend that must be counteracted.

The latest example of this was just published on the Dose of Digital blog in the post "The Best Pharma Products According to Patients". In that post, Jonathan Richman reports drug ratings from iGuard.org and says:
"...which are the top-rated products? Forget about all those head-to-head trials that payors want, but most companies are hesitant to conduct (for many reasons). If you want to know which treatment is best, why not check out its ratings? How far away is a future where patients select which products they want to take by using reviews such as those found on iGuard? I’m sure some of you are scoffing at this idea because you think physicians should be recommending treatments, not iGuard. Two questions for those of you thinking this: aren’t objective ratings guiding treatment requests better than DTC TV ads that also aim to get people to ask for a specific treatment? And if these ratings are available, why would physicians ignore them? How long before they too use these types of reviews to decide which treatments to prescribe?"
Heaven forbid that physicians use these ratings to decide on treatment options for their patients!!! If Jonathan really believes this, then he is promoting the most irresponsible and anti-scientific methodology I have ever come across!

According to Jeff Howe, the person who coined the term "crowdsourcing": "A central principle animating crowdsourcing is that the group contains more knowledge than individuals."

Therefore, a group of patients who take a specific drug has more knowledge than a single individual. OK, maybe a group of patients has more knowledge than a single patient, but do they have more knowledge than a single physician who has treated a large number of patients?

But more than that, what is this "group" that rated drugs like Viagra on iGuard.org?

More precisely, how MANY people are in this group and who are they?

Richman presents a table of drug ratings that includes a column labeled "Number of Patients" for each drug. The number in the Viagra row is 21,500, which implies that Viagra's "Patient Effectiveness Score" of 6.7 (vs. 7.4 for Cialis) is based on 21,500 ratings.

This is NOT the case. iGuard says it "tracks" 21,500 (now 22,000) Viagra patients, but it does not say how many of those patients RESPONDED by submitting ratings. That is, we do not know what N is in this dataset. If N is on the same order of magnitude as the number of comments received (i.e., 36), then this is NOT good science, nor is it even good "crowdsourcing."
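To see why N matters so much, here is a back-of-the-envelope sketch. It is NOT iGuard's methodology; it simply assumes ratings fall on a 0-10 scale with a standard deviation of about 2.5 points (my own illustrative figure) and computes the approximate 95% margin of error around an average rating for different numbers of respondents:

```python
import math

def margin_of_error(sd, n, z=1.96):
    """Approximate 95% margin of error for a mean rating: z * sd / sqrt(n)."""
    return z * sd / math.sqrt(n)

SD = 2.5  # assumed spread of individual ratings on a 0-10 scale (illustrative only)

for n in (36, 21500):
    print(f"N = {n:>6,}: average rating is good to about +/- {margin_of_error(SD, n):.2f} points")

# Output:
# N =     36: average rating is good to about +/- 0.82 points
# N = 21,500: average rating is good to about +/- 0.03 points
```

With only 36 respondents, the roughly +/- 0.8-point margin of error is wider than the 0.7-point gap between Viagra (6.7) and Cialis (7.4), so the ranking could be pure noise. With 21,500 actual respondents the gap would be meaningful, which is exactly why iGuard should publish its N.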

I am currently reading a small book that I recommend: "The Numbers Game: The Commonsense Guide to Understanding Numbers in the News, in Politics, and in Life" by Michael Blastland and Andrew Dilnot. The title should have mentioned blogs as well as the news, because more and more people are reading blogs, which have even less fact-checking than your average newspaper!

5 comments:

  1. John,

    Somehow you have a way of sensationalizing my posts. I'm going to have to block you somehow.

    Kidding, of course, I appreciate your challenges and at least one has made me go back and update a post. However, I think part of my post was either taken a bit out of context or I wasn't very clear.

    TO BE SURE, I don't think that a doctor using iGuard ratings (though they do use the validated TSQM) to make treatment decisions is either wise or a substitute for randomized clinical trials and the physician's own personal experience. I can see where you might get this idea from the way I wrote my post.

    What I was TRYING to say was simply this: physicians shouldn't ignore these TYPES of ratings. That is, ratings provided by actual patients in the real world. Why should they ignore them? They give yet another data point (but not THE data point) to help them learn about treatments. I didn't mean to imply that they should pick a treatment based on a number, but rather that they might benefit from some of the verbatim comments about particular treatments. Again, not to make a decision, but simply to get more information. These are the complaints and issues (and positive experiences) that their patients simply don't share with them (as shown in study after study).

    These verbatim comments could give a physician some idea of areas to address with patients to avoid problems later. For example, if the comments show a lot of people saying something like, "I wish my doctor had told me about this joint pain ahead of time," the doctor can address that in advance and reduce the risk that the patient stops treatment when it occurs. That has to be a positive outcome.

    iGuard used to show essentially this: a "What I wish my doctor told me" section for each drug. Not sure what happened to that.

    Give me a bit more credit than this post allows, John. I, like you, have my training in science, so I know what good objective science is and what's not. Nowhere did I suggest that iGuard ratings should be a substitute for clinical experience and randomized trials.

    I did say, "How long before they too use these types of reviews to decide which treatments to prescribe?" but, as I said in my post, the current systems are still not "clean" enough to allow them to be used in this manner. In the future, though, this could be different, when these comments are tied to EMRs and involve millions of people.

    As for the numbers of patients reviewing the treatment versus on the treatment, I'll get back with you. I honestly don't know the answer. Regardless, I don't think that 22,000 patients in iGuard (as I said above) is enough to direct what treatment physicians prescribe. We're a long way from this.

    Best,
    Jonathan
    Dose of Digital

  2. Jon,

    Glad to see that you really do not believe what you implied :-)

  3. John,

    As you suggested (correctly, it turns out), the figures I presented for patients are NOT the same as the number who completed the surveys. The response rates for these are in the 10-20% range (so, of the roughly 21,500 Viagra patients iGuard tracks, something like 2,150 to 4,300 actually submitted ratings).

    iGuard also supplied me with some more information about how they conduct these surveys and the tools they use. I've added this as an update to my post. I'd invite you to take a look.

    Thanks for the always incisive investigative reporting and for keeping me honest.

    Jonathan
    Dose of Digital

  4. Anonymous, 4:58 PM

    The wisdom of crowds also fails through peer pressure, herd instinct, and collective hysteria.

    As the self-proclaimed voice of the 'Twitterati' toward the FDA, you may have set off an information cascade with a fragile outcome.

    Perhaps this is the reason why you blew your chance last year in Washington and have been treading water in getting people to answer your survey ever since.

  5. Dear Anonymous,

    At the Nov hearing, my survey had 354 responses. It now has 427, which is a 20% increase. I bet you wish your stock market investments did as well.

