Pathological [Updated]

While the dust in the case of the fabricated data published in Science last December remains very much unsettled, the deadline by which the primary author, Michael LaCour, claimed he would respond to the allegations has passed [Correction: his self-imposed deadline has not yet passed; it is tomorrow, May 29. Note to self: try looking at a calendar sometime.], seemingly without a response, while more accusations of dishonesty on LaCour’s part have surfaced.

First, another fraud accusation has surfaced, this time concerning unpublished results (the results have, however, been presented at at least one conference) in a paper titled “The Echo Chambers are Empty: Direct Evidence of Balanced, Not Biased Exposure to Mass Media”. Economist Tim Groseclose looked into LaCour’s methods and writes:

The paper examines the news “diet” of voters. It concludes that the news diet of Republicans hardly differs from that of Democrats. In contrast to conventional wisdom, voters do not, primarily, get their news from “echo chambers.”

I could find no problem with the main results of the paper. However, to derive those results, LaCour writes a section describing a way to measure media bias, which he uses to classify a media outlet as “conservative,” “liberal,” or “centrist.” I found many problems with this section, and I am highly confident that LaCour faked the results for this section.

This is just an accusation at this point, and Groseclose has not yet seen the raw data or the code LaCour used to derive it, but he presents some strong statistical evidence for the accusation. However, we can be fairly certain about another example of LaCour’s dishonesty. Virginia Hughes of Buzzfeed discovered lies on his CV concerning his research funding, which she details in perhaps the best article I have read on the case:

In the study’s acknowledgements, LaCour states that he received funding from three organizations — the Ford Foundation, Williams Institute at UCLA, and the Evelyn and Walter Haas, Jr., Fund. But when contacted by BuzzFeed News, all three funders denied having any involvement with LaCour and his work. (In 2012, the Haas, Jr. Fund gave a grant to the Los Angeles LGBT Center related to their canvassing work, but the Center said that LaCour’s involvement did not begin until 2013.)

There are at least two CVs that were reportedly published on LaCour’s website but have since been taken down. Both list hundreds of thousands of dollars in grants for his work. One of these listings, a $160,000 grant in 2014 from the Jay and Rose Phillips Family Foundation of Minnesota, was made up, according to reporting by Jesse Singal at The Science of Us.

If you were thinking that things couldn’t get worse than fabricated data in one, possibly two, papers, and made-up funding sources, you were wrong. It turns out LaCour may also have lied about receiving a teaching award, as Singal discovered:

In that section [of LaCour’s CV], he lists as one of his awards: “Emerging Instructor Award, UCLA Office of Instructional Development, 2013-2014. One of three UCLA graduate student instructors selected for excellence in their first year of teaching” (formatting his). But a staffer in the office of instructional development told Science of Us that it does not give out an award of that name. “I don’t know if he either misnamed our department or if it’s from another department,” said the staffer, who only agreed to be quoted if I didn’t use her name. “I’m not clear on what happened.”

To make himself seem even sleazier, when Singal contacted him about the award, LaCour first asked Singal not to publish anything on it until he released his official response, then took his CV off his webpage, then put up a new version of the CV that does not list the award, after which he emailed Singal saying,

“I’m not sure which CV you are referring to, but the CV posted on my website has not had that information or the grants listed for at least a year.”

In my original post on the case, I speculated about how LaCour may have come to the decision to commit fraud of this magnitude, suggesting that he may have found himself between a professional rock and a hard place and chosen to lie his way out of it. As more and more examples of possible dishonesty have surfaced, I have started to believe that the real explanation may simply be that LaCour isn’t a very honest person.

UPDATE: It is official! Science has retracted the paper. The statement just posted gives two confirmed (via LaCour’s attorney) cases of false information in the paper, the statistical irregularities first noted by Broockman and Kalla, and LaCour’s failure to produce his data as the reasons for the retraction. The statement ends, “Michael J. LaCour does not agree to this Retraction.”

Please do be so kind as to share this post.

62 thoughts on “Pathological [Updated]”

    • – not sure if you saw that io9 link I put on Chris’ earlier post this AM on how easy it is to perpetrate and disseminate bunk; but even if outright fraud is rare, I suspect that there is a lot – a LOT – of bad information being passed through.

      Sometimes because the researchers are being careless (or dishonest); sometimes because the people reporting on it do not understand what the research really means.


I don’t think anyone really has any idea. I mean, we can look at how often fraud is discovered — maybe one high profile case a year, or thereabouts, and a few other smaller ones, plus likely more that never see the light of day because a grad student or post doc or whoever never publishes the fabricated data — but there are undoubtedly cases that go undiscovered, some of which will never be discovered because the findings simply aren’t important enough or the literature has already moved on, so no one’s going to go back and look at the numbers really closely. And then there is the almost certainly more common case of simply incorrect results due to lack of proper statistical controls (see the comments earlier today on the original post), which can be dishonest (researchers know perfectly well what they’re doing when they run a bunch of studies and then only publish the one that “works”), but much of which is just the result of a lack of attention or statistical/methodological knowledge.

      I don’t think I’ve ever seen a case quite like this. I mean, Hauser’s fraud may have been pretty widespread (and very nearly went undiscovered, and if it hadn’t been for grad students, might have been undiscoverable), and his research was widely reported in the popular press, so it was a high profile case, but his lying didn’t extend to every aspect of his CV, as this dude’s may have.


    • At some point in the last few years, Bayer announced that they were only able to reproduce the results in about one-third of the papers published in technical journals about potential new drug molecules. They believe that the practice of fudging the statistics by repeating trials until a positive result appears, then reporting that trial, has become very widespread.
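The mechanism behind this kind of fudging is just repeated testing: under a true null hypothesis, each attempt has roughly a 5% chance of a false positive, so the chance that at least one of several attempts “works” compounds quickly. A minimal simulation (a hypothetical sketch, not anything from the Bayer analysis; it relies on the fact that under the null a well-behaved test’s p-value is uniform on [0, 1]):

```python
import random

random.seed(1)

ALPHA = 0.05
ATTEMPTS = 5        # "repeat the trial until a positive result appears"
SIMULATIONS = 100_000

# Under a true null, each attempt's p-value is modeled as a uniform draw;
# the trial "succeeds" if any attempt happens to fall below alpha.
def any_attempt_significant(attempts: int) -> bool:
    return any(random.random() < ALPHA for _ in range(attempts))

false_positives = sum(any_attempt_significant(ATTEMPTS) for _ in range(SIMULATIONS))
rate = false_positives / SIMULATIONS

print(f"Nominal false-positive rate: {ALPHA:.3f}")
print(f"Rate after keeping the best of {ATTEMPTS} attempts: {rate:.3f}")
# Analytically: 1 - (1 - 0.05)**5 ~= 0.226, over four times the nominal rate.
```

The point is that nothing dishonest-looking happens at any single step; the bias comes entirely from which trial gets reported.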


Of all the things I was forced to read in high school, this is the only thing I do remember, which is of some wonder to me, since I know head injury has led to lost memories and memories connected in all sorts of odd ways not necessarily reflective of actual events.

        I don’t know that I’ve read any others, and perhaps I shall.

This whole thing reminds me of those folk who get fancy jobs only to have it revealed that they made up much of their credentials and weren’t as accomplished as they seemed on paper. My favorite was local, a guy named Warren Cook who ran Jackson Lab in Bar Harbor, ME — the place that supplies genetically-tailored mice for research. At the time I had him in my rolodex, he was considered a hot shot and major success by a whole lot of folk around the state and in the bio-tech industry, which consumes so many of those poor little mice. And boom. He’d made the stuff up on his resume years before and never purged it, and out the door he goes.


        • That is, essentially, what LaCour did, and it got him a great job. I imagine he won’t be able to actually start that great job, though, as I assume the offer will be rescinded. Unless, perhaps, he reveals that all of this was his actual research, and we’re all now data points in his dissertation.

          On Montaigne: when I was a sophomore in college, a bright-eyed and bushy-tailed philosophy major, a professor gave me a copy of the essays with a book mark on the one titled “That to Study Philosophy Is to Learn to Die.” I fell in love immediately.


        • There was a big scandal a few years ago when it was revealed that the Director of Admissions at MIT made up all of her credentials. She wasn’t even a college graduate.


    • [A]fter a tongue has once got the knack of lying, ’tis not to be imagined how impossible it is to reclaim it whence it comes to pass that we see some, who are otherwise very honest men, so subject and enslaved to this vice.


  1. I’ve added it in an update, but I’ll stick it here too, as it’s a rather big deal: Science has now officially retracted the paper. Statement here. Best part is the final sentence:

    Michael J. LaCour does not agree to this Retraction.

    It’s like he’s a shovel and can’t help but view everything as an opportunity to dig his hole deeper.


    • Oh dear Lord, the results in the second paper were almost certainly fabricated. My favorite part:

      First and most obviously, LaCour’s figure includes estimates of ideological positioning for a number of radio shows, such as The Radio Factor and The Savage Nation, that do not appear at all in the UCLA closed caption archive on which LaCour’s estimates are supposedly based.

      In other words, he’s not even a very good fabricator.

      I don’t know how this could possibly get worse for the dude, but given its brief history it probably will, and dramatically so, within the next week.


I’d rather replications happen organically, as they will tend to for anything big enough to be published in Science. Even when attempts to replicate fail, they will tend to involve requesting the data from the original researchers, which is usually enough to catch fabrication, though not enough to solve the “run it a bunch of times until it comes out the way we want it” issue.


First Rule of Holes: when you find yourself in one, stop digging.
  Second Rule of Holes: trying to dig out of the hole with heavy machinery & high explosives is ill-advised.


  2. LaCour issued his response last night.

    Cynics called this a “Friday Night News Dump” but it was pointed out to me that, on Friday nights, Social Scientists were likely to be home and unoccupied.


        • Briefly: he admits lying about the funding and the compensation. He says he really used a raffle for Apple products, and links to receipts with some timing issues (some of the receipts come after data collection). He says he deleted the raw data per UCLA policy, but the policy, which he quotes, only requires deleting P.I.I.

He doesn’t challenge the claim that the survey company never ran, nor could it have run, the surveys. Instead, he provides details and evidence of unrelated conversations with Qualtrics.

          He says another researcher has replicated his finding, but we do not yet know whether this is true.

And finally, the statistical argument (I should add he accuses Broockman et al. of mistakes, a lack of ethics, and dishonesty throughout, culminating here with the data): he says they “manipulated” the other study data to make it look like his, used the wrong field anyway, and screwed up the reliability data.

The “manipulation” was simply recoding non-responses to 50. The wrong variable was the recoded version, rather than the raw one LaCour says they should have used. If you ask me, when merely recoding non-responses yields data identical to yours, your data is a copy of the other data. He is making a play for the public, not other researchers, here, hoping that “manipulated” will convince the same folks who thought “hide the decline” indicated conspiracy.

The reliability stats are a bit trickier, and will require verification with the data, but to me, by that point the fraud is demonstrated, so the reliability argument he makes is superfluous.

Added: I forgot to include this: the evidence he provides appears to indicate that he either never got IRB approval or got it after data was collected, which by itself likely would have cost him his dissertation and the Science publication. However, that is an even bigger failure, as some, perhaps many, in his department should have been aware of this.


          • I read the thing too, and only – barely! – understood the parts written in English. But two things jumped out at me. He admits to lying about the funding; and he admits he destroyed all the raw data while strenuously claiming that doing so was not only the norm in his field, but required by the institution he works for. The first is just an admission of a straight up lie. The second, tho, seems like a more nuanced fabrication, based on interpreting an institutional policy self-servingly. He’s basically saying: “I have my conclusions, you guys have yours, and the only thing that could settle the matter has been destroyed as per institutional requirements.”

            I have a hard time believing that institutional norms would require destroying data prior to a vetting of conclusions derived from that very data. But that’s not my world. Seems very counterintuitive to my understanding of scientific practices.


          • OMG OMG OMG! I just realized the most hilarious aspect of his reply, and a sure sign that he doesn’t know what he’s doing.

As I said in the previous comment, he accuses Broockman et al. of using the wrong field in the other survey to compare to his. The most damning claim Broockman made was that LaCour’s data was indistinguishable from that of the other survey, so this is supposed to defeat that claim.

LaCour said they compared a field with recoded responses, when they should have compared the one with raw responses. However, it’s important to note that the field with unrecoded responses isn’t really raw. It recodes all non-responses to 101, a value over 100 on a measure from 0 to 100. This allows anyone analyzing the data to quickly and easily deal with the non-responses (missing values and non-responses can be handled a variety of ways).

The big difference that LaCour points out between his data and the raw field in the other survey? The modal response in the other one is 100. However, this is because it’s including the non-responses, 101, as 100 in the histogram. In other words, he’s just shown that his data includes recoded non-responses, and has inadvertently demonstrated that Broockman used the correct variable and he did just copy the other survey’s data.
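The histogram artifact described here is easy to reproduce with toy data (a hypothetical sketch; the variable names, the 0–100 thermometer scale with 101 as the non-response code, and the 15% non-response rate are illustrative, not from the actual survey):

```python
import random
from collections import Counter

random.seed(2)

NON_RESPONSE = 101   # scale runs 0-100; 101 flags "no answer"

# Simulate raw survey responses: mostly real answers, some non-responses.
raw = [random.randint(0, 100) if random.random() > 0.15 else NON_RESPONSE
       for _ in range(10_000)]

# Version A: treat non-responses as missing (the analysis-ready variable).
answered = [v for v in raw if v != NON_RESPONSE]

# Version B: naively clip to the 0-100 scale, folding the 101s into 100 --
# the mistake that makes 100 the modal value in the histogram.
clipped = [min(v, 100) for v in raw]

mode_clipped = Counter(clipped).most_common(1)[0][0]
print("Mode with non-responses folded into 100:", mode_clipped)   # 100
print("Count at 100 after folding:", Counter(clipped)[100])
print("Typical count at any other value:", Counter(clipped)[50])
```

With non-responses folded in, the count at 100 dwarfs every other bin, so a spike at 100 in a “raw” histogram is a signature of the non-response code, not of genuine answers.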


  3. Chris,

    You’ve done a really good job of explaining this, both in your posts and in your comments. This fiasco is something I would have had only a very dim knowledge of, and it’s interesting to see what’s at play. (It’s also interesting to see how claims of fraud in the sciences compare to those that go down in history, my discipline.)

