Tuesday, November 17, 2015

Stanford researchers uncover patterns in how scientists lie about their data

When scientists falsify data, they try to cover it up by writing differently in their published works. A pair of Stanford researchers have devised a way of identifying these written clues.
Even the best poker players have "tells" that give away when they're bluffing with a weak hand. Scientists who commit fraud have similar, but even more subtle, tells, and a pair of Stanford researchers have cracked the writing patterns of scientists who attempt to pass along falsified data.
The work, published in the Journal of Language and Social Psychology, could eventually help scientists identify falsified research before it is published.
There is a fair amount of research dedicated to understanding the ways liars lie. Studies have shown that liars generally tend to express more negative emotion terms and use fewer first-person pronouns. Fraudulent financial reports typically display higher levels of linguistic obfuscation – phrasing that is meant to distract from or conceal the fake data – than accurate reports.
To see if similar patterns exist in scientific academia, Jeff Hancock, a professor of communication at Stanford, and graduate student David Markowitz searched the archives of PubMed, a database of life sciences journals, from 1973 to 2013 for retracted papers. They identified 253, primarily from biomedical journals, that were retracted for documented fraud, and compared the writing in these with that of unretracted papers from the same journals and publication years, covering the same topics.
They then scored each paper using a customized "obfuscation index," which measured the degree to which the authors attempted to mask their false results. The index was a summary score of causal terms, abstract language, jargon, positive emotion terms and a standardized ease-of-reading score.
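To make the idea of a summary score concrete, here is a minimal sketch of how an obfuscation-index-style measure could be computed. This is an illustration only: the word lists are toy placeholders (the study used established linguistic dictionaries, not these), the equal weighting of the components is an assumption, and the syllable count in the readability estimate is a crude approximation.

```python
import re

# Toy category word lists -- illustrative placeholders, not the
# dictionaries the researchers actually used.
JARGON = {"methodology", "paradigm", "modality", "multifactorial"}
CAUSAL = {"because", "therefore", "hence", "thus"}
POSITIVE = {"good", "great", "excellent", "clear"}

def word_rate(words, category):
    """Fraction of tokens that fall into a category word list."""
    return sum(w in category for w in words) / len(words)

def flesch_reading_ease(text, words):
    """Crude Flesch reading ease; vowel groups approximate syllables."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w))) for w in words)
    return (206.835
            - 1.015 * (len(words) / sentences)
            - 84.6 * (syllables / len(words)))

def obfuscation_score(text):
    """Summary score: more jargon and causal terms, fewer positive
    emotion terms, and harder-to-read prose all push the score up.
    Equal weights are an assumption made for this sketch."""
    words = re.findall(r"[a-z']+", text.lower())
    return (word_rate(words, JARGON)
            + word_rate(words, CAUSAL)
            - word_rate(words, POSITIVE)
            - flesch_reading_ease(text, words) / 100.0)
```

On this toy scale, a dense, jargon-laden sentence scores higher than a short, plainly positive one, which is the direction of effect the study reports for fraudulent papers.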
"We believe the underlying idea behind obfuscation is to muddle the truth," said Markowitz, the lead author on the paper. "Scientists faking data know that they are committing a misconduct and do not want to get caught. Therefore, one strategy to evade this may be to obscure parts of the paper. We suggest that language can be one of many variables to differentiate between fraudulent and genuine science."
The results showed that fraudulent retracted papers scored significantly higher on the obfuscation index than papers retracted for other reasons. For example, fraudulent papers contained approximately 1.5 percent more jargon than unretracted papers.
"Fraudulent papers had about 60 more jargon-like words per paper compared to unretracted papers," Markowitz said. "This is a non-trivial amount."
The researchers say that scientists might commit data fraud for a variety of reasons. Previous research points to a "publish or perish" mentality that may motivate researchers to manipulate their findings or fake studies altogether. But the change the researchers found in the writing is directly related to the author's goal of covering up lies through the manipulation of language. For instance, a fraudulent author may use fewer positive emotion terms to curb praise for the data, for fear of triggering inquiry.
In the future, a computerized system based on this work might be able to flag a submitted paper so that editors could give it a more critical review before publication, depending on the journal's threshold for obfuscated language. But the authors warn that this approach isn't currently feasible given the false-positive rate.
"Science fraud is of increasing concern in academia, and automatic tools for identifying fraud might be useful," Hancock said. "But much more research is needed before considering this kind of approach. Obviously, there is a very high error rate that would need to be improved, but also science is based on trust, and introducing a 'fraud detection' tool into the publication process might undermine that trust."
Bjorn Carey, Stanford News Service: (650) 725-1944, bccarey@stanford.edu

Saturday, November 14, 2015

Friday, November 13, 2015

Open letter to the editor of The Lancet, Dr. Richard Horton, by 6 professors about the PACE trial's fatal flaws

@ www.virology.ws:

Dr. Richard Horton
The Lancet
125 London Wall
London, EC2Y 5AS, UK
Dear Dr. Horton:
In February, 2011, The Lancet published an article called “Comparison of adaptive pacing therapy, cognitive behaviour therapy, graded exercise therapy, and specialist medical care for chronic fatigue syndrome (PACE): a randomized trial.” The article reported that two “rehabilitative” approaches, cognitive behavior therapy and graded exercise therapy, were effective in treating chronic fatigue syndrome, also known as myalgic encephalomyelitis, ME/CFS and CFS/ME. The study received international attention and has had widespread influence on research, treatment options and public attitudes.
The PACE study was an unblinded clinical trial with subjective primary outcomes, a design that requires strict vigilance in order to prevent the possibility of bias. Yet the study suffered from major flaws that have raised serious concerns about the validity, reliability and integrity of the findings. The patient and advocacy communities have known this for years, but a recent in-depth report on this site, which included statements from five of us, has brought the extent of the problems to the attention of a broader public. The PACE investigators have replied to many of the criticisms, but their responses have not addressed or answered key concerns.
The major flaws documented at length in the recent report include, but are not limited to, the following:
*The Lancet paper included an analysis in which the outcome thresholds for being “within the normal range” on the two primary measures of fatigue and physical function demonstrated worse health than the criteria for entry, which already indicated serious disability. In fact, 13 percent of the study participants were already “within the normal range” on one or both outcome measures at baseline, but the investigators did not disclose this salient fact in the Lancet paper. In an accompanying Lancet commentary, colleagues of the PACE team defined participants who met these expansive “normal ranges” as having achieved a “strict criterion for recovery.” The PACE authors reviewed this commentary before publication.
*During the trial, the authors published a newsletter for participants that included positive testimonials from earlier participants about the benefits of the "therapy" and "treatment." The same newsletter included an article that cited the two rehabilitative interventions pioneered by the researchers and being tested in the PACE trial as having been recommended by a U.K. clinical guidelines committee "based on the best available evidence." The newsletter did not mention that a key PACE investigator also served on the clinical guidelines committee. At the time of the newsletter, two hundred or more participants—about a third of the total sample—were still undergoing assessments.
*Mid-trial, the PACE investigators changed their protocol methods of assessing their primary outcome measures of fatigue and physical function. This is of particular concern in an unblinded trial like PACE, in which outcome trends are often apparent long before outcome data are seen. The investigators provided no sensitivity analyses to assess the impact of the changes and have refused requests to provide the results per the methods outlined in their protocol.
*The PACE investigators based their claims of treatment success solely on their subjective outcomes. In the Lancet paper, the results of a six-minute walking test—described in the protocol as "an objective measure of physical capacity"—did not support such claims, notwithstanding the minimal gains in one arm. In subsequent comments in another journal, the investigators dismissed the walking-test results as irrelevant, non-objective and fraught with limitations. All the other objective measures in PACE, presented in other journals, also failed. The results of one objective measure, the fitness step-test, were provided in a 2015 paper in The Lancet Psychiatry, but only in the form of a tiny graph. A request for the step-test data used to create the graph was rejected as "vexatious."
*The investigators violated their promise in the PACE protocol to adhere to the Declaration of Helsinki, which mandates that prospective participants be “adequately informed” about researchers’ “possible conflicts of interest.” The main investigators have had financial and consulting relationships with disability insurance companies, advising them that rehabilitative therapies like those tested in PACE could help ME/CFS claimants get off benefits and back to work. They disclosed these insurance industry links in The Lancet but did not inform trial participants, contrary to their protocol commitment. This serious ethical breach raises concerns about whether the consent obtained from the 641 trial participants is legitimate.
Such flaws have no place in published research. This is of particular concern in the case of the PACE trial because of its significant impact on government policy, public health practice, clinical care, and decisions about disability insurance and other social benefits. Under the circumstances, it is incumbent upon The Lancet to address this matter as soon as possible.
We therefore urge The Lancet to seek an independent re-analysis of the individual-level PACE trial data, with appropriate sensitivity analyses, from highly respected reviewers with extensive expertise in statistics and study design. The reviewers should be from outside the U.K. and outside the domains of psychiatry and psychological medicine. They should also be completely independent of, and have no conflicts of interests involving, the PACE investigators and the funders of the trial.
Thank you very much for your quick attention to this matter.
Ronald W. Davis, PhD
Professor of Biochemistry and Genetics
Stanford University
Jonathan C.W. Edwards, MD
Emeritus Professor of Medicine
University College London
Leonard A. Jason, PhD
Professor of Psychology
DePaul University
Bruce Levin, PhD
Professor of Biostatistics
Columbia University
Vincent R. Racaniello, PhD
Professor of Microbiology and Immunology
Columbia University

      Arthur L. Reingold, MD
      Professor of Epidemiology
      University of California, Berkeley

      Thursday, November 12, 2015

      Shirley applying PACE trial logic to a broken ankle ...

      It's not the broken ankle but the thoughts about the ankle
      that make it painful to jog.

      PS by Janet: recovery depends on not speaking to others with broken ankles ...

      Wednesday, November 11, 2015

      Professor Coyne: Why the scientific community needs the PACE trial data to be released

By James C. Coyne, PhD, Professor of Health Psychology at University Medical Center, Groningen, the Netherlands, where he teaches scientific writing and critical thinking:

There are obvious parallels between the politics behind the persistence of the US claim that psychotherapy increases survival time for cancer patients and the UK claims that cognitive behavior therapy is sufficient treatment for schizophrenia in the absence of medication, or that it produces recovery from the debilitating medical condition Chronic Fatigue Syndrome/Myalgic Encephalomyelitis. There are also parallels to investigators making controversial claims based on multivariate analyses but not allowing access to the data needed to independently evaluate those analyses. In both cases, patient well-being suffers.
      If the ICO upholds the release of data for the PACE trial in the UK, it will pressure the US NIH to stop hypocritically endorsing data sharing and rewarding investigators whose credibility depends on not sharing their data.
As seen in a PLOS One study, unwillingness to share data in response to formal requests is associated with weaker evidence (against the null hypothesis of no effect) and a higher prevalence of apparent errors in the reporting of statistical results. The unwillingness to share data was particularly clear when reporting errors had a bearing on statistical significance.
      Why the PACE investigators should not appeal
In the past, PACE investigators have been quite dismissive of criticism, appearing to have assumed that being afflicted with Chronic Fatigue Syndrome/Myalgic Encephalomyelitis precludes a critic from being taken seriously, even when the criticism is otherwise valid. However, with publication of the long-term follow-up data in Lancet Psychiatry, they are now contending with accomplished academics whose criticisms cannot be so easily brushed aside. Yes, the credibility of the investigators' interpretations of their data is being challenged. And even if they do not believe they need to be responsive to patients, they need to be responsive to colleagues. Releasing the data is the only acceptable response, and not doing so risks damage to their reputations.
      QMUL, Professors White and Sharpe, let the People’s data go.

      Tuesday, November 10, 2015

      Prof Montoya: results of more in-depth immunological tests in ME/CFS were staggering

      By Leela:

      "Many physicians and researchers thought patients with CFS didn't show signs of active inflammation," says Montoya.

      "But when we began to perform more in-depth tests, the results were staggering. A picture of patients with highly inflamed bodies emerged before our eyes and validated what they've been telling us for decades."

      Thursday, November 5, 2015

      Sir Simon Wessely and the successful first voyage of its flagship, the Queen Mary Pace. Built in Oxford to rigorous specifications ...

      Graham McPhee:

      Sir Simon Wessely has written an article for the nationalelfservice (www.nationalelfservice.net/…/the-pace-trial-for-chronic-fa…/) in which he describes the PACE trial as being somewhat similar to a voyage on a cruise ship.
      This was my answer. I'm afraid you'll have to be up to speed on the PACE controversy:

      Southampton to New York in 5 days
      Canard has announced the successful first voyage of its flagship, the Queen Mary Pace. Built in Oxford to rigorous specifications, this sleek and sophisticated liner, with the latest engines built with Graduated Energy Transmission, has taken the world by storm with the first crossing in just 5 days.
      Setting out from Southampton on 18th March, 2005, amid cheers from admiring crowds in the medical enclosure, it left harbour looking every inch the awesome and standard-setting giant that it is.
      With everyone settled in and enjoying the cruise, the captain called together his senior crew and decided to make some minor adjustments to the course plotted. Informing the management back at Canard, these quickly took place, and a mere 5 days later, passengers found themselves at the quayside in Dublin.
"All changes were rigorously discussed with the senior crew," explained the captain. "It was felt important to make changes that truly reflected the potential of the new engine arrangement. Dublin is an entirely normal destination for cruise liners."
      A number of passengers complained, but, as management at Canard explained, “There are always vexatious passengers on any cruise. All decisions were taken in their interests, and, naturally, it would be inappropriate for passengers unfamiliar with the ways of the sea to be allowed to comment on such matters. The crew had unusually tranquil seas and clement weather to contend with: it is too easy to criticize from the sidelines.” The UK government, which invested heavily in this engineering miracle, recommends that everyone should experience this very effective service.
      When questioned, 22% of the passengers said that they thought Dublin was much better or even very much better than they had realized: but another analysis showed that, actually, none of them ever ended up in New York, even two years later.
      - See more at: http://www.nationalelfservice.net/…/the-pace-trial-for-ch…/…

      Tuesday, November 3, 2015

      Amnesty International's indifference to torture in Denmark

      "To Amnesty International, on the occasion of their request for donation:

      Dear Madame and Sir,

      Thank you for your request for a donation. 

      I am saddened by your rejection of the issue I care about most dearly.

      Nearly 20 million persons worldwide, 1.1 million in the United States, are truly denied their human rights in respect of research and treatment for the polio-like illness Myalgic Encephalomyelitis.

      Many, especially in Great Britain and continental Europe, suffer torture and unbelievably cruel worsening of their condition at the hands of profit-seeking psychiatrists reminiscent of the worst Soviet psychiatric practices. 

      Your indifference to our suffering, torture, and exacerbation of symptoms unto death appalls me.

      Should you wish to engage with this tragedy, please Google the case of the Danish young woman Karina Hansen, whom psychiatrists have transformed in three years of incarceration for no sin save being ill, from a lovely young woman into a mindless brain-damaged permanent cripple.

      Yours sincerely,

      Deborah Waroff"

      Monday, November 2, 2015

      Professor of Cognitive Neuropsychology Keith R Laws: PACE trial interventions of CBT and GET fare no better than Standard Medical Care

      Professor Keith R Laws, Professor of Cognitive Neuropsychology, Sunday, 1 November 2015:

This week Lancet Psychiatry published a long-term follow-up study of the PACE trial assessing psychological interventions for Chronic Fatigue Syndrome/ME - it is available on the journal's website after free registration.

On reading it, I was struck by more questions than answers. It is clear that these follow-up data show that the interventions of Cognitive Behavioural Therapy (CBT), Graded Exercise Therapy (GET) and Adaptive Pacing Therapy (APT) fare no better than Standard Medical Care (SMC). While the lack of difference in key outcomes across conditions seems unquestionable, I am more interested in certain questions thrown up by the study concerning decisions that were made and how data were presented.

      A few questions that I find hard to answer from the paper...

      1) How is 'unwell' defined?  
The authors state that "After completing their final trial outcome assessment, trial participants were offered an additional PACE therapy if they were still unwell, they wanted more treatment, and their PACE trial doctor agreed this was appropriate. The choice of treatment offered (APT, CBT, or GET) was made by the patient's doctor, taking into account both the patient's preference and their own opinion of which would be most beneficial." White et al 2011

But how was 'unwell' defined in practice? Did the PACE doctors take patient descriptions of 'feeling unwell' at face value, or did they perhaps refer back to criteria from the previous PACE paper, which defined 'normal' as patient scores being "within normal ranges for both primary outcomes at 52 weeks" (CFS 18 or less and PF 60+)? Did the PACE doctors exclude those who said they were still unwell but scored 'normally', or those who said they were well but scored poorly? None of this is any clearer from the published protocol for the PACE trial.

      More @ LawsDystopiaBlog

