LOSS OF CAPACITY TO RECOVER FROM ACIDOSIS ON REPEAT EXERCISE IN CHRONIC FATIGUE SYNDROME – A CASE CONTROL STUDY
European Journal of Clinical Investigation © 2011 Stichting European Society for Clinical Investigation Journal Foundation
David EJ Jones MD PhD1,†, Kieren G Hollingsworth PhD1,2,†, Djordje G Jakovljevic PhD3,4,5, Gulnar Fattakhova MD3,4, Jessie Pairman3,4, Andrew M Blamire PhD1,2, Michael I Trenell PhD1,4,5,‡, Julia L Newton MD PhD3,4,‡
Author Information
1 Institute of Cellular Medicine
2 Newcastle Magnetic Resonance Centre
3 Institute for Ageing and Health
4 UK NIHR Biomedical Research Centre in Ageing and Age-Related Diseases
5 Newcastle Centre for Brain Ageing and Vitality, Newcastle University, UK
Correspondence: Professor Julia L Newton, Institute for Ageing and Health, Medical School, Framlington Place, Newcastle-upon-Tyne NE2 4HH, UK. Email: j.l.newton@ncl.ac.uk
Publication History
1. Accepted manuscript online: 10 JUN 2011 11:17AM EST
2. Received Date: 16-Mar-2011, Accepted Date: 05-Jun-2011
Abstract
Background: Patients with chronic fatigue syndrome (CFS) frequently describe difficulty with repeat exercise. Here we explore muscle bioenergetic function in response to three bouts of exercise.
Methods: 18 CFS patients (CDC 1994 criteria) and 12 sedentary controls underwent assessment of maximal voluntary contraction (MVC), repeat exercise with magnetic resonance spectroscopy, and a cardiorespiratory fitness test to determine anaerobic threshold.
Results: CFS patients undertaking MVC fell into two distinct groups: 8 (45%) showed normal PCr depletion in response to exercise at 35% of MVC (PCr depletion >33%, the lower 95% CI for controls), while 10 showed low PCr depletion (generating abnormally low MVC values). The CFS group as a whole exhibited significantly reduced anaerobic threshold, heart rate, VO2, VO2 peak and peak work compared with controls. Resting muscle pH was similar in controls and both CFS patient groups. However, the CFS group achieving normal PCr depletion showed increased intramuscular acidosis compared with controls after comparable work in each of the three exercise periods, with no reduction in acidosis on repeat exercise of the kind reported in normal subjects. This CFS group also exhibited significant prolongation (almost four-fold) of the time taken for pH to recover to baseline.
Conclusion: When exercising to levels comparable with normal controls, CFS patients exhibit profound abnormalities in bioenergetic function and in their response to exercise. Although exercise intervention is the logical treatment for patients showing acidosis, any trial must exclude subjects who do not initiate exercise, as they will not benefit. This may explain the mixed results of previous CFS exercise trials.
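The abstract's grouping rule, in which a patient counts as showing "normal" PCr depletion only if their depletion exceeds the lower 95% confidence bound of the control group, can be sketched in a few lines. This is an illustrative reconstruction with invented numbers, not the authors' analysis code: the sample values and the normal-approximation confidence interval are assumptions made for the sketch.

```python
import math
import statistics

def lower_95ci_bound(values):
    """Lower bound of the 95% CI for the mean, using a normal
    approximation (a t-distribution would be more exact for n = 12)."""
    n = len(values)
    mean = statistics.mean(values)
    sem = statistics.stdev(values) / math.sqrt(n)
    return mean - 1.96 * sem

def classify_patients(control_depletions, patient_depletions):
    """Split patients into 'normal' and 'low' PCr-depletion groups,
    using the controls' lower 95% CI bound as the cut-off."""
    cutoff = lower_95ci_bound(control_depletions)
    normal = [d for d in patient_depletions if d > cutoff]
    low = [d for d in patient_depletions if d <= cutoff]
    return cutoff, normal, low

# Entirely hypothetical values (% PCr depletion at 35% MVC),
# chosen so the cut-off lands near the abstract's 33% figure:
controls = [30, 42, 33, 39, 28, 44, 35, 37, 31, 40, 34, 38]
patients = [40, 36, 22, 18, 39, 12, 35, 25, 41, 15, 20, 38, 10, 34, 28, 42, 19, 30]

cutoff, normal_group, low_group = classify_patients(controls, patients)
print(f"cut-off: {cutoff:.1f}%, normal: {len(normal_group)}, low: {len(low_group)}")
```

With these invented numbers the cut-off falls at about 33%, splitting the 18 hypothetical patients into groups of 8 and 10, mirroring the proportions reported in the Results.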
See also: The main characteristic of ME is an abnormally delayed muscle recovery after doing trivial things; if you don't have that, you don't have ME
Thursday, June 30, 2011
MECFS Alert Episode 2: ME/CFS, a truly horrific illness with atrocious research funding. Thanks for speaking out, Mr King!
In this episode of MECFS Alert, Llewellyn King discusses atrocious funding inequality.
Labels:
CHRONIC DISEASE,
Cool Blogging Therapy,
Coping,
GOBSART,
LIFE,
ME,
ME/CFS,
RESEARCH,
Science,
VIDEO,
XMRV
Retrovirus Hides in the Brain Even When the Virus Is Not Measurable in the Blood, Swedish Study Finds
ScienceDaily (Aug. 23, 2010) — Studies of the spinal fluid of patients given anti-HIV drugs have resulted in new findings suggesting that the brain can act as a hiding place for the HIV virus. Around 10% of patients showed traces of the virus in their spinal fluid but not in their blood -- a larger proportion than previously realised, reveals a thesis from the University of Gothenburg, Sweden.
We now have effective anti-HIV drugs that can stop the immune system from being compromised and prevent AIDS. Although these drugs effectively prevent the virus from multiplying, the HIV virus also infects the brain and can cause damage if the infection is not treated.
"Antiviral treatment in the brain is complicated by a number of factors, partly because it is surrounded by a protective barrier that affects how well medicines get in," says Arvid Edén, doctor and researcher at the Institute of Biomedicine at the Sahlgrenska Academy. "This means that the brain can act as a reservoir where treatment of the virus may be less effective."
The thesis includes a study of 15 patients who had been effectively medicated for several years. 60% of them showed signs of inflammation in their spinal fluid, albeit at lower levels than without treatment.
"In another study of around 70 patients who had also received anti-HIV drugs, we found HIV in the spinal fluid of around 10% of the patients, even though the virus was not measurable in the blood, which is a significantly higher proportion than previously realised," explains Edén.
The results of both studies would suggest that current HIV treatment cannot entirely suppress the effects of the virus in the brain, although it is not clear whether the residual inflammation or small quantities of virus in the spinal fluid in some of the patients entail a risk of future complications.
"In my opinion, we need to take into account the effects in the brain when developing new drugs and treatment strategies for HIV infection," says Edén.
Wednesday, June 29, 2011
Multiple sclerosis in monkeys caused by virus belonging to the herpes family, suggesting virus may cause MS in humans
By Joe Rojas-Burke, The Oregonian, Tuesday, June 28, 2011:
In 1986, an unknown disease began killing monkeys at the Oregon National Primate Research Center in Hillsboro. Affected animals developed an unsteady gait and a rapidly advancing paralysis of the limbs. Caregivers could do nothing to stop the disease and euthanized most of the helpless animals within a week of symptom onset.
The disease, researchers now report, is the monkey equivalent of multiple sclerosis. And it appears to be caused by a virus – adding support to the possibility that multiple sclerosis in humans can be triggered by a viral infection. Experts say the discovery could help expedite the search for more effective treatments.
"That's the ultimate goal," said co-author Scott Wong, a scientist at the primate center and Oregon Health & Science University's Vaccine and Gene Therapy Institute.
Since the first case appeared at the primate center, 56 monkeys in the Japanese macaque colony have fallen ill with the neurological disease. In most years, no more than four cases appear in the population of more than 300 monkeys. In a few cases, the monkeys recovered and lived normally for many months before relapsing -- a course often seen in people with multiple sclerosis, or MS.
Wong and colleagues studied brain and spinal cord samples from nearly all of the monkeys after death. Microscopic examination revealed damage very similar to that in MS patients, including nerve fibers stripped of their protective sheath. Brain scans performed on eight living monkeys showed scattered patches of dead and damaged nerve cells also similar to those seen in people with MS.
The researchers isolated a previously unknown virus from a sample of damaged spinal nerve tissue taken from one animal. DNA analysis showed that the virus belongs to the herpes family of viruses. Wong's team developed a sensitive test for the virus and used it to screen samples from healthy and diseased monkeys. So far, they have detected the virus in samples of damaged nerve tissue from six monkeys that died from the disorder. The virus has not shown up in healthy samples.
Read more>>
PACE study: after months of treatment patients are still so ill they would qualify to start the program again
Re:History of prejudice
Caroline Davis, Patient and former Director
London
Rapid Response to Editor's Choice:
Ending the stalemate over CFS/ME
Fiona Godlee
BMJ 2011;342:doi:10.1136/bmj.d3956 (Published 22 June 2011)
http://www.bmj.com/letters/submit/bmj;342/jun22_3/d3956?title=Re:Re:History%20of%20prejudice
We are a community which does have a number of strong voices motivated by utter exasperation and sometimes tempers run high. It doesn't always do us a lot of good, and that's something the most aggressive of our campaigners would do well to learn. Nobody responds well to being shouted at or heckled. But there are two sides to this story and Ms Godlee apparently only sees one.
After some 30 years of absolutely no biomedical research funded by our national research body the MRC, I don't think it's unreasonable for some patients to have lost patience with the Lancet, the BMJ and the policymakers. It's decades of lost life we are talking about. Think about it. Think about not being able to point to anything you've done or achieved for over 20 years. Think about living on a pittance and being ridiculed by your own family, friends and the media simply for being ill. It's no small issue to us. We have been quiet and patient far too long.
There appears to be a 'done and dusted' assumption on the part of the medical and research establishment in the UK that this 'must be' a biopsychosocial condition. You talk about aggressive patient campaigners: well, this is the other side of the coin: Overly powerful voices in the psychiatric community who will permit no argument that this may be a biomedical problem rather than a psychiatric one, and who hold far too many positions of power in medicine, research, media gatekeeping and Government policy. The same names recur over and over again, and block any possibility of getting the other side of this coin explored.
It is this imbalance that patients are railing against, and it's for this reason patients and researchers 'cannot work together'. There is a simple solution. Drop the assumption, let both sides be heard equally (which means changing some of the names controlling the big decision-making bodies for this condition), fund research for both putative causes equally, and watch how things change.
We need those in policy, medicine and research to keep an open mind on ME and act within their sphere of influence to ensure that there are decent, accurate case criteria for our disease (don't reinvent the wheel, the CCC works well for most patients); that there is fair media coverage of all the research, including biomedical studies coming out of other countries, so that a balanced view of the condition and its causes can be taken by the public, Government departments and medical practitioners; and that patients have the right not to be penalised for refusing "treatment" that we know harms us.
Ms Godlee's comment about the PACE study is typical of the current assumption that is 'making an ass out of you and ME' (excuse the pun). No patients defined under the CCC actually took part in the PACE study; nor did the stats prove that people got better (those who did achieve small improvements after all those months of treatment were generally still so ill they would have qualified to start the program over again). If PACE had worked, many of the hundreds of patients who took part would be very quick to say so and encourage others to take part. Why do you think this has not happened?
As to the Lombardi et al study that is being so heavily criticised, again, Ms Godlee sees one side of a story only. There are as many positive studies on XMRV as negative: guess which make it into the headlines? (If one looks at who advises the Science Media Panel on ME, one will immediately see the reason for this).
Even if XMRV does not prove causative, it has at least created a bandwagon of biomedical research that is way, way overdue for a patient population and disease severity of this magnitude. No wonder patients are cheerleading the Whittemore Peterson Institute.
If you consider yourself to be a fair person, Ms Godlee, then please put aside your assumption of who may be right and who may be wrong about ME and take a more neutral view. This many patients can't all be wrong.
Competing interests: None declared
Published 26 June 2011
Labels:
CBT,
CHRONIC DISEASE,
Cool Blogging Therapy,
Coping,
EXERCISE,
GET,
ME,
ME/CFS,
PACE,
RATT,
RESEARCH
XMRV has created a bandwagon of biomedical research that is way, way overdue
Scud missiles in the BMJ
Posted by Andrea Pring, WEDNESDAY, 29 JUNE 2011:
My Rapid response to Ending the stalemate over CFS/ME, Fiona Godlee, British Medical Journal, 22 June 2011
It is rather telling when you see which parts were edited out!
What actually did the BMJ hope to achieve by launching this scud missile on a community of already down-trodden patients? What are your motives and objectives? Read more>>
Tuesday, June 28, 2011
What stands out most in Nigel Hawkes' BMJ article is a horrifying unwillingness to investigate the truth about this devastating disease
Dreambirdie said...:
Hi Mindy. So glad you wrote him. So did I. (below)
Mr Hawkes--
Regarding journalism, Bill Moyers sums it up brilliantly: "What is important for the journalist is not how close you are to power, but how close you are to reality." (4/7/08, Huffington Post) Regarding your recent article Dangers of research into chronic fatigue syndrome, it is all too clear just how badly reality lost out.
What stands out most in your article, besides its obvious lack of impartiality, fairness, and objectivity, is its rather horrifying lack of sensitivity and compassion towards those who are suffering with the serious, debilitating neuro-immune disease known as ME/CFS, an illness affecting close to 20 million people worldwide, including men, women and children. Had you been willing to thoroughly investigate the truth about the suffering brought on by this hideous, devastating disease, you could have gleaned some insight into why ME/CFS patients are outraged and disgusted with those like Wessely, who are the real "danger" in this equation, and who have deluded themselves (and convinced you) into believing they are the victims.
Please read this article, Hard Cell (about Simon Wessely's "treatment" of Ean Proctor) and watch the videos I have sent you. Perhaps they will give you a reality check.
Thank you.
Science Reporter Sees Friend and Other ME Patients Die
Hi Mindy. So glad you wrote him. So did I. (below)
Mr Hawkes--
Regarding journalism, Bill Moyers sums it up brilliantly: "What is important for the journalist is not how close you are to power, but how close you are to reality." (4/7/08, Huffington Post) Regarding your recent article Dangers of research into chronic fatigue syndrome, it is all too clear just how badly reality lost out.
What stands out most in your article, besides its obvious lack of impartiality, fairness and objectivity, is its rather horrifying lack of sensitivity and compassion towards those who are suffering with the serious, debilitating neuro-immune disease known as ME/CFS, an illness affecting close to 20 million people worldwide, including men, women and children. If you had been willing to thoroughly investigate the truth about the suffering brought on by this hideous, devastating disease, you could have gleaned some insight into why ME/CFS patients are outraged and disgusted with those like Wessely, who are the real "danger" in this equation, and who have deluded themselves (and convinced you) into believing they are the victims.
Please read this article, Hard Cell (about Simon Wessely's "treatment" of Ean Proctor), and watch the videos I have sent you. Perhaps they will give you a reality check.
Thank you.
Study was a sham, researchers say in medical journal
By Lisa Girion, Los Angeles Times, June 27, 2011:
The maker of Neurontin disguised an effort to promote the anti-seizure drug to physicians as a clinical trial and failed to inform involved physicians and patients, according to a new analysis published Monday in the Archives of Internal Medicine.
That conclusion was based on an analysis of internal corporate documents that companies involved in marketing Neurontin, including the drug's current owner, Pfizer Inc., were required to disclose in litigation. The authors of the analysis include paid consultants to plaintiffs in litigation over the drugmaker's promotion of Neurontin for off-label indications.
After the U.S. Food and Drug Administration approved Neurontin for epileptic seizures, the company launched what it presented as a dosing study to institutional review boards and physicians participating in the Neurontin trial, according to the Archives of Internal Medicine article.
In fact, the Archives of Internal Medicine article says, the study was a “seeding trial” aimed at ...
ME/CFS GetUp! Campaign: It is time for serious research into the cause of this debilitating condition
by FNC ME/CFS Association Inc:
There are currently about 300,000 to 500,000 people in Australia affected by ME/CFS, yet there is very little to no research into the condition. There needs to be research into its cause and treatment (non-psychological), and advocacy and other assistance also need to be addressed. With about 20 cents per person per year in research funding and $10 billion lost in productivity, it is time to get serious about this debilitating condition.
The PACE Trial: An Expression Of Concern
Douglas T Fraser, Tuesday 28th June 2011:
On February 18th 2011 the Lancet published an article “PACE: a randomised trial” by P D White, A L Johnson, J Bavinton, T Chalder, and M Sharpe et al. [1].
Essentially an unblinded trial, it appears to be falsely registered as an RCT [1a], an example of “strawman design” [2], and published in breach of the Lancet's own requirements [3].
XMRV cannot infect NIH3T3 mouse cells as Drs Mendoza, Vaughan, and Dusty Miller claimed
by XMRV Global Advocacy on Monday, June 27, 2011:
The following was posted on mecfsforum.com by Gerwyn:
The paper below, from Ramon Mendoza, Andrew E. Vaughan, and A. Dusty Miller, made the following claim:
"The left half of XMRV is present in an endogenous retrovirus of NIH/3T3 Swiss mouse cells"
Here we used XMRV-specific PCR to search for a more closely related source of XMRV in mice. While we could not find a complete copy, we did find a 3,600 bp region of XMRV in an endogenous retrovirus present in NIH/3T3 cells. These results show that XMRV has clear ancestors in mice, and highlight another possible source of contamination in PCR assays for XMRV.
http://jvi.asm.org/cgi/content/abstract/JVI.05137-11v1
However, the following paper demonstrates that NIH/3T3 cells cannot be infected by XMRV.
"Common Inbred Strains of the Laboratory Mouse That Are Susceptible to Infection by Mouse Xenotropic Gammaretroviruses and the Human-Derived Retrovirus XMRV"
In contrast, the XPR1 receptor gene originally cloned from NIH 3T3 cells is not permissive for X-MLV entry (2, 42, 50).
http://jvi.asm.org/cgi/reprint/84/24/12841.pdf
IMEA's Letter to the editor of Science regarding serious flaws in the methodology of Paprotka et al.
by XMRV Global Advocacy on Tuesday, June 28, 2011:
Dear Dr Ash
We are a patient organisation and would like to make the following observations regarding the study entitled “Recombinant Origin of the Retrovirus XMRV”, in which John Coffin and his team claim to supply proof that XMRV arose during the passage of a xenograft through strains of mice that could potentially have been used in the formation of the 22Rv1 cell line and thus is a harmless contaminant. Should you choose to reply to this letter we undertake not to make the contents public without your express written permission.
This review first highlights the quote below.
The claim is that XMRV was created as a result of recombination of 2 proviral sequences dubbed PreXMRV-1 and PreXMRV-2.
“The complete sequence of PreXMRV-1 was determined from the early passage xenografts, the NU/NU and Hsd strains, and the CWR-R1 cell line.”
PreXMRV-1 is therefore a metaphor. There is no evidence that it exists as an independent entity. A synthetic clone was constructed from sequences amplified from NU/NU, Hsd and the CWR-R1 cell line, all three of which are known to contain one or more XMRV-specific sequences. The sequences amplified in the early xenografts had homology to 3.5 kb of XMRV, in a region not unique to XMRV. It must be obvious that the other sequences used to construct the synthetic clone called PreXMRV-1 could actually have been XMRV-specific sequences. It is difficult to understand how this escaped the attention of a competent peer reviewer.
The determination of XMRV concentration in 22Rv1, CWR-R1, NU/NU and Hsd nude mice.
“To quantify the amount of XMRV DNA in the CWR22 xenografts, we developed a real-time PCR primer-probe set that specifically detected XMRV env and excluded murine endogenous proviruses present in BALB/c and NIH3T3 genomic DNA (Fig. 1C). We used quantitative PCR of 22Rv1 DNA to estimate 20 proviruses/cell and used the 22Rv1 DNA to generate a standard curve. The CWR22 xenografts had significantly fewer copies of XMRV env (<1–3 copies/100 cells) compared to the 22Rv1 cells (2000 copies/100 cells). The CWR-R1 cell line had ~3000 copies/100 cells, and the NU/NU and Hsd nude mice, thought to have been used to passage the CWR22 xenograft, had 58 and 68 copies/100 cells, respectively.”
These primers were not used to search for XMRV in the other strains of wild and lab mice despite being able to detect such a low copy number of XMRV, nor were they used to examine the early xenografts for the presence of XMRV despite demonstrating such a high level of clinical sensitivity.
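For readers unfamiliar with real-time PCR quantification, the copy-number estimates quoted above come from a standard curve relating threshold cycle (Ct) to known template amounts. A minimal sketch of that arithmetic, using hypothetical dilution and Ct values rather than the paper's actual data:

```python
import numpy as np

# Hypothetical standard-curve data (NOT from the paper): serial dilutions
# with known XMRV env copy numbers, and the Ct at which each dilution
# crossed the fluorescence threshold in real-time PCR.
std_copies = np.array([2e6, 2e5, 2e4, 2e3, 2e2])   # copies per reaction
std_ct     = np.array([18.1, 21.5, 24.9, 28.3, 31.7])

# Fit Ct = slope * log10(copies) + intercept; a slope near -3.32
# corresponds to ~100% amplification efficiency (one 10-fold dilution
# costs ~3.32 extra cycles).
slope, intercept = np.polyfit(np.log10(std_copies), std_ct, 1)

def copies_from_ct(ct):
    """Invert the standard curve to estimate copy number for an unknown."""
    return 10 ** ((ct - intercept) / slope)

# A hypothetical unknown sample crossing threshold at Ct = 26.0:
est = copies_from_ct(26.0)
```

Dividing such an estimate by the number of cell equivalents loaded into the reaction gives the copies-per-100-cells figures discussed in the paper.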
In order to examine the early xenografts the authors created 8 different primer sets. The CWR22 xenograft referred to above was not examined using the new primer sets below:
“We used the same XMRV-specific primer sets to amplify and sequence DNA from early passage xenografts (736, 777, 8L, 8R, 16R, and 18R; Fig. 2B); the results showed that XMRV env, but not gag sequences were present (sequencing coverage summarized in fig. S3), indicating that the early xenografts did not contain XMRV.”
This statement is an opinion and not objective fact. An examination of the primers used on the early xenografts is enlightening; the data can be found in Fig. S3(A) of the supporting material. 8fsa-U5rsa can amplify env sequences when, and only when, XMRV is present. 18f-13r, 8fsa-U5rsa and 8f-U3r can amplify gag-pol sequences and env sequences when XMRV gag, pol and env sequences are known to be present at high concentrations in the late-stage xenografts. There is no evidence, however, that 8fsa-U5rsa and 8f-U3r produce amplicons in any situation other than when XMRV-specific env sequences are present.
Hence, the presence of amplicons produced with the 8fsa-U5rsa and 8f-U3r primers means that the presence of XMRV in the early xenografts certainly cannot be ruled out. Indeed, the amplicon produced using the 18f-13r primers, which amplify gag-pol regions in XMRV, provides further evidence for the existence of XMRV in the early xenografts. These primers would not be expected to amplify the gag-pol region of a proviral sequence, PreXMRV-1, with only some 90% homology to the corresponding region in XMRV.
Taken together, these results strongly suggest that the early xenografts did contain XMRV, especially given that the existence of PreXMRV-1 as a real in vivo entity has not been demonstrated. The conclusion that the absence of gag sequences invalidates this finding is clearly erroneous: it fails to take into account that gag primers are some 10-fold less sensitive when dealing with such copy numbers of XMRV. Indeed, Danielson et al. demonstrated that their nested PCR, using input DNA of 600 ng, could detect 1 XMRV provirus in 100,000 cells, yet were quite unable to detect gag sequences in patients where env sequences were detected. Likewise, Lombardi et al. detected gag sequences in only 7% of patients tested using single-round PCR, such as was used in this study, but were able to detect gag sequences in 67% of people with nested RT-PCR.
The concentration in the CWR22 xenograft was established as being less than 1 copy per 100 cells by the authors themselves, thus we would be dealing with concentrations even lower than that because we are looking at a time before the xenografts were fully formed. It is worthy of note that the primers and cycling conditions used to determine the concentration of XMRV in the CWR22 xenograft, were not at any time used to examine the early passage xenografts, nor indeed in the attempt to detect XMRV in other strains of wild and laboratory mice. The authors also make no attempt to determine the concentration of the proviral DNA sequence in the early xenografts.
Although MuLVs (henceforth called XMRV-like viruses) can induce tumours by inserting into the promoter regions of tumour-related genes, this is by no means the only mechanism by which XMRV-like viruses can induce the formation of tumours, as the example below demonstrates. LP-BM5 is a mix of XMRV-like viruses.
LP-BM5 murine leukemia retrovirus induces excessive oxidative stress and immune dysfunction, leading to B cell leukemia and murine AIDS with cytokine dysfunction. The cytokines upregulated in this investigation were IL-4, IL-6 and TNF-alpha; the cytokines downregulated were IFN-gamma and IL-2 (1).
IL-8 is a cytokine commonly elevated in, and deemed to be a causative factor in, many cancers (2). Some information regarding the role of IL-8 is given below:
"Our data show that IL-8 signaling increases AR expression and promotes ligand-independent activation of this receptor in two androgen-dependent cell lines, describing two mechanisms by which this chemokine may assist in promoting the transition of CaP to the androgen-independent state."(3)
Serum IL-8 is elevated in men with prostate cancer (4).
"Aalinkeel et al. found that IL-8 was significantly higher in the more metastatic PC-3 and DU-145 prostate cancer cell lines, when compared to the poorly metastatic LnCAP cells. The results of our study of IL-8 in men with prostate cancer support the findings of Aalinkeel et al"
XMRV induces IL-8 expression in prostate cancer cell lines (5).
93% of XMRV-infected people with ME have elevated IL-8 levels (6).
Immune dysregulation and the production of oxidative stress in a host in response to an XMRV-like virus infection are well documented. Thus, the claim that XMRV would have to infect many cells in prostate tissue to induce prostate cancer is an opinion only. The cells shown to be directly infected may well have been transformed by insertional mutagenesis, but the two mechanisms are not mutually exclusive.
Summary
A PCR assay (1) with primers A and reagents B, using cycling conditions C, was able to detect XMRV-specific env sequences when the copy number was as low as 1-3 proviral copies per 100 cells.
This PCR assay played no further part in the study.
Instead, a new PCR assay (2) with different reagents and primer combinations D, E, F, G, H, I, J and K was used to assay the CWR-R1 xenograft cell line, the 22Rv1 cell line, and the NU/NU and Hsd nude mice. Primers A, B and C were able to detect XMRV sequences, when only XMRV was present, at a copy number of greater than 2000 proviral copies per 100 cells. Primer J was used to amplify an XMRV sequence from 100 ng of 22Rv1 cell DNA at an unknown level of proviral concentration. Primer J, however, was not able to amplify any XMRV-specific sequences in the investigation cited above. Primer J was the primer chosen to search the DNA of multiple mouse species for the presence of XMRV.
Hence, it is impossible to say that XMRV was not present in the early xenografts or indeed the original prostate cancer tissue. It is also not possible to determine whether gag and env sequences were present in the multiple mouse species examined.
The explanation that XMRV entered the human population as a result of recombination of two proviruses would at least require that both proviruses actually existed as in vivo entities. The existence of the proviral sequence dubbed PreXMRV-1 as such an entity has not been established; the sequences are a construct from sources known to contain XMRV.
We are thus left with at least two competing explanations.
On the one hand, we have the explanation that a PCR assay of unknown sensitivity below the level of 2000 XMRV proviral copies per 100 cells was simply unable to locate very low copy numbers of XMRV. This is supported by the fact that primer J was unable to amplify XMRV sequences when proviral copies were as high as 3000 per 100 cells. Alternatively, we have the authors' preferred explanation: that XMRV was formed by a recombination event so rare that it could only have happened once, the odds of it happening at any other time being (by the authors' own admission) over a billion to one against. I leave the reader to judge which is the more parsimonious hypothesis.
References
1) Lee, J.M.; Dehydroepiandrosterone Sulfate Inhibited Immune Dysfunction Induced by LP-BM5 Leukemia Retrovirus Infection through Regulating Th1/Th2 Type Cytokine mRNA Expression and Oxidative Stress in Murine AIDS Model; Journal of The Korean Society of Food Science and Nutrition (Dec 2006)
2) Yuan A, Chen JJ, Yao PL, Yang PC. The role of interleukin-8 in cancer cells and microenvironment interaction. Front Biosci. 2005 Jan 1;10:853-65. Print 2005 Jan 1.
3) Angela Seaton, Paula Scullin, Pamela J. Maxwell, Catherine Wilson, Johanna Pettigrew, Rebecca Gallagher, Joe M. O'Sullivan, Patrick G. Johnston and David J. J. Waugh: Interleukin-8 signaling promotes androgen-independent proliferation of prostate cancer cells via induction of androgen receptor expression and activation; Carcinogenesis (2008) 29 (6): 1148-1156.
4) Lehrer S, Diamond EJ, Mamkine B, Stone NN, Stock RG.; Serum interleukin-8 is elevated in men with prostate cancer and bone metastases.; Technol Cancer Res Treat. 2004 Oct;3(5):411.
5) Robert H. Silverman, Carvell Nguyen, Christopher J. Weight & Eric A. Klein; The human retrovirus XMRV in prostate cancer and chronic fatigue syndrome; Nature Reviews Urology 7, 392-402 (July 2010)
6) V.C. Lombardi, K. S. Hagen, K. W. Hunter, J. W. Diamond, J. Smith-Gagen, W. Yang And J. A. Mikovits; Xenotropic Murine Leukemia Virus-related Virus-associated Chronic Fatigue Syndrome Reveals a Distinct Inflammatory Signature; in vivo 25: 307-314 (2011)
The following extracts from the paper clearly show that the argument rests on highly subjective interpretations. One could easily reverse the conclusions of the authors and the whole study would fall.
"(B) PCR and sequencing of PreXMRV‐1. The complete PreXMRV‐1 genome was cloned and sequenced from the indicated sources using primers that specifically amplify XMRV or PreXMRV‐1 but exclude known endogenous MLV sequences (Fig. S2). We amplified PreXMRV‐1 from the CWR‐R1 cell line, but not the 22Rv1 cell line, indicating the absence of PreXMRV‐1 from these cells. Partial PreXMRV‐1 (env divergent region) was also amplified from xenografts 2524 and 2274, showing that both XMRV and PreXMRV‐1 are present in these samples."
"We used the same XMRV-specific primer sets to amplify and sequence DNA from early passage xenografts (736, 777, 8L, 8R, 16R, and 18R; Fig. 2B); the results showed that XMRV env, but not gag sequences were present (sequencing coverage summarized in fig. S3), indicating that the early xenografts did not contain XMRV."
Given the serious implications that this paper will have for further research into this retrovirus, it is necessary to amend these flaws, so as to allow your readership the opportunity to judge the data in their usual manner.
Yours sincerely
Louise Gunn
CEO
IMEA
27 June 2011
Professor Malcolm Hooper’s further concerns about the PACE Trial article published in The Lancet
24th June 2011
Executive Summary
Scrutiny of the criteria used to determine which participants were “within the normal range” on the two primary outcomes in the PACE Trial -- physical function and fatigue -- reveals a manifest contradiction in the report published in The Lancet (PD White et al. Lancet 2011:377:823-836).
Ratings that would qualify a potential participant as sufficiently impaired to enter the trial were considered “within the normal range” when recorded on completion of the trial.
There is thus discordance between the designated entry criteria and the benchmarks of “the normal range” in assessing outcomes at the end of the Trial in respect of both physical function and fatigue.
It cannot be acceptable that the same recorded levels of physical function and fatigue should both qualify a participant as sufficiently disabled and symptomatic to enter the PACE Trial in the first place and be described, at the end of the trial, as “within the normal range”.
This situation has arisen as a result of numerous changes and re-calculations by the Principal Investigators (PIs) in the relevant benchmarks, changes in the PIs’ cited reference material as to what constitutes “the normal range”, and the PIs’ use of inappropriate comparison groups.
It should be noted that the analysis refers to outcomes “within the normal range”. This is not necessarily the same as “normal”. It is a statistical concept, defined as within one standard deviation of the mean. It may or may not equate well to what is typical in the population. In the case of physical functioning, the threshold of “the normal range” is far from what is “normal”. Due cognisance of this should have been taken in interpreting outcomes on physical function.
However, even with these factors militating in favour of positive reporting, only 30% of the CBT participants and 28% of the GET participants recorded outcomes “within the normal range” in respect of both physical functioning and fatigue on conclusion of the PACE Trial.
The Trial Protocol sets out two “primary efficacy measures”. These consist of specific parameters delineating what is to be considered “a positive outcome” on physical functioning and fatigue, respectively. In combination, these were to have been used to identify “overall improvers”, but this analysis has been dropped by the PIs.
Professor Hooper is of the view that the PACE Trial fails on a fundamental aspect of clinical research in that there is no attempt to apply the pre-determined primary efficacy measures to the outcome data, and furthermore the benchmarks used to judge suitability for recruitment and outcomes are patently contradictory.
Together with others who have expressed concerns, Professor Hooper continues to believe that the need for an independent statistical re-evaluation of the raw data is overwhelming as, without such an independent assessment, doubts over the veracity of the claims made by Professor White et al cannot be resolved.
Replying to Professor Hooper’s complaint, Professor White et al state: “The PACE trial paper…does not purport to be studying CFS/ME but CFS defined simply as a principal complaint of fatigue that is disabling, having lasted six months, with no alternative medical explanation (Oxford criteria)”. If The Lancet accepts this, Professor Hooper asks that it publish an immediate and unequivocal clarification about this key issue, since during the 8-year life of the PACE Trial, virtually all the documents refer to “CFS/ME” and the published results are being applied to people with the distinct nosological disorder myalgic encephalomyelitis (ME).
Such clarification would serve to protect people with ME from implicit or explicit pressure to engage in exercise programmes (continuance of welfare benefits as well as medical support and basic civility being contingent upon compliance). ME patients have consistently reported that even graded exercise results in deterioration that is often long-lasting and severe.
Introduction
On 28th March 2011 Professor Hooper submitted his detailed concerns in the document “REPORT: COMPLAINT TO THE RELEVANT EXECUTIVE EDITOR OF THE LANCET ABOUT THE PACE TRIAL ARTICLES PUBLISHED BY THE LANCET” (http://www.meactionuk.org.uk/COMPLAINT-to-Lancet-re-PACE.htm).
Professor Peter White, the lead author of the PACE Trial article, was invited by The Lancet’s senior editorial staff to respond to it, which he did in an undated letter sent to Richard Horton, editor-in-chief of The Lancet (http://www.meactionuk.org.uk/whitereply.htm), as a result of which the complaint was rejected in its entirety by The Lancet’s senior editorial staff.
On 28th May 2011 Professor Hooper therefore responded to the failure of Professor White to address the important issues raised (http://www.meactionuk.org.uk/Comments-on-PDW-letter-re-PACE.htm).
On 3rd June 2011, Zoe Mullan, senior editor at The Lancet, indicated to a correspondent unconnected with Professor Hooper that if he had further concerns, she would welcome his contacting her about them. Having been made aware of this, he agreed to do so.
A specific and major concern is the focus of this present document, which relates to the PIs’ PACE Trial entry criteria and criteria for assessing outcomes on physical function and fatigue. The result is an overlap between the benchmarks of “the normal range” on these measures as applied to PACE participants’ outcomes and the benchmarks (on these same measures) denoting impairment at the outset of the Trial. Furthermore, the PIs have failed to report on the pre-defined criteria delineating “a positive outcome” that are specified in the Trial Protocol.
Professor Hooper cannot comprehend how The Lancet editors can accept such non-science as objective and reliable evidence of the success of the PACE Trial and he fails to understand how senior Lancet editors could be “fully satisfied” by the PIs’ illogical conclusion that the same requirement for admission to the trial has been judged by them to denote attainment within “the normal range” at the end of the trial, a situation that requires correction or clarification as a matter of urgency.
He believes that, as a UK custodian of valid science in medicine, The Lancet failed to recognise the very serious flaws in the PACE study itself and in the published article reporting the supposedly successful outcome.
On 18th April 2011 in his broadcast about the PACE Trial on Australian ABC Radio National, Richard Horton was disparaging about criticisms of the article, asserting that the PACE Trial was a well-designed and well-executed study; he also said: “We will invite the critics to submit versions of their criticism for publication and we will try as best as we can to conduct a reasonable scientific debate about this paper. This will be a test I think of this particular section of the patient community to engage in a proper scientific discussion” (http://www.abc.net.au/rn/healthreport/stories/2011/3192571.htm).
Professor Hooper asks that The Lancet honour Richard Horton’s call and that, as part of that process, this present submission be afforded due scrutiny by The Lancet’s independent statisticians.
For the avoidance of doubt, relevant extracts from the PACE Trial protocol and the published article are here provided:
.........................................
PACE TRIAL PROTOCOL (extract)
“10.1 Primary outcome measures
10.1.1 Primary efficacy measures
Since we are interested in changes in both symptoms and disability we have chosen to designate both the symptoms of fatigue and physical function as primary outcomes. This is because it is possible that a specific treatment may relieve symptoms without reducing disability, or vice versa. Both these measures will be self-rated.
The 11 item Chalder Fatigue Questionnaire measures the severity of symptomatic fatigue, and has been the most frequently used measure of fatigue in most previous trials of these interventions. We will use the 0,0,1,1 item scores to allow a possible score of between 0 and 11. A positive outcome will be a 50 % reduction in fatigue score, or a score of 3 or less, this threshold having been previously shown to indicate normal fatigue.
The SF-36 physical function sub-scale measures physical function, and has often been used as a primary outcome measure in trials of CBT and GET. We will count a score of 75 (out of a maximum of 100) or more, or a 50 % increase from baseline in SF-36 sub-scale score as a positive outcome. A score of 70 is about one standard deviation below the mean score (about 85, depending on the study) for the UK adult population. Those participants who improve in both outcome measures will be regarded as overall improvers.
10.2 Secondary outcome measures
10.2.1 Secondary efficacy measures …..”
.........................................
LANCET ARTICLE REPORTING THE PACE TRIAL FINDINGS (extracts)
Study Design & Participants
“Other eligibility criteria consisted of a bimodal score of 6 of 11 or more on the Chalder fatigue questionnaire [ref. 15] and a score of 60 of 100 or less on the short form-36 physical function subscale. [ref. 16] 11 months after the trial began, this requirement was changed from a score of 60 to a score of 65 to increase recruitment”. (Professor White has now admitted in his letter to The Lancet that this “may affect generalisability”).
Outcomes
“The two participant-rated primary outcome measures were the Chalder fatigue questionnaire (Likert scoring 0,1, 2, 3; range 0–33; lowest score is least fatigue) [ref. 15] and the short form-36 physical function subscale (version 2; range 0–100; highest score is best function) [ref. 16]. Before outcome data were examined, we changed the original bimodal scoring of the Chalder fatigue questionnaire (range 0–11) to Likert scoring to more sensitively test our hypotheses of effectiveness”.
Statistical Analysis
“In another post-hoc analysis, we compared the proportions of participants who had scores of both primary outcomes within the normal range at 52 weeks. This range was defined as less than the mean plus 1 SD scores of adult attendees to UK general practice of 14.2 (+4.6) for fatigue (score of 18 or less) and equal to or above the mean minus 1 SD scores of the UK working age population of 84 (–24) for physical function (score of 60 or more) [refs. 32,33]”.
Results
“25 (16%) of 153 participants in the APT group were within normal ranges for both primary outcomes at 52 weeks, compared with 44 (30%) of 148 participants for CBT, 43 (28%) of 154 participants for GET, and 22 (15%) of 152 participants for SMC”.
......................................
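Taking the means and standard deviations quoted in the extract above at face value, the published “normal range” cut-offs can be reproduced with a few lines of arithmetic. This is a sketch for the reader's verification, not part of the trial analysis:

```python
import math

def fatigue_ceiling(mean, sd):
    # "Less than the mean plus 1 SD": Chalder Likert scores are whole
    # numbers, so "less than 14.2 + 4.6 = 18.8" means a score of 18 or less.
    return math.ceil(mean + sd) - 1

def function_floor(mean, sd):
    # "Equal to or above the mean minus 1 SD": 84 - 24 = 60 for the
    # SF-36 physical function subscale.
    return mean - sd

print(fatigue_ceiling(14.2, 4.6))  # 18
print(function_floor(84, 24))      # 60
```

Both results match the thresholds of 18 (fatigue) and 60 (physical function) stated in The Lancet article.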
Failure to Report on “Positive Outcomes”
The PACE Trial Protocol sets out the criteria to be used to delineate “a positive outcome”. These criteria apply to the scores achieved on the two primary outcomes, physical function and fatigue, respectively (see box above).
Analysis of these “primary efficacy measures” (there were no others) does not appear in the article published in The Lancet.
This omission may be viewed in the context of the prior reporting of disappointing results from the PACE Trial’s sibling, the MRC-funded FINE (Fatigue Intervention by Nurses Evaluation) Trial (AJ Wearden et al. BMJ 2010; 340; c1777). It is notable that the criteria specified in the PACE Trial Protocol to denote “a positive outcome” are identical to the criteria that were used to gauge outcomes in the FINE Trial, with the exception that a threshold of 70 (as opposed to 75) was used on physical functioning in the FINE Trial.
Given the close links between the PACE and FINE Trials, it is inconceivable that the PACE Trial Investigators would have been unaware that criteria differing little from their own pre-designated “positive outcome” measures in the PACE Trial had produced disappointing results when applied to the FINE data.
(The poor FINE Trial results may also have influenced the PACE Trial PIs’ decision to change the method approved in respect of assessing outcomes on fatigue as recorded via the Chalder Fatigue Questionnaire, thus departing from the Trial Protocol – see below).
No other measure of “a positive outcome” is presented in The Lancet article. Instead, the analysis focuses on inter-group differences in scores recorded in respect of physical function and fatigue. These are described as “primary outcome measures”. However, without having a (pre)specified parameter on the relevant variables as to what is to be deemed a “primary outcome measure”, this description is meaningless.
The Lancet article does, however, present a secondary analysis of outcomes in respect of these variables, assessed against “the normal range” (see box above). It is this analysis that contains an inherent contradiction ie. it was possible for participants to be deemed to have attained levels of physical function and fatigue “within the normal range” when they had actually deteriorated on these parameters over the course of the PACE Trial.
Assessing Physical Function
Physical function was assessed using the Physical Function subscale of the Short Form 36 Health Survey Questionnaire (usually abbreviated to SF-36), with higher scores indicating better function (McHorney CA et al; Med Care 1993:31:247-263). The raw scores span a 20-point range; for purposes of analysis, however, they are converted to a scale of 0-100, rising in increments of 5.
What is “Normal” Physical Function?
The situation whereby a person could deteriorate on this measure over the course of the PACE Trial yet still be deemed to have attained physical function “within the normal range” on completion of the trial arose partly as a consequence of the PIs’ various revisions of the relevant benchmarks for recruitment criteria and the assessment of outcomes.
The problem also resides in the standard practice of using the mean plus and minus one standard deviation (SD) from the mean to denote the “range of normal” on a variable. When data is “normally distributed” (in statistical terms) around a mean, the concept relates well to what is the norm. In respect of physical function in general, and SF-36 scores in particular, data is skewed. In these circumstances, there is a difference between what is normal in the sense of being most frequently found, and “the normal range”. This should have been flagged up by the PIs in interpreting the reported outcomes on physical function.
The two problems are delineated below.
The Threshold of the “Range of Normal” as a Benchmark on Physical Function
The paper referenced in respect of the threshold of “the normal range” that has been applied in the PACE Trial (Bowling A et al; J Publ Health Med 1999:21:255-270) reviews normative data from a range of sources and concludes: “These results confirm the highly skewed nature of the distributions (see Fig 1), which is a problematic feature of all health status scales.”
This “problematic” feature is that the data are highly skewed towards the high end of the scale. Indeed, scrutiny of the relevant histogram in Fig 1 of the Bowling et al. paper suggests that there are more people who score the maximum 100 on the SF-36 physical functioning scale than the combined total of people who score anything other than 100.
In such circumstances, applying a benchmark of the mean minus one standard deviation to general population data on the SF-36 physical function subscale to denote the threshold of “the normal range”, while technically correct, does not equate to what would be understood as “normal” in respect of physical functioning in the general population.
Because of the skewed nature of distributions on health status scales, the use of a “reference range” may be more appropriate for comparative purposes. This describes the variations of a measurement or value in healthy individuals and is a basis for a physician or other health professional to interpret a set of results for a particular patient. The standard definition of a reference range originates in what is most prevalent in a control group taken from the population.
The PIs’ Definitions and Re-definitions of “Normal” and “The Range of Normal” on Physical Function
In the PACE Trial documents obtained under the FOIA it is recorded that the PIs’ intention was to set the recruitment ceiling at a maximum of 70 and to define normal physical function as an SF-36 score of at least 75.
In his application dated 12th September 2002 to the West Midlands Multicentre Ethics Committee (MREC), Professor White described the derivation of this threshold of “normal” as follows: “We will count a score of 75 [out of a maximum of 100] or more as indicating normal function, this score being one standard deviation below the mean score [90] for the UK working age population”, citing Jenkinson C et al. Short form 36 (SF-36) Health Survey questionnaire: normative data from a large random sample of working age adults; BMJ:1993:306:1437-1440.
It should be noted that the comparative data related to the UK working age population.
A ceiling of 70 in respect of recruitment and a threshold of 75 to denote “normal function” on the SF-36 physical function subscale was accordingly presented in the PACE Trial Identifier. As the SF-36 Physical Function subscale proceeds in increments of 5 this meant that there was the narrowest of margins between the ceiling on physical function in respect of entry to PACE, and the threshold of “normal” on conclusion.
The proposed threshold for entry was discussed at the Trial Steering Committee held on 22nd April 2004 (at which Professor White was present) and those discussions are minuted as follows: “7. The outcome measures were discussed. It was noted that there may need to be an adjustment of the threshold needed for entry to ensure improvements were more than trivial (emphasis added). For instance a participant with a Chalder score of 4 would enter the trial and be judged improved with an outcome score of 3. The TSC (Trial Steering Committee) suggested one solution would be that the entry criteria for the Chalder scale score should be 6 or above, so that a 50% reduction would be consistent with an outcome score of 3. A similar adjustment should be made for the SF-36 physical function subscale” (emphasis added).
Consequently, when the PACE Trial began (the first participant having been randomised on 18th March 2005), the ceiling in respect of SF-36 at entry was a score of 60.
In the Trial Protocol, an SF-36 threshold of 75 remains in respect of assessment of outcomes and plays a part in the identification of “a positive outcome”: “We will count a score of 75 (out of a maximum of 100) or more, or a 50% increase from baseline in SF-36 subscale score as a positive outcome”.
This applies in both the full 226 page final version (unpublished by the PIs but obtained under the FOIA and available at http://www.meactionuk.org.uk/FULL-Protocol-SEARCHABLE-version.pdf) and the shortened 20-page version of the Protocol that was published in 2007 (www.biomedcentral.com/1471-2377/7/6 -- which was not peer-reviewed by the journal because it had already received ethical and funding approval by the time it was submitted, the Editor commenting: “We strongly advise readers to contact the authors or compare with any published result(s) articles to ensure that no deviations from the protocol occurred during the study”).
Curiously, although the SF-36 score threshold remains at 75, the threshold of “normal” cited by the PIs in the PACE Trial protocol has been lowered to 70: “A score of 70 is about one standard deviation below the mean score (about 85, depending on the study) for the UK adult population”. The PIs cite two references in support (Jenkinson C et al; BMJ 1993:306:1437-1440 – ie. the same reference as in the application to the MREC -- and Bowling A et al; J Publ Health Med 1999:21:255-270). It is notable that the normative group identified now relates to the adult population as a whole (ie. it includes elderly people, whereas the normative group previously cited was the working age population).
Because of continued problems attaining recruitment targets, on 9th February 2006 Professor White wrote to Mrs Anne McCullough, Administrator at the West Midlands MREC, requesting a substantial amendment to the trial's entry criteria as he wished to raise the SF-36 threshold required for inclusion criteria for the trial from 60 to 65. He stated: "Increasing the threshold [from 60 to 65] will improve generalisation…. The TMG (Trial Management Group) and TSC (Trial Steering Committee) believe this will also make a significant impact on recruitment”.
(It is notable that in this request for a substantive amendment dated 9th February 2006, Professor White assured the MREC that "Increasing the threshold [from 60 to 65] will improve generalisation” but in his response to Professor Hooper’s complaint on this point, Professor White admitted that: “Such a change may affect the generalisability…of the results”. The context of this statement is such that a stricture rather than an improvement is implicit. In effect, this change meant that, in the midst of the recruitment period, the pool of potential candidates was increased by relaxing the entry criteria to allow people with better physical capacity to take part).
Furthermore, it narrowed the gap between how physically impaired a person had to be in order to be recruited, and how well they had to function to be deemed to have a positive outcome, leading to the following approach to the MREC from Professor White: “This would mean the entry criterion on this measure was only 5 points less than the categorical positive outcome of 70 on this scale. We therefore propose an increase of the categorical positive outcome from 70 to 75, reasserting a ten point score gap between entry criterion and positive outcome” (emphasis added).
Given that the threshold of positive outcome is stated as 75 in the Trial Protocol, this is baffling. (It was the threshold of normal function in the population that is cited as 70.) Unless there is an as-yet unidentified document reducing the SF-36 threshold denoting a positive outcome from 75 to 70, it would appear that Professor White was confused as to the existing benchmark.
In any event, the gap proposed was ten points – representing a minimum increment of two stages on the SF-36 scale.
Professor White further assured the MREC that this change would bring the PACE Trial into line with its “sister study”, the FINE Trial, and that it would not affect the analysis of the trial data: “The other advantage of changing to 75 is that it would bring the PACE trial into line with the FINE trial, an MRC funded trial for CFS/ME and the sister study to PACE. This small change is unlikely to influence power calculations or analysis”.
The presentation of trial data in The Lancet demonstrates that Professor White did not observe the assurances he provided to the ethics committee.
In The Lancet article reporting the results of the PACE Trial, the primary efficacy measures as set out in the trial protocol have been abandoned altogether. There is no reference to any measure of “a positive outcome”.
However, a “post hoc” analysis is presented, which entails comparing PACE participants’ outcomes against a threshold of “the normal range” in respect of physical function. Defined as the mean minus one standard deviation in respect of a normative population and having been specified as 75 in the application to the MREC and reduced to 70 in the protocol, in the analysis published in The Lancet the threshold of “the normal range” is further reduced to an SF-36 score of 60.
This was based on “the mean minus 1 SD scores of the UK working age population of 84 (–24) for physical function”.
One reference is cited in respect of this threshold, this being the Bowling et al paper that was one of two cited at the PACE Trial Protocol stage. That paper reviews normative data from a range of sources, none of which appears to provide the figures cited (see Table 4: “Comparison of SF-36 dimension norms in Britain” in the Bowling et al paper).
Following Professor Hooper’s complaint, Professor White responded in his letter to The Lancet: “We did, however, make a descriptive error in referring to the sample we referred to in the paper as a ‘UK working age population’, whereas it should have read ‘English adult population’ ”. Such a comparator is inappropriate because, by definition, the English adult population includes elderly people. The appropriate comparison would be with the SF-36 physical function scores for age-matched healthy people. However this would have raised the threshold of the normal range to a higher level, thus making it more difficult – if not impossible – for the PIs to claim even moderate success for the PACE Trial.
Furthermore, the data analysis published in The Lancet is at odds with one of the reasons given by Professor White to the MREC for previously setting the “categorical positive outcome” at 75, namely to put PACE into line with the FINE Trial.
It is notable that, when the FINE Trial results were reported (in the spring of 2010), only 17 of the 81 participants assessed had met the relevant parameter in respect of physical function at the primary outcome point -- a score of at least 75 or an improvement of 50% from baseline.
Remarkably, in view of the abstruse complexity of much of the analysis presented in The Lancet article, the PACE Trial PIs have stated: “Changes to the original published protocol were made to improve either recruitment or interpretability” (The Lancet: doi:10.1016/S0140-6736(11)60651-X).
In summary, since it was possible to score 65 on the SF-36 and still be recruited to the PACE Trial, setting the threshold of “the normal range” at 60 on completion meant that there was a negative five-point score gap. A participant could therefore deteriorate during the course of the trial and leave it more disabled than before treatment, yet still fall within the PIs’ new definition of “normal” (ie. attainment of “normality” was set lower than the entry criteria, which by any standards is illogical).
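The negative gap can be illustrated with a hypothetical participant; the scores below are illustrative only, not drawn from the trial data:

```python
ENTRY_CEILING = 65  # SF-36 score of 65 or less qualified for entry (after Feb 2006)
NORMAL_FLOOR = 60   # post-hoc "normal range": SF-36 score of 60 or more at 52 weeks

# A hypothetical participant who deteriorates by one 5-point increment:
entry_score, final_score = 65, 60

assert entry_score <= ENTRY_CEILING  # impaired enough to enter the trial
assert final_score >= NORMAL_FLOOR   # yet "within the normal range" at the end
assert final_score < entry_score     # despite having deteriorated
print("deteriorated, yet 'within the normal range'")
```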
Assessing “Normal” Fatigue
In the PACE Trial, fatigue was assessed using the Chalder Fatigue Questionnaire or CFQ (Chalder T, Wessely S et al; J Psychosom Res 1993:37:147-153).
The Chalder Fatigue Questionnaire comprises eleven questions. Respondents are asked to indicate their situation in respect of each of these on a four-point scale: “less than usual”; “no more than usual”; “more than usual”; “much more than usual”.
The fatigue score is the sum total of the scores obtained in respect of the eleven items in the Chalder Fatigue Questionnaire. The higher the score, the greater the impact of fatigue. However, there are two methods of scoring responses.
Change by the PIs in Method of Scoring Outcomes
One method of producing a fatigue score involves scoring these respective responses on a scale from 0 to 3 and summing the total. This method (known as Likert scoring, which has a possible range of 0 - 33) was used to assess outcomes on fatigue.
However, for the purposes of screening for entry to PACE, a different method of scoring the responses was adopted. Known as bimodal analysis, this entails placing each response into one of two categories: any item rated “less than usual” or “no more than usual” is allocated a score of 0; any item rated “more than usual” or “much more than usual” is allocated a score of 1. The possible range is therefore 0 -11.
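The two scoring methods can be sketched side by side. This minimal example, using the response wording from the questionnaire description above, shows how one set of answers yields both a bimodal and a Likert total:

```python
# The four response options on each of the 11 Chalder items, in order:
OPTIONS = ["less than usual", "no more than usual",
           "more than usual", "much more than usual"]

def likert_score(responses):
    # Likert scoring: 0, 1, 2, 3 per item; range 0-33 over 11 items.
    return sum(OPTIONS.index(r) for r in responses)

def bimodal_score(responses):
    # Bimodal scoring: 0, 0, 1, 1 per item; range 0-11 over 11 items.
    return sum(1 for r in responses if OPTIONS.index(r) >= 2)

# Example: six items rated "more than usual", five "no more than usual".
answers = ["more than usual"] * 6 + ["no more than usual"] * 5
print(bimodal_score(answers), likert_score(answers))  # 6 17
```

Note that this example participant scores 6 bimodally, exactly the fatigue threshold eventually required for entry to the trial.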
It is notable that the original proposal – as set out in the MREC application, the Trial Identifier, and the Trial Protocol - was to analyse results bimodally. This was to feed into one of two “primary efficacy measures”: “A positive outcome will be a 50 % reduction in fatigue score, or a score of 3 or less, this threshold having been previously shown to indicate normal fatigue” (Trial Protocol, citing Chalder T, Berelowitz G, Hirsch S, Pawlikowska T, Wallace P, Wessely S and Wright D: Development of a fatigue scale. J Psychosom Res 1993, 37:147-153.)
The rationale provided for the change to Likert scoring in the consideration of outcomes in The Lancet article was: “Before outcome data were examined, we changed the original bimodal scoring of the Chalder fatigue questionnaire (range 0-11) to Likert scoring to more sensitively test our hypotheses of effectiveness”.
However, one consequence of adopting a Likert approach to processing responses is that it becomes easier to demonstrate differences between the groups when such differences are relatively small.
This had been demonstrated in respect of the fatigue outcome data in the FINE Trial: analysed using bimodal scoring as set out in the FINE Trial protocol, there was no statistically significant improvement in fatigue between the FINE interventions and the “treatment as usual” control group at the primary outcome point (Wearden AJ et al; BMJ 2010:340:c1777).
However, following publication of those results, the FINE Trial Investigator (Dr Alison Wearden PhD, an observer on the PACE Trial Steering Committee) reappraised the FINE Trial data according to Likert scoring and produced a “clinically modest, but statistically significant effect…at both outcome points” (http://www.bmj.com/cgi/eletters/340/apr22_3/c1777#236235), a fact of which the PACE Trial PIs would have been well aware.
What is “Normal” Fatigue? What is “Abnormal” Fatigue?
As with the physical function scores, there is an overlap between the level of fatigue deemed sufficiently significant to qualify a person to participate in the PACE Trial and the level of fatigue deemed to denote a positive outcome.
This means that identical responses on the Chalder Fatigue Questionnaire could qualify a person as sufficiently “fatigued” for entry to the PACE trial and later allow them to be deemed to have attained “normality” in terms of their level of fatigue at the outcomes assessment stage.
This absurdity is somewhat opaque owing to the use of a different method of processing responses to the Chalder Fatigue Questionnaire at entry stage (bimodal) and outcomes assessment (Likert) stage (see above). Nonetheless it is possible to demonstrate a manifest contradiction and flaws in the definitions used.
Qualifying threshold re: Fatigue for Entry to the PACE Trial
As with physical function, the criterion that was used to recruit participants to PACE in respect of fatigue differed from what was originally specified.
In his application dated 12th September 2002 to the MREC, Professor White stated: “We will operationalise CFS in terms of fatigue severity … as follows: a Chalder fatigue score of four or more.” He also referred to: “a score of 4 having been previously shown to indicate abnormal fatigue.”
The PACE Trial Identifier repeated the requirement for a fatigue score of 4 or more to indicate caseness at entry.
However, following discussion at the Trial Steering Committee (on 22nd April 2004) this was revised upward to 6 in order to allow for a more appropriate gap to appear between the required level of fatigue on entry and the threshold of an outcome denoting improvement (at that point, a score of 3 or less, or a 50% improvement from baseline -- however, the consideration of this “primary efficacy measure” was later dropped). PACE participants were recruited on this basis.
Ceiling of “Normal” Fatigue on Completion of the PACE Trial
The commitments given before the PACE Trial interventions began were consistently for a ceiling score of 3 on the Chalder Fatigue Questionnaire, rated bimodally, to represent “normal” fatigue on completion of the trial. The rationale for treating bimodally rated scores of 4 and above as representing abnormal levels of fatigue is repeatedly cited as Chalder T et al. J Psychosom Res 1993:37:147-153. That paper is the work of the lead author of the Chalder Fatigue Questionnaire, PACE Trial Principal Investigator Professor Trudie Chalder; a co-author is Professor Simon Wessely, Director of the PACE Trial Clinical Unit and member of the Trial Management Group. For example:
In his application to the MREC, under the heading “What is the primary end point?” Professor White stated: “We will use the 0,0,1,1 item scores to allow a categorical threshold measure of "abnormal" fatigue with a score of 4 having been previously shown to indicate abnormal fatigue.”
In the PACE Trial Identifier, under the heading “3.9 What are the proposed outcome measures? Primary efficacy measures” Professor White stated: “We will use the 0,0,1,1 item scores to allow a categorical threshold measure of “abnormal” fatigue with a score of 4 having been previously shown to indicate abnormal fatigue [ref 23]” (Chalder T et al. J Psychosom Res 1993; 37: 147-153.)
In the PACE Trial protocol, under the heading “10.1 Primary outcome measures; 10.1.1 Primary efficacy measures” Professor White stated: “A positive outcome will be a 50 % reduction in fatigue score, or a score of 3 or less, this threshold having been previously shown to indicate normal fatigue” (Chalder T et al. J Psychosom Res 1993, 37:147-153).
However, in The Lancet article reporting the results of the PACE Trial, when “normal” levels of fatigue were judged on completion of the trial, the analysis conducted related to: “the proportions of participants who had scores of both primary outcomes within the normal range at 52 weeks. This range was defined as less than the mean plus 1 SD scores of adult attendees to UK general practice of 14.2 (+4.6) for fatigue (score of 18 or less) … ”: (32: Cella M, Chalder T et al: J Psychosom Res 2010:69:17-22).
A Likert score of 18 can translate to a bimodal score of between 4 and 9, depending on the specific responses that combine to produce the Likert score. According to the PIs, a bimodal score of 4 or more indicates abnormal fatigue (see above). Hence a Likert score of 18, which always corresponds to a bimodal score of at least 4, always represents a state of abnormal fatigue.
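This translation can be verified by brute force. The sketch below (illustrative Python, not part of the trial documentation) enumerates every possible combination of 11 Likert item responses totalling 18 and records the bimodal score each combination produces:

```python
from itertools import combinations_with_replacement

# The 11-item Chalder Fatigue Questionnaire can be scored two ways:
#   Likert:  each item scored 0, 1, 2, 3   (total 0-33)
#   bimodal: each item scored 0, 0, 1, 1   (i.e. 1 where the Likert
#            item score is 2 or 3, else 0; total 0-11)
# Enumerate every multiset of 11 item responses whose Likert total is
# 18 (the ceiling of "the normal range" used in The Lancet article)
# and collect the bimodal totals they produce.

bimodal_totals = set()
for items in combinations_with_replacement(range(4), 11):
    if sum(items) == 18:
        bimodal_totals.add(sum(1 for v in items if v >= 2))

print(sorted(bimodal_totals))  # -> [4, 5, 6, 7, 8, 9]
```

Every combination yields a bimodal score of at least 4 (abnormal fatigue on the PIs' own criterion), and bimodal scores of 6 to 9, at or above the revised trial entry threshold, are also attainable.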
In order to allow a sufficient gap between the positive outcome criterion then proposed (a bimodal score of 3 or less) and the level of fatigue required at entry, the entry threshold for the PACE Trial had been set at a bimodal score of 6. However, it is possible to record responses producing a Likert score of 18 (ie. the ceiling of “the range of normal” fatigue on conclusion of the PACE Trial) which translates to bimodal scores of 6, 7, 8 and 9.
Either the threshold of “normal” denoting a positive outcome should have been lower than the measure used in the analysis in The Lancet (Likert 18), and/or the threshold of caseness at recruitment (bimodal score of 6) should have been higher.
The net result of the analysis conducted is that identical responses could both qualify a person as sufficiently “fatigued” for entry to the PACE trial and at completion of the trial allow them to be deemed to be within “the range of normal” in terms of their level of fatigue.
What’s more, as with physical function, it would be possible for a person to record poorer responses in respect of fatigue on completion of the trial than at the outset, yet still be deemed by the PIs to be within “the range of normal” on this subjective primary outcome.
Further Issues Regarding the Assessment of Fatigue in the PACE Trial
Several further points are relevant in this regard.
First, the cited reference for the benchmark chosen to assess PACE outcomes, co-authored by the PACE Trial Principal Investigator Trudie Chalder, also provides bimodal scores for the same population: “community sample: mean fatigue 3.27 (S.D. 3.21)”. Applying the PIs’ own mean-plus-one-SD method (3.27 + 3.21 = 6.48) places the ceiling at which a person can experience fatigue and still be considered within the normal range at a bimodal score of 6.
This is inconsistent with the PACE Trial literature, which repeatedly refers to “a score of 4 having been previously shown to indicate abnormal fatigue” (see above), citing a paper lead-authored by Trudie Chalder and co-authored by Director of the PACE Trial Clinical Unit and member of the Trial Management Group Professor Simon Wessely.
Secondly, the Lancet article states that the benchmark employed was derived from fatigue scores from “adult attendees to UK general practice”. That study was part of a long-term longitudinal study of a cohort but, notably, “only completed data from those who went to see their general practitioner the following year…. were used in this study” (emphasis added). The Chalder Fatigue Questionnaires therefore related to the year prior to the selected cohort becoming “attendees to UK general practice”.
This is a curious and convoluted selection of a comparison population from which to derive normative data. Moreover the nature of this comparison group is by no means obvious from the PIs’ description (“adult attendees to general practice”) that is set out in The Lancet article on the PACE Trial results. Again, Trudie Chalder was an author of both papers.
Finally, it is possible that fatigue - unlike physical function - is “normally distributed” in the general population, as asserted by (then Dr) Simon Wessely. Referring to the findings of a study based on data from over 15,000 people (Pawlikowska T, Chalder T, Wessely S et al. BMJ 1994:308:743-746), he stated: “18% had experienced substantial fatigue for six months or longer. Fatigue, however, was ‘normally’ distributed…” (Epidemiology of CFS: in “A Research Portfolio on Chronic Fatigue”; edited by Robin Fox for The Linbury Trust; RSM Press 1998).
If fatigue is normally distributed, then the method of equating “the range of normal” (a statistical concept) with “normality” (what is widespread in the population), as in reporting the PACE Trial results, is acceptable. However, this would differentiate attempts to measure fatigue (ie. by using the Chalder Fatigue Questionnaire) from “all health status scales” in respect of which distributions are “highly skewed” (Bowling A et al. J Publ Health Med 1999:21:255-270) as referenced in the PACE Trial documentation.
The implications of this are profound, suggesting as it does that fatigue has a uniquely different relationship to health status.
It would, however, be in keeping with Wessely’s own findings (as published in his 1998 article on the Epidemiology of CFS referenced above) that “the world could not be divided into those with chronic fatigue (the ill group) and those without (the well)” (emphasis added).
It is worth reiterating that in response to Professor Hooper’s complaint to The Lancet, Peter White, writing on behalf of all contributors to The Lancet article, stated: “The PACE trial paper …. does not purport to be studying CFS/ME but CFS defined simply as a principal complaint of fatigue that is disabling, having lasted six months, with no alternative medical explanation (Oxford criteria)”.
Why would The Lancet fast-track an article concerning a spurious disorder defined “simply as a principal complaint of fatigue”?
Against this background, what was the purpose of the PACE Trial, given that the Director of the PACE Clinical Trial Unit, Professor Simon Wessely, is on record -- long before the PACE Trial began -- stating his empirically-based conclusion that the world cannot be divided into “the ill” and “the well” on the basis of the degree of fatigue experienced?
Conclusion
Reporting on the results of the PACE Trial, The Lancet article states: “25 (16%) of 153 participants in the APT group were within normal ranges for both primary outcomes at 52 weeks, compared with 44 (30%) of 148 participants for CBT, 43 (28%) of 154 participants for GET, and 22 (15%) of 152 participants for SMC.”
In the light of the contradictions and other considerations outlined above, it would appear that these figures, modest as they are, inflate the proportions who may be deemed to be within “the normal range” on conclusion of the PACE Trial (but being within “the normal range” does not necessarily equate to what would be considered “normal” in the typical sense of the word).
“The normal range” is a statistical term; “normality” is the usual/regular/common/typical value of a variable in respect of an appropriate control population. Where a measure is “normally distributed” in the general population, the method chosen to identify the “normal range” – ie. the mean plus or minus one standard deviation from the mean – equates well to what is “normal”. Where the distribution is skewed, as it is in respect of physical function, then the application of this formula fails to deliver a meaningful threshold in terms of what is “normal” in the population.
Furthermore, there were numerous changes to the chosen thresholds and cut off points, both in terms of entry to the PACE Trial and in respect of the assessment of outcomes.
Manipulation of the benchmarks used to recruit to the PACE Trial and to judge whether or not participants were “within the normal range” at its conclusion has produced an absurd situation whereby the same requirement for admission to the trial is deemed by the PIs to denote success at the end of the trial. With regard to these issues:
• the PIs’ chosen thresholds of the “normal range” on the two “primary outcomes” are contrived, unrepresentative, and unduly low in respect of physical function and high in respect of fatigue
• the nature of the comparison group in respect of physical function is misrepresented in the article published in The Lancet, which refers to a “working age population”. The threshold of the range of normal is now said to have been derived from figures relating to the “adult population as a whole” ie. including elderly people. This affords a lower threshold of the “normal range”, thus boosting the proportion of PACE participants who could be deemed to have attained the benchmark level of physical functioning
• the reference cited in respect of the chosen threshold of the range of normal physical functioning does not appear to provide the figures cited by the PIs (ie. Bowling A et al. J Publ Health Med 1999:21:255-270)
• the benchmark chosen in respect of “fatigue” is at odds with the threshold of “abnormal” fatigue as “demonstrated” in previously published work by the PIs, as cited in the Trial Protocol.
These factors make the PACE Trial outcomes appear more favourable than is warranted; this in turn misrepresents the claimed efficacy of the CBT and GET interventions.
At the same time, the two “primary outcome measures” that were specified to delineate “a positive outcome” are not reported. No alternative “primary efficacy measures” are proposed, nor is there any reference to parameters of “a positive outcome” in The Lancet article.
The analysis given greatest prominence simply compares mean scores between the various intervention and control groups on physical function and fatigue and, having identified some statistically significant differences between these, concludes that CBT and GET “moderately improve outcomes”.
On behalf of all of the contributors to the PACE Trial article published in The Lancet, Peter White has agreed with something that people with myalgic encephalomyelitis have been pointing out, ie. the article does not relate to people with ME but to “Oxford”-defined chronic fatigue syndrome: “a principal complaint of fatigue that is disabling, having lasted six months, with no alternative medical explanation.”
Consequently, there should be immediate, high profile, unequivocal clarification specifying to which patients the PACE Trial findings can legitimately be applied.
The PACE Trial Protocol states that the main aim of the trial was to “provide high quality evidence to inform choices made by patients, patient organisations, health services and health professionals about the relative benefits, cost-effectiveness, and cost-utility, as well as adverse effects, of the most widely advocated treatments for CFS/ME”.
The problematic analysis and presentation of data means that the PACE Trial has failed to provide “high quality evidence”, which is an unacceptable outcome for an eight-year project involving 641 participants that cost £5 million to execute.
Patients, clinicians and tax-payers have a right to expect higher scientific exactitude from The Lancet, and the PIs have an ethical and fiscal duty to allow an independent re-evaluation of the data.
Professor Malcolm Hooper’s further concerns about the PACE Trial article published in The Lancet
24th June 2011
Executive Summary
Scrutiny of the criteria used to determine which participants were “within the normal range” on the two primary outcomes in the PACE Trial -- physical function and fatigue -- reveals a manifest contradiction in the report published in The Lancet (PD White et al. Lancet 2011:377:823-836).
Ratings that would qualify a potential participant as sufficiently impaired to enter the trial were considered “within the normal range” when recorded on completion of the trial.
There is thus discordance between the designated entry criteria and the benchmarks of “the normal range” in assessing outcomes at the end of the Trial in respect of both physical function and fatigue.
It cannot be acceptable to describe a PACE Trial participant at the end of the trial as having attained levels of physical function and fatigue “within the normal range” and to consider the same participant sufficiently disabled and symptomatic, as judged by the same recorded levels of physical function and fatigue, to have qualified for entry into the PACE Trial in the first place.
This situation has arisen as a result of numerous changes and re-calculations by the Principal Investigators (PIs) in the relevant benchmarks, changes in the PIs’ cited reference material as to what constitutes “the normal range”, and the PIs’ use of inappropriate comparison groups.
It should be noted that the analysis refers to outcomes “within the normal range”, which is not necessarily the same as “normal”. “The normal range” is a statistical concept, defined as the mean plus or minus one standard deviation from the mean, and it may or may not equate well to what is typical in the population. In the case of physical functioning, the threshold of “the normal range” is far from what is “normal”. Due cognisance of this should have been taken in interpreting outcomes on physical function.
However, even with these factors militating in favour of positive reporting, only 30% of the CBT participants and 28% of the GET participants recorded outcomes “within the normal range” in respect of physical functioning and fatigue on conclusion of the PACE Trial.
The Trial Protocol sets out two “primary efficacy measures”. These consist of specific parameters delineating what is to be considered “a positive outcome” on physical functioning and fatigue, respectively. In combination, these were to have been used to identify “overall improvers”, but this analysis has been dropped by the PIs.
Professor Hooper is of the view that the PACE Trial fails on a fundamental aspect of clinical research in that there is no attempt to apply the pre-determined primary efficacy measures to the outcome data, and furthermore the benchmarks used to judge suitability for recruitment and outcomes are patently contradictory.
Together with others who have expressed concerns, Professor Hooper continues to believe that the need for an independent statistical re-evaluation of the raw data is overwhelming as, without such an independent assessment, doubts over the veracity of the claims made by Professor White et al cannot be resolved.
Replying to Professor Hooper’s complaint, Professor White et al state: “The PACE trial paper…does not purport to be studying CFS/ME but CFS defined simply as a principal complaint of fatigue that is disabling, having lasted six months, with no alternative medical explanation (Oxford criteria)”. If The Lancet accepts this, Professor Hooper asks that it publish an immediate and unequivocal clarification about this key issue, since during the 8-year life of the PACE Trial, virtually all the documents refer to “CFS/ME” and the published results are being applied to people with the distinct nosological disorder myalgic encephalomyelitis (ME).
Such clarification would serve to protect people with ME from implicit or explicit pressure to engage in exercise programmes (continuance of welfare benefits as well as medical support and basic civility being contingent upon compliance). ME patients have consistently reported that even graded exercise results in deterioration that is often long-lasting and severe.
Introduction
On 28th March 2011 Professor Hooper submitted his detailed concerns in the document “REPORT: COMPLAINT TO THE RELEVANT EXECUTIVE EDITOR OF THE LANCET ABOUT THE PACE TRIAL ARTICLES PUBLISHED BY THE LANCET” (http://www.meactionuk.org.uk/COMPLAINT-to-Lancet-re-PACE.htm).
Professor Peter White, the lead author of the PACE Trial article, was invited by The Lancet’s senior editorial staff to respond to it, which he did in an undated letter sent to Richard Horton, editor-in-chief of The Lancet (http://www.meactionuk.org.uk/whitereply.htm), as a result of which the complaint was rejected in its entirety by The Lancet’s senior editorial staff.
On 28th May 2011 Professor Hooper therefore responded to the failure of Professor White to address the important issues raised (http://www.meactionuk.org.uk/Comments-on-PDW-letter-re-PACE.htm).
On 3rd June 2011, Zoe Mullan, senior editor at The Lancet, indicated to a correspondent unconnected with Professor Hooper that if he had further concerns, she would welcome his contacting her about them. Having been made aware of this, he agreed to do so.
A specific and major concern is the focus of this present document, which relates to the PIs’ PACE Trial entry criteria and criteria for assessing outcomes on physical function and fatigue. The result is an overlap between the benchmarks of “the normal range” on these measures as applied to PACE participants’ outcomes and the benchmarks (on these same measures) denoting impairment at the outset of the Trial. Furthermore, the PIs have failed to report on the pre-defined criteria delineating “a positive outcome” that are specified in the Trial Protocol.
Professor Hooper cannot comprehend how The Lancet editors can accept such non-science as objective and reliable evidence of the success of the PACE Trial and he fails to understand how senior Lancet editors could be “fully satisfied” by the PIs’ illogical conclusion that the same requirement for admission to the trial has been judged by them to denote attainment within “the normal range” at the end of the trial, a situation that requires correction or clarification as a matter of urgency.
He believes that, as a UK custodian of valid science in medicine, The Lancet failed to recognise the very serious flaws in the PACE study itself and in the published article reporting the supposedly successful outcome.
On 18th April 2011 in his broadcast about the PACE Trial on Australian ABC Radio National, Richard Horton was disparaging about criticisms of the article, asserting that the PACE Trial was a well-designed and well-executed study; he also said: “We will invite the critics to submit versions of their criticism for publication and we will try as best as we can to conduct a reasonable scientific debate about this paper. This will be a test I think of this particular section of the patient community to engage in a proper scientific discussion” (http://www.abc.net.au/rn/healthreport/stories/2011/3192571.htm).
Professor Hooper asks that The Lancet honour Richard Horton’s call and that, as part of that process, this present submission be afforded due scrutiny by The Lancet’s independent statisticians.
For the avoidance of doubt, relevant extracts from the PACE Trial protocol and the published article are here provided:
.........................................
PACE TRIAL PROTOCOL (extract)
“10.1 Primary outcome measures
10.1.1 Primary efficacy measures
Since we are interested in changes in both symptoms and disability we have chosen to designate both the symptoms of fatigue and physical function as primary outcomes. This is because it is possible that a specific treatment may relieve symptoms without reducing disability, or vice versa. Both these measures will be self-rated.
The 11 item Chalder Fatigue Questionnaire measures the severity of symptomatic fatigue, and has been the most frequently used measure of fatigue in most previous trials of these interventions. We will use the 0,0,1,1 item scores to allow a possible score of between 0 and 11. A positive outcome will be a 50 % reduction in fatigue score, or a score of 3 or less, this threshold having been previously shown to indicate normal fatigue.
The SF-36 physical function sub-scale measures physical function, and has often been used as a primary outcome measure in trials of CBT and GET. We will count a score of 75 (out of a maximum of 100) or more, or a 50 % increase from baseline in SF-36 sub-scale score as a positive outcome. A score of 70 is about one standard deviation below the mean score (about 85, depending on the study) for the UK adult population. Those participants who improve in both outcome measures will be regarded as overall improvers.
10.2 Secondary outcome measures
10.2.1 Secondary efficacy measures …..”
.........................................
LANCET ARTICLE REPORTING THE PACE TRIAL FINDINGS (extracts)
Study Design & Participants
“Other eligibility criteria consisted of a bimodal score of 6 of 11 or more on the Chalder fatigue questionnaire [ref. 15] and a score of 60 of 100 or less on the short form-36 physical function subscale. [ref. 16] 11 months after the trial began, this requirement was changed from a score of 60 to a score of 65 to increase recruitment”. (Professor White has now admitted in his letter to The Lancet that this “may affect generalisability”).
Outcomes
“The two participant-rated primary outcome measures were the Chalder fatigue questionnaire (Likert scoring 0,1, 2, 3; range 0–33; lowest score is least fatigue) [ref. 15] and the short form-36 physical function subscale (version 2; range 0–100; highest score is best function) [ref. 16]. Before outcome data were examined, we changed the original bimodal scoring of the Chalder fatigue questionnaire (range 0–11) to Likert scoring to more sensitively test our hypotheses of effectiveness”.
Statistical Analysis
“In another post-hoc analysis, we compared the proportions of participants who had scores of both primary outcomes within the normal range at 52 weeks. This range was defined as less than the mean plus 1 SD scores of adult attendees to UK general practice of 14.2 (+4.6) for fatigue (score of 18 or less) and equal to or above the mean minus 1 SD scores of the UK working age population of 84 (–24) for physical function (score of 60 or more) [refs. 32,33]”.
Results
25 (16%) of 153 participants in the APT group were within normal ranges for both primary outcomes at 52 weeks, compared with 44 (30%) of 148 participants for CBT, 43 (28%) of 154 participants for GET, and 22 (15%) of 152 participants for SMC”.
......................................
Failure to Report on “Positive Outcomes”
The PACE Trial Protocol sets out the criteria to be used to delineate “a positive outcome”. These criteria apply to the scores achieved on the two primary outcomes, physical function and fatigue, respectively (see box above).
Analysis of these “primary efficacy measures” (there were no others) does not appear in the article published in The Lancet.
This omission may be viewed in the context of the prior reporting of disappointing results from the PACE Trial’s sibling, the MRC-funded FINE (Fatigue Intervention by Nurses Evaluation) Trial (AJ Wearden et al. BMJ 2010; 340; c1777). It is notable that the criteria specified in the PACE Trial Protocol to denote “a positive outcome” are identical to the criteria that were used to gauge outcomes in the FINE Trial, with the exception that a threshold of 70 (as opposed to 75) was used on physical functioning in the FINE Trial.
Given the close links between the PACE and FINE Trials, it is inconceivable that the PACE Trial Investigators would have been unaware that criteria differing little from their own pre-designated “positive outcome” measures in the PACE Trial had produced disappointing results when applied to the FINE data.
(The poor FINE Trial results may also have influenced the PACE Trial PIs’ decision to change the method approved in respect of assessing outcomes on fatigue as recorded via the Chalder Fatigue Questionnaire, thus departing from the Trial Protocol – see below).
No other measure of “a positive outcome” is presented in The Lancet article. Instead, the analysis focuses on inter-group differences in scores recorded in respect of physical function and fatigue. These are described as “primary outcome measures”. However, without having a (pre)specified parameter on the relevant variables as to what is to be deemed a “primary outcome measure”, this description is meaningless.
The Lancet article does, however, present a secondary analysis of outcomes in respect of these variables, assessed against “the normal range” (see box above). It is this analysis that contains an inherent contradiction ie. it was possible for participants to be deemed to have attained levels of physical function and fatigue “within the normal range” when they had actually deteriorated on these parameters over the course of the PACE Trial.
Assessing Physical Function
Physical function was assessed using the Physical Function subscale of the Short Form 36 Health Survey Questionnaire (usually abbreviated to SF-36), with higher scores indicating better function (McHorney CA et al; Med Care 1993:31:247-263). Raw scores span a 20-point range; for the purposes of analysis this is converted to a scale of 0-100, rising in increments of 5.
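For illustration, the conversion from raw responses to the 0-100 scale can be sketched as follows. This is a minimal sketch assuming the standard 10-item, 3-level scoring of the SF-36 physical function subscale; the function name is ours, not part of the SF-36 documentation:

```python
def sf36_pf_transform(item_scores):
    """Convert raw SF-36 physical-function responses to the 0-100 scale.

    The subscale has 10 items, each scored 1 ("limited a lot") to
    3 ("not limited at all"), so raw totals span a 20-point range
    (10 to 30). The standard linear transformation maps this range
    onto 0-100, with each raw point worth 5 transformed points.
    """
    raw = sum(item_scores)
    assert 10 <= raw <= 30, "raw total must lie between 10 and 30"
    return (raw - 10) * 100 / 20

print(sf36_pf_transform([3] * 10))  # best possible function -> 100.0
print(sf36_pf_transform([1] * 10))  # worst possible function -> 0.0
print(sf36_pf_transform([2] * 10))  # mid-scale -> 50.0
```

Because each one-point change in the raw total moves the transformed score by 5, all possible scores fall at 0, 5, 10, … 100, which is why the thresholds discussed in this document (60, 65, 70, 75) sit exactly one or more increments apart.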
What is “Normal” Physical Function?
The situation whereby it was possible for a person to deteriorate on this measure over the course of the PACE Trial yet still be deemed to have attained physical function “within the normal range” on completion of the trial arose in part in consequence of the PIs’ various revisions of the relevant benchmarks in respect of recruitment criteria and the assessment of outcomes.
The problem also resides in the standard practice of using the mean plus and minus one standard deviation (SD) from the mean to denote the “range of normal” on a variable. When data is “normally distributed” (in statistical terms) around a mean, the concept relates well to what is the norm. In respect of physical function in general, and SF-36 scores in particular, data is skewed. In these circumstances, there is a difference between what is normal in the sense of being most frequently found, and “the normal range”. This should have been flagged up by the PIs in interpreting the reported outcomes on physical function.
The two problems are delineated below.
The Threshold of the “Range of Normal” as a Benchmark on Physical Function
The paper referenced in respect of the threshold of “the normal range” that has been applied in the PACE Trial (Bowling A et al; J Publ Health Med 1999:21:255-270) reviews normative data from a range of sources and concludes: “These results confirm the highly skewed nature of the distributions (see Fig 1), which is a problematic feature of all health status scales.”
This “problematic” feature is that the data are highly skewed towards the high end of the scale. Indeed, scrutiny of the relevant histogram in Fig 1 of the Bowling et al. paper suggests that there are more people who score the maximum 100 on the SF-36 physical functioning scale than the combined total of people who score anything other than 100.
In such circumstances, applying a benchmark of the mean minus one standard deviation to general population data on the SF-36 physical function subscale to denote the threshold of “the normal range”, while technically correct, does not equate to what would be understood as “normal” in respect of physical functioning in the general population.
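The point can be illustrated with a small synthetic example. The figures below are invented to mimic the heavily skewed shape described by Bowling et al. (most respondents at or near the maximum score), and are not their actual data:

```python
import statistics

# Synthetic, highly skewed sample of 100 SF-36 physical-function
# scores: more people score the maximum (100) than all other
# scores combined, loosely mimicking the published histograms.
sample = ([100] * 60 + [95] * 10 + [90] * 8 + [85] * 6 +
          [75] * 5 + [60] * 4 + [40] * 3 + [20] * 2 + [5] * 2)

mean = statistics.mean(sample)
sd = statistics.pstdev(sample)
threshold = mean - sd  # the "normal range" floor, mean minus 1 SD

# The threshold falls below 70 even though the median score is 100
# and the overwhelming majority of the sample scores 85 or more.
print(round(mean, 1), round(sd, 1), round(threshold, 1))
print(sum(s >= threshold for s in sample) / len(sample))
```

In such a sample a participant scoring at the mean-minus-one-SD threshold is “within the normal range” despite functioning worse than almost everyone in the comparison population, which is precisely the distinction between the statistical “normal range” and what is “normal”.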
Because of the skewed nature of distributions on health status scales, the use of a “reference range” may be more appropriate for comparative purposes. This describes the variations of a measurement or value in healthy individuals and is a basis for a physician or other health professional to interpret a set of results for a particular patient. The standard definition of a reference range originates in what is most prevalent in a control group taken from the population.
The PIs’ Definitions and Re-definitions of “Normal” and “The Range of Normal” on Physical Function
In the PACE Trial documents obtained under the FOIA it is recorded that the PIs’ intention was to set the recruitment ceiling at a maximum of 70 and to define normal physical function as an SF-36 score of at least 75.
In his application dated 12th September 2002 to the West Midlands Multicentre Ethics Committee (MREC), Professor White described the derivation of this threshold of “normal” as follows: “We will count a score of 75 [out of a maximum of 100] or more as indicating normal function, this score being one standard deviation below the mean score [90] for the UK working age population”, citing Jenkinson C et al. Short form 36 (SF-36) Health Survey questionnaire: normative data from a large random sample of working age adults; BMJ:1993:306:1437-1440.
It should be noted that the comparative data related to the UK working age population.
A ceiling of 70 in respect of recruitment and a threshold of 75 to denote “normal function” on the SF-36 physical function subscale were accordingly presented in the PACE Trial Identifier. As the SF-36 Physical Function subscale proceeds in increments of 5, this meant that there was the narrowest of margins between the ceiling on physical function in respect of entry to PACE, and the threshold of “normal” on conclusion.
The proposed threshold for entry was discussed at the Trial Steering Committee held on 22nd April 2004 (at which Professor White was present) and those discussions are minuted as follows: “7. The outcome measures were discussed. It was noted that there may need to be an adjustment of the threshold needed for entry to ensure improvements were more than trivial (emphasis added). For instance a participant with a Chalder score of 4 would enter the trial and be judged improved with an outcome score of 3. The TSC (Trial Steering Committee) suggested one solution would be that the entry criteria for the Chalder scale score should be 6 or above, so that a 50% reduction would be consistent with an outcome score of 3. A similar adjustment should be made for the SF-36 physical function subscale” (emphasis added).
Consequently, when the PACE Trial began (the first participant having been randomised on 18th March 2005), the ceiling in respect of SF-36 at entry was a score of 60.
In the Trial Protocol, an SF-36 threshold of 75 remains in respect of assessment of outcomes and plays a part in the identification of “a positive outcome”: “We will count a score of 75 (out of a maximum of 100) or more, or a 50% increase from baseline in SF-36 subscale score as a positive outcome”.
This applies in both the full 226 page final version (unpublished by the PIs but obtained under the FOIA and available at http://www.meactionuk.org.uk/FULL-Protocol-SEARCHABLE-version.pdf) and the shortened 20-page version of the Protocol that was published in 2007 (www.biomedcentral.com/1471-2377/7/6 -- which was not peer-reviewed by the journal because it had already received ethical and funding approval by the time it was submitted, the Editor commenting: “We strongly advise readers to contact the authors or compare with any published result(s) articles to ensure that no deviations from the protocol occurred during the study”).
Curiously, although the SF-36 score threshold remains at 75, the threshold of “normal” cited by the PIs in the PACE Trial protocol has been lowered to 70: “A score of 70 is about one standard deviation below the mean score (about 85, depending on the study) for the UK adult population”. The PIs cite two references in support (Jenkinson C et al; BMJ 1993:306:1437-1440 – ie. the same reference as in the application to the MREC -- and Bowling A et al; J Publ Health Med 1999:21:255-270). It is notable that the normative group identified now relates to the adult population as a whole (ie. it includes elderly people, whereas the normative group previously cited was the working age population).
Because of continued problems attaining recruitment targets, on 9th February 2006 Professor White wrote to Mrs Anne McCullough, Administrator at the West Midlands MREC, requesting a substantial amendment to raise the SF-36 entry threshold for the trial from 60 to 65. He stated: "Increasing the threshold [from 60 to 65] will improve generalisation…. The TMG (Trial Management Group) and TSC (Trial Steering Committee) believe this will also make a significant impact on recruitment”.
(It is notable that in this request for a substantive amendment dated 9th February 2006, Professor White assured the MREC that "Increasing the threshold [from 60 to 65] will improve generalisation”, yet in his response to Professor Hooper’s complaint on this point he admitted: “Such a change may affect the generalisability…of the results”. In context, the latter statement implies a limitation rather than an improvement. In effect, this change meant that, in the midst of the recruitment period, the pool of potential candidates was enlarged by relaxing the entry criteria to allow people with better physical capacity to take part).
Furthermore, it narrowed the gap between how physically impaired a person had to be in order to be recruited, and how well they had to function to be deemed to have a positive outcome, leading to the following approach to the MREC from Professor White: “This would mean the entry criterion on this measure was only 5 points less than the categorical positive outcome of 70 on this scale. We therefore propose an increase of the categorical positive outcome from 70 to 75, reasserting a ten point score gap between entry criterion and positive outcome” (emphasis added).
Given that the threshold of positive outcome is stated as 75 in the Trial Protocol, this is baffling. (It was the threshold of normal function in the population that is cited as 70.) Unless there is an as-yet unidentified document reducing the SF-36 threshold denoting a positive outcome from 75 to 70, it would appear that Professor White was confused as to the existing benchmark.
In any event, the gap proposed was ten points – representing a minimum increment of two stages on the SF-36 scale.
Professor White further assured the MREC that this change would bring the PACE Trial into line with its “sister study”, the FINE Trial, and that it would not affect the analysis of the trial data: “The other advantage of changing to 75 is that it would bring the PACE trial into line with the FINE trial, an MRC funded trial for CFS/ME and the sister study to PACE. This small change is unlikely to influence power calculations or analysis”.
The presentation of trial data in The Lancet demonstrates that Professor White did not observe the assurances he provided to the ethics committee.
In The Lancet article reporting the results of the PACE Trial, the primary efficacy measures as set out in the trial protocol have been abandoned altogether. There is no reference to any measure of “a positive outcome”.
However, a “post hoc” analysis is presented, comparing PACE participants’ outcomes against a threshold of “the normal range” in respect of physical function. That threshold, defined as the mean minus one standard deviation of a normative population, had been specified as 75 in the application to the MREC and reduced to 70 in the protocol; in the analysis published in The Lancet it is further reduced to an SF-36 score of 60.
This was based on “the mean minus 1 SD scores of the UK working age population of 84 (–24) for physical function”.
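The "mean minus one standard deviation" arithmetic behind these shifting thresholds can be sketched as follows. This is a minimal illustration using only the figures quoted in the text (the protocol's mean of "about 85" with an implied SD of roughly 15, and The Lancet's 84 with an SD of 24); the SD of 15 at the protocol stage is an inference from the stated mean and threshold, not a figure given by the PIs.

```python
# Sketch of the "normal range" floor used at each stage of the PACE Trial,
# defined as the normative mean minus one standard deviation.

def normal_range_floor(mean, sd):
    """Lower bound of the 'normal range': mean minus one standard deviation."""
    return mean - sd

# Protocol stage: UK adult population mean of about 85; the cited threshold
# of 70 implies an SD of roughly 15 (an inference, not a quoted figure).
print(normal_range_floor(85, 15))  # 70

# Lancet analysis: mean 84, SD 24, giving the threshold of 60.
print(normal_range_floor(84, 24))  # 60
```

The point of the sketch is that the same formula yields a much lower threshold when a normative sample with a larger SD (or lower mean) is substituted, which is precisely the substitution the text describes.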
One reference is cited in respect of this threshold, this being the Bowling et al paper that was one of two cited at the PACE Trial Protocol stage. That paper reviews normative data from a range of sources, none of which appears to provide the figures cited (see Table 4: “Comparison of SF-36 dimension norms in Britain” in the Bowling et al paper).
Following Professor Hooper’s complaint, Professor White responded in his letter to The Lancet: “We did, however, make a descriptive error in referring to the sample we referred to in the paper as a ‘UK working age population’, whereas it should have read ‘English adult population’ ”. Such a comparator is inappropriate because, by definition, the English adult population includes elderly people. The appropriate comparison would be with the SF-36 physical function scores for age-matched healthy people. However, this would have raised the threshold of the normal range, making it more difficult – if not impossible – for the PIs to claim even moderate success for the PACE Trial.
Furthermore, the data analysis published in The Lancet is at odds with one of the reasons given by Professor White to the MREC for previously setting the “categorical positive outcome” at 75, namely to put PACE into line with the FINE Trial.
It is notable that, when the FINE Trial results were reported (in the spring of 2010), only 17 of the 81 participants assessed had met the relevant parameter in respect of physical function at the primary outcome point -- a score of at least 75 or an improvement of 50% from baseline.
Remarkably, in view of the abstruse complexity of much of the analysis presented in The Lancet article, the PACE Trial PIs have stated: “Changes to the original published protocol were made to improve either recruitment or interpretability” (The Lancet: doi:10.1016/S0140-6736(11)60651-X).
In summary, since it was possible to score 65 on the SF-36 and still be recruited to the PACE Trial, setting the threshold of “the normal range” at 60 on completion created a negative five-point gap: a participant could deteriorate during the course of the trial and leave more disabled than before treatment, yet still fall within the PIs’ new definition of “normal”. Attainment of “normality” was thus set lower than the entry criterion, which by any standards is illogical.
Assessing “Normal” Fatigue
In the PACE Trial, fatigue was assessed using the Chalder Fatigue Questionnaire or CFQ (Chalder T, Wessely S et al; J Psychosom Res 1993:37:147-153).
The Chalder Fatigue Questionnaire comprises eleven questions. Respondents are asked to indicate their situation in respect of each of these on a four-point scale: “less than usual”; “no more than usual”; “more than usual”; “much more than usual”.
The fatigue score is the sum total of the scores obtained in respect of the eleven items in the Chalder Fatigue Questionnaire. The higher the score, the greater the impact of fatigue. However, there are two methods of scoring responses.
Change by the PIs in Method of Scoring Outcomes
One method of producing a fatigue score involves scoring these respective responses on a scale from 0 to 3 and summing the total. This method (known as Likert scoring, which has a possible range of 0 - 33) was used to assess outcomes on fatigue.
However, for the purposes of screening for entry to PACE, a different method of scoring the responses was adopted. Known as bimodal analysis, this entails placing each response into one of two categories: any item rated “less than usual” or “no more than usual” is allocated a score of 0; any item rated “more than usual” or “much more than usual” is allocated a score of 1. The possible range is therefore 0 -11.
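The two scoring methods just described can be sketched as follows. This is an illustrative implementation based solely on the description above (item ratings 0–3, with bimodal scoring collapsing each item to 0 or 1); the function names are my own.

```python
# Sketch of the two scoring methods for the 11-item Chalder Fatigue
# Questionnaire. Response categories per item:
# 0 = "less than usual", 1 = "no more than usual",
# 2 = "more than usual", 3 = "much more than usual".

def likert_score(responses):
    """Likert scoring: sum the 0-3 item ratings directly (range 0-33)."""
    assert len(responses) == 11
    return sum(responses)

def bimodal_score(responses):
    """Bimodal scoring: each item counts 1 if rated 2 or 3, else 0 (range 0-11)."""
    assert len(responses) == 11
    return sum(1 for r in responses if r >= 2)

# Example: every item rated "more than usual".
responses = [2] * 11
print(likert_score(responses))   # 22
print(bimodal_score(responses))  # 11
```

Note that the two methods compress the same responses very differently: bimodal scoring discards the distinction between "more" and "much more than usual", which is what makes translation between the two scales one-to-many.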
It is notable that the original proposal – as set out in the MREC application, the Trial Identifier, and the Trial Protocol – was to analyse results bimodally. This was to feed into one of two “primary efficacy measures”: “A positive outcome will be a 50% reduction in fatigue score, or a score of 3 or less, this threshold having been previously shown to indicate normal fatigue” (Trial Protocol, citing Chalder T, Berelowitz G, Hirsch S, Pawlikowska T, Wallace P, Wessely S, Wright D: Development of a fatigue scale. J Psychosom Res 1993, 37:147-153).
The rationale provided for the change to Likert scoring in the consideration of outcomes in The Lancet article was: “Before outcome data were examined, we changed the original bimodal scoring of the Chalder fatigue questionnaire (range 0-11) to Likert scoring to more sensitively test our hypothesis of effectiveness”.
However, one consequence of adopting a Likert approach to processing responses is that it becomes easier to demonstrate differences between the groups when such differences are relatively small.
This had been demonstrated in respect of the fatigue outcome data in the FINE Trial: analysed using bimodal scoring as set out in the FINE Trial protocol, there was no statistically significant improvement in fatigue between the FINE interventions and the “treatment as usual” control group at the primary outcome point (Wearden AJ et al; BMJ 2010:340:c1777).
However, following publication of those results, the FINE Trial Investigator (Dr Alison Wearden PhD, an observer on the PACE Trial Steering Committee) reappraised the FINE Trial data according to Likert scoring and produced a “clinically modest, but statistically significant effect…at both outcome points” (http://www.bmj.com/cgi/eletters/340/apr22_3/c1777#236235), a fact of which the PACE Trial PIs would have been well aware.
What is “Normal” Fatigue? What is “Abnormal” Fatigue?
As with the physical function scores, there is an overlap between the level of fatigue deemed sufficiently significant to qualify a person to participate in the PACE Trial and the level of fatigue deemed to denote a positive outcome.
This means that identical responses on the Chalder Fatigue Questionnaire could qualify a person as sufficiently “fatigued” for entry to the PACE trial and later allow them to be deemed to have attained “normality” in terms of their level of fatigue at the outcomes assessment stage.
This absurdity is somewhat opaque owing to the use of a different method of processing responses to the Chalder Fatigue Questionnaire at entry stage (bimodal) and outcomes assessment (Likert) stage (see above). Nonetheless it is possible to demonstrate a manifest contradiction and flaws in the definitions used.
Qualifying threshold re: Fatigue for Entry to the PACE Trial
As with physical function, the criterion that was used to recruit participants to PACE in respect of fatigue differed from what was originally specified.
In his application dated 12th September 2002 to the MREC, Professor White stated: “We will operationalise CFS in terms of fatigue severity … as follows: a Chalder fatigue score of four or more.” He also referred to: “a score of 4 having been previously shown to indicate abnormal fatigue.”
The PACE Trial Identifier repeated the requirement for a fatigue score of 4 or more to indicate caseness at entry.
However, following discussion at the Trial Steering Committee (on 22nd April 2004) this was revised upward to 6 in order to allow for a more appropriate gap to appear between the required level of fatigue on entry and the threshold of an outcome denoting improvement (at that point, a score of 3 or less, or a 50% improvement from baseline -- however, the consideration of this “primary efficacy measure” was later dropped). PACE participants were recruited on this basis.
Ceiling of “Normal” Fatigue on Completion of the PACE Trial
The commitments given before the PACE Trial interventions began were consistently for a ceiling score of 3 on the Chalder Fatigue Questionnaire, rated bimodally, to represent “normal” fatigue on completion of the trial. The rationale for treating bimodally rated scores of 4 and above as representing abnormal levels of fatigue is repeatedly cited as Chalder T et al. J Psychosom Res 1993:37:147-153. That paper is the work of the lead author of the Chalder Fatigue Questionnaire, PACE Trial Principal Investigator Professor Trudie Chalder; a co-author is Professor Simon Wessely, Director of the PACE Trial Clinical Unit and member of the Trial Management Group. For example:
In his application to the MREC, under the heading “What is the primary end point?” Professor White stated: “We will use the 0,0,1,1 item scores to allow a categorical threshold measure of "abnormal" fatigue with a score of 4 having been previously shown to indicate abnormal fatigue.”
In the PACE Trial Identifier, under the heading “3.9 What are the proposed outcome measures? Primary efficacy measures” Professor White stated: “We will use the 0,0,1,1 item scores to allow a categorical threshold measure of “abnormal” fatigue with a score of 4 having been previously shown to indicate abnormal fatigue [ref 23]” (Chalder T et al. J Psychosom Res 1993; 37: 147-153.)
In the PACE Trial protocol, under the heading “10.1 Primary outcome measures; 10.1.1 Primary efficacy measures” Professor White stated: “A positive outcome will be a 50 % reduction in fatigue score, or a score of 3 or less, this threshold having been previously shown to indicate normal fatigue” (Chalder T et al. J Psychosom Res 1993, 37:147-153).
However, in The Lancet article reporting the results of the PACE Trial, when “normal” levels of fatigue were judged on completion of the trial, the analysis conducted related to: “the proportions of participants who had scores of both primary outcomes within the normal range at 52 weeks. This range was defined as less than the mean plus 1 SD scores of adult attendees to UK general practice of 14.2 (+4.6) for fatigue (score of 18 or less) … ” (32: Cella M, Chalder T et al: J Psychosom Res 2010:69:17-22).
A Likert score of 18 can translate to a bimodal score of between 4 and 9, depending on the specific responses that combine to produce the Likert score. According to the PIs, a bimodal score of 4 or more indicates abnormal fatigue (see above). Hence a Likert score of 18 always represents a state of abnormal fatigue.
In order to allow for a sufficient gap between the positive outcome criterion then proposed – a bimodal score of 3 or less -- the threshold of fatigue at entry to the PACE Trial had been set at 6. However, it is possible to record responses producing a Likert score of 18 (ie. the ceiling of “the range of normal” fatigue on conclusion of the PACE Trial) which translates to bimodal scores of 6, 7, 8, and 9.
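The overlap described in the two paragraphs above can be checked by brute force. The sketch below (my own illustrative code, not from the trial documentation) enumerates every possible distribution of the 11 item ratings and reports which bimodal scores are compatible with a Likert total of 18; only the counts of items at each rating level matter, so the search space is small.

```python
# Enumerate which bimodal scores can co-occur with a given Likert total on
# the 11-item Chalder Fatigue Questionnaire. Items are rated 0-3; bimodal
# scoring counts an item as 1 if rated 2 or 3, else 0.

def bimodal_scores_for_likert(target, n_items=11):
    """Return the set of bimodal scores achievable for a given Likert total."""
    found = set()
    for n3 in range(n_items + 1):                       # items rated 3
        for n2 in range(n_items + 1 - n3):              # items rated 2
            for n1 in range(n_items + 1 - n3 - n2):     # items rated 1
                if n1 + 2 * n2 + 3 * n3 == target:
                    found.add(n2 + n3)                  # items rated 2 or 3
    return found

print(sorted(bimodal_scores_for_likert(18)))  # [4, 5, 6, 7, 8, 9]
```

This confirms the translation stated in the text: a Likert score of 18 corresponds to bimodal scores from 4 to 9, so it always exceeds the bimodal threshold of 3 for "normal" fatigue, and can equal or exceed the entry threshold of 6.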
Either the threshold of “normal” denoting a positive outcome should have been lower than the measure used in the analysis in The Lancet (Likert 18), and/or the threshold of caseness at recruitment (bimodal score of 6) should have been higher.
The net result of the analysis conducted is that identical responses could both qualify a person as sufficiently “fatigued” for entry to the PACE trial and at completion of the trial allow them to be deemed to be within “the range of normal” in terms of their level of fatigue.
What’s more, as with physical function, it would be possible for a person to record poorer responses in respect of fatigue on completion of the trial than at the outset, yet still be deemed by the PIs to be within “the range of normal” on this subjective primary outcome.
Further Issues Regarding the Assessment of Fatigue in the PACE Trial
Several further points are relevant in this regard.
First, the cited reference for the benchmark chosen to assess PACE outcomes, co-authored by the PACE Trial Principal Investigator Trudie Chalder, also provides bimodal scores for the same population: “community sample: mean fatigue 3.27 (S.D. 3.21)”. This places the ceiling at which a person can have fatigue and still be considered within the normal range at a bimodal score of 6.
This is inconsistent with the PACE Trial literature, which repeatedly refers to “a score of 4 having been previously shown to indicate abnormal fatigue” (see above), citing a paper lead-authored by Trudie Chalder and co-authored by Director of the PACE Trial Clinical Unit and member of the Trial Management Group Professor Simon Wessely.
Secondly, the Lancet article states that the benchmark employed was derived from fatigue scores from “adult attendees to UK general practice”. That study was part of a long-term longitudinal scrutiny of a cohort group but, notably, “only completed data from those who went to see their general practitioner the following year…. were used in this study” (emphasis added). The Chalder Fatigue Questionnaires therefore related to the year prior to the selected cohort becoming “attendees to UK general practice”.
This is a curious and convoluted selection of a comparison population from which to derive normative data. Moreover the nature of this comparison group is by no means obvious from the PIs’ description (“adult attendees to general practice”) that is set out in The Lancet article on the PACE Trial results. Again, Trudie Chalder was an author of both papers.
Finally, it is possible that fatigue - unlike physical function - is “normally distributed” in the general population, as asserted by (then Dr) Simon Wessely. Referring to the findings of a study based on data from over 15,000 people (Pawlikowska T, Chalder T, Wessely S et al. BMJ 1994:308:743-746), he stated: “18% had experienced substantial fatigue for six months or longer. Fatigue, however, was ‘normally’ distributed…” (Epidemiology of CFS: in “A Research Portfolio on Chronic Fatigue”; edited by Robin Fox for The Linbury Trust; RSM Press 1998).
If fatigue is normally distributed, then the method of equating “the range of normal” (a statistical concept) with “normality” (what is widespread in the population), as in reporting the PACE Trial results, is acceptable. However, this would differentiate attempts to measure fatigue (ie. by using the Chalder Fatigue Questionnaire) from “all health status scales” in respect of which distributions are “highly skewed” (Bowling A et al. J Publ Health Med 1999:21:255-270) as referenced in the PACE Trial documentation.
The implications of this are profound, suggesting as it does that fatigue has a uniquely different relationship to health status.
It would, however, be in keeping with Wessely’s own findings (as published in his 1998 article on the Epidemiology of CFS referenced above) that “the world could not be divided into those with chronic fatigue (the ill group) and those without (the well)” (emphasis added).
It is worth reiterating that in response to Professor Hooper’s complaint to The Lancet, Peter White, writing on behalf of all contributors to The Lancet article, stated: “The PACE trial paper …. does not purport to be studying CFS/ME but CFS defined simply as a principal complaint of fatigue that is disabling, having lasted six months, with no alternative medical explanation (Oxford criteria)”.
Why would The Lancet fast-track an article concerning a spurious disorder defined “simply as a principal complaint of fatigue”?
Against this background, what was the purpose of the PACE Trial, given that the Director of the PACE Clinical Trial Unit, Professor Simon Wessely, is on record -- long before the PACE Trial began -- stating his empirically-based conclusion that the world cannot be divided into “the ill” and “the well” on the basis of the degree of fatigue experienced?
Conclusion
Reporting on the results of the PACE Trial, The Lancet article states: “25 (16%) of 153 participants in the APT group were within normal ranges for both primary outcomes at 52 weeks, compared with 44 (30%) of 148 participants for CBT, 43 (28%) of 154 participants for GET, and 22 (15%) of 152 participants for SMC.”
In the light of the contradictions and other considerations outlined above, it would appear that these figures, modest as they are, inflate the proportions who may be deemed to be within “the normal range” on conclusion of the PACE Trial (but being within “the normal range” does not necessarily equate to what would be considered “normal” in the typical sense of the word).
“The normal range” is a statistical term; “normality” is the usual/regular/common/typical value of a variable in respect of an appropriate control population. Where a measure is “normally distributed” in the general population, the method chosen to identify the “normal range” – ie. the mean plus or minus one standard deviation from the mean – equates well to what is “normal”. Where the distribution is skewed, as it is in respect of physical function, then the application of this formula fails to deliver a meaningful threshold in terms of what is “normal” in the population.
Furthermore, there were numerous changes to the chosen thresholds and cut off points, both in terms of entry to the PACE Trial and in respect of the assessment of outcomes.
Manipulation of the benchmarks used to recruit to the PACE Trial and to judge whether or not participants were “within the normal range” at its conclusion has produced an absurd situation whereby the same requirement for admission to the trial is deemed by the PIs to denote success at the end of the trial. With regard to these issues:
• the PIs’ chosen thresholds of the “normal range” on the two “primary outcomes” are contrived, unrepresentative, and unduly low in respect of physical function and high in respect of fatigue
• the nature of the comparison group in respect of physical function is misrepresented in the article published in The Lancet, which refers to a “working age population”. The threshold of the range of normal is now said to have been derived from figures relating to the “adult population as a whole” ie. including elderly people. This affords a lower threshold of the “normal range”, thus boosting the proportion of PACE participants who could be deemed to have attained the benchmark level of physical functioning
• the reference cited in respect of the chosen threshold of the range of normal physical functioning does not appear to provide the figures cited by the PIs (ie. Bowling A et al. Publ Health Med 1999:21:255-270)
• the benchmark chosen in respect of “fatigue” is at odds with the threshold of “abnormal” fatigue as “demonstrated” in previously published work by the PIs, as cited in the Trial Protocol.
These factors make the PACE Trial outcomes appear more favourable than is warranted; this in turn misrepresents the claimed efficacy of the interventions CBT and GET.
At the same time, the two “primary outcome measures” that were specified to delineate “a positive outcome” are not reported. No alternative “primary efficacy measures” are proposed, nor is there any reference to parameters of “a positive outcome” in The Lancet article.
The analysis given greatest prominence simply compares mean scores between the various intervention and control groups on physical function and fatigue and, having identified some statistically significant differences between these, concludes that CBT and GET “moderately improve outcomes”.
On behalf of all of the contributors to the PACE Trial article published in The Lancet, Peter White has agreed with something that people with myalgic encephalomyelitis have been pointing out, ie. the article does not relate to people with ME but to “Oxford”-defined chronic fatigue syndrome: “a principal complaint of fatigue that is disabling, having lasted six months, with no alternative medical explanation.”
Consequently, there should be immediate, high profile, unequivocal clarification specifying to which patients the PACE Trial findings can legitimately be applied.
The PACE Trial Protocol states that the main aim of the trial was to “provide high quality evidence to inform choices made by patients, patient organisations, health services and health professionals about the relative benefits, cost-effectiveness, and cost-utility, as well as adverse effects, of the most widely advocated treatments for CFS/ME”.
The problematic analysis and presentation of data means that the PACE Trial has failed to provide “high quality evidence”, which is an unacceptable outcome for an eight-year project involving 641 participants that cost £5 million to execute.
Patients, clinicians and tax-payers have a right to expect higher scientific exactitude from The Lancet, and the PIs have an ethical and fiscal duty to allow an independent re-evaluation of the data.
University of California: Those with Gulf War illness have the same symptoms as those with genetic mitochondrial disorders
By Kelly Kennedy, USA TODAY, June 26, 2011:
Golomb found that those with Gulf War illness had the same list of symptoms as those with genetic mitochondrial disorders. Mitochondria convert oxygen and glucose into cell energy. The brain and the muscles use more energy than other parts of the body, so those organs are affected first by the disorder.
"Oxidated stress can come from a lot of bad things in the environment," Golomb said, explaining that this causes the problems in the mitochondria. Her past research has involved chemical exposures in the Persian Gulf, such as sarin gas, pesticides and anti-nerve-agent pills.
The unpublished results will be released at a committee meeting at the VA.
The research was funded by the Defense Department through the Congressionally Directed Medical Research Programs.
Anti-oxidants ease Gulf War Syndrome, study finds
By Kelly Kennedy, USA TODAY, June 26, 2011:
WASHINGTON — Anti-oxidant supplements can significantly reduce the symptoms of Gulf War Syndrome, suffered by tens of thousands of veterans, according to research to be presented Monday to the Department of Veterans Affairs.
The study by Beatrice Golomb of the medical school at the University of California-San Diego tested the value of giving doses of the coenzyme Q10 to veterans of the Persian Gulf War. "Every single one of them … improved," Golomb said, adding that there was improvement for all 20 symptoms. "For it to have been chance alone is under one in a million."
More than 20 years after the end of the Gulf War, the 1990-91 conflict that liberated Kuwait after an invasion by Iraq, Golomb's study is the first research that offers potential relief for sufferers of Gulf War Syndrome, said Jim Binns, chairman of the federal panel investigating the condition.
Roughly one in four of the 697,000 veterans of the war has Gulf War illness, according to the federal Research Advisory Committee on Gulf War Veterans' Illnesses. Symptoms include memory and concentration problems, chronic headaches, widespread pain, gastrointestinal problems and chronic fatigue.
"It is the first medication study to show a significant improvement of a major symptom of Gulf War illness in the history of Gulf War illness research," said Binns, the committee chairman. Although it's not a cure, Binns said, and requires further research, "it is extremely encouraging."
Golomb said the treatments helped veterans with headaches, inability to focus and fatigue after exertion. There were also unexpected benefits, she said, such as fewer symptoms for participants suffering from chronic diarrhea and improved blood pressure levels. She worked with 46 veterans.
Golomb found that those with Gulf War illness had the same list of symptoms as those with genetic mitochondrial disorders. Mitochondria convert oxygen and glucose into cell energy. The brain and the muscles use more energy than other parts of the body, so those organs are affected first by the disorder.
"Oxidated stress can come from a lot of bad things in the environment," Golomb said, explaining that causes the problems in the mitochondria. Her past research has involved chemical exposures in the Persian Gulf, such as sarin gas, pesticides and anti-nerve-agent pills.
The unpublished results will be released at a committee meeting at the VA.
The research was funded by the Defense Department through the Congressionally Directed Medical Research Programs.
Woman dies at her own funeral
TheHuffingtonPost.com, 06/24/11:
A woman has reportedly died from the shock of coming to life at her own funeral.
Fagilyu Mukhametzyanov, 49, was wrongly declared dead by doctors, but she actually died after hearing people pray for her soul in Kazan, Russia, according to the Daily Mail.
She was taken back to a hospital where she was declared dead, this time for good.
"Her eyes fluttered and we immediately rushed her back to the hospital but she only lived for another 12 minutes," her husband, Fagili Mukhametzyanov, said, according to the Daily News.
Mukhametzyanov said he plans to sue the hospital, which says it is conducting an investigation of the incident.
Her final cause of death was heart failure, according to reports. Her "first death" was also heart-related, a suspected heart attack.
This isn't the first time a funeral has taken an unexpected twist. In recent years, a man showed up alive for his own funeral in Brazil and a premature baby declared dead woke up before his own funeral before dying shortly after in Paraguay.
Norwegian ME/CFS conference in October 2011 with Dr. Nigel Speight
Posted by Birgitte on 26/06/2011:
EMEA Norway member Norwegian ME Association is holding its next ME/CFS conferences
on 18th and 19th October in Oslo and Bergen, respectively.
More detail will be available later.
Speakers will include -
Dr. Dan Peterson
Dr. Benjamin Natelson
Dr. Nigel Speight
Gudrun Lange
Dr Barbara Baumgarten-Austrheim from Oslo
Neurologist Halvor Næss from Haukeland Hospital in Bergen
Read more>>
Labels:
CHRONIC DISEASE,
Coping,
DIAGNOSING,
ME,
ME/CFS,
RESEARCH,
Science
Sunday, June 26, 2011
New directions from the Norwegian Directory of Health: CBT and GET can no longer be recommended as treatment for ME/CFS
ESME Team:
This is really good news for Norway and hopefully the best step forwards for Europe.
CFS/ME Knowledge summary, evaluation and recommendation for Ministry of Health and Care Services
Many patients need services involving interaction between primary and specialist care. Unfortunately, this interaction does not function well enough today.
The Directorate of Health has answered the mission from the Ministry of Health and Care Services with the following main conclusions and recommendations. The full answer can be read in the box to the right. (Translation of the full answer will be posted later.) Link to the reports from the Center of Knowledge will follow later.
Main conclusions:
There does not currently exist an evidence-based knowledge foundation from which to publish national guidelines or a general guide.
In light of the present reports, the Directorate of Health recognises that it will still take time to build good, robust patient care for this group of patients.
The knowledge review does not support the earlier recommendation to use the NICE criteria.
The knowledge review does not, at a general level, support recommending graded exercise therapy and/or cognitive therapy for everybody with CFS/ME.
Recommendations:
On the basis of today's knowledge, the review of Kenny De Meirleir's research means that the Directorate of Health cannot recommend that the public health sector finance this kind of treatment.
It is recommended that ongoing studies be identified and that existing studies regarding causality and diagnostics be summarised.
New research projects and recommendations about interventions must be considered in relation to the severity of the disease (mild, moderate, severe or very severe) and the phase of the disease the patient is in (unstable, stabilisation, or rehabilitation/"reconstruction").
An increase in funding for research on causality and treatment is recommended.
Collection and dissemination of experience-based knowledge will be facilitated through, among other things, regional experience-sharing conferences.
The creation of a national treatment/competence service for CFS/ME for a limited time is under consideration.
The biobank at Oslo University Hospital – Aker will be closely linked to the national service.
It is recommended that ambulant/outpatient teams be created for children, youth and adults in all health regions.
It is recommended that an effort be started to develop good models for follow-up of children who are relatives/next of kin.
Regional polyclinics for CFS/ME are recommended.
Rehabilitation services building on the experience and competence of, among others, Sølvskottberget are recommended.
Regional learning and coping courses for patients and next of kin are recommended.
Extension of the national Information Telephone Service for CFS/ME is recommended.
Original Norwegian Text : http://www.helsedirektoratet.no/habilitering_rehabilitering/cfs-me/cfs_me_kunnskapsoppsummering__evaluering_og_abefalinger_til_hod_813684
Kind regards,
ESME Team
Carl Zimmer: If the scientific community put more value on replication, science would do a better job
By CARL ZIMMER, Published: June 25, 2011
ONE of the great strengths of science is that it can
fix its own mistakes. “There are many hypotheses in
science which are wrong,” the astrophysicist Carl
Sagan once said. “That’s perfectly all right: it’s the
aperture to finding out what’s right. Science is a
self-correcting process.”
If only it were that simple. Scientists can certainly
point with pride to many self-corrections, but science
is not like an iPhone; it does not instantly
auto-correct.
As a series of controversies over the past few months
have demonstrated, science fixes its mistakes more
slowly, more fitfully and with more difficulty than
Sagan’s words would suggest.
Science runs forward better than it does backward.
Why? One simple answer is that it takes a lot of time
to look back over other scientists’ work and replicate
their experiments. Scientists are busy people,
scrambling to get grants and tenure.
As a result, papers that attract harsh criticism may
nonetheless escape the careful scrutiny required if
they are to be refuted.
In May, for instance, the journal Science published
eight critiques of a controversial paper
(http://bit.ly/kTVizb ) that it had run in December.
In the paper, a team of scientists described a species
of bacteria that seemed to defy the known rules of
biology by using arsenic instead of phosphorus to build
its DNA.
Chemists and microbiologists roundly condemned the
paper; in the eight critiques (http://bit.ly/mg55ZU ),
researchers attacked the study for using sloppy
techniques and failing to rule out more plausible
alternatives.
But none of those critics had actually tried to
replicate the initial results. That would take months of
research: getting the bacteria from the original team
of scientists, rearing them, setting up the experiment,
gathering results and interpreting them.
Many scientists are leery of spending so much time on
what they consider a foregone conclusion, and
graduate students are reluctant because they want
their first experiments to make a big splash, not
confirm what everyone already suspects.
“I’ve got my own science to do,” John Helmann, a
microbiologist at Cornell and a critic of the Science
paper, told Nature (http://bit.ly/j2IDb0 ).
The most persistent critic, Rosie Redfield, a
microbiologist at the University of British Columbia,
announced this month on her blog
(http://bit.ly/jMVieG ) that she would try to replicate
the original results - but only the most basic ones,
and only for the sake of science’s public reputation.
“Scientifically I think trying to replicate the claimed
results is a waste of time,”
she wrote in an e-mail.
For now, the original paper has not been retracted;
the results still stand.
Even when scientists rerun an experiment, and even
when they find that the original result is flawed, they
still may have trouble getting their paper published.
The reason is surprisingly mundane: journal editors
typically prefer to publish groundbreaking new
research, not dutiful replications.
In March, for instance, Daryl Bem, a psychologist at
Cornell University, shocked his colleagues by
publishing a paper in a leading scientific journal, The
Journal of Personality and Social Psychology, in which
he presented the results of experiments showing, he
claimed, that people’s minds could be influenced by
events in the future, as if they were clairvoyant.
Three teams of scientists promptly tried to replicate
his results. All three teams failed. All three teams
wrote up their results and submitted them to The
Journal of Personality and Social Psychology. And all
three teams were rejected - but not because their
results were flawed.
As the journal’s editor, Eliot Smith, explained to The
Psychologist (http://bit.ly/keLfoW ), a British
publication, the journal has a longstanding policy of
not publishing replication studies.
“This policy is not new and is not unique to this
journal,”
he said.
As a result, the original study stands.
Even when follow-up studies manage to see the light
of day, they still don’t necessarily bring matters to a
close. Sometimes the original authors will declare the
follow-up studies to be flawed and refuse to retract
their paper.
Such a standoff is now taking place over a
controversial claim that chronic fatigue syndrome is
caused by a virus.
In October 2009, the virologist Judy Mikovits and
colleagues reported in Science (http://bit.ly/iVnRai )
that people with chronic fatigue syndrome had high
levels of a virus called XMRV. They suggested that
XMRV might be the cause of the disorder.
Several other teams have since tried - and failed - to
find XMRV in people with chronic fatigue syndrome.
As they’ve published their studies over the past year,
skepticism has grown. The editors of Science asked
the authors of the XMRV study to retract their paper.
But the scientists refused; Ms. Mikovits declared that
a retraction would be “premature.”
The editors have since published an “editorial
expression of concern.” (http://bit.ly/l7iULF )
Once again, the result still stands.
But perhaps not forever. Ian Lipkin, a virologist at
Columbia University who is renowned in scientific
circles for discovering new viruses behind mysterious
outbreaks, is also known for doing what he calls
“de-discovery”: intensely scrutinizing controversial
claims about diseases.
Last September, Mr. Lipkin laid out several tips
(http://bit.ly/lPAlV6 ) for effective de-discovery in the
journal Microbiology and Molecular Biology Reviews.
He recommended engaging other scientists - including
those who published the original findings - as well as
any relevant advocacy groups (like those for people
suffering from the disease in question).
Together, everyone must agree on a rigorous series of
steps for the experiment. Each laboratory then carries
out the same test, and then all the results are
gathered together.
At the request of the National Institutes of Health,
Mr. Lipkin is running just such a project with Ms.
Mikovits and other researchers to test the link
between viruses and chronic fatigue, based on a
large-scale study of 300 subjects. He expects results
by the end of this year.
This sort of study, however, is the exception rather
than the rule. If the scientific community put more
value on replication - by setting aside time, money
and journal space - science would do a better job of
living up to Carl Sagan’s words.
Carl Zimmer writes frequently for The New York Times
about science and is the author, most recently, of “A
Planet of Viruses.”