Coal In Our Stockings: The Destruction of Medical Innovation
Benjamin Zycher
The holidays are fast approaching, and the “elves” are busy at the North Pole. No, not the presidential candidates. No, not the Capitol Hill pols. And no, not those unrelenting pursuers of objectivity and truth: the journalists.
I refer instead to the bureaucrats, in particular those implementing the new “comparative effectiveness review” (CER) process for comparing alternative treatments for given medical conditions.
The 2010 Patient Protection and Affordable Care Act — aka Obamacare — established the Patient-Centered Outcomes Research Institute to “conduct research to provide information about the best available evidence to help patients and their health care providers make more informed decisions.” What could be wrong with that? CER is supposed to be “a rigorous evaluation of the impact of different options that are available for treating a given medical condition for a particular set of patients.”
Alas, there is a problem: The federal government does not have patients. Instead, it has interest groups engaged in a long twilight struggle over shares of the federal budget pie. Less for one group means more for others, and even modest reductions in the huge federal health-care budget are a tempting goal for other constituencies.
In other words, there can be no such thing as unpoliticized science in the Beltway. It is inevitable that political pressures will lead policymakers to use the findings yielded by CER analyses to influence decisions on coverage, reimbursement, or incentives within Medicare, Medicaid, and other federal health programs.
Consider the new environment confronting would-be investors in new and improved medical technologies, such as pharmaceuticals, medical devices, and equipment. One cannot know in advance either how CER analyses of interest will turn out or how the findings will be used. Indeed, the uncertainties are enormous. The findings of statistical analyses are driven in substantial part by the design of the underlying studies.
Such studies always will conflict to some degree, introducing considerable subjectivity into the process of deriving “conclusions” from the CER process. Even for a given study, experts inevitably will differ on the conclusions to be drawn and the recommendations to be made. More important, the process of scientific discovery is dynamic: Later findings can call earlier findings into question, and CER analysis necessarily will find itself “behind the curve” as medical technologies and treatment protocols evolve over time. And what is true for a population may not be true for a given subset of patients, a problem for which such top-down approaches as CER are particularly ill-suited.
As an example, let us recall the experience of ALLHAT (the Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial), conducted from 1994 through 2002. ALLHAT was a large (over 42,000 patients) and well-publicized comparative analysis of four alternative hypertension drugs, as well as of the effects of lipid-lowering drugs, on the rates of heart attacks, strokes, and early deaths. Substantial disagreement emerged in the scientific literature over the design of the trial, the interpretation of the data, the importance of observed side effects, and a number of other issues. Other CER analyses suggested differing conclusions. As the end of the ALLHAT study approached, new drugs (in particular the statin class of cholesterol drugs) and drug combination therapies reduced somewhat the usefulness of the ALLHAT findings, and there is little evidence in the literature that the trial has had an appreciable effect upon clinical practice.
Nonetheless, the likelihood that CER studies will be used one way or another is very high. This means that private-sector investors in new technologies will face four new or strengthened sources of risk. First, there will arise a need to expand private clinical testing to include preliminary CER analysis, as a means of acquiring information about future federal findings and government reactions. Second, increased pricing pressures can be expected as a result of CER analyses that are inconclusive or adverse. Third, CER raises the risk of non-approval or limited approval for federally financed programs, perhaps as a tool with which to force ever-greater price reductions. Finally, CER is likely to yield a longer regulatory approval process, and thus a shortening of the effective patent period and a delay in expected sales revenues.
Recent research from the Pacific Research Institute examined the likely effects of these CER-driven pressures on R&D investment in new and improved pharmaceuticals, devices, and equipment. Using data from the National Science Foundation and other sources, the study estimates that R&D investment would be reduced by about $10 billion per year over the period 2014 through 2025, or about 10 to 12 percent. Based upon the scholarly literature on the benefits of medical innovation, this reduction in the advance of medical technology would impose an expected loss of about 5 million life-years annually, with a conservative economic value of $500 billion, an amount substantially greater than the entire U.S. market for pharmaceuticals, devices, and equipment.
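As a rough back-of-the-envelope check (a sketch based only on the two figures quoted above, not on the study's own methodology), those numbers imply a valuation of

\[
\frac{\$500 \text{ billion}}{5 \text{ million life-years}} = \$100{,}000 \text{ per life-year},
\]

which is consistent with the study's characterization of the $500 billion figure as a conservative estimate.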
This adverse effect would be concentrated upon technological advances likely to serve the needs of smaller subgroups within the overall patient population, upon riskier investments among new treatments, and upon drugs and equipment expected to prove relatively less profitable.
In short: An expanded federal CER effort — a top-down process — is likely to prove very unwise as a matter of public policy. A renewed emphasis upon a bottom-up approach of experimentation by many millions of practitioners and patients would be a more fruitful vehicle for the acquisition of information about the comparative effectiveness of alternative clinical approaches.
Nothing contained in this blog is to be construed as necessarily reflecting the views of the Pacific Research Institute or as an attempt to thwart or aid the passage of any legislation.