Comparative-Effectiveness Research

Michael Scally MD

Giving Teeth to Comparative-Effectiveness Research - The Oregon Experience

Posted by NEJM - February 3rd, 2010
Somnath Saha, M.D., M.P.H., Darren D. Coffman, M.S., and Ariel K. Smits, M.D., M.P.H.

Experts believe that comparative-effectiveness research (CER) can substantially reduce future health care spending and improve the quality of care.1,2 Their analyses indicate that CER can control costs if its results are used to inform coverage, payment, and cost-sharing policies that provide incentives for appropriate and cost-effective care.1,2 But the proposed approach to CER in the United States would constrain these uses of the research, to avoid any implication that health care will be rationed. Though the word elicits fear and opposition, "rationing" is simply the equitable, or rational, distribution of resources; it involves delivering health care services according to clinical need and effectiveness, rather than wealth or geographic location.

Continue reading . . . Giving Teeth to Comparative-Effectiveness Research - The Oregon Experience | Health Care Reform Center
 
Comparative Effectiveness and Health Care Spending -- Implications for Reform

Weinstein MC, Skinner JA. Comparative Effectiveness and Health Care Spending -- Implications for Reform. N Engl J Med 2010;362(5):460-5.

Title VIII of the American Recovery and Reinvestment Act of 2009 authorizes the expenditure of $1.1 billion to conduct research comparing "clinical outcomes, effectiveness, and appropriateness of items, services, and procedures that are used to prevent, diagnose, or treat diseases, disorders, and other health conditions." Federal support of "comparative effectiveness" research has been viewed as a cornerstone in controlling runaway health care costs.

Although cost is not mentioned explicitly in the comparative effectiveness legislation, the American College of Physicians and others have called for cost-effectiveness analysis - assessment of the added improvement in health outcomes relative to cost - to be on the agenda for comparative effectiveness research.1,2 This approach has come under harsh criticism from some who view it as the first step toward health care rationing by the government, out of fear that cost cutting will mean the withdrawal of expensive treatments with small (but still positive) benefits. Some politicians have therefore tried to restrict any efforts to use comparative effectiveness to guide U.S. health care policy.3

Continue reading . . . NEJM -- Comparative Effectiveness and Health Care Spending -- Implications for Reform
 
Five Next Steps for a New National Program for Comparative-Effectiveness Research
Five Next Steps for a New National Program for Comparative-Effectiveness Research | Health Care Reform Center

Jordan M. VanLare, A.B., Patrick H. Conway, M.D., and Harold C. Sox, M.D.
February 17th, 2010

The American Recovery and Reinvestment Act appropriated $1.1 billion to fund comparative-effectiveness research (CER) — unprecedented generosity for a program for evaluating health care practices. The legislation established the Federal Coordinating Council for Comparative Effectiveness Research and charged it with advising the secretary of health and human services on the allocation of CER funds. It also mandated an Institute of Medicine (IOM) study to recommend initial national priorities for CER. Both the Federal Coordinating Council and the IOM reported to Congress on June 30, 2009.

Continue Reading . . .
 


New Trend Promises To Shift Disease Diagnosis, Treatment From Hospitals To Physicians, Patients.

The New York Times (5/21, BU4, Lohr; "Unboxed: High-Tech Solutions to High-Cost Medical Care") reported that in addition to the healthcare overhaul, "there is another broad transformation in healthcare under way, a powerful force for decentralized innovation" that is being "fueled in good part by technology." The "trend promises to shift a lot of the diagnosis, monitoring and treatment of disease from hospitals and specialized clinics, where treatment is expensive, to primary care physicians and patients themselves -- at far less cost." Indeed, the "promise, according to Dr. David M. Lawrence," formerly of Kaiser Permanente, is "an array of technology-enabled, consumer-based services that constitute a new form of primary healthcare." The article went on to explore the trend by looking "at a start-up in the field of sleep medicine."
 
The Wall Street Journal (6/3, Hobson) "Health Blog" reported that a study conducted in 2006 and 2007 found that many consumers do not support comparative effectiveness research. Notably, many participants did not understand what such research entails, and they favored more, not less, medical care. Based on the findings, the study's authors concluded that it might be difficult to convince the general public of the benefits of comparative effectiveness research.

Consumers Not Too Psyched About ‘Evidence-Based Health Care’
Consumers Not Too Psyched About ‘Evidence-Based Health Care’ - Health Blog - WSJ

A study published online today by Health Affairs does not bring good news for proponents of “evidence-based health care,” an umbrella term for comparative effectiveness research, shared decision making, cost transparency and the like. After getting consumer input via an online survey, focus groups and interviews, researchers found “many of these consumers’ beliefs, values, and knowledge to be at odds with what policy makers prescribe as evidence-based health care.” (The research was funded by the California HealthCare Foundation and conducted in 2006 and 2007.)

Right off the bat, study participants had no idea what terms such as “quality guidelines,” “quality standards,” and “medical evidence” meant — many thought the latter, for example, referred to “things like my test results and medical history.” Study participants also had difficulty understanding the notion that some providers might consistently opt for drugs or treatments not in line with what research data say is the best route. They were suspicious of the concept of medical guidelines, and were “more inclined to trust their own and their physicians’ judgments of quality.”

Participants agreed with the notion that the more medical care, the better, and most survey respondents didn’t like the idea of linking the effectiveness of a treatment or drug with a patient’s out-of-pocket costs. A third of survey respondents agreed with the notion that more effective treatments usually cost more than less effective ones.

The survey also found that consumers were disinclined to take notes during a doctor’s appointment or bring outside information to discuss with the doc. “They believed that determining what constituted necessary care was mainly their provider’s job,” researchers write.

While there’s a “small, but nontrivial” minority that accepts the precepts of evidence-based health care, the researchers write that it’s going to be a tough sell for most. “The beliefs underlying the themes that surfaced in both the qualitative research and the survey — more is better, newer is better, you get what you pay for, guidelines limit my doctor’s ability to provide me with the care I need and deserve — are deeply rooted and widespread,” they write.



Carman KL, Maurer M, Yegian JM, et al. Evidence That Consumers Are Skeptical About Evidence-Based Health Care. Health Aff 2010 (published online ahead of print): 10.1377/hlthaff.2009.0296.

We undertook focus groups, interviews, and an online survey with health care consumers as part of a recent project to assist purchasers in communicating more effectively about health care evidence and quality. Most of the consumers were ages 18-64; had health insurance through a current employer; and had taken part in making decisions about health insurance coverage for themselves, their spouse, or someone else. We found many of these consumers' beliefs, values, and knowledge to be at odds with what policy makers prescribe as evidence-based health care. Few consumers understood terms such as "medical evidence" or "quality guidelines." Most believed that more care meant higher-quality, better care. The gaps in knowledge and misconceptions point to serious challenges in engaging consumers in evidence-based decision making.
 


Health Policy Briefs - Comparative Effectiveness Research
Health Policy Briefs
http://www.healthaffairs.org/healthpolicybriefs/brief_pdfs/healthpolicybrief_27.pdf

10/05/2010

A broad effort is under way to understand what really works in health care, perhaps leading to better value for dollars spent.

What's the issue?

The U.S. government has jump-started an unprecedented effort to better understand what works and what doesn't in health care. The effort, called comparative effectiveness research, is designed to determine which treatments, diagnostic tests, public health strategies (such as broad-based cancer screening), and other health care services accomplish the most good for people in general or for different groups within the population.

Recent legislation, including the Affordable Care Act, has directed new funding toward this research. But this expansion of effort raises a number of questions: What methods should be used to conduct the research? Will physicians and other health care providers change what they do for patients based on comparative effectiveness research findings? How will patients and providers learn about the results? Will the research be conducted openly and soundly enough that patients and providers will trust the outcomes? Will private insurers and other payers use the research findings to make decisions on whether to cover treatments, and how much to pay for them?

This brief examines some of these key issues and areas of controversy.

What's the background?


With 2010 health spending estimated at $2.6 trillion, the United States spends more than any other country--and more per person--on health care. Yet it's widely agreed that much of the health care provided in the United States is of little value, and in some instances may even harm patients.

There is also little or no scientific evidence to support much of U.S. health care. In fact, more than half the treatments provided to patients lack clear evidence that they are effective at all, according to the Institute of Medicine, part of the National Academies. And in cases where there are two different treatments for the same condition--for example, surgery versus medication--there is only rarely adequate evidence about which one is more effective.

Life-Or-Death Concerns: For many patients and their health providers, this lack of understanding of what works best in health care can be a life-or-death issue. For insurers and government officials seeking to spend health care dollars as wisely as possible, knowing which approach works best could enable them to guide patients to optimal treatments, and help them make decisions about which treatments to cover and what to pay for them.

The Institute of Medicine has defined comparative effectiveness research as "the generation and synthesis of evidence that compares the benefits and harms of alternative methods to prevent, diagnose, treat, and monitor a clinical condition or improve the delivery of care." The purpose of the research is to help consumers, clinicians, purchasers, and policy makers make informed decisions about health care for individual patients and the population as a whole.

These studies, however, are not designed to look for the most cost-effective alternatives. Cost-effectiveness research, which has been debated in Congress for many years, is largely excluded from the comparative effectiveness research effort created through national health reform legislation enacted in March 2010. Exhibit 1 defines some of the basic terms used in health care research in general, and comparative effectiveness research in particular.

What's in the law?

Comparative effectiveness research has been carried out in the United States for years, mainly under the aegis of the National Institutes of Health (NIH), the Agency for Healthcare Research and Quality (AHRQ), and the Department of Veterans Affairs. But this type of research received a big boost in 2009 under the American Recovery and Reinvestment Act, or stimulus legislation.

Boost In Funding: That law authorized $1.1 billion more to be spent on the research and allocated the money to three agencies: NIH, AHRQ, and the Office of the Secretary of Health and Human Services. To oversee how that money was spent, the law also created a Federal Coordinating Council for Comparative Effectiveness Research, an advisory board mostly made up of clinicians. Exhibit 2 outlines how the three agencies have directed their comparative effectiveness research funds.

The stimulus law also directed the Institute of Medicine to recommend national priorities for comparative effectiveness research. After soliciting nominations through a Web-based questionnaire and receiving testimony from a wide range of interested parties, the IOM recommended 100 research priorities. These included determining the best test strategies for coronary heart disease, the best strategies to prevent older adults from falling, and the best ways to treat lower back pain.

Ongoing Support: The provisions incorporated into the stimulus law were just the start of a much larger comparative effectiveness research effort. The national health reform legislation, known as the Affordable Care Act, established a new, nongovernmental entity called the Patient-Centered Outcomes Research Institute to oversee and set guidelines for the research (Exhibit 3). The law also created a steady stream of research funding. Starting in 2013, Medicare and all private health insurance companies will pay a tax into a trust fund that will support the activities of the new institute. This funding is estimated to reach $500 million annually by 2015.

Under the law, a 21-member board of governors for the institute was picked by the acting comptroller general, head of the Government Accountability Office (the arm of Congress charged with evaluating and investigating the federal government). Although the institute's mandate has not been clearly defined, its main function appears to be formulating a portfolio of research projects, with a methodology committee involved in setting research standards.

Under the legislation, the institute will contract with NIH, AHRQ, and private sector organizations to oversee funding and research, suggesting that it will outsource everything from soliciting proposals to evaluating outcomes.

Side-Stepping Controversy: In the health reform legislation, Congress rejected using cost-effectiveness analyses to aid Medicare coverage and reimbursement decisions. In particular, it wanted to stay away from the metric called quality-adjusted life-years (QALYs)--which is used by England's and Wales' National Institute for Health and Clinical Excellence (NICE)--to define health outcomes as part of cost-effectiveness determinations.

Cost-effectiveness analysis has aided coverage and reimbursement decisions elsewhere in the world. NICE has adopted a cost-effectiveness threshold range of £20,000-£30,000 per QALY, or about US$33,000-$50,000. The agency doesn't accept or reject technologies on cost-effectiveness grounds only, although the calculus does play a key role in NICE's decisions.
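
To make the threshold concrete, here is a minimal worked example of the incremental cost-effectiveness ratio (ICER) to which such thresholds are applied; the £10,000 incremental cost and 0.4 incremental QALYs are hypothetical figures chosen for illustration, not NICE data.

\[
\mathrm{ICER} = \frac{C_{\text{new}} - C_{\text{standard}}}{E_{\text{new}} - E_{\text{standard}}}
= \frac{\pounds 10{,}000}{0.4\ \text{QALYs}}
= \pounds 25{,}000 \text{ per QALY}
\]

A hypothetical intervention with this ratio would fall inside NICE's £20,000-£30,000 range, where, as noted above, acceptance typically turns on additional considerations rather than on the ratio alone.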

During the debate over national health reform in the United States, the notion of weighing QALYs and costs as part of the calculus for deciding whether to cover treatments became a political minefield. Critics associated the metric with "rationing"--that is, with explicit decisions to withhold certain types of care from patients because they were too costly. As a result, the final language of the Affordable Care Act forbids the government from using QALYs and other cost-effectiveness estimates "as a threshold to determine coverage, reimbursement, or incentive programs" under Medicare (Exhibit 4).

The government is also forbidden from making decisions on "coverage, reimbursement, or incentive programs" under Medicare "in a manner that treats extending the life of an elderly, disabled, or terminally ill individual as of lower value than extending the life of an individual who is younger, nondisabled, or not terminally ill."

What are key concerns and issues?

There continues to be confusion over the topic of comparative effectiveness research as well as the creation of the Patient-Centered Outcomes Research Institute and its role in devising the future comparative effectiveness research agenda. Supporters of the research stress the positive outcomes that it could produce. Among them:

More Clarity On Appropriate Treatments: Advances in science often lead to new health care treatments, but don't necessarily provide information about which ones work best, and for which patients. Comparative effectiveness research aims to develop a better understanding of treatment outcomes that best fit an individual's needs and preferences. Clinicians and patients need to know not only how treatments work for the general population, but also which ones work best for specific types of patients, such as the elderly, racial and ethnic minorities, and those with more than one disease or condition. With this greater clarity, the odds are better for accelerating the use of beneficial innovations and delivering the right treatment to the right patient at the right time.

More Information On Neglected Diseases Or Populations: Among the priorities the IOM proposed for comparative effectiveness research was a focus on historically neglected issues, such as minority health care and mental illness. Supporters of comparative effectiveness research say it could help reduce disparities in health and health care and make the system fairer for those people and conditions that have often been left out.

Broad Input On The Research Focus: The coming comparative effectiveness research agenda will be set by the Patient-Centered Outcomes Research Institute, whose board includes a diverse range of private stakeholders in addition to government policy makers. The 21 board members include representatives of consumers and patients, hospitals, industry, nurses, payers, physicians, researchers, surgeons, and the leaders of NIH and AHRQ. The multi-stakeholder orientation of the board has attracted broad support.

More Value For The Money: Despite the language in the Affordable Care Act that restricts the use of cost-effectiveness analysis in Medicare's coverage decisions, backers of comparative effectiveness research say it could lead to making better use of the nation's health care dollars. If there's more clarity about which treatments work best--and for which types of patients--there's potential for shifting money to those interventions and away from less effective treatments.

Critics of comparative effectiveness research still have concerns about how research results may be used, including the following:

Mandating Treatment Decisions: As noted above, Congress has barred the federal government from using simple rules based on measurements such as quality-adjusted life-years to determine coverage, reimbursement, or incentive programs under Medicare. Some Republican lawmakers and other conservatives fear that the law could be changed or ignored--and that there might still come a time when such metrics will be used to tie the cost of care to the value of supporting a person's life.

Undermining Access To Care: There is evidence that the general public is concerned about how comparative effectiveness studies could be used to limit their health care choices. Two recent national public opinion surveys found broad support for using comparative effectiveness research results to provide additional information to doctors and patients, but less support for using the results to allocate government resources or mandate treatment decisions. Only about half the respondents to a poll conducted in May supported using comparative effectiveness research to determine whether Medicare and private insurance companies will cover new and existing medical treatments.

What's next?

The board of governors for the Patient-Centered Outcomes Research Institute now has important decisions to make as it helps to shape the comparative effectiveness research agenda. The board is likely to have substantial input into developing priorities for research, and will also be responsible for overseeing the distribution of information to patients and providers.

The board and the institute must also contemplate a number of technical questions, such as how to make sure that the research fairly compares different interventions. They will also have to ensure that long-neglected diseases or conditions, as well as health care issues affecting minorities and ethnic groups, are properly addressed through the research.

Thorny Issues Ahead: One of the thorniest questions remaining is the future use of cost-effectiveness research. Nothing in the law prohibits this type of research from being carried out; rather, the institute and the government cannot develop a dollars-per-quality-adjusted life-year or some other cost-effectiveness metric as a "threshold" to recommend for or against coverage of specific health care interventions.

But researchers carrying out federally funded comparative effectiveness studies can include a cost-effectiveness analysis, or information enabling others to perform those analyses. As yet, we don't know what patients, providers, and payers will do with that information--ignore it, or use it in some fashion to achieve better value for the health care dollars the nation spends.
Resources

Association of American Medical Colleges, "Summary of Patient- Centered Outcomes Research Provisions," March 2010.

Avorn, Jerry and Michael Fischer, "'Bench to Behavior': Translating Comparative Effectiveness Research into Improved Clinical Practice," Health Affairs, 29, no. 10 (2010): 1891-1900.

Benner, Joshua S., Marisa R. Morrison, Erin K. Karnes, S. Lawrence Kocot, and Mark B. McClellan, "An Evaluation of Recent Federal Spending on Comparative Effectiveness Research: Priorities, Gaps, and Next Steps," Health Affairs, 29, no. 10 (2010): 1768-76.

Garber, Alan M. and Harold C. Sox, "The Role of Costs in Comparative Effectiveness Research," Health Affairs, 29, no. 10 (2010): 1805-11.

Gerber, Alan S., Eric M. Patashnik, David Doherty, and Conor Dowling, "The Public Wants Information, Not Board Mandates, from Comparative Effectiveness Research," Health Affairs, 29, no. 10 (2010): 1872-81.

Institute of Medicine, Board on Health Care Services, "Initial National Priorities for Comparative Effectiveness Research," June 30, 2009.

Neumann, Peter J. and Dan Greenberg, "Is the United States Ready for QALYs?" Health Affairs, 28, no. 5 (2009): 1366-71.

Patel, Kavita, "Health Reform's Tortuous Route to the Patient-Centered Outcomes Research Institute," Health Affairs, 29, no. 10 (2010): 1777-82.

Robinson, James C., "Comparative Effectiveness Research: From Clinical Information to Economic Incentives," Health Affairs, 29, no. 10 (2010): 1788-95.

Wilensky, Gail R., "The Policies and Politics of Creating a Comparative Clinical Effectiveness Research Center," Health Affairs, 28, no. 4 (2009): w719-29 (published online).
 
New Health Affairs Issue: Comparative Effectiveness Research
New Health Affairs Issue: Comparative Effectiveness Research – Health Affairs Blog

October 5th, 2010
by Chris Fleming

A national push on comparative effectiveness research is under way as a result of federal stimulus and health reform legislation. The research, which is aimed at answering critical questions about what works—and what doesn’t—in health care, is the subject of the October issue of Health Affairs. The issue explores the myriad challenges inherent in making the most of the research, and using it to better inform the health care decisions of the future. http://content.healthaffairs.org/content/vol29/issue10/

Comparative effectiveness research has been described by the Institute of Medicine as assisting “consumers, clinicians, purchasers, and policy makers to make informed decisions that will improve health care at both the individual and population levels.” Key to the new national comparative effectiveness research initiative will be the new Patient-Centered Outcomes Research Institute, established under health reform as a nongovernmental entity that will set priorities for comparative effectiveness research, develop and implement a research agenda, and disseminate research findings to health care decision makers.

The nation as a whole now faces a number of challenges, including how it will make use of comparative effectiveness research to improve the value of health care. A key topic examined by several authors in the October Health Affairs is how the research can be used to improve the cost-effectiveness of care. For example, Steven Pearson, the president and founder of the Institute for Clinical and Economic Review (ICER) at Harvard Medical School, and Peter B. Bach, a pulmonary and critical care physician and member of the Health Outcomes Research Group at Memorial Sloan-Kettering Cancer Center, propose an innovative way for Medicare to draw on the research to set payment for services that provide comparable patient outcomes.

The October issue of Health Affairs is funded by the National Pharmaceutical Council, WellPoint Foundation, and Association of American Medical Colleges.

Setting priorities. Other studies in this edition of Health Affairs examine additional research challenges. An analysis by Joshua S. Benner and colleagues at the Brookings Institution offers early insights into how the new comparative effectiveness strategy is developing and the gaps that need to be addressed. Nearly 90 percent of the $1.1 billion allocated for comparative effectiveness research under the stimulus legislation will be spent on evidence development and synthesis and on improving research capacity, the study authors found. They recommend stronger emphasis on experimental research, evaluating broad health system-level reforms, identifying subgroups of patients most likely to benefit from given interventions, addressing the needs of understudied groups, and developing effective strategies for disseminating research results.

Designing the research. Engaging patients, doctors, and other stakeholders in the design of comparative effectiveness studies would help ensure the relevance of this research to health care decision makers. Ari Hoffman, of the University of California, San Francisco, and colleagues at the Center for Medical Technology Policy in Baltimore detail five principles for effective engagement of a broad coalition of research participants: (1) ensure balance among participating stakeholders; (2) get participants to “buy in” to the enterprise and understand their roles; (3) provide neutral and expert facilitators for research discussions; (4) establish connections among the participants; and (5) keep participants engaged throughout the research process.

Garnering public support. Americans have mixed feelings about comparative effectiveness research. Two studies from national opinion surveys by Alan S. Gerber of Yale University, Eric M. Patashnik of the University of Virginia, and colleagues find that people see the value of information generated by comparative effectiveness research, but fear that it may be used to ration care, or limit doctors’ ability to tailor their care. Although people want information to help them make health care decisions, they do not want their treatment options restricted.

Disseminating research findings. Historically, it takes a long time for new research to make its way into everyday clinical practice. Jerry Avorn, of Harvard Medical School, and Michael Fischer of Brigham and Women’s Hospital, describe a variety of ways to speed “bench to behavior” translation of new comparative effectiveness research studies, including: plan early for dissemination; develop new models of continuing medical education based on best available evidence instead of marketing; use academic detailing, which allows for tailored communication through education outreach; embed new research findings into health technology applications, like computer-assisted prompts for doctors and computerized physician order entry; and require pharmaceutical and device manufacturers to include a balanced summary of research findings in their promotional materials.

More could—and should—be done to maximize the value from this new research enterprise, according to Lynn M. Etheredge, of Chevy Chase, Md., a consultant to the Rapid Learning Project at George Washington University. Etheredge recommends a presidential order establishing a national database for effectiveness research studies as part of a strategy to instill a rapid-learning culture across the health care system. Ultimately, he observes, the system must be able to learn the best use of new technologies as quickly as it produces them. Building a high-performance infrastructure for comparative effectiveness research will help bring this about.

Ann C. Bonham and Mildred Z. Solomon, of the Association of American Medical Colleges, describe how academic medicine can also play a strong role in moving comparative effectiveness research into practice.

Selecting research methods and tools. To maximize the value of effectiveness research, the new Patient-Centered Outcomes Research Institute should take a balanced, flexible approach to the types of studies it sponsors, writes Louis P. Garrison Jr., of the University of Washington, Seattle, and colleagues. The authors note that findings from the Institute will be used by a range of decision makers, including government regulators, policy makers, payers, providers, and patients, who will have different information needs and evidence standards. Overly strict, one-size-fits-all research standards could impede the real-world use of effectiveness research by a full range of stakeholders.

Two papers support the role of observational evidence in comparative effectiveness research, in addition to clinical trials, long considered the “gold standard” for research. Unlike controlled trials, observational research consists of retrospective and prospective studies based on treatment choices made by patients and their providers, not by assignment according to a research protocol. These “real-world” data sets can be enormously useful to understanding treatment benefits and harms, according to Nancy Dreyer, of Outcome Sciences in Cambridge, Mass., and co-authors, who write that, in order to guide good decision making, effectiveness research should encompass a range of methods. Rachael L. Fleurence, of United Biosource Corp., Bethesda, Md., and colleagues agree, noting that observational studies offer quicker results and the opportunity to investigate large numbers of interventions and outcomes among diverse populations, often at a lower cost than clinical trials.

Addressing differences among population groups. Three papers—by Lisa A. Simpson, of the Cincinnati Children’s Hospital Medical Center, and colleagues; David L. Shern and colleagues at Mental Health America; and a Web First article by C. Daniel Mullins, of the University of Maryland—discuss the potential for comparative effectiveness research to substantially improve health and health care among children, minorities, and those with mental illness. Historically, these groups have been underrepresented in many medical research studies or underserved by the health care system.
 
Dr. Scally, I think you are a genius when it comes to the HPTA axis, but all this comparative-effectiveness research is a kabuki sideshow. The real details of ObamaCare are that it will bend the cost curve waaaayyy up (it already is, and it has required some companies to receive waivers in a vain attempt to control the PR disaster that is hitting OCare).

Funny you should mention Oregon. Physicians for Reform has a very interesting story about Oregon-style healthcare. PFR is for real market-based reform and knows socialized medicine is another way to say rationing and eugenics. I can give you plenty of examples of rationing in Britain and Canada, and examples of eugenics practices in Britain (I am still researching Canada). Here is the story of a patient in Oregon that should serve as a lesson. Given that the NHS and NICE in Britain had to be sued to allow Herceptin treatment of breast cancer, this goes to show that rationing is a part of any socialized program. And if rationing leads to death, or, as in this story, to an outright offer of physician-assisted suicide, then what you have is eugenics plain and simple. If people die because treatment is denied (and Herceptin was already known to be highly effective in Britain, so the argument that it had very little clinical benefit cannot be made), then you are killing those with diseases. This is eugenics.

The current recess-appointed head who handles Medicare and Medicaid just loves the NHS. That should perk up your ears if you are a doctor who believes in the principle "first, do no harm". If you want to know just how unsavory a character this SOB is, see here: http://www.cnsnews.com/news/article/health-care-groups-congress-de-fund-medi. No wonder he had to be snuck in as a recess appointment. Mr. Rationing Lover himself is now making decisions for Medicare and Medicaid. That should scare the hell out of anyone on these programs. And Obama wants to ultimately do this for the rest of us as well. Ain't gonna happen.

What This Means For You - Physicians for Healthcare Reform

The powerful story of Barbara Wagner demonstrates why this discussion is of utmost importance. When Barbara’s lung cancer reappeared during the spring of 2008, her oncologist recommended aggressive treatment with Tarceva, a new chemotherapy. However, Oregon’s state-run health plan denied the potentially life-altering drug because they did not feel it was "cost-effective." Instead, the State plan offered to pay for either hospice care or physician-assisted suicide.

In stunned disbelief you may ask, "How can this be? This happens in Europe. I’ve heard stories of Britain’s National Health Service delaying intervention until the patient dies or reports of physician-assisted suicide in the Netherlands. But in America?"

The answer is simple. Oregon state officials controlled the process of healthcare decision-making—not Barbara and her physician. Chemotherapy would cost the state $4,000 every month she remained alive; the drugs for physician-assisted suicide held a one-time expense of less than $100. Barbara’s treatment plan boiled down to accounting. To cover chemotherapy, state policy demanded a five percent patient survival rate at five years. As a new drug, Tarceva did not meet this dispassionate criterion. To Oregon, Barbara was no longer a patient; she had become a "negative economic unit."

In 1994 Barbara’s state established the Oregon Health Plan to give its working poor access to basic healthcare while limiting costs by "prioritizing care." In 1997 Oregon legalized physician-assisted suicide to offer "death with dignity" to patients who chose to die without further medical treatment. In the end, the State secured the power to ration healthcare in order to control its financial risk, even if that meant replacing a patient’s chance to live with the choice of how to die.

When queried about withholding Barbara’s treatment, Dr. Walter Shaffer, a spokesman for Oregon’s Division of Medical Assistance Programs, explained the policy this way, "We can't cover everything for everyone. Taxpayer dollars are limited for publicly funded programs. We try to come up with policies that provide the most good for the most people."

Dr. Som Saha, chairman of the commission that sets policy for the Oregon Health Plan, echoed Shaffer, "If we invest thousands and thousands of dollars in one person's days to weeks, we are taking away those dollars from someone [else]."

Twice Barbara appealed the ruling. Twice Oregon denied her treatment.

Government compassion sounds so noble when first introduced. In fact, this well-intentioned motive fueled the creation of the State-sponsored health plan that now denied Barbara’s treatment. As "we the people" become more and more reliant on the government, inch by precious inch, liberty slips away. Citizens become powerless in dependency. Seduced by sweet words of compassion, the welfare of the State silently usurps the wellbeing of the individual citizen. Secure in the belief that government will care for them, many Americans slumber in complacency until one day, "we the people" awake to find liberty lost.


ObamaCare must die. If you let the cold government make decisions your physician should be making, you can bet that, like any other power they have, it will be abused to your detriment.
 
Iglehart JK. The Political Fight Over Comparative Effectiveness Research. Health Aff 2010;29(10):1757-60.

The creation of a public-private institute to direct new comparative effectiveness research represents a challenging new chapter in America's on-again, off-again support for determining what works in health care.
 


Dentzer S. Comparative Effectiveness: Coherent Health Care At Last? Health Aff 2010;29(10):1756.

Someday historians may gather to discuss cultural and economic trends in twenty-first-century America. They will describe an anomalous reality: a country living in what purported to be a scientific age, save for such tendencies as almost totemic devotion to much health care of little or no proven value.

Let us hope these historians will be able to cite a turning point when this cult lost its hold on the nation. Perhaps one will unearth a photo of a cornerstone being laid for a new building: The Patient-Centered Outcomes Research Institute, Founded MMX.

As detailed in this thematic issue of Health Affairs, the institute was created under the Affordable Care Act to coordinate a major new national push on comparative effectiveness research. The strategy flows from the novel concept that before we put patients at huge risk or incur new health care spending, we ought to have a reasonably good idea of how well the interventions work—especially compared to differing treatments for the same condition, or (sometimes) for different subgroups of patients.

Fierce Debate

Federal funding in the amount of $1.1 billion was allocated for the research under the 2009 stimulus law (an article by Joshua Benner and colleagues details how the money was spent). That spending paved the way for a fierce debate over whether any additional backing for the research should be incorporated into national health reform legislation enacted this year.

As John Iglehart describes in this month’s Entry Point, Democratic leaders largely embraced the idea; many Republicans had previously been for it before they were against it. Meanwhile, fear that the research would lead to government rationing of care fueled the "death panels" fury of summer 2009. More legitimate worries arose from those who feared that the research would somehow defeat efforts to tailor therapies to individuals’ specific characteristics or genetic makeup.

Such concerns were eventually allayed, and the Patient-Centered Outcomes Research Institute was born as a paradoxically nongovernmental institute with a government-appointed board of directors (names were announced September 23, 2010). Dollars to carry out the research were to come primarily from a tax on health insurers.

Now the "opportunities" side of the research ledger is bulging with options. As Alan Garber and Harold Sox point out, the legislation allows the effectiveness of not just individual treatments, but even entire programs to improve public health, to be compared. As the Institute of Medicine heralded, there is unprecedented opportunity "to assist consumers, clinicians, purchasers and policy makers to make informed decisions that will improve health care at both the individual and population levels." In particular, care could be improved for minorities and other groups historically left out of much medical research, or for those with mental illnesses, as David Shern and colleagues from Mental Health America contend.

The "challenges" side of the ledger seems equally packed. Papers in this issue explore a number of them, including tactics and methodologies. Dave Chokshi et al. discuss such lessons as selecting appropriate "comparators," lest the deck be stacked when one intervention is compared inappropriately to another. Once completed,the research must be disseminated in order to change health care practice. Jeffrey Lerner et al. thus propose a national patient library of the research for use by clinicians and patients.

The ‘R’ Word

Perhaps above all is the challenge posed by the health reform law itself, which imposed tight restrictions on what could be done with the research to avoid any appearance that it would lead to government rationing. Specifically, Medicare was barred from using the research to establish the cost-effectiveness of interventions or from drawing on such analyses in deciding whether and how much to pay for a given intervention.

However, solutions are also put forward, such as Garber and Sox’s proposal that "private parties" could perform cost-effectiveness analysis based in part on information published by the institute. We can only hope that opponents of this idea won’t now be inspired to pass new legislation blocking off even this escape hatch to sanity.

We sincerely thank the organizations whose sponsorship and support made this issue possible: the National Pharmaceutical Council, WellPoint Foundation, and Association of American Medical Colleges. To state the obvious, these organizations have an interest in advancing comparative effectiveness research, as do all Americans. As is customary, the sponsors had no role in the selection or editing of articles, and all content was peer-reviewed.
 
The Pragmatist's Guide To Comparative Effectiveness Research

All developed countries have been struggling with a trend toward health care absorbing an ever-larger fraction of government and private budgets. One potential solution is to rely more heavily on studies of the costs and effectiveness of new technologies in an effort to ensure that new spending is justified by a commensurate gain in consumer benefits. For most nonhealth commodities, markets function sufficiently well to perform this function unassisted. But in a market such as health care, effectiveness studies can (in theory) shed light on what patients would have demanded in the absence of moral hazard and adverse selection.

As one example, an Associated Press article described patient reactions to the price of a $93,000 drug (Provenge) that extends life for incurable prostate cancer by an average of four months. One respondent, Bob Svensson, 80, a former corporate finance officer whose insurance was paying for the treatment, declared: “‘I would not spend that money,’ because the benefit doesn’t seem worth it . . .” Perhaps reassuringly, this particular treatment would fail most cost effectiveness guidelines.
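
As a rough back-of-the-envelope check on that last point (an illustrative calculation, not one from the article), treating the average four-month survival gain as one third of a life-year and ignoring quality adjustment and any other costs gives

\[
\frac{\$93{,}000}{4/12\ \text{life-years}} \approx \$279{,}000 \text{ per life-year gained,}
\]

far above commonly cited benchmarks on the order of \$50,000 to \$100,000 per QALY.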


Chandra A, Jena AB, Skinner JS. The pragmatist's guide to comparative effectiveness research. J Econ Perspect 2011;25(2):27-46.

Following an acrimonious health care reform debate involving charges of "death panels," in 2010, Congress explicitly forbade the use of cost-effectiveness analysis in government programs of the Patient Protection and Affordable Care Act. In this context, comparative effectiveness research emerged as an alternative strategy to understand better what works in health care.

Put simply, comparative effectiveness research compares the efficacy of two or more diagnostic tests, treatments, or health care delivery methods without any explicit consideration of costs. To economists, the omission of costs from an assessment might seem nonsensical, but we argue that comparative effectiveness research still holds promise. First, it sidesteps one problem facing cost-effectiveness analysis--the widespread political resistance to the idea of using prices in health care. Second, there is little or no evidence on comparative effectiveness for a vast array of treatments: for example, we don't know whether proton-beam therapy, a very expensive treatment for prostate cancer (which requires building a cyclotron and a facility the size of a football field) offers any advantage over conventional approaches.

Most drug studies compare new drugs to placebos, rather than "head-to-head" with other drugs on the market, leaving a vacuum as to which drug works best. Finally, comparative effectiveness research can prove a useful first step even in the absence of cost information if it provides key estimates of treatment effects. After all, such effects are typically expensive to determine and require years or even decades of data. Costs are much easier to measure, and can be appended at a later date as financial Armageddon draws closer.
 
Joe Selby. Nat Rev Drug Discov 2011;10(9):652. Joe Selby : Article : Nature Reviews Drug Discovery

As part of the Affordable Care Act of 2010, the US Congress created the Patient-Centered Outcomes Research Institute (PCORI). Just over a year since the creation of the comparative effectiveness research (CER) organization, the PCORI has now appointed its first Executive Director, Joe Selby. A physician who formerly directed research at Kaiser Permanente, northern California, Selby will now supervise the formation of the nascent institute's plan of action. And by 2014 he will be overseeing an expected annual research budget of US$500 million. Speaking with Asher Mullard, Selby explained the case and agenda for the PCORI.
 