Not infrequently, I read articles or get emails that raise the question of whether measurement of patient experience might have unintended, even perverse consequences.
Could too much attention to patients’ pain lead to overuse of pain medications? Could giving patients what they want lead to overuse of antibiotics and other forms of wasteful spending when providers are already being pressured to become more efficient? Could worrying about what patients think distract physicians from paying attention to the diseases that are making them ill?
Here is what I have learned from a few decades as a doctor, and slightly less time trying to measure and improve health care at the hospital where I work: no individual quality measure is perfect. In theory, any measure in isolation could, if taken to an extreme, lead to undesirable consequences. For example, measurement of surgical complication rates could discourage surgeons from operating on sicker patients, and measurement of quality of diabetes care could distract physicians from taking good care of other problems. But the fact is, common sense has a way of prevailing: physicians usually do the right thing, and the net effect of measurement is better patient care.
These principles are true for all quality measures in healthcare, including patient experience. I think what we call “patient experience” constitutes important yet imperfect measures of issues that really do matter to patients, and thus should matter to the people who take care of them. Like all measures of performance in health care, however, no individual metric should stand on its own as the sole measure of quality. Taking care of patients is just too complex.
In medicine, you are often dealing with values that are in conflict, and there is no "right" or "wrong" answer as to how to weigh them. But there is still something to be learned from knowing whether you are an outlier in how you make decisions.
I am reminded of concern that arose a few years ago at Partners Healthcare when one of our hospitals had a higher-than-expected mortality from coronary angioplasty. When experts delved into the cases, they found that the cause was not technical competence, but that some cardiologists were being very aggressive, attempting angioplasty on patients who were essentially near death.
An ill-considered reaction to these data would have been to tell the cardiologists not to do angioplasty on very sick patients—after all, very sick patients are the ones who derive the most benefit from the procedure. Instead, the cardiologists were asked to bounce cases off a colleague before proceeding, so that they could get at least one other person’s perspective on whether putting the patient and the family through the angioplasty made sense. That review process was followed immediately by a decline in angioplasty mortality to the normal range.
In the same way, I think patient experience data are very important, but not the sole source of truth when it comes to determining the right thing to do. Some of the articles that express skepticism about patient experience data cite the New England Journal of Medicine paper that showed that oncologists who told their patients the truth about the incurable prognosis of their metastatic cancers were rated as poorer communicators than physicians who gave softer but less accurate messages. That paper, by the late Dr. Jane Weeks, should not be taken as support for being dishonest with patients. Jane was a good friend of mine, and what she was demonstrating in this paper was the complexity of taking care of patients.
There are actually lots of data showing that better patient experience performance is associated with better clinical outcomes. But I don’t think we are dealing with a scientific issue here; it’s more about how the data are being used. And much of the push-back that is reflected in the skeptical articles and emails is caused by the irritation of having financial incentives tied to patient experience data.
Now the reality is that there is no perfect way for money to change hands in medicine. Every incentive system can have perverse effects if carried to an extreme—including fee-for-service. So of course tying money to patient experience can lead to unintended consequences in some settings. And physicians (who have plenty of reasons to feel cranky these days) are quick to point them out. It’s not hard to make any performance incentive look potentially perverse, if one wants to.
My take is that the real goal is to have physicians look at their patient experience data, care about them, and try to improve. The financial incentives are a means to that end, and I doubt that any institution's incentives for improving patient experience are nearly as large as the rewards for generating more RVUs. If non-financial incentives such as internal or external transparency can accomplish the goal with less worry about financial "penalties" for doing the right thing, that would be great. The University of Utah is doing exactly this, putting all of its Press Ganey comments online for every doctor.
But if organizations put some money on the line for patient experience, and those incentives cause physicians to receive less money because they have delivered what they consider better care, I actually think physicians will almost always do the right thing. After all, part of the definition of professionalism is a willingness to make personal sacrifices in the interests of patients. By putting the focus on patients and what they endure as they receive our care, patient experience measurement is actually renewing professionalism in medicine.