Measuring patient satisfaction and quality of care has become a much-discussed topic lately. Under recent changes in federal law, Medicare reimbursements paid to hospitals are now tied to patient satisfaction scores. This gives hospitals even more incentive to improve patient satisfaction and experience, which, in theory, is a good thing. Doctors (and hospitals) should be subject to feedback and held accountable for their work. There are, however, problems with patient satisfaction surveys.
Dr. Christopher Johnson writes in a recent article that patient satisfaction surveys, as currently used, are “riddled with problems.” He goes on to say, “they [surveys] don’t measure what they are suppose[d] to measure and they can easily drive physician behavior the wrong way.” The survey tools Dr. Johnson refers to are made and administered by Press Ganey, a large corporation that provides hospitals and health systems with full-service patient-experience solutions: Press Ganey writes and distributes the surveys and collects the data. Hospitals using Press Ganey’s services essentially outsource all of their patient satisfaction measurement. Dr. Johnson writes, “I’ve read the Press Ganey forms and the questions they ask are all very reasonable.” This quote raises a red flag that Dr. Johnson perhaps misses: neither he nor anyone at his organization is writing the questions on their patients’ satisfaction surveys. Those who stand to benefit from collecting their own patients’ data should be writing their own surveys. The people closest to the patients, those working in the hospital, are better positioned to understand the context and circumstances surrounding the measurement of patient satisfaction. Instead of a Press Ganey employee 800 miles away writing the survey questions and processing the data, the doctors, nurses, and administrators should be more involved in the process of measuring quality of care. There is no such thing as a one-size-fits-all survey.
Dr. Johnson also points to the problem of who actually returns the surveys: “[a]lthough the forms are sent out to a random sample of patients, a very non-random distribution of them are returned. Perhaps only the patients who are happy, or those who are unhappy, send them back.” This may well be true, but it is a fixable problem. Again, each hospital knows (or should know) the best way to collect data from its patients. For most hospitals, the best method is in-person paper surveys. Surveying patients while they are still on site raises response rates and yields a sample far less self-selected than the one Dr. Johnson describes. A patient’s memory of her experience is also fresher when she is surveyed on site, so the data are more accurate.
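To make the response-bias point concrete, here is a minimal, purely illustrative sketch in Python. The response probabilities are invented for the example (they are not Press Ganey figures or data from Dr. Johnson’s article); the point is only to show how selective non-response can pull a measured satisfaction score away from the true average, and why a high, roughly uniform on-site response rate largely avoids that distortion.

```python
import random

# Illustrative simulation (assumed numbers, not real survey data):
# 1,000 discharged patients with "true" satisfaction scored 1-5.
random.seed(42)
true_scores = [random.choice([1, 2, 3, 4, 5]) for _ in range(1000)]

def mailed_back(score):
    # Hypothetical mail-back response rates: the unhappy (1-2) and the
    # delighted (5) respond far more often than those in the middle.
    return random.random() < {1: 0.35, 2: 0.30, 3: 0.10, 4: 0.12, 5: 0.30}[score]

def surveyed_on_site(score):
    # Hypothetical on-site paper survey: high, roughly uniform response.
    return random.random() < 0.85

population_mean = sum(true_scores) / len(true_scores)
mail_sample = [s for s in true_scores if mailed_back(s)]
onsite_sample = [s for s in true_scores if surveyed_on_site(s)]

print(f"true mean:        {population_mean:.2f}")
print(f"mail-back sample: {sum(mail_sample) / len(mail_sample):.2f}  (n={len(mail_sample)})")
print(f"on-site sample:   {sum(onsite_sample) / len(onsite_sample):.2f}  (n={len(onsite_sample)})")
```

Under these assumed response rates, the mail-back average drifts away from the true average while the larger on-site sample stays close to it, which is the gap an in-person survey is meant to close.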
Dr. Johnson is correct that patient satisfaction surveys “as currently used are riddled with problems,” but those problems have solutions. Understanding the quality of care provided by a doctor, a nurse, or a hospital is too important to abandon because of those obstacles. The healthcare world has been talking about measuring patient satisfaction for decades and still has a long way to go. Getting rid of patient satisfaction surveys is not the solution. The solution is to acknowledge constructive criticism of the process and to foster open debate about how to improve patient satisfaction.