Customized Plans Advance Patient Experience for Penn Medicine Medical Practices

Added on Feb 6, 2020

Customized Improvement Plans Advance the Patient Experience for Lancaster General Health Physicians
By Audrey Doyle

When the rate of patient experience improvement across Lancaster General Health Physicians’ medical practices began to slow in mid-2017, the network’s Experience team developed a customized plan for each site to restore its previously steady improvement trajectory. Within a few months of implementing their respective plans, the medical practices’ top-box scores began to rise on several patient experience measures, including staff’s ability to collaborate with one another, show sensitivity to patients’ needs, and coordinate patient care, as well as the helpfulness of clerks and receptionists and the respect they showed patients. In addition, the network’s top-box performance on the Likelihood to Recommend Practice metric, a key marker of patient loyalty, has since risen from the 54th percentile to the 75th percentile.

Focusing on the Behavior Instead of the Outcome

Part of Penn Medicine, Lancaster General Health Physicians (LGHP) is a network of more than 350 primary care and specialty physicians and more than 200 advanced practice providers who deliver health care services at 66 medical practice locations throughout Lancaster, Pennsylvania. In early 2015, LGHP formed the Experience team to support the practices’ efforts to deliver a positive experience to patients, employees, and the Lancaster community. Thanks to various initiatives, programs, and customer service trainings developed and implemented by the eight-person team, the patient experience scores for LGHP’s medical practices increased steadily for more than two years, as did LGHP’s national percentile rank for Likelihood to Recommend Practice.

When the medical practices’ top-box scores stopped rising as quickly as they had been, the team introduced additional improvement tactics to get the group back on track, but none had the impact they were hoping for. This prompted the team to conduct a thorough review of the site leaders’ understanding of and approach to patient experience improvement. According to Kyle Garrett, the team’s consumer insights project manager, they discovered the site leaders were treating Likelihood to Recommend itself as the metric to improve, rather than concentrating their efforts on the specific experience drivers that correlate with patients’ likelihood to recommend their particular practice.

“Focusing on Likelihood to Recommend wasn’t working, because it’s very hard for front-line staff to contextualize what they need to do to become more ‘recommendable,’ so to speak. But focusing on a behavior that drives Likelihood to Recommend, like sensitivity to patients’ needs, is something they can relate to, and improving that behavior is a meaningful and achievable target for them,” Garrett said. “We needed to help our site leaders switch their focus from the outcome to the behaviors driving the outcome, let them pick the behavior to focus on, and give them tools and other supports to help them improve that behavior.”

A critical first step in that endeavor was to assess the culture at each medical practice. This involved analyzing employee engagement data, as well as visiting each site to observe the interactions of senior leaders and middle managers and to interview front-line staff.

One main goal of the site visits was to build credibility and foster a positive relationship with the leaders and their staffs. The other main goal was to learn what stressors the front-line staff were facing. This was crucial for gaining context as to why the sites were underperforming, said Justin Cook, who, as one of four advisors on the team, conducted many of the staff interviews. “We needed to know what they felt they needed so that we could develop supports that were specific to them, as opposed to hoping that whatever we developed would be something they’d need or want,” he said.

Creating a Customized Menu of Improvement Options

Once the team had a clear understanding of each site’s culture, they conducted a regression analysis of patient experience survey data to identify the top drivers of Likelihood to Recommend for each site. Just prior to the start of fiscal year 2018, they gave each site leader the five survey questions that most closely correlated with patients’ likelihood to recommend their practice and instructed them to have their staff members choose one of those questions to focus on as an organizational goal for the year.
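The driver-identification step described above can be sketched in code. The following is a minimal illustration, assuming top-box responses are coded as 1/0 and using a plain Pearson correlation to rank each question against Likelihood to Recommend; the question names, data, and method details are hypothetical and not LGHP’s or Press Ganey’s actual analysis.

```python
# Hypothetical key-driver sketch: correlate each survey question's
# top-box responses with the Likelihood to Recommend top-box responses
# for one site, then keep the strongest drivers. All names and data
# below are illustrative assumptions.
from statistics import mean

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    varx = sum((x - mx) ** 2 for x in xs)
    vary = sum((y - my) ** 2 for y in ys)
    return cov / (varx * vary) ** 0.5

def top_drivers(responses, outcome="Likelihood to Recommend", k=5):
    """responses: {question: [1 if top-box else 0, ...]} for one site."""
    ltr = responses[outcome]
    scored = [(q, pearson(vals, ltr))
              for q, vals in responses.items() if q != outcome]
    scored.sort(key=lambda qr: qr[1], reverse=True)
    return [q for q, _ in scored[:k]]

# Toy example: three questions, eight survey returns from one site.
site = {
    "Likelihood to Recommend": [1, 1, 0, 1, 0, 1, 0, 1],
    "Staff worked together":   [1, 1, 0, 1, 0, 1, 1, 1],
    "Sensitivity to needs":    [1, 0, 0, 1, 0, 1, 0, 1],
    "Wait time":               [0, 1, 1, 0, 1, 0, 1, 0],
}
print(top_drivers(site, k=2))
# → ['Sensitivity to needs', 'Staff worked together']
```

In practice a key-driver analysis would use far larger samples and a multivariate model rather than pairwise correlations, but the shape of the output is the same: a short, site-specific list of questions for leaders to put in front of staff.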

“They could choose whichever one they wanted, but no matter what they chose, that question—not their top-box score for Likelihood to Recommend Practice—is what would go on their huddle board as the thing to focus on improving,” said Cook. “Our goal was to have the front-line staff feel empowered in choosing their own target—one that made sense to them and was incredibly important to patients. Their goal was to work on changing their behaviors so that patients’ care experiences would improve.”

According to Garrett, creating a customized menu of improvement options from which to choose was key to the initiative’s success, as it ensured that each site would be focusing on meaningful improvement and not on “trying to fix something that wasn’t broken,” which can easily happen when conclusions are drawn from assumptions rather than data.

For example, Garrett explained, the team learned through their earlier discussions with leaders and staff that some of them believed the primary reason they weren’t scoring well for Likelihood to Recommend was because their patients were upset that they weren’t being prescribed the pain medication they felt they needed. When the team analyzed the data for those practices, however, they discovered that other drivers, such as long wait times or perceptions of clinician inattentiveness, were the basis for the negative comments and low scores, with pain medication representing only a fraction of patient complaints. Having data that showed what the negative comments and scores were truly based on corrected these misinterpretations, “which meant we could give these sites experience drivers to choose from that would improve their patients’ experiences, and as a consequence, their experience scores,” said Cook.

To help each site better understand their experience scores, the team educated leaders on what their numbers meant and how to communicate the information to staff members. They also educated leaders and staff on the relationship between employee experience and patient experience and how each influences a patient’s likelihood to recommend a practice.

“Before, we did have classroom-style learning sessions, but we didn’t offer them to everyone and we found that people often weren’t retaining the information,” Garrett said. “Now we’re sitting down with folks and teaching them how to use these resources. Instead of saying, ‘here are your numbers,’ we’re saying, ‘here’s what your numbers mean and here’s how to communicate this information to your team.’”

Developing Supports Catered to Each Site

According to Cook, the Experience team spent a little more than a year assessing site culture, building credibility, fostering positive relationships, identifying experience drivers for each site, and conducting data literacy training. Once this was done, they began developing customized supports.

One support they developed is customized service standards training. “For example, if a site chooses to improve their ‘Staff worked together’ driver but they’re not sure how to do it, they’ll reach out to us and we’ll provide onsite service standards training with a focus on that driver,” explained Garrett. The training would be designed to help the site leader and staff understand the importance of handoffs and how to effectively model good communication in front of patients and would “bring everyone together in terms of improving that score,” he said.

According to Garrett, the customized training has been popular. “We’ve done well over 40 of these trainings. People request it because we make sure it’s tailored to what they’re experiencing at their practice. They’re not being ‘talked to.’ They’re having a conversation about something they care about and have asked to learn more about.”

Another support is a monthly scorecard that each site leader shares with their staff. “The scorecards are useful in determining which driver represents the best opportunity for improvement when considering not only rank, but top-box score,” said Cook. “Because we generate scorecards with both practice scores and provider-specific scores for each survey question, the scorecards are also helpful, when n-size permits, in identifying whether specific providers within a practice are pulling ahead of or lagging behind the practice score. Those providers may represent opportunities to spread best practices or offer supports as needed.”
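The provider-versus-practice comparison Cook describes could look something like the sketch below: compute each provider’s top-box rate for a question and flag providers ahead of or behind the practice rate, but only when enough surveys have been returned for the comparison to be meaningful. The minimum n-size, the flagging margin, and the data are illustrative assumptions, not LGHP’s or Press Ganey’s actual rules.

```python
# Hypothetical scorecard sketch: flag providers whose top-box rate on a
# question pulls ahead of or lags behind the practice rate, skipping
# providers with too few returns. Thresholds are assumed for illustration.
MIN_N = 30  # assumed minimum survey returns before a provider is flagged

def flag_providers(provider_results, practice_rate, margin=0.05):
    """provider_results: {provider: (top_box_count, n_returns)}."""
    flags = {}
    for provider, (top_box, n) in provider_results.items():
        if n < MIN_N:
            flags[provider] = "insufficient n"
            continue
        rate = top_box / n
        if rate >= practice_rate + margin:
            flags[provider] = "ahead"   # candidate to spread best practices
        elif rate <= practice_rate - margin:
            flags[provider] = "behind"  # candidate for added support
        else:
            flags[provider] = "in line"
    return flags

# Toy example: practice top-box rate of 82% on one survey question.
results = {"Dr. A": (45, 50), "Dr. B": (30, 40), "Dr. C": (10, 12)}
print(flag_providers(results, practice_rate=0.82))
# → {'Dr. A': 'ahead', 'Dr. B': 'behind', 'Dr. C': 'insufficient n'}
```

The n-size guard matters: with only a dozen returns, one unhappy patient can swing a provider’s rate by several points, so small samples are reported but not flagged.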

To keep a finger on the pulse of their improvement efforts, each site leader also now receives a custom tile report each month. The tile report, generated through the Press Ganey portal, includes a tile for every scoring question on the CG-CAHPS survey, along with the site’s top-box score for Likelihood to Recommend, score goal, percentile rank, monthly score over the preceding year, and distribution of responses of patients who rated their likelihood to recommend the practice as Very Poor, Poor, Fair, Good, or Very Good.

If a site leader needs help understanding how to interpret the data in one of their monthly scorecards or tile reports, Cook or a fellow advisor will visit the site, analyze the most recent comments and scores with the leader, and make specific recommendations for improvement. Likewise, if a scorecard or tile report shows that a site is starting to struggle with a driver, an advisor will visit the site, observe how the site is functioning to pinpoint what’s causing the problem, and work with the leader and staff on ways to fix it.

When the Experience team came together in early 2015, LGHP’s top-box score for Likelihood to Recommend was 79%. As of November 2019, it was just shy of 86%. In addition, its national rank rose from the 46th percentile in FY 2015 to the 75th percentile in FY 2020.

But Cook and Garrett don’t want medical practice staffs to focus solely on those numbers.

“Our medical practices see 1.2 million outpatient visits per year, so that 7-point improvement in top-box score equates to 84,000 visits, and it’s that number that we want them to celebrate,” Cook said.
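The arithmetic behind that figure is simple to verify: the top-box rate rose roughly 7 percentage points (from 79% in early 2015 to just shy of 86% in late 2019), applied to 1.2 million annual visits.

```python
# Verifying the quoted figure: a 7-percentage-point top-box improvement
# spread across 1.2 million annual outpatient visits.
visits_per_year = 1_200_000
improvement_points = 7  # top-box score rose from 79% to roughly 86%
additional_very_good = visits_per_year * improvement_points // 100
print(additional_very_good)  # → 84000
```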

“That number means 84,000 more encounters that patients felt were Very Good, 84,000 more opportunities to improve patient outcomes, and 84,000 more opportunities to reduce patient suffering,” Garrett concluded. “That’s what resonates with our care providers, and that’s what motivates them to continue to improve.”