Table 4 Barriers and facilitators affecting practitioners’ ability to use PRMs to improve P3C

From: Can practitioners use patient reported measures to enhance person centred coordinated care in practice? A qualitative study

 

People based

Barrier: Clinicians lack skills for using PRMs

Examples:
- Lack of clarity about the purpose and value of PRMs fails to motivate patients to complete the measure and professionals to champion it.
- Lack of understanding and/or training on how to apply the measure in clinical settings.
- Requiring the skill to use the measures whilst maintaining rapport with the patient.

Facilitators:
- Provision of training to practitioners on why PRMs are important, e.g. how they fit into P3C theory and how they can be delivered and used in practice to improve service delivery.
- Showing the patient the findings on the computer screen while discussing them during consultations.

Barrier: Imposed work burden on staff

Examples:
- Staff can view measurement systems as extra and unnecessary work.
- Health professionals are too overwhelmed by existing workloads, so it would be better if they were not responsible for patients completing measures.

Facilitators:
- Offering a financial incentive.
- Using a champion from the same healthcare service to encourage use of the measure.
- Reducing the burden of the new workflow by training specific staff members to handle the measurement system.
- Facilitating smooth integration of PRM data into a health organisation’s electronic record system, so that accessing it is less burdensome and the information integrates with the data already being collected.

Barrier: Emotional burden on staff

Examples:
- Staff resistance to delivering the measures and hearing results, due to a fear of the unknown, e.g. what feedback they may receive about their work.

Facilitators:
- Focusing on the change and improvement that can be made because of the information retrieved from the measure, rather than on what has gone wrong.

Barrier: Burden on patients

Examples:
- “Culture shock for patients”: patients are not used to being asked to do ‘homework’ outside of the consultation, to being involved in the consultation, or to being asked new, difficult questions.
- Patients not motivated to complete the measure because they view it as useful only for the health professional.
- Completing measures can be time consuming and burdensome. If many measures are given to the patient, they may develop questionnaire fatigue, especially if they are not thanked or told why the results are important.
- Technology: if delivery of the measure becomes electronic, it introduces a new workflow for patients (as well as staff), and the interface of the electronic version may not be user friendly.

Facilitators:
- Making patients aware of improvements to patient care that were made in response to PRM data.
- Ensuring that someone asks the patient to complete the measure, rather than just having it lying around.
- Monitoring how many questionnaires individual patients are receiving.
- Picking measures that are relevant to the patient and adding to the collection slowly.
- Using one measure that gives patients the opportunity to talk about everything, not just certain conditions or issues.
- Providing different delivery formats so that completing PRMs is as easy as possible, and offering support if patients are switching to electronic methods. Improving technology so that patient access is improved. Using external software agencies for IT support and for sharing patient feedback on the website.

PRM based

Barrier: PRM design

Examples:
- Lengthy questionnaires can interfere with the patient-practitioner conversation, and make completion even less likely for people who already find it difficult to fill them in.
- Questions can be hard to understand and/or respond to for some patient groups, due to question design or because of their condition.
- Translations are needed for different languages and for linguistic variations between different English-speaking countries.
- Insensitive question design can have a detrimental impact on the respondent.
- Family involvement and perspectives are not often sought with PRMs.

Facilitators:
- Working with the developers to make sure the items are relevant and fit your population group.
- Deciding on a case-by-case basis whether people are able to respond to PRMs despite the impact of their condition(s) (e.g. cognitive impairments).
- Using proxy measures rather than excluding people who are unable to self-report.
- Triangulating results from patients, healthcare practitioners and carers, and with standard responses from people with the same condition.
- Using PRMs that use lay language. Using measures that have translated versions available. Making sure that the questions are asked in the patient’s local language and by someone they trust.
- Using measures with positively framed items, such as the WEMWBS.

Barrier: PRMs not providing an accurate measurement of outcome/behaviour/experience

Examples:
- Practitioners find that PAM results often jar with what they have learnt from interacting with patients; consequently, they doubt whether the measure provides a true representation of how activated someone is.
- Patients giving answers that they think the practitioner wants, or feeling nervous about complaining.

Facilitators:
- Using peer advocates who can advise on how to complete the measures and encourage honest responses.
- Keeping participants’ responses anonymous so that respondent bias is minimised, although this makes collating the data with other sources difficult.
- Training staff to interpret patient behaviours to address discrepancies between proxy and self-report measures.
- Examining variables that influence agreement or disagreement between a proxy score and a patient score; once discrepancies are identified, scores can be adjusted and controlled for.

Access and interpretation

Barrier: Maintaining patient contact

Examples:
- It can be difficult to feed back to patients who simply disappear.

Barrier: Maintaining access to data

Examples:
- Data are given to Clinical Commissioning Groups for aggregate measurements, but not returned for practice-level use.

Barrier: PRM data difficult to interpret

Examples:
- Findings presented in an overly statistical form to people without the skills to interpret them.

Facilitators:
- Giving a simple overview of the data, showing trends that indicate what might and might not be a good direction to go in. Giving different options for how to make changes in care. Including graphical representations of the data and a decision support system.
- Keeping it simple: limiting the number of questions used, so that you know what good looks like. If the answer options relate to outcomes that are important to patients, the results will be easier to evaluate and will be valued.

Barrier: Lack of feedback systems

Examples:
- If a patient accesses their results without an explanation, it can cause confusion and worry.