Implementation of child-centred outcome measures in routine paediatric healthcare practice: a systematic review

Abstract

Background

Person-centred outcome measures (PCOMs) are commonly used in routine adult healthcare to measure and improve outcomes, but less attention has been paid to PCOMs in children’s services. The aim of this systematic review is to identify and synthesise existing evidence of the determinants, strategies, and mechanisms that influence the implementation of PCOMs into paediatric healthcare practice.

Methods

The review was conducted and reported in accordance with PRISMA guidelines. Databases searched included CINAHL, Embase, Medline, and PsycInfo. Google Scholar was also searched for grey literature on 25th March 2022. Studies were included if the setting was a children’s healthcare service, the study investigated the implementation or use of an outcome measure or screening tool in healthcare practice, and it reported outcomes relating to use of a measure. Data were tabulated and thematically analysed through deductive coding to the constructs of the adapted-Consolidated Framework for Implementation Research (CFIR). Results were presented as a narrative synthesis, and a logic model was developed.

Results

We retained 69 studies, conducted across primary (n = 14), secondary (n = 13), tertiary (n = 37), and community (n = 8) healthcare settings, including both child self-report (n = 46) and parent-proxy (n = 47) measures. The most frequently reported barriers to measure implementation included staff lack of knowledge about how the measure may improve care and outcomes; the complexity of using and implementing the measure; and a lack of resources, including funding and staff, to support implementation and continued use. The most frequently reported facilitators of implementation and continued use included educating and training staff and families on how to implement and use the measure, the advantages of using PCOMs over current practice, and the benefits their use has for patient care and outcomes. The resulting logic model presents the mechanisms through which strategies can reduce the barriers to implementation and support the use of PCOMs in practice.

Conclusions

These findings can be used to support the development of context-specific implementation plans through a combination of existing strategies. This will enable the implementation of PCOMs into routine paediatric healthcare practice to empower settings to better identify and improve child-centred outcomes.

Trial registration

PROSPERO CRD42022330013.

Background

Person-centredness is at the centre of holistic healthcare and a core commitment of the World Health Organisation [1,2,3,4]. In order to deliver child-centred paediatric healthcare, it is essential to understand what is important to children and their families [5, 6]. The United Nations Convention on the Rights of the Child emphasises the importance of children being involved in matters that affect them [7]. Patient-reported information is central to improving care and quality of life, and evidence demonstrates that children can reliably self-report [6, 8]. However, their voices have not always been prioritised in clinical care or research [9].

Person-Centred Outcome Measures (PCOMs) are standardised questionnaires used to assess patient (and sometimes family) outcomes of healthcare [10,11,12]. They are usually self-completed by the patient, or proxy-reported when a patient is unable to self-report [10,11,12]. Research demonstrates that PCOM use can improve care quality and patient outcomes [13, 14], support conversations about care, initiate decision-making through shared language, and empower patients and families [11, 15, 16]. Whilst PCOM use has become commonplace and its benefits are recognised in adult healthcare, there is limited understanding of the impacts, benefits, and implementation of PCOMs in paediatric services [11, 17, 18].

Additional complexities must be considered when using PCOMs with children as opposed to adults, such as the need for child-centred language and children’s varying cognitive and developmental abilities [8]. Prior reviews have not incorporated all three aspects of implementation, service-focused, and clinical outcomes [19] (e.g. acceptability or improvements in Health-Related Quality-of-Life (HRQoL)), and they tend not to be theory-driven, limiting rigour and translatability [11, 20,21,22]. Theoretically-informed implementation strategies are needed to implement PCOMs into routine paediatric practice for the benefit of children, their families, and healthcare services (including health and social care professionals, hereafter “professionals”) [23].

This systematic review aimed to identify and appraise the evidence for implementation of PCOMs into paediatric healthcare settings and to develop a logic model identifying potential strategies for implementation and their causal mechanisms. The review objectives were: 1) to identify determinants and strategies for implementing PCOMs; 2) to describe the mechanisms through which barriers and facilitators to implementation interact to enable or hinder implementation of PCOMs; 3) to synthesise the findings through the development of a logic model; and 4) to appraise the quality of the evidence.

Methods

This systematic review was conducted and reported in accordance with Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [24]. The protocol for this review was registered on PROSPERO (International Prospective Register of Systematic Reviews; registration number CRD42022330013).

Searches

CINAHL, Embase, Medline, and PsycInfo were searched to ensure articles across medical, nursing, and psychological disciplines were considered [8, 25, 26]. Google Scholar was searched for additional articles and grey literature, and the references cited in selected articles were also searched [27]. Databases were searched from 2009 to present (25 March 2022), as 2009 was the year the patient-reported outcome measure programme was introduced into the NHS in the UK [28], coinciding with a shift in thinking about and focus on outcome measurement in health internationally [29].

Search terms were informed by child-focused research [8, 26] and search strategies from adult PCOM and implementation research [12, 15, 25]. Related Medical Subject Headings were also used in conjunction with the keywords based on the following concepts: children AND outcome measures AND healthcare settings AND implementation. Full search strategies for each database can be found in the supplementary files [S1, S2, S3 and S4].
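For illustration only, the sketch below shows how four concept blocks of this kind can be combined into a single boolean query (terms within a concept OR-ed, concept blocks AND-ed). The keywords shown are hypothetical placeholders rather than the exact terms used, which are provided in the supplementary files.

```python
# Illustrative only: combining the four search concepts into one boolean query.
# The terms below are hypothetical placeholders; the exact, database-specific
# strategies (including MeSH headings) are in supplementary files S1-S4.
concepts = {
    "children": ["child*", "paediatric*", "pediatric*", "adolescen*"],
    "outcome measures": ["outcome measure*", "patient reported outcome*", "PROM*"],
    "healthcare settings": ["hospital*", "clinic*", "primary care"],
    "implementation": ["implement*", "routine practice", "adopt*"],
}

# Terms within a concept are OR-ed; the four concept blocks are AND-ed together.
query = " AND ".join(
    "(" + " OR ".join(f'"{t}"' if " " in t else t for t in terms) + ")"
    for terms in concepts.values()
)
print(query)
```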

Study inclusion and exclusion criteria

Inclusion criteria

  • Population: children ≤ 18 years old. Studies that included both children and adults were included if data for those aged 18 and under were reported separately, or if the population comprised professionals working with children, or their parents.

  • Intervention: Implementation or use of PCOMs or screening tools that are completed by a child in clinical care, or by proxy (parent/carer or professional), to improve care processes and/or outcomes.

  • Outcome: data relating to barriers and facilitators to healthcare implementation and/or sustained use of a measure.

  • Study types: Qualitative, case reports, quantitative (all experimental designs), mixed methods, service evaluations, quality improvement projects, audits. Systematic reviews were excluded but used for reference searching [27].

Exclusion criteria

  • Population: Studies including only people aged > 18 years where they are not professionals working with or parents/carers of children ≤ 18 years old.

  • Intervention: Studies where outcome measures are used to measure the effectiveness of an intervention or where measures are implemented into non-healthcare settings e.g., schools/social care

  • Outcomes: data relating to scores, psychometric properties, or reporting symptom prevalence only

  • Article type: Discussion/opinion articles, commentaries, editorials, letters, systematic reviews

Study selection

Articles identified in the search were imported to Covidence. HS screened titles and abstracts for eligibility; if there was not enough information to determine eligibility from initial screening, the full-text article was screened. Full-text articles were screened by HS, and 10% were screened by a second reviewer (DH). Discrepancies over eligibility of full-text articles were discussed and resolved with a third reviewer (DB). Reasons for exclusion of studies at the full-text stage were recorded in a PRISMA flow chart [24].

Potential effect modifiers and reasons for heterogeneity

Heterogeneity in the data was anticipated due to the inclusion of paediatric healthcare settings globally and across multiple health conditions; the barriers and facilitators identified may therefore be context specific.

Study quality assessment

Study quality assessment was undertaken by HS. As multiple study types were included, several critical appraisal tools were used to assess the quality of studies of varying designs. The Critical Appraisal Skills Programme (CASP) tools [30] were used to assess study quality. Where there was not an appropriate CASP tool for the study design, the Joanna Briggs Institute (JBI) critical appraisal tools [31] were used. For mixed methods studies, the Mixed Methods Appraisal Tool (MMAT) [32] was used. For quality improvement projects, the Quality Improvement Minimum Quality Criteria Set (QI-MQCS) [33] was used, and for non-randomised experimental studies of interventions, the Risk Of Bias In Non-randomized Studies – of Interventions (ROBINS-I) tool [34] was used. Articles were assessed against the items included in the checklists to develop understanding of the evidence rather than to exclude studies based on score. Study quality assessment results are presented in the Results.

Data extraction strategy

Data were extracted by HS. Data extracted in Covidence included: authors, title, date, country, aim, design and methods, sample (including conditions and age of child, proxy inclusion, inclusion/exclusion criteria, and sample size), healthcare setting, outcome measure used, administration data (how it is delivered and by whom), implementation data (facilitators and barriers [12]), and patient outcomes data. Data were extracted from both results and discussion sections to capture investigators’ observations regarding implementation of the measure. Where data were extracted from the discussion section of a paper, this was noted.

Data synthesis and presentation

A narrative synthesis was conducted by HS to integrate qualitative and quantitative findings, following the Guidance on the Conduct of Narrative Synthesis in Systematic Reviews [35], with results discussed with RH, CES, and DB. If disagreement occurred during these discussions, final adjudication (if needed) would be by RH. Preliminary synthesis involved tabulation to develop initial descriptions of the studies and begin to identify patterns between studies. This was followed by a thematic analysis, deductively coding the extracted quantitative and qualitative data to the adapted-Consolidated Framework for Implementation Research (CFIR) constructs and sub-constructs [36, 37]. The adapted-CFIR comprises the original five domains of the CFIR with a sixth domain called ‘patient needs and resources’ [36, 37]. This gives person-centredness a greater focus, helping to ensure that patients’ needs are prioritised throughout all stages of the development, implementation, and evaluation of complex healthcare interventions [37]. This framework was selected as it is well established and has been shown to be effective for underpinning research and implementation of complex interventions in healthcare settings [23, 25, 40].
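As a minimal illustration of this coding step (using hypothetical findings and labels rather than the extracted data), the sketch below shows how findings tagged with adapted-CFIR constructs can be tallied as barriers or facilitators:

```python
# Hypothetical sketch of the deductive coding step: each extracted finding is
# tagged with an adapted-CFIR construct and whether it acted as a barrier or
# facilitator, then tallied to show the most frequently coded constructs.
from collections import Counter

extracted_findings = [  # example rows only, not real extracted data
    {"study": "Study A", "construct": "Complexity", "role": "barrier"},
    {"study": "Study B", "construct": "Relative advantage", "role": "facilitator"},
    {"study": "Study C", "construct": "Complexity", "role": "barrier"},
]

tally = Counter((f["construct"], f["role"]) for f in extracted_findings)
for (construct, role), n in tally.most_common():
    print(f"{construct} ({role}): n = {n}")
```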

The effects of heterogeneity across studies were examined by comparing similarities and differences in outcomes across study designs, settings, and populations to better understand the impact of context.

Logic model development

The adapted-CFIR supported the data analysis and the subsequent development of a logic model, using Smith et al.’s [38] Implementation Research Logic Model template; the model was developed by HS and presented to and discussed with members of the research team (RH, CES, DB). The determinants of implementation in the template map to the adapted-CFIR constructs and sub-constructs [36, 37]. This allowed thematically coded data to be mapped directly into the logic model as either determinant barriers or facilitating strategies.
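The sketch below illustrates, with hypothetical content drawn from the kinds of findings reported later in this review, the structure of a single row of such a logic model: a coded determinant linked to a strategy, its mechanism of action, and the implementation, service, and patient outcomes it is expected to influence.

```python
# Hypothetical sketch of one row of the Implementation Research Logic Model:
# a coded determinant is linked to a strategy, its mechanism of action, and the
# implementation, service, and patient outcomes it is expected to influence.
logic_model_row = {
    "determinant": "Complexity (adapted-CFIR: intervention characteristics)",
    "strategy": "Train staff and families to administer and interpret the PCOM",
    "mechanism": "Greater knowledge and self-efficacy reduce perceived burden",
    "outcomes": {
        "implementation": "adoption and sustained use of the PCOM",
        "service": "more systematic information sharing between staff",
        "patient": "better identification of unmet needs",
    },
}

for component, value in logic_model_row.items():
    print(f"{component}: {value}")
```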

Results

Review statistics

Search yield

The search yielded n = 7401 articles from databases and a further n = 20 from citation searches. After duplicates were removed [n = 1789], n = 5632 records were title and abstract screened, and n = 5382 were excluded. Of the remaining n = 250 records, n = 94 were conference abstracts and thus excluded. Following full-text review [n = 156], n = 87 were excluded (reasons: no relevant outcomes reported [n = 36], adult population [n = 34], wrong study design [n = 8], wrong intervention [n = 6], wrong setting [n = 3]), with n = 69 retained for the analysis. Figure 1 shows a PRISMA flow diagram of the inclusion/exclusion process, and Table 1 summarises the included studies.
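As an illustrative check, the arithmetic of the screening counts reported above can be verified as follows:

```python
# Illustrative consistency check of the screening counts reported above.
identified = 7401 + 20                      # database records + citation searching
screened = identified - 1789                # after duplicate removal
assert screened == 5632
remaining = screened - 5382                 # after title/abstract exclusions
assert remaining == 250
full_text = remaining - 94                  # after removing conference abstracts
assert full_text == 156
excluded_full_text = 36 + 34 + 8 + 6 + 3    # summed reasons for exclusion
assert excluded_full_text == 87
included = full_text - excluded_full_text
assert included == 69
print(f"Included studies: n = {included}")
```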

Fig. 1 PRISMA flow diagram. Adapted from Page et al. (2021) [41]

Table 1 Summary of characteristics of included studies

Study characteristics

Country

Of the n = 69 articles retained, n = 30 were conducted in the USA [42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71], n = 10 in the UK [72,73,74,75,76,77,78,79,80,81], n = 9 in The Netherlands [18, 82,83,84,85,86,87,88,89,90], n = 8 in Canada [91,92,93,94,95,96,97,98], n = 2 each in Sweden [99, 100] and South Africa [101, 102], and n = 1 in each of Australia [103], Austria [104], Belgium [105], Germany [106], Iceland [107], Malawi [108], and Norway [109], with n = 1 further study conducted in North America (countries unspecified) [110].

Study design

With respect to study design, n = 17 studies used mixed methods [58, 65, 72, 75, 79, 82, 86, 89, 90, 95, 99, 100, 102, 103, 106, 107, 109, 110] and n = 15 used qualitative methods [46, 52, 55, 57, 60, 67, 70, 76, 78, 80, 91, 93, 94, 105, 108]. N = 12 studies were non-randomised experimental studies [44, 54, 61, 68, 69, 71, 77, 81, 85, 92, 96, 101], and n = 12 were quality improvement projects [42, 43, 45, 47,48,49, 51, 53, 62,63,64, 98]. A further n = 5 studies used cross-sectional designs [50, 59, 73, 74, 97], n = 4 were case reports [18, 56, 66, 83], n = 3 were cohort studies [84, 87, 104], and n = 1 study was part of a randomised controlled trial [88].

Setting

The most frequently reported setting was tertiary care hospitals (n = 35) [18, 44, 46, 50,51,52, 54, 56, 58,59,60,61, 63, 69, 70, 79, 81, 82, 84,85,86,87,88,89,90,91, 94,95,96,97,98, 101, 103, 104, 106, 110], and n = 2 studies were conducted in both tertiary hospital and community settings [76, 105]. The n = 11 articles reporting on studies conducted in secondary care settings included: n = 8 in mental health care settings [47, 62, 68, 72,73,74,75, 80], n = 2 in speech and language clinics [92, 93], and n = 1 in a physiotherapy clinic [66]. A further n = 2 articles reported on studies conducted across secondary mental health care settings and the community [65, 77]. There were n = 12 articles reporting on studies conducted in primary care settings [42, 43, 45, 49, 53, 57, 64, 67, 71, 99, 100, 107, 109], and n = 2 articles reported on studies conducted across primary care and community settings [48, 55]. N = 2 articles reported on studies that took place solely in the community [78, 108], and n = 1 took place across multiple settings but did not specify details [83].

Population studied

Participants in included studies had a range of medical conditions, with many studies including children with multiple conditions. These included (using ICD-10 headings [111]): mental and behavioural conditions [42, 43, 45, 47,48,49, 54,55,56,57,58, 62, 64, 65, 67, 68, 71,72,73,74,75, 77, 78, 80, 92, 93, 99, 100, 102, 107, 109], cancer [18, 50, 59,60,61, 81, 83, 86, 87, 89, 90, 96, 103, 104], rheumatological conditions [50, 83, 84, 89, 90, 98, 106, 110], pain (chronic and acute) [44, 53, 63, 94, 95, 97, 110], endocrine conditions [46, 50, 51, 56, 82, 106], haematological conditions [50, 58, 61, 89, 90, 103], circulatory conditions (cardiac and pulmonological) [50, 56, 61, 69, 83], gastrointestinal conditions [50, 53, 89, 90], infectious diseases [83, 102, 108], respiratory conditions [56, 88, 106], neurological conditions [50, 95, 101], metabolic conditions [83, 89, 90], nephrological conditions [56, 83], allergy/immunological conditions [56, 58], organ/stem-cell transplant [61, 91], other life-limiting/life-threatening conditions [76, 105], congenital conditions [70], and ophthalmological conditions [79]; n = 4 studies included other chronic/unspecified conditions [52, 66, 85, 89, 90].

Measures studied and methods of completion

A range of generic and disease-specific measures were used, including the Pediatric Quality of Life Inventory 4.0 (PedsQL 4.0), the Psychosocial Assessment Tool 2.0 (PAT 2.0), and the Quality Of Life in children and adolescents with DISABilities and their Families (DISABKIDS). There were n = 29 articles that reported on studies including both child self-report and parent/caregiver-proxy report [18, 44, 47, 52, 56,57,58, 60, 61, 65, 66, 68, 72, 73, 81,82,83,84,85,86, 89,90,91, 95,96,97,98, 101, 105, 110]; n = 17 included child self-report only [45, 46, 51, 53, 54, 59, 62,63,64, 69, 79, 80, 88, 94, 104, 106, 108] and n = 18 included parent/caregiver-proxy report only [42, 43, 48, 49, 55, 67, 71, 77, 78, 87, 92, 93, 99, 100, 102, 103, 107, 109], usually because children were very young (< 6 years). There were n = 5 articles where this information was not reported [50, 70, 74,75,76].

Study quality assessment

Included articles varied in quality. As the CASP checklists do not include a scoring system [30], articles assessed with them could not be scored; however, 90% [n = 45] of articles that could be scored were assessed as being of good to high quality (Table 2), i.e. they met > 80% of criteria or had low to moderate risk of bias.

Table 2 Summary of quality appraisal scores

All qualitative studies (n = 15) reported clear aims and methodology addressing the research question. Weaknesses related to failures to discuss the relationship between researchers and participants, and a lack of detail on ethical considerations and recruitment strategies. The n = 1 randomised controlled trial reported a clear aim, but detail on some methodological decisions was lacking and participant demographics were not reported. The n = 3 cohort studies had clear, focused aims and well-detailed methods, but lacked detail about confounding variables and attrition.

Mixed methods studies (n = 17) met between 70 and 100% of the MMAT criteria, indicating they were generally of high methodological quality; methodologically weaker studies lacked detail on the sample and the risk of non-response bias. Cross-sectional studies (n = 5) were well reported and on average met 98% of the JBI criteria. Case reports (n = 4) met on average 75% of the relevant JBI criteria. Non-randomised experimental studies of interventions (n = 12) assessed with the ROBINS-I tool had low to moderate risk of bias, usually due to low adherence to the intervention; however, these data were extracted as they pertained to the review aims. Quality improvement projects (n = 12) met on average 93% of the QI-MQCS criteria; the main areas of weakness were a lack of reporting of patient health-related outcomes and of data on the sustainability or scalability of the project.

Adapted-CFIR constructs

Table 3 details the adapted-CFIR domains and constructs extracted from the literature. The five most common sub-constructs were complexity [n = 37], knowledge and beliefs about the intervention [n = 37], relative advantage [n = 31], patient needs and resources [n = 25], and available resources [n = 24]. Findings within each adapted-CFIR construct/sub-construct are presented below. Constructs identified in fewer than three studies are not included in the narrative synthesis due to insufficient data [25]. Illustrative quotes are provided in Table 4 (reported as Q1, Q2, etc.).

Table 3 Factors identified using the Adapted-CFIR influencing PCOM implementation.
Table 4 Illustrative quotes

Intervention characteristics

Intervention source

Professional engagement in the development process of the measure [18] and the perceived security of the platform hosting electronic PCOMs (e-PCOMs) were both factors that facilitated implementation [83]. However, low rates of parent completion were observed when newly implemented PCOMs were introduced to participants in the context of a research study rather than as a new aspect of routine clinical care [Q1] [87, 99].

Evidence strength and quality

Presenting evidence to support PCOM use and perceptions of PCOMs as the ‘gold standard’ were key facilitating factors for implementation and continued frequent use; training/education programmes emphasising that PCOMs were research-evidenced, valid, and reliable supported this [70, 73, 92, 97]. Similarly, professionals’ perception that there was insufficient evidence justifying PCOM use or supporting PCOMs as valid instruments was a significant barrier to implementation [Q2] [50, 55].

Relative advantage

The use of PCOMs was perceived as advantageous, particularly from professionals’ perspectives. One study reported that 80% [n = 53] of parents found PCOM use provided added value over standard consultations [82]. Advantages included: improving communication, engagement, and decision-making with patients and families [Q3] [53, 55, 56, 61, 69, 79, 80, 91, 100, 103, 106, 108], enhancing quality of care and assessment [44, 46, 53, 56, 61, 62, 69, 70, 79, 80, 82, 85, 100, 105, 108], identifying concerns that would have remained unidentified in standard consultations [45, 46, 49, 53, 54, 56, 64, 79, 82, 100, 103, 105, 106], and increasing referral rates and access to other services and treatment [45, 46, 48, 53, 56, 64, 103].

Three studies reported that professionals continued to use PCOMs after studies ended due to improved identification of patients’ unmet needs and due to the PCOMs having become integrated into routine practice [49, 64, 100]. Where professionals did not consider PCOMs beneficial, this was often due to them being perceived as bureaucratic exercises that did not elicit new information compared to standard consultations [Q4] [50, 55, 72, 93, 103].

Regarding e-PCOMs, there were mixed perspectives as to whether professionals felt that technology enhanced workflow and assessment compared to traditional paper-based PCOMs [49, 65]. However, there was a strong preference for accessing reports and scores from measures electronically [49, 50, 89, 90, 110], while inclusion of visual representations of progress, e.g. graphs tracking scores over time, was considered beneficial [44, 83].

Adaptability

Where PCOMs could be integrated into electronic systems or platforms this facilitated implementation [Q5] [57, 94, 106]. Correspondingly, this was identified as a barrier in studies where integration did not occur [42, 55, 72, 109]. Lack of cross-cultural validity of PCOMs and those not provided in service users’ language were identified as significant barriers [Q6] [52, 57, 78, 100, 108, 109].

Complexity

Ease of PCOM use facilitated implementation, which included professionals’ views of administering the measure, interpreting the score, and feeding back scores to patients and families [52, 57, 61,62,63, 65, 69, 71, 75, 78, 80, 82, 94, 100, 102, 106, 108, 110]. The importance of measures being child/user friendly was emphasised including ease of completion [Q7] [44, 49, 52, 58, 59, 61, 68, 70, 76, 78, 80, 86, 88,89,90, 94, 99, 100, 103, 104, 106, 108, 110], appropriate measure length [56, 58, 65], and language and a reading level understandable to children and parents completing the measures [Q8] [43, 59, 60, 67, 72, 76, 99]. However, the content of PCOMs (particularly items of a sensitive nature) was a barrier to implementation [Q9] [65, 80, 100].

Design quality and packaging

There was general preference for digital administration methods, such as tablets or computers [Q10] [58, 60, 98, 110]. Yu et al. [98] found 83% [n = 196] of parent/caregivers preferred the electronic version over paper or had no preference. Similarly, Stinson et al. [110] reported that only 16% [n = 77] of children preferred pen and paper as the method of administration. Children preferred using technology to complete measures (compared to professional administration), with 71% [n = 112] responding that they agreed or strongly agreed with the statement ‘I felt less embarrassed answering these questions on the computer than I would have with a clinician’ [68].

Cost

The two most significant costs associated with implementing PCOMs were monetary cost and time. Costs of implementing and maintaining e-PCOMs were discussed [43, 44, 98, 101, 108], although the initial cost of e-PCOMs could be offset over time (due to the recurring costs of paper-based measures [98]). There were some reports that measures were time-consuming to administer during appointments, and concerns that this might detract from dedicated patient care [Q11] [43, 70, 108, 109]. However, in a further study, providers reported that once PCOMs were implemented into routine care, an average of 16 min was saved per appointment [62]. This was particularly important as those time savings could be redirected to improving patient care [62].

Outer setting

Cosmopolitanism

Multi-disciplinary, joined-up, inter-agency working was a significant factor in implementation, as there are often many agencies and services involved in the care of children [76]. Partnerships between settings facilitated the implementation and sustained use of measures, and this was linked to the peer pressure sub-construct [82, 106]. A lack of resources to address identified unmet needs was a significant barrier to sustained use [Q12] [43, 50, 55, 57].

Peer pressure

Linked to cosmopolitanism and partnerships between settings, professionals’ motivation to use specific PCOMs increased if other clinics they worked with were already using them [70, 82, 106]; one study reported this was a motivating factor for 86.1% [n = 31] of paediatricians [82].

External policy and incentives

External recommendations, guidelines, or association endorsements were a motivating factor for settings to implement and use PCOMs in practice [80, 82, 92]. However, the source of the recommendation could potentially impact implementation [Q13] [108]. Lack of awareness of or disagreement with recommendations from professional associations was a barrier [55].

Inner setting

Structural characteristics

The main barriers regarding structural characteristics of organisations were related to organisational changes such as high staff turnover [43, 57, 75]. Age of professionals also impacted perceptions of using PCOMs in routine practice; in one study, older practitioners were more likely to be sceptical about the validity and evidence-base for using PCOMs [50].

Networks and communication

Multidisciplinary team communication was seen as a prerequisite to support use of PCOMs [91, 107]. Professionals recognised that using PCOMs supported information sharing between staff in a more systematic way, which improved care [Q14, Q15] [46, 61, 66, 103].

Tension for change

One potential barrier to the implementation of PCOMs was staff readiness and willingness to change current practice [55, 108]. However, providing education and training to staff on the expected benefits of using PCOMs could change attitudes and willingness to change, and thus facilitate implementation [47].

Compatibility

Perceived disruption to workflows was a potential barrier to implementation [Q16] [55, 60, 70, 93, 94, 98,99,100]. However, in practice, the introduction of PCOMs was generally not seen as disruptive and they became an integral aspect of routine care [Q17] [49, 60, 61, 63, 69, 82, 87, 108].

Relative priority

Shared recognition of the importance of using PCOMs, sometimes referred to as ‘buy-in’ or ‘ownership’ [60], was considered an important facilitating factor for implementation and use of PCOMs in practice [Q18] [60, 93]. Where professionals or patients did not perceive the benefit of PCOMs, this was a barrier to implementation [47, 91]. Education and training may facilitate implementation where perceptions of PCOMs are a barrier; one educational intervention increased speech-language pathologists’ positive perceptions of outcome measurement from 49 to 71% of participants [n = 46] [92].

Organisational incentives and rewards

Monetary incentives or rewards were a potential motivating factor for professionals to use PCOMs in practice, particularly in lower-middle income countries [112] [Q19] [82, 108]. Lack of reimbursement for administering tools was a barrier to implementation and continued use, particularly in countries with privatised, insurance-based healthcare [Q20] [50, 55, 61, 67].

Leadership engagement

Commitment and support from leadership contributed significantly to the successful implementation of PCOMs into routine practice. High levels of leadership support were more likely to enable successful implementation than situations in which leaders had different priorities [Q21] [47, 57, 86, 107, 108], although this was not always the case [73].

Available resources

Lack of resources was a major barrier to implementation [43, 53, 55, 67, 72, 93], and continued funding was a necessity for sustainability [67, 108]. Lack of time to implement, administer, score, and record results of measures was a significant barrier, and often remained a barrier even when other barriers had been addressed [46, 53, 55, 57, 60, 63, 71, 80, 86, 93, 95, 109]. Inadequate staff numbers and high staff turnover were barriers [Q22] [47, 57, 67], while recruiting additional staff to support measure implementation and use was a facilitator [46, 47, 57, 80, 93]. Difficulty finding physical spaces for patients and families to complete measures in private was also an issue [Q23] [58, 60, 67, 82, 93, 108].

The technology requirements of e-PCOMs often created challenges, specifically in relation to internet access and access to and cost of devices [65, 67, 80, 94, 98, 101]. Conversely, paper-based measures posed challenges pertaining to PCOM availability and stationery resources [42, 93, 94, 108], and the additional time required to enter results into electronic patient records [Q24] [65].

Access to knowledge and information

Lack of awareness and knowledge of both PCOMs and how to incorporate them into routine practice was a barrier to implementation and sustained use [50, 57, 64, 72], as was a failure of more knowledgeable staff to share their knowledge with less knowledgeable colleagues [108]. Successful strategies to address gaps in knowledge about using PCOMs in routine practice included reminders, via electronic health records or emails, of when and to whom to administer PCOMs [Q25] [42, 48, 64, 93, 95]. Ongoing efforts to engage professionals through additional training, webinars, handbooks, and guidelines improved sustainability [46, 47, 52, 57, 64, 72, 80, 85, 92, 100]. Explaining to children and families completing measures what PCOMs are, their purpose, and how to complete them also facilitated use of PCOMs in practice [18].

Individual characteristics

Knowledge and beliefs about the intervention

When PCOMs were perceived positively, for example as validated tools that could support assessment and improve care outcomes, this acted as a facilitating factor [Q26] [44, 46, 49, 51, 53, 57, 58, 61, 68,69,70,71, 73, 74, 78, 79, 81, 84,85,86, 89,90,91,92, 94, 100, 102,103,104, 106,107,108]. Conversely, when PCOMs were perceived more negatively by either professionals or children and families (for example as time-consuming), this acted as a barrier [Q27] [46, 50, 55, 67, 72, 74, 80, 91, 100, 108]. Educational strategies were often key to supporting implementation and use [72, 80, 85, 92, 100]. Additionally, if parents felt PCOMs were being used as tests, this could create unnecessary stress for families and act as a barrier [Q28] [76, 78].

Self-efficacy

Professional confidence [57, 82, 108] or lack of confidence [50, 57, 79, 93] in using PCOMs was a respective facilitator or barrier. Training and education to use PCOMs could increase self-efficacy and support implementation [46, 74, 108]. As professionals gained experience using PCOMs in practice, their self-efficacy increased [73, 99].

Individual identification with organization

Challenging relationships between professionals and management and a perceived lack of organisational commitment to the intervention were barriers reported by one study [86]. Trusting relationships between professionals and families, and opportunities to work in partnership, facilitated implementation [Q29] [60, 78].

Other personal attributes

Several personal traits were identified that could influence successful implementation and routine use in practice. Following through on actions was an issue for parents in terms of remembering to complete and return screening forms [42], and for professionals in terms of administering measures and discussing results with patients [42, 63, 83, 85, 93]. Professionals’ confidence, experience, and discipline all had the potential to act as barriers or facilitators [50, 55, 57, 73, 74]. Motivation of professionals and families was also important [Q30, Q31] [85, 89, 90]. For parents and patients particularly, motivation was often linked to the perceived added value of the measure for the consultation. Other personal attributes that could impact implementation included parental mental load and stage of treatment/diagnosis (which were often linked) [Q32] [87, 96] and how comfortable children felt talking to professionals [Q33] [91].

Process

Planning

Clearly defined responsibilities, collaboratively agreed with advance notice, are important for successful implementation of PCOMs [43, 46, 56, 57, 60, 63, 64, 70]. Absence of planning presents potential barriers to adherence [Q34] [57, 64, 93]. The way in which PCOMs are introduced and formally ratified by managers is also likely to have an impact on success and uptake [Q35] [78, 86].

Formally Appointed Implementation Leads

Formally appointed implementation leaders or teams and support of site leadership were seen as essential components to adoption and uptake of newly implemented outcome measures [Q36] [56, 57, 65, 97, 107]. Lack thereof was noted as a significant barrier [43]. This sub-construct had significant cross-over with the Champions sub-construct, as the terms were sometimes used interchangeably.

Champions

Individual or team champions were seen as playing a key role in raising awareness of the intervention, promoting the use and value of PCOMs, and supporting colleagues [Q37, Q38] [57, 60, 62, 65].

External change agents

External change agents who provide support, in terms of policy, advice, resources, or other forms of support to assist implementation, were seen as a facilitating factor [Q39] [65, 86, 107, 108].

Executing

As noted in some of the previously discussed sub-constructs, a number of logistical, resource, and education/information barriers resulted in the intervention not being used according to plan; addressing these barriers was found to increase adherence [42, 61, 63, 64, 86, 100]. However, patients and families forgetting, or being unable, to complete and return measures, or completing the wrong measure, was also an issue impacting implementation of PCOMs [Q40] [42, 52, 63, 100].

Patient needs and resources

Patient needs were better identified with the introduction of PCOMs into routine practice. PCOMs identified concerns of children and families that professionals perceived would not have been picked up in standard practice [Q41] [45, 46, 53, 54, 57, 60, 63, 64, 69, 77, 82, 87, 89,90,91,92, 102, 103, 105, 106, 108], with one study noting a 68% increase in identification [54]. PCOMs also increased referral rates through identifying unmet needs [45, 48, 53, 64]. Improvements in HRQoL scores were attributed to PCOMs supporting treatment decisions in one study, which reported a 33% improvement in scores [56]. Increased focus on children and better provision of individualised person-centred care were also noted [70, 76, 100, 105].

Evidence of effectiveness

Although several barriers to implementation were identified, numerous strategies from high-quality research successfully addressed these barriers and supported implementation of PCOMs into routine practice. In particular, training or educating professionals, children, and families generally had a positive effect on the implementation of PCOMs [42, 44, 47, 61, 63, 64, 73, 74, 80, 85, 92, 100], as did addressing logistical barriers [42, 63, 64]. Numerous studies also showed increased identification of concerns and referral rates after implementation of PCOMs [45, 46, 53, 54, 57, 60, 63, 64, 69, 77, 82, 87, 89, 90, 92, 102, 103, 105, 106, 108], which also acted as a facilitator for implementing PCOMs.

Logic model for implementing person-centred outcome measures in paediatric healthcare settings

The findings of this review have informed the development of a logic model (Fig. 2) which identifies determinants, strategies, and mechanisms for implementation from these barriers and facilitators. The logic model illustrates how the existing evidence for determinants of implementation can be used to develop strategies to achieve implementation, service, and patient/clinical outcomes. It also demonstrates the mechanisms through which these interconnected factors achieve outcomes.

Fig. 2 Logic model for implementing PCOMs into paediatric healthcare settings. Adapted from Smith et al. (2020) [38]

Discussion

This review has identified key barriers and facilitators to the implementation of PCOMs into paediatric healthcare practice using the adapted-CFIR. These findings informed the development of a logic model that can inform and support future development of context-specific implementation strategies for implementing PCOMs in different paediatric settings.

The relative advantages of PCOMs identified here are echoed in the adult evidence base [13, 16] and in some systematic reviews of measures used in specific paediatric settings [11, 20, 22], demonstrating benefits to decision-making, communication, identification of concerns, patient quality of life, and referrals.

Existing evidence on implementation, emphasising the importance of PCOMs being evidence-based, valid, and reliable [13, 14, 22], is reflected in the sub-constructs of intervention source and evidence quality. Øvretveit et al. [14] note the importance of measures being developed with the adult patients using them, to ensure suitability. A systematic review by Coombes et al. [8] suggests, in line with the findings from this review, that children generally prefer computerised measures, and highlights the importance of measures being developmentally appropriate (relating to language used, recall period, and response formats) [8]. This further evidences the importance of involving key stakeholders in the development of PCOMs to support implementation; the mechanisms through which this occurs are shown in the logic model.

The access to knowledge domain was intrinsically linked to relative advantage, intervention source, and evidence quality. In order for the intervention source, evidence quality, and relative advantage to act as facilitators, professionals, patients, and families must be supported to understand the reliability, validity and benefits of the PCOM [14, 18, 22].

Findings relating to resources, staffing, and leadership are consistent with literature from adult healthcare services, including the importance of integration within existing systems and workflows, staff willingness to change and ‘buy-in’, and leadership engagement and support [13, 14, 18].

Strengths and limitations

This systematic review provides a thorough, theory-driven examination of the evidence for implementing PCOMs into paediatric healthcare settings. The adapted-CFIR supported the identification of facilitators of and barriers to implementation, with only three sub-constructs for which no data were identified. This supported the development of a comprehensive and theoretically informed logic model. Given that 67% [n = 46] of the retained studies included child self-report measures, this review supports prioritisation of children’s voices in their care, and the centrality of person-centredness to quality care.

Of the sub-constructs for which no data were identified (trialability, individual stage of change, and reflecting and evaluating), it could be that these constructs are not relevant to paediatric healthcare, or this could reflect limitations of the existing evidence base. These areas should be prioritised in future research.

Recommendations for practice

From the evidence synthesis and logic model development, several strategies for implementing PCOMs into paediatric healthcare settings have been identified. Education about the benefits of PCOMs is important to increase professionals’ understanding of their importance and value and so facilitate implementation [72, 80, 85, 92, 100]. Including key stakeholders in measure development helps to ensure the outcomes being measured are relevant and useful [8, 14, 18]. This also fosters a sense of shared ownership of the PCOM being implemented among professionals, patients, and families [60, 93]. The identification of context-specific factors (such as financial incentives as a facilitator in lower-middle income countries, or reimbursement in privatised, insurance-based healthcare systems) [50, 55, 61, 67, 82, 108, 112] further demonstrates the importance of professionals understanding the context in which implementation occurs.

Conclusions

To our knowledge, this is the first systematic review conducted into the implementation of PCOMs in paediatric healthcare that is not condition or setting specific. This review provides a comprehensive overview of the potential barriers to implementing and using PCOMs in paediatric healthcare, and the factors that can facilitate implementation and adherence.

This review has also demonstrated the suitability of the adapted-CFIR to theoretically inform implementation research in paediatric settings. The visual presentation of the logic model clearly demonstrates the interconnectedness of the numerous determinants of implementation. It also demonstrates the mechanisms through which implementation strategies can facilitate the implementation of PCOMs into paediatric healthcare settings to achieve improved outcomes for children and their families.

Using PCOMs in routine paediatric care is key to child-centred quality care. This review provides important evidence for how to implement PCOMs in practice in order to support better identification of patient needs. Future research should aim to assess the applicability and feasibility of this logic model in different settings to support implementation interventions, particularly in lower-middle income settings, as much of the existing evidence comes from higher income countries.

Availability of data and materials

Not applicable.

Abbreviations

PCOM: Person-Centred Outcome Measure

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

PROSPERO: International Prospective Register of Systematic Reviews

NHS: National Health Service

CASP: Critical Appraisal Skills Programme

JBI: Joanna Briggs Institute

MMAT: Mixed-Methods Appraisal Tool

QI-MQCS: Quality Improvement Minimum Quality Criteria Set

ROBINS-I: Risk Of Bias In Non-randomised Studies of Interventions

CFIR: Consolidated Framework for Implementation Research

HRQoL: Health-Related Quality-of-Life

References

  1. Kitson A, Marshall A, Bassett K, Zeitz K. What are the core elements of patient-centred care? A narrative review and synthesis of the literature from health policy, medicine and nursing. J Adv Nurs. 2013;69(1):4–15.

  2. Mead N, Bower P. Patient-centredness: a conceptual framework and review of the empirical literature. Soc Sci Med. 2000;51(7):1087–110.

  3. Coulter A, Oldham J. Person-centred care: what is it and how do we get there? Future Hospital J. 2016;3(2):114–6.

  4. World Health Organisation. People-Centred Health Care: A Policy Framework. Geneva: Switzerland; 2007.

  5. Namisango E, Bristowe K, Murtagh FE, Downing J, Powell RA, Abas M, et al. Towards person-centred quality care for children with life-limiting and life-threatening illness: Self-reported symptoms, concerns and priority outcomes from a multi-country qualitative study. Palliat Med. 2020;34(3):319–35.

  6. Hinds PS, Menard JC, Jacobs SS. The child’s voice in pediatric palliative and end-of-life care. Prog Palliative Care. 2012;20(6):337–42.

  7. United Nations Convention on the Rights of the Child; 1989.

  8. Coombes L, Bristowe K, Ellis-Smith C, Aworinde J, Fraser LK, Downing J, et al. Enhancing validity, reliability and participation in self-reported health outcome measurement for children and young people: a systematic review of recall period, response scale format, and administration modality. Qual Life Res; 2021.

  9. Namisango E, Bristowe K, Allsop MJ, Murtagh FEM, Abas M, Higginson IJ, et al. Symptoms and concerns among children and young people with life-limiting and life-threatening conditions: a systematic review highlighting meaningful health outcomes. Patient. 2019;12(1):15–55.

  10. Bradshaw A, Santarelli M, Khamis AM, Sartain K, Johnson M, Boland J, et al. Implementing person-centred outcome measures (PCOMs) into routine palliative care: A protocol for a mixed-methods process evaluation of The RESOLVE PCOM Implementation Strategy. BMJ Open. 2021;11(9): e051904.

  11. Bele S, Chugh A, Mohamed B, Teela L, Haverman L, Santana MJ. Patient-reported outcome measures in routine pediatric clinical care: a systematic review. Front Pediatr. 2020;8:364.

  12. Antunes B, Harding R, Higginson IJ. Implementing patient-reported outcome measures in palliative care clinical practice: A systematic review of facilitators and barriers. Palliat Med. 2014;28(2):158–75.

  13. Olde Rikkert MGM, Van Der Wees PJ, Schoon Y, Westert GP. Using patient reported outcomes measures to promote integrated care. Int J Integr Care. 2018;18(2): 8.

  14. Øvretveit J, Zubkoff L, Nelson EC, Frampton S, Knudsen JL, Zimlichman E. Using patient-reported outcome measurement to improve patient care. Int J Qual Health Care. 2017;29(6):874–9.

  15. Greenhalgh J, Gooding K, Gibbons E, Dalkin S, Wright J, Valderas J, et al. How do patient reported outcome measures (PROMs) support clinician-patient communication and patient care? A realist synthesis. J Patient-Reported Outcomes. 2018;2(1):42.

  16. Etkind SN, Daveson BA, Kwok W, Witt J, Bausewein C, Higginson IJ, et al. Capture, transfer, and feedback of patient-centered outcomes data in palliative care populations: does it make a difference? a systematic review. J Pain Symptom Manage. 2015;49(3):611–24.

  17. Huang I-C, Revicki DA, Schwartz CE. Measuring pediatric patient-reported outcomes: good progress but a long way to go. Qual Life Res. 2014;23(3):747–50.

  18. Engelen V, Haverman L, Koopman H, Schouten-Van Meeteren N, Meijer-Van Den Bergh E, Vrijmoet-Wiersma J, et al. Development and implementation of a patient reported outcome intervention (QLIC-ON PROfile) in clinical paediatric oncology practice. Patient Educ Couns. 2010;81(2):235–44.

  19. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Admin Policy Mental Health Mental Health Serv Res. 2011;38(2):65–76.

  20. Anderson LM, Papadakis JL, Vesco AT, Shapiro JB, Feldman MA, Evans MA, et al. Patient-Reported and Parent Proxy-Reported Outcomes in Pediatric Medical Specialty Clinical Settings: A Systematic Review of Implementation. J Pediatr Psychol. 2020;45(3):247–65.

  21. Roe D, Mazor Y, Gelkopf M. Patient-reported outcome measurements (PROMs) and provider assessment in mental health: a systematic review of the context of implementation. Int J Qual Health Care. 2021;34(Supplement_1):ii28-ii39.

  22. Deighton J, Croudace T, Fonagy P, Brown J, Patalay P, Wolpert M. Measuring mental health and wellbeing outcomes for children and adolescents to inform practice and policy: a review of child self-report measures. Child Adolesc Psychiatry Ment Health. 2014;8(1):14.

  23. Nilsen P. Making sense of implementation theories, models and frameworks. Implementation Sci. 2015;10(1):53.

  24. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6(7): e1000097.

  25. Gillam J, Davies N, Aworinde J, Yorganci E, Anderson JE, Evans C. Implementation of eHealth to Support Assessment and Decision-making for Residents With Dementia in Long-term Care: Systematic Review. J Med Internet Res. 2022;24(2): e29837.

  26. Coombes LH, Wiseman T, Lucas G, Sangha A, Murtagh FE. Health-related quality-of-life outcome measures in paediatric palliative care: A systematic review of psychometric properties and feasibility of use. Palliat Med. 2016;30(10):935–49.

  27. Horsley T, Dingwall O, Sampson M. Checking reference lists to find additional studies for systematic reviews. Cochrane Database Syst Rev. 2011;2011(8):MR000026.

  28. Devlin NJ, Appleby J. Getting the most out of PROMs: Putting health outcomes at the heart of NHS decision-making. London; 2010.

  29. Weldring T, Smith SMS. Article Commentary: Patient-Reported Outcomes (PROs) and Patient-Reported Outcome Measures (PROMs). Health Services Insights. 2013;6:HSI.S11093.

  30. Critical Appraisal Skills Programme. CASP Checklists 2019 [Available from: https://casp-uk.net/casp-tools-checklists/] Accessed 11 April 2023.

  31. Joanna Briggs Institute. Critical Appraisal Tools 2017 [Available from: https://jbi.global/critical-appraisal-tools] Accessed 11 April 2023.

  32. Hong QN, Fàbregues S, Bartlett G, Boardman F, Cargo M, Dagenais P, et al. The Mixed Methods Appraisal Tool (MMAT) version 2018 for information professionals and researchers. Educ Inf. 2018;34(4):285–91.

  33. Hempel S, Shekelle PG, Liu JL, Sherwood Danz M, Foy R, Lim Y-W, et al. Development of the Quality Improvement Minimum Quality Criteria Set (QI-MQCS): a tool for critical appraisal of quality improvement intervention publications. BMJ Qual Saf. 2015;24(12):796–804.

  34. Sterne JA, Hernán MA, Reeves BC, Savović J, Berkman ND, Viswanathan M, et al. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. BMJ. 2016;355:i4919.

  35. Popay J, Roberts H, Sowden A, Petticrew M, Arai L, Rodgers M, et al. Guidance on the Conduct of Narrative Synthesis in Systematic Reviews. Lancaster; 2006.

  36. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4(1):50.

  37. Safaeinili N, Brown‐Johnson C, Shaw JG, Mahoney M, Winget M. CFIR simplified: Pragmatic application of and adaptations to the Consolidated Framework for Implementation Research (CFIR) for evaluation of a patient‐centered care transformation within a learning health system. Learn Health Syst. 2020;4(1):e10201.

  38. Smith JD, Li DH, Rafferty MR. The Implementation Research Logic Model: a method for planning, executing, reporting, and synthesizing implementation projects. Implementation Sci. 2020;15(1):84.

  39. Rohwer A, Pfadenhauer L, Burns J, Brereton L, Gerhardus A, Booth A, et al. Series: Clinical Epidemiology in South Africa. Paper 3: Logic models help make sense of complexity in systematic reviews and health technology assessments. Journal of Clinical Epidemiology. 2017;83:37–47.

  40. Stover AM, Haverman L, Van Oers HA, Greenhalgh J, Potter CM. Using an implementation science approach to implement and evaluate patient-reported outcome measures (PROM) initiatives in routine care settings. Quality of Life Research. 2020.

  41. Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ. 2021;372:n71.

  42. Berger-Jenkins E, Monk C, D’Onfro K, Sultana M, Brandt L, Ankam J, et al. Screening for Both Child Behavior and Social Determinants of Health in Pediatric Primary Care. J Dev Behav Pediatr. 2019;40(6):415–24.

  43. Berry AD, Garzon DL, Mack P, Kanwischer KZ, Beck DG. Implementing an Early Childhood Developmental Screening and Surveillance Program in Primary Care Settings: Lessons Learned From a Project in Illinois. J Pediatr Health Care. 2014;28(6):516–25.

  44. Bhandari RP, Feinstein AB, Huestis SE, Krane EJ, Dunn AL, Cohen LL, et al. Pediatric-Collaborative Health Outcomes Information Registry (Peds-CHOIR): a learning health system to guide pediatric pain research and treatment. Pain. 2016;157(9):2033–44.

  45. Bose J, Zeno R, Warren B, Sinnott LT, Fitzgerald EA. Implementation of Universal Adolescent Depression Screening: Quality Improvement Outcomes. J Pediatr Health Care. 2021;35(3):270–7.

  46. Brodar KE, Leite RO, Marchetti D, Jaramillo M, Davis E, Sanchez J, et al. Psychological screening and consultation in a pediatric diabetes clinic: Medical providers’ perspectives. Clinical Practice in Pediatric Psychology. 2022;10(2):164–79.

  47. Butz C, Valleru J, Castillo A, Butter EM. Implementation of an Outcome Measure in Pediatric Behavioral Health: A Process Improvement Initiative. Pediatr Qual Saf. 2017;2(6):e043-e.

  48. Campbell K, Carbone PS, Liu D, Stipelman CH. Improving Autism Screening and Referrals With Electronic Support and Evaluations in Primary Care. Pediatrics. 2021;147(3): e20201609.

  49. Campbell K, Carpenter KLH, Espinosa S, Hashemi J, Qiu Q, Tepper M, et al. Use of a Digital Modified Checklist for Autism in Toddlers – Revised with Follow-up to Improve Quality of Screening for Autism. J Pediatr. 2017;183:133-9.e1.

  50. Chen M, Jones CM, Bauer HE, Osakwe O, Ketheeswaran P, Baker JN, et al. Barriers and Opportunities for Patient-Reported Outcome Implementation: A National Pediatrician Survey in the United States. Children. 2022;9(2):185.

  51. Corathers SD, Kichler J, Jones N-HY, Houchen A, Jolly M, Morwessel N, et al. Improving Depression Screening for Adolescents With Type 1 Diabetes. Pediatrics. 2013;132(5):e1395-e402.

  52. Cox ED, Dobrozsi SK, Forrest CB, Gerhardt WE, Kliems H, Reeve BB, et al. Considerations to support use of patient-reported outcomes measurement information system pediatric measures in ambulatory clinics. J Pediatr. 2021;230:198-206.e2.

  53. Cunningham NR, Moorman E, Brown CM, Mallon D, Chundi PK, Mara CA, et al. Integrating psychological screening into medical care for youth with abdominal pain. Pediatrics. 2018;142(2): e20172876.

  54. Fein JA, Pailler ME, Barg FK, Wintersteen MB, Hayes K, Tien AY, et al. Feasibility and Effects of a Web-Based Adolescent Psychiatric Assessment Administered by Clinical Staff in the Pediatric Emergency Department. Arch Pediatr Adolesc Med. 2010;164(12):1112-7.

  55. Fenikilé TS, Ellerbeck K, Filippi MK, Daley CM. Barriers to autism screening in family medicine practice: a qualitative study. Prim Health Care Res Dev. 2015;16(4):356–66.

  56. Gerhardt WE, Mara CA, Kudel I, Morgan EM, Schoettker PJ, Napora J, et al. Systemwide Implementation of Patient-Reported Outcomes in Routine Clinical Care at a Children’s Hospital. Jt Comm J Qual Patient Saf. 2018;44(8):441–53.

  57. Godoy L, Gordon S, Druskin L, Long M, Kelly KP, Beers L. Pediatric Provider Experiences with Implementation of Routine Mental Health Screening. J Dev Behav Pediatr. 2021;42(1):32–40.

  58. Herbert L, Hardy S. Implementation of a Mental Health Screening Program in a Pediatric Tertiary Care Setting. Clin Pediatr (Phila). 2019;58(10):1078–84.

  59. Hinds PS, Nuss SL, Ruccione KS, Withycombe JS, Jacobs S, DeLuca H, et al. PROMIS pediatric measures in pediatric oncology: valid and clinically feasible indicators of patient-reported outcomes. Pediatr Blood Cancer. 2013;60(3):402–8.

  60. Kazak AE, Barakat LP, Askins MA, McCafferty M, Lattomus A, Ruppe N, et al. Provider Perspectives on the Implementation of Psychosocial Risk Screening in Pediatric Cancer. J Pediatr Psychol. 2017;42(6):700–10.

  61. Kazak AE, Christofferson J, Gutierrez Richards H, Rivero-Conil S, Sandler E. Implementing screening with the Psychosocial Assessment Tool (PAT) in clinical oncology practice. Clin Pract Pediatr Psychol. 2019;7(2):140–50.

  62. Krishna R, Valleru J, Smith W. Implementing Outcome-based Care in Pediatric Psychiatry: Early Results and Overcoming Barriers. Pediatr Qual Saf. 2019;4(1): e132.

  63. Lynch-Jordan AM, Kashikar-Zuck S, Crosby LE, Lopez WL, Smolyansky BH, Parkins IS, et al. Applying Quality Improvement Methods to Implement a Measurement System for Chronic Pain-Related Disability. J Pediatr Psychol. 2010;35(1):32–41.

  64. Mansour M, Krishnaprasadh D, Lichtenberger J, Teitelbaum J. Implementing the Patient Health Questionnaire Modified for Adolescents to improve screening for depression among adolescents in a Federally Qualified Health Centre. BMJ Open Quality. 2020;9(4): e000751.

  65. Purbeck CA, Briggs EC, Tunno AM, Richardson LM, Pynoos RS, Fairbank JA. Trauma-informed measurement-based care for children: Implementation in diverse treatment settings. Psychol Serv. 2020;17(3):311–22.

  66. Schreiber J, Marchetti GF, Racicot B, Kaminski E. The Use of a Knowledge Translation Program to Increase Use of Standardized Outcome Measures in an Outpatient Pediatric Physical Therapy Clinic: Administrative Case Report. Phys Ther. 2015;95(4):613–29.

  67. Silver RB, Newland RP, Hartz K, Jandasek B, Godoy L, Lingras KA, et al. Integrating early childhood screening in pediatrics: a longitudinal qualitative study of barriers and facilitators. Clin Pract Pediatr Psychol. 2017;5(4):426–40.

  68. Townsend L, Kobak K, Kearney C, Milham M, Andreotti C, Escalera J, et al. Development of three web-based computerized versions of the kiddie schedule for affective disorders and schizophrenia child psychiatric diagnostic interview: preliminary validity data. J Am Acad Child Adolesc Psychiatry. 2020;59(2):309–25.

  69. Uzark K, King E, Spicer R, Beekman R, Kimball T, Varni JW. The clinical utility of health-related quality of life assessment in pediatric cardiology outpatient practice. Congenit Heart Dis. 2013;8(3):211–8.

  70. Weidler EM, Britto MT, Sitzman TJ. Facilitators and barriers to implementing standardized outcome measurement for children with cleft lip and palate. Cleft Palate Craniofac J. 2021;58(1):7–18.

  71. Windham GC, Smith KS, Rosen N, Anderson MC, Grether JK, Coolman RB, et al. Autism and developmental screening in a public, primary care setting primarily serving Hispanics: challenges and results. J Autism Dev Disord. 2014;44(7):1621–32.

  72. Batty MJ, Moldavsky M, Foroushani PS, Pass S, Marriott M, Sayal K, et al. Implementing routine outcome measures in child and adolescent mental health services: from present to future practice. Child Adolesc Mental Health. 2013;18(2):82–7.

  73. Bear HA, Dalzell K, Edbrooke-Childs J, Wolpert M. Applying behaviour change theory to understand the barriers to implementing routine outcome monitoring. Br J Clin Psychol. 2022;61(3):557–78.

  74. Edbrooke-Childs J, Barry D, Rodriguez IM, Papageorgiou D, Wolpert M, Schulz J. Patient reported outcome measures in child and adolescent mental health services: associations between clinician demographic characteristics, attitudes and efficacy. Child Adolesc Mental Health. 2017;22(1):36–41.

  75. Fullerton M, Edbrooke-Childs J, Law D, Martin K, Whelan I, Wolpert M. Using patient-reported outcome measures to improve service effectiveness for supervisors: a mixed-methods evaluation of supervisors’ attitudes and self-efficacy after training to use outcome measures in child mental health. Child Adolesc Mental Health. 2018;23(1):34–40.

  76. Harding R, Chambers L, Bluebond-Langner M. Advancing the science of outcome measurement in paediatric palliative care. Int J Palliat Nurs. 2019;25(2):72–9.

  77. Hardy C, Hackett E, Murphy E, Cooper B, Ford T, Conroy S. Mental health screening and early intervention: clinical research study for under 5-year-old children in care in an inner London borough. Clin Child Psychol Psychiatry. 2015;20(2):261–75.

  78. Kendall S, Nash A, Braun A, Bastug G, Rougeaux E, Bedford H. Acceptability and understanding of the Ages & Stages Questionnaires®, Third Edition, as part of the Healthy Child Programme 2-year health and development review in England: Parent and professional perspectives. Child Care Health Dev. 2019;45(2):251–6.

  79. Robertson AO, Tadić V, Rahi JS. Attitudes, experiences, and preferences of ophthalmic professionals regarding routine use of patient-reported outcome measures in clinical practice. PLoS ONE. 2020;15(12): e0243563.

  80. Sharples E, Qin C, Goveas V, Gondek D, Deighton J, Wolpert M, et al. A qualitative exploration of attitudes towards the use of outcome measures in child and adolescent mental health services. Clin Child Psychol Psychiatry. 2017;22(2):219–28.

  81. Weiner B, Michelagnoli M, Drake R, Christie D. Screening for distress in young people after treatment for sarcoma: a feasibility study. J Pediatr Oncol Nurs. 2016;33(1):25–32.

  82. Eilander M, De Wit M, Rotteveel J, Maas-Van Schaaijk N, Roeleveld-Versteegh A, Snoek F. Implementation of quality of life monitoring in Dutch routine care of adolescents with type 1 diabetes: appreciated but difficult. Pediatr Diabetes. 2016;17(2):112–9.

  83. Haverman L, van Oers HA, Limperg PF, Hijmans CT, Schepers SA, Sint Nicolaas SM, et al. Implementation of electronic patient reported outcomes in pediatric daily clinical practice: The KLIK experience. Clin Pract Pediatr Psychol. 2014;2(1):50–67.

  84. Haverman L, van Rossum MAJ, van Veenendaal M, van den Berg JM, Dolman KM, Swart J, et al. Effectiveness of a web-based application to monitor health-related quality of life. Pediatrics. 2013;131(2):e533–43.

  85. Santana MJ, Haverman L, Absolom K, Takeuchi E, Feeny D, Grootenhuis M, et al. Training clinicians in how to use patient-reported outcome measures in routine clinical practice. Qual Life Res. 2015;24(7):1707–18.

  86. Schepers SA, Sint Nicolaas SM, Haverman L, Wensing M, Schouten van Meeteren AYN, Veening MA, et al. Real-world implementation of electronic patient-reported outcomes in outpatient pediatric cancer care. Psycho-Oncology. 2017;26(7):951–9.

  87. Schepers SA, Sint Nicolaas SM, Maurice-Stam H, Van Dijk-Lokkart EM, Van Den Bergh EMM, De Boer N, et al. First experience with electronic feedback of the Psychosocial Assessment Tool in pediatric cancer care. Support Care Cancer. 2017;25(10):3113–21.

  88. Van Bragt S, Van Den Bemt L, Cretier R, Van Weel C, Merkus P, Schermer T. PELICAN: Content evaluation of patient-centered care for children with asthma based on an online tool. Pediatr Pulmonol. 2016;51(10):993–1003.

  89. van Muilekom MM, Teela L, van Oers HA, van Goudoever JB, Grootenhuis MA, Haverman L. Patients' and parents' perspective on the implementation of Patient Reported Outcome Measures in pediatric clinical practice using the KLIK PROM portal. Qual Life Res. 2021.

  90. van Muilekom MM, Teela L, van Oers HA, van Goudoever JB, Grootenhuis MA, Haverman L. Correction to: Patients’ and parents’ perspective on the implementation of Patient Reported Outcome Measures in pediatric clinical practice using the KLIK PROM portal. Qual Life Res. 2022;31(1):255–6.

  91. Anthony SJ, Young K, Pol SJ, Selkirk EK, Blydt-Hansen T, Boucher S, et al. Patient-reported outcome measures in pediatric solid organ transplantation: Exploring stakeholder perspectives on clinical implementation through qualitative description. Qual Life Res. 2021;30(5):1355–64.

  92. Cunningham BJ, Oram CJ. Using implementation science to engage stakeholders and improve outcome measurement in a preschool speech-language service system. Speech Lang Hear. 2020;23(1):17–24.

  93. Kwok EY, Moodie ST, Cunningham BJ, Oram CJ. Barriers and facilitators to implementation of a preschool outcome measure: an interview study with speech-language pathologists. J Commun Disord. 2022;95: 106166.

  94. Lalloo C, Stinson JN, Brown SC, Campbell F, Isaac L, Henry JL. Pain-QuILT: assessing clinical feasibility of a Web-based tool for the visual self-report of pain in an interdisciplinary pediatric chronic pain clinic. Clin J Pain. 2014;30(11):934–43.

  95. Orava T, Provvidenza C, Townley A, Kingsnorth S. Screening and assessment of chronic pain among children with cerebral palsy: a process evaluation of a pain toolbox. Disabil Rehabil. 2019;41(22):2695–703.

  96. Schulte F, Russell KB, Pelletier W, Scott-Lane L, Guilcher GMT, Strother D, et al. Screening for psychosocial distress in pediatric cancer patients: An examination of feasibility in a single institution. Pediatr Hematol Oncol. 2019;36(3):125–37.

  97. Yamada J, Squires JE, Estabrooks CA, Victor C, Stevens B. The role of organizational context in moderating the effect of research use on pain outcomes in hospitalized children: a cross sectional study. BMC Health Serv Res. 2017;17(1):68.

  98. Yu JY, Goldberg T, Lao N, Feldman BM, Goh YI. Electronic forms for patient reported outcome measures (PROMs) are an effective, time-efficient, and cost-minimizing alternative to paper forms. Pediatr Rheumatol Online J. 2021;19(1):67.

  99. Fäldt A, Nordlund H, Holmqvist U, Lucas S, Fabian H. Nurses’ experiences of screening for communication difficulties at 18 months of age. Acta Paediatr. 2019;108(4):662–9.

  100. Fält E, Salari R, Fabian H, Sarkadi A. Facilitating implementation of an evidence-based method to assess the mental health of 3–5-year-old children at Child Health Clinics: A mixed-methods process evaluation. PLoS ONE. 2020;15(6): e0234383.

  101. Davies EH, Fieggen K, Wilmshurst J, Anyanwu O, Burman RJ, Komarzynski S. Demonstrating the feasibility of digital health to support pediatric patients in South Africa. Epilepsia Open. 2021;6(4):653–62.

  102. Van Der Merwe MN, Mosca R, Swanepoel DW, Glascoe FP, Van Der Linde J. Early detection of developmental delays in vulnerable children by community care workers using an mHealth tool. Early Child Dev Care. 2019;189(5):855–66.

  103. McCarthy MC, Wakefield CE, Degraves S, Bowden M, Eyles D, Williams LK. Feasibility of clinical psychosocial screening in pediatric oncology: Implementing the PAT2.0. J Psychosoc Oncol. 2016;34(5):363–75.

  104. Meryk A, Kropshofer G, Hetzer B, Riedl D, Lehmann J, Rumpold G, et al. Implementation of daily patient-reported outcome measurements to support children with cancer. Pediatr Blood Cancer. 2021:e29279.

  105. Friedel M, Brichard B, Boonen S, Tonon C, De Terwangne B, Bellis D, et al. Face and content validity, acceptability, and feasibility of the adapted version of the children’s palliative outcome scale: a qualitative pilot study. J Palliat Med. 2021;24(2):181–8.

  106. Barthel D, Fischer KI, Nolte S, Otto C, Meyrose A-K, Reisinger S, et al. Implementation of the Kids-CAT in clinical settings: a newly developed computer-adaptive test to facilitate the assessment of patient-reported outcomes of children and adolescents in clinical practice in Germany. Qual Life Res. 2016;25(3):585–94.

  107. Jonsdottir SL, Saemundsen E, Gudmundsdottir S, Haraldsdottir GS, Palsdottir AH, Rafnsson V. Implementing an early detection program for autism in primary healthcare: Screening, education of healthcare professionals, referrals for diagnostic evaluation, and early intervention. Res Autism Spectr Disord. 2020;77: 101616.

  108. Kip EC, Udedi M, Kulisewa K, Go VF, Gaynes BN. Barriers and facilitators to implementing the HEADSS psychosocial screening tool for adolescents living with HIV/AIDS in teen club program in Malawi: health care providers’ perspectives. Int J Ment Health Syst. 2022;16(1):8.

  109. Westergren T, Mølland E, Haraldstad K, Tellefsen Håland Å, Stamnes Köpp UM, Fegran L, et al. Implementation of the Norwegian ‘Starting right’ child health service innovation: implementation adjustments, adoption, and acceptability. BMC Health Serv Res. 2021;21(1):86.

  110. Stinson JN, Connelly M, Jibb LA, Schanberg LE, Walco G, Spiegel LR, et al. Developing a standardized approach to the assessment of pain in children and youth presenting to pediatric rheumatology providers: a Delphi survey and consensus conference process followed by feasibility testing. Pediatr Rheumatol. 2012;10(1):7.

  111. World Health Organisation. International Statistical Classification of Diseases and Related Health Problems, 10th Revision. 2019. Available from: https://icd.who.int/browse10/2019/en. Accessed 11 April 2023.

  112. Lencucha R, Neupane S. The use, misuse and overuse of the ‘low-income and middle-income countries’ category. BMJ Glob Health. 2022;7(6): e009067.

Acknowledgements

We thank the European Research Council and the NIHR Applied Research Collaboration South London (NIHR ARC South London) at King’s College Hospital NHS Foundation Trust for the financial support needed to undertake this study.

We would like to thank the Library and Collections Learning and Delivery Team at King’s College London for their support in the development and refinement of the search strategy. We also thank Christina Ramsenthaler, senior lecturer at the Institute of Nursing at Zurich University of Applied Sciences, who screened a German paper against the eligibility criteria.

The Children’s Palliative care Outcome Scale (C-POS) Study Steering Group members are: AK Anderson, Jo Bayly, Lydia Bate, Myra Bluebond-Langner, Debbie Box, Katherine Bristowe, Rachel Burman, Lizzie Chambers, Lucy Coombes, Alan Craft, Fin Craig, Aislinn Delaney, Jonathan Downie, Julia Downing, Bobbie Farsides, Sara Fovargue, Lorna Fraser, Jane Green, Jay Halbert, Julie Hall-Carmichael, Irene Higginson, Michelle Hills, Mevhibe Hocaoglu, Vanessa Holme, Gill Hughes, Jo Laddie, Angela Logun, Eve Malam, Steve Marshall, Linda Maynard, Andrina McCormack, Catriona McKeating, Lis Meates, Fliss Murtagh, Eve Namisango, Veronica Neefjes, Cheryl Norman, Sue Picton, Christina Ramsenthaler, Anna Roach, Ellen Smith, Michelle Ward, Mark Whiting

Funding

The C-POS study is supported by the European Research Council’s Horizon 2020 programme [Grant ID: 772635]; this article reflects only the authors’ views, and the European Research Council is not liable for any use that may be made of the information contained therein.

The C-POS study is supported by the National Institute for Health and Care Research (NIHR) Applied Research Collaboration South London (NIHR ARC South London) at King’s College Hospital NHS Foundation Trust. The views expressed are those of the author[s] and not necessarily those of the NIHR or the Department of Health and Social Care.

Hannah Scott, King’s College London, is supported by the National Institute for Health and Care Research (NIHR) Applied Research Collaboration South London (NIHR ARC South London) at King’s College Hospital NHS Foundation Trust. The views expressed are those of the author[s] and not necessarily those of the NIHR or the Department of Health and Social Care.

Professor Fliss Murtagh is a UK National Institute for Health Research (NIHR) Senior Investigator. The views expressed in this article are those of the author(s) and not necessarily those of the NIHR, or the Department of Health and Social Care.

Professor Myra Bluebond-Langner’s post is supported by funding from The True Colours Trust. All research at Great Ormond Street Hospital NHS Foundation Trust is made possible by the NIHR Great Ormond Street Hospital Biomedical Research Centre.

The funding bodies above did not have any role in the design of the study, collection, analysis, interpretation of data, or writing of the manuscript.

Author information

Contributions

HS prepared the protocol, ran the searches, screened articles, conducted the analysis and synthesis, developed the logic model, and prepared the manuscript for publication. DH screened 10% of the articles at the full text screening stage. DB resolved a conflict that occurred at the full text screening stage. RH, CES, and DB provided supervision throughout, and were involved in critical review and revision of the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Hannah May Scott.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Scott, H.M., Braybrook, D., Harðardóttir, D. et al. Implementation of child-centred outcome measures in routine paediatric healthcare practice: a systematic review. Health Qual Life Outcomes 21, 63 (2023). https://doi.org/10.1186/s12955-023-02143-9


Keywords