
Can patient-reported measurements of pain be used to improve cancer pain management? A systematic review and meta-analysis
Rosalind Adam1, Christopher D Burton1, Christine M Bond1, Marijn de Bruin2 and Peter Murchie1

1 Centre of Academic Primary Care, University of Aberdeen, Aberdeen, UK
2 Aberdeen Health Psychology Group, Institute of Applied Health Sciences, University of Aberdeen, Aberdeen, UK

Correspondence to Dr Rosalind Adam, Centre of Academic Primary Care, University of Aberdeen, Room 1:131, Polwarth Building, Foresterhill, Aberdeen AB25 2ZD, UK; rosalindadam{at}abdn.ac.uk

Abstract

Purpose Cancer pain is a distressing and complex experience. It is plausible that the systematic collection and feedback of patient-reported outcome measurements (PROMs) relating to pain could enhance cancer pain management. We aimed to conduct a systematic review of interventions in which patient-reported pain data were collected and fed back to patients and/or professionals in order to improve cancer pain control.

Methods MEDLINE, EMBASE and CINAHL databases were searched for randomised and non-randomised controlled trials in which patient-reported data were collected and fed back with the intention of improving pain management by adult patients or professionals. We conducted a narrative synthesis. We also conducted a meta-analysis of studies reporting pain intensity.

Results 29 reports from 22 trials of 20 interventions were included. PROMs were used to alert physicians to poorly controlled pain, to target pain education and to link treatment to management algorithms. Few interventions were underpinned by explicit behavioural theories. Interventions were often applied inconsistently, or led to changes in treatment only infrequently. Narrative synthesis suggested that feedback of PROM data tended to increase discussions between patients and professionals about pain and/or symptoms overall. Meta-analysis of 12 studies showed a reduction in average pain intensity in intervention group participants compared with controls (mean difference −0.59, 95% CI −0.87 to −0.30).

Conclusions Interventions that assess and feed back cancer pain data to patients and/or professionals have so far led to modest reductions in cancer pain intensity. Suggestions are given to inform and enhance future PROM feedback interventions.

  • Cancer
  • Pain
  • PROM
  • Clinical assessment
  • Self-report

This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/


Introduction

Pain is the most frequent complication of cancer.1 Approximately 40% of patients experience moderate-to-severe pain at diagnosis, rising to 70% at the end of life.1 Cancer pain control is frequently suboptimal, despite effective treatments being available.2 Under-reporting of pain by patients, inadequate communication about pain between patients and healthcare professionals, and inadequate assessment of pain by professionals are known to contribute to poor pain control.3,4

Traditional clinical consultation models rely on a question-and-answer-based dialogue between the patient and professional during which patients are prompted to report and describe problems. This may underestimate pain for several reasons. Retrospective reports by patients are subject to recall bias, underestimation and imprecision.5 Patients may fail to report cancer pain if they expect that pain is an inevitable consequence of cancer, if they believe that pain is a useful indicator of disease activity, or if they fear that symptom discussions will shift the professional's focus away from the treatment of disease.6 Pain can be a complex and subjective experience, and patients can have difficulties judging the validity of pain as a presenting symptom that warrants medical attention.7 Professionals may not ask about or adequately assess the details of the patient's pain.8 Therefore, it is possible that the traditional consultation model could lead to specific deficiencies in cancer pain management.

The potential value of collecting patient-reported outcome measurements (PROMs) is increasingly being recognised in clinical practice.9 PROMs are defined as: ‘measurements of any aspect of a patient's health status that come directly from the patient, without interpretation of the patient's response by a clinician or anyone else’.10 Patient-reported outcomes might be collected from patients via interviews, questionnaires or diaries. Recently, digital technology has enabled PROMs to be collected remotely via hand-held devices and web-based forms. It has been suggested that PROMs might have value in the provision of patient health status information to clinicians; monitoring response to treatments (and their side effects); detecting unrecognised problems; and improving health management behaviours by patients and professionals.11 In oncology, PROMs have been shown to improve patient satisfaction with their care and to increase the frequency of discussion of patient outcomes during consultations.12,13

Despite the impact of pain on the well-being of patients with cancer and the potential value of using PROMs to enhance cancer pain management, it is currently unclear whether PROM interventions can have an impact on patient pain outcomes. This review aims to synthesise the evidence on interventions which have used patient-reported pain measurements to enhance the management of cancer-related pain by making these pain data available to patients and/or healthcare providers; to describe the interventions and their main components; and to determine whether the systematic collection of patient reported pain data can improve cancer pain outcomes.

Methods

A systematic review was conducted to identify randomised controlled trials (RCTs) and controlled trials of interventions which involved the systematic collection of patient-reported measurements of pain related to cancer or its treatment. The review was conducted according to ‘the Preferred Reporting Items for Systematic reviews and Meta-Analyses’ (PRISMA) criteria. A review protocol was registered and is available at: http://www.crd.york.ac.uk/PROSPEROFILES/15217_PROTOCOL_20141027.pdf

Inclusion and exclusion criteria

This review considered RCTs and non-randomised controlled trials in which patient-reported measurements of pain were collected and fed back to patients and/or clinicians with the intention of improving cancer pain management behaviours by adult patients or professionals. It was judged that non-randomised studies were relevant to the assessment of PROM intervention components. Inclusion and exclusion criteria are summarised in table 1.

Table 1

Summary of inclusion and exclusion criteria

Search strategy

There were three groups of search terms relating to: cancer pain; self-report and measurement; and behavioural change relating to pain management. Keywords and Boolean operators were explored and combined on the advice of a senior medical librarian to search MEDLINE, EMBASE and CINAHL databases from inception. Database searches took place in November and December 2014 and a MEDLINE search was updated in December 2015. Detailed search strategies and dates are shown in online supplementary appendix 1. Reference lists of two reviews of PROMs in oncology12,13 and all relevant full-text papers included in this review were searched for additional relevant titles.

Study selection

Study titles and then abstracts of relevant titles were screened independently by two authors (RA and CMB). Full texts were retrieved for all unique abstracts which were felt to be potentially relevant by either author, and these were reviewed independently against the inclusion and exclusion criteria by two authors (RA and one of CMB, CDB, PM and MdB). Any disagreement was resolved by discussion.

Risk of bias assessment

Risk of bias was assessed independently by two authors (RA and CDB) according to the Cochrane Collaboration risk of bias tool14 and inter-rater reliability was assessed using Cohen's κ statistic,15 calculated in Stata statistical software V.14.
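For orientation, Cohen's κ compares the agreement the two reviewers actually achieved with the agreement expected by chance. The expression below is the standard definition of the statistic, not a detail reported in the included papers:

```latex
\kappa = \frac{p_o - p_e}{1 - p_e}
```

where p_o is the observed proportion of agreement and p_e is the proportion of agreement expected by chance; values above approximately 0.8 are conventionally interpreted as very high agreement.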

Data extraction and synthesis

Data extraction was based on the Template for Intervention Description and Replication (TIDieR) checklist.16 Study authors were contacted by email where methodological or outcome data were missing from papers.

As specified in the protocol, we anticipated heterogeneity in interventions and reported outcomes and so carried out a narrative synthesis of the included studies. For those studies which reported outcomes for pain intensity using similar measures, we also conducted a meta-analysis. RevMan V.5 was used for statistical analysis, with a random-effects model in view of the clinical heterogeneity of studies.
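For readers who wish to reproduce this kind of pooling outside RevMan, the sketch below illustrates a DerSimonian–Laird random-effects meta-analysis of mean differences, the standard inverse-variance approach implemented in RevMan. The function name and the three-study input data are invented for illustration only and do not correspond to the trials in this review.

```python
import math

def dersimonian_laird(diffs, variances):
    """Pool per-study mean differences with a DerSimonian-Laird
    random-effects model.

    diffs     -- list of study mean differences (intervention minus control)
    variances -- list of the corresponding variances of those differences
    Returns (pooled_md, ci_low, ci_high, i_squared).
    """
    k = len(diffs)

    # Fixed-effect (inverse-variance) weights and pooled estimate
    w = [1.0 / v for v in variances]
    md_fixed = sum(wi * di for wi, di in zip(w, diffs)) / sum(w)

    # Cochran's Q and the between-study variance tau^2
    q = sum(wi * (di - md_fixed) ** 2 for wi, di in zip(w, diffs))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)

    # Random-effects weights incorporate tau^2
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * di for wi, di in zip(w_star, diffs)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))

    i_squared = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se, i_squared

# Illustrative (invented) data for three hypothetical trials
md, lo, hi, i2 = dersimonian_laird([-0.4, -0.8, -0.3], [0.04, 0.09, 0.06])
print(f"MD {md:.2f} (95% CI {lo:.2f} to {hi:.2f}), I2 = {i2:.0f}%")
```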

Results

A PRISMA diagram is shown in figure 1. In total, 3412 titles were identified by searching four databases and by screening reference lists. No new studies were identified in the updated MEDLINE search (December 2015); however, one new article was identified after the initial database searches17 which was linked to the research team of an earlier study.18 Forty-five full-text articles were assessed, of which 29 satisfied the inclusion and exclusion criteria and were included in the narrative synthesis.

Figure 1

PRISMA chart detailing study identification and selection process. PRISMA, Preferred Reporting Items for Systematic reviews and Meta-Analyses.

Characteristics of the included studies

There were 29 reports17–45 of 22 unique trials of 20 interventions. Twenty trials were RCTs, and two were controlled trials.19,23 The trials were published between 1997 and 2015 and were conducted in the USA, the Netherlands, Norway, Canada, Germany and the UK (table 2). There were 5234 unique trial participants. Most studies were conducted in an oncology outpatient setting in patients with mixed cancer types (table 2).

Table 2

Summary of studies

Risk of bias in included studies

A Cochrane risk of bias summary assessment is shown in table 3. Inter-rater reliability for risk of bias assessment (κ) between the two reviewers was 0.84 (95% CI 0.75 to 0.88), suggesting high levels of agreement. The ‘blinding of participants and personnel’ category has been omitted from the summary assessment because the nature of the interventions meant that none of the included studies could have blinded the research participants. Only Wilkie et al 45 blinded treating physicians and instructed patients not to take their pain tools to clinic appointments; in the remainder of studies, physicians were expected to act on patient-reported data and therefore tended not to be blinded. In some studies, controls also monitored symptoms without feedback to clinicians; in the remainder, controls received usual care without additional pain monitoring.

Table 3

Risk of bias for the included studies

The results of four studies should be interpreted with caution. Aubin et al 19 conducted a non-randomised study which had a high dropout rate due to death and hospital admission. The study by Bertsche et al 23 was also a non-randomised trial. Methodological details were lacking in the studies by Trowbridge et al 40 and Vallières et al,41 and the risk of bias in these studies was unclear.

Theory, rationale and intervention components

The interventions and their components are summarised in table 2. Wilkie et al 45 based their coaching intervention on Johnson's46 behavioural system model for nursing practice. No other intervention was developed using a specific behavioural theory, although several trials29,34,39 drew on self-efficacy and academic detailing theories to inform their interventions.

PROM data collection

A variety of formats were used to allow patients to report pain and other symptoms. Nine trials used pen and paper,19,23,25,26,28,34,40,45 four used touch-screen devices or personal digital assistants to collect the data,20,36,37,41 three used automated telephone monitoring,17,18,35 one used a web-based system,21 and in two trials the patient was interviewed by a nurse27 or a health educator29 to collect the data. One study offered a choice between automated telephone monitoring and online monitoring.33

Pain and symptom monitoring took place immediately before planned outpatient visits in five studies without the option of home symptom monitoring,20,29,37,40,42 and one study23 collected PROMs during an inpatient stay. The remaining studies offered the ability to monitor symptoms at home as required, or at set intervals ranging from twice daily to monthly.

Eight of the 22 studies focused on pain and analgesic monitoring alone; the remainder also involved other PROMs such as mood, quality of life, distress and analgesic usage. Pain was often monitored alongside other physical symptoms including nausea, vomiting, constipation, diarrhoea, fatigue, appetite loss, sleep disturbance, cough, breathlessness, fever and dry mouth.

PROM data usage and feedback mechanisms

The patient-reported outcome data were used in a variety of ways. Summary data were given to a clinician in advance of a consultation in eight studies.17,20,21,28,36,37,40,42 None of the clinicians in these studies were given specific instructions about how to use the data, except in the study by Vallières et al,41 in which clinicians were asked to alter analgesics according to the WHO analgesic ladder.

Five studies17,21,27,29,34 used the patient-generated data to target education on analgesic use, self-management skills and communicating about pain. Berry et al 21 embedded automated tailored coaching messages into their web-based intervention; these messages typically focused on how to communicate about unrelieved symptoms with professionals. Four interventions17,18,27,35 contained automatic alerts to physicians based on predetermined symptom thresholds. One study19 also used a symptom threshold concept within its paper diary intervention, instructing patients to contact their nurse if pain intensity or analgesic use crossed a threshold. Four studies23,26,27,33 linked patient-reported data to specific management algorithms to support clinical decision-making.

Intervention fidelity

Several interventions were not delivered as designed. Mooney et al 35 reported that only 20 of 167 (12%) automated alerts to physicians of symptoms exceeding a threshold resulted in a provider-initiated unscheduled contact. Hoekstra et al 28 reported that despite patients being advised to take their symptom monitor to all medical appointments, it was used in only 232 of 1291 (18%) consultations. Van der Peet et al 44 found that 22 of 37 (59%) written recommendations to physicians advising medication changes were ignored. In comparison, one study by Bertsche et al 23 found that algorithm-derived treatment recommendations were fully accepted by physicians in 85% of cases.

Quantitative assessment of changes in pain intensity

Pain was self-rated on a numerical rating scale out of 10 by intervention patients and controls at baseline and at the end of the study in 15 trials (Post et al 36 provided previously unpublished data to allow comparison of effect size in this review). Seven studies19,26,33,36,41,44 rated pain using the Brief Pain Inventory, one24 used measures from the Amsterdam Pain Management Index, one study17 used the MD Anderson Symptom Inventory and one study45 used a validated 10 cm visual analogue scale. Five trials28,29,34,35,39 used simple non-validated numerical pain rating scales out of 10 points.

Forest plots summarising average pain intensity across 12 trials and present pain across three trials are shown in figures 2 and 3, respectively. Average pain refers to how a patient feels their pain has been overall and is a specific item in the Brief Pain Inventory. Studies which did not use the Brief Pain Inventory but provided a report of overall/cumulative pain severity as reported by the patient have been considered here under the heading of average pain intensity.

Figure 2

Forest plot of average/overall pain intensity.

Figure 3

Forest plot of present pain intensity.

A statistically significant reduction in average pain intensity of around half a point out of 10 was found (mean difference −0.59, 95% CI −0.87 to −0.30). Removing the non-randomised study by Aubin et al 19 from the meta-analysis did not substantially alter this result (mean difference −0.58, 95% CI −0.90 to −0.26). The I² statistic was 46%, indicating moderate heterogeneity, which was expected in view of the heterogeneity of the interventions. One study by Mooney et al,35 which had problems with fidelity, appeared to be an outlier on the forest plot. A sensitivity analysis with this study removed reduced the I² statistic to 24%.
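As a point of reference for interpreting the 46% figure, I² is derived from Cochran's Q and its degrees of freedom (df = number of pooled studies minus one) using the standard Higgins–Thompson definition. Q itself is not reported here, but rearranging the definition with df = 11 for the 12 pooled trials implies a Q of roughly 20:

```latex
I^2 = \max\left(0,\ \frac{Q - \mathrm{df}}{Q}\right) \times 100\%
\qquad\Rightarrow\qquad
Q \approx \frac{\mathrm{df}}{1 - I^2/100} = \frac{11}{1 - 0.46} \approx 20.4
```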

Three studies reported ‘present’ pain intensity, that is, pain at the moment that it was being reported by the patient. There was no significant difference in present pain intensity between control and intervention groups (mean difference −0.20, 95% CI −0.89 to 0.49).

Narrative summary of other pain-related outcomes

Several studies included pain-related outcome measures other than pain intensity. Full details of the results of these outcome measures are included as an online supplementary table in appendix 2. Six studies (detailed in 10 reports) considered the effect of the PROMs on the clinical consultation.20,22,29–32,37,42–43,45 Interventions were associated with more symptoms being reported and/or more discussions specifically about pain.

There was no evidence that opioid prescribing or the pain management index (an estimate of adequacy of analgesic prescription) was improved in the intervention groups compared with controls.17,34,39,40,45 However, one study by Bertsche et al 23 found significant improvements in guideline adherence over the intervention period.

Two studies17,18 reported reductions in the number of pain threshold events over time in the intervention group compared with the control group, but these reductions only reached statistical significance in the study by Cleeland et al.18 The most frequent clinical response to pain threshold alerts in both studies17,18 was to reinforce existing management strategies.

Discussion

Main findings

Feedback based on patient-reported pain outcomes has been used to effect changes in pain management in four main ways: (1) to provide reports about pain and additional symptoms to professionals (with the intention of increasing professional awareness of unrelieved pain and other problems); (2) to tailor patient education about pain self-management strategies and about how to communicate about pain; (3) to prompt contact between a patient and professional when pain is above a set threshold; and (4) to link pain treatments to the severity of pain experienced by the patient via algorithmic management guidelines. Such interventions currently have a statistically significant but small effect (<1 point on a 0–10 point rating scale) on patient-reported average pain intensity.

Previous reviews have shown that PROMs in oncology can improve patient satisfaction with care and consultation outcomes. This is the first review to have shown a significant impact of PROMs on a symptom outcome. However, it is accepted that for analgesics, patients desire reductions in pain of at least 50%, ideally experiencing no worse than mild pain,47 and a half-point improvement on a 10-point scale falls well short of this. Nevertheless, because monitoring pain and feeding this back to patients and/or professionals is fairly simple, the technique should be considered as part of more comprehensive programmes to tackle cancer pain.

The process evaluations described in three studies suggested that intervention fidelity was suboptimal,28,35,44 which is likely to have reduced the effectiveness of the interventions. Physicians failed to respond to symptom alerts and patients failed to take their data to consultations. Moreover, making professionals aware of high levels of patient-reported pain did not necessarily result in changes to analgesic prescribing. It is unclear from the evidence in this review why this might be the case. Previous studies have suggested that physicians can prefer their own judgement of symptoms over formal PROM measures.48 Another possibility is that numerical ratings of pain fail to take into consideration the complexity of pain experiences and individual patient preferences for pain management, which can become more apparent during the clinical consultation. Qualitative studies have shown that patients often manage pain around an acceptable level, and make trade-offs between opioid side effects, physical activity, cognitive function and pain relief.49 The interventions reviewed have not captured this complexity.

Strengths and limitations

This review was systematically conducted and identified trials published between 1997 and 2015. Twenty of the 22 included trials were RCTs, and narrative description of these trials has allowed the components of the interventions to be characterised. Despite the use of different measures of pain, we were able to obtain and combine pain data from 15 studies, allowing a meta-analysis of the effect of PROM feedback on a clinically relevant outcome. The main limitation of this review is that there were problems with intervention and trial description in several trials (table 3), which could have introduced bias. In addition, it is important to note that pain measurement was not the principal focus of every study included in this review. Some trials collected a range of symptom and quality of life data, including pain, and fed these back to patients and/or professionals. However, in all trials pain was specifically monitored and pain-related outcomes were reported within the results, enabling comparison of pain data within this review.

Implications for practice, policy and research

Interventions which use PROMs to inform cancer pain management by patients and professionals show promise, but their usefulness and impact on pain might be enhanced if interventions were better designed and delivered. Based on the narrative review, and considering the main components described by the original study authors, we formulated a summary of the key steps that are necessary for this type of intervention to be effective (see figure 4). Arguably, a key component is the feedback process between patients and professionals, and this requires further attention. The majority of studies in this review presented professionals with pain measures or threshold alerts without any instructions on how these measures should be used. This represents a missed opportunity, since evidence-based cancer pain management guidelines exist to guide action.

Figure 4

Steps by which PROM interventions can alter patient-reported pain intensity. PROM, patient-reported outcome measurement.

Conclusions

Interventions which have used patient-reported measurements to enhance the management of cancer-related pain have achieved modest reductions in cancer pain intensity. The studies demonstrate that patients with cancer can provide their own data to guide management. The challenges are to provide effective transfer of information and to ensure clinicians act on this information in order to improve pain control.

References

Footnotes

  • Twitter Follow Rosalind Adam at @rosadamaberdeen

  • Contributors RA was involved in the design of this review, carried out database searches, assessed studies for inclusion in the review, performed data extraction, assessed risk of bias and was involved in the synthesis of results. CDB was involved in the design of this review, assessed studies for inclusion in the review, assessed risk of bias of included studies, and contributed to drafting and revising the article critically. CMB was involved in the design of this review, assessed studies for inclusion in the review, and contributed to drafting and revising the article critically. MdB was involved in the design of this review, assessed studies for inclusion in the review, and contributed to drafting and revising the article critically. PM was involved in the design of this review, assessed studies for inclusion in the review, and contributed to drafting and revising the article critically.

  • Funding RA completed this review during a clinical academic fellowship funded by the Chief Scientist Office of the Scottish Government, grant reference RG12141-10.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.