Development and initial validation of a new outcome measure for hospice and palliative care: the St Christopher's Index of Patient Priorities (SKIPP)
Julia Addington-Hall1, Katherine Hunt1, Ali Rowsell1, Rosanna Heal2, Penny Hansford2, Barbara Monroe2 and Nigel Sykes2

1Faculty of Health Sciences, University of Southampton, Southampton, UK
2St Christopher's Hospice, London, UK

Correspondence to Professor Julia Addington-Hall, Faculty of Health Sciences, University of Southampton, Highfield, Southampton SO17 1BJ, UK; jaddingtonhall{at}gmail.com, J.Addington-Hall{at}soton.ac.uk

Abstract

Objective To develop and conduct a preliminary psychometric analysis of a hospice and palliative care patient-reported outcome measure to detect patients’ perceptions of change in quality of life (QoL) and issues of concern, and views of service benefit.

Methods Following pilot testing and cognitive interviewing, the St Christopher's Index of Patient Priorities (SKIPP) was administered twice to hospice inpatients and home care patients. QoL was rated ‘now’ and, retrospectively, ‘before starting hospice care’ or ‘at the time of the first interview’. Patients nominated and rated progress with their main concerns, rated the difference the service was making, and completed the palliative care outcome scale (POS). A subsample of patients whose condition was stable completed SKIPP again within 24 h to assess test-retest reliability.

Results QoL scores ‘now’ differed significantly from retrospective scores made at the same time: QoL increased with hospice care when patients ‘looked back’ on their previous QoL. Four-fifths reported that their first concern had got ‘a little’ or ‘much’ better since initial service contact; this proportion declined subsequently. Four-fifths at both time points said the hospice had made ‘a lot of difference’ to them. No significant differences were found between time points on POS items. Test-retest analyses were prevented by low numbers.

Conclusions SKIPP can detect patients’ perception of change in QoL and main concerns, and the difference patients think the service has made to them. Its design with current and retrospective components addresses response shift and means it can be used for quality improvement or clinical purposes with only one administration, an advantage in frail populations. It is therefore a useful addition to hospice and palliative care patient-reported outcome measures.

  • Methodological research
  • Hospice care
  • Quality of life
  • Service evaluation
  • Clinical assessment


Introduction

Measuring change in healthcare outcomes is argued to have an important part to play in improving healthcare quality and efficiency.1 Particular emphasis is now placed on patient-reported outcome measures (PROMs), where the patient's own perspective on the impact of healthcare on their health status is valued and measured.2 The National Health Service (NHS) Outcomes Framework uses outcome measures to provide a national overview of the NHS’ performance, thus acting as a catalyst for quality improvement and outcome measurement.3 Other less centralised health systems are seeing similar moves to use outcome data to drive health improvement. Palliative care services must also monitor their outcomes.4 Radbruch has argued ‘we have to prove the quality of care that we deliver, account for the resources that are allocated and verify that patients are receiving the best possible care in relation to these resources’.5 Measuring palliative care outcomes presents practical and ethical challenges because patients are usually frail, symptomatic, often develop cognitive problems and have deteriorating health status.6 ,7

Within palliative care, PROMs initially focused on the assessment of symptoms,8 ,9 but their scope has widened to include psychological factors, communication, practical issues and family concerns.10 ,11 Palliative care quality of life (QoL) measures have incorporated psychological, social and spiritual dimensions as well as outcomes such as dignity and hope.12–16 Inclusion of all these dimensions is at odds with the requirement to keep palliative care measures short and easy to use. Moreover, such measures prescribe to patients what constitutes QoL, rather than letting them decide what matters to them. The Schedule for the Evaluation of QoL and other measures that invite patients to define what is personally important have been designed to overcome this problem.17–19

None of these tools directly assesses patients’ view of the service’s impact on their well-being, however they choose to define it. This study is a pragmatic attempt to address that gap and to respond to the increasingly competitive nature of UK healthcare provision. The objective was to design and validate a PROM for routine use by hospice and palliative care services, capable of detecting patients’ perceptions of change in their QoL and in matters of individual concern to them, and of assessing the extent to which they see themselves as benefiting from palliative care. The measure is known as the St Christopher's Index of Patient Priorities (SKIPP).

Method

Setting

St Christopher's Hospice serves a population of 1.5 million with considerable socioeconomic and ethnic variety. The service has an inpatient unit, together with community palliative care and day care services. All data collection and analysis were undertaken by independent researchers at the University of Southampton.

Phase I initial development

Before designing SKIPP, palliative care QoL and outcome measures were reviewed. The Project Advisory Group and researchers discussed the measure's purpose, and the hospice's previous experience with outcome measures. SKIPP was drafted and administered in individual interviews to a convenience sample of patients attending the day care service, using cognitive interview techniques of ‘read aloud/think aloud’ to inform question and response wording.20 ,21 SKIPP is described in table 1. Patients were interviewed twice. Interviews were recorded, and responses entered directly into a framework. Phase I findings were considered by the researchers and the Project Advisory Team, and amendments made to SKIPP accordingly.

Table 1

Content of SKIPP measure at Times 1 and 2

Phase II questionnaire testing

SKIPP has two versions: one to be used the first time a patient completes it (SKIPP-T1), and the other to be used on second or subsequent interviews (SKIPP-T2). The content of both versions is summarised in table 1.

Patients were interviewed twice, with 3 days between Time 1 and Time 2 if they were inpatients, and 7 days if they were home care patients.

At both interviews, participants also completed the palliative care outcome scale (POS) as a comparator.22 The POS was chosen because it is widely used and was explicitly developed as an outcome measure.5 ,22 The procedure for the Time 2 interview was similar to that at Time 1. All respondents were invited to complete the measure themselves but most preferred to have it read out to them and completed by the interviewer. Time taken to complete the measure and respondent preference were obtained as indicators of SKIPP's utility.

Phase III: test-retest reliability

At Time 2 nursing staff identified study participants whose condition had been stable between Time 1 and Time 2 interviews: these participants were then asked if they would retake the measure 24 h later to assess its test-retest reliability. It was planned to use Cohen's κ to measure agreement between scores at the second and third interviews.22
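Had enough stable patients been recruited, agreement would have been analysed as planned with Cohen's κ. The sketch below is purely illustrative of that calculation, assuming two lists of ordinal item scores from the Time 2 and Time 3 administrations; the scores and variable names are invented, not study data.

```python
# Illustrative only: Cohen's kappa for test-retest agreement on one SKIPP item.
# The scores below are invented examples, not study data.
from sklearn.metrics import cohen_kappa_score

time2_scores = [5, 4, 4, 3, 5, 2, 4]   # hypothetical item scores at Time 2
time3_scores = [5, 4, 3, 3, 5, 2, 4]   # hypothetical re-test scores 24 h later

kappa = cohen_kappa_score(time2_scores, time3_scores)
print(f"Cohen's kappa: {kappa:.2f}")

# For ordinal items a weighted kappa may be preferable, e.g.:
# cohen_kappa_score(time2_scores, time3_scores, weights="quadratic")
```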

Phase IV—revising SKIPP

Following data analysis, the Project Advisory Group and researchers further revised SKIPP in the light of the research findings. The resulting measure was then put into routine use in the inpatient unit and in the community service.

Statistical analysis

Summary measures at both time points are presented as means (SD) for continuous, approximately normally distributed variables, medians (IQR) for non-normally distributed variables, and frequencies and percentages for categorical variables. All statistical tests are two-sided, with p values and CIs presented as appropriate. Comparisons over time use appropriate matched-pairs analyses; as a consequence, the frequencies for variables differ between comparisons because of missing data, a known problem in palliative care research.
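As a rough illustration of these conventions (not the study's actual analysis code), the sketch below uses invented data and hypothetical column names to produce mean (SD), median (IQR) and frequency (%) summaries, and shows the pairwise deletion of incomplete pairs that makes the n differ between matched-pairs comparisons.

```python
# Illustrative only: the summary conventions described above, with invented data.
import pandas as pd

df = pd.DataFrame({
    "age":     [71, 58, 64, 80, 55, 62, 77],             # approx. normal -> mean (SD)
    "qol_t1":  [5, 4, 5, 3, None, 5, 4],                 # ordinal/skewed -> median (IQR)
    "qol_t2":  [5, 5, None, 4, 5, 4, 3],
    "setting": ["inpatient", "home care", "inpatient", "inpatient",
                "home care", "inpatient", "home care"],  # categorical -> n (%)
})

print(f"Age: mean {df['age'].mean():.1f} (SD {df['age'].std():.1f})")
q1, med, q3 = df["qol_t1"].quantile([0.25, 0.5, 0.75])
print(f"QoL at Time 1: median {med} (IQR {q1}-{q3})")
print(df["setting"].value_counts(normalize=True).mul(100).round(1))

# Matched-pairs analyses keep only respondents with both scores (pairwise deletion),
# which is why the n reported for each comparison differs when data are missing.
pairs = df[["qol_t1", "qol_t2"]].dropna()
print(f"Complete Time 1/Time 2 pairs: n={len(pairs)}")
```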

Results

Phase I

Cognitive interviews were held with 13 patients: 9 were interviewed again 1 week later.

Three participants felt unable to rate their QoL, although they talked about it and their perceptions of the hospice's impact on it. Other participants did provide ratings, although some found this difficult. Most were able to nominate their main concerns, to rate the difference the service had made to these concerns, and to give reasons for their answers. The most frequent response, ‘a lot of difference’, usually related to a concrete intervention or improvement such as successful symptom treatment or provision of help at home when needed. ‘A little bit of difference’ related to less tangible concepts such as being able to mix with people in similar situations, patients knowing they could call on the hospice if and when they needed more support and a general feeling of improved mood or optimism.

Phase II

Thirty-five respondents completed SKIPP at Time 1 and Time 2: these form the dataset for this paper (seven more patients participated at Time 1 but withdrew before Time 2). Mean age was 64.8 (SD 12.3) years; 48.6% were men (n=17). Primary diagnosis was available for 31 (88%). For 80.6% of these, primary diagnosis was cancer (n=25), with respiratory (5), colorectal (4), breast (4) and gynaecological cancers (4) the most common. Most respondents were recruited from the hospice inpatient unit (60%, n=21); others were home care patients attending day care (40%, n=14).

Quality of life

At Time 1, respondents rated their QoL ‘today’ as being better than it had been ‘before hospice care’: the difference between these two variables was statistically significant (T1-today median 5.00 (IQR 4.0–5.0), T1-retrospective median 3.00 (IQR 1.5–5.0), n=33, Wilcoxon test p<0.02).

At Time 2, respondents’ retrospective rating of their QoL as it had been at the first interview (T2-retrospective) did not differ significantly from how they had rated their QoL contemporaneously at Time 1 (T2-retrospective median 4.00 (IQR 4.0–5.0), T1-today median 5.00 (IQR 4.0–5.0), n=31, Wilcoxon test not statistically significant (NS)).

At Time 2, respondents also rated their current QoL (T2-today). This rating was significantly higher than their retrospective rating of QoL at Time 1 (T2-retrospective median 4 (IQR 4–5), T2-today median 5 (IQR 4–6), n=28, Wilcoxon test p=0.04). However, this difference is not apparent when comparing the ratings respondents gave contemporaneously to their QoL at Time 1 and Time 2 (T1-today median 5 (IQR 4–5), T2-today median 5 (IQR 4–6), n=29, Wilcoxon test NS).
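These four paired comparisons follow a single pattern: each contrasts a current rating with a retrospective (or earlier) rating using a Wilcoxon signed-rank test on complete pairs. A minimal sketch of that comparison grid is given below; the data, column names and code are illustrative only, not the study's.

```python
# Illustrative only: the grid of paired QoL comparisons described above.
# DataFrame contents and column names are invented, not study data.
import pandas as pd
from scipy.stats import wilcoxon

qol = pd.DataFrame({
    "t1_today":         [5, 4, 5, 3, 5, 4, 2],
    "t1_retrospective": [3, 2, 4, 1, 3, 5, 1],  # 'before hospice care', rated at Time 1
    "t2_today":         [6, 5, 6, 4, 6, 3, 3],
    "t2_retrospective": [4, 3, 4, 2, 4, 5, 1],  # 'at the first interview', rated at Time 2
})

comparisons = [
    ("t1_today", "t1_retrospective"),  # change since starting hospice care
    ("t2_retrospective", "t1_today"),  # retrospective Time 2 rating vs contemporaneous Time 1 rating
    ("t2_today", "t2_retrospective"),  # change since Time 1, both rated at Time 2
    ("t1_today", "t2_today"),          # contemporaneous ratings at the two time points
]

for a, b in comparisons:
    pair = qol[[a, b]].dropna()        # pairwise deletion of missing data
    stat, p = wilcoxon(pair[a], pair[b])
    print(f"{a} vs {b}: n={len(pair)}, two-sided p={p:.3f}")
```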

Concerns

At Time 1, 93.8% (n=30) of respondents reported that they had had concerns before hospice care began and 74.4% (n=23) reported that they still had concerns: this difference was not statistically significant (n=32, McNemar test NS).

At Time 2, 53.1% (17) reported having current concerns. This did not differ from the proportion of these respondents who reported having concerns at Time 1 (68.8% (22), n=32, McNemar test, NS).

There was, however, a statistically significant difference between the proportion reporting concerns before hospice care and the proportion reporting current concerns at Time 2 (93.5% (n=29) vs 54.8% (n=17); n=31, McNemar test p=0.01).
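A hedged sketch of the kind of paired test reported above: McNemar's test applied to a 2×2 table of 'any concerns' responses before hospice care versus at Time 2. The counts, like everything in the block, are invented for illustration.

```python
# Illustrative only: McNemar's test for paired yes/no 'any concerns' responses.
# The 2x2 table counts are invented for demonstration, not study data.
from statsmodels.stats.contingency_tables import mcnemar

# Rows: concerns before hospice care (yes, no); columns: concerns at Time 2 (yes, no)
table = [[15, 14],   # yes before: 15 still yes, 14 now no
         [ 2,  0]]   # no before:   2 now yes,    0 still no

result = mcnemar(table, exact=True)  # exact binomial version, suited to small samples
print(f"statistic={result.statistic}, p-value={result.pvalue:.3f}")
```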

Patients could list up to three concerns on each occasion. At Time 1, in relation to their first concern, half (51.6%, n=16) reported that ‘things had got much better’, 29% (n=9) reported that it had ‘got a little better’, 12.9% (n=4) reported ‘no change’ and 6.5% (n=2) that it had got ‘a little worse’ (three patients with no concerns are excluded). The proportion reporting that ‘things had got much better’ was lower for the second concern (39.1%, n=9) and the third concern (30.8%, n=4) (again, those with no concerns are excluded).

At Time 2, 20.0% (n=5) reported that their first concern had ‘got much better’, 12% (n=3) reported ‘things had got a little better’, 56% (n=14) reported ‘no change’ and 12% (n=3) reported that their concern had got ‘a little worse’. Those with no concerns at Time 1 are excluded. Sample sizes for second and third concerns are too small for comment.

The extent of change respondents reported in their first concern since coming to the service at Time 1 was compared with the extent of change they reported at Time 2 in their first concern at Time 1 (things had got much better 51.6% (n=16), a little better 29% (n=9), no change 12.9% (n=4), a little worse 6.5% (n=2) versus 20% (n=5), 12% (n=3), 56% (n=14), 12% (n=3), respectively; Wilcoxon test, p<0.001; 10 reported no concerns at Time 2). They therefore reported greater change between the beginning of hospice care and Time 1 than between Time 1 and Time 2.

Overall impact of hospice care

At Time 1 and Time 2, patients were asked whether the care from the hospice services ‘had made a difference to how things are going at present’. At Time 1, 89.3% (n=25) reported that services had made ‘a lot’ of difference, and 10.7% (n=3) that they had helped ‘a little bit’. At Time 2, 85.7% (n=24) reported that they had helped ‘a lot’, 3.6% (n=1) ‘a little bit’, 7.1% (n=2) ‘no, not much’ and 3.6% (n=1) ‘no, not at all’. There was no statistically significant difference in the overall ratings of the impact of care provided by the hospice services between Time 1 and Time 2 (Wilcoxon test, NS).

POS

Table 2 gives mean and median scores on individual items in POS at Time 1 and Time 2, together with p values from the Wilcoxon Test. No significant differences between time points were noted although the increase in family anxiety scores over time approached statistical significance.

Table 2

Change in POS scores over time

Completion times and preferences

The researcher recorded the time taken to complete POS and SKIPP: POS mean 11.4 min (median 10 min); SKIPP mean 7.3 min (median 7 min). This difference was not statistically significant. Although patients were asked which of the measures they preferred, only five responded. Of these, two expressed no preference while three preferred SKIPP because it was shorter than POS.

Phase III—test-retest reliability

Despite considerable effort, only 13 patients thought by staff to be in a stable condition at Time 2 were willing to complete SKIPP again 24 h later at Time 3 to test reliability. Cohen's κ is unsuitable for sample sizes below 44 when assuming a relative error of 30%.22 Data are therefore categorised for each questionnaire item as ‘agreement’ when precisely the same score was given on both occasions, ‘increased’ when the second score was higher than the first, and ‘decreased’ when it was lower (table 3). This basic analysis shows that 70% or more of responses at Time 2 and Time 3 agreed, except on the two QoL items, where 64% agreed on the first question and only 36.4% on the second.

Table 3

Agreement between test and retest scores (n=13)
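The descriptive analysis used in place of Cohen's κ can be reproduced in a few lines: each test-retest pair is categorised as ‘agreement’, ‘increased’ or ‘decreased’, and the percentage agreeing is reported. The sketch below uses invented scores, not the study data.

```python
# Illustrative only: the simple agreement categorisation described above.
# Paired item scores are invented examples, not study data.
from collections import Counter

time2 = [5, 4, 4, 3, 5, 2, 4, 3, 5, 4, 2, 3, 4]   # hypothetical Time 2 scores
time3 = [5, 4, 3, 3, 5, 3, 4, 3, 5, 4, 2, 4, 4]   # hypothetical Time 3 re-test scores

def categorise(first, second):
    if second == first:
        return "agreement"
    return "increased" if second > first else "decreased"

counts = Counter(categorise(a, b) for a, b in zip(time2, time3))
percent_agree = 100 * counts["agreement"] / len(time2)
print(counts, f"-> {percent_agree:.1f}% agreement")
```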

Phase IV

A number of decisions were made about the format and usage of SKIPP in the light of Phases II and III:

  1. Phase II data indicate that using SKIPP once can show patients’ views of changes in their QoL and concerns as a result of the care they had received, and of the overall impact of care. Therefore a single administration of SKIPP would be sufficient when used for quality improvement purposes.

  2. The questionnaire initially elicited patients’ nominated concerns, which ensured that the measure truly reflected patients’ own opinions. However, textual data requires considerable analysis and this limits its value in a routine outcome measure. Therefore a list of key concerns was derived from Phase II data by content analysis: pain, breathlessness, nausea and vomiting, appetite or weight loss, generally feeling ill, difficulty sleeping, difficulty moving about, feeling depressed or alone, worries about your family, money worries, your future. ‘Something else’ is also included, with space to write an additional concern.

  3. Patients were able to list their three main concerns and around 40% did indeed identify this number. Giving three options makes analysis more complex because of the need to combine incidence and outcomes data across concern variables. It was therefore decided that including only one concerns question in SKIPP was the best balance between capturing key concerns and producing a sufficiently concise and user-friendly measure.

  4. A limitation of using patient-generated outcome measures is that patients may nominate different factors on each occasion, making it impossible to track a particular factor over time. Two adapted questions from POS are therefore included in the SKIPP, the first in order to assess service effectiveness on pain, seen as an indicator of the success of symptom control in general.

    The POS pain question responses were simplified as they contain descriptors of each grade of pain which increase the complexity of question wording. The impact of these changes on the validation of this question is unknown but it performed well in terms of response spread and missing data, indicating that it is likely in this population to be able to detect change and that it is acceptable to respondents.

  5. The second POS question included is that on family anxiety. This had a high proportion of missing data, with 38% of respondents failing to answer it. However, concerns about their families emerged as an important issue, and the difference between Time 1 and Time 2 approached statistical significance. The response categories have again been simplified.

  6. Recent research at St Christopher's into the epidemiology and treatment of depression in palliative care has confirmed its importance to well-being in this patient group and a screening question for depression was therefore included in SKIPP.23–25

Discussion

The objectives of SKIPP, the new PROM whose development is reported here, were to design and validate a measure for routine use by hospice and palliative care services to detect patients’ perceptions of the impact of the service on their well-being while providing a broad indication of patients’ own perceived QoL. This information has the dual purpose of directing the efforts of the caring team and providing quality assurance evidence for commissioners, regulators and prospective users of the service. St Christopher's Hospice was already using a satisfaction questionnaire with patients but the lack of validation, the uniformly high apparent rates of satisfaction that emerged and the lack of detail about patient concerns meant that this was unsatisfactory even for internal use. The findings presented in this paper demonstrate that the new measure can fulfil its purpose of detecting patients’ perceptions of change in relation to their QoL and main concerns. It is also able to assess how far patients feel they have benefited from the care provided. Moreover it appears that SKIPP can be completed by all but the sickest patients, albeit with help.

SKIPP was designed to address the phenomenon of response shift, in which recalibration of a person’s internal assessment of their QoL or a symptom over time, a change in their values, or a reconceptualisation of what is being measured tends to obscure the degree to which change is perceived to have occurred.26 ,27 Response shift, while adaptive for the patient, is an issue in palliative care research because further deterioration in health may lead patients to consider that their previous QoL was actually rather better than they had thought at the time, as they have now experienced how much worse things can get. Alternatively, they may adjust to states that they would previously have considered indicative of poor QoL and upgrade their current QoL. Asking patients to assess their current and previous QoL at the same time, as in SKIPP, is an attempt to overcome this. Some evidence for the impact of response shift on the ability of PROMs to detect change over time in this population comes from the finding that respondents’ rating of their QoL at the second interview was significantly higher than the retrospective rating, made at the same interview, of their QoL at the first interview, while it did not differ significantly from the rating of QoL made contemporaneously at the first interview. Response shift may also account at least in part for why changes over time were not found in POS scores, although POS has been shown to detect change in other studies.10 The sensitivity of POS may have been reduced in this study by large amounts of missing data on most POS questions. Missing data are not reported in other POS studies, and users are instructed to complete each question: interviewers using structured questionnaires have an important role to play in ‘educating’ the respondent about their task21 and it is unclear whether the missing data here represent a failure of interviewer technique, or whether patients did not want or were unable to give numerical responses.

The fact that patients were able to differentiate clearly between their QoL, concerns and well-being before hospice involvement and their current status, even after as short a period as a week, highlights the importance in palliative care research studies of ensuring that baseline measures are collected before patients first receive the intervention or have contact with the service being evaluated. In reality, this has often proved difficult. The method adopted in SKIPP may provide an alternative approach. The ability of our measure to assess organisational impact from a single use is important in this rapidly deteriorating population, but may also be applicable to other settings characterised by short stays and clinical instability. Additionally, it avoids the administrative load of multiple forms requiring data entry and analysis.

SKIPP was designed to be useful in improving individual patient outcomes, as well as in quality assuring service provision. However, the part of the measure that identifies patients’ concerns is necessarily a compromise between the clinical imperative to identify each patient’s needs fully and the practical organisational requirement to be able to analyse the changing scope of those needs in as simple a way as possible. Information from these questions can be used to understand the main concerns of patients receiving care from the service, and to compare the main concerns of patients receiving care from different parts of the service (inpatient, home care, day care). It can also be used to understand how the concerns of different patient groups differ and to inform appropriate service provision.

It is acknowledged that the sample size in each development phase is small, a result of the type of population and the use of a single organisation. Incomplete data are also a problem. However, these did not prevent the emergence of statistically significant differences in the responses. The test-retest comparisons were particularly affected by the sample size, which prevented the use of planned statistics. Nonetheless, agreement on all but one of the assessments was over 70%, suggesting that they have reasonable reliability: further psychometric testing would be needed to establish this.

SKIPP is intended to be simple, easily understood and used by staff and patients. It is also designed to be understandable by commissioners in showing user views of the benefit they have derived from the organisation's care. In routine clinical use within St Christopher's, now amounting to a total of over 1000 patients across inpatient and community settings, SKIPP has continued to show a consistent pattern of outcomes and to influence clinical priorities by revealing previously unspoken patient concerns. It has the potential to function as a benchmarking tool within palliative care and is currently being trialled in a number of other organisations.

The final version of SKIPP is available from St Christopher's (http://www.stchristophers.org.uk).

Acknowledgments

We would like to express our thanks to the patients who took part in this study, to the staff at St Christopher's who made it possible, and to Angie Rogers who contributed to this research study.

References

Footnotes

  • Contributors JA-H, NS, BM, RH and PH devised the study. JA-H was the principal researcher and was responsible for research proposal design, supervision and management of the research including analysis, writing this paper and its submission. AR contributed to research design, obtaining ethical permission, data collection and data analysis. KH contributed to data analysis and paper drafting. RH, PH, BM and NS led the initial development and subsequent revisions of SKIPP in the light of research findings, and its implementation. All authors contributed to writing this paper and reviewed it for critical comment: NS led the revision of several drafts.

  • Funding St Christopher's Hospice.

  • Competing interests None.

  • Patient consent Obtained.

  • Ethics approval Research Ethics Approval for the study was obtained from the Southampton and South West Hampshire Research Ethics Committee B (07/H0504/114).

  • Provenance and peer review Not commissioned; externally peer reviewed.