Background As well as facilitating patients’ wish to die at home, evaluating quality of care in this setting is essential. Postbereavement surveys with family members represent one assessment method. ‘Care Of the Dying Evaluation’ (CODE) is a 40-item self-completion postbereavement questionnaire, based on the key components of best practice for care of the dying.
Aim To assess the validity and reliability of CODE by conducting: cognitive ‘think aloud’ interviews; test–retest analysis; and assessing internal consistency and construct validity of three key composite scales.
Design Postbereavement survey to next-of-kin (NOK).
Setting/participants 291 NOK to patients who died at home in Northwest England from an advanced incurable illness were invited to complete the CODE questionnaire. Additionally, potential participants were asked to undertake a cognitive interview and/or complete CODE for a second time a month later.
Results 72 bereaved relatives (24.7% response rate) returned the completed CODE questionnaire, and 25 completed CODE for a second time. 15 cognitive interviews were undertaken. All interviewees found CODE sensitively worded and easy to understand. Minor revisions were suggested to provide additional clarity. Test–retest analysis showed all except one question had moderate or good stability. Although the ENVIRONMENT scale was not as relevant within the home setting, all three key composite scales showed good internal consistency and construct validity.
Conclusions ‘CODE’ represents a user-friendly, comprehensive outcome measure for care of the dying and has been found to be valid and reliable. CODE could potentially be used to benchmark individual organisations and identify areas for improvement.
- Terminal care
- Quality of life
On a national and international level, enabling more people to die in their preferred place of care continues to be a priority.1 ,2 With appropriate support, this preference tends to be at home3 and in the UK, up to 74% of people express this wish.4 ,5 Within several developed countries, there continues to be disparity between the number of patients who express this preference and the number who actually achieve it.6 ,7 Additionally, little is known about the quality of care provided to patients and their families and whether or not they receive better care at home compared with institutional care.8 The UK Quality Standards for the End of Life stated that people in the last days of life should be ‘identified in a timely way and have their care coordinated and delivered in accordance with their personalised care plan, including rapid access to holistic support, equipment and administration of medication.’9 It is recognised that there can be shortcomings in the provision of care for dying patients, with basic principles of palliative care not always being practised. Hence, measuring and determining the quality of patient care and the level of family support remain as important as enabling choice about where patients die.
The quality markers and measures arising from the UK End of Life Care Strategy2 proposed that bereaved relatives’ surveys, such as ‘Views Of Informal Carers—Evaluation of Services’ (VOICES),10 form a useful method of assessing quality from the user perspective. Another such instrument is ‘Care Of the Dying Evaluation’ (CODE),11 a 40-item self-completion postal questionnaire developed to assess the quality of care and the level of support provided to patients and their families in the last days of life. CODE represents a shortened version of the original instrument, ‘Evaluating Care and Health Outcomes—for the Dying’ (ECHO-D). ECHO-D is a 91-item questionnaire (plus 14 ‘stem questions’), distinct from other postbereavement questionnaires in that it specifically links to key components representing best practice for ‘care of the dying’ (the last days of life and the immediate postbereavement period). ECHO-D was developed from the literature on a ‘good death’; review of existing postbereavement questionnaires (from both within and outwith the UK); and the goals of an integrated care pathway, the Liverpool Care Pathway for the Dying Patient.12 ECHO-D was used with over 700 bereaved relatives within a hospice and hospital setting and was shown to be valid, reliable and sensitive in detecting inequalities in care and areas of unmet need.11 ,13–15 CODE represents a revised, shortened and more user-friendly version of this instrument and is based on the ‘key quality indicator’ questions within ECHO-D. Key questions were selected from ECHO-D to form CODE based on the following criteria for each question: clinical importance; face, content and construct validity; test–retest reliability and internal consistency; number of missing responses; and response variability. These objective factors were used to provide an overall evaluation and determine the key questions which should form CODE.
The process was undertaken by the principal investigator (CRM), and any queries were discussed with the research supervisor (JEE). Questions within CODE ask about aspects of symptom control, communication, provision of fluids, place of death, and emotional and spiritual support. Three key composite scales, originally developed and validated from the ECHO-D questionnaire, were further revised and analysed within CODE. The CARE, COMMUNICATION and ENVIRONMENT scales were created by grouping together question items with similar conceptual themes and Likert scale response options. To date, neither ECHO-D nor CODE has been used to assess quality of care for those who died at home, although both were developed for use in all care settings.
The aim of this study was to assess the validity and reliability of CODE to ensure that, as a shortened questionnaire, it was perceived to be acceptable, sufficiently comprehensive and relevant for use within a community setting by bereaved relatives. This study was conducted within the context of a larger study which sought to assess quality of care for those who died at home in a specific community catchment area; initial findings are reported elsewhere.16
The objectives were to:
use cognitive ‘think aloud’ interviews to ensure CODE had robust face and content validity
reassess the stability over time of the individual CODE questions by conducting test–retest reliability analysis
assess construct validity and internal consistency of the three key composite scales.
Patients who had an expected death in their own home in a specific community catchment area within Northwest England between July 2011 and December 2012 were identified from the Preferred Place of Care (PPC) database. The following inclusion criteria were applied: patient was over 18 years of age; had an advanced incurable illness; and had received care from a healthcare professional working within the study's community catchment area. The PPC database records all patients who were expected to die in their own home and whose care was supported by members of the community healthcare team. Data obtained from the PPC database were cross-referenced with a second database held by the community team to obtain the address of each deceased patient. Due to data protection restrictions within the host community organisation, we were not able to have direct access to the next-of-kin details because of concerns about verifying the accuracy of these data. Hence, patients who died within a nursing or residential home were excluded. A potential sample of 291 patients was identified. Initially, we tried to recruit potential participants via community nursing teams (‘District Nurses’), but due to low recruitment rates, the method for initial approach of potential participants had to be revised and a postal survey was undertaken.
Next-of-kin were sent an information pack (via the patient's home address) a minimum of 2 months after the bereavement, containing: a covering letter; participant information sheet; response form; copy of the ‘CODE’ questionnaire; and a freepost envelope for returning the questionnaire. Although there is no specific guidance regarding the ideal time for approaching bereaved relatives, an interval of 2–3 months has generally been used in previous studies and mirrors our ethically approved work with ECHO-D.
Potential participants were invited to complete and return the CODE questionnaire, with a reminder letter sent out approximately 4 weeks after the first mailing to non-respondents. Within the CODE questionnaire, participants were asked if they were willing to:
be interviewed about their experience of completing the CODE questionnaire
complete the CODE questionnaire again approximately a month later.
For those willing to be interviewed, following written informed consent, a semistructured cognitive ‘think aloud’ interview was conducted by one of the researchers (AG), a trained counsellor in bereavement support. The ‘think aloud’ process prompts participants to articulate their thoughts (or ‘think aloud’) as they read a question, recall information and turn the information into an answer. The process helps gain understanding about whether questions have been understood and how answers have been formulated.17 A ‘retrospective’ approach was used where the CODE questionnaire was completed by the participant and returned to the research team for review prior to the interview. Immediately before the interview commenced, the CODE questionnaire was handed back to the participant. All interviews were digitally recorded and were undertaken either in the participant's own home or in one of two local hospices.
For those willing to repeat the questionnaire, a second copy was sent out a month later, in keeping with the time frame used in our previous studies13 ,14 and within other postbereavement studies.18
As this was an exploratory study assessing the feasibility and validity of using CODE in this care setting, formal sample size calculations were not undertaken; the aim was to obtain 100 completed CODE questionnaires. As the response rate was anticipated to be approximately 35%,14 ,19 ,20 almost 300 potential participants (n=291) were approached. Cognitive interviews would continue to be conducted until data saturation was reached regarding new issues about the content or clarity of CODE.
The cognitive interviews were transcribed verbatim and analysed using a content analysis framework, which is regarded as an objective and systematic way of evaluating phenomena.21 ,22 Data were refined into specific categories of words and phrases with shared meaning. A random selection of interview transcripts (n=8) was independently reviewed by a second researcher (BAJ), not directly involved in the data collection, to check the coding, and any discrepancies were discussed with a third researcher (CRM).
Quantitative data were analysed using SPSS V.16 and Statistical Analysis System V.9.2. The stability of CODE over time was assessed using the following measures: percentage agreement; κ statistic23 (Cohen's for nominal response options and weighted for ordinal response options); and Spearman's correlation coefficient (for ordinal data). As each analysis measure provides different information about the overall stability of a question item, this provides more robust data compared with using one method alone. The criteria for good stability over time were defined as percentage agreement >70%; κ>0.6; and r>0.7;24 ,25 and for moderate stability over time as percentage agreement >30%; κ>0.40; and r>0.3.
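For illustration, the three stability indices can be computed for a single ordinal question item as follows. This is a minimal sketch using hypothetical scores; the study itself used SPSS and SAS, not this code.

```python
# Hypothetical test-retest scores for one ordinal item (1-5 Likert scale),
# the same ten respondents at round 1 and round 2 a month apart.
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

round1 = np.array([5, 4, 4, 3, 5, 2, 4, 5, 3, 4])
round2 = np.array([5, 4, 3, 3, 5, 2, 4, 4, 3, 4])

# Percentage agreement: proportion of identical responses on both rounds.
pct_agreement = 100 * np.mean(round1 == round2)

# Weighted kappa for ordinal options (Cohen's unweighted kappa would be
# used for nominal options); chance-corrected, unlike raw agreement.
kappa = cohen_kappa_score(round1, round2, weights="linear")

# Spearman's rank correlation coefficient for ordinal data.
rho, _ = spearmanr(round1, round2)

print(f"agreement={pct_agreement:.0f}%  kappa={kappa:.2f}  rho={rho:.2f}")
```

Against the thresholds above, an item with agreement >70%, κ>0.6 and r>0.7 would count as having good stability over time.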
For each of the three key composite scales (CARE, COMMUNICATION and ENVIRONMENT), Cronbach's α was used to assess the internal consistency25 ,26 (with values >0.70 being regarded as satisfactory)26 and item–total correlations (for values of 0.40 or above).25 Confirmatory factor analysis was used to assess construct validity. The suitability of questions was examined by inspection of the correlation matrix and the Goodness of Fit Index (GFI).27
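As a sketch of these scale-level checks, Cronbach's α and corrected item–total correlations can be computed directly from a respondents-by-items score matrix; the data below are hypothetical Likert scores, not the study dataset.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical composite scale: 8 respondents, 5 items scored 1-5.
scale = np.array([
    [5, 5, 4, 5, 4],
    [4, 4, 4, 3, 4],
    [2, 3, 2, 2, 3],
    [5, 4, 5, 5, 5],
    [3, 3, 3, 4, 3],
    [1, 2, 1, 2, 2],
    [4, 5, 4, 4, 4],
    [3, 2, 3, 3, 2],
])

alpha = cronbach_alpha(scale)  # values >0.70 regarded as satisfactory

# Corrected item-total correlation: each item against the sum of the
# remaining items; values below 0.40 would flag a poorly fitting item.
item_total = []
for j in range(scale.shape[1]):
    rest = scale.sum(axis=1) - scale[:, j]
    item_total.append(np.corrcoef(scale[:, j], rest)[0, 1])
```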
A total of 72 bereaved relatives (24.7% response rate) returned the completed CODE questionnaire. In all, 29 participants indicated they were willing to complete the questionnaire for a second time with 25 subsequently returning the second CODE questionnaire. Additionally, 25 were willing to be interviewed although seven subsequently could not be contacted (despite several attempts) and three changed their mind, resulting in 15 completed interviews. Although a further two participants subsequently indicated they were willing to be interviewed, as data saturation had been reached, these were not conducted. The interviews lasted between 1 and 2 h.
Participants were mainly women (n=47, 65.3%), a spouse or partner (n=40, 55.6%) and aged 50 years or above (table 1). The demographic details of non-responders were not available, limiting our ability to assess the representativeness of our sample. The majority of deceased patients had advanced incurable cancer (n=57, 79.2%), with others having cardiac failure (n=10, 13.9%), chronic obstructive pulmonary disease (n=2, 2.8%), end-stage renal disease (n=1, 1.4%) and there were two missing responses.
All respondents reported that they had found CODE easy to understand and complete with only two participants stating they did not like the use of the Likert scale response options. The majority of respondents (n=13) found CODE appropriate in length and the time taken to complete the CODE questionnaire varied from 15 min to 2 h. This potentially reflects different patterns of completion by participants, with some completing CODE in one go, and others taking longer and completing it in stages, a pattern similar to our previous studies with ECHO-D.13 Although the language used within CODE was thought to be sensitive, the process of actually receiving the questionnaire did evoke some distress, as one participant stated:
‘That's a bit insensitive; that's what I originally thought because straight away my dad was like I can't deal with that...’ (ID 163, female, age 50–59)
In particular, two participants found the fact that the information packs were addressed ‘to the carer of (patient name)’ upsetting.
All interviewed participants found the interview process itself cathartic and therapeutic to some degree, independent of the quality of care received. Equally, some participants found it beneficial to complete the questionnaire, with one participant going so far as to say:
It was a release for me to put it on paper. (ID 162, female, age 60–69)
Transcripts were initially coded into three main categories, namely, recall; comprehension; and sensitivity, and then further refined into smaller subcategories exploring each of these further. Specific examples of feedback from each category and revisions, where required, are shown in table 2 and more detailed information about these findings will be presented in a subsequent manuscript.
Additional areas of care that were perceived by bereaved relatives to be particularly pertinent when patients died at home included: the availability of specific supportive equipment (hospital bed, air mattress) to prevent hospital admission and the supportive role of the community pharmacist. Additionally, questions relating to the availability of written information to support verbal discussions about the common symptoms seen as someone is dying and practical advice about what to do after death were deemed important.
All except one question (discussion about a ‘drip’ (κ=0.38)) showed moderate or good stability over time (see online supplementary table). Additionally, although the question asking about the patients’ spiritual needs (κ=0.4) showed moderate stability, the indices for the percentage agreement, κ statistic and Spearman's correlation were all within the lower end of this ‘moderate stability’ category.
Key composite scales
The scores for the ENVIRONMENT, CARE and COMMUNICATION composite scales showed a wide range of responses, although mean scores were generally high, reflecting good perceptions of these aspects of care. Initial analysis, however, indicated issues with using the ENVIRONMENT scale for those who had died at home (table 3). Question 5, which asked about the cleanliness of the ‘ward area’, was generally answered ‘not applicable’ (n=30) and also had 15 missing responses. Hence, this question was omitted from further analysis of this scale.
The internal consistency of the three composite scales was good (Cronbach's α>0.79; all item–total scores, except one, >0.4), suggesting that the items had high inter-item correlations and worked well together as individual scales (table 3).
The GFI was 0.72, confirming the suitability of the data for factor analysis. Postanalysis inspection of the correlation matrices showed that all correlations, with one exception, exceeded the r>0.30 threshold (table 3). The exception was within the COMMUNICATION scale, where the correlation between question 9 (doctors had time to listen) and question 16 (involvement in decisions) was 0.20. Apart from question 16, which had a factor loading of 0.39, all other question item factor loadings were high (range 0.69–0.97), showing that each scale did represent a single construct.
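A rough sense of this single-construct check can be conveyed with a principal-component approximation over hypothetical data. This sketch is not the confirmatory factor analysis the study ran in SAS; it only illustrates how uniformly high loadings on one component support a single-construct interpretation.

```python
import numpy as np

# Hypothetical 4-item scale: 8 respondents, items scored 1-5.
scale = np.array([
    [5, 5, 4, 5],
    [4, 4, 4, 3],
    [2, 3, 2, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
    [4, 5, 4, 4],
    [3, 2, 3, 3],
])

# Inter-item correlation matrix; a single-construct scale shows
# uniformly substantial correlations (pairs below r=0.30 were flagged).
corr = np.corrcoef(scale, rowvar=False)

# Loadings on the first principal component of the correlation matrix;
# uniformly high loadings suggest the items tap one underlying construct.
eigvals, eigvecs = np.linalg.eigh(corr)   # eigenvalues in ascending order
loadings = np.abs(eigvecs[:, -1]) * np.sqrt(eigvals[-1])
variance_explained = eigvals[-1] / corr.shape[0]
```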
Overall, CODE was perceived by bereaved relatives to have good face and content validity. Feedback from the cognitive interviews helped further revise and refine CODE's content to improve its clarity and ensure it is sufficiently comprehensive to capture pertinent issues. Revisions, however, were minor and generally related to the preamble information at the start of each section rather than the individual questions. Generally, the test–retest analysis showed CODE's questions were stable over time and this builds on the previous work undertaken with ECHO-D. Although the ENVIRONMENT scale was not as relevant for use with those whose family member had died at home, each of the three composite scales—CARE, COMMUNICATION and ENVIRONMENT—worked well together in terms of their internal consistency and construct validity.
There are several limitations of the study that need to be considered when interpreting the results. The study had a response rate of 24.7%, lower than anticipated and lower than in previous similar studies. Additionally, the demographic data of non-respondents were unavailable, limiting our ability to judge how representative our sample was of the population as a whole. Our participants tended to be women, of white ethnic origin and of Christian religious affiliation. The sensitive and challenging nature of this type of study is likely to be the main factor affecting the overall return rate. Additionally, having access to potential participants’ direct home addresses would have been ethically and practically preferable and would have been likely to improve our overall response rate. Restrictions imposed upon us regarding access to next-of-kin details meant initial letters of contact were insufficiently personalised. This has been fed back as a major limitation to the community host organisation, with recommendations for alterations if future studies of this nature were conducted. Our numbers were lower than anticipated, and this needs to be considered in the context of our statistical analysis. Although recommendations about sample sizes for factor analysis vary, these include having a total sample size of 150 and a ratio of participants to question items of at least 5:1.28 Construct validity, however, should not be regarded as a one-off assessment but as an ongoing process, potentially determined after years of use of an instrument, in different settings and with different populations.29 We propose to undertake further work to seek confirmation of these findings.
Nevertheless, the assessment of construct validity and the test–retest reliability analysis build on earlier work conducted with the original instrument, ECHO-D, which strengthens our results.11 ,13 ,15 Of the 30 clinical questions within CODE (the additional 10 ask about demographic details for the patient and participant), 24 were taken directly and unchanged from ECHO-D and had already been assessed for test–retest reliability. Additionally, more than one method of statistical analysis was used to assess test–retest reliability, providing more robust data. For example, percentage agreement simply assesses the proportion of times that the same response is given on round 1 and round 2; the probability of providing the same answer on two occasions is therefore influenced by the number of response options, that is, stability over time is more likely with dichotomous response options than with 5-point Likert scale responses. By additionally using the κ statistic and, for ordinal response options, Spearman's correlation coefficient, these limitations are minimised. Our analysis within this study showed only two questions had a κ of 0.40 or below, indicating CODE's stability over time. Although some postbereavement questionnaires, such as the ‘Toolkit of Instruments to Measure End-of-life care’ (TIME), have undergone psychometric testing similar to that undertaken for CODE, this is not the case for all available postbereavement instruments.30 In light of the issues raised with question items relating to spiritual support, further refinement of the preamble to this section has been undertaken to include a definition of ‘religious and spiritual support’. In view of our overall sample size, excluding specific questions at this stage of validation was not thought to be appropriate, but further reassessment with a larger sample and within different settings would form part of our ongoing work.
Additionally, piloting the use of CODE as an outcome measure in countries other than the UK would also form part of future work streams.
With the increased likelihood that bereaved relatives’ surveys will form part of the way we evaluate quality of care, there are some key issues that will need to be addressed. First, having solid and reliable data systems is fundamental for studies of this nature to be conducted. Often, existing data sources have been developed for purposes other than assessing quality of care, and so re-evaluation of the best ways to use existing sources is required.31 Second, engagement with clinicians to help them appreciate the need for, and benefits of, these types of assessment is important. For example, although we initially tried to approach potential participants using local healthcare professionals already known to the deceased, this method was unsuccessful due to a number of factors: gate-keeping by healthcare professionals; the perception that involving bereaved relatives in evaluations was inappropriate; lack of confidence among healthcare professionals in approaching the subject; and lack of time. When this type of research is carefully conducted, however, bereaved relatives can find it a positive experience,32 a finding similar to that of our interview participants. Third, trying to obtain as representative a sample as possible for postbereavement surveys is important. Our participants tended to be the next-of-kin of those who had died from cancer. This is consistent with previous findings that having a cancer diagnosis, compared with cardiovascular or respiratory disease, appears to increase the likelihood that death at home will be achieved.4 This may reflect difficulties in prognostication or differing levels of support, and the reasons behind these differences need further exploration. Additionally, ethnic minority groups are often under-represented in research studies,33 and all of our participants described themselves as of ‘white’ ethnic origin.
Previous work with VOICES suggests that direct involvement and engagement with representatives from local communities can improve response rates more than simply having translators available or directly translating the survey instrument.34
One of the key UK national strategies is to enable more patients to die in their place of preference, which is generally their own home. Ensuring not only that home deaths are achieved but also that a consistently good quality of care is provided in the home setting is of fundamental importance. Although prospective patient reporting is desirable, obtaining this is not without practical and ethical challenges. Hence, bereaved relatives’ surveys represent a key component of healthcare assessment. The ‘CODE’ questionnaire represents a user-friendly, comprehensive outcome measure for care of the dying, and this study builds on earlier work supporting question item validity and reliability. With its specific focus on care in the last days of life, it could potentially be used as a core outcome measure by individual organisations. Within the context of a larger study, CODE could allow benchmarking of the current quality of care on a national level, identify key areas for improvement and help sustain a continuous improvement in the quality of care for dying patients.
We wish to thank all the bereaved relatives who participated in this study.
Contributors CRM was involved with the conception and design of the study, including the development of the CODE questionnaire, monitored the data collection for the whole study, wrote the statistical analysis plan, cleaned and analysed the data, and drafted and revised the paper. She is the guarantor. CL was involved with the conception and design of the study, was responsible for the conduction of the postal postbereavement survey, and revised the draft paper. AG was responsible for the conduction and analysis of the cognitive interviews and drafted the cognitive interview part of the draft paper. BAJ was responsible for the conduction and analysis of the cognitive interviews, and revised the paper. TFC wrote the statistical analysis plan, analysed the data, drafted and revised the paper. SRM was involved with the conception and design of the study, helped oversee the project planning and revised the draft paper. AW was involved with the conception and design of the study, was responsible for the data entry for the questionnaire responses and analysed the data, and revised the paper. JEE was involved with the conception and design of the study, and revised the paper. All authors gave final approval of the version to be published.
Funding This research was supported by Liverpool Community Health NHS Trust and Liverpool John Moores University.
Competing interests None.
Ethics approval Ethical approval was obtained from NRES Committee North West—Liverpool East (11/NW/0159).
Provenance and peer review Not commissioned; internally peer reviewed.
Data sharing statement As this study is not a clinical trial, routine data sharing would not be undertaken. However, anonymised data would be available for sharing on request from the Marie Curie Palliative Care Institute Liverpool.