
Instruments used to measure the effectiveness of palliative care education initiatives at the undergraduate level: a critical literature review
  1. Rosemary Ann Frey,
  2. Merryn Gott and
  3. Hayley Neil
  1. School of Nursing, University of Auckland, Auckland, New Zealand
  1. Correspondence to Dr Rosemary Ann Frey, School of Nursing, University of Auckland, PO Box 92019, Auckland Mail Centre, 1142 Auckland, New Zealand; r.frey{at}


Background The increase in the numbers of patients with palliative care needs has resulted in growing pressures on the small number of specialist palliative care providers within the New Zealand context. These pressures can potentially be eased by ensuring an adequately trained workforce, beginning with undergraduate training in the healthcare field. The goal of the present review is to ascertain what tools exist to measure the effectiveness of undergraduate palliative care education initiatives.

Method A systematic review of qualitative and quantitative literature was undertaken. Searches within ERIC, CINAHL Plus, Medline and Medline in Progress, and Google Scholar databases were conducted for the period 1990–2011. A checklist adapted from Hawker et al was used to select and assess data.

Results 14 of the 112 articles met the inclusion criteria. Overall, inconsistencies in the amount of validation information provided and a narrow focus on aspects of palliative care competence were apparent. No universally applicable validated questionnaire to assess the effectiveness of undergraduate palliative care education could be identified.

Conclusions The increased focus by educational institutions on instilling palliative care skills in healthcare students necessitates the development of comprehensive and validated tools to evaluate the effectiveness of education initiatives.

  • Terminal care
  • Education and training
  • Service evaluation
  • Supportive care



For developed countries, the ageing and dying of the ‘baby boomers’ will be one of the key public health challenges of this century (Gott and Ingleton, 2011).1

Over the next 40 years, the global number of deaths will almost double.2 An increasing proportion of these deaths will be as a result of chronic diseases, known to benefit from a palliative approach to care.3 However, most patients do not receive adequate palliative care and die with unaddressed physical and psychosocial needs.4 The World Health Organization has identified improved palliative care as a global health priority.5 Many countries have recognised that any strategy aimed at improving palliative care provision must include a strong focus on education and training.6

There is now widespread acceptance that palliative care should be a core component of the role of all health professionals who care for patients at the end of life.1 However, there is evidence that ‘generalists’ often feel ill equipped to identify and manage patients’ palliative care needs.7 As Keating and Teed argue: ‘growing social demand for skills in the provision of palliative care services places pressure on health professional courses to produce exemplary graduates’8 (page 5). An increasing number of institutions internationally have introduced palliative care education initiatives (eg, Palliative Care Curriculum for Undergraduates in Australia).9 Nevertheless, debate still surrounds programme adequacy. Reported deficiencies in undergraduate palliative care education include programme availability, structure and content.10

Central to effective education provision is evaluation of educational initiatives. As outlined by Bashook,11 within healthcare, methods of collecting evidence of competence can be grouped according to what each is best at measuring, including knowledge (eg, multiple choice questions, essay questions, short answer questions); decision making (eg, oral examinations); practice performance (eg, portfolios, global rating forms); and skills (eg, objective structured clinical examinations (OSCEs)). Such evaluations require valid tools to measure student outcomes.


To identify instruments to assess the effectiveness of palliative care education initiatives at undergraduate level, and to consider the psychometric properties of those instruments.


A systematic review of the literature was undertaken using a framework developed by Hawker et al.12 This framework provides a review structure adaptable to a range of methodological approaches. The review was conducted in the following stages: search strategy; inclusion criteria; assessment of relevance; data extraction and appraisal; and data synthesis.

Search strategy

A list of keywords was developed by consensus among the reviewers and relevant databases were searched, including ERIC, CINAHL Plus, Medline and Medline in Progress, and Google Scholar, for the period 1990–2011. Keywords included programme, assessment, evaluation, effect, impact, education, palliative care, terminal care, nursing undergraduate and student. Wild card searches were used to account for word variations.

Inclusion criteria

Consideration for inclusion required the following: the research must measure skill and knowledge development and/or attitudinal change following participation in a palliative care education programme; and the research participants must be students at the undergraduate level. The literature was further limited to peer-reviewed articles published in English between 1990 and 2011. Qualitative and quantitative research was eligible for inclusion.

Assessment of relevance

Three systematic and objective stages of assessment examined in turn the title, abstract and body of the paper. An initial scoping exercise, conducted by HN, involved reviewing the title and, if necessary, the abstract of the retrieved search items. Independent assessments by the two reviewers HN and RF used a checklist developed by Hawker et al.12 These assessments were then compared and any disagreements were discussed and resolved.


A total of 14 out of an initial 112 articles met the inclusion criteria. Excluded articles were not relevant to the identified criteria (see figure 1). Of the included articles, seven of the studies included instruments that assessed undergraduate nursing students,13–19 three studies included tools that assessed medical undergraduates,20–22 while one study included measures designed for physiotherapy undergraduates.23 Of the remaining studies, one assessed healthcare and medical undergraduates24 and two studies25,26 included instruments designed to assess people from a range of disciplines. Three of the articles were based on studies conducted in the UK13,20,21; six were conducted in the USA,14,16–18,25 one was conducted in Canada,15 one in Australia,16 two in India19,23 and one in Hungary.24 The results of the review are presented below as a summary and evaluation in light of the research objective. The 14 identified articles reported on the use or development of 13 questionnaire tools.

Figure 1

Flow chart of the included literature.

Overview of the instruments

The majority of studies used existing instruments.13–17,20,21,24,26 One of the studies used an instrument developed in the 1960s,26 two studies used instruments developed in the 1970s,15,24 two were developed in the 1980s,15,17 two in the 1990s14,20 and four from 2000 onwards.16,21,23,25 Two studies modified the tool for the purpose of their research, either by changing the terminology or the format.25,26 Five studies included original measures.16,18,19,22,23 An overview of the research articles is given in supplementary table 1; the measures included ranged from more widely used and tested tools (supplementary table 2) to more recent instruments (supplementary table 3).


The studies assessed in the review used a variety of indicators. Four of the studies included measures of medical knowledge.13,16,19,23 Attitudes and opinions related to palliative care delivery were also assessed,14,16–18,22,23,25 as were perceptions of confidence in dealing with issues related to palliative care delivery20,21 and frequency of experience in palliative care delivery.23 A number of studies examined attitudes and emotional reactions to death and dying.14–17,20–22,25


Theoretical knowledge was assessed by Arber13 and Kwekkeboom et al16 using the Palliative Care Quiz for Nursing (PCQN).27 Items focused on the philosophy and principles of palliative care, the management of symptoms, and psychosocial and spiritual care of individuals and families. Velayudhan et al19 developed a 20-item multiple choice questionnaire on theoretical knowledge for medical students, and another 15-item version for nursing students, including some psychosocial open-ended questions. Kumar et al23 used the Physical Therapy in Palliative Care—Knowledge, Attitudes, Beliefs and Experiences Scale (PTiPC-KABE Scale)23 to survey student physiotherapists. The 37-item self-report measure collected quantitative and qualitative data relating to the participants’ perceived knowledge, attitudes, beliefs and experiences of palliative care.

Changes to participants’ confidence in palliative care delivery were measured by Mason and Ellershaw20,21 using the Self-Efficacy in Palliative Care (SEPC) Scale.28 The 23-item SEPC contains three distinct subscales (communication, patient management and multi-professional team work). Mavis29 indicated that the self-assessed confidence of medical students correlated with performance on a variety of interventions and competence assessment measures.

Educational studies have demonstrated the importance of positive attitudes as a result of learning.30 In addition to the PCQN,27 Kwekkeboom et al16 included a 12-item scale designed by Bradley et al31 to assess the attitudes of physicians and nurses about care at the end of life. The domains included views about the roles and responsibilities of healthcare professionals in caring for terminal patients, the extent to which palliative care provides additional benefits not offered in conventional medical care, and views about the role and importance of clinician–patient communication.

Previous studies demonstrated that students who completed clinical rotations and courses in palliative care expressed more comfort with death and caring for dying patients.32 To address this aspect of effective education, Kwekkeboom et al16 assessed nursing students’ concerns about caring for dying patients using a six-item scale designed by Milton39 representing major areas of concern to nursing students. Participants were asked to rate their degree of comfort when dealing with a dying patient and their family members, their ability to locate resources needed to care for a dying patient, and their ability to handle their own emotions.

Barrere et al,14 Frommelt25 and Mallory17 all used the Frommelt Attitude toward Care of the Dying (FATCOD)33 instrument, developed by Frommelt in 1988. Schwartz et al22 explored changes to participants’ attitudes towards death using the Concept of a Good Death Measure.40 The instrument contains 17 descriptive positive statements relating to a ‘good’ death. Based on the research by Walden-Galuszko and associates,42 the measure incorporates the concepts of a ‘traditional’ versus a ‘modern’ death, and includes items based on discussions with clinicians and a literature review. The measure assesses three domains: closure, personal control and clinical criteria.

Death anxiety was measured by Mason and Ellershaw20,21 using the Thanatophobia Scale (TS)36 to measure attitudes and expected outcomes of providing palliative care. Hegedus et al24 used the Multidimensional Fear of Death Scale (MFODS),43 developed by Neimeyer and Moore in 1994, drawing on Hoelter's34 definition of fear of death: ‘an emotional reaction involving subjective feelings of unpleasantness and concern based on contemplation or anticipation of any several facets related to death’ (Hoelter34 in Hegedus et al,24 page 265). Hurtig and Stewin15 used the Confrontation–Integration of Death Scale (CIDS) developed by Klug.35 CIDS measures two areas of what Hurtig and Stewin15 described as ‘the reconciliation with death construct’: ‘death confrontation’ (contemplation of death) and ‘death integration’ (the positive emotional response to death confrontation)15 (page 31). Personal experience of death was accounted for using an open-ended question.

Schwartz et al22 used the Concerns about Dying (CD) instrument.41 The CD contains 10 descriptive statements designed to assess an individual's comfort level in caring for the dying as well as general concerns about death. The CD is split into three parts: general concerns about death and dying; spirituality; and concerns about working with the dying.

Mooney26 used the revised Collett–Lester Fear of Death Scale,38 originally created in 1969.37 The instrument contains four subscales of seven items each, relating to one's own death or the death of others.


The most common response format was a Likert scale, with 12 of the 14 included studies incorporating this format. A 1–5 scale was most frequently used.14,16,17,22–26 Hurtig and Stewin15 included the CIDS,35 which used a four-point scale in each of two subscales of 10 items (integration factor) and 8 items (confrontation factor), respectively. High scores correspond to a high degree of a factor. Kwekkeboom et al's16 ‘Concerns about caring for dying patients’ questionnaire also recorded responses in a four-point Likert format, with 0=‘not at all’ and 4=‘very much so’; higher scores indicated more concern/worry. The ‘Concept of a Good Death’40 measure used by Schwartz et al22 incorporated a four-point Likert format as well. The measure assessed the perceived essential components at the end of life along a number of dimensions, including spiritual peace, acceptance, freedom from pain and closure, rated along scales ranging from 1 ‘not necessary’ to 4 ‘essential’. The TS37 included in the studies by Mason and Ellershaw20,21 recorded responses along a seven-point Likert scale measuring the level of agreement with seven statements of negative attitude towards caring for a dying patient. The ‘Completion Survey’ measure designed by Thompson18 assessed confidence level in dealing with patients who are dying, addressing the level of comfort at the beginning of the educational intervention and at the conclusion of the programme.

The PCQN27 in the studies by Arber13 and Kwekkeboom et al16 used a true/false/don't know format to measure nurses’ knowledge. Knowledge was also assessed by Velayudhan et al19 using a multiple-choice format. Finally, the SEPC28 incorporated by Mason and Ellershaw20,21 measured confidence in performing practice-based objectives on a 100 mm visual analogue scale.

Psychometric properties

Of the 14 reviewed articles, two studies omitted validation information for measures and five studies referenced previous validation. The amount of detail in the remaining articles varied. Kumar et al23 reported the test–retest reliability for a pilot of the PTiPC-KABE Scale with 24 participants. In contrast, the SEPC28 and TS37 were rigorously validated in the studies by Mason and Ellershaw.20,21 Internal consistency, measured by Cronbach's α, was most often used.14,17,20–22 The PCQN27 in the Arber13 and Kwekkeboom et al16 studies was assessed for reliability using the Kuder–Richardson (KR-20) formula for dichotomous variables. The CIDS,35 used by Hurtig and Stewin,15 was also assessed using the KR-20. Moderate to good psychometric properties in relation to reliability were reported (coefficients ranging from 0.65 to 0.95). Split-half reliability for the Revised Collett–Lester Fear of Death Scale38 used by Mooney26 ranged from 0.72 to 0.91.38 Test–retest reliability was demonstrated for a majority of the measures incorporated in the studies,13–17,22–26 including the PCQN,27 FATCOD,33 MFODS,43 CIDS,35 Attitudes about Care at the End of Life,31 Revised Collett–Lester Fear of Death Scale,38 PTiPC-KABE Scale,23 Concept of a Good Death40 and CD41 measures. Structural validity was demonstrated through principal component analysis for measures included in three of the studies.16,20,21 Content validity was assessed for FATCOD and FATCOD Form B,25,33 and the ‘Concerns about caring for dying patients’16 measure (Kwekkeboom, personal communication).
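For readers unfamiliar with the internal consistency statistics named above, their standard definitions (textbook forms, not reproduced from the reviewed studies) are as follows:

```latex
% Cronbach's alpha for a k-item scale, where \sigma_i^2 is the variance of
% item i and \sigma_X^2 is the variance of the total score:
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^2}{\sigma_X^2}\right)

% KR-20 is the special case of alpha for dichotomous (eg, true/false) items,
% where p_i is the proportion answering item i correctly and q_i = 1 - p_i:
\mathrm{KR\text{-}20} = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}p_i q_i}{\sigma_X^2}\right)
```

Both coefficients have a maximum of 1, and values of approximately 0.7 or above are conventionally taken to indicate acceptable internal consistency, consistent with the 0.65–0.95 range reported above.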


Of the 14 articles, 13 reported response rates of 60% or greater. The study by Schwartz et al22 reported a response rate of 90% for the inter-clerkship component of the study. In contrast, the response rate for the longitudinal elective component was 53%. Frommelt25 reported the number of people in the control and experimental groups, but not the population. Eight studies incorporated longitudinal designs, each of which was subject to attrition in the post-test component.13,14,17,20–22,24,26 In the study by Barrere et al,14 this attrition comprised the loss of nine students from the traditional programme (a 13% decrease) and five from the accelerated programme (an 11% decrease) through non-completion of the follow-up questionnaire. Mason and Ellershaw20,21 reported a small number of questionnaires returned with incomplete sections, indicative of problems in formulating a response to some items in the 2008 and 2010 studies. No significant demographic differences (eg, gender, previous experience) were reported for this subgroup, although the analyses were not presented. Other issues include a lack of established validity for an instrument, as cited by Kumar et al,23 although the research cited good test–retest reliability. Question wording was also an issue in the study by Arber,13 who cited problems with ambiguity in some items found in the PCQN27 within the British context. Mallory17 also noted a limitation of FATCOD33 in its ability to identify all factors (eg, all previous education, all death experiences) that could impact on the participants’ attitudes toward care of the dying. Finally, Barrere et al14 reported limitations in the forced-choice format, which prevented any explanations for participant choices.


Evaluation of effectiveness is essential to the delivery of quality undergraduate palliative care education, and the inclusion of a valid instrument in that process is equally essential. According to Meekin et al44 (page 987), the evaluation of a palliative care education programme's effectiveness ‘should take into account the singularly broad range of knowledge, skills, and attitudes that must coalesce for a student to develop competence in the area’. Questionnaire tools have ranged from the very specific in focus (Completion Survey)18 to more inclusive measures (PTiPC-KABE Scale).23 However, no measure comprehensively addressed all areas of palliative care competence. Most measures also focused on students from a narrow range of health professions. For example, the PTiPC-KABE Scale23 assessed the palliative care related knowledge, attitudes, beliefs and experiences of physical therapy students. The measures were most often directed at assessing nursing students (PCQN,27 FATCOD,33 CIDS,35 ‘Concerns about caring for dying patients’ questionnaire,14 Completion Survey18), medical students (SEPC,28 Concept of a Good Death,40 CD41) or both (TS,36 Attitudes about Care at the End of Life,31 Palliative Care Knowledge Questionnaire19). Two were directed at more diverse populations (FATCOD Form B,25 Revised Collett–Lester Fear of Death Scale38).

Every measure in this review relied on self-report data. However, self-report data may not provide the best measure of behavioural competence. Self-assessment is by design subjective and context dependent.45 Self-reported abilities may vary from actual abilities.46 Attitudes were often assessed (eg, FATCOD, MFODS).33,43 While studies within healthcare have supported a link between attitude and behaviour,47 behaviours are influenced by multiple factors.48 Measurements of attitudes alone are therefore insufficient.

As with any research, this review has limitations. Reviewed articles were limited to those published in English and the grey literature was not searched. Some of the work in healthcare education evaluation is not published and is therefore omitted from this review.

Ultimately, measurement of the effectiveness of palliative care education initiatives cannot rely on the creation of one universal tool. To address this issue, Weisman et al49 recommend correlating indicators of perceived levels of competence with observed performance in OSCEs. Although not without limitations, the method has been used extensively in the assessment of palliative care competency.50 Mason and Ellershaw20 (page 691) recommend that: ‘the addition of observed structured clinical examinations (OSCEs) would strengthen this study and further validate the effects of the educational programme’. Thus self-assessments should be complemented by reliable and valid external sources of information.45 A multidimensional approach to assessment is required.


Supplementary materials

  • Supplementary Data



  • Contributors Rosemary Frey, Merryn Gott and Hayley Neil were responsible for the design, data analysis, interpretation, article drafts and revisions, and final approval for publication. There were no additional contributors. The sponsor did not have any involvement in the planning, execution, or write-up of the research, and the funder played no role in drafting the manuscript.

  • Funding Funding for this research was provided by the University of Auckland, New Zealand.

  • Competing interests None.

  • Provenance and peer review Not commissioned; externally peer reviewed.