Main

Cure rates for paediatric cancer are approaching 82% (Canadian Cancer Society’s Steering Committee on Cancer Statistics, 2011), but the costs of this progress include a high frequency and intensity of symptoms during treatment (Baggott et al, 2010; Poder et al, 2010; Miller et al, 2011) and chronic health conditions following completion of treatment (Oeffinger et al, 2006). In general, the symptom burden in children undergoing treatment for cancer is very high (Baggott et al, 2010; Poder et al, 2010; Miller et al, 2011). Active symptom screening is important because children undergoing cancer treatment may not voice concerns or complain. They may accept symptoms as an inevitable consequence of chemotherapy, only seeking help when the symptoms become severe (Woodgate, 2003; Gibson et al, 2010). It is important to identify symptoms by self-report rather than by proxy report. Identification of symptoms by the child, especially subjective symptoms, ensures that the child’s perspectives and experiences are captured and that the attention of clinicians and parents is focused on the symptoms most important to that child.

Within the adult oncology setting, screening and assessment of symptoms through patient self-report has been identified as an important priority (Coates et al, 1983; Griffin et al, 1996; de Boer-Dennert et al, 1997; Carelle et al, 2002). Efforts by Cancer Care Ontario (CCO) have culminated in the widespread use of a symptom screening tool based upon the Edmonton Symptom Assessment Scale (ESAS; Bruera et al, 1991). The ESAS is a validated tool that asks adult patients to rate the severity of nine common symptoms including pain, anxiety and nausea. In a satisfaction survey of 2921 patients, 87% of respondents thought the ESAS was an important tool for letting healthcare providers know how they feel (Cancer Care Ontario, 2011). Furthermore, through an initiative with CCO and the Canadian Partnership Against Cancer, evidence-based guidelines were developed to manage severe symptoms identified by the tool (Cancer Care Ontario, 2011).

In contrast, there are no symptom screening tools available for paediatric cancer patients (Dupuis et al, 2012; Tomlinson et al, 2014). In our previous research, we conducted a systematic review of symptom assessment scales that have been used in paediatric cancer and, using focus group methodology, identified that none were ideal for symptom screening (Dupuis et al, 2012; Tomlinson et al, 2014). The overall goal is to develop an electronic symptom screening tool for use in clinical practice. We began the process of item generation using a nominal group technique among paediatric cancer healthcare professionals (Tomlinson et al, 2014). A total of 44 items were generated initially; these were reduced to the 15 items considered most important for symptom screening, based on criteria articulated by the focus group (Tomlinson et al, 2014). With these 15 items, an initial draft of an instrument was developed and named the Symptom Screening in Pediatrics Tool (SSPedi; Tomlinson et al, 2014). The objective of this study was to evaluate and refine the initial iteration of SSPedi using the opinions of children with cancer and parents of paediatric cancer patients.

Materials and Methods

Subjects

Child respondents were patients 8–18 years of age with cancer undergoing active treatment. We included children as young as 8 years of age as we were confident that most children in this age range could complete electronic diaries based on previous work (Palermo et al, 2004; Stinson et al, 2008; Stinson, 2009; Alfven, 2010). Although some children younger than 8 years of age can complete electronic diaries, we wanted to ensure that most children were able to understand and complete the instrument for the early phase of instrument development. Exclusion criteria were illness severity, cognitive disability or visual impairment that precluded completion of SSPedi according to the primary healthcare team. We excluded children with cognitive disability as we wanted to distinguish between children who did not understand a SSPedi item due to their disability versus an item that was poorly worded for most children without cognitive disabilities. We excluded children with visual impairment so that child participants could provide feedback on how SSPedi was presented, even if the child could not read well. Parent respondents were parents of eligible children. All participants had to be able to understand English. Parents did not need to be parents of enrolled children, although enrolment of a parent and child from the same family was permitted. Sampling was purposive to consider variance by age, underlying diagnosis and gender.

Procedures

Respondents were recruited from The Hospital for Sick Children (SickKids) in Toronto, Canada. This study received Research Ethics Board approval from SickKids and all participants/guardians provided informed consent or assent as appropriate. Potential respondents were approached in the inpatient or outpatient setting by a member of the study team. Demographic information was obtained from respondents and from the patients’ health records. Next, child respondents were invited to complete SSPedi by themselves. Research personnel were present to answer questions in a standardised manner. If children had questions about the meaning of a specific item, a predefined set of synonyms was shown to them. For children who requested greater assistance, SSPedi could be read to them verbatim by research personnel. All assistance provided was recorded. Each parent respondent was asked to complete SSPedi on behalf of their child.

After the child or parent completed SSPedi, he/she then responded to semi-structured questions. Interviews were conducted by trained clinical research associates or nurses with experience in cognitive interviewing. All interviews were audio taped and transcribed. Questions were designed to (i) assess ease of completion; (ii) evaluate understandability using cognitive probing; (iii) assess whether items with two concepts (such as ‘feeling scared or worried’) should remain as one item or be separated into two different items; and (iv) assess content validity, or importance of each item to children with cancer in terms of how much it bothered them. Details of these questions are included below.

First, child and parent respondents rated how easy or difficult SSPedi was to complete, both overall and for each item, using a five-point Likert scale ranging from 1=‘very hard’ to 5=‘very easy’. Parents estimated their child’s ability to complete SSPedi rather than their own experience.

Second, cognitive probing was conducted in children (not parents) to evaluate the understanding of each item and of the response scale. Cognitive interviewing is a technique used to determine a respondent’s level of understanding and to elicit their opinions on a certain question or word (Collins, 2003; Drennan, 2003; Willis et al, 2005; Willis, 2005; Irwin et al, 2009; Ahmed et al, 2009; DiBenedetti et al, 2013). Each interview was conducted by one interviewer and one recorder. The interviewer posed questions to determine whether the child understood the meaning of a SSPedi item. Based on those responses, further questions could be used to clarify the child’s level of understanding. The recorder listened to the discussion and judged whether the child understood the item using the scale described below. For example, in order to evaluate understanding of ‘feeling disappointed or sad’, we asked the child what the item meant to him/her and asked for examples of things that might make him/her feel disappointed or sad. Depending on the response, the interviewer could continue to probe, for example, by asking the child to describe the opposite of disappointed or the opposite of sad. The recorder then rated the child’s understanding of each item using a four-point Likert scale ranging from 1=‘completely incorrect’ to 4=‘completely correct’. We also evaluated the child’s understanding of the response scale by identifying a symptom that the child had indicated as ‘a little’, ‘medium’ or ‘a lot’ of bother to him/her. We asked the child why he/she indicated that amount and not more or less bother. Understanding of the response scale was rated by the recorder on a three-point Likert scale consisting of 1=‘not able to distinguish between choices’; 2=‘understands some of the differences between choices but some confusion exists’; and 3=‘able to distinguish between choices’.

Third, because some items contained two concepts such as ‘feeling scared or worried’, we asked respondents whether these concepts should remain together in one item or whether they should be separated into two different items. Fourth, we asked about each item’s importance and whether this symptom bothers children with cancer enough to ask about it regularly. Finally, we asked respondents whether there were any items missing from SSPedi.

Each cohort of 10 children and 10 parents (if parents were still being interviewed) was interviewed in parallel. Upon completion of each cohort, the study team reviewed the responses to identify whether the tool should be modified. For example, with the first iteration, responses were reviewed after 20 participants had been enrolled. Additional questions could be added to the script mid-iteration depending on the findings during cognitive probing. We anticipated that a final version would require two to four iterations for children (20–40 total) and two iterations for parents (20 total). Child recruitment ceased with the first group of 10 children in which modifications were not required.

Defining thresholds for change

Changes to SSPedi were based upon a set of defined change thresholds that focused on child responses rather than parent responses. Thresholds for change were decided a priori and were derived using the opinions of the investigators based on their experience with children with cancer and instrument development. For the questions related to ease of completion and understandability, thresholds were defined for each cohort of 10 children interviewed, since changes were implemented for the next cohort of 10 children to address identified concerns. Thresholds for change for these two questions were: at least 20% (2 of 10) of children rated the item as hard or very hard to complete, or at least 20% (2 of 10) of children had interviewer ratings of understandability of ‘mostly’ or ‘completely incorrect’. If thresholds were met, the study team considered item modification.

For the questions related to whether items with two concepts should be separated and whether items are important enough to be included in SSPedi, thresholds were defined among all children interviewed on a cumulative basis, since changes to the tool were not anticipated to affect responses. Thresholds for change for these two questions were: at least 60% of children thought items with two concepts should be separated, and at least 80% of children rated the item as not bothersome enough to ask about regularly. If thresholds were met, the study team considered separating the combined item or removing an item from SSPedi.

If either children or parents identified missing items, the team reviewed the item to determine whether respondents should be asked about this item in the next iteration of testing. In this case, respondents in the next iteration were asked ‘Do you think [item] should be included in SSPedi?’. The item was added if at least 40% of children thought the item should be included. We also considered the situation in which children thought two SSPedi items could be combined, for example ‘Headache’ and ‘Hurt or pain (other than headache)’. If children or parents identified this as a possibility, then a question about combining them was added to the next iteration of testing. Two SSPedi items were combined if at least 60% (6 of 10) of children thought they should be combined.
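As an illustration only, the a priori decision rules described above can be expressed as simple predicates. This is a sketch, not study software; the function names, field names and data structure are our own invention:

```python
# Sketch of the a priori SSPedi change thresholds. All names are illustrative.

def consider_item_modification(cohort):
    """Per-cohort rule (cohort of 10 children): flag an item for possible
    modification if >=20% rated it hard or very hard to complete, or >=20%
    had interviewer understandability ratings of 'mostly incorrect' or
    'completely incorrect'."""
    n = len(cohort)
    hard = sum(c["ease"] in ("hard", "very hard") for c in cohort)
    misunderstood = sum(
        c["rating"] in ("mostly incorrect", "completely incorrect")
        for c in cohort
    )
    return hard / n >= 0.20 or misunderstood / n >= 0.20


def consider_separating(votes_to_separate, total_children):
    """Cumulative rule: separate a two-concept item if >=60% agree."""
    return votes_to_separate / total_children >= 0.60


def consider_removal(votes_not_bothersome, total_children):
    """Cumulative rule: remove an item if >=80% rate it as not bothersome
    enough to ask about regularly."""
    return votes_not_bothersome / total_children >= 0.80


def consider_adding(votes_to_add, total_children):
    """Rule for candidate missing items: add if >=40% of children agree."""
    return votes_to_add / total_children >= 0.40
```

For example, `consider_adding(10, 14)` returns `True`, matching the decision to add ‘Changes in taste’ (10 of 14, 71%), while `consider_separating(7, 14)` returns `False`, matching the decision to keep ‘Headache’ and ‘Hurt or pain (other than headache)’ separate (7 of 14, 50%).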

Statistics

Each cohort consisted of 10 children and 10 parents (Bordoni et al, 2006; Kushner et al, 2008). A total unweighted SSPedi score was calculated for each administration. Each item’s Likert score ranged from 0 (no bother) to 4 (worst bother); Likert scores were summed for a total score that ranged from 0 (none) to 60 (worst possible) when SSPedi contained 15 items. These scores were described using the median and range. Since the number of items could change with each iteration, the scores are presented by iteration.
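The total-score calculation above is a simple unweighted sum, shown here as a minimal sketch (the function name and the validation step are our own, not from the study):

```python
def sspedi_total_score(item_scores):
    """Total unweighted SSPedi score: the sum of per-item Likert scores.

    Each item is scored 0 (no bother) to 4 (worst bother); with 15 items
    the total therefore ranges from 0 (none) to 60 (worst possible).
    """
    if not all(0 <= s <= 4 for s in item_scores):
        raise ValueError("each item score must be between 0 and 4")
    return sum(item_scores)


# Example: a 15-item administration with every symptom rated as worst bother
assert sspedi_total_score([4] * 15) == 60
```

Because the number of items can change between iterations (e.g., 15 items yields a 0–60 range), the achievable maximum depends on the iteration, which is why scores are presented by iteration.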

Results

Between August 2013 and January 2014, 30 children and 20 parents were recruited for this study. Figure 1 illustrates the flow diagram of patient evaluation and enrolment. After three cohorts of child respondents (n=30) and two cohorts of parent respondents (n=20), no thresholds for change were met and the instrument was considered satisfactory. Table 1 illustrates the demographics of the children and parents presented by cohort. There were four child and parent respondents from the same family. The median total interview time with child participants was 27.6 (interquartile range (IQR) 23.1–31.1) minutes. The median interview time with parent participants was 18.8 (IQR 14.9–22.5) minutes.

Figure 1

Flow diagram of child and parent identification and enrolment.

Table 1 Demographics of the child and parent respondents (n=50)

Among children, 27 of 30 (90%) found SSPedi easy or very easy to complete overall; none found SSPedi hard or very hard to complete. The median time required to complete SSPedi was 1.8 (range 0.5–10.1) minutes. Twenty-seven (90%) children thought the instrument length was ‘about right’, with 3 children stating that it was ‘too long’. Twelve (40%) asked for clarification around at least one SSPedi item, resulting in presenting or reading of the synonym list for that item. One child asked that a portion of SSPedi be read by the research personnel, while three children required the entire instrument to be read out loud. All children who felt that SSPedi was ‘too long’ and who asked for the tool to be read out loud were in the 8–10 year age range. The concept of bother was understood by 30 (100%) and the response options were understood by 26 of 29 (90%) children (one child did not complete the interview).

Table 2 summarises the results of questions focused on ease or difficulty of completion, understandability, whether items with two concepts should be separated into two questions and whether the item is important enough to children with cancer to include it in SSPedi. Parent responses are illustrated in Appendix 1.

Table 2 Results among child respondents for each SSPedi itema

Three items met thresholds for change based upon ease of completion and understandability. First, ‘Changes in how your body or face look’ was noted as hard to complete by two children in the second cohort. However, for one of these children, the difficulty related to thinking about the topic rather than understanding the item, and thus the item was not altered. Second, ‘Tingly or numb hands or feet’ was not understood by two children in cohort 2. Additional synonyms were added for this item based on how other children described the symptoms. Finally, ‘Constipation’ was not understood by six children across all cohorts. An additional descriptor ‘(hard to poop)’ was added to the item.

‘Changes in taste’ was identified as a missing item by a parent respondent in cohort 1. The last four children in cohort 1 and all children in cohort 2 were asked whether this item should be added to SSPedi. Since 10 of 14 (71%) child respondents thought this item should be added, the subsequent iteration used in cohort 3 included this SSPedi item. There was also a question about whether ‘Headache’ and ‘Hurt or pain (other than headache)’ should be combined based on respondent comments. The last four children in cohort 1 and all children in cohort 2 were asked whether these two items should be combined; 7 of 14 (50%) child respondents thought these items should be combined. Since this result did not meet our threshold for combining two separate items (60%), they remained as separate items for cohort 3.

One item was removed after reviewing the responses from cohort 1, namely ‘Sleeping too much or too little’. When describing the items ‘Sleeping too much or too little’ and ‘Feeling tired’, children provided qualitatively similar responses. When specifically asked during cognitive probing, three of the last five children in cohort 1 stated that the two items measure the same thing. The investigative team decided to delete the sleeping item since feeling tired was felt to be more relevant to children, while changes in sleeping pattern were felt to be more relevant to parents. Changes to SSPedi and the rationale for these changes are delineated in Table 3.

Table 3 Summary of changes to SSPedi by iteration and rationale

After review of the data from the 10 children in cohort 3, no further modifications to SSPedi were required. The median (range) SSPedi scores for the three iterations were 10 (2–22), 11 (1–33) and 9 (3–23). Figure 2 illustrates the final paper version of SSPedi.

Figure 2

Final version of SSPedi.

Discussion

We successfully developed an initial draft of SSPedi that is easy to complete, understandable and has content validity according to children with cancer. This accomplishment is an important step toward the development of an electronic symptom screening tool for children with cancer and incorporation of this tool into clinical practice. It is important to emphasise that the primary purpose of this tool will be for symptom screening and not symptom assessment. Many symptom assessment tools exist for children with cancer, some of which are generic (Dupuis et al, 2012; Tomlinson et al, 2014) while others are symptom specific (Hicks et al, 2001; Dupuis et al, 2006; Gilchrist and Tanner, 2013; Jacobs et al, 2013); SSPedi is not intended to replace these instruments.

We learned several important methodological lessons during this study. First, we appreciated that defining thresholds for change a priori is important. This step facilitates decision making when reviewing responses following each iteration, and provides a framework to focus investigator discussions. Second, we recognised the importance of using a multidisciplinary group of healthcare experts, including a parent advocate, to decide on potential modifications to the instrument.

Many children required assistance with reading and needed additional explanation of SSPedi items. We realise that this will be an ongoing issue, particularly with the youngest children. Yet, this is an important challenge to overcome as children are the best reporters of their symptoms and, ideally, even young children should be enabled to self-report. In our future work, we plan to offer audio assistance to children, both to read SSPedi out loud and to help them with a specific item if they are unsure of its meaning. Another future goal will be to evaluate and possibly refine SSPedi to allow its use by all children irrespective of age, disability or language. Translation and cross-cultural validation will be important for languages other than English, and revision may be required for English-speaking countries outside of North America.

The strength of our study is the rigorous and iterative approach to the early evaluation of SSPedi. However, this report has several limitations including its conduct at a single site. We chose this approach to ensure close oversight over the interview process and data. However, evaluation of the instrument’s psychometric properties will be conducted in a multicentre setting. Another limitation is that the number of children interviewed with each iteration was relatively small. We acknowledge that further revisions to SSPedi may be required as the instrument continues to be evaluated in larger cohorts of children.

In summary, we have finalised a paper version of SSPedi, a symptom screening tool for children with cancer. Future work will translate the finalised paper version of SSPedi to an electronic version. The psychometric properties of the final electronic version will then be evaluated in a multicentre setting.