
Education and debate: Hands-on guide to questionnaire research

Administering, analysing, and reporting your questionnaire

BMJ 2004; 328 doi: https://doi.org/10.1136/bmj.328.7452.1372 (Published 03 June 2004) Cite this as: BMJ 2004;328:1372

This article has a correction.

Petra M Boynton, lecturer in health services research (p.boynton{at}pcps.ucl.ac.uk)
Department of Primary Care and Population Sciences, University College London, London N19 5LW
Accepted 17 March 2004

Understanding your study group is key to getting a good response to a questionnaire; dealing with the resulting mass of data is another challenge

The first step in producing good questionnaire research is getting the right questionnaire.1 However, even the best questionnaire will not get adequate results if it is not used properly. This article outlines how to pilot your questionnaire; distribute and administer it; and get it returned, analysed, and written up for publication. It is intended to supplement published guidance on questionnaire research, three quarters of which focuses on content and design.2

Piloting

Questionnaires tend to fail because participants don't understand them, can't complete them, get bored or offended by them, or dislike how they look. Although friends and colleagues can help check spelling, grammar, and layout, they cannot reliably predict the emotional reactions or comprehension difficulties of other groups. Whether you have constructed your own questionnaire or are using an existing instrument, always pilot it on participants who are representative of your definitive sample. You need to build in protected time for this phase and get approval from an ethics committee.3

During piloting, take detailed notes on how participants react to both the general format of your instrument and the specific questions. How long do people take to complete it? Do any questions need to be repeated or explained? How do participants indicate that they have arrived at an answer? Do they show confusion or surprise at a particular response—if so, why? Short, abrupt questions may unintentionally provoke short, abrupt answers. Piloting will provide a guide for rephrasing questions to invite a richer response (box 1).



Box 1: Patient preference is preferable

I worked on a sexual health study where we initially planned to present the questionnaire on a computer, since we had read people were supposedly more comfortable “talking” to a computer. Although this seemed to be the case in practices with middle class patients, we struggled to recruit in practices where participants were less familiar with computers. Their reasons for refusal were not linked to the topic of the research, but to the fact that they saw our laptops as something they might break, that could make them look foolish, or that would feed directly to the internet (which was inextricably linked to computers in some people's minds). We found that offering a choice between completing the questionnaire on paper or on the laptop computer greatly increased response rates.

Planning data collection

You should be aware of the relevant data protection legislation (for United Kingdom see http://www.informationcommissioner.gov.uk/) and ensure that you follow internal codes of practice for your institution—for example, obtaining and completing a form from your data protection officer. Do not include names, addresses, or other identifying markers within your electronic database, except for a participant number linked to a securely kept manual file.

The piloting phase should include planning and testing a strategy for getting your questionnaire out and back—for example, who you have invited to complete it (the sampling frame), who has agreed to do so (the response rate), who you've had usable returns from (the completion rate), and whether and when you needed to send a reminder letter. If you are employing researchers to deliver and collect the questionnaire it's important they know exactly how to do this.4
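The figures described above (sampling frame, response rate, completion rate) are easy to confuse, so it helps to define them once and compute them consistently. The following sketch illustrates one way to do this; the counts are hypothetical, for illustration only.

```python
# Sketch: tracking recruitment figures for a questionnaire study.
# All counts below are hypothetical.

def recruitment_summary(invited, returned, usable):
    """Compute response and completion rates from raw counts.

    invited  -- number in the sampling frame who were asked to take part
    returned -- number who agreed and returned a questionnaire
    usable   -- number of returns complete enough to analyse
    """
    return {
        "invited": invited,
        "response_rate": round(100 * returned / invited, 1),
        "completion_rate": round(100 * usable / invited, 1),
    }

summary = recruitment_summary(invited=500, returned=320, usable=295)
print(summary)  # response rate 64.0%, completion rate 59.0%
```

Keeping both rates, rather than only the response rate, makes it visible when many questionnaires come back too incomplete to use.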

Administrative errors can hamper the progress of your research. Real examples include researchers giving the questionnaire to wrong participants (for example, a questionnaire aimed at men given to women); incomplete instructions on how to fill in the questionnaire (for example, participants did not know whether to tick one or several items); postal surveys in which the questionnaire was missing from the envelope; and a study of over 3000 participants in which the questionnaire was sent out with no return address.

Administering your questionnaire

The choice of how to administer a questionnaire is too often made on convenience or cost grounds (see table A on bmj.com). Scientific and ethical considerations should include:

  • The needs and preferences of participants, who should understand what is required of them; remain interested and cooperative throughout completion; be asked the right questions and have their responses recorded accurately; and receive appropriate support during and after completing the questionnaire

  • The skills and resources available to your research team

  • The nature of your study—for example, short term feasibility projects, clinical trials, or large scale surveys.

Maximising your response rate

Sending out hundreds of questionnaires is a thankless task, and it is sometimes hard to pay attention to the many minor details that combine to raise response and completion rates. Extensive evidence exists on best practice (box 2), and principal investigators should ensure that they provide their staff with the necessary time and resources to follow it. Note, however, that it is better to collect fewer questionnaires with good quality responses than high numbers of questionnaires that are inaccurate or incomplete. The third article in this series discusses how to maximise response rates from groups that are hard to research.15

Accounting for those who refuse to participate

Survey research tends to focus on people who have completed the study. Yet those who don't participate are equally important scientifically, and their details should also be recorded (remember to seek ethical approval for this).4 16 17

Box 2: Factors shown to increase response rates

  • The questionnaire is clearly designed and has a simple layout5

  • It offers participants incentives or prizes in return for completion6

  • It has been thoroughly piloted and tested5

  • Participants are notified about the study in advance with a personalised invitation7

  • The aim of study and means of completing the questionnaire are clearly explained8 9

  • A researcher is available to answer questions and collect the completed questionnaire10

  • If using a postal questionnaire, a stamped addressed envelope is included7

  • The participant feels they are a stakeholder in the study11

  • Questions are phrased in a way that holds the participant's attention11

  • Questionnaire has clear focus and purpose and is kept concise7 8 11

  • The questionnaire is appealing to look at,12 as is the researcher13

  • If appropriate, the questionnaire is delivered electronically14

One way of reducing refusal and non-completion rates is to set strict exclusion criteria at the start of your research. For example, for practical reasons many studies exclude participants who are unable to read or write in the language of the questionnaire and those with certain physical and mental disabilities that might interfere with their ability to give informed consent, cooperate with the researcher, or understand the questions asked. However, research that systematically excludes hard to reach groups is increasingly seen as unethical, and you may need to build additional strategies and resources into your study protocol at the outset.15 Keep a record of all participants that fit the different exclusion categories (see bmj.com).

Collecting data on non-participants will also allow you to monitor the research process. For example, you may find that certain researchers seem to have a higher proportion of participants refusing, and if so you should work with those individuals to improve the way they introduce the research or seek consent. In addition, if early refusals are found to be unusually high, you might need to rethink your overall approach.10

Entering, checking, and cleaning data

Novice researchers often assume that once they have selected, designed, and distributed their questionnaire, their work is largely complete. In reality, entering, checking, and cleaning the data account for much of the workload. Some principles for keeping quantitative data clean are listed on bmj.com.

Even if a specialist team sets up the database(s), all researchers should be taught how to enter, clean, code, and back up the data, and the system for doing this should be universally agreed and understood. Agree on the statistical package you wish to use (such as SPSS, Stata, EpiInfo, Excel, or Access) and decide on a coding system before anyone starts work on the dataset.

It is good practice to enter data into an electronic database as the study progresses rather than face a mountain of processing at the end. The project manager should normally take responsibility for coordinating and overseeing this process and for ensuring that all researchers know what their role is with data management. These and other management tasks are time consuming and must be built into the study protocol and budget. Include data entry and coding in any pilot study to get an estimate of the time required and potential problems to troubleshoot.
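The checking and cleaning steps above can be run as simple validation rules at the point of data entry, flagging missing or out-of-range values before they reach the analysis. This is a minimal sketch; the field names, codes, and valid ranges are assumptions.

```python
# Sketch: basic range and completeness checks on entered questionnaire data.
# Field names and valid codes below are hypothetical.

VALID = {
    "sex": {1, 2},            # assumed coding: 1 = male, 2 = female
    "age": range(18, 100),    # assumed study inclusion range
    "q1": {1, 2, 3, 4, 5},    # a five-point Likert item
}

def check_record(record):
    """Return a list of problems found in one entered record."""
    problems = []
    for field, valid in VALID.items():
        value = record.get(field)
        if value is None:
            problems.append(f"{field}: missing")
        elif value not in valid:
            problems.append(f"{field}: out of range ({value})")
    return problems

print(check_record({"sex": 1, "age": 34, "q1": 7}))   # flags q1
print(check_record({"sex": 2, "age": 45, "q1": 3}))   # clean record
```

Running such checks during the pilot, as suggested above, gives an early estimate of how much cleaning the main study will need.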

Analysing your data

You should be able to predict the type of analysis required for your different questionnaire items at the planning stage of your study by considering the structure of each item and the likely distribution of responses (box 3).1 Table B on bmj.com shows some examples of data analysis methods for different types of responses.18 19w1
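As a concrete illustration of matching the analysis to the structure of an item, an ordinal Likert item is usually better summarised by its median, while a categorical item is reported as frequencies and percentages. The responses in this sketch are hypothetical.

```python
# Sketch: matching summary statistics to the structure of a questionnaire item.
# The response data below are hypothetical.
from collections import Counter
from statistics import median

likert = [4, 5, 3, 4, 2, 5, 4]          # ordinal five-point item
yes_no = ["yes", "no", "yes", "yes"]    # categorical item

# Ordinal data: the median is generally a safer summary than the mean
print("Likert median:", median(likert))

# Categorical data: report counts and percentages
counts = Counter(yes_no)
for answer, n in counts.items():
    print(f"{answer}: {n} ({100 * n / len(yes_no):.0f}%)")
```

Deciding these summaries at the planning stage, as recommended above, avoids discovering after data collection that an item cannot support the intended analysis.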

Writing up and reporting

Once you have completed your data analysis, you will need to think creatively about the clearest and most parsimonious way to report and present your findings. You will almost certainly find that you have too much data to fit into a standard journal article, dissertation, or research report, so deciding what to include and omit is crucial. Take statistical advice from the outset of your research. This can keep you focused on the hypothesis or question you are testing and the important results from your study (and therefore what tables and graphs to present).

Box 3: Nasty surprise from a simple questionnaire

Moshe selected a standardised measure on emotional wellbeing to use in his research, which looked easy to complete and participants answered readily. When he came to analysing his data, he discovered that rather than scoring each response directly as indicated on the questionnaire, a complicated computer algorithm had to be created, and he was stumped. He found a statistician to help with the recoding, and realised that for future studies it might be an idea to check both the measure and its scoring system before selecting it.

Box 4: An unexpected result

Priti, a specialist registrar in hepatology, completed an attitude questionnaire in patients having liver transplantation and those who were still waiting for a donor. She expected to find that those who had received a new liver would be happier than those awaiting a donor. However, the morale scale used in her questionnaire showed that the transplantation group did not have significantly better morale scores. Priti felt that this negative finding was worth further investigation.

Methods section

The methods section should give details of your exclusion criteria and discuss their implications for the transferability of your findings. Data on refusals and unsuitable participants should also be presented and discussed, preferably using a recruitment diagram.w2 Finally, state and justify the statistical or qualitative analyses used.18 19w2

Results section

When compiling the results section you should return to your original research question and set out the findings that addressed this. In other words, make sure your results are hypothesis driven. Do not be afraid to report non-significant results, which in reality are often as important as significant results—for example, if participants did not experience anxiety in a particular situation (box 4). Don't analyse and report on every question within your questionnaire.

Choose the most statistically appropriate and visually appealing format for graphs (table).w3 Label graphs and their axes adequately and include meaningful titles for tables and diagrams. Refer your reader to any tables or graphs within your text, and highlight the main findings.

Examples of ways of presenting data and when to use them


If you have used open ended questions within your questionnaire, do not cherry pick quotes for your results section. You need to outline what main themes emerged, and use quotes as necessary to illustrate the themes and supplement your quantitative findings.

Discussion section

The discussion should refer back to the results section and suggest what the main findings mean. You should acknowledge the limitations of your study and couch the discussion in the light of these. For example, if your response rate was low, you may need to recommend further studies to confirm your preliminary results. Your conclusions must not go beyond the scope of your study—for example, if you have done a small, parochial study do not suggest changes in national policy. You should also discuss any questions your participants persistently refused to answer or answered in a way you didn't expect.

Taking account of psychological and social influences

Questionnaire research (and indeed science in general) can never be completely objective. Researchers and participants are all human beings with psychological, emotional, and social needs. Too often, we fail to take these factors into account when planning, undertaking, and analysing our work. A questionnaire means something different to participants and researchers.w4 Researchers want data (with a view to publications, promotion, academic recognition, and further grant income). Junior research staff and administrators, especially if poorly trained and supervised, may be put under pressure, leading to critical errors in piloting (for example, piloting on friends rather than the target group), sampling (for example, drifting towards convenience rather than random samples) and in the distribution, collection, and coding of questionnaires.15 Staff employed to assist with a questionnaire study may not be familiar with all the tasks required to make it a success and may be unaware that covering up their ignorance or skill deficits will make the entire study unsound.

Summary points

Piloting is essential to check the questionnaire works in the study group and identify administrative and analytical problems

The method of administration should be determined by scientific considerations not just costs

Entering, checking, and cleaning data should be done as the study progresses

Don't try to include all the results when reporting studies

Do include exclusion criteria and data on non-respondents

Research participants, on the other hand, may be motivated to complete a questionnaire through interest, boredom, a desire to help others (particularly true in health studies), because they feel pressurised to do so, through loneliness, or for an unconscious ulterior motive (“pleasing the doctor”). All of these introduce potential biases into the recruitment and data collection process.

This is the second in a series of three articles edited by Trisha Greenhalgh

References w1-10, illustrative examples, and further information on using questionnaires are on bmj.com

Acknowledgments

I thank Alicia O'Cathain, Trish Greenhalgh, Jill Russell, Geoff Wong, Marcia Rigby, Sara Shaw, Fraser Macfarlane, and Will Callaghan for their helpful feedback on earlier versions of this paper and Gary Wood for advice on statistics and analysis.

PMB has taught research methods in a primary care setting for the past 13 years, specialising in practical approaches and using the experiences and concerns of researchers and participants as the basis of learning. This series of papers arose directly from questions asked about real questionnaire studies. To address these questions she and Trisha Greenhalgh explored a wide range of sources from the psychological and health services research literature.

Footnotes

  • Conflict of interests None declared
