
Nursing research: how to design an effective survey tool

A survey’s success is dependent on a good response rate, asking the right questions, and ensuring respondents' answers are similar to the way people think and act in reality

Planning and developing survey tools is important, but can be overlooked. Here we look at key components including content, look and feel, and pilot testing

Surveys can provide an understanding of participants’ attitudes, behaviours, opinions and characteristics. They are commonly used in nursing and health research as they allow large volumes of data to be collected from geographically dispersed participants promptly (Shiyab et al 2023).

With the advent of online surveys, there is the added advantage that data are collected in a format that can be imported directly into statistical software. Despite these advantages, the success of a survey still depends on achieving a good response rate, asking the right questions to generate the right data, and on whether respondents' answers reflect the way people actually think and act.
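As an illustration of that import step, responses exported from an online survey platform can be read straight into analysis tools. Below is a minimal sketch in Python using only the standard library; the column names and scores are invented stand-ins for a real export, not from any specific platform:

```python
# Minimal sketch: reading a hypothetical online-survey CSV export.
# Column names and values are illustrative assumptions only.
import csv
import io
from statistics import mean

# In practice this would be open("survey_export.csv")
csv_export = io.StringIO(
    "respondent_id,age_group,q1_satisfaction\n"
    "1,25-34,4\n"
    "2,45-54,2\n"
    "3,35-44,5\n"
)
responses = list(csv.DictReader(csv_export))
scores = [int(row["q1_satisfaction"]) for row in responses]
print(round(mean(scores), 2))  # summary statistics are available immediately
```

The same file loads equally directly into dedicated statistical packages; the point is that no manual transcription step sits between collection and analysis.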

‘Items which collect interesting but unimportant information should be avoided. A survey that is too long might cause fatigue and high participant dropout’

While there has been some recent attention to improving response rates (Shiyab et al 2023), there has been less emphasis on planning and developing survey tools. Survey design, which involves creating survey items, placing them in a logical order and determining response options, is a key aspect of this process. Here, we highlight the importance of taking time to plan and develop the survey, as it can significantly enhance the quality and rate of response.

Planning the content of a survey

An important challenge in survey design is striking the right balance between respondent burden and data capture. This requires careful consideration of the number of questions and the time required to respond. Each item in the survey should be scrutinised to ensure it collects data essential to answer the research question(s). Items which collect interesting but unimportant information should be avoided. A survey that is too long might cause fatigue and high participant dropout.

Items in a survey can be categorised as validated tools or investigator-developed items. It is common, although not compulsory, for survey tools to combine the two. Validated tools are sets of questions that have undergone a rigorous psychometric development and validation process to ensure they consistently produce reliable and accurate results about a particular construct (Elangovan & Sundaravel 2021).
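Part of that psychometric validation is demonstrating internal consistency, often reported as Cronbach's alpha. As a rough illustration of how the statistic is calculated, here is a short Python sketch; the three-item scale and the scores are invented for demonstration, not drawn from any published tool:

```python
from statistics import pvariance

# Invented pilot responses to a hypothetical 3-item scale (scores 1-5),
# one inner list per item, one score per respondent.
items = [
    [4, 5, 4, 3, 5],  # item 1
    [4, 4, 5, 3, 4],  # item 2
    [3, 5, 4, 2, 5],  # item 3
]

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total
    item_variances = sum(pvariance(item) for item in items)
    return k / (k - 1) * (1 - item_variances / pvariance(totals))

print(round(cronbach_alpha(items), 2))  # prints 0.83 for this invented data
```

Values closer to 1 indicate that the items move together, which is one piece of evidence that they measure the same construct consistently.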

‘Researchers should ensure that each item asks a single question and avoids negative terms or leading language’

Using existing validated tools can save time and resources, as the quality of the tool has already been established. Additionally, using validated measures allows comparison between studies that have used the same instrument. Validated instruments usually require the authors' approval for use, and many popular instruments have websites that provide information about accessing and using them (for example, the Health Literacy Questionnaire (HLQ) and EuroQol). Reviewing the literature around key constructs can help to identify relevant, validated instruments.

In contrast, investigator-developed items are developed by the research team. These may include demographic items to capture professional and personal characteristics, or may explore specific attitudes, behaviours, opinions and characteristics. While these items are developed for the specific project, they should be informed by the literature to anticipate likely responses or areas of investigation. Additionally, consideration should be given to how these items might align with other data collections to enable comparisons.

For example, questions about gender should be drawn from national guidelines (for example Australian Bureau of Statistics 2020). These items must be carefully constructed so that they are clear and precise. Researchers should ensure that each item asks a single question and avoids negative terms or leading language.

How to achieve the ideal look and feel for your survey

The format of a survey can significantly impact the quality of data collected. First, the survey's look and feel are important: using organisational logos, an attractive colour scheme and text free of typographical errors and spelling mistakes projects a professional appearance.

‘If the first scale has disagree as the lowest number on the left of a Likert scale, all scales should be ordered from disagree/dissatisfied etc on the left to agree/satisfied etc on the right’

Beyond appearance, ensuring that the respondent receives clear instructions about completing the tool is important. For example, if the respondents are expected to answer questions about their experience within a particular time period (for example in the past two weeks), all respondents must share this understanding so that they answer consistently.

Several strategies can keep respondents interested and engaged throughout the survey to reduce dropouts. Ordering items logically is important to promote the flow of ideas and avoid perceived repetition or confusion. Providing brief but clear instructions can also make the transition to different parts of the survey as smooth as possible.

Ensuring that all rating scales are ordered in the same direction can avoid confusion. For example, if the first scale has disagree as the lowest number on the left of a Likert scale, all scales should be ordered from disagree/dissatisfied etc on the left to agree/satisfied etc on the right.
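When an item cannot be presented in the same direction as the rest, or is deliberately negatively worded, its scores are typically reverse-coded at the analysis stage so that a higher number always means the same thing. A minimal Python sketch, assuming a 1-5 Likert scale:

```python
def reverse_code(score, scale_min=1, scale_max=5):
    """Flip a Likert response so all items run in the same direction
    (on a 1-5 scale: 1 <-> 5, 2 <-> 4, 3 stays 3)."""
    return scale_max + scale_min - score

# Hypothetical item scored with 5 = strongly disagree, recoded so that
# higher always means more agreement, matching the rest of the survey.
raw = [5, 2, 4, 1, 3]
recoded = [reverse_code(s) for s in raw]
print(recoded)  # [1, 4, 2, 5, 3]
```

Reverse-coding before scoring avoids the silent errors that arise when one item in a scale runs against the direction of the others.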

Additionally, varying the question format can promote engagement. For example, mixing multiple-choice questions with response matrices and open-ended items can help keep respondents engaged. Careful attention must be paid to question wording to avoid ambiguous words that could be interpreted differently, and technical words or jargon that might be misunderstood.

Pilot testing and ensuring your survey tool’s face and content validity

Before any survey is disseminated it should first be pilot tested. While there are varying methods of pilot testing depending on the type and nature of the research, generally, pilot testing involves ensuring the tool’s face and content validity.

‘Pilot feedback ensures that any issues with the survey can be addressed before wider dissemination’

Face validity refers to evaluating whether the tool subjectively appears to measure what it is intended to measure, while content validity evaluates how well the tool covers all aspects of the construct it aims to measure (Jackson et al 2023). To assess these characteristics, the tool should be reviewed by people with expertise in the constructs that the tool seeks to measure. Additionally, the tool should be tested with people similar to the target population to ensure that they can understand the items, follow the instructions, and that the tool works as intended as a respondent moves through it. Pilot feedback ensures that any issues with the survey can be addressed before wider dissemination.
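One simple, quantifiable pilot-testing check is item-level missingness: a spike of unanswered questions on a single item can flag confusing wording, while rising missingness towards the end of the tool can indicate fatigue. A small sketch with invented pilot data (None marks an unanswered item):

```python
# Invented pilot responses; None = the respondent left the item blank.
pilot = [
    {"q1": 4, "q2": 3, "q3": None},
    {"q1": 5, "q2": None, "q3": None},
    {"q1": 3, "q2": 4, "q3": 2},
]

def missing_rate(responses, item):
    """Proportion of respondents who left the given item unanswered."""
    unanswered = sum(1 for r in responses if r[item] is None)
    return unanswered / len(responses)

for item in ["q1", "q2", "q3"]:
    print(item, f"{missing_rate(pilot, item):.0%}")
```

In this invented data the missingness climbs from 0% on q1 to 67% on q3, the kind of pattern that would prompt a review of the later items before wider dissemination.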

While planning a survey can take time, investment in this phase can significantly enhance data quality and, therefore, the ability to answer the research question. As the number of online surveys grows, researchers need to be increasingly thoughtful in survey design to promote response and optimise data quality. Researchers should also adopt best practice strategies for building and delivering surveys to maximise study rigour.


Elizabeth Halcomb is professor of primary healthcare nursing, School of Nursing, University of Wollongong, New South Wales, Australia and editor of Nurse Researcher; and Wa’ed Shiyab is a PhD candidate, University of Wollongong, New South Wales, Australia