Description of the problem or issue
Modern epidemiology is concerned with the study of the distribution and frequency of disease and disease determinants amongst human populations (Hennekens 1987). The outputs from epidemiological research are often recommendations that will inform health practice and policy, assist in the planning and commissioning of health and social care services, and guide the implementation of prevention, health promotion, and public health surveillance programmes (Bonita 2006; Donaldson 2009).
The validity and reliability of the recommendations arising from epidemiological research are determined by the quality of the data on which the evidence is constructed (Hosking 1995; Wilcox 2012). If the source data are inaccurate or incomplete, the recommendations will be biased, misleading, and potentially harmful (Bowling 2005; Boynton 2004; Haller 2009; Hosking 1995; Wilcox 2012).
Data collection is one of the key determinants of data quality. The aim is to collect data that are: relevant to the research objectives; accurate; complete; standardised across studies and research centres; efficient for data recording and data processing; and appropriate for statistical analysis (Hosking 1995; Wilcox 2012). Errors made during this phase can negatively affect any of these elements, and may be difficult to detect and correct at a later stage (Hosking 1995).
The choice of data collection method depends on the research objectives, the type of data to be collected, time and frequency of data collection, follow-up period, data scoring requirements, the target population, available resources, regulatory frameworks for research, and the sponsors' requirements (Boynton 2004; Coons 2009; Hosking 1995; Wilcox 2012). The quantitative survey method is one of the most commonly used methods in epidemiology, since it satisfies most of these conditions (Bowling 2005; Bowling 2009; Carter 2000). This method consists of a series of procedures for collecting information from a sample of the population under study and for making statistical inferences about them (Bowling 2005; Bowling 2009). Questionnaires are one of the principal tools available for quantitative surveys.
Survey questionnaires comprise a series of questions designed for gathering information about respondents' attributes, behaviours, beliefs, knowledge, attitudes, or opinions (Carter 2000). Survey questionnaires can vary in how potential respondents are contacted, the medium used to deliver questionnaires to respondents, and the mode of administering the questions (Bowling 2005; Bowling 2009). In relation to the latter, two principal modes of survey questionnaire administration are self administration and interviews (Carter 2000).
Although both modes have their advantages and disadvantages, self administered survey questionnaires are better for achieving a wide geographic coverage of the population and for dealing with sensitive topics such as drug taking or sexual behaviours (Bowling 2005; Bowling 2009; Carter 2000; Gwaltney 2008). Additionally, they are less resource-intensive than interviews, particularly in relation to specialised resources (Bowling 2005). For these reasons, self administration has often been the preferred option for administering survey questionnaires in epidemiological studies.
In addition, new areas of research (such as genetic studies) and the need for collecting increasingly large data sets pose new challenges for modern epidemiology, for which self administered survey questionnaires might be well suited.
Due to the popularity of self administered survey questionnaires, and their potential to meet the challenges facing modern epidemiology, it is important that researchers pay special consideration to the mode that they will be using to deliver self administered survey questionnaires.
Mode of delivery
The mode of delivery refers to how respondents complete a self administered survey questionnaire. Traditionally, paper-and-pencil self administered questionnaires have been the preferred delivery mode in a large number of research projects. They are typically simple to use, have relatively low implementation costs, and have low support and training requirements (Shah 2010; Wilcox 2012). However, paper-and-pencil survey questionnaires tend to be time-consuming, present a high risk of entry-related errors, have large data storage requirements, lack security and flexibility, and are difficult to distribute across geographically dispersed users (Cole 2006; Shah 2010; Shapiro 2004; Zhang 2012).
Developments in information and communication technologies (ICT) have enabled the electronic delivery of self administered survey questionnaires. Electronic modes of delivery could maximise both the speed and scalability of data collection, and reduce its costs, without compromising data quality (thus addressing some of the limitations of traditional paper-and-pencil instruments) (Coons 2009; Gwaltney 2008; Lane 2006; Shah 2010). For example, they have been shown to reduce the administrative burden and costs associated with running a research project, as well as the likelihood of entry-related errors (Brandt 2006). They are also likely to be more secure than traditional paper-and-pencil methods, more flexible, and can support the implementation of complex skip patterns as well as the continuous collection of data without temporal or geographical constraints (Coons 2009; Shapiro 2004). Consequently, the use of electronic self administered survey questionnaires has become common in several research areas such as pain, asthma, tobacco use, and smoking cessation (Lane 2006).
However, it is important to consider the impact that changing the mode of delivery can have on the responses collected. The properties of the responses given to a self administered survey questionnaire are typically the result of the interaction between the survey questionnaire, the respondents, and the mode of delivery (Bowling 2005; Tourangeau 2000). The use of an electronic mode of delivery may affect the interaction between these factors, thus altering the properties of survey questionnaire responses (Bowling 2005; Coons 2009; Tourangeau 2000).
Effect of electronic delivery modes on survey questionnaire responses
The characteristics of an electronic platform (i.e., delivery mode) can influence both survey questionnaires (i.e., survey questionnaire design, steps needed to complete a survey questionnaire, and setting in which data collection takes place) and respondents (i.e., respondents' personal characteristics and attitudes), thus affecting survey questionnaire responses.
Influence of electronic platform on survey questionnaires
Concerning their influence on survey questionnaires, electronic delivery modes can constrain the design of a survey questionnaire in terms of both layout and response formats (Bowling 2005; Bowling 2009; Coons 2009; Fan 2010). Questionnaire layout refers to how much information can be presented to respondents on a single screen, distinguishing scrolling from screen-by-screen layouts (Fan 2010). On small-screen devices, for example, items may have to be presented one at a time, as opposed to multiple items on a single page (Coons 2009; Gwaltney 2008). The type of questionnaire layout will determine the amount of information that is available to respondents when appraising and answering a question (Fan 2010).
Response format refers to the methods by which users can enter their responses to the questions in a survey questionnaire (e.g., visual analogue scales, adjectival scales, and Likert scales) (Fan 2010; Lane 2006). The type of response format is usually dictated by the type of data to be collected (Streiner 2008). However, the input method (e.g., touchscreen, optical readers, and keyboard) supported by an electronic platform can also determine the types of response formats that can be used in an electronic self administered questionnaire. Each response format is associated with different technical issues that can affect the level of effort required to answer a question (Fan 2010).
Electronic delivery modes can also influence the steps needed to complete a survey questionnaire. Depending on the type of questionnaire layout, respondents might have to provide an answer to all the questions before submitting the survey questionnaire, or they might have to submit their answer to one question before being able to move on to the next one (Fan 2010). The implementation of validation procedures may stop respondents from referring to previous questions when answering the current question (Bowling 2005; Gwaltney 2008; Lane 2006). In addition, technical problems with an electronic platform may cause a respondent to refuse to continue completing a survey questionnaire (Fan 2010). These factors can affect the cognitive processes experienced when providing answers to a survey questionnaire.
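The constraints described above (screen-by-screen progression, answer validation, and skip patterns) can be illustrated with a short sketch. The questionnaire content, question identifiers, and skip rules below are invented for illustration and are not taken from any specific instrument; a real electronic platform would implement the same ideas within its own framework.

```python
# Hypothetical sketch of screen-by-screen delivery with skip logic and
# validation. All question IDs and skip rules are illustrative only.

QUESTIONS = {
    "smoker": {
        "text": "Do you currently smoke?",
        "options": ["yes", "no"],
        # Skip pattern: non-smokers jump past the follow-up question.
        "next": lambda answer: "cigarettes_per_day" if answer == "yes" else "exercise",
    },
    "cigarettes_per_day": {
        "text": "How many cigarettes do you smoke per day?",
        "options": [str(n) for n in range(0, 61)],
        "next": lambda answer: "exercise",
    },
    "exercise": {
        "text": "Do you exercise at least once a week?",
        "options": ["yes", "no"],
        "next": lambda answer: None,  # end of questionnaire
    },
}

def administer(answers_in_order):
    """Walk through the questionnaire one screen at a time.

    Each answer is validated against the allowed options before the
    respondent may move on, and earlier screens cannot be revisited,
    mirroring the constraints discussed in the text.
    """
    responses = {}
    current = "smoker"
    supplied = iter(answers_in_order)
    while current is not None:
        question = QUESTIONS[current]
        answer = next(supplied)
        if answer not in question["options"]:
            raise ValueError(f"Invalid answer {answer!r} for {current}")
        responses[current] = answer
        current = question["next"](answer)
    return responses

# A non-smoker never sees the follow-up question:
print(administer(["no", "yes"]))  # → {'smoker': 'no', 'exercise': 'yes'}
```

Note how the skip rule changes which questions a respondent sees at all: the cognitive context in which the "exercise" question is answered differs between the two routes, which is one mechanism by which mode of delivery can alter responses.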
Moreover, the characteristics of an electronic platform can determine the setting in which data collection takes place. Changes to the setting may influence the thoughts, feelings, and behaviours that respondents experience when completing a survey questionnaire (Gaggioli 2013; Klasnja 2012; Tourangeau 2000), thus affecting the responses collected.
Influence of electronic platform on respondents
Electronic modes of delivery can also affect the responses given to self administered survey questionnaires through direct influence on respondents. Personal characteristics such as age and computer literacy may determine the ease with which users interact with an electronic platform (Coons 2009; Gwaltney 2008). Moreover, attitudinal factors, such as concerns about privacy, anonymity, and confidentiality, might influence respondents' willingness to provide accurate answers to certain items (Cheng 2011; Kaltenthaler 2008; Lane 2006). Additionally, respondents interpret electronic modes of delivery through the lens of their social and cultural beliefs (Cheng 2011). These beliefs can influence the acceptability to respondents of electronic self administered survey questionnaires, which may result in biased reporting of data (Cheng 2011).
Data equivalence between alternative modes of delivery
Although advantageous from an administrative and cost-reduction point of view, the use of electronic modes of delivery can result in changes to the responses collected. The magnitude of these changes depends on the modifications made to the content and format of both questions and responses during the adaptation of the original instrument to an electronic format (Coons 2009). Coons 2009 identified three types of modifications:
minor modifications: those not expected to change the content or the meaning of the questions or the responses;
moderate modifications: those that might introduce subtle changes to the meaning of the items or questions; and
substantial modifications: those that will definitely change the content or meaning of the assessment instrument.
The adaptation of any survey questionnaire to a new mode of delivery should be accompanied by evidence demonstrating the equivalence between the two delivery modes (Coons 2009; Gwaltney 2008). Equivalence is a function of the comparability of the psychometric properties of the data collected using the original survey questionnaire and the data collected using the alternative delivery mode (Gwaltney 2008). Data equivalence involves demonstrating that the rank, mean, and dispersion of the scores remain relatively unchanged when using an alternative delivery mode (Coons 2009). According to Coons 2009, the level of evidence needed to demonstrate equivalence depends on the type of modifications made to the original instrument. Accordingly, minor modifications would require cognitive interviewing and usability testing; moderate modifications would require equivalence and usability testing; and substantial modifications would require a full assessment of the psychometric properties of the instrument.
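The three dimensions of data equivalence named above (mean, dispersion, and rank order of scores) can be made concrete with a minimal sketch. The scores and the simple summary statistics below are invented for illustration; a formal equivalence evaluation would use established psychometric methods (e.g., intraclass correlation coefficients and pre-specified equivalence margins) rather than these descriptive comparisons.

```python
import statistics

# Illustrative comparison of scores from a paper questionnaire and a
# hypothetical electronic adaptation completed by the same respondents.
# Data are invented for illustration only.
paper      = [12, 15, 9, 20, 14, 11, 18, 13]
electronic = [13, 15, 8, 21, 14, 12, 17, 13]

def rank(values):
    """Rank each value (1 = smallest), averaging tied values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for the tied block
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    mx, my = statistics.mean(rx), statistics.mean(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

mean_diff = statistics.mean(electronic) - statistics.mean(paper)   # mean
sd_ratio = statistics.stdev(electronic) / statistics.stdev(paper)  # dispersion
rho = spearman(paper, electronic)                                  # rank order

print(f"mean difference:  {mean_diff:.2f}")
print(f"SD ratio:         {sd_ratio:.2f}")
print(f"rank correlation: {rho:.2f}")
```

A mean difference near zero, a standard deviation ratio near one, and a rank correlation near one would all be consistent with (though not sufficient to establish) equivalence between the two delivery modes.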
Previous systematic reviews have evaluated the equivalence between paper-and-pencil and electronic modes of self administered survey questionnaires (Gwaltney 2008; Lane 2006). Lane and colleagues found that hand-held computers are as effective as paper-and-pencil methods for data collection, are faster, and are preferred by the majority of users (Lane 2006). Similarly, Gwaltney and colleagues found evidence supporting the equivalence between paper-and-pencil and electronically administered patient-reported outcome measures (PROMs) (Gwaltney 2008). Additionally, Coons and colleagues proposed recommendations for the level of evidence required to support the equivalence between electronic and paper-and-pencil PROMs (Coons 2009). However, these reviews only considered specialist handheld and computer devices that are not normally available to the general public (e.g., personal digital assistants (PDAs)). Recent developments in personal mobile devices (i.e., smartphones and tablets) have led to devices capable of delivering self administered survey questionnaires in a way that is accessible, easily customisable, and wide reaching. In addition, personal mobile devices could help address some of the methodological limitations faced by previous reviews and the limitations inherent to quantitative research methods.
Description of the methods being investigated
Smartphones and tablets are mobile devices with advanced computing and connectivity capabilities. Although current smartphones and tablets evolved from previous generations of mobile phones, the focus of this review will be on devices that became available with or after the first-generation iPhone®. The reason for this choice is that the operating system framework of these devices focuses on small, distributed software applications (i.e., apps).
Mobile operating systems provide a platform that has modified both the functions that apps can perform and their distribution model. Through an operating system, apps are able to access the different computational and connectivity capabilities of a smartphone or tablet so as to enable it to perform specialist functions. In this context, apps can enable a personal mobile device to operate as a data collection tool for self administered survey questionnaires. With regards to their distribution model, apps can be built into the device, or can be developed by external parties. In the latter case, users can directly download these apps from marketplaces and install them onto their devices (Aanensen 2009; Wilcox 2012).
Furthermore, smartphones and tablets are equipped with built-in sensors that could unobtrusively capture contextual and situational information while a respondent completes a survey questionnaire.
How these methods might work
By accessing the advanced computing capabilities of smartphones and tablets, apps can support the collection of complex data and deal with complex scoring requirements. Multiple input methods allow for the implementation of various response formats. The wireless connectivity capabilities of these personal mobile devices can enable the immediate transfer of data without temporal or geographical constraints (Aanensen 2009; Coons 2009; Gaggioli 2013; Haller 2009). The distribution model of apps may enable the deployment of survey questionnaires in a way that is flexible and convenient for both researchers and respondents.
The combination of the portability, reach (in the third quarter of 2012 approximately 56% of mobile subscribers in the UK owned a smartphone (mobiThinking 2013)), and personalisation of these personal mobile devices may help reduce the training requirements needed for users to complete a survey questionnaire, reduce the burden that the data collection process imposes on respondents, address some of the respondent-specific and attitudinal factors that act as barriers to the implementation of electronic devices in epidemiological research, and increase the variety of settings in which survey questionnaires can be completed.
Moreover, the ubiquitous presence of smartphones and tablets can help reduce the likelihood of recall bias by allowing respondents to capture data in real time. Finally, the high uptake of mobile technology offers a wide audience that researchers can target, facilitating the scalability of research studies.
Potential limitations of this review
One of the limitations of studies in this field is the difficulty in identifying the specific factors that are responsible for changes in survey questionnaire responses. This is particularly relevant for this systematic review given the multiplicity of devices with differing technical specifications currently on the market, and the rapid pace at which technology advances. Moreover, we anticipate that some of the included studies might not report the specific changes made to the original instrument during its adaptation to a new mode of delivery.
In addition, there are large variations in the levels of technological literacy and access to smartphones and tablets across contexts. This might affect the generalisability of the results of the included studies and of this systematic review as a whole.
Why it is important to do this review
Delivery mode effects have been well documented for conventional electronic modes of data collection. However, these effects have not been evaluated for mobile apps.
The electronic delivery of self administered survey questionnaires via mobile apps may affect the survey questionnaire responses in similar ways to other electronic devices. However, the portability of smartphones and tablets has resulted in changes to usage patterns in terms of frequency, duration, and type of interaction with the device, as well as the location in which this interaction takes place (Ishii 2004; Oulasvirta 2012). This, in combination with the reach and high level of personalisation of personal mobile devices, and the popularity of apps, could introduce new ways in which delivery mode effects are expressed (both in terms of their magnitude and direction).
Therefore, it is important to evaluate the effectiveness of self administered survey questionnaires delivered via mobile apps, particularly if we take into consideration the number of research studies that are already starting to use this mode of delivery.