Comparison of self administered survey questionnaire responses collected using mobile apps versus other methods

  • Protocol
  • Methodology

Authors

  • José S Marcano Belisario,

    Corresponding author
    1. School of Public Health, Imperial College London, Global eHealth Unit, Department of Primary Care and Public Health, London, UK
    • José S Marcano Belisario, Global eHealth Unit, Department of Primary Care and Public Health, School of Public Health, Imperial College London, London, UK. jose.marcano-belisario10@imperial.ac.uk.

  • Kit Huckvale,

    1. School of Public Health, Imperial College London, Global eHealth Unit, Department of Primary Care and Public Health, London, UK
  • Andreja Saje,

    1. School of Public Health, Imperial College London, Global eHealth Unit, Department of Primary Care and Public Health, London, UK
    2. University of Ljubljana, Faculty of Medicine, Ljubljana, Slovenia
  • Aleš Porcnik,

    1. School of Public Health, Imperial College London, Global eHealth Unit, Department of Primary Care and Public Health, London, UK
    2. University of Ljubljana, Faculty of Medicine, Ljubljana, Slovenia
  • Cecily P Morrison,

    1. School of Public Health, Imperial College London, Global eHealth Unit, Department of Primary Care and Public Health, London, UK
  • Josip Car

    1. Imperial College & Nanyang Technological University, Lee Kong Chian School of Medicine, Singapore, Singapore

Abstract

This is the protocol for a review and there is no abstract. The objectives are as follows:

To assess the impact of delivery mode on data equivalence by comparing self administered survey questionnaires delivered via mobile apps with alternative delivery modes.

Background

Description of the problem or issue

Modern epidemiology is concerned with the study of the distribution and frequency of disease and disease determinants amongst human populations (Hennekens 1987). The outputs from epidemiological research are often recommendations that will inform health practice and policy, assist in the planning and commissioning of health and social care services, and guide the implementation of prevention, health promotion, and public health surveillance programmes (Bonita 2006; Donaldson 2009).

The validity and reliability of the recommendations arising from epidemiological research are determined by the quality of the data on which the evidence is constructed (Hosking 1995; Wilcox 2012). If the source data are inaccurate or incomplete, the recommendations will be biased, misleading, and potentially harmful (Bowling 2005; Boynton 2004; Haller 2009; Hosking 1995; Wilcox 2012).

Data collection is one of the key determinants of data quality. The aim is to collect data that are: relevant to the research objectives; accurate; complete; standardised across studies and research centres; efficient for data recording and data processing; and appropriate for statistical analysis (Hosking 1995; Wilcox 2012). Errors made during this phase can negatively affect any of these elements, and may be difficult to detect and correct at a later stage (Hosking 1995).

The choice of data collection method depends on the research objectives, the type of data to be collected, time and frequency of data collection, follow-up period, data scoring requirements, the target population, available resources, regulatory frameworks for research, and the sponsors' requirements (Boynton 2004; Coons 2009; Hosking 1995; Wilcox 2012). The quantitative survey method is one of the most commonly used methods in epidemiology, since it satisfies most of these conditions (Bowling 2005; Bowling 2009; Carter 2000). This method consists of a series of procedures for collecting information from a sample of the population under study and for making statistical inferences about them (Bowling 2005; Bowling 2009). Questionnaires are one of the principal tools available for quantitative surveys.

Survey questionnaires

Survey questionnaires comprise a series of questions designed for gathering information about respondents' attributes, behaviours, beliefs, knowledge, attitudes, or opinions (Carter 2000). Survey questionnaires can vary in how potential respondents are contacted, the medium used to deliver questionnaires to respondents, and the mode of administering the questions (Bowling 2005; Bowling 2009). In relation to the latter, two principal modes of survey questionnaire administration are self administration and interviews (Carter 2000).

Although both modes have their advantages and disadvantages, self administered survey questionnaires are better for achieving a wide geographic coverage of the population and for dealing with sensitive topics such as drug taking or sexual behaviours (Bowling 2005; Bowling 2009; Carter 2000; Gwaltney 2008). Additionally, they are less resource-intensive than interviews, particularly in relation to specialised resources (Bowling 2005). For these reasons, self administration has often been the preferred option for administering survey questionnaires in epidemiological studies.

In addition, new areas of research (such as genetic studies) and the need for collecting increasingly large data sets pose new challenges for modern epidemiology, for which self administered survey questionnaires might be well suited.

Due to the popularity of self administered survey questionnaires, and their potential to meet the challenges facing modern epidemiology, it is important that researchers pay special consideration to the mode that they will be using to deliver self administered survey questionnaires.

Mode of delivery

The mode of delivery refers to how respondents complete a self administered survey questionnaire. Traditionally, paper-and-pencil self administered questionnaires have been the preferred delivery mode in a large number of research projects. They are typically simple to use, have relatively low implementation costs, and have low support and training requirements (Shah 2010; Wilcox 2012). However, paper-and-pencil survey questionnaires tend to be time-consuming, present a high risk of entry-related errors, have large data storage requirements, lack security and flexibility, and are difficult to distribute across geographically dispersed users (Cole 2006; Shah 2010; Shapiro 2004; Zhang 2012).

Developments in information and communication technologies (ICT) have enabled the electronic delivery of self administered survey questionnaires. Electronic modes of delivery could maximise both the speed and scalability of data collection, and reduce its costs, without compromising data quality (thus addressing some of the limitations of traditional paper-and-pencil instruments) (Coons 2009; Gwaltney 2008; Lane 2006; Shah 2010). For example, they have been shown to reduce the administrative burden and costs associated with running a research project, as well as the likelihood of entry-related errors (Brandt 2006). They are also likely to be more secure than traditional paper-and-pencil methods, more flexible, and can support the implementation of complex skip patterns as well as the continuous collection of data without temporal or geographical constraints (Coons 2009; Shapiro 2004). Consequently, the use of electronic self administered survey questionnaires has become common in several research areas such as pain, asthma, tobacco use, and smoking cessation (Lane 2006).

However, it is important to consider the impact that changing the mode of delivery can have on the responses collected. The properties of the responses given to a self administered survey questionnaire are typically the result of the interaction between the survey questionnaire, the respondents, and the mode of delivery (Bowling 2005; Tourangeau 2000). The use of an electronic mode of delivery may affect the interaction between these factors, thus altering the properties of survey questionnaire responses (Bowling 2005; Coons 2009; Tourangeau 2000).

Effect of electronic delivery modes on survey questionnaire responses

The characteristics of an electronic platform (i.e., delivery mode) can influence both survey questionnaires (i.e., survey questionnaire design, steps needed to complete a survey questionnaire, and setting in which data collection takes place) and respondents (i.e., respondents' personal characteristics and attitudes), thus affecting survey questionnaire responses.

Influence of electronic platform on survey questionnaires

Concerning their influence on survey questionnaires, electronic delivery modes can constrain the design of a survey questionnaire both in terms of layout and response formats (Bowling 2005; Bowling 2009; Coons 2009; Fan 2010). Questionnaire layout refers to the amount of information that can be presented to respondents in a single screen: scrolling or screen-by-screen layouts (Fan 2010). In small-screen devices, for example, items may have to be presented one at a time, as opposed to multiple items on a single page (Coons 2009; Gwaltney 2008). The type of questionnaire layout will determine the amount of information that is available to respondents when appraising and answering a question (Fan 2010).

Response format refers to the methods by which users can enter their responses to the questions in a survey questionnaire (e.g., visual analogue scales, adjectival scales, and Likert scales) (Fan 2010; Lane 2006). The type of response format is usually dictated by the type of data to be collected (Streiner 2008). However, the input method (e.g., touchscreen, optical readers, and keyboard) supported by an electronic platform can also determine the types of response formats that can be used in an electronic self administered questionnaire. Each response format is associated with different technical issues that can affect the level of effort required to answer a question (Fan 2010).

Electronic delivery modes can also influence the steps needed to complete a survey questionnaire. Depending on the type of questionnaire layout, respondents might have to provide an answer to all the questions before submitting the survey questionnaire, or they might have to submit their answer to one question before being able to move on to the next one (Fan 2010). The implementation of validation procedures may stop respondents from referring to previous questions when answering the current question (Bowling 2005; Gwaltney 2008; Lane 2006). In addition, technical problems with an electronic platform may cause a respondent to refuse to continue completing a survey questionnaire (Fan 2010). These factors can affect the cognitive processes experienced when providing answers to a survey questionnaire.

Moreover, the characteristics of an electronic platform can determine the setting in which data collection takes place. Changes to the setting may influence the thoughts, feelings, and behaviours that respondents experience when completing a survey questionnaire (Gaggioli 2013; Klasnja 2012; Tourangeau 2000), thus affecting the responses collected.

Influence of electronic platform on respondents

Electronic modes of delivery can also affect the responses given to self administered survey questionnaires through direct influence on respondents. Personal characteristics such as age and computer literacy may determine the ease with which users interact with an electronic platform (Coons 2009; Gwaltney 2008). Moreover, attitudinal factors, such as concerns about privacy, anonymity, and confidentiality, might influence respondents' willingness to provide accurate answers to certain items (Cheng 2011; Kaltenthaler 2008; Lane 2006). Additionally, respondents understand electronic modes of delivery through their social and cultural beliefs (Cheng 2011). These beliefs can influence the acceptability to respondents of electronic self administered survey questionnaires, which may result in biased reporting of data (Cheng 2011).

Data equivalence between alternative modes of delivery

Although advantageous from an administrative and cost-reduction point of view, the use of electronic modes of delivery can result in changes to the responses collected. The magnitude of these changes depends on the modifications made to the content and the format of both questions and responses during the adaptation of the original instrument to an electronic format (Coons 2009). Coons 2009 identified three types of modifications:

  • minor modifications: those not expected to change the content or the meaning of the questions or the responses;

  • moderate modifications: those that might introduce subtle changes to the meaning of the items or questions; and

  • substantial modifications: those that will definitely change the content or meaning of the assessment instrument.

The adaptation of any survey questionnaire to a new mode of delivery should be accompanied by evidence that demonstrates the equivalence between the two delivery modes (Coons 2009; Gwaltney 2008). Equivalence is a function of the comparability of the psychometric properties between the data collected using the original survey questionnaire and the data collected using the alternative delivery mode (Gwaltney 2008). Data equivalence involves the demonstration that the rank, mean, and dispersion of the scores remain relatively unchanged when using an alternative delivery mode (Coons 2009). According to Coons 2009, the level of evidence needed to demonstrate equivalence depends on the type of modifications made to the original instrument. Therefore, minor changes would require cognitive interviewing and usability testing; moderate modifications would require equivalence and usability testing; and substantial modifications would require a full assessment of the psychometric properties of the instrument.

Previous systematic reviews have evaluated the equivalence between paper-and-pencil and electronic modes of self administered survey questionnaires (Gwaltney 2008; Lane 2006). Lane and colleagues found that hand-held computers are as effective as paper-and-pencil methods for data collection, are faster, and are preferred by the majority of users (Lane 2006). Similarly, Gwaltney and colleagues found evidence supporting the equivalence between paper-and-pencil and electronically administered patient-reported outcome measures (PROMs) (Gwaltney 2008). Additionally, Coons and colleagues proposed recommendations for the level of evidence required to support the equivalence between electronic and paper-and-pencil PROMs (Coons 2009). However, these reviews only considered specialist handheld and computer devices that are not normally available to the general public (e.g., personal digital assistants (PDAs)). Recent developments in personal mobile devices (i.e., smartphones and tablets) have led to devices capable of delivering self administered survey questionnaires in a way that is accessible, easily customisable, and wide reaching. In addition, personal mobile devices could help address some of the methodological limitations faced by previous reviews and the limitations inherent to quantitative research methods.

Description of the methods being investigated

Smartphones and tablets are mobile devices with advanced computing and connectivity capabilities. Although current smartphones and tablets evolved from previous generations of mobile phones, the focus of this review will be on devices that became available with or after the first generation iPhone®. The reason for this choice is that the operating system framework of these devices focuses on small, distributed software applications (i.e., apps).

Mobile operating systems provide a platform that has modified both the functions that apps can perform and their distribution model. Through an operating system, apps are able to access the different computational and connectivity capabilities of a smartphone or tablet so as to enable it to perform specialist functions. In this context, apps can enable a personal mobile device to operate as a data collection tool for self administered survey questionnaires. With regards to their distribution model, apps can be built into the device, or can be developed by external parties. In the latter case, users can directly download these apps from marketplaces and install them onto their devices (Aanensen 2009; Wilcox 2012).

Furthermore, smartphones and tablets are equipped with built-in sensors that could unobtrusively capture contextual and situational information while a respondent completes a survey questionnaire.

How these methods might work

By accessing the advanced computing capabilities of smartphones and tablets, apps can support the collection of complex data and deal with complex scoring requirements. Multiple input methods allow for the implementation of various response formats. The wireless connectivity capabilities of these personal mobile devices can enable the immediate transfer of data without temporal or geographical constraints (Aanensen 2009; Coons 2009; Gaggioli 2013; Haller 2009). The distribution model of apps may enable the deployment of survey questionnaires in a way that is flexible and convenient for both researchers and respondents.

The combination of the portability, reach (in the third quarter of 2012 approximately 56% of mobile subscribers in the UK owned a smartphone (mobiThinking 2013)), and personalisation of these personal mobile devices may help reduce the training requirements needed for users to complete a survey questionnaire, reduce the burden that the data collection process imposes on respondents, address some of the respondent-specific and attitudinal factors that act as barriers to the implementation of electronic devices in epidemiological research, and increase the variety of settings in which survey questionnaires can be completed.

Moreover, the ubiquitous presence of smartphones and tablets can help reduce the likelihood of recall bias by allowing respondents to capture data in real time. Finally, the high uptake of mobile technology offers a wide audience that researchers can target, facilitating the scalability of research studies.

Potential limitations of this review

One of the limitations of studies in this field is the difficulty in identifying the specific factors that are responsible for changes in survey questionnaire responses. This is particularly relevant for this systematic review if we consider the multiplicity of devices with differing technical specifications that currently inundate the market, and the rapid pace at which technology advances. Moreover, we anticipate that some of the included studies might not report the specific changes made to the original instrument during its adaptation to a new mode of delivery.

In addition, there are large variations in the levels of technological literacy and access to smartphones and tablets across contexts. This might affect the generalisability of the results of the included studies and of this systematic review as a whole.

Why it is important to do this review

Delivery mode effects have been well documented for conventional electronic modes of data collection. However, these effects have not been evaluated for mobile apps.

The electronic delivery of self administered survey questionnaires via mobile apps may affect the survey questionnaire responses in similar ways to other electronic devices. However, the portability of smartphones and tablets has resulted in changes to usage patterns in terms of frequency, duration, and type of interaction with the device, as well as the location in which this interaction takes place (Ishii 2004; Oulasvirta 2012). This, in combination with the reach and high level of personalisation of personal mobile devices, and the popularity of apps, could introduce new ways in which delivery mode effects are expressed (both in terms of their magnitude and direction).

Therefore, it is important to evaluate the effectiveness of self administered survey questionnaires delivered via mobile apps, particularly if we take into consideration the number of research studies that are already starting to use this mode of delivery.

Objectives

To assess the impact of delivery mode on data equivalence by comparing self administered survey questionnaires delivered via mobile apps with alternative delivery modes.

Methods

Criteria for considering studies for this review

Types of studies

Coons and colleagues recommend using parallel randomised trials or cross-over trials when testing for the equivalence between measures (Coons 2009). Therefore, we will consider for inclusion in our systematic review trials that employed any of these study designs. We will also include trials that used a cluster-randomised trial study design. Finally, we will include studies using a paired repeated measures study design.

We will exclude any other type of study design.

Types of data

We will include data obtained from participants completing self administered survey questionnaires. We will consider the data resulting from using both validated and non-validated instruments. Although in measurement science it is important to ensure the validity and reliability of the instruments being used, a number of epidemiological studies still use patient-reported measures whose psychometric properties have not been assessed or are not available. These studies might still provide useful insight into delivery mode effects. For this reason, we will include both validated and non-validated instruments in this review but we will only use the findings of the former group in the quantitative results of this systematic review. We will use information from studies using non-validated instruments to inform the discussion.

We will consider the data offered by healthy volunteers and by those with any clinical diagnosis. We will also include the data resulting from individuals who are completing self administered surveys as part of a complex intervention evaluating different strategies to support the self management of long-term conditions.

We will exclude data from caregivers or parents who are completing survey questionnaires on behalf of someone else. We will also exclude data collected by interviewers.

We will not make exclusions on the basis of the age, gender, or any other socio-demographic variable of the individuals completing the self administered survey questionnaires, but will consider these in our explorations of heterogeneity if sufficient data are available (see Subgroup analysis and investigation of heterogeneity).

Types of methods

We will include trials that use a smartphone or tablet app to deliver and administer survey questionnaires. We will include both native apps that have been developed for a particular mobile device platform, as well as web-apps running on mobile devices. We will only consider smartphones and tablets that became available in or after 2007, as these devices are more compatible with the current software development framework that focuses on apps.

We will exclude apps that allow pictures to be taken with the inbuilt camera as a form of data entry. We will exclude studies where students, researchers, or employees are using smartphones or tablets to collect data as part of their studies, research, or job.

We will only consider studies for inclusion if they compare at least two modes of data collection, one of which must be via a smartphone or tablet computer app. Therefore, we will compare self administered survey questionnaires delivered using a mobile app versus the same survey questionnaire delivered using any other mode (either electronic or paper-and-pencil).

Types of outcome measures

Primary outcomes
  • Equivalence between questionnaires administered via two different delivery modes. We will measure equivalence using correlations or measures of agreement (intra-class correlation (ICC), Pearson product-moment correlations, Spearman rho, and weighted Kappa coefficient), comparisons of mean scores between alternative delivery modes, or both (Gwaltney 2008). We will focus on the equivalence of questionnaires, as opposed to equivalence between constructs or individual items, since the latter are rarely used as outcome measures in trials (Gwaltney 2008). For ICC, we will use 0.70 as the cut-off point for group comparisons (Gwaltney 2008). For other coefficients, we will use 0.60 as the cut-off point for concluding equivalence (Gwaltney 2008). For studies comparing mean scores, we will use the minimally important difference (MID) as an indicator of equivalence (Gwaltney 2008).

  • Data accuracy: comparison of the proportion of errors or problematic items between alternative delivery modes for self administered questionnaires.

  • Data completeness: comparison of the proportion of missing items between alternative delivery modes for self administered questionnaires.

  • Response rates: defined as the number of completed questionnaires divided by the total number of eligible sample units.
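The equivalence cut-offs and the response-rate definition above can be expressed as a minimal sketch. The function names are ours, for illustration only; the thresholds (0.70 for ICC, 0.60 for other coefficients, and the MID for mean-score comparisons) are those proposed by Gwaltney 2008.

```python
def equivalent_by_icc(icc: float) -> bool:
    """ICC cut-off for group comparisons (Gwaltney 2008)."""
    return icc >= 0.70

def equivalent_by_coefficient(coef: float) -> bool:
    """Cut-off for Pearson r, Spearman rho, or weighted kappa."""
    return coef >= 0.60

def equivalent_by_mean_scores(mean_app: float, mean_other: float, mid: float) -> bool:
    """A mean difference smaller than the minimally important
    difference (MID) is taken as evidence of equivalence."""
    return abs(mean_app - mean_other) < mid

def response_rate(completed: int, eligible: int) -> float:
    """Completed questionnaires divided by eligible sample units."""
    return completed / eligible
```

For example, an ICC of 0.75 would meet the group-comparison cut-off, whereas a Spearman rho of 0.55 would fall below the 0.60 threshold for concluding equivalence.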

Secondary outcomes
  • Differences in time to completion of the data collection process between alternative delivery modes for self administered questionnaires.

  • Differences in respondents' adherence to the original data collection protocol.

  • Differences in the acceptability to the study participants of using alternative delivery modes for completing self administered questionnaires.

For all our outcomes, one of the alternative delivery modes for self administered questionnaires must be an app running on a smartphone or tablet computer.

Search methods for identification of studies

Electronic searches

We will search MEDLINE using the search strategy outlined in Appendix 1. We will adapt this search strategy for use in EMBASE, PsycINFO, IEEE Xplore, Web of Science, CABI: CAB Abstracts, Current Contents Connect, ACM Digital, ERIC, Sociological Abstracts, British Education Index, Science Citation Index, Social Science Index, and the Campbell Library. We will also search registers of current and ongoing clinical trials such as ClinicalTrials.gov and the WHO International Clinical Trials Registry Platform (Ghersi 2009).

We will not exclude studies on the basis of their original publication language. We will limit our electronic searches to 2007 and later, as the type of devices that we are considering for inclusion were not available before this date.

Searching other resources

We will search the grey literature in OpenGrey, Mobile Active and ProQuest Dissertations. In addition, we will search Google Scholar. We will check the reference lists of relevant included studies and systematic reviews identified through the electronic searches for additional references. We will contact authors of ongoing trials or relevant publications in press for additional information on relevant studies. We will also contact software developers who might have carried out evaluative research during testing.

Data collection and analysis

Selection of studies

One review author will independently implement the search strategies described above, and two review authors will independently review the output. We will import the references into EndNote and remove duplicate records of the same reports. Two authors will independently screen the titles and abstracts in order to identify potentially relevant studies. Two authors will then independently screen the full-text reports of potentially relevant studies and assess them against our inclusion criteria. Any disagreements will be resolved through discussion between the two review authors performing the screening. If no agreement can be reached, a third review author will act as an arbiter.

Data extraction and management

Two review authors will independently extract data from the included studies using a structured data extraction form. Review authors will then compare their completed data extraction forms and follow up any discrepancies with reference to their original publication. We will summarise the information extracted in the 'Characteristics of included studies' table.

Assessment of risk of bias in included studies

Two review authors will independently assess the risk of bias for all included studies using The Cochrane Collaboration's tool for assessing the risk of bias in randomised trials. Therefore, we will assess the risk of bias across the following domains:

  1. random sequence generation;

  2. allocation concealment;

  3. blinding of participants and personnel;

  4. blinding of outcome assessment;

  5. incomplete outcome data;

  6. selective outcome reporting; and

  7. other bias.

For each included study, review authors will classify each domain as presenting low, high, or unclear risk of bias. Any discrepancies between the two review authors conducting the assessment of risk of bias will be resolved through discussion. If no agreement can be reached, a third review author will act as an arbiter.

We will assess the risk of bias for cluster-randomised trials across the following domains (Higgins 2011):

  1. recruitment bias;

  2. baseline imbalances;

  3. loss of clusters;

  4. incorrect analysis; and

  5. comparability with randomised trials.

We will assess the risk of bias for cross-over trials across the following domains (Higgins 2011):

  1. suitability of the cross-over design;

  2. evidence of a carry-over effect;

  3. whether only first period data are available;

  4. incorrect statistical analysis; and

  5. comparability of results with those from randomised trials.

Measures of the effect of the methods

We will compare the characteristics of included studies in order to determine the feasibility of performing a meta-analysis. For continuous outcomes (i.e., comparison of mean scores between delivery modes, differences in time to completion of the data collection process, and acceptability), we will calculate the mean difference (MD) and 95% confidence intervals (CI). For studies using different measurement scales, we will calculate the standardised mean difference (SMD). For dichotomous outcomes (i.e., data accuracy, data completeness, response rates, adherence to data collection protocols, and acceptability), we will calculate the odds ratio (OR) and 95% CI.
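As a rough illustration of these effect measures, the following sketch computes an MD, a pooled-standard-deviation SMD, and an OR with its 95% CI from summary data. The helper names are ours; in practice a review of this kind would rely on Review Manager for these calculations.

```python
import math

def mean_difference(m1: float, m2: float) -> float:
    """Mean difference (MD) between two delivery modes."""
    return m1 - m2

def standardised_mean_difference(m1, sd1, n1, m2, sd2, n2):
    """SMD using a pooled standard deviation (one common formulation,
    for studies reporting on different measurement scales)."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

def odds_ratio_with_ci(a, b, c, d, z=1.96):
    """OR and 95% CI from a 2x2 table:
    a/b = events/non-events (app arm); c/d = events/non-events (comparator)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    log_or = math.log(or_)
    return or_, math.exp(log_or - z * se), math.exp(log_or + z * se)
```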

As in Gwaltney 2008, we will calculate a summary correlation coefficient using a weighted linear combination method.
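One common weighted linear combination method transforms each study's correlation with Fisher's z, weights each z by n − 3, and back-transforms the weighted mean. The sketch below assumes this is the approach intended; the exact specification would follow the cited method.

```python
import math

def pooled_correlation(rs, ns):
    """Summary correlation: Fisher z-transform each r, weight by
    n - 3, average, then back-transform to the r scale."""
    zs = [0.5 * math.log((1 + r) / (1 - r)) for r in rs]
    ws = [n - 3 for n in ns]
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return (math.exp(2 * z_bar) - 1) / (math.exp(2 * z_bar) + 1)
```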

Unit of analysis issues

In the case of cluster-randomised trials, we will attempt to obtain data at the individual level. If these data are not available from the study report, we will request them directly from the contact author. We will conduct a meta-analysis of individual-level data using a generic inverse-variance method in Review Manager 5.2 (RevMan 2012), which accounts for the clustering of data. If access to individual-level data is not possible, we will extract the summary effect measurement for each cluster. We will consider the number of clusters as the sample size and we will conduct the analysis as if the trial was individually randomised. This approach, however, will reduce the statistical power of our analysis. For those studies that considered clustering of data in their statistical analysis, we will extract the reported effect estimates and use them in our meta-analysis.
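The generic inverse-variance method mentioned above can be sketched as a fixed-effect pooling of per-study (or cluster-adjusted) effect estimates and standard errors. This is illustrative only; the analysis itself would be carried out in Review Manager 5.2.

```python
def inverse_variance_pool(estimates, ses):
    """Fixed-effect inverse-variance pooling: weight each effect
    estimate by 1/SE^2 and return the pooled estimate and its SE."""
    ws = [1 / se**2 for se in ses]
    pooled = sum(w * e for w, e in zip(ws, estimates)) / sum(ws)
    se_pooled = (1 / sum(ws)) ** 0.5
    return pooled, se_pooled
```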

Dealing with missing data

We will contact the authors of studies with missing or incomplete data to request the missing information. If the requested information cannot be obtained, we will use an available case analysis.

Assessment of heterogeneity

We will assess the included studies for heterogeneity across their population and intervention characteristics, and reported outcomes. We will conduct a meta-analysis of the included studies that are deemed homogeneous and assess them for heterogeneity using the I2 statistic. If we obtain a value greater than 50%, we will not assess the included studies for publication bias nor perform a meta-analysis.
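The I2 statistic can be derived from Cochran's Q under inverse-variance weighting; a minimal sketch (the function name is ours):

```python
def i_squared(estimates, ses):
    """I^2 = max(0, (Q - df) / Q), where Q is Cochran's heterogeneity
    statistic computed with inverse-variance weights."""
    ws = [1 / se**2 for se in ses]
    pooled = sum(w * e for w, e in zip(ws, estimates)) / sum(ws)
    q = sum(w * (e - pooled)**2 for w, e in zip(ws, estimates))
    df = len(estimates) - 1
    return max(0.0, (q - df) / q) if q > 0 else 0.0
```

Identical study estimates give Q = 0 and hence I2 = 0%; values above 50% would trigger the decision not to pool.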

Assessment of reporting biases

Where available, we will include studies published in languages other than English in order to minimise language bias. We will conduct a comprehensive search of multiple bibliographic databases and trial registries in order to minimise the risk of publication bias. If we include at least 10 studies, we will assess reporting bias using a funnel plot regression weighted by the inverse of the pooled variance. We will interpret a regression slope of zero as indicating the absence of small-study effects, such as publication bias.
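The slope in such a weighted funnel plot regression is an ordinary weighted least-squares slope; a sketch of that calculation follows (function name ours, and the choice of regressor, e.g. sample size, and of inverse-variance weights is an assumption about the intended variant of the test):

```python
def weighted_slope(x, y, weights):
    """Weighted least-squares slope of y on x. In a funnel plot
    regression, y holds the effect estimates, x a measure of study
    size or precision, and weights the inverse variances; a slope
    near zero suggests no small-study effect."""
    sw = sum(weights)
    xbar = sum(w * xi for w, xi in zip(weights, x)) / sw
    ybar = sum(w * yi for w, yi in zip(weights, y)) / sw
    num = sum(w * (xi - xbar) * (yi - ybar) for w, xi, yi in zip(weights, x, y))
    den = sum(w * (xi - xbar) ** 2 for w, xi in zip(weights, x))
    return num / den
```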

Data synthesis

If appropriate numerical data are available, and a meta-analysis is indicated, we will synthesise the data according to data type. For continuous outcomes, we will calculate either the MD or the SMD. For dichotomous outcomes, we will calculate the OR and the 95% CI. For studies using correlation coefficients, we will calculate a summary correlation coefficient.

If appropriate numerical data are not available or if a meta-analysis is not indicated, we will perform a narrative synthesis of the evidence.

We will use the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach to assess the quality of the pooled evidence, the magnitude of effect of the interventions examined, and the sum of the available data on the main outcomes, and to produce a 'Summary of findings' table for each of our primary outcomes.

We will pool effect estimates if at least two studies remain for each outcome and the I² value is less than 50%.

Subgroup analysis and investigation of heterogeneity

We intend to perform subgroup analyses according to whether the participants were healthy volunteers or had a clinical diagnosis. We will analyse separately those participants completing surveys as part of a complex self management intervention. We will also analyse participants younger than 18 years of age separately from adults. We will perform subgroup analyses based on the type of device (i.e., smartphones versus tablets) and the form of data entry. We will also perform a subgroup analysis depending on whether the survey questionnaires were used for longitudinal data collection or for a single outcome assessment. In the case of longitudinal data collection, we will conduct subgroup analyses based on the duration of the follow-up period, using six-month intervals. Finally, we will perform a subgroup analysis based on whether or not the study was industry funded.

We will investigate heterogeneity using the procedure described above. We will also assess heterogeneity on the basis of the number and types of questionnaire items, and the strategies used to increase response rates.

Sensitivity analysis

We will consider conducting a sensitivity analysis if one or more studies are dominant due to their size; if one or more studies have results that differ from those observed in other studies; or if one or more studies have quality issues that may affect their interpretation, as assessed with The Cochrane Collaboration's 'Risk of bias' tool.

Appendices

Appendix 1. Search strategy for use in MEDLINE

  1. exp Data Collection/

  2. data.mp.

  3. information?.mp.

  4. exp questionnaires/

  5. (self adj administ*).mp. and 4

  6. diary.mp.

  7. exp Self-Assessment/

  8. exp Health Status/

  9. 2 or 3 or 4 or 5 or 6 or 7 or 8

  10. acqui*.mp.

  11. gain*.mp.

  12. collect*.mp.

  13. obtain*.mp.

  14. gather*.mp.

  15. captur*.mp.

  16. exp Hospital Information Systems/ or exp Medical Order Entry Systems/

  17. entry?.mp.

  18. keeping?.mp.

  19. exp medical records/

  20. approach?.mp.

  21. 10 or 11 or 12 or 13 or 14 or 15 or 16 or 17 or 18 or 19 or 20

  22. 9 and 21

  23. 1 or 22

  24. (primary adj2 data adj2 entr*).mp.

  25. 23 or 24

  26. exp Cellular Phone/ or exp Telephone/

  27. exp MP3-Player/

  28. ((handheld or hand-held) adj1 (computer? or pc?)).mp.

  29. ((cell* or mobile*) adj3 phone*).mp.

  30. (smartphone* or smart-phone*).mp.

  31. ("personal digital assistant" or PDA).mp.

  32. exp Computers, Handheld/

  33. "palmtop computer?".mp.

  34. (tablet adj3 (device* or comput*)).mp.

  35. ("Palm OS" or "Palm Pre Classic").mp.

  36. (palm* adj3 comput*).mp.

  37. blackberry.mp.

  38. Nokia.mp.

  39. Symbian.mp.

  40. (windows adj3 (mobile* or phone*)).mp.

  41. INQ.mp.

  42. HTC.mp.

  43. sidekick.mp.

  44. Android.mp.

  45. iphone*.mp.

  46. ipad*.mp.

  47. ipod*.mp.

  48. 26 or 27 or 28 or 29 or 30 or 31 or 32 or 33 or 34 or 35 or 36 or 37 or 38 or 39 or 40 or 41 or 42 or 43 or 44 or 45 or 46 or 47

  49. apps.mp.

  50. 48 or 49

  51. 25 and 50

  52. limit 51 to yr="2007-Current"

Contributions of authors

JMB conceived the study and drafted the protocol. KH, CM, and JC contributed to the design of the protocol and provided feedback on several versions of it. AP and AS contributed to the design of the search strategies. All the authors read and approved the final version of the protocol.

Declarations of interest

This systematic review is part of JMB's PhD.

Sources of support

Internal sources

  • None, Not specified.

External sources

  • None, Not specified.
