User involvement in digital health: Working together to design smart home health technology

Abstract

Background: Public involvement adds value to numerous aspects of health research, yet few studies have attempted to evaluate its impact on research. Evidence of such impact is needed to develop recommendations for best practice and to ensure adequate resourcing.

Aim: To evaluate public involvement within a large interdisciplinary Science, Technology, Engineering and Mathematics (STEM) research project focused on digital health.

Methods: The evaluation was conducted with members of the project's Public Advisory Groups (PAG) and with researchers who had participated in involvement activities. Two questionnaires were designed based on a framework of public involvement value systems and clusters.

Results: Responses from members of the PAG (n = 10) were mostly positive towards normative values, which concern moral, ethical and political aspects of involvement in research, and towards values concerning the conduct of public involvement and best practices. Researchers' responses (n = 16) indicated that they felt involvement was generally effective and increased the quality, relevance and generalizability of their work. However, their responses about the validity of involvement in research were varied. They also highlighted several challenges, including the extent to which public involvement influenced the research, how decisions made in the research might differ from the views generated through public involvement, and barriers to researchers' own participation.

Discussion and conclusion: Our evaluation suggests that members of the public and the researchers value involvement. However, there is a need to consider how to embed public involvement to an even greater extent in STEM contexts, and a need to address barriers to researchers' own involvement.


| INTRODUCTION
The involvement of the public (which includes patients, carers, and health and social care service users 1 ) in health research has gained prominence over the last decade. Much has been written about the benefits that public involvement can have at every stage of the research cycle, 2,3 including setting research priorities, 4,5 designing clinical trials 6,7 and placebos, 8 and identifying treatment outcomes. 9,10 One widely accepted definition of this type of involvement, which is also adopted in this article, is research being carried out "with" or "by" members of the public rather than "to," "about" or "for" them. 1 Within this context, it is worth considering Arnstein's 11 classic "ladder of citizen participation" model, which conceptualizes the degree of involvement from high to low.
Although this model has since been refined to inform other ways of conceptualizing public involvement in health research (eg, [12][13][14][15] ), in reality research may include various forms of public involvement, and these can change over time. It is therefore apt to distinguish three main levels of participation: consultation, where members of the public share their views and these views are used to inform decision-making; collaboration, where an ongoing partnership is established between researchers and members of the public so that decisions about the research are shared; and user controlled, where members of the public hold the power over all strategic decisions in the research. 16

Recent developments in public involvement include guidance on how to achieve successful coworking, [17][18][19] as well as recommendations on how to report activities. 20 However, there is still a great need to build a research evidence base about the impact of involvement on research. 21,22 Doing so would help to ensure the integrity of involvement activities and enable the case to be made for support and adequate resourcing. 23 Science, technology, engineering and maths (STEM) fields are an example of where involvement remains a "work in progress," struggling to compete for time and resources. 24 One explanation for why public involvement is less firmly established within STEM is that the departure from a one-way communication agenda is relatively recent and deliberate, whereas in the arts, humanities and social sciences involvement is rooted in a tradition of participatory research approaches. This is especially problematic given that many STEM fields are heavily involved in the development of a range of digital health solutions, which are frequently championed as a means of delivering care and empowering people to manage their health.
Research has uncovered a variety of barriers and facilitators that service users experience when engaging with digital health, including but not limited to engagement and recruitment approaches. 25 However, at an even earlier stage, public involvement may struggle in such contexts owing to the need to demonstrate its value and impact in STEM.
Conducting empirical evaluation of involvement takes further time and resource, but provides necessary evidence so that involvement can be prioritized alongside and embedded within STEM research.
Meaningful evaluations should reflect public involvement as part of the research process and, as such, must revisit its values and purposes. 26 In an effort to map out the values associated with public involvement in health research, Gradinger et al 27 developed a framework comprising three overarching value systems. These relate to (a) normative perspectives, which concern moral, ethical and political aspects of public involvement; (b) substantive perspectives, which concern the consequences of involvement; and (c) process-related perspectives, which concern the conduct and best practices of involvement. These value systems then contain five value clusters pertaining to each of them (see Table 1). This framework enables a structured approach to identifying what values different stakeholders attribute to public involvement, thus helping to manage potential conflict within a project and its wider organizational context. Although originally developed in the context of health and social care research, the framework has wider relevance. This framework was subsequently used in a modified Delphi study with stakeholders in public involvement in research, to explore areas of consensus and conflict around the proposed value systems. 23 That Delphi study highlighted existing shortcomings in substantive and process aspects of public involvement, which further support the need for robust evaluations of involvement to develop best-practice standards.
With these issues in mind, we conducted an evaluation of public involvement embedded within a large interdisciplinary STEM project that aimed to design a fit-for-purpose system for monitoring health-related behaviours in the home. The main foci of the strands of involvement work described below were to inform research and design, but also to explore issues pertaining to the roll-out of the system into real homes. 28 To the best of our knowledge, this is the first evaluation of public involvement in a digital health project that uses the framework of Gradinger et al. 27 In addition to reporting the outcomes of the evaluation, this article discusses the process of balancing the needs and expectations of service user groups in a digital health project that was driven by several factors, including the expectations of the funder, research targets, and the development of a working system. We begin by describing the methods used […]

| Context
The SPHERE project aimed to design a smart home system […]. The public engagement and involvement team also organized activities just with the researchers. These included annual workshops to discuss public involvement, and shorter lunchtime sessions held every 3 months to discuss issues emerging from public involvement activities.

| Sample
The evaluation was conducted with two different groups: (a) members of the SPHERE PAG; (b) researchers who had participated in public engagement and involvement activities.

| Questionnaire development
We chose a questionnaire approach to make it as easy as possible for members of the PAG and of the research team to participate. We were mindful in particular that members of the PAG were already generous with their time and that the researchers were already working at full capacity. We designed two questionnaires, one for each group, based on the value systems and clusters framework described above. 27

| Data collection
This evaluation was conducted roughly at the midway point of the project.

| Collation and analysis of responses
Data were entered into Excel spreadsheets for collation and reviewed by the authors. Given the small sample size, responses to questions with Likert-type response options were summarized in frequency tables and no further statistical analysis was performed.
The qualitative material in the free-text responses provided explanation and deeper understanding of experiences. These data were independently coded by AB, BM, FH and RGH, who subsequently discussed and refined them in a data analysis meeting. This process of critically revising the codes resulted in agreement on thematic categories, which were then applied to the data following a qualitative content analysis approach. 29 A descriptive summary was developed based on these findings.

| RESULTS
Results are reported separately for members of the PAG and the researchers.

| The PAG's views
Ten members of the PAG returned completed questionnaires. Of these, five people reported no previous involvement in research; two people had been involved in clinical research; the remaining three people had previous experience of research or community engagement. Respondents were motivated to join the PAG because they felt their contribution could provide benefits to themselves (three people) and to others (six people), they supported the aims of the project (four people), and they felt they could provide specific insights that would lead to relevant and realistic outputs (four people). Free-text responses suggested that members felt they had informed the design of the technology ("Listened to ideas about watch/arm rest charger - things older people know," P3) and raised awareness of ethical issues ("I believe we have been the 'common sense' element, giving examples and either questioning or reassuring. We have asked questions that make people/researchers think an issue through," P3).

| The researchers' views

Sixteen researchers returned completed questionnaires. Most had taken part in activities such as workshops (fourteen people) where feedback from the public was discussed with a view to informing ongoing research and development. In the following sections, the term "public engagement" is sometimes used instead of public involvement.

Not all researchers suggested improvements; these respondents explained that they were either satisfied with the delivery of public involvement in SPHERE or did not feel they knew enough to provide a useful answer. This question was phrased such that the lower end of the Likert-type options (selected by six people in total) indicates researchers were mostly satisfied with the way in which the involvement took place within the project.

| Substantive value system

The free-text responses focused on areas of improvement to achieve more robust outputs from involvement activities, which included how well the public involvement work was able to influence the research ("The path between the public's viewpoints to the work packages has always been incongruous. […] As an example, I need to think for a minute or two to recall directives that were imposed on the research we perform that followed directly from public engagement, but I can easily recall a number of cases where work packages made decisions that felt - to me at least - contrary to the mood reported in the reports that we received from public engagements," R13). Similar feedback was also reported in the additional comments section of this questionnaire, where one researcher wrote that they felt the public's views were not always taken on board because other team members "are not present at events or they base their opinion on conversations they've had instead of looking at the overall feedback" (R6).
Another issue raised in this section was that the project's involvement […]

| DISCUSSION

[…] in keeping with other evaluations. 30 The researchers generally found their experience of involvement to be useful and felt it had increased the quality, relevance and generalizability of their work. However, their responses also indicated a need to consider how best to enable the involvement to have deeper impact, as some research decisions did not always accord with the views from involvement activities. In some ways, this is not necessarily a problem: members of the PAG were comfortable that the researchers possessed technical "expertise," and a key impact of the involvement was the enhancement of empathy for the future "users" of the technology under development. Although some researchers did express resistance to handing over, or even sharing, ownership of the research with public involvement contributors, it is also feasible that such views could be explained by the researchers' relative newness to public involvement in research. This lends further evidence to calls to create an empirical evidence base of the impact of public involvement in research, which will pave the way to best-practice standards. [21][22][23]

The trend towards more fluid collaborations between universities and external communities has uncovered challenges related to translating experiential learning and intellectual challenge into appropriate end-of-project outputs. 31 Indeed, some researchers in our evaluation said that delivering research in line with the funded research agenda was the primary goal of their work, and there was sometimes reluctance to alter plans on the basis of input from involvement activities. The focus was on developing a working system that could be replicated and rolled out into a large number of homes. Another study found that health researchers experienced similar tensions around the involvement of service users, strict deadlines, and the need to share power in research relationships. 32

We suggest that the current research funding landscape could contribute to such tensions. […] or one-to-one interviews. We chose a questionnaire based on a decision to make participation as easy as possible. It is of course possible that other approaches could have generated different or further views, but we were heartened by the depth of the answers provided in the free-text boxes. The provision of questionnaires was also advantageous because people were able to complete the evaluation at a time and place convenient to them.

We chose a mixed methods approach, using a triangulation process 33 that combined the collection of quantitative and qualitative information to obtain a more complete picture. This was for two reasons: first, we wanted to ensure the questionnaires were straightforward to complete, and we user-tested them within the evaluation team; second, we thought it vital that we collected qualitative detail alongside the quantitative material. We found that the quantitative material provided a snapshot of experiences and opinions, while the qualitative material provided depth and information that could enable improvement and change. This is in keeping with recommendations for the use of mixed methods approaches and the complementarity of quantitative and qualitative information. 33

We developed different questionnaires for the researchers and for the PAG members because the substantive values associated with incorporating involvement into research were not obviously the domain of the PAG members. Although not all PAG members and researchers responded to our invitation to complete the evaluation, the diversity of backgrounds and the number who did respond provide confidence that the views captured reflect those of the wider groups of PAG members and researchers. One caveat is that the most engaged researchers were likely the ones who took part in the involvement activities.
Additionally, we note that one of the evaluations was completed by a researcher who had not directly taken part in involvement activities. This might represent a failure on our part to make the events seem relevant and enticing to all researchers, or it might be that no amount of relevance or enticement would have encouraged some researchers to come. We did not formally collect information about why some researchers in the project were not involved in public involvement events, and this would be an excellent topic for further work.
Informally, we understood that researchers who did not come to events felt that their time was better spent elsewhere in their work. It is important to acknowledge that, as an evaluation team, we believe that public involvement is useful and important to the delivery of research that is grounded in the values and views of members of the public. We are aware that this might have affected our interpretation of the evaluation material, and this is why we used the framework of Gradinger et al 27 as well as a robust approach to analysis.
In conclusion, public involvement in the project can be best described as "expedient," as members of the PAG and researchers were involved in a process that was fit for purpose and deliverable.
There is always scope to refine and improve involvement activities, and with more resource we would have conducted more coworking processes and explored how best to remove barriers to researchers' involvement. The evaluation indicates that the members of the public who were involved felt that their views were valued and that they were listened to, and that researchers in a technology development environment valued involvement. However, the occasional respondents who were less positive about their experience of public involvement within the project suggest there is still a need for improved communication about the value of public involvement, as well as for consideration of the drivers for research. We think that this evaluation and critical reflection on our work represent a substantial step forward in fostering and nurturing public involvement in a digital health project.

ACKNOWLEDGEMENTS
This work was performed under the SPHERE IRC, funded by the UK Engineering and Physical Sciences Research Council (EPSRC), Grant EP/K031910/1. We thank the members of the Advisory Groups and the researchers who took part in this evaluation for their time and insights.