Personas and scenarios
Personas and scenarios were the tools used to understand the system’s users and to imagine them using the system under various conditions. Three personas were developed from the user information, clearly identifying and differentiating the system’s three user groups. Developing user scenarios helped establish a vocabulary for identifying users’ current needs, establishing the basis for a particular task or function, and focusing on the activities the system needed to support, rather than making users conform to the system’s functionality.12 Questions like “Would our persona use or need this feature?” encouraged analytical and broad thinking about the various issues the system faced.
The original version of UMClinicalStudies was very text-heavy and, judging from the results of user observations and testing, not intuitive. The interface of UMClinicalStudies was broadly construed to include not just the visual/auditory display and interaction dialog, but the situation in its entirety, including interacting with a public that already holds many cautions regarding clinical research. We also realized, through our work with the users, that the system was considerably difficult for both researchers and volunteers to navigate, and that action steps and paths were not clearly defined. Logging in to the registry (for new user registration or return visits) was a major problem for users because the login link was buried deep within the Web site’s pages. Many key components of the site were several clicks away. Because of the unintuitive nature of this earlier version of UMClinicalStudies (then called “Engage”), the bounce rate (the percentage of single-page visits, or visits in which the person left the site from the entrance page) was extremely high, averaging 72.46%, which caused great concern.
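Bounce rate, as defined above, is simply the fraction of visits that view only the entry page. A minimal sketch of the computation, using hypothetical per-visit page counts rather than the site’s actual analytics data:

```python
def bounce_rate(page_counts):
    """Percentage of visits that viewed only a single page (a 'bounce')."""
    if not page_counts:
        return 0.0
    bounces = sum(1 for pages in page_counts if pages == 1)
    return 100.0 * bounces / len(page_counts)

# Hypothetical sample: pages viewed per visit.
visits = [1, 3, 1, 1, 5, 2, 1, 1, 1, 2]
print(f"{bounce_rate(visits):.2f}%")  # 6 of 10 single-page visits -> 60.00%
```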
From all the evaluations and user observations conducted, a list of priorities was generated that defined the project objectives for the new release. The underlying aim of the reconstruction process was to improve the usability and user experience of UMClinicalStudies for both the researcher and the volunteer audience. The top criteria for this phase of the redesign included, among other things:
Improving the site architecture so that the Web site would appear as a single application even though it consisted of several different components.
Improving entry points and accessibility into the system, particularly making the process of logging into the Registry simpler and more straightforward for both researchers and volunteers.
Improving navigation within the Web site.
Improving the search function within the Web site.
Redesigning the study posting section—concentrating on improving the workflow for researchers and study teams to post and edit studies.
Improving ‘matches’ between research participant volunteers and open studies.
Making the Web site more intuitive and inviting, especially for the volunteers.
Improving the interactions and the overall experience that users have with the system.
Improving communication with users, including email notifications to researchers and volunteers when a match is made (a relevant study for volunteers, and potential research participants for researchers).
Reducing the bounce rate for the Web site to below 40%.
Once the project objectives were defined, the reconstruction team went about designing the new release of UMClinicalStudies. Brainstorming sessions within the team led to the creation of new site mockups that were then tested and a final interaction design was created for the new release. Once the new design was validated, a branding treatment was applied to improve the visual appeal of the Web site so it would appear warm and inviting to users.
We found it important to design with mindfulness of the tension between the two main user populations: researchers and volunteers. The two groups differ in demographics, computer experience, ways of using the system, requirements, and so on, and great care and attention to detail went into designing solutions that would meet both groups’ needs.
Once the wire-framing and Web site architectural designs were complete, the mockups and design specifications were handed over to the developers to begin the implementation stage. The implementation stage consisted of developing the tool in iterative cycles with testing and analysis being done after every cycle. All team members, including the project manager (50% dedicated time), usability specialist (0.75 Full Time Equivalent [FTE]), Web designer (0.25 FTE), application developers (1.5 FTE), and users (approximately 200 hours), were involved in analyzing and evaluating every sequence.
Some key findings during analysis and evaluation were that the login links and entry points to the system, for both volunteers and researchers, were buried too deep within the Web site and difficult to find. As a volunteer, it was difficult to move from finding a study to actually registering interest in that study. For a researcher, it was difficult to move from posting a new study to the site to looking in the registry for volunteers for that study. Other issues discovered included broken links and dead ends. This drove the need to make logging in (or registration for first-time users) and moving between the components a top priority.
Other focus areas for site development were more specific to interaction design and usability. We realized that the site’s navigation was not intuitive, despite the fact that it offered more features and content than peer sites. It also became a priority to provide additional and more consistent “Help” throughout the site, offering guidance specific to the page a person was on, and to place consistent action buttons with standardized labels and icons.
Through the heuristic evaluation process, we found that the registry enrollment process was too lengthy for user expectations. This was compounded by the fact that the site at that time required people to register in one session (with no “save and complete later” option), which caused some potential volunteers to abandon the process midway. Thus, a “save and complete later” option was implemented.
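The “save and complete later” behavior can be sketched as persisting a partially completed registration so the volunteer can resume it in a later session. The storage mechanism and field names below are illustrative assumptions, not the production implementation:

```python
import json

# Illustrative in-memory store of partially completed registrations,
# keyed by the volunteer's email address.
DRAFTS = {}

def save_draft(email, form_data):
    """Persist whatever registration fields the volunteer has filled in so far."""
    DRAFTS[email] = json.dumps(form_data)

def resume_draft(email):
    """Return the saved fields, or an empty form if nothing was saved."""
    return json.loads(DRAFTS[email]) if email in DRAFTS else {}
```

A real system would store drafts server-side against the account rather than in memory, but the interaction is the same: partial progress survives the end of a session.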
Confusing terms like “Posting Form” (to indicate editing a study posting) and “Public Posting” (to view a posting) were remedied with clearer terminology. The Web site’s text was also heavily medical and well above an eighth-grade reading level (the recommended level for consumer-centric health literacy),13 and nonintuitive terms like “Bulletin Board” were debated and eventually replaced by more action-oriented terms such as “Edit a study” for researchers and “Find a study” for potential volunteers.
Furthermore, a review of page analytics showed that a large percentage of potential volunteers used the system to browse open clinical trials without signing up in the registry. Because the study postings listed on UMClinicalStudies carry the research team’s contact information for each study, in addition to a staffed phone line to UMClinicalStudies staff, direct routes to participation remain available without requiring an account. This validated the importance of offering volunteers various search strategies, such as by condition or by recently added studies. Significantly for the improved search, the Google Mini was embedded within the application, allowing users to customize their search across various file types while seeing only the content they are authorized to view.
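The authorization-aware behavior of the embedded search can be sketched as filtering results against a user’s roles. This is an illustrative sketch only; the actual filtering was performed by the Google Mini appliance, and the role names and result structure below are assumptions:

```python
def filter_search_results(results, user_roles):
    """Return only the results this user is authorized to view.

    Each result carries the set of roles permitted to see it;
    'public' content is visible to everyone, including anonymous users.
    """
    visible = []
    for doc in results:
        allowed = doc.get("allowed_roles", {"public"})
        if "public" in allowed or allowed & user_roles:
            visible.append(doc)
    return visible

results = [
    {"title": "Find a study", "allowed_roles": {"public"}},
    {"title": "Draft posting", "allowed_roles": {"researcher"}},
]
print([d["title"] for d in filter_search_results(results, {"volunteer"})])
```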
From the surveys circulated to the researchers, one of the key issues requiring attention was the number of research participants recruited through the system; it was very low for some programs, particularly the University of Michigan’s Comprehensive Cancer Center. Studies that seemed to do well were those seeking a large proportion of “healthy volunteers” (volunteers who did not indicate a specific disease or medical condition). The effectiveness of the match between study eligibility criteria and volunteer personal information was substantially improved by including both the registrant’s previous/current conditions and whether they were interested in studies seeking healthy volunteers; an individual may have asthma, for example, yet still be considered a “healthy participant” for other studies. Staffing solutions were also identified to assist groups like the Cancer Center in contacting and screening the volunteers who expressed interest in cancer studies, by connecting them with professional oncology nurses.
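The improved matching logic described above can be sketched as follows. The field names and record structure are illustrative assumptions, not the production schema:

```python
def matches(volunteer, study):
    """A volunteer matches a study if any of their previous or current
    conditions appear in the study's eligibility criteria, OR if the
    study seeks healthy volunteers and the volunteer opted in to those.
    """
    condition_match = bool(set(volunteer["conditions"])
                           & set(study["eligible_conditions"]))
    healthy_match = study["seeks_healthy"] and volunteer["open_to_healthy_studies"]
    return condition_match or healthy_match

# An asthma patient who opted in to healthy-volunteer studies matches both
# an asthma study and an unrelated study seeking healthy participants.
vol = {"conditions": ["asthma"], "open_to_healthy_studies": True}
sleep_study = {"eligible_conditions": ["insomnia"], "seeks_healthy": True}
asthma_study = {"eligible_conditions": ["asthma"], "seeks_healthy": False}
print(matches(vol, sleep_study))   # True
print(matches(vol, asthma_study))  # True
```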
Another key issue identified by research teams was streamlining their workflow. Thus, we integrated study entry information with eResearch, the Web-based system that centralizes the review and approval process for Human Subjects Research Applications at UM. Information is pulled from the protocols entered into eResearch and prepopulates the posting information in UMClinicalStudies. This process ensures that research teams need to enter study information in only one place.
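The prepopulation step amounts to mapping fields from an approved eResearch protocol record onto a draft study posting. The mapping below is a hypothetical sketch; the real integration and field names differ:

```python
# Hypothetical field mapping from an eResearch protocol record to a
# UMClinicalStudies posting (illustrative names, not the actual schema).
FIELD_MAP = {
    "protocol_title": "study_title",
    "pi_name": "principal_investigator",
    "irb_number": "irb_approval_number",
    "summary": "study_description",
}

def prepopulate_posting(protocol):
    """Copy approved-protocol fields into a draft study posting so the
    research team enters study information in only one place."""
    return {posting_field: protocol.get(src_field, "")
            for src_field, posting_field in FIELD_MAP.items()}

protocol = {"protocol_title": "Asthma and Exercise",
            "pi_name": "Dr. Example",
            "irb_number": "HUM00000000",
            "summary": "A study of exercise tolerance in adults with asthma."}
draft = prepopulate_posting(protocol)
print(draft["study_title"])  # Asthma and Exercise
```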
By incorporating an optional email notification system, in which volunteers are notified when new studies are matched to them and researchers are notified when potential research participant volunteers have requested contact or are matched to their study based on eligibility criteria, the system has not only increased awareness of opportunities but also spurred user interest through this simple transaction. Volunteers have been more proactive in seeking studies, and researchers have reported an increase in participants through the system. The number of studies posted on UMClinicalStudies has increased by over 25% since the new version was released, and the site bounce rate for all users had been reduced to below 40% as of June 2010.
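The two-sided, opt-in nature of the notifications can be sketched as follows; the record fields and message wording are illustrative, not the system’s actual templates:

```python
def notifications_for_match(volunteer, study):
    """Build the optional notifications generated when a match is made:
    one to the volunteer, one to the study's research team.
    Both sides must have opted in; fields and wording are illustrative.
    """
    messages = []
    if volunteer.get("email_opt_in"):
        messages.append((volunteer["email"],
                         f"A new study may be relevant to you: {study['title']}"))
    if study.get("team_email_opt_in"):
        messages.append((study["team_email"],
                         "A potential participant matched your eligibility criteria."))
    return messages

vol = {"email": "volunteer@example.org", "email_opt_in": True}
study = {"title": "Sleep and Memory", "team_email": "team@example.org",
         "team_email_opt_in": True}
for recipient, body in notifications_for_match(vol, study):
    print(recipient, "->", body)
```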
A complete cycle of testing was performed once the development was complete. An alpha test within the teams and the organization was followed by a beta test within selected user groups. The feedback received from these tests was incorporated back into the release to make it more user-friendly.