Designing Web-based Forms for Users with Lower Literacy Skills
Previous research (Summers & Summers, 2003) has identified patterns of behavior and effective practices related to how lower literacy users interact with health-related Internet sites. However, prior research has not addressed how such users react to the unique challenges represented by interactive medical forms on health sites, such as interactive health quizzes, questionnaires, and registration forms. The goal of this four-month study was twofold: 1) to identify reading, writing, and navigational strategies of users with lower literacy skills when interacting with web-based forms in a medical context; 2) to develop design principles for making such web-based medical forms usable and accessible for lower-literacy adults. Eyetracking was used to gather data about how users interacted with a variety of web-based forms. Analysis of sessions with 26 low-literacy users (REALM score < 60) identified a variety of challenges users faced in completing forms. Based on these observations, principles of effective form design were proposed. Revised prototype forms were designed in accordance with these principles and iteratively tested with 14 users to verify improved usability.
While the “digital divide” has been studied in terms of access to computers and to the internet, less work has been done examining the accessibility of web-based information for lower-literacy users in terms of form and content (Summers & Summers, 2005). About half of the adults in the U.S. read at the 8th grade level or below (Kirsch, Jungeblut, Jenkins, & Kolstad, 1993). Yet many websites intended for use by adults are written at a much higher reading level. One study of 1,620 state and federal government websites found an average readability of 11.0 grade level, an increase from the 10.8 average grade level found in 2004 (West, 2005).
Reduced access or limited understanding of health information can dramatically affect health care outcomes and costs (American Medical Association, 1999). Ironically, as government services move online they may become less available to the very constituents who need them most. It is therefore urgent that website designers learn to support the needs of lower-literacy participants.
Previous research has established some basic principles related to how lower literacy users interact with health-related Internet sites, and provided research-verified guidelines for how health websites can be made more effective for such users (e.g., Summers & Summers, 2005). However, to date there has been no published research on how such users interact with medical forms on Internet sites, such as interactive health quizzes, questionnaires, and registration forms. Such forms represent a qualitatively different type of challenge for Internet users, compared to the kind of purely informational pages on which the published research has so far focused. On informational webpages, users interact with the webpage only in order to find their way to the information that interests them. By contrast, in order to successfully use forms, users must interact by providing information of their own, typically as part of a structured sequential process, and then interpret feedback in the form of information, error messages, and completion messages.
PURPOSE OF THIS STUDY
This study was designed to examine the strategies of low- and medium-literacy users for interacting with medical forms such as interactive quizzes, questionnaires, and registration forms on the Web.
In Phase 1, users were observed interacting with various forms and other interactive tools from several health sites. To help understand problem areas, researchers reviewed recordings of the sessions and used eye tracking tools to understand why users could or could not complete parts of a form. Phase 2 involved developing solutions to the problems and challenges observed in Phase 1 through the iterative development of a prototype registration process and health evaluation tool.
1. Discover what reading, writing, and navigational strategies are employed by users with lower literacy skills when interacting with forms; discover what formats and interaction styles do or do not support these strategies
2. Develop guidelines for developers of web-based forms that will support lower-literacy users
3. Implement and refine these guidelines in a prototype website through iterative testing and redesign
4. Test the validity of the guidelines through a qualitative analysis of user performance on the new prototype as compared to performance on the original site
The project consisted of two phases: Discovery (Phase 1) and Analysis & Response (Phase 2).
Phase 1: Discovery
Objective
Observe users with lower literacy skills interacting with a wide variety of forms, interactive tools, and quizzes; identify design elements and approaches that support user success; and identify design elements and approaches that lead to failure.
26 participants interacted with pre-selected health forms, tools, or quizzes that were relevant to each user's particular health condition. Testing occurred in two-hour sessions throughout May 2005. Sessions were recorded using Camtasia and ClearView eyetracking software, with a Tobii 1750 eyetracker. The researcher interacted with the participant only when she needed to give direction about the next task or to prompt the user for feedback or thoughts. Users were asked to complete the forms, working “as if they were at home”; if users seemed distracted by the researcher's presence, the researcher would sometimes observe from the observation room in order to allow the users to work through the forms as they would do in a natural environment.
Recordings of the test sessions were reviewed in order to identify patterns of user behavior and patterns of interaction with particular form elements. Form elements or aspects of form design that seemed to support user success or to lead to usability problems were identified. The research team then identified a preliminary list of design challenges and design ideas to be incorporated in the prototype for Phase 2.
The eyetracking data was used to help understand user behaviors. When usability problems occurred, eyetracking data was used to determine whether participants had not seen relevant information or interface elements, or had seen information but not understood it. If participants did not see parts of the interface, the eyetracking sometimes helped identify where they expected to find the information they needed. This approach to the eyetracking data relies upon the assumption that what a person is looking at is what he or she is attending to, generally referred to as the “eye-mind” hypothesis (Goldberg & Wichansky, 2003). The eyetracking data also provided insight into users' “micro-level behaviors” (Goldberg & Wichansky, 2003), the visual processing strategies of which users are generally not aware but which can sometimes support additional inferences about complex cognitive processes.
The websites used in Phase 1 are listed in Appendix A. Forms included quizzes, health assessment tools, registration forms, eligibility screeners for health assistance programs, prescription order forms, dietary assessment tools, and other interactive tools.
26 users with serious/life-threatening conditions including high blood pressure, high cholesterol, diabetes, HIV, and asthma, were recruited from literacy centers and health clinics. All users had a REALM (Rapid Estimate of Adult Literacy in Medicine) score below 60.
In order to focus on usability and accessibility issues that were products of low literacy rather than simple unfamiliarity with the computer or with the Web, all participants were observed using the computer prior to testing and screened for basic computer and Web facility against a pre-defined checklist.
Phase 2: Analysis and Response
Objective
Develop principles for the design of forms and other interactive web-based elements such as quizzes or assessment tools based on the observational research and iterative prototype design.
14 participants were invited to interact with two forms in a prototype version of the Pfizer for Living website: the new user registration and the heart attack and heart disease risk assessment tool. These forms were refined through a series of iterative qualitative tests. Problems identified in each round of 2-3 tests were addressed through design modifications and then the forms were re-tested. Once the design seemed stable, a final round of testing was conducted in which participants interacted with both the original version of the forms and the revised prototype version. The order in which the two versions were presented was varied.
Based on the qualitative observational tests in Phase 1 and the iterative prototype tests in Phase 2, design principles were articulated for making interactive quizzes, login screens, and registration forms more usable for lower-literacy users.
14 participants helped to test the prototype design. Three of the users participated in the Phase 1 testing as well (these participants had unusually high levels of difficulty in Phase 1 and the research team wanted to confirm that their needs had been met). One participant had a REALM score of over 60, and that participant's results were not used in the final iteration of the principles and prototype design. Participants met the criteria for computer and Web-browsing facility described above.
Findings from the two phases of research centered around two basic challenges: helping users progress through the forms, and helping users understand the results of their actions by interpreting feedback and error messages correctly.
Each finding below is based on observations of user behavior at existing sites from Phase 1 of the research, followed by design recommendations that were tested and verified in the prototype design (Phase 2).
USER PROGRESS THROUGH FORMS
Observation: Inability to begin the login process (confusion about new v. returning status)
Users did not always distinguish between being a new user and a returning user. For example, upon arriving at a new site, many users entered a name and password in the “returning users” fields even though they were new to the site. In fact, some users who knew that they needed to register still tried to enter a user name and password in the returning-user fields. Eyetracking also showed that some users looked at the “register” links but still attempted to log in.
Provide a single entry point for new users and returning users. Have all users enter their email, then (in the space below the email field) ask if they have already registered with the site. Provide two radio button options for them to make their response, e.g.:
No, I would like to register now
Yes, I have already registered, and my password is: ____________
Below the radio buttons, provide a Log In button.
If users enter an email that is not already in the database, take them to a registration page. Include an explanatory message with a link to “log in” in case users who have already registered just need to re-enter their email address.
If users enter an email that is already in the database, but do not enter a password or enter the wrong password, take them to a “forgot password” page.
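The routing behind this single entry point can be sketched as follows. This is a minimal illustration of the recommendation, not code from the study; the function name, return values, and in-memory user store are all hypothetical.

```python
# Sketch of single-entry-point login routing. Assumes a simple
# mapping of registered email addresses to passwords; all names
# here are illustrative.

def route_login(email, wants_register, password, registered_users):
    """Decide which page to show after the single Log In button.

    registered_users maps lowercased email -> password.
    Returns one of: "registration_with_login_link", "logged_in",
    or "forgot_password".
    """
    email = email.strip().lower()  # treat email case-insensitively
    if email not in registered_users:
        # Unknown email: go to registration, with a "log in" link in
        # case a returning user simply mistyped their address.
        return "registration_with_login_link"
    if password == registered_users[email]:
        return "logged_in"
    # Known email with a missing or wrong password, regardless of
    # which radio button was chosen: go to the "forgot password" page.
    return "forgot_password"
```

Note that a registered email with no password lands on the “forgot password” page rather than an error message, matching the flow described above.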
Observation: Difficulty generating usernames and passwords
Most users with lower literacy skills were unable to generate unique usernames on sites that required them, which meant that these users could not register successfully on such sites. Users experienced higher rates of success on sites that used email addresses as usernames. Not all lower-literacy participants had email addresses, however, and no participant without an email address was able to generate a unique username.
The need to generate passwords also caused great difficulties for users with lower literacy skills:
Users tended not to read instructions for creating passwords, and they did not always understand error instructions.
Users who did read the instructions for creating passwords frequently misunderstood them. Several users thought that a minimum password length of eight characters meant that the password had to be exactly eight characters. This led to significant wasted effort and anxiety.
Users were confused when prompted to provide a “hint phrase.” Some were unable to complete the registration process when a hint phrase was required.
Observation: Difficulty completing forms in the designed order
Users struggled significantly with forms that did not support a linear path through the form content. For example, on one site, the results of user input on the bottom right part of the page were displayed in the top left area of the page; as a result, many users didn't see these results. On an HIV drug interaction site, users had to search for their drug, highlight it in a list, then click a separate button to add it to a separate “user” list. Even users who figured out this process repeatedly forgot the final step of “adding” the drug. Users on these sites and on others like them were confused by having to return to earlier parts of the form in order to repeat steps.
Users with lower literacy skills were particularly dismayed and sometimes abandoned the task if they had to reenter data because they had used the BACK button and lost the data they had already entered. Users also responded with anxiety and sometimes with task abandonment if using the BACK button generated a warning message about needing to re-send data. Errors that required returning to earlier parts of the form were also harder to find and correct.
Users experienced anxiety and sometimes frustration when the form page refreshed after they entered information. Some users were afraid that they had broken something when the screen refreshed and wanted to close (abandon) the form or tool.
Keep the process linear, with no diversions to other sections of the page or to other pages.
Display the results of user actions within the user's current focus of attention, immediately to the right or below the input area, to support a linear reading path.
Don't refresh the page or design page contents to change location when the user inputs data.
Try to match the user's mental model of the process. For example, don't split up processes that seem like a single step from the user's perspective (e.g., requiring users to search for their medication, then select it by clicking the name, then click an Add button to add it to a list, rather than using a combined process).
Avoid popups or links to other pages.
For multi-page forms, provide buttons to navigate forward and back between pages.
Make sure users can navigate to a previous page without losing their data, whether they use the provided back button or the browser's back button.
Don't make users enter the same information more than once. For example, if they have already entered data such as their email address, pre-populate the email address field on new pages.
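The data-preservation and pre-population recommendations above can be sketched as server-side session logic. This is an illustrative minimal sketch, not the study's implementation; the session structure and field names are hypothetical.

```python
# Sketch: store every submitted field in a server-side session so a
# form page can be re-rendered with data intact when the user
# navigates back, and so fields like email can be pre-populated on
# later pages. Data shapes are illustrative.

def save_page(session, page_fields):
    """Merge the fields submitted on one page into the session."""
    session.setdefault("answers", {}).update(page_fields)
    return session

def render_value(session, field_name):
    """Value to pre-populate a field with (empty string if unseen)."""
    return session.get("answers", {}).get(field_name, "")
```

With this pattern, whether the user clicks the form's own back button or the browser's BACK button, the page is rebuilt from the session and nothing the user typed is lost.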
Observation: Mixed results from location indicators
Sites that provided a progress indicator, such as “Step 2 of 8,” helped motivate users to continue through the process, even when they were having difficulty with a page. However, when a site allowed users to click on the progress indicator, users would sometimes use it to navigate backward and forward and would lose track of what they had and hadn't done.
Observation: Skipping parts of forms below the fold line
When questions, answers, or links to the next screen appeared below the fold and required scrolling, users sometimes missed this content because they didn't notice that they needed to scroll down. For tools that had multiple pages, users sometimes assumed they were finished and abandoned the form because the “next” step was below the fold.
Keep each element of the form above the scroll line. If the form is longer than one screen, break it into several pages.
Make sure the action button to move on to the next step (for example, “Continue,” “Next,” or “Go to page 2 of 2”) appears above the scroll line.
Don't disable the action button for moving to the next step or page. If users proceed prematurely, present incomplete fields on a new page.
Observation: Confusion and skipping caused by embedded text
Forms with a lot of text, including instructions, long field labels, and descriptions of services, tended to trigger skipping. Users sometimes skipped over information that they needed in order to fill out the form correctly. The eyetracking information showed that users would sometimes read the very first text on the page, or might read small chunks of text immediately above the form fields. If this text seemed non-essential, these users would skip everything else. Most users with lower literacy skills didn't read any introductory text before attempting to fill in fields.
If users encountered unfamiliar medical words, they sometimes skipped those questions or answered them inaccurately. For example, when asked about relatives, users were unsure if their “relatives” included cousins, or spouses, or people who had already died. Similarly, users didn't know how to proceed if they were asked for specific medical information, like their systolic blood pressure.
Provide clear, concise instructions on how to use tools or fill in forms.
Place instructions immediately above the field they relate to.
Write all instructions at a 6th to 8th grade reading level or below, using familiar words.
Use a large text size (14 pt) to increase legibility.
Only show information that is relevant to the process of filling out the form.
Keep questions simple by using familiar words and allowing for imprecise answers.
Minimize medical terminology. When medical words are necessary, define the word in context.
Use simple sentence structures.
When possible, ask qualitative rather than quantitative questions. For example, allow users to say their blood pressure is “high” if they don't know the exact number.
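As one way to screen instruction text against a grade-level target like the one above, a rough Flesch-Kincaid calculation can be sketched. This is an illustrative aid only (the vowel-run syllable counter is a crude heuristic, and readability formulas are no substitute for testing with real users, a point the discussion of word familiarity below reinforces).

```python
import re

# Rough Flesch-Kincaid grade-level estimate for form instructions.
# The syllable counter approximates syllables as runs of vowels,
# so treat results as a screening aid, not a precise measurement.

def count_syllables(word):
    """Approximate syllables as runs of vowels (minimum 1)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    """Flesch-Kincaid grade level: 0.39*(words/sentence)
    + 11.8*(syllables/word) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (n / sentences) + 11.8 * (syllables / n) - 15.59
```

Short sentences built from familiar words score well below the 6th-to-8th grade target, while dense clinical phrasing scores far above it.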
Observation: Confusion over field labels
Labels that used jargon or unfamiliar words led to mistakes. For example, some users didn't know what “brand” meant and concluded that they were being asked for the drug's manufacturer. Some users didn't know what “residency status” meant and failed to identify themselves on forms as U.S. citizens. Users also struggled sometimes with jargon such as “Submit” or “Verify.”
Users routinely abbreviated field labels to a single information-bearing word. For example, users saw “Email Address” as “Address,” and typed in the street address. Similarly, they saw “First Name” as “Name,” and typed in both first and last names.
When field labels weren't located close enough to the appropriate fields, users didn't always connect the label to the right field. Also, when labels appeared below fields, users sometimes assumed that text above the field, even when it was far away, was the field label.
When possible, use one-word field labels.
When possible, use familiar, easy-to-understand words.
Place field labels close to the relevant field, above or to the left, with ample white space between the field label and other fields.
Observation: Field length and display positioning as visual cues
Field length can be an important supplemental clue about expected input for users with lower literacy skills. Fixed field lengths can also help users notice and avoid input errors. If fields weren't large enough for the expected input, such as email addresses, some users experienced anxiety. Users also had difficulty identifying and correcting errors when fields were short and they couldn't see all the data that had been entered.
Users with lower literacy skills were generally more successful in navigating through fields that were displayed vertically rather than horizontally. However, fields that users saw as closely linked worked well when displayed horizontally.
Match field size to the expected number of input characters for fields such as zip code, telephone number, and year.
Make fields large enough to display all of the user input: for example, entire names, email addresses, and addresses.
Display form fields vertically rather than horizontally, except in the case of “naturally grouped” fields such as first name, middle initial, and last name; month, day, and year; and city, state, and zip code.
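The field-sizing recommendation can be sketched as a small lookup that derives a field's visible width and maximum length from its expected input. This is an illustrative sketch; the field list and attribute names (modeled on the HTML `size` and `maxlength` attributes) are assumptions, not from the study.

```python
# Sketch: derive visible size and maxlength for fields whose input
# length is known in advance, so the field's width itself cues the
# expected input. The field list is illustrative.

EXPECTED_LENGTHS = {
    "zip": 5,
    "phone": 10,
    "year": 4,
    "state": 2,
}

def field_attrs(field_name, default_size=30):
    """Attributes matching the expected input length for a field."""
    n = EXPECTED_LENGTHS.get(field_name)
    if n is None:
        # Free-text fields (name, email, address): wide enough to
        # display everything the user types, but no maxlength.
        return {"size": default_size}
    return {"size": n, "maxlength": n}
```

A fixed-length field such as zip code both signals the expected input and prevents over-long entries, while free-text fields stay wide enough that users can see, and therefore correct, everything they typed.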
Observation: Difficulties using radio buttons and dropdown menus
Many users found checkboxes and radio buttons to be too small on most of the sites they used. Often users would click several times in the general vicinity of these fields trying to make their selections. Some users struggled with dropdown menus that required scrolling and were unable to manipulate such menus. (We note also that older users can have reduced mouse facility, making dropdown menus of all kinds challenging, although this was not a focus of the current study.)
Observation: Feeling “tricked” over privacy and consent issues
Most participants in the study had prior experience with feeling “tricked” into signing up for services/newsletters that they did not want. Some participants exhibited distrust and reluctance to enter personal information. On most sites, the privacy options were explained in relatively complicated language, which resulted in inaccurate selections.
For each consent option, explain exactly what the user is choosing. Don't assume that users have read or remember the content of other options or other content on the page. Make each statement of choice stand on its own as much as possible.
Write consent language and options at the 6th grade level or below.
On the confirmation page, repeat the options users selected. Provide an option to “unsubscribe” if they made a mistake.
INTERPRETING FEEDBACK AND ERROR MESSAGES
Observation: Difficulty processing feedback
Users consistently had difficulties processing the results of diagnostic or assessment tools when these results were presented in a popup or at the end of a long page, or when the answers appeared on a separate page at the end of the tool.
When presented with popups, users experienced many problems, including losing the popup behind the larger browser window or closing the popup without reading it.
Results presented at the end of the tool tended to be ignored or skimmed. When asked, users explained that there was just too much text. Eyetracking confirmed that users tended not to read longer chunks of text.
When the answers appeared at the end of a tool, users weren't able to map the results to the questions they had answered. For example, some users didn't realize that they were presented with the correct answers, or they had difficulty mapping the correct answers back to their responses.
Users didn't always read or understand answers and explanations when they were long or used complex words and sentence structure. Eyetracking data showed that users skimmed or skipped over difficult text, but they read the text when it was easy to read.
In a quiz or health assessment, provide feedback for each question as soon as the user answers it, on the same page, without refreshing the page. In prototype testing, this approach was highly successful. From eyetracking, it was clear that users actually read the text provided to them before moving on to the next question. In addition, users felt that the information was more complete and in-depth.
Use familiar words. If medical terms must be used, define them in context.
Focus information on benefits to the user.
Keep feedback short. Break information up into visual chunks if necessary.
Present extensive or complex quantitative information graphically.
Repeat the question, the user's response, and the resulting feedback on a final results page that consolidates all the information provided. The question and answer can be combined into an appropriate heading.
If the quiz is scored, visually emphasize a total score on the final results page.
If a diagnostic tool requires the user to answer multiple questions before feedback can be provided, users with lower literacy skills fare best when the questions fit on a single screen and the results text also fits on a single screen and is clear, concise, and well-chunked.
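The consolidated results page described above can be sketched as follows: repeat each question, the user's answer, and the feedback, with the total score placed first for emphasis. This is an illustrative sketch; the data shapes and output format are hypothetical.

```python
# Sketch: build a final results page for a scored quiz. Each entry
# repeats the question, the user's response, and the feedback shown
# when the question was answered; the total score comes first.

def build_results(responses):
    """responses: list of dicts with keys
    question, answer, feedback, points."""
    total = sum(r["points"] for r in responses)
    lines = [f"Your score: {total}"]
    for r in responses:
        # Combine the question and the user's answer into one
        # heading, per the recommendation above.
        lines.append(f"{r['question']} You answered: {r['answer']}.")
        lines.append(r["feedback"])
    return "\n".join(lines)
```

Because every question, answer, and explanation is repeated in one place, users do not have to map results back to questions answered on earlier screens.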
Observation: Difficulty finding and processing error messages
At-risk users had trouble noticing and processing error messages. Unfortunately, they were also more likely to make errors because they didn't always know the standard formats for entering items such as phone numbers. Eyetracking showed that users sometimes didn't find error messages, sometimes looked at them without reading them, and sometimes read them without understanding them.
Display only the fields that need correction on a new page. This simplifies the task of mapping the error message to the field that needs to be fixed.
Display all fields that need error correction in a vertical sequence.
Place error messages immediately above the relevant field, in red.
Provide clear, simple instructions for correcting user input.
Accept multiple formats for data. For example, accept both “01” and “1” as input in a “months” field, and accept both “5” and “5'” as input in a “height in feet” field.
Don't make text fields (such as name, address, email address, or password) case-sensitive.
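The lenient-input recommendations above can be sketched as small normalization functions that accept the formats users actually type. This is an illustrative sketch under the examples given here; the function names and accepted variants are assumptions.

```python
import re

# Sketch: accept multiple common formats for numeric fields and
# compare text fields case-insensitively, per the error-handling
# guidelines. Field rules are illustrative.

def parse_month(text):
    """Accept '1', '01', ' 12 ', etc.; return 1-12 or None."""
    m = re.fullmatch(r"\s*0?([1-9]|1[0-2])\s*", text)
    return int(m.group(1)) if m else None

def parse_feet(text):
    """Accept '5', "5'", or '5 ft'; return integer feet or None."""
    m = re.fullmatch(r"\s*(\d)\s*(?:'|ft)?\s*", text)
    return int(m.group(1)) if m else None

def emails_match(a, b):
    """Compare email addresses without case sensitivity."""
    return a.strip().lower() == b.strip().lower()
```

Normalizing on the server, rather than rejecting near-miss formats, removes a whole class of error messages that these users would otherwise have to find, read, and act on.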
Previous research has identified two key cognitive and behavioral characteristics of lower literacy Internet users interacting with medical information online (Summers and Summers 2005). First, most such users seem motivated to avoid reading when possible. Second, such users seem to have a relatively narrow focus of attention as they move through all kinds of texts, including forms and interactive tools. The current study confirmed both of these patterns in the interactions of lower literacy Internet users with forms and other interactive online tools.
Literacy research has demonstrated that the readability of a text depends on the familiarity and difficulty of the words used, and on their arrangement into sentences, paragraphs, and other chunks.
While most readability formulas focus primarily on word length and sentence length, our observations were consistent with previous findings that word familiarity has more impact than word length (Doak, Doak, and Root, 1996), although we did not measure this effect quantitatively. We also found success in using sentence and paragraph structures that did not require users to hold the meaning of early clauses or sentences in working memory in order to interpret later text successfully. In other words, we attempted to write text that did not place large demands on users' working memory.
Previous research has also indicated that scanning is hard for lower-literacy users (Summers and Summers 2005). Reading itself takes a great deal of concentration and effort. Users can't grasp the structure of the page or form at a glance by reading headings and subheadings. In the current study, many users with lower-literacy skills attempted to fill out forms while doing a minimum of reading. Users tended to skip all introductory text, and to abbreviate field labels and instructions to a single key word. For example, “First name” was read as “name”; “email address” was read as either “email” or as “address.”
Conversely, previous research has found that some lower-literacy users compensate by reading every word on the page so that they don't “miss” the answer (Summers and Summers 2005). Similar thorough reading has been reported for older users and users with less Web experience (Chadwick-Dias et al. 2003; Theofanos and Redish 2003; Theofanos et al. 2004; Tullis and Chadwick-Dias 2003). This finding was verified in the current study, and informed the recommendation that fields be laid out vertically rather than horizontally, to avoid unintended groupings of unrelated items.
Focusing on a narrow field of view
Previous research has found that lower literacy users and some older users are less able to pay attention to cues about what might be coming up, or to remember where they came from, because processing the text itself takes so much cognitive attention. As a result, they have an especially narrow field of view: as they move through form content, they are not “looking” ahead or behind, so they are less likely to notice content above, below, or to the sides of their focus of attention (Summers and Summers 2005).
The current study confirmed the importance of this finding in several ways related to user progress through forms. It was particularly crucial that field labels make sense out of context, and that pages make sense independently. Based on our findings, it is our recommendation that even adjacent paragraphs should be as independent as possible. If form elements cannot be understood without remembering the content of previous form elements, some low-literacy users are likely to misinterpret the form and may enter inaccurate information.
Recruiting lower-literacy participants for usability testing is made more challenging by the very problem that the research attempts to solve: the Web is currently not very usable for these participants, so it can be difficult to find lower-literacy participants who are already familiar with using the Web. Unfortunately, the scope of the current project precluded focusing on the fundamental difficulties of learning to use a mouse or understanding what a link is. Such issues remain fundamental to understanding the digital divide and have been identified by other researchers (Zarcadoolas, Blanco, Boyer, & Pleasant, 2002). Future research could profitably explore the potential for audio cues to help these users who are most at risk navigate through forms and participate in simple Web interactions.
Additional topics remain to be explored regarding health-related use of the Internet by lower-literacy users. In particular, further research could be conducted on the effects of animation and/or audio at health-related websites for lower-literacy users, including (a) use of animation to enhance comprehension, and (b) use of audio to enhance user navigation.