Utility of the multiple-stimulus without replacement procedure and stability of preferences of older adults with dementia

Author Note

  • This study served as the dissertation for Paige Raetz. We thank Jim Carr, Wayne Fuqua, Cynthia Pietras, and Stephanie Peterson for helpful comments on a previous version of the manuscript. We also thank Stephanie Hood and Jenna Mattingly for their invaluable on-site support of this study.

Abstract

Paired-stimulus preference assessments have been used effectively with individuals with dementia to identify stimuli that increase engagement and minimize negative affect and problem behavior. We evaluated whether a multiple-stimulus without replacement preference assessment could be used with older adults with dementia and whether preferences remained stable over time. Seven of 8 participants completed preference assessments and confirmatory engagement analyses every few weeks for 3 to 5 months; the eighth participant failed to complete any preference assessments. Five of the 7 participants who completed the assessments displayed higher levels of engagement with the highest ranked stimuli than with the lowest ranked stimuli, confirming the hierarchy identified in the preference assessment. For the other 2 participants, the lowest ranked items resulted in higher levels of engagement than the highest ranked items. Four participants exhibited stable patterns of preference over 3 to 5 months, with correlation coefficients at or above r = .5, suggesting that preferences may remain stable for some individuals with dementia.

Older adults represent the fastest growing proportion of the global population, with a predicted U.S. population of 72 million people 65 years and older by 2030 (Hooyman & Kiyak, 2008). With increased age comes an increased risk of developing dementia, a cluster of progressive cognitive, emotional, and physical changes that can be caused by several disease processes. It is estimated that by 2050 approximately 14 million individuals in the United States will be suffering from Alzheimer's disease, the most common cause of dementia (Vierck & Hodges, 2003). Alzheimer's disease and many of the other dementias are characterized by global deterioration of intellectual abilities (e.g., language, memory, judgment) and functional daily living skills (Bjorklund & Bee, 2008). Progressive declines in engagement in meaningful recreational activities also occur and are associated with other negative outcomes such as depression, increasing physical infirmity, increased risk of falls, and a reduced quality of life (LeBlanc, Raetz, & Feliciano, 2011). Furthermore, it has been reported that lack of engagement can lead to an increase in behavior problems (Camp, Orsulic-Jeras, Lee, & Judge, 2005). Consequently, one of the main therapeutic goals for this population is increased engagement.

Several researchers have examined strategies for increasing engagement in nursing home environments by providing commonly available recreational items. For example, Engleman, Altus, and Mathews (1999) trained staff in a nursing home to use a check-in procedure (i.e., every 15 min, they praised resident engagement and provided a choice of at least two activities for residents who were not engaged) that resulted in increased resident engagement. Camp et al. (2005) developed a Montessori-based program that involved older adults with dementia mentoring young children in sensory-based arts and crafts activities. Observations on four categories of engagement (i.e., constructive engagement, active engagement, passive engagement, and nonengagement) indicated that Montessori-based programming resulted in increased constructive engagement as well as reductions in passive engagement and nonengagement. In both of these programs, the items and activities were readily available and reasonable to offer frail elderly adults but were not individually selected.

Another approach to increasing engagement involves providing items identified via individual preference assessments, a common practice with individuals with intellectual disabilities (Hagopian, Long, & Rush, 2004; Vollmer, Marcus, & LeBlanc, 1994). Although direct-observation preference assessments (e.g., single stimulus, paired stimulus, multiple stimulus without replacement) are typically used with individuals with intellectual disabilities, preference assessments with individuals with dementia have typically employed questionnaires for the individual or a caregiver without direct confirmation of the effects of the items endorsed as appealing (e.g., Pleasant Events Schedule–Alzheimer's Disease [PES-AD], Cohen-Mansfield, Thein, Dakheel-Ali, & Marx, 2010; Teri & Logsdon, 1991). This approach is problematic because LeBlanc, Raetz, Baker, Stobel, and Feeney (2008) found that the traditional verbal report PES-AD frequently resulted in false negatives (i.e., said “no” but items subsequently produced engagement) for those individuals with moderate to mild dementia, suggesting that observation-based measures of preference may be necessary to identify the greatest number of items that will produce increased engagement.

Recently, a few studies have examined the use of direct-observation preference assessments with individuals with dementia (Staal, Pinkney, & Roane, 2003). Paired-stimulus (PS) preference assessments have been used most frequently (Fisher et al., 1992). LeBlanc, Cherup, Feliciano, and Sidener (2006) compared four different PS presentation formats (i.e., verbal, textual, pictorial, tangible) with individuals with dementia. Each item identified as highly preferred on at least one assessment was provided noncontingently (i.e., one item per 15-min session), and the percentage of intervals with engagement was correlated with the selection percentage in the PS assessment. For each individual, a single modality (verbal for three participants, tangible for one) was highly correlated with engagement. Feliciano, Steers, Elite-Marcandonatou, McLane, and Areán (2009) also used PS assessments with nine older adults with dementia to identify items to use in behavior management protocols for the treatment of depression and agitation. Preferred items were incorporated into various behavioral interventions, which produced significant reductions in agitation. Together, these findings suggest that the PS preference assessment can be used successfully with individuals with dementia.

Because PS assessments can be time consuming, briefer multiple-stimulus without replacement (MSWO; DeLeon & Iwata, 1996; Higbee, Carr, & Harrison, 2000) assessments might be useful for frail individuals with dementia who can fatigue easily. Items are presented in an array for selection, and brief access to the selected item is provided while the remaining items are rearranged and re-presented. However, the MSWO presentation requires scanning abilities that might prove problematic for older adults with sensory or cognitive impairments. These same issues of frailty and progressive cognitive decline also make it important to examine stability of preferences for people with dementia. Studies with individuals with intellectual disabilities have produced mixed results, with some indicating relatively stable preference (Carr, Nicolson, & Higbee, 2000; Hanley, Iwata, & Roscoe, 2006) and others finding preference to be unstable (Mason, McGee, Farmer-Dougan, & Risley, 1989; Zhou, Iwata, Goff, & Shore, 2001). Zhou et al. (2001) examined stability with 22 adults with profound mental retardation over varying durations (average 16-month follow-up) and found stable preferences for only 10 of the 22 individuals. Hanley et al. (2006) evaluated stability of preferences by conducting a series of PS assessments at regular intervals across a 2- to 6-month period. Pearson correlation coefficients indicated that preference was highly unstable for two participants, slightly unstable for another, and stable for the remaining seven participants.

In summary, recent studies have examined the usefulness of PS preference assessments with older adults with dementia (Feliciano et al., 2009; LeBlanc et al., 2006). The MSWO procedure could be useful because it can be completed quickly; however, it requires a scanning repertoire. We examined whether the MSWO procedure could be used with individuals with dementia to identify items that result in increased levels of engagement. We evaluated the highest, medium, and lowest ranked items from the MSWO in engagement analyses. We also examined the stability of preferences across 3 to 5 months to determine whether the findings would be comparable to those obtained with other populations. Finally, we calculated correlation coefficients to determine how well the results of the first array of the MSWO compared to the mean results of the three arrays.

METHOD

Participants, Settings, and Materials

Eight adults who had been diagnosed with dementia (76 to 90 years old) served as participants (see Table 1 for demographics for all participants). Of the eight participants, seven were female, one was male, and all were under the care of either family or staff around the clock. Seven participants were Caucasian, and one was African American. Participants were recruited from an adult day-care center (n = 3), an assisted living facility (n = 1), general nursing facilities (n = 1), and dementia special care units of nursing facilities (n = 3) in the midwestern United States. Prior to participation, informed consent was obtained from all caregivers and assent was obtained from each participant. The exclusionary criteria included a prior history of an intellectual or developmental disability or a significant visual impairment that would preclude scanning an array of stimuli. All participants had a prior diagnosis of dementia by their physician or other professional that was confirmed with administration of the Mini Mental Status Exam (MMSE; Folstein, Folstein, & McHugh, 1975). The average MMSE score was 9, with a range from 1 to 18 (see Table 1), indicating that the participants were severely to moderately impaired. During the study, the MMSE was repeated monthly to identify any changes in functioning; however, no participant experienced more than a 3-point decline, and no change in score resulted in a recategorization of severity.

Table 1. Participant Demographics

Participant   Age   Gender   Ethnicity          MMSE Score   Severity
Abigail       88    Female   African American   11           Moderate
Bill          76    Male     Caucasian          8            Severe
Cora          78    Female   Caucasian          10           Moderate
Daisy         87    Female   Caucasian          5            Severe
Ella          77    Female   Caucasian          3            Severe
Francis       84    Female   Caucasian          17           Moderate
Gertrude      90    Female   Caucasian          18           Moderate
Henrietta     78    Female   Caucasian          1            Severe

All sessions were conducted in a session room or bedroom inside the day center or the facility in which the participant resided. Each room contained a table, multiple chairs, and all data-collection materials and leisure items needed to complete the session. Participants completed two sessions per day on assessment days, which occurred approximately once every 2 weeks. During the first session of the day, the MSWO preference assessment was conducted; it lasted approximately 10 to 15 min. During the second session of the day, the engagement analysis was conducted with the items identified in the preference assessment. The engagement analysis consisted of a series of 5-min sessions, with the total analysis lasting no longer than 30 min, including breaks.

Procedure

Initial assessment

Each participant and their caregiver completed a version of the PES-AD (Teri & Logsdon, 1991) interview that had been shortened to include only items and events that could be provided immediately in the engagement analysis (LeBlanc et al., 2008). Most caregivers were the adult children of the participants, but a few were spouses. The 17-item version of the interview (participant) or survey (caregiver) used a yes–no response format, asking for each item whether the individual would find the activity pleasurable (see Supporting Information). Based on the results of the two administrations of the PES-AD, seven items endorsed as preferred by both caregiver and participant and one item endorsed as nonpreferred by both were selected for the preference assessments. The nonpreferred item was included to determine whether nonendorsed items would result in lower selection rates, given the risk of false negatives reported by LeBlanc et al. (2008). If more than seven items were endorsed by both informants, items were selected based on ease of presentation. If the participant and caregiver did not agree on a nonendorsed item, two nonendorsed items were included, one from each informant.
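
For readers who want to operationalize this selection rule, a minimal sketch follows. The function name and data structures are hypothetical, and random sampling stands in for the ease-of-presentation judgment described above; it is an illustration rather than the study's actual procedure.

```python
import random


def select_assessment_items(all_items, participant_yes, caregiver_yes,
                            rng=random.Random(0)):
    """Choose MSWO items from PES-AD responses: up to 7 items endorsed by both
    informants plus 1 item endorsed by neither (2 if the informants disagreed)."""
    endorsed = [i for i in all_items if i in participant_yes and i in caregiver_yes]
    nonendorsed = [i for i in all_items
                   if i not in participant_yes and i not in caregiver_yes]

    # The study chose among surplus endorsed items by ease of presentation;
    # random sampling is only a placeholder for that judgment here.
    preferred = endorsed if len(endorsed) <= 7 else rng.sample(endorsed, 7)

    if nonendorsed:
        nonpreferred = [rng.choice(nonendorsed)]
    else:
        # No joint agreement on a nonendorsed item: take one from each informant.
        nonpreferred = []
        participant_no = [i for i in all_items
                          if i not in participant_yes and i in caregiver_yes]
        caregiver_no = [i for i in all_items
                        if i in participant_yes and i not in caregiver_yes]
        if participant_no:
            nonpreferred.append(rng.choice(participant_no))
        if caregiver_no:
            nonpreferred.append(rng.choice(caregiver_no))

    return preferred, nonpreferred
```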

Preference assessment

An MSWO preference assessment was conducted using the eight items identified from the PES-AD. The eight assessed items for a given individual remained consistent across assessments throughout the study. These specific items were not available in the participant's care setting outside the study sessions, although other leisure activities were available. At the beginning of each MSWO session, the experimenter provided each item for 1 min in random order. Next, the eight items were positioned in a semicircle with equidistant spacing on the table in front of the participant. The participant was instructed to "pick one," and any attempt to pick more than one item was blocked with a repeat of the instruction. After an item had been selected, the participant was allowed up to 1 min to engage with the item while the remaining items were repositioned in the array. After 1 min, the prior selection was removed, and the remaining items were presented for the next selection. This process continued until all items had been selected or until selection ceased (i.e., no selection was made within 45 s of the presentation). Similar to Carr et al. (2000), three complete arrays were presented during each session. Between array presentations, the researcher offered a break of no longer than 5 min during which the participant could leave the room. If the participant declined a break, the researcher proceeded with the next array. If the participant fell asleep and was unable to remain awake after rousing, the session was stopped.
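
As a brief illustration of how selection orders become the ranks reported in the Results, consider the sketch below. The function name is hypothetical, and the convention of assigning all unselected items the next available rank when selection ceases is an assumption that the procedure leaves unspecified.

```python
def mswo_ranks(selection_orders, items):
    """Convert MSWO selection orders into ranks and average them across arrays.

    selection_orders: one list per array, giving items in the order they were
        selected (first selected = rank 1); unselected items are omitted.
    Returns a dict mapping each item to its mean rank across the arrays.
    """
    per_array = []
    for order in selection_orders:
        ranks = {item: position for position, item in enumerate(order, start=1)}
        leftover_rank = len(order) + 1  # assumed shared rank for unselected items
        for item in items:
            ranks.setdefault(item, leftover_rank)
        per_array.append(ranks)
    return {item: sum(r[item] for r in per_array) / len(per_array) for item in items}


# Example with three hypothetical arrays of a three-item assessment:
# mswo_ranks([["book", "cards", "dice"], ["book", "dice", "cards"],
#             ["cards", "book", "dice"]], ["book", "cards", "dice"])
# -> book ~1.3, cards 2.0, dice ~2.7
```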

Engagement analysis

After each preference assessment, an engagement analysis was completed with items that were ranked high (i.e., 1), medium (i.e., 4), and low (i.e., 8). If two or more items tied for a rank, the item was selected from those items via random draw. Three 5-min sessions were conducted with one item included in each condition and the order determined by random draw. At the beginning of each condition, the experimenter modeled engagement with the item, handed the item to the participant, and said “You may — with this as long as you like; let me know if you want to stop.” During the 5-min observation, social engagement was kept to a minimum to allow evaluation of the level of engagement under conditions that were likely to occur during ongoing treatment (i.e., staff provides activities but may be occupied with another consumer or paperwork). If the participant asked to stop the activity, the experimenter removed the item and escorted the participant from the room for a break (see below). When the participant returned, the experimenter asked him or her if he or she would like to engage in another activity. If the participant agreed to continue, the researcher began the next condition with one of the remaining items. If the participant requested to stop altogether, the researcher ended the session for the day. If the participant stopped engaging with the item but did not ask to end the activity, data collection continued until the session time elapsed. Between each condition, the participant was offered a break of up to 5 min during which they could leave the room. If the participant did not take a break, the next condition was initiated immediately.
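
The sketch below shows one way the three engagement-analysis items might be drawn from an assessment's hierarchy; the function name is hypothetical, with ties broken by random draw and the session order randomized, as described above.

```python
import random


def pick_engagement_items(mean_ranks, rng=random.Random(0)):
    """Return the highest (1st), medium (4th), and lowest (8th) items in the
    preference hierarchy plus a random presentation order for the three
    5-min engagement sessions. Ties are broken by random draw."""
    items = list(mean_ranks)
    rng.shuffle(items)  # shuffle first so the stable sort breaks ties randomly
    hierarchy = sorted(items, key=lambda item: mean_ranks[item])
    chosen = {"high": hierarchy[0], "medium": hierarchy[3], "low": hierarchy[7]}
    order = list(chosen.values())
    rng.shuffle(order)
    return chosen, order
```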

Response Measurement and Interobserver Agreement

For the MSWO procedure, a selection response was recorded when the participant made physical contact with one of the items in the array. If the participant made contact with two items simultaneously, no response was recorded, and the response to the re-presentation of the array was recorded instead. The experimenter served as the primary observer, and a second, independent, trained data collector collected data during 61% of all assessments. An agreement was defined as both observers recording the same rank for a given selection. Interobserver agreement was calculated by dividing agreements by the total number of presentations and converting the result to a percentage. The mean agreement across all MSWO sessions was 96% (range, 79% to 100%). For engagement sessions, engagement was defined as any physical contact with the item or orientation toward the item, depending on the typical use of the item. For example, appropriate use of a book included holding the book or looking down at the book on a table even if no physical contact occurred; for activities such as watching the news, engagement included looking in the direction of the television screen but did not require physical contact. Observers collected data using 10-s momentary time sampling, with engagement scored at the end of each 10-s interval. A second independent observer scored 59% of all engagement sessions. An agreement was defined as an interval that both observers scored identically (i.e., engaged or nonengaged). Agreement was calculated by dividing the number of agreements by the number of agreements plus disagreements and converting the result to a percentage. During sessions in which the participant requested termination, all remaining intervals were scored as nonengaged to allow comparison across conditions. The average agreement across all engagement sessions for all participants was 98% (range, 89% to 100%).
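
The two agreement calculations described above reduce to simple proportions; a sketch with hypothetical function names:

```python
def selection_ioa(ranks_observer1, ranks_observer2):
    """MSWO interobserver agreement: an agreement is a presentation for which
    both observers recorded the same rank. Each argument is a list of the
    recorded ranks, one entry per presentation."""
    agreements = sum(a == b for a, b in zip(ranks_observer1, ranks_observer2))
    return 100.0 * agreements / len(ranks_observer1)


def engagement_ioa(intervals_observer1, intervals_observer2):
    """Engagement interobserver agreement: an agreement is a 10-s interval that
    both observers scored identically (engaged or nonengaged); agreements are
    divided by agreements plus disagreements (i.e., all intervals) and
    converted to a percentage."""
    agreements = sum(a == b for a, b in zip(intervals_observer1, intervals_observer2))
    return 100.0 * agreements / len(intervals_observer1)
```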

Procedural integrity data were scored on checklists for the administration of the MMSE (26 steps), the MSWO procedure (36 to 40 steps), and the engagement analyses (10 steps). An observer scored 13% of all sessions, indicating whether each step was implemented correctly. Procedural integrity was calculated by dividing the number of steps implemented correctly (across all participants) by the total number of steps and converting the result to a percentage. Procedural integrity was 100% for all assessment procedures and engagement sessions.

RESULTS

Utility of the MSWO Procedure

Seven participants successfully completed multiple MSWO assessments, making multiple selections and remaining alert throughout the session. The participant with the lowest MMSE score, Henrietta, did not complete a single MSWO array because of excessive sleepiness during the assessment.

Figure 1 depicts the results for the first three participants, with the bars for a given item clustered together and each bar representing that item's mean rank (across the three arrays) in one of the consecutive MSWO assessments. Abigail (Figure 1, top) experienced 10 MSWO assessments over a 5-month period (i.e., 10 bars in each item cluster). The first three assessments resulted in the book as her highest ranked item; in later assessments, the book's rank worsened and became more variable (i.e., ranks of 3 to 7), whereas dice (the nonendorsed item) consistently remained ranked 8 throughout all assessments. Bill (Figure 1, middle) completed nine assessments over a 4.5-month period, and the book consistently remained his highest ranked item (i.e., ranked 1 in the majority of assessments and never lower than 4). The news remained the lowest ranked item in the majority of assessments, and the magazine (the nonendorsed item) was never ranked lower than 5 across all sessions. Cora (Figure 1, bottom) experienced eight MSWO assessments over a 3.5-month period. The book remained her highest or second-ranked item in almost every assessment; when the book was not her highest ranked item, the magazine usually ranked 1. The puzzle generally remained among the lowest ranked items, ranked 8 in the majority of assessments. The cards were her nonendorsed item and were ranked between 5 and 8 in every assessment; however, Cora selected the puzzle after the cards in almost every assessment.

Figure 1.

Mean ranks for the three arrays of each preference assessment for Abigail, Bill, and Cora. The item selected earliest in the assessment was ranked 1, and the item selected last was ranked 8.

Figure 2 depicts the MSWO assessment results for Daisy, Ella, Francis, and Gertrude. Daisy (top) completed a total of seven MSWO assessments across a span of 4 months, with a highly variable pattern of preference. Although the magazine was highest ranked in the first, sixth, and seventh sessions, it was ranked 3 or lower for the other four sessions (range, 3 to 8). Similarly, word puzzle was lowest ranked in the first, fourth, and sixth sessions, but was ranked 5 or higher for the other four sessions (range, 2 to 5). The nonendorsed item, dice, was rarely the last item selected and was selected as high as 3 in one assessment. Ella completed a total of eight MSWO assessments across a span of 4 months. Her top two item selections (coloring and puzzle) were fairly stable. However, there was increasing variability in the lower rankings. For example, watching TV was lowest ranked for the first and sixth sessions but was the highest ranked item for the second session. The nonendorsed item, dice, was usually one of the last items selected. Francis completed a total of eight MSWO assessments across a span of 4.5 months. The word puzzle was consistently highest ranked. Similarly, painting was consistently the lowest ranked, and the nonendorsed item, reading novels, was consistently selected in the first half of the array. Gertrude (bottom) completed a total of six MSWO assessments across a span of 3 months. Her results were more variable than those of Francis, but the magazine (M = 2; range, 1 to 5) and watching sports on television (M = 2; range, 1 to 5) consistently remained highest ranked, and the cards (the nonendorsed item) were consistently lowest ranked across all sessions (M = 7; range, 7 to 8). These results suggest that the MSWO accurately identified both preferred and nonpreferred stimuli.

Figure 2.

Mean ranks for preference assessments for Daisy, Ella, Francis, and Gertrude. The item selected earliest in the assessment was ranked 1, and the item selected last was ranked 8.

Figure 3 depicts the results of the engagement analyses for Abigail (top), Bill (middle), and Cora (bottom). The highest ranked (1), medium ranked (4), and lowest ranked (8) items from the corresponding assessment were used in each analysis, but the specific items changed across analyses as their rankings in the MSWO assessments changed. Abigail's highest ranked item always resulted in very high levels of sustained engagement (M = 98.3%; range, 93% to 100%), and her lowest ranked item resulted in much lower levels of engagement (M = 19.3%; range, 0% to 60%). Her medium ranked item resulted in high levels of engagement for some analyses and lower levels of engagement during the other analyses (M = 87%; range, 43% to 100%). These results indicate that the MSWO identified preferred items with a clear gradient of preference, confirming the predictive utility of the MSWO assessment. In addition, the nonendorsed item (dice) was repeatedly assessed as the lowest ranked item, and engagement remained low as predicted by the MSWO and the verbal report (i.e., true negative on the report measure).

Figure 3.

Percentage of intervals with engagement for the highest ranked, medium ranked, and lowest ranked items from the prior preference assessment for Abigail, Bill, and Cora.

For Bill, the results were variable across analyses. In one analysis (Session 7), clear parametric effects were obtained such that the highest ranked item produced higher engagement than the medium ranked item, which produced higher engagement than the lowest ranked item. In all other analyses, the relation between MSWO rank and subsequent levels of engagement was less consistent. In all but one analysis, either the highest ranked item (M = 63%; range, 23% to 93%) or the medium ranked item (M = 30.4%; range, 0% to 97%) resulted in higher levels of engagement than the lowest ranked item (M = 25%; range, 6% to 50%) but usually not both. In the final analysis, all items produced only modest levels of engagement, with the lowest ranked item producing higher levels of engagement than either of the other two items. These results do not provide strong support for the relation between MSWO rank and level of engagement but do suggest that the MSWO was somewhat effective in identifying items that could be provided to produce engagement.

Finally, for Cora (Figure 3, bottom) all items resulted in moderate to high levels of engagement across all analyses. Overall, the highest ranked item (M = 92%; range, 66% to 100%) and the medium ranked item (M = 90%; range, 43% to 100%) resulted in slightly higher levels of engagement than the lowest ranked items (M = 80%; range, 33% to 100%). In addition, for Session 7, the lowest ranked item (the nonendorsed item) resulted in engagement lower than the other two items assessed, but still relatively high engagement (i.e., 60% of intervals) for a lowest ranked and nonendorsed item. These findings suggest that providing virtually any item resulted in engagement for a substantial proportion of the 5-min session, with a slight advantage for those items with the middle to highest ranks in the MSWO.

Figure 4 depicts the results of the engagement analyses for the remaining participants, Daisy, Ella, Gertrude, and Francis. Data for Daisy (top) show a variable pattern during the engagement analyses for the highest ranked (M = 62%; range, 0% to 100%), medium ranked (M = 44%; range, 3% to 100%), and lowest ranked (M = 35%; range, 0% to 100%) items. In two analyses (Sessions 4 and 6), clear parametric effects were obtained such that the highest ranked item produced higher engagement than the medium ranked item, which in turn produced higher engagement than the lowest ranked item. However, the highest ranked item resulted in the lowest engagement during the second analysis and, during two analyses (Sessions 1 and 3), the medium ranked item resulted in the lowest level of engagement. The lowest ranked item resulted in the highest or was tied for the highest engagement during two analyses (Sessions 3 and 7). The engagement analyses for Ella illustrate that all items resulted in high levels of engagement, with mean engagement for highest, medium, and lowest ranked items being 96%, 98%, and 87%, respectively. These results suggest that the differentiated findings of the MSWO were not confirmed and that any of the items used in the preference assessment could be used therapeutically in programming.

Figure 4.

Percentage of intervals scored with engagement for the highest ranked, medium ranked, and lowest ranked items from the prior preference assessment for Daisy, Ella, Francis, and Gertrude.

Francis's data suggest that, for the first four sessions, all items were highly engaging. For Analyses 5, 7, and 8, the highest and medium ranked items resulted in higher levels of engagement than the lowest ranked item. For Analysis 6, however, the lowest ranked item resulted in higher engagement than the highest and medium ranked items. Excluding Analysis 6, these data collectively suggest that perhaps any of the items used in the preference assessment could be used therapeutically in programming to increase engagement. Gertrude's engagement analysis results (bottom panel) illustrate that in Analyses 2 and 3 the highest and medium ranked items produced higher levels of engagement than the lowest ranked item, whereas in the remaining four analyses the lowest ranked item generated equal or higher levels of engagement than the highest ranked item. These data suggest that the MSWO rankings were confirmed in only 33% of Gertrude's analyses.

The criterion for describing the MSWO procedure as predictive of subsequent engagement was that, in at least 75% of a participant's engagement analyses, the highest ranked item resulted in levels of engagement equal to or higher than those for the lowest ranked item. This criterion was selected to be moderately stringent for evaluating predictive effects, although other criteria could also be applied. Based on this criterion, the MSWO procedure was effective in predicting subsequent engagement for five of the seven participants (Abigail, Bill, Cora, Ella, and Francis).
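
Stated as a rule over a participant's engagement analyses (a sketch; the function name and data layout are assumptions):

```python
def mswo_predictive(analyses, criterion=0.75):
    """Return True if the MSWO met the predictive criterion: in at least 75% of
    a participant's engagement analyses, the highest ranked item produced
    engagement equal to or higher than the lowest ranked item.

    analyses: list of dicts with keys "high" and "low" giving the percentage of
        intervals with engagement for the highest and lowest ranked items.
    """
    confirmed = sum(analysis["high"] >= analysis["low"] for analysis in analyses)
    return confirmed / len(analyses) >= criterion
```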

Statistical Analysis of Stability of Preference

The stability of preferences was evaluated for the five participants for whom the engagement analyses consistently confirmed the findings of the prior MSWO assessments. These five participants completed a mean of 8.4 MSWO assessments (range, 8 to 10) over a period of 3.5 to 5 months. Similar to Hanley et al. (2006), we used a Spearman rank order correlation coefficient to evaluate the stability of the results of the MSWO preference assessments statistically. This statistical analysis correlates the rank of items in each assessment with the rank in every other assessment. A critical correlation coefficient value (r) of .5 was used as the criterion for stability; correlation coefficients of r ≥ .5 represented stable preference patterns, and r < .5 indicated unstable preference patterns. Results from the correlations are presented in graphical form to assist in inspection of stability. Using the same graphing convention as Hanley et al., each graph has a criterion line at the value of r = .5, and each data point represents the correlation of the results of that particular assessment with every other assessment conducted over the entire span. A participant's overall pattern of preference over time was considered stable if more than half of all data points fell at or above the critical value.
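
The stability analysis can be sketched as follows using SciPy's Spearman correlation. The function names and data layout are assumptions, but the logic mirrors the description above: each assessment is correlated with every other assessment, and the overall pattern is judged against the r = .5 criterion.

```python
from itertools import combinations

from scipy.stats import spearmanr


def stability_correlations(assessments):
    """Correlate each MSWO assessment with every other assessment.

    assessments: list of dicts mapping item -> rank for each assessment
        (the same eight items throughout, as in this study).
    Returns one list per assessment holding its correlations with the others.
    """
    items = sorted(assessments[0])
    vectors = [[a[item] for item in items] for a in assessments]
    per_assessment = [[] for _ in assessments]
    for i, j in combinations(range(len(vectors)), 2):
        r, _ = spearmanr(vectors[i], vectors[j])
        per_assessment[i].append(r)
        per_assessment[j].append(r)
    return per_assessment


def preference_stable(per_assessment, critical_r=0.5):
    """Stable if more than half of all plotted correlations are at or above r = .5."""
    all_rs = [r for rs in per_assessment for r in rs]
    return sum(r >= critical_r for r in all_rs) > len(all_rs) / 2
```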

For four of the five participants, preference remained relatively stable across assessments (i.e., r ≥ .5 for more than half of the correlations). For Abigail (see Figure 5, top left), exactly half of all correlations exceeded the critical value of r = .5 and the other half fell below it. Thus, Abigail's data did not meet the criterion for stable preference. In contrast, the correlation coefficients for Bill, Cora, Ella, and Francis (top right, middle left, middle right, and bottom left panels, respectively) all met the criterion for stability, with the majority of the correlations at r = .5 or greater.

Figure 5.

Spearman rank order correlation coefficients between that preference assessment and every other preference assessment for Abigail, Bill, Cora, Ella, and Francis. Points above the line indicate strong positive correlations and stable preference.

Comparison of Three-Array and One-Array MSWO Rankings

MSWO assessment results for all seven participants were examined to calculate the agreement between the results of the first array presentation and the mean results of the three-array presentation (as used by Carr et al., 2000) for each MSWO assessment. Spearman's rank order correlation coefficients were computed to compare the ranked results from the initial array to the mean rank of all arrays in each completed MSWO. The mean correlations for all seven participants were between r = .62 and r = .88 (see Supporting Information). In addition, the majority of all correlations were above r = .5. For all seven participants, correlations were statistically significant at p < .01. These results indicate that the use of a single-array presentation can be as effective in identifying preference as a three-array presentation for individuals diagnosed with dementia.
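
A minimal sketch of this comparison (hypothetical function name; ranks within each array follow the convention described for the MSWO rankings above):

```python
from scipy.stats import spearmanr


def single_vs_mean_array_correlation(array_ranks):
    """Spearman correlation between the first-array ranks and the mean ranks
    across all arrays of one MSWO assessment.

    array_ranks: list of dicts (one per array) mapping item -> rank in that array.
    Returns the correlation coefficient and its p value.
    """
    items = sorted(array_ranks[0])
    first_array = [array_ranks[0][item] for item in items]
    mean_ranks = [sum(a[item] for a in array_ranks) / len(array_ranks)
                  for item in items]
    r, p = spearmanr(first_array, mean_ranks)
    return r, p
```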

DISCUSSION

A growing population of older adults with dementia is at risk for reductions in pleasurable activities and engagement (Alzheimer's Association, 2012; LeBlanc et al., 2011). In prior studies, PS preference assessments were used to identify items that increased engagement and positive affect (Feliciano et al., 2009; LeBlanc et al., 2006). This study evaluated a brief MSWO preference assessment procedure and examined the stability of identified preferences over several months. The MSWO assessment proved to be useful for the majority of our participants with dementia, with only one participant (Henrietta) unable to complete the entire assessment (i.e., three arrays). The procedure usually took less than 20 min, and the results predicted subsequent engagement for the majority of participants. However, hierarchical preferences were not always confirmed by subsequent engagement analyses. Demographic variables and cognitive status measures were not correlated with the predictive value of the MSWO, indicating that the procedure might be useful for any person with dementia but would need to be confirmed with an engagement analysis.

A few factors may account for the lack of corresponding parametric effects in the engagement analyses. In the engagement assessment, items were provided noncontingently rather than contingently (i.e., differential reinforcement of a specific response with contingent access to the stimulus). Noncontingent access was used because increased independent engagement in leisure activities is often the primary goal of programming for individuals with dementia (Engleman et al., 1999; LeBlanc et al., 2006). The noncontingent presentation may have contributed to the diminished gradient of preference (i.e., higher than expected engagement for medium to lowest ranked items) compared to a procedure in which a higher effort instrumental response was required to access an item. The use of a single-operant confirmatory analysis also may have contributed to the relative lack of parametric effects. Single-operant evaluations often result in a greater number of items with demonstrated reinforcing effects than concurrent-operants evaluations (Roscoe, Iwata, & Kahng, 1999). A concurrent-operants or free-operant evaluation may have resulted in clearer gradients; however, the single-operant procedure was useful for identifying as many preferred activities as possible. These two procedures could be directly compared in future preference assessment studies.

LeBlanc et al. (2008) found that participant and caregiver verbal reports about preference can result in false negatives, leading to underidentification of items that could be used therapeutically. The current findings are similar in that the item indicated as nonpreferred on the PES-AD resulted in a low rank (i.e., 7 or 8) on at least one of the MSWO assessments for only five of the seven participants (Abigail, Cora, Daisy, Ella, and Gertrude). For the other two, the nonendorsed item was selected before items that were endorsed on the verbal report, and many of the nonendorsed items resulted in moderate to high levels of engagement during the engagement analysis. Thus, even when individuals with dementia answer yes–no questions, verbal report may be suspect. However, both PS and MSWO assessments predict engagement and reinforcement effects (LeBlanc et al., 2008; Virués-Ortega, Iwata, Nogales-González, & Frades, 2012). Additional research on the MSWO procedure is warranted, and a different control condition (i.e., no item or a randomly generated activity, rather than a nonendorsed item) may be necessary to provide a realistic comparison to standard conditions in long-term care settings.

Previous studies found mixed results when the stability of preferences for individuals with intellectual disabilities was assessed (e.g., Mason et al., 1989; Zhou et al., 2001). Most participants in the current study exhibited stable preferences across repeated assessments (i.e., most correlations for Bill, Cora, Ella, and Francis were statistically significant and above .5). It may still be valuable to assess preference periodically, because preference can shift over time and higher preference items produced longer durations of engagement in a prior intervention study (LeBlanc et al., 2006). Future research might investigate what factors (e.g., changes in cognitive status) affect the stability of preference and examine whether adding an open-ended question to the PES-AD helps to identify other items for engagement.

One other finding is worthy of note. A single-array presentation of the MSWO procedure correlated reasonably well with the results from all three arrays, supporting the findings of Carr et al. (2000). This finding is particularly important for this population, because fatigue and confusion may occur when arrays are repeated. Several participants found the three-array procedure lengthy, and some were visibly agitated when asked to begin the second and third arrays (e.g., “Why are you asking me again? I thought I just did this. I'm confused.”).

There are some noteworthy limitations of this study. First, only a relatively small number of participants completed all portions of the study, and no individuals experienced significant cognitive changes during the study. Future researchers might assess a larger number of participants over a longer time to evaluate whether changes in cognitive status affect preferences and engagement. Second, we did not evaluate all items from the preference assessment in our single-item engagement analyses, and we included only one nonendorsed item. Examining only the extreme and mid-point rankings allowed efficiency in conducting the study but did not always allow evaluation of items that were reported to be nonendorsed. Future studies might examine each item in the preference assessment in an engagement analysis or reinforcer assessment for validation of preference. In addition, future studies should investigate whether sustained increases in engagement occur when items are presented noncontingently throughout the day, as has been demonstrated with the PS procedure (LeBlanc et al., 2006).

Behavior analysts have an opportunity to export the technologies previously refined with individuals with autism and intellectual disabilities to a growing group of older adults with dementia who could benefit from services (LeBlanc, Heinicke, & Baker, 2012). The MSWO preference assessment was useful in identifying items to increase engagement for the majority of our participants with dementia. The shorter administration time of the MSWO may be particularly well suited to care environments for older adults that are frequently understaffed. In addition to the social good that can be accomplished by increasing the quality of life of older adults, researchers should continue to examine the degree to which prior findings with other populations (i.e., stability of preferences) are consistent with this population.

  • Action Editor, David Wilder
