Keywords:

  • stimulus preference assessment;
  • magnitude;
  • preschool children

Abstract

We evaluated the extent to which access duration during stimulus preference assessments affects preschool-age children's preferences for leisure items. Results demonstrated that rankings for highly preferred items remained similar across both short- and long-access durations; however, overall preference hierarchies remained more similar across administrations of long-access-duration assessments than short-access-duration assessments.

Stimulus preference assessments can identify stimuli that can be used subsequently as reinforcers (Hagopian, Long, & Rush, 2004). One variable that may affect preference for and reinforcing efficacy of a stimulus is the duration of access for which the stimulus is provided during the evaluation (Hoch, McComas, Johnson, Faranda, & Guenther, 2002). In the preference assessment literature, different access durations have been provided (e.g., 5 s: Fisher et al., 1992; 30 s: DeLeon & Iwata, 1996; 5 min: Roane, Vollmer, Ringdahl, & Marcus, 1998). Steinhilber and Johnson (2007) compared the outcomes of two multiple-stimulus-without-replacement (MSWO; DeLeon & Iwata, 1996) preference assessments with different magnitudes (15-s vs. 15-min access). Both participants' results showed that some items were more highly preferred when they were available for 15 s, whereas other items were more preferred when they were available for 15 min. In addition, reinforcer assessments were conducted to determine levels of responding associated with high-preference (HP) stimuli when provided for preferred rather than nonpreferred durations. Results showed that rankings from the short-access and long-access preference assessments were predictive of response allocation during the reinforcer assessments. The results suggested that it may be important to match access durations during preference assessments to those access durations for which the reinforcer will be provided. However, Steinhilber and Johnson conducted their evaluation with only two participants with autism spectrum disorder, so the generalizability of these results may be limited.

Steinhilber and Johnson (2007) evaluated differences in preferences across access durations but did not evaluate the stability of rankings across administrations during one particular access duration. It is possible that stability of item rankings varied across short- and long-access assessments. It may be important to determine how item rankings change across assessment administrations within a particular access duration to create efficient and accurate assessment procedures.

The purpose of the current study was to replicate the work of Steinhilber and Johnson (2007) by conducting preference assessments with 30-s and 5-min access durations with typically developing children (a) to compare item rankings across access durations and (b) to assess the correspondence of item rankings across administrations within a particular access duration. We chose 30-s and 5-min access durations because previous researchers have evaluated the former (e.g., DeLeon & Iwata, 1996), and we hypothesized that the latter is a typical period of time for which stimuli might be delivered as reinforcers for young children.

METHOD

Participants and Setting

Eleven typically developing children, ranging in age from 3 to 5 years, participated. Preference assessments were conducted in a quiet room or area of the classroom, and a maximum of two assessments (separated by at least 4 hr) were conducted per day. Assessment length ranged from 5 to 40 min.

Data Collection and Measurement

Trained observers recorded behavior on data sheets. The dependent variable was item selection, defined as placing a hand on one of the presented items. Item rankings were determined from selection percentages, calculated for each item by dividing the number of times the item was selected by the number of times it was presented.
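
In computational terms, this is a simple tally. The Python sketch below is our illustration rather than part of the original procedure; the trial-log format and item names are assumptions, and it shows one way to derive selection percentages and ranks from trial-by-trial records.

```python
from collections import defaultdict

def selection_percentages(trials):
    """Selection percentage for each item across MSWO trials.

    `trials` is a list of (presented_items, selected_item) pairs, one per
    trial; percentage = times selected / times presented * 100.
    """
    presented = defaultdict(int)
    selected = defaultdict(int)
    for items, choice in trials:
        for item in items:
            presented[item] += 1
        selected[choice] += 1
    return {item: 100.0 * selected[item] / presented[item] for item in presented}

def rank_items(percentages):
    """Rank items from highest selection percentage (rank 1) downward."""
    ordered = sorted(percentages, key=percentages.get, reverse=True)
    return {item: rank for rank, item in enumerate(ordered, start=1)}

# Hypothetical three-item administration in which "video game" was chosen first.
trials = [({"video game", "puzzle", "blocks"}, "video game"),
          ({"puzzle", "blocks"}, "puzzle"),
          ({"blocks"}, "blocks")]
print(rank_items(selection_percentages(trials)))  # {'video game': 1, 'puzzle': 2, 'blocks': 3}
```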

Interobserver agreement for item selection was assessed during an average of 40% of trials across participants. An agreement was defined as both observers recording the selection of the same item during a trial. Interobserver agreement for all participants was 100%.

Procedure

We conducted six MSWO assessments (three 30-s access and three 5-min access) in a quasirandom order with each participant. Each access duration was associated with a differently colored poster board labeled “a little” (30-s access) or “a lot” (5-min access). The poster board was placed on the table, and a timer was set for the appropriate access duration to aid in discrimination. Stimuli included in the assessments were seven leisure items (e.g., handheld video game, play set) that were not novel to the children but to which access was restricted throughout the school day. We attempted to include both items likely to be preferred for long-access durations and items likely to be preferred for short-access durations.

Before the start of each assessment, the therapist provided exposure to each stimulus by labeling each item, describing the length of access duration (i.e., short or long), and prompting the participant to label each item. To start each assessment, all seven leisure items were presented equidistant from each other and the participant. To begin a trial, the experimenter said, “Pick your favorite.” Contingent on item selection, the therapist provided access to the selected item, a timer that counted down access duration was placed in front of the participant, and the other items were removed. When access duration elapsed, the therapist removed the item and re-presented all unselected items in a different order than in previous trials. The assessment was completed when all items had been chosen and removed from the array. The array was presented once per assessment (three times total per access duration).
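
The trial flow above follows the standard MSWO logic: present the array, deliver timed access to the selected item, remove it, and re-present the remaining items in a new order until none remain. The Python sketch below summarizes that loop; `prompt_selection` and `provide_access` are hypothetical placeholders for the therapist's actions, not real procedures.

```python
import random

def run_mswo(items, access_duration_s, prompt_selection, provide_access):
    """One MSWO administration, as described above (a sketch).

    `prompt_selection(remaining)` stands in for the therapist saying
    "Pick your favorite" and returning the chosen item; `provide_access`
    stands in for timed access to that item. Returns items in the order
    selected (index 0 = first chosen, i.e., rank 1).
    """
    remaining = list(items)
    selection_order = []
    while remaining:
        random.shuffle(remaining)                   # re-present in a different order
        choice = prompt_selection(remaining)        # participant selects one item
        provide_access(choice, access_duration_s)   # timed access, then item is removed
        remaining.remove(choice)
        selection_order.append(choice)
    return selection_order
```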

RESULTS AND DISCUSSION

Stability was examined based on similarity in rankings for HP items across assessments. Disparity was examined based on the displacement of an item ranked in the top two in one assessment to a ranking in the bottom two in the other; these rank values were chosen because of the relatively small array size. Quantitative analyses were conducted for across- and within-access-duration data using nonparametric statistical tests in SPSS (Version 21). Kendall rank-order correlation coefficients (τ) were calculated between overall item ranks in the 30-s assessments and in the 5-min assessments. Kendall coefficients of concordance (W) were calculated to examine the consistency of rankings across the three administrations of each access duration.
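
These statistics can also be reproduced outside SPSS. The sketch below (Python with NumPy/SciPy; the rank vectors are hypothetical examples for a seven-item array) uses scipy.stats.kendalltau for τ and computes W from the standard formula 12S / [m²(n³ − n)] without a tie correction.

```python
import numpy as np
from scipy.stats import kendalltau

def kendalls_w(rank_matrix):
    """Kendall's coefficient of concordance W (no tie correction).

    `rank_matrix` has shape (m, n): m administrations each ranking n items.
    """
    ranks = np.asarray(rank_matrix, dtype=float)
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# Correspondence of overall item ranks across the two access durations
ranks_30s  = [1, 2, 3, 4, 5, 6, 7]   # hypothetical overall ranks, 30-s assessments
ranks_5min = [2, 1, 3, 5, 4, 7, 6]   # hypothetical overall ranks, 5-min assessments
tau, p = kendalltau(ranks_30s, ranks_5min)

# Consistency of ranks across the three administrations of one access duration
w = kendalls_w([[1, 2, 3, 4, 5, 6, 7],
                [1, 3, 2, 4, 5, 7, 6],
                [2, 1, 3, 4, 6, 5, 7]])
```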

Figure 1 displays preference assessment data across administrations of each assessment type, Kendall τ coefficients, and Kendall W coefficients for all participants. With respect to stability, item rankings across 30-s access assessments and 5-min access assessments remained relatively stable, with the same HP item being identified across assessment types for 8 of 11 participants. For the three participants for whom the HP item differed across access durations (Isabelle, Fred, and Laura), the HP item in the 30-s access assessment dropped no more than two ranks in the 5-min access assessment. At least one item remained in the top two across access durations for all 11 participants; six participants (Lauren, Tim, Eliza, Fred, Ophelia, and Pete) ranked the same items in the top two across access durations. No participants displayed rankings that fit our definition of disparity. Thus, HP items remained highly preferred regardless of access duration, and item ranks did not differ drastically across access durations for participants.

Figure 1. Each participant's item rankings across assessment type (30-s access and 5-min access). Graphs are presented in order of decreasing correspondence between access durations according to Kendall τ coefficients. Kendall W coefficients for each access duration are also presented. *p < .05; **p < .01.

Kendall's τ analysis showed that rankings between the two assessment types were significantly correlated for eight participants, although these were not the same eight participants described above for whom the same HP item was identified across assessments. Item rankings remained similar across access durations for the majority of participants, although the rankings that remained stable were not necessarily those of HP items. Two participants whose τ coefficients were significant had different HP items identified across assessments (Fred and Laura), and two participants whose τ coefficients were not significant had the same HP item identified across assessments (Alan and Carmen). These findings indicate that associated rankings across assessments (i.e., a significant τ coefficient) may not necessarily reflect the same HP item across access durations, and unstable rankings across assessments may not necessarily reflect different HP items across access durations. The finding that 73% of participants ranked items relatively similarly across access durations differs from that of Steinhilber and Johnson (2007); the general stability of rankings that we found may suggest that a short-access-duration preference assessment is sufficient to identify an item that will be preferred for both short access and long access.

Kendall's W analysis showed that rankings within 30-s access administrations were significantly correlated for 4 of 11 participants, whereas rankings within 5-min access administrations were significantly correlated for 8 of 11 participants. Stability in rankings across 5-min assessments for the majority of participants is similar to results found in previous studies in which the stability of preference over time was examined (e.g., Carr, Nicholson, & Higbee, 2000). These results may suggest that the 5-min assessments provided a more accurate depiction of participants' preferences than the 30-s access assessments.

These results suggest that conducting a 5-min preference assessment may address issues that arise with 30-s access assessments, such as when (a) an HP item from a 30-s assessment fails to function as a reinforcer when delivered for longer than 30 s, or (b) disparate rankings across 30-s administrations make the assessment results difficult to interpret. However, results of the Kendall τ analysis showed generally good correspondence between 30-s access and 5-min access rankings, suggesting that, even with some instability of item rankings across administrations within the same access duration, overall rankings across access durations remained relatively stable. That some obtained τ and W correlations did not achieve statistical significance implies that, for some participants, preference for some leisure items may be affected by the duration for which the items are available. Certain items (e.g., puzzles, video games) may require a longer access period before their reinforcing value is realized; this should be considered when selecting items for use in preference assessments. Taken together, these data imply that a short-access preference assessment may be sufficient for most children, but a long-access preference assessment may provide a more accurate indicator of preference if the 30-s access assessment fails to identify an HP item to be used as a reinforcer.

There are some limitations of this study to address in future research. We selected 30 s and 5 min as the short- and long-access durations, respectively, because they are relatively common access durations in clinical and educational environments. However, 30 s and 5 min may not differ enough for an access-duration effect to emerge. Also, we did not evaluate the extent to which HP items functioned as reinforcers. Finally, these procedures were evaluated with typically developing children, a population that differs from those in previous research. Given these limitations, our conclusions should be viewed as preliminary, and future research is necessary to determine the conditions under which access duration is an important variable in preference assessments.

REFERENCES

Action Editor: Thomas Higbee