The four major approaches to quantification of time use among social scientists include: (1) continuous direct observation of time and motion; (2) real time activity sampling (referred to as work sampling when the activities of interest are associated with a particular job-related role such as bedside nursing); (3) self-report; and (4) derived estimates from administrative databases. These approaches differ in accuracy, intrusiveness, specificity, precision, bias, and efficiency.
Continuous Direct Observation
Direct observation of time-use behavior arguably remains the gold standard for accuracy in quantification of time use (Bratt et al., 1999; Larson, Aiello, & Cimiotti, 2004; Ver Ploeg et al., 2000). A designated observer follows a subject of interest in real time and records the duration of time spent on activities of interest and/or in locations of interest. The accuracy gained through direct observation does not come without significant costs, which include human resources for observation and data entry, intrusion during interactions and activity flow, and potential for changes in time-use behaviors based on perceived social desirability (Bratt et al., 1999; Larson et al., 2004; Weigl, Müller, Zupanc, & Angerer, 2009). Continuous direct observation of time use is often considered cost-prohibitive for large-scale projects.
Work Sampling
Time-use estimates generated by work sampling reflect the proportion of time spent on activities or at locations rather than actual time duration. Estimation of time use through work sampling is achieved through recording predefined activities as they are performed and/or the worker's presence at predefined locations at a number of randomly selected times or established intervals (Robinson, 2010). Each recorded activity/location is considered an occurrence. The sum of occurrences for a given activity or location is divided by the sum of occurrences across all activities/locations to obtain a time-use estimate for that activity/location (Finkler, Knickman, Hendrickson, Lipkin, & Thompson, 1993; Pape, 1992).
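The occurrence-to-proportion arithmetic described above can be sketched as follows; the activity labels and counts are hypothetical, chosen only to illustrate the computation:

```python
from collections import Counter

def work_sampling_proportions(occurrences):
    """Estimate time-use proportions from work-sampling occurrence records.

    `occurrences` is a list of activity/location labels, one per sampling
    point. The estimate for each activity is its occurrence count divided
    by the total number of occurrences across all activities.
    """
    counts = Counter(occurrences)
    total = sum(counts.values())
    return {activity: n / total for activity, n in counts.items()}

# Hypothetical record: 20 sampling points for one worker
record = (["direct care"] * 9 + ["charting"] * 6 +
          ["medication prep"] * 3 + ["other"] * 2)
print(work_sampling_proportions(record))
# direct care -> 9/20 = 0.45, and the estimates sum to 1.0
```

Note that the resulting values are proportions of sampled occurrences, not durations; converting them to absolute time requires multiplying by the total time covered by the sampling frame.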
Traditionally, the frequency of occurrences has been obtained through direct observation by trained observers shadowing one or more subjects of interest. Observers are equipped with a checklist of activities/locations and a stopwatch or pager programmed to alarm at predetermined or randomly determined times. At the predetermined intervals or at the sound of the alarm, observers record the observed activity/location on the checklist. Because work sampling involves intermittent rather than continuous recording of time-use behavior, it may be possible to enhance efficiency of data collection by having one observer shadow multiple subjects simultaneously.
In work sampling, accuracy and precision are a function of the number of sample points and the level of detectable time proportion desired (e.g., activities/locations occurring at a frequency of 5%, 10%, 15%, etc.). The number of sample points is determined by the number of subjects shadowed, the duration of the data collection period (i.e., days, weeks, or months) and the interval between sampling points (i.e., every 5, 10, or 15 minutes). Estimation of time use with a high degree of precision and confidence through work sampling often requires significant human resources, particularly if detection of activities/locations associated with infrequent time use is desirable (Pelletier & Duffield, 2003). For example, Finkler et al. (1993) reported a 20% or greater difference in time-use estimates obtained by continuous direct observation (8 residents observed for 24 hours) and work sampling (15-minute sampling intervals and 892 total observations) for 8 of 10 activities recorded. Almost all of the activities (9 of 10) occurred at a frequency of <20%. The number of observations required to achieve time-use estimates with 10% precision varied significantly based on frequency of occurrence: 1,532 for activities consuming >20% of time; 3,682 for activities consuming 10% of time; 7,007 for activities consuming 5% of time; and 21,822 for activities consuming 2% of time.
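The steep growth in required observations as activity frequency falls can be illustrated with the standard binomial sample-size formula for relative precision, n = z²(1 − p)/(r²p). This is a common textbook approximation and not necessarily the exact computation behind the figures reported by Finkler et al.:

```python
import math

def required_samples(p, rel_precision=0.10, z=1.96):
    """Binomial sample size for estimating a proportion p to within
    +/- (rel_precision * p) at the confidence level implied by z
    (1.96 corresponds to ~95%)."""
    return math.ceil(z**2 * (1 - p) / (rel_precision**2 * p))

# Required n rises steeply as the activity's share of time falls
for p in (0.20, 0.10, 0.05, 0.02):
    print(f"p = {p:.2f}: n = {required_samples(p)}")
```

Under these assumptions an activity consuming 20% of time needs roughly 1,500 observations for 10% relative precision, while one consuming 2% of time needs over 18,000, which is the same order-of-magnitude pattern as the figures cited above.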
More recently, personal digital assistants (PDAs) have been used for self-report and direct entry of time-use data in response to randomly timed alarms for work sampling studies (Ogunfiditimi, Takis, Paige, Wyman, & Marlow, 2013; Robinson, 2010; Upenieks, Akhavan, Kotlerman, Esser, & Ngo, 2007). The use of PDAs can reduce the resources required for data collection and data entry compared to direct observation, but the issue of potential response bias persists (Donaldson & Grant-Vallone, 2002; Robinson, 2010). Moreover, an alarming PDA may be perceived by the bedside nurse as intrusive and disruptive to workflow, particularly when the time interval between alarms is shortened to enhance precision.
Self-Report Measures
Due to the intensity of resources required for direct observation, self-report measures of time use are often used instead. The most commonly used self-report time-use measures include time diaries (TD), experiential sampling methodology (ESM), and stylized respondent reports (SRRs). TDs typically require respondents to keep a chronological record of their activities over a predetermined period of time. The record can be completed in real time or retrospectively. The most common approach involves free-form entries, allowing respondents to use their own description of activities and include actual start and stop times. TDs completed in real time are considered to have minimal recall error, and the recording of actual start and stop times enhances the accuracy and precision of time-use estimates (Otterbach & Sousa-Poza, 2010). Respondent burden can be significant, however, particularly when free-form responses are required. For this reason, TDs typically are limited to 1–2 days per participant, which limits the potential for pattern recognition and generalizability of time-use estimates (Lin, 2012). Moreover, error may be introduced through the data coding process, and significant resources may be required for coding and data entry of the free-form responses.
Similar to the work sampling approach, ESM involves collection of data at multiple randomly selected times over a predefined period (i.e., day, week, month). Respondents are provided with a programmable device that is activated (e.g., to beep, vibrate, or buzz) randomly throughout the data collection period. In response to the alarm, respondents record information about what they are experiencing in that moment. In contrast to work sampling, the information recorded in ESM can be rich in detail and include multiple aspects of the time experience (e.g., cognitive, behavioral, and affective) (Juster, Ono, & Stafford, 2003). Self-report forms for ESM typically contain a set of core items, which may use a variety of response options: free-form text, fill in the blank, semantic differential scales, visual analog scales, and checklists (Ver Ploeg et al., 2000). As in work sampling, time-use estimates represent proportions of time spent rather than actual duration of time spent. Because detailed information is recorded in ESM, respondent burden and resources for data coding and entry can be significant.
SRRs of time spent require respondents to recall and estimate how much time they “normally” or “typically” spend on a list of predefined activities within a given time frame (e.g., day, week, month). Response options can be crafted to assess relative and/or absolute time spent. Response options to assess absolute time spent may be open-ended, allowing respondents to fill in a specific amount of time, or they may include ordinal scales with ranges of time duration (Manson, Levine, & Brannick, 2000; Otterbach & Sousa-Poza, 2010; Ver Ploeg et al., 2000). Stylized items may measure relative time spent by asking respondents to use Likert-type response options to rate the amount of time spent performing an individual task relative to all other tasks being considered (Manson et al., 2000). SRRs can be completed using a variety of formats, including interviews (by phone or in person), paper and pencil mail surveys, and online surveys. Although respondent burden and data collection costs are lower with SRRs than for the other self-report methods, the potential for recall bias and aggregation error is greater.
All direct observation and self-report methods have the potential for biased estimates that favor time spent on socially desirable activities (Donaldson & Grant-Vallone, 2002). Greater overall bias in time-use estimates has been consistently demonstrated in SRRs compared to TDs and direct observation (Bratt et al., 1999; Collopy, 1996; Juster et al., 2003; Lin, 2012; Otterbach & Sousa-Poza, 2010). Discordance between measures can be significant. For example, Bratt et al. (1999) reported mean absolute differences of 59–60 minutes in daily time-use estimates, and Collopy (1996) reported median absolute differences of 32–47%.
Reported concordance among time-use estimates across methods is highly variable. Hunting et al. (2010) reported moderate agreement (60%) between time-use estimates across nine task categories among construction workers, but differences in concordance were noted based on relative task proportion and job role. Intraclass correlation coefficients (ICCs) for major tasks (i.e., those performed >1 hour/day) ranged from 0.52 to 0.85 and were considered good to excellent. In contrast, ICCs for minor tasks (i.e., those performed <1 hour/day) were primarily poor (ICCs 0.39–0.54). There was a trend toward higher self-report time-use estimates for major tasks and lower self-report estimates for minor tasks. Agreement among time-use estimates was higher among workers in specialized job roles with consistent work patterns (few tasks routinely performed in a controlled environment) compared to workers in roles with greater variability in work patterns (many tasks performed in response to variable situational contexts). In job roles with variable work patterns, over half of the time-use estimates differed by more than 1 hour. This is particularly relevant because work patterns in bedside nursing also are highly variable and context-dependent. Despite variable ICCs, there was good agreement (79%) between methods regarding the rank order of time-use estimates.
Burke et al. (2000) compared self-report time-use estimates from bedside nurses to time-use estimates obtained through direct observation and work sampling. No method effect was noted for estimates of the percentage of time spent across four main categories of care, suggesting good concordance between work sampling and self-report. However, significant discordance was noted between estimates of time duration for specific activities. Self-report estimates were 2–3 times longer than direct observation estimates.
Staffing Indices From Administrative Databases
The fourth approach to quantification of time use common among health services researchers involves staffing indices derived from data collected for administrative purposes in conjunction with normal daily operations. The most common staffing indices include nursing hours per patient day (NHPPD), registered nurse full time equivalents (RN FTEs), and nurse patient ratios (NPR), all of which involve aggregated measures of worked hours (from payroll data) and patient census (from billing data; Buerhaus & Needleman, 2000; Currie, Harvey, West, McKenna, & Keeney, 2005; Heinz, 2004; Kane et al., 2007; Numata et al., 2006; Thungjaroenkul et al., 2007). Payroll data about worked hours reflect time spent in crudely defined work roles, such as direct versus non-direct care. Time spent on specific activities within either role is not captured. Patient census data are primarily recorded for billing purposes and reflect the number of inpatients present in a facility at a single point once every 24 hours (most often at midnight when a new billing cycle begins). Actual time spent in the care of specific providers is not captured.
Although staffing indices may be more efficient than other time-use methods and immune from bias associated with perceived social desirability, other limitations of this approach have been reported. Standardized definitions for key variables used in the computation of staffing indices (e.g., direct care and FTE) across organizational databases have not been established. Consequently, significant discordance among staffing indices has been reported (Spetz, Donaldson, Aydin, & Brown, 2008). Moreover, staffing indices are biased estimates of time use because they are known to overestimate time spent in the direct care role (Upenieks et al., 2007). Finally, although staffing indices may be useful to study trends over time, they do not provide sufficient precision to accurately quantify the effect of time spent in nursing care on patient outcomes or uncover the mechanism of action through which nurse staffing might affect patient outcomes (Buerhaus & Needleman, 2000; Clarke, 2007). Staffing indices are crude surrogates for time spent with patients, and it is not known whether increases in nurse staffing actually result in increased time spent with patients.
Health services researchers and hospital administrators are eager for time-use estimates for bedside nursing staff that are accurate, precise, unbiased, reliable, and cost-effective. A single method superior in each of these characteristics has yet to be identified, and exploration of new methods is warranted.
Real Time Location System (RTLS) for Time-Use Estimation
Electronic capture of time and motion through real time location systems (RTLS) is an innovative and promising approach to time-use measurement that is now available for application in the healthcare setting. Radiofrequency identification (RFID) technology is the primary underlying mechanism for RTLS, and the terms are often used interchangeably. The original application of RTLS in healthcare was for asset tracking and supply chain management. RTLS application subsequently expanded to include tracking patient throughput to document dwell times and identify bottlenecks in patient flow. More recently, experimentation with RTLS as a method to quantify time spent at the bedside by nursing staff has been reported (Hendrich, Chow, Skierczynski, & Lu, 2008). Furthermore, a detailed description of the RFID mechanism for automated wireless time-use data collection has been reported (Jones, 2012).
In RTLS, location tracking is achieved using microchips embedded in tags worn by subjects of interest. Each microchip intermittently transmits a unique electronic signal to uniquely identified sensing devices (readers) placed in locations of interest. Upon recognition by a reader, a second wireless signal carrying information regarding the identification number of the tag and the location of the reader is transmitted. This transmission is received and recorded by another sensing device (interrogator), which adds an electronic timestamp reflecting the time the signal was received. A real time movement history for each tag is generated, enabling computation of time spent in each location.
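The final step described above, turning a tag's timestamped movement history into time spent per location, can be sketched as follows. The data layout, location names, and times are illustrative assumptions, not the record format of any particular RTLS product:

```python
from datetime import datetime

def dwell_times(events):
    """Compute seconds spent per location from one tag's movement history.

    `events` is a chronologically ordered list of (timestamp, location)
    pairs. Each interval between consecutive events is attributed to the
    location recorded at the start of the interval; the final event has
    no closing timestamp and is left unattributed.
    """
    totals = {}
    for (t0, loc), (t1, _) in zip(events, events[1:]):
        totals[loc] = totals.get(loc, 0.0) + (t1 - t0).total_seconds()
    return totals

# Hypothetical movement history for a single tag
ts = datetime.fromisoformat
history = [
    (ts("2024-01-01T08:00:00"), "room 12"),
    (ts("2024-01-01T08:10:00"), "hallway"),
    (ts("2024-01-01T08:12:00"), "room 14"),
    (ts("2024-01-01T08:30:00"), "hallway"),
]
print(dwell_times(history))
# room 12: 600 s, hallway: 120 s, room 14: 1080 s
```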
Vulnerability to artifact is a limitation of the RTLS methodology. Artifact can result in invalid, misread, and/or missed RTLS signal entries, all of which represent measurement error and adversely affect the accuracy of time-spent estimates (Jones, 2012). Consequently, a filtering process designed to identify and correct entries due to artifact is a necessary adjunct for the RTLS methodology.
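One simple filtering strategy is to discard entries whose dwell is implausibly brief (e.g., a stray signal picked up by a reader in an adjacent room) and then merge consecutive entries for the same location. The sketch below illustrates this under stated assumptions; the function name, 30-second threshold, and data layout are ours, not drawn from the RTLS literature or from any published filtering protocol:

```python
from datetime import datetime

def filter_artifact(events, min_dwell_s=30):
    """Drop likely-artifact entries from a chronologically ordered list of
    (timestamp, location) pairs, then merge consecutive same-location
    entries. The 30-second threshold is an illustrative assumption."""
    kept = []
    for i, (t0, loc) in enumerate(events):
        t1 = events[i + 1][0] if i + 1 < len(events) else None
        if t1 is not None and (t1 - t0).total_seconds() < min_dwell_s:
            continue  # implausibly brief dwell: treat as a misread
        if kept and kept[-1][1] == loc:
            continue  # same location as the preceding kept entry: merge
        kept.append((t0, loc))
    return kept

ts = datetime.fromisoformat
raw = [
    (ts("2024-01-01T08:00:00"), "room 12"),
    (ts("2024-01-01T08:10:00"), "room 14"),  # 10-second blip
    (ts("2024-01-01T08:10:10"), "room 12"),
    (ts("2024-01-01T08:30:00"), "hallway"),
]
print(filter_artifact(raw))
# the 10-second "room 14" blip is dropped and the two "room 12"
# entries collapse into one
```

In practice such rules trade sensitivity for specificity (an aggressive threshold also discards genuinely brief visits), which is one reason manual review of filtered output, as examined in this study, remains relevant.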
The application of RTLS to obtain time-use data for bedside nurses is in its infancy, and minimal empirical evidence regarding efficacy and feasibility has been reported (Fahey, Lopez, Storfjell, & Keenan, 2013). Therefore, the purpose of this pilot study was threefold: (1) to assess the efficacy of RTLS time-use estimates for bedside nursing staff compared to the gold standard of direct continuous observation; (2) to assess inter-rater reliability of manually filtered RTLS time-use estimates; and (3) to identify the monetary resources required to support time-use estimation among bedside nursing staff using RTLS technology.