There is a need for metrics that describe the full range of services provided by a clinical research unit, given that those services have expanded to include investigator training, regulatory compliance monitoring, and budget negotiations. We developed a tool and methodology that allow tracking of these expanded services. This not only allowed us to describe the work of the research unit staff more accurately, but also to monitor the status of a study across its entire lifespan, from idea to publication. In addition to measuring work, the tool allows us to anticipate future needs in clinical staff and expertise because we are involved very early in study planning. We also expect that by analyzing these data from many studies over time, we will identify process barriers that will direct future program improvement.
Everything that can be counted does not necessarily count; everything that counts cannot necessarily be counted.
The nature of the academic clinical research unit has changed with a new emphasis on translational research and the development of the Clinical and Translational Science Awards (CTSA). These awards allowed and encouraged research to escape the confines of the hospital inpatient unit and expand to outpatient and community-based sites. Concurrent with this diaspora is increased attention to documenting the ethical conduct of research, the implementation of which requires increased training, administrative support, preparatory detail, and participant engagement. In addition, the regulatory environment now requires more extensive documentation by everyone at all levels of the research enterprise. Together, these changes fundamentally alter the nature of the service an academic research unit should deliver in order to provide investigators with the maximum amount of support in the conduct of their clinical research projects. These services include guidance regarding study design, good clinical practice, and regulatory issues and documentation.[2, 3] Some of these activities occur long before or after a study participant walks through the door of the clinical research unit (CRU), if the participant ever walks through the CRU door at all.
At the same time, institutions and funding agencies are increasingly focusing on productivity and measurable outcomes to justify expenses and programs. Investigators are being asked to budget for the costs of many services that previously would have been covered through institutional funding or the General Clinical Research Center funding mechanism. This puts increasing pressure on the CRU to measure and justify the critical work it does in facilitating clinical research.
In the days before the CTSAs, unit activity was easily tracked by counting the number of patient visits, samples processed, etc. (Figure 1). With the expansion of services, we have been challenged to devise new ways to accurately describe and quantify the work of the CRU, and to do so in a way that does not take more time than the activity itself takes to perform. The quote above from Albert Einstein seemed appropriate as we struggled to quantify things like advice and networking. We needed a strategy for measuring unit activity that describes the considerable cumulative effort included under both ends of the effort curve (Figure 1). We decided to take a more qualitative approach to our metrics as a supplement to the more traditional head counting. Here, we report on the process used for developing and implementing a supplementary clinical research unit activity report.
The methodology for compiling the metrics began as an iterative process: we first devised a list of activities that was both inclusive and specific, then ran a trial period to test the system for accuracy and practicality. Initially we attempted to track the hours spent on each activity for each protocol or study, but it quickly became clear that this strategy was too labor intensive because the staff do a considerable amount of multitasking. For example, a staff member may make a single trip to the Bursar's Office to get petty cash for participant incentives for three different studies at once. Because we wanted our focus to remain on providing quality service rather than on documenting that we provide service, we needed a simpler methodology. We instead began documenting whether each activity was performed, recorded as dichotomous variables.
To compile the activity log, we made a list of all the kinds of activities we support. A sample activity log is shown in Figure 2. The plan was simply to indicate whether an activity was performed over a 1-month period, without attempting to quantify the number of hours spent on any one activity. For example, a one-minute phone call regarding an IRB question and a 4-hour study visit would each receive one checkmark for the corresponding activity type for the appropriate study that month. The checklist is then tallied monthly.
The staff members of the CRU use different strategies depending on their role within the unit and their routine mechanism for organizing their time. The administrative staff have a wider range of activities and assist a greater number of studies within a single month and therefore have a more complex set of source documents. These source documents include:
A printed daily MS Outlook calendar. These contain the contact's name, company, phone number, time of call, purpose, follow-up necessary, and notes.
For the administrative staff it was easier to assemble these documents and tally the information on a weekly basis, then collate the weekly activity for the monthly report.
The technical staff (nursing, laboratory, etc.), who tend to work on a limited number of different studies within any 1 month, keep track of their activities as they work throughout the month. Each has a blank data log printed out for every study they work on each month (Figure 2) and makes a check by any activity as they perform it for each study. However, again, any one task only gets one check per month.
Once a month all the activities are consolidated onto a single activity sheet for each study by the unit manager. When compiling the data on any one study, each activity gets only one “checkmark” per study per month, regardless of the number of hours or the number of different staff members involved. For example, if in 1 month one study has one meeting, two phone calls, and seven emails all regarding assistance with contracting, the monthly activity log receives only one “checkmark” for “assistance with contracting” for that particular study.
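The actual tool is a paper checklist and Excel workbook, but the one-checkmark-per-activity-per-study-per-month consolidation rule can be sketched in code. Below is a minimal illustration in Python; the function, study, and activity names are hypothetical and chosen only to mirror the contracting example above:

```python
from collections import defaultdict


def consolidate(staff_logs):
    """Merge individual staff activity logs into one monthly log per study.

    Each staff log is a list of (study, activity) events. No matter how many
    staff members or repeat occurrences report the same pair, the consolidated
    log records at most one checkmark per activity per study per month.
    """
    monthly = defaultdict(set)  # study -> set of activities checked this month
    for log in staff_logs:
        for study, activity in log:
            monthly[study].add(activity)
    return monthly


# One meeting, two calls, and several emails about contracting for "Study A"
# still collapse to a single checkmark for that activity.
nurse_log = [("Study A", "assistance with contracting")] * 3
admin_log = [
    ("Study A", "assistance with contracting"),
    ("Study A", "IRB submission support"),
    ("Study B", "study visit"),
]
merged = consolidate([nurse_log, admin_log])
```

Representing each study's month as a set makes the deduplication automatic, which is exactly what the checklist's dichotomous (checked/unchecked) design achieves on paper.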
Once emails and calendars have been coded onto the study log sheet, the data are tabulated into an MS Excel worksheet. The spreadsheet rows correspond exactly with the list of activities on the activity log sheet, and each column holds the data for one study. The column headers also include the investigator name or other parameters by which the data may be sorted during future analysis, such as department or division name, or whether the study is pharmaceutical or NIH sponsored. Each month, the activities (checked boxes) are tallied for each type of activity across all studies. A new sheet is created for each month, making the month the unit of time measured. Using Excel, columns for studies without activity in the month being tabulated can be hidden.
In this monthly summary analysis, any one activity may have more than one occurrence; for example, “assistance with contracting” may have been utilized four times in a given month by four different studies. This provides a snapshot of the workload, if not in hours, at least by type and volume of service.
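The monthly summary row in the Excel sheet is just a per-activity count of checked boxes across study columns. A small Python sketch, again with hypothetical names, shows the same tally; because each study contributes at most one checkmark per activity, the count reads as "how many studies used this service", not hours:

```python
from collections import Counter


def monthly_summary(monthly_log):
    """Tally, for each activity type, the number of studies that checked it
    this month. Input maps study name -> set of activities performed."""
    tally = Counter()
    for activities in monthly_log.values():
        tally.update(activities)
    return tally


# Four different studies each needed contracting help in the same month.
log = {
    "Study A": {"assistance with contracting", "study visit"},
    "Study B": {"assistance with contracting"},
    "Study C": {"assistance with contracting"},
    "Study D": {"assistance with contracting", "IRB submission support"},
}
summary = monthly_summary(log)
```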
We developed a list of activities involved in the support of a clinical research study, capturing events throughout the life of a project from concept to publication (Figure 2). The goal is not to create an accounting of effort in terms of minutes and hours spent on an activity; the goal is to create a higher level of tracking in which the unit of time is 1 month. We broke activities down into four major areas:
Direct study activity.
Protocol and Site Development.
Investigator and Staff Development.
However, the list of activities is sufficiently detailed that it becomes semiquantitative, because any one study is likely to have multiple different activities in 1 month. A sample 6-month report is shown in Figure 3. These data do not replace, but supplement, the more traditional CRU activity report of the number of study participant visits and the number of specimens processed.
We observed that in any given month, one or a few projects are the focus of activity, making this a less onerous task than it may initially appear. Also, within the unit, any one staff member works on a limited number of studies or performs a limited scope of activities.
Beyond merely creating a summary column on each monthly sheet for a monthly unit report, tracking activity in this way allows us to understand other aspects of the research enterprise. We can collate the data on a single study across many months or years by creating a spreadsheet that links together all the columns for one particular study across many monthly spreadsheets. This gives a picture of the timeline of a study and a better understanding of the entire process, including the average time in months of each phase of the clinical study process. Additionally, looking at the data from many studies in this way lets us see where the delays and roadblocks might be, and thereby identify processes in need of improvement. Finally, early involvement with and tracking of studies as they develop allows us to project staff and facility needs well in advance.
These data can also be sorted and subdivided to provide a summary of the activity for individual investigators or departments. They can also be sorted by funding type to determine trends within the unit, such as changes in the number of pharmaceutical company sponsored studies, or analyzed to address questions such as how the overall timeline of pharmaceutical studies compares to that of studies with other funding sources. These analyses should allow us to observe trends and project future needs with regard to space and personnel.
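The cross-month collation and funding-type breakdowns described above are done in the paper's workflow by linking Excel columns; the same two operations can be sketched in Python for clarity. The month keys, group labels, and study names below are hypothetical illustrations, not part of the tool:

```python
from collections import Counter


def study_timeline(monthly_sheets, study):
    """Follow one study across consecutive monthly sheets.

    monthly_sheets maps a month key (e.g. "2013-01") to that month's
    consolidated log {study: set of activities checked}. The result is the
    study's life cycle as chronological (month, sorted activities) pairs.
    """
    return [(month, sorted(sheet.get(study, set())))
            for month, sheet in sorted(monthly_sheets.items())]


def tally_by_group(monthly_sheet, study_groups, group):
    """Tally one month's activity counts for the studies in one group,
    e.g. pharmaceutical-sponsored versus NIH-sponsored studies."""
    tally = Counter()
    for study, activities in monthly_sheet.items():
        if study_groups.get(study) == group:
            tally.update(activities)
    return tally


sheets = {
    "2013-01": {"Study A": {"IRB submission support"}},
    "2013-02": {"Study A": {"study visit"},
                "Study B": {"budget negotiation"}},
}
groups = {"Study A": "pharma", "Study B": "NIH"}

timeline = study_timeline(sheets, "Study A")
nih = tally_by_group(sheets["2013-02"], groups, "NIH")
```

Keeping one dictionary per month mirrors the one-sheet-per-month Excel structure, so the timeline view is just a lookup of the same study key in each month.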
We appreciated that the services provided to investigators by our CRU are much more extensive than merely providing nursing and laboratory processing support for the actual study visit. The major types of activities and their temporal relation are shown in Figure 1. We were challenged to measure all the services we provide, and to do so in a way that did not add undue burden, reduce the productivity of the unit, or shift the focus from doing the service to counting the service.
Our clinical research unit already had two ways of counting activity. We generate a study visit report that tracks the number of active studies, investigators, patient visits, and samples processed. In addition, because we use a cost recovery model to partially support the unit, for each study we also track nursing hours, room use hours, supply costs, ancillary service costs, etc. Because we felt that some of the most important work we do is not captured in these numbers, we devised the additional activity tracking tool reported here. Much of this work takes place long before the first study participant enters the unit or long after they leave. These services include regulatory guidance, networking connections, project design, and funding strategies.
Although this tracking scheme does not capture the hours spent on any one activity, tallying the activities each month makes the month, rather than the hour, the unit of time. Also, because the data are tabulated by study, we can group and subtotal them by investigator or subspecialty, or for the entire research unit, on a monthly basis. By reviewing the data over many months, we can observe, for example, that numerous investigators ask for financial administration help for many consecutive months, indicating an area where we might justify additional staff or additional investigator training. Likewise, this type of review could reveal that after we provide IRB submission support, those studies show no activity for many months. This could provide data about a possible roadblock to research.
The data from this tool will eventually allow us to track the life cycle of a study, identify investigator needs, track trends, identify bottlenecks in the study startup process, allow better alignment of staff to service needs, and identify barriers to study completion and publication. These data are not intended to be the sole or even the primary productivity measure of CRU activity; they are intended to be used in conjunction with other reports that track space utilization (by counting the number and type of study visits) and hourly productivity (by tracking cost recovery for services rendered). Taken together, these three report types comprehensively describe CRU productivity.
We have developed a tool for measuring the wide range of services provided by our clinical research unit. It measures the types of services rendered on a monthly basis, as opposed to an hourly accounting of effort. We believe this higher-level observation of CRU activity will facilitate long-term planning that will advance the goal of improving health.
This work was supported by CTSA grant (8UL1TR000149) and CHRISTUS Santa Rosa Children's Hospital. The authors thank John Roache and the other members of the UTHSCSA CTSA for helpful discussions in the development of this tool.