Feasibility of a standard cognitive assessment in European academic memory clinics

Standardized cognitive assessment would enhance diagnostic reliability across memory clinics. An expert consensus adapted the Uniform Dataset (UDS)-3 for European centers, producing the clinician's UDS (cUDS). This study assessed the feasibility and acceptability of its implementation.


INTRODUCTION
The detection and timely diagnosis of Alzheimer's disease (AD) and related neurodegenerative pathologies are a global priority. 1 The first step of the diagnostic process is to ascertain cognitive deterioration in patients referred to specialty centers. 2 Currently, this assessment is not standardized across European academic memory clinics. 3,4 A wide range of diagnostic tools is available, in both digital and paper-and-pencil versions. 4 However, the lack of standardization leads to inconsistent diagnoses. 3,5 An exception is the German-speaking countries, which have widely implemented the Consortium to Establish a Registry for Alzheimer's Disease (CERAD) Neuropsychological Battery (NAB), owing in large measure to the availability of local norms. 6 Recently, a consensus adapted the third version of the National Alzheimer's Coordinating Center Uniform Dataset (NACC UDS-3), 7 the most widely used research battery, for European memory clinics: the clinician's Uniform Dataset (cUDS). 2 The cUDS will overcome issues of data variance by providing standard definitions of the clinical disorder, tools, and procedures. 2,4 The selection of the cognitive domains and tests to detect mild cognitive impairment (MCI) due to different etiologies was based on experts' opinion, 2 the Diagnostic and Statistical Manual of Mental Disorders, 5th edition (DSM-5) criteria, 8 and available tests (e.g., the Boston Naming Test), for an estimated administration duration of 60 to 65 min 2 (Section S1).
Standardization will improve diagnostic reliability and allow patients to request follow-up assessments or second opinions without repeating the same baseline tests, regardless of their location, as is already the case with blood tests. 2 Aligning clinical practice with research procedures allows diagnostic biomarkers and treatments to be recommended according to their demonstrated informative or therapeutic value, consistently across centers. 2,4,9 At the research level, it facilitates data pooling, cross-study comparisons, and the selection of more homogeneous patients, reducing variability. 2,4,10 The added value also extends to practical and logistical advantages, such as increased time efficiency in clinical routine, 11-13 as well as long-term economic benefits for health-care systems (e.g., avoiding unnecessary duplicated efforts). 2,14 Based on such benefits, other standardization initiatives are currently ongoing worldwide. 15,16

The implementation of novel standardized procedures in medical settings requires changing long-established clinical routines, which is often perceived as a challenge. 13,17-19 This is due to a wide variety of factors that can hinder the translation of sound research findings into clinical practice. 20-22 The evaluation of feasibility and acceptability is a necessary step to ensure successful implementation in medical settings. 18 Within a proposed implementation strategy (Figure 1), we aimed to assess the feasibility and acceptability of cUDS implementation in European academic memory clinics.
Specific objectives entailed (i) the identification of general barriers and facilitators related to the feasibility of cUDS implementation; (ii) the assessment of clinicians' willingness (or acceptability) to implement, as well as related causal mechanisms/mediators; (iii) the identification of barriers specific to clinicians willing or unwilling to implement; and (iv) the identification of concrete next steps to overcome the identified barriers and to proceed to the piloting-feasibility phase.
To guide our analysis of feasibility, we used the Process Evaluation (PE) methodological framework, typically used to evaluate processes and outcomes of complex medical interventions, 18,23 including in the dementia field. 24,25 PE uses mixed-methods designs to capture information based on both theoretical assumptions and unbiased information from the context. 18

Data collection
We invited clinicians in charge of patients' assessment from the 72 eligible EADC memory clinics (e.g., neuropsychologists, geriatricians, neurologists, or psychiatrists) to answer the survey online over a 3-month period (September-December, 2020). We sent the questionnaire to the EADC network via email indicating the estimated time of completion (15 min). The clinicians included in the mailing list were officially registered as EADC referents for the year 2020 and invited to forward the survey to the practitioner in their center. To facilitate completion, we provided an electronic copy of the survey (pdf) in advance.

Data analysis
According to our mixed-methods design, we initially performed quantitative and qualitative analyses separately and then interrelated the results on feasibility and acceptability (Figure S1). 28 We calculated response rates over the total eligible EADC centers (N = 72) 27 and computed service profiling, barriers, and facilitators on the overall responses, even if independent clinicians belonged to the same center. We performed descriptive statistics using the software RStudio (RStudio Team, 2020). For the qualitative analysis, we developed a coding scheme based on a deductive approach and our research questions, identifying four a priori domains for the classification of barriers and facilitators. 17 We divided the domains into: "clinical-methodological" (e.g., clinical-psychometric characteristics of the tests, such as local norms and appropriate selection of tests to include), "implementation process" (e.g., logistics/time), "external" (e.g., cultural-economic factors), and "unclear/none" (Methodology Section 3, Figure S2). For the coding of categories, we used both inductive and deductive analysis approaches to capture the unexpected meaning of responses while taking into account theoretical considerations, 26 assigning at least one code to each clinician's answer. To assess the reliability of the coding, we calculated the intercoder reliability (ICR), which showed 54% agreement between raters, with κ = .51, equivalent to a moderate level of agreement. We considered the answer to the question "Would you use the 1-h cUDS as a standard battery to assess MCI patients in your center?" as a proxy of acceptability and used it to stratify the analysis of barriers and facilitators, setting a threshold of optimal acceptance at 80%. 3,31 We then applied logistic regressions to test the relation between independent variables, such as economic reimbursement, and acceptability of the cUDS.

FIGURE 1 Roadmap of implementation. The figure shows the steps required for effective implementation, from the initial consensus definition of the standard battery cUDS at the Geneva workshop in 2018, through the survey investigating cUDS hurdles and facilitators to implementation in memory clinics and clinicians' acceptability (current status of our work), to the piloting stage (small-scale implementation) and the evaluation of effectiveness of real-world implementation (large-scale implementation), until the intervention reaches routinization in the health-care system. For each of these steps, specific methodological frameworks can support the design of the implementation strategy.

The majority of clinicians (96%) declared using formal definitions for the diagnosis of MCI, although these were heterogeneous (Table 1). Half of the cUDS tests (4/8) were already frequently used, particularly the fluency tests (Table S2).
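The intercoder reliability reported above (54% raw agreement, κ = .51) follows Cohen's kappa, which corrects the raw percentage agreement between two coders for the agreement expected by chance. The sketch below illustrates the calculation in Python with hypothetical domain codes; the function, labels, and data are illustrative only and not the authors' actual pipeline (which used R).

```python
from collections import Counter

def cohen_kappa(coder_a, coder_b):
    """Cohen's kappa for two raters assigning one categorical code per item."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: fraction of items coded identically by both raters.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: sum over codes of the product of marginal frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codings of six answers into the four a priori domains.
a = ["clin-meth", "process", "process", "external", "clin-meth", "none"]
b = ["clin-meth", "process", "external", "external", "process", "none"]
print(round(cohen_kappa(a, b), 2))
```

Here the raters agree on 4 of 6 items (raw agreement 0.67), but kappa is lower because some of that agreement is expected by chance given each rater's marginal code frequencies.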

Responding EADC clinician and center profiles
Unavailable tests (e.g., the Multilingual Naming Test used in the U.S.) had local equivalent tests examining the same function with appropriate local norms (e.g., the Boston Naming Test) (Table S2). The surveyed EADC centers also received requests for foreign patients' assessment in 88% of cases. Figures 2A and S4.A show the barriers envisioned by clinicians to implementing the cUDS in their clinical context. Up to 64% of the hurdles related to the implementation process (43%) and clinical-methodological (21%) domains, consisting mainly of logistical issues (13%) and unavailability of local norms (12%). External factors (20%) seemed to have minimal influence, with 8% reporting low financial resources (for the extensive comments, please refer to Table S3.A). Figures 2B and S4.B show the facilitators reported by clinicians.

Willingness to implement the cUDS and economic reimbursement in EADC memory clinics
A moderate proportion of clinicians (65%) reported acceptability toward cUDS implementation. Clinicians in Northern European regions showed acceptability rates of 86%, Eastern European regions 80%, Western European regions 60%, and Southern European regions 56%. According to EADC clinicians, time for cognitive assessment is largely covered by the insurance or health-care system in their centers: 71% of respondents reported receiving medium (61% to 95%) to high (96% to 100%) levels of reimbursement (Figure S5).

FIGURE 2 Barriers (A) and facilitators (B) to cUDS implementation. Responses were grouped into four main domains: clinical-methodological, implementation process, external, and unclear/none. Open responses were provided by all 51 survey responders for both barriers and facilitators. Numbers in the bars represent the response frequency percentage calculated for each coded category. For space and illustrative purposes, the displayed range is between 0% and 16%. For the full range of percentages (0% to 100%), please refer to Figure S4 of the Supplementary Material.
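The Methods describe logistic regressions relating independent variables such as economic reimbursement to cUDS acceptability (run in R). A minimal sketch of the same idea is shown below in Python, with entirely made-up data, assuming a binary willingness outcome and reimbursement fraction as the predictor; it is an illustration of the technique, not the authors' analysis.

```python
import math

def fit_logistic(x, y, lr=0.5, epochs=20000):
    """Univariate logistic regression fitted by batch gradient descent."""
    b0 = b1 = 0.0
    n = len(x)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for xi, yi in zip(x, y):
            # Predicted probability of willingness for this center.
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            g0 += p - yi
            g1 += (p - yi) * xi
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

# Hypothetical centers: reimbursement fraction vs. willingness to adopt (1 = yes).
reimb = [0.10, 0.20, 0.30, 0.40, 0.55, 0.70, 0.85, 0.95]
adopt = [0,    0,    1,    0,    1,    0,    1,    1]
b0, b1 = fit_logistic(reimb, adopt)
# A positive slope b1 would indicate that higher reimbursement is associated
# with higher odds of acceptability.
```

In practice one would use an established routine (e.g., `glm` in R or `statsmodels.Logit` in Python) to also obtain standard errors and p-values; the hand-rolled fit above only recovers the point estimates.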

FIGURE 3 Barriers and facilitators by willingness to implement the cUDS. In the (A) and (B) charts, the y axis reports barriers' or facilitators' labels clustered according to three main domains: clinical-methodological, implementation process, and external factors, for clinicians willing (green, N = 33) and unwilling (red, N = 18) to implement the cUDS. Numbers in the bars represent the response frequency percentage calculated for each coded category. For space and illustrative purposes, the displayed range is between 0% and 25%. For the full range of percentages (0% to 100%), please refer to Figure S6 of the Supplementary Material.

For the extensive comments, please refer to Table S3.B. An example is: "We prefer to use a specific battery tailored for each patient" (ID: 19).

DISCUSSION
With this study, we investigated the feasibility and acceptability of cUDS implementation in EADC academic memory clinics, showing that 65% of responding clinicians would be willing to implement the cUDS. The NACC created a standard battery, the UDS-3, and implemented it with a top-down strategy that linked its use to National Institute on Aging (NIA) funding as an Alzheimer's Disease Center. 7 In Australia, the Alzheimer's Disease Network conducted a survey asking clinicians about the most commonly used cognitive testing tools in public and private memory clinics, to define a standard that could be more easily applied in clinics. 15 Although other standardization efforts are ongoing worldwide (e.g., UDS-3 for research purposes, 7 CN-NORM for preclinical AD 16 ) and nationally (e.g., Spain, 32 the Netherlands 33 ), the one most relevant to our work was published by our Australian colleagues. 15 Consistent with our results, they found considerable variability in terms of assessment practices and organizational aspects (e.g., funding, staff availability) 15 ; however, we additionally provided an analysis of how and why those aspects can affect the implementation of a defined standard cognitive battery. 2

Implications for future standardization
The survey results allowed us to identify key elements necessary to structure future implementation based on end-users' needs and constraints. 17,18 The first step is to provide the necessary materials, such as translations, cultural adaptations, and local norms, for each European country (Figure 4), especially for Northern European regions, which expressed a high propensity to use a common battery. For countries already using a local standard, it is necessary to provide conversion tables to translate scores 34 and ensure dataset compatibility (Figure 4). More general needed actions consist of, but are not limited to, feasibility analyses adopting the cUDS to evaluate its implementation in real-world settings (e.g., pilot studies), 24 the creation of a standard operating procedure, 13 and the development of digital tools to harmonize data entry, ultimately facilitating score computation, display, storage, and sharing. 35 In light of the cultural variability and the requests for foreign patients' assessment across EU regions, an additional step is to generate normative values in different languages.

FIGURE 4 Prioritized next steps for clinicians willing and unwilling to adopt the cUDS. We identified and prioritized required next steps for implementation based on clinicians' reported barriers and facilitators, stratified by willingness to implement.

Addressing resistances to change clinical practice
As expected, clinicians showed resistance to changing their clinical practice. 23 The Harmonization Consortium did not intend to preclude customized testing of patients or clinicians' decision-making during the full diagnostic process. 36 Rather, the standard assessment aims to provide a baseline quality standard for clinical routine, 13 allowing consistent definitions and processes across clinics. 2 We hypothesize that this discrepancy is due to concrete obstacles posed by different national funding policies, but also to clinicians' uncertainties regarding the added value of the standardization initiative, leading to the overestimation of other potential obstacles. 40 To overcome this issue, it will be important to provide up-to-date evidence of the cUDS's superior diagnostic performance to both policy-makers and clinicians.

Diagnostic performance
One specific uncertainty related to the lack of evidence on the cUDS's diagnostic performance compared to other cognitive batteries (e.g., the CERAD-NAB). The cUDS, based on the UDS-3, 7 was developed to be more sensitive to MCI detection than the CERAD-NAB, which is specific to AD dementia. 41 The two batteries overlap for some of the tests.

CONCLUSIONS
In this study, we showed moderate acceptance from clinicians to proceed with the implementation of a standard cognitive assessment in European academic memory clinics. The next steps are the provision of materials and tools to facilitate the transition from local batteries to the cUDS, together with the management of logistical, financial, and time issues.

Nonetheless, there are some limitations. Although we achieved relatively high participation (64%), the EADC centers included only academic memory clinics and cannot be considered representative of the whole European context. Also, the use of a priori domains for the interpretation of clinicians' answers has the advantage of clarifying the mechanisms of implementation, but narrows data interpretation to the researchers' perspective and expertise. 26,29 This approach is more susceptible to "preconception bias," while giving a detailed analysis of some theoretical aspects. 18 However, adopting such an approach ensures replicability, comparability, and reliability of results, especially when ICR is calculated to minimize subjectivity and variability (Section S3). 26,29

Despite these limitations, this study has two important implications. First, by importing methods from implementation science, we provided guidance and support for the implementation of an improved diagnostic procedure. Based on these findings, future studies can identify country- and context-specific requirements for adaptation (Figure 1). 23 Second, this methodology can be used to help accelerate the adoption of new neuroscientific developments into clinical practice, even beyond cognitive testing (e.g., biomarkers). 10,18,19 More studies along this line may help speed clinical dementia research and overcome the difficulties of translating scientific discoveries into everyday clinical practice.

Open access funding enabled and organized by Projekt DEAL.

CONFLICTS OF INTEREST
The authors report no conflicts of interest. Informed consent was not required for the survey respondents involved in this study.
Author disclosures are available in the supporting information.