Development of Performance Indicators for the Primary Care Management of Pediatric Epilepsy: Expert Consensus Recommendations Based on the Available Evidence
Address correspondence and reprint requests to Dr. D.A. Caplin at Division of General Pediatrics, University of Utah School of Medicine, 50 North Medical Drive, Salt Lake City, UT 84132, U.S.A. E-mail: email@example.com
Summary: Purpose: To use available evidence and expert consensus to develop performance indicators for the evaluation and management of pediatric epilepsy.
Methods: We used a three-step process to develop the performance indicators. First, research findings were compiled into evidence tables focusing on different clinical issues. Second, an advisory panel of clinicians, educational and public health experts, and families of children with epilepsy reviewed the evidence. The advisory group used the evidence to draft a preliminary set of performance indicators for pediatric epilepsy management. Third, 13 internationally recognized experts in pediatric neurology or epilepsy rated the value of these indicators on a 5-point scale [1 (essential) to 5 (not necessary)] in a two-round Delphi process. Positive consensus was reached if ≥80% of experts gave an indicator a “1” rating and negative consensus if ≥80% gave an indicator a “5” rating. Indicators that achieved positive consensus during either round of the Delphi process constituted the final set of indicators.
Results: Of the 68 draft performance indicators, the expert panel members achieved positive consensus on 30 performance indicators: eight indicators related to diagnostic strategies and seizure classification, nine related to antiepileptic drug use, six related to cognitive and behavioral issues, four related to quality of life, and three related to specialty referrals.
Conclusions: We identified 30 potential indicators for evaluating the care provided to pediatric patients with epilepsy. The next step is to examine the relation of these performance indicators to clinical outcomes and health care utilization among pediatric patients with epilepsy.
Epilepsy is one of the most common chronic medical conditions affecting children and adolescents, costing an estimated $12.5 billion annually in the United States (Ellenberg et al., 1984; Mitchell et al., 1994; Begley et al., 2000; Shinnar et al., 2000). Even greater than its financial impact, however, is the strain that uncomplicated pediatric epilepsy is adding to an already overextended child neurology workforce (Laureta and Moshe, 2004; Rothman, 2004). The 2003 Child Neurology Workforce Study found that the average wait to see a specialist in pediatric neurology is 7 weeks. One third of the pediatric neurologists reported that many of the patients referred to them had conditions that were insufficiently complex to warrant specialist care and could have been managed by pediatricians (Rothman, 2004). Because childhood epilepsy often remits over time, uncomplicated epilepsy might be considered a lesser clinical challenge than several other neurologic disorders (Rothman, 2004). A systematic review comparing the effectiveness of specialty epilepsy care with primary care management suggests that primary care may be a valid option for the management of uncomplicated epilepsy (Bradley and Lindsay, 2001).
Growing concerns about the quality and consistency of pediatric care led the Quality Standards Subcommittee of the American Academy of Neurology (AAN) to publish guidelines for the evaluation of a child's first nonfebrile seizure (Hirtz et al., 2000), for the treatment of children after a first seizure (Hirtz et al., 2003), and for the use of new antiepileptic drugs (AEDs) in the treatment of new-onset (French et al., 2004a) and refractory (French et al., 2004b) epilepsy. Guidelines specific to the primary care management of uncomplicated pediatric epilepsy have not been developed.
Pediatric epilepsy poses many unique and challenging management issues. The developing brain is uniquely affected by seizures and their management (Kalviainen et al., 1992; Mitchell, 1994; Mitchell et al., 1994; Mandelbaum and Burack, 1997). For example, medication management is an especially important concern in the treatment of children with epilepsy because of the potential impact that medications can have on their learning abilities, attention span, and behavior (Kalviainen et al., 1992; Mandelbaum and Burack, 1997; Williams et al., 1998). Suboptimal treatment can lead to emotional and social difficulties, academic failure, or additional seizures that might have been prevented (Camfield and Camfield, 2003).
The quality of epilepsy management depends, in large part, on the actions of health care providers. Donabedian's structure/process/outcome model is a conceptual framework for quality assessment that links the provision of care to patient outcomes within the context of particular health care system factors (Donabedian, 1997, 2005). In Donabedian's model, the three elements are interdependent, with health system factors influencing the process of care, which, in turn, has an impact on outcomes (Donabedian, 1997, 2005). Processes of care, the primary focus of the current study, represent the things done to and for the patient by health care providers during the clinical interaction that can influence patients' outcomes. For epilepsy care, reduced seizure frequency with fewer side effects represents a desirable health outcome (Camfield and Camfield, 2003).
To evaluate the quality of pediatric epilepsy care, it is first necessary to identify the care-related processes indicative of effective care. These care processes can then be used as performance indicators to examine the quality of care provided to patients and to improve patient outcomes. Unfortunately, studies of the clinical issues affecting children with epilepsy have not been translated into performance indicators. In this report, we describe the development of preliminary performance indicators that can be used to assess the primary care management of pediatric epilepsy, based on a review of the available evidence and expert consensus. Our focus was the primary care management of uncomplicated epilepsy in children ages 0–18 years, with epilepsy defined as two or more unprovoked seizures.
The traditional Delphi technique is a method for systematically soliciting, evaluating, and collating independent opinions from experts without group discussion (Dalkey, 1969). Prior studies have shown that this method allows the systematic organization of pertinent information where the only alternative is a subjective or anecdotal approach. In addition, the Delphi technique has been successfully used in studies of quality of care (Hearnshaw et al., 2001), health education (Broomfield and Humphris, 2001), and in the formulation of treatment guidelines, with goals similar to ours (van Steenkiste et al., 2002). Although it is not the only method for identifying indicators of performance, the Delphi method is an often-used and reliable method for gaining expert opinion. Expert input, anonymity, and controlled feedback of group response promote consensus while preventing any one individual from dominating the process (Hearnshaw et al., 2001; Roberts-Davis and Read, 2001).
When formulating professional recommendations (Broomfield and Humphris, 2001; Roberts-Davis and Read, 2001) or guidelines for treatment (Schutzman et al., 2001; van Steenkiste et al., 2002), it has been suggested that these decisions be based on empirically derived information in addition to expert opinion (Fink et al., 1984; Bass and Micik, 1997; Karceski et al., 2005). We therefore modified the traditional Delphi process by providing to the experts a group of draft performance indicators based on the available evidence.
Developing performance indicators
Analyzing the evidence base
In the first step, we searched multiple databases for studies published between 1980 and 2004 that focused on the diagnosis or management of pediatric epilepsy or on the outcomes of pediatric epilepsy patients. We rated eligible studies according to American Academy of Neurology (AAN) criteria (Hirtz et al., 2000, 2003; Shevell et al., 2003; Caplin et al., 2004) and compiled the findings of these studies into evidence tables that focused on various categories of issues related to clinical management. (For an explanation of how the evidence was graded and synthesized, see Appendix 1.) Based on this review, we concluded that only a modest amount of high-quality evidence is available for evidence-based care activities specific to pediatric epilepsy. Consistent class I and II evidence is not available for many management issues. In addition, the available evidence supports certain activities not routinely addressed in pediatric epilepsy management (Caplin et al., 2004).
Drafting preliminary indicators
It was critical that all parties who would be potentially affected by the primary care management of epilepsy be included in the process of developing the performance indicators. Thus we convened a pediatric epilepsy advisory panel to assist in the drafting of preliminary performance indicators. The panel members, all of whom were residents of Utah, included two clinicians, two educational representatives, two public health experts, a pediatric psychologist, and three parents of children with epilepsy. They reviewed the evidence tables and discussed the study findings within these tables. The panel members drafted a provisional set of 68 performance indicators for pediatric epilepsy management and refined these indicators over the course of three subsequent meetings.
Delphi consensus process
We conducted a modified Delphi process to achieve consensus among a group of internationally recognized experts on a set of preliminary performance indicators for pediatric epilepsy management. The experts were selected based on their expertise in pediatric or neurologic care, their clinical research, or their leadership in pediatric epilepsy organizations. Because of the clinical focus of this project, it was necessary that the majority of the panel members have some clinical responsibility. Of the 16 individuals whom we approached for participation on the panel, three were unable to participate within our time frame. Thus the Delphi panel consisted of 13 members: four neurologists, a neurologist/epidemiologist, an internist/health services researcher, a behavioral scientist, a clinical neuropsychologist, three clinical epilepsy specialists, and two pediatricians. All panel members remained anonymous to each other during the rating process. The modified Delphi process was managed by two research investigators who did not participate in the ratings and had no vested interest in the ultimate result of this process.
The Delphi rating process consisted of two rounds and involved the provisional indicators drafted by the pediatric epilepsy advisory committee. During Round 1, members of the Delphi panel were briefed on the goals of the project and the Delphi method, and were provided with the draft provisional indicators and the evidence related to each indicator (Hirtz et al., 2003; Shevell et al., 2003). The panel members were asked to consider the following questions when rating the draft indicators:
- What should be addressed in the diagnosis, care, and management of pediatric epilepsy?
- What outcomes are relevant to care? How do you know when treatment is successful? How do you measure management success? Are ways available to identify successful treatment?
Specifically, panel members were directed to rate each indicator for its necessity in evaluating the primary care and/or the outcome of pediatric epilepsy patients on a 5-point Likert scale in which 1 designated an indicator that was essential, and 5, an indicator that was not necessary. If participants gave an indicator a 2, 3, or 4 rating, they were asked to comment on whether the indicator could be modified to make it more useful and to suggest revisions that might do so. We then used these comments and suggestions to revise, refine, or combine indicators that failed to receive a consensus rating of 1.
In Round 2, the Delphi panel received the revised indicators for further review, as well as a list of the indicators that had received a consensus rating of 1 or 5 in Round 1. The rating form for Round 2 contained a brief summary of evidence supporting the usefulness of each indicator, as well as any additional evidence or commentary from panel members. The Delphi panel used the same 5-point scale to rate each indicator in Round 2. After Round 2, all indicators on which the Delphi panel had reached a positive consensus were distributed to the panel members for a final review.
After each round, we collated the Delphi panel participants' ratings, calculated the frequency of each rating for each indicator, and determined whether panel members had reached either a positive or negative consensus about the value of each indicator. To ensure that a substantial proportion of the panel members agreed with each indicator, we conservatively defined positive consensus as ≥80% of the participants giving an indicator a “1” rating and negative consensus as ≥80% of the respondents giving an indicator a “5” rating. Indicators on which Delphi panel members reached positive consensus constituted the final set of recommended indicators.
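For readers interested in applying the same rule, the consensus tally can be sketched in a few lines of code. (This sketch is purely illustrative and was not part of the study's methods; the function name `consensus` and its structure are our own assumptions.)

```python
from collections import Counter

def consensus(ratings, threshold=0.8):
    """Classify a panel's ratings for one indicator.

    ratings: list of integer ratings on the 5-point scale
             (1 = essential ... 5 = not necessary).
    Returns "positive" if >= 80% of raters gave a "1" rating,
    "negative" if >= 80% gave a "5" rating, and "none" otherwise.
    """
    counts = Counter(ratings)
    n = len(ratings)
    if counts[1] / n >= threshold:
        return "positive"
    if counts[5] / n >= threshold:
        return "negative"
    return "none"

# With 13 panel members, the 80% cutoff requires at least 11 raters
# (10/13 ≈ 0.769 falls short; 11/13 ≈ 0.846 qualifies).
print(consensus([1] * 11 + [2, 3]))    # -> positive
print(consensus([1] * 10 + [2] * 3))   # -> none
```

Note that with a 13-member panel the conservative 80% cutoff leaves little room for dissent: two non-“1” ratings are tolerated, but a third blocks positive consensus.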
In Round 1, panel members reached positive consensus on 12 of the 68 draft indicators and negative consensus on one. Of the 55 indicators on which no consensus was reached in Round 1, we left three unchanged, revised 19, and combined elements of the remaining 33 to form 11 new indicators. As a result, 33 indicators were sent to the Delphi panel in Round 2. In Round 2, panel members reached positive consensus on 16 of these 33 indicators and negative consensus on two.
Table 2 summarizes the performance indicators adopted by the panel. A total of 28 performance indicators were adopted through positive consensus in either Round 1 or 2. Two received a unanimous rating of “1” (indicators 1 and 7; Table 2). Two additional indicators (indicators 25 and 26) that were one vote short of positive consensus after Round 2 underwent minor revisions and were accepted by the panel during a final review.
Table 2. Consensus performance indicators for evaluating pediatric epilepsy care
| Category | No. | Performance indicator | Evidence |
|---|---|---|---|
| Diagnostic issues | 1 | Diagnosis of epilepsy made only after two or more unprovoked seizures | Shinnar et al., 1990; Camfield et al., 1985; Annegers et al., 1987 |
| | 2 | EEG ordered after a first unprovoked seizure | Camfield et al., 1985 |
| | 3 | Wake and sleep EEG ordered as standard practice | Carpay et al., 1997 |
| | 4 | Referred patient for specialist evaluation if prolonged or video-EEG study is warranted (i.e., if diagnosis remains in doubt or patient has had a high frequency of episodes) | b |
| | 5 | Evaluation included an effort to distinguish epilepsy from syncope, “pseudoseizures,” migraine, and ADHD | c |
| | 6 | Inquired about family history of epilepsy | c |
| | 7 | MRI ordered for the evaluation of a child with new-onset partial seizures | Lawson et al., 1998; Lawson et al., 2002 |
| Seizure classification | 8 | Indicated clinical and EEG findings, impact on management, and prognostic factors when common benign epilepsy syndromes present (BRE, CAE, BECTS)c | Arts et al., 1999; Ma and Chan, 2003; Croona et al., 1999; Frank et al., 1999; Rating et al., 2000; Sato et al., 1982 |
| AED use | 9 | Optimal AED determined by given seizure type, seizure syndrome, and clinical circumstances | Epilepsy, 1998; Frank et al., 1999; Duchowny et al., 1999; Rating et al., 2000; Guerreiro et al., 1997; Glauser et al., 2000 |
| | 10 | Side-effect profiles used when deciding on AED | Guerreiro et al., 1997 |
| | 11 | AED monotherapy initiated first, as adequate for most pediatric epilepsy patients and preferable to polytherapy | Guerreiro et al., 1997; Frank et al., 1999; Rating et al., 2000; Epilepsy, 1998; Chiba et al., 1985; Arts et al., 2004 |
| | 12 | Treatment initiated with a “first-line” monotherapy AED, unless specifically indicated. Recommended AEDs: valproic acid (GTC, absence), oxcarbazepine (partial), carbamazepine (partial), ethosuximide (absence), phenobarbital (GTC <2 yr of age) | Glauser et al., 2000; Sato et al., 1982; Chiba et al., 1985 |
| AED monitoring | 13 | Routine blood and/or urine monitoring of monotherapy AED not indicated when a standard dose results in complete seizure control without side effects | c |
| | 14 | AED levels monitored when likely to be helpful | Camfield et al., 1985 |
| AED side effects | 15 | Family informed about potential common or serious side effects of specific AED prescribed, outlining plans to monitor | Clusmann et al., 2004; Aman et al., 1994; Wheless et al., 2004; Verrotti et al., 2004 |
| AED discontinuation | 16 | Information about epileptiform EEG, abnormal neurologic findings, age at onset, and other risk factors for poor outcome used in decisions about AED discontinuation | Arts et al., 1999; Shinnar et al., 1994; Todt, 1984; Arts et al., 2004 |
| | 17 | A 2-yr seizure-free interval before considering AED discontinuation, unless indicated by special circumstances | Shinnar et al., 1985; Shinnar et al., 1994; Arts et al., 2004 |
| Cognitive issues | 18 | Educational progress monitored to assess for academic difficulties | Vermeulen et al., 1994; Chen et al., 1996 |
| | 19 | Evaluation recommended when attention and concentration problems arise with epilepsy or AED treatment | Croona et al., 1999; Aman et al., 1994; Borgatti et al., 2004 |
| | 20 | Parents of children with ADHD advised that treatment with stimulant medication is not contraindicated in children with epilepsy | Gucuyener et al., 2003 |
| Behavioral issues | 21 | Behavioral status monitored at initial evaluation and when risk factors are present | Austin et al., 2001; Oostrom et al., 2001; Lendt et al., 2000; Camfield et al., 2003 |
| | 22 | Careful history taken regarding psychosocial problems (especially depression and social problems) at the initial visit, when risk factors are present, and when seizure control is poor | Oostrom et al., 2000; Ronen et al., 2003; Caplan et al., 2004 |
| | 23 | Parents/families instructed when to monitor for behavioral side effects associated with AED use | c |
| Quality-of-life issues | 24 | Parent and child concerns elicited and addressed separately in an age-appropriate manner, including medication, side effects, risk of injury and death, social integration, stigma, and adolescence issues | Sabaz et al., 2003 |
| | 25 | Clinician discusses with parents limitations in the child's activities that are needed, based on the child's seizure profile rather than on the diagnosis of epilepsy | c |
| | 26 | Clinician recommends that bathing be supervised and that showering (with the door unlocked) is preferable for older children | c |
| | 27 | Inquiry made about family adjustment to diagnosis and factors that might interfere, such as family stress, conflict, and emotional problems; referred for intervention if indicated | Williams et al., 2003 |
| Specialist referral | 28 | Suspicion of more-complicated or less-common epilepsy syndromes or poor treatment response resulted in specialist referral (e.g., JME, CPS, simple partial seizures) | Shinnar et al., 1985 |
| | 29 | Specialist consulted before initiation of less commonly used AEDs: tiagabine, gabapentin, vigabatrin, methsuximide, felbamate, sulthiame, stiripentol, flunarizine, levetiracetam, and zonisamide | Rating et al., 2000; Appleton et al., 1999 |
| | 30 | Specialist consulted early in treatment when clinical features and risk factors for intractability exist | Arts et al., 2004 |
Of the final 30 indicators of quality pediatric epilepsy care (Table 2), eight address diagnostic issues and seizure classification, nine address the use of AEDs, six address cognitive and behavioral issues, four address quality-of-life issues, and three address specialty referrals. Thirteen indicators (2–4, 7, 11–13, 18, 19, 21, 24, 29, and 30) focus on physicians' use of specific strategies (e.g., ordering diagnostic tests, prescribing medication); eight (1, 5, 9, 10, 14, 16, 22, and 28) focus on the decisions or clinical judgments made by physicians (e.g., their choice of AED, consideration of differential diagnosis); five (6, 8, 13, 17, and 27) address historical information that physicians must obtain from patients; and four (20, 23, 25, and 26) address information that physicians should provide to patients and their families.
Our unique focus on the primary care management of uncomplicated epilepsy yielded some interesting indicators. By focusing on the management of uncomplicated epilepsy, our experts reached consensus on several performance indicators that address basic diagnostic and treatment issues, such as the use of EEG testing, AEDs, and specialty referrals. By focusing on primary care pediatrics, they reached consensus on several other indicators that address the ancillary cognitive, behavioral, familial, and developmental issues that pediatricians often encounter in routine practice and that may influence epilepsy management. These issues, such as the comanagement of epilepsy and attention deficit hyperactivity disorder (ADHD) (consensus indicators 19 and 20), are not typically addressed by other epilepsy guidelines.
The consensus-building process highlighted a number of challenges in the comprehensive evaluation of care processes. First, although the literature base yielded the original 68 draft indicators, the experts did not always agree that the issues best represented in the literature were equally significant in a primary care clinical setting. Notably, panel members failed to achieve consensus on certain performance indicators precisely because some members thought the indicator was not measurable or was difficult to identify in a clinical encounter. Although our experts unanimously agreed that issues such as patient adherence and patient satisfaction were important, no consensus was reached on how these issues could be addressed through a performance indicator.
Our intention was to develop measurable indicators for the evaluation of pediatric epilepsy management. The majority of the consensus indicators achieved this goal through the identification of a specific clinical strategy or other information that could be ascertained in the medical record. However, some indicators delineated processes of care such as physician decision making or the provision of specific information to patients and families. These indicators are more challenging to assess because they involve activities that are only indirectly measurable. In addition to being difficult to measure accurately, the outcomes of these processes are not likely to be immediately apparent.
We did identify recommendations similar to several of our performance indicators in recently published guidelines and reviews (Camfield and Camfield, 2003; Hirtz et al., 2003; French et al., 2004a; Mayor, 2004a, 2004b). For example, indicators pertaining to the discontinuation of AEDs (16 and 17; Table 2) are consistent with evidence from several published studies (Arts et al., 1999, 2004; Todt, 1984; Shinnar et al., 1985, 1994) as well as with recent guidelines from the British National Institute for Clinical Excellence (NICE) (Mayor, 2004a, 2004b) and from the Scottish Intercollegiate Guidelines Network (SIGN, 2005), both of which were developed through a process similar to ours. However, the NICE and SIGN guidelines include additional recommendations related to the management of complicated epilepsy and specialty care for all patients with epilepsy. In contrast, our focus was on those tasks that a trained primary care clinician could reasonably perform. Our experts identified numerous situations that they believed should be reserved for specialty management, such as complicated clinical situations (e.g., refractory epilepsy, use of less-common AEDs, pregnant women with epilepsy, status epilepticus). The discrepancies between published guidelines and our indicators were often due to our expert panel's intentional omission of indicators that related to detailed clinical considerations or specific strategies that they believed were better suited to specialist care.
Many indicators on which panel members failed to reach consensus related to issues that they thought could be better addressed by specialists (indicators 1, 2, 4–6, and 8; Appendix 2). Panel members were concerned that primary care practitioners would be uncomfortable with an activity that was more of an exception than a general rule of care [e.g., clinician repeats EEG if first EEG is normal but (a) is inadequate for specific circumstance (e.g., no sleep, no hyperventilation); (b) the child is very young; or (c) the child is not responding to treatment]. In addition, they considered the rate of epilepsy among patients of a typical general practitioner to be low enough that the average clinician would be unlikely to see patients requiring more-complex care without first having the opportunity to consult with a specialist.
Several factors should be considered when reviewing our findings. First, although the members of our expert panel were selected to represent diverse perspectives regarding pediatric epilepsy care, the opinions of this small group of experts may not reflect those of the broader pediatric epilepsy community. Second, the “80%” cutoff that was required for consensus is conservative and may have led to the rapid elimination of potentially useful indicators. Perhaps a greater number of experts or iterations of the rating process would have resulted in a broader array of opinions, allowed more meaningful comparisons between clinical disciplines, or allowed us to achieve consensus on some borderline indicators.
The performance indicators developed in this study should be viewed as preliminary. They should be refined further before they can be adopted into practice. As part of this refinement process, it will be critical to explore further the needs and opinions of individual groups that are invested in pediatric epilepsy care, including patients, families, clinicians, and educators. Measurement strategies must be developed for each indicator. Some indicators, such as those addressing physicians' use of particular strategies (e.g., prescribing medicine, ordering tests) could be assessed through chart review or analyses of administrative data. The assessment of other indicators, such as those pertaining to physician judgments, may require novel approaches, such as the use of clinical scenarios or simulated patients. Finally, patient or family surveys may be useful in assessing indicators that focus on physicians' provision of information to their patients. All of these strategies should be pilot tested and evaluated for their measurement properties (e.g., responsiveness, reliability, validity). We are currently conducting a pilot study to examine the measurability of certain indicators, by using two methods: chart review and parent surveys. We will analyze these data in relation to physician performance (and documentation) of individual indicators and explore the appropriateness of each method of measurement.
In conclusion, we present a set of performance indicators for evaluating epilepsy management in general pediatric practice. Recently, national organizations (e.g., the American Epilepsy Society) have expressed interest in developing a treatment guideline for pediatric epilepsy. We have identified a number of management issues that should be useful to that process. The important next step is to examine the relation of the performance indicators developed in this study to critical clinical outcomes such as epilepsy control, psychosocial adjustment, family satisfaction with care, and health care utilization among pediatric patients with epilepsy.
Acknowledgment: We acknowledge the hard work of Beth Henderson, MS, project coordinator, and the members of our expert panel: Joan Austin, D.N.S., FAAN; James Bale, M.D.; Anne Bergin, M.D.; Carol Camfield, M.D.; Mary Connolly, M.D.; Pat Dean, R.N.; Colleen Dilorio, Ph.D.; David Dunn, M.D.; Pat Gibson, B.S.; Jaya Rao, M.D.; Robert Terashima, M.D., FAAP; David Thurman, M.D.; E. Ryann Watson, Ph.D.
Appendix 1. American Academy of Neurology recommendations based on classification of evidence

AAN evidence classification scheme for a diagnostic article:

- Class I: Evidence provided by a prospective study in a broad spectrum of persons with the suspected condition, using a “gold standard” for case definition, where the test is applied in a blinded evaluation, and enabling the assessment of appropriate tests of diagnostic accuracy
- Class II: Evidence provided by a prospective study of a narrow spectrum of persons with the suspected condition, or a well-designed retrospective study of a broad spectrum of persons with an established condition (by “gold standard”) compared with a broad spectrum of controls, where the test is applied in a blinded evaluation, and enabling the assessment of appropriate tests of diagnostic accuracy
- Class III: Evidence provided by a retrospective study in which either persons with the established condition or controls are of a narrow spectrum, and in which the test is applied in a blinded evaluation
- Class IV: Any design in which the test is not applied in a blinded evaluation, OR evidence provided by expert opinion alone or in descriptive case series (without controls)

| Rating requirement | Recommendation |
|---|---|
| Level A rating requires at least one convincing class I study or at least two consistent, convincing class II studies | A. Established as effective, ineffective, or harmful for the given condition in the specified population |
| Level B rating requires at least one convincing class II study or overwhelming class III evidence | B. Probably effective, ineffective, or harmful for the given condition in the specified population |
| Level C rating requires at least two convincing class III studies | C. Possibly effective, ineffective, or harmful for the given condition in the specified population |
| — | U. Data inadequate or conflicting; given current knowledge, treatment is unproven |
Appendix 2. Performance indicators not adopted (no consensus)

| Category | No. | Performance indicator |
|---|---|---|
| Diagnostic issues | 1a | Clinician repeats EEG if first EEG is normal but (a) is inadequate for specific circumstance (e.g., no sleep, no hyperventilation), (b) the child is very young, or (c) the child is not responding to treatment |
| AED use | 2 | Clinician considers risk factors for recurrence after a single (unprovoked) seizure in decision to start treatment |
| | 3 | Clinician offers choice of AED preparation when available |
| | 4 | Clinician initiates “second-line” AEDs after poor efficacy or unacceptable side effects occur with first-line AEDs or as “add-on” medications |
| | 5 | Clinician monitors and addresses potential drug interactions when considering polytherapy |
| AED monitoring | 6 | Clinician considers factors that influence AED levels when interpreting data (e.g., timing of dosage relative to blood sampling, compliance, polytherapy) |
| | 7 | Clinician inquires about adherence and counsels the patient's family accordingly when seizures are not controlled and/or when AED levels are unexpectedly low |
| AED discontinuation | 8 | Clinician identifies those children in whom discontinuation can be considered after only 1 seizure-free year |
| Cognitive issues | 9a | Clinician advises family that ADHD with epilepsy may be related to lower IQ scores |
| | 10a | Clinician reassures patients and families that AED administration is unlikely to result in long-term declines in cognitive or motor performance |
| | 11 | Clinician monitors for the presence of memory, communication, and other cognitive problems and suggests evaluation when concerns arise |
| | 12 | Clinician monitors for cognitive side effects of certain AEDs and makes appropriate medication adjustments and/or referral |
| Quality-of-life issues | 13 | Clinician addresses the risk for death among children with epilepsy, including sudden unexpected death in epilepsy (SUDEP); clinician reassures parents that death from epilepsy is rare and generally relates to underlying severe neurologic abnormalities rather than the child's epilepsy |
| | 14 | Clinician routinely inquires about parent/patient satisfaction with care and outcomes |
| | 15 | Clinician alerts families to the importance of sleep hygiene among children with epilepsy |
| | 16 | Clinician assesses quality of life as an outcome for pediatric epilepsy patients |