Keywords:

  • assessment;
  • examination;
  • implant dentistry;
  • dental education;
  • postgraduate;
  • undergraduate;
  • clinical competencies;
  • skills;
  • knowledge

Abstract

Learning in academic settings is strongly related to the way students are tested or examined. Assessment must therefore be integrated in the curriculum design, coordinated, and reflect the learning outcomes of the education. Assessment within the field of implant dentistry must fulfil four major objectives: complete and direct the learning process with feedback (formative), ensure that students are adequately prepared (summative), assess attitudes and skills such as critical thinking, reflection and self-assessment ability, and supply continuous feedback to teachers on curricular content and impact. Different assessment methods should be used to assess different levels of competencies throughout the curriculum. Various forms of written or oral assessment methodologies are applicable at earlier stages of the curriculum. At intermediate levels, interactive assessment methods, such as patient simulations (paper-based or virtual), can encourage the necessary synthesis of several disciplines and aspects of theoretical knowledge. At higher levels of competence, documentation of clinical proficiency by means of reflective portfolios and diaries is an appropriate assessment method with both formative and summative potential. The highest level of competence requires performance assessment using structured, objective, clinical criteria. The group strongly encourages the use of reflective assessment methods, which engage the students in a process of self-appraisal, identification of individual learning needs and self-directed learning. The ultimate goal is to allow the student to develop a lifelong learning attitude.

1st European Consensus Workshop on Implant Dentistry University Education: Group E

Group members

Anders Nattestad, Chairperson, Professor, University of Pacific, USA

Nikos Mattheos, co-chair, Associate Professor, Griffith University, Gold Coast, Australia

Rolf Attström, Professor Emeritus, University of Malmö, Sweden

Antonio Carassi, Professor, University of Milano, Italy

Deborah Faust, Senior Manager, Professional Development, Biomet 3i, USA

Martin Janda, DDS, PhD, University of Malmö, Sweden

Christina Lindh, Associate Professor, University of Malmö, Sweden

Bruno Loos, Professor, Academic Centre for Dentistry Amsterdam (ACTA), The Netherlands

Maciej Marcinowski, DDS, University of Poznan, Poland

Todd Metts, Zimmer Dental

Corrado Paganeli, Professor, University of Brescia, Italy

Cemal Ucer, Professor, University of Salford, UK

Tommie Van de Velde, DDS, University of Ghent, Belgium

Per Vult von Steyern, Associate Professor, University of Malmö, Sweden

Wilfried Wagner, Professor, J. Gutenberg-University, Mainz, Germany

Group E photo (from left to right):

Christina Lindh, Cemal Ucer, Maciej Marcinowski, Bruno Loos, Antonio Carassi, Tommie Van de Velde, Martin Janda, Anders Nattestad, Corrado Paganeli, Wilfried Wagner, Per Vult von Steyern, Rolf Attström, Nikos Mattheos.

Introduction

Implant dentistry has been one of the most dynamically evolving fields in oral health care. During the last two decades, dental implant treatment has become widely performed and documented, resulting in implants being firmly established as part of mainstream dentistry. Following a significant expansion of the indications for implant treatment, recent advances in implant technology and treatment modalities have resulted in a rapid increase in public interest in such treatment. Consequently, oral healthcare professionals will increasingly encounter patients restored with dental implants, provide dental care and maintenance for them or treat new patients seeking implant treatment. A modern curriculum should therefore adequately prepare dental students with knowledge and competencies in implant dentistry at both undergraduate and postgraduate levels of study.

A major aim of evaluation in healthcare higher education is to assess individual achievement in order to satisfy (external) requirements and to document accomplishments or failures. In addition, a modern curriculum should prepare not only qualified clinicians, but also independent learners, able to cope with increasing amounts of information and learning needs throughout their professional life (1). Alongside classical knowledge and competencies, a modern curriculum also needs to focus student learning on the development of cognitive and reflective skills, such as critical thinking and self-assessment ability, and attitudes such as lifelong learning. In this respect, competency-based learning and assessment holds a strategic role (2). Evaluation, in its different formats, can also provide feedback and motivation for continued improvement and lifelong learning.

Learning in academic settings is strongly related to the way students are tested or examined (3). Moreover, learning in any subject is influenced by the interaction between the learner and the learning resources (4, 5). Despite the obvious relation between learning and mode of assessment, examinations in healthcare education are often carried out as a simple check of the facts students have accumulated during a given phase of the education. The modern, expanded learning objectives can only be achieved when matched with innovative and multifaceted assessment strategies and methodologies, a process known as blueprinting (2, 6).

Recently, there has been a rapid evolution of training and assessment methods used in medical and dental education, from traditional curriculum-based ones towards more sophisticated competence-based evaluation strategies (7, 8). Reflective assessment methodologies, such as diaries, logs and portfolios, have increasingly gained importance in higher education. Advances have also been made to standardise subjective judgements and to develop objective methods of assessment in order ‘to develop a set of performance standards, to generate assessment evidence from multiple sources, and to expand the search for knowledge to search for “reflection in action” in the working environment’ (9, 10). There has also been a move towards developing evidence-based validation of assessment techniques – so-called Best Evidence-Based Assessment (10).

Aims

The current position document aims to investigate and describe a complete framework for the assessment of knowledge, competencies and attitudes related to implant dentistry. The position document will specifically address the application of such assessment in undergraduate as well as postgraduate university dental education.

It is reasonable to assume that the complete spectrum of implant dentistry cannot be taught within the current duration of the undergraduate curriculum. A certain amount of core knowledge and competencies should be possessed by the general dentist, whilst another array of deeper understanding and more advanced skills should be targeted by postgraduate education. It is not within the aims of this position paper to draw an exact distinction between the two or to define the content of undergraduate and postgraduate education. This document will therefore deal with assessment techniques corresponding to specific learning objectives; whether these objectives are best targeted by the undergraduate or the postgraduate cycle of studies is a matter that will be discussed elsewhere.

The authors also recognise that a wide variety of assessment methods and strategies exists (11). Assessment strategies will be described based on available evidence and best practices, as independently as possible of specific instructional approaches and systems. A successful assessment must be in accordance with the available human and material resources and time, as well as with the cultural and professional characteristics of the institution, the students and the patients.

General functions of assessment

In general, assessment of students’ ability to diagnose, plan and execute therapeutic interventions should fulfil four important aims (Fig. 1). An assessment model for competencies related to implant dentistry therapeutic interventions should ideally:

  • Be a valid and reliable estimate of students’ knowledge and skills (securing validity, reliability (12, 13) and transparency).
  • Reflect the actual clinical situations and problems that the clinician will be called upon to solve (securing authenticity/relevance).
  • Be applicable to different educational environments and levels of resource availability (securing flexibility).
  • Address the significant time component of implant dentistry treatments, from planning and executing interventions to long-term maintenance and the treatment of complications (long-term, continuous, prospective assessment).

Figure 1.  General function of assessment.

Specific functions of assessment within implant dentistry

Diagnosing, planning and executing treatment with dental implants, as well as maintaining implant patients or treating complications, require knowledge and skills in multiple domains. Assessment should therefore reflect this interdisciplinary character, as well as support the progressive development of these skills, placing special emphasis on the development of self-directed learning and a lifelong learning attitude. Given the complexity of clinical performance, many different tests should be used in a continuum when assessing fitness to practise, both formatively and summatively.

An interesting model for approaching such a progressive assessment (of knowledge, competence and performance) was proposed by Miller in his ‘competence pyramid’ (14). The base of the pyramid represents the ‘knowledge’ component of education (‘knows’), followed by the ‘knows how’ (applied knowledge), ‘shows how’ and ‘does’ stages (clinical performance) (Fig. 2). Practical skills are built on an existing knowledge base, which is continuously updated through evidence-based acquisition of knowledge. The progression up the pyramid implies an increasing level of competencies and applied knowledge (9).

Figure 2.  The Miller Pyramid for assessing knowledge and skills in medical education [adjusted from Miller (14)].

Determination of clinical competence (level 4) requires observation and ‘performance assessment’ in the workplace, to show that the candidate has attained the experience, knowledge, skills and behaviour needed to carry out the competencies required by the profession. Miller’s model can be an appropriate approach to the assessment of knowledge and competencies in implant dentistry, where each level can be approached using different teaching, learning and assessment methods. In addition, depending on the level of the education (undergraduate/postgraduate) and the level of the required competence, certain skills may be required at the highest level (‘does’), whilst others may be needed at a lower level of competence. At the base of Miller’s pyramid, knowledge can be tested with summative or factual tests [e.g. multiple choice questions (MCQs), essays, written or oral examinations]. The ‘knows how’ stage can be assessed in a preclinical or simulated situation (e.g. clinical context-based tests and MCQs). The ‘shows how’ stage needs performance assessment in realistic settings [i.e. performance assessment in vitro: objective structured clinical examinations (OSCEs), simulated patient-based tests], with a final ‘does’ level evaluation carried out in clinical settings using portfolios and objective structured clinical assessments (performance assessment in vivo). Work-place assessments in isolation are of limited value; it is therefore essential that they cover the whole content of the curriculum to a level of competence defined by a series of learning outcomes. The work-place assessment could involve a real patient encounter (e.g. mini-CEX) (15), direct observation of skills, discussion of clinical material such as radiographs and charts, clinical simulations and reflective learning exercises (9).
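
To summarise the mapping described above in a compact form, the following Python sketch is purely illustrative: the level names and the groupings of assessment formats follow the text above, whereas the data structure and function name are hypothetical additions for this example.

  # Illustrative summary: Miller's levels mapped to the assessment formats named above.
  MILLER_ASSESSMENT_MAP = {
      "knows":     ["MCQs", "essays", "written examinations", "oral examinations"],
      "knows how": ["clinical context-based tests", "patient management problems",
                    "key-feature cases", "extended-matching questions"],
      "shows how": ["OSCEs", "simulated patient-based tests", "skills-lab exercises"],
      "does":      ["portfolios", "work-place assessment (e.g. mini-CEX)"],
  }

  def assessment_formats(level: str) -> list:
      """Return the assessment formats commonly associated with a Miller level."""
      return MILLER_ASSESSMENT_MAP.get(level.lower(), [])

  print(assessment_formats("does"))  # ['portfolios', 'work-place assessment (e.g. mini-CEX)']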

Assessment of knowledge (‘knows’ level)

The knowledge base of implant dentistry extends over many areas and scientific disciplines. It is therefore evident that the teaching of such a wide knowledge base should involve professionals from many dental disciplines and areas of basic science. Although the teaching must be comprehensive and multi-disciplinary, it should ideally be centrally coordinated and coherent, thus presenting students with the whole spectrum of complex interrelations between biology, bio-engineering and technology, behavioural factors and different treatment modalities.

The assessment of the knowledge base, as an integral and crucial part of the learning process, must follow the same comprehensive approach. Furthermore, assessment strategies must match the learning outcomes and teaching formats being used (2). For example, MCQ tests are a more valid test of core knowledge than of communication skills, which would require an interactive method of assessment. Rather than relying on separate discipline-based assessments, the authors encourage a comprehensive and multi-disciplinary assessment of this knowledge base. This would ideally start early in the curriculum and evolve gradually as students’ clinical exposure increases. In this way, students will be able to see implant dentistry as an integral part of the available restorative treatment options for the treatment of tooth loss.

Various forms of written assessment methodologies are applicable at this stage, each with particular strengths and weaknesses (16). Many of these tests suffer from low validity, although they can be designed to have high reliability. Essays and oral examinations are useful tools for searching, recalling and synthesising information and knowledge, yet they are no longer favoured as the sole method of assessment because of their unreliability and low generalisability. Whenever possible, the authors would encourage active assessment procedures, not only to evaluate the students’ knowledge base and understanding within relevant disciplines, but also their ability to inter-relate and synthesise knowledge from different scientific areas and disciplines. Such an assessment can be encouraged by using case-based scenarios with a problem-based methodology.

Assessment in the ‘knows how’ level

At this stage, the assessment moves to the ‘knows how’ level of Miller’s pyramid, in which clinical context-based tests are employed. The assessment is then targeted towards the ability to understand and plan an evidence-based treatment intervention, well placed within the potential and the limitations posed by the individual patient. Meaningful assessment strategies at this level would therefore place the student at the centre and require demonstration of judgement, reflection and discussion of the choices made with respect to the anatomical, functional and aesthetic parameters and the preferences of the individual patient.

Paper-based simulation tests are widely established methodologies for assessment at the ‘knows how’ level (17). One of the best-known paper-based simulations is the so-called patient management problem (PMP) (18). This clinical simulation presents the candidate with a patient’s initial complaint. Ultimately, the candidate has to make a diagnosis and devise a management plan. To do so, the candidate has to take a history, indicate which parts of the physical examination he or she wants to perform and order any additional diagnostic procedures deemed necessary. Candidates are only provided with the information they ask for (17).
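
The interaction flow of a PMP can be made concrete with a minimal sketch. The Python code below is hypothetical (the class, field names and scoring scheme are invented for illustration and are not part of any published PMP system): the candidate receives only the findings he or she explicitly requests, and the attempt is then scored against the expected diagnosis and the essential work-up items.

  # Minimal, hypothetical sketch of a paper-based patient management problem (PMP).
  # The candidate sees only the initial complaint and must request further findings.
  class PatientManagementProblem:
      def __init__(self, complaint, findings, expected_diagnosis, essential_items):
          self.complaint = complaint                   # presenting complaint shown to the candidate
          self.findings = findings                     # dict: requested item -> information returned
          self.expected_diagnosis = expected_diagnosis
          self.essential_items = set(essential_items)  # items a competent candidate should request
          self.requested = set()

      def request(self, item):
          """Return information only for what the candidate explicitly asks for."""
          self.requested.add(item)
          return self.findings.get(item, "No relevant finding.")

      def score(self, diagnosis):
          """Reward a correct diagnosis and coverage of the essential work-up items."""
          coverage = len(self.requested & self.essential_items) / len(self.essential_items)
          return 0.5 * (diagnosis == self.expected_diagnosis) + 0.5 * coverage

  pmp = PatientManagementProblem(
      complaint="Missing lower left first molar, wishes a fixed replacement",
      findings={"medical history": "Well-controlled type 2 diabetes",
                "radiograph": "Adequate bone height, intact adjacent teeth"},
      expected_diagnosis="Single-tooth gap suitable for implant treatment",
      essential_items=["medical history", "radiograph"],
  )
  pmp.request("medical history")
  print(pmp.score("Single-tooth gap suitable for implant treatment"))  # 0.75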

Another paper-based simulation is the so-called key-feature approach (19). A key-feature case – often also referred to as a Cambridge case – consists of a short description of a (mostly clinical) scenario in which a problem is presented. The number of questions per case is limited and they ask for essential decisions only. Question formats may vary from MCQs to short open-ended ones. Case descriptions are concise, with a limited number of questions per case to allow the inclusion of a large number of cases to yield more reliable scores. By focusing on essential decisions the approach remains valid, the idiosyncrasy problem is avoided and efficient problem-solving is rewarded.

Another format is the extended-matching question (20). Here the questions include a long list of options (up to 20–30) and a series of related cases. The candidate has to select the most appropriate response option for each of the cases. A lead-in – for instance: ‘For each of the following cases select the most probable diagnosis’ – is used to guide the student. Case descriptions are concise so that many different cases can be included in a test.
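
A minimal sketch of how an extended-matching item could be represented and automatically scored is given below; the option list and cases are invented for illustration and the data layout is hypothetical, not taken from reference (20).

  # Hypothetical representation of an extended-matching question (EMQ) theme.
  emq = {
      "lead_in": "For each of the following cases select the most probable diagnosis.",
      "options": ["peri-implant mucositis", "peri-implantitis", "retrograde peri-implantitis",
                  "occlusal overload", "screw loosening"],   # in practice up to 20-30 options
      "cases": [
          {"stem": "Bleeding on probing around an implant, no radiographic bone loss",
           "key": "peri-implant mucositis"},
          {"stem": "Progressive crestal bone loss with suppuration around an implant",
           "key": "peri-implantitis"},
      ],
  }

  def score_emq(item, answers):
      """One mark for each case in which the selected option matches the key."""
      return sum(1 for case, answer in zip(item["cases"], answers) if answer == case["key"])

  print(score_emq(emq, ["peri-implant mucositis", "occlusal overload"]))  # 1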

Information technology has added significant functionality and enhanced the potential of such simulations. For example, a web-based, free-text ‘virtual’ patient has been successfully used for training undergraduate students to take medical histories, with encouraging results (21).

Other methodologies which have been reported for the ‘knows how’ level include standardised patients (actors) (22), used to improve the objectivity of the assessment of the clinical scenario.

Assessment in the ‘shows how’ level (performance in vitro)

At this level, the student is required to actively demonstrate the ability to perform a complete therapeutic intervention, or parts of one, in a simulated environment. This could include cognitive as well as clinical or operative skills, in real patients or in virtual subjects. In the context of dental education, this level could be perceived as the last ‘pre-clinical’ step.

The OSCE is often proposed for testing competencies including patient assessment, history taking, diagnosis, empathy, active listening, communication and counselling skills and treatment planning, all based on applied knowledge (23, 24). The method was proposed with a view to increasing the reliability and objectivity of measurements of performance. Correlations between OSCEs and written knowledge tests are often high (25).

It would appear that a large portion of what is measured by the standard OSCE belongs to the ‘knows’ layer (18). However, analysis showed that didactic predictors (NBDE Parts I and II and comprehensive MCQ examinations) explained only 20.4–22.1% of the variability in OSCE scores, suggesting that OSCE examinations are more likely to measure other qualities, such as problem-solving ability, critical thinking and communication skills (25). The validity and reliability of OSCEs have been shown to depend on the number of stations and skills evaluated (26).
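
The dependence of reliability on the number of stations can be illustrated with the standard Spearman-Brown prophecy formula from psychometrics; the sketch below is generic and not taken from reference (26), and the figures used are hypothetical.

  def spearman_brown(single_station_reliability: float, n_stations: int) -> float:
      """Projected reliability of an examination when n comparable stations are combined
      (standard Spearman-Brown prophecy formula; the input figures below are hypothetical)."""
      r = single_station_reliability
      return n_stations * r / (1 + (n_stations - 1) * r)

  # Example: if a single station has a reliability of 0.30,
  # ten comparable stations yield a projected reliability of roughly 0.81.
  print(round(spearman_brown(0.30, 10), 2))  # 0.81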

Assessing specific operative skills

Treatment planning exercises could, for example, be carried out using three-dimensional (3D) simulation software. These tools are based on data acquired by conventional computed tomography (CT) or cone beam CT. Different sections of the radiographic data are visualised and matched with a 3D view of the scanned object. It is possible to take a closer look at different anatomical structures by zooming, rotating and slicing the 3D object. Implants can be ‘virtually’ planned, placed and restored according to a chosen treatment plan, and positioned in relation to anatomical and occlusal information. The planning itself can then be printed or saved for evaluation with the supervisor. Such tools, although primarily intended as training tools, can make a meaningful contribution in assessment scenarios.

A step further, such technology could be utilised to direct hands-on exercises, although this entails increased costs. Technology based on radio-opaque resin models and scans can be used to assess discrete competencies or operative skills such as incision making, suturing or implant placement. An example of research evaluation can be seen in van der Velde et al. (27) (Fig. 3).

Figure 3.  Cross section at the position of the first right premolar (14) of a volume-rendered edentulous maxilla with a radio-opaque scanning prosthesis in situ. Implants are planned within this simulation software according to a chosen treatment plan and positioned in relation to anatomical and prosthetic information. Note the possibility of correcting the angulation and height of the abutments according to the height of the soft tissues.

Three-dimensional simulations and ‘virtual reality’ devices constitute a recent development in the field of training and assessing operative skills. The virtual skills lab has already been evaluated and shown to be an effective training instrument in specialties such as laparoscopic (28) and, lately, general surgery (29). Implementation of such devices in oral surgery is still limited or at an experimental stage (30, 31), although similar simulations have been tested in operative dentistry (32). Virtual reality will most certainly play a significant role in the training and assessment of surgical skills in the near future, yet at present it poses certain limitations, mainly related to high costs and an insufficient level of authenticity.

The established preclinical skills lab, equipped with mannequins and plastic models, can serve the purpose of assessing skills in relation to the prosthetic restoration of implants. Meanwhile, training and assessment of periodontal surgical skills (practical knowledge, flap design, elevation, osseous handling, drilling, suturing) on human or animal cadavers (33) (subject to legislation) and various inanimate models (34) is a well-documented and established practice.

Assessing cognitive skills and attitudes

Assessment of skills including the ability to describe, analyse and synthesise information, problem solving, critical thinking and prioritising could be based on case presentations. During a presentation, skills can be assessed such as the student’s ability to plan and execute interventions, his/her ability to relate general knowledge to the needs of the individual patient, his/her understanding of the patient’s profile and background, and more. An example of competencies assessed during a presentation, matched with appropriate criteria, can be seen in Table 1. However, skills such as patient assessment, history taking, empathy, active listening, communication and counselling are of crucial importance for the successful practice of modern dentistry and should be assessed throughout the curriculum using different methods. Whereas such skills could be well assessed at a preclinical level, the authors would also encourage continued, interactive assessment at the ‘does’ level of the pyramid, in an authentic clinical situation.

Table 1.   An example of competencies and criteria for assessment based on case presentation. For each competency, the criteria against which you should consider your rating of the trainee are listed.

Case report: The report is legible, signed, dated, appropriate to the problem, understandable and well presented. The content and the structure of the report conform to the requirements as set out in .

Clinical assessment: Can discuss how they understood the patient’s story and how, through the use of further questions and an examination appropriate to the clinical problem, a clinical assessment was made from which further action was derived.

Investigation and referrals: Can discuss the rationale for the investigations and necessary referrals. Shows understanding of why diagnostic studies were ordered/performed, including the risks and benefits and the relationship to the differential diagnosis.

Treatment: Can discuss the rationale for the treatment, including the risks and benefits.

Follow-up and future planning: Can discuss the rationale for the formulation of the management plan, including follow-up.

Professionalism: Can discuss how the care of this patient, as recorded, demonstrated respect, compassion, empathy and established trust. Can discuss how the patient’s needs for comfort, respect and confidentiality were attended to. Can show how the record demonstrated an ethical approach and awareness of any relevant legal frameworks. Has insight into own limitations.

Overall clinical care: Can discuss own judgement, synthesis, caring and effectiveness for this patient at the time that this record was made.

Assessment in the ‘does’ level

The final level of the pyramid requires the demonstration of the ability to independently diagnose, plan and execute complete therapeutic interventions, or parts of them, on actual patients according to predefined clinical competencies. Performance builds upon discrete competencies but also encompasses other influences, including system-related issues (government programmes/guidelines, patients’ expectations, practice facilities etc.) and individual influences (attitudes, beliefs, interactions with others, physical and mental health, family etc.) (10). This produces a complex situation in which, in addition to assessing knowledge and practical skills, assessment of the overall performance of the clinician in the workplace, in a real-world setting, is essential. Competence at this level can be assessed by measuring the ‘performance criteria’, which represent the tasks, skills, behaviour and knowledge a competent professional should have when performing in the clinic. The evidence of competence comes from observing the type and results of the performance a clinician demonstrates when carrying out a clinical task at the workplace, under the influence of clinical settings and circumstances (Fig. 4). Outcome-based education therefore focuses on the end product and clearly defines what the learner is accountable for. The learning outcomes thus lead the curriculum design, which in turn specifies what the end product (reflective, competent practitioners) should be and guides the assessment process (1, 2).

Figure 4.  Components of clinical competence and performance (adapted from Newble, 1992) (10).

Thus, structured objective tests have been designed to assess clinical operative skills as a measure of achievement of overall clinical competencies (clinical performance) (16). Such models are often described as ‘practice-based assessment’, ‘in-practice assessment’ or ‘performance- (outcome-) based assessment’ and are designed for criteria-based assessment of clinical procedures by one or more assessors (35). In this model, the assessors could be tutors/mentors, as long as there is full transparency to the trainer/trainee (15) and the assessors are trained (36, 37). Practice-based assessment is regarded as an objective tool with high validity for evaluating the skills, knowledge and clinical performance of a clinician, and has been used for both formative and summative assessment in various chair-side observation settings (9, 37–41).

In-practice evaluation gives the assessor the opportunity to make multiple observations over a period of time, in different clinical circumstances or settings, when assessing the performance of students, who in turn receive specific feedback (formative assessment). This fosters student-centred education and self-directed, reflective learning activity using regularly updated personal development plans (PDPs). To promote learning, the assessment should be transparent and formative – i.e. students should learn from the assessment process and receive feedback on which to build and develop their level of knowledge and skills.

Whilst different forms of chair-side observation techniques have been utilised in the past, structured, objective checklists or outcome-based rating scales (38, 39) have recently been proposed as ideal tools for both formative and summative assessment of clinical performance. Table 2 shows a simple example of an objective checklist used to assess history-taking skills (a minimal scoring sketch follows the table). Other similar methods include clinical work sampling (40) and practice video recording (41).

Table 2.   A simple matrix for performance-based assessment in the field of history taking

Rating scale (columns): OBS | Needs improvement | Satisfactory | Superior performance

History taking
  Extra-oral examination skills
  Intra-oral examination skills
  Communication skills
  Clinical judgment
  Professionalism
  Organisation/efficiency
  Overall clinical care

Each of the competencies assessed in the matrix must be matched to clearly defined performance criteria.
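
A matrix such as Table 2 lends itself to a simple structured record. The sketch below is hypothetical (the field names and the idea of flagging ‘Needs improvement’ ratings for targeted feedback are illustrative, not prescribed by the cited methods) and shows how a single chair-side observation could be captured and summarised for formative feedback.

  # Hypothetical record of a chair-side, criteria-based observation (cf. Table 2).
  RATINGS = ["OBS", "Needs improvement", "Satisfactory", "Superior performance"]

  COMPETENCIES = ["Extra-oral examination skills", "Intra-oral examination skills",
                  "Communication skills", "Clinical judgment", "Professionalism",
                  "Organisation/efficiency", "Overall clinical care"]

  def needs_feedback(observation: dict) -> list:
      """Return the competencies rated 'Needs improvement', to which specific
      formative feedback should be directed."""
      return [c for c in COMPETENCIES if observation.get(c) == "Needs improvement"]

  example = {c: "Satisfactory" for c in COMPETENCIES}
  example["Communication skills"] = "Needs improvement"
  print(needs_feedback(example))  # ['Communication skills']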

Clinical performance assessment tools include:

  • 1
    Clinical evaluation exercise by observing clinician/patient interaction
  • 2
    Case-based discussions and case-based presentations
  • 3
    Direct observation of practical skills
  • 4
    Student reflective statements
  • 5
    Multiple-source feedback (mini-PAT), previously known as 360-degree appraisal, to test attitude and behaviour (42)
  • 6
    Mini-clinical evaluation exercise (mini-CEX) (15)

The most common barriers and limitations for performance-based assessment include lack of time, large class sizes, inadequate resources, lack of educational training and conflicting priorities for clinical teachers – for example, service demands and research (43).

Progressive formative assessment methodologies

The formative functions of assessment are crucial for the learning process. In this sense, a continuous, structured assessment which follows the development of the student is a valuable curriculum mechanism for providing feedback and guiding the student towards new learning objectives. Formative functions extend the aim of the assessment from ‘How good am I?’ to ‘How do I get better?’. Such assessment instances have to be implemented regularly in the curriculum and constitute a prospective observation of the student’s development. The authors would like to briefly discuss some assessment methodologies with a prospective and developmental focus, which could be applicable during different phases and levels of the education.

Knows/knows how levels: interactive examination

The interactive examination is a methodology developed in an attempt to introduce a standardised assessment scheme for dental students, based on the principles of dialogue and reflection (44). In addition to the assessment of traditional competencies, this methodology focuses on students’ ability to self-assess their competence/proficiency and to define their own learning objectives. This methodology could be especially applicable during the first two levels of Miller’s pyramid (‘knows’–‘knows how’), as it can significantly enrich traditional written or oral examination schemes. Applied prospectively, this methodology offers deeper insight into the progress of students’ ability to self-assess their competence. Such insight becomes increasingly important in the light of findings which show that novice surgeons repeatedly overestimate their competence (39, 45, 46).

In brief, the Interactive Examination is based on the following modules (a schematic sketch of the initial feedback step follows the list):

  • 1
    Students are asked to assess their own competence in areas corresponding to the learning objectives of the course.
  • 2
    Students’ self-assessment is then matched to the assessment of their clinical instructors in the same areas, resulting in initial feedback.
  • 3
    Students are presented with a problem in the form of an actual clinical scenario and are then expected to present a solution or suggest an action strategy.
  • 4
    Once students have submitted their answer, they receive the response of a practising dentist in the same clinical scenario.
  • 5
    The final task of the students is to compare, in writing, their own answer with that of their colleague. Their comparison should identify similarities and differences, discuss the reasons why these differences have occurred and define new personal learning objectives and areas for improvement as needed.
  • 6
    Individualised developmental feedback is given to all students, based on all modules.
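
The initial feedback of modules 1–2 can be thought of as a simple gap analysis between the student’s and the instructor’s ratings. The Python sketch below is purely illustrative: the rating scale, the area names and the function are hypothetical and only demonstrate the principle of matching the two assessments.

  # Hypothetical sketch of the initial feedback step (modules 1-2):
  # the student's self-ratings are compared with the instructor's ratings.
  def self_assessment_feedback(student: dict, instructor: dict) -> dict:
      """Return per-area gaps (a positive gap means the student rates
      himself/herself higher than the instructor does)."""
      return {area: student[area] - instructor[area] for area in student}

  student_ratings = {"treatment planning": 4, "flap design": 4, "suturing": 3}
  instructor_ratings = {"treatment planning": 3, "flap design": 2, "suturing": 3}
  print(self_assessment_feedback(student_ratings, instructor_ratings))
  # {'treatment planning': 1, 'flap design': 2, 'suturing': 0}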

The student’s own self-assessment (modules 1–2) is the starting point of the process. The student’s essay (module 3) provides an insight into the student’s disciplinary (and inter-disciplinary) knowledge and competence, as well as his/her problem-solving and critical-thinking abilities. The ‘comparison with the colleague’ (modules 4–5) is a critical part of the assessment. The colleague’s response document serves as immediate feedback, but it is also a powerful stimulus for students to reflect in writing on their self-perceived competence. The qualitative evaluation of the comparison document, through an established and comprehensive set of criteria, allows the instructor to acquire deeper insight into students’ patterns of reasoning and prioritising, as well as the extent of their ability to benefit from feedback and to self-identify their learning needs.

Shows how/does levels: portfolio assessment – reflective diaries

The documentation of clinical performance (i.e. performance in vivo) using a highly structured portfolio, comprising an audited logbook of cases, documentary evidence of achievement of operative skills and reflective diaries (7, 47), constitutes one of the latest developments in medical and dental education (35, 48).

Portfolio assessment is used particularly at the last two Miller competency levels (‘shows how’–‘does’), when the student becomes increasingly involved in active clinical interventions after he or she has attained knowledge and skills at the ‘knows how’–‘shows how’ levels. At present, maintenance of a clinical portfolio under the supervision of a suitably trained mentor/supervisor appears to be a feasible and well-accepted method for the assessment of clinical competencies and performance in implant dentistry (2, 9).

A recently designed portfolio model from the University of Salford merges portfolio assessment with mentorship for the assessment of competencies in implant dentistry at Diploma/Master’s degree level (37). During the clinical stages of training, students work alongside clinical mentors, who are appointed by the programme team to provide supervision and coaching as well as assessment using structured, objective, criteria-based clinical evaluation tools. Mentors are trained for their role and are selected on the basis of either long-standing experience or postgraduate qualifications in implant dentistry.

As part of their portfolio requirement, the postgraduate students negotiate a PDP with reference to the learning objectives of the course and their personal needs. This provides each student with a structured pathway through the programme and encourages a student-centred, supervisor-validated learning opportunity for the individual. It also ensures that all students have an equal opportunity to experience the range of clinical assessments objectively.

Clinical practice based on a portfolio is assessed both formatively and summatively. Formative assessment may be carried out by the supervisor or mentor, depending on the area of practice, in a modular format. Each module consists of clinical competencies that must be achieved by the student. Students are also required to complete a minimum number of cases to demonstrate that they have met the clinical competencies.

A model portfolio used in implant training and assessment at postgraduate level could consist of five sections, as follows (a schematic outline follows the list):

  • 1
    Personal development plan: the student, in consultation with his/her mentor(s), plans his/her training requirements in line with the course learning outcomes.
  • 2
    Student self-assessment and reflective practice: students compare and contrast their achievements and abilities, in a self-assessed manner, with those of their peers and assessors, as described above. This fosters self-directed learning and self-appraisal for lifelong learning.
  • 3
    Logbook of cases: students/clinicians maintain a contemporaneous record of all consecutive cases managed and treated. This logbook is maintained throughout clinical practice as an ongoing record of clinical activity and for audit purposes.
  • 4
    Case-based reports/presentations: the student is required to provide a structured, detailed report of a minimum number of completed cases (typically 5–10) written to a predetermined standard similar to that of a scientific report.
  • 5
    Competency assessment forms: these are used by mentors/clinical assessors to carry out structured, objective formative assessment of clinical performance in each specific competency area, relevant to the level of expertise being assessed.
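
As an illustration only, the five sections above could be represented as a structured record so that completeness against programme requirements (for instance, the minimum number of case reports) can be checked; the class, field names and threshold in the Python sketch below are hypothetical, not taken from the Salford model.

  # Hypothetical outline of the five-section portfolio described above.
  from dataclasses import dataclass, field

  @dataclass
  class ImplantPortfolio:
      personal_development_plan: str = ""
      self_assessments: list = field(default_factory=list)   # reflective self-assessment entries
      logbook: list = field(default_factory=list)            # consecutive cases managed and treated
      case_reports: list = field(default_factory=list)       # structured reports of completed cases
      competency_forms: list = field(default_factory=list)   # mentor/assessor evaluation forms

      def meets_case_report_requirement(self, minimum: int = 5) -> bool:
          """Check the (illustrative) requirement of a minimum number of case reports."""
          return len(self.case_reports) >= minimum

  portfolio = ImplantPortfolio(case_reports=["case 1", "case 2"])
  print(portfolio.meets_case_report_requirement())  # False: only 2 of the minimum 5 reports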

The established use of a portfolio is as a tool for personal development, whilst its use in formal assessment constitutes a recent, promising but less researched development. Reliability can be enhanced with the use of strict criteria (49) and when pairs of assessors discuss their marking decisions (50). Further research is needed to reveal the full potential and application of the portfolio in summative assessment models (51).

Conclusions

Assessment should be integrated in the curriculum design, coordinated, and should reflect the learning outcomes of the education. Assessment should be implemented at all levels of implant dentistry education, from elementary theoretical knowledge to expert performance. No single method of assessment is appropriate for all levels and aspects of implant education. Different assessment methods should be used to assess different levels of competencies throughout the curriculum (Fig. 5). The group concluded that the model described by Miller (14) would be appropriate for assessing competencies in implant dentistry.

Figure 5.  Assessment methods of knowledge and skills as relating to the Miller Pyramid.

The group strongly encourages the use of reflective forms of assessment methods which engage the students in a process of self-appraisal, identification of individual learning needs and self-directed learning. The ultimate goal of this would be to allow the student to develop a lifelong learning attitude.

Various forms of written or oral assessment methodologies, such as MCQs, are applicable at the lowest competence level (‘knows’) and can be particularly helpful as summative assessment tools for large student numbers. The group encourages the use, whenever possible, of interactive assessment methods which require the synthesis of several disciplines and aspects of theoretical knowledge and their application to relevant clinical scenarios. Such methods would include patient simulations (paper-based or virtual), the interactive examination and more.

The group feels that, especially at higher levels of competence (‘shows how’, ‘does’), documentation of clinical proficiency by means of reflective portfolios and diaries is appropriate. Apart from their obvious developmental and formative role, portfolios could also be used in summative assessment schemes, as well as for audit and clinical appraisal. The highest level of competence (‘does’) requires performance assessment using structured, objective, clinical criteria.

References

Appendix

Participants in Group E were:

  • 1
    Anders Nattestad, Chairperson, Professor, University of Pacific, USA.
  • 2
    Nikos Mattheos, co-chair, Associate Professor, Griffith University, Gold Coast, Australia.
  • 3
    Rolf Attström, Professor Emeritus, University of Malmö, Sweden.
  • 4
    Antonio Carassi, Professor, University of Milano, Italy.
  • 5
    Deborah Faust, Senior Manager, Professional Development, Biomet 3i, USA.
  • 6
    Martin Janda, DDS, PhD, University of Malmö, Sweden.
  • 7
    Christina Lindh, Associate Professor, University of Malmö, Sweden.
  • 8
    Bruno Loos, Professor, Academic Centre for Dentistry Amsterdam (ACTA), The Netherlands.
  • 9
    Maciej Marcinowski, DDS, University of Poznan, Poland.
  • 10
    Todd Metts, Zimmer Dental.
  • 11
    Corrado Paganeli, Professor, University of Brescia, Italy.
  • 12
    Cemal Ucer, Professor, University of Salford, UK.
  • 13
    Tommie Van de Velde, DDS, University of Ghent, Belgium.
  • 14
    Per Vult von Steyern, Associate Professor, University of Malmö, Sweden.
  • 15
    Wilfried Wagner, Professor, J. Gutenberg-University, Mainz, Germany.