Opinion Article

Front. Neurol., 05 April 2011 | https://doi.org/10.3389/fneur.2011.00017

Developing competency testing tools for the incoming neurology residents

  • Chief of Neurology, Hines VA Hospital, and Neurology Residency Program Director, Loyola University Medical Center, Hines, IL, USA

Clinical skill evaluations exist for in-training assessment and post-residency maintenance of certification, but there are no established criteria for competency testing at the beginning of a residency program. Various programs have used different techniques to assess these skills, but most do not use any assessment methodology before their residents begin working in the inpatient/outpatient setting or start in-house night calls. Several tools have been developed at various institutions to achieve procedural competency within residency programs. It is our plan to develop such assessment tools to be applied before our residents assume service responsibilities on the inpatient or outpatient clinical services. For ease of understanding, a question-based, stepwise approach is presented.

Stepwise Approach to Competency Testing for Resident Evaluation

❖ Step 1: Necessity of clinical skills assessment (CSA) in the residency programs.

❖ Step 2: Necessity of faculty observation of residents in the residency program.

❖ Step 3: Necessity of faculty training in evaluation and observation of skills.

❖ Step 4: Faculty development as part of competency tools training at Academic institutions.

❖ Step 5: Systems Issues and Challenges Involved.

Step 1: Necessity of Clinical Skills Assessment in the Residency Programs

Change in examination format of American Board of Psychiatry and Neurology

• Effective for residents entering residency training in neurology (PG-2) or child neurology (PG-3) as of July 1, 2005, documentation of satisfactory performance of their clinical skills is required as part of their credentialing process (www.abpn.com). Neurology or child neurology residents who entered residency training prior to July 1, 2005, must utilize the current certification process and have until February 1, 2013, to complete it. Candidates who do not meet this deadline will be required to complete a minimum of five clinical skills evaluations as set by the American Board of Psychiatry and Neurology (ABPN). These same criteria are used by residents for certification purposes after graduation. The in-training clinical skills evaluations must be completed by a minimum of three faculty members, who may be either trained (those who have participated in the ABPN Part II examinations) or untrained. For adult neurology residents, three of the evaluations should be completed no later than the PG-3 year, and all five no later than the PG-4 year. The evaluation may be conducted in either an inpatient or an outpatient setting. For the neurology residency at our program, we use an assessment period of 1 h: the first 30 min are allocated to the resident’s history and physical examination of the assigned patient, followed by 10–15 min to present a summary of the relevant findings on history and neurological examination. The remainder of the time is spent in discussion, with feedback provided by the faculty member. The individual faculty member also determines whether the resident passed all three core components of the assessment: medical interviewing; neurological examination; and humanistic qualities, professionalism, and counseling skills. The Nex Form 1 or 2 from the ABPN is utilized to capture these assessments.

Anticipated impact with a change in assessment format and timing

• Will faculty effort by program directors and other faculty increase? The answer is clearly yes. This requires an increased degree of faculty effort and commitment from all or most members of the department. A parallel question concerns the implications for residents: residents will start taking in-house call independently and working in the inpatient/consult setting only once they have successfully completed the competency requirement.

Methods for competency testing

No single method has been shown to be effective on its own, and thus we have proposed a combined approach. We hope to achieve competency testing by utilizing the following methods:

○ Pre-test at the beginning of July or in the first week of residency. A score of 50% or more will be considered successful.

○ Two hours of didactics every Monday, Wednesday, and Friday on common neurological topics, including emergencies, for 4 weeks.

○ Two hours of hands-on workshops every Tuesday and Thursday on common neurological topics, including emergencies, for 4 weeks.

○ Demonstration by faculty or chief resident(s) of a complete neurological history and examination on one to two patients.

○ Successful performance of five vignettes in neurological emergencies.

○ Successful performance of two CSAs administered by two different faculty members.

○ Post-test at the end of 4 weeks; a score of 80% or more will be considered successful.

• Please see the attached flow diagram listing the various tools, the period of time within which residents need to accomplish the goals, the corresponding competencies achieved with each tool, the score required for success, and the path if unsuccessful. Residents who are unable to complete these requirements successfully will need to remediate. In addition, they will take four night calls with a senior resident, divided over 4 weeks (preferably once per week). This change signifies a purposeful move from basing clinical competency on time within the residency to demonstrating competency before advancing in the residency. Eventually, this new assessment process will be reflected in the confidence of our attending staff, as they will know that the individuals evaluating their patients and supervising their care at night are “certified as being competent.”

Why should internal measures of competency, such as the clinical skills assessment, be viewed as relevant and important for our residency?

• Foremost is the fact that this type of assessment is required by the ABPN for board eligibility and continues as a form of maintenance of certification (MOC), so the CSA remains an essential, valid, and reliable component for assessing a resident’s competency. The clinical skills of medical interviewing, physical examination, and counseling remain vital to the effective care of patients despite documented limitations. Two important studies are illustrative: the medical interview alone produced the correct diagnosis in nearly 80% of patients presenting to an ambulatory care clinic with a previously undiagnosed condition (Fox et al., 2000), and Hampton et al. (1975) demonstrated that the medical history produced the final diagnosis in the majority of patients, with laboratory investigation providing the final diagnosis in only one of 80 consultations. Despite advances in technology, accurate data collection during the medical interview and the physical examination remains the most potent diagnostic tool available to physicians (Hampton et al., 1975; Peterson et al., 1992; Kirch and Schaffi, 1996). Research has repeatedly demonstrated that a multiple-choice examination cannot attest to a trainee’s proficiency in clinical skills. In medical school, the addition of the CSA can help assure that a medical student has attained a basic level of clinical skills sufficient to begin the next stage of their education in residency.

• The majority of trainees desire effective forms of evaluation and feedback from their faculty (Ende, 1983; Gil et al., 1984). The AAMC, ACGME, and ABMS strongly endorse the evaluation of students, residents, and fellows in clinical skills (American Board of Internal Medicine, 2001; American Association of Medical Colleges, 2007; Accreditation Council for Graduate Medical Education, 2007). The Medical Council of Canada and the Educational Commission for Foreign Medical Graduates include clinical skills examinations as an integral component of the licensure process (Brailovsky et al., 1997; Grand’Maison et al., 1997). Effective physician–patient communication has also been shown to improve health outcomes (Stewart, 1995), as effective communication involves patient participation, and most patients want an active role in decision-making processes (Kogan et al., 2009). Developing their clinical skills allows residents to progress through George Miller’s pyramid, from “knows” through “knows how” and “shows how” to “does.” Chimowitz et al. (1990) have demonstrated the importance of the bedside examination in the accurate diagnosis of neurological disorders. Some authors, abstracting studies with a modified Best Evidence Medical Education approach, have found only limited evidence supporting portfolios in the undergraduate setting (Buckley et al., 2009; Kogan et al., 2009). Further studies are required to establish how this translates into resident education.

Step 2: Necessity of Faculty Observation of Residents in Practice within the Residency Program

Why faculty observation?

• Direct observation has been an informal and underutilized assessment method across all specialties. Fortunately, it has started to be included as part of medical education curricula (Fromme et al., 2009). Evaluating residents in natural settings with actual patients remains essential to training qualified physicians through performance-based clinical skills assessment (Kogan et al., 2009). Faculty are in the best position to document improvement over time and to certify that trainees have attained the appropriate level of skill in medical interviewing, physical examination, and counseling. Before faculty observation can be meaningful, a trainee must first know how to perform a clinical skill or maneuver; they then acquire experience through practice with actual patients. Physicians in general are poor at self-assessment in the absence of guidance and data (Duffy, 1998). The biggest problem in the evaluation of clinical skills is the lack of observation of trainees by faculty. Research continues to document serious deficiencies in clinical skills among students and residents, with errors reported in several aspects of basic physical examination skills (Johnson and Carpenter, 1986; Fox et al., 2000).

• Empirical evidence supports the observation that direct supervision helps trainees gain skills faster and change their behavior more quickly (Kilminster et al., 2007). Self-supervision was not effective, but faculty supervision was associated with improved patient safety and quality of care. These observed deficiencies in trainees’ clinical skills have led to a significant push by educators and accrediting agencies to reemphasize both the training and the evaluation of clinical skills (Turnbull et al., 1998; Johnston and Boohan, 2000; Long, 2000). Deficiencies in interviewing skills persist and, in the views of some, may actually worsen with time (Pfeiffer et al., 1998; Fromme et al., 2009); communication skills do not appear to improve after completion of residency training unless there is an active intervention. One study examined six different methods of performance evaluation: simulated patients; video observation; direct observation; peer assessment; audit of medical records; and portfolio or appraisal. While peer assessment was found to be the most feasible method in terms of cost and time, its long-term impact on education and quality of care remains unknown (Overeem et al., 2007). The study by Miller and Archer (2010) reveals no evidence that assessment tools (the mini-clinical evaluation exercise, direct observation of procedural skills, and case-based discussion) lead to improvement in performance, although subjective reports on their educational impact are positive. Future research designs need to pay special attention to demonstrating effectiveness in terms of performance improvement.

Step 3: Necessity of Faculty Training in Evaluation and Observation of Skills

Why is it necessary to include faculty training?

• First, faculty members must appreciate that direct observation is important and an obligation of being a teacher. However, being a good clinician and teacher does not equate to being skilled at observing others’ competencies and providing feedback. Observation of trainees in the work setting is essential to their development, yet the limited available research demonstrates that significant deficiencies exist in faculty members’ direct observation and evaluation skills (Holmboe, 2004a).

• The ultimate responsibility to certify competence in clinical skills falls upon residency program directors and their associated teaching faculty (trained or untrained). While we spend a great deal of time refining evaluation forms and rating scales, less is known, and less attention is paid, about the validity of the faculty ratings themselves. It is disturbing that, in one objective structured clinical examination (OSCE), the positive predictive value of faculty ratings for required interviewing skills was just above 10%. Another study found that faculty members could not reliably evaluate one-third of the physical examination skills assessed and had the most difficulty with examination of the head, neck, and abdomen (Elliot and Hickam, 1987). The eyes cannot see what the brain does not know to look for. Similarly, a medical educator with deficiencies in their own clinical skills is less likely to detect those deficiencies among trainees. Faculty members are very uncomfortable about admitting their own limitations, despite the powerful role modeling such an act engenders (Richards et al., 1996).

Step 4: Faculty Development as Part of Competency Tools Training at Academic Institutions

Why is faculty development necessary?

• Faculty development remains one of the essential responsibilities of anyone pursuing an academic career as an educator at an academic institution. Faculty members join medical schools after years of education and training, but they have had little experience with their own future development (Harris et al., 2007). A review of the literature indicates that competencies can be learned from self-help guides, from single-event workshops facilitated by more able peers, and from more theoretically grounded and detailed approaches to teacher development (McMillan, 2007).

• The process of developing faculty members’ skills in clinically assessing their residents can be facilitated by partnering junior faculty with more senior faculty when conducting a CSA on any of their residents (Steinert et al., 2006). Depending upon the program requirements and the comfort level of the junior faculty, other tools can similarly be used in partnership with a more senior faculty member. However, faculty development is not an easy task: it requires support from one’s own department and institutional leaders, appropriate resource allocation, and recognition for teaching excellence. Establishing a network of local and national individuals who share similar ideas is very helpful (Collins, 2004; Ladden et al., 2004; McLean et al., 2008). Faculty development programs improve the teaching competencies of their participants. A comprehensive faculty development program should include professional, instructional, leadership, and organizational development (Wilkerson and Irby, 1998). Faculty training, as discussed in the previous section, is necessary and should be incorporated as part of faculty development.

Step 5: Systems Issues and Challenges Involved

• One of the biggest problems in the evaluation of clinical skills is simply getting faculty to observe residents. Utilizing the additional tools does place an extra initial burden on most of us as educators. Depending on the size of the faculty, the pressure can be distributed by having some faculty participate in workshops and help with didactics as well as CSAs. Program directors bear the major brunt, but in order to build a successful team and produce physicians who can take care of us in the future, we all, as faculty and educators, need to step up to the plate. It has been observed that some junior physicians displayed little willingness to change in response to multisource feedback; performance changes were more likely to occur when feedback was credible and accurate, or when coaching was provided to help subjects identify their strengths and weaknesses (Miller and Archer, 2010). In one study, the AAMC found that faculty members rarely observed student interactions with patients, noting that the majority of a student’s evaluation was based on faculty and resident recollections of the student’s presentation skills and knowledge (Scenes, 1997). There is little evidence even today of greater faculty involvement in teaching and observing clinical skills (Holmboe, 2004b; Holmboe et al., 2004). There is also some evidence that over two-thirds of faculty rated the overall performance of a resident depicting marginal performance as satisfactory or superior.

• Rating scales are subject to problems such as the halo effect and leniency error (Herbers et al., 1989; Noel et al., 1992). Despite some financial limitations, the use of standardized patients to evaluate and teach clinical skills is a valuable methodology in medical education and assessment (Richards et al., 1996). Chief residents and junior faculty are potential sources of “standardized patients” and can be easily trained. There are certainly limitations with regard to expense, and these standardized exercises are not meant to replace observation of actual patients. In addition, standardized patients (or residents) may have less validity with more advanced trainees (Ram et al., 1999; Furman et al., 2010).

Conclusion

• Clinical skills assessment must be completed by neurology residents for certification purposes. We propose utilizing a combination of tools for competency testing in residency programs. In addition to the CSA, other tools, including a pre-test, a post-test, dedicated workshops, a vignette examination, and didactics on common issues as well as neurological emergencies, can be utilized. Medical interviewing, physical examination, and counseling remain the most important and effective diagnostic and therapeutic tools.

• These tools can be utilized early on in residency programs to identify deficiencies in residents.

• As educators and physicians in an academic setting, we have a responsibility not just toward our patients but also an obligation to provide equally outstanding service to our students, residents, and fellows, as well as our junior faculty members. Our ongoing challenges should not preclude us from investing our time in the education, assessment, and feedback of our future physicians.

References

Accreditation Council for Graduate Medical Education. (2007). The General Competencies. Available at: www.acgme.org

American Association of Medical Colleges. (2007). AAMC Educational Outcomes Project. Available at: www.aamc.org

American Board of Internal Medicine. (2001). Portfolio for Internal Medicine Residency Programs. Philadelphia: American Board of Internal Medicine.

American Board of Psychiatry and Neurology. Certification Requirements. Available at: www.abpn.org

Brailovsky, C. A., Grand’Maison, P., and Lescop, J. (1997). Construct validity of the Quebec licensing examination SP-based OSCE. Teach. Learn. Med. 9, 44–50.

Buckley, S., Coleman, J., Davison, I., Khan, K. S., Zamora, J., Malick, S., Morley, D., Pollard, D., Ashcroft, T., Popovic, C., and Sayers, J. (2009). The educational effects of portfolios on undergraduate student learning: a Best Evidence Medical Education (BEME) systematic review. BEME Guide No. 11. Med. Teach. 31, 282–298.

Chimowitz, M. I., Logigian, E. L., and Caplan, L. R. (1990). The accuracy of bedside neurological diagnoses. Ann. Neurol. 28, 78–85.

Collins, J. (2004). Teacher or educational scholar? They aren’t the same. J. Am. Coll. Radiol. 1, 135–139.

Duffy, D. F. (1998). Dialogue: the core clinical skill. Ann. Intern. Med. 128, 139–141.

Elliot, D. L., and Hickam, D. H. (1987). Evaluation of physical examination skills. Reliability of faculty observers and patient instructors. JAMA. 258, 3405–3408.

Ende, J. (1983). Feedback in clinical medical education. JAMA 250, 777–781.

Fox, R. A., Clark, C. L. I., Scotland, A. D., and Dacre, J. E. (2000). A study of pre-registration house officers’ clinical skills. Med. Educ. 34, 1007–1012.

Fromme, H. B., Karani, R., and Downing, S. M. (2009). Direct observation in medical education: a review of the literature and evidence for validity. Mt Sinai J. Med. 76, 365–371.

Furman, G. E., Smee, S., and Wilson, C. (2010). Quality assurance best practices for simulation-based examinations. Simul. Healthc. 5, 226–231.

Gil, D. H., Heins, M., and Jones, P. B. (1984). Perceptions of medical school faculty members and students on clinical clerkship feedback. J. Med. Educ. 59, 856–863.

Grand’Maison, P., Brailovsky, C. A., Lescop, J., and Rainsberry, P. (1997). Using standardized patients in licensing/certification examinations: comparison of two tests in Canada. Fam. Med. 29, 27–32.

Hampton, J. R., Harrison, M. J. G., Mitchell, J. R. A., Prichard, J. S., and Seymour, C. (1975). Relative contributions of history-taking, physical examination, and laboratory investigation to diagnosis and management of medical outpatients. BMJ 2, 486–489.

Harris, D. L., Krause, K. C., Parish, D. C., and Smith, M. U. (2007). Academic competencies for medical faculty. Fam. Med. 39, 343–350.

Herbers, J. E., Noel, G. L., Cooper, G. S., Harvey, J., Pangaro, L. N., and Weaver, M. J. (1989). How accurate are faculty evaluations of clinical competence? J. Gen. Intern. Med. 4, 202–208.

Holmboe, E. S. (2004a). Faculty and the observation of trainees’ clinical skills: problems and opportunities. Acad. Med. 79, 16–22.

Holmboe, E. S. (2004b). The importance of faculty observation of trainees’ clinical skills. Acad. Med. 79, 16–22.

Holmboe, E. S., Hawkins, R. E., and Huot, S. J. (2004). Direct observation of competence training: a randomized controlled trial. Ann. Intern. Med. 140, 874–881.

Johnson, J. E., and Carpenter, J. L. (1986). Medical house staff performance in physical examination. Arch. Intern. Med. 146, 937–941.

Johnston, B. T., and Boohan, M. (2000). Basic clinical skills: don’t leave teaching to the teaching hospitals. Med. Educ. 34, 692–699.

Kilminster, S., Cottrell, D., Grant, J., and Jolly, B. (2007). AMEE Guide No. 27: Effective educational and clinical supervision. Med. Teach. 29, 2–19.

Kirch, W., and Schaffi, C. (1996). Misdiagnosis at a university hospital in 4 medical areas. Report on 400 cases. Medicine 75, 29–40.

Kogan, J. R., Holmboe, E. S., and Hauer, K. E. (2009). Tools for direct observation and assessment of clinical skills of medical trainees: a systematic review. JAMA 302, 1316–1326.

Ladden, M. D., Peters, A. S., Kotch, J. B., and Fletcher, R. H. (2004). Preparing faculty to teach managing care competencies: lessons learned from a national faculty development program. Fam. Med. 36(Suppl), S115–S120.

Long, D. M. (2000). Competency-based residency training: the next advance in graduate medical education. Acad. Med. 75, 1178–1183.

McLean, M., Cilliers, F., and Van Wyk, J. M. (2008). Faculty development: yesterday, today and tomorrow. Med. Teach. 30, 555–584.

McMillan, W. J. (2007). “Then you get a teacher” – guidelines for excellence in teaching. Med. Teach. 29, e209–e218.

Miller, A., and Archer, J. (2010). Impact of workplace based assessment on doctors’ education and performance: a systematic review. BMJ 341, c5064.

Noel, G. L., Herbers, J. E., Caplow, M. P., Cooper, G. S., Pangaro, L. N., and Harvey, J. (1992). How well do internal medicine faculty members evaluate the clinical skills of residents? Ann. Intern. Med. 117, 757–765.

Overeem, K., Faber, M. J., Arah, O. A., Elwyn, G., Lombarts, K. M., Wollersheim, H. C., and Grol, R. P. (2007). Doctor performance assessment in daily practise: does it help doctors or not? A systematic review. Med. Educ. 41, 1039–1049.

Peterson, M. C., Holbrook, J. H., and Hales, D. V. (1992). Contributions of the history, physical examination, and laboratory investigation in making medical diagnoses. West. J. Med. 156, 163–165.

Pfeiffer, C., Madray, H., Ardolino, A., and Willms, J. (1998). The rise and fall of student’s skill in obtaining a medical history. Med. Educ. 32, 283–288.

Ram, P., van der Vleuten, C., Rethans, J. J., Grol, R., and Aretz, K. (1999). Assessment of family physicians: comparison of observation in a multiple-station examination using standardized patients with observation of consultations in daily practice. Acad. Med. 74, 62–69.

Richards, B. F., Rupp, R., Zaccarro, D. J., Cariaga-Lo, L., Harward, D., Petrusa, E. R., Smith, A. C., and Willis, S. E. (1996). Use of a standardized patient based clinical performance examination as an outcome measure to evaluate medical school curricula. Acad. Med. 71, S49–S51.

Scenes, P. (1997). The role of faculty observation in assessing students’ clinical skills. Contemp. Issues Med. Educ. 1, 1–2.

Steinert, Y., Mann, K., Centeno, A., Dolmans, D., Spencer, J., Gelula, M., and Prideaux, D. (2006). A systematic review of faculty development initiatives designed to improve teaching effectiveness in medical education: BEME Guide No. 8. Med. Teach. 28, 497–526.

Stewart, M. A. (1995). Effective physician-patient communication and health outcomes: a review. CMAJ 152, 1423–1433.

Turnbull, J., Gray, J., and MacFacyen, J. (1998). Improving in-training evaluation programs. J. Gen. Intern. Med. 13, 317–323.

Wilkerson, L., and Irby, D. M. (1998). Strategies for improving teaching practices: a comprehensive approach to faculty development. Acad. Med. 73, 387–396.

Citation: Chawla JPS (2011) Developing competency testing tools for the incoming neurology residents. Front. Neur. 2:17. doi: 10.3389/fneur.2011.00017

Received: 08 March 2011; Accepted: 08 March 2011;
Published online: 05 April 2011.

Copyright: © 2011 Chawla. This is an open-access article subject to a non-exclusive license between the authors and Frontiers Media SA, which permits use, distribution and reproduction in other forums, provided the original authors and source are credited and other Frontiers conditions are complied with.

*Correspondence: jasvinder.chawla2@va.gov