What’s Wrong with Teacher Evaluation and How to Fix It: Osmosis
Unfortunately, despite what appears to be a concerted effort across the last several decades, the assumption that an accurate picture of educator skill and practice can be gained through observation alone simply doesn't hold. In the final analysis, this simplistic approach to teacher evaluation results in neither teacher improvement nor increased accountability. Teachers neither value nor trust their own evaluations, administrators view the process as one more bureaucratic box to check, and it carries no credibility with parents and other stakeholders.
So, what can we do about the abysmal state of teacher evaluation? First, we need to recognize what's wrong; second, we need to fix it. In the first post in this series, I discussed how observation does not equal evaluation. Today's post is about purposeful, data-driven evaluation.
The Problem: Osmosis
Pronunciation: \äz-ˈmō-səs, äs-\
Definition: a process of absorption or diffusion suggestive of the flow of osmotic action; a usually effortless, often unconscious, assimilation
Example: learned a number of languages by osmosis
(Adapted from Merriam-Webster's Online Dictionary, 2010)
Merely walking through the classroom occasionally doesn't constitute evaluation. That kind of minimalist effort is fraught with error: it rests on subjective impressions, is unreliable, and is unfair to teachers. To yield accurate, trustworthy evidence, performance evaluation requires a systematic and concerted effort. The one-minute manager simply doesn't apply here.
How to Fix It
The best solution to the osmosis/no-evidence trap is a straightforward one: rely on data, not intuition, to make judgments about teacher performance. And relying on data requires actually collecting it. Thus, a well-designed, evidence-based teacher evaluation system will include multiple methods for documenting performance, such as those suggested in Figure 1.
Taken collectively, multiple data sources can provide a fuller, fairer, and more accurate portrait of the teacher's performance.
In the next post in this series, I'll address the problem of one-size-fits-all evaluations.
© James H. Stronge. Used with permission.
James H. Stronge is the Heritage Professor in the Educational Policy, Planning, and Leadership Area at the College of William and Mary, Williamsburg, Va. He also is the president of Stronge and Associates, an education consulting group that focuses on teacher and leader effectiveness. His research interests include policy and practice related to teacher quality and teacher and administrator evaluation. His work on teacher quality focuses on how to identify effective teachers, how to connect teacher performance to student success, and how to enhance teacher effectiveness.
Stronge has presented his research at numerous conferences and has conducted workshops for school districts and educational organizations throughout the United States and internationally. Among his current research projects are international comparative studies of national award-winning teachers in the United States and China and the development of a U.S. Department of State-sponsored principal evaluation system for American schools in South America. Additionally, he has worked extensively with states, regional organizations, and local school districts on issues related to teacher quality, teacher selection, and teacher and administrator evaluation.
Stronge has been a teacher, counselor, and district-level administrator and has authored, coauthored, or edited 22 books and more than 100 articles, chapters, and technical reports. Connect with him by e-mail at firstname.lastname@example.org.