I find it difficult to both attend a meeting as a member of the Board and take notes of said meeting for a blog post, especially a post I hadn't decided to do until some time afterward. Ergo, there are holes. A very busy week, nonetheless, with two Board meetings and me trying to pull together my first slidedeck for a course I begin teaching next Tuesday...all this by way of apology for getting this post out so long after the fact. Do remember these are not "minutes" and any errors and/or misrepresentations are unintentional, but are mine.
This past Monday, May 23, the Department brought forward the topic of Educator Evaluation (Ed Eval) for a Special Meeting of the Board at 5:00 PM: "Educator Evaluation: Overview, Progress Report, and Key Issues - Discussion". The Board's agenda and back-up documents are HERE.
The Department presented a brief overview of the Ed Eval system that the Board adopted in 2011 as part of the run-up to Race to the Top dollars. Senior Associate Commissioner of Instructional Support Heather Peske opened with five key priorities that had been identified for MA's new system:
- to promote growth and development
- to place student learning at the center
- to recognize excellence
- to set a high bar for tenure and
- to shorten timelines for improvement
To answer the question "why a new Ed Eval system?", Peske said that prior to 2012, evaluations rarely included student outcomes and rarely singled out excellence among educators. The system failed to ensure educator input or continuous improvement, she said, and failed to differentiate meaningfully between levels of effectiveness. By way of example, she offered: in one district, a sample of 58 teacher evaluations covering over 1,000 indicators of performance yielded only one indicator, for one teacher, rated less than satisfactory.
We heard from two panels. The first panel presented a statewide, national, and research view; its members' comments are summarized below.
Parenthetically, Barbara Madeloni, President, Massachusetts Teachers Association, had been scheduled to be part of this panel, too. Prior to the Board of ESE meeting, she sent a joint letter to the Commissioner, copying the Board (the letter was also signed by Tom Gosnell, President, AFT Massachusetts), explaining why she had decided not to participate. I have copied the content of her letter at the bottom of this post. Attached to Madeloni's email was an AFT/MTA white paper explaining why evaluating educators with statewide standardized test results is neither valid nor productive.
The Department shared the "staggered rollout" of the Ed Eval system:
- The first wave was with Level 4 schools and early adopter districts (2012-2013);
- The second was in districts that had signed on to the Race to the Top initiative (2013-2014);
- And in 2014-2015, the remaining districts came on board, with at least half of their educators evaluated under the new system.
I noted (according to DESE's handout) that since the first rollout, the percentage of educators rated "needs improvement" has steadily decreased: from 6.8% in 2012-13 to 4.1% in 2014-15. (At the NASBE Legislative Conference last month, I liveblogged the discussion with Charlotte Danielson about using state teacher evaluation systems to promote teaching and learning. Danielson said that most eval systems are for the 5-6%, not for the 94%.)
Representing a national view was Kati Haycock, who commented that it was "evaluation season" at Ed Trust. She stated that self-reflection and quality feedback are important keys in the process there. About MA's system, she was emphatic in stating that we should "stay the course".
Haycock said that MA's system had "avoided the quagmire that other states had gotten into" through their "formulaic approach" of state-imposed algorithms and the "gotcha" nature of their evals.
Ross Wiener commented on the "priority of educator evaluation across the country". He talked of the importance of "valuing professional judgement, valuing growth and development, and valuing local control". He said he appreciates MA's "balanced approach" to evals and that the state avoided a "formulaic algorithm".
In response to Board questions, Haycock said there are two goals/reasons for Ed Evals:
- To grow the knowledge and skills of students. Here, she likened educators' love for teaching to nurses with a good bedside manner: a bedside manner is important but not sufficient for the job. Educators with content and curriculum strengths she likened to salespeople who know a lot about their product: knowledge of the product is important, but they are judged by how many times they made the sale. In each case, educators must be solid both in their love for teaching and learning and in continuous improvement so that they can teach every child. She said [strong ed evals are] not unlike physicians' ratings, which are very tough and very important.
- Good evaluation systems are strong signaling systems about what's important. The strong ones focus on both the resources they have to give and the "bounce" one gets on productive efforts.
Ron Noble spoke to the Massachusetts policy: that Ed Evals have to be inclusive of a "body of evidence", and that teachers and their evaluators should approach evals in a "reflective, growth-oriented manner".
In response to a Board question, Haycock talked about the "issue of the moment", which she saw as the "focus on the achievement of low-income students, who are lagging substantially", and "the propensity to assign our least well-educated teachers to teach them". She said "we need to take responsibility for that", and that parents of students of color "won't be satisfied unless educators can accelerate the learning of students who come to school already behind". She said that what the adults and children are doing in MA is very powerful.
We next moved on to the second panel: Classroom, School, and District Perspectives - a much larger panel! I believe each person had four minutes to present their views.
Shakera Walker, Senior Manager of Teacher Leadership and Professional Development, BPS (also a member of the Educator Evaluation Task Force 2010-2011 - she was a teacher then):
- Previously, Principals had been challenged to complete teacher evaluations; now they are being completed;
- New evals offer Teacher voice and agency;
- She receives quality feedback and there are high expectations for all;
- In BPS, they use PAR (Peer Assistance and Review), an approach to peer mentoring and professional development.
Kate Fenton, Chief Instructional Officer, Springfield Public Schools:
- There are 2,400 Teachers in SPS;
- They have experienced accomplishments and challenges with the new system;
- SPS implemented a 2-day inter-rater reliability certificate program;
- [Fenton had more to say, but I didn't write it down. I do have an odd notation that I did write down, apparently attributable to her: School Improvement Grants @ Site Councils. This makes no sense to me, however.]
Gene Reiber, Teacher, Hanover Public Schools:
- They have experienced positive changes in culture and climate;
- Trying to identify "root causes" of student achievement (this came about after they learned that too many students reported experiencing suicidal ideation);
- Reflection is a mindful part of this work (artifacts, data, student work);
- Experienced anxiety at not having time for DDMs, compliance, and PD.
Mike Sabin, Principal, McDevitt Middle School, Waltham:
- Positives/Areas of Growth:
- Spirit of continuous improvement;
- Teacher voice, goals, self-assessment, examples of student learning;
- Areas for Improvement:
- A Teacher rated "Needs Improvement" at the end of the third year - especially one who has moved from one area to a new one in their third year (i.e., is in the first year of a new role) - need not result in losing that hire, and the rubric doesn't account for this;
- More resources are necessary;
- He wants a great Teacher in every classroom - that requires better tools and resources;
- Need to consider Principal work conditions and needs;
- Do the math and you can understand that it will take a good chunk of time to do thoughtful, meaningful evaluations;
- Sabin is doing 14-16 evals/year; others noted that number has been as high as 35;
- He said that it isn't difficult to imagine taking at least 20% of a Principal's time to do thoughtful, meaningful evaluations;
- He also noted: "The culture of 'Needs Improvement' as a terrible thing is very strong".
Michelle Ryan, Teacher, Randolph Public Schools:
- Implementation of Ed Evals has resulted in authentic, collaborative conversations
- Common goals
- She noted that she is able to talk with other teachers about tips they might have for her to fine tune her practice
- Can applaud strengths and get support where necessary
- Promotes risk-taking for assessments
- Opportunities to be a mentor teacher at the educator preparation level
- One of the challenges has been the small number of administrators able to take on the evals - an instructional leader role needs to be created in order to have rich conversations
- They use TeachPoint to upload their artifacts, and sometimes uploading too many can be a problem
Kim Smith, Superintendent, Wakefield Public Schools:
- WPS began implementing the new eval system during 2012-2013 school year;
- They formed a Steering Committee of teachers and administrators and ultimately adopted the model system developed by DESE;
- Then they chose to adapt the model rubrics, which proved important to a smooth implementation the following year because teachers had a voice in defining the language behind the system;
- Union leadership and other educators helped refine rubrics to "make them our own", which led to deeper understanding, buy-in, and ownership;
- The Steering Committee has continued to meet monthly over the past four years to provide recommendations for each new aspect of the system, including implementation of DDMs and student/staff feedback;
- The committee also developed an Educator Resource Manual to assist faculty in navigating the new system, including homegrown forms, examples of evidence, and a DDMs organizational tool;
- Smith believes that by owning the process they have avoided a "compliance mindset";
- They asked, "How do we help educators view their practice through the lens of student performance, and examine ways that their instructional practice impacts student growth?"
- Subsequently, they seized the opportunity DESE offered for districts to build an alternative pathway;
- They ended up creating a fifth standard (the state's rubric has four standards), which WPS saw as "what students do" - this focused the work on the relationship between teacher practice and student growth.
I don't remember who noted their "bias" regarding "training and support", but their district is addressing it.
Likewise, someone referred to the importance of "distributed leadership" and "differentiated support" - both seen as positive results of the evaluation implementation.
The meeting adjourned at approximately 7:03 PM.
- - -
May 20, 2016
Dear Commissioner Chester,
We are writing to let you know of our concerns about Monday night’s panel on the Massachusetts Educator Evaluation system, and to inform you why I, Barbara, have decided not to participate.
The idea for a panel originated when Board of Elementary and Secondary Education member Ed Doherty informed you in March that he would like to invite members of MassPartners for Public Schools to a BESE meeting to discuss one aspect of the Educator Evaluation system: the mandate that District-Determined Measures and standardized test scores be used to generate a “student impact rating” of low, moderate or high for all licensed educators. The first such ratings are supposed to be issued this fall. As this deadline has approached, administrators and educators alike have become increasingly concerned that there is no valid, reliable and useful way to use these test scores in a meaningful evaluation system. In fact, there is concern that these ratings will do more harm than good.
We were disappointed that you decided not to focus on the impact rating but to broaden the discussion to the larger educator evaluation system. Our members do have a lot of views on the system as a whole. Deep reservations about it were expressed at recent MTA and AFT Massachusetts annual conventions. But we don’t want that longer and more complicated discussion to deflect you from our immediate concerns about the student impact rating mandate. This particular aspect of the evaluation system needs prompt attention because it is: (a) widely discredited as invalid and unreliable; (b) about to go into effect; and (c) no longer required by the federal government, with passage of the Every Student Succeeds Act.
If we had been able to have the discussion we wanted, we would have told the department and board about the widespread disapproval of the mandate among both evaluators and those being evaluated. We would have mentioned the recent New York court decision that found using these kinds of value-added measures to judge educators is invalid and unreliable. And we would have shared with you our position paper, produced by AFT Massachusetts and the MTA, which we are attaching to this email.
We repeat our request that you hold a hearing or panel discussion with the entire BESE on the impact rating issue in the very near future. We will also welcome the opportunity at another time for Massachusetts educators and the associations that represent them to engage in a deeper conversation on the entire educator evaluation system.
Very truly yours,
Barbara Madeloni, President, Massachusetts Teachers Association
Thomas Gosnell, President, AFT Massachusetts
cc: Members of the Board of Elementary and Secondary Education