For the vast majority of my adult life, and a little over half of my entire life, I have been privileged to be an educator. But this is not the only job I have ever had. I was recently reminded of a moment during my year as a law-firm lackey in a gleaming Seattle skyscraper, which included more or less the only professional performance evaluation I ever had to endure. It was one of the worst days of one of the worst years of my life.
I have no need to go into the particulars that made this evaluation less than ideal (and that had them threaten to withhold a much-needed raise of fifty cents an hour). But I learned a ton that day about how not to treat workers. The relevant part of the story is how I sat there with my boss (whom I rarely saw) and my boss’s boss (whom I saw only that once), who handed me a packet on which was written everything people thought I was doing wrong.
Twentysomething me wasn’t ready to handle criticism, I think, but even so, the way it was handed to me could not have been much worse. I had to sit and read this thing while two people literally watched me do it. And then we had to have a talk about it.
The only word I can use to describe this is humiliating. There was no way that experience made me a better worker.
I am reminded of this because of a situation that is about to come up at school. While my job is nonevaluative, I still have to pass on some data that will be difficult for some teachers to see.
Teachers in a department agreed to do a pretty cool student survey around engagement (of my own invention; I am very proud of it) in September, and we are now giving it again here in January. We want to see, among other things, how kids feel about their subject area.
I give these teachers so much credit for making themselves vulnerable by measuring this. We are getting back some data that is absolutely fascinating.
First of all, students in this subject almost universally like it less after a semester of studying it than they did at the start of the year. Some of this might be a honeymoon effect from their first taking the survey in the opening days of school, when everyone is optimistic. But I also have data from grades 7 through 12 showing that students steadily (remarkably steadily; a perfectly smooth line on the graph) like the topic less with every year they have to take it. So I think it’s more than a honeymoon effect.
But back to the teachers.
Most teachers lost some ground—kids like the subject less now. One wonderful teacher has managed to launch her kids in the opposite direction: while most teachers lost 1-2 points of topic interest, this teacher’s students have GAINED two points. (If you’ve been in her room, you are not at all surprised by this.) And a couple have seen their students’ interest in the topic completely tank…by three and even four points, which is quite a lot on this scale.
My plan all along was to get permission from each teacher to share their data (they have granted it), sit down with them for a morning department meeting, put up these numbers and a few of the survey questions that show why kids have lost interest (spoiler alert: it’s almost entirely about relevance), and let the teachers turn to each other to get advice or to brainstorm ways to get kids to like their subject matter more (happier, interested kids perform better than miserable surly ones).
But when I saw the disparities in the numbers, even among teachers who share the same prep, I suddenly questioned my plan. Simply put, I was brought back to that Seattle skyscraper. Even though the teachers voluntarily agreed to share their data, if I were one whose numbers were especially rough, would I be able to engage in productive discussion or thoughtful learning at that meeting?
So I turned to Charli, one of the teachers in the department I trust most, one whose results were fairly typical (a one-and-a-half-point loss). I sat down with her and showed her the results for all of the teachers who share her prep. I kept her name but changed her colleagues’ names to X (5-point loss), Y (1-point loss), and Z (less than half a point loss).
“If I were teacher X, I would leave the meeting crying,” she said.
“Who is teacher Z? I need to talk to teacher Z,” she also said.
Now I am struggling with how to sit with both of Charli’s reactions. On the one hand, how can we keep the numbers anonymous and still have teachers help each other? On the other, how can we share our results without losing the ears and hearts of the teachers who need the most help?
With Charli’s help, I have a plan.
I will talk to each teacher individually about their scores, showing each one their own numbers and looking together at where they are strongest and where they need more help.
Then, about a week after I talk to the last teacher, I will bring us into a room together, where I will show aggregated data for each class, not separated by teacher. Then I will ask teachers to reveal what areas they most need help in.
That passage of time between the individual conversations and the department meeting feels really important. It eliminates the Seattle-skyscraper problem I experienced. If I had received my evaluation a day or two beforehand, I would have had time to digest it, think about it, calm down with it, and even repeat it around my apartment in a funny voice. Then, when I walked into my evaluation meeting, I would have been able to have a productive conversation about it. I also would have been spared the humiliation of being watched as I read my own negative evaluation.
I don’t think that a department meeting with these teachers’ survey results on the wall would have been as troublesome as my 1997 evaluation. But I do think that I need to avoid any possibility of a teacher checking out emotionally. Seeing our numbers, our data, and our reality is a prerequisite to getting better. But by giving each individual teacher time to digest their numbers, I maximize their effectiveness in changing them.
What have you all done to help teachers see their tough realities in a way that maximizes their abilities to change?
Help me as your time allows. How might you apply this approach to a current set of data I’m working through? It’s a mid-year check on a new cell phone protocol we’re doing this year. We did one about four weeks into the school year and will do one more in late spring. I could give tons of details on the protocol, but the key point here is that some numbers have dipped. Like your example, honeymoon is an aspect, teacher stamina for such things wanes at times, and one man’s mountain to die on is another man’s mogul to just, sorta, bounce over. You know?

But my question for you is this: what are the pros and cons of sharing the comments grouped by strand? For example: those who feel the protocol makes the cell phone issue better to way better have interesting comments. Those (about 13%) who feel it’s actually made things worse than in years past also have interesting comments. And those who feel it’s about the same as year(s) past may have the most interesting of all.

My first thought was to share the quantitative stuff and the comments; they’re all pretty free of anything that traces back to any individual, so it’s pretty safe that way. My other thought was to reply to the good third of them that have pretty easy answers or insights or amens: an opportunity to show empathy, acknowledge an imperfect protocol, and point out a few examples of low-hanging fruit where the person just didn’t know some little operational piece. Voila, they feel a little better just that fast.

This data, unlike yours, is at worst an indictment of the system, or of me for leading the system, so it wouldn’t out teachers or drive anyone to tears. I think the value in seeing the comments grouped this way is to actually grow empathy as a staff: “wow, some are struggling with this,” or “man, I hadn’t thought of that idea raised by the person who said it’s about the same.” To me it has potential to calibrate, to hear other perspectives and nuance from colleagues I might not see all that often.
Anyway, would love to read your thoughts or connect on a call or visit sometime. Love reading this stuff. Be well.