The Fallacy of Educational Data
By Patrick Leonard - life-long educator and EdTech Executive
Mark Twain once said, "There are lies, there are damned lies, and then there are statistics." In the world of K-12 Education, "statistics" can be used to prove just about anything - for good or evil. The reason this is so is that in Education, there is no real agreement on how we are supposed to keep score!
What makes a school or district "good"? Is it summative test scores, graduation rates, college acceptance, climate surveys, or Advanced Placement enrollment? Or is it something else?
In my former life, I was a college basketball coach. No matter where in the world you are or at what level you are playing, there is universal agreement on how to keep score in basketball - one point for a free throw, two points for a field goal, and three points for a field goal from behind the three-point line. Simple, right?
As coaches, we could utilize myriad strategies, offenses, defenses, and set plays to try to maximize our points or reduce our opponent's points, but at the end of the game, we all agreed on how to keep score!

In Education, it seems that whatever "point" you are trying to prove, you just select the data that makes you look good and commission a study. Then "publish" your "research" with the help of a good marketing/PR firm, and you are on your way to being an expert.
And therein lies the problem. Public Education has its share of detractors and critics. Some have legitimate concerns and work with school systems to try to improve them. However, many detractors are after the funding Public Schools receive. Many of these detractors are well funded and very politically connected, and their well-funded studies tell only "their" side of the story.
After the passage of No Child Left Behind, giant testing companies pushed legislation to "test our kids" into excellence! However, to maintain their hold on the school systems, they needed a specific "failure rate" to ensure states, districts, and schools purchased their supplemental curriculum and resources. This one was easy. Psychometricians and research scientists employed by the testing companies made sure to calibrate the tests to achieve the desired failure rates. They decided how we were going to keep score, wrote the rules by which everyone had to play, hired the scorekeepers and judges, and then published their predetermined findings.
An ancillary effect of this testing craze was that legislation mandating increased testing caused an enormous increase in spending on education. Detractors of public education used this increase in spending, together with "flat" test score results, to hammer their message: that spending money on class size, special education, teacher salaries, better technology, and the like does not help - just look at the flat scores!
These detractors have been empowered to set the agenda, set the scoring metrics, and use the power of Public Relations to hammer Public Schools and force public school systems onto the defensive.
Back in my basketball coaching days, if my opponent was able to set the "tempo" of the game, we were playing catch-up and were put on the defensive. Public school systems need to get better at telling our story and showcasing all the great work we do for ALL kids. To tell this story, we have to be the ones who decide how we are keeping score and by what metrics we are going to be evaluated!
In coaching, because of the inherently competitive nature of our world, we were less inclined to share our great practices, lest we allow our competitors to gain an advantage and beat us. This is becoming all too true in Education as well. Charter schools are pitted against public schools, home schooling is pitted against private schools, and educators are then inclined to "hoard" their secrets.
The excessive emphasis on "test scores" and single-metric evaluation has made Education a competitive sport instead of a cooperative endeavor.
It is time to set the score.
We now know that it is not possible to set one measure for learning. It takes the difficult process of a community deciding what is important and worth measuring, and then building a system around that. This means pairing universal comparisons for schools - test scores, graduation rates, and the like - with the flexibility to let each school, and even each educator, define the right scoring system, along with the evidence and research to back it up.
We need to get better at telling our success story.
Please join MIDAS Education in this work.