I continue to marvel at Ellen Condliffe Lagemann’s An Elusive Science: The Troubling History of Educational Research. That book and Clark Kerr’s The Uses of the University (which I’ve now read twice) have been astonishing experiences for me this term. I wish I’d found them both earlier, but I’m glad I’ve found them now.
I wrote about Lagemann’s book in my last post. I want to continue with another, more focused look at a section called “Developmental Perspectives.” Here Lagemann tells the story of the rise of behaviorism as the fundamental paradigm of educational research, a paradigm that devolves into a kind of “social bookkeeping.” (The phrase immediately brings to mind some of the extremes in the new craze for web-based “analytics.”) Yet even as that rise was happening, there were dissenting voices, warnings, even temporary halts in the headlong rush to reductive measures and models of human learning. One such warning came at the very moment the Educational Testing Service was about to be founded. As Lagemann tells the story, “the original proponents of such an organization were William S. Learned and Ben D. Wood, the directors of the Carnegie Foundation’s Pennsylvania Study.” They wanted to keep academic standards high, a laudable aim to be sure, but their models of cognition were narrow and simplistic. Like the miasma theorists in Steven Johnson’s The Ghost Map, who thought cholera was caused by bad air rather than water-borne bacteria, these experts were well-intentioned but working from a paradigm of fixed innate ability and stimulus-response learning whose basic assumptions were wrong. We are still living with the dire consequences in many ways, including systems of educational “assessment” that use commodity methods to produce commodified learners.
Carl Brigham tried to intervene. He was not anti-testing. In fact, Lagemann tells us he was a psychometrician who had helped to develop the SAT and “was working to improve the SAT and other tests.” (More context: earlier in his career, Brigham had espoused racial theories of intelligence that he later disowned. That break with his earlier views shaped many of the concerns he went on to express about the uncritical adoption and use of standardized testing. You can read some of his story in this fascinating Frontline interview with Nicholas Lemann.) What Brigham opposed was not testing, but a testing industry that encouraged schools to adopt these instruments uncritically and use them crudely, without an adequate understanding of the complexities of learning, particularly the social aspects of learning. Here’s how Lagemann describes Brigham’s effort, and his rationale:
In an article published in School and Society, as well as in correspondence with J. B. Conant, whom Learned and Wood had enlisted to help their cause, Brigham had expressed grave concern about two matters. The first was “premature standardization” – developing norms to give meaning to test results before the full significance of what had been tested was fully understood. The second concern was that there had been a lack of research into questions that were essential if tests were to be meaningful. As Brigham explained, “the literature of pedagogy is full of words and phrases such as ‘reasoning,’ ‘the power to analyze,’ and ‘straight thinking,’” none of which is understood. Unless there was more research into such fundamental processes, Brigham insisted, testing would interfere with efforts to develop reasonable objectives for education [my emphasis]. Claiming that the demands of the market and the claims of “educational politicians” had stunted the development of a valid science of education, Brigham feared that sales would overwhelm the research functions of a large permanent testing service. As he put it, “although the word research will be mentioned many times in its charter, the very creation of powerful machinery to do more widely those things that are now being done badly will stifle research, discourage new developments, and establish existing methods, and even existing tests, as the correct ones.”
Brigham’s words could have been written yesterday. His warnings are still urgent, perhaps even more urgent than when they were first written. Yet they haven’t been heeded, and the results have not been pretty. When Campbell’s Law kicks in, true insight disappears:
The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.
Under these circumstances, the “powerful machinery to do more widely those things that are now being done badly” will also shape the entire schooling experience so lopsidedly that whatever the original test sought to measure, even imperfectly, can no longer be measured at all. Instead, the practices begin to measure themselves, untethered from complex realities, and to distort, even eliminate, the contexts in which deep learning can occur. Yet we will have self-validating data to make us feel we’re making progress, and a steady market for more feature-laden varieties of (proprietary) porcine lipstick.
Lagemann tells us that Brigham was right: “the very existence of ETS helped perpetuate existing educational practices,” and “for a time turned scholarship in education away from the progressive purposes that had been so central to it during the interwar era.” The consequence was a shift from trying “to improve the effectiveness of instruction” toward the different goal of “perfecting instruments of selection,” a shift that persisted until the “cognitive turn” of the 1960s.
And now here we are in 2011, with a system that continues to appear to distinguish “academics” from “education.” Have we now come to the point in higher education at which the high-stakes testing world of NCLB and its kin, amplified by the worst models of computer-aided instruction, has concealed from us the choices we are making by selling us perfected instruments of selection in the guise of improved educational effectiveness? I often think so, and the thought frightens me. We’re being sold miasma meters to wave around instead of accepting the challenge of thinking hard about complex questions and designing our systems to be elastic enough to prevent the “vendor lock-in,” literal and metaphorical, of institutionally palatable patent medicines that will forever stunt our capacity for intellectual growth.
What could be more disastrous for a democracy?
I was sent this the other day:
https://secure.wikimedia.org/wikipedia/en/wiki/Cobra_effect
which feels similar: i.e., getting more of something that fits the reward, but not necessarily a better outcome.
Gardner,
Do you think we should assess student learning? If so, how would you propose we do so? And how would you allocate institutional resources based on differences in student learning? Finally, if technology can be used to facilitate teaching & learning, is there a role for doing so in assessing those dynamics?
Fact: less than 60 percent of students who start college actually finish, and less than 30 percent of Americans over 25 have a bachelor’s degree (ranking the U.S. #12 amongst industrialized countries). What, if anything, should we do about this?
My worry is that the “honeymoon” of Ed Tech’s “potential” to be transformational is nearing an end, if not already over. It’s time to show what it can AND can’t do. How should we go about doing this? Otherwise, why invest more time & resources into it?
Thx,
John
@lucychili: I hadn’t heard of the Cobra Effect. That’s perfect. Thanks!
@John: See next blog post. 🙂
The honeymoon may be coming to an end, but what have most folks at most institutions done save dress up Blackboard as yet another solution to assessing teaching and learning, when it does anything but? Seems to me that the honeymoon for edtech has not even begun, because few institutions, if any, have truly grasped, let alone tried to implement, the transformational potential of this cognitive explosion. How can it be over? What is your timeline? And if a BA is truly the solution to the US becoming the most educated industrial nation (though I would argue that stat might be problematic), why wouldn’t we spend the necessary time, energy, and money to immerse ourselves in the messy complexity that is learning? It seems to me there is a bottom line here that is imaginary and counter-productive to the process of exploring—something institutional education seems to be getting better and better at stripping away from the learning experience.
Pingback: “Here I stand” – Campbell’s concerns on analytics and other stuff « The Weblog of (a) David Jones
Dear Gardner,
I’ve enjoyed getting to know how you see the world, starting with your webinar on the learning analytics MOOC, and here on your blog there is much else of interest. I also share your passion for Doug Engelbart’s work – one of my heroes.
I started to compose a reply to this blog post, but it grew into something that in the end I put on my blog:
http://people.kmi.open.ac.uk/sbs/2012/08/learning-analytics-are-assessment-regimes
Yours,
Simon