Geoff Chapman

The Comeback Kid: Is formative assessment back for good?

Updated: Dec 31, 2023

Seldom a day passes without calls to reduce summative exams and replace them with the measurement of soft ‘21st Century’ skills. But how to measure these skills? The weight of historic evidence and day-to-day usage usually produces one answer: a summative exam. Yet formative assessment and its systems are starting to gain traction again. Is this a new trend or a re-heated flash in the pan? Will we recognise summative assessment in 2030 as we do in 2020?


The rationale proposed in Paul Black and Dylan Wiliam's 2009 paper 'Developing the theory of formative assessment' has washed back into educator practice and tech vendor development. Cross-pollination and blending of practice across education programmes and tech solution providers, in the drive to ‘assess better’, is now plainly visible. But haven't we been here before?


The hard-won public confidence and social justice element of fair, valid and reliable summative assessment, especially for school exams, underpins many critical decisions. Summative assessment’s exalted position is underpinned by a back catalogue of research, mainstream acceptance, and time-honoured paper-on-the-desk experience.


Using formative assessment systems to assess different skills for a more detailed, tech-driven approach was pushed to the margins, ignored by the mainstream, and often belittled. So what’s different this time round? Is it ‘once bitten, twice shy’, or is there renewed confidence built on real, successful deployments? The problem and the solution? “Anything a summative system can do, a formative one can do just as well.”


The last decade has witnessed changes such as credentialing, MOOCs and remote proctoring in many assessment situations. Some changes were initially technically challenging, expensive, and hard to manage. Approaching a new decade, a cross-pollination of good formative and summative practice means that test programmes have been modernised, scaled and secured.


Why is it different this time round? History is littered with test owners and institutions that wanted better assessment but curbed their enthusiasm. Some attempts were left as marginal, bespoke and esoteric system deployments that could not be replicated or scaled efficiently. Projects to move away from summative assessment, such as New Hampshire’s Performance Assessment of Competency Education (PACE) programme, have taken over 20 years to build, deploy, measure and refine.


At the start of the decade, the United States’ PARCC and Smarter Balanced school assessments were intended to promote “through-course” assessment: nearer to the chalk-face experience and more engaging for teachers. The technical and financial challenges inevitably meant that governing boards and funding organisations de-scoped their plans, de-risked the requirements, and retreated to the safe space of summative assessment.


There is growing, compelling evidence that solutions associated with formative assessment are becoming more insightful, better funded, and easier to adopt than their summative counterparts. According to analysts HolonIQ, venture capital investment in ed-tech reached an unprecedented US$8.2bn in 2018. Little, if any, of that amount was allocated to summative solutions. The technical challenges and barriers to formative assessment are being overcome, with more choice and flexibility coming from the supplier communities.


For example, the blurring of remote proctoring solutions into remote tutorial guidance and translation/interpreting, as well as proctoring/invigilation, is gathering pace. According to a recent report in the ed-tech journal EdSurge, test owners for school authorities in the US states of Louisiana, Georgia, North Carolina, and New Hampshire are now actively exploring a move away from summative assessment towards a formative blend.


Evidence from Oxford University's October 2017 landscape report on e-exams noted that while objective assessments had been conducted in its Medical Sciences Division and Department of Continuing Education for over 10 years, only 4% of survey respondents said they used electronic essay exam tools. Many of these were actually derived from the institution's Virtual Learning Environment. The Oxford report points to “patchy and fragmentary implementation” of actual e-testing/e-exam systems within Higher Education.


Supplier-side evidence from Dutch assessment company Andriessen claimed that over 90% of tests created by customers were wholly multiple-choice, with essay/extended response accounting for just over 3%. For many test owners, it's probably not a stretch to think that a formative system and its associated, emerging complementary tools might be good enough for the majority of their assessment needs.


But don’t the demands of summative assessment deserve a full-blooded, dedicated solution for delivery, test authors, psychometricians, and data analysts? Until quite recently, the dedicated software development time and marketing effort meant that there were generally two routes: on-screen versions of paper exams delivered in test centres, or bespoke test software developed for sizeable test-owner customers that could then be configured for others. Both amount to tech-enabling traditional, summative processes.


The test centre model is still steeped in a quasi-industrial, yield-management process: bums-on-seats, to use the vernacular. Summative test software suppliers struggle to gain large-scale, multi-market access, and are too often dependent on a few cornerstone, or even single, customers and funders. Following the money trail can point towards change.


Evidence from recent global ed-tech investment shows large flows (ironically) into test preparation tools in China and India, but also into talent management systems and back-office rationalisation. Put simply, the money isn’t following test centres or summative exam systems – it’s investing in the formative world. Solutions with minimal legacy baggage, such as the learning management system Canvas, have been successfully deployed into HE institutions and other environments, unencumbered by issues such as hosting and liberated by a modern, user-orientated interface.


So the future is still exciting, and formative assessment is in a much better position than it was in 2010. A smörgåsbord of technology options, once in the margins, now has a nascent body of evidence and clearly articulated case studies.


So is formative assessment the Comeback Kid? While it’s far too early to forecast the demise of summative assessment, opportunities await those who can join the dots between compelling formative assessment tools and established summative assessment practices, learn from previous deployment mistakes, and understand the drivers of macro-level funding movements.
