
🧠 Revenge of the knowledge-based curriculum

16 February 2025

Unknown knowns and known nonsense.

The ever-interesting Ben Thompson has been playing with OpenAI’s new Deep Research tool. His write-up of how he used it to generate a report on Apple’s earnings announcements is worth reading, but this section stood out to me:

The issue with the report I generated […] is that it completely missed a major entity in the industry in question. This particular entity is not a well-known brand, but is a major player in the supply chain. It is a significant enough entity that any report about the industry that did not include them is, if you want to be generous, incomplete.
It is, in fact, the fourth categorization that Rumsfeld didn’t mention: “the unknown known.” Anyone who read the report that Deep Research generated would be given the illusion of knowledge, but would not know what they think they know.

Ben Thompson, Deep Research and Knowledge Value, 2025

Unseen and unknown🔗

Sound familiar? In teaching, we face that illusion of knowledge twenty times a day.

The stuck student is easy to identify and intervene with; it’s the confident but compromised one that’s hardest to fix. We’re too easily fooled by flowing paragraphs and correct responses to prompted questions. Often, one only spots the depth of the misunderstanding in a high-stakes, unscaffolded assessment on unfamiliar content, where falling back on routine and skills leaves glaring gaps in answers.

“However, to conclude, it can clearly be seen that the allegorical play is…”

The unseen poetry portion of GCSE English is the purest crucible for this: students diligently marching their way through PETAL paragraphs but totally missing the central conceit of the dramatic monologue or allegory, simply because they lack the knowledge of those forms.

Most heart-breaking of all is the shock with which the grading of these sorts of answers is usually met: the student doesn’t know what they didn’t know. From their perspective, their perfectly planned answer used all the right sentence stems, ticked all the AOs and followed all the rituals from the classroom.

Because when it comes down to it, knowledge trumps all else.

The curriculum pendulum may be about to swing back from knowledge to skills. But in a world full of fluent-but-fallible bots, that would be a mistake: our students desperately need a solid grounding in facts to spot when hallucinations creep in.

Trust but verify🔗

I’ve no quick fixes for this in English, other than that cowbell of Literature study: read moar.

But it does prompt a useful, meta-cognitive lesson for students to take and apply outside of the classroom and exam hall.

Large Language Models are going to play a central part in their working lives (and should!). Teaching them to use the tools properly, modelling safe and accurate usage, is a duty we as educators have to embrace today.

As Ben notes above, the models are extremely good at behaving like over-confident GCSE candidates, producing fluent paragraphs that miss key information. If our students can become domain experts, then they can be equipped to spot when these Rumsfeldian knowledge gaps emerge, and challenge them.

To outsmart the machine is a pretty awesome experience for anyone, let alone a neophyte dipping their toes into the pool of academia for the first time. What better confidence boost can there be than telling ChatGPT it’s got it wrong, and having it agree? The best way to learn is to teach, so teach that chatbot who’s boss.

Gell-Mann and GPT🔗

Above all, an interaction like this can pierce the veil of infallibility and teach students an essential lesson in Gell-Mann Amnesia:

You open the newspaper to an article on some subject you know well. […] You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect.

In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know.

Michael Crichton, 2002 (yes, that Michael Crichton)

Swap ‘newspaper’ and ‘journalist’ for ChatGPT, and you’ll see exactly what Ben Thompson is driving at. Of course, Crichton’s point is that we forget this once we leave our domain of expertise, slipping back into trusting the authority of fluency.

But for a brief, shining moment, our students can know how right they are, and know that these tools exist to augment their knowledge, not replace it.