“Producing an output is no longer an indicator of knowledge or skill.”
That reflection has been on my mind lately, as AI tools like ChatGPT, Gemini and others reshape how students learn and produce work. As Dean of BLC Spain, I've had a front-row seat to what may become one of the biggest transformations in education in decades.
What happens to the very idea of assessment in higher education when a polished essay, a marketing plan or even functional code can appear in seconds?
We’ve all seen the dramatic headlines. A new MIT Media Lab study, covered in TIME, found that students who relied on ChatGPT showed the lowest brain engagement across 32 EEG regions and produced essays teachers called “soulless.” The paper warns of “cognitive debt” from over-reliance on generative AI, though it is still awaiting peer review and was based on just 54 subjects, so the findings should be read with caution.
The problem isn’t the tool; the problem is what we’re asking students to do with it.
At BLC Spain, we are not just keeping up with this shift. We are building our Master’s programs around it. By combining advanced AI tools with real-world learning experiences, we prepare students for what today’s global job market actually demands.
In this post, I want to explore why assessment must evolve alongside these tools, and how we’re leading that change.

Output-Based Assessment Is Dead (or Should Be)
For decades, higher education has evaluated outputs — the final essay, the exam, the submitted project. But in a world where machines can generate those outputs faster (and sometimes better) than we can, marking the product tells us less and less about the learner.
AI isn’t just a calculator; it’s a collaborator. If a student can hand in something AI-generated, that doesn’t automatically mean they’ve cheated or learned nothing. It simply means we need to ask different questions:
- What process did they follow?
- What decisions did they make along the way?
- How did they verify information, select prompts, reflect on limitations?
Real learning now lies not in what is produced, but in how it is produced.

Assessment Should Mirror Real Thinking
Tech voices such as Balaji Srinivasan argue that the real bottleneck in the AI era is shifting from prompting to verifying AI output. “AI prompting scales, because prompting is just typing. But AI verifying doesn’t scale, because verifying output means knowing the topic well enough to correct the AI,” he notes.
In a follow-up post, he predicts that whole new job markets will form around proof-of-human and proof-of-authenticity. That idea has direct parallels in education: if outputs are easy to fake, our value shifts to guiding and verifying the thinking that produced them.
So instead of asking “What did you write?” we should be asking:
- “Why did you choose this approach?”
- “How did you fact-check the AI’s suggestions?”
- “What would you do differently next time—and why?”
This isn’t abandoning rigor; it’s redefining it for an AI-accelerated world.
What This Looks Like in Practice
At BLC Spain we are already redesigning assessments around process over product. Across our Master’s programs, students face a range of assessment formats: one-on-one interviews in which they defend how they built their campaigns; live debates and conversations in which they discuss topics in their own voices; and projects graded on both the final deliverable and the process behind it.
These formats are far harder to “fake,” and they mirror real-world work better than a traditional timed essay or a simple request to submit a “strategic plan.”
This Is Not the End of Learning. It’s the Beginning.
I’m not against AI — far from it. Like the printing press, the calculator or the internet, it’s expanding what’s possible. But if we keep assessing only the final hand-in, we’ll miss the point. I believe shifting our focus to how students think, reflect and grow — even when using AI — doesn’t dilute learning; it deepens it.
Because ultimately, education isn’t about outputs. It’s about humans becoming more capable, aware and intentional when connecting with each other.
Héctor Sampedro – BLC Spain Academic Dean (June 26, 2025)
References
- Chow, A. R. (2025, June 23). ChatGPT May Be Eroding Critical Thinking Skills, According to a New MIT Study. TIME. https://time.com/7295195/ai-chatgpt-google-learning-school/
- Hudson, E. (2025, June 19). ChatGPT harms brain activity: study. LinkedIn News. https://www.linkedin.com/news/story/chatgpt-harms-brain-activity-study-6936409/
- Srinivasan, B. (2025, June 4). AI PROMPTING → AI VERIFYING… [Post]. X. https://x.com/balajis/status/1930156049065246851
- Srinivasan, B. (2025, June 24). AI doesn’t do it end-to-end… [Post]. X. https://x.com/balajis/status/1937517664907460980
- Srinivasan, B. (2025, June 24). AI will create many jobs in verifying AI… [Post]. X. https://x.com/balajis/status/1937518143611793653