Quick Tip – There’s still time for students to enter our essay and video contests! Deadline is March 13th! (Note that students who entered the AI Challenge are not eligible for the essay contest, but may enter the video contest.)
Last year, I read well over 400 student essays for our annual contest. A lot of them sounded the same: polished, hollow, and written in a voice that belonged to no teenager I’ve ever met. I can’t prove AI wrote them, and I never intended to try. (That way lies madness! AI detectors don’t work, just FYI.) But after years of working with large language models—and a career spent paying attention to voice—I knew what I was reading. And I couldn’t bear to read another round of it.
So instead of fighting the current, we designed a new contest. If students are going to use AI anyway, we thought we’d build something that encourages them to use it well: to think critically, challenge what comes back, demand sources, and do their own reasoning. That’s how the AI Challenge was born. And now that submissions are well into the judging process, we’re seeing things we didn’t expect.
These aren’t final conclusions. But the patterns are clear enough, and honest enough, to share now. What students are telling us about how they used AI, what tripped them up, and what surprised them challenges some popular assumptions. And some of it will sound awfully familiar to any teacher who has watched a student hit a wall and decide to climb it anyway.
Effort is the Dividing Line, Not Access
Every student in this contest had access to the same free tools. The gap between submissions had nothing to do with technology and everything to do with what students chose to do with it. (Including whether or not they could follow directions!)
Some submissions coasted. Surface-level questions, surface-level answers, a reflection that read like someone checking a box. Others dug in, circling back, pushing harder, chasing the part of the answer that didn’t quite add up.
“AI works the strongest when it’s challenged. Asking surface-level questions only results in surface-level responses.”
That’s a student talking, not a teacher. And it’s a lesson some of them clearly learned the hard way. Another student put it more personally:
“However, my understanding was very surface-level, and I did not fully realize how complex and far-reaching the effects of tariffs actually are.”
No defensiveness there. Just honest recognition that the tool gave back exactly what the student put in. The more the student kept questioning, the more they learned. AI alone didn’t level the playing field. If anything, it made differences in student effort impossible to hide.
Agency Beats Prompt Cleverness
There’s been a lot of talk lately about writing “good prompts.” What we’re seeing suggests that prompt technique is secondary. A simply worded question often worked just as well as a highly engineered one. (At least in the context of what we asked students to do with AI in this challenge. Prompting does matter in other contexts.)
What mattered was whether a student saw themselves as the one in charge of the conversation.
Students who treated AI as an authority accepted whatever came back. Students who treated it as a sparring partner pushed back, redirected, and refused to settle. One student nailed the distinction:
“AI’s assistance is contingent on the questions I ask. The depth and usefulness of the output depend on my input.”
Another described the shift in real time:
“I had to push back and ask extra questions… This made the chat more useful and helped me form my own opinion instead of just accepting what the AI said.”
And then there was this, which we’ve been quoting around the office ever since:
“Accountability and validation are the prompter’s responsibility, not the AI.”
That’s not a teacher or a tech executive. That’s a student who figured out where the buck stops. (Insert celebration here! Well done, young padawan.)
Iteration Isn’t Instinct. It’s a Skill.
Even students who showed real agency often stopped too soon. A serviceable answer appeared, and the conversation ended. The strongest submissions kept going—asking who benefits, who pays, what gets left out, and what breaks when you push the argument to its edges.
One student described crossing a threshold we rarely see in traditional assignments:
“The discussion soon evolved into a challenging debate prompting me to pause and conduct further research outside of the chat to respond thoughtfully.”
Read that again. The student left the AI to go do research so they could come back and argue better. That’s not a student being replaced by technology. That’s a student being provoked by it.
Others discovered what happens when you stop agreeing:
“As I contradicted what ChatGPT said, my understanding became more refined.”
And another noticed a direct cause-and-effect between pressure and quality:
“When you challenge the AI it ends up formulating sources from the two opposing sides.”
Here’s the uncomfortable part for educators: many students have never been taught that learning often starts after the first answer.
Iteration isn’t something most of them do naturally. It’s something they have to be taught.
The Judgment-Free Discussion Zone Nobody Expected & Everyone Needs
This one is so important! Again and again, students described their AI conversation as a space where they could explore both sides of an issue without worrying about what anyone would think. No classmates listening. No teacher grading the opinion. No social cost to trying on an unfamiliar argument.
One student captured it perfectly:
“Instead of treating disagreement as something to dismiss, the conversation treated it as something to analyze.”
Several students entered the conversation sure of where they stood. They didn’t necessarily leave with a different opinion—but they left with a different understanding of why the other side exists.
“After my discussion, my views are now more open to both sides of the argument.”
And then there was this reflection, which stopped us in our tracks:
“I came in ready to debate the issue of term limits. I am going out honestly unsure—and that seems to be a valid feeling.”
That uncertainty isn’t confusion. It’s what happens when a student sits with complexity instead of running from it. In most cases, engaging with the other side didn’t paralyze students. It sharpened them. Some refined their original positions. Some softened their certainty. A few changed their minds entirely. What mattered is that they engaged with the mess rather than pretending it wasn’t there.
In a society that has lost the ability to discuss different points of view, these students found a way to do so. That gives me so much hope.
Students Trust Confidence. That Can Be a Problem.
AI writes with the calm authority of someone who is never wrong. Students noticed—eventually. But not always fast enough.
Some assumed that a confident, well-structured answer was automatically a complete and neutral one. Others caught the gap between sounding right and being right:
“I realized that confidence doesn’t always mean neutrality or completeness.”
Another student admitted to being caught off guard:
“The information sounded confident and well written at first… I did not expect AI to have wording that could be misunderstood.”
The strongest submissions showed students developing this radar in real time—learning to question tone as much as content. One student summed up the new ground rules:
“I have to make sure that what I am seeing is true, non-biased, and especially not hallucinated.”
In a world where the most polished answer is one click away, validation isn’t a bonus skill. It’s the skill.
How many times have you told them to check their work? A gazillion? (And mostly, they don’t listen. I know.) Now they need to realize they must check the AI’s work as well. You can’t just copy and paste from the AI.
For one thing, that’s how you leave the prompt in and get caught using AI where you’re not supposed to! (Like the one AI prompt I discovered in the middle of a student’s…errr, an AI’s…reflection.)
Polish is a Terrible Proxy for Thinking
This one is for the teachers and even our judges. Some of the most impressive-sounding submissions had the least depth. Big vocabulary doesn’t equal big meaning. Meanwhile, some of the strongest thinking showed up in work that was rough, uneven, and unmistakably human.
One student described an outcome that would look like failure in most classrooms:
“Ultimately, I was left divided, an outcome I was not expecting… I don’t view this as a negative outcome.”
Another drew a line between thinking and copying:
“My conclusions arose from deliberate reasoning rather than uncritical acceptance.”
And sometimes the most revealing insight was the simplest:
“My thinking changed a lot as we talked.”
Six words. No polish. Completely real. That sentence tells us more about what happened in that student’s head than a thousand words of borrowed eloquence ever could.
My own confession? After reading a bunch of AI chats on the same topics, my own thinking shifted as well. It pushed me along lines I hadn’t considered before. That’s powerful!
What This Means (So Far)
These observations are early. We’re still reading, doing final round scoring, and looking for patterns in the work. But a picture is already forming.
AI is not the shortcut its critics fear or the miracle its cheerleaders promise. In the hands of a student who has been taught to question, push back, and sit with uncertainty, it’s something far more interesting: a thinking partner that never gets tired, never judges, and never lets you off the hook—if you don’t let it.
We’ll share more after the contest winners have been announced. For now, we’ll leave you with the single most important thing we’ve learned so far: the students who got the most out of AI were not necessarily the ones who already knew how to use it.
They were the ones who refused to stop thinking.