
By Barbara Oakley, best-selling author



Artificial intelligence is everywhere—headlines, classrooms, even students’ homework. For many teachers, that’s a little unnerving. Is AI going to replace authentic student work? Will it undermine the very skills we’re trying to teach?

The truth is, AI is here to stay. Instead of treating it as a threat, we can use it as an opportunity. When guided carefully, AI can actually become a tool for building one of the most important skills we want our students to develop: critical thinking.

AI has a knack for producing polished, confident answers. That can make it tempting for students to copy, paste, and call it a day. But polished doesn’t always mean correct. AI can:

  • Present outdated or inaccurate information.
  • Miss important context.
  • Sound authoritative while still being flat-out wrong.

That’s where teachers come in. Instead of banning AI, we can show students how to challenge it—how to treat AI like a conversation partner rather than an answer key.

Simple Strategies to Build Critical Thinking with AI

Here are a few classroom-ready approaches to help students go “beyond the bot”:

  • Ask the AI twice. Have students pose the same question in two different ways and compare the answers. What changed? What stayed the same? Why might wording matter?
  • Ask two (or more) AIs. Give the same prompt to two different AIs and compare the answers. What did they have in common? What was different? Did they disagree on any major facts? Check their sources.
  • Fact-check the output. Assign students to verify one or two key claims from an AI response using trusted sources. How reliable was the bot’s answer?
  • Play devil’s advocate. Encourage students to push back on the AI: “What would someone who disagrees say?” or “Can you argue the opposite viewpoint?”
  • Spot the bias. Have students analyze whether an AI response leans toward a particular perspective. What voices are included—or missing?
  • Reflect on the process. Ask students to write a short reflection about how they used AI, what they learned, and what they still questioned. This shifts the focus from the product to the thinking behind it.
  • Probe the AI’s reasoning. Take these activities even further by having students ask the AI to explain why it reached a particular conclusion or to outline its reasoning step-by-step. Then compare that reasoning to human logic or textbook explanations. This helps students recognize when AI makes unsupported leaps or faulty assumptions.
  • Use AI for brainstorming, not final drafts. It can help students gather examples, outline ideas, or clarify definitions—but the deeper analysis and synthesis should come from the student. By clearly defining AI’s role in the early stages of thinking, we keep ownership of the ideas where it belongs.
  • Require citations and source tracing. Make it a classroom norm that any AI-generated information must be checked against at least two reliable human sources. This turns fact-checking into a habit, not a punishment.
  • Treat prompt-writing as a literacy skill. Let students experiment with how phrasing, tone, or specificity affects the quality of AI responses. They’ll quickly see that the quality of the question determines the quality of the thinking.
  • Make it collaborative. Have small groups “coach” an AI together—refining prompts, evaluating results, and deciding what’s useful. The discussion that follows mirrors real-world teamwork and editorial judgment.
  • Try an “AI error hunt.” Give students a deliberately flawed AI answer and ask them to find what’s wrong. This turns critical reading into an active, engaging challenge. 
  • Add reflection through journaling. Ask students to keep a brief AI-use log explaining when, why, and how they used the tool, plus what they learned or changed because of it. Over time, these reflections strengthen self-awareness and digital responsibility.
  • Model your own AI use. When you use AI yourself—say, to create rubrics or brainstorm examples—walk through the process out loud. Show students exactly what you asked, what the AI gave you, and how you verified or modified the results. That transparency helps them see AI as a thinking partner, not a shortcut.

Above all, assess process over polish. Reward evidence of curiosity, questioning, and revision—not just how clean the final essay looks. When students know that the thinking is what counts, they’re more likely to use AI as a learning tool instead of a substitute.

Students are already experimenting with AI outside of class. By giving them space to explore with guidance, teachers can model responsible use. You’re showing students that:

  • It’s okay to use new tools—but they must be used wisely.
  • Critical thinking means questioning sources, no matter how polished they look.
  • Reflection and reasoning matter more than simply finding the “right” answer.

This not only strengthens academic skills but also prepares students for a future where AI will be part of nearly every career.

An Opportunity for Students: The AI Challenge

If you’re looking for a way to bring this kind of work into your classroom, Stossel in the Classroom is offering something new this year: The AI Challenge: Thinking Beyond the Bot.

High school students are invited to explore a topic using AI, question the responses they get, and reflect on the process. The challenge isn’t about producing a polished essay—it’s about showing how well students can use AI critically and ethically.

The deadline for entries is January 9, 2026, and it’s a great chance for students to practice skills that go far beyond one assignment. 

AI isn’t going away—but neither is the need for sharp, questioning minds. By teaching students to push back, verify, and reflect, we can help them move beyond the bot and toward real understanding.

And who knows? The next time you hear, “But ChatGPT said…”, you might also hear the follow-up: “…but here’s why I didn’t just believe it.”