Human beings generally dislike dissonance in any form, but cognitive dissonance is something we not only dislike but actively work to avoid. As I’ve written before—and experience daily—I live on the fence when it comes to AI. What’s been fascinating in my work with schools and school districts is how quickly the narrative has shifted from fear and uncertainty to optimism, even outright cheerleading, around AI in education. This is largely a result of people learning about and interacting with AI. Naturally, as we reduce our ignorance, we also reduce our fear. The challenge, however, is that this shift can sometimes lead us to dismiss legitimate concerns about AI. I’m not convinced that’s the best approach.
The other day, I was speaking with Tom D’Amico, a recognized leader and advocate for technology and AI in education. Tom’s reputation is well-earned through years of thoughtful work in this space. He shared a story about his daughter, a teacher, whose students expressed discomfort using AI and requested alternative assignments. Tom was excited that students were speaking up and that teachers were responding by offering multiple ways for students to demonstrate their learning. This story is likely not unique. It reflects the diverse and evolving attitudes toward AI—a growing tension that also presents an opportunity.
Beyond the tensions around student comfort and productive struggle, the promise of efficiency is introducing tensions of its own. We can’t deny AI’s ability to save time—an enticing prospect, especially in times of tight budgets and financial pressures. School districts are understandably exploring these opportunities, and I’m now spending more time supporting the business side of education as it navigates AI’s potential. But this pursuit of efficiency raises its own questions, and rightly so. Just because AI can produce efficiency doesn’t mean we should automatically embrace it.

My son recently shared a telling example from his workplace in the finance sector. His company runs a large call center fielding inquiries from seniors concerned about their finances. The company is considering using AI to handle basic and routine questions, leaving human agents to address more complex issues. On the surface, this makes sense. However, it raises an important point: those basic calls provide agents with a mental break from the more demanding interactions. Without that balance, agents risk burnout. While this is a narrow example, it underscores the nuanced tensions that accompany AI adoption—tensions that we need to acknowledge and navigate thoughtfully.
For educators, this moment offers a valuable opportunity. We can engage in rich, challenging discussions about the use and value of AI, while also designing differentiated experiences that honour varying comfort levels and perspectives. Too often, educators shy away from conflict. I would argue this is a time to lean into it. Some tensions are inevitable—AI is impacting us in ways beyond our control. Other tensions arise from the careful balancing of AI’s costs and benefits. These are precisely the conversations we need to welcome and examine more deeply. Tension is important and necessary.
Ultimately, fostering a culture that embraces these discussions will help us move forward as a community, rather than deepen divides. This is the work ahead of us—and it’s work worth doing.