Yerko Sepúlveda, Head of Community Engagement and Wellness (Former Director of Community Engagement and Belonging), Porter-Gaud School

Yerko Sepúlveda is a belonging advocate, researcher, and educational innovator focused on equity-centered leadership. He leads community engagement and wellness initiatives, teaches at Harvard Project Zero, and advances culturally responsive practice through consulting, training, and transformative learning partnerships across institutions.
Artificial intelligence in K-12 education has received a lot of attention lately. To make teaching easier, educators and administrators often recommend limiting AI to procedural tasks such as grading, scheduling, summarizing, and lesson planning, while protecting human-centered thinking elsewhere.
That instinct is understandable, but isn't it incomplete if we are truly designing student-centered learning?
Our pause isn't about where we use artificial intelligence. Rather, it is about how we understand its impact on student agency, belonging, and contribution. In the world we are preparing students for, efficiency alone cannot deliver what matters most in learning. Our use of artificial intelligence either activates agency and belonging or quietly replaces the conditions that make those things possible.
A K–12 perspective with broader stakes
I write this from within the K–12 system, not because these questions belong only to schools, but because K–12 is one of the last sustained spaces that intentionally cultivate patterns of agency, authorship, and participation. The ethical decision-making habits students develop here, and the ways they learn to interact with technology, do not disappear after graduation. They carry over into higher education, the workplace, and civic life.
AI in schools (or anywhere, really) isn't about technology at all; it is about agency. We should be talking about who makes decisions, who carries responsibility, and who is positioned as a contributor rather than a user. When those dynamics shift, people experience participation differently. AI conversations inevitably become belonging conversations, because whether people feel truly part of a system depends on how they experience themselves within it.
Belonging is not comfort; it is contribution
Over the last 50 years, belonging research has identified a range of indicators: feeling safe, valued, appreciated, respected, included, and loved. Yet the research consistently points to a deeper reality: people feel most connected when they are contributing as members of a community. Feeling seen and present is not enough to belong. Being needed is what makes you feel you belong.
“If AI is to support belonging-as-agency, students must be invited to think with and about the tools they use to examine how technology shapes their learning and their responsibility to the communities they belong to.”
Making decisions and acting on them is central to belonging, not an add-on. When a learner's thinking matters, when their choices carry responsibility, and when their participation shapes something bigger than themselves, they experience belonging. A student-centered learning environment places people at the center, not at the margins.
If AI is framed primarily as a tool for efficiency and optimization, it may support participation without contribution. Learning may feel easier, but it may also feel less meaningful. Comfort can coexist with passivity; true belonging cannot.
Beyond procedural uses of AI
In today's K-12 schools, the prevailing consensus suggests artificial intelligence should be used mostly for procedural efficiency, with human thinking preserved elsewhere. This framing assumes students think, act, and belong outside of technological systems. But they don't. Today's students have grown up inside those systems, so how can we separate technology from their everyday lives?
The distinction matters: generative AI produces content in response to specific prompts and immediate input, whereas agentic AI is designed to act, manage processes, make decisions, and keep work moving. Neither can produce belonging; what each does is either preserve or remove the responsibility that belongs to the human using it.
When AI is used to generate content, the clear risk is intellectual outsourcing. When it is used to carry out learning tasks with minimal student involvement, the risk is greater: it erodes authorship, ethical decision-making, and contribution. Both cases raise the same question: does this technology expand learners' ability to contribute meaningfully, or does it quietly displace them?
This is why educators worry when students use artificial intelligence only for efficiency. Learners unconsciously absorb the lesson that speed matters more than meaning and purpose, and that a bot-produced product is better than their own. When AI reduces opportunities for decision-making and responsibility, belonging is not protected.
Students are learning more than content
Artificial intelligence is not only being used by learners; it is also shaping them. Students quickly internalize the values embedded in the systems they use. These systems shape behavior by signaling what is rewarded, what is automated, and what requires human judgment, much as behaviorist learning models did. Without intentional reflection, the tool takes over learners' thinking. Over time, participation becomes about moving efficiently through tasks rather than contributing something meaningful.
This is not a failure on the part of students or educators. It is a predictable developmental outcome: people adapt to, and are enculturated by, the systems they participate in.
If AI is to support belonging-as-agency, then students must be invited to think with and about the tools they use: to examine how technology shapes their learning and their responsibility to the communities they belong to.
A different lens for AI adoption, some provocations
Rather than asking whether artificial intelligence should be paused, limited, or restricted to procedural tasks, schools and organizations more broadly would benefit from a more demanding lens:
• Does this use of AI expand learners’ capacity to make meaningful decisions?
• Does it strengthen relational connection and shared responsibility?
• Does it support psychological safety in service of contribution, not mere compliance?
• Does it position people as authors and contributors, not just efficient users?
If we get this wrong, K–12 students carry these habits with them into higher education, civic life, and their professional lives. Organizations inherit employees who work efficiently but feel disconnected from purpose. Universities inherit learners who know how to earn credentials but struggle to stay engaged. The underlying challenge is the same: high-achieving systems that value performance over participation.
Belonging requires more than human presence
Overall, the goal is not to preserve a pre-AI version of education; that is unrealistic. The goal is to ensure that as tools grow more powerful, people grow more agentic.
Belonging is not sustained by keeping technology at bay. It is sustained when individuals experience themselves as responsible, necessary, and impactful members of a community. The pause we need is not about slowing innovation; it is about deepening intention.
Artificial intelligence will shape learning and work whether we like it or not. The real question is whether it will shape people as contributors who belong in communities or as efficient participants who comply. The hope? That decision is still ours as we design human-centered learning.