Our AI Crisis Isn't Technical. It's Human.

Fatma Mili, Professor, Grand Valley State University

In this article, Fatma Mili, Professor at Grand Valley State University, explores why our deepest challenges with AI aren’t technical but human. She argues that rethinking how we educate scientists, engineers, and humanists, by embedding ethics, social impact, and interdisciplinary reflection into the core of learning, is essential to ensuring that technology serves society rather than outpaces it. By confronting the myth of inevitable progress, she makes a compelling case for restoring agency, accountability, and critical inquiry to the building and deployment of powerful tools.

Science and technology have long shaped our civilization. Each breakthrough alters how we live, work, and relate to one another. Some changes arrive suddenly, others gradually, but all contribute to a sense that progress is inescapable. Once we make a discovery, we cannot “unknow” it. Once we build a powerful tool, we rarely shelve it. This feeds a dominant narrative: we’re on a linear, irreversible march toward a more advanced world.

The truth is more complex. We don’t just follow progress; we shape it. What we discover, fund, and build reflects cultural priorities. Our institutions, values, and norms promote specific innovations while sidelining others. These choices, about what questions we ask and what technologies we develop, are rarely acknowledged, yet they are deeply embedded in how we teach, think, and talk about science and technology.

When a new technology emerges, debate is often dismissed as anti-progress. The conversation is flattened into “for” or “against,” making it difficult to explore the more urgent and nuanced questions: Who benefits? Who bears the cost? And who gets to decide? Once a tool is deployed, these conversations often come too late to be effective.

This is why we must embed reflection on values, impacts, and unintended consequences into the early stages of technological development, especially in education. We must change the way we prepare future engineers, scientists, and policymakers.

We teach science as a fixed body of truth, divorced from the messy, imaginative discovery process. But it’s precisely this questioning of paradigms and imagining new possibilities that makes science powerful and exciting. Decades after Thomas Kuhn’s The Structure of Scientific Revolutions, we still present science as linear knowledge accumulation. This discourages creative minds and narrows our collective imagination. We’re comfortable pointing to historical examples where innovation harmed vulnerable communities, but we remain reluctant to question today’s technologies as they unfold.

The disconnect in the humanities and social sciences is just as troubling. We graduate students with little understanding of the technologies shaping their world. Many trust these tools blindly—or mistrust them entirely. Either way, their ability to question, critique, and guide technological development is diminished. As a result, we lose the ethical lens and critical inquiry that the humanities are uniquely positioned to offer.

“We must teach all students to ask, not only can we build this, but should we? Only then can we ensure technology serves humanity and not vice versa”

This gap is perilous in the context of artificial intelligence. Large Language Models (LLMs) and generative tools are advancing quickly and integrating into daily life with little oversight. Their capabilities impress us, and their ease of use seduces us. We feel compelled to adopt them lest we fall behind our peers, students, and competitors.

Yet we know these tools have limits. Many overestimate their intelligence or underestimate their risks. They absorb and amplify social biases. They hunger for data—often extracted without consent. They rely on undervalued labor, exploit communities already depleted of resources, and consume vast amounts of energy, threatening climate progress. It feels like the genie is out of the bottle—and we’re left scrambling to catch up.

Some compare AI to nuclear technology—another invention perceived as an existential threat. But unlike nuclear energy, AI has not yet sparked collective, global regulation. The pace of development continues to outstrip the creation of guardrails.

This is the moment when we need the humanities and social sciences most. To help us ask the right questions. To illuminate the costs we’re not measuring. And to confront the consequences of our siloed disciplines.

Can we afford to graduate engineers who innovate without considering social impact, or humanists who shy away from technology and so cannot interrogate its influence? Can we accept students who use powerful tools without assessing their ethical cost?

What if we started with something simple: asking every user to consider the real cost of a single AI query? Whose data trained the model? Who entered and labeled it, and were they compensated fairly? How much energy and storage did it require? Whose rights or privacy were compromised? Whose dignity was overlooked?
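For the energy question alone, even a rough back-of-envelope calculation can make the stakes tangible. The short Python sketch below multiplies out one user’s yearly query energy and the associated emissions. Every figure in it is an assumed, illustrative placeholder rather than a measured value; real costs vary widely by model, hardware, and grid.

    # Back-of-envelope estimate of the energy cost of everyday AI queries.
    # All figures below are illustrative assumptions, not measured values;
    # real numbers vary widely by model, hardware, and data center.

    ENERGY_PER_QUERY_WH = 0.3      # assumed energy per query, in watt-hours
    GRID_EMISSIONS_G_PER_WH = 0.4  # assumed grid intensity, grams CO2 per watt-hour
    QUERIES_PER_DAY = 20           # assumed queries by one user per day

    daily_energy_wh = ENERGY_PER_QUERY_WH * QUERIES_PER_DAY
    yearly_energy_kwh = daily_energy_wh * 365 / 1000
    yearly_emissions_kg = yearly_energy_kwh * 1000 * GRID_EMISSIONS_G_PER_WH / 1000

    print(f"One user's assumed yearly query energy: {yearly_energy_kwh:.2f} kWh")
    print(f"Associated CO2 under the assumed grid mix: {yearly_emissions_kg:.2f} kg")

Under these assumptions the individual footprint looks small, which is exactly why the exercise matters: multiplied across billions of queries, and extended to the far larger costs of training, labeling, and storage, the totals are anything but.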

These answers won’t always be available, but asking the questions is essential. It shifts our mindset from passive users to accountable participants. At a time when AI's rise tempts us to surrender to inevitability, education must do the opposite. We must dismantle disciplinary silos. We must teach all students to ask not only can we build this, but should we? Only then can we ensure technology remains in the service of humanity, and not the other way around.
