The Generative AI Conundrum

Eitel J.M. Lauria, PhD. Professor of Data Science & Information Systems. Director, Graduate Programs, School of Computer Science and Mathematics, Marist College.

Artificial intelligence has been progressing steadily for many years now, but the achievements in the last fifteen years can only be deemed as spectacular, with the inception of GPU-powered deep neural networks and breakthroughs in computer vision and natural language processing that have completely changed our ability to learn and make inferences from perceptual data.

The latest cool kid on the block is Generative AI. Foundation model is a broader term currently used by the AI research community to characterize transformer-based large language models (LLMs) trained on massive amounts of data that can be fine-tuned for a wide range of tasks. Decoder-only LLMs, with OpenAI's GPT series as their current poster child, are among the most popular models used today, and they keep evolving in complexity and power. The number and extent of use cases and applications they generate is difficult to fathom. They will affect every imaginable field of human endeavor and could bring incredible progress to humanity. They are doing it as we speak, and the pace is breathtaking.

But the manner in which Generative AI is being developed and released, and the way society is beginning to consume it, make it a double-edged sword that should not be taken lightly. We have already experienced the negative effects of earlier AI systems: racial bias, denigrating characterization of minorities, toxic language, massive levels of disinformation.

Also concerning is the matter of automation, but I will leave this to economists, who can perhaps better explain how the disruptive power of these technologies will affect pretty much every activity in our lives, from studying to working to socializing.

The historian Yuval Noah Harari has raised the issue of diminishing human creativity due to overreliance on GenAI: in other words, if you have an AI capable of creating text, images, sound, and video better than the average human being, why bother to continue producing new human expression?

In my role as an educator, I often wonder how these technologies should be introduced in the classroom. They undoubtedly improve the productivity of domain experts; I experience this in my research when I interact with a foundation model using natural language, eliciting answers to questions I formulate, which in many cases lead to an enhanced creative process. But what happens with learners who don't know what they don't know? How should we use these tools to enrich the learning process without impairing students? We know now that the constant multitasking of users in Web 2.0, and the distractions it induces, have been detrimental to our attention span. We should not repeat the same mistakes with GenAI. We will have to adapt, and plan wisely, as the tsunami of AI-generated content seems inevitable. But we must be in control this time. We must align AI technology's goals with individual and societal goals, or the consequences could be dire.

The issue of control and alignment becomes much more critical when we consider the possibility of these models approaching superintelligent AI: models that can learn, understand, reason, and solve problems in a way that is comparable to human intelligence, but that complete these tasks at speeds orders of magnitude faster than the human brain. What this means for the human species is a matter of speculation, but it is very clear that controlling superintelligent AI and aligning it with humanity are both crucial matters. A July 5, 2023 announcement on OpenAI's site calls for the creation of a "superalignment" task force. The authors state that although superintelligence seems distant, it could arrive in the next ten years, and they go on to say that "currently, we don't have a solution for steering or controlling a potentially superintelligent AI". In the 2022 Expert Survey on Progress in AI, participants were asked about the likelihood of a superintelligent AI causing human extinction or severe disempowerment. The median answer was 10%. We don't actually know if or when superintelligent AI will arrive, but these statements are not only troubling, they are also reckless.

A group of distinguished AI researchers and tech executives has recently called for a slowdown in developing systems more powerful than the current GPT-4. This is probably futile: companies are not going to hit the brakes. Instead, the tech industry offers the term "responsible AI" as an all-encompassing solution to deal with the impact of these continuously evolving AI technologies. Some government regulation initiatives have emerged in the form of congressional hearings, incipient policies, laws, and administrative bodies that try to address societal concerns and ensure public safety, but much more needs to be done. It is essential to educate society and governments, and to raise awareness, as there is too much at stake.

Ezra Klein, in his New York Times column last March, put it this way: "There is no more profound human bias than the expectation that tomorrow will be like today... Typically, that has been possible in human history. I don't think it is now".

I couldn't agree more.
