AI is Everyone's Responsibility

Dr. Leif Nelson, Executive Director of Learning Technology Solutions, Boise State University

Artificial neural networks, the computational models underlying modern large language models (LLMs) like ChatGPT, are not new. They were first used in the 1950s to reduce echo and other background noise in telephone calls. Around the same time, a group of scholars coined the term “artificial intelligence” to describe the kind of self-teaching, pattern-recognition algorithms they were researching (and to acknowledge the inspiration they took from neurobiological structures). Nearly 70 years later, there is much rhetorical noise in the current “AI summer” as LLMs dominate headlines, capture the public’s attention, and find use in a wide range of applications. Today’s AI is upending dearly held assumptions about the value and nature of work, education, and human creativity. ChatGPT and its ilk (Bing, Bard, et al.) have prompted legitimate concerns about the practice of writing and the propensity for student cheating, about hallucinations (or “confabulations”) and fake news, about occupational efficiencies and industry disruptions, and about usefulness and harmfulness, along with myriad predictions, ranging from grandiose optimism to catastrophist pessimism, about how this technology will indelibly alter the future of humanity.

Despite being at the center of so much hype and hysteria, AI itself is not the problem; rather, many of the problems ascribed to the current generation of generative AI are byproducts of the broader social conditions in which these tools exist. From an economic standpoint, the attention and social media industry already extracts, exploits, and regurgitates whatever human content and data it can. In a similar vein, current generative AI tools are “trained” on prior art, and an emerging challenge arises when the “black box” problem obfuscates questions of ownership and attribution. From an environmental standpoint, data may indeed be the “new oil”: the cloud computing industry has recently surpassed the airline industry in carbon emissions, a problem undoubtedly compounded by the compute power needed to run LLMs. And in education, a growing preoccupation with performance and completion metrics, coinciding with the shift to online and digital environments, has created a climate in which students may be inclined, or even encouraged, to use tools that improve the speed and quality of their outputs, even though doing so may shortcut some of the harder, more tedious work that is conducive to “good” learning.

Some of the largest companies in the world have been quick to hop on the AI bandwagon, releasing these tools with unprecedented speed and imprudence. Though the web was once seen as a kind of virtual, global “public square,” from its almost anarchic inception in the 1990s to the democratized “web 2.0” of the mid-2000s, it has increasingly become a commercialized marketplace of influencers and followers with their likes and views, all underpinned by algorithmic ad sales on a handful of juggernaut platforms such as Google and Facebook. The (unsurprising) lesson to be gleaned from this evolution is that those with the most capital will do whatever they can to grow in scale and maintain dominance. Ethical violations are downplayed and deflected. In congressional hearings and litigation, leaders from these big tech companies assume a coy posture, insisting that their products are “just platforms” and that they are not responsible for how those platforms are used and abused (even when they are). Almost as if anticipating a future need for plausible deniability, contemporary AI leaders like Sam Altman are already warning of the dangers of their own products.

Many claim that the era of AI is inevitable and that the best, if not only, course of action is to understand how to use these tools well and responsibly. This perspective may be fatalistic, but it is also realistic and has merit. On the one hand, hundreds of millions of people using a particular tool or platform may be seen as a feather in some tech executive’s cap; on the other, it is a powerful reminder of the responsibility everyone bears for their individual habits and practices in using these tools. Here are five recommendations for the use of AI that take into account how individual activities relate to broader societal impacts.

1. Use AI judiciously to conserve energy, safeguard sensitive data, and ensure that the interactions these systems are trained on are of good quality.

2. AI is a tool, and there are times to use a given tool and times to set it aside. Dependency on tools can become a crutch. Terms like “hand-made” or “made from scratch” connote quality, care, and wholesomeness. Perhaps writing “from scratch” (i.e., unassisted by AI) will take on a similar meaning as varying degrees of “assisted” writing become the norm. Spend more time writing by hand. Literally, with a pen or pencil and paper. The embodied act of handwriting will feel different, and more authentic, than typing in a word processing application.

3. Teachers and researchers should move past the false dichotomy of having to either “embrace or ignore” AI and instead consider how AI might impact their respective fields and disciplines in the coming years. They should have discussions and engage in reflexive praxis with peers and students on this topic. 

4. The risk of misinformation, bias, and social manipulation, whether intentional or unintentional, being promulgated by AI tools is high. Critical thinking and information literacy skills will be more important than ever.

5. The real promise of technology and automation should be to reduce the need for humans to carry out routine, monotonous tasks. Use AI for these types of tasks, and spend the extra time doing things that foster human relationships or creative capacity.

On a macro level, it will be the responsibility of governments and other agencies to develop overarching policies and guidelines for the AI industry. These efforts will take time and will have some degree of brittleness as the technology evolves. In the meantime, everyone should educate themselves about AI and its potential promise and downsides. Progress, in its crudest sense, is the accumulation of individual human decisions and actions. When those decisions and actions are well informed and account for potential consequences, everyone stands to benefit.
