I think that GPT-like models, in the way they are being developed and released by the AI research community and the way they are beginning to be consumed by society, are a double-edged sword. The term the AI research community now uses is "foundation models": large pre-trained neural network models, trained on massive amounts of data, that can be fine-tuned for a wide range of natural language processing (NLP) tasks. This goes back to the self-attention paper and the inception of transformer-based large language models (it seems like ages ago, but it happened less than six years ago!).
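To make the "pre-train, then fine-tune" pattern behind foundation models concrete, here is a minimal sketch using the Hugging Face transformers and datasets libraries. The checkpoint (distilbert-base-uncased), the IMDB sentiment dataset, and the tiny training run are illustrative assumptions, not details from GPT-4 or from this article.

```python
# A minimal sketch of the "pre-train, then fine-tune" pattern behind
# foundation models, using Hugging Face's transformers and datasets
# libraries. The checkpoint, dataset, and settings below are
# illustrative assumptions, not details from this article.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Start from a model already pre-trained on massive general-purpose text.
checkpoint = "distilbert-base-uncased"  # assumed example checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Fine-tune it on a small, task-specific dataset (here, sentiment labels).
dataset = load_dataset("imdb")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", num_train_epochs=1),
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(1000)),
)
trainer.train()  # only the comparatively cheap, task-specific step runs here
```

The division of labor is the point: the expensive pre-training over massive text happens once, and each downstream NLP task then needs only a comparatively small fine-tuning run like this one.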
These models, with OpenAI's GPT series (Generative Pre-trained Transformer) as their current poster child, are a spectacular feat of science and engineering that does not cease to amaze even those of us who are in the field. And the number and extent of use cases and applications they generate are difficult to fathom. They will affect every imaginable field of human endeavor. And they could bring incredible progress to humanity. They are doing it as we speak.
However, there is a dark side that we must not take lightly. We have already experienced the negative effects in earlier stages of smart AI: racial bias, derogatory characterization of minorities, toxic language, and massive levels of disinformation. Imagine just one use case of toxic AI 2.0: a model (like GPT-4) trained to be so eloquent that it can drastically affect human beliefs through simple conversation.
There is also the matter of automation, but I should leave this to economists, who can perhaps better explain how the disruptive power of these technologies will affect pretty much every activity in our lives, from studying to working to socializing. What are the economic consequences of this disruption, and how easily can human society adapt to these changes without major negative fallout? And what would such fallout mean for countries, cultures, and the world as a whole?
Yuval Noah Harari has raised the issue of diminishing human creativity due to lack of incentive: in other words, if you have an AI capable of creating text, images, sound, and video better than the average human being, why bother to continue producing new human expression? I would prefer to believe that this is not the case and that human creativity will prevail over any artificial form of expression. But Harari has a point, as there is a risk that reliance on these models could reduce human creativity as people become overly dependent on AI-generated expression. And this applies to software as well; GPT-4 codes quite well. I guess we will have to adapt, and perhaps human creativity will be smart-AI-aided creativity. And that is not necessarily bad.
" I guess we will have to adapt, and perhaps human creativity will be smart-AI aided creativity "
Then there is the concern of these models approaching artificial general intelligence (AGI), that is, models with the ability to learn, reason, understand, and solve problems in a way that is comparable to human intelligence. The issue, in this case, is that any AGI implies, by definition, superintelligence, as these models would complete these tasks at speeds orders of magnitude faster than the human brain. What this actually means for the human species is a matter of speculation, but it is very clear that it is a critical matter of control and alignment: controlling the AGI (being able to pull the plug when necessary) and aligning it with humanity. The latter is easier said than done: what do we mean by humanity when we have such diverse groups of humans populating the globe, with different nationalities, cultures, ethnicities, religions, beliefs, and levels of well-being? In the 2022 Expert Survey on Progress in AI, participants were asked, "What probability do you put on human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species?" The median answer was 10 percent. This is, for lack of a better word, CRAZY. (You can Google it; the survey is available online.)
Elon Musk and a group of AI experts and tech executives have called for a pause, or at least a slowdown, in developing systems more powerful than the current GPT-4. This is probably futile, and it is not going to happen; companies are not going to hit the brakes. Still, it is important to raise awareness, as there is too much at stake. The quest for smarter AI is becoming an arms race among large tech companies. Humanity has at least tried in the past to deal responsibly with potential weapons of mass destruction (think nuclear, biological, chemical, or genetic manipulation). I don't think we quite understand that smart AI, in spite of all its potential, is a new breed of WMD, and we are not treating it as such.
There is a good metaphor in a fantastic book by Stuart Russell, a professor of computer science at Berkeley and acclaimed AI expert (the 2019 book is called Human Compatible: Artificial Intelligence and the Problem of Control). Russell posits that if humanity were to meet an alien race, it would be the greatest event in the history of humanity, and that the arrival of an AGI would be equivalent to meeting a superior alien race. What would happen if, one day, humanity received an email from a superior alien race notifying us that they will arrive in a few years, decades at most? There would be mass hysteria; Russell uses the term pandemonium to even minimally describe the effect this news would have on us. Instead, our response to the potential arrival of a super-intelligent AI remains underwhelming. We take it as business as usual. The speed of technical progress in AI, and its results, are breathtaking. I am not sure what the solution is, but the problem is developing in front of our eyes, and humanity does not seem to take it very seriously.