Be first to read the latest tech news, Industry Leader's Insights, and CIO interviews of medium and large enterprises exclusively from Education Technology Insights
Modern institutions of higher education face many challenges. One is the rapidly changing technology environment, including large language models (LLMs) such as OpenAI's ChatGPT, Google's Bard, and Meta's Llama. When ChatGPT was released in November 2022, many educational systems issued statements against using such models to complete academic assignments, while other educational entities banned access to the technology outright. The use of this technology extends beyond the classroom, into admissions essays and other areas. How should an educational system respond? And what should an individual instructor do, given that students are likely to use the technology regardless of the system's official stance?
First, it is essential to acknowledge that LLMs are not going away. Research in both academia and industry is driving the models and their applications forward. Since ChatGPT's release in November 2022, both Bing and Google have embedded similar models into their search engines, and specialized models have been built on the same LLM infrastructure to generate images, convert text to speech, and extract text from images, to name just a few. LLMs are also being advanced with more parameters or different training sets, as in GPT-4.5, Llama 2, and others.
An LLM has three essential components: an encoder, a model, and a decoder. The encoder transforms the user's input into a numerical representation the model can work with. The model is a sophisticated architecture (the transformer is the standard example) that ingests that representation and generates output. The decoder converts the model's results back into text a human can read. The recent advances are in the architecture, which accurately predicts the probability of the next word given the entire input prompt and the results generated so far.
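The three-stage pipeline above can be sketched in a few lines of Python. This is a deliberately toy illustration, not a real LLM: the "model" here is just a bigram frequency table standing in for the transformer, and the corpus and variable names are made up for the example. It shows the same flow, though: encode text to token IDs, predict a probability distribution over the next token, and decode the result back to text.

```python
# Toy sketch of the encoder -> model -> decoder pipeline (illustrative only).
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the food".split()

# Encoder: map each word to an integer token ID.
vocab = sorted(set(corpus))
word_to_id = {w: i for i, w in enumerate(vocab)}
id_to_word = {i: w for w, i in word_to_id.items()}
ids = [word_to_id[w] for w in corpus]

# "Model": count which token follows which and normalize to probabilities,
# a crude stand-in for a transformer predicting P(next token | context).
follows = defaultdict(Counter)
for a, b in zip(ids, ids[1:]):
    follows[a][b] += 1

def next_token_probs(token_id):
    counts = follows[token_id]
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

# Decoder: turn the most probable next token back into text.
probs = next_token_probs(word_to_id["the"])
best = max(probs, key=probs.get)
print(id_to_word[best])  # prints "cat", the most likely word after "the"
```

A real LLM differs in scale, not in shape: the encoder is a learned tokenizer, the probability table is replaced by billions of trained parameters, and generation repeats this predict-and-decode step one token at a time.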
How should an educational system respond to this newest round of technological innovation? It should provide a framework and training for educators. The framework should clearly define plagiarism with LLMs in mind. The LLM itself cannot plagiarize, since it is a probabilistic model; students, however, can plagiarize from an LLM by claiming the model's output as their own. It is reasonable to require students to acknowledge that a specific LLM, or an application containing one, was used. It is also the student's responsibility to check the accuracy of the results, including citations. The model only knows that a citation is highly likely at a specific point in a sentence; it does not cite real references. The citations are likely made up, or "hallucinated." If made-up citations are submitted, there is a good chance a model created the entire submission, and without acknowledgment, that would be an academic integrity violation.

Institutions should also train educators at the appropriate level so that instructors can make informed decisions about incorporating LLMs into their classes. The training should cover the framework developed by the institution and methods for identifying whether a submission was produced by a model. The impact of LLMs is broad, affecting many disciplines, but not every educator has the same level of understanding or background, so the training should be tailored to the audience and likely customized to the specific discipline.
Educators should decide on the best ways to incorporate the new technology based on guidance from their institution, the curriculum they are responsible for, and their own knowledge of and comfort with the models. Assessments can be customized around the technology. For instance, a writing assignment could be modified so that the LLM generates the text and the student evaluates it, or so that the model creates ideas or an outline. In technology-based courses, the LLM could develop code, ideate approaches, or convert code from one language to another. It is essential to understand that an LLM is a model, and there is a famous quote in statistics, attributed to George Box: "All models are wrong, but some are useful." Assessments can incorporate this concept, helping students understand when the model is helpful and when it is not. Regarding ethical use, students also need to learn when and how to acknowledge the assistance an LLM provided.
The new technology should not be seen as a threat to the educational experience, just as moving statistical calculations from hand to software did not disrupt the application of the method. The educator needs to understand the assessment's goals in the context of the curriculum. For instance, if the equations behind a statistical technique are essential to the curriculum, the software should not be used, so the focus stays on the calculations. If the application of the method is what matters, software facilitates it, since the computer's speed allows many more examples to be worked in class and at home. In statistics, repeated application is often important and expected by future employers, so software is utilized. This example has many parallels, and each individual assignment can incorporate LLMs. It is up to the instructor to determine the specific outcome and design the assessment appropriately, including the use of LLMs.
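The hand-versus-software distinction above can be made concrete with a small example (the data here are invented for illustration). When the formula itself is the learning goal, students apply the definition step by step; when the application is the goal, a library call applies the same definition instantly, and the two agree.

```python
# Illustrative: computing a sample mean and standard deviation "by hand"
# (from the definitions) versus via Python's statistics module.
# The scores are made-up example data.
import statistics

scores = [82, 90, 75, 88, 95]
n = len(scores)

# By hand: apply the definitions directly.
mean_by_hand = sum(scores) / n
stdev_by_hand = (sum((x - mean_by_hand) ** 2 for x in scores) / (n - 1)) ** 0.5

# By software: the library applies the same definitions.
mean_by_library = statistics.mean(scores)
stdev_by_library = statistics.stdev(scores)

print(mean_by_hand, mean_by_library)    # identical means
print(stdev_by_hand, stdev_by_library)  # identical standard deviations
```

The instructor's choice is which step carries the learning: working the sum of squared deviations by hand, or freeing that time to interpret many datasets. The same choice applies to assignments that could delegate a step to an LLM.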
Incorporating this technology can seem challenging, but this is not the first time an innovation has caused disruption. LLMs likely represent one of the most significant technological leaps in a long time, and they caught the educational world by surprise. With the appropriate support, they can be incorporated into the classroom, just as they will be incorporated into the workplace.