The world of higher education, and the world in general, has not stopped chatting about ChatGPT and how good its responses have been to the burgeoning volume of requests of all types made to the AI website. Many have been exploring the dozens of similar platforms, some of which come from OpenAI, the organisation behind ChatGPT and DALL-E (which will create art on demand in the style of your choosing). Yet the discussions in higher ed seem to have been almost entirely restricted to how students could use the Turing-test-surpassing ChatGPT to create essays that would not be identified by traditional plagiarism-checking software, and to what educators can and should do to avoid such risks of cheating. By the time this is published there will be yet another announcement of a platform that can identify content created by A.I., or at least give a probability of its accuracy.
But first, let us remember that cheating and plagiarism existed long before A.I., long before online exams and assessments, and long before plagiarism-checking software such as Turnitin was available. Some have proposed returning to exams, as if they were cheat-proof (would we need invigilators and proctors if that were the case?), or even that we should return to students submitting assignments with pen and paper (as if no one had ever cheated with those tools).
There could be a time in our students’ careers when plagiarism could be considered a virtue if it met the criterion of ‘innovative combination’ where, according to you.com (another AI Bot), “An invention that uses a combination of existing elements to solve a problem is typically called an ‘innovative combination’. An invention is considered innovative when it combines elements in such a way that the result is a new product or process that solves a problem in a novel manner”. That is, if a student uses A.I. as a tool to help create new solutions, isn’t that ‘innovation’ rather than ‘plagiarism’?
However, this article is not intended to discuss the merits or otherwise of assessment tools, except to say that the research consistently shows that exams do not work as effective pedagogical assessments, and that at the university level we should surely be trying to assess our students in ways that mimic how they will be assessed in the workplace. How many people, apart from Olympic athletes, have their entire year’s performance assessed on the basis of one three-hour stint?
At Hult International Business School, the mission is ‘to be the most relevant business school in the world’, which means preparing students at the bachelor’s, master’s, MBA and executive education levels to make a positive impact in the workplace and beyond.
At the undergraduate level, the redesigned Bachelor of Business Administration teaches and assesses students on the Core Skills and Mindsets demanded by employers, identified through a big data approach in partnership with Burning Glass Technologies that mined many millions of online job postings worldwide for the types of positions BBA students fill after graduation. This ensures that all students, through the first three mandatory modules, learn how ethics, responsibility and sustainability should be a red thread running through every decision they make in business and in life, and how to future-proof themselves against the changing demands of the world over the next few decades.
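For illustration only, and not the actual analysis carried out with Burning Glass Technologies, a first pass at scanning job postings for skill demand might look something like the sketch below; the skill list, postings and counts are entirely hypothetical.

```python
from collections import Counter

# Hypothetical skill phrases; the real study mined millions of postings
# against a far richer taxonomy of skills and mindsets.
SKILLS = ["critical thinking", "data analysis", "sustainability",
          "coding", "communication", "empathy"]

def count_skill_mentions(postings):
    """Count how often each skill phrase appears across a set of job postings."""
    counts = Counter()
    for text in postings:
        lowered = text.lower()
        for skill in SKILLS:
            if skill in lowered:
                counts[skill] += 1
    return counts

# Toy example with made-up postings
postings = [
    "Graduate analyst: data analysis, communication and critical thinking required.",
    "Sustainability consultant: coding helpful, empathy with clients essential.",
]
for skill, n in count_skill_mentions(postings).most_common():
    print(f"{skill}: {n}")
```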
What the redesigned curriculum means in practice, in addition to understanding the basics of coding and how to identify disruptive technologies that could kill their business or open a universe of new revenue streams, is learning how to differentiate themselves from A.I. and ensure they are not working for robots in the future. As Stephen puts it, if they cannot produce more insightful analysis and solutions than A.I., then they will be working at robot wages of $20 per month. A.I. Bots have already passed multiple professional examinations (legal, medical, MBA) with distinction and can interpret all connected data relating to any combination of professional enquiry, and it will not be long before A.I. Bots become accredited and allowed to perform some of the duties of most professionals (at robot rates); for example, to diagnose but not prescribe. This will lead, of course, to humans employing such A.I. Bots as personal assistants rather than less reliable human ones.
If you haven’t already, do enter a question or two into ChatGPT and see how competent the responses already are, remembering that this technology is in its infancy and is about to shoot up the hockey-stick curve of evolution and development, becoming more ‘life-like’ and better at avoiding obvious errors.
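For those who prefer to experiment programmatically rather than through the website, a minimal sketch using OpenAI’s Python client is shown below; it assumes the openai package is installed, an API key is set in the OPENAI_API_KEY environment variable, and the model name and question are merely examples.

```python
# Minimal sketch: ask a chat model one question via OpenAI's Python client.
# Assumes `pip install openai` and that OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up the API key from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # example model name; use whichever is available to you
    messages=[
        {
            "role": "user",
            "content": "In two sentences, why are exams an imperfect measure "
                       "of workplace readiness?",
        },
    ],
)

print(response.choices[0].message.content)
```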
The significant differentiator for our students as they enter adulthood and the world of work is how to use their empathy and other ‘human virtues’ to identify the pain points humans experience in everyday interactions, whether that is the mundane but stressful process of buying property or a life-changing decision regarding an individual’s health and well-being.
With every module on the BBA built around a main ‘challenge’ (such as building a startup or a social enterprise, or working as consultants for a live client), the challenge for the Future Proofing module uses Stephen’s bespoke methodology that allows students (and anyone) to quantify the value that a potential solution has for the end-user or consumer. This is not just about the amount of money that consumer is willing to invest to alleviate the ‘pain’ of the given problem (although, of course, that is a factor) but also the time and effort they would need to spend adopting behaviours that would be considered virtuous in the context of the impact the pain has on their lives. That is to say, by approximating the value impact in the life of a typical consumer, the student is able to determine whether the total investment will be regarded as ‘worth it’ by the consumer before any investment in a good idea is required, thereby reducing risk and cost to any investor.
This methodology unlocks the personal critical thinking capacity to solve problems in a novel manner by including human behavioural properties that A.I. simply does not have.
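The methodology itself is bespoke and not published here, but as a purely illustrative sketch of the underlying idea, the ‘worth it’ judgement can be thought of as weighing a consumer’s estimated value of relieving a pain against the total money, time and effort the solution demands of them; every name and figure below is hypothetical.

```python
# Purely illustrative sketch; not the actual Future Proofing methodology.
# All names and figures below are hypothetical.

HOURLY_VALUE_OF_TIME = 25.0  # assumed dollar value a consumer places on one hour


def worth_it(pain_relief_value, price, hours_of_effort):
    """Compare the estimated value of relieving the pain against the total
    cost to the consumer: the price paid plus the time and effort of
    adopting the new behaviour."""
    total_cost = price + hours_of_effort * HOURLY_VALUE_OF_TIME
    return pain_relief_value >= total_cost, total_cost


# Hypothetical example: a service that removes much of the stress of buying property.
is_worth_it, cost = worth_it(pain_relief_value=2000.0, price=800.0, hours_of_effort=10)
print(f"Total cost to consumer: ${cost:.0f}; worth it for them: {is_worth_it}")
```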
The challenge put to our first cohort of students going through the redesigned Future Proofing module is how to improve healthcare for the patient, the consumer, when all systems – whether private or publicly funded – have no more money to invest. Whilst some proposed solutions are very pedestrian, others have soared into the clouds with analysis and creative problem solving that would be the envy of any consulting firm and impossible for A.I. to replicate.
For example, one of the teams’ good ideas involved leveraging those medical doctors who prefer to be guided by their conscience rather than by a herd average when weighing benefits against risks for an individual’s condition and circumstances. Interestingly, the students in question worked out that if you do not have a conscience (as is the case with A.I.), then you cannot fully anticipate the impact of conscientious behaviour on systems, factor in the value potential of such doctors, or employ them in the solution set.
The question for us all, therefore, as we go into this brave new world of science fiction becoming reality and A.I. sounding more eloquent and more knowledgeable than highly educated humans, is not ‘how can we mitigate its effects?’ but ‘how can we make A.I. work for us?’. How can we ensure that mundane tasks make best use of A.I., liberating time for us humans to do what we really excel at: showing empathy to our fellow humans, and identifying and reducing the pain and discomfort of others, both at the physical level and, perhaps more importantly, at the emotional and psychological level?