Exploring the implications of generative AI in higher education

Generative artificial intelligence platforms such as ChatGPT allow users to engage in human-like conversation with artificially intelligent chatbots. Trained on vast internet datasets using deep learning algorithms, these models are capable of generating coherent, convincing, and often (though not always) accurate responses to user inputs by predicting the most likely continuation based on the patterns in the text they are given.
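
To make the phrase "predicting the most likely response" a little more concrete, the short Python sketch below asks a language model for its most probable next words after a prompt. It is a hedged illustration only: the openly downloadable GPT-2 model, the Hugging Face transformers library, and the example prompt are assumptions chosen for demonstration, standing in for the much larger, proprietary models behind tools like ChatGPT.

```python
# A minimal, illustrative sketch of "predicting the most likely next token".
# It uses the small, open-source GPT-2 model via the Hugging Face transformers
# library as a stand-in for far larger systems such as ChatGPT.
# Assumes `pip install torch transformers`; model and prompt are examples only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Generative AI will change higher education because"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # A score for every vocabulary token at each position in the prompt.
    logits = model(**inputs).logits

# The model's "response" is built by repeatedly choosing among the tokens it
# rates as most probable; here we just inspect the top candidates for the next word.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {p.item():.3f}")
```

Chatbots apply this next-word prediction step over and over (with additional training to follow instructions and hold a conversation), which is why their answers read fluently even when the underlying facts are wrong.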

At a recent webinar, Navitas educators, student support staff, administrators, managers, and corporate staff reflected on the implications of such tools for teaching, learning, and academic integrity. Staff shared their anxieties, curiosity, apprehension, and fascination about this powerful new tool that has broken records, accruing over 100 million users just two months after launch. Clearly, generative AI is here to stay, and with it comes a plethora of questions regarding its affordances, limitations, and impact on academic integrity, creativity, critical thinking, and student learning and engagement.

An interview with Professor Kurt Debattista (University of Warwick, UK)

Karen McRae (Manager, Academic Education) and Alyce Hogg (Lead, Educational Developer) from L&T Services talked to Professor Kurt Debattista, Director of Research Degrees at the University of Warwick in the United Kingdom. Professor Debattista is a visual computing expert who has applied machine learning techniques to solve problems in computer vision, graphics, and engineering.

In this interview, Professor Debattista shares his knowledge of the large language models (LLMs) that power modern chatbots like ChatGPT, addresses common misunderstandings about artificial intelligence, considers what the future might hold, and offers advice on how educators can prepare.

What are some common misunderstandings about AI?

Perhaps the most common misunderstanding is that AI models such as ChatGPT have human-like intelligence and the ability to think. They are certainly good at solving particular types of problems and providing particular services, but in human terms they are not all that smart. Rather, they are designed to follow rigorous mathematical and software engineering models.

It should be noted that the decision-making process of the more advanced AIs is not always fully understood because of the underlying complexities. Efforts are underway to improve this situation in a sub-field known as explainable AI.

Will the integration of generative AI into software packages (such as the Office 365 suite) impact the way students think and write?

Yes. I think the new natural language processing (NLP) and image-based generative methods have the potential to contribute significantly to student development. If the interface is easy to use, they could see broad adoption. ChatGPT on its own has already seen significant uptake, and it is only the first of the large language models to be made easily available to the public.

Is it likely that in the future chatbots (like ChatGPT) will be connected to the internet? This seems to have been disastrous for Microsoft Bing. What does the future hold?

Yes. There are, as with all technologies, going to be teething problems, but if you consider how much performance has improved in the last 10 years, it is bound to get better as algorithms improve and new methods are developed.

AI detection services can yield false positives: will we get to a point where it is impossible to tell whether content was created by a human or generated by artificial intelligence?

I think that this is inevitable. Third-party systems that are not integrated into the original language models can only perform well for a brief period. Every new AI detection service can be used to train new generative methods, so it will be an arms race that, I suspect, the generative AI will eventually win. There is also the issue of false positives; if these cannot be reduced significantly, such systems are of little use to begin with.

There is a possibility that AI-generated text could be “watermarked” using methods that can predict with significant certainty that text was generated by a specific AI. However, these methods require the generative AI’s creator to build them in as part of the original system. If one of the large language model providers decides not to do this, then that system will be favoured, and the others could become extinct.
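
As a rough, hedged illustration of how such watermarking schemes can work, the sketch below implements a toy version of the "green list" idea from the research literature. The whitespace tokenisation, the 0.5 green fraction, and the green_list/green_fraction helper names are simplifications invented for this example; it does not describe any specific vendor's system.

```python
# Illustrative sketch only: a toy version of "green list" token watermarking.
# A cooperating generator would quietly favour words from a pseudo-random
# "green" subset of its vocabulary (seeded by the previous word); a detector
# that knows the scheme checks whether a text contains green words far more
# often than chance would predict.
import hashlib
import random


def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    # Seed a random generator from a hash of the previous token so that the
    # detector can recompute exactly the same "green" subset later.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    return set(rng.sample(vocab, int(len(vocab) * fraction)))


def green_fraction(text: str, vocab: list[str]) -> float:
    # Fraction of words that fall in the green list of their predecessor.
    # Watermarked text should score well above the 0.5 baseline; ordinary
    # human writing should not.
    tokens = text.split()  # toy whitespace "tokenisation"
    hits = sum(
        1 for prev, cur in zip(tokens, tokens[1:]) if cur in green_list(prev, vocab)
    )
    return hits / max(len(tokens) - 1, 1)
```

The point above stands either way: detection of this kind only works if the model's creator chose to bias its outputs in the first place.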

It is worth noting that the methods we are seeing now are the first mainstream ones that produce convincing results. They will keep improving. I think this does not really matter; we will learn to embrace the technology.

What would you tell a teacher who is currently feeling anxious or stressed about the impact of ChatGPT on student learning and academic integrity?

I would say to think about this as a new tool that we have to live with. It is best to make peace with these technologies. The impact of calculators on student learning was feared at first, and with the introduction of the internet there were fears of plagiarism. However, AI can and should be used to enhance the student experience. Of course, the opposite is also true: if we do not educate students to use these tools, they will be at a disadvantage compared to their peers who are using them.

Evaluation of student progress may have to change, and there are some creative ways to approach this. Interviews are an obvious method. You can also turn the problem on its head: provide students with output from an AI and ask them to evaluate it, for example.

Do you believe chatbots could replace teachers?

Perhaps eventually, but not in the near future. One of the most advanced and promising fields of AI in the last 20 years has been the development of intelligent vehicles (IVs). Throughout that time, we have been promised that IVs will be commonplace on the road within the next 10 years. It still feels like we are 10 years away, or possibly more, from that happening.

AI will assist for now, and I see this happening in many fields: law, software development, medicine, engineering, editing, and so on. But humans, especially those who are experts in their fields, will be able to use these tools to gain an advantage.

I, for one, hope that AI can do marking for me at some point. If it could also handle admin, that would be brilliant; however, I doubt any AI will ever be able to navigate the complex administrative processes that teachers are required to handle.

What support do you think we should be giving our staff and students to learn about, and ethically use, generative AI tools?

Ensure that both staff and students know how to use it. Explain to your students how it works and why it works. Having an overview of how it works, even if not too technical, is beneficial.

Show students how to use it in practice. Use it and discuss it in class with them. Ask them to search for things they know about. Show them when it is right and when it is wrong and why this happens. Ask them to find what is missing from a response. This is not dissimilar to how we taught students about the advantages and disadvantages of using Wikipedia.

What is your prediction for the future of AI in education (and potentially other areas)? Where is all this going?

It is difficult to say. We see rapid advances over 10-15 years and then, typically, things slow down. There is more investment and more at stake with this current generation of AI. I suspect the basic tools will get better. Large language models like ChatGPT will improve for at least a few more generations. It is likely to be the same for the broad generative models for images, videos, and possibly even games.

The long-term goal of AI research is to achieve Artificial General Intelligence (AGI): the ability of a machine to perform and learn tasks in the way humans and animals do, that is, to have cognitive abilities on par with, and eventually surpassing, human capabilities. It is hard to predict when AGI will be achieved. I believe it is likely to get there eventually if technological progress continues uninterrupted, but this will probably take half a century or more.

To continue the conversation about generative AI, or to ask any questions or seek further information, please contact Navitas L&T Services at learningandteaching@navitas.com. Stay tuned for upcoming professional development events (for Navitas staff) on generative AI and more!

Feature photo: Possessed Photography on Unsplash