Does ChatGPT make students lazy thinkers? - Insight into a sensible approach to AI at the University of Bern

27.03.24
Author
Carol Blaser

Summarised for you:

Artificial intelligence (AI) is developing at a rapid pace and can take over more and more of the work humans do. As a society, we are therefore confronted with the question of how we want to organise our collaboration with artificial intelligence. In an interview, the philosopher of science Prof Claus Beisbart provides an insight into the integration of AI at the University of Bern.

The future of the lazy

The UniBE Foundation focuses on promoting future-orientated, innovative research, which is why AI is a recurring theme for us. The range of tasks AI can take on is expanding all the time. Today, an AI such as ChatGPT can already take over creative work such as text production. It seems that AI systems are increasingly doing the thinking and problem-solving for humans. Thanks to AI, we may soon be able to spend more time on our hobbies. But what if the widespread use of AI makes us too lazy to think? Such a development would certainly be dangerous for a think tank like the University of Bern.

In conversation with Claus Beisbart, Professor of Philosophy of Science at the Institute of Philosophy, I want to find out what consequences the use of AI is already having at the university today and what social challenges still lie ahead of us.


Interview with Prof Claus Beisbart

UniBE Foundation: Mr Beisbart, what made you decide to focus one of your research areas on AI and ethics?

Currently, many paths lead to artificial intelligence. In the philosophy of science, I have previously worked on computer-based methods. The main focus was on computer simulations, which are used in climate science, for example, to make predictions. These simulations, however, are increasingly being replaced or supplemented by so-called 'machine learning', around which there is currently a great deal of hype. I am interested in what happens to the sciences when such processes, which are largely 'black boxes' for us, are applied. Ethical questions also quickly become relevant: one of them is what we need to know about an AI application before we can use it, for example in medicine.


AI makes decisions in order to arrive at a solution. In most cases, the steps behind those decisions are not visible to us, and we do not understand why the AI has decided one way rather than another. In that case, the AI is considered a 'black box'.


What practical ways do you see to incorporate inclusive and transparent AI into research?

There are very different approaches. A big problem for universities is that ChatGPT and other large models come from industry. We didn't create them ourselves and can only analyse them after the fact, just like any other user. The university should lead the way and develop models itself. Another approach is explainable AI (XAI), in which the AI is built from the outset in such a way that it is easier to understand. Approaches focussing on interpretability, by contrast, try to make algorithms understandable after the fact through careful analysis. Such methods need to be developed further at the university so that they can then also be used in practice.
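One common interpretability technique of the kind mentioned here is permutation importance: a trained model is probed after the fact by shuffling one input at a time and measuring how much its accuracy drops. The following is a minimal sketch of the idea; the dataset, the random-forest model and the use of scikit-learn are illustrative assumptions and are not taken from the interview.

```python
# Minimal sketch of post-hoc interpretability via permutation importance.
# All choices (scikit-learn, the breast-cancer toy dataset, a random forest)
# are illustrative assumptions, not tools named in the interview.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a model whose individual decisions are hard to trace directly.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
baseline = model.score(X_test, y_test)

# Shuffle one feature at a time: a large drop in accuracy suggests the
# model relies heavily on that feature.
rng = np.random.default_rng(0)
for j in range(X_test.shape[1]):
    X_perm = X_test.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    drop = baseline - model.score(X_perm, y_test)
    print(f"feature {j}: accuracy drop {drop:.3f}")
```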

"We absolutely have to use AI. Blocking it is useless."

And what can be done in teaching?

As a university, we make a fundamental contribution to educating people about AI. It is important that people know exactly how AI works, what the dangers are, but also what the opportunities are, so that they can deal with it sensibly. We have an important educational mission here. We should definitely use AI in this context; there is no point in blocking it. It's about finding a responsible way of dealing with it.

You have developed a digital module on ethics and AI for students. What motivated you to do this?

The Vice-Rectorate Teaching had the great idea of creating an online module, "Competences for the (digital) future", for students of all subjects. I was happy to contribute the part on ethics. Society must create a framework for embedding AI in our lives in an acceptable way. It is important that this is not driven solely by corporations and their own policies. I think we humans need to consider where and how we want to use AI. Ultimately, that's what the module is about for me: encouraging people to think about AI for themselves.

Click here for the teaser of the online module "Ethics and digitalisation".

ChatGPT has been a constant companion in everyday university life since its introduction a good year ago. What challenges do you see in dealing with this AI in a university context?

We have already talked about the opacity of the AI. ChatGPT is also not trained to tell the truth or to cite sources, which is very inconvenient for academic work. So you have to think very carefully about what you use ChatGPT for. I see ChatGPT as an attempt to form a kind of average of what is said in its training data from the Internet. So if I am interested in what is generally said about a topic, ChatGPT is a good place to go. But if I'm interested in the truth, or in the reasoning and the sources, then it's difficult with ChatGPT.

How did you experience the use of ChatGPT among students last year?

This semester was the first time since the introduction of ChatGPT that I taught our methods course, in which philosophical writing is practised. I allowed the students to use ChatGPT for smaller tasks. However, the enthusiasm was limited. In general, I have the impression that our philosophy students enjoy writing themselves. They realise that they have to learn to write and are interested in doing so. Especially in philosophy, linguistic expression and thinking are closely linked.

Papers and essays lend themselves to being written with ChatGPT. One option would be to introduce other forms of assessment, such as oral examinations. What do you think?

I do see a certain problem with ChatGPT. It is getting better and better, and there are other language models that can be used as well. In the near future, skilful prompting will probably get you closer and closer to a good seminar paper, and such texts are difficult to distinguish from 'real' seminar papers or essays. There are also ways of editing the text and adding a few spelling mistakes so that it looks more like a student paper.

Even if there is a risk of cheating, I wouldn't want to go so far as to abolish the seminar paper. Seminar papers are very important in our philosophy programme. Students deal with a topic in detail over a longer period of time. At the end of the day, it's about developing your thinking, which for me is part of personal development. Students scrutinise their assumptions, introduce new concepts and refine their arguments. This process matters for their own reflection, and it cannot be demanded in the same way in an oral examination. What can be done, however, is to discuss a seminar paper with the student afterwards. That allows me to see how well the student has understood the topic and whether the submission is really their own work. There are other reasons, too, why such a discussion is pedagogically valuable - for example, to clear up misunderstandings that may have arisen during marking.

Digitalisation requires humans and machines to work together. What opportunities are there?

The big opportunity is that AI can take over work we don't like doing - uninteresting or boring tasks. There is also the possibility that AI will do this work better than we do. In medicine, for example, AI sometimes provides better diagnoses than humans - although this involves very specific tasks.

And do you use AI in your own research?

I work with computer-based methods myself. My team and I have a research project that attempts to implement moral and philosophical reflection on a computer, which can be seen as AI to some extent. It is not the 'machine learning' that is currently so hyped, but it is still a type of artificial intelligence, because human thought processes are modelled by a computer - moral thinking is simulated to a certain extent. I also have some ideas about how I could use ChatGPT in research, but I haven't put them into practice yet.

"We definitely want human science."

Do you think that researchers will one day become unnecessary due to AI?

No, I don't think so. It has to be said, however, that AI is very good at individual tasks. It is always specialised tasks that AI takes on and where it can be trained to outperform a human. What AI has lacked so far is general intelligence. Even deciding on a relevant research topic, or how to use my time and resources, is not something AI can easily do for me. We definitely want a human science, not a science that runs along automatically and researches things we no longer have any influence over. As things stand, humans are also needed for many intellectual tasks, such as making connections across fields. So humans will still be needed for the foreseeable future.


After half an hour, I say goodbye to Mr Beisbart. The conversation has shown me that AI at the university and in society opens up new, interesting and creative ways of dealing with knowledge. But it also made me think, and it showed me that we as a society play a significant part in shaping the use of AI in everyday life and at the university. We should therefore not shy away from engaging with it.


PROF. CLAUS BEISBART is a professor at the Institute of Philosophy specialising in the philosophy of science at the University of Bern. He is also associated with the Centre for Artificial Intelligence in Medicine (CAIM).

In collaboration with the Institute of Philosophy, the Department of University Didactics and Teaching Development and the Support Centre for ICT-supported Teaching and Research (iLUB), he provides an insight into the ethics of digitalisation in an online module.


ARTIFICIAL INTELLIGENCE (AI) Technologies that imitate human cognitive processes are described as artificial intelligence. Within AI, a rough distinction is made between weak and strong AI.

Strong AI can recognise tasks on its own and find solutions to them; it can acquire the knowledge needed to solve problems independently. Strong AI can therefore handle knowledge creatively and innovatively, just as a human can. However, such an AI has not yet been realised.

Weak AI solves specific and recurring problems. It is trained to recognise patterns, just like a navigation system or a speech recognition app, for example. ChatGPT is also a weak AI. It is a language model that is based on 'machine learning'.

MACHINE LEARNING
'Machine learning' refers to the process of developing statistical models on the basis of self-adapting algorithms. This allows an AI to recognise patterns and correlations in large data sets and then apply those patterns to unknown situations in order to react and make predictions. The most interesting AI systems today learn from the successes and failures of their own application.
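As a rough illustration of this description, the following minimal sketch fits a simple statistical model to example data and then applies the learned pattern to data it has never seen. The dataset and the choice of model are illustrative assumptions, not part of the article.

```python
# Minimal sketch of "learning patterns from data and applying them to
# unknown cases". Dataset and model choice are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
# Hold back some examples the model never sees during training.
X_train, X_new, y_train, y_new = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)              # recognise patterns in the training data
print("accuracy on unseen data:", model.score(X_new, y_new))  # apply them to new cases
```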

A particularly powerful type of 'machine learning' is deep learning, which involves many layers of artificial neurons. ChatGPT also uses this technology: it can recognise patterns in particularly large data sets and then use them to make predictions.
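To make 'many layers of artificial neurons' a little more concrete, here is a minimal sketch of a small multi-layer network trained on a toy dataset. The layer sizes and data are illustrative assumptions and bear no resemblance to the scale of a system such as ChatGPT.

```python
# Minimal sketch of deep(er) learning: a network with stacked hidden layers
# of artificial neurons. Layer sizes and the digits dataset are illustrative
# assumptions; real systems like ChatGPT are vastly larger.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two hidden layers of artificial neurons between input and output.
net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
net.fit(X_train, y_train)
print("accuracy on unseen digits:", net.score(X_test, y_test))
```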

