At the Süddeutsche Zeitung's digital summit, the deputy editor-in-chief deftly abolished his own job: asked by the presenter what part of his work could be done by artificial intelligence (AI), his answer was "All of it!" But then, in an interview with Hannah Schmid-Petri, Chair of Science Communication at the University of Passau, he was noticeably relieved that the research findings painted a more nuanced picture.
Schmid-Petri, who is also a member of the board of directors of the Bavarian Institute for Digital Transformation (bidt) at the Academy of Sciences and Humanities, is investigating the state of trust in AI-generated journalism in a project funded by the academy. At the digital summit in Munich, she presented the journalists with initial findings from a representative study she conducted together with her team member Daria Kravets-Meinke.
"The majority of respondents still consider news from human journalists to be more credible than AI-generated text", said Schmid-Petri. This is consistent with other studies. But the study also showed that, under certain conditions, AI comes out ahead, especially among people with more positive attitudes towards technology.
For the study, the researchers created lead texts on the introduction of a speed limit on German motorways and labelled them with a reference to who had written the article: a journalist, an AI newsbot, or a journalist using AI tools. In addition, they embedded the texts in the layouts of German mainstream news outlets, namely tagesschau.de, Die Welt and t-online. These versions were presented by the market research institute IPSOS to more than 3,000 participants, who completed an online questionnaire.
Human authors will be somewhat disappointed to learn that many participants didn't even notice who penned the article. However, they did remember the respective news outlet, and the study showed that a high level of trust in a particular media organisation also leads to more positive credibility judgements. "This trust is the media companies' most important currency and it's important not to gamble it away", said Professor Schmid-Petri.
The study is part of the bidt research focus on "Humans and Generative AI: Trust in Co-Creation", which the professor heads as bidt director. It comprises ten projects by researchers from a variety of academic disciplines and universities that focus on both sides of the collaboration: people and technology. "We are investigating the conditions under which appropriate and meaningful trust in AI products arises in various application scenarios", said Schmid-Petri, explaining the overarching objective.
Designing trustworthy AI co-pilots
Business information scientist Ana-Maria Sîrbu speaks of a calibration of trust that could be supported by a "mental match" between humans and machines. Sîrbu works with Professor Ulrich Gnewuch, who holds the Chair of Explainable AI-based Business Information Systems at the University of Passau and whose involvement also gives the university a presence in the bidt focus area. He heads the GenAICopilot project, which investigates how AI co-pilots need to be designed so that employees place the right level of trust in them.
Such co-pilots are already in use at many companies, where they typically assist employees with non-technical backgrounds in data analysis and data-driven decision-making. However, says Professor Gnewuch, this doesn't always lead to good decision-making processes, for example when employees blindly trust the
AI's answers or, conversely, when they are overly sceptical. In both cases, forms of explainable AI can help, i.e. approaches that make it possible for people to retrace the reasoning of artificial intelligence systems.
Sîrbu's doctorate builds on her master's thesis, which she completed as part of her double degree programme in Information Systems at the University of Passau and the University of Turku, Finland. In it, she programmed a data assistant based on generative language models. In one variant, users can press a button to call up an explanation in which the prototype describes the steps that led it to its answer. Surprisingly, the participants in the experiment tended to press the button only once and not again for later queries. This seems to indicate that people felt that if the AI had got one answer right, it would also find correct answers later on. Sîrbu would have liked to see more interaction with the button, which would have allowed her to identify different patterns in user behaviour.
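The "explanation on demand" pattern described above can be sketched in a few lines of Python. This is purely illustrative; the class and method names, and the stubbed reasoning steps, are assumptions and not taken from Sîrbu's actual prototype, which is built on generative language models.

```python
# Illustrative sketch of an explanation-on-demand assistant: the answer is
# shown immediately, while the reasoning trace is revealed only when the
# user explicitly asks for it (the "button" in the prototype).
from dataclasses import dataclass, field


@dataclass
class AssistantReply:
    answer: str
    steps: list = field(default_factory=list)  # reasoning trace, hidden by default


class DataAssistant:
    def ask(self, query: str) -> AssistantReply:
        # A real system would call a generative language model here;
        # this stub just records the steps it supposedly took.
        steps = [
            f"Parsed the query: {query!r}",
            "Selected the relevant data columns",
            "Aggregated the values and formatted the result",
        ]
        return AssistantReply(answer="42 units (stub result)", steps=steps)

    @staticmethod
    def explain(reply: AssistantReply) -> str:
        # Corresponds to pressing the explanation button in the prototype.
        return "\n".join(f"{i}. {s}" for i, s in enumerate(reply.steps, 1))


assistant = DataAssistant()
reply = assistant.ask("total sales in March")
print(reply.answer)               # the answer is always shown
print(assistant.explain(reply))   # the trace appears only on request
```

Separating the answer from its explanation is what makes the button presses observable in the first place: each call to `explain` can be logged, which is how the experiment could tell that most participants requested an explanation only once.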
The business information researcher initially came to Passau from her home town of Craiova, Romania, as an Erasmus student. She liked it so much here that she took up the challenge of completing a degree programme in German, a foreign language for her. She later applied for the double master's programme with the University of Turku, Finland, and it was around the same time that the door to academia opened for her, when the programme convenor, Professor Jan Krämer, offered her a student assistant job at the Chair of Internet and Telecommunications Business. Sîrbu enthusiastically took him up on that offer: "I'd always wanted to gain an insight into scientific work", she says. She assisted doctoral candidates with their experiments, and the experience she gathered in that job comes in handy in her current research.
At the moment, she is summarising the findings of her master's thesis for a presentation at the European Conference on Information Systems taking place in Jordan in June. This is the next step in her academic career. She hopes to make important contacts with fellow researchers and gain new insights for her project – for example, how to get people to engage more with the AI’s explanations. This could be a step towards achieving the right level of trust.
In addition to the research focus, bidt is funding a number of consortium projects, including one based at the University of
Passau: A team led by communication expert Professor Florian Töpfl is researching how large language models are being adapted to serve Russian propaganda.