
Searching for the right level of trust in AI

In a bidt research focus, an interdisciplinary team of researchers led by Professor Hannah Schmid-Petri is working on questions of trust in artificial intelligence. One project is investigating AI-generated journalism, while another is working on trustworthy AI co-pilots.

At the Süddeutsche Zeitung's digital summit, the deputy editor-in-chief deftly abolished his own job: Asked by the presenter what part of his work could be done by artificial intelligence (AI), his answer was "All of it!". But then, in an interview with Hannah Schmid-Petri, Chair of Science Communication at the University of Passau, he was noticeably relieved that the research findings painted a more differentiated picture.

Professor Schmid-Petri talking to Ulrich Schäfer, deputy editor-in-chief of the SZ. Photo: Mirjam Hauck

Schmid-Petri, who is also a member of the board of directors of the Bavarian Institute for Digital Transformation (bidt) at the Academy of Sciences and Humanities, is investigating the state of trust in AI-generated journalism in a project funded by the academy. At the digital summit in Munich, she presented the journalists with initial findings from a representative study she conducted together with her team member Daria Kravets-Meinke.

"The majority of respondents still consider news from human journalists to be more credible than AI-generated text", said Schmid-Petri. This is consistent with other studies. But the study also showed that, under certain conditions, AI-generated content can come out ahead, especially among people who have more positive attitudes towards technology.

For the study, the researchers created lead texts on the introduction of a speed limit on German motorways and labelled them with a reference to who wrote the article: journalist, AI newsbot or journalist using AI tools. In addition, they embedded the headlines in the layouts of German mainstream news outlets, namely tagesschau.de, Die Welt and t-online. These versions were presented by the market research institute IPSOS to more than 3,000 participants, who completed an online questionnaire.
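The design described above can be pictured as a small grid of experimental conditions. The following Python sketch is purely illustrative: the author labels and news outlets are taken from the article, while the condition structure itself is an assumption about how such a study might be set up.

```python
from itertools import product

# Hypothetical sketch of the study's condition grid (not the
# researchers' actual materials): each participant sees a lead text
# with one author label, embedded in one news outlet's layout.
author_labels = ["journalist", "AI newsbot", "journalist using AI tools"]
outlets = ["tagesschau.de", "Die Welt", "t-online"]

conditions = [
    {"author_label": label, "outlet": outlet}
    for label, outlet in product(author_labels, outlets)
]

print(len(conditions))  # 9 combinations
```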

The charts show the relationship between attitudes towards AI and the perceived credibility of AI-generated content and articles written by people.
© Schmid-Petri/Kravets-Meinke

Human authors will be somewhat disappointed to learn that many participants didn’t even notice who penned the article. However, they did remember the respective news outlet and it was shown that a high level of trust in a particular media organisation also leads to more positive credibility judgements. "This trust is the media companies' most important currency and it’s important not to gamble it away", said Professor Schmid-Petri.

The study is part of the bidt research focus on "Humans and Generative AI: Trust in Co-Creation", which the professor heads as bidt director. It comprises ten projects by researchers from a variety of academic disciplines and universities that focus on both sides of the collaboration: people and technology. "We are investigating the conditions under which appropriate and meaningful trust in AI products arises in various application scenarios", said Schmid-Petri, explaining the overarching objective.

Designing trustworthy AI co-pilots

Business information scientist Ana-Maria Sîrbu speaks of a calibration of trust that could be supported by a "mental match" between humans and machines. Sîrbu works with Professor Ulrich Gnewuch, who holds the Chair of Explainable AI-based Business Information Systems and who also represents the University of Passau in the bidt focus area. He heads the GenAICopilot project, which investigates how AI co-pilots need to be designed so that employees place the right level of trust in them.

Business information scientist Ana-Maria Sîrbu is researching how trustworthy AI co-pilots can be designed.

Such co-pilots are already in use at many companies, where they typically assist employees with non-technical backgrounds in data analysis and data-driven decision-making. However, says Professor Gnewuch, this doesn't always lead to good decision-making processes, for example when employees blindly trust the AI's answers or, conversely, when they are overly sceptical. In both cases, forms of explainable AI can help, i.e. approaches that make it possible for people to retrace the reasoning of artificial intelligence systems.

Sîrbu's doctorate builds on her master's thesis, which she completed as part of her double degree programme in Information Systems at the University of Passau and the University of Turku, Finland. In it, she programmed a data assistant based on generative language models. In one variant, users can use a button to call up an explanation in which the prototype describes the steps that led it to its answer. Surprisingly, the participants in the experiment tended to press the button only once and not again for later queries. This seems to indicate that people felt that if the AI had got one answer right, it would also find correct answers later on. Sîrbu would have liked to see more interaction with the button to allow her to determine different patterns in user behaviour.
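The explanation-on-demand interaction can be sketched in a few lines of Python. This is a hypothetical illustration, not Sîrbu's actual prototype (which is built on generative language models); it only mirrors the pattern of an answer given immediately and reasoning steps produced when the user presses the explanation button.

```python
class DataAssistant:
    """Toy explain-on-demand assistant (illustrative, not the real prototype)."""

    def __init__(self):
        self._last_steps = []

    def ask(self, question, data):
        # Toy analysis: only handles "average of <column>" questions.
        column = question.removeprefix("average of ").strip()
        values = [row[column] for row in data]
        answer = sum(values) / len(values)
        # Record reasoning steps, but reveal them only on request.
        self._last_steps = [
            f"Interpreted the question as a request for the mean of '{column}'.",
            f"Collected {len(values)} values from the data set.",
            f"Computed their mean: {answer}.",
        ]
        return answer

    def explain(self):
        # Called only when the user presses the explanation button.
        return self._last_steps


data = [{"speed": 120}, {"speed": 130}, {"speed": 110}]
assistant = DataAssistant()
print(assistant.ask("average of speed", data))  # 120.0
print(assistant.explain()[0])
```

Logging how often `explain()` is actually called, as in the experiment, would reveal whether users request explanations once or repeatedly.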

The business information researcher initially came to Passau from her home town of Craiova, Romania, as an Erasmus student. She liked it so much here that she took up the challenge of completing a degree programme taught in German, a language that was foreign to her. She later applied for the double master's programme with the University of Turku, Finland, and it was around the same time that the door to academia opened for her, when the programme convenor, Professor Jan Krämer, offered her a student assistant job at the Chair of Internet and Telecommunications Business. Sîrbu enthusiastically took him up on that offer: "I'd always wanted to gain an insight into scientific work", she says. She assisted doctoral candidates with their experiments – and the experience she gathered in that job comes in handy in her current research.

At the moment, she is summarising the findings of her master's thesis for a presentation at the European Conference on Information Systems taking place in Jordan in June. This is the next step in her academic career. She hopes to make important contacts with fellow researchers and gain new insights for her project – for example, how to get people to engage more with the AI’s explanations. This could be a step towards achieving the right level of trust.

This report was published in the Campus Magazin (01/2025) 

Prof. Dr. Hannah Schmid-Petri, holder of the Chair of Science Communication at the University of Passau.

Professor Hannah Schmid-Petri

researches public debates – both online and offline

How are digitalisation issues publicly discussed and what consequences does that have for political processes?


Professor Hannah Schmid-Petri is the holder of the Chair of Science Communication at the University of Passau and one of the principal investigators of the DFG Research Training Group 2720 "Digital Platform Ecosystems (DPE)". She is also a member of the Board of Directors of the Bavarian Research Institute for Digital Transformation (bidt) and part of the jury for the DFG Communicator Award. Before her time in Passau, she was a senior assistant at the Institute of Communication and Media Studies at the University of Bern.

Professor Ulrich Gnewuch

studies the use of artificial intelligence in companies

How can we design AI-based information systems in a human-centered way?


Prof Dr Ulrich Gnewuch holds the Chair of Explainable AI-Based Business Information Systems at the University of Passau. His research at the intersection of information systems and human-computer interaction focuses on the design, use, and impact of artificial intelligence in business and society.

More information

In addition to the research focus, bidt is funding a number of consortium projects, including one based at the University of Passau: a team led by communication expert Professor Florian Töpfl is researching how large language models are being adapted to Russia's propaganda.

bidt project on Authoritarian AI: How large language models are adapted to Russia's propaganda


In a project funded by the bidt, researchers from the Universities of Passau and Bamberg are investigating how Russia is developing its own generative AI models under strict supervision and how authoritarian data affects AI systems in democracies.
