What does it mean for the human generative drive when we work with AI tools we cannot see through? A team led by Professor Marina Fiedler is investigating this question in the DFG project GAIO.
Today, AI systems screen job applicants, generate text, and support strategic decisions. Increasingly, they also act as autonomous agents that independently plan tasks, choose tools, and carry out multiple steps in sequence. Yet hardly anyone can trace how they arrive at their results. What does it mean for us when we work every day with tools we cannot understand because they are opaque?
This is the question pursued by the research project GAIO (Human Generative Drive in the AI-Opaque Workplace), led by Prof. Dr. Marina Fiedler, who holds the Chair of Business Administration with a focus on Management, People, and Information at the University of Passau. "At the center is what we call the human generative drive: the inner motivation to develop creative solutions independently, take responsibility for one's own work, and make a meaningful contribution," the researcher explains.
But how does the opacity of AI affect this drive? "It's conceivable that opacity undermines it: if people can neither understand AI outputs nor learn from or build on them, the sense of authorship, the willingness to engage in critical reflection, and creative problem-solving could all erode. But it is equally possible that opacity has little effect on the generative drive — for example, because people develop their own strategies for dealing with non-transparent systems," says Prof. Dr. Fiedler. That, she notes, is precisely what she and her team intend to investigate in GAIO, with no predetermined conclusions.
Why AI is opaque
AI opacity does not have one cause, but three. First, AI providers deliberately conceal how their models work for reasons of competition and liability. Second, users often lack the technical knowledge to understand how these systems function. And third, even with full access and expertise, a fundamental hurdle remains: human thinking simply cannot keep pace with the complexity of today's AI models — a cognitive limit that cannot be overcome through more knowledge alone.
These sources are especially pronounced in AI agents. Because agents independently plan entire chains of tasks, select tools on their own, and make intermediate decisions that remain largely invisible to users, they amplify all three sources of opacity at once: even someone who understood the underlying model would see neither which steps the system actually took, nor why it chose one path over another. Each source of opacity affects the generative drive in its own way, and each calls for its own response.
What the project investigates
In everyday work, it is often assumed that AI integrates smoothly into existing workflows, that more transparency is automatically better, and that technical explanations are enough to establish trust and understanding. GAIO puts these assumptions to the test and pursues three research objectives.
How the researchers will proceed
GAIO combines qualitative and quantitative methods in three successive steps. First, the researchers will accompany employees at three partner organizations through observations and interviews in their daily work, in order to understand how AI opacity is concretely experienced and where it affects the generative drive. Building on this, the team will use a large-scale survey and laboratory experiments to test which transparency measures actually work — and which do not. In a third step, the researchers will develop and pilot three to five practical interventions together with the partner organizations.
What will come out of it
The findings will feed into an evidence-based toolkit that gives organizations concrete strategies for shaping AI transparency in a differentiated and effective way. The goal is a working world in which AI — including in the form of autonomous agents — does not crowd out the human generative drive, but rather creates space for it.
The German Research Foundation (DFG) is funding the project for a period of three years.
Image credit: This illustration was created using AI.
| Principal Investigator(s) at the University | Prof. Dr. Marina Fiedler (Chair of Business Administration with a focus on Management, People, and Information) |
|---|---|
| Source of funding | DFG - Deutsche Forschungsgemeinschaft > DFG - Sachbeihilfe |
| Project number | 574128993 |