The study of law involves vast amounts of text. Generative language models are now capable of processing text with astonishing accuracy. This would seem to make law an ideal field of application for large language models – but is that really the case? Professor Brian Valerius, holder of the Chair of Artificial Intelligence in Criminal Law, asked his guest from the field, Sven Galla, for his thoughts on the matter. Galla studied law at the University of Passau and founded RATIS Rechtsanwaltsgesellschaft mbH, where AI is already being used. The conversation between the professor and the University of Passau alumnus took place as part of the lecture series ‘Artificial Intelligence – Between Hype and Reality’. We are publishing an abridged version here.
Professor Brian Valerius: What does an initial legal consultation by a language model look like in practice?
Sven Galla: Currently, the language model is responsible for drafting an initial response to a legal question; it is not yet involved in direct communication with the client. The legal question, submitted by the client via email, voice message, text message or an internet portal, is sent to the language model together with a system prompt. The resulting answer is then reviewed and revised by a lawyer, both manually and with further AI support. The outcome is information that has been checked and approved by a lawyer, which is sent back to the client via their chosen communication channel. The whole process of generating the initial draft, revising it and finalising it is documented in a database for AI training purposes.
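The workflow Galla describes here, a client question drafted by the model, reviewed by a lawyer, then logged for training, could be sketched roughly as follows. All names, the system prompt and the placeholder functions are illustrative assumptions, not RATIS's actual implementation:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative system prompt; the firm's actual prompt is not public.
SYSTEM_PROMPT = (
    "You are drafting a preliminary answer to a client's legal question. "
    "The draft will be reviewed by a licensed lawyer before release."
)

@dataclass
class ConsultationRecord:
    """One documented initial consultation, stored for later AI training."""
    question: str
    ai_draft: str
    lawyer_final: str
    approved: bool
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def draft_with_model(question: str) -> str:
    """Placeholder for the language-model call (e.g. a hosted LLM API)."""
    return f"[AI draft responding to: {question}]"

def lawyer_review(draft: str) -> tuple[str, bool]:
    """Placeholder for the human review step: revised text plus approval."""
    return draft + " [reviewed and approved by a lawyer]", True

def handle_initial_consultation(question: str, db: list) -> str:
    draft = draft_with_model(question)      # 1. model drafts the answer
    final, approved = lawyer_review(draft)  # 2. lawyer revises and approves
    db.append(ConsultationRecord(question, draft, final, approved))  # 3. log
    return final                            # 4. send back to the client

db: list[ConsultationRecord] = []
answer = handle_initial_consultation(
    "Can my landlord raise the rent twice a year?", db
)
```

The key design point in the interview is that every step, including the unrevised AI draft, is persisted, which is what makes the advice verifiable for the client and usable as training data.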
Valerius: What are the advantages and disadvantages compared to an initial consultation provided solely by a human being?

Lawyer Sven Galla was a guest speaker at Professor Brian Valerius' lecture series. Photo: Simon Landenberger
Galla: In the vast majority of cases, initial legal consultations are currently provided as part of a mass business model through legal expenses insurance companies for a low flat fee per case over the phone.
In view of the shortage of skilled workers, providers of telephone legal advice are finding it increasingly difficult to employ lawyers on these terms. As a result, initial telephone consultations have to be provided ‘off the cuff’ by less qualified staff in as short a time as possible to remain economically viable for the providers. This inevitably comes at the expense of the quality of the information provided. Providers need not fear liability for incorrect information, however, because purely telephone-based advice is not documented and therefore cannot be verified by the person seeking it. The lack of documentation also means that many people seeking legal advice subsequently obtain further advice elsewhere. For this reason, legal expenses insurers are willing to pay higher flat-rate fees in cases where the initial consultation is documented. Under these conditions, however, it is not economically viable for providers of telephone legal advice to document the advice given by their staff.
In contrast, the response to a legal question generated by a language model, almost in real time, not only provides a cost-effective and rapid legal assessment of the problem, but also offers suggested wording that is understandable to the person seeking legal advice. The work of the expensive lawyer then focuses on reviewing and revising the generated response. The more reliable the legal classification and the better the wording suggested by the AI, the less work the lawyer has to do. These efficiency gains keep the service economically viable despite the higher cost of employing qualified lawyers. The result is a quick and inexpensive answer to the legal question that is also reliable, understandable and verifiable for the person seeking legal advice. The disadvantage compared with a purely human initial consultation is that this result cannot (yet) be presented in an interactive telephone call with the client, since any live answer would still be provisional pending the lawyer's review.
Valerius: How do you ensure that the language model is legally up to date?
Galla: Our AI application is built on current language models and so benefits directly from their updates. The application also has a modular structure, allowing us to switch at any time to whichever language model is better suited to our area of application as the various models develop. Beyond that, legal quality and currency are ensured by keeping the lawyers informed and through their continuing professional development; in future, the lawyers will also be able to draw on the revisions of AI-drafted responses documented in the database to generate responses in similar cases.
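The modular structure Galla mentions, being able to swap the underlying language model without rebuilding the application, is commonly achieved by coding against a small interface rather than a specific vendor API. A minimal sketch, in which all class and method names are assumptions for illustration:

```python
from typing import Protocol

class LanguageModel(Protocol):
    """Minimal interface the application codes against, making backends swappable."""
    def complete(self, system_prompt: str, user_message: str) -> str: ...

class ModelA:
    """Stand-in for one model provider."""
    def complete(self, system_prompt: str, user_message: str) -> str:
        return f"ModelA answer to: {user_message}"

class ModelB:
    """Stand-in for an alternative provider with the same interface."""
    def complete(self, system_prompt: str, user_message: str) -> str:
        return f"ModelB answer to: {user_message}"

class LegalAssistant:
    def __init__(self, model: LanguageModel) -> None:
        self.model = model  # any backend satisfying the interface

    def answer(self, question: str) -> str:
        return self.model.complete(
            "Draft a preliminary legal answer.", question
        )

assistant = LegalAssistant(ModelA())
first = assistant.answer("Is a verbal contract binding?")
assistant.model = ModelB()  # switch backends without touching application code
second = assistant.answer("Is a verbal contract binding?")
```

Because the application depends only on the interface, changing providers is a one-line configuration change rather than a rewrite, which is what makes the kind of flexibility described above practical.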
Valerius: How do language models deal with cases that are not represented in the training data?
Galla: If the response generated by the language model is completely unusable, for example because the case is not represented in the training data, then using AI does not result in any efficiency gains compared to a response researched and formulated by a lawyer. In this case, only the response formulated by the lawyer can be used in future to generate responses by AI in similar cases.
Valerius: Who is liable for any misinformation?
Galla: The provider of the initial consultation and developer of the AI application is a licensed law firm. It is liable to the customer for any incorrect advice and has insurance coverage of one million euros for liability claims.
Valerius: What does the future hold? Can language models replace lawyers?
Galla: Language models are already replacing lawyers in certain areas, such as due diligence reviews in mergers and acquisitions, where only a fraction of the legal staff is now needed for the same volume of work. Such applications will continue to increase, so overall demand for lawyers will decline. It is unlikely, however, that language models will replace lawyers entirely, as appropriately qualified lawyers will still be needed for quality assurance and for the further development of the language models themselves.
In the area of law enforcement or legal implementation, the use of language models can have a disruptive effect if, for example, they are used to develop alternative methods of conflict resolution and broad acceptance can be achieved for these methods. The lack of effectiveness and acceptance of the conflict resolution mechanisms provided by the state, particularly in the form of civil proceedings, is already paving the way for this.
This text was machine-translated from German.
Professor Brian Valerius
Brian Valerius has held the Chair of Artificial Intelligence in Criminal Law at the University of Passau since October 2022. In his research, he deals with substantive criminal law and criminal procedure in its entirety. Last but not least, he is dedicated to issues of medical law and the legal challenges of digitalisation and artificial intelligence.



