Author: Carlos Matheus López
Lawyer, university professor, and arbitrator
The article is also available in French and Spanish.
Published and translated by the firm Winter – Dávila & Associés
Paris, April 2, 2026
Artificial intelligence has entered the field of international arbitration with a compelling promise: greater efficiency, reduced costs, and potentially better-reasoned decisions. However, as is often the case with disruptive technologies, its true value lies not so much in what it can do, but in how—and within what limits—it is used.
A first observation is clear: AI offers undeniable advantages. It enables the drafting of more precise arbitration clauses, the analysis of vast volumes of documents in a matter of seconds, and assistance in the selection of arbitrators. It also optimizes administrative tasks, improves evidence management, and allows for predictive analyses regarding the likelihood of success of certain arguments. In a system such as international arbitration—where time and complexity are critical factors—these capabilities constitute a structural transformation.
Yet this technological promise quickly encounters a fundamental limit: the temptation to replace the human arbitrator.
The key is not to reject artificial intelligence, but to use it appropriately. This requires acknowledging that AI is a powerful but fallible tool; useful, but not autonomous; efficient, yet devoid of judgment.
In theory, nothing would prevent parties from appointing a “robot arbitrator.” Programs already exist that can organize arguments, analyze evidence, and even draft arbitral award templates. In practice, however, such a prospect appears deeply problematic. Arbitration is not merely an exercise in information processing: it is also an act of judgment, balancing, and responsibility. These dimensions remain, to this day, irreducibly human.
Moreover, even an intermediate solution, in which decision-making is partially delegated to AI, entails similar risks. When an arbitrator relies on arguments generated by artificial intelligence, their independence and impartiality may be compromised, and they may lose sight of the intuitu personae nature of their mandate, which is by its very essence non-transferable.
The guiding principle must be unequivocal: artificial intelligence may assist the arbitrator, but must never replace them.
To this must be added the operational risks associated with AI. The first is bias, which not only reflects inequalities present in data but may amplify them at scale. The second is algorithmic opacity, that is, the inability to explain the reasoning leading to a given conclusion. The third lies in so-called “hallucinations,” namely the generation of erroneous information presented with an appearance of credibility. Recent cases before various courts have demonstrated that this risk is far from theoretical: lawyers have cited non-existent case law generated by AI, exposing themselves to disciplinary sanctions.
In response to these challenges, a consensus is emerging at both the normative and practical levels. The European AI Act classifies systems used in the administration of justice and alternative dispute resolution as “high-risk,” imposing heightened requirements in terms of transparency, human oversight, and explainability. At the same time, various soft law instruments—such as the Silicon Valley Arbitration & Mediation Center’s Guidelines on the Use of Artificial Intelligence in International Arbitration, and the CIArb guidance on the use of AI in arbitration—converge toward similar principles: technological competence, non-delegation of decision-making, verification of outputs, and protection of confidentiality.
This last point is far from secondary. The use of AI often involves introducing sensitive data into systems whose processing methods are not always transparent. In a field where confidentiality is a cornerstone, this risk may undermine the very integrity of the arbitral process.
The arbitrator, ultimately, is not merely a decision-maker: they are the guarantor of the fairness, originality, and legitimacy of the arbitral process.
The future of arbitration will be neither entirely human nor purely artificial, but hybrid. However, this evolution carries a subtle risk: like the myth of Pygmalion, that of the creator becoming enamored with their own creation and attributing to it qualities it does not possess. In the arbitral context, this would translate into excessive reliance on artificial intelligence. This is why, even when technology improves the process, it is essential to recall that the decision—and the responsibility that follows—must ultimately remain in human hands.
LEGAL NOTICE: This article has been prepared for informational purposes only. It is not a substitute for legal advice directed to particular circumstances. You should not take or refrain from taking any legal action based on the information contained without first seeking professional, individualized advice based on your own circumstances. The hiring of a lawyer is an important decision that should not be based solely on advertisements.

