Today, AI is no longer limited to legal research and document automation. In international arbitration, it is already used to draft procedural orders, summarize the parties’ positions, prepare procedural calendars, and sometimes even produce a first draft of the reasoning section of an award. The toolkit keeps expanding: Technology-Assisted Review, analytics platforms such as Lex Machina and Premonition, the Arbitrator Intelligence service, and even AI “digital secretaries” now being piloted by some arbitral institutions.
New opportunities, however, bring new risks. The recent case of LaPaglia v. Valve showed that the use of generative AI can itself become grounds for challenging an arbitral award: the losing party argued that the arbitrator had “delegated” his reasoning to ChatGPT. This raises a fundamental question: where does the line run between the permissible use of AI and the impermissible delegation of decision-making?