Artificial intelligence (“AI”) is increasingly transforming dispute resolution practice, offering potential efficiency gains whilst raising critical ethical and practical considerations. AI tools such as large language models (“LLMs”) and agentic AI are reshaping how law firms handle disputes and related legal services. These tools focus on language and text-based content, using deep learning and natural language processing to interpret text inputs and generate outputs, or operate as autonomous agents that make decisions and pursue goals with minimal human input.
Amidst the rising popularity of AI in the business world, how can professionals in the legal industry keep abreast of AI innovation and navigate relevant concerns, such as hallucinations or data privacy, to take full advantage of the opportunity?
Part A: Advantages of Adopting AI in Dispute Resolution
Whilst AI tools for litigation are relatively recent, and the surrounding case law and regulations are still developing, they offer certain clear benefits.
- Automating Routine Tasks
Amongst the key emerging AI use cases, AI can streamline the discovery stage of litigation by reviewing large volumes of documents for relevance and privilege. For example, e-disclosure platforms can learn from coding decisions made by human reviewers to suggest how additional documents in the review set should be coded, and prioritise documents that are likely to be most relevant for further review. LLMs may also automate the labour-intensive procedures in document disclosure, as commercially available AI tools can now perform a fully automated first-tier disclosure review, including acquiring, pre-processing and transforming raw data from pleadings, and reviewing an entire document set.
- Drafting Documents / Research Benefits
LLMs can not only assist with drafting when provided with appropriate prompts, but also process images, including both stills and live footage. In the drafting domain, LLMs can produce first drafts of correspondence, witness questions, or other documents. For example, AI can assist in drafting possible questions for cross-examination. Furthermore, LLMs linked to legal research databases are emerging in the market, which could significantly expedite complex legal research.
- Litigation Strategies & Legal Project Management
AI adds value to legal project management by offering data-driven insights and predictive analytics for informed decision-making. By examining legal disputes, industry trends, and online data through horizon scanning, it can identify potential high-level case tactics and highlight concerns in advance. AI can also evaluate a litigant’s financial position to predict their behaviour, whilst analysing defendants’ assets to estimate recovery prospects. Most critically, AI may help to forecast potential case outcomes by considering characteristics of the case, such as prior decisions made by the presiding judge and the legal arguments or defences put forth.
Part B: Key Risks
On the other hand, AI technology is still evolving and presents genuine risks for law firms. Here are some key examples of potential risks, although many are not new and can be addressed.
- Inaccurate Outputs and Biased Decision-Making
A widely reported issue involves AI “hallucinations”, which are instances where LLMs generate plausible but factually incorrect responses, because their outputs are driven by statistical prediction patterns that may not accurately reflect reality. A related issue is model drift, where an AI system’s performance degrades over time due to changes in input data or shifting real-world conditions, leading to inaccurate predictions or flawed legal analysis. This may result in missed facts or misapplied law, which is counter-productive to the very purpose of employing AI: providing fast and accurate services.
Also, the “fairness” of an AI tool’s output largely depends on the quality of its data sets and programme design, creating a risk of poor or biased decision-making. Indeed, recent news has reported on AI “misuse” in litigation, with various overseas courts criticising lawyers for citing “false cases”. Biased application may also lead to discriminatory results in terms of race, sex, or socio-economic background. The ethical and professional concerns in such cases are evident, as inappropriate AI use undermines the administration of justice.
- Data Privacy & Confidentiality
One of the biggest regulatory and ethical risks in using AI tools, especially open AI platforms, is the breach of a law firm’s confidentiality duties through divulging sensitive client data. As stated in Principle 8.01 of the Hong Kong Solicitors’ Guide to Professional Conduct, solicitors are responsible for protecting the client’s right to confidentiality, which continues indefinitely. Express or implied authorisation from the client must be obtained before any disclosure. This means that legal data should at all times be protected, including where legal professional privilege applies.
However, AI cloud services present an inherent risk to clients’ data privacy, as the provider of the model, or other users relying on the same LLM, could potentially access that data. LLMs also generally lack the ability to distinguish between confidential and non-sensitive data. Professionals who enter sensitive prompts into public AI platforms for confidential case analysis therefore jeopardise both their practice and their clients’ interests through potential confidentiality breaches.
Part C: Mitigation Strategies
Broadly speaking, law firms should be able to mitigate and manage these risks. Effective safeguards exist, including requiring source citations for all AI-generated content and implementing verification protocols to cross-check outputs against reliable references. Firms may also implement continuous monitoring of AI use, regulate AI model training with updated legal datasets, and maintain human oversight to ensure outputs are compliant.
Risk management practices, such as client communication or protocols for obtaining informed consent, may not entirely eliminate the risk of a confidentiality breach when using AI. Firms may therefore consider developing robust security infrastructure for online compliance when handling client data. For example, firms may use enterprise versions of LLMs, which are said to offer a more secure solution that safeguards client confidentiality and prevents sensitive data from being incorporated into model training datasets.
Conclusion
The emergence of AI represents a transformative shift in dispute resolution practice, with these tools’ increasing capability of handling complex “big ticket” litigation and arbitration tasks. However, solicitors must remain vigilant in how they input data, interpret AI outputs, and integrate these tools into their practice, so as to avoid potential legal, ethical and regulatory pitfalls whilst leveraging their full potential. Ultimately, the key lies in augmenting legal talent and expertise with AI-driven efficiency, ensuring that technology enhances, rather than replaces, the nuanced judgment essential to legal practice.
In addition to handling litigation and arbitration cases, our firm has expertise in advising clients on AI and technology-related laws and ethics, conducting training on AI-related risk management, and drafting AI user handbooks. As AI technology is generally welcomed by the courts and arbitral tribunals, our Dispute Resolution team stands ready to help clients optimally navigate AI tools and supercharge the efficient resolution of disputes for their businesses.
If you have any questions about the above eNews or legal queries generally, experienced lawyers in our Litigation and Dispute Resolution team will be happy to assist you.