Artificial intelligence (AI) is being explored and leveraged across almost every sector in various ways, but its use in the legal industry has prompted growing concern. Google the topic and commentators from across the profession and around the world are asking the same questions: how will it affect legal processes? Could it undermine justice? How will it impact lawyers, and could it even replace them? It’s unsurprising, then, that the world’s largest lawyers’ association – the American Bar Association – recently created a taskforce to research and explore the true impact of AI on legal practice.
AI is already heavily used by the profession, from document automation, practice management tools and contract review to predictive AI. However, the rapid pace of AI development, and its potential, has raised concerns about how it could be misused and abused to undermine legal processes in any jurisdiction. Across Asia there are differing levels of concern and regulation: China, for example, has created extensive laws governing AI, whilst Singapore has none. Given these different approaches and apprehensions, it’s valuable for law firms and lawyers to understand which concerns are rooted in fact and which in fiction. That understanding will show what kind of upskilling is required to harness AI’s potential in law. Whilst there are a number of tools we could explore, two main AI technologies lie behind the concerns.
Technology that can create different kinds of content – for example, images, audio and text – is known as generative AI. It has been around since the 1960s, but it was not until the 2010s that the creation of generative adversarial networks (a type of machine learning) accelerated its development. The most familiar and famous generative AI tool in use today is ChatGPT, which is built on a large language model.
The main worry is that such language models will start to replace lawyers. However, their notable limitations make this very unlikely – robot lawyers aren’t a realistic prospect. Chief Justice of Singapore Sundaresh Menon recently reminded lawyers of the perils of using generative AI tools like ChatGPT, saying that “such tools are obviously not bound by values like honesty and integrity, and can therefore provide wholly incorrect answers”. He cited cases in Singapore and the US where individuals had used ChatGPT for submissions “that included fabricated case law”. This clearly highlights the risks of using ChatGPT without caution.
Data protection is also a major issue. Inputting clients’ confidential information into open AI systems like ChatGPT risks that information being exposed to, and misused by, others.
However, this doesn’t mean generative AI should be avoided. Used correctly, it can still play a positive role in a lawyer’s day-to-day work, taking on time-consuming tasks, improving efficiency and adding value for the client. For example, generative AI can provide generic guidance or summarise large pieces of text.
So, as generative AI becomes more sophisticated and popular, and as similar products enter the mainstream, lawyers should be trained to make the most of this innovative technology.
But the emphasis should also be on how to avoid the dangers – for example, the importance of validating the source material behind the results.
Due diligence AI tools can automate the extraction of relevant data from sets of documents, using natural language processing to understand text and spoken word much as humans do. As a result, thousands of documents can be quickly classified and categorised to facilitate further review. This makes such tools brilliant for identifying risks, highlighting key considerations and improving compliance, as they reduce the likelihood of human error when manually reviewing masses of information on a tight timescale. They automate what can be a mammoth job for legal teams, given the huge amount of data that exists around cases, companies, individuals and transactions.
There is a commonly held perception that increased use of document analysis and automation tools for due diligence will mean less need to hire lawyers. Rather than a firm hiring large teams to perform this task, these AI tools can review thousands of documents in a short space of time.
On the flip side, it actually increases firms’ ability to take on larger projects – especially smaller firms, which historically wouldn’t have had the capability to do so. But does that still mean fewer lawyers? No. Across the board, more lawyers will be required to perform the other tasks these bigger (and therefore more complex) cases demand – tasks that can’t be replicated through AI – and technology alone, in its current form, is unlikely to complete 100% of the job a lawyer would do.
These systems still require the lawyer to understand and apply the context of what the process needs to achieve, and to input the data in the first place. The tool itself cannot attach much meaning to what it finds, or to what that means for the transaction. Training is therefore vital if lawyers are to use these tools effectively, take on larger projects and keep their firm competitive.
AI isn’t human
Too much credit is given to the power of AI in law; in reality it struggles to understand complex areas of legal argument and the nuances that lie between points of law.
Although AI moves fast, understanding of it doesn’t quite keep up. Its dangers can be misunderstood, but that doesn’t mean the benefits – and the opportunity for upskilling – should be ignored.