One of the topics that I am currently investigating is artificial intelligence. I have subscribed to ChatGPT-4, and am working my way through the new discipline of “prompt engineering”. It may prove very useful over the longer term, as I am reasonably sure that AI will not prove to be a passing fad, but will become what might be termed an “embedded technology”. By that I mean something so fundamental to our daily lives that we simply do not notice its wonder any more: like electricity, or the omnipresence of oil in so many things and processes.
If so, then the civil justice system and the legal process will not be immune to the introduction of AI into their workings. This month guidance has been published for the judiciary in England and Wales on AI, and how it might impact on their work: the guidance can be found here AI-Judicial-Guidance. The guidance concludes by noting what AI can be used for, and what it should not be used for, in the judicial context:
Potentially useful tasks
• AI tools are capable of summarising large bodies of text. As with any summary, care needs to be taken to ensure the summary is accurate.
• AI tools can be used in writing presentations, e.g. to provide suggestions for topics to cover.
• Administrative tasks like composing emails and memoranda can be performed by AI.
Tasks not recommended
• Legal research: AI tools are a poor way of conducting research to find new information you cannot verify independently. They may be useful as a way to be reminded of material you would recognise as correct.
• Legal analysis: the current public AI chatbots do not produce convincing analysis or reasoning.
Indications that work may have been produced by AI:
• references to cases that do not sound familiar, or have unfamiliar citations (sometimes from the US)
• parties citing different bodies of case law in relation to the same legal issues
• submissions that do not accord with your general understanding of the law in the area
• submissions that use American spelling or refer to overseas cases, and
• content that (superficially at least) appears to be highly persuasive and well written, but on closer inspection contains obvious substantive errors
It follows hard on the heels of more expansive and detailed guidance from Australia, which can be found here: AIJA_AI-DecisionMakingReport_2023update. Both the domestic document and the Australian document contemplate AI as being a normal part of the legal ecosystem, albeit not a part without its risks. But one of the parts of the report I found most interesting was its description of what is already happening in England and Wales:
Another example is the English Traffic Penalty Tribunal (TPT). The TPT decides motorists’ appeals against Penalty Charge Notices (PCNs), issued by local authorities and charging authorities in England (outside London) and Wales, for parking and traffic contraventions. The Tribunal comprises 30 part-time adjudicators who are judicial officers working remotely with the support of 14 administrative staff. The process employs ‘Triage questioning’ for appellants during the appeal registration process, which guides them through the information they need to provide to initiate an appeal, including information about themselves, the vehicle and the PCN. Other technology is also used. The process provides for the upload of evidence, ranging from photographs and videos, to PDFs of documents, to screen captures of WhatsApp messages. Appellants have the option to select either:
1 an e-decision: A TPT Adjudicator will decide the appeal without a hearing or talking to the parties, often asking questions in a message and the parties replying promptly.
2 a telephone hearing: the motorist can ask for a teleconference with the adjudicator, with an Authority representative usually taking part.
In the United Kingdom, an online portal known as Money Claim Online (‘MCOL’) has, since 2002, facilitated simple, small claims of £100,000 or less without the need to enter a court building or engage a solicitor. A comprehensive practice note, which supplements the Civil Procedure Rules, Practice Direction 7E – Money Claim Online, delineates the rules and procedure applicable to the MCOL, including the types of claims that can be made (dir 4) and the way that a claim ought to be commenced (dir 5).
A separate portal, made public in 2018 and known as the Civil Money Claims portal, allows applicants to make a claim if the value of their loss is less than £25,000 (raised from £10,000 in May 2022). Since its public beta testing in March 2018, more than 378,000 claims have been made using the portal, with 97,315 claims filed in 2022 alone. Of the 9,560 mediation appointments made in 2022, 50.4% settled, within an average of 24 days. The system has achieved a 95% user satisfaction rating. The programs take users through the eligibility requirements necessary to make a claim before determining whether their matter is suitable for the MCOL or the Civil Money Claims portal. If the case is defended and certain automatically generated documents are filed via the system, the claim may go to mediation or the local court. However, non-response or a willingness by the defendant to pay the sum can facilitate a ‘judgment’ through the money claim online portal. The user inputs the terms of the ‘judgment’ (e.g. the method of payment, whether it is to be paid by instalments) to be confirmed by the court. The portal can be used to issue a warrant in the event of non-payment.
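The routing logic the report describes is, in essence, a simple decision procedure: the value of the claim determines which portal applies, and the defendant’s response (or silence) determines whether the matter proceeds to mediation, the local court, or an online ‘judgment’. A minimal sketch in Python, using the thresholds quoted above; the function and field names are my own invention, not part of either portal:

```python
# Hypothetical sketch of the portal routing described in the report.
# Thresholds (£25,000 and £100,000) come from the text; names are invented.

def route_claim(value_gbp: float) -> str:
    """Route a money claim to the appropriate online portal by value."""
    if value_gbp < 25_000:
        return "Civil Money Claims portal"   # limit raised from £10,000 in May 2022
    if value_gbp <= 100_000:
        return "Money Claim Online (MCOL)"
    return "ordinary court proceedings"      # above the online limits

def next_step(defended: bool, responded: bool) -> str:
    """What happens once the defendant has responded, or failed to."""
    if not responded:
        return "judgment via portal"         # non-response facilitates a 'judgment'
    if defended:
        return "mediation or local court"    # defended claim with documents filed
    return "judgment via portal"             # defendant willing to pay the sum
```

So a £5,000 consumer claim would be steered to the Civil Money Claims portal, while a £50,000 claim would fall within MCOL; in either case an undefended or ignored claim resolves online without a hearing.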
So how might AI come to bear on credit hire litigation? There are a number of ways in which AI could soon become very important indeed.
Let me give you one example. One of the issues that has troubled me over the years is that in some cases claimants are being found to be fundamentally dishonest, when they are more aptly described as unsophisticated, careless, or both. If this is then coupled with a “factory farming” approach to claims management and litigation, where a multitude of individuals at a solicitor’s firm have dealings with the claimant, then the scene is set for disaster.
An initial account, partial and incomplete, may be taken for the purpose of completing a claims notification form. A different account may be given to a GP working through a long list of 15-minute appointments, and that account finds its way into the medical report. The report is sent to the claimant for checking, who does not bother to read it properly, or certainly does not compare it line by line with the claims notification form.
A yet further account is then given in a witness statement, drafted from a template rather than in the claimant’s own words; again the claimant simply does not read it, but signs the witness statement where indicated. All of these documents are then put in a trial bundle, and provide rich and fertile ground for defence counsel to cross-examine the claimant and exact damaging concessions. But in a world where AI is deployed, the claimant would give a single account, prompted and guided to the relevant issues of liability, causation, quantum and so on by the AI programme, and that account would be pulled through to every other document where the claimant’s input is required.
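The idea of a single account “pulled through” to every document is, in software terms, a single source of truth: the account is captured once, made immutable, and every downstream document is generated from it, so inconsistent versions cannot arise. A minimal illustration, assuming nothing about any real claims system; the class, field names and example wording are all invented:

```python
# Hypothetical sketch of the 'single account' idea: capture the claimant's
# account once, then generate every document from the same canonical record.
from dataclasses import dataclass

@dataclass(frozen=True)  # immutable: the account cannot drift between documents
class ClaimantAccount:
    liability: str
    causation: str
    quantum: str

def render(document_name: str, account: ClaimantAccount) -> str:
    """Every downstream document draws on the one canonical account."""
    return (f"{document_name}\n"
            f"Liability: {account.liability}\n"
            f"Causation: {account.causation}\n"
            f"Quantum: {account.quantum}")

account = ClaimantAccount(
    liability="Rear-end collision at a junction; defendant at fault",
    causation="Claimant's vehicle unroadworthy; replacement hire required",
    quantum="14 days' credit hire at a daily rate",
)

cnf = render("Claims Notification Form", account)
statement = render("Witness Statement", account)
```

Because the claims notification form and the witness statement are both rendered from the same frozen record, their substantive content is identical by construction, which is precisely what removes the cross-examination foothold described above.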
At a stroke, human error in the presentation of claims could be largely removed.