In a recent unnamed case, a litigant-in-person (someone representing themselves without a lawyer) attempted to rely on fictitious submissions in court, based on answers provided by the ChatGPT chatbot.
The civil case, heard in Manchester, involved one represented party and one unrepresented. Proceedings ended for the day with the represented party's barrister arguing that there was no precedent for the case being advanced. The following day, the litigant-in-person returned to court with four case citations, each apparently supporting the point they were trying to make.
On closer inspection by the barrister, it transpired that one case name had simply been fabricated, while the other three were genuine case names but the passages attributed to them bore no resemblance to the actual judgments. In all four citations, the paragraphs quoted were entirely fictitious. It is understood that the judge quizzed the litigant-in-person, who admitted they had asked the AI tool ChatGPT to find cases that could prove their argument.
The chatbot appears to have drawn on a bank of case names and invented excerpts, purportedly from those cases, tailored to the question it was asked. The judge accepted that the misleading submissions were inadvertent and did not penalise the litigant.
Oliver Kew
Published on 10/07/2023