Lawyers Who Use AI May Not Be All That Intelligent

I saw a New York Times article last week about a lawyer for a plaintiff in a personal injury suit against an airline who used ChatGPT to draft a brief opposing a motion to dismiss his case.  As it turned out, that was not a great decision.

According to the Times article, the brief cited a litany of supporting cases that were, um, made up.  Lawyers for the airline apparently searched unsuccessfully for the cases and finally let the court know what they found.  Or, more accurately I guess, what they didn't find.

The plaintiff's lawyer (I'm not going to name him because there but for the grace of God go any number of us) apparently threw himself on the mercy of the court, which was his only option given that the only checking he did on the bogus cites was to ask the ChatGPT program if they were accurate.  The program assured him they were.  But it now appears AI is not the most reliable source.  The judge set a hearing for June 8 to discuss potential sanctions.  So, no word yet on what discipline the lawyer will face.  I can see the judge cutting him a break, but I can also imagine the judge making an example of the lawyer, which means the hammer could come down hard.

To avoid this messy situation, a federal judge in Texas has recently adopted a policy on the use of artificial intelligence.  Judge Brantley Starr now requires lawyers appearing before him to submit a "Mandatory Certificate Regarding Generative Artificial Intelligence."  From now on, lawyers will need to certify that "no portion of the filing was drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence was checked for accuracy, using print reporters or traditional legal databases, by a human being."  Judge Starr adopted this policy because "[t]hese platforms in their current states are prone to hallucinations and bias. On hallucinations, they make stuff up—even quotes and citations. Another issue is reliability or bias. While attorneys swear an oath to set aside their personal prejudices, biases, and beliefs to faithfully uphold the law and represent their clients, generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath. As such, these systems hold no allegiance to any client, the rule of law, or the laws and Constitution of the United States (or, as addressed above, the truth)."

Suffice it to say, Judge Starr is no fan of AI.  Accordingly, the policy provides "the Court will strike any filing from an attorney who fails to file a certificate on the docket attesting that the attorney has read the Court's judge-specific requirements and understands that he or she will be held responsible under Rule 11 for the contents of any filing that he or she signs and submits to the Court, regardless of whether generative artificial intelligence drafted any portion of that filing."

If a lawyer or a party violates Rule 11, they can be responsible for paying the legal fees the other side incurs in responding to the offending brief.  So, this policy has some teeth.  As is often the case, don't mess with Texas.

About The Author

Jack Greiner | Faruki Partner