Lawyer who used ChatGPT in lawsuit gets found out

Let this be a lesson to us all...

Executive male reading papers on couch with robot hands/Pexels

During a legal case in which a man sued an airline after a serving cart struck his knee mid-flight, some inconsistencies surfaced in the court filings. 

Usually, the legal team files a brief that includes details supporting its client's case against the opposing party. This means the lawyer may cite similar cases ruled on in the past. 

These citations serve as precedent, helping the judge rule in the complainant's favour. 

In this case, the attorney in question, Steven A. Schwartz of the firm Levidow, Levidow & Oberman, threw himself under the bus. 

Like a kid too lazy to complete his homework, he decided to use the 'not-so-reliable' (it seems) services of artificial intelligence. 

He used ChatGPT to help write the brief, and it was found that many of the cases cited did not exist. 

Schwartz admitted that he used ChatGPT in the preparation of the legal brief for his client, Roberto Mata. 

The AI program cited at least six cases in its purported research that were not legitimate. 

"The whole thing serves as a cautionary tale, especially relevant since many experts are predicting that generative A.I. systems like ChatGPT will transform the jobs of lawyers and other professionals." (Facebook)

This is a classic example of why the current generation of AI chatbots is not equipped to write legal briefs. 

It brings to light the fact that these tools are not as reliable as they have been portrayed to be. The catch is that, as a professional adult, you cannot blame the chatbot. 

As more and more people across the globe come to trust technology, it is important to treat it as a source of assistance, not one of complete reliability. 
