ChatGPT: Lawyer Faces Court Hearing for Citing Non-Existent Cases Invented by AI
A New York lawyer is facing a court hearing after his firm used the AI chatbot ChatGPT for legal research, The New York Times reported.
The court discovered that several of the legal cases the lawyer and his firm had referenced in an ongoing case never existed.
The original case, for which the research was carried out, involved a man suing an airline.
When the man's legal team submitted a brief citing past court cases to support their claim, the airline's lawyers informed the judge that they were unable to locate several of the cited cases.
Expressing regret for relying on the AI, the lawyer said he had been unaware that the tool could produce false information.
The senior lawyer also promised to 'never use AI to supplement his legal research in future without absolute verification of its authenticity'.
Such instances, in which a chatbot provides convincing but completely made-up answers, are known as hallucinations. This is not the first time ChatGPT has been reported to hallucinate.
Last month, ChatGPT accused a professor of sexually harassing a student and invented a news article to support the claim. In reality, no such article had ever been written.