AI’s Greenhouse Effect on Human Cognition

New York lawyer faces reputation damage after citing fake AI-generated legal precedents in court documents.

The Precedent Problem

When a New York lawyer cited numerous non-existent precedents generated by a language model in a court filing, his reputation suffered lasting damage. He explained that he had not known language models could make such errors. Does this explanation excuse him, or does it rather call into question his thoroughness, diligence, and curiosity?

AI’s Growing Influence

Almost three years have passed since that case, yet some people still rely uncritically on content generated by language models. Recently, the Australian branch of a large corporation had to refund part of its fee to the government over an AI-generated report whose bibliography cited sources that did not actually exist.

Broader Implications

These are no longer just anecdotes about students generating essays with AI at the last minute, or complaints about the technology's shortcomings. Above all, they are evidence that to use a tool effectively, one must understand what it is, and what it can and cannot do.

Human Responsibility

Whatever narratives the companies selling this technology promote, the only one responsible for how generated "knowledge" is used is the human being. And humans very much want to believe that modern tools have freed them from the obligation to verify the content they pass on.
