Stanford professor Jeff Hancock faces accusations of citing a non-existent study in his expert testimony related to Minnesota’s proposed deepfake legislation. The issue was raised by attorneys for the plaintiff, conservative YouTuber Christopher Kohls, in a lawsuit challenging the law. The case sits within a broader political debate over free speech and the legality of deepfakes during elections.
Minnesota Attorney General Keith Ellison used Hancock’s testimony to defend the proposed law, arguing that deepfakes threaten political integrity.
The allegations state that Hancock’s declaration cited a study titled “The Influence of Deepfake Videos on Political Attitudes and Behavior,” which the plaintiff’s legal team says does not exist in the journal in which it was purportedly published. They argue that the citation was likely fabricated by an AI language model, potentially undermining the credibility of his entire declaration.
The plaintiff’s lawyers noted that the citation does not appear in any academic databases, which raises significant questions regarding its authenticity. They concluded:
“The declaration of Prof. Hancock should be excluded in its entirety because at least some of it is based on fabricated material likely generated by an AI model.”
AI’s role in this courtroom drama
The implications of these allegations extend beyond this case. They challenge the reliability of AI-generated content in legal contexts, a concern that echoes recent incidents in which lawyers faced sanctions for including fabricated citations in legal filings. The court filing underscores how the veracity of expert testimony can be undermined by AI’s tendency to produce plausible-sounding inaccuracies, often referred to as “hallucinations.”
Hancock has a well-documented background in misinformation research, having contributed significant studies in the field and delivered popular public talks on the subject. He has not yet publicly commented on the claims against his testimony.
The viral Kamala Harris deepfake and its implications
The lawsuit stems from a parody video by Kohls featuring an AI-generated imitation of Vice President Kamala Harris, the kind of content the Minnesota law targets. Scrutiny of the declarations filed in the case is ongoing, raising concerns about how courts will handle expert testimony that relies on AI-generated material.
The Minnesota deepfake legislation under scrutiny aims to impose legal constraints on the creation and distribution of deepfakes around election periods. Opponents argue that this framework could infringe on constitutional free speech rights, raising concerns about censorship and the implications for digital expression. As the case unfolds, further analysis is expected on the intersection of technology, legal standards, and free speech rights.
It remains to be seen how the court will respond to the allegations surrounding Hancock’s testimony and whether the ruling will set a precedent for how AI-generated content is treated in legal proceedings. The legal community is watching the case closely for its implications for pending legislation on digital content and political misinformation.
Featured image credit: rorozoa/Freepik