The Pillow Case: When AI Cites Ghosts in the Courtroom
Today, we are exploring an area that has been on our radar for some time: the intersection of legal research and AI. And we'll get there by way of pillow talk. More specifically, the litigation involving My Pillow CEO Mike Lindell, stemming from his claims that voting machines were rigged in the 2020 election, and the fallout that followed. In this drama, Lindell is not the lead; his lawyers had the starring role.

The Horror of the Bench Slap
Short of disbarment, there is little worse than a judicial bench slap to the face, and a bench slap may well lead to disbarment. With artificial intelligence (AI) becoming increasingly ubiquitous, more lawyers are putting themselves at risk by letting AI do their work without sufficient oversight. Being publicly chided by the very system they swore to uphold is a matter of professional shame.
Spectacular Failures
Not only can an AI algorithm be wrong, but it can be wrong in spectacular ways. Miscite a real case and a court may give you some leeway. But cite a non-existent case? That's a whole other (obvious!) level of error. Proper representation demands zealous advocacy, and a fabricated citation undermines everything.
A Hallucinated Case: Matter of Burghardt
Fabricating citations was a serious breach of professional ethics even before the advent of AI. Take, for example, In the Matter of Burghardt (New York, 1986), where the court addressed an ethical violation involving an attorney who fabricated legal citations. Michael Burghardt submitted a brief with citations that appeared legitimate but were entirely fabricated. They were formatted correctly and even included fabricated quotations. When questioned, Burghardt claimed the cases were unpublished. He was disbarred for two years.
What an astounding find for my research. Except it wasn't. That case was fiction, a hallucination created by OpenAI's ChatGPT. The AI invented a case about inventing case law. Kafkaesque. I wasn't trying to trick the chatbot; I expected real citations. The irony was not lost on me.
When I asked ChatGPT to verify, it replied: “I couldn’t find any credible record—New York or elsewhere—of a 1986 case titled Matter of Burghardt involving fabricated case law by an attorney. It’s possible that the story has been misstated, conflated with another incident, or even become part of legal folklore over time.”
This is called the passive voice. As my legal writing professor used to say, the passive voice deflects culpability (“the gun was fired”). It's stunning: the machine hides behind language to disavow its own mistake. It eventually gave up the pretense: “It appears that the citation may have been in error.”
The Pillow Case
On July 7, 2025, Judge Nina Y. Wang of the U.S. District Court for the District of Colorado issued an order making absolute a prior show-cause order. The order addressed the defendants' response and their motion for leave to correct earlier filing errors, which included citation problems and misstatements of law.
At issue at the final pretrial hearing was a motion in limine, a request to exclude prejudicial evidence. Christopher Kachouroff, lead counsel for Lindell and My Pillow, appeared at the hearing. After arguments, the judge prompted Kachouroff to respond to the glaring errors in the filing. It was like a parent coaxing a child to confess, except in federal court.
Kachouroff practices at McSweeney Cynkar & Kachouroff PLLC, a three-person law firm in Virginia. Its website boasted big-firm representation at reasonable rates. The site's last update? 2014. A red flag that tech-savviness might not be the firm's strength.
Rule 11 and Ethical Duties
Federal Rule of Civil Procedure 11 (Rule 11) requires attorneys to ensure that their filings are factually and legally sound. It also holds entire firms accountable for violations.
Meanwhile, ABA Model Rule 3.3 "Candor Toward the Tribunal" states: "A lawyer shall not knowingly make a false statement of fact or law to a tribunal."
This directly applies to reliance on AI. If an attorney cites fabricated law—even unknowingly—they may still violate Rule 3.3 if they failed to verify the source.
Rule 11 is the enforcement tool. It doesn't just impose monetary sanctions; it opens the door to disciplinary referral. A violation here isn't merely embarrassing. It can end a career.
"We Ran It Through AI"
At the pretrial conference, Mr. Kachouroff confessed, "I did an outline and drafted the motion, then we ran it through AI." He added, "I did not personally check it."
The Citation Errors:
| Case or Citation | Error Description |
| --- | --- |
| World Wide Ass’n of Specialty Programs v. Pure, Inc. | Misquoted for a principle on reputation and character that does not appear in the decision. |
| United States v. Reaves | Cited for an evidentiary rule it does not support. |
| Estate of Martinelli v. City & County of Denver | Non‑existent case citation. |
| United States v. Hoffman | Non‑existent case citation. |
| United States v. Hassan | Incorrectly attributed to the Tenth Circuit (it is a Fourth Circuit case). |
| Ginter v. Northwestern Mutual Life Ins. Co. | Incorrectly attributed to the District of Colorado. |
Following the hearing, both Mr. Kachouroff and second chair Jennifer DeMaster were ordered to show cause why they should not be sanctioned or referred to the state bar.
In their response, the attorneys went on the offensive, blaming the court for not catching the errors in their filings. Diplomacy was not in their toolbox.
That argument didn’t land well. The court called their tone “troubling and not well taken.” Apparently, lawyers don’t get to ask the judge to check their homework.
DeMaster leaned on Westlaw’s AI, saying it didn’t flag any bad law. That may be technically true, but it misses the point. Just because the robot didn’t catch it doesn’t mean you're off the hook.
The Errors Keep Coming
It got worse. The "corrected" version of the brief still contained the same fake citations. One section still read "Add language here," a placeholder that says everything about how atrocious the firm's editing was.
Even worse: these same lawyers had used fabricated citations in a different case in Wisconsin. The court discovered this only after issuing its show cause order.
The Problem with Hallucination
Claude fared better than ChatGPT in my research on fabricated citations: it found no pre-AI case law on point at all. Outright fabrications were rare before the advent of AI. Now AI has made citation so effortless that it seems easier to hallucinate law than to research it.
A "hallucination" is when AI confidently provides false information. The term is apt: the machine isn't lying, but it believes what it's saying. Lawyers relying on it? They're the ones who must answer to the judge.
Axios noted that AI expert Damien Charlotin tracked over 30 cases of hallucinated legal citations in May 2025 alone.
Getting Worse, Not Better
Karen Weise of The New York Times reported in May 2025 that hallucinations are getting worse with newer AI models. More power has not meant more accuracy.
Stanford researchers tested legal AI tools, such as Lexis+ AI and Westlaw AI. Between 17% and 33% of their answers were false or misleading, despite vendor claims of being "hallucination-free."
Lexis+ AI was the best performer, correctly answering 65% of questions. But if a paralegal made that many errors, they'd be out of a job.
A New Kind of Legal Error
Hallucinations aren't deliberate deceit, but they carry no less risk. Rule 11 doesn't require intent; it requires only that a reasonable attorney should have known the argument was not grounded in existing law. And fabricated cases are, by definition, not existing law.
Law students are required to take a professional responsibility course and pass an ethics exam. The core lessons? Don’t steal money. Don’t sleep with clients. Five years ago, a lawyer citing hallucinated cases would have been unthinkable.
Now? It's happening in open court. And the stodgy ivory towers of legal academia have yet to catch up.
Takeaway
If the AI writes your brief, you still own it. And judges don’t grade on a curve.