Elite law firm Sullivan & Cromwell admits to AI “hallucinations”

$2,000 an hour – and you still get AI slop


22 April 2026 (New York, NY) – Sullivan & Cromwell told a U.S. federal bankruptcy court that a major filing it made in a high-profile case contained multiple “hallucinations” generated by AI software.

Andrew Dietderich, the head of S&C’s restructuring practice, apologised in a letter to New York federal judge Martin Glenn on Saturday for mistakes that included misquoting the US Bankruptcy Code and citing cases incorrectly in a court filing made on April 9th:

“We deeply regret that this has occurred”.

I mean, come on, Sullivan & Cromwell – can’t you just own it? Can’t you just say “we apologize for our mistakes”? Or “we deeply regret that we did this”? Or “we really screwed up”?

It reminds me of when the police gun down an unarmed man and we get “Shots were fired”.

No, Dietderich comes along and says the firm’s policies on the use of AI had not been followed when the document was prepared, and that it was considering whether it needed to make “further enhancements” to its internal training and review processes. The letter did not identify which lawyers prepared the documents or whether they were still at the firm. S&C declined to comment to the media beyond the statement.

But these errors are the latest example of a professional services firm grappling with the use of cutting-edge technology to speed up laborious research and cut down on staffing while also trying to maintain quality standards.

In multiple instances in the April 9th filing, S&C erroneously summarised the conclusions reached in other cases, according to a list of strike-through corrections the firm submitted to the judge.

The firm’s partners typically charge more than $2,000 per hour in bankruptcy cases. The firm earned several hundred million dollars in fees representing crypto exchange FTX in its bankruptcy liquidation.

Boies Schiller Flexner, the law firm representing Prince and Zhi, spotted the errors in S&C’s filing. In a document filed last week, BSF said words that S&C had quoted in its motion “do not appear in chapter 15 of the US Bankruptcy Code” and pointed to “multiple cited decisions” that were “misquoted or misidentified”. It said a case cited by S&C in the motion “is not a case” and the reference was to “a different decision in a different circuit”. The firm said its lawyers had gone through the S&C pleadings “in detail, scrupulously”.

S&C told the court that the firm maintained “rigorous” standards when using AI tools and that it “instructs lawyers to ‘trust nothing and verify everything’”. Failure to verify AI-generated output “constitutes a violation of firm policy”, it said.

It is the latest in a series of errors by law firms using AI tools. Last year, Latham & Watkins admitted that one of its lawyers had used Anthropic’s Claude model to help draft a filing which contained an apocryphal title and author for a journal article.

In another instance, a federal appeals court in New Orleans ordered a $2,500 sanction against a lawyer who had submitted a brief with 21 errors or fabrications that had been inserted by AI.

Separately, in September, John Kucera, then a partner at BSF, said in a case against Amazon that a document for which he was responsible, prepared using AI tools, contained “material citation errors” due to his “failure to verify” details. “I am embarrassed by and very much regret these errors”, he said in the filing. BSF did not respond to a request for comment.

S&C told the judge overseeing the Prince Group case that its document review also showed “non-substantive and/or clerical errors in other filings in this matter”. The firm said those errors were made by humans, not AI.

AI slop is commonplace in every industry, but it is increasingly prevalent in the legal industry. And the legal industry isn’t even trying to do better.

Anyone who has used AI in a professional services setting has experienced this and knows it happens all the time (and indeed is unavoidable). AI is not intelligence – it is large-scale pattern recognition. It is not reasoning; it is not thinking for you. It is amazing that this wasn’t picked up, but reassuring for anyone who fears AI will render humans obsolete any time soon.

And it’s funny that they have an “enterprise license” but not an “enterprise-grade AI toolkit”. It seems obvious that you need a custom LLM stack, tuned on your own IP, with total control over the prompts, tool calls and measured outputs. All of this is viable and within reach. But if you just hand out ChatGPT licenses, the user is only as smart as any other user.
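
To make that concrete – and this is purely a hypothetical sketch, not anything S&C or any vendor actually runs – an enterprise-grade stack would put a hard gate between the model’s draft and the filing: every citation the model emits gets checked against a verified database before a human ever sees the text. The regex and the lookup_citation() helper below are stand-ins; a real firm would wire the check into Westlaw, Lexis or a PACER mirror.

```python
import re

# Crude matcher for reporter-style citations such as "701 F.3d 1031".
# A production system would use a proper citation parser instead.
CITATION_RE = re.compile(r"\d+\s+[A-Z][\w.]+\s+\d+")

def lookup_citation(cite: str) -> bool:
    """Hypothetical lookup against a database of verified citations."""
    verified = {"701 F.3d 1031"}  # stand-in for a real citator query
    return cite in verified

def gate_draft(draft: str) -> list[str]:
    """Return every citation in the draft that could not be verified."""
    return [c for c in CITATION_RE.findall(draft) if not lookup_citation(c)]

draft = "As held in In re Vitro, 701 F.3d 1031; see also 99 F.4th 123."
unverified = gate_draft(draft)
print("cleared for human review" if not unverified else f"blocked: {unverified}")
```

The point is not the regex; it is that a deterministic check sits between the model and the court, so “trust nothing and verify everything” is enforced by the pipeline rather than left to a policy memo.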

The mistakes being made here could have been caught with basic cross-referencing. Lawyers, of all people, should be good at exactly that skill, so what you are also watching is a professional skill atrophying in real time.
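
The same goes for quotations. A first-pass check for the exact failure BSF flagged – quoted words that “do not appear in chapter 15 of the US Bankruptcy Code” – is just a whitespace-insensitive containment test against the statute’s real text. A minimal sketch, with 11 U.S.C. § 1506 inlined for illustration:

```python
def quote_appears(quote: str, source_text: str) -> bool:
    """Whitespace-insensitive check that a quote appears verbatim in its source."""
    norm = lambda s: " ".join(s.split())
    return norm(quote) in norm(source_text)

# 11 U.S.C. § 1506, chapter 15's public-policy exception, quoted from the statute.
SECTION_1506 = ("Nothing in this chapter prevents the court from refusing to take "
                "an action governed by this chapter if the action would be manifestly "
                "contrary to the public policy of the United States.")

print(quote_appears("manifestly contrary to the public policy", SECTION_1506))  # True
print(quote_appears("plainly at odds with public policy", SECTION_1506))        # False
```

In practice a firm would pull section text from an authoritative source such as uscode.house.gov rather than hard-coding it, but even this toy version would have flagged a quote that is not in the statute.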

It is hysterical, really. For a hyperscaler build-out now priced at $9.6 trillion – not counting labor, updates and chip renewals – you see the underbelly: American desperation to create something, anything, just to stay in the faltering U.S. race against China.

The U.S. tried Biden’s CHIPS Act and on-shoring, and it all failed. So they hyped AI as “better than humans”. But it makes childlike errors all day, every day – and stories like this one about S&C advertise the fact.

And a law firm charging $2,000 an hour relying on a general-purpose large language model like ChatGPT? There are specialized AIs built just for legal work, trained for that purpose. But even those require careful human oversight.

The prosecution rests, your honor.
