How Readwise Reader is the best legal research tool that no lawyer is using.

After I passed the bar, a mentor of mine mentioned there are three things you need to do to be a good lawyer: (1) mind your manners, (2) tell the truth, and (3) know the law.

In one recent case in Illinois, a whole team of lawyers from a national law firm failed to follow these rules, and the story was almost made for a TV movie, if TV movies were still being made. What they did not know is that Readwise Reader, the most critical tool that lawyers are not using, could have helped them avoid it.

The Chicago Housing Authority was found liable in a pediatric lead poisoning case, with a jury awarding $24.1 million to children who suffered permanent brain damage. But what followed raised even deeper concerns. The CHA’s legal team, the firm Goldberg Segalla, tried to challenge the jury verdict using fake case law generated by AI. In the post-trial brief seeking to overturn the verdict, the citations weren’t just wrong; they were entirely made up.

Goldberg Segalla is no small outfit; it’s a national civil litigation firm with more than 450 attorneys on the roster. An entire team of lawyers had their hands on this case, but when things fell apart, only one of them took the blame, and Goldberg Segalla fired her. She fell on her sword while the rest stayed quiet.

Although the firm initially claimed the problem was limited to “one case out of 50-some-odd case citations,” the court did its homework and revealed that the post-trial motion contained multiple misrepresentations, including outright factual falsehoods about their own case and problematic legal citations that were not limited to just one case or one attorney. The judge considered this a cover-up to protect the firm from bad publicity and to save face.

The judge didn’t mince words in his order and pointed out that neither the CHA nor Goldberg Segalla withdrew the bogus filing, not even a portion of it. They didn’t try to replace it with anything accurate. They didn’t own up to the other false citations the plaintiffs identified—ones that went beyond the one made-up case. And they never disclosed whether any other filings in the case also included fake or faulty citations.

But the problems didn’t stop with fake case law. In their post-verdict brief, Goldberg Segalla went so far as to accuse the plaintiffs’ attorney of making “racist, inflammatory and highly prejudicial comments” in front of the jury, claims that had no support in the record. They wrote, “Yet, despite its [the Court’s] efforts to correct Plaintiffs’ improper trial presentation, the ‘scent of the skunk’ never left the jury box.” That wasn’t just a dramatic turn of phrase; it was defamatory. They asserted that the attorney made racist slurs and that the judge never corrected her. They were accusing opposing counsel of poisoning the jury, while their own filing was built on fabricated citations and misrepresentations.

It bears repeating that at the heart of this case are two innocent children who will live the rest of their lives with permanent cognitive impairments because of lead poisoning. That kind of harm isn’t theoretical; it’s lifelong, and it’s irreversible. The gravity of that fact should’ve grounded every part of the legal process.

The attorney on the case claimed she didn’t fully understand how often AI could hallucinate case law. But just months earlier, in September 2024, she had written a piece for the firm titled “Artificial Intelligence in the Legal Profession: Ethical Considerations.” She also posted it on her LinkedIn account. So it’s hard to square the claim of ignorance with her own published words.

The Problem

Mindlessly telling a black-box consumer platform to research case law, even just as a starting point, isn’t okay. It’s no different from filing a paralegal’s research without checking it. The lawyers whom courts have called out didn’t rely on AI as a final authority; they used ChatGPT as a starting point, like a kind of secondary source. That might make sense if it were a reliable one, but it isn’t a secondary source at all.

Fact-checking hallucinated case law usually takes more time than just researching the case properly to begin with. And while you could try using something like Claude to double-check the results, that’s still a predictive model—it can just as easily make up new mistakes.

 When a lawyer does catch a fake citation, they’re not off the hook. Now they need to find a valid case, and just as importantly, they have to rewrite their argument to fit the real authority. That’s not just swapping a name—it’s a complete rewrite, because legal reasoning has to line up with the facts and the precedent.

Missing the fake citation is arguably the worst thing a lawyer can do, especially when it is submitted to a court.

A lawyer’s grasp of case law should be a no-brainer—or maybe a full-brainer. Most of us didn’t walk into law school planning to coast on autopilot. Intellectual curiosity is what separates great law students—and competent lawyers—from the rest. Studying the facts and applying the right cases is the fuel that keeps the legal vehicle running. Take that away, and the whole thing stalls.

Why even be a lawyer if you’re not willing to get fully engrossed in the issue you’re drafting? Why take on this role if you don’t want to live the life of the mind, or if reading case law feels like a chore instead of a practice and a profession?

When you hand over that work to a black-box tool and don’t engage with the material yourself, you’re not just cutting corners; you’re robbing yourself of your agency as a lawyer. The point of this profession isn’t just to produce filings. It’s to think, to reason, to advocate. That requires showing up with your whole mind, not outsourcing it to a machine. Legal research means digesting large amounts of case law to reach a specific point, and that work should be meaningful.

Readwise Reader

But let me get to my point. The Goldberg Segalla case exposes a fundamental problem: lawyers are increasingly relying on AI tools without maintaining direct engagement with source material. But there's a better way forward—one that leverages technology while keeping lawyers grounded in actual case law.

Enter Readwise Reader, which may be the first line of defense against fictitious citations. Readwise is an all-in-one tool that helps you manage what you read and remember it. It pulls together highlights and notes from books, articles, PDFs, emails, and even tweets, then resurfaces them over time so they stick. It’s like a second brain for anything you’ve read.

Readwise Reader lets you save just about anything you come across: web articles, newsletters, YouTube transcripts, PDFs, and even Twitter threads. You read everything in one clean space, highlight as you go, and all of it syncs back to your Readwise library.

Being able to pull case law from vLex or Google Scholar, then grab trial court orders in PDF form and tag them all with the same labels, makes research feel like a proper flow. You’re not jumping between tools or losing track of where things came from. It turns the whole process, from finding to synthesizing, into something of a breeze. You stay in the zone, and that’s where good thinking happens.
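If you like to script your intake, Readwise also publishes a public API for its Reader app. Here’s a minimal sketch in Python of pushing a case or court order into your library with matter-specific tags. It assumes the v3 “save” endpoint, a READWISE_TOKEN environment variable, and made-up URL and tag names, so check Readwise’s current API documentation before relying on the exact field names.

```python
import os

import requests

# Minimal sketch, assuming Readwise's public Reader "save" endpoint and an
# access token stored in the READWISE_TOKEN environment variable.
READWISE_TOKEN = os.environ["READWISE_TOKEN"]


def save_to_reader(url, tags, title=None):
    """Send a case or court order into Readwise Reader with matter-specific tags."""
    payload = {
        "url": url,    # the opinion, order, or article you want in your library
        "tags": tags,  # e.g. one tag per matter, so every document stays grouped
    }
    if title:
        payload["title"] = title  # optional override if the page title is messy

    response = requests.post(
        "https://readwise.io/api/v3/save/",
        headers={"Authorization": f"Token {READWISE_TOKEN}"},
        json=payload,
        timeout=30,
    )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    # Hypothetical URL and tag names, for illustration only.
    saved = save_to_reader(
        url="https://example.com/orders/post-trial-order.pdf",
        tags=["lead-poisoning-matter", "post-trial"],
    )
    print(saved)
```

The point isn’t automation for its own sake; it’s that everything lands in one library, under one set of labels, where you’ll actually read it.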

For lawyers, Readwise might be one of the most important tools out there that they don’t know yet. It keeps you tied to the source text, which means you’re not relying on memory—or worse, hallucinated summaries. You’re actually reading and working with the real material. In a time when even legal filings are getting tripped up by fake case law, that kind of grounded research matters more than ever.

 At the core of this process is the highlight feature. That’s how you interact with the document. It’s the moment where passive reading becomes active thinking. You’re not just skimming anymore. You’re locking in the vital parts, marking what matters, and shaping the raw material into something usable.

Highlighting is what makes the research relevant to you and your case. It’s the act of saying, “this part right here is worth remembering.” Once you’ve done that, everything else (tagging, organizing, building arguments) flows naturally. It’s how you start turning information into insight. And having direct access to the source material is what lets you catch and correct errors before they become part of a legal filing.

Readwise Reader has a newer feature that lets you chat with the actual document. You’re not searching a database—you’re just talking to one case at a time. It’s like having a back-and-forth with the text itself. You can ask it questions or pull out key points, all while staying locked into the source. It keeps things focused and cuts out the extra noise.

Once the chat ends, you can highlight the AI’s answer, just like you would with the actual text. That highlight goes right into your notes, side by side with the parts you’ve marked yourself from the case. It keeps everything in one place: what the document says, what you pulled out, and what the AI helped clarify. All of it ends up in the same workflow.

A different feature is Readwise Chat, which is separate from chatting with a single document. It looks like ChatGPT, but it only works with the highlights and notes you’ve already made. That’s a significant shift from asking an AI to give you the law before you’ve done any digging.

Here, you do the research first. You decide what matters. The AI just helps organize and pull from what you’ve already thought through. You’re not handing over the job; you’re keeping control. You’re using AI to support your work, not letting it lead. You’re reading the real case law and secondary sources as you type, not trusting a black box to summarize them for you. And the actual documents show up in your chat from Readwise Reader.

Another feature is the ability to export all your highlights into a Google Doc. From there, you can use that doc as the base for something like Google NotebookLM, which can turn your highlights into a podcast. That gives you a new way to learn: listening back to the material in your own words. Again, your notes, and nothing else, form the basis for study.
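If you’d rather pull your highlights programmatically than use the built-in export, Readwise exposes them through its API as well. The sketch below is just that, a sketch: it assumes the v2 export endpoint and the response fields described in Readwise’s API docs (results, nextPageCursor, highlights, text, note), plus the same READWISE_TOKEN environment variable, so verify against the current documentation. It writes everything to a Markdown file you could paste into a Google Doc or upload to NotebookLM as a source.

```python
import os

import requests

# Minimal sketch, assuming Readwise's v2 highlight export endpoint and the
# field names in its API docs (results, nextPageCursor, highlights, text, note).
READWISE_TOKEN = os.environ["READWISE_TOKEN"]


def export_highlights():
    """Page through every document and highlight in your Readwise library."""
    books, cursor = [], None
    while True:
        response = requests.get(
            "https://readwise.io/api/v2/export/",
            headers={"Authorization": f"Token {READWISE_TOKEN}"},
            params={"pageCursor": cursor} if cursor else {},
            timeout=30,
        )
        response.raise_for_status()
        payload = response.json()
        books.extend(payload["results"])
        cursor = payload.get("nextPageCursor")
        if not cursor:
            return books


def write_markdown(books, path="highlights.md"):
    """Dump highlights to a Markdown file you can paste into a Google Doc
    or upload to NotebookLM as a source."""
    with open(path, "w", encoding="utf-8") as f:
        for book in books:
            f.write(f"# {book.get('title', 'Untitled')}\n\n")
            for h in book.get("highlights", []):
                f.write(f"- {h['text']}\n")
                if h.get("note"):
                    f.write(f"  - Note: {h['note']}\n")
            f.write("\n")


if __name__ == "__main__":
    write_markdown(export_highlights())
```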

Even Readwise Reader's AI features aren't perfect, and that's exactly why the tool's approach matters. When I used Readwise's AI to summarize the Chicago case, it made errors. It incorrectly stated that the allegations of racism came after the fake citations were discovered, when they actually appeared in the same brief. It also flipped a key exchange, describing the lawyer as asking the judge, "Is there anything you'd like to tell me?" when it was actually the judge who posed that question to the lawyer.

But here's the crucial difference: because I was working directly with the source documents in Readwise Reader, I caught these mistakes immediately. I had the actual court order right in front of me. The AI's errors were obvious because I wasn't relying on it as my sole source—I was using it to help process material I'd already read and highlighted myself.

This illustrates exactly why Readwise Reader's approach works. The AI supports your research without replacing your judgment. When it makes mistakes—and it will—you're equipped to catch them because you're still engaged with the primary sources. You're not trusting a black box; you're using AI as a thinking partner while keeping the real documents at your fingertips.

In a legal tech world where most AI tools are marketed as taking over the work for you, Readwise does the opposite. It empowers lawyers to stay engaged in their research. It supports you, but it doesn’t try to replace you. It keeps you thinking, reading, and staying sharp. That’s the part that matters. 

And let’s be honest: doing the work yourself, staying connected to the source, and using tools that back you up (instead of stepping in for you) sure beats getting benchslapped and having to rebuild your standing in the legal community from square one, as in the case of Goldberg Segalla.

So in a way, Readwise Reader helps lawyers stick to my mentor’s three rules:

(1) Mind your manners, (2) Tell the truth, and (3) Know the law.

It does this by minimizing the risk of citing fake cases because:

  • If you’re using Readwise Reader the right way, reading the cases and highlighting them yourself, then citing something fictitious becomes pretty unlikely.

  • You’re working from real documents you’ve chosen, not outsourcing your judgment to an AI that might guess the law for you. That alone helps keep things honest.

  • The chat feature stays within your research. It doesn’t pull from a giant database or the internet—it just uses what you’ve imported. So if there’s no hallucinated case in your notes, it’s not going to create one.

Readwise did not offer me anything in exchange for this review.
