A woman sues her insurance company for terminating her disability benefits.
-
“OPENAI, through its AI chatbot program ChatGPT, provides legal advice, legal analysis, legal research and can draft legal documents and papers for submission to a Court. ChatGPT provides these legal services to any user who requests them. ChatGPT is not licensed to practice law in Illinois.”
They're asking for a declaratory judgment that OpenAI has been practicing law without a license, a permanent injunction barring them from providing the disgruntled woman with any more legal assistance, $300,000 to reimburse their costs in responding to the bogus motions, and $10 million in punitive damages.
@mjd hahahahahahaha

(not mocking laughter, it's a shame that this happened to the woman, it's just that this fight between two big corporations promises to be both legally enlightening and very entertaining)
-
@diazona I don't think it is a shame that this happened to this woman. It appears that she is a very ordinary type of vexatious litigant, except that she is also being aided by ChatGPT.
-
A woman sues her insurance company for terminating her disability benefits. They reach a settlement and agree that the suit will be dismissed with prejudice.
She decides she doesn't like the settlement and asks her lawyers to reopen the case. They say they can't: it was dismissed, and in the settlement she agreed not to reopen the case.
She asks ChatGPT if her attorneys are lying to her. It says they are. She fires them and continues pro se, advised by ChatGPT.
ChatGPT generates legal arguments for reopening the case, which she files, along with 21 more motions, a subpoena, and eight other notices and statements.
The court denies her motion to reopen the case.
Advised by ChatGPT, she files a new suit against the insurance company and submits 44 more motions, memoranda, etc., which include citations to nonexistent cases.
Now the insurance company has sued OpenAI for tortious interference with their settlement contract.
🍿
https://storage.courtlistener.com/recap/gov.uscourts.ilnd.496515/gov.uscourts.ilnd.496515.1.0_1.pdf
@mjd has she sought a second opinion (e.g. unfrozen Grokman lawyer)?
-
@mjd Yeah fair point... I was thinking more along the lines of, whatever happened to her (possibly long ago) to put her in the frame of mind to pursue this case regardless of the legal merits, and to believe ChatGPT over actual lawyers, is a shame. But I'm definitely not trying to absolve her of responsibility for her actions and their consequences.
-
@mjd Levine (12/03/26):
Here’s a Perkins Coie memo from last month:
> On February 17, 2026, the Southern District of New York, in United States v. Bradley Heppner, held that a criminal defendant's written exchanges with a “publicly available AI platform” are not protected by attorney-client privilege or work product doctrine and, thus, could be inspected by the government.
-
@rowat_c Wild times we live in.
-
I agree with you 100% on the hahahahahahaha 🍿 thing though
-
@mjd the punitive damages seem a bit on the low end.
-
@tessarakt 10% of current company valuation might make a dent... maybe.
-
@mjd on LinkedIn I saw a breathless post about how the professionals of the future are using autonomous agents to do all this magic....
Then I come back here for a reality check
-
@krupo It's an interesting time. Many of the successes are overstated. So are many of the failures. Nobody knows how it will shake out in the end.
-
@mjd The only use case for Generative AI is fraud.
-
@wcbdata That is demonstrably false.
-
@mjd Try me. There isn't a use case for it that isn't, at its core, fraud.
-
@mjd “41. On October 29, 2025, OPENAI amended the terms and usage policies of ChatGPT to prohibit users from using ChatGPT to provide tailored legal advice. Prior to the October 29, 2025 emendation, ChatGPT’s terms of use did not prohibit users from using ChatGPT to draft legal papers, conduct legal research, provide legal analysis or give legal advice.”
-
@mjd Couldn't think of even one reasonable candidate in 15 minutes, even with your precious AI right there in front of you? I rest my case.
-
@mjd TBH I do not think OpenAI should be responsible. They're just providing a fancy random text generator to the public. And it's outright impossible to teach a random text generator to _not_ output a specific kind of text, as whatever you do, there is a way around it.
The woman should pay all costs, as per the usual "vexatious filings" or "frivolous lawsuits" standards.
Plus, the law in her state against practicing law without a license starts with "No person shall...". ChatGPT isn't a person.
-
@mjd My guess is her thought process was that her insurance company lowballed her on the settlement offer, and she was forced to accept it anyway because she couldn't afford to pursue it aggressively in court, whereas the insurance company has all the resources in the world and can just sit there and bankrupt anybody trying to seek justice in that manner.
-
@divVerent @mjd
OpenAI are certainly marketing ChatGPT as being useful, whatever the fine print says, so they do bear some responsibility there