@mjd The only use case for Generative AI is fraud.
@wcbdata That is demonstrably false.
-
@mjd Try me. There isn't a use case for it that isn't, at its core, fraud.
-
A woman sues her insurance company for terminating her disability benefits. They reach a settlement and agree that the suit will be dismissed with prejudice.
She decides she doesn't like the settlement and asks her lawyers to reopen the case. They say they can't: it was dismissed, and in the settlement she agreed not to reopen the case.
She asks ChatGPT if her attorneys are lying to her. It says they are. She fires them and continues pro se, advised by ChatGPT.
ChatGPT generates legal arguments for reopening the case, which she files, along with 21 more motions, a subpoena, and eight other notices and statements.
The court denies her motion to reopen the case.
Advised by ChatGPT, she files a new suit against the insurance company and submits 44 more motions, memoranda, etc., which include citations to nonexistent cases.
Now the insurance company has sued OpenAI for tortious interference with their settlement contract.
🍿
https://storage.courtlistener.com/recap/gov.uscourts.ilnd.496515/gov.uscourts.ilnd.496515.1.0_1.pdf
@mjd “41. On October 29, 2025, OPENAI amended the terms and usage policies of ChatGPT to prohibit users from using ChatGPT to provide tailored legal advice. Prior to the October 29, 2025 emendation, ChatGPT’s terms of use did not prohibit users from using ChatGPT to draft legal papers, conduct legal research, provide legal analysis or give legal advice.”
-
@mjd Couldn't think of even one reasonable candidate in 15 minutes, even with your precious AI right there in front of you? I rest my case.
-
@mjd TBH I do not think OpenAI should be responsible. They're just providing a fancy random text generator to the public. And it's outright impossible to teach a random text generator to _not_ output a specific kind of text, as whatever you do, there is a way around it.
The woman should pay all costs, as per the usual "vexatious filings" or "frivolous lawsuits" standards.
Plus, the law in her state against practicing law without a license starts with "No person shall...". ChatGPT isn't a person.
-
@mjd Levine (12/03/26):
Here’s a Perkins Coie memo from last month:
> On February 17, 2026, the Southern District of New York, in United States v. Bradley Heppner, held that a criminal defendant's written exchanges with a “publicly available AI platform” are not protected by attorney-client privilege or work product doctrine and, thus, could be inspected by the government.
-
@mjd My guess is her thought process was that her insurance company lowballed her on the settlement offer, and she was forced to accept it anyway because she couldn't afford to pursue it aggressively in court, whereas the insurance company has all the resources in the world and can just sit there and bankrupt anybody trying to seek justice that way.
-
@divVerent @mjd
OpenAI are certainly marketing ChatGPT as being useful, whatever the fine print says, so they do bear some responsibility there.
-
@mjd So, she turned the whole thing around and used an LLM to generate a large quantity of motions and filings which the insurance company then had to analyze and rebut, costing them lots of time and money.
-
@Infoseepage Yes, and which the judge and the court clerks also had to deal with, consuming public resources that should have been allocated to more deserving citizens.
-
@Infoseepage You made that up out of your head to come to the conclusion you selected beforehand.
I don't know what actually happened, and neither do you.
-
@mjd Wouldn't that be "tortuous inference"?
-
@GyrosGeier @mjd torturous interference
-
@krupo It's an interesting time. Many of the successes are overstated. So are many of the failures. Nobody knows how it will shake out in the end.
-
@marshray I wonder if that will help get them off the hook. If not, it shows that they were aware that what they were doing could be a problem.
-
@falcennial @mjd I mean, because running an AI model is called "inference."
-
@falcennial @GyrosGeier They're all closely related: they all derive from the Latin verb meaning “to twist.”
-
@divVerent @mjd ChatGPT is not a person, which is why ChatGPT is not being sued. OpenAI sells a tool that gave her legal advice, and they certainly didn't say anywhere that it's actually just a "fancy random text generator".
-
@divVerent Except that there are laws against providing bogus legal advice to people, to prevent exactly this sort of situation.
And, as you pointed out, it was OpenAI, not ChatGPT, providing the advice.