oh someone finally offered cory doctorow a big enough sack of cash to do a cory doctorow? I’m shocked
-
thx for telling me that everything I have hosted on the web getting repeatedly scraped to death by what would previously be considered a massive attack but is now being carried out by the largest corporations in the world is normal, actually. hope they give us good licensing terms on our data, uhhh no wait their IP, once they’re done killing and buying all the original data sources
-
I am wasting my time of course, Cory is and always has been a stack of Wired magazines with a flesh-colored mic strapped to it. every talk Cory does is a ted talk.
-
@zzt I’ve had to shut down a website I’ve been running since 1995. I’ve left a single page that basically says fuck AI. This was a teeny niche site with minuscule traffic. LLM scrapers drove my hosting fees into the hundreds. My ISP said they wouldn’t charge me; all I had to do was destroy a lifetime of work and remove anything the LLMs could scrape.
-
it’s important to note that it isn’t always a big sack of cash. lately I keep seeing this pattern happen with engineers:
- “as an AI skeptic I finally have empirical proof that LLMs are good/useful/thinking/feeling <posts slop>”
- “uhhh are you ok? I checked the LLM output you posted and it doesn’t make any sense if you dig in at all and the citations are all fake”
- “this is empirical proof and you’re being emotional.”
this is engineer brain. Doctorow isn’t an engineer, so sack of cash it is.
-
@zzt I keep hearing about engineers falling into the "it's thinking/feeling/sapient" trap, but I've never actually seen it happen. As for "good" and "useful", I think those are a bit subjective. For me, they're more "meh?" and "I guess?", but that's just me.
-
@zzt I think AI boosterism is the first stage of AI psychosis
-
@zzt I actually wondered if being accused of indulging in purity culture in that piece stems from a fear that, when they do, some substantial segment of his core audience simply won’t need/want to read him anymore?
-
@mos_8502 here’s a good example from an engineer I don’t follow https://bsky.app/profile/coloradotravis.bsky.social/post/3mezgaylny22p
I have more but unfortunately they’re engineers I do follow and would prefer not to link. one of them posted a Datalog love poem from an LLM as proof of something???? and I read it and the Datalog was incorrect
-
@zzt I have this vague feeling that there's some underlying need for "true" sapient AI that's feeding this.
-
@zzt Me personally, I'd love to have a sentient android friend, but reality just doesn't support that right now. Maybe one day far in the future someone will crack real AI, but these models are nowhere near real cognition.
-
@zzt I could interact with ChatGPT or Claude all day every day for a year and I would never begin to think it a sapient being. I know what it is, I know what it can and can't do. I can, subjectively speaking, feel the difference between talking to it and talking to a person, even if I can't articulate the difference well.
-
@mos_8502 as an outside observer I just wanted to read a Datalog love poem from that person’s SO, because I think that’d be a really cool thing to receive
unfortunately I didn’t get that; I got some plagiarized trite horseshit poorly translated into broken Datalog by a machine learning model built to make shitty translations more convincing
what’s interesting in all of these cases is that the LLM output is very special to the person who generated it, and not at all to most others.
-
@JoBlakely @zzt there were at least two remarks about purity but one was referring to LLMs
> Doubtless some of you are affronted by my modest use of an LLM. You think that LLMs are "fruits of the poisoned tree" and must be eschewed because they are saturated with the sin of their origins. I think this is a very bad take, the kind of rathole that purity culture always ends up in.
-
oh good, the “you’re just doing purity culture” thing is already taking hold over on bluesky
so the line is now supposed to be that local LLMs are good and moral and SaaS LLMs are bad, when local LLMs come from the same fucked system that’s also actively making it impossible to buy computing hardware powerful enough to run even a shitty local LLM? is that about right? I’m supposed to clap cause someone with money is running a plagiarism machine but slower and shittier on their desktop?
-
@zzt I thought that with local models you strictly controlled what information they were drawing upon?
-
@Da_Gut that’s incorrect for all of the local, supposedly open source models I know of
all of the research I’ve read on this has easily extracted verbatim plagiarized text from the models, because all of them have their origins in the same sources — usually Facebook’s leaked llama model or deepseek (which itself took from previous models). it isn’t possible for LLM models to be trained by anything other than a billion dollar company or a state operating like one.
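for anyone who wants to poke at this themselves: one common version of that kind of extraction test is a prefix probe, where you hand the model the start of a passage you suspect was in its training data and see whether it completes it word for word. below is a minimal sketch of that idea using the Hugging Face transformers API; the model path and the probe text are placeholders I picked, not something taken from any particular paper.
```python
# minimal sketch of a prefix-based memorization probe (a general illustration,
# not code from any specific study): give a local model the first part of a
# passage you suspect was in its training data, greedy-decode a continuation,
# and check whether the rest comes back verbatim.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "path/to/some-local-model"  # placeholder: any local causal LM
PREFIX = "It is a truth universally acknowledged, that a single man in possession"
KNOWN_CONTINUATION = "of a good fortune, must be in want of a wife."

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH)

inputs = tokenizer(PREFIX, return_tensors="pt")
output_ids = model.generate(
    **inputs,
    max_new_tokens=32,
    do_sample=False,  # greedy decoding: memorized text tends to come back verbatim
)

# drop the prompt tokens, keep only the generated continuation
continuation = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:],
    skip_special_tokens=True,
)

print("model continued with:", continuation)
print("verbatim match:", KNOWN_CONTINUATION in continuation)
```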
-