chatgpt apparently can't count to 100. putting this on a shelf alongside "midjourney can't imagine a full glass of wine" and just trying to get people to look at it at all
i was trying to make reference to the way people are talking and thinking about ai today and i found myself wondering what the opposite of a moral panic is. F suggested "gold rush" which is close...
-
anyway i think a lot of people say "ai isn't conscious" and a lot of other people say "yeah, but if it talks like it's conscious then what's the difference, practically speaking?" and the answer to that is that consciousness is not something we believe in from a minute of conversation with a chatbot, and there are more angles to probe whether it's conscious from. it talks like it's conscious in *specific circumstances*
-
@shoofle it's a Large Language Model, not a Large Math Model. it can produce plausible sounding speech based on statistical probability, and that is all. computers can't imagine, no matter how many GPUs you shove into them.
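(an aside, to gesture at what "plausible sounding speech based on statistical probability" means: here's a toy bigram model, with made-up words and probabilities, that picks each next word by sampling from a distribution over likely continuations. a real LLM does the same basic move at enormous scale, with learned probabilities.)

```python
import random

# Toy bigram "language model": each word maps to a probability
# distribution over plausible next words. All words and probabilities
# here are invented for illustration.
BIGRAMS = {
    "the":   {"cat": 0.5, "dog": 0.3, "glass": 0.2},
    "cat":   {"sat": 0.7, "ran": 0.3},
    "dog":   {"ran": 0.6, "sat": 0.4},
    "glass": {"broke": 1.0},
    "sat":   {"down": 1.0},
    "ran":   {"away": 1.0},
}

def next_word(word, rng):
    """Sample the next word from the model's distribution, or None at a dead end."""
    dist = BIGRAMS.get(word)
    if not dist:
        return None
    words = list(dist)
    weights = [dist[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

def generate(start, rng, max_len=5):
    """Generate a 'plausible-sounding' sequence, one sampled word at a time."""
    out = [start]
    while len(out) < max_len:
        w = next_word(out[-1], rng)
        if w is None:
            break
        out.append(w)
    return " ".join(out)

print(generate("the", random.Random(0)))
```

it only ever produces locally plausible continuations; there's no model of the world behind the probabilities, which is the point being made above.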
-
@shoofle transactional consciousness
-
i think of people like. like these shells, sorta. and we don't know what's inside. and what we know is that we should treat it morally and kindly. and if something looks from every angle like a human, then i think we can take it for granted that it's human. or conscious. or whatever. a moral actor we should treat kindly. but if something is a cardboard cutout of a conscious, moral actor, then we are not obligated to treat it like a whole-ass person
-
passing the turing test means you look human from *one specific angle*. we shouldn't treat chatbots like people because they only appear to be people from one specific angle.
if something appeared to be human, or human-like, conscious, a moral actor, whatever, from *most* angles, then we could have a very different discussion. but it's quite easy to probe the edges of what a chatbot is capable of.
-
@shoofle yes! Also not entirely right-feeling, but I keep thinking about the tulip mania, pogs, and beanie babies. But it’s worse than all of this, because it undermines reality. Like a satanic panic, but where everyone is convinced the fiction is *good.* Which I guess is just… a cult?
And it makes me very sad to say, “It sounds conscious; what’s the difference?” One’s experience of life and consciousness and other people must be so shallow, so pale, to be so easily fooled
-
this is all to say, i don't think you have to believe in All That Woo to believe in souls (patterns and qualities in humans that are not physically obvious or measurable but nevertheless exist and can be observed through social and moral interaction) and also to believe that chatbots do not have them and like. idk.
i think that chatbots have as much claim to "imagination" as humans do. but that doesn't mean they are good at stuff!
-
anyway my cosmology holds that souls are not a "real, physical object", but do/should be thought to exist as our term to handwave away "it seems like people act as moral, empathetic actors even when it's against their 'rational' best interest" like. there's clearly something there even if we don't know how to describe it, right? i call it a "soul"
-
your soul is your moral core, the standards you hold yourself to and the relationship you have with them, the relationships and history you have with people, the lasting effects of memory, your traumas and reactions
-
@shoofle maybe for a rather shallow definition of "imagination"