@hipsterelectron @clayote @shoofle @mtrc my attempt to learn about expert systems tonight failed.
-
@aeva @clayote @shoofle @mtrc i consider the chinese room experiment to be quite literally racist in that it assumes a human being does not learn chinese in the act of translating it by rote. i think the language was chosen precisely to obscure this basic fact. it makes the circular assumption that a human can somehow ever be made to act like a turing machine. possibly the most ridiculous and unserious thing which is taught in "AI" courses.
@hipsterelectron @clayote @shoofle @mtrc "this john guy seems racist" was the other main conclusion I came away with, though for my own happiness I decided to assume the unfortunate framing was a clumsy attempt at a setup for the second part, wherein he is circumstantially illiterate and a structured set of rules, processed without requiring his understanding, does not empower him to understand the text; but the thought experiment doesn't improve anything and makes him seem sinophobic
-
@hipsterelectron @clayote @shoofle @mtrc also merriam-webster dot com doesn't have a definition for "sinophobia" which I found out just now because I wanted to double check the spelling since I'm talking to smart people, and merriam-webster dot com not having a definition for "sinophobia" is absolutely sinophobic and honestly a lot more immediately problematic
-
@aeva @clayote @shoofle @mtrc related possibility might be a similar construction of time-based "feeding" input, but with some sort of not-too-complex transformation that makes it possible to produce predictable output if you can guess the relationship. it would be cool to try to "feel" the response of the machine and to grasp the way it "understands" my input by intuition
@hipsterelectron @clayote @shoofle @mtrc love this idea
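A minimal sketch of that idea (the specific transformation here is invented, just something simple enough to reverse-engineer by feel): a machine whose reply depends both on the input symbol and on when it is fed in.

```python
import itertools

def make_machine():
    """Toy time-based machine: each letter is shifted by the current tick
    count, mod 26. The rule is hypothetical; the point is that by feeding
    inputs over time you could plausibly come to "feel" the relationship
    from the outputs without ever writing the rule down."""
    ticks = itertools.count()  # advances once per input fed in

    def feed(ch: str) -> str:
        t = next(ticks)
        return chr((ord(ch) - ord("a") + t) % 26 + ord("a"))

    return feed
```

Feeding it "a" repeatedly yields "a", "b", "c", ... — the kind of drifting pattern a person could learn to predict by intuition alone.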
-
@hipsterelectron @aeva @clayote @shoofle Sorry I'm now randomly inserting myself into the thread at various points - "the Chinese Room" is definitely not what you would call this if you were pitching it in 2025 (I hope) but the core idea can be salvaged I think by using a different metaphor - like a human-powered DNS server that just receives and sends codes they don't understand.
-
@mtrc @hipsterelectron @clayote @shoofle another problem with the thought experiment is i've learned so much by "playing computer", and that's also how a lot of grade school math is taught. if such a rules-based conversational algorithm existed it's kinda bold of what's his name to assume one couldn't accidentally learn Chinese by stepping through it by hand for long enough
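The "playing computer" point can be made concrete with a toy sketch. The rule table below is entirely made up; the operator executing it only matches opaque symbols, yet stepping through a table like this by hand is exactly how rote pairings start to stick.

```python
# A minimal "room" sketch with an invented rule table: producing a reply
# requires no comprehension of what the symbols mean, only pattern lookup.
# But a human stepping through this by hand would memorize the pairings.
RULES = {
    "X7 Q2": "Q2 M9",  # made-up symbol patterns and replies
    "M9 M9": "X7",
}

def room_step(symbols: str) -> str:
    """Produce a reply by pure pattern lookup, no understanding required."""
    return RULES.get(symbols, "Q2")  # fixed fallback reply for unknown input
```

A real conversational rulebook would be vastly larger, but the operator's job, and what they'd absorb from repeating it, is the same in kind.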
-
@mtrc @hipsterelectron @clayote @shoofle LLMs are good at being black boxes because of the large amount of memory they use, not because they're convolving. i know this because i've gained an intuitive understanding of how simple convolutions (blurs, image recognition kernels, and the like) work by implementing convolution reverb
-
@mtrc @hipsterelectron @clayote @shoofle the blur one before that, actually, because i learned it on accident while on a tight schedule converting a convolution bloom shader from accumulating overlapping draws with hw raster blending ops to a compute shader that has no such thing. i only had time to figure out something mathematically equivalent, and i had never heard of convolution before, but i understood how it worked by the end of it on accident
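That blending-to-compute conversion is, in signal processing terms, switching a convolution from its scatter form to its gather form. A minimal sketch in Python (1-D for brevity; the shader case is 2-D, but the equivalence is the same):

```python
def convolve_scatter(signal, kernel):
    """Scatter form: like accumulating overlapping draws with blending,
    each input sample "splats" a weighted copy of the kernel into the output."""
    out = [0.0] * (len(signal) + len(kernel) - 1)
    for i, s in enumerate(signal):
        for j, w in enumerate(kernel):
            out[i + j] += s * w
    return out

def convolve_gather(signal, kernel):
    """Gather form: like a compute shader with no blend ops, each output
    sample reads and sums the inputs that touch it. Mathematically equivalent."""
    out = []
    for i in range(len(signal) + len(kernel) - 1):
        acc = 0.0
        for j, w in enumerate(kernel):
            if 0 <= i - j < len(signal):
                acc += signal[i - j] * w
        out.append(acc)
    return out
```

The same weighted-sum operation gives you blurs, bloom, and convolution reverb; only the kernel (box, gaussian, impulse response) changes.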
-
@mtrc @hipsterelectron @clayote @shoofle likewise I've gained an intuitive understanding of what different basic math operators do beyond their nominal purposes by working with shaders daily for 10+ years, to the extent that I can usually determine the high-level purpose of a shader from reading its context-free disassembly. that was not an intentionally developed skill
-
@aeva There's two ways of looking at it - you definitely would learn something by doing this, but you might not learn the entire language/code/process you were mimicking. You also might not be sure if you had learned it correctly.
-
@mtrc ok how about this: Suppose through some exciting new developments we created a way of perfectly simulating John Searle from an old MRI or something which includes handily accomplishing all facets of the human experience including but not limited to the ability to learn, understand information and context, be curious, experience boredom, live, laugh, love, and so on. (1/)
-
@mtrc and let's also suppose that through a series of improbable events we have been given the improbable task of creating an algorithm for John Searle to execute that simulates being a fluent speaker of a given language, but it is absolutely paramount that we do not accidentally teach John Searle how to read or speak the language under any circumstances, such that he can converse in the language but may never understand it. (2/)
-
@mtrc one might propose we pick a dignified language with a complex history and lots of sophisticated characters and so on, but I believe we wouldn't know for sure, because we'd be relying on underestimating John Searle's cognitive abilities, and as my grandma always said, never bet against John Searle (my grandma never said this, but suppose she did for the sake of rigor).
So I propose we select Toki Pona for this thought experiment instead, as it's known for being fairly simple (3/)
-
@mtrc thus, I ask: is it possible to construct an algorithm that gives a person with normal human cognitive abilities and tendencies (possibly with access to excessive computer processing power) the ability to converse in Toki Pona, while guaranteeing that they never accidentally learn any of it at all, not even how to say a basic greeting without the aid of the algorithm? (4/4)