@hipsterelectron @clayote @shoofle @mtrc my attempt to learn about expert systems tonight failed. the wikipedia article did not hold my attention and i ended up on the article about the Chinese Room thought experiment, which has mostly just convinced me that John Searle never read XKCD 505
-
@hipsterelectron @clayote @shoofle @mtrc I find beauty in the idea that life can meaningfully exist in the liminality of information and a well chosen set of processing instructions (even if John Searle is functioning as its unwitting animating force) and be theoretically bound in its sophistication and complexity only by the available time and memory. I decided to stop there, as the idea is too precious to let the used car salesmen running the tech industry defile it.
-
@hipsterelectron @clayote @shoofle @mtrc fuck i'm having art thoughts now. what if I made a patch in mollytime where you have to "feed" it periodically, and if you do, it plays beautiful music, and if you don't, the music wilts into something horrific as it self-destructs. can I make a simple synthesizer patch that is capable of suffering for our entertainment? it would be art AND an artist!
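a minimal sketch of how that feeding loop could work, in plain Python rather than anything resembling actual mollytime API (every name here is made up for illustration): a hunger value climbs between feedings, and the patch maps it onto its synthesis parameters until it finally gives up.

```python
import time

class FeedablePatch:
    """Toy control logic for a patch that 'starves' when neglected.

    This is not real mollytime API, just the hunger bookkeeping that a
    patch like the one described could hang its sound parameters off of.
    """

    def __init__(self, starve_seconds=60.0):
        self.starve_seconds = starve_seconds
        self.last_fed = time.monotonic()

    def feed(self):
        # "feeding" resets the clock and restores the beautiful music
        self.last_fed = time.monotonic()

    def hunger(self):
        # 0.0 right after feeding, 1.0 once fully starved
        return (time.monotonic() - self.last_fed) / self.starve_seconds

    def parameters(self):
        h = min(self.hunger(), 1.5)
        return {
            "detune_cents": 50.0 * h,          # drifts out of tune as it starves
            "noise_mix": min(h, 1.0),          # tone gives way to noise
            "tempo_scale": max(1.0 - h, 0.1),  # the music wilts and slows
            "self_destruct": h >= 1.5,         # past this point it dies
        }
```

the interesting design choice would be the shape of that mapping: a slow slide into horror versus a sudden cliff edge.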
-
@aeva @clayote @shoofle @mtrc i consider the chinese room experiment to be quite literally racist in that it assumes a human being does not learn chinese in the act of translating it by rote. i think the language was chosen precisely to obscure this basic fact. it makes the circular assumption that a human can somehow ever be made to act like a turing machine. possibly the most ridiculous and unserious thing which is taught in "AI" courses.
-
@aeva @clayote @shoofle @mtrc a related possibility might be a similar construction of time-based "feeding" input, but with some sort of not-too-complex transformation that makes it possible to produce predictable output if you can guess the relationship. it would be cool to try to "feel" the response of the machine and to grasp the way it "understands" my input by intuition
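purely as a guess at what that "not-too-complex transformation" might look like, here's a tiny Python sketch: each fed phrase gets pushed through a small fixed rule (reverse the phrase and shift each note one step up a pentatonic scale), simple enough that a player could learn the relationship just by listening.

```python
# Toy "guessable" transformation: the machine answers each fed phrase with
# the same notes reversed and shifted one step up a pentatonic scale.
# The specific rule is arbitrary; the point is that it's learnable by ear.
PENTATONIC = [0, 2, 4, 7, 9]  # scale degrees in semitones above the root

def respond(fed_degrees, shift=1):
    """Map each input degree to the next degree up the scale, in reverse order."""
    return [PENTATONIC[(PENTATONIC.index(d) + shift) % len(PENTATONIC)]
            for d in reversed(fed_degrees)]

print(respond([0, 4, 7]))  # [9, 7, 2], predictable once you've guessed the rule
```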
-
@hipsterelectron @clayote @shoofle @mtrc "this john guy seems racist" was the other main conclusion I came away with, though for my own happiness I decided to assume the unfortunate framing was a clumsy attempt to at a setup for the second part to wherein he is circumstantially illiterate and a structured set of rules processed without requiring his understanding does not empower him to understand the text, but the thought experiment doesn't improve anything and makes him seem sinophobic
-
@hipsterelectron @clayote @shoofle @mtrc "this john guy seems racist" was the other main conclusion I came away with, though for my own happiness I decided to assume the unfortunate framing was a clumsy attempt to at a setup for the second part to wherein he is circumstantially illiterate and a structured set of rules processed without requiring his understanding does not empower him to understand the text, but the thought experiment doesn't improve anything and makes him seem sinophobic
@hipsterelectron @clayote @shoofle @mtrc also merriam-webster dot com doesn't have a definition for "sinophobia", which I found out just now because I wanted to double-check the spelling since I'm talking to smart people, and merriam-webster dot com not having a definition for "sinophobia" is absolutely sinophobic and honestly a lot more immediately problematic
-
@hipsterelectron @clayote @shoofle @mtrc love this idea
-
@hipsterelectron @aeva @clayote @shoofle Sorry I'm now randomly inserting myself into the thread at various points - "the Chinese Room" is definitely not what you would call this if you were pitching it in 2025 (I hope) but the core idea can be salvaged I think by using a different metaphor - like a human-powered DNS server that just receives and sends codes they don't understand.
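the metaphor more or less reduces to a lookup table; a toy version in Python, with made-up entries, assuming the "operator" only ever pattern-matches incoming symbols against a rule book:

```python
# Toy "human-powered DNS server": the operator matches each incoming name
# against a rule book and hands back the paired code, with no understanding
# of what either string means. Entries are made up for illustration.
RULE_BOOK = {
    "alpha.example": "203.0.113.7",
    "beta.example": "203.0.113.42",
}

def answer(query):
    # the operator recognizes the shape of the symbols, not their meaning
    return RULE_BOOK.get(query, "NXDOMAIN")

print(answer("alpha.example"))   # 203.0.113.7
print(answer("gamma.example"))   # NXDOMAIN
```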
-
@mtrc @hipsterelectron @clayote @shoofle another problem with the thought experiment is i've learned so much by "playing computer", and that's also how a lot of grade school math is taught. if such a rules-based conversational algorithm existed it's kinda bold of what's his name to assume one couldn't accidentally learn Chinese by stepping through it by hand for long enough
-
@mtrc @hipsterelectron @clayote @shoofle LLMs are good at being black boxes because of the sheer amount of memory involved, not because they're convolving. i know this because i've gained an intuitive understanding of how simple convolutions work (blurs, image recognition kernels, and so on) by implementing convolution reverb
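for anyone following along at home, the operation itself really is that simple; a direct 1D version in Python (a blur uses a little smoothing kernel, convolution reverb uses a recorded impulse response as the kernel):

```python
def convolve1d(signal, kernel):
    """Direct 1D convolution: each output sample is a weighted sum of
    nearby input samples, with the kernel supplying the weights."""
    out = [0.0] * (len(signal) + len(kernel) - 1)
    for i, s in enumerate(signal):
        for j, k in enumerate(kernel):
            out[i + j] += s * k
    return out

# a lone impulse followed by a louder one, smeared by a tiny two-tap kernel
print(convolve1d([1.0, 0.0, 0.0, 2.0], [0.5, 0.25]))
# [0.5, 0.25, 0.0, 1.0, 0.5]
```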
-
@mtrc @hipsterelectron @clayote @shoofle the blur one came before that, actually, because i learned it by accident, under schedule pressure, while converting a convolution bloom shader from accumulating overlapping draws with hw raster blending ops to a compute shader, which has no such thing. i only had time to figure out something mathematically equivalent, and i had never heard of convolution before, but i understood how it worked by the end of it on accident
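a rough sketch of the rewrite being described, in scalar Python rather than a real shader: the raster version scatters each source pixel's contribution outward and lets additive blending accumulate it, while the compute version gathers, with each destination pixel summing its own neighborhood. the kernel and data here are made up, but the two formulations produce the same result.

```python
# Scatter vs gather forms of the same 1D bloom/blur (toy kernel, toy data).
weights = [0.25, 0.5, 0.25]   # stand-in for the bloom kernel
radius = len(weights) // 2

def bloom_scatter(src):
    # raster-style: each source pixel adds into its neighbors,
    # the way overlapping draws with additive blending would
    dst = [0.0] * len(src)
    for i, v in enumerate(src):
        for o, w in enumerate(weights):
            j = i + o - radius
            if 0 <= j < len(src):
                dst[j] += v * w
    return dst

def bloom_gather(src):
    # compute-style: each destination pixel reads and sums its neighborhood
    dst = [0.0] * len(src)
    for j in range(len(src)):
        for o, w in enumerate(weights):
            i = j - (o - radius)
            if 0 <= i < len(src):
                dst[j] += src[i] * w
    return dst

src = [0.0, 1.0, 0.0, 0.0, 3.0]
assert bloom_scatter(src) == bloom_gather(src)  # mathematically equivalent
```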
-
@mtrc @hipsterelectron @clayote @shoofle likewise I've gained an intuitive understanding of what different basic math operators do beyond their nominal purposes by working with shaders daily for 10+ years, to the extent that I can usually determine the high-level purpose of a shader just from reading its context-free disassembly. that was not an intentionally developed skill