@vitaut Worthy of a T-shirt quote!
Credentials: I'm the author of a nanosecond-scale latency instrumentation library for Golang.
@vitaut Can't tell if serious.
@nyrath @isaackuo @cainmark @glyph In the case of ELIZA, the script/algorithm directing how it converses with the user was simple and predictable enough that after a day or so, maybe even a few hours or less, an intelligent user would hit the AHA moment where they realize how they've been manipulated. The abstraction leaks through, cracks appear, and it shatters their willing suspension of disbelief.
With LLMs, I'd argue that because their script/algorithm and database are so much bigger, they can keep the user in the honeymoon period much longer. It might take months before the illusion is broken, if it ever is. For younger or less sophisticated users, that honeymoon period will likely last longer still, possibly forever.
Corollary: the youngest people may be in the most danger of getting warped by heavy, long-term LLM use, along with the already mentally ill or IQ-challenged. Now add in a culture with easy access to guns, and hostile nation states running influence ops online at scale. BAD.
@glyph @jalefkowit I think both philosophies should be supported. One option is to provide a "Mac binary" for those who don't know, or don't want to have to know, their exact hardware architecture or OS version; under the hood it's multi-arch/OS as needed. Then also provide slim, narrow releases for folks who do know, do care, or truly need to minimise network or disk footprint.
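Not from the thread, just a rough sketch of how that could look for a Go project. The layout (./cmd/myapp, a dist/ output directory) and the target list are made up for illustration: build slim per-platform binaries with Go's cross-compiler, then optionally fuse the two darwin builds into a "Mac binary"-style universal binary with lipo on macOS.

```go
// Hypothetical release script: slim per-platform builds plus an optional fat macOS binary.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// dist/ is an assumed output directory, not anything from the thread.
	if err := os.MkdirAll("dist", 0o755); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	targets := []struct{ goos, goarch string }{
		{"darwin", "amd64"},
		{"darwin", "arm64"},
		{"linux", "amd64"},
		{"windows", "amd64"},
	}
	for _, t := range targets {
		out := fmt.Sprintf("dist/myapp-%s-%s", t.goos, t.goarch)
		if t.goos == "windows" {
			out += ".exe"
		}
		// Go's cross-compiler picks the target purely from GOOS/GOARCH.
		cmd := exec.Command("go", "build", "-o", out, "./cmd/myapp")
		cmd.Env = append(os.Environ(), "GOOS="+t.goos, "GOARCH="+t.goarch)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "build %s/%s failed: %v\n", t.goos, t.goarch, err)
			os.Exit(1)
		}
	}
	// The "Mac binary" option would then fuse the two darwin slices, e.g. on macOS:
	//   lipo -create -output dist/myapp-darwin dist/myapp-darwin-amd64 dist/myapp-darwin-arm64
}
```

The slim artifacts serve people who care about footprint; the fused darwin binary serves people who just want "the Mac download".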
@glyph Nailed it. It's IP theft at hyperscale. And when you ask it to make a de facto literal copy (like "read aloud to me such-and-such Sherlock Holmes book"), it can't even do that correctly; instead it lies or hallucinates. It's been a clownshow in my experience.