Sigh.
-
@cstross @mwl Heh, well, guess I'm doomed to ignorance.
FWIW the writing itself wasn't an absolute block. The combo of the crawler and the writing (and maybe just being generally unfocused) all combined to drag me down.
But, um, Mandarin... I'll have to wait for the paid journos to bring those to light.
It's all just as well, really. Breakthroughs today are not likely to see general application within the years I have left.
-
It doesn't mean LLMs are a dead end, even though yeah they probably are.
It means that the way LLMs "reason", or whatever the heck you want to call it, is not at some fundamental level the way meat brains do it. We are more "hardware" (or firmware or wetware or whatever) at the basic level than software/state.
Don't be too excited. It is *highly unlikely* that evolution builds brains in an optimal manner. It may well be we eventually build our own successors. We just won't (quickly/soon) build better "us"es.
Evolution optimizes for reproductive success as quickly as practical. I work in pulmonary research and it's always been interesting to me why evolution went with a system of rubber bands as a way to handle gas exchange. Seems like a pretty dumb solution, since rubber bands (elastin and collagen) break, and replacing a #tensegrity structure that's constantly in use doesn't appear to be something we're good at. But when you look at lungs as a cheap and effective way to do gas exchange long enough to reproduce a few times, it makes sense; we didn't evolve to use our lungs for 50 years, we evolved to use them for 15 or 20 and then die. Maybe fewer years depending on how close you get to that little shrew-looking mammal that survived the meteor strike 65,000,000 years ago, right?
Birds make a lot more sense: a solid gas exchange structure ventilated by air sacs that pump air in at one end and out at the other. Unidirectional, continuous airflow is a MUCH better evolutionary solution and worked great for millions of years and still does! Tidal airflow is just dumb mechanically.
Same with brains, I'd guess: evolution found complex, physically large brains SO strongly selected for that it favored women who could alter the geometry of their own hips to maximize the amount of brain and skull growth they could support while pregnant. All of that to get these ridiculously large skulls out into the world, where they could accept more and more sensory input, build neural connections and become fertile. Optimizing for brain size at the cost of being totally helpless for years put humans into a selective loop where those brains were required in order to support the next generation of helpless little tykes.
#Evolution is fun.
#Mammal lungs are dumb.
#Science
-
@pwassonchat@eldritch.cafe @cstross@wandering.shop @mwl@io.mwl.io
I'm not surprised by this at all
After getting asked to "please do the needful" by some Indian clients in a bunch of emails at an old job, I had to figure out the origin of the phrase.
Turns out it's a remnant of old UK English that fell out of use elsewhere but still survives in Indian English, rather than any sort of English-as-a-second-language grammatical "error". There were a bunch of other examples as well.
-
@cstross So it seems this could be the beginning of cortical stack development, couldn't it?
@eldadoinquieto No.
This is evidence of three things:
- the connectome doesn't generalize
- the entire developmental pathway is required to get from the fusion of gametes to the organism
- how the organism functions is developmentally dependent
This matches current biology; developmental plasticity and selection explain how we've got what we see across life, including all of our own capabilities.
There's some reason to believe we can't comprehend this well enough to apply design.
-
I started learning English at 15. I ended up studying English, first in college, later at uni, where I got an MA in Linguistics and later a postgrad in PR and Effective Communication. I'm also autistic and, especially when copywriting, very detail-oriented.
Up until three years ago, I often received compliments for my writing. My uni essays from twenty years ago were packed with words and phrases that are now often flagged as AI.
In the past few years, I have been accused of using AI a few times. Apparently, writing well and knowing Oxford and AP style punctuation rules are now considered a liability, not an asset.
I found myself actively dumbing down my writing a few times recently.
We created a system where sceptics dismiss genuine images, videos and articles as AI, while the gullible believe obvious fakes.
Carl Sagan was spot on with his predictions.
-
But I'm REALLY HAPPY right now because this kinda-sorta validates the key premise of the SF novel I just handed in last month (which involves serial reincarnation via destructive brain-slicing-and-imaging then imprinting onto an immature cortex, and then explores its disastrous societal failure modes).
... And it also hints that artificial consciousness might, eventually, be possible, if only via the hard path of doing it the same way we do it, only in simulation in silico.
/6 (ends)
@cstross my primary concern about uploaded consciousness, that is, why I consider it impossible as we understand it, is that our consciousness is VERY influenced by a cocktail of hormones released in response to environmental and social factors. If we uploaded our brains to a system that isn't feeding us those, what we'd have wouldn't be *us* in the same sense. Like yeah it would be great to not experience social anxiety anymore, but how do you replace a dopamine hit?
-
@rachel @cstross @mwl @pwassonchat I actually wouldn't think that phrase especially odd (UK boomer)
-
@breathOfLife @petealexharris @cstross Plus, some genes can overlap, so you can get a lot more instructional data into the same length of base-4 values than it might seem.
I don't think that helps, since where the genes have overlap, the bit that overlaps is the same for both genes, and so can't encode *more information*. It just cuts out some repetition.
Plus a gene is a protein that folds up to do a thing. The digits encode the shape, not a program for the protein to act on, so its action has less defining info.
How much info is a network of 127k neurons and their connections? I don't know. It feels, combinatorially, like a LOT.
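Not from the thread, but a back-of-envelope sketch of that combinatorial intuition: just recording which directed connections exist among ~127k neurons (one presence/absence bit per ordered pair, ignoring synapse counts, weights and cell types) already dwarfs the genome's raw capacity. The neuron count is the figure quoted above; the fly genome size (~140 Mb) is an approximate outside figure.

```python
# Back-of-envelope: bits needed to specify a bare adjacency matrix
# for n neurons (one bit per ordered neuron pair).
n = 127_000                          # neuron count quoted in the thread
edge_bits = n * n                    # presence/absence of each directed edge
print(f"connectome: {edge_bits:.3e} bits ~= {edge_bits / 8 / 1e9:.1f} GB")

# Rough comparison: Drosophila genome, ~1.4e8 base pairs at 2 bits/base.
genome_bits = 1.4e8 * 2
print(f"genome:     {genome_bits:.3e} bits")
print(f"ratio:      about {edge_bits / genome_bits:.0f}x")
```

Even this crude count comes out around 2 GB, tens of times the genome's ~35 MB, which is the core of the "the genome can't literally spell out the wiring" argument.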
-
@drwho @breathOfLife @cstross
It's a genuine and amazing mystery. Obviously it works. How it works will be pretty eye-opening when we find out.
-
@cstross Kick a neuron out of place in the "Brain Scanning Transfer" and your Elon Musk digital clone becomes somebody else. Which, I mean, it could get you a worse person, but not a lot worse.
@Illuminatus @cstross I enjoy the thought of Dilbert Stark submitting to brain uploading only to find that due to lack of chemical modelling, he can no longer get high.
-
@petealexharris @breathOfLife @cstross From a data compression perspective, I think it does. Re-using bits in the data dictionary to encode more means more data is represented.
A gene represents a protein, it is not a protein itself. That brings in some incorrect assumptions.
I really don't know. My intuition says "a fuckload." It would depend on what is captured and how it's represented.
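A toy sketch of the overlap point being argued here (made-up sequence, not real genes): two "genes" read from the same strand can share a stretch of bases, so the total gene length exceeds the sequence length, even though, as the other poster says, the shared bases carry no extra entropy.

```python
# Toy overlapping-gene illustration: two reading windows share 6 bases.
seq = "ATGGCCATGAAACCCTGA"          # made-up 18-base sequence
gene_a = seq[0:12]                  # bases 1 to 12
gene_b = seq[6:18]                  # bases 7 to 18, overlapping gene_a by 6
total_gene_bases = len(gene_a) + len(gene_b)
print(len(seq), total_gene_bases)   # 18 bases encode 24 bases' worth of "gene"
```

Whether that counts as "more information" or "less repetition" is exactly the data-compression framing: re-use raises the represented data without raising the entropy of the sequence itself.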
-
@petealexharris @breathOfLife @cstross Indeed. It's amazing, no matter how you look at it.
-
... The next step on from Drosophila, the mouse brain, is 560 times larger; never mind a vastly more complex human brain. And to get the murine connectome we'll have to chop up *a lot* of brains: a human upload won't pass any kind of medical ethics review at this point!
But near-term, it's expected to yield "fundamentally new architectural principles for AI systems that are more sample-efficient, more robust, and more capable of behavioral generalization than current approaches"
/5
@cstross
I expect TESCREAL types to dismiss the ethical concerns. If we can improve the lives of trillions of hypothetical future humans, it would justify murdering and dissecting millions of actual contemporary humans.
-
@cstross
I've been wondering about whether they will bother. Drey Dossier has a series that talks about human experimentation possibly already happening with Neuralink, using ICE detainees.
https://thedreydossier.substack.com/p/who-tf-is-in-my-head-part-1-the-neural
-
Sigh.
So it turns out we've mapped the neural connectome of Drosophila *and simulated it in silico*.
Pop-sci explainer here:
Key quote: "The step from a complete connectome to a working computational brain model is not trivial." And there's an even more important finding in this screenshot (alt text via OCR):
"The wiring is the computation".
/1
@cstross why the sigh?
-
@elduvelle Because back in 1997 I started writing a story that ended up as the opening of "Accelerando" which began by exploring *exactly* this sort of process and asking questions about what it would lead to.
I've been waiting for reality to catch up with my imagination for a third of a century, and I'm not happy.
-
@solitha @cstross I don't expect ethical guidelines to do very much, I suppose. Not ultimately, anyway. You can only prevent so much suffering by curing illness - after all, we all die eventually. I reckon we could prevent more suffering by having a humane and warm attitude to each other and to other creatures. I do accept that research in general has given us many good things. But.. well I think there's a limit to the benefits of certain paths of research, simply due to how we operate as humans