@glyph Did you quote post something?
-
@glyph I really like your point about scaling down. Even when I was in undergrad, there were already pieces about how the long free lunch was finally over, but damned if the strategy of relying on hardware progress for all software improvements isn't entirely unsustainable.
-
@ireneista @cthos @xgranade this isn't exactly a "cheerful" thought, but it's also not horribly grim: I think we already saw it break down in 2020, and we saw both how brittle it is (nobody had enough slack in their supply chain to actually weather the disruption without exposing catastrophic delays to customers) and how resilient it is (customers were super mad, but alternate pathways DID come online in less than a year)
-
@glyph I see a pretty clear parallel to the middle period and tail end of the 19th century, where a lot of the "easy" discoveries had been, well... discovered... and the rate of progress in the sciences and inventions slowed until there was some significant breakthrough (the steam engine, in the first part of the century, and then electricity, oil, and oceanic telegraph lines, and later the linotype, breaking the mid-century lull and speeding advances up to WWI again)
-
@glyph not to say we'll have another linotype or telegraph; more that I think we're running out of "easy" parts for progress with computers, so unless we get something new through materials science, physics, chemistry, something like that, it's likely that all we have left available with current understanding is the hard stuff.
-
@glyph maaaaaan I just started reading and I am ALREADY MAD
about your rudely accurate observation we are now ONE QUARTER INTO THE CENTURY
🕸️🧓🏻🪦
-
@glyph Maybe quantum computing will unlock a whole new set of easy things (I'm... not confident in that, but, it could do. It's at least a lot faster for certain sorts of things, in the rare cases we can figure out how to actually implement something useful.)
But, there could also just... not be "easy" parts left.
-
@bitprophet you're the first one to catch that (I guarantee it hurt more to write than it does to read :))
-
@miss_rodent the expert I would go to ask about that is @xgranade and I am pretty confident that she would not be bullish on this particular likelihood any time soon
-
@glyph This is really nice!
I hope that open source that individuals can use isn’t a ZIRP. That would be a severe tragedy of the commons situation for sure.
-
@alwayscurious thank you for saying so!
1. The phenomenon itself isn't a ZIRP, but the way that it was being funded probably was.
2. Tragedy of the commons is fake, cf. https://en.wikipedia.org/wiki/Elinor_Ostrom
-
@glyph Fair, was just a suggestion for a potential 'next thing' that might actually work, though last I looked into it (admittedly been a few years) it did... not seem particularly hopeful as a 'next big thing', at least not anytime soon.
As far as I know it still has quite a few limitations and caveats to what it's useful for, and of course the major limitation of all computers, being restricted to, y'know, only things which can be computed or achieved by computation in the first place.
-
@miss_rodent I'm pretty sure it will be useful, but probably in some pretty limited verticals. It's definitely not The Way Computers Will Work and the things that it will unlock seem like pretty quiet infrastructural improvements and not significant population-wide stuff. Per my existing thesis I do think it's almost certain that it will, at some point, be A Thing, but it will not be Big
-
@glyph @miss_rodent Yeah, no, I'm not particularly bullish. There's a couple parts to why not... while I'm quite convinced that building a quantum computer is probably possible, and we have some good mathematical evidence to back that up, it's been five years away since 1997. I'm suspicious of any particular claimed timelines, as the problems left to be solved are huge.
The other part, whether quantum computers will create a new class of easy next big things once built, is more complex still.
-
@glyph I've aspired to "lunch-scale projects" for several years now. Intentionally meaningless—semantic choice in opposition to the differently meaningless "Web scale"—but generally:
- not for crowds (used by one person at a time, maybe a small group)
- bite-sized projects with a smaller scope
- the kind of work where I could fix the average bug before lunchtime
These ideals *have* made it hard to be persuasive in dev job interviews, alas.
All this to say I agree with the thoughts and CTA.
-
@glyph @miss_rodent The short version there is that we currently haven't found any *practical* problems for which quantum computers are *provably* better than classical computers by a large enough margin to justify using a quantum computer.
We've found impractical problems (she said handwavingly af), and practical problems like cryptanalysis where we have good evidence that quantum computers are better, but no hard proof.
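[Editor's aside: for a sense of the margins in question, unstructured search is the textbook case of a provable quantum speedup: Grover's algorithm needs roughly (pi/4)*sqrt(N) oracle queries versus about N/2 classically. A rough back-of-the-envelope sketch in Python, ignoring error correction and per-query hardware overheads, which in practice dominate the comparison:]

```python
import math

def classical_queries(n: int) -> float:
    """Expected oracle queries to find one marked item among n, classically."""
    return n / 2

def grover_queries(n: int) -> float:
    """Idealized oracle queries for Grover's algorithm: about (pi/4) * sqrt(n)."""
    return (math.pi / 4) * math.sqrt(n)

# The quadratic gap looks dramatic on paper, but each quantum query is
# far more expensive than a classical one once error correction is added,
# so these raw counts overstate the practical advantage.
for bits in (20, 40, 64):
    n = 2 ** bits
    print(f"N = 2^{bits}: classical ~{classical_queries(n):.1e} queries, "
          f"Grover ~{grover_queries(n):.1e}")
```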
-
@glyph @miss_rodent Part of that is that it's hard to come up with new algorithms when you don't have the hardware to run and test them on, leading to a chicken and egg kind of situation. Part of it, though, is that quantum algorithms — not programming quantum computers, but coming up with genuinely new algorithms for them — tend to require a peculiar kind of cleverness and trickery that slows down the search for new problems which might admit a quantum advantage.
-
@glyph I am very, very tired of Next Big Things anyway.