Is this platform still massively against AI or has it moved more towards acceptance?
-
@mitsuhiko Armin, any chance I can convince you to use the term "LLMs" instead of "AI" when you want to talk about LLMs? Or maybe "generative AI" if you think LLM is not flashy enough? AI is an umbrella term that covers a lot of things, some good, some bad.
-
@miguelgrinberg I don’t think it really matters. LLMs are a subset of AI. I’m not sure why being more precise here would matter.
-
@mitsuhiko @miguelgrinberg Language matters maybe more than you think. Saying "I'm pro AI" means you've (perhaps unintentionally) endorsed the broader implications of the way AI is marketed by the tech companies themselves.
I suspect your question is more about LLM/agentic workflows in an engineering scenario than AI usage in government, police, military decision-making (as an example).
If you're saying "it doesn't matter" to how we use the language, then you'll likely find a lot of resistance from folks who believe that language matters. But that's a different discussion altogether.
-
@pythonbynight @miguelgrinberg I really don’t subscribe to the idea that there is so much nuance in this space that language matters here. Everyone knows quite quickly what is meant.
-
@mitsuhiko It would matter to a big portion of the people you are addressing and hoping will engage with you. I myself have a highly positive attitude about AI in general, even though as you already know I have zero interest in LLMs.
-
@miguelgrinberg I honestly think it's counterproductive because "I'm actually pro AI, but contra LLM" is in itself hard to judge because … what AI are you pro about? It's not like we have a tremendous amount of choice available here. Being abstractly in favor of something that does not really exist today is just a special form of watering down the discourse.
-
@mitsuhiko I wouldn't say "acceptance". I would say that we talk about it less these days, in the same way I hope Ukrainian people find the energy to talk about something other than the Russian invasion. Because, you know, life has to go on.
But I still see people un-share content when they realize it might be AI-generated. And I still see prominent people posting examples of how AI hurts humanity, begging their audience to resist. And it always resonates with me much more than any AI IDE integration demo. Because deep down I just know that **it is not worth it**.
Make no mistake, @mitsuhiko, this is an invasion war, and you are on the winning side. Maybe this is why your message sounds so pedantic to me.
-
WTF did I just read here?
@mitsuhiko Do you really classify every single ML-powered or ML-based app / tool / system / solution pre-LLMs as "non-existent"?
Do you use a QR code reader? How about Apple's FaceId?
Seriously man ...
@miguelgrinberg is absolutely right.
LLMs can't be generalized as "AI" just because the tech overlords are currently running a massive marketing campaign to convince us so. Some years ago they tried to convince us that BTC was money too, but just go to your local grocery store and try to pay with your favorite BTC app. Tell me whether that works.
I work in a Data Engineering department, and within teams where the amount of money we spend on tools matters at the end of the month, we challenge ourselves every day on whether we need an LLM-based solution to every existing problem, because we acknowledge that "traditional" ML solutions can solve the same category of problems just as well, while being cheaper, faster, and easier to self-host and manage.
Do you think having this type of discussion underpinning engineering decisions is "counterproductive"?
In addition to that: do you think LLM-based solutions will become cheaper in the long run?
I honestly want to hear from you.
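To make the trade-off described above concrete, here is a minimal sketch of the kind of "traditional" ML alternative being referenced: a multinomial Naive Bayes text classifier, written with only the Python standard library. The training data and labels are hypothetical stand-ins for any routing or labeling task; the point is that this runs locally in microseconds, with no GPU or per-token API bill.

```python
import math
from collections import Counter, defaultdict

class NaiveBayesClassifier:
    """Tiny multinomial Naive Bayes with Laplace smoothing."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.label_counts = Counter()            # label -> number of documents
        self.vocab = set()

    def train(self, docs):
        # docs: iterable of (text, label) pairs
        for text, label in docs:
            words = text.lower().split()
            self.label_counts[label] += 1
            self.word_counts[label].update(words)
            self.vocab.update(words)

    def predict(self, text):
        words = text.lower().split()
        total_docs = sum(self.label_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.label_counts:
            # log prior + sum of Laplace-smoothed log likelihoods
            score = math.log(self.label_counts[label] / total_docs)
            total_words = sum(self.word_counts[label].values())
            for w in words:
                count = self.word_counts[label][w] + 1
                score += math.log(count / (total_words + len(self.vocab)))
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Hypothetical training data -- a stand-in for any classification task
clf = NaiveBayesClassifier()
clf.train([
    ("win free money now", "spam"),
    ("free prize claim now", "spam"),
    ("meeting agenda for tomorrow", "ham"),
    ("quarterly report attached", "ham"),
])
print(clf.predict("claim your free money"))   # -> spam
print(clf.predict("agenda for the meeting"))  # -> ham
```

Whether this beats an LLM depends entirely on the problem; the argument in the thread is only that for many routine classification problems, something of this shape is competitive in accuracy and far cheaper to run and host.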
-
@mitsuhiko @pythonbynight @miguelgrinberg The problem is that "everyone knows" one of two different things.
The general public "knows" the marketing hype of "these things are intelligent and helpful and better at everything than humans, like Data from Star Trek and other sentient artificial beings".
While lots of techies know that "AI" in an announcement means "content generation engine that isn't actually intelligent but is presented as such because it gets investment money" (or "actually solid classic probabilistic ML, but we have to present it with the hype word").
-
@mitsuhiko
I call BS. Not being precise enough is you making up a strawman to spar with. Bayesian spam classifiers and OCR can be argued to be "a subset of AI" too, but you'll hardly find the same pushback against them. The widespread use of GenAI, be it Stable Diffusion or LLMs, has unique characteristics that are undesirable in how they are pushed to shape the future of society, as well as being extremely resource-intensive for an arguably shitty ROI and flaky economics.
-
@ddelemeny It feels like y'all just want to unload your frustrations into this conversation. I have no desire to engage on that front.
-
@mitsuhiko @ddelemeny It seems a little unfair to ask a question like "is this platform still massively against AI" and then push back against attempts at clarifying what you mean by that statement.
In your opinion, language doesn't matter, but to others, language clearly does... that's not to say you are wrong, or they are wrong--but it's a matter of importance to agree upon what it is that we are "massively against" in the first place.
If you were to lead with something like "I am massively against the AI industry as a whole and the violations enumerated elsewhere, as well as the problematic usages of AI in spaces such as surveillance, military, police, big tech, etc., but I do see ways in which we can harness a subset of that technology in a positive way in our engineering workflows"... well, that might lead to a better discussion.
For example, I would be very curious to hear what your workflow would look like if (when?) the unsustainable valuations finally deflate. I read most of your blog posts and find them very interesting...
But the initial question read more like a provocation than a request for discussion in general.
-
@pythonbynight @ddelemeny the purpose of the post was to figure out if I should post AI content here. From the responses I can see that it would be unwise.
-
@mitsuhiko @ddelemeny I have several friends and other accounts I follow that post very interesting content about their agentic workflows, the things they're building with the latest tooling, or their views on the latest frontier models.
Their enthusiasm and interest does not come at the expense of minimizing the threats that others feel from the very same industry.
1/3
-
@pythonbynight @ddelemeny None of this was the topic of discussion though? It was about language policing my use of the term AI. As for the other stuff: I take issue with your insinuation that I would not want to engage with or care about those concerns. I do, but I can also separate these things out into distinct conversations. I find that still very hard to do here and on Bluesky because of the overwhelming negativity. This thread being an excellent example of it.
-
@mitsuhiko @pythonbynight seen from the other side, you set the conversation in terms that are way too broad for what you actually want to talk about and frankly read as a taunt on a hot topic, then complain that the conversation doesn't flow the way you assumed everyone would agree on (and I fail to imagine what you expected with such a starter).
"Massively against AI" is a mischaracterization of a variety of more nuanced and precise positions, which are only tangentially related to AI.
-
@ddelemeny @mitsuhiko I appreciate that you say you do care about some of these concerns; I apologize for the insinuation otherwise. Up until now, it seemed you felt the concerns were either overblown, unimportant, or irrelevant, which left your position unclear.
Is it accurate to say that you believe it is irrelevant for people genuinely concerned about the AI industry to bring up their issues in a question that asks if people here are "massively against AI"?
In the end, it seems to me (correct me if I'm wrong) that you take issue with individuals asking for more precise language, or further, explaining why they might be "massively against AI", because you see that as a matter of language policing?
As for my part, I come from a humanities background, where language matters a lot, and asking for clarity is key to more effective communication. I do apologize for mischaracterizing your intentions.
-
@pythonbynight @ddelemeny There are specific concerns and there are abstract fears. It's impossible to work with the latter; it's possible to do a lot with the former. As an example, I have a lot of concerns about how society is going to deal with AI, and that's also something I'm trying to understand and work with in the right way. But that is far more nuanced and complex than policing the use of the word AI, which does very little to navigate those complexities.
-
@mitsuhiko @ddelemeny I happen to think that "language policing" (as crude a term as that is) is one of the ways we can start addressing some of these complexities.
The problems (real or perceived) with using AI in an engineering workflow can be addressed separately (and in good faith) when we're able to think about them properly, as opposed to, say, the problems of AI usage among the general public (chatbot use by young children or at-risk people and the psychological effects), as an example.
Perhaps your initial question, from your perspective, was specifically geared toward the former (engineering workflows), but in that case, I don't think it's fair to assume everyone knew what you meant. It's like saying, "it's your fault for misunderstanding my question"...