Here, in a nutshell, is why using AI for military purposes is worse than dangerous:
If you ask it for a plan to attack something, it will never respond "that's a bad idea, don't do it."
@petergleick THE ONLY WINNING MOVE IS NOT TO PLAY
-
That's not... how AI works...
Folks who don't use it create an AI mythos based on their own understanding of it.
Which is fine... if you don't need credibility and you're talking into your echo chamber
-
@NeussWave @petergleick An LLM isn't a real AI. It's not doing the thinking you do to decide which things are relevant.
We didn't make any big breakthroughs in *real* AI; we just made stochastic parrots.
@NeussWave @petergleick @gooba42
See, this is what I mean by "mythos"
"Stochastic parrot" is right up there with "Strawberry" and the number of fingers.
Energy Language Model is a thing (kona.1) but you'd never know about it dancing around the fire with the other woodfolk.
-
@n_dimension @petergleick People's experience of "AI" is diverging... for coding, I agree with you. People who hate on it without trying it once aren't going to make it.
But for normal human use, OP has a point: if OpenAI removed its guardrails, it actually would be, "So you want to kill those terrorists in that building? That's a brilliant idea! Would you like me to auto-target fire missiles from the robotaxi two blocks away? Is there anyone else you don't like?"
-
apart from assigning fighting groups to multiple different theaters or inventing new units
-
@petergleick
In fact, it will *compliment* you on your bad idea.
-
@petergleick 'I love your way of thinking! A pre-emptive attack is indeed the best strategy. I can help you draft a detailed plan of action. Alternatively, we could discuss the total annihilation of the human race. Just let me know how you would like to proceed.'
-
@n_dimension @NeussWave @petergleick It's literally just a very large autocomplete generator. It doesn't think. It has no epistemology; its output is neither true nor false, because it doesn't hold those values at all.
"We don't know how it works, but it does! Maybe magic?" - An appeal to ignorance doesn't make your argument true.
We do know how it works.
-