The stop using non-deterministic technologies to do tasks requiring deterministic outputs challenge.
Let’s try that.
-
@mttaggart I forgot the damn partner program.
For each new customer you refer to us, you get a 0.5% probability boost to your favoured outcomes*
For an additional $325k/yr you can get a spotlight announcement and a seat on our board of directors with voting rights.
*Annual quotas apply to maintain participation. Limited to maximum 120% probability per outcome.
@mttaggart make this an incremental game and I’d play it.
*Proceeds to stave off overwhelming ADHD impulse to drop all current projects and learn how to make this happen.*
-
@SecurityWriter @mttaggart
You should also add some non-deterministic input. Prompting can hook users, and at the same time you can always blame the user: "if the results are incorrect, you are prompting it wrong."
-
@SecurityWriter "i asked chatgpt and it said that it's fine and i should stop worrying about details"
-
@SecurityWriter I witnessed someone saying that their “agentic” AI doohickey does what you ask it to do 85% of the time, and their friends celebrated that as an accomplishment.
The bar is so very low.
-
@SecurityWriter stuff everything in a may-return monad and show convergence to a deterministic result to exit the monad!
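Taken only half-seriously, the "may-return monad" idea can be sketched in Python: wrap the nondeterministic call, re-run it until it agrees with itself a few times in a row, and only then "exit the monad" with that value (or `None` if it never settles). The names `converge` and `flaky` are hypothetical, purely for illustration.

```python
import random

def converge(sample, runs=3, limit=1000):
    """Re-run a nondeterministic sample() until it returns the same value
    `runs` times in a row; return None (the may-not-return case) if it
    never settles within `limit` attempts."""
    streak = []
    for _ in range(limit):
        value = sample()
        if streak and streak[-1] != value:
            streak = []          # disagreement: restart the streak
        streak.append(value)
        if len(streak) >= runs:
            return value         # converged: exit the "monad"
    return None

# A mostly-right, occasionally-wrong oracle (hypothetical stand-in for an LLM).
rng = random.Random(0)
flaky = lambda: rng.choice(["4", "4", "4", "five"])
print(converge(flaky))
```

Of course, convergence of repeated samples only shows the model is consistent, not that it is correct, which is rather the point of the thread.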
-
@SecurityWriter As I understood it, LLM output is technically deterministic, but because of the random seed thingy it gives different answers each time, no?
-
@star the randomness is guaranteed 🤭
-
@star the randomness is guaranteed 🤭
@SecurityWriter well, the wrongness certainly is. And, FYI, I am against AI; I just thought this was how it worked: same seed, same input, same context = same output (on the same model), or am I mistaken?
-
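@star's point is broadly right for the sampling step: the "randomness" is pseudorandom, so fixing the seed along with the input reproduces the draw. In practice, services typically use a fresh seed per request, and low-level floating-point effects on GPUs can still vary results. A toy sketch of the seeded-sampling part (the function name `sample_token` and the probabilities are made up, not real LLM code):

```python
import random

def sample_token(probs, seed):
    """Toy next-token sampling: a weighted pseudorandom draw.
    The result is fully determined by (probs, seed)."""
    rng = random.Random(seed)
    tokens = list(probs)
    weights = list(probs.values())
    return rng.choices(tokens, weights=weights, k=1)[0]

probs = {"yes": 0.6, "no": 0.3, "maybe": 0.1}
print(sample_token(probs, seed=42) == sample_token(probs, seed=42))  # True: same seed, same input, same output
print(sample_token(probs, seed=7))  # a different seed may pick a different token
```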
@SecurityWriter @drahardja difficulty: impossible
-
@realn2s @SecurityWriter @mttaggart I hate all y’all! Some douche burrito might see this and make it.
-