The AI did spot it, and started spewing nonsense because it’s shit. It looks like it was trying to write about straight vs strait but was unable to resolve that into actual correct text and instead spewed nonsense about straight and “a sound” being homophones.
Problem is there will be people lapping up this nonsense or more subtle errors. AI is alpha software at best and it’s crazy how it’s being pushed onto users.
So you probably already know this, but the AI wasn’t trying to write about anything, since it works without intent. It predicts the most likely combination of words in reply to your prompt. Since this is probably not a very common question, made even less likely by the spelling error, the AI doesn’t have much to go on, and it returns a combination of words that may be the most likely according to the model, but with a low probability of actually being correct.
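To make that concrete, here’s a toy sketch (hypothetical logits, not a real model) of the point being made: even when a model picks the *most likely* next token, that probability can still be low in absolute terms when the prompt is unusual and probability mass is spread thinly across many continuations.

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution over tokens."""
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical scores for an unusual prompt: no continuation stands out,
# so probability is spread thinly over many candidates.
logits = {"strait": 1.2, "straight": 1.1, "sound": 1.0, "homophone": 0.9, "the": 0.8}
probs = softmax(logits)

# The model emits the argmax token anyway, even though its absolute
# probability of being the "right" answer is low.
best = max(probs, key=probs.get)
print(best, round(probs[best], 3))
```

The winner here still carries under 25% of the probability mass, which is the gist of the comment above: "most likely according to the model" is not the same as "likely to be correct".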
An LLM is without intent as much as a motor is without intent. But if you block it from doing its job, we’d still say that it’s “trying to spin”. What would you propose as an alternative to “trying”?
“Trying” is fine, “attempting” could also be used. I’ve never heard that there needed to be intent behind trying something, only an underlying directive, as you said.
Ah, thank you, I couldn’t figure out where it assembled that nonsense from.