Sent to me by a friend, don’t judge the misspelling of “strait” lol.

  • morphballganon@lemmynsfw.com · +32 · 28 days ago

    “Don’t judge the misspelling of straight”

    Reaction 1: the misspelling contributed to the confusion

    Reaction 2: AI should be able to spot and correct the misspelling

    • BananaTrifleViolin@lemmy.world · +21 · 28 days ago

      The AI did spot it, and started spewing nonsense because it’s shit. It looks like it was trying to write about straight vs strait but was unable to resolve that into actual correct text and instead spewed nonsense about straight and “a sound” being homophones.

      Problem is there will be people lapping up this nonsense or more subtle errors. AI is alpha software at best and it’s crazy how it’s being pushed onto users.

      • zout@fedia.io · +5/−2 · 28 days ago

        So you probably already know this, but the AI wasn’t trying to write about anything, since it works without intent. It predicts the most likely combination of words in reply to your prompt. This probably isn’t a very common question to begin with, and the spelling error makes it even less plausible, so the AI doesn’t have much to go on. It returns the combination of words the model rates as most likely, but with a low probability of actually being correct.
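        A minimal sketch of that “most likely next words” idea. The probability table below is invented purely for illustration (a real LLM learns probabilities over tens of thousands of tokens); the point is that the model only ranks continuations by likelihood, with no notion of whether the top choice is factually true:

```python
# Toy illustration of next-token prediction. The table is made up for
# illustration; it is not from any real model.
next_token_probs = {
    ("straight", "and"): {"a": 0.40, "narrow": 0.30, "sound": 0.05},
    ("difference", "between"): {"a": 0.50, "the": 0.30, "straight": 0.01},
}

def pick_next(context):
    """Greedy decoding: return the highest-probability continuation."""
    probs = next_token_probs[context]
    return max(probs, key=probs.get)

print(pick_next(("straight", "and")))  # the most likely word, not the most correct one
```

An unusual prompt (like a misspelled geography question) lands in a low-probability region of that table, so the top-ranked continuation can still be nonsense.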

        • howrar@lemmy.ca · +14 · 28 days ago

          An LLM is without intent as much as a motor is without intent. But if you block a motor from doing its job, we’d still say that it’s “trying to spin.” What would you propose as an alternative to “trying”?

          • samus12345@sh.itjust.works · +7/−1 · 27 days ago

            “Trying” is fine, “attempting” could also be used. I’ve never heard that there needed to be intent behind trying something, only an underlying directive, as you said.

    • taiyang@lemmy.world · +5 · 28 days ago

      Given that most search engines figure that shit out the moment I search, yeah. It’s a mild assumption, and even then the engine gives you the option to show only results for “straight.”

    • 🇰 🌀 🇱 🇦 🇳 🇦 🇰 🇮 @pawb.social · +5/−1 · edited · 27 days ago

      How does the misspelling contribute to the confusion? It’s not like “straight” OR “strait” sounds anything remotely close to “sound.” The stupid part isn’t that it doesn’t understand the OP meant the body of water; it’s that it thinks the words “straight” and “sound” are homophones.

  • bizarroland@lemmy.world · +20/−1 · 28 days ago

    I was so confused for a moment. I was like, “What the hell does heterosexuality have to do with a noise?”

  • Gladaed@feddit.org · +4/−11 · 28 days ago

    A probable answer. It’s not a sensible question, so a ridiculous answer is expected.

    • Dr. Bob@lemmy.ca · +18 · 28 days ago

      Strait is misspelled. Both straits and sounds are bodies of water so it’s a very sensible question. You might also ask what the difference between a cove and a bight is.

      • Gladaed@feddit.org · +7/−11 · 28 days ago

        Didn’t make the connection. This is very difficult for transformers, since they don’t hear the words spoken aloud, and they don’t read individual letters either. So this is a “don’t use an AI for something it fundamentally cannot do” example.
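        To illustrate the “don’t read the letters” point: transformers operate on subword tokens, not characters. Here is a toy greedy tokenizer over a made-up vocabulary (the pieces and IDs are invented, not from any real tokenizer like BPE), showing how the letter overlap between “strait” and “straight” turns into opaque integer IDs:

```python
# Hypothetical subword vocabulary, made up for illustration. Real
# tokenizers learn theirs from data. The model sees only the integer
# IDs, so shared spelling between words is largely invisible to it.
vocab = {"stra": 101, "it": 102, "ight": 103, "sound": 104}

def tokenize(word):
    """Greedy longest-prefix tokenization over the toy vocabulary."""
    ids = []
    while word:
        for end in range(len(word), 0, -1):
            piece = word[:end]
            if piece in vocab:
                ids.append(vocab[piece])
                word = word[end:]
                break
        else:
            raise ValueError("out-of-vocabulary piece")
    return ids

print(tokenize("strait"))    # [101, 102]
print(tokenize("straight"))  # [101, 103]
```

The two words end in different token IDs, and nothing in those IDs tells the model they are pronounced identically.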

        • JohnnyCanuck@lemmy.ca (OP, mod) · +9 · 28 days ago

          An error in a question should either result in correcting the question or indicating that the question doesn’t make sense.

          Calling “straight” and “sound” homophones is a pure demonstration of the LLM’s ignorance. Maybe it got fooled by “straight” and “strait” being homophones and somehow crossed wires, but that’s exactly the point: it is ignorant, despite how “intelligent” it might sound.

          • partial_accumen@lemmy.world · +5/−1 · 28 days ago

            I think you’re holding a fundamental misunderstanding of what today’s LLMs are.

            An error in a question should either result in correcting the question

            LLMs don’t have the ability to reason about what you may have meant. The most they can do, given the right training, is something like: “people who have used words or patterns similar to yours meant X, Y, or Z, and of those, the highest probability given the words you chose is X.” This is exactly what it did.

            or indicating that the question doesn’t make sense.

            This would require the holy grail of AI which doesn’t exist yet: Artificial General Intelligence (AGI)

            AGI is the ability to reason the way humans (and some animals) can. None of today’s LLMs (Grok, Claude, ChatGPT, etc.) are AGI. They are all the much more limited ANI (Artificial Narrow Intelligence). ANI can only work with whatever training data you give it, and even today’s giant LLMs are only a tiny fraction of what a system would need to achieve AGI. None of our current technology can take the data we have today and build an AGI model; as the models scale, the limits of LLMs start to fracture and fall apart.

            • JohnnyCanuck@lemmy.ca (OP, mod) · +5 · 28 days ago

              I think you’re holding a fundamental misunderstanding of what today’s LLMs are.

              I think you have severe misunderstanding of what this community is.

              • partial_accumen@lemmy.world · +2/−1 · 28 days ago

                I…assumed it was a community to point out where AI should work, but doesn’t. In the example we have here it’s not a flaw of the LLM; instead, what is being asked of it is beyond its limits.

                I don’t make fun of my screwdriver because it’s horrible at hammering in nails. If that’s what this community is for, then the mistake is mine to post in here. My apologies.

                • JohnnyCanuck@lemmy.ca (OP, mod) · +3 · 27 days ago

                  I…assumed it was a community to point out where AI should work, but doesn’t.

                  …and that’s what’s happening in this case. You’re acting like it’s completely impossible for an LLM to go down a path where it handles the fact that the question contained a misspelling, because it isn’t AGI. In fact, to be useful, an LLM should handle this better. It certainly shouldn’t start making up weird unrelated connections.

                  Also, it’s not impossible, and I guarantee that some LLMs would give a more appropriate answer. But this particular LLM couldn’t handle it, and went completely off the rails. Why are we not allowed to make fun of that? Why are you defending it from ridicule?

                  I don’t make fun of my screwdriver because it’s horrible at hammering in nails.

                  Holy strawman. We aren’t asking the LLM to be a different tool. The LLM is supposed to handle language, and a simple misspelling of a homophone caused it to misunderstand the question completely and sent it down a path of calling completely different words “homophones”. Yeah I wouldn’t make fun of my screwdriver for not being able to hammer nails, but I would be pretty annoyed if it constantly slipped due to slight imperfections in how screws were manufactured.

          • Gladaed@feddit.org · +4 · 28 days ago

            Maybe the LLM successfully predicted that this is a homophone issue, but screwed up correcting the word and then explained the uncorrected word. Didn’t even occur to me. Fun.

        • teft@piefed.social · +5 · 28 days ago

          They might not listen to words, but they can rhyme and compose songs just fine, so they must have some sort of statistical correlation relating the sounds of words.

        • [deleted]@piefed.world · +5 · 28 days ago

          10 years ago, all of the search engines would have returned a site explaining the right answer, which I know because they always returned the right results even with misspellings.

          Not only did it misunderstand the question, the answer was gibberish. “Straight” and “a sound” are NOT homophones. “Strait” and “straight” are homophones.